Affect Recognition through Facebook for Effective Group Profiling Towards Personalized Instruction
Social networks are increasingly being considered as a powerful medium for learning. Particularly in the research area of Intelligent Tutoring Systems, they can support interactive, adaptive and personalized e-learning systems which can advance the learning process by revealing the abilities and weaknesses of every learner and by customizing instruction through group profiling. In this paper, the central idea is affect recognition as a dimension of the group profiling process, given that knowing how individuals feel about specific topics can be considered essential for the improvement of the tutoring process. As a testbed for our research, we have built a prototype system for recognizing the emotions of Facebook users. Users' emotions can be neutral, positive or negative. An emotion is frequently expressed in subtle or complex ways in a status update. On top of that, data gathered from Facebook often contain a considerable amount of noise, which makes the task of automatic affect recognition in online texts all the more difficult. Thus, a probabilistic variant of the Rocchio classifier is utilized to assist the learning process. The conducted experiments confirmed the usefulness of the described approach.
Introduction
Social networks seem to be a popular trend in modern life and a very important means of interactivity among people of different cultures. When people interact with peers, they can take advantage of crucial characteristics of social networks, such as directness and ease. Socialization has important pedagogical implications in learning by supporting the learners' personal relationships and social interaction with their classmates (Troussas et al., 2014). In this way, using social networks in instructional contexts can be considered a potentially powerful idea, simply because students already spend a lot of their spare time on these online networking activities (Troussas et al., 2013).
Social networks can play a crucial role in education, especially in the area of Intelligent Tutoring Systems (ITSs), which can produce adaptive and individualized e-learning systems. Indeed, adaptive individualized e-learning systems could enhance the educational procedure by offering a student-centered environment of learning and by prioritizing students' needs (Troussas et al., 2015). Individualization is based on student models, which are fundamental to the architecture of ITSs.
One important area of ITSs specializes in language learning, which is referred to as Intelligent Computer-Assisted Language Learning (ICALL). In ICALL, students are taught a language (e.g. English) through an ITS. When an ITS is incorporated in social networks, the need for group profiling emerges so that the collaboration among users is further promoted. One crucial value for group profiling is the affect recognition of the user.
Few studies on affect recognition in social networks have been presented so far (Agrawal et al., 2011). These studies mainly target Twitter, analyzing tweet updates about a specific topic (Agrawal et al., 2011). What people express through their status updates is sometimes neutral, but many updates express a particular emotion.
On the other hand, intelligent tutoring systems in social networks can benefit from understanding the emotions of social network users. Positive emotions can facilitate learning and negative ones can be an obstacle to it. Therefore, it is helpful that, through these natural avenues of emotional expression, intelligent tutoring systems can also adapt to their users so as to help them in learning new concepts.
Given that social networks are now natural avenues where people express their thoughts and opinions about their everyday life, affect recognition attracts growing interest. Towards this direction, automated opinion mining can be used in such circumstances. Automated opinion mining is a type of natural language processing that uses machine learning to track the mood of users and involves collecting and examining opinions expressed in status updates. Textual emotion analysis is a sub-field of automated opinion mining that has attracted growing interest from researchers who would like to know whether a particular text expresses a positive or negative emotion.
The idea for this research work came from the need for affect recognition in education. Emotion is important in education as it drives attention, which in turn drives learning and memory. Attending to emotion also aims to increase understanding and awareness of the psycho-social aspects of learning and to provide skills that enable more holistic, collaborative and person-centered learning.
In view of the above, this paper seeks to investigate the usefulness of affect recognition in a Facebook intelligent language learning application as a value in group profiling. For this reason, we have developed a system that is able to classify a status at the sentence level as to whether it entails positive, negative or neutral emotions, using a more probabilistic approach to the Rocchio algorithm. Opinions are in the form of status updates in Facebook. The specific objectives of our study are to properly train the system to accept inputs in the form of status updates, disregarding updates that do not contain words or face emoticons, and to classify the polarity of an opinion on a per-status-update basis.
Related Scientific Work
Affect recognition has been handled as a Natural Language Processing task. Starting as a document-level classification task, it has since been handled at the sentence level and more recently at the phrase level. In this section, we present the related scientific work, firstly on the grouping of students and secondly on affect recognition.
Literature on Students' Grouping in Tutoring Systems
In (Basile et al., 2011), the authors proposed the exploitation of machine learning techniques to improve and adapt the set of user model stereotypes by making use of user log interactions with the system. To do this, a clustering technique is exploited to create a set of user model prototypes; then, an induction module is run on these aggregated classes in order to improve a set of rules aimed at classifying new and unseen users. Their approach exploited the knowledge extracted by the analysis of log interaction data without requiring explicit feedback from the user.
In (Nino, 2009), the author presented a snapshot of what has been investigated in terms of the relationship between machine translation (MT) and foreign language (FL) teaching and learning. Moreover, the author outlined some of the implications of the use of MT and of free online MT for FL learning.
In (Frias-Martinez et al., 2007), the authors investigated which human factors are responsible for the behavior and the stereotypes of digital library users, so that these human factors can be justified for consideration in personalization. To achieve this aim, the authors studied whether there is statistical significance between the stereotypes created by robust clustering and each human factor, including cognitive styles, levels of expertise and gender differences.
In (Licchelli et al., 2004), the authors focused on machine learning approaches for inducing student profiles, based on Inductive Logic Programming and on methods using numeric algorithms, to be exploited in this environment. Moreover, an experimental session was carried out by the authors, comparing the effectiveness of these methods along with an evaluation of their efficiency in order to decide how to best exploit them in the induction of student profiles.
In (Shi and Sha, 2012), the authors studied the problem of unsupervised domain adaptation, which aims to adapt classifiers trained on a labeled source domain to an unlabeled target domain. Since many existing approaches first learn domain-invariant features and then construct classifiers with them, they propose a novel approach that jointly learns both.
In (Vinh et al., 2010), the authors presented an organized study of information-theoretic measures for clustering comparison. They showed that the normalized information distance (NID) and the normalized variation of information (NVI) satisfy both the normalization and the metric properties. Between the two, the NID is preferable, since the tighter upper bound of the mutual information (MI) used for normalization allows it to better use the [0,1] range. They also highlighted the importance of correcting these measures for chance agreement, especially when the number of data points is relatively small compared with the number of clusters.
In (Palubinskas et al., 1998), the authors proposed to embed the clustering problem into a Bayesian framework to automatically detect the number of clusters. The entropy is used to define a prior and enables them to overcome the problem of defining a priori the number of clusters and the initialization of their centers. A deterministic algorithm derived from the standard k-means algorithm was proposed and compared with simulated annealing algorithms.
In (Troussas and Virvou, 2013), the authors proposed a novel approach to information-theoretic clustering based on entropy. Their approach generalizes the standard Euclidean distance used in the k-means clustering algorithm by admitting arbitrary linear scalings and rotations of the feature space, and models the problem in an information-theoretic setting. In this way, qualitative collaboration among students of the same cluster is achieved, so that they are capable of succeeding in multiple language learning, namely in the learning of the English and French languages.
Literature on Affect Recognition
In (Boiy et al., 2007), the authors provided a good survey of various techniques developed in online sentiment analysis. It covers the concept of emotion in written text (appraisal theory) and various methodologies, which can be broadly divided into two groups: (i) symbolic techniques that focus on the force and direction of individual words (the so-called "bag-of-words" approach), and (ii) machine learning techniques that characterize vocabularies in context. Based on the survey, the authors found that symbolic techniques achieve accuracy lower than 80% and are generally poorer than machine learning methods on movie review sentiment analysis.
Another significant effort for sentiment classification on Twitter data was conducted by (Barbosa and Feng, 2010). The authors use polarity predictions from three websites as noisy labels to train a model and use 1000 manually labeled tweets for tuning and another 1000 manually labeled tweets for testing. They, however, do not mention how they collect their test data. They propose the use of syntax features of tweets like retweets, hashtags, links, punctuation and exclamation marks, in conjunction with features like the prior polarity of words and the POS of words.
In (Gamon, 2004), the authors perform sentiment analysis on feedback data from a Global Support Services survey. One aim of their study is to analyze the role of linguistic features like POS tags. They perform extensive feature analysis and feature selection and demonstrate that abstract linguistic analysis features contribute to classifier accuracy.
In (Go et al., 2009), the authors use distant supervision to acquire sentiment data. They use tweets ending in positive emoticons like ":)" and ":-)" as positive, and tweets with negative emoticons like ":(" and ":-(" as negative. They build models using Naive Bayes, MaxEnt and Support Vector Machines (SVM). In terms of feature space, they try unigram and bigram models in conjunction with part-of-speech (POS) features. They note that the unigram model outperforms all other models; specifically, bigrams and POS features do not help.
In (Pang and Lee, 2004), the authors applied minimum cuts in graphs to extract the subjective portions of the texts they were studying and used machine learning methods to perform sentiment analysis on those snippets of text only.
In (Mullen and Collier, 2004), the authors discussed the application of support vector machines in sentiment analysis with diverse information sources.
In (Godbole et al., 2007), the authors developed techniques that algorithmically identify a large number (hundreds) of adjectives, each with an assigned polarity score, starting from around a dozen seed adjectives. Their methods expand two clusters of adjectives (positive and negative word groups) by recursively querying the synonyms and antonyms from WordNet. Since recursive search quickly connects words from the two clusters, they implemented several precautionary measures, such as assigning weights which decrease exponentially as the number of hops increases. They confirm that the algorithm-generated adjectives are highly accurate by comparing them to manually picked word lists. It is worth pointing out that this work uses Lydia as the backbone to process a large amount of news and blogs.
In (Wilson et al., 2005), the authors discussed categorizing texts as polar or neutral first, before determining whether a positive or negative sentiment is expressed through the text. However, in (Godbole et al., 2007), the authors operate on the premise that little neutrality exists in online texts.
However, after a thorough investigation of the related scientific literature, we found no research describing affect recognition for the amelioration of an intelligent language learning system in Facebook using the Rocchio classifier. Moreover, in prior work the data used for training and testing are often collected by search queries and are therefore biased. In contrast, we present features achieving a significant gain over a unigram baseline. Our data are a random sample of streaming Facebook statuses, unlike data collected by using specific queries. The size of our hand-labeled data allows us to perform cross-validation experiments and check the variance in performance of the classifier across folds.
Methodology And Architecture
The main methodology for affect recognition is a classification method, specifically the Rocchio classifier, wherein a status update is classified as positive or negative. Fig. 1 shows the overview of affect recognition using the Rocchio classifier.
In this section, we will present an analysis of the Rocchio classifier, which gives theoretical insight into the heuristics used in it, particularly the word weighting scheme and the similarity metric. We also suggest improvements which lead to a probabilistic variant of the Rocchio classifier.
Text categorization is the procedure of assigning documents (and hence Facebook statuses) to different categories or classes. With the number of online educational systems in Facebook growing rapidly, the need for reliable text categorization of users' statuses has increased.
One of the most widely applied learning algorithms for text categorization is the Rocchio algorithm. Although the algorithm is intuitive, it has a number of problems which lead to comparably low classification accuracy (Joachims, 1997):
a. The objective of the Rocchio algorithm is to maximize a particular functional. Nevertheless, Rocchio does not show why maximizing this functional should lead to a high classification accuracy.
b. The heuristic components of the algorithm offer many design choices, and there is little guidance when applying this algorithm to a new domain.
c. The algorithm was developed and optimized for relevance feedback in information retrieval; it is not clear which heuristics will work best for text categorization.
The major heuristic component of the Rocchio algorithm is the TFIDF (term frequency / inverse document frequency) word weighting scheme (Joachims, 1997). Different flavors of this heuristic lead to a multitude of different algorithmic approaches. If Rocchio uses probabilistic models for classification, the simplifying assumptions can be stated explicitly.
Because of its heuristic components, a number of the algorithm's characteristics invite a probabilistic treatment: the word weighting method, the document length normalization using the Euclidean vector length, and the similarity measure (cosine similarity) (Joachims, 1997).
The algorithm returns a ranking of documents; to define a decision rule for class membership, the algorithm therefore has to be adapted for text categorization. The variant described here seems to be the most straightforward adaptation of the Rocchio algorithm to text categorization and to domains with more than two categories. The algorithm builds on the following representation of text. Each text d is represented as a vector, so that texts with similar content have similar vectors (according to a fixed similarity metric), and each element of the vector represents a distinct word of the document (Joachims, 1997). The term frequency is the number of times a word is found in a document, and the document frequency is the number of documents in which the word is found at least once. The inverse document frequency is inversely related to the document frequency. Intuitively, the inverse document frequency of a word is high if the word occurs in only one document and lower if it occurs in many documents. A word is an important indexing term for a document if it is found frequently in it (the term frequency is high).
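To make the weighting scheme concrete, the following sketch computes such TFIDF vectors for tokenized documents. It is a minimal illustration assuming plain term counts and idf = log(N / df); the exact TFIDF variant used in the actual prototype is not specified in this paper.

```python
import math
from collections import Counter

def tfidf_vectors(documents):
    """Compute TFIDF-weighted sparse vectors for a list of tokenized documents.

    tf(w, d): number of times word w occurs in document d
    df(w):    number of documents containing w at least once
    idf(w):   log(N / df(w)), high for rare words, low for common ones
    """
    n_docs = len(documents)
    df = Counter()
    for doc in documents:
        df.update(set(doc))  # each word counted once per document

    vectors = []
    for doc in documents:
        tf = Counter(doc)
        vectors.append({w: tf[w] * math.log(n_docs / df[w]) for w in tf})
    return vectors
```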
Words which are found in many documents are therefore rated as less important indexing terms due to their low inverse document frequency. Learning is achieved by combining document vectors into a prototype vector for each class. First, the normalized document vectors of the positive examples and of the negative examples for a class are summed up separately. The prototype vector is then calculated as a weighted difference of the two sums. Using the cosine as a similarity metric, Rocchio shows that each prototype vector maximizes the mean similarity of the positive training examples with the prototype vector minus the mean similarity of the negative training examples with the prototype vector. The resulting set of prototype vectors, one vector for each class, represents the learned model. This model can be used to classify a new document; again, the document is represented as a vector using the scheme described above (Joachims, 1997).
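A minimal sketch of this learning and classification step is given below, assuming the sparse dictionary vectors produced by the TFIDF code above. The weights beta = 16 and gamma = 4 are conventional Rocchio choices, not values reported by this paper.

```python
import math

def cosine(u, v):
    """Cosine similarity of two sparse vectors stored as dicts."""
    dot = sum(x * v.get(w, 0.0) for w, x in u.items())
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def normalize(v):
    """Scale a sparse vector to unit Euclidean length."""
    length = math.sqrt(sum(x * x for x in v.values()))
    return {w: x / length for w, x in v.items()} if length else v

def rocchio_train(vectors, labels, beta=16.0, gamma=4.0):
    """One prototype per class: a weighted difference of the mean normalized
    positive-example vector and the mean normalized negative-example vector."""
    prototypes = {}
    for c in set(labels):
        pos = [normalize(v) for v, y in zip(vectors, labels) if y == c]
        neg = [normalize(v) for v, y in zip(vectors, labels) if y != c]
        proto = {}
        for group, weight in ((pos, beta / max(len(pos), 1)),
                              (neg, -gamma / max(len(neg), 1))):
            for v in group:
                for w, x in v.items():
                    proto[w] = proto.get(w, 0.0) + weight * x
        prototypes[c] = proto
    return prototypes

def rocchio_classify(prototypes, vector):
    """Assign the class whose prototype is most cosine-similar to the document."""
    return max(prototypes, key=lambda c: cosine(vector, prototypes[c]))
```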
Moving to the probabilistic variant, we work with conditional probabilities, which allow us to conveniently flip the condition around. A conditional probability is the probability that event X will occur given the evidence Y, normally written P(X | Y). We can determine this probability when all we have is the probability of the reverse conditioning and of the two events individually: P(X | Y) = P(X) P(Y | X) / P(Y).
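As a toy illustration of this flip, with invented numbers, suppose half of all statuses are positive and we ask how seeing the word "great" changes that belief:

```python
# Toy illustration of flipping the condition with Bayes' rule.
# All numbers are assumed, for illustration only.
p_positive = 0.5              # P(X): prior chance a status is positive
p_word_given_positive = 0.10  # P(Y | X): chance "great" appears in a positive status
p_word = 0.06                 # P(Y): overall chance "great" appears in any status

p_positive_given_word = p_positive * p_word_given_positive / p_word  # P(X | Y)
print(p_positive_given_word)  # ~0.83: seeing "great" shifts belief towards positive
```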
In this case, we are estimating the probability that a text is positive or negative, given its contents. We can restate that in terms of the probability of the text occurring if it has been predetermined to be positive or negative. This is convenient, because we have examples of positive and negative opinions in our data set.
The underlying idea is that we make a large simplifying assumption about how we can calculate the probability of the document occurring. We can estimate the probability of a word occurring, given a positive or negative emotion, by looking through a series of examples of positive and negative emotions and counting how often it occurs in each class. This is what makes this supervised learning: the requirement for pre-classified examples to train on.
Creation of Corpus
A corpus consists of a collection of writings or recorded remarks used for linguistic analysis. In this Facebook application, recorded remarks are classified into groups of negative and positive feelings in Facebook users' statuses. A range of 5000-10000 status updates is the targeted size for the corpus, divided into two classes, negative and positive. A corpus should be large, and a size of about 5000 items appears to provide very satisfactory results.
Data will be collected from Facebook users based on the records in it. The system will be trained on the emotions of the users to whom our Facebook language learning application is addressed. The collected data will be manually labeled as positive or negative. Positive and negative status updates will then be stored in their respective classes.
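A possible layout for the resulting corpus is sketched below; the example statuses are invented, and only the (text, label) structure is implied by the procedure above.

```python
# Hypothetical layout for the hand-labeled corpus described above:
# a list of (status_text, label) pairs, then grouped by class.
corpus = [
    ("Had a wonderful day with my friends :)", "positive"),
    ("Everything went wrong today :(", "negative"),
    # ... 5000-10000 manually labeled status updates in total
]

by_class = {"positive": [], "negative": []}
for text, label in corpus:
    by_class[label].append(text)

print(len(by_class["positive"]), len(by_class["negative"]))
```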
Status Classification
A conditional probability is the probability that event X will occur, given the evidence. Hence, our initial formula has the following rationale:

P(emotion | sentence) = P(emotion) P(sentence | emotion) / P(sentence) (1)

We can drop the dividing P(sentence), as it is the same for both classes; we need to rank the classes rather than calculate a precise probability. We can use the independence assumption to treat P(sentence | emotion) as the product of P(token | emotion) across all the tokens in the sentence. So, we estimate P(token | emotion) as:

P(token | emotion) = (count(this token in class) + 1) / (count(all tokens in class) + count(all distinct tokens)) (2)

The extra 1 and the count of all distinct tokens stop a zero finding its way into the multiplications; otherwise, any sentence containing an unseen token would score zero.
The classify function starts by calculating the prior probability (the chance of a status being one class or the other before any tokens are looked at) based on the number of positive and negative examples; in our case, that will always be 0.5, as there is the same amount of data for each class (positive / negative status updates). We then tokenize the incoming document and, for each class, multiply together the likelihood of each word being seen in that class. We sort the final result and return the highest scoring class.
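The following sketch puts equations (1) and (2) and the classify function together. Whitespace tokenization and the use of log-probabilities (to avoid numeric underflow when multiplying many small likelihoods) are implementation assumptions not fixed by the paper.

```python
import math
from collections import Counter, defaultdict

class EmotionClassifier:
    """Sketch of the probabilistic classifier of equations (1) and (2):
    score(class) = log P(class) + sum of log P(token | class) over the tokens,
    with add-one smoothing so an unseen token never zeroes out a class."""

    def __init__(self):
        self.token_counts = defaultdict(Counter)  # per-class token counts
        self.class_counts = Counter()             # training statuses per class
        self.vocabulary = set()

    def train(self, status, emotion):
        tokens = status.lower().split()  # whitespace tokenization (assumed)
        self.token_counts[emotion].update(tokens)
        self.class_counts[emotion] += 1
        self.vocabulary.update(tokens)

    def _log_likelihood(self, token, emotion):
        counts = self.token_counts[emotion]
        # equation (2): (count(token in class) + 1) /
        #               (count(all tokens in class) + count(all distinct tokens))
        return math.log((counts[token] + 1) /
                        (sum(counts.values()) + len(self.vocabulary)))

    def classify(self, status):
        total = sum(self.class_counts.values())
        scores = {}
        for emotion in self.class_counts:
            score = math.log(self.class_counts[emotion] / total)  # prior, 0.5 here
            for token in status.lower().split():
                score += self._log_likelihood(token, emotion)
            scores[emotion] = score
        return max(scores, key=scores.get)
```

After training on the labeled corpus (e.g. clf.train(text, label) for every pair), a call such as clf.classify("feeling great today") returns the highest scoring class.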
Our research classifies the polarity of the status update at the sentence level. Sentence-level classification is, in most cases, more accurate than phrase-level classification, because every status update has its own style in expressing users' emotions.
Fig. 2 illustrates two screenshots of the Facebook educational application. On the left side, there is the log-in page of the Facebook learning application, and on the right side there is the recommendation for student collaboration based on group profiling using the students' characteristics (including emotional state), which are presented in Section IV.
Affect Recognition In Intelligent Language Tutoring
Emotions are complex states of mind and body. Cognitively, individuals interpret an event as one that may be sad or happy. Behaviorally, a student may seek comfort when s/he is sad and seek help when s/he faces danger. Our emotional state has the potential to influence our thinking (Darling-Hammond et al., 2003). For instance, students learn and perform more successfully when they feel secure and happy about the subject matter (Oatley and Nundy, 1996). Although emotions have the potential to energize students' thinking, emotional states also have the potential to interfere with learning. If students are overly excited or enthusiastic, they might work carelessly or quickly rather than working methodically or carefully (Darling-Hammond et al., 2003).
Moreover, negative emotions have the potential to distract students' learning efforts by interfering with their ability to engage in the educational process successfully. Emotions can interfere with students' learning in several ways, including limiting the capacity to balance emotional issues with tutoring. Some students might need one-on-one time with their peers, which can be achieved by instant or asynchronous text messaging in Facebook, in order to help them process their feelings or resolve a problem.
Towards the efficient creation of user clusters, we incorporate algorithmic approaches into the resulting Facebook intelligent multi-language learning application, which receive as input pre-stored data or data from empirical studies, gathered either directly by asking Facebook users or indirectly by eliciting them from users' profiles. In our system, we have used several fundamental characteristics which, in accordance with the authors' expertise in the domain and with past experiments conducted by them (Troussas et al., 2013 and Troussas et al., 2015), tend to influence the educational procedure:
• Emotional state: Emotions can affect the educational process by promoting or downgrading the willingness of users to learn.
• Age: This characteristic provides significant information about the efficiency with which users conceive new information. It is widely accepted that age can play a very crucial role in the understanding of new concepts and ideas.
• Score: This characteristic shows information about the pre-existent knowledge of students in the curriculum being taught and may come from preliminary tests or preparatory lessons.
• Gender: This characteristic is used to check the likelihood of various differences between the sexes. It shows the degree of differentiation in learning between male and female students.
• Number of languages spoken: This characteristic can answer the question "Do you think that you have a flair for languages?". It is widely accepted that the more languages the user knows, the more apt s/he is to learn a new one.
• Educational level: This characteristic provides information concerning the levels of education of the user. The underlying reasoning is that language learning ability is proportional to educational qualifications.
• Work experience: This characteristic can show the responsibility of users and can imply how experienced a user is in learning new concepts.
• Duration of computer use: This characteristic reveals information about users' affinity with computers; for frequent users, computer-based approaches to learning may have better results in the educational process.
Using the prototype application, the aforementioned characteristics were extracted for each user. Basically, as mentioned before, all of them except score and duration of computer use were gathered from their Facebook profiles. The emotional state of the user is drawn and analyzed from his/her status in Facebook by using the Rocchio classifier. Based on the aforementioned characteristics, the system creates clusters of the already existing students.
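The paper does not fix a particular clustering algorithm or a numeric encoding for these characteristics; the sketch below, using k-means over numerically encoded and standardized features, illustrates one plausible realization of the grouping step (the encodings, e.g. emotional state as -1/0/+1 and gender as 0/1, are assumptions).

```python
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# One row per student, with the eight characteristics encoded numerically:
# [emotional_state, age, score, gender, languages_spoken,
#  education_level, work_experience_years, computer_use_hours_per_day]
students = [
    [1, 21, 78, 0, 2, 3, 1, 4.5],
    [-1, 34, 55, 1, 1, 2, 8, 1.0],
    [0, 19, 90, 1, 3, 3, 0, 6.0],
]

# Standardize so that no single characteristic dominates the distance metric,
# then assign each student to a cluster (number of groups is also assumed).
features = StandardScaler().fit_transform(students)
groups = KMeans(n_clusters=2, n_init=10).fit_predict(features)
print(groups)
```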
In view of the above, in this paper we focused on "measuring" the emotional state of each user; then, according to this state and his/her personal user model, we provide him/her with advice concerning the ability to start or proceed with the language learning application, and we propose other users for collaboration.
Fig. 3 illustrates how affect recognition can be involved in the educational process.
Experimental Results And Discussion
In this study, we used the Rocchio classifier in order to compare its performance in predicting whether a Facebook status update is positive or negative against the direct emotional status feature of Facebook, where a user can directly state his/her emotions (see Fig. 4). We collected around 7000 status updates from 90 users. The status updates were then manually labeled as positive or negative; Table 1 contains samples of status updates in each class. Since there were far fewer negative samples, we based the distribution of the final dataset on them. We used the data distribution of Table 2 for the training and testing sets (50%-50%); the dataset for each partition was selected randomly. The classifier was evaluated in terms of precision, recall and F-score performance, using the computations shown below:

Precision = relevant records retrieved / (relevant + irrelevant records retrieved)
Recall = relevant records retrieved / total relevant records in the database
F-score = 2 × Precision × Recall / (Precision + Recall)

Precision and recall are the basic measures used in evaluating search strategies. These measures assume that there is a set of records in the database which is relevant to the search topic. Records are assumed to be either relevant or irrelevant (these measures do not allow for degrees of relevancy), and the actual retrieval set may not perfectly match the set of relevant records. Table 3 summarizes the results. Table 4 compares the precision, recall and F-score of the Rocchio classifier and the direct state of emotions of Facebook users (Fig. 4).
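These metrics can be computed directly from the labeled test partition, as in the following sketch (the F-score is assumed to be the balanced F1, since no other weighting is stated):

```python
def precision_recall_f(true_labels, predicted, positive="positive"):
    """Precision, recall and F-score as defined above, treating the
    'positive' class as the retrieved/relevant one."""
    tp = sum(1 for t, p in zip(true_labels, predicted)
             if t == positive and p == positive)
    fp = sum(1 for t, p in zip(true_labels, predicted)
             if t != positive and p == positive)
    fn = sum(1 for t, p in zip(true_labels, predicted)
             if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f_score = (2 * precision * recall / (precision + recall)
               if precision + recall else 0.0)
    return precision, recall, f_score
```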
Based on the F-score, the Rocchio classifier performed very well, without significant differences from the direct state of emotions given by Facebook users.
The probabilistic approach to the Rocchio algorithm was used because it can indeed yield performance improvements, reducing the error rate and the effect of noise in Facebook statuses.
Conclusions And Future Work
In this paper, we described affect recognition for intelligent language learning using the Rocchio classifier. Furthermore, we presented important features for achieving a probabilistic approach to the Rocchio classifier. The significance of using a more probabilistic approach to the Rocchio algorithm for affect recognition is that probabilistic methods are preferable from a theoretical viewpoint, since a probabilistic framework allows the clear statement and easier understanding of the simplifying assumptions made.
The data used are a random sample of streaming Facebook statuses and were not collected by using specific queries. The size of our hand-labeled data allows us to perform cross-validation experiments and check the variance in performance of the classifier across folds. In this way, knowing the emotional state of each user, we can use this characteristic as a value of the vector used for group profiling, which can further ameliorate the educational experience through Facebook.
Finally, we presented our experimental results, which show that the accuracy in analyzing the emotional state of Facebook users using the Rocchio classifier is very high. The main findings of this study are the proper training of the system so that it can accept inputs in the form of Facebook status updates (disregarding updates that do not contain words or face emoticons) and classify the polarity of an opinion on a per-status-update basis. Hence, the affect recognition of students will serve as a characteristic for group profiling in the direction of collaboration in the educational process.
A limitation of this study is that the Rocchio algorithm can fail, to some extent, to capture multimodal relationships; for instance, two queries expressing similar emotions may appear much further apart in the vector space model. However, this does not affect the educational process, because affect recognition still manages to identify the student's emotions.
Different groups of people can benefit from this study: students can gain knowledge from the collaboration with their peers of the same or different groups, and teachers can be assisted in the educational process given the grouping of their students. Moreover, the results of this study can also be used in other fields, e.g. special educational needs, advertisement, user modeling and personalization, etc.
It is in our future plans to perform further study on the recognition and analysis of emotional states of Facebook users in order to further promote the language learning procedure. Furthermore, the relaxation as well as the combination of the assumptions resulting from the probabilistic framework provide promising starting points for future research.
Fig. 4. Way of direct state of emotions of Facebook users.
C. Troussas is currently a Ph.D. candidate in the Department of Informatics at the University of Piraeus in Greece. He received a M.Sc. degree in "Advanced Computing and Informatics Systems" (2010) and a B.Sc. degree in Informatics (2008) from the same Department. He has published over 30 articles in international conferences, books and journals. His current research interests are in the areas of user modeling, social networking services, artificial intelligence in education, mobile learning and adaptive systems.
K.J. Espinosa is an Associate Professor and Head of the Department of Computer Science, University of the Philippines Cebu, in the Philippines. His research interests are in the areas of machine learning for big data, specifically event and pattern detection, sentiment analysis and recommender systems.
M. Virvou is a Full Professor at the Department of Informatics, University of Piraeus, in Greece. She received a Ph.D. degree in Computer Science and Artificial Intelligence from the University of Sussex, UK (1993), a M.Sc. degree in Computer Science from University College London, UK (1987) and a B.Sc. degree in Mathematics from the University of Athens, Greece (1986). She is Head of the Department of Informatics, University of Piraeus (Greece), Director of the Software Engineering Laboratory and also Director of postgraduate studies in the same Department. She has published over 300 articles in international conferences, books and journals. Her research interests are in the areas of user modeling, human-computer interaction, knowledge-based software engineering, artificial intelligence in education, adaptive systems and affective computing.
Table 4. Precision, recall and F-score comparison of the Rocchio classifier and the direct emotional state of Facebook users.
INTERFERENCE IN LEARNING ENGLISH LANGUAGE: AN ANALYSIS OF SENTENCE CONSTRUCTION IN THE WRITTEN WORK OF YEAR 3 ESL LEARNERS IN WANGSA MAJU PRIMARY SCHOOL, MALAYSIA
One of the most significant and interesting aspects of human development is language acquisition. Since English is a universal language, it is an essential language of interaction and education. Therefore, we have to take this language of education for professional competence seriously. The aim of the study is to explore cultural interference, an aspect that significantly hinders the learning and acquisition of English among young Malay learners. It aims to identify the students' errors in sentence construction, particularly the occurrence of mistakes in subject-verb agreement (SVA), determiners, and the copula 'be'. The data of this study came from an exploratory study of errors in the written essays of Year 3 students in Wangsa Maju Primary School, Malaysia. The findings of the study showed that the students were struggling and having difficulty in using accurate English grammar in their academic writing. The results revealed that the most frequent errors were the incorrect usage of the copula 'be' and the absence of determiners. The interference of the mother tongue played a vital role in inhibiting the production of error-free English language sentences. Lack of understanding of how English grammar functions also accounted for this. All these could be seen as interlingual errors, whereby the native language of the participants influenced their writing patterns.
INTRODUCTION
Second Language Acquisition (SLA) brings about dealing with errors through contrastive analysis. The discussion that comes from that analysis leads to error analysis. Lado (1957) believed that learners rely extensively on their native language in learning a second language.
"Individuals tend to transfer the forms and meanings, and the distribution of forms and meanings of their native language and culture to the foreign language and culture ─both productively when attempting to speak the language and to act in the culture, and receptively when attempting to grasp and understand the language and the culture as practiced by natives."(Lado, 1957, as cited in Gass, 2013).
Lado's work on producing NL [Native Language]-based materials necessitated performing a contrastive analysis of the NL and the TL [Target Language]. Contrastive analysis is a way of comparing languages in order to determine potential errors. The purpose is to isolate what does and does not need to be learned in L2 learning. A structure-by-structure comparison is made in the aspects of phonology, morphology, syntax and culture to analyze the similarities and differences between the L1 and L2. This results in an assessment of recurring difficulties, and helps teachers to optimally allocate time and effort in teaching the learners. Despite its claims, Maros et al. (2007) asserted that CA [Contrastive Analysis] is only predictive in nature, is not always correct, and is regarded mainly as a diagnostic tool in language teaching. Therefore, it is important to study actual texts produced by L2 learners.
This study analyses the influence of the Malay language on learning English among Year 3 pupils in SK Wangsa Maju Seksyen 2, Malaysia. The analyses answered the research questions, which cover the categories of grammatical errors made by the pupils and the causes of the most dominant grammatical errors found in their tasks. The English teacher discovered that the students often experience difficulty in the transition from the Year 3 to the Year 4 English language syllabus. The level of the English language syllabus is significantly higher in the sense that there are more sentence construction tasks in the Year 4 syllabus. The difficulties encountered were consistently reflected in the Year 4 students' examination marks: pupils were unable to construct sentences with minimal grammatical errors. ESL teachers and educators need to examine this matter in order to reduce this problem to a minimum by understanding both the linguistic and non-linguistic reasons for the errors. From the second language acquisition perspective, two methods of examining these issues were (1) contrastive analysis and (2) error analysis. These two methods were the basis for the analysis in this study.
The study aimed to identify the students' errors in sentence construction, particularly the occurrence of mistakes in subject-verb agreement (SVA), determiners, and the copula 'be'. Analyzing the learners' production of target language text to identify the relationship between English sentence constructions and native language sentence constructions will clarify the nature of the problems. Two theoretical frameworks were used for reference and focus throughout the study:
1. The Contrastive Analysis Hypothesis (Lado's Linguistics Across Cultures, 1957), through which we can predict possible sources of errors made by Malay ESL learners; by analyzing these errors, teachers could gain some insights into future types of remedial instruction.
2. Error Analysis by Stephen Corder (1967), which draws on the formal distinctions between the learners' first and second languages to predict errors learners make. Unlike contrastive analysis (a comparison made with the NL), error analysis is a comparison made with both the NL and the TL that enables insight into the current state of knowledge of learners. Corder (1967) highlights the issue of systematic errors in language learning, a systematic process that learners go through in the attempt to produce the TL. These errors are not incidental imitation errors, nor mere slips of the tongue.
LITERATURE REVIEW
Various studies highlight the problems leading to this study. Several related themes are connected to the issues concerning the written work of ESL learners.
The English Language Syllabus
The English as a Second Language (ESL) syllabus is designed to help learners who already have native knowledge of their mother tongue to learn another language effectively, in comprehensible stages. The Ministry of Education in Malaysia provides a three-part detailed English language syllabus for every level of studies, beginning from the elementary grades up to the secondary grades. Across the entire syllabus, one of the key objectives is to ensure that learners use correct and appropriate rules of grammar in speech and writing.
However, this study focuses on the transition from Year 3 to Year 4 ESL learning based on the government syllabus in the area of writing. Although grammatical studies begin explicitly in Year 3 (grammar is taught implicitly in Years 1 and 2), learners are expected to attempt written tasks in the form of short paragraph writing in Year 4, utilising greater grammatical knowledge than in previous years. When learners enter Year 4, they are considered to be at Level 2. According to the government curriculum, learners are expected to be able to write sentences to form a paragraph through the guidance of teachers, eventually becoming independent writers (Ministry of Education Malaysia, n.d.). Also, as written in the Year 3 learning standards for writing, although the objective is to enable learners to create simple texts, there is no specification as to how much a learner is expected to be able to write by the end of Year 3. Learners' written tasks are focused on writing simple descriptive sentences (Curriculum Education Division, 2012).
Common English Grammatical Issues: Subject-Verb Agreement
In English grammar, subject and verb agree in number: both must be either singular or plural. In the present tense, one must add '-s' or '-es' at the end of the verb when the subject performing the action is a singular third person: he, she, it, or any substituting pronoun. Inflection does not take place for other forms, e.g. 'They play basketball.' Bahiyah and Wijayasuria (2018) found that Malay learners face difficulties with subject-verb agreement because the Malay language does not differentiate between persons; hence it is not necessary for verbs to agree with the subject. In contrast, this is the rule for English, and this creates confusion among students. This analysis is supported by Surina and Kamarulzaman (2009), who claim that the majority of students in Malaysia have issues with English subject-verb agreement.
The copula 'be'
The word 'copula' comes from a Latin noun meaning 'link or tie', that which connects two different things. Linguistically, a copula is a word that is used to link the subject of a sentence with a predicate (a subject complement or an adverbial). According to the hierarchy of difficulties cited in Brown (2014), the copula 'be', an absent category in Malay, is categorised at Level 2 (underdifferentiation) and becomes a problem among Malay learners of English, as the Malay language does not have the copula (Marlyna, Tan & Khazriyati, 2007).
A third domain that has been identified as one of the most problematic grammatical areas, besides subject-verb agreement and the copula 'be', is the correct use of determiners (Khazriyati, Tan & Marlyna, 2006; Marlyna et al., 2007).
Determiners
Determiners are a special class of words that limits (or determines) the scope of nouns that follow them.Structurally, a determiner precedes an adjective if there are adjectives in the noun phrase.If no adjectives are present, a determiner is positioned directly before the noun (Celce-Murcia & Larsen-Freeman, 1999, p. 19).
Figure 1. Examples of determiners: a, the; this, that, these, those; my, your, her, our, their, its; many, one, some, much.
In comparing Malay grammar to English grammar, Karim (1995, p. 9) demonstrates the use of the English determiners as follows: those structures agree that itu and ini have to be the final element in any Malay noun phrase. Should there be modifiers after the head noun, the modifiers come between the head noun (on the left) and the kata penentu (on the right). Hassan (1993, p. 54) stressed that there must not be any other word after the kata penentu in the Malay noun phrase. Khazriyati et al. (2006) observed 826 uses of determiners in students' writing, in which a total of 175 (21%) errors were detected. Although not all errors are due to mother tongue interference, the large number of errors that occurred in the use of determiners does indicate interference from Malay grammar.
Empirical studies
A number of researchers have done empirical research in this area of study, drawing on various methods, theories and frameworks. Jamian, Sankaran and Noranisah (2006) conducted an investigation of the common errors made by engineering students at UiTM Penang, Malaysia. The aim of the study was to identify and classify the most frequent errors made by the students and to gain an understanding of the causes that triggered the errors. The study also intended to formulate some pedagogical implications of the results for teachers. The significance of this study is to promote effective language use among language learners, which may prevent more crucial issues related to the importance of understanding the functions of grammar. The study was guided by three types of analysis: (1) error analysis, (2) global analysis and (3) local analysis. The errors were analysed according to Corder (1981), who aimed at identifying, describing and explaining the errors in writing; this approach does not simply classify errors into different grammatical categories but also explores why a particular error is made. The findings of the study showed that 64.89% of students scored C and lower in their English paper, where 70% of the total scores were focused on the reading skill; the poor reading ability was a contributor to their low proficiency.
Subsequently, another study conducted by Sim Wee and Jusoff (2009) focused specifically on problems with subject-verb agreement (SVA) in writing. The similarity to the context of this study is that they were examining the grammatical problems that occur in students' writing. They intended to contribute appropriate treatments that could be implemented in the form of focused teaching. The participants of this study were 39 second-year students from a public university in Malaysia, selected from two different faculties. The method used was based on a qualitative approach which focused on the problems of SVA that the learners faced in writing. The findings show that the subjects made the greatest number of errors in the omission of verb forms in the area of the third person singular verb (-s/-es/-ies). These errors were made when the participants tried to make the verb agree with either the singular or plural subject by dropping the '-s' inflection from the third person singular verb or making the verb plural by adding the '-s' inflection. The subjects usually over-generalised and, therefore, either omitted the 'be' verb or used it wrongly.
Al-Khaza'leh (2021) investigated the possible writing errors made by third- and fourth-year students of English in the Department of English, Shaqra University. The findings of the study show that students made errors in all their tasks, whether in paragraphs or short sentences. Some of these errors were associated with punctuation, subject-verb agreement, capitalisation and singular-plural forms, but were not limited to these.
Pasaribu (2021) explored error analysis in students' academic writing. This study investigated the writing errors of 26 students in the Department of English at the University of HKBP Nommensen Medan. The findings of the study show that around 252 errors were found. The most dominant error category found in their writing was omission, which occurred 92 times (36.51%), followed by addition with 64 occurrences (25.40%), misinformation with 56 (22.22%), and disordering with 40 (15.87%).
METHODOLOGY
A qualitative approach was employed in this study in order to suit the nature of the problem addressed, which was to explore in depth the errors that second language learners make when constructing sentences in English. Firstly, this study predicted possible sources of errors made by Malay ESL learners by using Contrastive Analysis (CA), following Lado's analysis of linguistics across cultures (1957). By analysing these errors, teachers could gain insights into future types of remedial instruction.
Secondly, Corder's (1976) Error Analysis framework was adopted in order to discuss the errors made by participants using the following five steps: collecting data, identifying errors, describing errors, explaining errors, and finally evaluating errors. The main reason for choosing this framework was that it does not simply classify errors into different grammatical categories but also explores why a particular error was made. Further explanation of how the native language actually interfered with the writing skills of ESL learners was provided after each error was identified. The significance of understanding why an error was made underlines the importance of conducting this study, as educators would be able to use these reasons to revise their methods of teaching in order to improve the English proficiency level of ESL learners in Malaysia.
In the current study, the participants were 20 Year 3 pupils from a local primary school in Wangsa Maju. These participants were intermediate ESL learners whose first language is Bahasa Malaysia. The participants were selected by their English language teachers based on their past experience teaching them. Purposive sampling was used, as it was the most cost- and time-effective method of choosing the participants.
The method could be explained as follows. With the assistance of English language teachers in the school where the research was carried out, the most suitable classroom (in terms of first language) was identified.
In addition, the school board's approval was obtained to request that the school choose the participants required for this research.
The instrument used in this study was a test paper which included sentence construction questions. The students were to answer four questions in the test and were required to construct grammatical sentences in their written task. The test was administered during English lessons after the end-of-year examination. The time allocated for the test was about 30 minutes. Participants did not receive any kind of help from the teacher and were not allowed to refer to books, related materials or the Internet. This was to ensure the written work was produced solely by the participants themselves, using their own writing skills and grammatical knowledge, and thus to ensure the validity of the materials gathered.
The 20 papers were all marked, and grammatical errors were noted and tabulated. The errors were also categorised and labelled based on their types (including the frequency with which they occurred), together with their corrections. The Error Analysis framework by Corder (1976) was subsequently adopted to discuss the errors made by the participants. The types of grammatical error were (1) incorrect usage of the 'be' verb, (2) omission of the copula 'be', (3) incorrect usage of determiners, (4) omission of determiners, (5) replacement of the 'be' verb with the determiner 'a', and (6) unawareness of the subject-verb agreement (SVA) rule in sentence construction. For instance: "Those is remote Control", "This a rice cooker", "Those is a two remote controls", "Robert a information...". All sentences constructed by students were tabulated. Later, the frequency of the errors made was annotated for each type of error. Lastly, errors were categorised as either intralingual or interlingual. Further explanation of how the first language actually interfered with the writing skills of ESL learners was provided after each error was identified.
FINDINGS
The 20 papers were all marked, and grammatical errors were noted and tabulated. The errors were also categorised and labelled based on the types of grammatical errors that took place (including the frequency with which they occurred), together with their corrections. The data in the table above explain the types of grammatical errors made by the participants in their written tasks. Five different types of errors were made in terms of incorrect usage of the grammatical items copula 'be' and determiners, and of the subject-verb agreement sentence construction rule. Out of the 20 text samples analysed, 14 pieces of evidence pointed to incorrect usage of the copula 'be', for example "Those is remote Control" and "Laptop and computer is technology". Seven errors were identified as omission of the copula 'be' in sentence constructions. Eight errors were identified as incorrect usage of determiners. Determiners were omitted in ten written texts. Within the same sample question, 'are' was replaced six times with 'a'. All these could be qualified as interlingual errors, whereby the native language of the participants influenced their writing patterns. Finally, there were frequent errors in the construction of sentences in terms of subject-verb agreement that could be due to intralingual factors.
The two most significant errors were the incorrect usage of the copula 'be' (the most frequent error identified) and the exclusion of determiners (the second most frequent). The copula 'be' was used incorrectly without taking into account the number of objects in a sentence. For example, 'Those are remote controls' was written as 'Those is remote controls', and 'Laptop and computer are technology' was mostly written as 'Laptop and computer is technology'. The copula 'be' functions as a verb that links the subject and its predicative complement in a sentence.
The copula 'be' can precede a noun, an adjective, a numeral, a pronoun, an infinitive, or a gerund. However, in the sample writings analysed, unawareness of this could be due to interference from the mother tongue, as the 'be' verb is not varied for a singular or plural subject (is/are) in Malay; it is only expressed in the single word 'sedang', which indicates that the action is taking place. According to Marlyna et al. (2007), the copula 'be' is a common area in which Malay ESL learners experience difficulties when learning and applying it. There are words in Malay that tie the subject and predicate together in the same way as the copula 'be' does, namely 'ialah' and 'adalah'. They are used merely to tie the subject and predicate together, without taking into account the number of subjects and predicates involved. In addition, the fact that such a link is not essential in Malay when tying a subject and predicate together often causes its misuse. This explanation could also be used to understand the replacement of 'are' with 'a' in Question 3 ('Laptop and computers a technology' instead of 'Laptop and computers are technology'). Apart from the copula 'be' not being a necessity in Malay (Marlyna, Tan & Khazriyati, 2007), this error could be caused by the simplicity that exists in the structure of Malay. 'Laptop and computers a technology' can be translated into Malay as 'Komputer riba dan komputer adalah sejenis teknologi' / 'Komputer riba dan komputer sejenis teknologi', which could both mean 'Laptops and computers are a kind of technology'. Participants had probably chosen to write this in an attempt to translate directly from the simpler form of the Malay language structure.
The omission of determiners, on the other hand, could be seen in sentences such as 'This is rice cooker', where the correct sentence should be 'This is a rice cooker'. This error could be attributed to the Malay language structure, in which determiners do not necessarily indicate how many objects are involved in the predicate of a sentence; doubling the name of the object is enough to express that more than one object is involved. For example, 'rice cooker' is 'pemanas nasi'. Referring to the single picture of the rice cooker in the worksheet, a direct Malay translation describing the picture could be either 'Ini ialah pemanas nasi' or 'Ini pemanas nasi'. Determiners do not play a role unless the situation calls for specifying how many objects are present. If a picture of two rice cookers is given, it can be described in Malay without a specific determiner: instead of 'Ini ialah dua pemanas nasi' (These are two rice cookers), the sentence can also be written as 'Ini ialah pemanas-pemanas nasi' (These are rice cookers). According to Khazriyati et al. (2006: 25), 'the Malay numerals are regarded as determiners since they, like quantifying determiners, quantify the nouns'.
Intralingual errors, by contrast, cannot be attributed to the participants' native language (Malay) interfering with the learning and production of the target language (English). Gass (2013) explains the nature of intralingual error as being caused by the language that is being learned: for example, 'are' was replaced with 'a' in 'Those a remote control'. In Question 4 of the writing worksheet, the keywords given ('Robert' and 'information') did not produce the expected answers such as 'Robert finds information…' or 'Robert seeks information…' (which are both grammatically correct and follow the SVA rule). Instead, grammatically incorrect sentences such as 'Robert is information' (Sample 2), 'Robert a information' (Sample 12), 'Robert computer information' (Sample 16), 'Robert a information' (Sample 18) and 'Robert the information' (Sample 9) were produced, to name a few. This could reflect a process in which a learner works out how to connect a subject and object correctly in a sentence. Evidence from studies by Dulay & Burt (1977) and Rosansky (1976) showed that children acquire an understanding of articles before they learn the copula 'be' in second language learning of English. This is a common process in children learning any language.
DISCUSSIONS
The results of the data gathered from the participants through their written work showed that it is common for pupils to make errors when writing in English, and the errors committed often have a rational explanation. Interference from the mother tongue plays a vital role in hindering the production of error-free English sentences, and a lack of understanding of how English grammar functions may also account for this phenomenon. Not all errors, however, can be attributed to the grammatical structure of the learners' native language. Based on the analysis, morpheme order structure could be one reason why some grammatical items tend to be misused more frequently than others. There could also be other reasons: participants may simply not have understood the meaning of the words given to them as cues when constructing sentences, or they may not have been familiar with the electrical appliances visualised in the worksheet and hence lacked the vocabulary to describe the pictures.
CONCLUSION
In conclusion, not only does a learner's understanding of the target language's grammatical structure play an important role in aiding their English writing performance, but the grammatical functions of their mother tongue can also significantly affect the learning of a new language. Greater emphasis should be placed on explaining English grammatical rules explicitly, or in the manner best suited to the learner, rather than having him or her memorise grammatical structures at the earlier stage of study. On a positive note, once learners' errors due to direct grammar translation are discovered, the focus in language classrooms can be shifted to comparisons of grammatical structures in English and Malay. This would enable learners to notice how sentences are constructed differently in the two languages even when the aim is to convey the same idea, providing a better understanding and awareness of English language structure. The number of participants in the study is relatively small; future researchers should therefore recruit more participants for more reliable and accurate results.
Table 1. Agreement with numbers in subject-verb
Table 2. Findings from Written Work | 5,650.4 | 2023-12-31T00:00:00.000 | [
"Linguistics",
"Education"
] |
Decoupled temperature and pressure hydrothermal synthesis of carbon sub-micron spheres from cellulose
The temperature and pressure of the hydrothermal process occurring in a batch reactor are typically coupled. Herein, we develop a decoupled temperature and pressure hydrothermal system that can heat the cellulose at a constant pressure, thus lowering the degradation temperature of cellulose significantly and enabling the fast production of carbon sub-micron spheres. Carbon sub-micron spheres can be produced without any isothermal time, much faster than in the conventional hydrothermal process. High-pressure water can help to cleave the hydrogen bonds in cellulose and facilitate dehydration reactions, thus promoting cellulose carbonization at low temperatures. A life cycle assessment based on a conceptual biorefinery design reveals that this technology leads to a substantial reduction in carbon emissions when hydrochar replaces fuel or is used for soil amendment. Overall, the decoupled temperature and pressure hydrothermal treatment in this study provides a promising method to produce sustainable carbon materials from cellulose with a carbon-negative effect.
Constant high pressure promoted the cellulose degradation from 2 to 6 MPa, and lower mass loss was achieved under higher pressures (from 6 to 20 MPa) (Supplementary Fig. 3). From 2 to 6 MPa, the main role of pressure is to break the kinetic limits and thus promote the degradation at a low temperature. Above 6 MPa, the release of small-molecule products is thermodynamically inhibited, resulting in slightly higher solid yields.
According to proximate analyses, elemental analyses, FTIR, and XRD, high pressures promoted the carbonization of cellulose (Supplementary Table 2; error bars represent standard deviations of repeated tests). The kinetics of the cellulose hydrothermal reaction were calculated with the Coats-Redfern (C-R) method 6 . In general, the hydrothermal reaction rate can be expressed using the first-order rate law 7 :

$$\frac{d\alpha}{d\tau} = k(1-\alpha) \quad (1)$$

where $\tau$ is the time of reaction (s); $\alpha$ is the conversion; $k$ is the reaction rate constant (s$^{-1}$), given by the Arrhenius equation $k = A\exp(-E/RT)$, where $A$ is the pre-exponential factor (s$^{-1}$); $E$ is the apparent activation energy (kJ mol$^{-1}$); $R$ is the universal gas constant (kJ mol$^{-1}$ K$^{-1}$); $T$ is the absolute temperature (K).
In hydrothermal experiments, the heating rate $\beta = dT/d\tau$ is constant. Combining the equations above, rearranging, integrating, and taking the logarithm (the Coats-Redfern approximation) gives

$$\ln\left[\frac{-\ln(1-\alpha)}{T^{2}}\right] = \ln\left(\frac{AR}{\beta E}\right) - \frac{E}{RT}$$

The kinetic parameters can be obtained by linear regression of this equation. Proximate analysis revealed that the volatile content decreased from 96.3 wt% (100 °C) to 39.5 wt% (300 °C), and the fixed carbon increased from 3.7 wt% (100 °C) to 60.5 wt% (300 °C). The transformation from a volatile-rich material into a fixed-carbon-rich material was consistent with the color change from white to brownish-black (Supplementary Table 3 and Supplementary Fig. 16). TGA experiments were conducted to evaluate the thermochemical properties of the hydrothermally treated cellulose. In the pyrolysis process (under pure N2), the raw cellulose had a single mass-loss process, which started at 300-315 °C, showed a sharp peak at 353 °C, and ended at 360-380 °C (Supplementary Fig. 24). The pyrolysis of the hydrothermal product obtained at 100 °C was similar to that of the untreated cellulose. In contrast, the hydrothermal products obtained at higher temperatures had more stable structures, making them difficult to decompose thermally, which might be related to the formation of the aromatic structures reflected in the FTIR and Raman spectra. Two peaks at 344-358 °C and 433-500 °C could be detected in the DTG curves of cellulose hydrothermally treated at 150 °C, 200 °C, and 250 °C. However, the first peak at ca. 350 °C disappeared in the DTG curve of the hydrochar from 300 °C, indicating the complete decomposition of hydroxy groups and six-membered pyran rings in cellulose. Similar to pyrolysis, the DTG curve of cellulose combustion had only one peak, while those of the hydrothermal products had two or three peaks, suggesting the transformation from the original cellulose structure to aromatic structures and fixed carbon during the hydrothermal carbonization.
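To make the regression step concrete, the following is a minimal sketch of a Coats-Redfern fit; the function name, synthetic data, and unit conventions are illustrative assumptions, not code from the study.

```python
import numpy as np

R = 8.314e-3  # universal gas constant, kJ mol^-1 K^-1

def coats_redfern(T, alpha, beta):
    """Estimate E (kJ mol^-1) and A (s^-1) by linear regression of
    ln[-ln(1-alpha)/T^2] = ln(A*R/(beta*E)) - E/(R*T)  (first-order C-R)."""
    y = np.log(-np.log(1.0 - alpha) / T**2)
    slope, intercept = np.polyfit(1.0 / T, y, 1)
    E = -slope * R                          # kJ mol^-1
    A = (beta * E / R) * np.exp(intercept)  # s^-1
    return E, A

# Usage with synthetic conversion data (assumed, for illustration only)
T = np.linspace(423.0, 523.0, 30)                # K
alpha = 1.0 - np.exp(-1e-3 * (T - 420.0) ** 1.5)
E, A = coats_redfern(T, alpha, beta=0.083)       # 5 K min^-1 expressed in K s^-1
print(f"E = {E:.1f} kJ/mol, A = {A:.3e} s^-1")
```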
In contrast, no carboxyls or carbonyls were observed in the pyrolysis of the hydrochar from 300 °C, indicating the destruction of the inherent structure (the cleavage of hydroxyl and ether bonds) 8,9 . Interestingly, alkenyls could be detected in the FTIR spectra, suggesting the presence of double bonds in the hydrochar. For the pyrolysis and combustion experiments in TGA, the kinetics were calculated using the peak analysis-least square method (PA-LSM) 10 . In the parallel-reaction kinetic model, the overall reaction is regarded as a linear combination of a series of independent reactions 11 .
With each peak in the DTG curve representing an independent reaction, the whole reaction was divided into several sub-reactions by peak analysis (PA). The kinetics of each sub-reaction are expressed as 6 :

$$\frac{d\alpha_i}{d\tau} = A_i \exp\left(-\frac{E_i}{RT}\right)(1-\alpha_i)^{n_i}$$

In the TGA experiments the heating rate $\beta$ was constant, so rearranging gives

$$\frac{d\alpha_i}{dT} = \frac{A_i}{\beta} \exp\left(-\frac{E_i}{RT}\right)(1-\alpha_i)^{n_i}$$

The least-square method (LSM) was used to obtain the $E_i$, $A_i$, and $n_i$ by minimizing

$$\sum_{j=1}^{N}\left[\left(\frac{d\alpha}{dT}\right)_{\mathrm{exp},j} - \left(\frac{d\alpha}{dT}\right)_{\mathrm{cal},j}\right]^{2}$$

where $N$ is the number of data points; $(d\alpha/dT)_{\mathrm{exp}}$ is the experimental result; $(d\alpha/dT)_{\mathrm{cal}}$ is the calculated result. The average deviation index (ADI), normalized by the maximum among the experimental data, was used to evaluate the discrepancy between the experimental and calculated results.
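A minimal single-peak PA-LSM sketch is given below; the helper name, starting values, and the reconstruction of conversion by integrating the experimental peak are assumptions made for illustration.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.integrate import cumulative_trapezoid

R = 8.314  # J mol^-1 K^-1

def fit_dtg_peak(T, dadT_exp, beta):
    """Fit one deconvoluted DTG peak with dα/dT = (A/β)exp(-E/RT)(1-α)^n.
    Returns (E, A, n, ADI). Illustrative sketch of the PA-LSM idea only."""
    # conversion reconstructed by integrating the experimental peak
    alpha = cumulative_trapezoid(dadT_exp, T, initial=0.0)
    alpha = np.clip(alpha / alpha[-1], 0.0, 0.999)

    def residual(p):
        lnA, E, n = p
        return (np.exp(lnA) / beta * np.exp(-E / (R * T))
                * (1.0 - alpha) ** n) - dadT_exp

    sol = least_squares(residual, x0=[20.0, 150e3, 1.0])
    lnA, E, n = sol.x
    adi = np.mean(np.abs(sol.fun)) / dadT_exp.max()  # average deviation index
    return E, np.exp(lnA), n, adi
```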
The kinetics of pyrolysis and combustion of the hydrothermally treated cellulose are summarized in Supplementary Tables 6 and 7. The utilization of biomass resources has great potential to reduce global net carbon emissions when biomass is used as a solid fuel replacing fossil energy or for soil amendment with carbon sequestration benefits. To quantify the sustainability of the DTPH carbonization conceptual biorefinery designs, at a scaled-up capacity of 60,000 tonnes per year, a prospective LCA based on process simulation in Aspen Plus® v11 was applied.
This approach has been widely used to quantify the environmental impacts of emerging technology innovations [12][13][14] . Two types of waste biomass, wastepaper sludge (WPS) rich in cellulose and agricultural residue rice straw (RS), were selected as feedstocks in the prospective scenarios. The "cradle-to-grave" system boundary of LCA includes the transportation of WPS or the collection of RS, their DTPH treatment, biogas production in AD and its usage, transportation of products, and their applications in fossil fuel substitution or soil amendment.
Supplementary Figure 26 | Scheme of process designs for WPS and RS DTPH carbonization biorefineries.
(1) Area 100 (A100): DTPH carbonization. Once received at the plant, the biomass feedstock is firstly treated for dedusting and size reduction prior to DTPH carbonization.
Energy consumption is estimated to be 5% of the whole process 15 . Then the biomass feedstock with the reduced size is fed into the reactor, which is filled with water at 20 MPa.
The DTPH carbonization reactor is then heated from ambient temperature to 200 °C. Due to the complexity of the reactions, a RYIELD-type reactor is chosen 16,17 . (2) Area 200 (A200): anaerobic digestion (AD) and aerobic digestion (AE). Process water from DTPH carbonization is treated by AD and AE before being sent to a centralized wastewater treatment (WWT) system; its COD removal is expected to be higher than that of other high-solid-content wastewater streams 16 . In AD, 86% of the COD is converted to biogas (methane and carbon dioxide), and 5% is converted to cell mass, produced at a yield of 45 g per kg COD digested 19 . Conversion reaction equations for furfural, HMF, and other polysaccharide degradation products in DTPH carbonization were adopted from the NREL process 20 , as were other input materials such as urea and other additives. Fugitive emissions from the AD were assumed to be 3.00% of the biogas produced 21 , which is then sent to a scrubber for biogas cleaning. The liquid from the AD is further treated in AE, where 96% of the remaining soluble organic matter is removed, with 74% producing water and carbon dioxide and 22% forming cell mass. The overall COD removal reaches 99.6% after AD and AE. The mass and composition of the digestate, as well as the electricity consumption of dewatering, were estimated based on NREL processes 19 ; the digestate was assumed to be landfilled. (4) Area 400 (A400): utility. The hot exhaust gas from A300 is sent to the boiler to generate high-pressure steam, which is used to heat the feedstock and water before they flow to DTPH carbonization and to generate electricity for the pumps. The steam generated preferentially preheats the feedstocks, and the remaining steam flows to a turbine for electricity generation. The exhaust gas is then used to dry the hydrochar obtained from the press filter before discharge, and cooling water removes the heat generated in the reactors.
The described processes were simulated in Aspen Plus® V11 to generate information for the life cycle inventory. The simulation capacity is set at 3000 L h-1, corresponding to a reasonable size of a DTPH carbonization reactor operated under high pressure. Process simulation specifications are listed in Supplementary Table 8.
Supplementary table: the process water contained 0.57 g/L dissolved ash (a: measured) and 5.74 g/L total organic carbon (b: estimated); HHV (c) was calculated based on the method in Channiwala and Parikh (2002) 23 . For example, RS-SF represents the DTPH carbonization technology process of RS at various B/W ratios with hydrochar used as solid fuel. Since fossil-based products were substituted, the system expansion allocation method was applied to avoid environmental burdens associated with the conventional products. The 2% cut-off rule was applied, and therefore only major inputs above this threshold are included. Land-use change and infrastructure are excluded from the system.
Life cycle inventory (LCI). Mass and energy flows for cellulose are derived from our in-house process simulation and corrected with the cellulose content for the WPS scenarios. To describe the carbon-positive/negative potential of a technology, a carbon positive/negative efficiency is proposed herein, which is defined as the total carbon in the feedstock divided by the carbon that is released or stored. The carbon-positive or carbon-negative efficiency of different energy conversion technologies was then compared systematically (Supplementary Fig. 32). While biomass combustion or gasification without CCS is carbon-neutral, the negative carbon efficiency of the DTPH carbonization technology in this study is higher than that of biomass fermentation, comparable with that of biomass gasification with CCS, but lower than that of combustion with CCS. However, introducing CCS into biomass gasification or combustion significantly increases the capital and operational costs of the plant, and thus these technologies are not currently applied industrially.
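A toy calculation of this efficiency metric is sketched below; since the wording leaves the direction of the ratio ambiguous, the formulation (net carbon released or stored relative to feedstock carbon), the function name, and the example numbers are all assumptions.

```python
def carbon_efficiency(c_feedstock_t, c_released_t, c_stored_t):
    """Net carbon balance per tonne of feedstock carbon: negative values
    indicate a carbon-negative process (more carbon stored than released).
    One plausible reading of the definition above, not its exact formula."""
    return (c_released_t - c_stored_t) / c_feedstock_t

# Example: 100 t feedstock C, 30 t released in processing, 55 t fixed in hydrochar
print(carbon_efficiency(100.0, 30.0, 55.0))  # -0.25 -> carbon negative
```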
Furthermore, the reaction temperature of DTPH carbonization (~200 °C) is lower than that of combustion (750-900 °C) or gasification (750-1150 °C). Negative emission technologies (NETs) are compared in Supplementary Fig. 33. DAC, EW, and AR require less land and water; however, EW and AR are limited in carbon-negative potential, i.e., they cannot meet the 2 °C target as single systems. DAC needs a high energy input (156 EJ yr -1 ), which is 29% of the global energy demand 33 , limiting its investment and development. The DTPH carbonization in this study, together with BECCS, may be among the most promising NETs for the 2 °C target, though significant amounts of land and water are required. It will therefore be important to use biomass waste, such as wastepaper sludge, agricultural waste, and forest waste, to save land and water.
"Materials Science"
] |
On testing exponentiality under Type-I censoring
Two new goodness-of-fit testing procedures are introduced to test exponentiality when data are subject to Type-I censoring. We propose four test statistics for this purpose. Using extensive Monte Carlo simulations, we show that the proposed tests maintain the nominal significance level and exhibit good power against both monotonic and non-monotonic hazard function alternatives, even for samples as small as n = 10. A real dataset is studied for illustrative purposes.
1. Introduction
In reliability and life-testing problems, Type-I censoring has gained significant popularity because the duration of the experiment is fixed before the experiment starts and is therefore under the control of the experimenter.
Suppose it is of interest to study the lifetimes of n items in a life-testing experiment. By controlling the total time, the experiment can be terminated at a time T determined before the experiment begins. The d observations then take the form $X_{1:n} \le X_{2:n} \le \ldots \le X_{d:n}$, and n − d values are censored, as discussed by Balakrishnan and Cohen [1] and Cohen [2].
The exponential distribution considered in this article is a well-known and frequently used lifetime model. It is a special case of many important statistical models such as the Weibull and gamma distributions. Its simplicity and the existence of closed-form solutions for many problems make the exponential model appealing, which motivates the current study (see also Balakrishnan and Basu [3]). We assume the exponential pdf with scale parameter $\theta$,

$$f(x;\theta) = \frac{1}{\theta}\,e^{-x/\theta}, \qquad x > 0,\ \theta > 0.$$

Suppose n items are placed in a life-testing experiment, which will be terminated at a predetermined time $T > 0$. Let $X_{1:n}, X_{2:n}, \ldots, X_{d:n}$ be the corresponding Type-I censored sample from a distribution function F. Consider the goodness-of-fit hypothesis

$$H_0:\ F(x) = 1 - e^{-x/\theta} \quad (1)$$

for some positive scale parameter $\theta$. Based on this, the current study is interested in testing for exponentiality. The maximum likelihood estimator (MLE) of $\theta$, based on the censored data $X_{1:n}, X_{2:n}, \ldots, X_{d:n}$, is given by

$$\hat{\theta} = \frac{1}{d}\left[\sum_{i=1}^{d} X_{i:n} + (n-d)\,T\right] \quad (2)$$
provided that $d \ge 1$. Hereafter we assume that $d \ge 1$, i.e., that at least one failure is observed before the censoring time.
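As a quick numerical check of Equation (2), here is a minimal sketch (the helper name and example values are assumptions):

```python
import numpy as np

def mle_theta(x_obs, n, T):
    """MLE of the exponential scale under Type-I censoring, Equation (2):
    theta_hat = (sum of the d observed failure times + (n - d) * T) / d."""
    x_obs = np.asarray(x_obs, dtype=float)   # observed failure times, all <= T
    d = len(x_obs)
    if d < 1:
        raise ValueError("at least one failure is required (d >= 1)")
    return (x_obs.sum() + (n - d) * T) / d

print(mle_theta([0.8, 1.2, 2.9, 4.6], n=10, T=5.0))  # (9.5 + 30) / 4 = 9.875
```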
Pearson [4] was the first to study the problem of goodness-of-fit, which is a statistical procedure for testing the suitability of a specific model to describe a given set of complete or censored data. For a detailed discussion of this problem see D'Agostino and Stephens [5], Huber-Carol et al. [6], and Nikulin and Chimitova [7] among others.
Stephens [8] proposed a version of the Cramer-von Mises and Anderson-Darling goodness-of-fit test statistics for Type-I censored data. Pakyari and Balakrishnan [9] studied a goodness-of-fit testing procedure for the exponential distribution when the available data are Type-I censored. They studied the goodness-of-fit testing problem for the exponential model by treating the Type-I censored data as a complete sample and then performing classical goodness-of-fit tests for complete data.
Their method considered the Type-I censored sample X 1 : n ≤ X 2 : n ≤ . . . ≤ X d : n as order statistics from a complete sample of size d, from a right-truncated exponential distribution at time T.
This article presents new testing procedures for testing the goodness-of-fit of the exponential model when data are Type-I censored. We study several testing procedures in this regard such as tests based on order statistics, tests based on quantiles, and tests based on binomial distribution. However, our proposed method is based on order statistics followed by tests based on quantiles. We investigate the empirical power of the proposed tests through an extensive Monte Carlo simulation study.
This study aims to provide some easy yet powerful goodness-of-fit testing procedures for exponentiality, which is known to be a special case among many well-applied lifetime models.
The paper is structured as follows. Section 2 introduces test statistics constructed from order statistics. In Section 3 we propose a test statistic based on a linear combination of the quantile vector. Tests based on the binomial distribution are discussed in Section 4. In Section 5, we investigate the validity of the proposed tests by calculating the empirical significance levels and comparing them with the nominated levels; we then perform a Monte Carlo simulation study to assess the empirical power of the proposed tests and compare them with the power of some known tests described in the literature. Finally, we illustrate the proposed tests with a real data example.
2. Tests based on order statistics
Note that, conditional on D = d, the Type-I censored sample is distributed as the order statistics $V_{1:d}, \ldots, V_{d:d}$ of a random sample of size d from the exponential distribution right-truncated at T; see Arnold et al. [22] and David and Nagaraja [23].
Using the MLE of $\theta$, it is useful to transform the Type-I censored sample $X_{1:n}, X_{2:n}, \ldots, X_{d:n}$ to a complete, uniformly distributed sample $U_{1:d}, U_{2:d}, \ldots, U_{d:d}$ via the probability integral transform of the right-truncated exponential distribution,

$$U_{i:d} = \frac{1 - e^{-X_{i:n}/\hat{\theta}}}{1 - e^{-T/\hat{\theta}}}, \qquad i = 1, \ldots, d. \quad (3)$$

Therefore, testing that the Type-I censored data $X_{1:n}, X_{2:n}, \ldots, X_{d:n}$ follow an exponential distribution is equivalent to testing that the complete data $U_{1:d}, U_{2:d}, \ldots, U_{d:d}$ follow a uniform distribution.
If we then let $\nu_i = U_{i:d} - \frac{i}{d+1}$ be the deviation of each order statistic $U_{i:d}$ from its expected value, several goodness-of-fit test statistics $T_1$, $T_2$, and $T_3$ can be constructed from the $\nu_i$ (Equation (4)). Large values of these statistics lead to rejection of the null hypothesis of exponentiality. In Section 5, we use Monte Carlo simulation to determine the upper tail of the simulated values of the statistics $T_1$, $T_2$, and $T_3$ as critical points for testing exponentiality.
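The Monte Carlo calibration can be sketched as follows; since the exact definitions of $T_1$-$T_3$ are not reproduced above, the statistics below (sum of absolute deviations, sum of squared deviations, maximum deviation) are illustrative stand-ins, and all names and parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def u_transform(x, n, T):
    """Equation (3): map a Type-I censored exponential sample to (0,1)."""
    theta_hat = (x.sum() + (n - len(x)) * T) / len(x)   # Equation (2)
    return (1 - np.exp(-x / theta_hat)) / (1 - np.exp(-T / theta_hat))

def deviation_stats(u):
    d = len(u)
    nu = u - np.arange(1, d + 1) / (d + 1)
    return abs(nu).sum(), (nu**2).sum(), abs(nu).max()  # stand-ins for T1-T3

# upper-tail critical values under H0 (standard exponential, nominal level 10%)
n, T, B = 20, 1.6, 50_000
sims = []
for _ in range(B):
    x = np.sort(rng.exponential(1.0, n))
    x = x[x <= T]
    if len(x) >= 1:                      # keep only samples with d >= 1
        sims.append(deviation_stats(u_transform(x, n, T)))
crit = np.quantile(np.array(sims), 0.90, axis=0)
print(crit)
```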
3. Test based on quantiles
Note that the order statistics $U_{i:d}$ defined by Equation (3) follow the beta distribution with parameters $(i, d-i+1)$. Define $p_i$ as the beta CDF with these parameters evaluated at $U_{i:d}$. The quantile vector $(p_1, \ldots, p_d)$ can be used as a measure of goodness-of-fit: extreme values of $p_i$, i.e., values close to zero or one, are signs of "badness-of-fit". It is noteworthy that, although the $p_i$'s are uniformly distributed over (0, 1), they are not statistically independent.
We propose a test statistic $T_P$ as a weighted linear combination involving $p_{(i)}$ and $1 - p_{(i)}$ on the logarithmic scale, with weights $w_i = \frac{i-1}{d}$ for $i = 1, 2, \ldots, d$, where the $p_{(i)}$'s are the ordered values of the $p_i$'s arranged from smallest to largest. The statistic $T_P$ is computed only for values of $p_{(i)}$ in the open interval (0, 1), i.e., we exclude the cases with $p_{(i)} = 0$ or $p_{(i)} = 1$. Note also that whilst the $u_{i:d}$'s are ordered by construction, the $p_i$'s are not necessarily ordered. Moreover, $T_P$ will be large whenever one of the $p_i$'s is close to zero or one; hence, large values of $T_P$ provide evidence that the null hypothesis $H_0$ of exponentiality should be rejected.
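A sketch of the quantile construction follows; the exact combination used for $T_P$ is not reproduced above, so the negative weighted log-product below is an assumed illustrative form.

```python
import numpy as np
from scipy.stats import beta

def quantile_stat(u):
    """Compute p_i = BetaCDF(U_{i:d}; i, d-i+1) and an illustrative T_P.
    The weighting w_i = (i-1)/d follows the text; the log combination
    is an assumption, not the paper's exact formula."""
    d = len(u)
    i = np.arange(1, d + 1)
    p = np.sort(beta.cdf(u, i, d - i + 1))   # each p_i ~ Uniform(0,1) under H0
    p = p[(p > 0.0) & (p < 1.0)]             # exclude boundary cases
    w = np.arange(len(p)) / d                # w_i = (i-1)/d
    return -(w * (np.log(p) + np.log(1.0 - p))).sum()
```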
4. Test based on binomial distribution
Note that under the null hypothesis, i.e., under the validity of the exponential model, the number of failures D follows a binomial distribution with parameters n and F(T), so we expect to observe about nF(T) failures. Hence, testing the null hypothesis of exponentiality (1) is equivalent to performing a binomial test, and the usual binomial test may be used to find the associated p-value.
For large sample sizes n, the binomial distribution is well approximated by the Gaussian model, and a z-test with continuity correction can be applied using the statistic

$$Z = \frac{|D - nF(T)| - 0.5}{\sqrt{nF(T)\left(1 - F(T)\right)}}$$

However, using Monte Carlo simulation we found that the resulting test statistic $T_B$ does not maintain the nominated significance level for small sample sizes, even for $n \le 40$, so we did not include the power of $T_B$ in our simulation study.
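For concreteness, a minimal version of this normal-approximation test might look as follows (the function name and two-sided convention are assumptions):

```python
import numpy as np
from scipy.stats import norm

def binomial_z_pvalue(d, n, T, theta):
    """z-test with continuity correction for D ~ Binomial(n, F(T)),
    where F(T) = 1 - exp(-T/theta) under the exponential null."""
    p = 1.0 - np.exp(-T / theta)
    z = (abs(d - n * p) - 0.5) / np.sqrt(n * p * (1.0 - p))
    return 2.0 * norm.sf(z)   # two-sided p-value

print(binomial_z_pvalue(d=14, n=19, T=15.0, theta=10.0))
```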
In the following section, we perform a Monte Carlo simulation to assess the power of the proposed tests for various alternative models, and for a combination of various sample sizes n and censoring proportion 1 − F(T) = exp(−T/θ).
5. Simulation study
In this section, the performance of our proposed tests will be evaluated by studying the empirical significance level and the empirical power through extensive Monte Carlo simulations. We used the R pseudo-random generator with 50,000 iterations.
First, we investigate the null distribution of the test statistics presented in the previous sections using the Monte Carlo estimates of the coefficient of skewness ($\sqrt{\beta_1}$) and the coefficient of kurtosis ($\beta_2$) when the underlying distribution is standard exponential. The results are shown in Table 1. In terms of the central moments $\mu_k$, the coefficients are defined as

$$\sqrt{\beta_1} = \frac{\mu_3}{\mu_2^{3/2}} \qquad \text{and} \qquad \beta_2 = \frac{\mu_4}{\mu_2^{2}}.$$

From Table 1, it is clear that the null distributions of all the test statistics are far from normality, as $\sqrt{\beta_1}$ and $\beta_2$ are not close to 0 and 3, respectively, which are the coefficients of skewness and kurtosis of the normal distribution. This is also evident from Figure 1, which depicts the simulated pdf curves of the test statistics under the validity of the null hypothesis. Indeed, all the test statistics are skewed to the right. Hence, we use empirical critical values to perform the goodness-of-fit tests.
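These moment coefficients can be estimated directly from the simulated null values; a minimal sketch (helper name assumed):

```python
import numpy as np

def skew_kurt(t):
    """Monte Carlo estimates of sqrt(beta1) = m3 / m2^(3/2) and
    beta2 = m4 / m2^2 from simulated null-distribution values."""
    t = np.asarray(t, dtype=float)
    m = t - t.mean()
    m2, m3, m4 = (m**2).mean(), (m**3).mean(), (m**4).mean()
    return m3 / m2**1.5, m4 / m2**2
```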
We compare the empirical power of the proposed tests to those of the EDF-based test statistics proposed by Pettitt and Stephens [24] and Stephens [8].
Stephens [8] studied a modification of the Kolmogorov-Smirnov statistic, $D_{T:n}$, for Type-I censored data from an exponential model, computed from $u_{(i)} = 1 - \exp(-x_{i:n}/\hat{\theta})$ and $u_{(d+1)} = 1 - \exp(-T/\hat{\theta})$, with $\hat{\theta}$ being the MLE of the scale parameter $\theta$ given by Equation (2). Pettitt and Stephens [24] also studied the Cramér-von Mises statistic $_1W^2_{T:n}$ and the Anderson-Darling statistic $_1A^2_{T:n}$ under Type-I censoring. We considered seven alternative models in three groups $G_1$, $G_2$ and $G_3$ based on the behavior of their hazard functions; among them, for example, is the log-normal distribution with location parameter $\mu = 0$ and scale parameter $\sigma = 1.0$, denoted Log-normal(0, 1.0).
The following forms of probability density functions were used here.
The gamma distribution with density function

$$f(x) = \frac{x^{\alpha-1}\,e^{-x/\beta}}{\beta^{\alpha}\,\Gamma(\alpha)}, \qquad x > 0,$$

where $\alpha > 0$ is the shape parameter and $\beta > 0$ is the scale parameter. The Weibull distribution with density function

$$f(x) = \frac{a}{b}\left(\frac{x}{b}\right)^{a-1}\exp\left[-\left(\frac{x}{b}\right)^{a}\right], \qquad x > 0,$$

where $a > 0$ and $b > 0$ are the shape and scale parameters, respectively. The log-normal distribution with density function

$$f(x) = \frac{1}{x\sigma\sqrt{2\pi}}\exp\left[-\frac{(\ln x - \mu)^2}{2\sigma^2}\right], \qquad x > 0,$$

where $-\infty < \mu < \infty$ is the mean and $\sigma > 0$ is the standard deviation of the transformed normal distribution. Finally, the Lomax distribution (also known as Pareto Type II), with probability density function

$$f(x) = \frac{d}{c}\left(1 + \frac{x}{c}\right)^{-(d+1)}, \qquad x > 0,$$

with scale parameter $c > 0$ and shape parameter $d > 0$.
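For the power simulations, samples from these alternative families can be drawn as follows; the specific shape and scale values are placeholders, not the paper's simulation settings.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_alternative(name, size, **p):
    """Draw from the alternative families in their standard parameterizations."""
    if name == "gamma":       # shape alpha, scale beta
        return rng.gamma(p.get("alpha", 0.5), p.get("beta", 1.0), size)
    if name == "weibull":     # shape a, scale b
        return p.get("b", 1.0) * rng.weibull(p.get("a", 2.0), size)
    if name == "lognormal":   # mu, sigma of the underlying normal
        return rng.lognormal(p.get("mu", 0.0), p.get("sigma", 1.0), size)
    if name == "lomax":       # scale c, shape d, via inverse-CDF sampling
        u = rng.random(size)
        return p.get("c", 1.0) * (u ** (-1.0 / p.get("d", 2.0)) - 1.0)
    raise ValueError(f"unknown alternative: {name}")
```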
The CDFs of the alternative distributions in groups $G_1$, $G_2$ and $G_3$ are depicted in Figure 2.
For a comprehensive discussion of these distributions, one may refer to Johnson et al. [25,26] and Kleiber and Kotz [27].
Verifying the empirical significance level is of great importance for the validity of any goodness-of-fit test statistic. To assess the validity of our tests, we investigated the empirical significance level by generating 100,000 Type-I censored random samples from the exponential distribution with rate equal to one (standard exponential). We considered combinations of various sample sizes n and proportions (probabilities) of failure $F(T) = 1 - \exp(-T)$. The empirical significance levels at the nominated level $\alpha = 0.10$ are tabulated in Table 2. The values in this table confirm the validity of our proposed tests in terms of preserving the nominated significance level.
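Reusing the helpers from the earlier sketches, the empirical level could be estimated along these lines (again an illustrative sketch, with assumed names and settings):

```python
import numpy as np

rng = np.random.default_rng(2)

def empirical_level(stat_fn, crit, n, T, reps=100_000):
    """Fraction of standard-exponential null samples whose statistic
    exceeds the Monte Carlo critical value (target: the nominal alpha)."""
    hits = trials = 0
    for _ in range(reps):
        x = np.sort(rng.exponential(1.0, n))
        x = x[x <= T]
        if len(x) < 1:
            continue                       # discard samples with d = 0
        trials += 1
        u = u_transform(x, n, T)           # from the earlier sketch
        if stat_fn(u) > crit:
            hits += 1
    return hits / trials
```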
The power of the proposed tests together with the powers associated with the classical EDF-based tests are recorded in Tables 3-5 for sample sizes n = 10, n = 20, and n = 30, respectively for the three alternative groups G 1 , G 2 , and G 3 . Figures 4-6 depict the corresponding heatmaps to provide better visualization of the results. The greyscale is given in Figure 7.
The test statistics $T_3$ and $T_P$ outperformed the classical EDF-based statistics for groups $G_2$ and $G_3$, respectively, i.e., for the monotonically increasing and non-monotonic hazard function alternatives, for all sample sizes considered here. The test statistic $T_2$ also had the best power in some cases in groups $G_2$ and $G_3$. However, for the group $G_1$ alternatives with monotonically decreasing hazard functions, the EDF-based test statistic AD performed better than the other tests. In Table 5, for the Log-normal(0, 0.5) alternative and n = 30, the empirical powers equal 1.00 for most tests when the censoring proportion $F(T)$ is at least 60%. This shows the consistency of the test statistics considered here. Moreover, as one would expect, the empirical power values of all the tests increase as the sample size n increases and/or the censoring proportion $F(T)$ increases.
In summary, for the monotonically increasing and non-monotonic hazard rate alternatives, we recommend the test statistics $T_3$ and $T_P$. For the Lomax model alternative, we recommend $T_P$ for small censoring proportions and the AD statistic for large values of $F(T)$.
6. Numerical example
In this section, we study a numerical example to illustrate our proposed procedure and test statistics. The data concern the times to breakdown of n = 19 insulating fluids tested at 34 kilovolts (see Nelson [28], Table 1.1, page 105).
Suppose we decided to terminate the experiment at time T = 15, so any observation larger than 15 is censored. The complete and the Type-I censored data are summarized in Table 6.
The value of d is found to be d = 14, and the MLE of $\theta$ is obtained from Equation (2). The values of the test statistics and the associated p-values are given in Table 7. The p-values are sufficiently large for all test statistics; thus the null hypothesis of exponentiality is not rejected, and the exponential model fits the data. The histogram of the complete data and the fitted exponential pdf curve with scale parameter $\theta = 10$ are depicted in Figure 3.
7. Concluding remarks
In this paper, we proposed some new goodness-of-fit tests for exponentiality when the available data are Type-I censored. We employed two methods for this purpose: the first was based on the distance between the observed order statistics and their theoretical means under the assumption of exponentiality.
The second method was based on the quantiles of the uniform order statistics, which are known to follow the beta distribution; under the null hypothesis, most of the quantiles $p_i$ should be close to 0.5. We proposed a test statistic based on a weighted mean of the logarithms of the $p_i$.
Among the four test statistics presented in this article, the statistic $T_3$, based on order statistics, yields the most powerful test, followed by the quantile-based statistic $T_P$.
The large sample properties of the proposed estimators will be examined in a separate future study through Monte Carlo simulation.
Data availability statement
The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding author.
Author contributions
Sections 1-4 were prepared by RP. Sections 5, 6 were prepared by OA-H (60%) and RP (40%). All authors contributed to the article and approved the submitted version. | 3,885 | 2023-02-14T00:00:00.000 | [
"Mathematics"
] |
INTEGRATED USE OF GIS, REMOTE SENSING DATA AND A SET OF MODELS FOR OPERATIONAL FLOOD FORECASTING
The research is aimed at the development and testing of a system for operational river flood forecasting. The system is based on the use of a complex of hydrological and hydrodynamic models, as well as the integrated processing of in situ and satellite data, and is implemented on the basis of a service-oriented architecture. A distinctive feature of the system is the complete automation of the entire simulation cycle, from loading initial data to interpreting results, visualizing them, and alerting interested parties. The theoretical basis for ensuring the coordinated functioning of all system components is the qualimetry of models and polymodel complexes. The practical implementation is carried out using open codes, free software and the GIS platform «RegionView». All the complexity associated with the use of heterogeneous, geographically distributed information resources is hidden from the user. This allows the system to be used not only by specialists in GIS, IT or the relevant subject area, but also by other users interested in the results of flood monitoring and forecasting: emergency services, local authorities, commercial organizations and citizens. The described technologies and the operational flood forecasting system were tested in the Russian Federation on the Northern Dvina River, from the city of Velikiy Ustyug to the city of Kotlas, in 2014-2019. The test results show that this approach ensures full implementation of the required functionality of operational flood forecasting systems and fulfilment of the basic requirements for such systems, and they also indicate the possibility of widespread use of such systems by authorities and emergency services.
INTRODUCTION
The research in the field of creating systems for operational flood forecasting is currently very relevant. The frequency of these emergencies remains high, which leads to severe consequences and significant economic losses. At the same time, there is still a lack of applications that can promptly provide decision makers with reliable information about the dynamics of emergencies in a simple visual form. Describing the state of the art in operational flood forecasting technologies, it is necessary to note at least the following important factors: current information systems and services for flood monitoring and forecasting; remote sensing data application; and the integration of heterogeneous information resources. Current information systems and services for flood monitoring and forecasting. Currently, the international community is actively developing information systems and services for flood monitoring and forecasting. The functionality of hydrodynamic models such as Mike FLOOD (Danish Hydrological Institute, 2019), Delft 3D (Deltares, Delft3D Development Team, 2019), HEC-RAS (Hydrologic Engineering Center, 2019), LISFLOOD (University of Bristol, 2019) and others is improving. The most well-known information systems using mathematical models include: Flood Early Warning Systems (FEWS), the North American National Water Model (NWM) (NOAA National Water Center, 2019), and the European Flood Awareness System (Copernicus Emergency Management Service, EFAS, 2019), based on the LISFLOOD model, among others. These systems are focused on the territories of North American and European countries and rely on a well-developed network of stations and observation posts as sources of hydrological and meteorological data. To forecast situations on Russian rivers, it is necessary to take into account such features as the sparseness of the hydrometeorological observation network, the occurrence of ice jams, and the absence of highly detailed digital elevation and terrain models for potentially dangerous river valleys. Recently, a number of new services based on remote sensing data have been developed: the Copernicus Emergency Management Service, which includes the Mapping service (Copernicus Emergency Management Service, Mapping, 2019) and the Global Flood Awareness System (Copernicus Emergency Management Service, GloFAS, 2019); and the Thematic Exploitation Platform - Hydrology (TEP Hydrology) (European Space Agency, 2019), which includes the Flood Monitoring Service. However, the existing services still do not involve mathematical models that adequately take into account the features of Russian rivers, and they are intended mainly for flow monitoring and forecasting rather than for operational forecasting of river floods. To calculate flooded areas and water flow movement on Russian rivers, the most widely used hydrodynamic models are the STREAM_2D models (Aleksyuk A.I., Belikov V.V., 2017). Despite good testing results in a number of Russian territories, they can currently be used mainly by hydrologists or for solving particular modeling problems, owing to the insufficient development of such issues as the automation of obtaining and processing source data, locality of execution, and other information technology constraints.
There are examples of projects that implement mathematical predictive models of the river hydrological regime during floods, as well as cases of automated and automatic systems for collecting hydrological information (Borsch et al., 2015). However, software developers have often relied on fairly complex proprietary software (ArcGIS, etc.) or particular information exchange standards (Bugayets et al., 2015), which strongly limits the scaling of the obtained solutions. Freely distributed flood modeling software is represented only by a number of applications from the U.S. Army Corps of Engineers (HEC-RAS, HEC-GeoRAS, HEC-HMS, HEC-GeoHMS). Free closed-source solutions (for example, Flood Modeller (Jacobs, 2019)) have strict limitations on functionality and the dimension of the processed data. Remote sensing data application. Today remote sensing data is one of the main sources of information about the actual boundaries of rivers and flooded areas. This data is becoming more accessible and convenient to use. The European Space Agency is taking great practical steps in this direction, developing technologies for the operational use of data from the Sentinel satellites. The quality of data from the Russian satellites "Kanopus-V" and "Resurs-P" (which are actively used by the EMERCOM services of Russia) is also being improved, with their increased use for flood monitoring tasks. Within the framework of international agreements, in particular the International Charter on Space and Major Disasters, it is possible in an emergency to use the operational satellite imagery resources of all charter participants (15 organizations, including national and international space agencies). In addition, the rapid development of unmanned aerial vehicles (UAVs), related imaging equipment and software tools for processing UAV images also expands the set of input data available to improve the accuracy of flood modeling and forecasting. There are a number of works (Ponomarenko M. R., Pimanov I. Y., 2017; Refice A., D'Addabbo A., Capolongo D., 2017) on automating the detection of flooded areas on the basis of remote sensing and ground data. However, the question of using this source of information in complex flood modeling systems remains open. Integration of heterogeneous information resources. Today, information technologies for integrating heterogeneous information resources are increasingly based on open source software, including geographic information systems (GIS), which are an integral component of flood forecasting systems (CARTO, NextGIS, MapBox, Urbica). Modern technologies in this field make it possible to create simplified means of user interaction with complex systems (Zelentsov V.A., Potryasaev S.A., 2017). However, to date there are not enough examples of using information technologies and software to fully automate the entire cycle of flood modeling, from gathering and processing ground and aerospace data to publishing the results and notifying interested persons and organizations.
In general, the analysis of existing operational flood forecasting systems shows that there is still a gap between three categories of specialists: the developers of hydrological and hydrodynamic models; the developers of information technologies and software tools for processing heterogeneous data; and practitioners, who are still unable to quickly use the results of mathematical modeling due to the insufficient automation of forecasting systems and the absence of convenient and simple means of interacting with modeling complexes. To bridge this gap and provide a highly accurate assessment of flood boundaries and water levels (taking into account the specific conditions of water flow distribution), the authors proposed an approach to creating intelligent information systems for operational forecasting of river floods based on a complex of hydrological and hydrodynamic models (Alabyan A. M. et al., 2016; Zelentsov V.A. et al., 2016). The principal features of this approach are the following: the integrated use of ground and aerospace data for modeling; the selection and application of approved mathematical models describing the catchment and water flow; and the full automation of all stages of modeling, from collecting and loading source data to analyzing potential damage and alerting interested parties. According to the proposed approach, an operational short-term (12-48 hours ahead) forecasting system with integrated use of GIS, remote sensing data and a set of models was developed. The article presents the most important aspects of the system implementation and the results of its testing.
GENERAL STRUCTURE OF THE INTEGRATED SYSTEM
The proposed automated flood forecasting technology is based on the concept of a multi-model description of complex natural objects. This concept includes a mechanism for the selection and adaptation (structural and parametric) of the most adequate model for each specific situation (Alabyan A. M. et al., 2016; Sokolov B.V. et al., 2015). The concept is currently being developed within the framework of the qualimetry of models and polymodel complexes, a new scientific field. According to this concept, there is no universal model of flooding for different parts of a river varied in length and configuration. When choosing hydrodynamic models, it is advisable to implement a multi-model approach. Depending on the river valley length and data availability, it is possible to choose between two types of models: one-dimensional hydrodynamic models for long river valleys (100-1000 km); and two-dimensional models for river valleys less than 100 km in length with significantly wide floodplains, complex configuration, and various structures located in the floodplains (Alabyan A. M. et al., 2016). Experience shows that it is efficient to study and monitor long river objects through joint (hybrid, integrated, multi-scale) calculation with one-dimensional and two-dimensional models. The general architecture of the developed model-oriented operational flood forecasting system is shown in Fig. 1. The system was built using a service-oriented architecture (SOA) (Zelentsov V.A., Potryasaev S.A., 2017; Paik H. et al., 2017) that provides flexible interaction between software modules implementing subject-area models (in this research, hydrodynamic and hydrological models); modules for collecting and processing heterogeneous data (including data from gauging stations and remote sensing data); control modules, etc. All system components are implemented as web services and can be geographically distributed and localized in various organizations, cities and countries.
The most important issue in an SOA implementation is the way of connecting disparate modules and organizing their interaction during system operation. SOA does not prescribe any way of organizing the information flow between a multitude of services apart from connecting applications on a point-to-point basis. Such interaction leads to a rapid increase in system complexity when new participants are added. Instead, it is advisable to create an infrastructure for information exchange in which third-party software systems are connected as modules to a universal control application that organizes computational processes for solving consumers' applied problems, while the information exchange itself is based on the principles of an event-oriented architecture. The event-oriented approach in distributed information systems can be implemented in practice as an Enterprise Service Bus (He W., Xu L.D., 2014). It provides centralized and unified event-oriented messaging between the various components of an information system. Messaging between different systems occurs through a single point, which provides transactional control, data conversion and message auditing. If any system component connected to the service bus changes, there is no need to reconfigure other subsystems. The concept of the service bus provides the possibility of organizing the synthesis of computational processes, but it does not directly prescribe how to realize this possibility. To describe the automatic management of a set of services, the term "web services orchestration" is used in the literature (Wang Y., 2016). Orchestration describes how services should interact with each other using messaging, including business logic and workflow. In a service-oriented architecture, service orchestration is implemented according to the Web Services Business Process Execution Language (WS-BPEL) standard (Ting-Huan K., Chi-Hua C., Hsu-Yang K., 2016). Over the past decade, WS-BPEL has established itself as an effective language for describing the application logic of distributed web services. The use of this language allows one to organize the logic of interaction between modules and web services for each specific applied task, including with a visual editor; this ensures the visual design of data processing algorithms involving various sources and services. Approaches to creating most of the considered information technology solutions are based on results presented in previous research (Alabyan A. M. et al., 2016). The proposed and implemented system includes the following original architectural and software solutions for the automation of operational flood forecasting: a service bus, represented by the software product OpenESB; a BPEL script interpreter embedded in the OpenESB service bus; software for displaying data according to web-mapping standards (GeoServer); the PostgreSQL database management system with the PostGIS spatial add-on; a Python-based administration server; a data collection service for hydrological sensors; a service for receiving, processing and downloading remote sensing data; a service for forecasting the values of hydrodynamic process parameters; a service for controlling the work of the computational hydrodynamic model; a service for forecasting the water level and flow at gauging stations; a service for processing and interpreting the calculation results; and a user web interface, a web application adapted for stationary and mobile user terminals.
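The orchestration idea can be illustrated with a toy event-driven pipeline; this is a conceptual sketch in Python, not the system's BPEL/OpenESB code, and all topic names and handler logic are invented for illustration.

```python
import queue

bus = queue.Queue()          # stand-in for the enterprise service bus
HANDLERS = {}                # topic -> service handler (publish/subscribe)

def on(topic):
    def register(fn):
        HANDLERS[topic] = fn
        return fn
    return register

def publish(topic, payload):
    bus.put((topic, payload))

@on("gauge_data")            # hydrological service: levels -> discharge
def hydrology_service(levels):
    publish("discharge_forecast", sum(levels) / len(levels))  # placeholder model

@on("discharge_forecast")    # hydrodynamic service: discharge -> flood map
def hydrodynamics_service(q):
    publish("flood_map", {"flooded_area_km2": 1.5 * q})       # placeholder model

# BPEL-like workflow: route each event to its subscribed service
publish("gauge_data", [2.1, 2.4, 2.2])
while not bus.empty():
    topic, payload = bus.get()
    if topic == "flood_map":
        print("publish WMS layer:", payload)  # delivered to the user interface
    else:
        HANDLERS[topic](payload)
```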
The operational flood forecasting system was developed with the use of open codes, free software and GIS platform «RegionView» (Zelentsov V.А., Kovalev А.P., Pimanov I.Yu., 2016).
All system data is stored in a bitemporal database based on the temporal data model (TDM), which allows information about the data life cycle to be stored. TDM is used to store both the source (hydrometeorological) data and the simulation results. Bitemporality means storing both the time for which the data is relevant (valid time) and the transaction time (the moment the data is added to the storage). The use of the bitemporal database enables both the operational work of the flood monitoring system and its operation in historical and scenario modeling modes. User access to temporal data is implemented as a time slider in the web interface: the user can view various data (source, historical, and forecast) without special knowledge (for example, of a formal query language) simply by moving the time slider (Fig. 2).
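In code, a bitemporal record reduces to two time dimensions per row; the sketch below is illustrative, and the field names are invented, not the system's actual schema.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class BitemporalRecord:
    gauge_id: str
    water_level_m: float
    valid_from: datetime      # when the measurement/forecast applies
    valid_to: datetime
    recorded_at: datetime     # transaction time: when the row was stored

def as_of(records, valid_t, tx_t):
    """Time-slider query: what the system knew at tx_t about time valid_t."""
    return [r for r in records
            if r.valid_from <= valid_t < r.valid_to and r.recorded_at <= tx_t]
```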
Figure 2. System web interface with a time slider
The developed operational flood forecasting system automatically performs the full simulation cycle, from gathering the source data from hydrological sensors to updating the forecast results in the user interface. As a result of the system's work, records are formed in the spatial database, converted by the geoserver into the WMS format, and delivered to the user interface. The interface provides the end user with the necessary minimum of tools for working with forecasting results: a spatial data search string, a list of currently displayed data, and a time slider for working with temporal data. Thus, all the complexity associated with the use of heterogeneous, geographically distributed information systems is hidden from the user thanks to the full automation of the computational process. This allows the system to be used not only by specialists with a high level of knowledge in GIS and information technology, but also by specialists in the subject area (hydrology) and all other users interested in the results of flood forecasting (emergency services, executive authorities, commercial organizations and citizens). To prepare the initial data (water levels and discharges at predetermined river cross-sections) for loading directly into the hydrodynamic models, a special service was developed. This service is based on the multi-model approach and the concept of choosing the mathematical model that most adequately describes and forecasts the hydrological situation for a specific time interval. The alternative models applied for the calculation of water flow are analytical-simulation hydrological models of runoff formation and models of direct calculation using artificial neural networks (ANN). The choice of a specific model is determined by the specific conditions of the flood.
REMOTE SENSING DATA PROCESSING
An important distinctive feature of the developed system is the active use of radar (SAR) and optical satellite imagery. Optical data is a valuable source of information for detecting flooding, as water surfaces are usually characterized by low spectral brightness values in comparison with surrounding objects and appear as the darkest areas in optical images. An effective way to automatically detect water bodies using optical data is to calculate indices estimating the intensity of reflected radiation in various spectral channels. The indices most commonly used for detecting water surfaces include the Normalized Difference Water Index (NDWI) and the Normalized Difference Vegetation Index (NDVI) (Refice A., D'Addabbo A., Capolongo D., 2017). In the case of clouds, which often occur during hydrological events, the use of radar data significantly expands the capability of flood forecasting systems to obtain near-real-time information about river ice and currently flooded areas and to correct hydrodynamic model parameters (if necessary). Water surfaces in radar images are usually represented by pixels with low intensity values due to specular reflection of the radar signal. Today, the automatic detection of water surfaces in SAR data is based on threshold processing, texture analysis, and interferometric and polarimetric processing (Refice A., D'Addabbo A., Capolongo D., 2017). The main method is thresholding, in which all pixels of the image whose value is less than a set threshold are assigned to the class of water objects (Chini M. et al., 2017). The accuracy of object recognition depends on the nature of the water surface and is largely determined by the parameters of the source data, in particular the polarization of the signal with which the images were taken. Horizontal (HH) polarization is most preferred for flood mapping. Cross polarizations (HV, VH) and their combined use are effective for studying partially flooded areas, as they allow different terrain objects to be distinguished (Martinis S., Rieke C., 2015). At the same time, the use of radar data has its limitations: for instance, due to the specifics of SAR sensing geometry, flooding in urban areas may fall into the radar shadow zone. An effective solution to this problem is a joint analysis of optical and radar data.
The developed system includes an original remote sensing data processing method based on threshold processing of SAR images, calculation of the NDWI index from optical data, and joint analysis of the obtained results. This approach makes it possible to avoid inaccuracies associated with clouds while still obtaining data on flooded areas in urban territories. The method solves the problem of automatic optical and radar data processing for identifying flooded areas and of visually comparing real and simulated contours of flooded areas. Moreover, its implementation as a separate web service allows the adjustment procedures of the forecast models to be automated.
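The joint optical/radar decision rule can be sketched as follows; the threshold values and the function name are illustrative assumptions, not the system's calibrated settings.

```python
import numpy as np

def water_mask(green, nir, sar_db, ndwi_min=0.2, sar_max=-15.0):
    """Joint water detection: NDWI = (green - nir) / (green + nir) flags
    water in optical imagery; low backscatter (dB) flags water in SAR.
    Combining the two mitigates clouds (optical) and radar shadow (SAR)."""
    ndwi = (green - nir) / np.clip(green + nir, 1e-6, None)
    return (ndwi > ndwi_min) | (sar_db < sar_max)
```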
CASE STUDIES
The described technologies and the developed system were tested in 2014-2018 on the Northern Dvina River, from the city of Velikiy Ustyug to the city of Kotlas, Russian Federation. The research area was chosen due to the high frequency of floods with large economic losses. In addition, this area has been studied by hydrologists and has served as a platform for testing hydrodynamic models (Belikov V.V. et al., 2015; Alabyan A.M., Lebedeva S.V., 2018). Experimental studies were conducted on historical data for the period from 1998 to 2017.
In the spring of 2018, testing was carried out in real-time mode, with the results presented to EMERCOM of Russia and local authorities. In the experiments, the STREAM 2D model was used as the hydrodynamic module. It had been tested and showed good results on a number of Russian rivers. This model was previously used for scenario calculations in the key area (for evaluating various flood control measures, including protective dams). It showed high efficiency in reproducing maximum water levels and flooded areas: the difference between the maximum simulated and observed levels at flood peaks for the historical period from 1975 to 2013 did not exceed 30 cm, and the flooded areas differed by no more than 10% (Belikov V.V. et al., 2015; Agafonova S.A. et al., 2017). The water flow formation model ECOMAG (Motovilov Yu., 2013) and ANNs were applied to calculate the flow rates entering the hydrodynamic model. ECOMAG had previously been successfully employed for scenario calculations of current runoff and its dynamics under climate change in the Northern Dvina area (Krylenko I., 2015). In the described forecasting system, the STREAM 2D and ECOMAG models were used in operational mode. During real-time testing, hydrological data entered the system from 12 fixed and 5 temporary gauging stations (some of them, located in the considered flooding simulation area, are shown in Fig. 3). The comparison of the ANN and the simulation model (ECOMAG) shows that the simulation model responded to changes in the flood situation much more slowly. However, it responded more correctly to abnormal situations and more accurately determined long-term trends in their development, because this type of model takes into account the processes of runoff formation over the entire period, starting from autumn freezing, snow accumulation, snowmelt, etc., throughout the whole catchment area. Thus, it can be concluded that ANNs give the best results during "ordinary" ice drift and can forecast long-lasting, inertial changes in the water level with high accuracy. This is due to the large set of test cases for periods of normal ice drift, which made it possible to train the ANN to forecast the water level in such conditions with high accuracy. In turn, the forecast of abrupt changes in ice or meteorological conditions gives a higher error, since there is not enough training data to cover level changes in all possible abnormal situations. On the other hand, such situations arise from causes originating during river freeze-up or in the winter period. The ANN uses only operational data on the current state of the river in its forecast and, with limited training samples, is not able to take these long-term causes into account and assess their impact on the water level. The ECOMAG model, in turn, can constructively consider all these data and forecast possible long-term anomalous situations. However, to verify the results obtained by the ECOMAG model in on-line mode, further development of the forecast adjustment block, based on a comparison of calculated and observed water discharge, is needed. The results of this part of the experimental studies confirmed the need for the integrated use of various models to improve the accuracy of the water level forecasts required for subsequent calculations of flooded areas and their depths.
Therefore, during system operation under normal ice drift, the forecast was made using the ANN. During an abnormal situation caused by ice jams, when data on water levels from hydrological posts did not reflect the actual amount of water in the river, the system switched to the simulation model, which made it possible to forecast the nature of changes in water levels at gauging stations more accurately. After loading the data into the hydrodynamic model, the contours and depths of flooding were calculated every hour for 24 hours ahead. Services interpreting the simulation results then visualized the flooding contours and depths. The modeled contours of water objects and flooded zones, the locations of gauging stations (from which data are loaded automatically), graphs of changes in water levels at gauging stations, and the results of the level-change forecast are displayed in the system interface (Figs. 2-3). In addition, the system can display processed remote sensing data. During the testing, the following remote sensing data were used: Resurs-P and Canopus-V images (Russia), Sentinel-1 and Sentinel-2 (European Space Agency), and RADARSAT-2 (Canada). As part of this study, optical and radar data were processed automatically in order to identify flooded areas. In particular, during the spring flood of 2018, from April 1 to May 16, more than 20 satellite images were loaded into the system. The use of remote sensing data allowed us not only to assess the quality of the simulation, but also to obtain additional information on ice conditions, as well as unique data on specific phenomena caused by local conditions. In addition to the operational mode, the scenario mode of operation was also tested during the experimental testing of the system: the maximum contour of the flood zone reached during the catastrophic flood of 2016 was simulated. RADARSAT-2 data were used for the analysis of the 2016 flood. Processing was performed in accordance with the proposed approach using open-source software: SNAP and QGIS. Pre-processing included radiometric calibration and speckle filtering. At the thematic-processing stage, intensity thresholds for identifying water bodies were calculated and a mask of open water and flooded areas was constructed. Post-processing included the automatic vectorization of the raster data and visualization of the resulting vector layer containing the flooded areas in the system interface. The result of the RADARSAT-2 data processing was compared with the results of the modeling (Fig. 4). Taking into account the spatial resolution of the satellite data and the errors arising from the selected processing technology, a high overlap of the processing results can be achieved in open areas with a low level of urbanization. For accurate detection, the open flooded area must be at least 24 m². Overall, the discrepancy between the hydrodynamic model output and the results of the satellite image processing was 7%. The maximum discrepancy between the simulated and observed water levels at the Velikiy Ustyug observation post was 15 cm. The forecast accuracy, assessed by the infrastructure objects in the flooded area, was at least 90%.
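The thematic-processing step described above, threshold-based detection of open water in calibrated SAR backscatter, can be illustrated with a short sketch. The threshold value, pixel size, and synthetic scene are assumptions; the actual processing was performed in SNAP and QGIS.

```python
# Illustrative sketch of threshold-based water detection on a calibrated,
# speckle-filtered SAR backscatter image (sigma0, dB). All values assumed.
import numpy as np

rng = np.random.default_rng(1)

# Synthetic scene: smooth open water is dark (~ -22 dB), land is brighter.
scene = rng.normal(-8.0, 2.0, (500, 500))
scene[200:350, 100:400] = rng.normal(-22.0, 1.5, (150, 300))  # flooded patch

THRESHOLD_DB = -18.0
water_mask = scene < THRESHOLD_DB          # True where the pixel is open water

# A permanent-water mask (the river channel) would normally come from a
# pre-flood image; here it is assumed empty, so all water counts as flood.
permanent_water = np.zeros_like(water_mask)
flooded = water_mask & ~permanent_water

PIXEL_AREA_M2 = 10 * 10                    # e.g. ~10 m pixels (Sentinel-1 IW)
print(f"Flooded area: {flooded.sum() * PIXEL_AREA_M2 / 1e6:.2f} km^2")
```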
The developed system provides the following operations, which are important for practical application: visualization of infrastructure objects in the flooded area according to the forecast results; preparation of reports on potential damage; and automatic notification of citizens and organizations that own the affected objects.
CONCLUSIONS
The proposed approach of integrated use of GIS, remote sensing data and a set of models, implemented on the basis of service- and event-oriented architectures, has demonstrated its effectiveness in solving the problems of monitoring and operational flood forecasting. The results of system testing show that such an approach fully meets the basic requirements for flood forecasting systems. The proposed system implementation, based on an open software platform, provides easy access to the results of flood monitoring and forecasting. All the complexity associated with the use of heterogeneous, geographically distributed information resources is hidden from the user thanks to the full automation of the computational process. This allows the system to be used by all interested parties, including emergency services, authorities, commercial organizations, and citizens. Further research is aimed at: the development of modeling automation tools for identifying the locations of ice jams and their dynamics, with a corresponding adjustment of the parameters of the water distribution models; and the expansion of the set of hydrological and hydrodynamic models used for the calculations.
"Computer Science"
] |
First global next-to-leading order determination of diffractive parton distribution functions and their uncertainties within the xFitter framework
We present GKG18-DPDFs, a next-to-leading order (NLO) QCD analysis of diffractive parton distribution functions (diffractive PDFs) and their uncertainties. This is the first global set of diffractive PDFs determined within the xFitter framework. The analysis is based on all available and most up-to-date data on inclusive diffractive deep inelastic scattering (diffractive DIS). Heavy quark contributions are treated within the Thorne–Roberts (TR) general mass variable flavor number scheme (GM-VFNS). The inclusion of the high-precision H1/ZEUS combined inclusive diffractive cross section measurements allows us to form a mutually consistent set of diffractive PDFs. We study the impact of the H1/ZEUS combined data by producing a variety of determinations based on reduced data sets, and find that these data have a significant impact on the diffractive PDFs, with some substantial reductions in uncertainties. The predictions based on the extracted diffractive PDFs are compared to the analyzed diffractive DIS data and to other determinations of the diffractive PDFs.
Introduction
High precision calculations of hard scattering cross sections in lepton-hadron deep inelastic scattering (DIS) and hadron-hadron collider experiments can be performed within the framework of perturbative quantum chromodynamics (pQCD). The computation of cross sections relies on the so-called factorization theorem, which allows for a systematic separation of perturbative and nonperturbative physics [1,2]. Examples of the latter in various processes are the well-known parton distribution functions (PDFs) [3][4][5][6][7], nuclear PDFs [8][9][10][11], and polarized PDFs [12][13][14][15][16][17][18], which are rather tightly constrained by global QCD fits to DIS and hadron collider data. In fact, they are crucial inputs for all scattering processes involving hadrons (nucleons and nuclei) in the initial state. In this respect, phenomenological and experimental studies over the past three decades have provided important information on the structure of hadrons. A significant number of PDF sets have been determined using the most precise data from LHC Runs I and II [3,5,7,[19][20][21][22][23][24]. In the literature, the relative importance of LHC data has been the subject of considerable discussion. These new and up-to-date sets of PDFs have played an important role in the search for new physics, for example in the top quark and Higgs boson sectors [3,25].
Diffractive processes, ep → ep′X, where X represents a hadronic final state separated from the recoiled proton by a rapidity gap and the proton in the final state carries most of the beam momentum (see Fig. 1), have been studied extensively in the H1 and ZEUS experiments at the electron-proton (ep) collider HERA [2,[26][27][28][29][30][31]. At HERA, a substantial fraction, up to 10%, of all ep DIS interactions proceeds via the diffractive scattering process initiated by a highly virtual photon. In the framework of the collinear factorization theorem, the theoretical calculation of diffractive cross sections requires a special type of nonperturbative functions as input, so that universal diffractive PDFs may be defined. To be more precise, the factorization theorem predicts that the cross section can be expressed as the convolution of nonperturbative diffractive PDFs and partonic cross sections of the hard subprocess, calculable within the framework of pQCD. Consequently, the dynamics of diffractive processes can be formulated in terms of quark and gluon densities. The diffractive PDFs have properties similar to the PDFs of the free nucleon, but with the constraint of a leading proton, or its low-mass excitations, being present in the final state. Like the PDFs, it is well established that the diffractive PDFs are universal quantities, which can be extracted from diffractive DIS data through global QCD analyses. The knowledge of diffractive PDFs for different hadron species, as well as the estimation of their uncertainties, is therefore vital for precise theoretical and experimental calculations and, hence, has received quite some interest in the past (see, for example, Ref. [32] for a recent review).

Fig. 1. Representative Feynman diagram for the neutral current diffractive DIS process ep → ep′X.
The main source of constraints on the diffractive PDFs is the inclusive diffractive DIS data measured at HERA. Given the diffractive PDFs, perturbative QCD calculations are expected to be applicable to other processes, such as jet and heavy quark production in diffractive DIS at HERA [29][30][31][33][34][35]. Indeed, next-to-leading order (NLO) QCD predictions using diffractive PDFs describe these measurements rather well; a full discussion of diffractive dijet production at HERA will be the main subject of our future work. There are several studies in which the diffractive PDFs have been determined from QCD analyses of diffractive DIS data [27,28,[36][37][38][39][40][41]. In this paper, we present a new set of diffractive PDFs, referred to as GKG18-DPDFs, through a comprehensive NLO QCD analysis. The GKG18-DPDFs are determined using all available and up-to-date diffractive DIS cross section data [42][43][44], including, for the first time, the H1 and ZEUS combined inclusive diffractive cross section measurements [45].
The outline of this paper is as follows: In Sect. 2.1, we briefly present the theoretical formalism adopted for describing diffractive DIS at HERA. After reviewing the QCD factorization theorem in Sect. 2.2, we explain the heavy flavor contributions to the diffractive DIS structure function in Sect. 2.3. The phenomenological framework used in the GKG18-DPDFs global QCD analysis is presented in Sect. 3. This section includes our parametrizations of the diffractive PDFs (Sect. 3.1), a detailed discussion of the different data sets included in the GKG18-DPDFs global fit (Sect. 3.2), and the method of minimization and diffractive PDF uncertainties (Sect. 3.3). In Sect. 4, we present the GKG18-DPDFs results for diffractive PDFs obtained from global fits to the H1 diffractive DIS cross sections [42][43][44] and the H1 and ZEUS combined inclusive diffractive data [45]. In Sect. 4.1, we compare the diffractive PDFs obtained in this work to those previously determined by other groups. Section 4.2 is devoted to comparing the theoretical predictions based on the extracted diffractive PDFs with the analyzed diffractive DIS data. Finally, in Sect. 5, we present our summary and conclusions.
Theoretical framework and assumptions
In the following, we describe the standard theoretical framework adopted for diffractive DIS. Although different theoretical approaches to diffractive processes exist in the literature [46], it is by now well known that the approach in which diffractive DIS is mediated by the exchange of a hard Pomeron and a secondary Reggeon is remarkably successful in describing most diffractive DIS data.
Cross section for diffractive DIS
In order to discuss the cross section for diffractive DIS, one first needs to introduce the kinematic variables. The common variables in any DIS process are as follows: the photon virtuality $Q^2 = -q^2$, where $q = k - k'$ is the difference of the four-momenta of the incoming ($k$) and outgoing ($k'$) leptons; the longitudinal momentum fraction $x = \frac{Q^2}{2P\cdot q}$, where $P$ is the four-momentum of the incoming proton; and the inelasticity $y = \frac{P\cdot q}{P\cdot k}$. The representative Feynman diagram for the neutral current diffractive DIS process ep → ep′X, proceeding via virtual photon exchange, is depicted in Fig. 1. In the case of diffractive DIS, as illustrated in Fig. 1, the additional variables are the squared four-momentum transfer $t = (P - P')^2$, where $P'$ is the four-momentum of the outgoing proton, and the mass $M_X$ of the diffractive final state, which is produced by diffractive dissociation of the exchanged virtual photon. This mass is much smaller than the invariant photon-proton energy and should be considered as a further degree of freedom. It is usually replaced by the light-cone momentum fraction of the diffractive exchange, $\beta = \frac{Q^2}{Q^2 + M_X^2 - t}$. The $t$-integrated differential cross section for the diffractive process ep → ep′X is presented in the form of a diffractive reduced cross section $\sigma_r^{D(3)}(x_{I\!P}, \beta, Q^2)$, where $x_{I\!P} = \frac{(P - P')\cdot q}{P\cdot q}$ is the longitudinal momentum fraction lost by the incoming proton, which is carried away by the diffractive exchange, and $t$ is the four-momentum transfer squared at the proton vertex. Note that the longitudinal momentum fraction $\beta$ of the struck parton with respect to the colourless exchange can also be expressed as $\beta = x/x_{I\!P}$.
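For completeness, in the standard HERA notation the diffractive reduced cross section is defined through

$$
\frac{d^3\sigma^{D}}{dx_{I\!P}\,d\beta\,dQ^2}
= \frac{2\pi\alpha_{\mathrm{em}}^2}{\beta Q^4}\,\Big[1+(1-y)^2\Big]\,
\sigma_r^{D(3)}(x_{I\!P},\beta,Q^2),
\qquad
\sigma_r^{D(3)} = F_2^{D(3)} - \frac{y^2}{1+(1-y)^2}\,F_L^{D(3)} .
$$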
QCD factorization theorem
It has been shown that the diffractive DIS cross sections at HERA [27,28,30] are well interpreted assuming the "proton vertex factorization" approach, which provides a good description of diffractive DIS data in terms of a resolved Pomeron ($I\!P$) [47,48]. Within Regge phenomenology [49], the cross sections of diffractive processes at high energies are described by the exchange of so-called Regge trajectories. The diffractive cross section is dominated by a trajectory usually called the Pomeron, while the subleading Reggeon ($I\!R$) contribution is significant only for $x_{I\!P} > 0.01$. It has been shown that the QCD factorization theorem and the well-known DGLAP parton evolution equations can be applied to describe the dependence of the cross section on $\beta$ and $Q^2$, while a Regge-inspired approach is used to express the dependence on $x_{I\!P}$ and $t$.
In the QCD factorization approach, the diffractive structure functions can be written as a convolution of hard scattering coefficient functions with the diffractive PDFs, where the sum runs over quarks and gluons. Considering the QCD factorization theorem, various hard scattering diffractive processes are calculable by means of diffractive PDFs, such as diffractive jet production in DIS. The concept of QCD hard factorization of the diffractive PDFs, as well as the validity of this assumption, has been theoretically predicted to hold in diffractive DIS processes [1]. We should mention here that hard QCD factorization has been tested at HERA in various diffractive processes. In recent H1 analyses, the validity of hard factorization has been successfully examined for open charm production in photoproduction and DIS with D mesons [29,50] and in diffractive production of dijets in DIS [30,34,35,51]. These studies support the validity of QCD hard scattering factorization in diffractive DIS.
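Schematically, this collinear factorization formula (Eq. (4)) takes the standard form, with $i$ running over quarks, antiquarks and the gluon:

$$
F_{2/L}^{D(3)}(\beta,Q^2;x_{I\!P})
= \sum_{i} \int_{\beta}^{1} \frac{dz}{z}\;
C_{2/L,\,i}\!\left(\frac{\beta}{z},Q^2\right)
f_i^{D}(z,Q^2;x_{I\!P}) .
$$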
Note that in DGLAP NLO QCD global fits, the NLO contributions to the splitting functions governing the evolution of unpolarized nonsinglet and singlet combinations of quark densities are the same as in fully inclusive DIS. Hence, the diffractive parton densities satisfy the same (DGLAP) evolution equations as the usual parton distributions in inclusive DIS [52][53][54]. The Wilson coefficient functions $C_2$ and $C_L$ in Eq. (4) are also the same as in inclusive DIS and calculable in perturbative QCD [55]. The diffractive PDFs $f_i^D(\beta, Q^2; x_{I\!P}, t)$ are universal and nonperturbative quantities, which can be obtained from a QCD fit to the inclusive diffractive data. Note that diffractive PDFs can be defined in terms of matrix elements of quark and gluon operators; the renormalization of divergences at next-to-leading order is carried out similarly to the inclusive case and leads to the DGLAP evolution equations.
In the GKG18-DPDFs analysis, the proton vertex factorization [47] is assumed, in which the $x_{I\!P}$ and $t$ dependences of the diffractive PDFs factorize from the dependences on $\beta$ and $Q^2$. In this framework, the diffractive PDFs can be written in the form of Eq. (5) below, where $f_{i/I\!P}(\beta, Q^2)$ and $f^{I\!R}_{i/I\!R}(\beta, Q^2)$ are the partonic structures of the Pomeron and the Reggeon, respectively. The emission of the Pomeron and the Reggeon from the proton is described by the flux factors $f_{I\!P/p}(x_{I\!P}, t)$ and $f_{I\!R/p}(x_{I\!P}, t)$. A detailed discussion of the parametrization of the diffractive PDFs in Eq. (5) will be presented in a separate section.
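With the quantities defined above, the two-component (Pomeron plus Reggeon) ansatz of Eq. (5) can be written as

$$
f_i^{D}(\beta,Q^2;x_{I\!P},t)
= f_{I\!P/p}(x_{I\!P},t)\, f_{i/I\!P}(\beta,Q^2)
+ f_{I\!R/p}(x_{I\!P},t)\, f^{I\!R}_{i/I\!R}(\beta,Q^2) .
$$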
Heavy flavour contributions to the diffractive DIS structure function
In this section, we discuss a general framework for the inclusion of heavy quark contributions to the diffractive DIS structure functions. The correct treatment of heavy quark flavours in an analysis of diffractive PDFs is essential for precision measurements at DIS colliders as well as for LHC phenomenology. As an example, the cross section for W-boson production at the LHC depends crucially on precise knowledge of the charm quark distribution. A detailed discussion of the impact of the heavy quark mass treatment on the parton distributions, as well as the determination of their uncertainty due to the uncertainty in the heavy quark masses, can be found in Ref. [56]. As in the case of inclusive DIS, the treatment of heavy flavours has an important impact on the diffractive PDFs extracted from the global analysis of diffractive DIS, due to the heavy flavour contribution to the total structure function at small values of z. Recall that there are various schemes that can be used to treat the heavy quark contributions: the so-called variable flavour number scheme (VFNS), the fixed flavour number scheme (FFNS), and the general-mass variable-flavor-number scheme (GM-VFNS).
In the FFNS, appropriate for $Q^2 \lesssim m_c^2, m_b^2$, the massive quarks are regarded as being produced only in the final state and not as partons within the nucleon. Hence, the light up-, down- and strange-quarks are the active partons and the number of flavours is fixed to $n_f = 3$. However, one can also consider the charm or bottom quark as a light quark at high scales. It has been shown that the accuracy of the FFNS becomes increasingly uncertain as $Q^2$ increases above the heavy quark mass threshold $m_H^2$ [57]. In the zero-mass VFNS (ZM-VFNS), the massive quarks behave like massless partons for $Q^2 \gg m_c^2, m_b^2$. The ZM-VFNS misses the $\mathcal{O}(m_H^2/Q^2)$ contributions completely in the perturbative expansion, and hence this scheme is not accurate enough to be used in a QCD analysis. One can also see a discontinuity in the parton distributions and the total structure function at $Q^2 = m_H^2$ in the ZM-VFNS [57]. The GM-VFNS is the appropriate scheme to interpolate between these two regions; it reduces to the FFNS at low $Q^2$ and to the ZM-VFNS as $Q^2 \to \infty$, and hence improves the smoothness of the transition region where the number of active flavours changes by one [57]. Therefore, for a precise analysis of structure functions and other inclusive DIS or hadron collider data, one can use the GM-VFNS, which smoothly connects the two well-defined schemes of ZM-VFNS and FFNS [57]. This scheme is the approach most commonly used in a variety of global fits. In the H1-DPDFs-2006 [27] and ZEUS-DPDFs-2010 [28] diffractive PDF analyses, the heavy quark structure functions were computed using the FFNS and the general-mass variable-flavor-number scheme of Thorne and Roberts (TR GM-VFNS), respectively. Our approach is based on the TR GM-VFNS [5,58,59], which extrapolates smoothly from the FFNS at low $Q^2$ to the ZM-VFNS at high $Q^2$ and produces a good description of the effect of heavy quarks on structure functions over the whole range of $Q^2$.
In our analysis, we follow the MMHT14 PDF analysis and adopt its default values for the heavy quark masses, $m_c = 1.40$ GeV and $m_b = 4.75$ GeV [60]. In Ref. [60], the variation of the MMHT14 PDFs when the heavy quark masses $m_c$ and $m_b$ are varied away from these default values has been investigated, along with the resulting quality of the comparison to the analyzed data. It has been shown that the effects of varying $m_c$ and $m_b$ on the predictions of cross sections for standard processes at the LHC are small, and that the uncertainties on the PDFs due to the variation of the quark masses are not hugely important [60].
The method of diffractive PDFs global QCD analysis
In the following, we present the method of GKG18-DPDFs global QCD analysis. This section also includes our parametrizations of the diffractive PDFs, the detailed discussion of the description of different data sets included in our global fit, and the method of minimization and uncertainties of our resulting diffractive PDFs.
GKG18-DPDFs parametrizations of the diffractive PDFs
As already mentioned, the scale dependence of the distributions $f_{i=q,g}(\beta, Q^2)$ of the quarks and gluons can be obtained by the DGLAP evolution equations, provided the diffractive PDFs are parametrized as functions of $\beta$ at some starting scale $Q_0^2$. In our analysis, the diffractive PDFs are modelled at the starting scale $Q_0^2 = 1.8$ GeV$^2$ (below the charm threshold) in terms of the quark, $z f_q(z, Q_0^2)$, and gluon, $z f_g(z, Q_0^2)$, distributions. Here, $z$ is the longitudinal momentum fraction, with respect to the diffractive exchange, of the struck parton entering the hard subprocess. In the lowest-order quark-parton model process we have $z = \beta$, while the inclusion of higher-order processes leads to $0 < \beta < z$. For the quark distributions we assume that all light-quark and antiquark distributions are equal. The heavy quark distributions $f_{c,b}$ are generated dynamically at scales $Q^2 > m_{c,b}^2$ above the corresponding mass thresholds in the TR GM-VFN scheme.
Due to the significantly smaller amount of inclusive diffractive DIS data compared to the total DIS cross section, we adopt a slightly less flexible, more economical functional form to parametrize the nonperturbative diffractive PDFs at the initial scale $Q_0^2 = 1.8$ GeV$^2$; our standard parametrizations for the quark and gluon diffractive PDFs are given in Eqs. (6) and (7). An additional factor of $e^{-0.001/(1-z)}$ is included to ensure that the distributions vanish for $z \to 1$; the parameters $\gamma_q$ and $\gamma_g$ therefore have the freedom to take negative as well as positive values in the fit. We have verified that Eqs. (6) and (7) nevertheless yield a very satisfactory description of the analyzed diffractive DIS data. We found that the two parameters $\eta_q$ and $\eta_g$ had to be fixed to zero, since the data do not constrain them well enough. These simple functional forms with significantly fewer parameters have the additional benefit of greatly facilitating the fitting procedure.
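A plausible reconstruction of Eqs. (6) and (7), using the parameters named in the text and the standard shape adopted in such fits, is

$$
z f_i(z,Q_0^2) = \alpha_i\, z^{\beta_i}\, (1-z)^{\gamma_i}\,\big(1+\eta_i z\big)\, e^{-\frac{0.001}{1-z}},
\qquad i = q, g,
$$

with $\eta_q = \eta_g = 0$ in the final fits, as stated above.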
The $x_{I\!P}$ and $t$ dependence in Eq. (5) is parametrized by the Pomeron and Reggeon flux factors, where the trajectories are assumed to be linear, $\alpha_{I\!P,I\!R}(t) = \alpha_{I\!P,I\!R}(0) + \alpha'_{I\!P,I\!R}\, t$. The Pomeron and Reggeon intercepts, $\alpha_{I\!P}(0)$ and $\alpha_{I\!R}(0)$, and the normalization of the Reggeon term, $A_{I\!R}$, are free parameters to be extracted from the fit to the data. Note that the value of the normalization parameter $A_{I\!P}$ is absorbed in $\alpha_q$ and $\alpha_g$.
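The flux factors themselves are conventionally taken in the Regge-motivated form of the H1 2006 analysis, which the description above suggests is also used here (the slope parameters $B_{I\!P,I\!R}$ belong to the fixed set quoted from Refs. [26,62]):

$$
f_{I\!P/p}(x_{I\!P},t) = A_{I\!P}\,\frac{e^{B_{I\!P} t}}{x_{I\!P}^{\,2\alpha_{I\!P}(t)-1}},
\qquad
f_{I\!R/p}(x_{I\!P},t) = A_{I\!R}\,\frac{e^{B_{I\!R} t}}{x_{I\!P}^{\,2\alpha_{I\!R}(t)-1}} .
$$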
The Reggeon parton densities $f^{I\!R}_{i/I\!R}(z, Q^2)$ appearing in Eq. (5) are obtained from the GRV parametrization derived from a fit to pion structure function data [61]. The remaining flux-factor parameters are fixed in the GKG18-DPDFs fit to values taken from the experimental measurements of Refs. [26,62]. In total, 9 free parameters are left in the GKG18-DPDFs QCD analysis: the shape parameters of Eqs. (6) and (7) together with $\alpha_{I\!P}(0)$, $\alpha_{I\!R}(0)$ and $A_{I\!R}$.
Diffractive DIS data sets used in GKG18-DPDFs fits
In this section, we present the new experimental data and their treatment in the GKG18-DPDFs diffractive PDF analysis. After reviewing the analyzed data sets, which include the recent H1 and ZEUS combined data, we discuss each of the new data sets in turn. We finally review the way in which the total diffractive DIS data sets are constructed and, in particular, which data and which cuts are included.
A list of all diffractive DIS data points used in the GKG18-DPDFs global analysis is presented in Tables 1 and 2. These tables correspond to our two different scenarios for including the inclusive diffractive DIS data, namely Fit A and Fit B.
For each data set presented in these tables, we provide the corresponding references, the kinematical coverage in $\beta$, $x_{I\!P}$, and $Q^2$, and the number of data points. We strive to include as much of the available diffractive DIS experimental data as possible in our diffractive PDF analysis. However, some cuts have to be applied in order to ensure that only proper data are included in the analysis.
The first data set used in our QCD analysis is the inclusive diffractive DIS data from H1-LRG-11, taken with the H1 detector in the years 2006 and 2007. These data correspond to three different center-of-mass energies of $\sqrt{s} = 225$, 252 and 319 GeV [42,43], with the reduced cross sections measured over the kinematic range reported in Refs. [42,43]. In addition to the H1-LRG-11 data set, we have used for the first time the H1-LRG-12 data, in which the diffractive process ep → eXY with $M_Y < 1.6$ GeV and $|t| < 1$ GeV$^2$ was studied with the H1 experiment at HERA [44]. This high-statistics measurement, covering the data-taking periods 1999-2000 and 2004-2007, has been combined with previously published results [27] and covers the range $3.5 < Q^2 < 1600$ GeV$^2$, $0.0017 \leq \beta \leq 0.8$, and $0.0003 \leq x_{I\!P} \leq 0.03$.
Finally, for the first time, we have used the recent and up-to-date H1/ZEUS combined data set for the reduced diffractive cross sections [45]. This measurement used samples of diffractive DIS ep scattering data at a centre-of-mass energy of $\sqrt{s} = 318$ GeV and combined the previous H1 FPS HERA I [63], H1 FPS HERA II [64], ZEUS LPS 1 [65] and ZEUS LPS 2 [26] data sets. The combined data cover the photon virtuality range $2.5 < Q^2 < 200$ GeV$^2$, the proton fractional momentum loss range $3.5 \times 10^{-4} < x_{I\!P} < 0.09$, the squared four-momentum transfer range $0.09 < |t| < 0.55$ GeV$^2$ at the proton vertex, and $1.8 \times 10^{-3} < \beta < 0.816$.
While all H1-LRG data are given for the range $|t| < 1$ GeV$^2$, the combined H1/ZEUS diffractive DIS data, which are based on proton-tagged samples, are restricted to the range $0.09 < |t| < 0.55$ GeV$^2$, so one needs a global normalization factor between these two measurement regions.
Assuming an exponential $t$ dependence of the inclusive diffractive cross section, the extrapolation from $0.09 < |t| < 0.55$ GeV$^2$ to $|t| < 1$ GeV$^2$ has been performed using the H1 value of the exponential slope parameter, $b \simeq 6$ GeV$^{-2}$ [45,64]. The slope parameter can be extracted from fits to the reduced cross section $x_{I\!P}\,\sigma_r^{D(4)}$. With this choice of constant slope parameter, a good description of the data over the full $x_{I\!P}$, $Q^2$ and $\beta$ range is obtained [63,64].
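As an illustrative check of the size of this correction (the precise factor applied is not quoted in the text), the extrapolation factor for an exponential $t$ dependence $d\sigma/dt \propto e^{-b|t|}$ with $b = 6$ GeV$^{-2}$ is

$$
R = \frac{\displaystyle\int_{0}^{1} e^{-b|t|}\, d|t|}
         {\displaystyle\int_{0.09}^{0.55} e^{-b|t|}\, d|t|}
  = \frac{1 - e^{-b}}{e^{-0.09\,b} - e^{-0.55\,b}}
  \approx \frac{0.998}{0.583 - 0.037} \approx 1.83 .
$$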
In addition to the extrapolation discussed above, distinct methods have been employed by the H1 and ZEUS experiments, and hence the cross sections are not always given with corrections for the proton dissociation background. The different contributions from proton dissociation in the different data sets are accounted for by applying different global factors. Proton dissociation is simulated using an approximate $d\sigma/dM_Y^2 \propto 1/M_Y^2$ dependence [27,41]. The combined H1/ZEUS diffractive DIS data are corrected by a global factor of 1.21 to account for such contributions.
It should be noted that the two data normalization factors described above introduce a small systematic uncertainty into the fitted data. However, since the extrapolation in $|t|$ is rather modest, the slope parameter $b$ is experimentally determined to better than 10% accuracy [63], and the factor due to proton dissociation is rather well constrained both phenomenologically and experimentally, this uncertainty is at the level of a few percent. Hence, it can be safely neglected compared to the total experimental error of the H1/ZEUS combined data [45]. To determine our diffractive PDFs, we apply a cut of $\beta \leq 0.80$ to the data sets. Data with $M_X > 2$ GeV are included in the fit, while data with $Q^2 < Q^2_{\rm min}$ are excluded to avoid regions that are most likely to be influenced by higher twist (HT) corrections or other problems with the chosen theoretical framework.
To ensure the validity of the DGLAP evolution equations, we have to impose certain cuts on the above-mentioned data sets. In order to finalize the cut on $Q^2$, the sensitivity of $\chi^2$ to variations in $Q^2 > Q^2_{\rm min}$ is investigated for the data used in the analysis. Based on these $\chi^2$ scans, our full diffractive PDF fits are repeated for each different $Q^2 > Q^2_{\rm min}$ cut. In Fig. 2, the dependence of $\chi^2$ per degree of freedom, $\chi^2/{\rm dof}$, on the minimum $Q^2$ cut is presented as a function of $Q^2_{\rm min}$ for all inclusive diffractive DIS data sets used in GKG18-DPDFs (see Table 1). The plot shows that no further improvement in $\chi^2/{\rm dof}$ can be expected for values larger than $Q^2_{\rm min} = 9$ GeV$^2$. Therefore, the lowest-$Q^2$ data are omitted from our QCD fit and $Q^2_{\rm min} \geq 9$ GeV$^2$ is applied to the diffractive DIS data sets. We refer to this fit as Fit A.
However, this choice is somewhat different from the cut used in Refs. [27,28] ($Q^2_{\rm min} > 8.5$ GeV$^2$). Since this issue may be related to a possible tension between the H1-LRG-11 and H1-LRG-12 data sets and the H1/ZEUS combined data in the low-$Q^2$ bins, some further investigation is required. To resolve this issue, we also present similar plots for the H1/ZEUS combined data as well as for all H1 LRG data sets. As one can see from the upper panel of Fig. 3, an improvement in the $\chi^2$ per number of data points, $\chi^2/N_{\rm pts}$, can be expected up to a larger value of $Q^2_{\rm min} = 16$ GeV$^2$ for the H1/ZEUS combined data. In Fig. 3, we also show the same plot for the H1 LRG data sets. This plot clearly shows that the appropriate choice for the H1 LRG data sets is $Q^2_{\rm min} > 9$ GeV$^2$, indicating that this choice remains suitable for all data sets except the H1/ZEUS combined data. Hence, we repeated our analysis applying an additional cut of $Q^2_{\rm min} \geq 16$ GeV$^2$ to the H1/ZEUS combined data, while keeping $Q^2_{\rm min} \geq 9$ GeV$^2$ for the H1-LRG-11 and H1-LRG-12 data sets. We refer to this fit as Fit B. The numbers of data points after all cuts for Fit A and Fit B are summarized in Tables 1 and 2, respectively. Note that, since higher twist (HT) contributions can potentially be large in inclusive diffractive DIS [66], the choice of a larger $Q^2_{\rm min}$ also tends to reduce the HT influence.
The method of minimization and diffractive PDF uncertainties
As already discussed, the GKG18-DPDFs diffractive PDFs are determined at NLO in perturbative QCD, and the data used in our fits cover a wide range of $\beta$, $x_{I\!P}$ and $Q^2$ kinematics.
In order to achieve an accurate theoretical description of both the diffractive PDF evolution and the hard scattering cross sections, a well-tested software package is necessary. In the GKG18-DPDFs analysis, we have used xFitter [67], a standard package for performing global QCD analyses of PDFs. The necessary tools for making theoretical predictions of diffractive DIS observables have been implemented in xFitter, allowing one to perform a global analysis of diffractive PDFs as well. For the minimization, the $\chi^2$ definition, and the treatment of experimental uncertainties, we used the methodology implemented in xFitter to determine the unknown parameters of the diffractive PDFs. The QCD fit strategy follows closely the one adopted for the determination of the PDFs in the HERAPDF methodology [68,69]. The QCD predictions for the inclusive diffractive cross section are obtained by solving the DGLAP evolution equations at NLO. As mentioned, the heavy quark coefficient functions are calculated in the TR GM-VFNS [5,58], and the heavy quark masses for charm and beauty are chosen as $m_c = 1.40$ GeV and $m_b = 4.75$ GeV [60]. The strong coupling constant is fixed to $\alpha_s(M_Z^2) = 0.1176$ [70], which is close to the best-fit value of the NNLO MMHT2014 global PDF analysis, $\alpha_s(M_Z^2) = 0.1172 \pm 0.0013$ [71]. The $\chi^2$ function is minimized using the CERN MINUIT package [72]. In the $\chi^2$ minimized during our QCD fits [69], $\mu_i$ is the measured value of the inclusive diffractive cross section at point $i$, and $T_i$ is the corresponding theoretical prediction. The parameters $\delta_{i,\rm stat}$, $\delta_{i,\rm unc}$, and $\gamma_{ij}$ are the relative statistical, uncorrelated systematic, and correlated systematic uncertainties, respectively. The nuisance parameters $b_j$ are associated with the correlated systematics and are determined simultaneously with the unknown parameters $\{\xi_k\}$ of the functional forms of Eqs. (6) and (7). We minimize this $\chi^2$ with respect to the $k = 9$ unknown fit parameters $\{\xi_k\}$ of our diffractive PDFs. Table 3 contains the final results of $\chi^2/N_{\rm pts}$ for our global fits. For each data set, the value of $\chi^2/N_{\rm pts}$ is presented for both Fit A and Fit B; the last row of the table gives the values of $\chi^2/{\rm dof}$. This table illustrates the quality of our NLO QCD fits to the inclusive diffractive cross sections in terms of the individual $\chi^2$ values obtained for each experiment. For Fit A and Fit B, we obtain $\chi^2$ values of 322 and 280 for totals of 289 and 263 data points, respectively. As one can see from this table, the $Q^2_{\rm min} \geq 16$ GeV$^2$ cut on the H1/ZEUS combined data set significantly reduces its $\chi^2/N_{\rm pts}$ from 128/96 to 85/70. Note also that the values of $\chi^2/N_{\rm pts}$ for the H1-LRG-11 data sets at $\sqrt{s} = 225$ and 252 GeV do not change from Fit A to Fit B, and only a very small reduction is observed for the H1-LRG-11 ($\sqrt{s} = 319$ GeV) and H1-LRG-12 data sets. In conclusion, the quality of Fit B is slightly better than that of Fit A, indicating a better description of the inclusive diffractive DIS data; a substantial part of this improvement is driven by the H1/ZEUS combined data.
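In the HERAPDF convention referenced above [69], the minimized function takes a form along the following lines (a reconstruction of that convention; the exact denominator conventions may differ in detail):

$$
\chi^2\big(\{\xi_k\},\{b_j\}\big)
= \sum_i
\frac{\Big[\mu_i - T_i\big(1 - \sum_j \gamma_{ij}\, b_j\big)\Big]^2}
     {\delta_{i,\mathrm{stat}}^2\,\mu_i T_i + \delta_{i,\mathrm{unc}}^2\, T_i^2}
\;+\; \sum_j b_j^2 .
$$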
In order to obtain the uncertainties on the diffractive PDFs, we use the xFitter framework, which includes both the experimental statistical and systematic errors on the data points and their correlations in the definition of the χ 2 function. The uncertainties on the diffractive PDFs as well as the corresponding observables throughout our analysis are computed using the standard "Hessian" error propagation [57,73,74].
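For a generic observable $X$, the standard Hessian master formula used in this kind of error propagation reads, with $S_k^{\pm}$ the PDF eigenvector sets associated with the $k$-th direction in parameter space:

$$
\Delta X = \frac{1}{2}\,
\left[\sum_{k}\Big(X(S_k^{+}) - X(S_k^{-})\Big)^{2}\right]^{1/2} .
$$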
Results and discussions
Key new features of the current NLO diffractive PDF fit, compared to all previous analyses, are the inclusion of all new and up-to-date experimental diffractive DIS data, in particular the H1/ZEUS combined data set [45], and the error analysis of the extracted diffractive PDFs. Since these new data sets may provide additional information on the extracted diffractive PDFs, it is important to study their impact on the diffractive PDFs as well as on their uncertainty bands. The second significant novelty is the first determination of diffractive PDFs within the xFitter framework [67]. The diffractive PDFs in our fits are parameterized at the input scale $Q_0^2 = 1.8$ GeV$^2$ according to Eqs. (6) and (7), which provide considerable flexibility. As mentioned, the available diffractive DIS experimental data are not sufficient to constrain all parameters of such a flexible parameterization. However, thanks to the more precise H1/ZEUS combined data, enhanced flexibility may be possible for the quark and gluon parameterizations compared to the H1-2006 and ZEUS-2010 fits. We investigated Eqs. (6) and (7) in our analysis and found that relaxing $\eta_g$ and $\eta_q$ does not cause significant changes to the fit results; therefore, in both our Fit A and Fit B QCD analyses, we set these parameters to zero. The details of the fits are summarized in Table 4, which shows the best-fit values of the free parameters; the values of the fixed parameters $\alpha_s(M_Z^2)$, $m_c$ and $m_b$ for our Fit A and Fit B analyses are also listed there.
The total quark singlet density, $z\Sigma(z, Q_0^2) = \sum_{q=u,d,s} z\,[q(z, Q_0^2) + \bar q(z, Q_0^2)]$, and the gluon density, $zg(z, Q_0^2)$, obtained from our two fits are compared in Fig. 4. In the case of the gluon distribution (right panel), the differences between the two analyses are noticeable over almost the entire kinematic range of $z$. This result can be considered as evidence for a possible tension involving the low-$Q^2$ data points of the H1/ZEUS combined data; note that Fit A contains more low-$Q^2$ points of the H1/ZEUS combined data than Fit B. Overall, Fit B can be considered the more conservative analysis, because the tension between these data sets has been reduced as much as possible by imposing a more restrictive cut on the H1/ZEUS combined data.
Finally, the ratios of the Fit B to the Fit A distributions are also shown in Fig. 4. As illustrated in this figure, in view of the uncertainties of the obtained diffractive PDFs, there is no significant difference between Fit A and Fit B. Consequently, imposing a more restrictive cut on the H1/ZEUS combined data has only a slight impact on the central values of the diffractive PDFs, and does not reduce their uncertainties. However, from the obtained $\chi^2/{\rm dof}$, one can conclude that the GKG18 predictions describe these data very well, particularly for Fit B.
In summary, despite slightly different central values, Fit A and Fit B have overlapping uncertainty bands and hence are compatible. The difference comes from the inclusion of the lower-$Q^2$ region of the combined H1/ZEUS data and thus reflects the overall compatibility of the data sets used. It is in turn related to a few-percent systematic uncertainty in the relative normalization of the data sets, as discussed above.
The uncertainties on the diffractive PDFs need to be improved in the future for very high precision predictions at present and future hadron colliders. Like the total DIS cross section, the diffractive DIS cross section is directly sensitive to the diffractive quark density, whilst the gluon density is only indirectly constrained through scaling violations. Since gluons contribute directly to jet production through the boson-gluon fusion process [34,35,50,51,75], one can use measurements of dijet production in diffractive DIS to further constrain the diffractive gluon PDF. As an example of the inclusion of dijet production data in a QCD analysis of diffractive PDFs, one can refer to the ZEUS analysis [28].
Q 2 evolution and comparison to other diffractive PDFs
Having the optimised values of the free parameters, we study next the shape and behaviour of GKG18-DPDFs diffractive PDFs extracted from Fit A and Fit B analyses with an increase of Q 2 and also compare our results with those of other collaborations, in particular with the ZEUS-2010 Fit SJ and H1-2006 Fit B parton sets.
In order to study the scale dependence of the diffractive PDFs, in Fig. 5 we show the total quark singlet $z\Sigma(z, Q^2)$ and gluon $zg(z, Q^2)$ densities with their uncertainties at selected values of $Q^2 = 6$, 20 and 200 GeV$^2$. These plots also contain the corresponding results of two previous analyses of diffractive PDFs from the H1 [27] and ZEUS [28] Collaborations. Note that for the H1 analysis we have used the result of their H1-2006 Fit B, while for the ZEUS analysis their standard ZEUS-2010 Fit SJ has been considered for comparison.
As can be seen from Fig. 5, due to evolution effects, both the quark singlet and gluon distributions undergo an enhancement at low values of $z$, while at large values of $z$ the diffractive PDFs decrease with increasing $Q^2$. For the gluon distributions (left panels), the results of our Fit A and Fit B are in good agreement with the ZEUS-2010 Fit SJ analysis. However, there are some deviations between our results and the H1 ones, especially at smaller and larger values of $z$. To summarize, the agreement between our results for the gluon diffractive PDFs and ZEUS-2010 Fit SJ is somewhat better than with H1-2006 Fit B. The discrepancy with the H1 fit can be directly attributed to the inclusion of the H1-LRG-12 and H1/ZEUS combined data sets, which were not used in the H1 analysis. For the quark singlet distributions, the central values of H1-2006 Fit B and ZEUS-2010 Fit SJ lie inside the error bands of our Fit A and Fit B total quark singlet distributions. Overall, we have obtained a singlet distribution comparable to those of the other groups. According to these results, one can conclude that the main impact of the new data sets on the extracted diffractive PDFs is on the behavior of the quark diffractive PDFs.
We conclude this section by presenting the heavy quark diffractive PDFs determined in this analysis in the TR GM-VFNS. In Fig. 6, the charm $z(c + \bar c)(z, Q^2)$ (left) and bottom $z(b + \bar b)(z, Q^2)$ (right) quark diffractive PDFs obtained from our NLO QCD fits are shown at the selected values of $Q^2 = 60$ and 200 GeV$^2$. The error bands correspond to the fit uncertainties derived from the experimental input only. The results from ZEUS-2010 Fit SJ are also presented for comparison. As one can see from these plots, only insignificant differences between our results and ZEUS-2010 Fit SJ are found for the heavy quark diffractive PDFs at low values of $z$, $z < 0.01$.
Comparison to the diffractive DIS data
This section presents a detailed comparison of the theoretical predictions based on our diffractive PDFs extracted from the Fit A and Fit B analyses with the experimental data used in these analyses. Note that, for all figures, the error bars shown on the experimental data points correspond to the statistical and systematic errors added in quadrature. The data points excluded from the analysis with $Q^2 \leq Q^2_{\rm min} = 9$ GeV$^2$, due to the cuts described in Sect. 3.2, are not shown in the figures of this section. In addition, the HERA combined data are corrected by a global factor of 1.21 to account for the contributions of proton dissociation processes, as described in Sect. 3.2. As discussed in Sect. 3.2, while all H1-LRG data sets are given for the range $|t| < 1$ GeV$^2$, the combined H1/ZEUS diffractive DIS data are restricted to the $0.09 < |t| < 0.55$ GeV$^2$ range; hence, all the combined H1/ZEUS diffractive DIS data sets are corrected by a global normalization factor to extrapolate from $0.09 < |t| < 0.55$ GeV$^2$ to $|t| < 1$ GeV$^2$.
In the following, using the results of Fit A and Fit B, we compare the reduced diffractive cross section $x_{I\!P}\,\sigma_r^{D(3)}$ with the analyzed data sets. For example, Fig. 11 shows the results of our NLO pQCD fit based on Fit B for $x_{I\!P}\,\sigma_r^{D(3)}$ as a function of $\beta$ for $x_{I\!P} = 0.03$, in comparison with the H1-LRG-2012 data [44]. In the case of the H1-LRG-2011 data [42,43], we present in Fig. 14 the reduced cross section as a function of $\beta$ for $x_{I\!P} = 0.003$ and $Q^2 = 11.5$ GeV$^2$, in comparison with the H1-LRG-2011 data at $\sqrt{s} = 225$ GeV (left) and 319 GeV (right). The error bars on the data points and the yellow bands represent the uncorrelated uncertainties and the total uncorrelated and correlated uncertainties, respectively. As can be seen, in the kinematics considered, the theory is again in good agreement with the experiment. From the results presented in this section, one can conclude that our NLO QCD predictions, based on the DGLAP approach and using diffractive PDFs extracted from our QCD analysis of inclusive diffractive DIS data, describe all analyzed data well.
Summary and conclusions
In this paper, we have presented GKG18-DPDFs, the first global QCD analysis of diffractive PDFs that makes use of the H1/ZEUS combined and the most recent H1 data sets on the reduced cross section of inclusive diffractive DIS. Previous determinations of nonperturbative diffractive PDFs in the parton model of QCD [27,28,41] were based on older diffractive inclusive DIS data from the H1 and ZEUS collaborations. The advent of precise data from the H1 [42][43][44] and H1/ZEUS combined [45] data sets, as well as the widely used xFitter package, offered us the opportunity to obtain a new set of diffractive PDFs. The TR GM-VFNS provides a rigorous theoretical framework for treating the heavy-quark contributions. We study the impact of the new inclusive diffractive DIS data sets by producing two diffractive PDF sets based on two different scenarios: first, applying the $Q^2_{\rm min} = 9$ GeV$^2$ cut simultaneously to all analyzed diffractive DIS data sets, and second, removing the H1/ZEUS combined data with $Q^2 < 16$ GeV$^2$ in order to investigate a possible tension between these data sets at small values of $Q^2$. In order to validate the efficiency and emphasize the phenomenological impact of this selection, the differences between these two diffractive PDF sets are presented and discussed. We find that both of our diffractive PDF determinations are in very good agreement with the results in the literature for the total quark singlet densities.
We also find differences between our results and the H1-2006 DPDF fit for the gluon density, while there is much better agreement between GKG18 and ZEUS-2010 for the gluon density. For the charm and bottom quark densities, there are only insignificant discrepancies between the GKG18-DPDFs results and ZEUS-2010 at small values of $z$, $z < 0.01$. Our theory predictions for the reduced diffractive cross section, based on the determined diffractive PDFs, are also in satisfactory agreement with the analyzed data sets as well as with the previous H1 data sets. The most significant changes are seen for the heavy quark densities at small values of $z$ and in the increased precision of the gluon diffractive PDF determination, due to the inclusion of the new precise data. In the future, our main aim is to include the very recent diffractive dijet production data, which could provide an additional constraint on the determination of the diffractive gluon density.
A FORTRAN subroutine, which evaluates the leading order (LO) and NLO diffractive PDFs presented here for given values of $\beta$, $x_{I\!P}$ and $Q^2$, can be obtained from the authors upon request via electronic mail.
"Physics"
] |
Investigation of Internal Classification in Coarse Particle Flotation of Chalcopyrite Using the CoarseAIR™
This work introduces the CoarseAIR™, a novel system utilizing a three-phase fluidized bed and a system of inclined channels to facilitate coarse particle flotation and internal size classification. Internal classification in the CoarseAIR™ was investigated in a series of continuous steady-state experiments at different inclined channel spacings. For each experimental series, a low-grade chalcopyrite ore was milled to a top size of 0.53 mm and methodically prepared to generate a consistent feed. The air rate to the system was adjusted to determine the impact of the gas flux on coarse particle flotation and overall system performance, with a focus on maximizing both copper recovery and coarse gangue rejection. A new feed preparation protocol led to low variability in the state of the feed, and in turn strong closure in the material balance; hence, clear conclusions could be drawn from the high-quality datasets. Inclined channel spacings of z = 6 and z = 9 mm were used. The z = 9 mm spacing produced more favourable copper recovery and gangue rejection. Higher gas fluxes of 0.30 to 0.45 cm/s had a measurable, adverse effect on the recovery of the coarser hydrophobic particles, while the gas flux of 0.15 cm/s delivered the best performance. Here, the cumulative recovery was 90% and the mass rejection was 60% at 0.50 mm, while the +0.090 mm recovery was 83% with a gangue rejection of 85%. The system displayed robust performance across all conditions investigated.
Introduction
Froth flotation is arguably one of the greatest innovations of the 20th century [1]. Challenges remain, however, especially in recovering ultrafine and coarse particles [2]. At the ultrafine sizes below about 20 µm [3], viscous lubrication forces impede the particle-bubble collision, preventing adhesion, while at the coarser sizes beyond 100 µm the particles readily adhere, but then detach, especially within the turbulent flow field of a mechanical flotation cell. Coarse particles also exhibit lower levels of surface liberation, further reducing the probability of coarse particle-bubble attachment and hence recovery [4]. Indeed, conventional flotation technologies have proven to be highly inefficient in achieving coarse particle flotation, requiring long residence times, and hence a large footprint, while requiring considerable energy input to maintain their suspension in the flotation cell [5]. An increasing number of studies have shown that the provision of a quiescent, fluidized bed environment provides sufficient opportunity for achieving bubble-particle adhesion while offering protection from particle-bubble detachment forces [6][7][8].
The grades of ore bodies across most commodities are in decline, with chalcopyrite ore grades invariably lower than 1 wt% copper. The hard rock must undergo crushing and grinding to a size sufficient for achieving concentration and recovery. The conventional approach has been almost one-dimensional, literally grinding the entire ore body to a fine grind size.

Figure 1 (caption, partial): (2) inclined channels, (3) vertical fluidized bed column, (4) differential pressure sensor, (5) buffer system to moderate underflow discharge.
It is well-known that the REFLUX™ Classifier is a powerful classifier due to the so-called Boycott effect [15]. Particles segregate onto the upward-facing surfaces of the inclined channels before returning to the lower zone, while the finer particles continue to convey upwards. Here, hydrophobic particles that settle into the lower zone attach to the rising air bubbles, and in turn convey upwards through the system of inclined channels into the overflow. Thus, the hydrophobic particles and the entrained hydrophilic particles report together to the overflow. These can be separated via an inefficient hydrocyclone, allowing the coarse portion to undergo final comminution while the fine portion undergoes flotation. Again, the relatively coarse gangue minerals discharge via the underflow stream.
Clearly, the internal classification of the CoarseAIR™, utilizing a system of inclined channels, represents a significant shift in the system hydrodynamics from that of both the HydroFloat and the Nova Cell. This internal classification replaces the need for an efficient upfront classifier, as required for the HydroFloat, enhancing fine particle gangue rejection directly to the underflow. This device does not seek to produce a concentrate, as occurs in the Nova Cell. Rather, the goal is to ensure the highest possible recovery of hydrophobic particles is achieved, utilizing the entrainment mechanism to deliver very close to 100% capture of the very finest particles. This approach then permits the most effective system of flotation to be applied to the overflow stream. The smaller coarse portion of the overflow is then subjected to further grinding.
The purpose of the present study was to investigate the internal classification of the CoarseAIR™ system in the context of coarse particle flotation. In gravity separation, the inclined channel perpendicular spacing, z, is usually set at 6 mm, but increasingly has been set at 3 mm, and even as narrow as 1.8 mm, to exploit shear-induced inertial lift, and hence upwards transport of lower-density particles. It is known, however, that particle size classification is promoted using wider channels; hence, for the present work the channel spacing was set at 6 mm and 9 mm in separate series of experiments. The effects of the gas flux and hence the bubble-particle transport were also investigated for each of these channel spacings. All the experiments involved a low-grade chalcopyrite feed, freshly ground, and supplied to the CoarseAIR™, with the system operated under continuous steady-state conditions. The system performance was assessed as a function of the particle size in terms of the solids and copper partitioning to the overflow. The goal was to maximize copper recovery and coarse gangue rejection.
Experimental Section
Experiments were conducted using a laboratory-scale CoarseAIR™ system, shown schematically in Figure 1. The system consisted of a vertical fluidized bed section, 1.5 m high with a 0.1 m × 0.1 m cross-sectional area. Fluidization water entered via a plenum chamber together with air, forming a flow of fine bubbles distributed via a series of nozzles located across the distributor. The upper portion of the system consisted of a series of channels, inclined at 70° to the horizontal. The feed entered the system via an inlet 0.2 m below the inclined channels. In these experiments, the feed consisted of a low-grade chalcopyrite ore, nominally finer than 0.50 mm. An autogenous fluidized bed developed in the lower part of the system, while a more dilute zone formed above. The entering feed suspension tended to flow upwards via the system of inclined channels, while coarser particles settled and joined the lower bed. At steady state, a relatively coarse underflow discharged from the system at a rate governed by a peristaltic pump, informed by the suspension density and the level of the fluidized bed. The product overflow stream emerged from the overflow launder.
Feed Preparation
For each experimental program, a 250 kg sample of low-grade chalcopyrite ore with a top size of approximately 20 mm was crushed using a laboratory jaw crusher to a top size of 3 mm. This material was then washed over a Kason vibratory screen to remove fines generated during crushing and so prevent their overgrinding. The oversize material was then ground in a laboratory rod mill until it passed a 0.53 mm screen.
One of the major challenges associated with the experimental investigation of coarse particle flotation is delivering a consistent feed suspension containing a relatively wide size distribution of particles. It is important to recognize that it is impossible to form a homogeneous suspension of particles in a tank, especially for dense particles spanning a size range of several hundred microns. The basic goal of a mixing tank is to ensure all solids are re-suspended off the bottom of the tank. In earlier work, we used a powerful stirrer, baffles, and four impellers on a single shaft. We also used a centrifugal pump to withdraw the slurry from the base of the tank at ~100 L/min to create a flow loop that returned the slurry to a higher level in the tank. Feed to the CoarseAIR™ was then withdrawn from the flow loop using a peristaltic pump. In this earlier work, it was discovered that the feed underwent changes with time due to the cumulative effects of particle segregation within the feed tank and changes in mixing intensity as the feed volume decreased. The variable nature of the feed, covering short and longer time scales, created internal dynamic variations in the underflow and overflow throughout the experiment. These variations made it difficult to draw clear conclusions from the early series of experiments. We believe that these issues have plagued other similar studies in the past.
A new method of feed preparation was therefore introduced to this work. The approach consisted of an initial phase of feed preparation using a relatively large tank. After milling, the ore was transferred to a 1300 L mixing tank and diluted to the desired pulp density. The resultant slurry was then conditioned with the collector, promoter, and the frother. After conditioning, the feed was evenly distributed sequentially into approximately one hundred 20 L buckets, each containing about 12 L of slurry, with pulp density ranging from 20 to 25% solids. This sequence of buckets was then randomized, using a random number generator, before being fed sequentially into a 300 L mixing tank, along with water conditioned with frother to dilute the slurry to the required feed pulp density.
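The bucket randomization lends itself to a short script. The following minimal sketch, with hypothetical bucket identifiers and a fixed seed (both assumptions, not details taken from the study), shows one way such a feed order might be generated and logged.

```python
import random

# Hypothetical identifiers: one entry per 20 L bucket of conditioned slurry.
buckets = [f"bucket_{i:03d}" for i in range(1, 101)]

# A seeded shuffle keeps the randomized feed order reproducible and loggable.
rng = random.Random(42)
rng.shuffle(buckets)

# The shuffled list defines the order in which the buckets would be tipped
# into the 300 L mixing tank over the course of the experiment.
for position, bucket in enumerate(buckets, start=1):
    print(f"{position:3d}: {bucket}")
```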
Continuous Steady-State Operation
As previously noted, it is almost impossible to produce a homogeneous suspension in a mixing tank. It is best to treat the mixing tank as a process vessel that is subject to particle segregation. Then, if there is one input to the tank, and a fixed output, ultimately the inputs equal the outputs. Here, the 300 L stirred tank offered a degree of buffering against naturally occurring discrepancies in the compositions of each added bucket of feed. Buckets of feed were added to the feed tank at a rate sufficient to maintain a consistent level in the tank. Once a sufficient period had passed, typically about an hour (or the addition of 20 buckets of the feed), the output from the tank was deemed to be constant. This consistency of the feed to the CoarseAIR™ is illustrated below in Figure 2, which shows the particle size distribution of three feed samples collected over the course of a 5 h experiment, displaying minimal variation across the samples. It should be appreciated that the maintenance of this level of consistency over such a long period of time is extraordinary and is now the subject of a separate formal study within our group. It is further noted that the delivery of this consistent feed resulted in consistent output streams from the CoarseAIR™, with almost no need for undertaking mass balance reconciliation.
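The consistency check illustrated in Figure 2 reduces to comparing cumulative size distributions across the sampled times. The sketch below uses invented cumulative-passing values purely to show the arithmetic; the actual measured distributions are not reproduced here.

```python
import numpy as np

# Invented cumulative-passing values (%) for three feed samples taken over a
# 5 h run, at a few sieve sizes (mm); illustrative numbers only.
sizes_mm = np.array([0.038, 0.090, 0.180, 0.355, 0.500])
passing = np.array([
    [32.0, 55.0, 74.0, 92.0, 99.5],   # sample 1
    [31.5, 54.2, 74.8, 91.6, 99.4],   # sample 2
    [32.4, 55.6, 73.5, 92.3, 99.6],   # sample 3
])

# Spread between samples at each size: absolute and relative standard deviation.
mean = passing.mean(axis=0)
std = passing.std(axis=0, ddof=1)
for d, m, s in zip(sizes_mm, mean, std):
    print(f"{d:.3f} mm: mean {m:5.1f}% passing, std {s:.2f}% ({100 * s / m:.1f}% relative)")
```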
Once the feed suspension was added to the CoarseAIR™, and the fluidization water and gas flux were applied, the system evolved towards a steady-state separation. The underflow rate was adjusted to establish a constant bed level in the system. It is noted that an underflow buffer, described previously [16], was applied to help moderate the underflow removal. Buffer water was applied at a rate of 1.0 L/min, less than the rate of underflow removal; thus, there was no net buffer water flow into the system. The fluidization rate was adjusted to ensure a satisfactory bed density within the CoarseAIR™. This suspension density was measured using two pressure transducers located 50 mm and 300 mm above the base of the distributor. The fluidization rate produced a density consistent with a volume fraction of solids of order 0.45. Bed height was maintained through adjustment of the underflow rate; however, it is noted that the uniform feed condition resulted in the need for little or no adjustment to the underflow rate at any time.
Coarse Particle Sampling
A screen with a sieve aperture of 0.355 mm was located for set time intervals under the overflow stream to measure the rate of capture of the coarsest overflow particles. These samples provided a sensitive real-time measure of the coarse particle flotation. Careful wet screening of these samples was important to ensure the measured mass was accurate. Changes in the gas flux resulted in changes in the capture rates of these particles.
Steady-State Sampling
Once the system was deemed to have reached steady state, simultaneous samples of the underflow and overflow were taken. Typically, each stream was sampled for a period of 12 min. There was no valve closure; hence, the underflow discharge was very consistent. Once these samples had been taken, the feed to the system was diverted so that a feed sample could also be obtained. Sometimes back-to-back experiments were conducted, greatly reducing the overall run times required to reach steady state. In these experiments, the feed to the CoarseAIR™ was resumed once the feed sample had been taken.
Summary of Experimental Conditions
Experimental parameters for each set of tested conditions are outlined in Table 1. Reagent dosages were consistent throughout the experimental program and consisted of Aero MX and sodium isobutyl xanthate (SIBX) as promoter and collector, respectively. Methyl isobutyl carbinol (MIBC) and Matfroth-50 were used as frothers, with lime used to modify pH. The dosage of reagents and conditioning time during the feed preparation are shown in Table 2.
Results and Discussion
The experimental program was concerned with the internal classification of the particles within the CoarseAIR™. One series of experiments was conducted using an inclined channel spacing of z = 6 mm, and a second with z = 9 mm. Previous work on gravity separation in the REFLUX™ Classifier has been enhanced through use of closely spaced inclined channels with z = 6 mm, and more recently z = 3 and z = 1.8 mm. These closely spaced inclined channels promote inertial lift and hence particle transport of relatively low-density particles to the overflow. The shear-induced inertial lift declines rapidly as the channel spacing increases. Thus, particle size classification is expected to be strongly favoured by the wider channel spacing of z = 9 mm, in turn releasing more of the solids to the underflow.
The work in this paper was focussed on relatively low solids throughputs of ~4 t/(m²·h). The low throughputs permitted a more significant range of experimental conditions to be covered for a given quantity of the prepared feed. The low throughput also helped to build a stronger focus on the internal classification. The plan is to investigate the effects of increasing solids throughputs in future studies. The low throughput also permitted a series of low gas fluxes to be used in an extended series of experiments, with several different gas fluxes introduced as step changes to provide clear evidence on any shift in performance due to the gas flux. In future experiments involving higher throughputs, it will be necessary to establish whether these findings continue to apply.
Error Analysis
In an ideal steady-state process, the amount of mass entering the system equals the amount of mass exiting the system. Similarly, with no chemical reactions or particle size reduction, the mass of copper and total mass within a set particle size interval entering the system must equal the mass in that interval exiting the system. In practice, this condition is rarely met due to systematic and random errors incurred during measurement and sampling.
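The closure condition can be written down in a few lines. The sketch below, using assumed stream rates and assays for a single size interval (not values from the study), computes the solids and copper residuals whose departure from zero quantifies the measurement and sampling errors.

```python
# Illustrative stream data for one size interval (t/h of solids and % Cu).
# At steady state, feed solids and feed copper should equal the sums over
# the overflow and underflow; the residuals quantify measurement error.
feed_mass, feed_cu = 4.00, 0.45        # t/h, % Cu (assumed values)
over_mass, over_cu = 1.65, 1.02
under_mass, under_cu = 2.30, 0.031

solids_residual = feed_mass - (over_mass + under_mass)
cu_residual = feed_mass * feed_cu - (over_mass * over_cu + under_mass * under_cu)

print(f"solids closure error: {solids_residual:+.3f} t/h "
      f"({100 * solids_residual / feed_mass:+.1f}% of feed)")
print(f"copper closure error: {cu_residual:+.4f} (t/h x %Cu units) "
      f"({100 * cu_residual / (feed_mass * feed_cu):+.1f}% of feed Cu)")
```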
The new protocol established to deliver the feed to the CoarseAIR™ led to very consistent results. Table A1 shows the size distributions and assay values of the feed, product, and reject streams from two experiments, comparing the raw data and the corresponding data following mass balance reconciliation [17]. Figure 3 below shows an example of a log-log plot of the raw and balanced grade and size distribution data for Experiment 2-C. Notably, this experiment had the highest relative error between experimental and mass-balanced sizing values of this series of experiments. Even so, only minimal adjustments were required to achieve a mass balance, with a standard deviation in the adjustments of only 5% for the copper assays and 11% for the size distributions. A Monte Carlo simulation technique was applied to these errors in the assay values to determine the corresponding standard deviations in the yields and recoveries by size, and in the cumulative yields and recoveries, for each data set. Error bars shown in the following graphs represent a confidence interval of 95%.
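A minimal sketch of the Monte Carlo step follows, assuming balanced masses and assays for one size interval and the 5% relative assay error quoted above; the two-product recovery is recomputed for each perturbed realization and a 95% interval read from the resulting distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Balanced values for one size interval (illustrative, not the paper's data):
# overflow and underflow masses (t) and copper assays (% Cu).
m_o, a_o = 1.65, 1.02
m_u, a_u = 2.30, 0.031
rel_sd_assay = 0.05   # 5% relative standard deviation on assays, as in the text

n = 20000
a_o_mc = a_o * (1 + rel_sd_assay * rng.standard_normal(n))
a_u_mc = a_u * (1 + rel_sd_assay * rng.standard_normal(n))

# Two-product copper recovery to overflow for each perturbed realization.
recovery = 100 * m_o * a_o_mc / (m_o * a_o_mc + m_u * a_u_mc)

lo, hi = np.percentile(recovery, [2.5, 97.5])
print(f"recovery: {recovery.mean():.2f}% (95% CI {lo:.2f}-{hi:.2f}%)")
```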
Particle Classification in the CoarseAIR™

Figure 4 shows the solids partition numbers versus the particle size for the z = 6 mm and the z = 9 mm inclined channels. Four cases are shown covering gas fluxes of (a) 0.075 cm/s, (b) 0.15 cm/s, (c) 0.30 cm/s, and (d) 0.45 cm/s. The graphs show the probability of a particle of a given size reporting to the overflow. The partition number to overflow was close to 1.0 at ultrafine sizes, decreasing rapidly to about 0.5 at particle sizes typically finer than 0.1 mm. In general, the 0.5 partition was reached at a finer size using the channel spacing of z = 9 mm. Thus, the classification was coarser using the 6 mm channels, and in fact became increasingly coarse as the gas flux increased. These results suggest the fluidized sands are more readily transported to the overflow in the 6 mm channels. The clear conclusion, therefore, is that the 9 mm channel spacing is more effective in rejecting the gangue particles. Moreover, the wider channel spacing should accommodate higher solids throughputs.

At coarser sizes, the partition curve does not rapidly drop to zero, but rather declines slowly, maintaining a significant finite portion that reports to the overflow even for sizes as large as 0.60 mm. These particles are too large to be easily entrained so must be reporting to the overflow due to their hydrophobicity, and hence are a manifestation of coarse particle flotation. At low gas fluxes, the hydrophobic coarse particle recovery is marginally higher in the 6 mm channels, but this situation gradually shifts as the gas flux increases, with the coarse particle recovery ultimately higher in the 9 mm channels. Overall, the proportion recovered clearly declines as the particle size increases, reflecting the increased difficulty in recovering coarser particles due to both their larger mass and decreasing surface liberation. Given the improved particle classification achieved in the wider channels, and hence the improved prospects for higher solids throughputs and gas fluxes, the focus of the study moved to the cases involving a channel spacing of z = 9 mm. These findings are extremely valuable moving forward, as higher gas fluxes may prove necessary at higher solids throughputs in future experiments.
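Partition numbers of this kind are computed interval by interval as the fraction of feed solids in each size class reporting to the overflow, and the cut size d50 is read off where the curve crosses 0.5. The sketch below uses invented partition data to show a simple log-interpolated d50 estimate.

```python
import numpy as np

# Illustrative partition data: geometric mean size (mm) of each interval and
# the fraction of feed solids in that interval reporting to the overflow.
size_mm = np.array([0.027, 0.053, 0.106, 0.212, 0.420, 0.600])
partition = np.array([0.99, 0.90, 0.45, 0.20, 0.10, 0.07])

# The cut size d50 is where the partition number crosses 0.5; interpolate
# on a log-size axis, as is conventional for classification curves.
d50 = np.exp(np.interp(0.5, partition[::-1], np.log(size_mm)[::-1]))
print(f"estimated d50: {d50 * 1000:.0f} um")
```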
The Effect of Gas Flux on Coarse Particle Flotation
A 0.355 mm screen was placed under the overflow from the CoarseAIR™ to capture the relatively coarse hydrophobic particles in timed intervals, typically every 5 min. This sampling during the experiment provided a valuable and sensitive measure of the coarse particle flotation and proved effective in assessing the impact of the gas flux on the flotation of coarse particles. The results are shown in Figure 5. The sequence of imposed gas fluxes was deliberately ad hoc to eliminate the effects of bias and any long-term trends. In fact, the sequence returned to the original gas flux of 0.30 cm/s at the end of the experiment.
Recovery of Copper
A comprehensive analysis was conducted on two of the experiments performed using the z = 9 mm channel spacing. The analysis covered the highest performing experiment, involving a gas flux of 0.15 cm/s, and the poorest performing experiment, involving a gas flux of 0.45 cm/s, thus providing the strongest possible contrast. For the experiments conducted at the other gas fluxes, assays were performed on the overflow product and underflow reject streams, and additional analysis was conducted on the assays above and below 0.090 mm. This particle size of 0.090 mm provided a useful basis for differentiating between the coarse particle flotation and the recovery at the finer sizes.
The effect of the gas flux on copper recovery in the overflow product is shown in Figure 6. For the −0.090 mm overflow product, the copper recovery exceeded 99% in every case. This near-complete recovery of the −0.090 mm portion reflects the strong effects of entrainment at ultrafine sizes and ensures the overflow can be confidently processed using fit-for-purpose, highly efficient flotation cells designed to maximize recovery. For the +0.090 mm overflow, the highest recovery was 83% at a gas flux of 0.15 cm/s, corresponding to an overflow mass yield of 14.7% (Figure 7), and hence a gangue rejection of 85.3%. These results help to further confirm that the best overall performance was achieved at a gas flux of 0.15 cm/s, and poorer performance at 0.45 cm/s. It is evident the coarse particle flotation exhibited an intermediate level of performance at a gas flux of 0.30 cm/s, improved performance at the lower gas fluxes of 0.075 and 0.15 cm/s, and relatively poor performance at 0.45 cm/s. A noticeable feature is the sharp increase in the rate of coarse particle flotation soon after reducing the gas flux, evident at the beginning of the 0.075 and 0.15 cm/s increments, and a corresponding dip in the coarse particle flotation rate after increasing the gas flux. The coarse particle flotation rate took at least 20-30 min to recover from the step change in the gas flux.
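The quoted yield, recovery, and gangue rejection figures are linked by simple identities, reproduced in the sketch below for the 0.15 cm/s case. Note two assumptions: treating gangue rejection as 100% minus the mass yield ignores the small copper-mineral mass, and the "separation efficiency" line is a common summary metric added for illustration rather than a quantity reported here.

```python
# Figures quoted in the text for the +0.090 mm fraction at 0.15 cm/s gas flux.
yield_to_overflow = 14.7   # % of +0.090 mm feed mass reporting to the overflow
cu_recovery = 83.0         # % of +0.090 mm feed copper recovered

# Gangue rejection approximated as the mass not reporting to the overflow
# (exact only if the copper-mineral mass in the overflow is negligible).
gangue_rejection = 100.0 - yield_to_overflow
print(f"gangue rejection ~ {gangue_rejection:.1f}%")       # 85.3%

# Recovery minus yield: a simple single-number comparison between conditions.
print(f"separation efficiency ~ {cu_recovery - yield_to_overflow:.1f}%")
```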
It is unclear why there should be such a strong negative effect of increased gas flux on the coarse particle recovery. Perhaps the higher gas flux creates more disruption within the fluidized bed, and hence a less quiescent hydrodynamic state. It is noted that these samples only apply to the tail end of the size distribution where surface liberation is more limited, and hence particle recovery is more sensitive. Increasing bubble hold-up within the fluidized bed might lead to an increase in bubble coalescence, and hence the formation of larger, less effective bubbles rising through the bed. A higher gas flux may also lead to coalescence within the inclined channels and in turn particle detachment. It will be interesting to establish whether these trends continue to hold at higher solids throughputs in future studies.
The partitions-to-overflow product for the best (0.15 cm/s) and poorest (0.45 cm/s) performing gas fluxes are shown in Figure 8, together with the copper recovery as a function of particle size. A summary of mass balance data for these gas fluxes is given in Appendix A. The highest recovery was maintained across the entire size range using the lower gas flux of 0.15 cm/s. Importantly, this higher recovery was achieved at a higher level of coarse particle gangue rejection.
Figure 7. Mass yield-to-overflow product as a function of gas flux for the particles passing through a screen with sieve aperture of 0.090 mm (a) and +0.090 mm particles (b), with the best (green) and poorest (red) performances highlighted. Error bars show 95% confidence.
This overall condition is shown more clearly in Figure 9 in terms of the cumulative yield and the cumulative recovery of copper. At a gas flux of 0.15 cm/s, the cumulative recovery at a particle size of 0.50 mm was 90%, the cumulative yield was 40%, and hence the mass rejection was 60%. Cumulative recovery remained consistently high despite the lower mass yields for the lower gas flux case. The poorest result produced a cumulative recovery of 87.5%, yield of 44%, and hence mass rejection of 56%. This work confirms robust separation performance, with the difference in performance across the work being relatively modest, but nevertheless significant. Clearly, there exists an optimum gas flux, which for this work was 0.15 cm/s. Figure 10 shows the upgrade achieved at both the lower gas flux of 0.15 cm/s and the higher gas flux of 0.45 cm/s. At the finer particle sizes, the strong entrainment led to a relatively low product grade. However, as the particle size increased, the entrainment contribution declined appreciably, leading to higher product grades. At the coarsest sizes, the grade declined again due to the poorer surface liberation. The curves passing through the data are provided to guide the eye. Interestingly, the two cases show a common peak; however, the upgrades fell away more rapidly at both coarser and finer particle sizes for the higher gas flux.

Figure 10. Upgrade, defined as the ratio of product grade to feed grade, as a function of geometric mean particle size for best (green squares) and poorer (red triangles) performing runs. The impact of entrained gangue on the grade of product is shown, where the higher gas flux had a lower associated grade for almost all size intervals.
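Cumulative curves such as those in Figure 9 follow from the interval data by feed-weighted running sums. The sketch below uses invented interval yields, recoveries, and feed deportments (none taken from this study) purely to show the construction, finest size class first.

```python
import numpy as np

# Invented interval data, finest to coarsest size class:
feed_frac = np.array([30.0, 20.0, 20.0, 15.0, 10.0, 5.0])   # % of feed solids
yield_i   = np.array([80.0, 50.0, 30.0, 15.0, 10.0, 8.0])   # % interval yield
cu_frac   = np.array([25.0, 20.0, 20.0, 18.0, 12.0, 5.0])   # % of feed copper
recov_i   = np.array([99.5, 98.0, 95.0, 90.0, 75.0, 50.0])  # % interval recovery

# Cumulative yield and recovery up to each size, weighted by the feed.
cum_yield = np.cumsum(yield_i * feed_frac) / np.cumsum(feed_frac)
cum_recov = np.cumsum(recov_i * cu_frac) / np.cumsum(cu_frac)
print("cumulative yield (%):   ", np.round(cum_yield, 1))
print("cumulative recovery (%):", np.round(cum_recov, 1))
```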
Discussion
Copper recovery at fine particle sizes (below 0.090 mm) for the best gas flux case was 99.3 ± 0.3%, while mass rejection was 21 ± 2.8%. This would seem to indicate that there is room for greater mass rejection from the system with minimal impact on copper recovery. However, it remains to be seen whether this is possible at higher solids throughputs. The advantage here is in the practical implications for downstream processing. The overflow product stream in this case contained 40% of the original feed mass, with 50% passing through a screen of an aperture size of 0.038 mm. Classified at 0.038 mm, the undersize would be suitable for conventional flotation in a suitable cleaner arrangement. The oversize from the classifier would require regrinding for further liberation, but only accounts for 20% of the original feed mass, greatly reducing the comminution load. The rejected portion is not reground, reducing energy and water consumption.
It is clear from this work that the recovery of the copper declines as the particle size increases. The chalcopyrite ore undergoes a process of breakage and comminution. Ideally, this comminution leads to the creation of relatively clean fractures and hence formation of high-grade particles that break away from much lower-grade gangue minerals. It is notable that despite the low recovery at the coarsest sizes, the overall recovery was still very strong, suggesting that, in fact, favourable breakage did occur during comminution. This observation is further supported by the very strong upgrade of ~5.5 observed at the intermediate particle size of 0.30 mm, and much lower upgrade at 0.50 mm. Figure 11 shows images of the overflow product, which displays a strong mineral "lustre", and the underflow reject, which displays a dull tone and typically little or no surface liberation. A detailed appraisal of the mineralogy and fracture mechanics of this ore is beyond the scope of this study. What is of greater interest ultimately is the nature of the particles that were not floated in this study. Many of those particles will contain little or no copper, others will contain some copper locked entirely within the surface, and some will contain remnants of copper at the grain boundaries following fracture, but perhaps little else. A smaller portion will likely contain higher levels of copper at the surface that the process failed to recover. Unlike other separation technologies, coarse particle flotation presents significant challenges to both researchers and to industry in forming a rigorous and objective appraisal of the true separation performance. There are two key issues here; the first concerns the hydrodynamic and physicochemical performance of the separator itself, and the second is the value proposition of the separation for the ore in question. We need to develop rigorous answers to the first of these questions before we can begin to properly address the second question, i.e., the value proposition for the industry, which is best answered by deploying the best separator.
Conventional strategies for assessing the performance of the coarse particle flotation commence with bulk measurement of the copper grade, and then consideration of the mineralogy itself. Polished sections again offer insights into the bulk mineralogy of the particles, but also the prospect for inferring surface properties at the perimeter of the particles. X-ray CT scanning has also been used in recent years. These approaches will be valuable from a fundamental perspective but have so far had limited impact in the field. Mineralogy is important, but this must be married hydrodynamically to the performance of the separator. Our group is currently pursuing such an approach, the aim being to assess the separation performance in a manner that is accessible, rigorous, meaningful, and objective.
Conclusions
This paper is the first report on the CoarseAIR™. The paper has succeeded in establishing a point of reference on the first critical question concerning internal classification, focusing on the inclined channel spacing and the gas flux. Clear conclusions have been formed. On a real chalcopyrite ore, an overall copper recovery of 90% was achieved, corresponding to a solids gangue rejection of 60%. This performance was achieved using the wider channel spacing of z = 9 mm at a gas flux of 0.15 cm/s. The system performance was robust across the range of conditions investigated.
The new feed protocol has generated high-quality data that adhere consistently to material balance requirements over both short and long time scales. The knowledge generated here will inform the next phase of our work. That phase will examine the issue of solids throughput, and the importance of the bed height. If internal classification is to provide value, this will require strong separation performance at much higher solids loadings of between 10 and 20 t/(m²·h). | 9,172.8 | 2022-06-20T00:00:00.000 | [
"Materials Science"
] |
Laser-Induced Breakdown Spectroscopy (LIBS) for Monitoring the Formation of Hydroxyapatite Porous Layers
Laser-induced breakdown spectroscopy (LIBS) is applied to characterize the formation of porous hydroxyapatite layers on the surface of 0.8CaSiO3-0.2Ca3(PO4)2 biocompatible eutectic glass immersed in simulated body fluid (SBF). Compositional and structural characterization analyses were also conducted by field emission scanning electron microscopy (FESEM), energy dispersive X-ray spectroscopy (EDX), and micro-Raman spectroscopy.
Introduction
A new era for tissue engineering has emerged since the discovery of a bioactive glass by Hench et al. in 1970 [1]. In particular, silicon and silicon calcium phosphate materials have attracted scientists' attention for use as scaffolds in orthopaedic, oral, and maxillofacial applications. These materials, during exposure to simulated body fluid (SBF), develop a hydroxyapatite (HA) layer on their surface [2]. This reaction starts on the surface and usually leads to harmful shear stress [3]. To enhance the ingrowth and the bioactivity of the ceramic implant, a suitable interconnected porous structure network is commonly utilized, which also provides a higher bioactivity rate and improves both the anchoring of the prosthesis and the blood and nutrition supply for the ingrowth of the new bone [4][5][6][7].
In this work, we report on the characterization of the hydroxyapatite porous layer developed on the surface of the W-TCP eutectic glass after immersion in SBF, using the laser-induced breakdown spectroscopy (LIBS) technique, which is based on the generation of a micro-plasma and emission spectroscopy measurements. In this technique, a highly energetic laser pulse, used as an atomization and excitation source, is directly focused on the sample surface, and the formed plasma is analysed to obtain the multi-elemental composition of samples. LIBS is a single-step, fast, robust, and stable technique with high spatial resolution, which can be carried out under atmospheric conditions [28]. In addition, sample preparation is not required, thus providing a wide range of advantages when compared to other analytical techniques [29][30][31][32][33]. It is well known that the W-TCP eutectic composite is capable of rearranging its morphology when it is soaked in human parotid saliva (HPS) or SBF, so that the W phase is dissolved and the TCP phase undergoes a pseudomorphic transformation into HA [8,9,23,24]. Hence, the dense W-TCP ceramic is turned into a HA porous layer. The principal aim of this work is to assess the Si content of the sample surface by LIBS analysis, to confirm the absence of this element in the layer generated after the sample is soaked in SBF, and to conduct compositional and structural characterization analyses by field emission scanning electron microscopy (FESEM), energy dispersive X-ray spectroscopy (EDX), and micro-Raman spectroscopy to corroborate the presence of HA.
Sample Fabrication
Eutectic glass samples were manufactured by means of the laser floating zone (LFZ) technique. This technique has been described in detail elsewhere [11,34,35]. For this purpose, tricalcium phosphate and wollastonite powders were mixed in the eutectic composition of 20 mol % Ca3(PO4)2 and 80 mol % CaSiO3. The resulting powders were isostatically pressed at 200 MPa for 2 min to obtain ceramic rods, which were sintered at 1200 °C for 10 h. Samples were grown in air and annealed at 650 °C for 5 h to relieve inner stresses. The development of the HA layer on the surface of the glass samples was carried out by soaking a glass sample, for a one-month period, in SBF prepared according to the standard process [36]. The sample was kept at the human body temperature of 37 °C by means of a Memmert Beschickung-loading-model 100-800 stove (Memmert GmbH, Schwabach, Germany).
Characterization Techniques
LIBS characterization was carried out by means of a Q-switched Nd:YAG laser (Brilliant Quantel, model Ultra CFR, Les Ulis Cedex, France) with emission at 1064 nm, emitting 7.7 ns laser pulses with 50 mJ maximum pulse energy. Plasma emission was collected by using a bifurcated optical fiber (QBIF600-UV-VIS, 600 µm, Premium Bifurcated Fiber, UV-VIS, 2 m, ATO, Largo, FL, USA) adjusted at 45° to the sample surface and connected to a dual-channel Ocean-Optics spectrometer (LIBS 2500plus, Ocean Optics Inc., Dunedin, FL, USA). The laser beam was directly focused on the surface of the samples through a 150 mm focal length lens. In order to avoid detector saturation, the pulse energy and irradiance were set at 30 mJ and 73.5 MW/cm², respectively.
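Assigning observed emission lines amounts to matching local maxima in the recorded spectrum against tabulated wavelengths. The sketch below is a minimal illustration using a hand-picked subset of well-known lines and a synthetic spectrum; it is not the instrument's actual processing chain, and the threshold and tolerance are arbitrary choices.

```python
import numpy as np

# A small illustrative subset of reference wavelengths (nm), not a complete
# NIST line list.
reference_lines = {
    "Si I": [288.16],
    "Ca II": [393.37, 396.85],
    "Ca I": [422.67],
    "Na I": [588.99, 589.59],
    "O I": [777.19],
}

def assign_peaks(wavelengths, intensities, tol_nm=0.3):
    """Return (element, reference line, observed wavelength) for each match."""
    # Simple local-maximum peak picking above a fixed statistical threshold.
    thr = intensities.mean() + 3 * intensities.std()
    hits = []
    for i in range(1, len(intensities) - 1):
        if intensities[i] > thr and intensities[i] >= intensities[i - 1] \
                and intensities[i] >= intensities[i + 1]:
            for element, lines in reference_lines.items():
                for ref in lines:
                    if abs(wavelengths[i] - ref) <= tol_nm:
                        hits.append((element, ref, wavelengths[i]))
    return hits

# Synthetic demo: a flat noisy background with one artificial Ca II peak.
wl = np.linspace(200, 850, 6500)
spec = np.random.default_rng(1).normal(10, 1, wl.size)
spec[np.argmin(np.abs(wl - 393.37))] += 200
print(assign_peaks(wl, spec))
```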
Semi-quantitative compositional analysis and morphology were characterized by means of field emission scanning electron microscopy (FESEM) using a Carl Zeiss MERLIN microscope with an incorporated energy dispersive X-ray detector (EDX) (Carl Zeiss microscopy GmbH, Munich, Germany). X-ray diffraction (XRD) analyses were carried out to determine the amorphous character of glass samples by means of a Bruker D8 Advance diffractometer (Bruker, Billerica, MA, USA). Raman dispersion measurements were performed using a confocal Raman spectrometer (Witec Alpha 300 M+) (Witec, Ulm, Germany) equipped with a thermoelectric-cooled CCD detector. As the excitation source, a 488 nm laser was used and the scattered light was collected through a 50× microscope objective lens. The output power of the laser was kept below 1 mW in order to avoid significant local heating of the sample.

Results and Discussion

Figure 1 shows LIBS spectra recorded in the spectral range of 200-850 nm for both the W-TCP eutectic glass and the layer developed on the sample surface after being soaked in SBF for one month. The LIBS spectrum of HA is also presented for comparison purposes [37]. The spectra show strong characteristic emission lines that can be assigned according to the National Institute of Standards and Technology (NIST). The main atomic emission lines corresponding to Si (I), Ca (I), Ca (II), Mg (II), Na (I), and O (I) are pointed out in the figure and the assigned wavelengths are listed in Table 1. When the dense W-TCP eutectic glass is immersed in SBF, the reaction of the material with the SBF gives rise to a porous layer of HA, which finally covers the surface of the sample. It is well known that for a glass to be bioactive and, hence, to bond to bone, a calcium phosphate layer must form at its surface. The mechanisms of this reaction were proposed by Hench et al.
[1,2], and can be summarized in the following five stages: (i) rapid exchange of alkali or alkali-earth ions with H+ or H3O+ from solution; (ii) loss of soluble silica in the form of Si(OH)4 to the solution; (iii) condensation and repolymerization of a SiO2-rich layer on the surface depleted in alkalis and alkaline-earth cations; (iv) migration of Ca2+ and PO43− groups to the surface through the SiO2-rich layer, forming a CaO-P2O5-rich film on top of the SiO2-rich layer, followed by the growth of the amorphous CaO-P2O5-rich film by incorporation of soluble calcium and phosphorous from solution; and (v) crystallization of the amorphous CaO-P2O5 film by incorporation of OH− anions from solution to form a hydroxyapatite layer.

Next, both the chemical composition and structure of the layer produced on the surface of the sample after a one-month immersion in SBF were investigated by micro-Raman spectroscopy for two wavenumber regions, 50-1200 cm−1 and 3500-3700 cm−1 (Figure 3). The Raman spectrum of standard TCP is also presented for comparison purposes.
The Raman spectra collected were made up of sharp peaks and broad bands which can be assigned to the HA Raman spectra previously reported in the scientific literature [9,26]: a narrow intense peak located at 962 cm−1, corresponding to symmetric stretching of PO43− modes; broad bands at 400-500, 570-625 and 1020-1095 cm−1 attributed, respectively, to ν2, ν4, and ν3 type internal PO43− modes; and a strong sharp peak located at 3576 cm−1 assigned to the O-H stretching mode. It is worth highlighting that the HA Raman spectra show significant variations when compared to the TCP spectra, the most relevant being that the O-H stretching mode does not appear in the TCP spectra. Therefore, micro-Raman analyses carried out on the layer developed on the sample surface confirmed the generation of a HA layer. Finally, microstructural and semi-quantitative chemical composition analyses were carried out by SEM-EDX, aiming at analysing the morphology of the sample, determining the elements both the glass and the layer were comprised of, and the Ca/P ratio of the layer. Figure 4 shows a general view micrograph of the sample soaked in SBF (a) and a detail of the layer (b). SEM observation showed that a new layer was formed on the surface of the samples, which consisted of HA nanocrystals, fibrillar in shape and randomly oriented, thus providing porosity to the new surface. The cracks observed revealed that the coating formed had different properties than the parent glass, as cracks were not present on the starting samples. The EDX analysis shown in Table 2 indicates that the composition of the glass is close to the theoretical value. In addition, these analyses revealed the formation on the surface of the glass of a layer rich in Ca and P, with a Ca/P ratio of about 1.3. It is worth mentioning that these analyses corroborated that Si was not present in the layer. Thus, during immersion, the bioactive glass surface dissolved and a new surface formed by precipitation and transformation reactions, leading to a crystallized, Ca-deficient apatite, similar to bone in its composition.
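The Ca/P ratio follows directly from the EDX atomic percentages; the inputs below are assumed values chosen to reproduce the reported ratio of about 1.3, shown against the stoichiometric hydroxyapatite value for context.

```python
# Ca/P atomic ratio from EDX atomic percentages. Stoichiometric
# hydroxyapatite, Ca10(PO4)6(OH)2, has Ca/P = 10/6 ~ 1.67, so a ratio
# of ~1.3 indicates a Ca-deficient apatite.
ca_at_percent = 17.5   # assumed atomic % Ca from an EDX point analysis
p_at_percent = 13.5    # assumed atomic % P

ca_p = ca_at_percent / p_at_percent
print(f"Ca/P = {ca_p:.2f}")   # ~1.30
```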
Conclusions
W-TCP eutectic glasses were soaked in simulated body fluid for a one-month period of time in which a hydroxyapatite porous layer was developed on the surface. Laser-induced breakdown spectroscopy (LIBS) spectra acquired on the sample surface showed that Si (I) emission lines were not present in the layer developed after the immersion period. Micro-Raman spectroscopy analyses carried out on the surface confirmed the crystalline nature of this layer, the Raman spectra of which corresponded to hydroxyapatite. Finally, SEM-EDX characterization indicated that the layer composition was rich in Ca and P with a Ca/P ratio around 1.3 and at the same time corroborating the Si absence on the layer.
Author Contributions: Daniel Sola led the investigation, manufactured the glasses, analysed the micro-Raman spectra, and edited the manuscript. Daniel Paulés and Jesús Anzano performed the LIBS studies and analysed the LIBS spectra. Lorena Grima performed the in vitro experiments and characterized the samples by SEM-EDX and micro-Raman spectroscopy.
Conflicts of Interest:
The authors declare no conflict of interest.
| 3,784.2 | 2017-12-01T00:00:00.000 | [
"Materials Science",
"Medicine"
] |
SPECIFICS OF SHAFTING ALIGNMENT FOR SHIPS IN SERVICE
Modern ships are means of transport which, during their entire operational lifespan, need to convey cargo and/or passengers in a safe and reliable way, without jeopardising their safety, and with least possible adverse impacts on the marine environment. The ship’s safety and functionality directly depend on the reliability of her propulsion system, the shafting being the essential unit of the system. The functionality of the ship’s shafting considerably depends on its correct installation. Installation of the ship propulsion shafting is an integral part of the overall positioning (alignment) procedure. Shafting alignment is performed in several stages, starting with the shaft line design, and includes calculating the elastic line and bearing loads, installation of shafting parts onboard ship in compliance with the calculation results, and verifying the alignment results. Procedures are different for ships in service and newly built ships. This paper deals with specific features of the propulsion shafting alignment that is carried out while a ship in service is being converted for a general reason. Unlike a newly built ship, an existing ship imposes additional constraints that should be dealt with in the calculation stage of the process as well as during shafting installation and alignment verification. A calculation approach for ships in service is always different, having specific features from case to case, depending on what is changed and what remains unchanged during the conversion of the ship. The same goes for the implementation and verification of the achieved results. The purpose of this paper is to underline the difference, its contribution being in suggesting the procedure to be followed in case of conversion of an existing vessel.
INTRODUCTION
The present world merchant fleet exceeds 96,000 units and 770 million GT (source: Lloyd's Register/Fairplay, World Fleet Statistics, 2006). The fleet is an essential factor in long-distance international transportation of goods and passengers, in particular with regard to the return on investment and the preservation of the environment. The primary task of the merchant fleet is conveying huge amounts of cargo, whether general or in bulk, across oceans, and this directly generates the requirement of high reliability of the ship propulsion plant. From a functional point of view, the ship propulsion shafting is a very important system which, in adverse sailing conditions, may have features of a system essential for the ship's safety. Its reliability depends, inter alia, on its correct positioning onboard ship. The shafting alignment includes the elastic line calculation, onboard installation and assembling of its parts (i.e. individual shafts) in compliance with the calculation results and, finally, the verification of the achieved condition, followed by re-alignment if necessary [1][2]. The calculation determines the position of the stern tube bearings and intermediate bearings, as well as the propulsion engine bearings, in transverse and vertical position relative to the shafting axis. These positions must ensure an acceptable elastic line, as well as proper distribution of bearing reactions and internal forces that the shafting transfers to the propulsion reduction gearbox or to the directly coupled prime mover [3][4][5]. Calculations are used to model the shafting as a line system of girders of varying cross sections on a number of supports [6][7]. Bearings can be modelled with solid supports, linear elastic supports, or using the non-linear model of radial bearings. As the application of the latter two models is rather complex, today's calculations regularly use the solid support model [8]. It is common practice to present the calculation results in the following way: the computer program evaluates the influence coefficients for the designed (solid support) bearing offsets (i.e. the bearing load change for a unit offset of this or some other bearing), bearing reactions, displacements (deflections and slopes of the elastic line), internal forces (bending moments and axial forces), and stresses (due to bending, or equivalent stresses) [9][10]. The shafting alignment verification is performed by measurements which define the deviation of the condition achieved upon the onboard installation from the design condition. The following steps are taken when verifying the shafting alignment on board: measuring GAP and SAG values at the open shaft flange connections, jack-up measurement of bearing reactions, and strain gauge measurement [6,4,11].
When dealing with existing vessels, it frequently occurs that the documentation related to the elastic line or the shafting alignment criteria is not available. It is therefore recommended, prior to the ship's docking, i.e. while she is afloat, to de-couple the flanges of the shafting sections, measure GAP and SAG, and take these values as the reference condition [12]. If the above measurement is repeated or performed with the ship on dock, considerable differences will be observed, as alignment is to be carried out afloat. During the construction of this very ship, the alignment was performed by measuring GAP and SAG values at the open flange connections; upon the installation, the alignment was verified and adjusted by measuring bearing reactions. After an overhaul or inspection, the shaft line sections are coupled again. The once measured SAG and GAP values at the flanges are no longer in accordance with the values defined in the technical documentation (if the latter exists at all), or differ considerably from the values initially measured before docking. The question is whether this new condition, which we either cannot change or do not want to change, except for some minor adjustments, meets the criteria defining the acceptable limits for bearing loads, deflections, and internal forces [6,13].
This paper presents the calculation of the initial bearing offsets related to the measured GAP and SAG at the open flange connections. The alignment verification, based on computational bearing offsets calculated from the SAG/GAP values, will show whether the existing condition is acceptable or not. If the measured condition proves unacceptable, a minimum of modifications (subsequent adjustments and works on board) is sought in order to change any value which may have a favourable impact on the acceptability of the condition.
CALCULATION TERMINOLOGY AND ASSUMPTIONS
The propulsion shaft line, ready for GAP and SAG measurement at the open flange connections, is shown in a general way in Figure 1.
Shafting parts, i.e. separate shafts (propeller shaft, intermediate shafts, and the like), will be called elements. Places where flanges meet will be called nodes. The following assumptions are made: 1. Each shaft with de-coupled flanges represents a statically determinate system (Figure 2) which moves and turns angularly as a solid body. The position of each shaft is determined by two supports.
2. For each shaft, offsets at the ends, in the coordinate system passing through the bearings, i.e. in the local coordinate system x̄, ȳ, z̄, are calculated beforehand.
3. The deflections are small in relation to the distance between supports. Slopes are also small, so we may take that sin α ≈ α and cos α ≈ 1.
4. Differences in the diameters of flanges that are coupled together may be ignored. 5. The deflection and slope of one flange, together with the measured GAP and SAG values, entirely determine the position of the other shaft parts [14].
The assumption (1) is usually met. The calculation in (2) may be performed with the aid of a computer program, as described in [6]. The assumptions (3) and (4) are regularly met, whereas caution is needed when using the assumption (5); if it is not met, a major mistake may develop. The calculation is feasible from the ship propeller to the main engine (in Figure 1, from left to right, common practice in newly built ships) or vice versa (more frequent practice for ships in service) [15].
As a rule, an assembled shafting is a statically indeterminate system, whether the ship is under construction or in service. In the case of newly built ships, the common alignment calculation approach is choosing (defining) the offsets of all bearings, aiming at an unambiguous calculation of the bearing reactions and all other values. These values are then checked to make sure that they meet the criteria. In the case of a ship in service, we start either from the measured SAG/GAP values at the flanges, or from the measured bearing reactions within the assembled shafting, in order to determine the bearing offsets necessary for the subsequent calculation. It should always be borne in mind that neither the measured SAG/GAP values nor the measured bearing force values can determine, in an unambiguous way, the position of the integral shafting as a solid body, e.g. the coordinates of the centre of the aft flange (in Figures 1-2: left) and its angular turn, without deformation, around the centre. The difficulty can be overcome by freely choosing this displacement of the system as a solid body.
FLANGE OFFSETS EXPRESSED BY GAP AND SAG AT FLANGES FOR AN INDIVIDUAL NODE
For an observed node i (Figure 3), the values of SAG_i and GAP_i for the defined flange diameter φ_i are calculated according to the expressions (2.1) and (2.2):

SAG_i = w_{D,i} − w_{L,i+1}   (2.1)

GAP_i = φ_i (α_{L,i+1} − α_{D,i})   (2.2)

where w_{D,i}, α_{D,i} are the deflection and slope of the right flange of element i, and w_{L,i+1}, α_{L,i+1} those of the left flange of element i+1, all expressed in the global system.
The following sign conventions are adopted: SAG_i > 0 if the left flange is closer to the axis x (i.e. higher), and GAP_i > 0 if the rims are closer to the axis x (i.e. if the opening is greater on the bottom side).
Transition from element i to element i+1 (i.e. from left to right) follows by inverting (2.1) and (2.2):

w_{L,i+1} = w_{D,i} − SAG_i,   α_{L,i+1} = α_{D,i} + GAP_i / φ_i
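As an illustration only, the node transition can be written as a small function; the relations follow the reconstructed expressions (2.1)-(2.2) above, and all names (FlangeState, sag, gap, phi) are chosen here for illustration rather than taken from the paper:

from dataclasses import dataclass

@dataclass
class FlangeState:
    w: float      # deflection of the flange in the global system [mm]
    alpha: float  # slope of the flange in the global system [rad]

def cross_node(right_flange: FlangeState, sag: float, gap: float, phi: float) -> FlangeState:
    """Carry deflection/slope across a node, per the reconstructed relations.

    right_flange: state of the right (D) flange of element i
    sag, gap:     measured SAG_i and GAP_i at the open connection [mm]
    phi:          flange diameter [mm]
    Returns the state of the left (L) flange of element i+1.
    """
    w_next = right_flange.w - sag
    alpha_next = right_flange.alpha + gap / phi
    return FlangeState(w=w_next, alpha=alpha_next)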
BEARING OFFSETS EXPRESSED BY FLANGE OFFSETS FOR AN INDIVIDUAL ELEMENT
For individual statically determined shafting elements in the de-coupled condition, we now present the procedure for calculating the bearing displacements, as well as the deflection and slope of the other flange, when the deflection and slope are known for one of the flanges. As we deal here with a statically determined system, the procedure is simple and easy to understand, and requires little explanation.
For an observed element i of the shafting (Figure 4), the following items are calculated: the rigid-body deflection and slope of the shaft around the point L, the bearing displacements [mm], and the flange displacements in the global coordinate system. On the assumption that the displacements are significantly smaller than the distance between the bearings, it is considered that sin β_i ≈ β_i and cos β_i ≈ 1, i.e. the angle β_i between the axes x and x′ is small enough to meet the assumption.
Transition from left to right (from flange L to flange D)
It is defined: the deflection and slope of the shaft around the point L as a solid body,

Δw_i = w_{L,i} − w0_{L,i},   Δα_i = α_{L,i} − α0_{L,i}

and the bearing displacements (in the global system),

w_{A,i} = Δw_i + a_i Δα_i,   w_{B,i} = Δw_i + b_i Δα_i

while the right flange follows as w_{D,i} = w0_{D,i} + Δw_i + l_i Δα_i and α_{D,i} = α0_{D,i} + Δα_i, where w0 and α0 denote the end values pre-calculated in the local system (assumption 2).
Transition from right to left (from flange D to flange L) is defined analogously: starting from the known deflection and slope of the right flange D, the rigid-body translation and rotation of the element are determined first, and the bearing displacements and the left-flange values then follow.
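To make the bookkeeping concrete, here is a minimal sketch of the left-to-right element transfer using the relations as reconstructed above (not the paper's verbatim formulas); the dictionary keys w0_L, a0_L, w0_D, a0_D, a, b, l are illustrative names:

def cross_element(w_L, alpha_L, elem):
    """Rigid-body transfer across one statically determined element.

    elem holds the pre-calculated local end values (w0_L, a0_L, w0_D, a0_D),
    the bearing abscissae a and b, and the element length l, all measured
    from the left end."""
    dw = w_L - elem["w0_L"]          # rigid-body translation [mm]
    da = alpha_L - elem["a0_L"]      # rigid-body rotation [rad]
    w_A = dw + elem["a"] * da        # displacement of bearing A [mm]
    w_B = dw + elem["b"] * da        # displacement of bearing B [mm]
    w_D = elem["w0_D"] + dw + elem["l"] * da   # right-flange deflection [mm]
    alpha_D = elem["a0_D"] + da                # right-flange slope [rad]
    return w_A, w_B, w_D, alpha_D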
DESCRIPTION OF VALUES NEEDED FOR THE ENTIRE CALCULATION
The previously calculated bearing displacements, together with the deflection and slope obtained at the flange where these values were unknown, represent the starting data for the further calculation. In this way it is possible to connect, step by step, the fore and aft parts of the entire shaft line. We should always bear in mind that the flange deflection and slope may be inaccurate; yet, apart from measuring these values, we usually have no other possibility.
The values needed for the entire calculation can be sorted in groups, as follows:
Individual shaft dimensions
For each element (shaft) within the system, we need to know: a) the distance between bearing A and the left end, a_i [mm]; b) the distance between bearing B and the left end, b_i [mm]; c) the length of the element, l_i [mm].
Displacements of the shaft ends in the local coordinate system
For each element (shaft), the following needs to be calculated in the local coordinate system x, y, z, whose axis x passes through the bearings A and B: the deflections w0_{L,i}, w0_{D,i} [mm] and the slopes α0_{L,i}, α0_{D,i} [rad] at the element ends.
Diameter, GAP and SAG at the open flange connections
For each node (the place where two flanges meet), we need to know the flange diameter; it is assumed that the difference between the diameters of the two connecting flanges is negligible. It is also assumed that the GAP and SAG measurements at the flanges have been carried out on board the ship (Figure 5): a) the flange diameter φ_i [mm]; b) GAP_i [mm]; c) SAG_i [mm]. The positive direction of displacements and angular turns is determined by the global right-handed rectangular system x, y, z. Positive GAP and SAG values are shown in Figure 5 [4,11].
SELECTION OF A GLOBAL COORDINATE SYSTEM AND THE CALCULATION OF THE RELATED BEARING OFFSETS
The purpose of this section is to choose the position of the global coordinate system and to determine the bearing positions and offsets within that system, which allows the calculation procedure to be applied to a ship in service in the same way as to a newly built one. If the calculation is performed from left to right, the global axis x is initially set along the local axis x of element 1; the node and bearing offsets of all the elements are then calculated within the global coordinate system, concluding with the right end of the last element (element n). If the calculation is performed from right to left, the global axis x is initially set along the local axis x of element n, and the node and bearing offsets of all the elements are calculated within the global coordinate system, concluding with the left end of element 1.
In order to present the final results (the bearing displacements and, if need be, the flange displacements), we can retain the above-mentioned global system or introduce a new one by selecting two bearings L1 and L2 through which the new global axis x′ passes. For each element, the bearing displacements [mm] and the slopes α_{L,i} [rad] and α_{D,i} [rad] at the ends have already been calculated in the old (initially selected) global coordinate system. The displacements of the selected bearings in the old global system, where the bearing L1 lies to the right of the bearing L2, are w_{L1} [mm] and w_{L2} [mm].
For each bearing, the axial position (abscissa) x_j is calculated in the old global system; the abscissa is valid in the new system as well:

x′_j = x_j   (5.1)

The bearing displacements in the new global coordinate system x′, y′, z′ (Figure 6) are obtained by subtracting the straight line through the two selected bearings:

w′_j = w_j − w_{L1} − (x_j − x_{L1}) (w_{L2} − w_{L1}) / (x_{L2} − x_{L1})   (5.3, 5.4)

Using this simple method, the expressions (5.3) and (5.4) give the displacements of all bearings of the completely assembled shafting on the basis of the SAG/GAP measurements at the open flange connections. These displacements are used in the further calculation by suitable programs, e.g. those in [6-8], designed for calculating the bearing reactions, deformations, and internal forces, which can then be compared with the criteria of acceptability.
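A brief numerical sketch of this re-referencing step, with the straight-line subtraction written out as reconstructed above; the function name, indices and sample values are illustrative only:

def rereference(x, w, j1, j2):
    """Re-express bearing displacements in a new global system whose axis
    x' passes through bearings j1 and j2 (indices into x and w).
    x: bearing abscissae [mm]; w: displacements in the old system [mm]."""
    slope = (w[j2] - w[j1]) / (x[j2] - x[j1])
    return [wi - w[j1] - (xi - x[j1]) * slope for xi, wi in zip(x, w)]

# Example: five bearings; the new axis passes through bearings 0 and 4,
# so their transformed displacements come out exactly zero.
x = [0.0, 2000.0, 5000.0, 9000.0, 12000.0]
w = [0.00, 0.35, 0.80, 0.42, 0.10]
print(rereference(x, w, 0, 4))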
EXAMPLE OF APPLICATION TO A SIMPLE SHAFTING
A simple shafting consists of a propeller shaft and a gear shaft, with a total of five bearings, as shown in Figure 7.
The following values are known in this case: w_L, α_L — the deflection and slope of the propeller shaft flange; SAG, GAP — the sagging and opening at the de-coupled flanges. The values w_L, α_L are determined by a separate calculation for the de-coupled propeller shaft on three supports (its bearings), performed by a suitable program according to [6-8]. The SAG and GAP values are either measured on board before dry-docking, or specified by the design documentation.
We now present the procedure for calculating the bearing offsets w_4 and w_5, as needed for the calculation of the fully assembled shaft line and for the comparison with the criteria of acceptability.
From the expressions for SAG and GAP at the flanges, the deflection and slope of the right (forward) flange, i.e. the propulsion gearbox shaft flange, are determined. The calculation then has to determine the bearing displacements w_4 and w_5. For this purpose we consider the output gearbox shaft being translated by w_4 and subsequently rotated by the angle β around bearing 4 in order to achieve the value w_5 at bearing 5. The displacements w_4 and w_5 are measured in the reference coordinate system selected for the left part of the shafting.
When the bearing displacements are w_4 = w_5 = 0, the deflection at flange D is w_{D0} and its slope is α_{D0}. These are known values, obtained by calculating the statically determined system of the output gearbox shaft on its two bearings. The displacement values w_4 and w_5 are the unknowns.
The geometric relations shown in Figure 8 show that, for displacements that are small in relation to the distance between the bearings, the measured flange values are linear in the unknowns:

w_D = w_{D0} + w_4 + c (w_5 − w_4) / l_{45},   α_D = α_{D0} + (w_5 − w_4) / l_{45}   (6.8)

where l_{45} is the distance between bearings 4 and 5 and c the distance from bearing 4 to the flange D. Solving the equation system (6.8) gives the previously unknown displacements w_4 and w_5. These are then used for the further calculation of the fully assembled shafting as a statically non-determined system with five supports, aided by a suitable computer program, e.g. the one described in [6-8], with the purpose of verifying whether the criteria of acceptability have been met.
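In the form reconstructed above, the system (6.8) solves in closed form; the following sketch uses illustrative parameter names (l45 for the bearing 4-5 span, c for the bearing-4-to-flange distance):

def solve_w4_w5(w_D, a_D, w_D0, a_D0, l45, c):
    """Solve the reconstructed 2x2 system (6.8) for the bearing
    displacements w4 and w5 of the output gearbox shaft."""
    beta = a_D - a_D0            # rigid rotation of the output shaft [rad]
    w4 = w_D - w_D0 - c * beta   # translation at bearing 4 [mm]
    w5 = w4 + l45 * beta         # resulting displacement at bearing 5 [mm]
    return w4, w5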
The specific values obtained for a real-life calculation case, shown in a general way in Figure 7, are presented in Table 1.
It should be noted that the very small values of the deflection w_{D0} and slope α_{D0} of the output shaft indicate its relatively great stiffness, which is common in practice. The final displacement and angular turn values are indeed obtained by translating and rotating the output shaft as a solid body.
DISCUSSION
The procedure of aligning the propulsion shaft line of a ship in service differs considerably from the procedure performed on a newly built ship, in all stages: calculation, assembling, and verification. The only common feature is that in both cases the final verification is to be carried out afloat. Designing the elastic line of the propulsion shafting of a newly built ship is subject only to the constraints resulting from the physics of the procedure itself, described in detail in the available literature [1,3,4,6,7,8,12,13,15]: all bearings must have reactions directed upwards; they must not be overloaded; the shafting must not transfer excessive internal forces (transverse forces and bending moments onto the gearbox flange or the directly coupled engine); the greatest acceptable bearing offsets are limited by the bearing clearance, etc. Varying and selecting certain calculation values so as to meet the criteria and constraints is simple and, as a rule, feasible without difficulty, because the real ship has not been constructed yet: it exists only in its technical documentation (drawings).

The conversion (overhaul) of the propulsion shafting of an actually existing ship in service is always subject to additional constraints, resulting from the decisions regarding what, how, and why a part is to be changed (bearings and/or shafting, and/or the gearbox/engine position due to the installation of a new gearbox/engine), and what is to be left unchanged. This paper proposes a procedure of gradually de-coupling the shaft line flanges and measuring the SAG and GAP values as references for the calculated condition after overhauling, prior to any other works on board, while afloat and prior to dry-docking. If need be, even during the stage when the ship is afloat, before overhauling, the reactions of the accessible bearings can be measured, which may also be useful for the forthcoming calculation, as SAG and GAP may be insufficiently accurate.

Generally speaking, the basic idea is to de-couple the shafting into statically determined elements, take the position of the foremost or aftermost end as a reference and, using the measured SAG and GAP values, determine the bearing positions of all other elements in relation to the initially selected (fore or aft) end of the system. The obtained bearing displacements are entered into a suitable computer program for calculating the shafting alignment (the computer program is not the subject of this paper, as it is described in the papers listed below). Then we find out whether the obtained condition, and the respective measured values, meet the calculation criteria. After that, using the calculation, we adjust the bearing displacements of the other de-coupled statically determined elements, so that the calculation of the fully assembled condition meets all the requirements of the criteria of acceptability.
Then the assembling (connecting) of individual elements must be carried out in compliance with the calculated (not initially measured) SAG/GAP values.
A likely objection that the presented procedure is not accurate enough, being based on SAG and GAP values, can be repudiated, given that de-coupling the shafting and measuring at the open flange connections is often the only feasible procedure in onboard practice: it regularly occurs that some of the bearings are not accessible for direct measurement of their reactions. Upon completion of the conversion works, it is possible to check the reactions at the accessible shafting bearings and adjust their values in compliance with the calculation, by lifting/lowering as described in [6], prior to the final setting of the bearing supports (customised metal supports, or supports cast in resin such as Epocast). Such an approach makes sense, as the elastic line is determined along with all the respective values.
It is essential that the final verification and adjustment of the obtained bearing reactions be conducted afloat, after all overhaul work is done, with the shafting fully assembled and with the height of the bearings at which the reactions will be measured still adjustable.
CONCLUSION
The reliability of shafting, hence the reliability of the very ship as a means of transport and as a complete system, depends considerably on the correct installation of the shafting.
This paper proposes a procedure of gradually de-coupling the shaft line flanges and measuring the SAG and GAP values as references for the calculated condition after overhauling, prior to any other works on board, while afloat and prior to dry-docking.
Given the fact that SAG/GAP measurements are usually not sufficiently accurate, the final assessment of the obtained elastic line is to be performed by direct measurement of bearing reactions at accessible shafting bearings, their comparison with the calculated values, and their adjustment by lifting and lowering certain bearings, in order to achieve the calculated values.
Future efforts should address in more detail the approaches to likely specific real-life cases (replacement of worn-out bearings, replacement of a shafting when cracks are identified, replacement of a propulsion engine/gearbox by one having different dimensions and features, etc.). Owing to the need for generality and the intention to remain focused on a principled approach to all the above cases, the specific cases and approaches have not been discussed here.
Figure 2 - Part of a shaft line as a statically determined system
Figure 4 - Displacements and angular offsets of element i: a) deflection of the element left end w_{L,i} [mm]; b) slope of the element left end α_{L,i} [rad]; c) deflection of the element right end w_{D,i} [mm]; d) slope of the element right end α_{D,i} [rad]
Figure 8 - Displacement and angular turn of the output gearbox shaft | 5,306 | 2009-01-01T00:00:00.000 | ["Engineering"] |
Pillar-Layered Metal-Organic Frameworks for Sensing Specific Amino Acid and Photocatalyzing Rhodamine B Degradation
Metal-organic frameworks (MOFs) have presented potential for the detection of specific species and for catalytic applications due to their diverse framework structures and functionalities. In this work, two novel pillar-layered MOFs [Cd6(DPA)2(NTB)4(H2O)4]n·n(DPA·5DMA·H2O) (1) and [Cu2(DPA)(OBA)2]n·n(2.5DMF·H2O) (2) [DPA = 2,5-di(pyridin-4-yl)aniline, H3NTB = 4,4′,4′′-nitrilotribenzoic acid, H2OBA = 4,4′-oxydibenzoic acid, DMA = N,N-dimethylacetamide, DMF = N,N-dimethylformamide] were successfully synthesized and structurally characterized. Both 1 and 2 have three-dimensional framework structures. The fluorescent property of 1 makes it possible to sense specific amino acids such as L-glutamic acid (Glu) and L-aspartic acid (Asp), while MOF 2 was found to be suitable for the photocatalytic degradation of Rhodamine B (RhB) in the presence of H2O2. The results imply that MOFs are versatile and that metal centers are important in determining their properties.
Introduction
Nowadays, health and environmental issues such as detecting specific species and degrading harmful pollutants attract great attention [1,2]. The detection of certain harmful species, such as nitroaromatic compounds (NACs), ketone molecules, halogenated flame retardants and so on, has been extensively explored; however, studies on sensing biomolecules like amino acids (AAs) are limited [3][4][5][6]. It is known that AAs play key roles in various physiological activities [7,8]. Among them, L-glutamic acid (Glu) and L-aspartic acid (Asp) are important biological neurotransmitters, but may cause undesired side effects when their content exceeds the standard. For example, excessive Glu may lead to mental diseases such as Parkinson's syndrome and to allergic reactions like headache and nausea [9]. Therefore, the accurate detection of these AAs is meaningful for monitoring and diagnosing human health.
Among the reported studies, luminescent metal-organic frameworks (MOFs) have been recognized as efficient and versatile detectors due to their variable responses to analytes, including a change of luminescent color and enhancement or quenching of the fluorescence [10,11]. Besides, the unique porous structure may help adsorb target analytes, which can interact selectively with the framework through the porous skeleton [12,13]. In our previous work, it was found that the amino-functionalized MOF NH2-MIL-101 can be utilized for sensing a specific AA in aqueous media via turn-on fluorescence [14]. In addition to detection, MOFs have also been widely employed in photocatalysis, such as water splitting, CO2 reduction, organic reactions and so on [15][16][17][18][19][20]. The removal of organic pollutants from wastewater is a significant project, with methods such as adsorption and in situ degradation [21][22][23][24][25][26][27]. Among the common pollutants, organic dyes are widely utilized in industrial production; they are used in large quantities and persist strongly in the environment, causing harm to human kidneys and other organs [28]. Rhodamine B (RhB) is one such widely used dye.
Crystal Structure Description of MOF 2
When H 2 OBA was used instead of H 3 NTB, and CuI and KI were added in the reaction, Cu-MOF 2, rather than a Cu-Cd bimetallic MOF, was achieved. 2 crystallizes in orthorhombic space group Pbcn (Table 1). The repeat unit has two Cu(II), one DPA and two OBA 2− (Figure 2a). Each Cu(II) is five-coordinated with four oxygen atoms from four different carboxylate groups of OBA 2− and a nitrogen from DPA. Two Cu(II) and four carboxylate groups of OBA 2− form a [Cu 2 (COO) 4 ] paddle wheel-like SBU, which is extended into a 2D network by the connection of OBA 2− (Figure 2b). The 2D layers are further connected by DPA to form a 3D framework with the pillar-layered structure (Figure 2c). The pore volume in 2 is calculated to be 1259.9 Å 3 (34.7%) by PLATON after removing the solvent molecules. The Brunauer Emmett Teller (BET) surface area of MOF 2 is 136.31 m 2 /g determined by N 2 adsorption data at 77 K ( Figure S3). Considering the SBU as a six-connected node and the ligand as a linear linker, the topology of 2 can be simplified to be {4 4 ·6 10 ·8} with a 1D channel (Figure 2d).
Powder X-ray Diffraction (PXRD) and Thermogravimetric Analyses (TGA)
PXRD data were utilized to confirm the phase purity of the as-synthesized samples 1 and 2. As shown in Figure S4, the characteristic diffraction peaks of the as-synthesized samples are consistent with the simulated ones, implying that the synthesized samples are phase-pure. The thermal stability of the MOFs was estimated by TG measurements under an N2 atmosphere. As shown in Figure S5, a gradual weight loss of ca. 19% was observed for 1 before 350 °C, caused by the release of terminal water and lattice molecules (calcd. 21.4%). The collapse of the framework of 1 starts from 360 °C. As for 2, a weight loss of 18.2% was detected in the range of 25-195 °C, corresponding to the loss of DMF and water molecules (calcd. 18.5%). The framework is maintained until 325 °C.
Stability of MOF 1 in Different Solvent
It is known that the practical application of MOFs depends on their stability [34]. Accordingly, the structural stability of 1 in different solvents was tested by PXRD. As shown in Figure S6, as-synthesized 1 was immersed in various solvents, including water, methanol (MeOH), ethanol (EtOH), acetonitrile, DMF, DMA, dichloromethane (DCM) and isopropanol (IPA). The PXRD patterns were almost unchanged in these media, except for a slight disturbance in MeOH. The high stability of 1 may be ascribed to the two-fold interpenetration [35,36].
Photoluminescence of MOF 1
It has been recognized that MOFs with d10 metal centers and π-conjugated organic ligands may exhibit photoluminescence (PL) [37]. Thus, MOF 1 with 4d10 Cd(II) nodes may show PL, whereas no PL is expected for MOF 2 because of its 3d9 Cu(II) centers; instead, the photocatalytic property of MOF 2 was tested (vide post). The PL of MOF 1, as well as of the H3NTB and DPA ligands, was examined in the solid state at room temperature. As illustrated in Figure S7, the emission of 1 at 487 nm (λex = 383 nm) may mainly arise from the H3NTB ligand, since H3NTB gives an emission at 460 nm (λex = 397 nm). The red shift and enhancement of the emission in 1 are probably caused by the coordination between Cd(II) and NTB3−, which increases the rigidity of the framework [38]. In addition, DPA shows negligible luminescence, which may be caused by intramolecular resonance energy transfer (RET) and an inner filter effect (IFE) due to the presence of the amino group [39,40]. To explore the influence of solvent on the emission of 1 [41], PL spectra of 1 after immersion in various solvents (DMF, EtOH, IPA, DMA, CH3CN, MeOH, toluene, DCM and H2O) were recorded. As shown in Figure S8, 1 exhibits solvent-dependent emission with different intensities and wavelengths.
Fluorescence Sensing Specific AA by 1
The detection of specific AAs is of great significance in nutritional conditioning and disease diagnosis [42]. Water was employed as the detection medium, since AAs generally exist in normal saline. The fluorescence sensing performance of 1 for specific AAs was investigated in aqueous solutions of L-tryptophan (Trp), L-tyrosine (Tyr), L-threonine (Thr), L-isoleucine (Ile), L-phenylalanine (Phe), L-alanine (Ala), L-serine (Ser), L-leucine (Leu), L-proline (Pro), L-histidine (His), glycine (Gly), L-valine (Val), L-methionine (Met), L-lysine (Lys), L-arginine (Arg), L-asparagine (Asn), L-glutamine (Gln), L-cysteine (Cys), Glu and Asp. As shown in Figure S9, obvious quenching was detected in the aqueous solutions of Glu and Asp, implying the sensing capacity of 1 for the specific AAs Glu and Asp. Furthermore, a titration experiment was performed to establish the relationship between the fluorescence intensity and the concentration of the analyte (Figure 3). The linear Stern-Volmer (S-V) equation I0/I = Ksv[Q] + 1 was utilized, where I0 and I are the luminescence intensities before and after adding the analyte, [Q] is the molar concentration of the analyte, and Ksv is the quenching constant. The calculated Ksv values are 8.43 × 10^3 M^-1 for Glu and 9.74 × 10^3 M^-1 for Asp. The detection limits (DL) were determined according to the formula DL = 3σ/Ksv (σ is the standard deviation), giving 4.44 × 10^-5 M for Glu and 1.05 × 10^-4 M for Asp (Table S2).
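As a brief numerical sketch (not the authors' analysis code), the S-V constant and detection limit can be extracted from titration data by a linear least-squares fit; the concentrations, intensity ratios and blank deviation below are placeholders:

import numpy as np

# Hypothetical titration data: quencher concentration [M] and I0/I ratios.
Q = np.array([0.0, 1e-4, 2e-4, 3e-4, 4e-4])
I0_over_I = np.array([1.00, 1.85, 2.70, 3.52, 4.41])

# Linear S-V model: I0/I = Ksv*[Q] + 1, so fit (I0/I - 1) vs [Q] through the origin.
Ksv = np.sum(Q * (I0_over_I - 1)) / np.sum(Q**2)   # quenching constant [M^-1]

sigma = 0.03                # standard deviation of the blank ratio (placeholder)
DL = 3 * sigma / Ksv        # detection limit, DL = 3*sigma/Ksv [M]
print(f"Ksv = {Ksv:.3e} M^-1, DL = {DL:.3e} M")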
In order to clarify the mechanism of the quenching of 1 by Asp and Glu, PXRD and IR spectral measurements were conducted (Figure S10), and the samples before and after the detection were compared. The PXRD patterns and IR spectra are almost identical to the original ones, excluding quenching caused by a collapse of the framework structure. Furthermore, RET and IFE effects were excluded by the lack of overlap between the PL spectrum of 1 (360-650 nm) and the UV absorption of Asp and Glu (205 nm) (Figure S11). Therefore, the fluorescence quenching is considered to proceed in a static mode, supported by the obvious increase of the fluorescence lifetime after detection (Figure S12 and Table S3) [43].
Photocatalytic Degradation of RhB by MOF 2
RhB is commonly used in industry but toxic; thus it is necessary to remove RhB completely from wastewater. In this study, MOF 2 was employed to degrade RhB by photocatalysis with the assistance of H2O2. Firstly, the experimental standard curve (Figure S13) was fitted by the Lambert-Beer law, Abs = KBC, where Abs is the absorbance of the tested mixture, K is the molar absorption coefficient, B is the optical path length through the solution and C is the concentration of RhB in aqueous solution. Then, based on the external standard method, a series of Ct/C0 photodegradation plots, where C0 is the initial concentration of RhB and Ct is the concentration at time t, were obtained for varied reaction times (Figure 4), and the degradation efficiency was calculated by the formula (C0 − Ct)/C0 [44]. A high efficiency of 99% was achieved by the combination of 2 and H2O2, which is satisfactory in comparison with reported results (Figure S14 and Table S4) [45][46][47][48]. Besides, contrast experiments were conducted to find the influential factors. The efficiency declined to 25% in the dark, and the degradation was 13% and 56% when catalyzed by MOF 2 alone and H2O2 alone, respectively. In addition, the concentration and degradation efficiency were examined at varied pH, and it was found that an acidic environment is more suitable for the degradation (Figures S15 and S16), while the lower degradation efficiency in a basic environment may be caused by the decomposition of H2O2 to O2 and H2O [31]. Based on the above results, a remarkable synergistic effect is present, and the synergistic index (SI) was calculated from the photocatalytic degradation kinetics. As shown in Figure 4, the data were fitted well by pseudo-first-order kinetics, ln(C0/Ct) = kt, where k is the kinetic rate constant used to quantify the photocatalytic performance. The constant k is 0.0375 min^-1 in the synergistic system of 2 and H2O2, which is much larger than for the sole catalysts: negligible for 2 alone and 0.007 min^-1 for H2O2 alone. Therefore, the SI was calculated to be ca. 5, according to the formula SI = k(1+2)/(k1 + k2).
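A minimal sketch of this kinetic analysis, assuming pseudo-first-order decay; the Ct/C0 traces below are placeholders chosen to reproduce the order of magnitude of the quoted constants, not the measured data:

import numpy as np

def rate_constant(t_min, C_over_C0):
    """Fit the pseudo-first-order model ln(C0/Ct) = k*t by least squares
    through the origin; returns k in min^-1."""
    t = np.asarray(t_min, dtype=float)
    y = -np.log(np.asarray(C_over_C0, dtype=float))
    return np.sum(t * y) / np.sum(t**2)

# Hypothetical Ct/C0 traces sampled every 30 min over 2 h.
t = [0, 30, 60, 90, 120]
k_both = rate_constant(t, [1.00, 0.32, 0.11, 0.035, 0.011])  # 2 + H2O2
k_h2o2 = rate_constant(t, [1.00, 0.81, 0.66, 0.53, 0.43])    # H2O2 alone
k_mof = 0.0                                                  # negligible for 2 alone

SI = k_both / (k_mof + k_h2o2)   # synergistic index SI = k(1+2)/(k1 + k2)
print(f"k = {k_both:.4f} min^-1, SI = {SI:.1f}")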
In order to analyze the photocatalytic degradation mechanism of RhB over 2, free radical capture experiments were carried out on the photocatalytic degradation process (Figure 5 and Figure S17). IPA, triethanolamine (TEOA) and ascorbic acid were respectively utilized as trapping agents for the hydroxyl radical ·OH, the photogenerated hole (h+) and the superoxide radical ·O2− [49][50][51][52]. Among them, the capture of ·OH by IPA only slightly reduced the degradation efficiency, while obvious inhibition was observed for the hole scavenger TEOA and the superoxide scavenger ascorbic acid, with k = 0.00317 and 0.0005 min^-1, respectively, indicating that h+ and ·O2− play the major roles in the degradation of RhB. The reactive oxygen species (ROSs) produced during the first hour of the degradation process were checked by EPR spectra with the assistance of 5,5-dimethyl-1-pyrroline N-oxide (DMPO) as the spin-trapping agent. The DMPO-·OH adduct was observed with the characteristic 1:2:2:1 intensity pattern (Figure S18), supporting the existence of hydroxyl radicals at the initial stage of degradation [53]. The photocatalytic behavior of MOFs is similar to that of semiconductor materials, in which electrons are transferred between the conduction band (CB) and the valence band (VB). The solid-state UV-vis diffuse reflectance spectrum of 2 shows two absorption bands in the ranges of 200-420 nm and 520-800 nm (Figure S19) [54]. Accordingly, the band gap of 2 was calculated to be 2.66 eV from the Tauc plot [15]. The conduction band potential, which is close to the flat band position (VFB) of 2, was measured by Mott-Schottky experiments at frequencies of 1, 1.5 and 2 kHz. As seen in Figure 6, the slope of the curve is positive, showing that 2 has n-type semiconductor character. Based on these results, the CB of 2 is determined to be −0.56 eV, and the VB potential is therefore calculated to be 2.10 eV from the band gap of 2.66 eV [15]. Furthermore, the EIS of MOF 2 is presented in Figure S20.
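For clarity, the valence band potential quoted above follows directly from the measured conduction band potential and the optical band gap:

$E_{\mathrm{VB}} = E_{\mathrm{CB}} + E_g = -0.56\ \mathrm{eV} + 2.66\ \mathrm{eV} = 2.10\ \mathrm{eV}$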
Based on the above experimental results, a photocatalytic degradation mechanism is proposed (Figure 7). Since MOF 2 behaves as an n-type semiconductor, electrons in the VB are excited to the CB upon visible-light irradiation, forming electron-hole pairs. The photogenerated holes (h+) in 2 are capable of reacting directly with RhB, as indicated by the markedly attenuated efficiency after adding TEOA. As an electron acceptor, H2O2 is activated and produces ·OH (Figure S18), which meanwhile inhibits the recombination of the electron-hole pairs, improving the photocatalytic performance of 2. The resulting hydroxyl radical ·OH can react with excess H2O2 to generate ·O2− [52]. In addition, the CB potential of MOF 2 (−0.56 eV) is more negative than the potential required for reducing O2 to ·O2− (−0.33 eV vs. NHE). Thus, the oxygen present in the solution, or produced by the decomposition of H2O2, can be reduced to ·O2−. Finally, the degradation of RhB occurs through reaction with the effective superoxide radical ·O2−.
Fluorescent Sensing AA by MOF 1
For sensing specific AAs, as-synthesized 1 was dispersed in H2O to give a 0.5 mg mL^-1 aqueous suspension. All emission spectra were recorded in the 350-650 nm range under excitation at 380 nm.
Photocatalyzing Degradation of RhB by MOF 2
MOF 2 (20 mg) and H2O2 solution (400 µL, 30%) were added to an RhB aqueous solution (50 mL, 10 mg/L), and the mixture was pre-treated by stirring in the dark for 30 min. The degradation reaction was conducted under visible light for 2 h, with a 300 W xenon arc lamp with an AM 1.5G filter serving as the light source. Every 30 min, 3 mL of the reaction solution was taken, centrifuged, and its UV absorption spectrum measured.
X-ray Crystallography
Single-crystal X-ray diffraction data were collected on a Bruker D8 Venture diffractometer with graphite-monochromated Mo Kα radiation (λ = 0.71073 Å). The integration of diffraction data and the intensity corrections for the Lorentz and polarization effects were performed by using the SAINT program [56]. Semi-empirical absorption corrections were applied using the SADABS program [57]. The structures were solved by direct methods with SHELXT-2014, expanded by subsequent Fourier-difference synthesis, and all the non-hydrogen atoms were refined anisotropically on F² using the full-matrix least-squares technique with the SHELXL-2018 crystallographic software package [58,59]. Part of the free solvent molecules in 1, and all of those in 2, were taken into account by the SQUEEZE option of the PLATON program [60]. The details of the crystal parameters, data collection and refinements for 1 and 2 are listed in Table 1, and selected bond lengths and angles are given in Table S1. CCDC numbers 2212527 (for 1) and 2212528 (for 2) contain the supplementary crystallographic data for the reported compounds. These data can be obtained free of charge from The Cambridge Crystallographic Data Centre.
Conclusions
In this study, dipyridyl and multicarboxylate ligands were utilized to react with metal salts to generate the 3D MOFs 1 and 2 with pillar-layered structures. The results show that 1 has high stability and presents distinct photoluminescence responses to varied solvents. Furthermore, MOF 1 exhibits potential for sensing specific amino acids such as Glu and Asp through fluorescence quenching in aqueous solution. In addition, MOF 2 has n-type semiconductor character and shows photocatalytic capacity for the degradation of RhB in the presence of H2O2. The results of this study demonstrate the importance of the metal center in determining the properties of the frameworks.
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/molecules27217551/s1, Table S1: Selected bond lengths (Å) and angles (°) for 1 and 2; Table S2: Standard deviation and detection limit calculation of 1 for Glu and Asp in aqueous suspension; Table S3: Fluorescence lifetime of MOF 1 before and after addition of Glu and Asp; Table S4: H2O2-assisted photocatalytic degradation of aqueous RhB by MOF 2 and reported MOFs under visible light; Figure S1: FTIR-ATR spectra of 1 and 2; Figure S2: Coordination modes of NTB3− in 1; Figure S3: The N2 adsorption isotherm of MOF 2 at 77 K; Figure S4: PXRD patterns of 1 and 2; Figure S5: TG curves of 1 and 2; Figure S6: PXRD of 1 after immersion in different solvents; Figure S7: Fluorescence spectra of MOF 1 and its ligands H3NTB and DPA in the solid state; Figure S8: PL spectra of 1 after immersion in varied solvents; Figure S9: Fluorescence quenching effect of 1 upon adding amino acid aqueous solution (300 µL, 2.5 mM, λex = 338 nm); Figure S10: PXRD and FTIR-ATR spectra of MOF 1 before and after detecting Glu and Asp; Figure S11: Fluorescence emission spectra of 1 and UV absorption spectra of Glu and Asp in water; Figure S12: Fluorescence lifetime of 1 before and after detecting Glu (a) and Asp (b) in water; Figure S13: The standard curve of MOF 2 for the UV absorption versus concentration of RhB; Figure S14: The degradation efficiency of RhB at varied reaction time; Figure S15: The concentration of RhB in the degradation process at acidic (pH = 2 and 5) and basic (pH = 8 and 10) conditions; Figure S16: The degradation efficiency of RhB by 2 at acidic (pH = 2 and 5) and basic (pH = 8 and 10) conditions; Figure S17: The concentration of RhB at varied photodegradation time with different radical trapping agents; Figure S18: The EPR spectra of photodegradation of RhB by MOF 2 at different reaction time using DMPO as spin-trapping agent; Figure S19: UV-vis diffuse reflectance spectrum (a) and Tauc | 5,728.8 | 2022-11-01T00:00:00.000 | ["Materials Science"] |
The $e^+ e^-\to K^+ K^- \pi^+\pi^-$, $K^+ K^- \pi^0\pi^0$ and $K^+ K^- K^+ K^-$ Cross Sections Measured with Initial-State Radiation
We study the processes e+e− → K+K−π+π−γ, K+K−π0π0γ and K+K−K+K−γ, where the photon is radiated from the initial state. About 34600, 4400 and 2300 fully reconstructed events, respectively, are selected from 232 fb−1 of BABAR data. The invariant mass of the hadronic final state defines the effective e+e− center-of-mass energy, so that the K+K−π+π−γ data can be compared with direct measurements of the e+e− → K+K−π+π− reaction; no direct measurements exist for the e+e− → K+K−π0π0 or e+e− → K+K−K+K− reactions. Studying the structure of these events, we find contributions from a number of intermediate states, and we extract their cross sections where possible. In particular, we isolate the contribution from e+e− → φ(1020)f0(980) and study its structure near threshold. In the charmonium region, we observe the J/ψ in all three final states and several intermediate states, as well as the ψ(2S) in some modes, and measure the corresponding branching fractions. We see no signal for the Y(4260) and obtain an upper limit of B(Y(4260) → φπ+π−)·Γ(Y)_ee < 0.4 eV at 90% C.L.
I. INTRODUCTION
Electron-positron annihilation at fixed center-of-mass (c.m.) energies has long been a mainstay of research in elementary particle physics. The idea of utilizing initial-state radiation (ISR) to explore e+e− reactions below the nominal c.m. energies was outlined in Ref. [1], and discussed in the context of high-luminosity φ and B factories in Refs. [2][3][4]. At high energies, e+e− annihilation is dominated by quark-level processes producing two or more hadronic jets. However, low-multiplicity exclusive processes dominate at energies below about 2 GeV, and the region near charm threshold, 3.0-4.5 GeV, features a number of resonances [5]. These allow us to probe a wealth of physics parameters, including cross sections, spectroscopy and form factors.
Of particular current interest are the recently observed states in the charmonium region, such as the Y (4260) [6], and a possible discrepancy between the measured value of the anomalous magnetic moment of the muon, g µ − 2, and that predicted by the Standard Model [7]. Charmonium and other states with J P C = 1 −− can be observed as resonances in the cross section, and intermediate states may be present in the hadronic system. Measurements of the decay modes and their branching fractions are important in understanding the nature of these states. For example, the glue-ball model [8] predicts a large branching fraction for Y (4260) into φππ. The prediction for g µ − 2 is based on hadronic-loop corrections measured from low-energy e + e − → hadrons data, and these dominate the uncertainty on the prediction. Improving this prediction requires not only more precise measurements, but also measurements over the entire energy range and inclusion of all the important subprocesses in order to understand possible acceptance effects. ISR events at B factories provide independent and contiguous measurements of hadronic cross sections from the production threshold to about 5 GeV.
The cross section for the radiation of a photon of energy E γ followed by the production of a particular hadronic final state f is related to the corresponding direct e + e − → f cross section σ f (s) by where √ s is the initial e + e − c.m. energy, x = 2E γ / √ s is the fractional energy of the ISR photon and E c.m. ≡ * Deceased † Also with Università di Perugia, Dipartimento di Fisica, Perugia, Italy ‡ Also with Università della Basilicata, Potenza, Italy § Also with IPPP, Physics Department, Durham University, Durham DH1 3LE, United Kingdom s(1 − x) is the effective c.m. energy at which the final state f is produced. The probability density function W (s, x) for ISR photon emission has been calculated with better than 1% precision (see e.g. Ref. [4]). It falls rapidly as E γ increases from zero, but has a long tail, which combines with the increasing σ f (s(1−x)) to produce a sizable cross section at very low E c.m. . The angular distribution of the ISR photon peaks along the beam directions, but 10-15% [4] of the photons are within a typical detector acceptance.
Experimentally, the measured invariant mass of the hadronic final state defines E c.m. . An important feature of ISR data is that a wide range of energies is scanned simultaneously in one experiment, so that no structure is missed and the relative normalization uncertainties in data from different experiments or accelerator parameters are avoided. Furthermore, for large values of x the hadronic system is collimated, reducing acceptance issues and allowing measurements at energies down to production threshold. The mass resolution is not as good as a typical beam energy spread used in direct measurements, but the resolution and absolute energy scale can be monitored by the width and mass of well known resonances, such as the J/ψ produced in the reaction e + e − → J/ψγ. Backgrounds from e + e − → hadrons events at the nominal √ s and from other ISR processes can be suppressed by a combination of particle identification and kinematic fitting techniques. Studies of e + e − → µ + µ − γ and several multi-hadron ISR processes using BABAR data have been reported [9][10][11][12], demonstrating the viability of such measurements.
The K + K − π + π − final state has been measured directly by the DM1 collaboration [13] for √ s < 2.2 GeV, and we have previously published ISR measurements of the K + K − π + π − and K + K − K + K − final states [11] for E c.m. < 4.5 GeV. We recently reported [14] an updated measurement of the K + K − π + π − final state with a larger data sample, along with the first measurement of the K + K − π 0 π 0 final state, in which we observed a structure near threshold in the φf 0 intermediate state. In this paper we present a more detailed study of these two final states along with an updated measurement of the K + K − K + K − final state. In all cases we require detection of the ISR photon and perform a set of kinematic fits. We are able to suppress backgrounds sufficiently to study these final states from their respective production thresholds up to 5 GeV. In addition to measuring the overall cross sections, we study the internal structure of the events and measure cross sections for a number of intermediate states. We study the charmonium region, measure several J/ψ and ψ(2S) branching fractions, and set limits on other states.
II. THE BABAR DETECTOR AND DATASET
The data used in this analysis were collected with the BABAR detector at the PEP-II asymmetric energy e + e − storage rings. The total integrated luminosity used is 232 fb −1 , which includes 211 fb −1 collected at the Υ (4S) peak, √ s = 10.58 GeV, and 21 fb −1 collected below the resonance, at √ s = 10.54 GeV.
The BABAR detector is described elsewhere [15]. Here we use charged particles reconstructed in the tracking system, which comprises the five-layer silicon vertex tracker (SVT) and the 40-layer drift chamber (DCH) in a 1.5 T axial magnetic field. Separation of charged pions, kaons and protons uses a combination of Cherenkov angles measured in the detector of internally reflected Cherenkov light (DIRC) and specific ionization measured in the SVT and DCH. For the present study we use a kaon identification algorithm that provides 90-95% efficiency, depending on momentum, and pion and proton rejection factors in the 20-100 range. Photon and electron energies are measured in the CsI(Tl) electromagnetic calorimeter (EMC). We use muon identification provided by the instrumented flux return (IFR) to select the µ + µ − γ final state.
To study the detector acceptance and efficiency, we use a simulation package developed for radiative processes. The simulation of hadronic final states, including K+K−π+π−γ, K+K−π0π0γ and K+K−K+K−γ, is based on the approach suggested by Czyż and Kühn [16]. Multiple soft-photon emission from the initial-state charged particles is implemented with a structure-function technique [17,18], and photon radiation from the final-state particles is simulated by the PHOTOS package [19]. The accuracy of the radiative corrections is about 1%.
We simulate the K + K − ππ final states both according to phase space and with models that include the φ(1020) → K + K − and/or f 0 (980) → ππ channels, and the K + K − K + K − final state both according to phase space and including the φ → K + K − channel. The generated events go through a detailed detector simulation [20], and we reconstruct them with the same software chain as the experimental data. Variations in detector and background conditions are taken into account.
We also generate a large number of background processes, including the ISR channels e + e − → π + π − π + π − γ and π + π − π 0 π 0 γ, which can contribute due to particle misidentification, and φηγ, φπ 0 γ, π + π − π 0 γ, which have larger cross sections and can contribute via missing or spurious tracks or photons. In addition, we study the non-ISR backgrounds e + e − → qq (q = u, d, s, c) generated by JETSET [21] and e + e − → τ + τ − by KORALB [22]. The contribution from the Υ (4S) decays is found to be negligible. The cross sections for these processes are known with about 10% accuracy or better, which is sufficient for these measurements.
III. EVENT SELECTION AND KINEMATIC FIT
In the initial selection of candidate events, we consider photon candidates in the EMC with energy above 0.03 GeV and charged tracks reconstructed in the DCH or SVT or both that extrapolate within 0.25 cm of the beam axis in the transverse plane and within 3 cm of the nominal collision point along the axis. These criteria are looser than in our previous analysis [11], and have been chosen to maximize efficiency. We require a high-energy photon in the event with an energy in the initial e+e− c.m. frame of Eγ > 3 GeV, and either exactly four charged tracks with zero net charge and total momentum roughly opposite to the photon direction, or exactly two oppositely charged tracks that combine with a set of other photons to roughly balance the highest-energy photon momentum. We fit a vertex to the set of charged tracks and use it as the point of origin to calculate the photon direction. Most events contain additional soft photons due to machine background or interactions in the detector material.
We subject each of these candidate events to a set of constrained kinematic fits, and use the fit results, along with charged-particle identification, both to select the final states of interest and to measure backgrounds from other processes. We assume the photon with the highest E γ in the c.m. frame is the ISR photon, and the kinematic fits use its direction along with the four-momenta and covariance matrices of the initial e + e − and the set of selected tracks and photons. Because of excellent resolution for the momenta in the DCH and good angular resolution for the photons in the EMC, the ISR photon energy is determined with better resolution through fourmomentum conservation than through measurement in the EMC. Therefore we do not use its measured energy in the fits, eliminating the systematic uncertainty due to the EMC calibration for high energy photons. The fitted three-momenta for each charged track and photon are used in further kinematical calculations.
For the four-track candidates, the fits have three constraints (3C). We first fit to the π+π−π+π− hypothesis, obtaining a χ²_4π. If the four tracks include one identified K+ and one K−, we fit to the K+K−π+π− hypothesis and retain the event as a K+K−π+π− candidate. For events with one identified kaon, we perform fits with each of the two oppositely charged tracks assigned the kaon hypothesis, and retain the combination with the lower χ². If the event contains three or four identified K±, we fit to the K+K−K+K− hypothesis and retain the event as a K+K−K+K− candidate.
For the events with two charged tracks and five or more photon candidates, we require both tracks to be identified as kaons to suppress background from ISR π+π−π0π0 and K±K0Sπ∓ events. We then pair all non-ISR photon candidates and consider combinations with invariant mass within ±30 MeV/c² of the π0 mass as π0 candidates. We perform a six-constraint (6C) fit to each set of two non-overlapping π0 candidates plus the ISR photon direction, the two tracks and the beam particles. Both π0 candidates are constrained to the π0 mass, and we retain the combination with the lowest χ²_KKπ0π0.

FIG. 1: Distribution of χ² from the three-constraint fit for K+K−π+π− candidates in the data (points). The open histogram is the distribution for simulated signal events, normalized as described in the text. The cross-hatched (hatched) histogram represents the background from non-ISR events (plus that from ISR 4π events), estimated as described in the text.
A. Final Selection and Backgrounds
The experimental χ 2 KKπ + π − distribution for the K + K − π + π − candidates is shown in Fig. 1 as points, and the open histogram is the distribution for the simulated K + K − π + π − events. The simulated distribution is normalized to the data in the region χ 2 KKπ + π − < 10 where the backgrounds and radiative corrections are insignificant. The experimental distribution has contributions from background processes, but the simulated distribution is also broader than the expected 3C χ 2 distribution. This is due to multiple soft-photon emission from the initial state and radiation from the final-state charged particles, which are not taken into account by the fit, but are present in both data and simulation. The shape of the χ 2 distribution at high values was studied in detail [11,12] using specific ISR processes for which a very clean sample can be obtained without any limit on the χ 2 value.
The cross-hatched histogram in Fig. 1 represents the background from e + e − → qq events, which is based on the JETSET simulation. It is dominated by events with a hard π 0 producing a fake ISR photon, and the similar kinematics cause it to peak at low values of χ 2 KKπ + π − .
We evaluate this background in a number of E_c.m. ranges by combining the ISR photon candidate with another photon candidate in both data and simulated events, and comparing the π0 signals in the resulting γγ invariant mass distributions. The simulation gives an E_c.m. dependence consistent with the data, so we normalize it by an overall factor. The hatched histogram represents the sum of this background and that from ISR e+e− → π+π−π+π− events with one or two misidentified π±, which also contributes at low χ² values. We estimate this contribution as a function of E_c.m. from a simulation using the known cross section [11]. All remaining background sources are either negligible or give a χ²_KKπ+π− distribution that is nearly uniform over the range shown in Fig. 1. We therefore define a signal region χ²_KKπ+π− < 30, and estimate the sum of the remaining backgrounds from the difference between the number of data and simulated entries in a control region, 30 < χ²_KKπ+π− < 60. This difference is normalized to the corresponding difference in the signal region, as described in detail in Refs. [11,12]. The signal region contains 34635 data and 14077 simulated events, and the control region contains 4634 data and 723 simulated events.

FIG. 2: The invariant mass distribution for K+K−π+π− candidates in the data (points): the cross-hatched, hatched and open histograms represent, cumulatively, the non-ISR background, the contribution from ISR π+π−π+π− events, and the ISR background from the control region of Fig. 1.

Figure 2 shows the K+K−π+π− invariant mass distribution from threshold up to 5.0 GeV/c² for events in the signal region. Narrow peaks are apparent at the J/ψ and ψ(2S) masses. The cross-hatched histogram represents the qq background, which is negligible at low mass but becomes large at higher masses. The hatched region represents the ISR π+π−π+π− contribution, which we estimate to be 2.4% of the selected events on average. The open histogram represents the sum of all backgrounds, including those estimated from the control region. They total 6-8% at low mass but account for 20-25% of the observed data near 4 GeV/c² and become the largest contribution near 5 GeV/c².
We subtract the sum of backgrounds in each mass bin to obtain the number of signal events. Taking into account the uncertainties in the cross sections for the background processes, the normalization of events in the control region and the simulation statistics, we estimate a systematic uncertainty on the signal yield of less than 3% in the 1.6-3 GeV/c² mass region, increasing to 3-5% in the region above 3 GeV/c².
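A minimal sketch of this control-region subtraction for one mass bin; the transfer factor used here is a simple ratio of simulated populations, an assumption for illustration rather than the paper's exact prescription, and all argument names are placeholders:

def signal_yield(n_data_sig, n_mc_sig, n_data_con, n_mc_con, n_other_bkg):
    """Estimate the signal yield in one mass bin.

    The remaining background in the signal region is taken as the
    data-minus-simulation excess in the control region, scaled into
    the signal region (assumed scaling; see lead-in)."""
    excess_con = n_data_con - n_mc_con       # background-like excess
    transfer = n_mc_sig / n_mc_con           # chi^2-shape scaling (assumed)
    bkg_from_control = excess_con * transfer
    return n_data_sig - n_other_bkg - bkg_from_control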
B. Selection Efficiency
The selection procedures applied to the data are also applied to the simulated signal samples. The resulting K+K−π+π− invariant-mass distributions in the signal and control regions are shown in Fig. 3(a) for the phase-space simulation. The broad, smooth mass distribution is chosen to facilitate the estimation of the efficiency as a function of mass, and this model reproduces the observed distributions of kaon and pion momenta and polar angles. We divide the number of reconstructed simulated events in each mass interval by the number generated in that interval to obtain the efficiency shown as the points in Fig. 3(b). A third-order polynomial fit to the points is used for the further calculations. We simulate events with the ISR photon confined to the angular range 20°-160° with respect to the electron beam in the e+e− c.m. frame, which is about 30% wider than the EMC acceptance. This efficiency is for that fiducial region, but includes the acceptance for the final-state hadrons, the inefficiencies of the detector subsystems, and event loss due to additional soft-photon emission.
The simulations including the φ(1020)π + π − and/or K + K − f 0 (980) channels have very different mass and angular distributions in the K + K − π + π − rest frame. However, the angular acceptance is quite uniform for ISR events, and the efficiencies are consistent with those from the phase space simulation within 3%. To study possible mis-modeling of the acceptance, we repeat the analysis with the tighter requirements that all charged tracks be within the DIRC acceptance, 0.45 < θ ch < 2.4 radians, and the ISR photon be well away from the edges of the EMC, 0.35 < θ ISR < 2.4 radians. The fraction of selected data events satisfying the tighter requirements differs from the simulated ratio by 3.7%. We conservatively take the sum in quadrature of this variation and the 3% model variation (5% total) as a systematic uncertainty due to acceptance and model dependence.
We correct for mis-modeling of the shape of the χ 2 KKπ + π − distribution, by (3.0±2.0)%, and of the track-finding efficiency, following the procedures described in detail in Ref. [11]. For the former, we use a comparison of data and simulated χ 2 4π distributions in the much larger samples of ISR π + π − π + π − events. For the latter, we consider data and simulated events that contain a high-energy photon plus exactly three charged tracks and satisfy a set of kinematical criteria, including a good χ 2 from a kinematic fit under the hypothesis that there is exactly one missing track in the event. We find that the simulated track-finding efficiency is overestimated by (0.8 ± 0.5)% per track, so we apply a correction of +(3 ± 2)% to the signal yield. We correct the simulated kaon identification efficiency using e + e − → φ(1020)γ → K + K − γ events. Events with a hard ISR photon and two charged tracks, one of which is identified as a kaon, with a K + K − invariant mass near the φ mass provide a very clean sample, and we compare the fractions of data and simulated events with the other track also identified as a kaon, as a function of momentum. The data-simulation efficiency ratio averages 0.990 ± 0.001 in the 1-5 GeV/c momentum range with variations at the 0.01 level. We conservatively apply a correction of +(1.0 ± 1.0)% per kaon, or +(2.0 ± 2.0)% to the signal yield.
C. Cross Section for e + e − → K + K − π + π −

We calculate the e + e − → K + K − π + π − cross section as a function of the effective c.m. energy from

    \sigma_{KK\pi\pi}(E_{\rm c.m.}) = \frac{dN_{KK\pi\pi\gamma}}{d\mathcal{L}(E_{\rm c.m.})\,\varepsilon_{KK\pi\pi}(E_{\rm c.m.})},    (2)

where E c.m. ≡ m KKπ + π − c 2 , m KKπ + π − is the measured invariant mass of the K + K − π + π − system, dN KKπ + π − γ is the number of selected events after background subtraction in the interval dE c.m. , and ε KKπ + π − (E c.m. ) is the corrected detection efficiency. We calculate the differential luminosity, dL(E c.m. ), in each interval dE c.m. from ISR µ + µ − γ events with the photon in the same fiducial range used for the simulation; the procedure is described in Refs. [11,12]. From data-simulation comparison we conservatively estimate a systematic uncertainty on dL of 3%. This dL has been corrected for vacuum polarization.
TABLE II: Summary of corrections and systematic uncertainties on the e + e − → K + K − π + π − cross section. The total correction is the linear sum of the components and the total uncertainty is the sum in quadrature.
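Per mass bin, Eq. 2 reduces to a simple ratio. A minimal sketch with hypothetical inputs (background-subtracted yields, differential luminosity integrated over each bin, and the smoothed efficiency; none of these numbers are from the analysis):

    import numpy as np

    n_sig = np.array([120.0, 240.0, 180.0])      # hypothetical background-subtracted yields
    lumi = np.array([1.9, 2.0, 2.1])             # hypothetical dL per bin (nb^-1)
    eff = np.array([0.20, 0.21, 0.21])           # efficiency from the polynomial fit
    sigma = n_sig / (lumi * eff)                 # Eq. 2: cross section in nb
    sigma_stat = np.sqrt(n_sig) / (lumi * eff)   # statistical error, as in Table I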
For the cross section measurement we use the tighter angular criteria on the charged tracks and the ISR photon, discussed in Sec. IV B, to exclude possible errors from incorrect simulation of the EMC and DCH edge effects. We show the cross section as a function of E c.m. in Fig. 4, with statistical errors only, and provide a list of our results in Table I. The result is consistent with the direct measurement by DM1 [13], and with our previous measurement of this channel [11] but has much better statistical precision. The systematic uncertainties, summarized in Table II, affect the normalization, but have little effect on the energy dependence.
The cross section rises from threshold to a peak value of about 4.7 nb near 1.85 GeV, then generally decreases with increasing energy. In addition to narrow peaks at the J/ψ and ψ(2S) masses, there are several possible wider structures in the 1.8-2.8 GeV region. Such structures might be due to thresholds for intermediate resonant states, such as φf 0 (980) near 2 GeV. Gaussian fits to the simulated line shapes give a resolution on the measured K + K − π + π − mass that varies between 4.2 MeV/c 2 in the 1.5-2.5 GeV/c 2 region and 5.5 MeV/c 2 in the 2.5-3.5 GeV/c 2 region. The resolution function is not purely Gaussian due to soft-photon radiation, but less than 10% of the signal is outside the 25 MeV/c 2 mass bin. Since the cross section has no sharp structure other than the J/ψ and ψ(2S) peaks discussed in Sec. VIII below, we apply no correction for resolution.
Our previous study [11] showed many intermediate resonances in the K + K − π + π − final state. With the larger data sample used here, they can be seen more clearly and, in some cases, studied in detail. Figure 5(a) shows a scatter plot of the invariant mass of the K − π + pair versus that of the K + π − pair, and Fig. 5(b) shows the sum of the two projections. Here we have suppressed the contributions from φπ + π − and K + K − ρ(770) by requiring the K + K − and π + π − invariant masses to lie away from the φ and ρ(770) peaks, where the m(φ) and m(ρ) values are taken from the Particle Data Group (PDG) tables [5]. Bands and peaks corresponding to the K * 0 (892) and K * 0 2 (1430) are visible. In Fig. 5(c) we show the sum of projections of the K * 0 (892) bands, defined by lines in Fig. 5(a), with events in the overlap region plotted only once. No K * 0 (892) signal is seen, confirming that the e + e − → K * 0 (892)K * 0 (892) cross section is small. We observe associated K * 0 (892)K * 0 2 (1430) production, but it is mostly from J/ψ decays (see Sec. VIII).
We combine K * 0 /K * 0 candidates within the lines in Fig. 5(a) with the remaining pion and kaon to obtain the K * 0 π ± invariant mass distribution shown in Fig. 6(a), and the K * 0 π ± vs. K * 0 K ∓ mass scatter plot in Fig. 6(b). The bulk of Fig. 6(b) shows a strong positive correlation, characteristic of K * 0 Kπ final states with no higher resonances. The horizontal band in Fig. 6(b) corresponds to the peak region in Fig. 6(a), and is consistent with contributions from the K 1 (1270) and K 1 (1400) resonances. There is also an indication of a vertical band in Fig. 6(b), perhaps corresponding to a K * 0 K resonance at ∼1.5 GeV/c 2 . We now suppress K * 0 Kπ by considering only events outside the lines in Fig. 5(a). In Fig. 7 we show the K ± π + π − invariant mass (two entries per event) vs. that of the π + π − pair, along with its two projections. There is a strong ρ(770) → π + π − signal, and the K ± π + π − mass projection shows further indications of the K 1 (1270) and K 1 (1400) resonances, both of which decay into Kρ(770). There are suggestions of additional structure in the π + π − mass distribution, including a possible f 0 (980) shoulder and a possible enhancement near the f 2 (1270); however, the current statistics do not allow us to make definitive statements.
Disentangling intermediate states that involve relatively wide resonances requires a partial wave analysis, which is beyond the scope of this paper. Here we present the cross section for the sum of all states including a K * 0 (892), and study intermediate states that include a narrow φ or f 0 resonance.
Signals for the K * 0 (892) and K * 0 2 (1430) are clearly visible in the K ± π ∓ mass distributions in Fig. 5(b) and, with a different bin size, in Fig. 8(a). We perform a fit to this distribution using P-wave Breit-Wigner (BW) functions for the K * 0 and K * 0 2 signals and a third-order polynomial function for the remainder of the distribution taking into account the Kπ threshold. The result is shown in Fig. 8(a). The fit yields a K * 0 signal of 19738 ± 266 events with m(Kπ) = 896.2 ± 0.3 MeV/c 2 and Γ(Kπ) = 50.6 ± 0.9 MeV, and a K * 0 2 signal of 1786±127 events with m(Kπ) = 1428.5±3.9 MeV/c 2 and Γ(Kπ) = 113.7 ± 9.2 MeV. These values are consistent with current world averages [5], and the fit describes the data well, indicating that contributions from any other resonances decaying into K ± π ∓ are small.
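The text does not write out its Breit-Wigner parameterization. A commonly used relativistic form with a mass-dependent width, given here only as an illustrative assumption, is

    BW_L(m) = \frac{m_0\,\Gamma(m)}{\left(m_0^2 - m^2\right)^2 + m_0^2\,\Gamma^2(m)},
    \qquad
    \Gamma(m) = \Gamma_0 \left(\frac{p}{p_0}\right)^{2L+1} \frac{m_0}{m},

where p is the Kπ momentum in the pair rest frame, p 0 is its value at m = m 0 , and L = 1 for the K * 0 (892); the K * 0 2 (1430) would enter with the appropriate higher orbital angular momentum.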
We perform a similar fit to the data in bins of the K + K − π + π − invariant mass, with the resonance masses and widths fixed to the values obtained by the overall fit. Since there is at most one K * 0 per event, we convert the resulting K * 0 yield in each bin into an "inclusive" e + e − → K * 0 Kπ cross section, following the procedure described in Sec. IV C. This cross section is shown in Fig. 8(b) and listed in Table III for the effective c.m. energies from threshold up to 3.5 GeV. At higher energies the signals are small and contain an unknown, but possibly large, contribution from e + e − → qq events. There is a rapid rise from threshold to a peak value of about 4 nb at 1.84 GeV, followed by a very rapid decrease with increasing energy. There are suggestions of narrow structure in the peak region, but the only statistically significant structure is the J/ψ peak, which is discussed below.
The e + e − → K * 0 Kπ contribution is a large fraction of the total K + K − π + π − cross section at all energies above its threshold, and dominates in the 1.8-2.0 GeV region. We are unable to extract a meaningful measurement of the K * 0 2 Kπ cross section from this data sample because it is more than ten times smaller. The K + K − ρ 0 (770) intermediate state makes up the majority of the remainder of the cross section, and it can be estimated as the difference of the values in Table I and Table III for the K + K − π + π − and K * 0 Kπ final states.
Intermediate states containing relatively narrow resonances can be studied more easily. Figure 9(a) shows a scatter plot of the invariant mass of the π + π − pair versus that of the K + K − pair. Horizontal and vertical bands corresponding to the ρ 0 (770) and φ, respectively, are visible, and there is a concentration of entries on the φ band corresponding to the correlated production of φ and f 0 (980). The φ signal is also visible in the K + K − mass projection, Fig. 9(c). The large contribution from the ρ(770), coming from the K 1 decay, is nearly uniform in the K + K − mass, and the cross-hatched histogram shows the non-K + K − π + π − background estimated from the control region in χ 2 KKπ + π − . The cross-hatched histogram also shows a φ peak, but this is a small fraction of the events. Subtracting this background and fitting the remaining data gives 1706±56 events produced via the φπ + π − intermediate state.
To study the φπ + π − channel, we select candidate events with a K + K − invariant mass within 10 MeV/c 2 of the φ mass, indicated by the inner vertical lines in Figs. 9(a,c), and estimate the non-φ contribution from the mass sidebands between the inner and outer vertical lines. In Fig. 9(b) we show the π + π − invariant mass distributions for φ candidate events, sideband events and χ 2 control region events as the open, hatched and cross-hatched histograms, respectively, and in Fig. 9(d) we show the numbers of entries from the candidate events minus those from the sideband and control region. There is a clear f 0 peak over a broad mass distribution, with no indication of associated ρ 0 production.
A coherent sum of two Breit-Wigner functions is sufficient to describe the invariant mass distribution of the π + π − pair recoiling against a φ in Fig. 9(d). We fit with the function

    \frac{dN}{dm} = \left| \sqrt{N_1}\,{\rm BW}_1(m) + e^{i\psi}\sqrt{N_2}\,{\rm BW}_2(m) \right|^2,    (3)

where m is the π + π − invariant mass, m i and Γ i are the parameters of the i th resonance, ψ is their relative phase, and N i are normalization parameters corresponding to the number of events under each BW (each BW i is normalized to unit area). One BW corresponds to the f 0 (980), but a wide range of values of the other parameters can describe the data. Fixing the relative phase to ψ = π and the parameters of the first BW to m 1 = 0.6 GeV/c 2 and Γ 1 = 0.45 GeV (which could be interpreted as the f 0 (600) [5]), we obtain the fit shown in Fig. 9(d). It describes the data well and gives an f 0 (980) signal of 262±30 events, with m 2 = 0.973±0.003 GeV/c 2 and Γ 2 = 0.065 ± 0.013 GeV, consistent with the PDG values [5]. There is a suggestion of an f 2 (1270) peak in the data, but it is much smaller than the f 0 peak and we do not consider it further.
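A self-contained sketch of a fit of this form to a toy spectrum is given below (the simple non-relativistic amplitudes are stand-ins for the actual lineshapes, so the fitted n 1 , n 2 only approximate the event yields N i of Eq. 3):

    import numpy as np
    from scipy.optimize import curve_fit

    def bw(m, m0, g0):
        # Simple non-relativistic Breit-Wigner amplitude (stand-in lineshape).
        return 1.0 / (m0**2 - m**2 - 1j * m0 * g0)

    def model(m, n1, n2, m2, g2):
        # Coherent sum as in Eq. 3, with BW1 fixed (m1 = 0.6 GeV/c^2,
        # Gamma1 = 0.45 GeV) and relative phase psi = pi, as in the text.
        a1 = np.sqrt(n1) * bw(m, 0.6, 0.45)
        a2 = np.sqrt(n2) * np.exp(1j * np.pi) * bw(m, m2, g2)
        return np.abs(a1 + a2) ** 2

    m = np.linspace(0.3, 1.4, 110)               # pi+pi- mass bin centers (GeV/c^2)
    rng = np.random.default_rng(1)
    counts = rng.poisson(model(m, 120.0, 4.0, 0.973, 0.065))   # toy spectrum
    popt, _ = curve_fit(model, m, counts.astype(float),
                        p0=[100.0, 3.0, 0.97, 0.07],
                        bounds=([0.0, 0.0, 0.9, 0.01], [1e4, 100.0, 1.05, 0.3]))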
We obtain the number of e + e − → φπ + π − events in bins of φπ + π − invariant mass by fitting the K + K − invariant mass projection in that bin after subtracting non-K + K − π + π − background. Each projection is a subset of Fig. 9(c), where the curve represents a fit to the full sample. In each mass bin, all parameters are fixed to the values obtained from the overall fit except the numbers of events in the φ peak and the non-φ component.
The efficiency may depend on the details of the production mechanism. Using the two-pion mass distribution in Fig. 9(d) as input, we simulate the π + π − system as an S-wave comprising two scalar resonances, with parameters set to the values given above. To describe the φπ + π − mass distribution we use a simple model with one resonance, the φ(1680), of mass 1.68 GeV/c 2 and width 0.2 GeV, decaying to φf 0 . The simulated reconstructed spectrum is shown in Fig. 10(a). There is a sharp increase at about 2 GeV/c 2 due to the φf 0 (980) threshold. All other structure is determined by phase space and an m −2 falloff with increasing mass. Dividing the number of reconstructed events in each bin by the number of generated ones, we obtain the efficiency as a function of φπ + π − mass shown in Fig. 10(b). The solid line represents a fit to a third order polynomial, and the dashed line the corresponding fit to the phase space model from Fig. 3. The model dependence is weak, giving confidence in the efficiency calculation. We calculate the e + e − → φπ + π − cross section as described in Sec. IV C but using the efficiency from the fit to Fig. 10(b) and dividing by the φ → K + K − branching fraction of 0.491 [5]. We show our results as a function of energy in Fig. 11 and list them in Table IV. The cross section has a peak value of about 0.6 nb at about 1.7 GeV, then decreases with increasing energy until the φ(1020)f 0 (980) threshold, around 2.0 GeV. From this point it rises, falls sharply at about 2.2 GeV, and then decreases slowly. Except in the charmonium region, the results at energies above 2.9 GeV are not meaningful due to small signals and potentially large backgrounds, and are omitted from Table IV. Figure 11 displays the cross section up to 4.5 GeV to show the signals from the J/ψ and ψ(2S) decays. They are discussed in Sec. VIII. There are no previous measurements of this cross section, and our results are consistent with the upper limits given in Ref. [13]. We perform a study of the angular distributions in the φ(1020)π + π − final state by considering all K + K − π + π − candidate events with mass below 3 GeV/c 2 , binning them in terms of the cosine of the angles defined below, and fitting the background-subtracted K + K − mass projections. The efficiency is nearly uniform in these angles, so we study the number of events in each bin. We define the φ production angle, Θ φ , as the angle between the φ momentum and the e − beam direction in the rest frame of the φπ + π − system. The distribution of cos Θ φ , shown in Fig. 12(a), is consistent with the uniform distribution expected if S-wave two-body channels φX, X → π + π − dominate the φπ + π − system. We define the pion and kaon helicity angles, Θ π + and Θ K + , as those between the π + and the π + π − -system momenta in the π + π − rest frame and between the K + and ISR photon momenta in the φ rest frame, respectively. The distributions of cos Θ π + and cos Θ K + , shown in Figs. 12(b) and 12(c), respectively, are consistent with those expected from scalar and vector meson decays.
TABLE IV: Measurements of the e + e − → φ(1020)π + π − cross section (errors are statistical only).
The narrow f 0 (980) peak seen in Fig. 9(d) allows the selection of a fairly clean sample of φf 0 events. We repeat the analysis just described with the additional requirement that the π + π − invariant mass be in the range 0.85-1.10 GeV/c 2 .
The fit to the full sample yields about 700 events; all of these contain a true φ, but about 10% are from e + e − → φπ + π − events where the pion pair is not produced through the f 0 (980).
We convert the numbers of fitted events in each mass bin into a measurement of the e + e − → φ(1020)f 0 (980) cross section as described above, dividing by the f 0 → π + π − branching fraction of two-thirds. The cross section is shown in Fig. 13 as a function of the effective c.m. energy and is listed in Table V. Its behavior near threshold does not appear to be smooth, but is more consistent with a steep rise to a value of about 0.3 nb at 1.95 GeV followed by a slow decrease that is interrupted by a structure around 2.175 GeV. Possible interpretations of this structure are discussed in Sec. VII. Again, the values are not meaningful for effective c.m. energies above about 2.9 GeV, except for the J/ψ and ψ(2S) signals, discussed in Sec. VIII.
A. Final Selection and Backgrounds
The K + K − π 0 π 0 sample contains background from the ISR processes e + e − → K + K − π 0 γ and K + K − ηγ, in which two soft photon candidates from machine- or detector-related background combine with the relatively energetic photons from the π 0 or η to form two fake π 0 candidates. We reduce this background using the helicity angle between each reconstructed π 0 direction and the direction of its higher-energy photon daughter calculated in its rest frame. If the cosines of both helicity angles are higher than 0.85, we remove the event.
FIG. 13: The e + e − → φ(1020)f0(980) cross section as a function of the effective e + e − c.m. energy obtained from the K + K − π + π − final state.
Figure 14 shows the distribution of χ 2 KKπ 0 π 0 for the remaining candidates together with the simulated K + K − π 0 π 0 events. Again, the distributions are broader than those for a typical 6C χ 2 due to higher order ISR, and we normalize the histogram to the data in the region χ 2 KKπ 0 π 0 < 10. The cross-hatched histogram in Fig. 14 represents background from e + e − → qq events, evaluated in the same way as for the K + K − π + π − final state. The hatched histogram represents the sum of this background and that from ISR π + π − π 0 π 0 events with both charged pions misidentified as kaons, evaluated using the simulation.
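For π 0 → γγ the helicity-angle cosine can be computed directly from the lab-frame photon energies, without an explicit boost. A minimal sketch (function and variable names are ours):

    def cos_helicity(e_gamma_hi, e_gamma_lo, e_pi0, p_pi0):
        # For pi0 -> gamma gamma the lab photon energies are
        # E_1,2 = (gamma * m_pi0 / 2) * (1 +/- beta * cos(theta*)),
        # so the rest-frame decay-angle cosine follows from the energy asymmetry:
        #   cos(theta*) = (E1 - E2) / (beta * (E1 + E2)),  beta = p/E of the pi0.
        beta = p_pi0 / e_pi0
        return (e_gamma_hi - e_gamma_lo) / (beta * (e_gamma_hi + e_gamma_lo))

    # The event is removed if this cosine exceeds 0.85 for both pi0 candidates.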
The dominant background in this case is from residual ISR K + K − π 0 and K + K − η events, as well as ISR K + K − π 0 π 0 π 0 events. Their simulated contribution, shown as the dashed histogram in Fig. 14, is consistent with the data in the high χ 2 KKπ 0 π 0 region. All other backgrounds are either negligible or distributed uniformly in χ 2 KKπ 0 π 0 . We define a signal region, χ 2 KKπ 0 π 0 < 50, containing 4425 data and 6948 simulated events, and a control region, 50 < χ 2 KKπ 0 π 0 < 100, containing 1751 data and 848 simulated events.
FIG. 14: Distribution of χ 2 from the six-constraint fit for K + K − π 0 π 0 candidates in the data (points). The open histogram is the distribution for simulated signal events, normalized as described in the text. The cross-hatched, hatched and dashed histograms represent, cumulatively, the backgrounds from non-ISR events, ISR π + π − π 0 π 0 events, and ISR K + K − π 0 , K + K − η and K + K − π 0 π 0 π 0 events.
Figure 15 shows the K + K − π 0 π 0 invariant mass distribution from threshold up to 5 GeV/c 2 for events in the signal region. The qq background (cross-hatched histogram) is negligible at low masses but forms a large fraction of the selected events above about 4 GeV/c 2 . The ISR π + π − π 0 π 0 contribution (hatched region) is negligible except in the 1.5-2.5 GeV/c 2 region. The sum of all other backgrounds, estimated from the control region, is the dominant contribution below 1.6 GeV/c 2 and non-negligible everywhere. The total background in the 1.6-2.5 GeV/c 2 region is 15-20% (open histogram).
We subtract the sum of backgrounds from the number of selected events in each mass bin to obtain a number of signal events. Considering uncertainties in the cross sections for the background processes, the normalization of events in the control region and the simulation statistics, we estimate a systematic uncertainty on the signal yield after background subtraction of less than 5% in the 1.6-3.0 GeV/c 2 region, increasing to 10% in the region above 3 GeV/c 2 .
FIG. 15: The invariant mass distribution for K + K − π 0 π 0 candidates in the data (points): the cross-hatched, hatched and open histograms represent, cumulatively, the non-ISR background, the contribution from ISR π + π − π 0 π 0 events, and the ISR background from the control region of Fig. 14.
B. Selection Efficiency
The detection efficiency is determined in the same manner as in Sec. IV B. Figure 16(a) shows the simulated K + K − π 0 π 0 invariant mass distributions in the signal and control regions from the phase space model. We divide the number of reconstructed events in each 40 MeV/c 2 mass interval by the number generated in that interval to obtain the efficiency shown as the points in Fig. 16(b); a third order polynomial fit to the efficiency is used to calculate the cross section. Again, the simulation of the ISR photon covers a limited angular range, about 30% wider than the EMC acceptance, and the efficiency shown is about a factor of 0.7 lower than that for the hadronic system alone. Simulations assuming dominance of the φ → K + K − and/or f 0 → π 0 π 0 channels give consistent results, and we apply the same 5% systematic uncertainty for possible model dependence as in Sec. IV B.
We correct for mis-modeling of the track finding and kaon identification efficiencies as in Sec. IV B, and for the shape of the χ 2 KKπ 0 π 0 distribution analogously, using the result in Ref. [12], (0 ± 6)%. We correct the π 0 -finding efficiency using the procedure described in detail in Ref. [12]. From ISR e + e − → ωπ 0 γ → π + π − π 0 π 0 γ events selected with and without the π 0 from the ω decay, we find that the simulated efficiency for one π 0 is too high by (2.8±1.4)%. Conservatively, we apply a correction of +(5.6 ± 2.8)% for the two π 0 's in the event.
C. Cross Section for e + e − → K + K − π 0 π 0

We calculate the cross section for e + e − → K + K − π 0 π 0 in 40 MeV E c.m. intervals from the analog of Eq. 2, using the invariant mass of the K + K − π 0 π 0 system to determine the effective c.m. energy. We show the first measurement of this cross section in Fig. 17 and list the results obtained in Table VI. The cross section rises to a peak value near 1 nb at 2 GeV, falls sharply at 2.2 GeV, then decreases slowly. The only statistically significant structure is the J/ψ peak. The drop at 2.2 GeV is similar to that seen in the K + K − π + π − mode. Again, dL includes corrections for vacuum polarization that should be omitted from calculations of g µ −2.
The simulated K + K − π 0 π 0 invariant mass resolution is 8.8 MeV/c 2 in the 1.5-2.5 GeV/c 2 mass range, and increases with mass to 11.2 MeV/c 2 in the 2.5-3.5 GeV/c 2 range. Since less than 20% of the events in a 40 MeV/c 2 bin are reconstructed outside that bin and the cross section has no sharp structure other than the J/ψ peak, we again make no correction for resolution. The point-to-point systematic errors are much smaller than statistical ones, and the errors on the normalization are summarized in Table VII, along with the corrections that were applied to the measurements. The total correction is +9.2%, and the total systematic uncertainty is 10% at low mass, increasing to 14% above 3 GeV/c 2 .
TABLE VII: Summary of corrections and systematic uncertainties on the e + e − → K + K − π 0 π 0 cross section. The total correction is the linear sum of the components and the total uncertainty is the sum in quadrature.
Source | Correction | Uncertainty
Rad. Corrections | - | 1%
Backgrounds | - | 5% (m KKπ 0 π 0 < 3 GeV/c 2 ), 10% (m KKπ 0 π 0 > 3 GeV/c 2 )
Model Dependence | - | 5%
χ 2 KKπ 0 π 0 Distn. | 0% | 6%
Tracking Efficiency | +1.6% | 0.8%
Kaon ID Efficiency | +2% | 2%
π 0 Efficiency | +5.6% | 2.8%
ISR Luminosity | - | 3%
Total | +9.2% | 10% (m KKπ 0 π 0 < 3 GeV/c 2 ), 14% (m KKπ 0 π 0 > 3 GeV/c 2 )

D. Substructure in the K + K − π 0 π 0 Final State

A scatter plot of the invariant mass of the K − π 0 pair versus that of the K + π 0 pair is shown in Fig. 18(a) with two entries per event selected in the χ 2 signal region. Horizontal and vertical bands corresponding to the K * + (892) and K * − (892), respectively, are visible. Figure 18(b) shows as points the sum of the two projections of Fig. 18(a); a large K * ± (892) signal is evident. Fitting this distribution with the function discussed in Sec. IV E gives a good χ 2 and the curve shown on Fig. 18(b). The K * ± 2 (1430):K * ± (892) ratio is consistent with that for neutral K * seen in the K + K − π + π − channel, and the number of K * ± (892) in the peak is consistent with one per selected event. The hatched histogram in Fig. 18(b) represents the K ± π 0 mass in events with the other K ∓ π 0 mass within the lines in Fig. 18(a), but with events in the overlap region used only once, and shows no K * ± (892) signal. These results indicate that the e + e − → K * ± K * ∓ cross section is small and that the K * ± (892)K ∓ π 0 channels dominate the overall cross section.
FIG. 18: (a) Invariant mass of the K − π 0 pair versus that of the K + π 0 pair in selected K + K − π 0 π 0 events (two entries per event); (b) sum of projections of (a) (dots, four entries per event). The curve represents the result of the fit described in the text. The hatched histogram is the K ± π 0 distribution for events in which the other K ∓ π 0 combination is within the K * ± (892) bands indicated in (a), with events in the overlap region taken only once.
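As a quick arithmetic check of the totals in Table VII above, using only the component values listed there:

    import math

    corrections = [0.0, 1.6, 2.0, 5.6]            # chi^2 shape, tracking, kaon ID, pi0 (%)
    print(sum(corrections))                       # 9.2 -> total correction +9.2% (linear sum)
    unc_low = [1, 5, 5, 6, 0.8, 2, 2.8, 3]        # components for m < 3 GeV/c^2 (%)
    unc_high = [1, 10, 5, 6, 0.8, 2, 2.8, 3]      # backgrounds grow to 10% above 3 GeV/c^2
    print(math.sqrt(sum(u * u for u in unc_low)))   # ~10.4 -> quoted 10%
    print(math.sqrt(sum(u * u for u in unc_high)))  # ~13.5 -> quoted 14%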
We find no signals for resonances in the K + K − π 0 or K ± π 0 π 0 decay modes. Since the K * ± (892)K ∓ π 0 channels dominate and the statistics are low in any mass bin, we do not attempt to extract a separate K * ± (892)K ∓ π 0 cross section. The total K + K − π 0 π 0 cross section is roughly a factor of four lower than the K * 0 (892)K ± π ∓ cross section observed in the K + K − π + π − final state. This is consistent with what one might expect from isospin and the charged vs. neutral K * branching fractions into charged kaons.
E. The φ(1020)π 0 π 0 Intermediate State

The selection of events containing a φ(1020) → K + K − decay follows that in Section IV F. Figure 19(a) shows a scatter plot of the invariant mass of the π 0 π 0 pair versus that of the K + K − pair. A vertical band corresponding to the φ is visible, whose intensity decreases with increasing π 0 π 0 mass except for an enhancement in the f 0 (980) region. The φ signal is also visible in the K + K − invariant mass projection shown in Fig. 19(c). The relative non-φ background is smaller than in the K + K − π + π − mode, but there is a large background from ISR φπ 0 , φη and/or φπ 0 π 0 π 0 events, as indicated by the control region histogram (hatched) in Fig. 19(c). The contributions from non-ISR and ISR π + π − π 0 π 0 events are negligible. Selecting φ candidate and sideband events as for the K + K − π + π − mode (vertical lines in Figs. 19(a,c)), we obtain the π 0 π 0 mass projections shown as the open and cross-hatched histograms, respectively, in Fig. 19(b). Control region events (hatched histogram) are concentrated at low masses. A peak corresponding to the f 0 (980) is visible over a relatively low background.
In Fig. 19(d) we show the numbers of entries from the candidate events minus those from the sideband and control regions. A sum of two Breit-Wigner functions is again sufficient to describe the data. Fitting Eq. 3 with the parameters of one BW fixed to the values given in Sec. IV F, corresponding to the f 0 (600), we obtain a good fit, shown as the curve in Fig. 19(d). This fit yields an f 0 (980) signal of 54 ± 9 events with a mass m = 0.970 ± 0.007 GeV/c 2 and width Γ = 0.081 ± 0.021 GeV, consistent with PDG values [5]. Due to low statistics and high backgrounds, we do not extract an e + e − → φ(1020)π 0 π 0 cross section. Since the background under the f 0 (980) peak in Figs. 19(b,d) is relatively low, we are able to extract the φ(1020)f 0 (980) contribution. As in Sec. IV G, we require the dipion mass to be in the range 0.85-1.10 GeV/c 2 and fit the background-subtracted K + K − mass projection in each bin of K + K − π 0 π 0 mass to obtain a number of φf 0 events. Again, about 10% of these are φπ 0 π 0 events in which the π 0 π 0 pair is not produced through the f 0 , but this does not affect the conclusions.
We convert the number of fitted events in each mass bin into a measurement of the e + e − → φ(1020)f 0 (980) cross section as described above, dividing by the f 0 (980) → π 0 π 0 branching fraction of one-third. The cross section is shown in Fig. 20 as a function of E c.m. and is listed in Table VIII. Due to the smaller number of events, we have used larger bins at higher energies. The overall shape is consistent with that obtained in the K + K − π + π − mode (see Fig. 13), and there is a sharp drop near 2.2 GeV.

VI. THE K + K − K + K − FINAL STATE

A. Final Selection and Backgrounds

Figure 21 shows the distribution of χ 2 4K for the K + K − K + K − candidates as points, and the open histogram is the distribution for simulated K + K − K + K − events, normalized to the data in the region χ 2 4K < 5 where the backgrounds and radiative corrections are small. The hatched histogram represents the background from e + e − → qq events, evaluated as for the other modes. The cross-hatched histogram represents the background from simulated ISR K + K − π + π − events with both charged pions misidentified as kaons.
We define signal and control regions of χ 2 4K < 20 and 20 < χ 2 4K < 40, respectively. The signal region contains 2,305 data and 20,616 simulated events, and the control region contains 463 data and 1,601 simulated events.
FIG. 21: Distribution of χ 2 4K for K + K − K + K − candidates in the data (points). The open histogram is the distribution for simulated signal events, normalized as described in the text. The hatched histogram represents the background from non-ISR events, estimated as described in the text. The cross-hatched histogram is for simulated ISR K + K − π + π − events.
Figure 22 shows the K + K − K + K − invariant mass distribution from threshold up to 5 GeV/c 2 for events in the signal region as points with errors. The qq background (hatched histogram) is small at low masses, but dominant above about 4.5 GeV/c 2 . Since the ISR K + K − π + π − background does not peak at low χ 2 4K values, we include it in the background evaluated from the control region, according to the method explained in Sec. IV A. It dominates this background, which is 10% or lower at all masses. The total background is shown as the open histogram in Fig. 22.
We subtract the sum of backgrounds from the number of selected events in each mass bin to obtain a number of signal events. Considering uncertainties in the cross sections for the background processes, the normalization of events in the control region, and the simulation statistics, we estimate a systematic uncertainty on the signal yield of less than 5% in the 2-3 GeV/c 2 region, increasing to 10% in the region above 3 GeV/c 2 .
B. Selection Efficiency
The detection efficiency is determined as for the other two final states. Figure 23(a) shows the simulated K + K − K + K − invariant-mass distributions in the signal and control regions from the phase space model. We divide the number of reconstructed events in each mass interval by the number of generated ones in that interval to obtain the efficiency shown as the points in Fig. 23(b). It is quite uniform, and we fit a third order polynomial, which we use to extract the cross section. Again, the efficiency applies to the limited angular range of the ISR photon simulation, and is about a factor of 0.7 lower than that for the hadronic system alone. A simulation assuming dominance of the φK + K − channel, with the K + K − pair in an S-wave, gives consistent results, and we apply the same 5% systematic uncertainty as for the other final states. We correct for mis-modeling of the track finding and kaon identification efficiencies as in Sec. IV B, and for the shape of the χ 2 4K distribution analogously, using the result in Ref. [11], (3.0 ± 2.0)%.
C. Cross Section for e + e − → K + K − K + K −

We calculate the e + e − → K + K − K + K − cross section in 40 MeV E c.m. intervals from the analog of Eq. 2, using the invariant mass of the K + K − K + K − system to determine the effective c.m. energy. We show this cross section in Fig. 24 and list it in Table IX. It rises to a peak value near 0.1 nb in the 2.3-2.7 GeV region, then decreases slowly with increasing energy. The only statistically significant narrow structure is the J/ψ peak. Again, dL includes corrections for vacuum polarization that should be omitted from calculations of g µ −2. This supersedes our previous result [11]. The simulated K + K − K + K − invariant mass resolution is 3.0 MeV/c 2 in the 2.0-2.5 GeV/c 2 range, increasing with mass to 4.7 MeV/c 2 in the 2.5-3.5 GeV/c 2 range. Since the cross section has no sharp structure except for the J/ψ peak, we again make no correction for resolution. The errors shown in Fig. 24 and Table IX are statistical only. The point-to-point systematic errors are much smaller than these, and the errors on the normalization are summarized in Table X, along with the corrections applied to the measurement. The total correction is +10%, and the total systematic uncertainty is 9% at low mass, increasing to 13% above 3 GeV/c 2 .

D. Substructure in the K + K − K + K − Final State

Figure 25 shows the invariant mass distribution for all K + K − pairs in the selected K + K − K + K − events (4 entries per event) as the open histogram. A prominent φ peak is visible, along with possible peaks at 1.5, 1.7 and 2.0 GeV/c 2 . The hatched histogram is for the pair in each event with mass closest to the nominal φ mass, and indicates that the φK + K − channel dominates the K + K − K + K − final state. Our previous finding of very little φ signal [11] was incorrect due to an error in the analysis algorithm. If the pair mass closest to the φ mass is within 10 MeV/c 2 of the φ mass, then we include the invariant mass of the other K + K − combination in Fig. 26. The contribution from events in the J/ψ peak is shown as the hatched histogram; it is in agreement with the BES experiment [24], which studied the structures around 1.5, 1.7 and 2.0 GeV/c 2 in detail. There is no evidence for the φf 0 channel, but there is an enhancement at threshold that can be interpreted as the tail of the f 0 (980). This is expected in light of the φf 0 cross sections measured above in the K + K − π + π − and K + K − π 0 π 0 final states. However, the statistics and uncertainties in the f 0 (980) → K + K − lineshape do not allow a meaningful extraction of the cross section in this final state. We observe no significant structure in the K + K − K ± mass distribution. We use these events to study the possibility that part of our φπ + π − signal is due to φK + K − events with the two kaons not from the φ being taken as pions. No structure is present in the reconstructed K + K − π + π − invariant mass distribution from these events.
VII. e + e − → φf0 NEAR THRESHOLD

The behavior of the e + e − → φf 0 cross section near threshold shows a structure near 2150 MeV/c 2 , and we have published this result in Ref. [14]. Here we provide a more detailed study of this cross section in the 1.8-3 GeV region. In Fig. 27 we superimpose the cross sections measured in the K + K − π + π − and K + K − π 0 π 0 final states (shown in Figs. 13 and 20); they are consistent with each other. The K + K − K + K − cross section (Sec. VI D) is also consistent with the presence of a structure near 2150 MeV/c 2 and shows a contribution from the φf 0 channel, but since we cannot extract a meaningful φf 0 cross section, we do not discuss this final state further.
First, we attempt to reproduce this spectrum with a smooth threshold function. In the absence of resonances, the only theoretical constraint on the cross section well above threshold is that it should decrease smoothly with increasing E c.m. . However the form of the cutoff at threshold is determined by the properties of the intermediate resonances and the final state particle spins, phase space and detector resolution. The model discussed in Sec. IV F takes the φ and f 0 (980) lineshapes, the spins of all particles and their phase space into account, and postulates a simple E −4 c.m. dependence of the cross section. For the e + e − → φf 0 reaction, it predicts the cross section shown as the hatched histogram in Fig. 27, normalized to the same total area as the data. It shows a sharp rise from the threshold with a peak near 2070 MeV and is inconsistent with the data.
FIG. 27: The e + e − → φ(1020)f0(980) cross section measured in the K + K − π + π − (circles) and K + K − π 0 π 0 (squares) final states. The hatched histogram shows the simulated cross section, assuming no resonant structure. The solid (dashed) line represents the result of the one-resonance (no-resonance) fit described in the text.
To account for uncertainties in the f 0 width and the shape of the cross section well above threshold, we seek a functional form that describes the simulation and whose parameters can be varied to cover a reasonable range of possibilities. This can be achieved by the product of a phase space term, an exponential rise and a second order polynomial (Eq. 4), where the a i are free parameters, σ 0 is a normalization factor, and P (µ) is a good approximation of the two-body phase space for particles with similar masses; both the φ(1020) and f 0 (980) masses are close to 1 GeV/c 2 , so this approximation is adequate here. The results of fits with this function are listed in Table XI, and the no-resonance fit is shown as the dashed curve on Fig. 27; all fits are inconsistent with the data. We now add a resonance and fit the data with the coherent sum of this non-resonant function and a Breit-Wigner resonance term (Eq. 5), where m 1 and Γ 1 are the mass and width of the resonance, σ 1 is its peak cross section, and ψ 1 is its phase relative to the non-resonant component. We obtain good fits both assuming no interference between the two components, ψ 1 = π, and with ψ 1 floating. The result of the latter fit is shown as the solid curve on Fig. 27. The data are somewhat above this curve near 2.4 GeV/c 2 and a fit with two resonances can also describe the data. Due to the sharp drop near 2.2 GeV/c 2 , the single-resonance fit with interference gives a resonance mass about 30 MeV/c 2 higher than the other two fits. All these fits, with or without resonances, give a peak non-resonant cross section in the range 0.3-0.4 nb, which is of independent theoretical interest, because it can be related to the φ → f 0 (980)γ decay studied at the φ-factory [25].
Under the hypothesis of one resonance interfering with the non-resonant component, the fit gives the resonance parameters σ 1 = 0.13 ± 0.04 nb, m 1 = 2.175 ± 0.010 GeV/c 2 , Γ 1 = 0.058 ± 0.016 GeV, and ψ 1 = −0.57 ± 0.30 radians, along with χ 2 /n.d.f.= 37.6/(56 − 9) (C.L. 0.84). We can estimate the product of its electronic width and branching fraction to φf 0 as

    \mathcal{B}_{\phi f_0}\,\Gamma_{ee} = \frac{m_1^2\,\Gamma_1\,\sigma_1}{12\pi\,C} = (2.5 \pm 0.8 \pm 0.4)\ {\rm eV},    (6)

where we fit the product Γ 1 σ 1 to reduce correlations, and the conversion constant C = 0.389 mb (GeV/c 2 ) 2 . The second error is systematic and corresponds to the normalization errors on the cross section. The significance of the structure calculated from the change in χ 2 between the best fit and the null hypothesis is 6.2 standard deviations. Since this calculation can be unreliable in the case of low statistics and functions that vary rapidly on the scale of the bin size, we perform a set of simulations in which we generate a number of events according to a Poisson distribution about the number observed in the data and with a mass distribution given by either the simulation or the fitted function in Fig. 27 without resonant structure. On each sample, we perform fits to Eqs. 4 and 5 and calculate the difference in χ 2 . The fraction of trials giving a χ 2 difference larger than that seen in the data corresponds to a significance of approximately 5 standard deviations.
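The pseudo-experiment calibration can be sketched as follows; the shapes and numbers below are illustrative stand-ins, whereas the paper fits its Eqs. 4 and 5 to spectra generated from the no-resonance models:

    import numpy as np
    from scipy.optimize import curve_fit

    x = np.linspace(1.9, 3.0, 56)                      # bin centers (GeV/c^2)
    null_expect = 40.0 * np.exp(-(x - 1.9) / 0.3)      # smooth stand-in, no resonance

    def null_model(x, a, b):
        return a * np.exp(-(x - 1.9) / b)

    def alt_model(x, a, b, s, m0, w):
        # Null shape plus a Gaussian bump standing in for the resonance term.
        return null_model(x, a, b) + s * np.exp(-0.5 * ((x - m0) / w) ** 2)

    def chi2(y, mu):
        return np.sum((y - mu) ** 2 / np.clip(mu, 1e-9, None))

    rng = np.random.default_rng(7)
    dchi2 = []
    for _ in range(200):                               # pseudo-experiments
        y = rng.poisson(null_expect).astype(float)
        try:
            p0, _ = curve_fit(null_model, x, y, p0=[40.0, 0.3], maxfev=20000)
            p1, _ = curve_fit(alt_model, x, y, p0=[40.0, 0.3, 3.0, 2.175, 0.03],
                              maxfev=20000)
        except RuntimeError:
            continue                                   # skip the rare failed fit
        dchi2.append(chi2(y, null_model(x, *p0)) - chi2(y, alt_model(x, *p1)))
    # The significance is set by the fraction of pseudo-experiments whose
    # delta-chi^2 exceeds the value observed in the data.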
We search for this structure in other submodes with different and/or fewer intermediate resonances. The total cross sections are dominated by K * Kπ channels, and the K * 0 K + π − cross section is shown in Fig. 8. There is no significant structure in the 2.1-2.5 GeV region, but the point-to-point statistical uncertainties are large. If we remove events within the bands in Figs. 5 and 18, then most of the events containing a K * are eliminated and we obtain the raw mass distributions shown as the points with errors in Figs. 28 and 29, respectively. Both distributions show evidence of a structure around 2.15 GeV/c 2 and the K + K − π + π − distribution also shows a structure near 2.4 GeV/c 2 . We cannot exclude the presence of these structures in events with a K * , but we can conclude that they do not dominate those events, whereas they comprise a substantial fraction of the remaining events in that mass region.
Applying the further requirement that the dipion mass be in the range 0.85-1.10 GeV/c 2 , we remove most of the events without an f 0 , and obtain the mass distributions shown as the hatched histograms in Figs. 28 and 29. Peaks are visible at both 2.15 GeV/c 2 and 2.4 GeV/c 2 in both distributions, and they contain enough events to account for the corresponding structures in the distributions for all non-K * events. These peaks contain at least as many events as are present in the φf 0 samples, but the non-resonant components are higher and there is a substantial kinematic overlap between K + K − f 0 events and K * Kπ events in this mass range.
Since this f 0 (980) band appears to contain a large fraction of the events within the structure, we now consider all selected events with a dipion mass inside or outside this range. Figure 31 shows the mass distribution for all selected K + K − π 0 π 0 events as the open histogram, and the subsets of events with π 0 π 0 mass inside and outside the range 0.85-1.10 GeV/c 2 as the hatched and cross-hatched histograms, respectively. It is evident that the K + K − f 0 channel contains the majority of the structure in the 2.0-2.6 GeV/c 2 range.
We show the corresponding distributions for the K + K − π + π − final state in Fig. 30. Due to the presence of the ρ 0 , the relative f 0 contribution is much smaller in this final state, but the events in the f 0 band show clear indications of structure in the 2.0-2.4 GeV/c 2 region. The remaining events may also have structure in this region, but the statistical significance is marginal and it could be due to other sources, such as the φf 2 (1270) threshold at 2.3 GeV/c 2 .
FIG. 31: The invariant mass distribution for all selected K + K − π 0 π 0 events lying outside the K * (892) bands of Fig. 18 (points), and the subset of these events with 0.85 < m(π 0 π 0 ) < 1.10 GeV/c 2 (hatched).
Figures 32 and 33 show enlarged views of the mass distributions within the f 0 bands from Figs. 30 and 31, respectively. The two-peak structure is more evident here than in the φf 0 events. The 0.85 < m(ππ) < 1.10 GeV/c 2 requirement gives enough phase space for the K + K − invariant mass to cover the region from threshold to ∼1.3 GeV/c 2 for m(K + K − ππ) ≈ 2.15 GeV/c 2 . From the measured kaon form factor we expect to find only about two-thirds of the K + K − P-wave in our fitted φ peak. Since the non-ISR and ISR ππππ backgrounds have not been subtracted and the samples contain an unknown mixture of intermediate states, we fit them with a modified version of Eq. 5, in which the normalization is in terms of events rather than cross section (σ i → N i ) and a fraction a 4 of the non-resonant component does not interfere with the resonances. We first fit the distributions with no resonances (and a 4 = 1). The results are shown as the dashed lines in Figs. 32 and 33 and listed in Table XII; both are inconsistent with the data.
TABLE XII: Summary of parameters obtained from the fits described in the text to the K + K − π + π − and K + K − π 0 π 0 events with dipion mass in the f 0 (980) band. An asterisk denotes a value that was fixed in that fit.
We next include one resonance in the fit. The parameter a 4 is not well constrained by the data and its value has a small influence on all other fit parameters except for the number of events assigned to the resonance, so we present results with a 4 fixed to the reasonable values of 0.75 and 0.50 for the K + K − π + π − and K + K − π 0 π 0 data, respectively. The results are shown as the solid lines in Figs. 32 and 33 and listed in Table XII. The fit quality is good in both cases, the fitted resonance parameters are consistent with those from the φf 0 study, and the calculated significance of the structure for the K + K − π + π − data is similar, 5.2 standard deviations. The K + K − π 0 π 0 data show much more pronounced structure than in the φf 0 study, allowing a full fit to this sample with a significance of 5.0 standard deviations.
We then add a second resonance to the fit, keeping a 4 fixed and floating all other parameters. The results are shown as the dotted lines in Figs. 32 and 33, and listed in Table XII. These fits are also of good quality, but do not change the χ 2 CL or the parameters of the first resonance significantly. We also perform fits with no interference between the non-resonant component and any resonance (a 4 = 1), obtaining good quality fits for both one resonance and two resonances with relative phase π/2. The fitted resonance parameters are consistent in all cases, except that the mass of the first resonance is lower by about 50 MeV/c 2 , similar to the 30 MeV/c 2 shift seen in the φf 0 study.
From these studies we conclude that we have observed a new vector structure at a mass of about 2150 MeV/c 2 with a significance of over six standard deviations. It decays into K + K − f 0 (980), with the K + K − pair produced predominantly via the φ(1020). There is an additional structure at about 2400 MeV/c 2 , and the two structures can be described by either two resonances or a single resonance that interferes with the non-resonant K + K − f 0 (980) process. More data and searches in other final states are needed to understand the nature of these structures.
If the main structure is due to a resonance, then it is relatively narrow and might be interpreted as the strange analog of the recently observed charmoniumlike Y(4260) state [6], which decays to J/ψπ + π − . The value of B φf0 · Γ ee = (2.5 ± 0.8 ± 0.4) eV measured here is similar to the value of B Y →J/ψπ + π − · Γ Y ee = (5.5 ± 1.0 ± 0.8) eV reported in Ref. [6].
VIII. THE CHARMONIUM REGION
The data at masses above 3 GeV/c 2 can be used to measure or set limits for the branching fractions of narrow resonances, such as charmonia, and the narrow J/ψ and ψ(2S) peaks allow measurements of our mass scale and resolution. Figures 34, 35 and 36 show the invariant mass distributions for the selected K + K − π + π − , K + K − π 0 π 0 and K + K − K + K − events, respectively, in this region, with finer binning than in the corresponding Figs. 2, 15 and 22. We do not subtract any background from the K + K − π + π − or K + K − K + K − data, since it is small and nearly uniformly distributed, but we use the χ 2 KKπ 0 π 0 control region to subtract part of the ISR background from the K + K − π 0 π 0 data. Signals from the J/ψ are visible in all three distributions, and the ψ(2S) is visible in the K + K − π + π − mode.
FIG. 32: The K + K − π + π − invariant mass distribution in the K + K − f0(980) threshold region for events with a π + π − mass inside the f0 band. The lines represent the results of the fits including no (dashed), one (solid) and two (dotted) resonances described in the text.
FIG. 33: The K + K − π 0 π 0 invariant mass distribution in the K + K − f0(980) threshold region for events with a π 0 π 0 mass inside the f0 band. The lines represent the results of the fits including no (dashed), one (solid) and two (dotted) resonances described in the text.
We fit each of these distributions using a sum of two Gaussian functions to describe the J/ψ and ψ(2S) signals plus a polynomial to describe the remainder of the distribution. We take the signal function parameters from the simulation, but let the overall mean and width float in the fit, along with the amplitude and the coefficients of the polynomial. The fits are of good quality and are shown as the curves on Figs. 34, 35 and 36. In all cases, the fitted mean value is within 1 MeV/c 2 of the PDG [5] J/ψ or ψ(2S) mass, and the width is consistent within 10% with the simulated resolution discussed in Sec. IV C, V C or VI C. The fits yield 1586 ± 58 events in the J/ψ peak for the K + K − π + π − final state, 203 ± 16 events for K + K − π 0 π 0 and 156 ± 15 events for K + K − K + K − . From these numbers of observed events in each final state f , N J/ψ→f , we calculate the product of the J/ψ branching fraction to f and the J/ψ electronic width:

    \mathcal{B}_{J/\psi\to f}\,\Gamma^{J/\psi}_{ee} = \frac{N_{J/\psi\to f}\;m_{J/\psi}^2}{6\pi^2\,(d\mathcal{L}/dE)\,\varepsilon_f(m_{J/\psi})\,C},    (7)

where dL/dE = 89.8 nb −1 / MeV and ε f (m J/ψ ) are the ISR luminosity and corrected selection efficiency, respectively, at the J/ψ mass, and C is the conversion constant. We estimate ε K + K − π + π − = 0.202, ε K + K − π 0 π 0 = 0.069 and ε K + K − K + K − = 0.176. Using Γ J/ψ ee = 5.40 ± 0.18 keV [5], we obtain the branching fractions listed in Table XIII, along with the measured products and the current PDG values. The systematic errors include a 3% uncertainty on Γ J/ψ ee . The branching fractions to K + K − π + π − and K + K − K + K − are more precise than the current PDG values, which were dominated by our previous results of (6.25±0.80)×10 −3 and (7.4±1.8)×10 −4 , respectively [11]. This is the first measurement of the K + K − π 0 π 0 branching fraction. These fits also yield 91±15 K + K − π + π − events in the ψ(2S) peak, but no other significant signals. We expect 6.3 events from ψ(2S) → J/ψπ + π − → K + K − π + π − from the relevant branching fractions [5], which is less than the statistical error. Subtracting this contribution and using a calculation analogous to Eq. 7, with dL/dE = 115.3 nb −1 / MeV, we obtain the product of the ψ(2S) → K + K − π + π − branching fraction and its electronic width. Dividing by the world average value of Γ ψ(2S) ee [5], we obtain the branching fraction listed in Table XIII; it is consistent with the current PDG value [5].
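As a numerical cross-check of Eq. 7 for the K + K − π + π − mode, using the values quoted above together with the J/ψ mass and Γ J/ψ ee = 5.40 keV:

    import math

    N = 1586                   # fitted J/psi -> K+K-pi+pi- events
    m = 3.097                  # J/psi mass (GeV/c^2)
    dLdE = 89.8e3              # ISR luminosity: 89.8 nb^-1/MeV = 8.98e4 nb^-1/GeV
    eff = 0.202                # corrected selection efficiency at the J/psi mass
    C = 0.389e6                # conversion constant, 0.389 mb GeV^2 in nb GeV^2
    BGee = N * m**2 / (6 * math.pi**2 * dLdE * eff * C)   # Eq. 7, in GeV
    print(BGee * 1e9)          # ~36 eV
    print(BGee * 1e9 / 5400.0) # divide by Gamma_ee = 5.40 keV -> B ~ 6.7e-3

The result is consistent with the K + K − π + π − branching fraction scale discussed above.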
As noted in Sec. IV D and shown in Fig. 5, the K + K − π + π − final state is dominated by the K * 0 (892)Kπ channels, with a small fraction seen in the K * 0 (892)K * 0 2 (1430) + c.c. channels. Figure 37 shows a scatter plot of the invariant mass of a K ± π ∓ pair versus that of the K + K − π + π − system in events with the other K ∓ π ± pair near the K * 0 (892) mass, i.e. within the bands in Fig. 5(a) with the overlap region taken only once. There is a large concentration of entries in the J/ψ band with K ± π ∓ masses near 1430 MeV/c 2 , but no solid evidence for a horizontal band corresponding to K * 0 2 (1430) production other than in J/ψ decays. We show the K ± π ∓ mass projection for the subset of events with a K + K − π + π − mass within 50 MeV/c 2 of the J/ψ mass in Fig. 38 as the open histogram. The hatched histogram is the projection for events with a K + K − π + π − mass between 50 and 100 MeV/c 2 below the J/ψ mass.
FIG. 37: The K ± π ∓ invariant mass versus K + K − π + π − invariant mass for events with the other K ∓ π ± combination in the K * 0 (892) bands of Fig. 5(a). The overlap region is taken only once.
The J/ψ component appears to be dominated by the K * 0 2 (1430). Also seen is a small signal from the K * 0 (892), indicating the K * 0 (892)K * 0 (892) decay of the J/ψ; this is also seen as an enhancement in the vertical J/ψ band in Fig. 37. The enhancement at 1.8 GeV/c 2 of Fig. 38 can be explained by the J/ψ decay into K * 0 (892)K 2 (1770)+c.c. (or K * 0 (892)K 2 (1820) + c.c.), a mode which has not previously been reported. Subtracting the number of sideband events from the number in the J/ψ mass window, we obtain 317±23 events with a K ± π ∓ mass in the range 1200-1700 MeV/c 2 , which we take as a measure of J/ψ decays into K * 0 (892)K * 0 2 (1430), 25 ± 8 events in the 0.8-1.0 GeV/c 2 window for the K * 0 (892)K * 0 (892) decay and 110 ± 14 events for the K * 0 (892)K 2 (1770) or K * 0 (892)K 2 (1820) final state in the 1.7-2.0 GeV/c 2 region. We convert these to branching fractions using Eq. 7 and dividing by the known branching fractions of excited kaons [5]. The results are listed in Table XIII: they are considerably more precise than the PDG values. We cannot calculate B J/ψ→K * 0 K2(1770) because no branching fractions of the K 2 (1770) or K 2 (1820) to Kπ are reported.
We study decays into φπ + π − and φπ 0 π 0 using the mass distributions shown in Figs. 39 and 40, respectively. The open histograms are for the events with a K + K − mass within the φ bands of Figs. 9(c) and 19(c). The cross-hatched histogram in Fig. 39 is from the φ sidebands of Fig. 9(c) and represents the dominant background in the φπ + π − mode. The hatched histogram in Fig. 40 is from the χ 2 KKπ 0 π 0 control region and represents the dominant background in the φπ 0 π 0 mode. Subtracting these backgrounds, we find 103±12 J/ψ → φπ + π − events, 23±6 J/ψ → φπ 0 π 0 events, and 10±4 ψ(2S) → φπ + π − events. We convert these to branching fractions and list them in Table XIII. This is the first measurement of the J/ψ → φπ 0 π 0 branching fraction, and the other two are consistent with current PDG values.
If the Y (4260) has a substantial branching fraction into φπ + π − , then we would expect to see a signal in Fig. 39. In the mass range |m(φπ + π − ) − m(Y )| < 0.1 GeV/c 2 , we find 10 events, and assuming a uniform distribution we estimate 9.2 background events from the 3.8-5.0 GeV/c 2 region. This corresponds to a signal of 0.8 ± 3.3 events or a limit of < 5 events at the 90% C.L. Using dL/dE = 147.7 nb −1 / MeV at the Y (4260) mass, we calculate B Y →φπ + π − · Γ Y ee < 0.4 eV, which is well below the value of B Y →J/ψπ + π − · Γ Y ee = (5.5 ± 1.0 ± 0.8) eV [6]. No Y (4260) signal is seen in any other mode studied here. An f 0 (980) signal is visible in both the φπ + π − and φπ 0 π 0 modes, but φf 0 is not the dominant mode of the J/ψ → φπ + π − decay. Figure 41(b) shows the π + π − invariant mass distribution for events in the J/ψ peak of Fig. 39, 3.05 < m(K + K − π + π − ) < 3.15 GeV/c 2 . A two-peak structure is visible that can be interpreted as due to the f 0 (980) and f 2 (1270) resonances. Fitting the distribution in Fig. 41(b) with a sum of two Breit-Wigner functions with parameters fixed to PDG values [5], we find 19.5 ± 4.5 J/ψ → φf 0 events and 44 ± 7 J/ψ → φf 2 events. From Fig. 42 we estimate 7.0 ± 2.8 φf 0 events in the π 0 π 0 mode.
FIG. 41: (a) The invariant mass distribution for φπ + π − events (open histogram) and events in the φ side bands (cross-hatched) in the charmonium region; (b) the π + π − invariant mass distribution for φπ + π − events from the J/ψ peak of Fig. 39. The line represents the result of the fit described in the text.
Using Eq. 7 and dividing by the appropriate branching fractions, we obtain the J/ψ branching fractions listed in Table XIII. The measurements of B J/ψ→φf0 in the π + π − and π 0 π 0 decay modes of the f 0 are consistent with each other and with the PDG value, and combined they have roughly the same precision as listed in the PDG [5]. This is the first measurement of B J/ψ→φf2 , and the value is consistent with the previous upper limit [5]. We also observe 6 ± 3 ψ(2S) → φf 0 , f 0 → π + π − events, which we convert to the branching fraction listed in Table XIII; it is consistent with the PDG value [5], assuming B f0→π + π − = 2/3.
FIG. 42: The π 0 π 0 invariant mass distribution for φπ 0 π 0 events (open histogram) and for events from the χ 2 KKπ 0 π 0 control region (hatched) in the charmonium region.
| 20,122.2 | 2007-04-04T00:00:00.000 | [ "Physics" ] |
Silicon oxycarbide glass-graphene composite paper electrode for long-cycle lithium-ion batteries
Silicon and graphene are promising anode materials for lithium-ion batteries because of their high theoretical capacity; however, low volumetric energy density, poor efficiency and instability in high-loading electrodes limit their practical application. Here we report a large-area (approximately 15 cm × 2.5 cm) self-standing anode material consisting of molecular precursor-derived silicon oxycarbide glass particles embedded in a chemically-modified reduced graphene oxide matrix. The porous reduced graphene oxide matrix serves as an effective electron conductor and current collector with a stable mechanical structure, and the amorphous silicon oxycarbide particles cycle lithium-ions with high Coulombic efficiency. The paper electrode (mass loading of 2 mg cm−2) delivers a charge capacity of ∼588 mAh g−1 electrode (∼393 mAh cm−3 electrode) at the 1,020th cycle and shows no evidence of mechanical failure. Elimination of inactive ingredients such as the metal current collector and polymeric binder reduces the total electrode weight and may provide the means to produce efficient lightweight batteries.
Concentrated efforts are currently employed to discover a practical replacement for traditional Li-ion battery electrodes, that is, the graphite anode and LiCoO 2 cathode, with materials that continuously deliver high power and energy densities at high cycling efficiencies without damage [1][2][3][4][5]. Alloying reaction electrodes such as silicon, which can deliver as much as 5-10 times higher discharge capacity than traditional graphite, are at the forefront of this research. High capacity electrodes, however, are prone to enormous volume changes (∼300%) that generally lead to structural collapse and capacity fading during successive lithiation/delithiation [6][7][8][9][10][11][12]. Recent work has shown that decreasing particle size or electrode nanostructuring allows the electrode to withstand the high volumetric strains associated with repeated Li alloying and dealloying. Pomegranate-inspired carbon-coated Si nanoparticles, yolk-shell-structured SiC nanocomposites and Si/C core/shell composites (prepared at low mass loading) have proven to survive several hundred cycles without damage [9][10][11][12][13]. Yet, electrode nanostructuring has led to new fundamental challenges such as low volumetric capacity (low tap density), increased electrical resistance between the nanoparticles, increased manufacturing costs and lower Coulombic efficiency due to side reactions with the electrolyte. These challenges have not been fully addressed. What's more, a particle-based electrode's long-term cyclability hinges on the inter-particle electrical connection and particle adhesion to the metallic substrate, which degrade rapidly with increasing charge/discharge cycles, particularly for thick high-loading electrodes 9.
In this context, graphene-based multicomponent composite anodes are an attractive alternative to traditional (binder and carbon-black) designs, chiefly because of graphene's superior electronic conductivity, mechanical strength and ability to be interfaced with Li-active redox components, such as particles of Si, Ge and transition-metal sulfides/oxides, resulting in electrodes that are intrinsically conducting and promote faster ion diffusion 14-38. Additional advantages include weight savings of up to 10% of the total battery weight 7 if the electrode is prepared in freestanding form, improved corrosion resistance (elimination of the metal foil), and enhanced flexibility, particularly for bendable, implantable and roll-up electronics.
The continued search for better anodes has brought attention to unique, rarely studied molecular precursor-derived Si-based glass-ceramic materials (such as silicon oxycarbide, SiOC, and silicon carbonitride, SiCN) [40][41][42][43][44][45][46][47][48][49][50]. SiOC is a high-temperature glass-ceramic with an open, polymer-like network structure consisting of two interpenetrating amorphous phases: SiOC (Si bonded to O and C) and disordered carbon 42. Its low weight density (∼2.1 g cm⁻³) and open structure enable high charge and discharge rates with a gravimetric capacity more than twice that of a commercial graphite electrode. More importantly, a major portion of the electrochemical capacity in SiOC is due to reversible Li adsorption in the disordered carbon phase rather than the conventional alloying reaction with Si, resulting in relatively lower volume changes 43,44. Regrettably, the glass-ceramics that show high lithiation capacity are poor conductors of electronic/ionic current, and consequently electrode preparation involves incorporation of conducting agents and binders to hold the particles on a metal current collector, a method known as screen printing [45][46][47]. Such foil-based electrodes carry the dead weight of conducting agents, polymeric binders and the metal foil, which do not contribute towards the battery capacity.
As an attractive alternative to screen-printed electrodes, we present our results on the fabrication of a well-organized, interleaved, freestanding, large-area composite anode consisting of SiOC particles supported by a crumpled reduced graphene oxide matrix. The electrode delivers higher volumetric capacity than the recently reported pomegranate Si/carbon nanotube (310 mAh cm⁻³) paper electrode 9. Large, micrometer-size reduced graphene oxide (rGO) sheets serve as the host material for the SiOC particles, providing the necessary electronic path and consistent cycling performance at high current densities along with high structural stability. Because of their unique nanodomain amorphous structure, SiOC particles offer the required chemical and thermodynamic stability and high Li intercalation capacity for the electrode. As a result, the electrode (at least 2 mg cm⁻² weight loading) has a first-cycle charge capacity of 702 mAh g⁻¹ electrode (total weight of electrode considered) and ∼470 mAh cm⁻³ electrode (total volume of electrode considered) at 100 mA g⁻¹ electrode, and a stable charge capacity of 543 mAh g⁻¹ electrode (∼363 mAh cm⁻³ electrode) at a charge current density of 2,400 mA g⁻¹ electrode. The capacity is ∼200 mAh g⁻¹ electrode when cycled at ∼−15 °C. Further, the composite electrode has an exceptionally high strain-to-failure (exceeding 2%) as measured in a uniaxial tensile test, and its mode of failure differs significantly from that of pristine rGO papers.
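As a sanity check on these headline numbers, the sketch below relates the gravimetric and volumetric capacities via the electrode envelope density; it assumes the ∼30 μm thickness reported later for these papers, so the density is the implied value, not a separately measured one:

```python
# Sketch: relate gravimetric (mAh g^-1) and volumetric (mAh cm^-3) capacity
# through the electrode envelope density. Mass loading is from the text;
# thickness is the upper end of the ~20-30 um range reported for the papers.
mass_loading_g_cm2 = 2e-3      # 2 mg cm^-2 electrode mass loading
thickness_cm = 30e-4           # ~30 um electrode thickness (assumed)
density_g_cm3 = mass_loading_g_cm2 / thickness_cm  # ~0.67 g cm^-3

q_grav_mAh_g = 702.0           # first-cycle charge capacity, total-electrode basis
q_vol_mAh_cm3 = q_grav_mAh_g * density_g_cm3
print(f"effective density ~{density_g_cm3:.2f} g cm^-3, "
      f"volumetric capacity ~{q_vol_mAh_cm3:.0f} mAh cm^-3")  # ~470, as quoted
```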
Results
Material synthesis and electrode fabrication. Polymer-derived SiOC ceramic particles were prepared by controlled thermolysis of the 1,3,5,7-tetramethyl-1,3,5,7-tetravinylcyclotetrasiloxane (TTCS) polymeric precursor, while graphene oxide (GO) was prepared by the modified Hummers' method 51 (for details, see Methods section). The polymer-to-ceramic transformation was complete at 1,000 °C 41. Detailed characterization of the cross-linked polymer and resulting SiOC material is presented in Fig. 1a-g. SEM images of SiOC particles in Fig. 1a confirmed the average particle size to be ∼4 μm (with s.d. = 1.8 μm). X-ray photoelectron spectroscopy (XPS) showed O 1s, C 1s, Si 2s, Si 2p and O 2s peaks for both the cross-linked polymer and the pyrolyzed SiOC ceramic (Fig. 1b). Close analysis of the deconvoluted silicon band (for Si 2p photoelectrons) in SiOC revealed the emergence of peaks at 103.5 and 102.2 eV, corresponding to SiO₄ and CSiO₃ phases, respectively (Fig. 1c). In addition, peaks at 534.5, 533.1 and 532.4 eV corresponding to C=O, SiO₂ and Si-O phases, respectively, were observed in the O 1s band (Fig. 1d), whereas the C 1s band (Fig. 1e) was fitted with three peaks at 286.5, 284.5 and 284.7 eV corresponding to C=O, C-C and C-Si phases, respectively. The surface carbon content from XPS was measured to be C = 62.55%. In the Raman spectrum of SiOC shown in Fig. 1f, five peaks could be fitted into the spectrum: D1 or D-band (∼1,330 cm⁻¹), D2 (∼1,615 cm⁻¹), D3 (∼1,500 cm⁻¹), D4 (∼1,220 cm⁻¹) and the G-band (∼1,590 cm⁻¹) 52. D1, D2 and D4 originate from the disordered graphitic lattice (graphene layer edges, surface layers, polyenes and so on), while D3 is associated with amorphous carbon soot. The G-band corresponds to the ideal graphitic lattice. In addition, two bumps centered at ∼2,640 (2*D overtone) and ∼2,915 cm⁻¹ (D+G combination) were also observed (Supplementary Fig. 3). Similarly, Fourier transform infrared spectroscopy (FTIR) analysis also confirmed the transformation of the TTCS polymer to ceramic SiOC (Fig. 1g) 41. Based on the spectroscopic evidence, the predicted chemical structure of the cross-linked polymer and resultant ceramic is presented in Supplementary Fig. 4, in agreement with previous work on polymer-derived SiOC 42. The composite papers were prepared following a vacuum filtration technique (see Methods section for details and schematic in Supplementary Fig. 5). Samples were labeled rGO, 10SiOC, 40SiOC, 60SiOC and 80SiOC for rGO paper and GO with 10, 40, 60 and 80 wt% of SiOC in the paper, respectively. The digital camera image and schematic in Fig. 1h highlight the flexibility and structure of the composite paper, respectively.
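For readers wanting to reproduce the five-band deconvolution described above, a sketch follows. It assumes Lorentzian line shapes (a common choice for D/G fitting, not stated in the text); the data file `spectrum.csv` and the initial guesses are hypothetical placeholders:

```python
# Sketch: five-band fit (D1-D4 + G) to a first-order carbon Raman spectrum,
# assuming Lorentzian line shapes. Band centres are the approximate values
# quoted in the text; 'spectrum.csv' is a hypothetical two-column file
# (Raman shift in cm^-1, counts).
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(x, amp, cen, wid):
    return amp * wid**2 / ((x - cen)**2 + wid**2)

def five_bands(x, *p):
    # p packs (amplitude, centre, width) for each of the five bands
    return sum(lorentzian(x, *p[3*i:3*i+3]) for i in range(5))

shift, counts = np.loadtxt("spectrum.csv", delimiter=",", unpack=True)
p0 = []
for cen in (1330, 1615, 1500, 1220, 1590):   # D1, D2, D3, D4, G from the text
    p0 += [counts.max() / 2, cen, 40.0]
popt, _ = curve_fit(five_bands, shift, counts, p0=p0)
for i, name in enumerate(("D1", "D2", "D3", "D4", "G")):
    amp, cen, wid = popt[3*i:3*i+3]
    print(f"{name}: centre {cen:.0f} cm^-1, FWHM {2*wid:.0f} cm^-1")
```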
Morphology of the composite and thermally reduced (annealed) freestanding papers was studied by electron and focused ion beam (FIB) microscopy. The transmission electron microscope (TEM) image (Fig. 1i) showed large, micrometer-sized thin GO sheets along with randomly shaped, glass-like SiOC particles (also see Supplementary Fig. 6a-e). Large SiOC particles were seen to be covered with smaller, nanometer-sized particles. The graphene platelets seem to occasionally fold and cover individual SiOC particles, while in other instances GO is interlayered by SiOC. EDX elemental mapping performed in scanning-TEM mode (Supplementary Figs. 6a-e) confirmed the uniform distribution of Si, O and C in the particles, with a higher concentration of C observed near the edges, possibly due to graphene platelets. For the selected area electron diffraction pattern in Fig. 1j, the multiple spot pattern is a result of the polycrystallinity of restacked GO sheets, and the faint ring pattern is attributed to the amorphous SiOC material. The SEM images of the freestanding papers showed a sheet-like structure with a relatively smooth top surface for rGO paper [53][54][55][56], which became increasingly rough and porous with higher loading of SiOC particles in the composite (Supplementary Fig. 7a-d). Cross-sectional SEM of the fractured samples revealed ordered stacks of rGO with SiOC particles interlayered between the sheets (Supplementary Fig. 7e-h). Several micrometer-sized particles could be seen for the 60SiOC specimen, along with clumped nanometer-sized particles. Nonetheless, mechanically fractured composite papers were largely uneven and showed signs of damage to the interface. To obtain a smooth and defect-free cross-section, the 60SiOC paper was sectioned by means of FIB milling (see Methods section and Supplementary Fig. 8a for details regarding specimen preparation). The uniform distribution of SiOC particles and their wrapping by large-area graphene platelets could be clearly observed in the electron-beam (Supplementary Fig. 8b) and ion-beam images (Supplementary Fig. 8c). Elemental mapping by means of EDX (Fig. 1k and Supplementary Fig. 8d-f) further established the interlayered morphology of the composite. Depending upon the SiOC content, the average thickness of the papers varied between ∼20 and 30 μm.
The reduction of GO (non-conducting) to rGO (conducting) was confirmed by X-ray diffraction (XRD). As shown in Fig. 1l, both GO and unannealed composite papers had peaks at 11.05° and 9.8°, corresponding to interlayer spacings of 8 and 12 Å, respectively. The interlayer spacing was large compared with that of graphite (with the major (002) peak at 26.53°, corresponding to 3.36 Å) because of the oxygen functional groups present in GO and water molecules held between the layers. After thermal annealing at 500 °C for 2 h, the paper showed a broad peak at 2θ = 26°, typical of reduced GO material 55,56. The broad peak suggests inhomogeneous spacing between the layers. XRD spectra of cross-linked TTCS and SiOC particles were both featureless, confirming the amorphous nature of these ceramics (a hallmark of these materials). The Raman intensity ratio (I_D/I_G) before and after thermal reduction showed a slight change, in accordance with previous reports (Supplementary Fig. 9) 39. Reduction of GO to rGO was further verified by the disappearance of oxide peaks in the high-resolution XPS analysis of the C 1s peak (Supplementary Fig. 10).
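The interlayer spacings quoted here follow from Bragg's law (first order, n = 1) with the Cu Kα wavelength given in the Methods section; a minimal check:

```python
# Sketch: interlayer spacing from XRD peak position via Bragg's law,
# d = lambda / (2 sin(theta)), with Cu K-alpha (1.5418 A) as used here.
import math

WAVELENGTH_A = 1.5418  # Cu K-alpha, from the Methods section

def d_spacing(two_theta_deg):
    return WAVELENGTH_A / (2 * math.sin(math.radians(two_theta_deg / 2)))

for two_theta in (26.53, 11.05, 26.0):
    print(f"2theta = {two_theta:5.2f} deg -> d = {d_spacing(two_theta):.2f} A")
# 26.53 deg reproduces the 3.36 A graphite spacing quoted in the text;
# 11.05 deg gives ~8 A for GO, and the broad ~26 deg rGO peak gives ~3.4 A.
```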
Thermogravimetric analysis (TGA) was performed to ascertain the mass loading of SiOC in the composite papers. Figure 1m shows the percentage composition of the filtered composite papers prior to thermal reduction. Significant weight loss was observed in the 50-100 °C and 100-400 °C temperature ranges, attributed to evaporation of water molecules trapped in the GO and to loss of oxygen functionalities, respectively [57][58][59]. The weight loss was highest for GO and lowest for 80SiOC (see Supplementary Table 2). The final weight loss in the 400-800 °C range is due to burning of the carbon material. By comparison, the initial weight loss was not observed in thermally reduced samples (a mere 1.2% for rGO at 400 °C, Supplementary Fig. 11), which suggests a high degree of removal of water and oxygen groups by thermal annealing. Approximately 3% and 6-10% residue was noted for GO and rGO material at ∼800 °C. As a result, the SiOC content (or percentage weight remaining) in the thermally reduced composites was higher than in the unannealed specimens; the SiOC content in 10SiOC, 40SiOC, 60SiOC and 80SiOC increased from ∼10 to ∼30%, ∼50 to ∼65%, ∼65 to ∼78% and ∼83 to ∼92%, respectively. In the traditional method of electrode preparation, the active material (including recently reported graphene-embedded PDC material) is mixed with polymeric binder and conductive agent in an ∼80:10:10 ratio, followed by slurry coating on a metal current-collector foil 47. However, using the present method we have made a freestanding and lightweight electrode containing up to ∼78% SiOC as active material and ∼22% rGO (acting as binder and conductive agent). Paper electrodes were directly utilized as the working electrodes. Electrochemical performance is presented in the following section.
Electrochemical performance. Figure 2a shows the charge capacities and Coulombic efficiencies of rGO, 10SiOC, 40SiOC and 60SiOC electrodes asymmetrically cycled at varying charge current densities. For rGO, the first-cycle charge capacity at 100 mA g⁻¹ electrode was ∼210 mAh g⁻¹ electrode; it dropped to ∼200 mAh g⁻¹ electrode in the second cycle, and then stabilized at ∼180 mAh g⁻¹ electrode after five cycles. When the charge current density increased to 2,400 mA g⁻¹ electrode, the charge capacity was retained at ∼175 mAh g⁻¹ electrode. Returning the current density to 100 mA g⁻¹ electrode led to the return of a higher capacity of 192 mAh g⁻¹ electrode. The high irreversible first-cycle capacity results from electrochemical reactions contributing to solid-electrolyte interphase (SEI) layer formation. For the composite electrodes, the first-cycle charge capacity increased in correspondence with the percentage of SiOC in the electrode. For example, 10SiOC showed 376 mAh g⁻¹ electrode, while 40SiOC and 60SiOC showed 546 mAh g⁻¹ electrode and 702 mAh g⁻¹ electrode (volumetric capacity of ∼470 mAh cm⁻³ electrode), respectively. The 60SiOC capacity was lower than the capacity calculated by a 'rule of mixture' approach (∼793 mAh g⁻¹), with constituent rGO (first-cycle reversible capacity ∼210 mAh g⁻¹) at ∼22 wt% as the lower bound and SiOC (highest first-cycle reversible capacity ∼958 mAh g⁻¹, from ref. 46) at ∼78 wt% as the upper bound. Similar to the rGO electrode, when the charge current density increased to 2,400 mA g⁻¹ electrode, composites 10SiOC, 40SiOC and 60SiOC showed high reversible capacities of 296, 417 and 543 mAh g⁻¹ electrode, respectively. The capacity retention at 2,400 mA g⁻¹ electrode of 83.5% (compared with cycle number 5 at 100 mA g⁻¹ electrode) and first-cycle efficiency of 68% for 60SiOC are among the highest reported performances for a freestanding graphene-based electrode (see Supplementary Tables 3 and 4 for summary and comparison, respectively) [14][15][16][17][18][19]23,25,32,38. When the charge current density was lowered again to 100 mA g⁻¹ electrode at cycle number 31, the charge capacity increased to stable values of 304 mAh g⁻¹ electrode (∼80% retained), 471 mAh g⁻¹ electrode (∼96% retained) and 626 mAh g⁻¹ electrode (∼97% retained) for 10SiOC, 40SiOC and 60SiOC, respectively.
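The 'rule of mixture' figure quoted above can be reproduced with a two-line check; the inputs are exactly the values stated in the text, on a total-electrode-weight basis:

```python
# Sketch: the 'rule of mixture' upper-bound estimate for the 60SiOC electrode,
# combining constituent first-cycle capacities by weight fraction.
w_rgo, q_rgo = 0.22, 210.0     # rGO weight fraction and capacity (mAh g^-1)
w_sioc, q_sioc = 0.78, 958.0   # SiOC weight fraction and capacity from ref. 46

q_mix = w_rgo * q_rgo + w_sioc * q_sioc
print(f"rule-of-mixture estimate: {q_mix:.0f} mAh g^-1")  # ~793, vs 702 measured
```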
In order to test the cyclic stability of the electrodes, the same cells were subjected to symmetric cycling at a current density of 1,600 mA g⁻¹ electrode. The charge capacity for this test is shown in Fig. 2b. The charge capacity of 60SiOC showed some decline as the cells were subjected to prolonged symmetric cycling at 1,600 mA g⁻¹ electrode. The capacity decay over the 970-cycle range was approximately 0.075 mAh g⁻¹ electrode per cycle. This decline was not observed in the rGO specimen, thereby demonstrating the importance of graphene in the composite material. Nonetheless, the average composite paper capacity in this range was approximately three times higher than that of the pristine rGO electrode (∼170 versus ∼58 mAh g⁻¹ electrode). Most significantly, the cell capacities were ∼185 (rGO) and 568 mAh g⁻¹ electrode (60SiOC) at the 1,010th cycle when the current density was brought back to 100 mA g⁻¹ electrode, and stabilized at 186 and 588 mAh g⁻¹ electrode, respectively, at the 1,020th cycle before the tests were stopped for post-cycling analysis. This represents ∼94% capacity retention for 60SiOC when compared with the capacity value at the 40th cycle, prior to the beginning of the long-term cycling test (see Supplementary Table 3). No measurable change in the cycling efficiency of 60SiOC (∼99.6%) was observed during this period. This shows that, even after 1,020 cycles, the composite electrode was robust and continued to function without appreciable degradation. Supplementary Fig. 12a shows voltage profiles of rGO for the 1st, 2nd and 1,010th cycles. Differential capacity profiles in Supplementary Fig. 12b were similar to previous reports on rGO electrodes, with a primary reduction peak at ∼50 mV, a secondary reduction peak at ∼(520-560) mV, and an oxidation peak at ∼(120-130) mV 39. The peak at ∼50 mV, present in all subsequent cycles, is associated with lithiation of graphitic carbon, whereas the peak at ∼560 mV signifies formation of SEI, which exists only in the first cycle. Supplementary Fig. 12c,d show the voltage profile and differential capacity curves of the 1st and 2nd cycles of 10SiOC, respectively. The first cycle contained three reduction peaks at around ∼50, ∼240 and ∼520 mV, attributed to rGO lithiation, irreversible LixSiOC formation and SEI formation, respectively 39,41,45. In contrast, only one subtle extraction peak at ∼110 mV is observed, representing rGO de-lithiation, with an extended bulge at ∼500 mV that represents LixSiOC de-lithiation 38,[45][46][47]. As the SiOC content increased to 40% (Supplementary Fig. 12e,f) and 60% (see Fig. 2c,d), the domination of SiOC lithiation increased, as evidenced by the increased intensity of the irreversible LixSiOC formation peak at ∼(270-300) mV. The peak intensity of rGO de-lithiation at ∼120 mV diminished with respect to the LixSiOC de-lithiation bulge at ∼500 mV. In addition, the 2nd and 1,010th cycle charge/discharge and differential capacity curves of the electrodes had similar profiles, showing that no new phases formed even after more than 1,000 cycles. More importantly, the efficiency of 60SiOC remained high throughout the cycling test.

(Figure 2 caption, parts b-f: (b) Extended cycling behavior of rGO and 60SiOC electrodes cycled symmetrically at 1,600 mA g⁻¹ electrode. After 970 cycles, the electrodes showed good recovery when the current density was lowered back to 100 mA g⁻¹ electrode. Insets show post-cycling digital and SEM images of the disassembled rGO and 60SiOC electrodes; scale bar, 10 μm. (c) Voltage profile of the 60SiOC electrode and (d) corresponding differential capacity curves for the 1st, 2nd and 1,010th cycles. (e) Cycling behavior of 60SiOC at sub-zero temperature: after cooling to ∼−15 °C, the cell demonstrated a stable charge capacity of ∼200 mAh g⁻¹ electrode at 100 mA g⁻¹ electrode, and regained ∼86% of its initial capacity when returned to cycling at room temperature (∼25 °C). (f) Schematic of the mechanism of lithiation/delithiation in SiOC particles: the majority of lithiation occurs via adsorption at the disordered carbon phase, which is uniformly distributed in the amorphous SiOC matrix; large rGO sheets serve as an efficient electron conductor and elastic support.)
Additional rate-capability tests involving extreme symmetric cycling were performed on a freshly prepared 60SiOC paper electrode with an even higher mass loading (approximately 3 mg cm⁻²). The data are presented in Supplementary Fig. 13. A stable capacity of ∼700 mAh g⁻¹ electrode was observed at 100 mA g⁻¹ electrode, which decreased to ∼100 mAh g⁻¹ electrode at 2,400 mA g⁻¹ electrode and showed complete recovery when the current density was brought back to 100 mA g⁻¹ electrode. Such stable performance is rarely reported for precursor-derived ceramic materials, even for traditionally prepared electrodes on copper foil where the current density and capacity are reported with respect to the active material only [46][47][48]. Tests were also conducted on the 80SiOC specimen to ascertain whether the charge capacity of the freestanding paper-based electrodes could be improved even further through higher SiOC content. These attempts, however, were not successful because electrodes prepared at 80% SiOC loading were brittle and showed erratic behavior after only a few initial cycles. The first-cycle charge capacity for 80SiOC was ∼762 mAh g⁻¹ electrode and showed domination of LixSiOC lithiation (∼330 mV) and de-lithiation (∼500 mV) over the rGO peaks, similar to the other composite electrodes (Supplementary Fig. 14a,b). The 80SiOC electrode began to show random spikes in charge capacity and efficiency with increasing cycle number at high C-rate, possibly due to mechanical disintegration and loss of electrical contact resulting from insufficient rGO loading (Supplementary Fig. 15a). Cracks could be observed in the post-cycling SEM images (see Supplementary Fig. 15b-e).
Four-point electrical conductivity measurements were performed and compared for all specimens (for details, see Supplementary Note 1 and Supplementary Fig. 16). The data are summarized in Supplementary Table 5. Although the average four-point resistance of 60SiOC (580 Ω) was higher than that of rGO paper (40 Ω), it still represents an important achievement: TTCS-derived SiOC (under the present pyrolysis conditions and for the given composition) is a poor electrical conductor, and the improved conductivity of the composite paper (5 × 10⁻² S cm⁻¹ versus ∼10⁻¹² S cm⁻¹ for SiOC powder 41) is key to its better C-rate characteristics. This is more evident when we compare the C-rate data for a SiOC particle electrode prepared on a traditional copper current collector 46, where the electrochemical capacity was observed to be near zero at a cycling current density of 1,600 mA g⁻¹.
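Converting a four-point resistance into a conductivity requires the probe and sample geometry, which this excerpt does not give; the sketch below therefore uses hypothetical dimensions chosen only to illustrate the conversion (they happen to land near the quoted 5 × 10⁻² S cm⁻¹, but should not be read as the authors' geometry):

```python
# Sketch: converting a measured four-point resistance to conductivity for a
# bar-shaped paper sample, sigma = L / (R * w * t). The dimensions below are
# hypothetical placeholders; only R comes from the text.
R_ohm = 580.0          # four-point resistance of 60SiOC paper (from the text)
L_cm = 0.1             # assumed inner-probe spacing
w_cm = 1.0             # assumed sample width
t_cm = 30e-4           # ~30 um paper thickness

sigma = L_cm / (R_ohm * w_cm * t_cm)
print(f"sigma ~ {sigma:.1e} S cm^-1")  # ~5.7e-02 with these placeholder dims
```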
In addition to room-temperature testing, the best-performing specimen (that is, 60SiOC) was subjected to electrochemical cycling at a sub-zero temperature of ∼−15 °C (for details, see Supplementary Note 2). When initially cycled at room temperature (Fig. 2e), the cell had a stable charge capacity of ∼600 mAh g⁻¹ electrode, which reduced to a stable charge capacity of ∼200 mAh g⁻¹ electrode when cycled at low temperature. The cell regained ∼86% of its initial capacity when returned to cycling at room temperature.
In order to verify electrode integrity, the cells were disassembled in their lithiated state and the electrodes recovered for additional characterization. The inset in Fig. 2b and Supplementary Fig. 17 show digital photographs and SEM images of the cycled electrodes. Post-cycling Raman spectroscopy data are presented in Supplementary Fig. 18 and Supplementary Table 6. No evidence of surface cracks, volume change or physical imperfections was observed in the SEM images, suggesting high mechanical/structural strength of the composite paper towards continuous Li cycling, which can be attributed to the unique structure of the electrode shown in Fig. 2f. In all cases, evidence of SEI formation due to repeated cycling of Li-ions was observed. Contamination in the specimens, indicated by arrows, was residue from the glass separator fibers. The electrodes were briefly exposed to air during the transfer process, resulting in oxidation of Li, which appeared as bright spots in the images owing to its non-conducting nature.
To illustrate the kinetics of charge/discharge of the composite paper, galvanostatic intermittent titration (GITT) cycling was performed on the 60SiOC electrode at room and low temperature (for details, see Supplementary Note 3). The acquired D_Li+ varied between ∼10⁻¹⁴ and ∼10⁻¹⁵ m² s⁻¹ during insertion and extraction (Supplementary Fig. 19). These values are comparable with those reported for polymer-derived SiOC (Kasper et al., 10⁻¹³ to 10⁻¹⁵ m² s⁻¹) 44. In addition, the total polarization potential and the time-dependent change in open-circuit voltage (OCV) at various states of charge were inferred from these experiments, as shown in Supplementary Fig. 20a-d. The reaction resistance to Li insertion and extraction for the 60SiOC electrode was calculated by taking the ratio of OCV to current density (Supplementary Fig. 20e,f). The reaction resistance was fairly constant at 2 Ω g during room-temperature insertion. However, it increased exponentially to 8 Ω g during Li extraction in the 1.5-2.0 V range, which highlights the difficulty of extracting the very last Li atoms from the amorphous SiOC structure (Fig. 2f). Density-of-states calculations (Supplementary Fig. 21) show that Li is stored at several energy levels in the amorphous SiOC structure, with the majority of insertion occurring in the 0-0.5 V range. Further, a voltage hysteresis of ∼0.5 V exists during the extraction half, which could be attributed to the hydrogen (H-terminated edges of the free carbon phase) generally present in SiOC derived from thermal decomposition of organosilicon polymers. The H content in the pyrolyzed ceramic particles was measured to be ∼0.25-0.3 wt% (for details, see Methods section, Supplementary Fig. 2 and Supplementary Table 1). Galvanostatic intermittent titration performed at low temperature (∼−15 °C) showed D_Li+ values in the ∼10⁻¹⁵ to 10⁻¹³ m² s⁻¹ range during Li-ion insertion and extraction (Supplementary Fig. 22). The total polarization potential and time-dependent change in OCV at various states of charge measured at ∼−15 °C, together with the corresponding reaction-resistance plots, are included in Supplementary Fig. 23.
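The excerpt does not reproduce the working equation behind these D_Li+ values; a commonly used form is the Weppner-Huggins GITT relation, quoted here as background (the authors' exact variant may differ):

```latex
% Weppner-Huggins GITT estimate, valid for pulse times t << L^2/D:
D_{\mathrm{Li}^{+}} \simeq \frac{4}{\pi\,\tau}
\left(\frac{m_{B} V_{M}}{M_{B} S}\right)^{2}
\left(\frac{\Delta E_{s}}{\Delta E_{\tau}}\right)^{2}
```

Here τ is the current-pulse duration; m_B, M_B and V_M are the mass, molar mass and molar volume of the active material; S is the electrode-electrolyte contact area; ΔE_s is the steady-state voltage change per titration step; and ΔE_τ is the transient voltage change during the pulse.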
Mechanical strength of the electrode. Static uniaxial tensile tests were conducted to quantify the strength and strain-to-failure of the freestanding composite papers using a custom-built setup. Figure 3a shows a schematic of the test setup, in which the load cell is attached to a digital meter connected to a transducer electronic data sheet, which transfers the data to a host computer through an RS232 serial port using a program written in MATLAB. Engineering stress-strain plots and tensile moduli derived from the load-displacement curves of the various paper electrodes are compared in Fig. 3b,c, respectively. The rGO sample showed an average tensile strength of ∼10.7 MPa at a failure strain of 2.8%, while the 60SiOC sample had a tensile strength of ∼2.7 MPa at a strain of 1.1%. The low tensile strength of the 60SiOC specimen was expected considering that it contained only ∼20% rGO. Overall, the strength and modulus of these crumpled composite papers were lower than those of GO and rGO papers prepared by techniques other than high-temperature reduction 53,54. However, the strain-to-failure was almost 5 to 10 times higher than that of a typical GO, rGO or rGO-composite paper, suggesting that crumpled composite papers may be able to sustain larger volume changes. SEM surface analysis of rGO (Fig. 3d) showed the occurrence of micro-features after the tensile test, which we suggest are due to rearrangement of rGO sheets under tensile load. These micro-features are assumed to arise from curling of individual graphene sheets on the top surface as they lose contact with the sheets below them. However, for 60SiOC (Fig. 3e), ceramic particles acted as fracture points and caused rGO sheets to separate without stretching, as evidenced by SEM images showing no distinguishable changes before and after the tensile test. Supplementary Fig. 24a-h shows top and cross-sectional SEM images of the fractured surfaces. The rGO, because of its higher elasticity, had an irregular crumpled appearance, whereas the composite papers were more brittle and had a sharper cross-section. The mode of fracture in rGO and 60SiOC papers differed significantly, as presented in Supplementary Movies 1 and 2. A loud, distinct sound indicated almost instantaneous fracture of the rGO specimen, accompanied by curling of both ends of the fractured paper. Fracture of the 60SiOC specimen was similar to that of a thin plate with an edge crack, and the crack propagation could be clearly observed. In addition, stress lines could be observed only in the rGO specimen, radiating from one clamp to the other and indicating distribution of stress throughout the length of the specimen. These observations are explained with the help of the schematics in Fig. 3d,e. Ex situ Raman analysis (Supplementary Fig. 25) of the top surface of the specimens before and after testing showed an increase in the average intensity ratio of the I_D and I_G peaks for rGO (0.88 versus 1.02), while the ratio was largely unaffected for the composite specimen.
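A sketch of the data reduction implied here, from load-displacement to engineering stress-strain and a tensile modulus; the specimen width and gauge length follow the strip dimensions in the Methods, while the thickness value and the data arrays are placeholders:

```python
# Sketch: engineering stress/strain and modulus from a load-displacement record.
# Strip dimensions ~5 x 15 mm follow the Methods; thickness is assumed (~25 um),
# and 'load_N'/'disp_mm' are placeholder arrays standing in for the logged data.
import numpy as np

width_mm, gauge_mm, thick_mm = 5.0, 15.0, 0.025   # thickness assumed
area_mm2 = width_mm * thick_mm

load_N = np.array([0.0, 0.05, 0.10, 0.14])        # placeholder data
disp_mm = np.array([0.0, 0.05, 0.10, 0.15])

stress_MPa = load_N / area_mm2                     # N mm^-2 == MPa
strain = disp_mm / gauge_mm
# modulus: slope of a linear fit over the initial (elastic) region
E_MPa = np.polyfit(strain[:3], stress_MPa[:3], 1)[0]
print(f"tensile modulus ~ {E_MPa:.0f} MPa")
```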
Discussion
Electrochemical characterization shows that 60SiOC is the best long-term cycling electrode, with reversible capacities of ∼702 mAh g⁻¹ electrode at the 1st cycle and ∼588 mAh g⁻¹ electrode at the 1,020th cycle. Although 80SiOC offers the highest first reversible capacity of ∼762 mAh g⁻¹ electrode, it undergoes capacity fading and mechanical damage after a few initial cycles at high currents. Hence, the capacity and cycling stability are governed by the relative amounts of SiOC and graphene in the composite, respectively. We ascribe the superior electrochemical performance of the 60SiOC electrode to the remarkable physical and chemical properties of its constituents and the unique morphological features of the paper. Because graphene sheets in 60SiOC occupy the larger volume in the composite, the well-dispersed GO sheets arrange themselves around the SiOC particles during the layer-by-layer filtration process to form a flexible composite paper. TEM (Supplementary Fig. 6), SEM (Supplementary Fig. 7) and FIB (Supplementary Fig. 8) characterization shows that the morphology of the composite paper is planar and porous. The porous design therefore allowed the liquid electrolyte to reach the very interior of the electrode, providing an easy path for solvated ions to be transported onto the surface of the SiOC particles. Further, rGO, owing to its high electrical conductivity and mechanical flexibility, provided an electrically conducting (see Supplementary Table 5) and mechanically robust (see Fig. 3b) matrix for the Li-active SiOC particles, thereby buffering volume changes in the electrode and maintaining inter-particle connection during long-term cycling. Microscopy (Fig. 2b, Supplementary Fig. 17) and Raman spectroscopy (Supplementary Fig. 18) of the disassembled cell reveal the formation of a stable SEI on a fully intact electrode, which could explain the high cycling efficiency observed in these composites.
We attribute the high reversible capacity of molecular precursor-derived SiOC to its amorphous structure, which comprises silica domains, sp² carbon chains (the free carbon phase), nano-voids and silicon/carbon open bonds (see Fig. 2f and Supplementary Fig. 4 for the proposed SiOC structure), offering a large number of sites in which Li-ions can be reversibly stored. We note that even the composite electrodes are not free from the charge-discharge voltage hysteresis (or energy inefficiency) generally observed in precursor-derived ceramics during the extraction half 46,49,50. Lowering the hydrogen content 60 and doping the silica domains (for example with boron) in SiOC could be useful strategies for improving the electrical properties and lowering the voltage hysteresis of these ceramics 40,46. Another important area for future investigation could be tailoring the rGO flakes with respect to residual oxygen and hydrogen surface groups and edge defects, so that the lithium irreversibility and voltage hysteresis 60 arising from active defect sites can be minimized without compromising the mobility of Li-ions and their access to the SiOC particles 4.
In summary, we have demonstrated the fabrication of a freestanding multi-component composite paper consisting of SiOC glass-ceramic particles supported in an rGO matrix as a stable and durable battery electrode. The porous 3-D rGO matrix served as an effective current collector and electron conductor with a stable chemical and mechanical structure, while the embedded amorphous SiOC particles actively cycled Li-ions with high efficiency. Elimination of inactive ingredients such as the metal current collector, non-conducting polymeric binder and conducting agent reduces the total electrode weight and provides the means to produce highly efficient lightweight batteries.
Methods
Preparation of polymer-derived SiOC ceramic. SiOC was prepared through the polymer pyrolysis route 41: liquid 1,3,5,7-tetramethyl-1,3,5,7-tetravinylcyclotetrasiloxane (TTCS, Gelest, PA) precursor (with 1 wt% dicumyl peroxide added as the cross-linking agent) was cross-linked at 380 °C in argon for 5 h, which resulted in a white infusible mass. The infusible polymer was ball-milled into a fine powder and pyrolyzed at 1,000 °C for 10 h in flowing argon, resulting in a fine black SiOC ceramic powder.
Preparation of GO and SiOC composite paper. The modified Hummers' method was used to make GO 51. A 20 ml colloidal suspension of GO in 1:1 (v/v) water and isopropanol was made by sonication. Varying weight percentages of SiOC particles (with respect to GO) were added to the solution, and the solution was sonicated for 1 h and stirred for ∼6 h for homogeneous mixing. The composite suspension was then filtered by vacuum filtration through a 10 μm filter membrane (HPLC grade, Millipore). The GO/SiOC composite paper obtained was carefully removed from the filter paper, dried, and thermally reduced at 500 °C under an argon atmosphere for 2 h. The large-area paper with 60SiOC composition (∼6.25 inch diameter, cut into rectangular strips) was similarly prepared using a Büchner funnel with a polypropylene filter paper (Celgard). The heat-treated paper was then punched (cut) into small circles and used as the working electrode material for Li-ion battery half-cells.
Coin cell assembly and electrochemical measurements. Li-ion battery coin cells were assembled in an argon-filled glove box. 1 M LiPF6 (Alfa Aesar) in 1:1 (v/v) dimethyl carbonate:ethylene carbonate (ionic conductivity 10.7 mS cm⁻¹) was used as the electrolyte. A 25 μm thick (19 mm diameter) glass separator soaked in electrolyte was placed between the working electrode and pure Li foil (14.3 mm diameter, 75 μm thick) as the counter electrode. A washer, spring and top casing were added to complete the assembly before crimping.
Electrochemical performance of the assembled coin cells was tested using a multichannel BT2000 Arbin test unit sweeping between 2.5 V and 10 mV versus Li/Li+ with the following cycle schedule: (a) asymmetric mode: Li was inserted at 100 mA g⁻¹ electrode, while extraction was performed at increasing current densities of 100, 200, 400, 800, 1,600 and 2,400 mA g⁻¹ electrode for 5 cycles each, returning to 100 mA g⁻¹ electrode for the next 10 cycles; (b) symmetric mode: all cells were subsequently subjected to symmetric cycling at a current density of 1,600 mA g⁻¹ electrode for up to 1,000 cycles, returning to 100 mA g⁻¹ electrode for the last 20 cycles.
Instrumentation and characterization. SEM of SiOC powder was carried out on a Carl Zeiss EVO MA10 system with an incident voltage of 5-30 kV. TEM images were digitally acquired using a Philips CM100 operated at 100 kV. TEM elemental mapping was performed using a 200 kV S/TEM system (FEI Osiris) equipped with ChemiSTEM technology, a high-angle annular dark field (HAADF) detector and a Super-X windowless EDX detector. The Super-X windowless EDX detector system with silicon drift detector technology allowed fast EDX data collection (a more than 50-fold enhancement in the acquisition speed of EDX chemical mapping) and large field-of-view elemental mapping. The acceleration voltage was 200 kV and the acquisition time was 10 min.
A FIB system (FEI Versa 3D Dual Beam) was used for milling and imaging cross-sections of the paper electrodes following standard procedures. Briefly, a platinum protective layer (∼25 μm × 10 μm × 5 μm in the x, y and z axes, respectively) was first deposited at an ion-beam current of ∼5 nA. Milling was then performed using a regular cross-section at an ion-beam current of ∼65 nA to create trenches on either side and on the bottom face of the platinum-coated area. This was followed by a cleaning cross-section (∼20 μm × 1 μm × 6 μm in the x, y and z axes, respectively) to fine-mill contamination at the bottom face of the platinum-coated area. The acceleration voltage of the Ga+ beam was 30 kV. An ion-beam current of ∼40 pA was used for imaging. The in-column detector for secondary electrons in beam-deceleration mode was used for SEM imaging of the milled cross-section. Elemental mapping (EDS) was performed using an inbuilt energy-dispersive spectroscopy silicon drift detector (Oxford Instruments).
Raman spectra were collected using a confocal Raman imaging system (Horiba Jobin Yvon LabRam ARAMIS) with a 633 nm HeNe laser (laser power 17 mW) as the light source and a ×100 microscope objective. Data acquisition was performed with an exposure time of 20 s and at least four accumulations at each point. A D1 filter (10% transparency) was employed for the ceramic powder samples. Additional material characterization was performed using XRD at room temperature with nickel-filtered Cu Kα radiation (λ = 1.5418 Å). The surface chemical composition was studied by XPS (PHI Quantera SXM-03 Scanning XPS Microprobe) using monochromatic Al Kα radiation. For XPS depth profiling, sputtering was performed with a 5 keV argon ion gun for 20 min followed by a survey scan. The sputtered area was set to ∼2 mm × 2 mm. The process was repeated four times, with the total sputtering time reaching 80 min.
Further, the bulk elemental composition of the pyrolyzed SiOC ceramic was measured following procedures similar to those described in the literature 46. Analysis was done for carbon, oxygen and hydrogen content; the silicon content was calculated as the difference to 100%. The carbon content was measured using a LECO Analyzer Model CS844 (LECO Corp., St Joseph, MI) by the combustion method with IR detection. Approximately 50 mg of SiOC powder mixed with accelerants (iron chips and Lecocel II HP) was used for this test. The oxygen and hydrogen contents were measured using a LECO Analyzer Model ONH-836 (LECO Corp., St Joseph, MI) based on the inert-gas-fusion thermal conductivity/infrared detection method. Specimen preparation involved mixing ∼34 mg of SiOC ceramic powder with graphite powder (LECO Corp.) as an accelerant in a nickel capsule (LECO Corp.), followed by placement in a graphite crucible. The crucible was then heated to ∼3,000 °C in the chamber and the gaseous products were transferred to the IR/thermal-conductivity detectors for analysis. The mass percentages of carbon and oxygen were quantified in reference to the IR spectra generated from graphite and tungsten oxide powders, respectively.
The hydrogen content in the SiOC ceramic was also confirmed using a second instrument based on the combustion/thermal-conductivity detector method, a CE-440 Elemental Analyser (Exeter Analytical, UK). Combustion of the weighed sample (1.8056 mg of fine powder) was carried out in the instrument chamber in pure oxygen under static conditions. Helium carried the combustion products through the analytical system to atmosphere. Between the thermal conductivity cells, an absorption trap removed water from the sample gas. The differential signal read before and after the trap reflected the water concentration and, therefore, the amount of hydrogen in the original sample. The hydrogen content by this method was found to be 0.25 wt% with an error of 0.06%. TGA was performed using a Shimadzu 50 TGA (limited to 800 °C). Samples weighing ∼2.5 mg were heated in a platinum pan at a rate of 10 °C min⁻¹ in air flowing at 20 ml min⁻¹. Electrical conductivity measurements were carried out using a four-point probe setup and a Keithley 2636A (Cleveland, OH) dual-channel sourcemeter in the Ohmic region. Electrochemical cycling of the assembled cells was carried out using multichannel battery test equipment (Arbin BT2000, Austin, TX) at atmospheric conditions.
Mechanical testing. Static uniaxial in-plane tensile tests were conducted in a custom-built test setup. One end of the setup was connected to a 1 N load cell (ULC-1N, Interface) and the other end was clamped to a computer-controlled translation stage (M-111.2DG, PI). The entire setup was located on a bench with self-adjusting feet. All tensile tests were conducted in controlled strain-rate mode at a strain rate of 0.2% min⁻¹. Paper electrodes were cut (punched out) into rectangular strips of ∼5 × 15 mm² for testing without any further modification.
"Materials Science"
] |
One‐Step Biocatalytic Synthesis of Sustainable Surfactants by Selective Amide Bond Formation
Abstract N‐alkanoyl‐N‐methylglucamides (MEGAs) are non‐toxic surfactants widely used as commercial ingredients, but more sustainable syntheses towards these compounds are highly desirable. Here, we present a biocatalytic route towards MEGAs and analogues using a truncated carboxylic acid reductase construct tailored for amide bond formation (CARmm‐A). CARmm‐A is capable of selective amide bond formation without the competing esterification reaction observed in lipase catalysed reactions. A kinase was implemented to regenerate ATP from polyphosphate and by thorough reaction optimisation using design of experiments, the amine concentration needed for amidation was significantly reduced. The wide substrate scope of CARmm‐A was exemplified by the synthesis of 24 commercially relevant amides, including selected examples on a preparative scale. This work establishes acyl‐phosphate mediated chemistry as a highly selective strategy for biocatalytic amide bond formation in the presence of multiple competing alcohol functionalities.
Methods and materials
Materials and instrumentation
All chemicals and buffers were bought from Sigma Aldrich, Fluorochem or Fisher Scientific. Medium for cell growth was bought from Formedium. All materials relating to molecular biology work were purchased from New England Biolabs (NEB). All NMR spectra were recorded using a Bruker Avance 400 instrument.
HPLC analyses were performed using an Agilent 1260 Infinity II system. LC/MS analyses were performed using an Agilent 1200 series LC system equipped with a G1379A degasser, a G1312A binary pump, a G1329 autosampler unit, a G1316A temperature-controlled column compartment and a G1315B diode array detector. Compounds were ionized using the API-electrospray technique and detected in positive mode on the LC/MS system, with a drying gas temperature of 250 °C at 12 L min⁻¹ and a nebulizer pressure of 25 psig.
On both LC/MS and HPLC systems an ACE5 C18 column was used (dimensions: 250 × 4.6 mm).
For HRMS analyses an Agilent 1200 series LC system was used, coupled to an Agilent 6520 QTOF mass spectrometer, ESI positive mode. The data was analysed using Agilent MassHunter software.
Protein expression and purification
CARmm-A and CHU genes, plasmids and expression strains (E. coli BL21 (DE3)) were prepared using previously described methods. [1] For protein expression, autoclaved baffled flasks containing 700 ml of auto-induction medium supplemented with the appropriate antibiotic were inoculated with E. coli BL21 (DE3) cells and grown at 30 °C for 72 hours. Cells were harvested by centrifugation and the cell pellet was stored in zip-lock bags at −80 °C.
To lyse the cells for purification, the cell pellet was resuspended in equilibration buffer (50 mM Tris·HCl pH 8, 200 mM NaCl). The cells were then sonicated (20 s on/20 s off, 25 cycles). The lysis mixture was subsequently centrifuged; the supernatant was collected and the pellet discarded. The supernatant was then mixed with Ni-NTA agarose and left shaking at 4 °C for 30 minutes. This mixture was poured into a gravity column and washed with wash buffer (10 mM imidazole, 50 mM Tris·HCl pH 8, 200 mM NaCl). The protein was then eluted using elution buffer (200 mM imidazole, 50 mM Tris·HCl pH 8, 200 mM NaCl). The eluted protein was concentrated using Vivaspin centrifugal concentrators (30,000 MWCO, Sartorius) and then desalted using PD-10 columns (GE Healthcare) following the respective protocols. Purity was checked by SDS-PAGE (staining with Instant Blue (Expedeon)) and the concentration was determined by measuring absorbance at 280 nm using a NanoDrop (Thermo Fisher).
For the production of cell-free lysates, the frozen cell pellets were resuspended in reaction buffer (100 mM HEPBS pH 8.5), then sonicated and centrifuged as described above. The protein concentration of the supernatant was measured, and the lysate was aliquoted and stored at −20 °C.
Biotransformation procedure
In an example CARmm-A biotransformation, 5 mM of the carboxylic acid substrate (from a 0.5 M stock in DMSO), 50 mM of amine (from a 0.25 M stock in buffer, adjusted to pH 8.5), 17.1 mM ATP (from a 0.1 M stock in buffer, adjusted to pH 8.5), 66.5 mM MgCl2 and CARmm-A (1 mg/mL) were added to HEPBS buffer (100 mM, pH 8.5) to a total volume of 0.5 mL in a 1.5 mL Eppendorf tube. The reaction was incubated at 37 °C for 16 hours with shaking at 250 rpm.
The reaction was stopped by adding an equal volume of MeOH and shaking the mixture. This mixture was centrifuged and the supernatant was filtered and added to an HPLC vial for analysis.
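The component volumes for this 0.5 mL example follow from C₁V₁ = C₂V₂; a minimal sketch using only the stocks and final concentrations stated above:

```python
# Sketch: back-calculating pipetting volumes for the example biotransformation
# (0.5 mL total) from the stated stock and final concentrations.
stocks_mM = {"acid (DMSO)": 500.0, "amine": 250.0, "ATP": 100.0}
finals_mM = {"acid (DMSO)": 5.0, "amine": 50.0, "ATP": 17.1}
total_uL = 500.0

for name in stocks_mM:
    v = finals_mM[name] / stocks_mM[name] * total_uL  # C1*V1 = C2*V2
    print(f"{name}: {v:.1f} uL of stock")
# acid: 5.0 uL, amine: 100.0 uL, ATP: 85.5 uL;
# buffer, MgCl2 and enzyme make up the remaining volume.
```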
Acylation of ¹³C-decanoic acid
To investigate the selectivity of amidation versus esterification, ¹³C-labelled decanoic acid was reacted with amino sugar 1 (Figure S1). The crude biotransformation was mixed 1:1 with MeOD and analysed by ¹³C-NMR. As a negative control, a reaction without the enzyme catalyst was included, as well as a commercial standard of MEGA-10. The carbonyl regions of the ¹³C-NMR spectra of these samples are shown in Figure S2. As the carbonyl region of the ¹³C-NMR spectrum shows no peaks other than the acid and amide peaks, the reaction appears to be selective towards a single product, without any unwanted ester byproducts.
¹⁹F-NMR
To investigate the activity of amino sugars, initial CARmm-A catalysed biotransformations using 3-fluorocinnamic acid and amino sugar 1 were performed and analysed by ¹⁹F-NMR (using previously described methods [1], Figure S3). Figure S4 shows the crude biotransformation (top) and the same biotransformation spiked with the 3-fluorocinnamic acid substrate (bottom). This indicated that the substrate in the biotransformation had been completely converted to product. To identify the reaction product, the mixture was additionally analysed by LC-MS (Figure S5). We also investigated whether sorbitol (a polyalcohol derivative of glucose) would lead to ester formation when used as a nucleophile under the optimized reaction conditions with 3-fluorocinnamic acid (Figure S6). We observed that the ¹⁹F-NMR spectrum for the sorbitol experiment was identical to that of the experiment containing no nucleophile. A very small new peak appeared in these experiments, corresponding to the small amount of acyl adenylate present in solution. As expected, this peak did not appear in the no-enzyme control experiment. Therefore, we concluded that no ester formation occurs when sorbitol is used as a nucleophile. Furthermore, a positive control reaction between 3-fluorocinnamic acid and 1 was performed, showing full conversion to the amide product.
Optimisation
An initial test reaction of 7 with 1 to give MEGA-8 (11) was performed using previously reported conditions with an excess of amine and ATP. [1] The conversion was determined by RP-HPLC at a wavelength of 210 nm, using a commercial standard as a reference for a calibration curve (Figures S7 and S8). The calculated conversion of this reaction was >99%. For reaction optimisation, the reaction between 9 and 1 to give MEGA-10 (13), with the CHU enzyme regenerating ATP from AMP and polyphosphate, was used as a model reaction (Figure S9).
Using previously reported reaction conditions, we investigated the effect of amine concentration on the conversion of substrates to 13 (Figure S10). We performed design of experiments using the software JMP® (Version 16 Pro, SAS Institute Inc., Cary, NC, 1989-2022) to optimize and better understand the CHU system. We constructed an empirical model for the effect of polyphosphate, AMP and Mg²⁺ concentrations on conversion, using data from a set of biotransformation conditions generated by the software (Figure S11). Using the maximize-desirability option in the prediction profiler, the optimum reaction conditions were found to be 17.1 mM AMP, 66.5 mM MgCl2 and 14.9 mg/ml polyphosphate.
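JMP was used for the actual design and fit; purely as an illustration of the same idea with open tools, the sketch below fits a quadratic response-surface model and grid-searches its maximum. The design points and conversion values are hypothetical placeholders, not the paper's data:

```python
# Sketch of a response-surface analysis in the spirit of the JMP workflow:
# fit a quadratic model of conversion vs. AMP, MgCl2 and polyphosphate, then
# pick the maximizing settings on a grid. All data below are placeholders.
import itertools
import numpy as np

X = np.array([  # columns: AMP (mM), MgCl2 (mM), polyP (mg/mL) -- placeholder runs
    [5, 20, 5], [5, 80, 15], [20, 20, 15], [20, 80, 5], [12, 50, 10],
    [5, 50, 10], [20, 50, 10], [12, 20, 10], [12, 80, 10], [12, 50, 15],
])
y = np.array([35, 62, 70, 55, 80, 58, 76, 60, 72, 83.0])  # conversion (%)

def features(x):
    a, m, p = x  # quadratic model with two-way interactions
    return [1, a, m, p, a*m, a*p, m*p, a*a, m*m, p*p]

beta, *_ = np.linalg.lstsq(np.array([features(r) for r in X]), y, rcond=None)

grid = itertools.product(np.linspace(5, 20, 16), np.linspace(20, 80, 25),
                         np.linspace(5, 15, 21))
best = max(grid, key=lambda x: np.dot(features(x), beta))
print("predicted optimum (AMP mM, MgCl2 mM, polyP mg/mL):",
      [round(v, 1) for v in best])
```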
Analysis of analytical scale biotransformations
Reactions shown in Table 1 were stopped after 16 hours by adding methanol in a 1:1 ratio; the mixture was centrifuged and the supernatant used for reversed-phase HPLC and LC/MS analysis.
To calculate conversions, a calibration curve was made using a dilution series of a commercially bought standard of MEGA-10 (13) at concentrations of 0.625, 1.25, 2.5, 5 and 10 mM. These samples were run using the HPLC conditions described above, detecting the amide at 210 nm and taking the mAU value of the standard's peak at a retention time of approximately 4.8 min.
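A sketch of this calibration-and-conversion arithmetic; the concentrations are those listed above, while the peak areas and sample response are hypothetical placeholders:

```python
# Sketch: linear calibration from the MEGA-10 standard dilution series, then
# conversion of a sample peak response to concentration and % conversion.
import numpy as np

conc_mM = np.array([0.625, 1.25, 2.5, 5.0, 10.0])         # from the text
area_mAU = np.array([61.0, 123.0, 250.0, 498.0, 1001.0])  # placeholder responses

slope, intercept = np.polyfit(conc_mM, area_mAU, 1)
sample_area = 440.0                                   # placeholder sample peak
sample_mM = (sample_area - intercept) / slope
conversion = sample_mM / 5.0 * 100                    # 5 mM acid substrate
print(f"product {sample_mM:.2f} mM -> conversion {conversion:.0f}%")
```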
"Chemistry"
] |
Physics of animal health: on the mechano-biology of hoof growth and form
Global inequalities in economic access and agricultural productivity mean that a large number of developing countries rely on working equids for transport, agriculture and mining. Therefore, understanding the hoof conditions and shape variations affecting equids' ability to work remains a persistent concern. To bridge this gap, using a multi-scale interdisciplinary approach, we provide a bio-physical model predicting the shape of equids' hooves as a function of physical and biological parameters. In particular, we show (i) where the hoof growth stress originates from, (ii) why the hoof growth rate is one order of magnitude higher than the proliferation rate of epithelial cells and (iii) how the soft-to-hard transformation of the epithelium is possible, allowing the hoof to fulfil its function as a weight-bearing element. Finally (iv), we demonstrate that hoof misshaping is linked to the asymmetrical design of equids' feet (shorter quarters/long toe) together with the inability of the biological growth stress to compensate for this asymmetry. Consequently, the hoof can adopt a dorsal curvature and become 'dished' over time, at a rate that is a function of the animal's mass and the hoof growth rate. This approach allows us to discuss the potential occurrence of this multifaceted pathology in equids.
Background
Equids (horses, mules, donkeys) are 'ungulates', or single-digit hoofed mammals. As the function of the hoof is to sustain the animal's weight and to balance external stresses during locomotion, hoof pathologies and associated shape variations have important repercussions on equids' health and have puzzled humankind for centuries, with reference being made in Aristotle's writings around 350 BC [1]. While equids are mostly considered pets in economically advanced countries, the 110 million working equids worldwide involved in mining, transport and agriculture play an important global socioeconomic role in developing countries (https://www.thebrooke.org/). Owning a working equid provides financial benefits to families [2] and plays an important role socially [3]. In this context, the health of working equids remains an important concern, as chronic hoof misshaping and related conditions are a serious problem in this genus [4,5], and time off work for recovery has a major impact on an owner's income [2].
As perissodactyls, or odd-toed ungulates, horses have a strong hoof capsule protecting the internal structures of each single-digit foot. This horny structure encloses the distal part of the second phalanx, the distal phalanx and the navicular bone. The adhesion of the hoof to the distal phalanx is ensured by the dermo-lamellar junction which, through its hierarchical design, allows strong adhesion to be maintained during abrupt rises in mechanical stress, e.g. at gallop [6][7][8] (see electronic supplementary material for a summary of horse foot anatomy).
Abnormal hoof shapes that can be observed as dorsal curvature anomalies (a.k.a. the Aladdin's slipper shape) develop over a long period of time and are commonly perceived as a warning sign of a past or present disorder associated with biological factors (e.g. hormonal disturbances) and sometimes physical factors, including the specific loading of a limb [9][10][11]. However, to what extent physical and biological factors can be integrated, and what weight to give each factor in cases of hoof deformity, remains unclear. Although the biology of keratinized tissues is being unravelled at the cellular, molecular and genetic levels [12][13][14][15], the full understanding of the multi-scale interactions between physics and biology in these tissues remains in its infancy.
Biomechanical studies of the equid hoof capsule were the first to gather essential information regarding the stress/strain relationships and viscoelastic properties of the hoof capsule in relation to its morphology, by considering the adult hoof as a static piece of tissue [16][17][18][19][20][21][22][23]. However, the hoof is not static but grows continuously over time, and the question of how hoof growth responds to the physical environment has not received adequate answers. As a result, a number of questions are still lingering, such as (i) how can the growth rate of the hoof capsule be approximately 0.1 mm day⁻¹ [24] when keratinocyte cells proliferate at a typical rate of approximately 10 μm day⁻¹ (approx. 0.01 mm day⁻¹)? (ii) How can a hoof capsule with an elastic modulus of approximately 10⁸ Pa [21] emerge from soft keratinocyte tissues in which single cells have a typical elastic modulus of approximately 10³ Pa [25,26]? (iii) What are the biological mechanisms promoting the growth stress that allows the hoof to be a weight-bearing element, and how do these mechanisms impact the future shape of the hoof? (iv) To what extent is physics involved, and can it explain hoof deformities?
These questions underline the notion of growth rate/stress and the transformation that the epithelium has to undergo to generate a solid hoof capsule. As a result, there is a need to investigate the dynamical growth of the hoof using a multi-scale approach, from the cells to the entire hoof, in live animals.
When studying problems at the interface between physics and biology, and in particular any dynamical growth, the challenge is to relate a biological growth originating from soft tissues/cells to a physical stress, such that the problem can be addressed with physics, in turn informing the biology of the process at play. Using a bottom-up approach, we have therefore concentrated on the 'biology of the early stage of hoof growth' to deduce the 'physics of the early stage of hoof growth', providing a model for the hoof growth stress. This stress, incorporated into a biomechanical model of the hoof treated as a solid, untangles the roles of physics (e.g. weight and hoof geometry) and biology (e.g. proliferation and differentiation of keratinocyte progenitor cells) and demonstrates the weakness of this system.
Sample collections
Hooves were obtained from horses that were not euthanized for research purposes. Horse hooves were collected from the abattoir 1 h post euthanasia, following ethical approval by the School of Veterinary Medicine and Science, University of Nottingham. For three-dimensional imaging and histological sampling, PBS was injected through the medial and lateral palmar digital arteries to remove the blood, and tissues were fixed by replacing the PBS with a 4% PFA/PBS (Sigma, UK) fixative solution under manual pressure. When needed, biopsies were taken from the dorsal and quarter parts of the coronary band and placed into 4% PFA/PBS fixative solution prior to processing. For primary cell isolation, hooves were aseptically cleaned and progenitor keratinocyte cells obtained as described below. All primary cell cultures were performed at 37 °C in 5% CO₂.
Synchrotron imaging of the papillae
To investigate details of the circulatory system and tissues surrounding and within the papillae, a hoof specimen was imaged on Beamline I12-JEEP at the Diamond Light Source, UK [27] using 0.234 Å (53 keV) X-rays and a custom-built X-ray camera comprising an X-ray-sensitive scintillator emitting visible light (cadmium tungstate), visible-light optics and a PCO.edge camera with a scientific-grade 2560 × 2160 pixel sCMOS sensor. 1800 images were collected at 0.1° intervals through a 180° rotation for each 20 mm field of view. The sample was positioned at a distance of 1000 mm from the camera to take advantage of propagation phase contrast in order to distinguish between tissue types [28]. Phase was retrieved using the method of Paganin et al. [29] to improve contrast. Three-dimensional volumes were reconstructed using an in-house high-speed filtered back-projection reconstruction algorithm [30]. The three-dimensional datasets resulting from the reconstruction were rendered using commercial software (Avizo, France). The volume of the papillae was measured using the thickness option of the BoneJ plugin in Fiji software.
Three-dimensional reconstruction of the equine foot and measurement of the hoof dorsal curvature
Individual hooves were scanned using a Phoenix v|tome|x m industrial scanner (GE, Germany). A maximum X-ray energy of 125 kV, a 320 μA current and a 0.5 mm thick copper filter were used to scan each sample, with each scan consisting of 2160 projection images acquired over a 360° rotation at a detector exposure time of 333 ms. The magnification and spatial resolution achieved were ×1.82 and 120 μm, respectively. Data were reconstructed for visualization using datos|x software and visualized using VGStudio MAX 2.2 (GE, Germany). The average dorsal curvature of the hoof was estimated from a ventro-dorsal sagittal section using Fiji (electronic supplementary material, appendix SM.1).
Field study
One hundred and twenty-nine horses from 12 different yards in the southeast of England were evaluated from 19 September 2016 to 3 October 2016. Eligibility criteria for inclusion in the study were an age greater than 5 years and a height under 144 cm with shoes. All horses were healthy at the time of evaluation and none had a history of laminitis/hoof conditions or pre-existing health conditions, or were being treated for pituitary pars intermedia dysfunction. There were no selection criteria on breed, sex, management or exercise regime. Hoof diameters were measured using a 40 cm ruler across the widest part of the hoof. Horses were walked onto an electronic Horse Weigh® bridge and the mass recorded to the nearest kilogram. General adiposity was assessed by body condition scoring (BCS) using the Henneke nine-point scale [31,32]. Lateral hoof photographs were taken for both forefeet with a wooden bar placed behind the heel bulbs. A 40 cm ruler was then placed at the widest aspect of the lateral hoof wall, parallel with the wooden bar, to ensure no obliquity in the image. A 15 mm diameter blood-tube lid was put on top of the ruler, in contact with the lateral hoof wall, to act as a scale and to correct for parallax. A camera (Nikon Coolpix L330, UK) was placed at the end of the ruler and the photograph taken. Images were imported into Fiji for average dorsal wall curvature calculation (electronic supplementary material, appendix SM.1).
Statistical analysis
All statistical analyses and linear regressions were performed using Prism6 (GraphPad, USA). p-values less than 0.05 were taken as statistically significant.
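The same two-sided test of a regression slope can be reproduced outside Prism; the sketch below, on made-up data, shows the equivalent computation in Python (an illustration only, not the authors' analysis).

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
x = rng.uniform(0, 10, 30)                  # hypothetical predictor
y = 2.0 * x + rng.normal(0, 3, 30)          # hypothetical response

res = stats.linregress(x, y)                # slope, intercept, r, p, stderr
significant = res.pvalue < 0.05             # the paper's significance threshold
print(f"slope={res.slope:.2f}, R^2={res.rvalue**2:.3f}, "
      f"p={res.pvalue:.2e}, significant={significant}")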
Visual assessment of a dorsally curved hoof
The presence of a dorsal curvature (a.k.a. an Aladdin's-slipper shape, figure 1a) is indicative, in equids, of a past or present pathology. For a more or less circular hoof, a simple geometric argument suggests that such a dorsal curvature is possible because the heel and dorsal regions do not grow at the same rate. This argument is further intuitively confirmed by the occurrence of diverging growth bands on the hoof capsule from the dorsal to the heel regions (see arrow in figure 1a), which underlines a differential growth between these regions. To better understand the occurrence of diverging growth bands, a thorough investigation of the initial stages of hoof growth was completed.
Biology of the early stages of hoof growth
The synthesis of the hoof capsule starts from the coronet, i.e. the papillary region; upon dissection and H&E staining of this region dorsally, soft digit structures, a.k.a. papillae, became visible (figure 1b). Between papillae, it was possible to differentiate three interpapillary regions (figure 1c): the blue 'proximal nuclear region', comprising cells with little cytoplasm, where the nucleus occupies most of the cellular volume; the red 'cytosolic region', in which the cytoplasmic volume of cells enlarges prior to its dissipation in the form of a 'transition'; and, after this transition, a region with no visible cytoplasm or nucleus but only remains that are expected to be keratin proteins, as the hoof is a keratinized tissue. While the location of the 'transition' within the interpapillary space did vary slightly between hooves (figure 1d), the near-linear rate of cytoplasmic accumulation before the transition (R² = 0.981 ± 0.008) and the cell size at the transition (S_cell = 980 ± 9 µm²) were similar between hooves, suggesting that the rate of cytoplasmic volume accumulation over time is proportional to the surface area of the cells. Note that within this transition region, the borders of the papillae seemed to join (black arrow, figure 1c), which may suggest the presence of a pressure within the interpapillary space, possibly linked to the ability of cells to increase their size via accumulation of cytosolic material (figure 1d). To relate the changes in the cell surface area (figure 1d) to any concomitant variation in the physical appearance of papillae, high-energy X-rays were used to reconstruct a three-dimensional image of the papillae in the dorsal region (figure 1e(i)). This technique allowed us to extract the papillae from the stratum medium involved in the synthesis of the hoof capsule (figure 1e(ii)) and to measure their diameter along their proximo-distal axis, i.e. along the axis of hoof growth (figure 1e(iii)). The result confirmed that the average diameter of the papillae decreases to become minimal at a typical distance of approximately 2 mm, which corresponds to the position of the transition (figure 1d). Perpendicular to the direction of growth, the cells were visibly smaller close to the papillae, in a region corresponding to approximately three cell thicknesses (figure 1f). This last result suggests that, perpendicular to the direction of hoof growth, the size of cells is relatively homogeneous within the interpapillary space.
Regarding key molecular determinants, Ki-67 (proliferation marker, figure 2a) [33], p63 (organization of the epithelium structure, figure 2b) [34], DNA fragmentation via TUNEL assay (dead tissue formation, figure 2c) [35], and K14 and K10 (differentiation markers of the epithelium, figure 2d) [36] were labelled and measured. Ki-67 and p63 were expressed in cells lining the basement membrane of the papillae (i.e. the cells forming the papillae themselves) and the interpapillary space, and their expression decreased distally. However, by considering the variations in the cell surface area (figure 1d), the probabilities that cells were still expressing Ki-67 or p63 in the interpapillary space were at most approximately 1% of what was seen at the basement membrane. This result suggests that the proliferation process is negligible in the interpapillary space. Interestingly, although Ki-67 expression was visible over the entire length of the papillae (figure 2a(ii)), cells emerged in the interpapillary space only proximally (figure 2a(i)), suggesting that cells from the papillae move in the opposite direction to the hoof growth prior to entering the interpapillary space. This counter-flow mechanism can potentially be rationalized by taking into consideration the interpapillary pressure, linked to the changes in cell volume, applying a normal pressure against the surface of the papillae (electronic supplementary material, appendix SM.2). DNA fragmentation became clearly visible in the transition region (distance approx. 2 mm, figure 2c) and peaked when cell …

Hoof synthesis needs to be seen as a dynamical process, and the remarkable changes in cell size as a function of their progression in the interpapillary space, concomitant with the variations in the diameter of papillae, are indicative that physical stresses linked to changes in pressure may be at play to initiate and synchronize the early stage of hoof morphogenesis (electronic supplementary material, appendix SM.2). This pressure would in any case be required for the hoof to grow, to balance the weight of the animal and the adhesion of the capsule on the distal phalange. Furthermore, the near-constant surface area of transverse cells as they move along the direction of growth suggests that a one-dimensional model is potentially valid as a leading approximation to describe the growth stress.
Physics of the early stages of hoof growth
Histological pictures represent a steady state of the early stage of hoof morphogenesis. To model cell volume changes anywhere at the coronet level, we denote by θ the angular position (figure 1b), by R_θ(y) the average radius of cells (the data from figure 1d correspond to R_{θ=0}(y)) and by R_T(y) the biological target radius that cells would have if no physical stresses were involved. In this context, a physical interpapillary pressure can be defined by P_θ(y) ≈ K(1 − R_T³(y)/R_θ³(y)), resulting from the mismatch between the target and real cell sizes, where K ≈ 10³ Pa is the typical elastic modulus of keratinocytes [25,26]. However, interpapillary cells have to share the limited interpapillary space and, as a result, the real cell size is also a function of the local number of cells present in the interpapillary space. Neglecting cell division in the interpapillary space (figure 2a), assuming that the interpapillary space volume does not depend on the angular position, and denoting by N_θ the number of cells in the interpapillary space, the conservation of the interpapillary volume implies that N_θ R_θ³ is constant. As a result, variations in the interpapillary pressure are possible if more cells are present in the interpapillary space. By assuming that the differentiation process is independent of the physical stresses present in the interpapillary space (figure 1f), it is possible to compare the difference in pressures between two angular positions by determining the number of cells present in the interpapillary space at these locations. In the remaining text we shall note N̄_θ = N_θ/(N_θ)₀, where the variable (N_θ)₀ is a normalization constant linked to an 'initial state' to be defined.
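For reference, the relations just defined can be written compactly; the LaTeX transcription below restores the Greek symbols lost in extraction and adds nothing beyond the stated assumptions.

\begin{align*}
  P_\theta(y) &\simeq K\left(1 - \frac{R_T^3(y)}{R_\theta^3(y)}\right),
      \qquad K \simeq 10^{3}\,\mathrm{Pa}
      && \text{(target/real size mismatch)}\\
  N_\theta\, R_\theta^3 &= \text{const}
      && \text{(conservation of interpapillary volume)}\\
  \bar{N}_\theta &= N_\theta/(N_\theta)_0
      && \text{(normalized cell number)}
\end{align*}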
The variation of the cell size along the y-axis can now be addressed making use of: (i) the balance of stresses, −dP_θ/dy ≈ λv_θ/2R_θ, where λ is a drag constant, v_θ = 2R_θ/τ₀ is the velocity of the cell and τ₀ the typical proliferation time of the keratinocyte progenitor cells dividing from the proximal part of the papillae; (ii) a rate of volume change per unit time for the target size that is proportional to the surface area of the cell, or equivalently dR_T/dt ≈ v_m, where v_m is the proportionality constant representing the rate of inward flow of mass across the membrane of cells; and (iii) a change in the target size as a function of the position in the interpapillary space, written as y ≈ ∫2R_T dt. Further assuming that when cells enter the interpapillary space their target and real sizes are the same and constant whatever the angular position, i.e. R_θ(y = 0) = R₀, and noting v₀ = 2R₀/τ₀ the cellular proliferation rate and ȳ = y/2R₀, this set of relations allows one to determine the rate of growth, equation (3.1). Concentrating on the dorsal region and taking the histological pictures as initial states, i.e. N̄_{θ=0} = 1, a nonlinear regression against the data plotted in figure 1d provides 4v_m/v₀ ≈ 0.32 ± 0.03 and λv₀/K ≈ 0.059 ± 0.006, with an adjusted R² greater than 0.98 for each fit prior to the transition (inset, figure 1d). From these numerical constants, the typical hoof growth rate can be determined. Assuming that the growth rate is driven by the variation in cell size prior to the transition located at the position ȳ_c ≈ 199 ± 26 (figure 1d), using a nominal cell size R₀ ≈ 5.14 ± 0.07 µm (figure 1d) and a typical proliferation time τ₀ ≈ 10 h, one finds a theoretical growth rate based solely on the changes in the cells' morphology of (v_{θ=0})_c ≈ 0.17 mm day⁻¹, which is the order of magnitude of the hoof growth rate in different ungulate species (table 1; note that (v_{θ=0})_c ≈ 0.3 mm day⁻¹ with τ₀ ≈ 5.6 h). Thus, the position of the transition seems to match the soft-to-hard transition, namely the location at which the hoof becomes a solid and reaches its steady growth rate.

Figure 1. (Opposite.) (a(i,ii)) Visual characteristics of straight and dorsally curved hooves linked to a differential growth across the coronary band. The arrow points to a 'diverging growth band' visible with the naked eye. (a(iii)) Based on these observations, a simple geometric model can be inferred to describe how the dorsal curvature of the hoof can appear as a result of a diverging growth from the coronary region. (b(i)) Basic anatomical nomenclature of the equid foot; given the radial symmetry of the hoof, an angular notation involving the parameter θ is used to describe the location at the coronary band. (b(ii)) Location of the papillary region with regard to the distal phalange using an X-ray picture of the equid foot. (b(iii), left) A dissection of the papillary region demonstrates that the papillary region is also where the epithelium changes its state from being a soft tissue to a hard one. (b(iii), right) The papillary region can be immunohistochemically stained using H&E, demonstrating the presence of papillae, namely soft digit structures (scale bar, 2 mm). (c) A magnification of a longitudinal section of papilla labelled with H&E demonstrates the different regions involved in cell differentiation and hoof synthesis, including the blue proximal region, the red cytosolic region and the white region where remnant structures are devoid of cytosol and nuclei (n = 3; scale bar, 200 µm). The black arrow points to a region where the borders of the papillae seem to join. A magnification of interpapillary regions was carried out (scale bar, 100 µm). The numbered squares refer to magnified regions on the right (scale bar, 20 µm). The letters a, b and c refer to proximal, medial and distal regions labelled with K10 and K14 (figure 2d). The stars represent the regions where the size of cells, along the axis perpendicular to the papilla, was measured (see (f) below). (d) Measure of the cell sizes as a function of the distance in the direction of growth. The three colours used (black, dark blue and red) correspond to three different hooves. The inset is the theoretical fit using equation (3.1) (n = 3). (e(i)) High-intensity X-ray imaging (synchrotron) of horse papillae (n = 1) showing three sub-regions of the interpapillary region, including the stratum externum (SE), the stratum medium (SM) and the stratum internum (SI). The SM is the papillary sub-region from which the bulk of the hoof is synthesized (scale bar, 2 mm). (e(ii)) A three-dimensional reconstruction of the SM sub-region permits measurement and colour coding of the diameter of papillae, where red is indicative of a larger diameter as opposed to blue (scale bar, 2 mm). (e(iii)) A selection of 10 papillae (N = 10) demonstrates that the average diameter of a papilla changes along its longitudinal axis and that a reduction in its diameter is associated with an increase in cell size in the same region (figure 1d). Note that the region where the average diameter of papillae is the smallest (distance approx. 2 mm) is also where the borders of the papillae seem to join (see black arrow in (c)). (f) Surface area of cells measured along the axis perpendicular to the direction of the papillae at different positions marked by a white star '*' in (c). Perpendicular to the direction of growth, the cell surface area in the interpapillary space changes over a thickness corresponding to two cell layers close to the papillae (L1 and L2). In the L3 region, the interpapillary cells or remnant structures (after the transition) have a homogeneous size that is a function of their progression in the interpapillary space. The error bars correspond to the standard deviation of the cell surface area. Note that L1 + L2 + L3 represents only half the transverse length of the interpapillary space. Consequently, it is worth noting that the position of the transition is independent of the surface area of cells. This observation suggests that if the temporal evolution of the size of cells is constrained by the interpapillary pressure, this pressure is not involved in the transition observed. Said differently, the position of the transition is 'timed' by the biology of the differentiation of keratinocytes. ('n' describes the number of hooves used for measurements and 'N' the number of papillae used.)
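Equation (3.1) itself did not survive extraction; nevertheless, the order of magnitude of the quoted growth rate can be recovered from the stated ingredients alone. The sketch below neglects the drag/pressure correction and lets the linearly growing target radius advect the cells, so it is a back-of-the-envelope reconstruction rather than the paper's fit; all variable names are ours.

import numpy as np

# Constants taken from the text
R0 = 5.14e-3              # nominal cell radius [mm]
tau0 = 10.0               # typical proliferation time [h]
a = 0.32 / 4.0            # v_m / v_0, from the fitted 4 v_m / v_0 = 0.32
ybar_c = 199.0            # reduced position of the transition, ybar = y / 2 R0

# Target radius: R_T(t) = R0 (1 + 2 a t / tau0); cells advect with
# dy/dt = 2 R_T / tau0, hence ybar(T) = T + a T^2 with T = t / tau0.
T_c = (-1.0 + np.sqrt(1.0 + 4.0 * a * ybar_c)) / (2.0 * a)   # ybar(T_c) = ybar_c

v_c = (2.0 * R0 / tau0) * (1.0 + 2.0 * a * T_c) * 24.0       # rate at transition [mm/day]
print(f"time to transition ~ {T_c * tau0:.0f} h, growth rate ~ {v_c:.2f} mm/day")

With these numbers the transition is reached after roughly 440 h, and the instantaneous rate there is about 0.2 mm day⁻¹, consistent with the ~0.17 mm day⁻¹ quoted above.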
By virtue of the balance of stresses, the interpapillary pressure can also be determined, in the form of equation (3.2), where (P_θ)₀ is the interpapillary proximal pressure, i.e. for ȳ = 1, generated by the papillae, with a magnitude that can be estimated using figure 2a, i.e. when N̄_{θ=0} = 1, leading to (P_{θ=0})₀ ≈ 1.9 × 10⁵ ± 4.9 × 10⁴ Pa (electronic supplementary material, appendix SM.3). This magnitude, of the same order as the values found in the field study (see below), allows one, using equation (3.2) and figure 1d, to estimate the pressure at the transition, (P_{θ=0})_c, as (P_{θ=0})_c ≈ 0.94 × (P_{θ=0})₀ ≈ 1.8 × 10⁵ Pa.
As beyond the transition there is no possibility for cells to actively generate any further growth, the pressure at the transition, (P_θ)_c, should correspond to the growth stress of the hoof. In this instance, it can be assumed that the process of cornification/horn formation, i.e. the transition, is also a means of balancing external stresses, and the presence of the transition is probably linked to the reorganization of keratin filaments from dead cells into larger bundles under pressure, so that the hoof is synthesized to match its function as a weight-bearing element.
The soft-to-hard transition
The reason why the transition in size corresponds to the formation of the hard horn can be understood as follows. Firstly, let us assume that cells have to be in a certain biological state when cell death is ongoing in order to collapse, as observed in figure 1c,d. In this context, one assumes that the collapse is driven by the pressure involved at the transition. Secondly, let us also assume that prior to collapsing, each non-aggregated keratin filament occupies a volume V₀ within living cells, which represents its individual degree of freedom. Thus upon cell death, neglecting the binding energy between keratins, the entropic cost of forcing the formation of one thick fibril containing n previously non-aggregated intermediate filaments is TΔS ≈ −n k_B T ln(n), where k_B T is the thermal energy. In parallel, this aggregation releases the mechanical energy −(P_{θ=0})_c ΔV ≈ (P_{θ=0})_c V₀(n − 1). Equating both relations gives ln(n) ≈ (P_{θ=0})_c V₀/k_B T × (1 − 1/n). Assuming a typical keratin dimer radius of approximately 14 nm [41] at room temperature, one finds n ≈ 20. As the bending stiffness of filaments is proportional to the fourth power of their radius, the mechanical resilience of the keratin bundle gained through this reorganization would be approximately 20⁴, or equivalently approximately 10⁵. This means that the persistence length, determined when the bending energy equals the thermal energy, should be increased by a similar factor [42]. Assuming a persistence length of the keratin filament of approximately 0.5 µm in two-dimensional cell culture conditions [43], the reorganization of keratin filaments would increase this value to approximately 80 cm, making a hoof with a typical dimension of approximately 10 cm a solid structure. Finally, the reorganization of keratin filaments can also explain the approximately 10⁵ order-of-magnitude difference between the elastic moduli of keratinocytes and the solid keratinized hoof capsule.
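The relation ln(n) ≈ x(1 − 1/n), with x = (P_{θ=0})_c V₀/k_B T, is transcendental; the sketch below solves it numerically. The value of x is a placeholder chosen so that the root lands near the paper's n ≈ 20; V₀ itself is not specified precisely enough in the extracted text to recompute x from first principles.

import numpy as np
from scipy.optimize import brentq

def bundle_size(x):
    # Solve ln(n) = x * (1 - 1/n) for n > 1, where x = P_c * V0 / (k_B * T)
    # is the mechanical energy released per filament in thermal units.
    f = lambda n: np.log(n) - x * (1.0 - 1.0 / n)
    return brentq(f, 1.0 + 1e-9, 1e6)

# Placeholder: x ~ 3.15 yields n ~ 20, the figure quoted in the text.
print(f"n ~ {bundle_size(3.15):.1f}")

The bracket excludes the trivial root n = 1; for x > 1 the function is negative just above 1 and positive for large n, so a single physical root exists.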
Past this transition, a mechanical description of the hoof capsule treated as a solid can provide a better understanding of the underlying biophysical mechanisms regulating hoof shape.
Stress balance in the hoof capsule
Straight or curved hooves redistribute the different loads across the coronary band. As the transition is supposed to balance the external load, there is no residual shear stress present in the solid hoof capsule. Let us assume a circular hoof capsule of constant radius r₀ that is curved dorsally, modelled as a two-dimensional object of constant thickness e and longitudinal length Z_θ, where θ is the angular position on the coronet as defined above. In this context, it is possible to define a reduced variable, CZ_θ, where C is the dorsal curvature (i.e. for θ = 0), which we shall assume constant and much smaller than any other spatial dimension describing the hoof. The local growth stress, e(P_θ)_c, defined as the growth force applied per unit length of the coronet, needs to balance the adhesion of the hoof capsule on the distal phalange and the ground reaction to the weight, which using equation (3.2) can be written as equation (3.3). On the r.h.s. of equation (3.3), the first term is the adhesion stress, which is proportional to the angular growth rate of the hoof (v_θ)_c and to the constant f⁰_adh that characterizes the adhesion of the hoof [44]; the second term is the component of the ground reaction to the weight applied on the surface of the hoof capsule (in-plane description) [39,40]. Here CZ_θ ≪ 1, g is the gravity constant and ρ the mass of the animal per surface area of contact between the animal's foot and the ground.
Physical condition required for the straight hoof capsule
As equation (3.3) describes the set of hoof shapes, the ideal case of a straight hoof can now be discussed. Let us assume that (P_θ)₀ is constant (electronic supplementary material, appendix SM.3) and that the growth rate is constant whatever the angular position considered. As a straight hoof implies CZ_θ = 0, the angular variation in the growth stress in this case is necessarily linked to the asymmetry of the equid foot via the adhesion term, in turn imposing a physical condition on (N_θ)₀. To determine (N_θ)₀, let us take the dorsal region as reference as far as the amount of interpapillary cells is concerned, i.e. N̄_θ = N_θ/(N_{θ=0})₀, and consider the 'initial state' as being the ideal straight hoof, namely (N̄_θ)₀ = (N_θ)₀/(N_{θ=0})₀, where the subscript '0' refers to the straight hoof. Using equation (3.3) in the dorsal region and for any angular position, the difference in the balances of stresses yields a relation between two variables, (ȳ_θ)_c and (N̄_θ)₀. However, a further relation between these variables exists if one assumes that the cells need to be in a certain biological state after a given biological time, t_B = t̄_B × τ₀, before their volume is allowed to collapse at the transition; in that case the relation holds whatever the angular position. As a result, any small or moderate change in the relative amount of cells in the interpapillary space from 1 to (N̄_θ)₀ shifts the position of the transition by (ȳ_θ)_c − (ȳ_{θ=0})_c ≈ (ȳ_{θ=0})_c ξ[(N̄_θ)₀ − 1], where ξ ≈ −0.62 (electronic supplementary material, appendix SM.4). Thus, as (ȳ_{θ=0})_c ≫ 1, one finds the ideal straight hoof condition. It is worth noting that any deviation from the latter relation should promote the formation of a curved hoof.
A kinematic point of view for the occurrence of a dorsal curvature
Following figure 1a, the gradient in growth rate is key to defining the dorsal curvature of a hoof. As equation (3.1) defines the angular growth rate of the hoof, it is possible to relate the gradient in growth rate to the dorsal curvature. To achieve this, let us consider that (N̄_θ)₀ is transformed to N̄_θ = N_θ/N_{θ=0} and assume that N_{θ=0} ≈ (N_{θ=0})₀. The relative growth rate along the coronet is proportional to the hoof curvature, in the form Cr₀ ≈ (v_θ)_c/(v_{θ=0})_c − 1 (figure 1a); considering the first order in N̄_θ − (N̄_θ)₀ of equation (3.1), one finds Cr₀ ≈ (α + βξ)(N̄_θ − (N̄_θ)₀), where α ≈ 0.31 and β ≈ 0.19 are the first derivatives of the growth rate with respect to N̄_θ and (ȳ_θ)_c, respectively, at θ = 0 (electronic supplementary material, appendix SM.5). Focusing on the quarter regions (θ = π/2) and using r₀ ≈ 5 cm typically, one finds a numerical slope of approximately 2.4 m⁻¹.
To validate the model, given that the interpapillary space and papillae form a closed system (namely, cells proliferating from the papillae should populate the interpapillary space in due course), the experimental relationship between the ability of cells to proliferate locally from the papillae at the quarter and dorsal regions and the average dorsal curvature of the hoof was determined using equine feet. The average dorsal curvature of the capsule was calculated using a dorso-ventral sagittal section of the hoof obtained by µCT scanning. The feet were then dissected to estimate the rates of progenitor cell proliferation from the papillae (average proportion of Ki-67-positive cells to total number of cells) at the dorsal (θ ≈ 0) and quarter (θ ≈ π/2) regions, as done in figure 2a, in order to estimate N_{θ=π/2}/N_{θ=0}. In order to validate N_{θ=0} ≈ (N_{θ=0})₀, the feet selected were those where the standard deviation in the proportion of Ki-67-positive cells at the dorsal regions was approximately 10% or less of the average value. Figure 3a shows a linear trend, with a slope of approximately 2.0 m⁻¹, between the dorsal curvature of hooves and N_{θ=π/2}/N_{θ=0}, with a magnitude similar to the predicted value. Using the slope and constant terms from the fit in figure 3a, it can be estimated that the curvature is null if (N̄_{θ=π/2})₀ ≈ 0.16, leading to f⁰_adh(v_{θ=0})_c ≈ 2.9 × 10⁴ Pa using typically γ_{θ=π/2} ≈ 2/3 [45], Z_{θ=0} ≈ 10 cm and e ≈ 1 cm.
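The trend in figure 3a is an ordinary least-squares line; the sketch below shows the comparison of fitted and predicted slopes on synthetic data. The data points are invented for illustration; only the ~2.0 m⁻¹ fitted slope, the ~2.4 m⁻¹ prediction and the 0.16 null-curvature ratio come from the text.

import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Hypothetical per-foot measurements of the proliferation ratio N_{pi/2}/N_0
ratio = rng.uniform(0.8, 1.6, 15)
# Synthetic curvature [m^-1] built around the reported ~2.0 m^-1 slope
curvature = 2.0 * (ratio - 0.16) + rng.normal(0.0, 0.1, 15)

fit = stats.linregress(ratio, curvature)
print(f"fitted slope = {fit.slope:.2f} m^-1 (model prediction ~ 2.4 m^-1)")
print(f"curvature is null at ratio ~ {-fit.intercept / fit.slope:.2f}")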
As a result, curved or 'dished' hooves result from an excess of cell proliferation in the quarter regions. To what extent physics is key in this process can only be clarified by using the full stress balance.
Physics of hoof shape: impact of chronic changes in horse weight
To further the knowledge of hoof growth, V(CZ_θ) was estimated using the hoof symmetry: firstly, by considering the medio-lateral symmetry of the hoof, leading to (∂V/∂Z_θ)|_{Z_θ=0} = 0; and secondly, as the heel is shorter than the toe and the magnitude of the projection of the ground reaction stress decreases from the quarters to the toe, (∂V/∂Z_θ)|_{Z_θ=Z_{θ=0}} < 0. In this context, a second-order development around CZ_{θ=0} can be envisaged, V(CZ_θ) ≈ V(CZ_{θ=0}) + a/2 × (CZ_θ − CZ_{θ=0})², where a > 0 is a constant. Therefore, using (v_{θ=π/2})_c/(v_{θ=0})_c − 1 ≈ Cr₀π/2, the difference in the balances of stresses between the quarter and toe regions can be determined, with δP_c = (P_{θ=π/2})_c − (P_{θ=0})_c and ΔZ = Z_{θ=0} − Z_{θ=π/2} > 0. Finally, noting C̄ = CZ_{θ=0} and 1/ρ₀ = a g e (1 − γ_{θ=π/2})²/[f⁰_adh (v_{θ=0})_c r₀ γ_{θ=π/2} π], the physical solution for the dorsal curvature at the lowest order in ρ is given by equation (3.4). Equation (3.4) stipulates that the maintenance of a straight hoof, i.e. C = 0, is possible if the gradient in growth stress matches exactly the asymmetrical design of the hoof, i.e. when V′₀ ≈ 0 or, equivalently, N̄_θ = (N̄_θ)₀; and/or if the mass per unit surface area of hoof is ρ ≈ ρ₀/V′₀. It is worth noting that in this latter case, by virtue of the definition of ρ₀, the ratio between the adhesion and ground reaction stresses must verify f⁰_adh(v_{θ=0})_c/agρ ≈ eV′₀(1 − γ_{θ=π/2})²/r₀γ_{θ=π/2}π; namely, to maintain a straight hoof, the variation in the hoof growth rate needs to be related to the variation of the horse mass in the form δ(v_{θ=0})_c ≈ +δρ.
In order to address these points, 129 ponies were selected and their body condition score (BCS) assessed using the Henneke nine-point scale [31,32], with a score of 1 corresponding to an emaciated/unwell horse and a score of 9 to an overweight/obese horse. As scores ranged between 3 and 8, three broad BCS categories were defined (underweight, normal and overweight), linked to, respectively, BCS ≤ 4 (n = 12), 4 < BCS ≤ 7 (n = 187) and BCS ≥ 8 (n = 20), where 'n' is the number of feet. The stress linked to the weight was estimated on the forefeet, knowing the animals' mass and foot circumference. The hoof dorsal curvature was calculated using a lateral picture of the hoof. Given that the weight appears as a second-order term in the quadratic equation related to the stress balance, only the extreme cases were plotted, i.e. BCS ≤ 4 and BCS ≥ 8, demonstrating a negative relationship between the stress linked to weight and the hoof curvature (figure 3b, p ≤ 10⁻⁴).
The concordance between the theory and the measurements can be underlined from the fit in figure 3b. In this context, one can estimate that a null curvature occurs for ρ₀/V′₀ ≈ 1.7 × 10⁴ kg m⁻², which, by considering the constant term of the fit corresponding to V′₀/Z_{θ=0} ≈ 9.6 m⁻¹ and assuming typically that Z_{θ=0} ≈ 10 cm, allows one to estimate firstly V′₀ ≈ 0.96 and secondly a slope of ≈ −5 × 10⁻⁴ m kg⁻¹, which is close to the −6 × 10⁻⁴ m kg⁻¹ deduced experimentally. Note also that by multiplying ρ₀/V′₀ by the gravity constant (g ≈ 9.8 m s⁻²), a magnitude for the ground reaction stress of ≈ 1.6 × 10⁵ Pa can be estimated, which has the same order of magnitude as the growth stress deduced from the histology pictures, ≈ 1.8 × 10⁵ Pa.
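The consistency checks in this paragraph are simple arithmetic; the sketch below reproduces them. The values are taken from the text, and the agreement with the histology-derived growth stress is an order-of-magnitude check only.

g = 9.8                     # gravity [m s^-2]
rho0_over_V0 = 1.7e4        # mass per unit area at null curvature [kg m^-2]
V0_over_Z = 9.6             # fitted constant term V'_0 / Z_{theta=0} [m^-1]
Z_dorsal = 0.10             # assumed dorsal length Z_{theta=0} [m]

V0_prime = V0_over_Z * Z_dorsal      # -> ~0.96 (dimensionless)
ground_stress = rho0_over_V0 * g     # -> ~1.7e5 Pa

print(f"V'_0 ~ {V0_prime:.2f}")
print(f"ground reaction stress ~ {ground_stress:.2e} Pa "
      f"(histology-derived growth stress ~ 1.8e5 Pa)")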
As V′₀ seems to be a positive constant, one can expect the angular gradient in the growth stress to vanish in the case of extreme BCSs. This point is further suggested since, in these conditions and from equation (3.4), V′₀ ≈ 2Z_{θ=0}(1 − γ_{θ=π/2})/πr₀γ_{θ=π/2}, which in turn allows one to estimate theoretically V′₀/Z_{θ=0} ≈ 6.4 m⁻¹, of an order of magnitude similar to the value found experimentally, ≈ 9.6 m⁻¹. This result suggests that for extreme BCSs the biology of hoof growth does not compensate for the physics associated with the asymmetry of the hoof capsule.
Thus, it is the magnitude of the pressure load linked to the equid's weight applied onto the distal edge of its hoof that drives the dorsal curvature, which also means that, for the equids studied, the straight-hoof condition, δ(v_{θ=0})_c ≈ +δρ, is not fulfilled. This statement can be deduced directly using equation (3.3) by comparing how the relative growth rate changes at the dorsal region as a function of a relative change in the horse mass. In this context, it can be shown that δ(v_{θ=0})_c ≈ −δρ (electronic supplementary material, appendix SM.5). Thus, the straight-hoof condition is never fulfilled, which in turn underlines a central issue regarding the equid hoof.
Discussion
In developing countries, owing to the fact that good husbandry and veterinary care are expensive to afford [46], between 70% [47] and 85% [46] of equids have a low BCS (less than 4 on the nine-point Henneke scale), and chronic pathological conformations of the foot, limb deformities and foot pain are widespread issues [4,46,47]. Given the social and economic importance of working equids, a multi-scale theoretical framework is provided to improve our knowledge of chronic hoof shape variations.
The main anatomical-biological-physical issue regarding hoof growth
Owing to the fact that a three-dimensional histological resolution at the cellular scale was not feasible and that no specific cellular organization was observed in the interpapillary space, a minimalist model considering the interpapillary cells as spherical was used. The concepts applied would not be affected if cells had a different geometry, as only geometric constants would change. Finally, the coherence between the theory and the experimental data provides a guarantee that this assumption is sound. As a result, the hoof growth rate is principally related to the number of keratinocytes duplicating from the papillae and how well these can swell while differentiating in the interpapillary space. The process whereby soft tissues make hard ones is essential in ungulates, as the hoof is a weight-bearing element and the mechanical resilience of the hoof has to be adjusted to the horse weight. This process is in theory possible by the ability of dead soft structures to …
"Physics",
"Environmental Science",
"Biology"
] |
AM fungi patchiness and the clonal growth of Glechoma hederacea in heterogeneous environments
The effect of the spatial distribution of AM fungi on individual plant development may determine the dynamics of the whole plant community. We investigated whether clonal plants display, as for other resources, a foraging or a specialization response to adapt to the distribution of AM fungi. Two separate experiments were conducted to investigate the response of Glechoma hederacea to a heterogeneous distribution of a mixture of three AM fungal species, and the effects of each species on colonization and allocation traits. No specialization and only a limited foraging response to the heterogeneous distribution of AM fungi were observed. An effect of the AM fungal species on plant mass allocation and ramet production, but not on spacer length, was detected. Two possible explanations are proposed: (i) the plant's responses are buffered by differences in the individual effects of the fungal species or in their root colonization intensity; (ii) the initial heterogeneous distribution of AM fungi is perceived as homogeneous by the plant, either through reduced physiological integration or owing to the transfer of AM fungal propagules through the stolons. Microscopic and DNA sequencing analyses provided evidence of this transfer, thus demonstrating the role of stolons as dispersal vectors of AM fungi within the plant clonal network.
A ramet may specialize in acquiring the most abundant resource (division of labor theory 19) and share it throughout the network. This specialization can involve modifications in ramet resource allocation patterns 20,21, whereby a higher root/shoot ratio is observed in ramets developing in nutrient-rich patches, and a lower ratio in light-rich patches 20,22.
Clonal foraging and ramet specialization have been demonstrated in response to soil nutrient heterogeneity 22-25. However, under natural conditions, plant nutrient uptake is mostly mediated by symbiotic micro-organisms such as arbuscular mycorrhizal (AM) fungi, which colonize ~80% of terrestrial plants 26. AM fungal symbionts (i.e. Glomeromycota) colonize roots and develop a dense hyphal network, exploring the soil to 'harvest' mineral nutrients for the plant's benefit 26. Plants with mycorrhized roots can thus attain higher rates of phosphorus and nitrogen absorption (×5 and ×25, respectively) than plants with non-mycorrhized roots 27,28. In turn, AM fungi obtain from plants the carbohydrates required for their survival and growth 29,30. Under natural conditions, plant roots are colonized by a complex community of AM fungi 31. These fungi display different levels of cooperation, ranging from good mutualists to more selfish ones (i.e. cheaters 32). Within the root-colonizing fungal assemblage, plants have been shown to preferentially allocate carbon to the best cooperators, thereby favoring their maintenance over cheaters 33. The additional nutrient supply provided by AM fungi can be assimilated to a resource for the plant. An important emerging expectation is therefore that plants may respond to the heterogeneous presence of AM fungi as they do to a nutritive resource: the plant might forage (optimal foraging theory) or specialize (division of labor theory) in response to the presence of AM fungi. The opposite hypothesis is that AM fungi and foraging or specialization are alternative strategies to cope with resource heterogeneity, implying that plants with clonal mobility do not rely on AM fungi to respond to this heterogeneity.
Our aim in this study was to analyze a plant's plastic response to AM fungal heterogeneity by performing two experiments under controlled conditions with the clonal herb Glechoma hederacea. In the first experiment, we tested the plant's foraging and specialization responses to the heterogeneous distribution of AM fungi. The treatments consisted of a mixture of three species of AM fungi that had been shown to display various degrees of cooperativeness in previous studies. Two assumptions were tested: (i) according to the optimal foraging theory, clones should aggregate ramets in the patches containing AM fungi by reducing their internode lengths; and (ii) according to the division of labor theory, clones should specialize by producing ramets with a higher allocation to roots in the presence of AM fungi than in their absence. To better understand the results obtained in experiment 1, and because of the potential impact of the different levels of cooperation of the fungi involved in this symbiosis, we carried out a second experiment to test the effect of AM fungal identity on the foraging and specialization responses of G. hederacea. We tested (i) the effect on plant traits of the individual presence of each of the three AM fungal species used in the assemblage treatment, and (ii) the assumption that AM fungal species differ in their effects on the traits involved in the foraging and specialization responses. In both experiments, the performance of clonal individuals was expected to be reduced in the absence of AM fungi.
Results
Variation in G. hederacea traits was not significantly influenced by plant genotype in either experiment (i.e. the inter-genotypic variance was not greater than the intra-genotypic variance).

Experiment 1: Effect of heterogeneous AM fungi distribution on G. hederacea foraging and specialization responses. The hypothesis of modified foraging and specialization responses of Glechoma hederacea to the patchiness of AM fungal presence was tested by comparing the internode lengths and R/S ratios between the treatments for the 5th, 6th, 10th and 11th ramets (see Methods for details on ramet selection and experimental design).
A significant effect of the AM fungal treatment was found on the 10th internode length (P = 0.005; F = 5.74) (Fig. 1), with a longer internode in the PA treatment (AM fungi initially present then absent) than in the absence (A) and presence (P) treatments (results are presented in Table 1). Conversely, no significant effect was found for the 5th ramets (P = 0.71; F = 0.45) or 6th ramets (P = 0.15; F = 1.92) (Fig. 1). The 11th ramets seemed to display the same response patterns as the 10th ramets, but no significant differences were detected between the treatments (P = 0.93; F = 0.15), owing to a partially bimodal distribution of the data in the 'P' treatment, with a few individuals exhibiting longer stolons. In addition, the number of ramifications produced by the 5th, 6th, 10th and 11th ramets was not significantly affected by treatment. No changes in the R/S ratio in response to the AM fungal treatment were detected in any of the four tested ramets.

Figure 1 caption (excerpt): (B) Specialization response: root:shoot ratio (R/S) of the 5th, 6th, 10th and 11th ramets under the four applied treatments (g of roots per g of shoots after drying). Absence (blue bars), Presence (grey bars), Presence-Absence (orange bars), Absence-Presence (green bars). Statistical significance of the internode length or R/S variations between treatments: NS, not significant; **P < 0.01.
As regards performance, G. hederacea growth rate tended to vary with the AM fungal treatment (P = 0.067; F = 2.7), with a tendency for slower growth in the 'A' treatment. No differences between treatments were detected for clone total biomass (P = 0.75; F = 0.39), which indicated that the clone, as a whole, did not exhibit any difference in biomass production or performance.

Experiment 2: Effect of AM fungi identity on G. hederacea traits. The hypothesis that G. hederacea foraging and specialization traits were modified by the AM fungal species was tested by comparing the allocation, architectural and growth traits of four treatments inoculated with different AM fungal species (see Methods for details on the experimental design). Primary stolon length (an architectural trait) tended to vary (P = 0.07; F = 2.83) in response to the presence and species of AM fungi, whereas the number of ramifications (P = 0.25; F = 1.49) did not (results are presented in Table 2). Allocation to stolons was significantly affected by the presence and species of AM fungi (P = 0.017; F = 4.51), with plants inoculated with Glomus intraradices allocating significantly fewer resources to stolons (Fig. 2) and more to shoots (P = 0.019; F = 4.24) than plants without AM fungi. The allocation to roots, however, was not dependent on the treatment (P = 0.68; F = 0.50).
As regards performance, changes in ramet production per biomass unit (P = 0.038; F = 3.55) were detected with G. intraradices inducing less ramet production than G. custos, whereas the treatments without AM fungi and with G. clarum did not differ significantly from the other two treatments (Fig. 3). No treatment-dependent change in total biomass was observed (P = 0.57; F = 0.67).
Discussion
The plants did display some foraging behavior in response to AM fungal heterogeneity, as an elongation of the internodes was observed in patches without AM fungi after the plant had experienced patches with AM fungi. This behavior would correspond to an avoidance of resource-poor patches, as expected from the optimal foraging theory. However, this behavior was only detected at a particular ramet age (10th ramets), indicating a possible role of the ontogenic state in the development of the plastic response 34. This may be due to a 'lag time' in the plant's response based on the need for environmental sampling. Indeed, Louâpre et al. (2012) demonstrated that clonal plants may need a minimum number of sampling points as benchmarks in order to perceive and respond to resource availability 35. In their study, Potentilla reptans and P. anserina started to respond to the treatment after the 5th internode, suggesting a strong effect of patch size. A similar patch-size effect had already been demonstrated in modeling studies 10,36. No plastic modifications corresponding to a ramet specialization of G. hederacea in response to AM fungal spatial heterogeneity were found either. Contrary to the results expected under the specialization theory, biomass was not preferentially allocated to the roots in patches with AM fungi or to the shoots in patches without AM fungi. This absence of response was recorded for all the ramet ages tested.
These results (a mild foraging response and no specialization) give credit to the theory supported by Onipchenko & Zobel (2000) that species with high mobility do not rely on AM fungi to cope with resource heterogeneity 37. Glechoma, with its high clonal mobility, should thus show no response to the presence of AM fungi. However, our results do not fit the literature predictions for specialization and foraging responses 38. This divergence may be explained by two alternative hypotheses that are developed in the following sections. The first explanation is linked to the occurrence of an individual effect of the AM fungal species on plant traits, which may predominate over, or modify, the response to the presence/absence of AM fungi when all three species occur together (experiment 2); the second is linked to reduced physiological integration, either due to a direct effect of AM fungi on this plant trait or to the absence of a clear contrast between the different patches sensed by the plant. In our second experiment, we demonstrated that the architectural traits involved in the plant's foraging response were not affected by the species of AM fungi tested, which is consistent with the weak response detected in the first experiment. On the contrary, significant changes in resource allocation traits (linked to the specialization response) were detected, depending on the species of AM fungus. Only one species, G. intraradices, induced a change in allocation by the plant in comparison to the treatment without AM fungi, which led to an increased allocation to shoots at the expense of stolons. Modifications of plant phenotype depending on the AM fungal species have already been observed for such traits 39,40. These authors identified a significant effect of Glomus species isolates on branching, stolon length and ramet production in Prunella vulgaris and Prunella grandiflora. In the first analysis of an AM fungal genome, Tisserant et al. (2013) revealed pathways attributed to the synthesis of phytohormones or analogues 41; such molecules would have a direct effect on the host phenotype. In the individual effect observed here, the plant's response in the presence of the G. intraradices symbiosis was coupled with decreased plant performance, due to a diminution of ramet production relative to biomass in this treatment, which, in contrast with the G. custos treatment, led to a decrease in the potential number of descendants of the clone. According to experiment 1, root colonization by an inoculum containing the three species had no effect on the plant traits associated with specialization and foraging. This suggests two alternative hypotheses: (i) G. intraradices may be less cooperative than G. custos with Glechoma hederacea, and the result is a consequence of the plant's rewarding process towards the more cooperative fungus 33; and/or (ii) root colonization by G. custos or G. clarum buffers the effect of G. intraradices due to a 'priority effect' (i.e. the order of arrival during colonization as a key to fungal community structure in roots) 41,42.
To test this, the mycorrhization intensity of the three AM fungal species inoculated in the first experiment would need to be assessed by qPCR. Alternatively, the combined effects of the three AM fungal species on plant phenotype might result in the environment not being perceived as heterogeneous by the plant. This hypothesis is developed in the following section. The intraclonal plasticity predicted by the foraging and division of labor theories is based on the ability of ramets to sense environmental heterogeneity, and to share information and resources within the clonal network, to locally adapt and optimize the performance of the whole clone. The weak response of G. hederacea to AM fungal heterogeneity could thus be explained by a decrease in physiological integration that reduces the level of resource-sharing within the clone and prevents the plant from developing an optimized foraging or specialization response. This diminution could initially be due to the presence of AM fungi. Only a few studies have been carried out on the effect of AM fungi on the degree of integration 43 . These authors demonstrated that AM fungi led to reduced physiological integration in the clonal plant Trifolium repens when grown in a heterogeneous environment. This effect was dependent on the presence and richness of AM fungal species. Whether this observed diminution of physiological integration would be due to a direct manipulation of the host plant phenotype by the fungi remains, as far as we know, unknown. Secondly, this diminution may depend on the individual plant's perception of environmental conditions that might be sensed as homogeneous because the patch contrast is smaller than expected. A reduction of plant integration is expected when the maintenance of high physiological integration is more costly than beneficial 44,45 , e.g. when the environment is resource-rich, not spatially variable 46 or insufficiently contrasted 10,47 . Such a reduced contrast might result from the effect of the three AM fungal species on the plant phenotype (when used as a mixed inoculum), which is unlikely. A more probable mechanism of environment homogenization could result from AM fungal transfer through the stolons. Scanning electron microscopy of the clone cultures (see protocol in supplementary material) revealed the presence of hyphae on the stolon surface (Fig. 4). In addition, several cells close to the external surface of the stolon cross-section were invaded by structures which could be interpreted as fungi. DNA sequencing of stolon samples (Fig. 5) confirmed these results and demonstrated the presence of AM fungi in the stolons. This suggests that fungi can be transferred from one ramet to another, at least by colonization of the stolon surface (as shown in Fig. 4A) and/or within the stolon (Fig. 4B). Whether fungi are passively or actively transferred through the plant's stolon tissues, and hence to all related ramets, remains an open question. Further studies are therefore needed to confirm these fungal transfers to plant clones and to measure their intensities in contrasted environments.
Studies of the response of clonal plants to environmental heterogeneity have classically focused on abiotic heterogeneity 48,49. Our study is the first to investigate the clonal response to a heterogeneous distribution of AM fungi, based on the assumption that AM fungi can be regarded as a resource for the plant. However, in response to the heterogeneous distribution of AM fungi, G. hederacea clones displayed only a weak foraging response and no specialization, which suggests, respectively, that clones do not aggregate preferentially in patches with AM fungi or maximize the proportion of their roots in contact with AM fungi. We provide a first explanation by highlighting the impact of AM fungal identity on plant phenotypes, and more particularly on the allocation traits involved in specialization. More importantly, we provide evidence that stolons might be vectors for the transfer of micro-organisms between ramets, thereby buffering (through this dispersal of fungi) the initial heterogeneous distribution. If this is true, stolons will have to be regarded in a different way, and be seen as ecological corridors for the dispersal of micro-organisms allowing a continuity of partnership along the clone. Considering the plant as a holobiont 31,50, this novel view of stolon function is expected to stimulate new ideas and understanding about the heritability of microbiota in clonal plants.

Methods

Biological material. We used the clonal, perennial herb Glechoma hederacea, a common Lamiaceae in woods and grasslands. G. hederacea clones produce new erect shoots at the nodes, at regular intervals of 5 to 10 cm, on plagiotropic monopodial stolons (i.e. aboveground connections). Each ramet consists of a node with two leaves, a root system and two axillary buds. In climatic chambers with constant conditions, G. hederacea does not flower and displays only vegetative growth 12. This species is known to exhibit foraging behavior 12,22,45 and organ specialization 22 in response to nutrient or light heterogeneity. The ramets used in our experiments were obtained from the vegetative multiplication of 10 clonal fragments taken from 10 different locations sufficiently spaced to obtain different genotypes. Plants were cultivated for three months under controlled conditions to avoid parental effects linked with their original habitats 51. Vegetative multiplication was carried out on a sterilized substrate (50% sand and 50% vermiculite, autoclaved at 120 °C for 20 minutes) to ensure the absence of AM fungal propagules. For each experiment, the transplanted clonal unit consisted of a mature ramet (leaves and axillary buds) with one connective internode (to provide resources to support ramet survival) 52, and without roots (to avoid prior mycorrhization). The AM fungal inocula used in both experiments were Glomus species: Glomus intraradices (see Stockinger et al., 2009 for a discussion of G. intraradices reclassification 53), Glomus custos and Glomus clarum.

Figure 5 caption (excerpt): …62. Bootstrap values at the nodes were produced from 200 replicates. Only values above 50 are shown. Multiple alignment and tree reconstruction were performed using SEAVIEW 63. OTUs were obtained from a Glechoma hederacea stolon after DNA extraction using the DNEasy plant mini kit (Qiagen), PCR amplification using fungal primers NS22b and SSU817, and Illumina MiSeq sequencing. In addition to reference sequences within the Glomeromycota phylum, we sampled 13 sequences among the best BLAST hits (†).
These AM species were chosen to limit phylogenetic differences between the fungal life-history traits 54. G. intraradices has been shown to induce beneficial P uptake in Medicago truncatula 33. The use of three different AM species also ensured a range of cooperativeness among the symbionts. The inocula used in the two experiments consisted of a single-species inoculum produced in in vitro root cultures (provided by S. L. Biotechnologia Ecologica, Granada, Spain) or a mixture of equal proportions of all three inocula. The inoculations consisted of an injection of 1 mL of inoculum directly above the roots, and were administered when the plants had root lengths of 0.5 to 1 cm.

Experimental conditions. Experiment 1 was designed to test the foraging and specialization responses of G. hederacea to the heterogeneous distribution of AM fungi. Experiment 2 tested the effect of the AM fungal species on the plant traits involved in these responses.
Both experiments were carried out with cultures grown on the same sterile substrate (50% sand, 50% vermiculite) in a climate-controlled chamber with a diurnal cycle of 12 h day/12 h night at 20 °C. Plants were watered with deionized water every two days to control nutrient availability. Necessary nutrients were supplied by watering the plants every 10 days with a fertilizing Hoagland's solution with strongly reduced phosphorus content, to ensure ideal conditions for mycorrhization (i.e. phosphorus stress) 55-57. At each watering, the volumes of deionized water and fertilizing solution per pot were 25 mL and 250 mL, respectively, for the first and second experiments. We also controlled nutrient accumulation during the experimental period by using pierced pots that allowed evacuation of the excess watering solution. To prevent nutrient enrichment due to the inoculum, AM fungi-free pots were also inoculated with a sterilized inoculum (autoclaved at 100 °C for five minutes).
Experiment 1: Effect of heterogeneous AM fungal distribution on G. hederacea foraging and specialization responses.
The responses of G. hederacea to four different spatial distributions of AM fungi were tested. G. hederacea was grown in a series of 11 consecutive pots: two homogeneous treatments with the presence (P) or absence (A) of AM fungi in all pots, and two heterogeneous treatments with two patches of five pots, either presence then absence (PA) or absence then presence (AP) (Fig. 1). The two latter treatments were included to take into account a potential effect of ramet age on the plant's response to heterogeneity. These treatments were replicated for 10 clones of Glechoma hederacea (see Methods section 'Biological material' for details on the plants used). Each clone was grown in plastic pots (8 × 8 × 7 cm³) filled with sterile substrate. Only one ramet was allowed to root in each pot, and plant growth was oriented in a line by removing lateral ramifications. The initial ramet, in all treatments, was planted in a pot without AM fungi. For each treatment, the inoculum consisted of a mixture of the three AM fungal species (G. clarum, G. custos and G. intraradices). Inoculations started with the second pot of each line, which actually contained the fourth ramet of the clone (exceptionally, the first three ramets rooted in the same first pot owing to internode shortness, see Fig. 1). Inoculations were administered to each ramet separately, when the ramet had roots 0.5 to 1 cm in length, to avoid a ramet-age effect on the AM fungal colonization process.
The clones were harvested when the final ramet (number 13) had rooted in the 11th pot. This ensured that each clone had 10 points for sampling environmental quality. The 5th, 6th, 10th and 11th ramets of each clone in the pot line (Fig. 6) were used for statistical analyses. These ramets corresponded to the second and third ramets experiencing the current patch quality. Indeed, Louâpre et al. (2012) emphasized the role of the 'past experience' of the clone in developing a plastic response. The choice of these four ramets thus ensured that the clone had enough sampling points to assess the quality of its habitat, i.e. of the patches where AM fungi were present or absent in the heterogeneous treatments, and to adjust accordingly when initiating new ramets 35. Each ramet was carefully washed after harvesting. The foraging response was assessed by measuring the length of the internode just after the ramet. An aggregation of ramets, with shortened internodes, was expected in patches where AM fungi were present, and an avoidance of patches, i.e. the production of longer internodes, was expected where AM fungi were absent. Modifications in ramification production linked to the effect of the treatment were checked by recording the number of ramifications produced by the ramets throughout the experiment. The specialization response was examined by measuring the root/shoot ratio (R/S), i.e. the biomass allocated to the below- and above-ground resource acquisition systems, after separating the roots and shoots and oven-drying for 72 h at 65 °C. We expected a higher R/S ratio in patches where AM fungi were present than in patches where they were absent. Clone performance was assessed from (i) the total biomass of the clone, calculated as the sum of ramet roots, shoots and stolons after oven-drying for 72 h at 65 °C, and (ii) the growth rate, calculated as the number of days needed for the clone to develop the 10 sampling ramets, i.e. the number of days between the rooting of the 4th ramet and the final harvest.

Experiment 2: Effect of AM fungal identity on G. hederacea performance and traits. The effects of individual AM fungal species on G. hederacea foraging and specialization traits were tested using four culture treatments: (1) no AM fungi; (2) Glomus custos; (3) Glomus intraradices; and (4) Glomus clarum. Each treatment was replicated eight times, with four related ramets assigned to each treatment replicate (32 clones in total), to control for plant-genotype effects. The initial ramet of each clone had previously been cultivated on sterile substrate to ensure root-system development and facilitate survival after transplanting. The initial ramets were then transplanted into pots (27.5 × 12 × 35 cm³) filled with substrate. The AM fungal inoculations consisted of three injections of 1 mL of inoculum directly onto the roots of the first three rooted ramets, to ensure colonization of the whole pot. The plants were harvested after six weeks. The following traits involved in foraging were measured: (i) the longest primary stolon length (of order 1), as an indicator of the maximum spreading distance of space colonization; and (ii) the number of ramifications, as an indicator of lateral spreading and clone densification. We also measured biomass allocation to the roots, shoots and stolons at the clone level, i.e. the traits involved in the specialization response, after oven-drying for 72 h at 65 °C.
Plant performance for the entire clone was determined from: (i) the total biomass, calculated as the sum of the dry weights of the shoots, roots and stolons after oven-drying for 72 h at 65 °C, and (ii) the number of ramets, i.e. the number of potential descendants. Performance was expected to be higher in pots inoculated with fungi and to differ depending on the fungal species.
Statistical analysis. For experiment 1, to test whether G. hederacea developed a plastic foraging (internode length) or specialization (R/S ratio) response to the heterogeneous distribution of AM fungi, ANOVA analyses were performed using the linear mixed-effects model procedure in R 3.1.3 [58] with the packages "nlme" [59] and "car" [60]. Ramets of the same age were compared between genotypes to control for a possible effect of ramet age.
For experiment 2, to determine whether the species of AM fungi induced changes in plant traits and performance, ANOVA analyses were performed using linear mixed models with the same R packages and version described above. Resource allocation was tested using the total clone biomass as a covariate to take into account the trait variance associated with clone growth.
In both experiments, genotype-induced variance and data dependency were controlled by considering the treatment (four modalities) as a fixed factor and the plant-clone genotype as a random factor. The effect of genotype was assessed by comparing the intra- and inter-genotype variance and was considered significant when the inter-genotype variance was strictly higher than the intra-genotype variance. When a significant effect of treatment was detected by ANOVA, post hoc contrast tests were performed using the "doBy" package [61] to test for significant differences between modalities. When necessary, data were log-transformed to ensure normality of the residuals. The total clone biomass (summed dry weights of shoots, roots, and stolons) was used as a covariate to account for variance due to differences in clone performance.
Fig. 1: Ramets were forced to root in different pots and lateral ramifications were removed to orient growth in a line. Four treatments of AM fungal distribution were applied based on the presence or absence of AM fungi in the pots: Absence (A), 10 pots without AM fungi; Presence (P), 10 pots with AM fungi; Presence-Absence (PA), five pots with AM fungi followed by five pots without; Absence-Presence (AP), five pots without AM fungi followed by five pots with.
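A minimal sketch of the mixed-model design described above, assuming Python with statsmodels as a stand-in for the authors' R/nlme workflow (the data file and its columns trait, treatment and genotype are hypothetical):

# Minimal sketch (not the authors' R/nlme code): a linear mixed model with
# treatment as a fixed effect and plant genotype as a random intercept.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical table: one row per measured ramet or clone.
df = pd.read_csv("ramet_traits.csv")  # columns: trait, treatment, genotype

# The random intercept per genotype controls for clone-related data dependency.
model = smf.mixedlm("trait ~ treatment", data=df, groups=df["genotype"])
result = model.fit()
print(result.summary())  # fixed-effect tests stand in for the treatment ANOVA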
"Biology",
"Environmental Science"
] |
Autonomous Simultaneous Localization and Mapping Based on Line Tracking in a Factory-Like Environment
This study is related to SLAM, also known as simultaneous localization and mapping, which is a highly important and indispensable capability for autonomous mobile robots. SLAM systems provide both a map of the environment and the agent's localization within it. However, while performing SLAM in an unknown environment, the robot can be navigated in three different ways: by user guidance, by random movements in an exploration mode, or by exploration algorithms. User guidance and random exploration have drawbacks: the user may not be able to observe the agent, and a random process may take a long time. To address these problems, a new, autonomous exploration algorithm for SLAM systems is sought. To this end, a new kind of left-orientated autonomous exploration algorithm for SLAM systems has been developed. To show the algorithm's effectiveness, a factory-like environment is set up on the ROS (Robot Operating System) platform and the navigation of the agent is observed. The results of the study demonstrate that it is possible to perform SLAM autonomously in any similar environment without the need for user interference.
Introduction
It is a well-known fact that robotic applications have been increasing day after day, and robots now assist humans in areas ranging from health care to industrial applications [8], [12], [14], [18] and [28]. In order for a robot to fulfil a task, it has to know its location and what the world around it looks like. The problem of determining where the robot is, is known as the localization problem, and the problem of constructing a map of the environment is known as the mapping problem [3] and [25]. Although these two issues can be tackled separately, in some cases the robot can be given neither its location nor a map. Therefore, in such situations it is necessary to build a map of an environment while simultaneously localizing the robot within this map. When the literature is scrutinized, it can be seen that this problem is called simultaneous localization and mapping, or SLAM for short [6]. SLAM deals with the construction of a map of an environment while the robot concurrently localizes itself within it. Because it can grant autonomy to the robot, SLAM has been seen as a 'holy grail' [6], and it is an important milestone for mobile robotic applications. From this point of view, a mobile robot is able to know where it is and where to go by courtesy of SLAM.
Many algorithms have been presented for SLAM, from 2-D and metric maps to 3-D or topological ones, and from filter-based approaches to vision-based ones [1], [11], [19], [28] and [30]. According to the related previous studies, robots can be guided in an unknown environment in three different ways: • The first is user navigation, which can be considered an ideal and efficient solution because it is based on human observation. By way of this method, the robot can be navigated to an unmapped area. One limitation of this method is that it is not clear what happens if the user is unable to observe the robot or the exploration area.
• The second guidance method is random exploration. The robot runs in an exploration mode, exploring the area with random orientations and movements. A critical weakness of this method, however, is that it takes a long time to cover the whole area [14], [21], [22], [24] and [29].
• The third method relies on special algorithms for autonomous navigation and exploration. When these types of algorithms are combined with SLAM, the combination is usually called active SLAM, which ensures full autonomy for the mobile robot. These approaches generally benefit from occupancy grid maps: the environment is split into grids and the robot is navigated to the unexplored regions [13], [24] and [31].
The first two methods suffer from some serious disadvantages, as mentioned above, while the algorithms in the third class generally rely on laser measurements. In this context, the presented study falls under the third class of active SLAM algorithms. Our approach differs from existing algorithms because it is image-based instead of relying on laser scanning.
Simultaneous Localization and Mapping (SLAM)
The first serious discussions and analyses of SLAM emerged during the late 1980s. The idea of bringing together probability theory and robotics was the heart of the matter [5] and [23]. Notable progress on the solution of the problem followed the implementation of Bayes-based filters, such as the Kalman Filter (KF) for linear systems and the Extended Kalman Filter (EKF) for non-linear ones [8], [24] and [26].
SLAM is mathematically described in probabilistic form, in which the sensor and control data of the robot are the inputs, and a map and the pose of the robot are the outputs, Eq. (1) [6], [8], [21] and [22]:

p(x_{1:t}, m \mid z_{1:t}, u_{1:t}),    (1)

where m is the map created with the algorithm and x is the pose information of the robot. These two terms also stand for the global state parameters of SLAM. On the other hand, z denotes the sensor observations and u the robot control inputs. In the first stage of EKF-SLAM, the state is predicted from the robot's previous state and the control input. In the second stage, the prediction is updated using the sensor observations.
However, the traditional EKF method suffers in cases of non-Gaussian noise and from a large covariance matrix when the number of landmarks is relatively high [3], [4] and [6]. Because of these downsides of EKF-SLAM, new methods have been developed. The most striking development at this point was the implementation of Particle Filters (PF) for the SLAM problem. However, the PF also becomes costly because each particle represents an individual solution. The remarkable remedy was the implementation of the Rao-Blackwellization decomposition method along with the PF; this method is also called FastSLAM. By means of this decomposition, the SLAM problem turns into a classical Monte Carlo Localization (MCL) of the robot combined with traditional EKF mapping, Eq. (2) [1], [9], [15], [16] and [21]:

p(x_{1:t}, m \mid z_{1:t}, u_{1:t}) = p(x_{1:t} \mid z_{1:t}, u_{1:t}) \prod_{i=1}^{M} p(m_i \mid x_{1:t}, z_{1:t}).    (2)

Assuming the landmark locations to be independent, localization and mapping can be handled separately. Thanks to this factorization, M small (2-by-2) matrices are computed instead of tackling the large M-by-M covariance matrix faced in conventional EKF. With this development, the computation speed of the algorithm is notably increased, and the model can represent non-Gaussian distributions. From this point of view, each particle holds its own belief about the potential solution (the map and the pose of the robot). The particles are updated at every iteration as the environment is observed.
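As a minimal, runnable illustration of one FastSLAM-style iteration (pose prediction, weighting, per-particle map update, resampling), here sketched on a hypothetical 1-D world with a single landmark rather than the authors' implementation:

# Toy Rao-Blackwellized particle filter step: pose x and one landmark m,
# with range measurement z = m - x + noise. Noise variances are assumed.
import numpy as np

R_MOTION, R_MEAS = 0.05, 0.1

def rbpf_step(particles, u, z, rng):
    weights = np.empty(len(particles))
    for i, p in enumerate(particles):
        # 1) Sample a new pose from the motion model.
        p["x"] += u + rng.normal(0.0, np.sqrt(R_MOTION))
        # 2) Weight the particle by the measurement likelihood.
        z_pred = p["m_mean"] - p["x"]
        s = p["m_var"] + R_MEAS                  # innovation variance
        weights[i] = np.exp(-0.5 * (z - z_pred) ** 2 / s) / np.sqrt(2 * np.pi * s)
        # 3) Per-particle (here 1-D) Kalman update of the landmark estimate.
        k = p["m_var"] / s
        p["m_mean"] += k * (z - z_pred)
        p["m_var"] *= 1.0 - k
    weights /= weights.sum()
    # 4) Resample: replace low-weight particles with high-weight ones.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return [dict(particles[j]) for j in idx]

rng = np.random.default_rng(0)
particles = [{"x": 0.0, "m_mean": 5.0, "m_var": 1.0} for _ in range(100)]
particles = rbpf_step(particles, u=0.2, z=4.7, rng=rng)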
Implemented System
A factory-like environment has been set up to demonstrate the effectiveness of the developed autonomous algorithm. The lines used to separate the sections from each other form the working environment (Fig. 1). A pure line-follower algorithm is likely to produce unstable results, considering that the robot has to turn when it faces rotation points or the endpoint of a line. To overcome this problem, a left-orientated follower algorithm has been developed, and the movements of the robot are governed within this framework.
Evaluation of Images
The robot takes an image (Fig. 2(a)) of the environment via a camera mounted on it. With these images, the navigation path of the robot can be determined by means of image processing algorithms.
The images taken from the robot's camera are processed continuously. During this process, several types of images are produced, such as HSV and masked ones (Fig. 2 and Fig. 3). Segmented images are used to determine the foreground path.
First of all, the images are transformed into HSV images. The aim of this transformation is to provide a more reliable result in the evaluation of the images, because HSV images are more robust to changes in brightness, shadow effects, etc. Otherwise, areas covered by shadows might be misidentified (Fig. 3(b)).
After applying the HSV transformation to the image, a further simplification is adopted, because assessing the whole image would both be difficult and increase the computational time. To this end, the image is divided into three sub-regions, x_1, x_2 and x_3. The x_1 and x_3 parts are multiplied by 0 to exclude the irrelevant regions; only the region x_2 is kept, and decisions are made based on that region. In this way, only the region to be tracked is extracted from the whole image, and the robot is steered onto the true path. The extents of the regions were determined empirically after a number of trials (Fig. 4 and Fig. 5), where h is the height of the image and w is its width.
In addition, another masked image is needed to determine the left orientation. Therefore, part of the right side of the original masked image (w − x_4) is also set to zero, so that the left orientation can be decided by means of the difference in image moments (Fig. 5 and Subsec. 3.3).
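A minimal sketch of this masking step, assuming OpenCV in Python; the region geometry (horizontal bands for x_1-x_3, a column bound for x_4) and the HSV threshold for the line colour are illustrative assumptions, not the authors' tuned values:

# Sketch of the HSV conversion and sub-region masking described above.
import cv2
import numpy as np

def mask_line_regions(bgr):
    h, w = bgr.shape[:2]
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)   # more robust to brightness/shadow
    line = cv2.inRange(hsv, np.array([20, 80, 80]), np.array([35, 255, 255]))
    # Assumed geometry: keep only the middle band x_2 for decisions.
    x1, x3 = h // 3, 2 * h // 3
    center = line.copy()
    center[:x1, :] = 0                           # region x_1 multiplied by 0
    center[x3:, :] = 0                           # region x_3 multiplied by 0
    # Second mask for the left-orientation decision: zero the right part too.
    x4 = w // 2
    left = center.copy()
    left[:, x4:] = 0                             # (w - x_4) columns set to zero
    return center, left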
The Robot, Sensor and Movement of the Robot
TurtleBot II, also known as the Kobuki TurtleBot, is used for the demonstration in this study. This robot is widely accepted and used in academic experiments. It has linear movement along the +x/−x direction and rotation about the +z/−z axis. The robot was preferred for this study because it is cheap, allows a variety of sensors to be mounted, and can be controlled with open-source software [7]. It steers by means of a differential drive; a detailed analysis of its kinematic and movement equations is given in Cook [2]. The robot generally comes with an integrated camera, usually an Asus Xtion or a Microsoft Kinect; the latter is used in this study. The Kinect provides depth data as well as RGB.
It scans an environment via infrared light over a field of view of 57 degrees horizontally and 43 degrees vertically. The depth information from the sensor is generally accepted as reliable up to 5 meters. Since its announcement in 2010, this sensor has been widely used in various robotics applications because of its low cost and the valuable data, especially depth, that it provides. The depth information obtained from the workspace can be converted into 2-dimensional (2-D) laser data. To do this, the depth information is smoothed by means of different filters and a single horizontal line is excised from the measurement data. Thus, the robot can behave as if it had a 2-D laser scanner mounted on it [29]. Taking into account the bulky structure and expense of laser sensors, this kind of sensor offers a good option for 2-D scanning, particularly when edge detection or similar processing of an environment is the main aim. A comparative study of using the Kinect like a laser sensor can be found in [17].
According to the presented left-orientated algorithm, the robot is driven with three basic motions: forward movement, and left or reverse turns.
First: Without any orientation decision, the robot moves forward at a rate of 0.2 m·s−1 and tracks the line by means of a P-controller acting on the deviation from the line center.
Second: Upon a left-orientation decision, the robot turns left 90 degrees. In this case, the forward speed is 0 m·s−1 while the robot completes the full 90-degree turn to the left.
Third: While following the line, the robot may encounter places where the line ends. Under these circumstances, the forward speed of the robot is set to 0 m·s−1 and it turns in reverse, i.e. 180 degrees.
With the help of these three movements, the robot can be driven autonomously according to the proposed algorithm, as sketched below.
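An illustrative sketch of the three motion primitives (the 0.2 m/s forward speed and the 90/180-degree turns follow the text; the angular speed used during turns is an assumption):

# Map each decision of the left-orientated algorithm to a motion command.
import math

TURN_SPEED = 0.5  # rad/s during turns (assumed, not stated in the paper)

def motion_command(action):
    """Return (linear m/s, angular rad/s, total turn angle rad)."""
    if action == "forward":
        return 0.2, 0.0, 0.0                  # follow the line at 0.2 m/s
    if action == "left":
        return 0.0, TURN_SPEED, math.pi / 2   # stop and rotate 90 degrees left
    if action == "reverse":
        return 0.0, TURN_SPEED, math.pi       # stop and rotate 180 degrees
    raise ValueError(f"unknown action: {action}")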
System Algorithm
Active SLAM is a combination of SLAM and autonomous exploration algorithms. With the developed algorithm, the robot does not need to be navigated by a user or to move randomly; SLAM for a given environment can be performed autonomously by means of the developed left-orientated line-follower algorithm. Assuming the robot starts somewhere in the environment, it first checks for the existence of a line and, if there is one, moves forward according to the forward motion. The forward motion continues until a rotation point. At a rotation, the robot determines whether it is left or right, and different processes then follow in accordance with the proposed algorithm (Fig. 6).
The image moments are used as features to determine the orientation or the action to take. An image moment is obtained by investigating the density function of the image pixels. A general physical moment expression is given in Eq. (3):

m_{ij} = \iint_S x^i y^j f(x, y) \, dx \, dy,    (3)

where S is the domain of the workspace and i and j are the degrees of the function f [15], [20] and [27].
The general moment statement for computer vision is given in Eq. (4) (continuous case):

m_{ij} = \iint_{R(t)} x^i y^j I(x, y) \, dx \, dy,    (4)

where R(t) is the area observed by the camera, m_{ij} are the raw (origin) moments and I(x, y) is the intensity function. The centered moments of order i + j with respect to the centroid of the object are expressed by Eq. (5):

\mu_{ij} = \iint_{R(t)} (x - c_x)^i (y - c_y)^j I(x, y) \, dx \, dy,    (5)

where c_x = m_{10}/m_{00} and c_y = m_{01}/m_{00} define the centroid of the 2-D object, (c_x, c_y). For the discrete case, the integrals are replaced by summations.
Fig. 3(c): HSV masked view; the robot path is defined correctly.
The line center is determined from the masked images using the above-mentioned equations (Fig. 4 and Fig. 5). This center is updated as the image is refreshed. The deviation from the line center can be treated as an error, Eq. (6):

e = c_x - c_i,    (6)

where c_x is the image center in the x-direction and c_i is the center of the line. The computed error is applied to the robot to correct the drift from the line. This scheme can be regarded as a P-controller structure, owing to the absence of both past and predicted error terms. The experiments were also run with a PID controller; however, a P-controller was observed to be sufficient for following the line smoothly [27].
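A minimal sketch of Eqs. (4)-(6) in code, assuming OpenCV; the proportional gain is an assumed value, not reported in the paper:

# Compute the line centroid from a masked binary image and derive the
# P-controller steering command of Eq. (6).
import cv2

KP = 0.005  # proportional gain (assumed)

def steering_command(mask):
    """mask: single-channel binary image of the tracked region."""
    m = cv2.moments(mask)            # discrete raw moments, cf. Eq. (4)
    if m["m00"] == 0:
        return None                  # no line pixels: caller switches to 'find line'
    c_i = m["m10"] / m["m00"]        # line center (centroid x), cf. Eq. (5)
    c_x = mask.shape[1] / 2.0        # image center in the x-direction
    error = c_x - c_i                # Eq. (6)
    return KP * error                # angular velocity command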
ROS (Robot Operating System) is used to implement the study. ROS is a widely accepted and used framework because it supports the representation of many real-time parameters. The ROS gmapping package is used to build a map of the environment and localize the robot within it [7], [9], [10], [13] and [32]. This method is a kind of grid-based SLAM that uses the Rao-Blackwellized particle filter. Particles hold the location of the robot and the map of the environment. New observations lead to updated states at every iteration. A scan-matching method is used as a measure of distance: the robot location is calculated by matching sequential observations (see Alg. 1). The output of this method is an occupancy grid map of the environment. The algorithm presented in this study consists of two main blocks: one executes SLAM and the other navigates the robot autonomously. The pseudo-code of the algorithm is given in Alg. 1.
Results
In this study, a factory-like environment was created to carry out SLAM autonomously and to investigate the effectiveness of the proposed algorithm. For this purpose, the robot starts at a designated point in the map and follows the line in accordance with the proposed algorithm. During the autonomous run, the robot performs four main actions: find line, left, reverse and forward.
The frequencies of the observed actions are shown as a histogram in Fig. 7. As expected, the most frequent action is the forward movement, while the least frequent is "find line". This result also shows that once the robot finds the path, it sticks to the developed algorithm. According to the results of the trials, the robot can successfully build the map of the environment autonomously and robustly via the developed algorithm, free from the effects of brightness, shadows, etc. Figure 8 shows the map of the environment built using this method, and Fig. 9 shows the map created by user-observed navigation.
Conclusion
This paper has investigated an autonomous SLAM approach. SLAM is a highly important capability for mobile robots to carry out their duties. Mapping of an environment may be done by user guidance, random exploration or systematic algorithms. However, the first two methods suffer from serious limitations, such as the absence of a user or unsuitable observation conditions for the user. In addition, random exploration can lead to a loss of time.
To our knowledge, there is no comprehensive overview of recent research on vision-based active SLAM. This study is designed to fill this gap by presenting a left-orientated navigation algorithm. Within this scope, a map of the environment has been built autonomously via the presented method without any need for user input or a random process. To validate the presented method, a factory-like environment was set up. The environment contains the lines that delimit the individual sections of a factory. The robot manages to build a map of the environment by way of the presented algorithm. The evidence from the study indicates that the robot can accomplish autonomous SLAM without the need for external intervention. With the help of this method, a map created by one robot can easily be shared with other robots to save time or to enable collaboration.
This study has raised some questions in need of further investigation. For example, a future study investigating different decision processes using artificial intelligence would be very interesting and might improve the decision operation. Further research might also combine other types of SLAM methods with the presented algorithm.
Fig. 7: The histogram of the robot's movements during the experiment.
"Engineering",
"Computer Science"
] |
Mechanisms of decadal variability in the Labrador Sea and the wider North Atlantic in a high-resolution climate model
A necessary step before assessing the performance of decadal predictions is the evaluation of the processes that bring memory to the climate system, both in climate models and observations. These mechanisms are particularly relevant in the North Atlantic, where the ocean circulation, related to both the Subpolar Gyre and the Meridional Overturning Circulation (AMOC), is thought to be important for driving significant heat content anomalies. Recently, a rapid decline in observed densities in the deep Labrador Sea has pointed to an ongoing slowdown of the AMOC strength taking place since the mid 90s, a decline also hinted by in-situ observations from the RAPID array. This study explores the use of Labrador Sea densities as a precursor of the ocean circulation changes, by analysing a 300-year long simulation with the state-of-the-art coupled model HadGEM3-GC2. The major drivers of Labrador Sea density variability are investigated, and are characterised by three major contributions. First, the integrated effect of local surface heat fluxes, mainly driven by year-to-year changes in the North Atlantic Oscillation, which accounts for 62% of the total variance. Additionally, two multidecadal-to-centennial contributions from the Greenland–Scotland Ridge outflows are quantified; the first associated with freshwater exports via the East Greenland Current, and the second with density changes in the Denmark Strait Overflow. Finally, evidence is shown that decadal trends in Labrador Sea densities are followed by important atmospheric impacts. In particular, a positive winter NAO response appears to follow the negative Labrador Sea density trends, and provides a phase reversal mechanism.
Introduction
The North Atlantic Ocean is a major source of decadal variability (e.g. Kerr 2000; Frankcombe et al. 2008; Vianna and Menezes 2013) with reported widespread climate impacts (Knight et al. 2006; Zhang and Delworth 2006; Sutton and Dong 2012). Predicting the North Atlantic is therefore of great importance for decadal prediction (e.g. Collins et al. 2006). One of the key locations to explain this decadal variability is the Labrador Sea, an important region of deep convection that contributes significantly to the formation of North Atlantic Deep Water (NADW; Haine et al. 2008), and thus also to the intensity of the deep western boundary current (DWBC; Hodson and Sutton 2012). Modelling studies (e.g. Delworth et al. 1993; Eden and Willebrand 2001) suggest that Labrador Sea waters can influence both the Atlantic Meridional Overturning Circulation (AMOC) and the subpolar gyre (SPG) strength and in this way affect decadal variability in the wider North Atlantic. Understanding precisely the drivers of these Labrador Sea changes, and how exactly these latter relate to the large-scale ocean circulation, is of critical importance for better predicting future decadal changes in the North Atlantic sector.
Observations suggest that the ocean circulation has played a key role in recent climate variability in the North Atlantic. The first decade of direct measurements from the RAPID array shows a significant decrease in AMOC strength at 26°N since 2004 AD (Smeed et al. 2014). A decline is also reported in float-derived estimates of the subsurface circulation (Palter et al. 2016) and altimetry-inferred estimates of the upper subpolar gyre strength (Häkkinen and Rhines 2004, 2009). Other observational records, such as the deep densities in the Labrador Sea or the wider subpolar gyre, can offer invaluable indirect information on changes in ocean dynamics, as suggested by model outputs (Robson et al. 2014a; Hermanson et al. 2014). In particular, recent observed changes of water mass properties in the deep (i.e. 1000-2500 m) Labrador Sea suggest that the AMOC weakening started in the mid 1990s (Robson et al. 2014a), leading to an important reduction in the meridional heat transport that is most probably responsible for the observed cooling of the eastern SPG ocean heat content since 2005 AD (Robson et al. 2016). The lagged relationship between deep Labrador Sea density and upper ocean trends suggests that the cooling could continue and extend to the whole North Atlantic, as indicated by the Met Office's decadal prediction systems (Hermanson et al. 2014). Thus, it could give rise to a negative phase of the Atlantic Multidecadal Variability (AMV), an ocean state associated with important climate impacts.
Climate models are a useful tool to quantify and attribute these impacts, and to further explore the recent ocean changes. Different studies with models demonstrate, in particular, that the AMV (defined as a coherent pattern of sea surface temperature (SST) anomalies in the North Atlantic, most probably linked to the variability of the AMOC; Knight et al. 2005) induces multidecadal changes in, e.g., the frequency of occurrence of Atlantic hurricanes (Knight et al. 2006), drought conditions in the Sahel (Zhang and Delworth 2006) and summer precipitation in southwestern North America and western Europe (Sutton and Hodson 2005). The relative contribution of stochastic atmospheric forcing and ocean dynamics to AMV variability remains to be clarified (e.g. Clement et al. 2015; Zhang et al. 2016). However, models do suggest that the ocean circulation played a decisive role in the observed rapid warming of the North Atlantic SPG (Robson et al. 2012a) and probably contributed to the previous cooling during the 1960s and 1970s (Hodson et al. 2014). In addition, initialised decadal hindcasts based on climate models show high predictability for these large SPG temperature changes (Robson et al. 2012b, 2014b; Yeager et al. 2012) and have been able, in particular, to anticipate the ongoing eastern SPG cooling trend (Hermanson et al. 2014). The initialisation of the ocean, including the overturning circulation, is essential to explain the good prediction skill in the SPG (Robson et al. 2014b).
However, the most important drivers of the ocean circulation variability remain largely unknown due to the limited availability of direct observations. This problem can be partly circumvented by using AMOC proxies, like the deep Labrador Sea densities. Unlike the AMOC, this latter quantity has been observed since 1950 AD, allowing more robust relationships to be established at the decadal time scale. Also, the AMV is a less reliable AMOC fingerprint for tracking its past variability since, being based on SSTs, it is strongly affected by a variety of external factors (Roberts et al. 2013), such as volcanic and anthropogenic aerosols (Otterå et al. 2010; Booth et al. 2012). Many studies, based both on models (e.g. Eden and Willebrand 2001; Getzlaff et al. 2005; Yeager et al. 2012; Danabasoglu et al. 2016) and on observations (e.g. Dickson et al. 1996; Curry et al. 1998; Kieke and Yashayaev 2015), show a direct connection between the winter North Atlantic Oscillation (NAO; Hurrell 1995) and Labrador Sea water properties, explained through the cooling effect that NAO-driven westerly winds exert on the Labrador Sea surface. Modelling studies also show that this buoyancy signal, integrated over time, can explain at least half of the total AMOC variance (Ortega et al. 2011; Mecking et al. 2014). Yet, the exact contribution of the NAO to AMOC variability in the real world is still to be quantified. Advective processes are also relevant to understanding the recent Labrador Sea water changes. Following the passage of the Great Salinity Anomalies (GSA) in the 1960-1970s (Dickson et al. 1988), 1980s (Belkin et al. 1998) and 1990s (Häkkinen 2002), three major events of freshwater inflows into the North Atlantic with different origins, magnitudes and extents (Houghton and Visbeck 2002), Labrador Sea waters have experienced subsequent freshenings affecting the rate of deep water formation (Kieke and Yashayaev 2015). Since the 1970s, a deep long-term freshening of the Labrador Sea has been observed (Robson et al. 2016), which could be related to a change in the overflows feeding the Labrador Sea from the Arctic (Dickson et al. 2002). Other potential influences on Labrador Sea density relate to the transport of heat content anomalies from the eastern subpolar gyre, linked themselves to changes in the strength and position of the North Atlantic Current (NAC; Sutton and Allen 1997; Menary et al. 2015).
Labrador Sea water signals have been reported to influence other remote regions of the North Atlantic. For instance, they lead the changes in the deep waters near Bermuda by 6 years (Curry et al. 1998). Through their contribution to the NADW, Labrador Sea waters are also important for understanding the southward propagation of AMOC anomalies observed in models, although the propagation timescale appears to depend on the region (Zhang 2010) and the model resolution (Getzlaff et al. 2005). Eddy-permitting models, as opposed to non-eddy-resolving ones, show a fast (almost in-phase) southward propagation of the AMOC signals consistent with the speed of boundary waves (Getzlaff et al. 2005). Longer lead times between the Labrador Sea and the subtropics are explained through the existence of interior pathways different from the DWBC (Bower et al. 2009), in which advection processes dominate (Zhang 2010). It is, however, still unclear how Labrador Sea waters relate to these latitudinal AMOC changes and the two distinct propagation timescales.
In this study we assess the link of Labrador Sea densities with the ocean circulation and their relationship with the climate in the wider North Atlantic. In particular, we address the following questions:
1. How do Labrador Sea waters affect the AMOC and the associated northward salt/heat transport?
2. What is the chain of events leading to the development of prolonged Labrador Sea density trends, such as the one recently observed (Robson et al. 2014a)?
3. Is there any impact on the atmosphere as a result of the broader ocean response to the Labrador Sea changes?
4. Are there any feedback mechanisms at play?
The analysis is based on a control run with the state-of-the-art coupled climate model HadGEM3-GC2 (hereafter referred to as GC2; Williams et al. 2015). This model has an eddy-permitting ocean resolution and a highly resolved stratosphere, two key aspects that have been shown, respectively, to improve the representation of the Gulf Stream extension (Williams et al. 2015) and to reproduce more realistic atmospheric teleconnections and interactions (Ineson and Scaife 2009) as compared to models with lower resolution. This is the same model configuration used in the operational seasonal and decadal forecast systems of the Met Office (GloSea5-GC2 and DEPRESYS3, respectively). Note that an earlier version of GloSea5, based on a slightly different model configuration, has produced the first skilful long-range predictions of the NAO (Scaife et al. 2014), suggesting that the model has a good representation of the relevant processes in the North Atlantic.
The article is organized as follows: Sect. 2 describes the model and its climatology. Sect. 3 presents the main results, which include a characterisation of Labrador Sea density variability in the model, its link with the ocean circulation, and the identification of the associated drivers and atmospheric impacts. A final discussion and conclusions are presented in Sects. 4 and 5.
Model description
The atmospheric component is a Global Atmosphere configuration of the Met Office Unified Model (Walters et al. 2011), with a horizontal resolution of N216 (92 km at the equator and 60 km in mid-latitudes) and 85 levels in the vertical (with a top at 85 km). It is coupled to the land-surface model, corresponding to the Global Land version 6.0 of the Joint UK Land Environment Simulator (JULES; Best et al. 2011), which shares the same grid and is part of the same model executable as the atmospheric component. The ocean model is the Global Ocean 5.0 (Megann et al. 2014) version of the v3.4 NEMO model (Madec 2008) under the ORCA025 tripolar grid configuration. It comprises 75 vertical levels (24 of them in the top 100 m) and runs with a nominal horizontal resolution of 0.25°, the same used by the sea-ice component, which corresponds to version 4.1 of the Los Alamos Sea Ice Model (CICE; Hunke and Lipscomb 2010). This version of CICE includes five sea-ice thickness categories and has improved the representation of Arctic sea ice concentration and extent with respect to the previous coarser configurations (Rae et al. 2015). For further details on the different components of the model please refer to Megann et al. (2014), Rae et al. (2015) and Walters et al. (2016). Likewise, a detailed description of the coupling and the systematic errors can be found in Williams et al. (2015).
Note that GC2 is an updated version of HadGEM3 (hereafter referred to as HG3), also run at the same resolution but using NEMO version 3.2 and lacking some additional developments. HG3 presents a prominent mode of bi-decadal variability in the subpolar gyre region (Menary et al. 2015), a periodicity that is not present in GC2, thus pointing to differences in the key processes and interactions. In the North Atlantic, the most important difference between the two model versions arises from a correction to the treatment of convective mixing in the turbulent kinetic energy (TKE) scheme in NEMO v3.4 (used in GC2), leading to substantially lower winter mixed layer depth biases (Megann et al. 2014). At the end of the paper, we discuss conjointly the new findings in GC2 and the previous results with HG3.
Experiment description
We examine a 310-year long preindustrial control simulation with GC2. This differs from the present-day control simulation analysed in Jackson et al. (2015) in that it uses CMIP5 forcings appropriate to 1850 (Jones et al. 2011). They consist of well-mixed greenhouse gases (CO2, CH4, N2O, chlorofluorocarbons, and hydrofluorocarbons), tropospheric aerosols (including sulfates, soot, biomass aerosols and organic carbon from fossil fuels), monthly-varying climatological tropospheric and stratospheric ozone concentrations, solar irradiance fixed as the average of the two solar cycles within 1850-1882 (from http://solarisheppa.geomar.de/solarisheppa/sites/default/files/data/Calculations_of_Solar_Irradiance.pdf), and stratospheric volcanic aerosol at background levels (Sato et al. 1993). This preindustrial control simulation was initialised from the end of previous preindustrial control simulations produced during the development period of GC2, totaling 132 years, with the aim of minimising the spin-up period and maximising the usable period of the GC2 preindustrial control examined in this paper. A small and constant drift, most notable in the deep ocean, is still present in temperature and salinity. Since our interest is in decadal and multidecadal variability, all variables are linearly detrended at the grid point scale prior to analysis. Note that this is the same model simulation analysed in Robson et al. (2016) to investigate the causes of the recent cooling trend in the North Atlantic.
Model climatology
We now describe the mean state in GC2 of the ocean variables relevant to North Atlantic climate. The long-term climatology of the AMOC streamfunction in depth coordinates (AMOC-z; Fig. 1a) shows an overturning cell with a maximum value of 16.8 Sv at 30°N. At 26°N (the latitude of the RAPID array) the climatological maximum is 15.3 Sv and occurs at 650 m, too shallow and about 2 Sv weaker than the RAPID measurements (17.2 Sv at 1000 m; McCarthy et al. 2015). The largest variance is found in the subtropics, with a standard deviation (SD) of 1.4 Sv at 33°N. There is no notable change in the intensity and vertical extent of the AMOC cell with respect to the climatology in the GC2 present-day control run (Fig. 2 in Jackson et al. 2015), suggesting a negligible effect of the changes in radiative forcing since the preindustrial period on the mean AMOC state. The important role of subpolar latitudes emerges more clearly when the AMOC streamfunction is calculated in σ2 density space (AMOC-σ; Fig. 1b), which highlights the major contribution of water mass transformation in the North Atlantic. Indeed, its climatological maximum value (i.e. 17.3 Sv) occurs at 58°N, where the Labrador Sea is located. The AMOC-σ streamfunction is fairly uniform with latitude, a feature that is particularly true for the denser waters associated with NADW formation, as seen in previous studies with other models (Zhang 2010; Talandier et al. 2014; Kwon and Frankignoul 2014). Besides this closer representation of NADW variability, another advantage of the AMOC-σ over the AMOC-z streamfunction, highlighted first in Kwon and Frankignoul (2014) for the CCSM3 model, is a stronger relationship with the meridional ocean heat transport, especially in the subpolar gyre region. Figure 1c describes the mean state of the barotropic streamfunction, and therefore the main characteristics of the subtropical and subpolar gyres in the model. The Labrador Sea appears as a key region for the subpolar gyre, as it is there that the anticlockwise gyre circulation strength attains both its maximum climatological mean (42 Sv) and standard deviation (5 Sv). The mean value is comparable with observed transport estimates of 37-42 Sv for the DWBC strength at the exit of the Labrador Sea (53°N), obtained from a compilation of moored current meters, hydrographic surveys and sections, and 0.08° eddy-resolving forced simulations (Fischer et al. 2010; Xu et al. 2013). There is also a particularly large variance in the region of the Gulf Stream and the NAC, up to four times larger than for the subpolar gyre. The deepest convection in the model is observed in the Labrador Sea (Fig. 1d), with other key regions like the Irminger and Nordic Seas also showing winter mixed layer depths greater than 1000 m. However, the region exhibiting the largest variability is the Nordic Seas, possibly due to a greater role of sea ice interactions.
We now look in Fig. 1e at the mean temperature and salinity profiles in a section across the Denmark Strait (yellow transect in Fig. 1c). The Denmark Strait is a key place for understanding Labrador Sea variability, given its proximity and the presence of two distinct major currents. First, the East Greenland Current (EGC, green box in Fig. 1e), which is located at the surface along the western boundary and is associated with the export of anomalously fresh and cold waters. And second, the Denmark Strait Overflow (DSO, purple box in Fig. 1e), which provides the densest contribution to the NADW (Swift et al. 1980) and is associated with cold and relatively fresh waters from the subsurface. Other Greenland-Scotland outflows can also potentially impact the Labrador Sea density changes. In particular, GC2 presents a major branch of cold and fresh waters coming through the Faroe-Scotland ridge (thick blue line in Fig. 1c), which feeds directly into the eastern SPG. The role of these different flows will be further discussed in Sect. 3.3. Finally, Fig. 1f describes the speed and direction of the mean ocean currents in the top 1000 m. Boundary currents are particularly intense along the Greenland coast (south of the ridge) and also near Newfoundland, the passage regions of the EGC and the exiting Labrador Current, respectively.
Computation of the density components
All density values in this study are computed using the International Equation of State of seawater (EOS-80) referenced to the level of 2000 dbar (σ2), to give more emphasis to the deep water properties. The temperature and salinity contributions to density are calculated using the thermal expansion and haline contraction coefficients. These are themselves estimated as the σ2 change (in the EOS-80 equation) associated with a small increase in temperature (0.02 °C) and salinity (0.01 psu), respectively.
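A minimal sketch of this finite-difference estimate; the sigma2 function below is a crude linear stand-in, loudly not EOS-80, and should be replaced by a real EOS-80 routine referenced to 2000 dbar:

# Finite-difference estimate of the thermal and haline density coefficients.
import numpy as np

def sigma2(T, S):
    # Linear stand-in ONLY (assumed coefficients), NOT the EOS-80 equation.
    return 36.0 - 0.15 * (T - 4.0) + 0.78 * (S - 35.0)

DT, DS = 0.02, 0.01  # perturbations used in the text: 0.02 degC, 0.01 psu

def density_coefficients(T, S):
    """Return (d sigma2 / dT, d sigma2 / dS) by forward differences."""
    dT_coef = (sigma2(T + DT, S) - sigma2(T, S)) / DT
    dS_coef = (sigma2(T, S + DS) - sigma2(T, S)) / DS
    return dT_coef, dS_coef

def density_anomaly_components(T, S, T_clim, S_clim):
    """Split a density anomaly into thermal and haline parts about a climatology."""
    dT_coef, dS_coef = density_coefficients(T_clim, S_clim)
    return dT_coef * (T - T_clim), dS_coef * (S - S_clim)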
Characteristics of density variability in the interior Labrador Sea
We first characterise how Labrador Sea densities evolve with time and depth (Fig. 2). To separate the Labrador Sea contributions to the AMOC from those of the boundary currents, our analysis is focused on the Interior Labrador Sea (ILS; red box in Fig. 1f), a region with no direct influence of the boundary currents. Interestingly, this is also a region with both enhanced variability in the subpolar gyre strength and anomalously deep winter convection (Fig. 1c, d). Figure 2a shows hints of decadal variability in the ILS densities, with upper ocean signals penetrating downward and generally reaching 3000 m depth. Both temperature and salinity seem to play an active role in driving these density changes (Fig. 2b-d), with temperature dominating at all depths except 2000-3000 m (Fig. 2e). The stronger influence of salinity at deeper levels, together with the particularly long (almost centennial) timescale involved, suggests a potential contribution of Arctic discharges, as reproduced in other climate models exhibiting enhanced multi-decadal variability (approx. 70-80 years) in the North Atlantic (Jungclaus et al. 2005; Hawkins and Sutton 2007). The leading mode, as described by the first Empirical Orthogonal Function (EOF) of the spatially averaged ILS densities, is illustrated in Fig. 3, and explains 67% of the total variance. Its corresponding Principal Component (PC1-ILS, Fig. 3a) shows smooth multidecadal variations, with a similar timescale to the observed 1000-2500 m Labrador Sea densities (Robson et al. 2014a). Figure 3a also shows the variations in the maximum AMOC-z strength at 45°N after the Ekman transport is removed (AMOC-45N-noek), in order to isolate the thermohaline-driven component. 45°N is the latitude where the AMOC is most strongly correlated with PC1-ILS (Fig. 4b). To remove the Ekman signal, we vertically integrate the Ekman velocities (after introducing a depth-uniform return flow to ensure no net meridional mass transport) following Eq. 6 in Baehr et al. (2004), an approach only valid in depth space. AMOC-45N-noek shares a large part of the slow modulations in PC1-ILS, but also includes higher frequency variations (blue line in Fig. 3a). A similar result, with Atlantic mid-ocean densities showing smoother changes than the AMOC, is reported in Roberts et al. (2013). In their study this difference relates to the fact that subsurface hydrographic properties are less exposed than the AMOC to high-frequency surface fluctuations, which can be a major advantage for the detection of low-frequency forced changes (Vellinga and Wood 2004). In GC2, the leading EOF of ILS densities describes a coherent vertical profile (Fig. 3b), with maximum values in the upper 2000 m that decrease slowly with depth and become almost zero below 3000 m. Most of this density structure is thermally driven, except in the deeper ocean where salinity dominates. Figure 3c describes the cross-correlations between PC1-ILS and other relevant variables. A persistent NAO influence (via the related local surface heat fluxes) precedes the changes in PC1-ILS by up to 9 years. The PC1-ILS mode is therefore associated with the cumulative (instead of instantaneous) cooling effect of these NAO-driven atmospheric heat fluxes, which can themselves explain the dominant contribution of temperature in the upper levels. Note that similar prolonged periods with predominantly positive NAO phases have been observed in the real world (e.g. 1980s to mid 1990s), and are associated with major changes in the subpolar gyre region (Robson et al.
2012a). The link with ocean dynamics is explored in Fig. 4, describing the in-phase spatial correlations between PC1-ILS and both the barotropic and meridional overturning streamfunctions. PC1-ILS variations are strongly linked with changes in the strength of the western SPG, and also with changes in the AMOC at subpolar latitudes. Maximum correlations occur in phase with PC1-ILS (Fig. 3c) and are stronger for an index of the SPG strength (SPGSI, here defined as the spatially averaged barotropic streamfunction over the blue box in Fig. 4a) than for the AMOC-45N-noek.
Fig. 2: a-c Hovmöller plots (depth vs. time) of the standardised anomalies of the spatially averaged Interior Labrador Sea densities and the related thermal and haline components, respectively. Anomalies are computed with respect to the long-term mean. d Vertically averaged (0-3000 m) relative contributions of the thermal and haline components to density, respectively. These relative contributions are computed as the percentage of levels in a given time step for which the haline (or alternatively thermal) contribution to density represents more than half of the full density anomaly. For clarity, both timeseries are smoothed using 11-year moving averages. e The same but for the temporally averaged contributions. In this case, the percentage refers to a given depth level and counts how many time steps are dominated by each of the density components.
We have characterised and described the dominant mode of ILS variability in the model, identifying a significant statistical link with both the horizontal and meridional circulations. The next sub-section explores the relationships between these three variables (i.e. PC1-ILS, SPGSI and AMOC-45N-noek) and density changes across the western boundary current, and also their respective contributions to the meridional ocean heat transport.
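As a minimal sketch of the EOF/PC decomposition applied above to the depth-time ILS density anomalies (detrending and depth weighting are simplified; the anomaly matrix is assumed to be prepared beforehand):

# Leading EOF of an (n_time x n_depth) anomaly matrix via SVD.
import numpy as np

def leading_eof(anoms):
    """anoms: 2-D array, time x depth, long-term mean already removed."""
    u, s, vt = np.linalg.svd(anoms, full_matrices=False)
    var_frac = s[0] ** 2 / np.sum(s ** 2)   # explained variance (67% in the text)
    pc1 = u[:, 0] * s[0]                    # principal component time series
    eof1 = vt[0, :]                         # vertical structure (unit norm)
    return pc1 / pc1.std(), eof1 * pc1.std(), var_frac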
Labrador Sea density link with the ocean circulation and transports
Western boundary densities are an important driver of AMOC variability as a direct consequence of the thermal wind equation (Tulloch and Marshall 2012), which relates the meridional mass transport to the zonal density gradients (Hirschi and Marotzke 2007).
Fig. 3 (partial caption): ... (blue box in Fig. 1c) and the maximum AMOC streamfunction at 45°N after the Ekman transport is removed (AMOC-45N-noek). b Vertical structure of the EOF associated with the first PC (black line), re-scaled to density units by multiplying by the standard deviation at each depth level. The contributions of salinity and temperature to this density profile (purple and blue lines, respectively) are computed by regressing PC1-ILS onto their corresponding spatially averaged fields. c Time-lag correlations between PC1 and a selection of North Atlantic climate indices: the AMOC-45N-noek, a subpolar gyre strength index (SPGSI), the averaged heat fluxes in the Labrador Interior (HFL) and the North Atlantic Oscillation (NAO). Dots denote correlation values exceeding a 95% confidence level based on a t-test that takes into account the series autocorrelation. Positive lags correspond to PC1 leading the changes in the other variables.
Here we look first at the link between Labrador Sea densities and changes along the western margin of the Atlantic, and explore their particular relationship with the AMOC and the SPG (Fig. 5).
Fig. 5: a-c Correlation between the SPG strength index (SPGSI) and density across a zonal section at 57°N near the western Atlantic boundary, with the SPGSI leading by 2 years, in phase with, and lagging by 2 years the changes in density, respectively. Thin black contours enclose areas with correlation values exceeding the 95% confidence level, established as in Fig. 3c. The red box denotes the longitudes of the Interior Labrador Sea over which PC1-ILS is calculated. The dashed horizontal line highlights the 1500 m depth level. d-f The same as in a-c but between the AMOC-45N-noek and the density profiles. g-i and j-l The same as in d-f but for zonal sections at 45°N and 35°N.
We focus first on the section at 57°N and the Interior Labrador Sea (red box). In this region, an upper ocean signal (top 1500 m) appears in phase with the AMOC (Fig. 5e), with maximum values near the continent and also in the ILS. Two years later (Fig. 5f), maximum correlations occur below 1500 m, suggesting some downward penetration of the previous surface signal. A further look at the other zonal sections (Fig. 5g-l) shows that, in GC2, only the densities in the upper 1500 m correlate coherently across all latitudes with the AMOC. These are therefore the key levels contributing to intensifying the AMOC strength and enhancing the northward heat transport. By contrast, it appears that the deep Labrador densities do not contribute directly to AMOC variability in this model. In contrast to the AMOC, SPG variability is associated with more deeply reaching density signals at 57°N (Fig. 5a, c). Note that the cyclonic geostrophic flow associated with these vertically uniform positive density anomalies is indeed consistent with a SPG strengthening. Maximum in-phase correlations with the SPGSI index are seen below 1500 m and over the ILS region, thus involving deeper levels than previously found for the AMOC-45N-noek (Fig. 5e). Similar depths are reached when the AMOC lags the changes in density by two years (Fig. 5f). This lag arises because SPG changes tend to follow those in the AMOC by 1-2 years, as supported by a cross-correlation analysis between both indices (not shown). This particular lag represents the typical time required for the upper density anomalies to penetrate downward.
The latitudinal coherence of the AMOC signals related to PC1-ILS is now explored in Fig. 6. In phase with PC1-ILS, the full AMOC-z shows a large-scale intensification (Fig. 6a) from midlatitudes to the Equator, with maximum values between 40° and 45°N. This is consistent with results from coarser models (Lohmann et al. 2014), showing a strong link between North Atlantic deep water formation and AMOC variability between 40° and 50°N. Part of the subtropical AMOC-z signal precedes the changes in PC1-ILS and is associated with the local effect of the Ekman transport (Fig. 6b), which is ultimately driven by the NAO (which leads PC1-ILS by up to a decade). The NAO establishes a well-known dipolar wind pattern between subtropical and subpolar latitudes, inducing northward Ekman transport in the first region and southward in the second (see e.g. Fig. 7a in Ortega et al. 2012). Besides this instantaneous Ekman-driven response, the NAO also induces a slow delayed AMOC response, due to the effect of westerly wind variability in driving Labrador Sea deep water formation. This slow effect is more evident once the Ekman signal has been removed from the AMOC-z streamfunction (AMOC-z-noek; Fig. 6c), which now shows a strengthened AMOC at northern latitudes (50-60°N) that propagates southward within 2-3 years. This southward propagation, however, is disrupted again around 30°N, probably due to limitations of the AMOC-z streamfunction in representing the meridional connectivity with the subtropics (Zhang 2010). In fact, an uninterrupted intensification across all latitudes is observed when the AMOC-σ streamfunction is considered (Fig. 6d). Two apparent propagation timescales for the AMOC-σ signal are seen, the first being almost instantaneous and in phase with PC1-ILS, and the second developing slowly (~8 years) after PC1-ILS intensifies and being more restricted to mid-latitudes. This latter timescale is consistent with the delayed baroclinic oceanic response to the NAO identified in Eden and Willebrand (2001). We now look at the lead-lag relationships of Labrador Sea densities and the different circulation indices with the meridional ocean heat transport (OHT). Consistent with the associated AMOC changes in Fig. 6, PC1-ILS also represents a general OHT strengthening that is particularly evident north of 40°N (Fig. 7a). This strengthening is preceded by an OHT increase in the subtropics, which cannot be explained through changes in the AMOC-z-noek streamfunction (Fig. 6c). These local OHT changes are most probably wind-driven, as they appear at the same latitudes and lead times at which the northward Ekman transport is intensified (Fig. 6b), and also coincide with predominantly positive NAO phases (Fig. 3c). The latitudinal OHT variations specific to AMOC-45N-noek are shown in Fig. 7b. Consistent with Fig. 6d, two different propagation timescales for the AMOC-driven OHT changes are hinted at. First, an almost instantaneous response that is most probably associated with the propagation speed of boundary waves, as evidenced in other eddy-permitting models (Getzlaff et al. 2005). And second, a slowly developing response that might involve signal propagation through interior pathways (Zhang 2010). The SPGSI-driven OHT changes are described in Fig. 7c. As expected from the strong correlation between SPGSI and PC1-ILS in Fig. 3c, they are largely consistent with the changes described above for PC1-ILS (Fig. 7a).
The main difference is seen for the in-phase correlations, which for SPGSI are characterised by large positive values at mid-latitudes, probably associated with rapid responses of both the ocean heat content and the SPG to NAO fluctuations. Much weaker relationships are obtained between the previous indices (PC1-ILS, SPGSI, AMOC-45N-noek) and the meridional ocean salinity transport (not shown). In this case, all three circulation indices relate to a local in-phase increase of salinity at mid-latitudes, but show no consistent changes over the Tropics. This subsection has demonstrated the close link between the ILS densities and both the North Atlantic overturning and barotropic circulations in GC2, all of them influencing the northward OHT. It has also highlighted the important role in the model of the top 1500 m western boundary densities in inducing coherent AMOC changes across latitudes. We now move on to examine the processes responsible for Labrador Sea density variability in GC2.
Drivers of Labrador Sea density variability
Labrador Sea density changes can respond to both atmospheric and oceanic influences. The above analysis has highlighted a persistent leading relationship of NAO variability on PC1-ILS (Fig. 3c), with the accumulation of the associated surface heat fluxes playing a key role. Here we quantify its total contribution to PC1-ILS variability. To do so, a least-squares regression model for PC1-ILS is defined, using the local heat fluxes at different lead times as the only predictor. Note that no significant contribution of the local surface freshwater fluxes (precipitation minus evaporation) to PC1-ILS variability is found (not shown). This is an equivalent approach to the one employed in Ortega et al. This atmospheric influence is accounted for in the regression model presented in Fig. 8, which considers a maximum lead time of 12 years between the heat fluxes and PC1-ILS. This is the shortest L for which the HFL timeseries becomes uncorrelated with the residuals at all lead times, thus guaranteeing that all PC1-ILS variability associated with the ILS heat fluxes is accounted for in the regression model (denoted PC1-ILS-HFL from now on). This model explains 62% of the total PC1-ILS variance. The associated density structure shares the main features of Fig. 3b, such as the uniform contribution of salinity across the whole water column, as well as a maximum temperature signal in the top 50 m, decreasing first sharply (from 50 to 100 m) and then smoothly with depth. The upper maximum is related to the instantaneous effect of heat fluxes, while the slowly decreasing signal likely results from the downward propagation of heat fluxes accumulated from previous lead times. The residuals of PC1-ILS-HFL show a pronounced, nearly centennial modulation (Fig. 8a) and correlate strongly (R = 0.61, p value < 0.05) with PC1-ILS (Fig. 8b). This correlation would be difficult to explain if the residuals were composed of pure stochastic noise, and thus suggests that additional contributions, e.g. from the Greenland-Scotland overflows, are at play. The PC1-ILS-HFL residuals relate to maximum salinity-driven density changes in the upper ocean (top 400 m) and thermally-driven changes in the subsurface (between 1500 and 2500 m; Fig. 8c). The first maximum could potentially be explained by changes in the EGC, while the second probably relates to variations in the dense water overflows (i.e. from the Denmark Strait and the Faroe-Scotland Ridge). Let us first focus on the shallow contribution. Figure 9a compares the evolution of the PC1-ILS-HFL residuals with the density components of the EGC. Despite some differences at the decadal time-scale, particularly evident during the period 2160-2270, two of the four major maxima in the residuals (occurring in 2280 and 2395) are clearly preceded by large salty and dense anomalies in the EGC (Fig. 9a). This suggests that the influence of the EGC on Labrador Sea density is related to the occurrence of extreme salinity events. A four-box conceptual model by Born and Stocker (2014) supports the importance of salt fluxes from the boundary currents in driving convection in the Labrador interior, and the SPG variability. The cross-correlations in Fig. 10a describe the average lead-lag relationship between the PC1-ILS-HFL residuals and the EGC indices. The associated density signals (black lines) lead the changes in the residuals by 1 year. It also becomes evident that these density changes are salinity driven, with temperature just playing a compensating role.
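A minimal sketch of the lagged least-squares regression described above, with L = 12 as in the text (array names are illustrative):

# Regress PC1-ILS on local surface heat fluxes at lead times 0..L years.
import numpy as np

def lagged_regression(pc1, hfl, max_lag=12):
    """Fit pc1[t] ~ sum_k beta_k * hfl[t - k], k = 0..max_lag."""
    t = np.arange(max_lag, len(pc1))
    X = np.column_stack([np.ones(len(t))] +
                        [hfl[t - k] for k in range(max_lag + 1)])
    beta, *_ = np.linalg.lstsq(X, pc1[t], rcond=None)
    fitted = X @ beta                                 # PC1-ILS-HFL
    residuals = pc1[t] - fitted
    explained = 1.0 - residuals.var() / pc1[t].var()  # ~0.62 in the paper
    return beta, fitted, residuals, explained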
Likewise, other ocean signals, such as the DSO, might be at the origin of the multidecadal variability and the other maxima in the PC1-ILS-HFL residuals. Figure 9b confirms that DSO density variations describe reasonably well the multidecadal modulations in the PC1-ILS-HFL residuals, in particular during the highlighted period from 2160 to 2270, where the EGC shows a rather different evolution. Furthermore, in the first part of this period, DSO densities experience a sustained increasing trend anticipating the occurrence of the maximum seen in the residuals in year 2215, which is not attributable to the EGC. Volume transport and density changes across the Faroe-Scotland Ridge (FSR) also exhibit important multidecadal and centennial modulations (not shown), in line with the timescales recently reported in paleocurrent proxy data from the Icelandic basin (Mjell et al. 2016). The influence of the FSR overflow is, however, not independent of that of the DSO. In particular, the changes in both their flow speed and salinity exports are strongly correlated, with in-phase correlation coefficients larger than 0.6. For simplicity, the rest of this analysis will be focused on the DSO.
Over the whole simulation, the PC1-ILS-HFL residuals lag the changes in the DSO densities by about 2 years (Fig. 10b), but unlike for the EGC, these DSO density signals result from combined contributions of temperature and salinity. The DSO influence on the PC1-ILS-HFL residuals appears to be first thermally driven (for leading times from 12 to 3 years), and explained through the combined effect of both DSO salinity and temperature for shorter lead times. Note also the particularly large correlations found when the PC1-ILS-HFL residuals lead the DSO components by 10 years (positive lags in Fig. 10b). These probably represent a delayed DSO response to changes in the meridional transports at lower latitudes, which follow in turn the changes in the ILS densities. A cross-correlation analysis between the DSO density components and the ILS densities at different levels (not shown) locates the leading temperature signal in the upper ocean (top 2000 m), while the salinity-leading contribution takes place at deeper levels (2000-3000 m). Note that these coincide with the same levels at which salinity dominates the ILS density evolution in Fig. 2e. This important role of DSO exports in leading by about a decade the centennial changes in deep ILS salinity is illustrated in Figs. 9c and 10c.
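The lead-lag relationships quoted above can be computed with a simple lagged-correlation routine; the sketch below is a generic illustration (the function name and sign convention are our own, and significance testing against autocorrelated noise is omitted):

```python
import numpy as np

def lagged_corr(x, y, max_lag):
    """Correlation of x with y at different lags, in years.

    Negative lags mean x leads y; positive lags mean x lags y.
    A sketch of the cross-correlation analysis behind Fig. 10.
    """
    out = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag < 0:
            a, b = x[:lag], y[-lag:]   # x(t) vs y(t+|lag|): x leads
        elif lag > 0:
            a, b = x[lag:], y[:-lag]   # x(t+lag) vs y(t): x lags
        else:
            a, b = x, y
        out[lag] = np.corrcoef(a, b)[0, 1]
    return out

# e.g. DSO density leading the PC1-ILS-HFL residuals by ~2 years would
# show up as a correlation maximum at lag -2 of the DSO series.
```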
We have identified above the major drivers of ILS density variability. These include the accumulation of atmospheric heat fluxes at the surface, episodic inflows of large salinity anomalies by the EGC, and slowly varying modulations in the DSO and FSR exports (both temperature and salinity driven). In light of the long timescales involved in most of these ILS density changes, which might imply potential for climate predictability, the next subsection looks at the associated atmospheric impacts.
Atmospheric impacts
To explore the potential impacts associated with the recently observed Labrador Sea density weakening (Robson et al. 2014a), we perform a composite analysis on a selection of analogous events in GC2. This focus on strong trends is expected to increase the signal-to-noise ratio of the atmospheric responses, which may not be necessarily appreciable in regression analyses based on interannual timescales (Allison et al. 2014). In particular, we compute the composite atmospheric trends following by 5 years the 9 largest non-overlapping 15-year decreasing trends in PC1-ILS (Fig. 11a). This particular lead time is selected after the analysis in Robson et al. (2016), which identifies, in the same simulation, a trend towards more positive NAO phases 5 years after the strongest decreasing trends in the deep Labrador Sea densities (1000-2500 m). Here, we look at the composite delayed winter (DJF) and summer (JJA) trends in SLP and other surface variables with high socioeconomic impacts, like temperature and precipitation. The delayed winter impacts are described in Fig. 11b-e. Consistent with Robson et al. (2016), trends in PC1-ILS give rise to a positive NAO-like trend pattern in winter (Fig. 11b). Likewise, trends in surface air temperature at 1.5 m (SAT; Fig. 11c) describe the canonical quadrupole structure associated with the NAO (Hurrell et al. 1997), with positive values over Northern Europe and the United States and negative values over Eastern Canada and North Africa. In the ocean (Fig. 11d), there is a large warming along the Gulf Stream Extension and a widespread cooling in the SPG region, both also present in the spatial SAT trends. The location southeast of Greenland where the maximum cooling occurs coincides with the North Atlantic "warming hole" region, which is related in several studies to an AMOC decline (Drijfhout et al. 2012; Rahmstorf et al. 2015), thus further supporting the close association between PC1-ILS and the AMOC. Only small winter precipitation trends are observed over the continents (Fig. 11e), the largest changes being a zonal dipole in the mid-latitude North Atlantic, with increased rainfall over the Grand Banks and decreased rainfall west of Europe. Previous studies with different coupled models (Gastineau and Frankignoul 2012; Frankignoul et al. 2013) highlight the key role of AMOC-driven SST anomalies in the Gulf Stream and NAC region in the generation of winter NAO responses, triggered through changes in local surface heat fluxes that affect eddy activity in the North Atlantic storm track. These studies, based on lagged linear regressions with the annual AMOC strength, report a typical warming over the SPG region of 0.4 °C and a negative NAO-like dipole of the order of −0.6 hPa for AMOC increases of 1 Sv; that is, a 1.5 hPa north-to-south gradient in SLP when the Gulf Stream Extension warms by 1 °C. Our composite analyses, following instead the strong decreasing trends in the Labrador Sea densities (which are themselves linked to AMOC decreases), describe an equivalent SLP gradient and SST change of 7.2 hPa and −1.1 °C, respectively. The sign of the response is consistent in both analyses, but the magnitude of the changes is larger in GC2. This could be due to the fact that our study is specifically focused on large extreme events and is therefore more adequate for identifying potentially non-linear atmospheric responses.
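As a rough illustration of the event selection underlying this composite analysis, the following sketch picks the strongest non-overlapping decreasing 15-year trends from an annual index; the greedy selection rule is an assumption on our part, since the exact procedure is not spelled out here:

```python
import numpy as np

def largest_nonoverlapping_trends(series, window=15, n_events=9):
    """Pick the strongest non-overlapping decreasing `window`-year trends.

    Trends are linear slopes over sliding windows; overlapping windows
    are excluded greedily, starting from the most negative slope.
    """
    t = np.arange(window)
    slopes = np.array([np.polyfit(t, series[i:i + window], 1)[0]
                       for i in range(len(series) - window + 1)])
    order = np.argsort(slopes)          # most negative slopes first
    chosen = []
    for i in order:
        if all(abs(i - j) >= window for j in chosen):
            chosen.append(i)
        if len(chosen) == n_events:
            break
    return sorted(chosen)               # start years of the selected events
```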
Interestingly, different impacts appear for the summer atmospheric circulation (Fig. 11f), mainly characterised by a blocking situation over Scandinavia. Enhanced blocking activity, although located more over the eastern SPG, has been previously associated with the development of abrupt cooling events over the region (Drijfhout et al. 2013). A strong local warming is now noticed east of Greenland, possibly linked to summer ocean-ice interactions. Due to the different atmospheric circulation, the associated summer inland T1.5 trends (Fig. 11g) also differ from those in winter, in particular over Western Europe and northeastern North America, where a cooling is now established. As in winter, no noteworthy summer precipitation impacts are observed over the continents, although a southward migration of the Intertropical Convergence Zone (ITCZ) is seen in the Equatorial Atlantic and Eastern Pacific. This is a well-known equatorial response caused by a southward expansion of the northward Hadley cell that compensates for the reduced ocean northward heat transport following the AMOC declines (Kageyama et al. 2009). It is furthermore possible that part of the anomalous summer atmospheric circulation in the mid-latitudes described above is forced by ITCZ-driven SST changes (via latent heat fluxes) in the Tropics. Other dynamical extratropical atmospheric responses to low-latitude SST forcing have been previously identified both for the winter (e.g. Sutton et al. 2000) and summer seasons (e.g. Cassou et al. 2005).
An analogous composite analysis based on the PC1-ILS increasing trends (not shown) identifies nearly opposite atmospheric impacts to those described above. In particular, a clear negative NAO-like pattern is established in winter, and the SPG and Gulf Stream regions experience a warming and a cooling, respectively. Also worth noting is a northward shift in the ITCZ position, now present both during winter and summer. These and the above results are consistent with Allison et al. (2014), who explored the sensitivity of climate impacts to the polarity of rapid decadal AMOC changes in a variety of climate models, and concluded that the associated atmospheric and surface ocean changes are approximately symmetric.
In conclusion, by focusing on a selection of large events characterised by rapid decadal changes in the ILS densities, we have identified two major atmospheric impacts: first, a delayed winter NAO response that furthermore provides a negative feedback on the initial ILS changes; and second, a meridional displacement of the ITCZ, mainly visible in summer, that responds to changes in the ocean circulation. These two atmospheric signals lead to important climate impacts over the continents.
Fig. 9 a Evolution of the PC1-ILS-HFL residuals and three standardised EGC indices, defined as the total, thermal and haline density components averaged over the green box in Fig. 1e. All timeseries are smoothed using 11-year running averages. The dotted vertical lines denote a period of poor agreement between the PC1-ILS-HFL residuals and the full EGC density. b The same but for a selection of DSO indices (computed as averages over the purple box in Fig. 1e). c Evolution of the standardised DSO haline density component (DSO-RHOS), and a standardised index of deep ILS salinity, computed as the vertically averaged salinity between 2000 and 3000 m.
Summary of the interactions
A schematic of the different processes and interactions described for GC2 throughout the article is represented in Fig. 12. Positive density anomalies in the Interior Labrador Sea appear in response to a combination of different factors, including the accumulation of cooling surface heat fluxes driven by the NAO for up to a decade, the delayed influence of salinity exports by the EGC, as well as slow changes in the Denmark Strait overflows inducing first a cooling and later on a salinification of the deep Labrador Sea waters. Almost concomitantly, anomalously dense waters are seen in the upper 1500 m of the Labrador Sea and all along the western boundary, where they contribute to speeding up the AMOC. Such a coherent structure across latitudes is not clear for the deeper ILS waters, which instead induce a local acceleration of the SPG strength. It is therefore through this combined effect on both the meridional and horizontal ocean circulations that positive ILS density anomalies increase the northward heat and salinity transports. These transports provide a first feedback mechanism on the ILS densities. Yet, the associated temperature and salinity changes produce competing effects, the first (i.e. the arrival of warmer waters) contributing to reverse, and the second (i.e. the arrival of saltier waters) contributing to maintain the initial ILS density anomalies. An additional feedback mechanism through the atmosphere is also present. Negative NAO phases appear 5 years after the increased AMOC conditions, thus establishing a mechanism of phase reversal for the ILS densities.
In the following section we discuss the associated uncertainties and how they affect the relevance of these findings for the real world.
Discussion
Before making any inferences for the real world, some of the previous findings merit further discussion. This includes the degree of realism of the key drivers identified in the model, the impact of the major model biases and misrepresented processes on our results, and some limitations inherent to our analysis.
The major role of the NAO in driving Labrador Sea variability in GC2 appears consistent with observations (Dickson et al. 1996; Visbeck et al. 2003) and other model studies (Eden and Willebrand 2001; Guemas and Salas 2008; Ortega et al. 2012; Yeager et al. 2012; Danabasoglu et al. 2016). During positive NAO phases, strengthened westerly winds increase surface heat loss in the Labrador region, and this signal propagates downward, contributing to cool the whole water column. Thus, the observed decline in the NAO from the 1950s to 1970 was followed by a warming in the Labrador Sea, and the predominant positive NAO phases in the early 1990s gave rise to a remarkable Labrador Sea cooling (Curry et al. 1998). Also, episodes of persistently high (or low) NAO phases have been linked to anomalously deep (or shallow) Labrador Sea convection (e.g. Pickart et al. 2002; Kieke and Yashayaev 2015), an influence that can explain the vertical homogeneity of the Labrador Sea signals in GC2. Atmospheric conditions other than the canonical NAO might also exert an influence on Labrador convection in the real world. For instance, a recent analysis with reanalysis data and forced-ocean sensitivity experiments has identified a somewhat different dipole structure, with a mid-latitude high occupying the western (instead of the eastern) North Atlantic, that explains half of the episodes of strong heat loss in the Labrador Sea. This atmospheric pattern appears to be accompanied by La Niña conditions, suggesting some remote influence from the Pacific on Labrador convection.
(Fig. 11c-e: the same as in b but for DJF trends in air temperature at 1.5 m (T1.5 m), SST and total rainfall; f-i: the same as in b-e but for JJA trends.)
As for the other ocean influences, the degree of realism of the simulated Arctic contributions is probably limited by model deficiencies in the representation of the Denmark Strait overflows, which are too strong when compared to observational estimates (Megann et al. 2014). Some differences are also noted regarding the associated time scale and origin of salinity signals influencing the Labrador Sea.
Thus, while our model shows centennial modulations in the DSO discharges, three major GSA events have been observed in the real world since the 1960s. Two of them (early 1980s and early 1990s) are associated with the effect of severe winters on the Labrador Sea waters. Additionally, the 1980s and 1970s GSA events are influenced by remote freshwater exports from the Arctic (via the Fram Strait) and the Canadian Archipelago, respectively (Belkin et al. 1998).
In light of these differences, two important model biases to keep in mind are the anomalously strong winter convection and the relatively shallow NADW return flow in GC2 as compared to observations. The simulated climatological winter mixed layer depth is deeper than 2000 m in some areas of the Labrador Sea, which is in stark contrast with reported values of 500-1000 m inferred from observations (de Boyer Montégut et al. 2004). This overestimation of convection can thus affect how realistically upper ocean signals penetrate downward in the model. The overly shallow NADW return flow in GC2 is due to excess entrainment in the representation of the overflows (Megann et al. 2014). This issue might explain why, while previous works (Hodson and Sutton 2012; Robson et al. 2014a; Menary et al. 2015) report a close link between the deep Labrador Sea densities (typically between 1000 and 3000 m) and the AMOC, in GC2 this link is only found for the upper Labrador Sea densities (top 1500 m). But which one of these relationships is more realistic? The answer is not clear. Other results, both with profiling floats released at 700 and 1500 m (e.g. Fischer and Schott 2002; Bower et al. 2009) and with high-resolution models (Spence et al. 2012), suggest that subsurface Labrador Sea waters are often advected into the North Atlantic interior instead of following the DWBC toward the subtropics, thus reducing the direct role of these waters on the AMOC at lower latitudes. These studies also provide evidence for the existence of interior pathways, which could explain the slow timescale observed in GC2 for the southward propagation of AMOC anomalies, contrasting with the additional almost instantaneous timescale related to fast wave adjustments along the western boundary.
Finally, it is important to remark that our previous analysis does not account for the total contribution of the SPG strength to the net OHT. The SPG strength index defined in this study accounts broadly for the advection of mean temperatures by the anomalous gyre circulation (v′T₀). Therefore, this index mostly describes the increased (or decreased) advection of relatively warm and salty waters entering the SPG in the region downstream of the NAC (Fig. 7c). The contribution of anomalous temperatures transported by the mean flow (v₀T′) has not been considered in this study, and can potentially be of the same order as the v′T₀ component, as previously shown for HG3 (Menary et al. 2015). This v₀T′ component is responsible for the westward transport of heat content anomalies from the eastern SPG, which can potentially feed back onto the Labrador Sea waters. Properly quantifying the importance of this contribution would require a detailed heat budget breakdown, which is out of the scope of this paper.
Conclusions
Water mass properties in the Labrador Sea exhibit large decadal variability, which is believed to play an important role in the decadal variability of the wider North Atlantic Ocean. This study has investigated Labrador Sea density variability on decadal to multi-decadal timescales in a 310-year control simulation with the high-resolution coupled climate model HadGEM3-GC2 (GC2). In particular, we have explored the chain of events leading to changes in Interior Labrador Sea (ILS) density, and the links between ILS density and large-scale ocean circulation and transports.
To do so we first evaluated the leading mode of ILS densities (PC1-ILS; accounting for 67% of the total variance), which exhibits strong multi-decadal variability. PC1-ILS is characterised by a fairly uniform vertical profile, with maximum density values in the top 2000 m, followed by a slow decrease with depth down to 4000 m. Both temperature and salinity contribute to the vertical structure of density anomalies associated with PC1-ILS. The major drivers of PC1-ILS variability can be summarised as follows:
• The largest contribution is due to the temporal integration of local surface heat fluxes, primarily driven by changes in the North Atlantic Oscillation (NAO). A univariate regression model of PC1-ILS, which uses as a predictor surface heat fluxes averaged over the interior Labrador Sea and accumulated for 12 consecutive years, explains 62% of the PC1-ILS variance. This result is consistent with previous works with other models, supporting a dominant role of NAO forcing on Labrador Sea deep water formation (Ortega et al. 2011), the Atlantic Meridional Overturning Circulation (AMOC) and the subpolar gyre strength (Mecking et al. 2014).
• The residual variability of PC1-ILS (i.e. the variability of PC1-ILS not explained by the regression model) exhibits coherent variations on centennial timescales, highlighted particularly by the occurrence of three major density maxima in the 310-year time series. This residual variance has two distinct density components: a salinity-driven component in the upper ocean and a thermally-driven component in the subsurface.
• The upper ocean salinity variability is related to variations in the East Greenland Current (EGC). Large anomalous salinity exports by the EGC are found to precede two of the three density maxima in the residuals of PC1-ILS. This is consistent with other models, which support a role of similar Arctic salinity outflows in driving bidecadal (Escudier et al. 2013) to multidecadal (Jungclaus et al. 2005; Hawkins and Sutton 2007) changes in the AMOC.
• An additional Arctic contribution comes from the influence of dense Greenland-Scotland overflows on the Labrador Sea densities. In particular, the strongest influence comes from the Denmark Strait Overflow waters (DSO). Analysis of the PC1-ILS residuals reveals that these waters first influence the subsurface temperature changes in the Labrador Sea, and also, after a delay of 9 years, induce salinity changes at the deepest levels. The role of the DSO is also consistent with rapid AMOC changes in the HadCM3 model (Hawkins and Sutton 2008), which are explained by a flow of anomalous dense waters through the Denmark Strait.
Regarding the relationship between variability in Labrador Sea densities and variability in ocean circulation and transports, the major findings are as follows:
• In GC2, AMOC variability is most closely linked to Labrador Sea density variability in the upper 1500 m, mediated by propagation of signals along the western boundary. The lack of a strong relationship to density variability at deeper levels is likely related to a shallow bias in the mean AMOC in this particular model.
• The PC1-ILS index is found to be a good proxy of the AMOC strength (R = 0.6) due to its strong link with upper ocean Labrador Sea densities. Our analysis also supports using the AMOC streamfunction calculated in density space (AMOCσ) to track how the associated signals propagate from high to low latitudes, as in Zhang (2010). Indeed, the link of PC1-ILS with the circulation in the subtropics can only be observed when the AMOCσ streamfunction is considered. The analysis is also suggestive of two distinct speeds for the AMOC changes to propagate from high to low latitudes (Fig. 6d), characterised by a fast (almost instantaneous) and a slow time scale (i.e. 5-7 years), the latter more evident for the AMOCσ streamfunction. Two similar AMOC propagation time scales have been identified in the model GFDL-CM2.1 (Zhang 2010), associated with fast-propagating boundary waves and slow advection through interior pathways, respectively.
• By contrast, the SPG strength is strongly linked with in-phase Labrador Sea density anomalies that extend over the whole water column, and are largest at mid depths (i.e. 1000-1500 m). Changes in the strength of the SPG are consistent with the geostrophic response induced by these Labrador Sea density anomalies.
• Since PC1-ILS is connected with both the shallow and deep ILS densities, it provides an even better proxy of the SPG (R = 0.8) than of the AMOC strength in this model. This analysis therefore refines the previous interpretation of Labrador Sea densities as a precursor of AMOC variability (Robson et al. 2014a, 2016), by highlighting that Labrador Sea densities might also affect the SPG strength, and thus provide a link between the variability of both the gyre and the overturning circulations.
• PC1-ILS is also a good indicator (R > 0.5 at mid-latitudes) of ocean heat transport (OHT) changes across the Atlantic. A few years before the changes in PC1-ILS there is an intensification of the northward heat transport between 10-40°N, which is explained by local changes in the Ekman transport (the Ekman transport itself is ultimately driven by the NAO and concomitant with the NAO-driven surface heat fluxes giving rise to the Labrador density anomalies). Once the density anomalies are formed, their subsequent influence on the ocean circulation leads to an intensification of the OHT, this time starting at high latitudes and propagating southward. In phase with PC1-ILS, there is a coherent OHT intensification at all latitudes, which is consistent with the effect of the fast-propagating boundary waves on the AMOC. In subsequent years, the OHT intensification continues, but almost exclusively at subpolar latitudes.
The link with the OHT (and the related ocean salt transport; OST) provides a negative feedback (positive for the OST) on the Labrador Sea densities. Although it has the same sign, the feedback is weaker than in the previous version of the HadGEM3 model (HG3; Menary et al. 2015), which exhibits a strong 17-year periodicity that is not present in the version analysed in this study. In HG3, the phase reversal is instead related to a strong negative feedback between Labrador Sea densities and those in the North Atlantic Current (NAC). However, a similar mechanism is not found in GC2.
The interaction with the atmosphere also seems to be different in GC2 compared to HG3. In HG3 the NAO plays an intensifying role in the Labrador Sea/NAC feedback, which is most effective at inter-annual timescales. In particular, the NAO forces an anomalous geostrophic current response contributing significantly to the NAC changes that first feed onto the SPG temperatures, and eventually onto the Labrador Sea. In GC2, however, the mechanism is different. We identify a delayed NAO response that contributes to phase reversal in Labrador Sea densities through surface flux changes. This response is particularly clear after large sustained trends in ILS densities. A positive NAO signal emerges, overall, 5 years after the strongest decadal decreasing trends in PC1-ILS, most likely driven by the ocean surface temperature changes following the weakened OHT associated with these events. Note that by selecting these rapid large-amplitude transitions, the signal-to-noise ratio is improved, and the atmospheric signal is better detected.
Finally, it is also important to highlight the similar timescales between the simulated PC1-ILS changes and those of the observed deep Labrador Sea densities (Robson et al. 2014a), suggesting that drivers similar to the ones identified in this study might be at play in the real world. Our results with GC2 support a major role of NAO-driven ocean heat fluxes on Labrador Sea densities, and thus on the evolution of the SPG as well as the AMOC, a result that is consistent with previous studies exploring the reasons for the rapid warming in the North Atlantic during the 1990s (Robson et al. 2012a, b; Yeager et al. 2012). GC2 also suggests that changes in the water exports through the Denmark Strait may play a role in the observed anomalously low densities in the deep Labrador Sea after 2000 AD (Robson et al. 2016). In the last few years, coinciding with this minimum in Labrador Sea densities, there seems to be a new tendency towards positive winter NAO phases (observed in 2012, 2014, 2015 and 2016). This may also suggest that the delayed negative NAO feedback identified in GC2, and also reported for the AMOC in other models (Gastineau and Frankignoul 2012), is potentially present in the real world. Understanding the processes and mechanisms behind this atmospheric response is beyond the scope of this paper, and will be the subject of a follow-up study.
"Environmental Science",
"Physics"
] |
Leveraging Declarative Knowledge in Text and First-Order Logic for Fine-Grained Propaganda Detection
We study the detection of propagandistic text fragments in news articles. Instead of merely learning from input-output datapoints in training data, we introduce an approach to inject declarative knowledge of fine-grained propaganda techniques. We leverage declarative knowledge expressed in both natural language and first-order logic. The former refers to the literal definition of each propaganda technique, which is utilized to obtain class representations for regularizing the model parameters. The latter refers to logical consistency between coarse- and fine-grained predictions, which is used to regularize the training process with propositional Boolean expressions. We conduct experiments on the Propaganda Techniques Corpus, a large manually annotated dataset for fine-grained propaganda detection. Experiments show that our method achieves superior performance, demonstrating that injecting declarative knowledge expressed in both natural language and first-order logic can help the model make more accurate predictions.
Introduction
Propaganda is an approach deliberately designed with specific purposes to influence the opinions of readers. Different from fake news, which is entirely made up and refers to fabricated news with no verifiable facts, propaganda conveys information with strong emotion or some bias, albeit possibly built upon an element of truth. This characteristic makes propaganda more effective and less noticeable with the rise of social media platforms. There are many propaganda techniques. For instance, examples of propagandistic texts and definitions of the corresponding techniques are shown in Figure 1.
We study the problem of fine-grained propaganda detection in this work, which is possible thanks to the recent release of the Propaganda Techniques Corpus (Da San Martino et al., 2019). Different from earlier works (Rashkin et al., 2017; Wang, 2017) that mainly study propaganda detection at a coarse-grained level, namely predicting whether a document is propagandistic or not, the problem requires identification of tokens of particular propaganda techniques in news articles. Da San Martino et al. (2019) propose strong baselines in a multi-task learning manner, which are trained by binary detection of propaganda at sentence level and fine-grained propaganda detection over 18 techniques at token level. Such data-driven methods have the merits of convenient end-to-end learning and strong generalization; however, they cannot guarantee the consistency between sentence-level and token-level predictions. In addition, it is appealing to integrate human knowledge into data-driven approaches.
In this paper, we introduce an approach named LatexPRO that leverages logical and textual knowledge for propaganda detection. Following Da San Martino et al. (2019), we develop a BERT-based multi-task learning approach as the base model, which makes predictions for 18 propaganda techniques at both sentence level and token level. Based on that, we inject two types of knowledge as additional objectives to regularize the learning process. Specifically, we use logic knowledge by transforming the consistency between sentence-level and token-level predictions into propositional Boolean expressions. Moreover, we use the textual definitions of propaganda techniques by first representing each of them as a contextual vector and then minimizing the distances to the corresponding model parameters in semantic space.
We conduct extensive experiments on the Propaganda Techniques Corpus (PTC) (Da San Martino et al., 2019), a large manually annotated dataset for fine-grained propaganda detection. Experiments show that our knowledge-augmented method significantly improves a strong multi-task learning approach. In particular, results show our model greatly improves precision, demonstrating that injecting declarative knowledge expressed in both natural language and first-order logic can help the model to make more accurate predictions. More importantly, further analysis indicates that augmenting the learning process with declarative knowledge reduces the percentage of inconsistency in model predictions.
The contributions of this paper are summarized as follows: • We introduce an approach to leverage declarative knowledge expressed in both natural language and first-order logic for detecting fine-grained propaganda techniques.
• We utilize both types of knowledge as regularizers in the learning process, which enables the model to make more consistent sentence-level and token-level predictions.
• Extensive experiments on the PTC dataset (Da San Martino et al., 2019) demonstrate that our method achieves superior performance with high F1 and precision.
We follow the evaluation measures proposed by Da San Martino et al. (2019) to calculate precision, recall and F1, which give partial credit to imperfect matches at the character level. The FLC task is evaluated with two kinds of measures: (1) Full task is the overall task of detecting both propagandistic fragments and identifying the technique, while (2) Spans is a special case of the Full task, which only considers the spans of fragments and ignores their propaganda techniques.
Method
In this section, we present our approach LatexPRO, which injects declarative knowledge of fine-grained propaganda techniques into neural networks. A high-level illustration is shown in Figure 2. We first present our base model (§3.1), which is a multi-task learning neural architecture that slightly extends the model of Da San Martino et al. (2019). Afterwards, we introduce ways to regularize the learning process with textual knowledge from literal definitions of propaganda techniques (§3.3) and logical knowledge about the consistency between sentence-level and token-level predictions (§3.2). Finally, we describe the training and inference procedures (§3.4).
Base Model
To better exploit the sentence-level information and further help token-level prediction, we develop a fine-grained multi-task method as our base model, which makes predictions for 18 propaganda techniques at both sentence level and token level. Inspired by the success of pretrained language models on various natural language processing downstream tasks, we adopt BERT (Devlin et al., 2019) as the backbone model. To fine-tune the model, for each sentence, the input sequence takes the form "[CLS] sentence tokens [SEP]". Specifically, we add 19 binary classifiers and one 19-way classifier on top of BERT, where all classifiers are implemented as linear layers. At sentence level, we perform multiple binary classifications, which further supports leveraging declarative knowledge.
The last-layer representation of the special token [CLS], which is regarded as a summary of the semantic content of the input, is used to perform multiple binary classifications: one binary classification of propaganda vs. non-propaganda and 18 binary classifications, one for each propaganda technique. We adopt a sigmoid activation for each binary classifier. At token level, the last-layer representation of each token is fed into a linear layer to predict the propaganda technique over 19 categories (i.e., 18 categories of propaganda techniques plus one category for "none of them"). We adopt a Softmax activation for the 19-way classifier. We apply two different losses for this multi-task learning process: the sentence-level loss L_sen and the token-level loss L_tok. L_sen is the binary cross-entropy loss of the multiple binary classifications.
L_tok is the focal loss (Lin et al., 2017) of the 19-way classification for each token, which helps address the class imbalance problem.
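For reference, a minimal PyTorch sketch of a multi-class focal loss in the spirit of Lin et al. (2017) is given below; the focusing parameter value and the absence of class weighting are assumptions, since they are not reported here:

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0):
    """Multi-class focal loss for the 19-way token classifier.

    logits:  (num_tokens, 19); targets: (num_tokens,) with class ids.
    `gamma` down-weights easy examples; 2.0 is the original paper's
    default, not necessarily the value used in LatexPRO.
    """
    log_probs = F.log_softmax(logits, dim=-1)
    ce = F.nll_loss(log_probs, targets, reduction="none")  # -log p_t
    p_t = torch.exp(-ce)                                   # prob of true class
    return ((1.0 - p_t) ** gamma * ce).mean()
```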
Inject Logical Knowledge
There are implicit logical constraints between predictions. However, neural networks are less interpretable and need to be trained with a large amount of data to make it possible to learn such implicit logic. Therefore, we consider further improving performance by using logic knowledge. To this end, we propose to employ propositional Boolean expressions to explicitly regularize the model with a logic-driven objective, which improves logical consistency between sentence-level and token-level predictions, and makes our method more interpretable. For instance, in this work, if a propaganda class c is predicted by the multiple binary classifiers (indicating the sentence contains this propaganda technique), then token-level predictions belonging to the propaganda class c should also exist. We thus consider the propositional rule F = A ⇒ B, where A and B are two variables. Specifically, we substitute f_c(x) for A and g_c(x) for B, where x denotes the input, f_c(x) is the binary classifier for propaganda class c, and g_c(x) is the probability of the fine-grained predictions for x being of category c. g_c(x) can be obtained by max-pooling over all the probabilities of predictions for class c. Note that the probabilities of the unpredicted classes are set to 0 to prevent any violation, i.e., ensuring that each class has a probability corresponding to it. Our objective here is maximizing P(F), i.e., minimizing L_logic = −log(P(F)), to improve logical consistency between coarse- and fine-grained predictions.
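A possible realization of this logic-driven objective is sketched below. The material-implication relaxation P(A ⇒ B) = 1 − P(A)(1 − P(B)) is one common choice and an assumption on our part; the text only specifies that L_logic = −log(P(F)) is minimized:

```python
import torch

def logic_consistency_loss(sent_probs, token_probs, eps=1e-8):
    """Soft relaxation of the rule f_c(x) => g_c(x) per class c.

    sent_probs:  (batch, 18) sigmoid outputs of the per-technique
                 sentence classifiers, i.e. P(A).
    token_probs: (batch, num_tokens, 19) softmax outputs of the token
                 classifier; class 0 is assumed to be "none of them".
    The relaxation P(A => B) = 1 - P(A)(1 - P(B)) is an assumption.
    """
    # g_c(x): max-pool token probabilities per technique, as in the text
    g = token_probs[:, :, 1:].max(dim=1).values     # (batch, 18)
    p_f = 1.0 - sent_probs * (1.0 - g)              # P(A => B) per class
    return -torch.log(p_f + eps).mean()
```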
Inject Textual Knowledge
Declarative knowledge in natural language, i.e., the literal definitions of propaganda techniques in this work, can be regarded as textual knowledge that contains useful semantic information. As outlined in the introduction, each definition is represented as a contextual vector, and the distance between this vector and the corresponding model parameters is minimized in semantic space, yielding a definition loss L_def.
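A minimal sketch of such a definition-based regularizer is shown below; the use of a squared L2 distance and of [CLS] vectors for the definitions are assumptions, as the section does not give the exact formulation:

```python
import torch

def definition_loss(class_weights, definition_embeddings):
    """Regularizer pulling each technique's classifier weights toward
    a BERT encoding of its textual definition.

    class_weights:         (18, hidden) rows of the 18 binary classifiers
    definition_embeddings: (18, hidden) e.g. [CLS] vectors of definitions
    The squared L2 distance below is an assumed instantiation of
    "minimizing the distances to corresponding model parameters".
    """
    return ((class_weights - definition_embeddings) ** 2).sum(dim=-1).mean()
```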
Training and Inference
Training. To train the whole model jointly, we introduce a weighted sum of losses L_j, which consists of the token-level loss L_tok, the fine-grained sentence-level loss L_sen, the textual definition loss L_def and the logical loss L_logic: L_j = α·L_tok + β·L_sen + λ·L_def + γ·L_logic, where the hyper-parameters α, β, λ and γ control the tradeoff among the losses. During training, our goal is minimizing L_j using stochastic gradient descent.
Inference. For the SLC task, our method predicts "propaganda" only if the probability of the propagandistic binary classification for the positive class is above 0.7. This threshold is chosen according to the number of propaganda and non-propaganda samples in the training dataset. For the FLC task, to better use the coarse-grained (sentence-level) information to guide fine-grained (token-level) prediction, we design a way to explicitly constrain the 19-way predictions at inference time. Prediction probabilities of the 18 fine-grained binary classifications above 0.9 are set to 1, and otherwise to 0. Then the Softmax probability of the 19-way predictions (except for the "none of them" class) of each token is multiplied by the corresponding 18 probabilities of propaganda techniques. This means that our model only makes predictions for the propaganda techniques it is strongly confident the sentence contains.
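The inference-time gating just described might look as follows; tensor shapes follow the earlier sketches and the helper name is hypothetical:

```python
import torch

def gate_token_predictions(sent_probs, token_probs, threshold=0.9):
    """Binarize sentence-level technique probabilities at `threshold`
    and multiply them into the token-level softmax for the 18 technique
    classes, so only confidently detected techniques survive.

    sent_probs:  (batch, 18); token_probs: (batch, num_tokens, 19),
    with class 0 assumed to be "none of them".
    """
    mask = (sent_probs > threshold).float()            # (batch, 18)
    gated = token_probs.clone()
    gated[:, :, 1:] = gated[:, :, 1:] * mask.unsqueeze(1)
    return gated.argmax(dim=-1)                        # per-token class ids
```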
Experimental Settings
In this paper, we conduct experiments on the Propaganda Techniques Corpus (PTC) (Da San Martino et al., 2019), a large manually annotated dataset for fine-grained propaganda detection, as detailed in Section 2. We adopt the F1 score as the final metric of model performance.
We select the best model on the dev dataset. We adopt BERT-base (cased) (Devlin et al., 2019) as the pre-trained model. We implement our model using Huggingface (Wolf et al., 2019). We use AdamW as the optimizer. In our best model on the dev dataset, the hyper-parameters in loss optimization are set as α = 0.8, β = 0.2, λ = 0.001 and γ = 0.001. We set the max sequence length to 256, the batch size to 16, the learning rate to 3e-5 and the warmup steps to 500. We train our model for 20 epochs and adopt an early stopping strategy on the average validation F1 score of Spans and Full Task with a patience of 5. For all experiments, we set the random seed to 42 for reproducibility.
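A sketch of the corresponding optimizer and warmup-schedule setup (with a placeholder model and an illustrative number of training steps) could look as follows:

```python
import torch
import torch.nn as nn
from transformers import get_linear_schedule_with_warmup

# Optimization setup matching the reported hyper-parameters
# (AdamW, lr = 3e-5, 500 warmup steps). The model is a stand-in;
# the number of training steps depends on the dataset size.
model = nn.Linear(768, 19)               # placeholder for the real model
num_training_steps = 10_000              # illustrative value
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=500, num_training_steps=num_training_steps)
```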
Models for Comparison
We compare our proposed method with several baselines for fine-grained propaganda detection. Moreover, three variants of our method are provided to reveal the impact of each component. The notations LatexPRO (T+L), LatexPRO (T), and LatexPRO (L) denote our model injecting both textual and logical knowledge, only textual knowledge, and only logical knowledge, respectively. Each of these models is described as follows.
BERT (Da San Martino et al., 2019) adds a linear layer on top of BERT, and is fine-tuned on the SLC and FLC tasks, respectively.
MGN (Da San Martino et al., 2019) is a multi-task learning model, which regards the SLC task as the main task and drives the FLC task on the basis of the SLC task.
LatexPRO is our baseline model without leveraging declarative knowledge.
LatexPRO (T) augments LatexPRO with declarative textual knowledge in natural language, i.e., the literal definitions of propaganda techniques.
LatexPRO (L) injects logical knowledge by employing propositional Boolean expressions to explicitly regularize the model.
LatexPRO (T+L) is our full model in this paper.
Experiment Results and Analysis
Fragment-Level Propaganda Detection. The results for the FLC task are shown in Table 2. Our basic model LatexPRO achieves better results than the other baseline models, which confirms the effectiveness of our fine-grained multi-task learning structure. It is worth noting that our full model LatexPRO (T+L) significantly outperforms MGN by 10.06% precision and 2.85% F1 on the Spans task, and by 12.54% precision and 4.92% F1 on the Full task, which constitutes significant progress on this dataset. This demonstrates that leveraging declarative knowledge in text and first-order logic helps to predict the propaganda types more accurately. Moreover, our ablated models LatexPRO (T) and LatexPRO (L) both gain improvements over LatexPRO, while LatexPRO (L) gains more than LatexPRO (T). This indicates that injecting each kind of knowledge is useful, and that the effects of the different kinds of knowledge can be superimposed and are uncoupled. It should be noted that, compared with the baseline models, our models achieve superior performance thanks to high precision, at the cost of slightly lower recall. This is mainly because our models tend to make predictions only for propaganda types with high confidence.
To further understand the performance of the models on the FLC task, we make a more detailed analysis of each propaganda technique. Table 3 shows detailed performance on the Full task. Our models achieve precision and F1 improvements for almost all classes over the baseline model, and can also predict some low-frequency propaganda techniques, e.g., Whataboutism and Obfus.,Int. This further demonstrates that our method can address the class imbalance problem and make more accurate predictions.
Sentence-Level Propaganda Detection. Table 4 shows the performance of the different models on the SLC task. The results indicate that our model achieves superior performance over the other baseline models. Compared to MGN, LatexPRO (T+L) increases precision by 1.63%, recall by 9.16% and the F1 score by 4.89%. This demonstrates the effectiveness of our model, and shows that it can find more positive samples, which further benefits the token-level predictions for the FLC task.
Effectiveness of Improving Consistency
We further define the following metric M_C to measure the consistency between the sentence-level predictions Y_c, a set of predicted propaganda technique classes, and the token-level predictions Y_t, a set of propaganda technique classes predicted for the input tokens: M_C = (1/|Y_t|) Σ_{y∈Y_t} 1_{Y_c}(y), where |Y_t| denotes a normalizing factor and 1_{Y_c}(y) is the indicator function, equal to 1 if y ∈ Y_c and 0 otherwise. Injecting textual knowledge from propaganda definitions and logical knowledge from implicit logical rules between predictions enables the model to make more consistent predictions.
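A direct translation of this metric into code, under the reconstruction of the formula given above, might be:

```python
def consistency_metric(sentence_preds, token_preds):
    """Fraction of token-level predicted technique classes that also
    appear among the sentence-level predictions. This realization of
    the formula is a reconstruction from the surrounding definitions.

    sentence_preds, token_preds: sets of predicted technique classes.
    """
    if not token_preds:
        return 1.0                     # nothing to be inconsistent about
    hits = sum(1 for y in token_preds if y in sentence_preds)
    return hits / len(token_preds)     # |Y_t| is the normalizing factor
```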
Error Analysis
Although our model achieves the best performance, some types of propaganda techniques are still not identified, e.g., Appeal to Authority and Red Herring, as shown in Table 3. To explore why our model LatexPRO (T+L) cannot predict these propaganda techniques, we compute a confusion matrix for the Full task of the FLC task, and visualize it as a heatmap in Figure 4. We find that most of the off-diagonal elements are in class O, which represents none of them. This demonstrates that most of the failing cases are wrongly classified into O. We attribute this to the imbalance of the propaganda and non-propaganda categories in the dataset. Similarly, Straw Men, Red Herring and Whataboutism are classes with relatively low frequency. How to deal with the class imbalance still needs further exploration.
Related Work
Our work relates to fake news detection and to the injection of first-order logic into neural networks. We describe related studies in these two directions. Fake news detection draws growing attention as the spread of misinformation on social media becomes easier and leads to stronger influence. Various types of fake news detection problems have been introduced. For example, there are 4-way classification of news documents (Rashkin et al., 2017) and 6-way classification of short statements (Wang, 2017). There are also sentence-level fact checking problems with various genres of evidence, including natural language sentences from Wikipedia (Thorne et al., 2018), semi-structured tables (Chen et al., 2019), and images (Zlatkova et al., 2019; Nakamura et al., 2019). Our work studies propaganda detection, a fine-grained problem that requires token-level prediction over 18 fine-grained propaganda techniques. The release of a large manually annotated dataset (Da San Martino et al., 2019) makes the development of large neural models possible, and also triggers our work, which improves a standard multi-task learning approach by augmenting it with declarative knowledge expressed in both natural language and first-order logic.
Neural networks have the merits of convenient end-to-end training and good generalization; however, they typically need a lot of training data and are not interpretable. On the other hand, logic-based expert systems are interpretable and require less or no training data. It is appealing to leverage the advantages of both worlds. In the NLP community, the injection of logic into neural networks can be generally divided into two groups. Methods in the first group regularize the neural network with logic-driven loss functions (Xu et al., 2017; Fischer et al., 2018; Li et al., 2019). For example, Rocktäschel et al. (2015) target the problem of knowledge base completion. After extracting and annotating propositional logical rules about relations in a knowledge graph, they ground these rules to facts from the knowledge graph and add a differentiable training loss function. Kruszewski et al. (2015) map text to Boolean representations, and derive loss functions based on implication at the Boolean level for entailment detection. Demeester et al. (2016) propose lifted regularization for knowledge base completion to make the logical loss functions independent of the number of grounded instances and to further extend to unseen constants. The basic idea is that hypernyms have ordering relations and such relations correspond to component-wise comparison in semantic vector space. Hu et al. (2016) introduce a teacher-student model, where the teacher model is a rule-regularized neural network, whose predictions are used to teach the student model. Wang and Poon (2018) generalize virtual evidence (Pearl, 2014) to arbitrary potential functions over inputs and outputs, and use deep probabilistic logic to integrate indirect supervision into neural networks. More recently, Asai and Hajishirzi (2020) regularize question answering systems with symmetric consistency and transitive consistency. The former creates a symmetric question by replacing words with their antonyms in comparison questions, while the latter targets causal reasoning questions by creating new examples when a positive causal relationship between two cause-effect questions holds.
The second group incorporates logic-specific modules into the inference process (Yang et al., 2017; Dong et al., 2019). For example, Rocktäschel and Riedel (2017) target the problem of knowledge base completion, and use neural unification modules to recursively construct a model similar to the backward chaining algorithm of Prolog. Evans and Grefenstette (2018) develop a differentiable model of forward chaining inference, where weights represent a probability distribution over clauses. Li and Srikumar (2019) inject logic-driven neurons into existing neural networks by measuring the degree of the head being true with probabilistic soft logic (Kimmig et al., 2012). Our approach belongs to the first direction, and to the best of our knowledge our work is the first to augment a neural network with logical knowledge for propaganda detection.
Conclusion
In this paper, we propose a fine-grained multi-task learning approach, which leverages declarative knowledge to detect propaganda techniques in news articles. Specifically, the declarative knowledge is expressed in both natural language and first-order logic, and used as regularizers to obtain better propaganda representations and to improve logical consistency between coarse- and fine-grained predictions, respectively. Extensive experiments on the PTC dataset demonstrate that our knowledge-augmented method achieves superior performance with more consistent sentence-level and token-level predictions.
Figure 1: An example of propagandistic texts, and definitions of the corresponding propaganda techniques (bold denotes propagandistic texts).
Figure 2: Overview of our proposed model. A BERT-based multi-task learning approach is adopted to make predictions for 18 propaganda techniques at both sentence level and token level. We introduce two types of knowledge as additional objectives: (1) textual knowledge from literal definitions of propaganda techniques, and (2) logical knowledge about the consistency between sentence-level and token-level predictions.
Figure 3: Qualitative comparison of 2 different models on a news article. The baseline MGN predicts spans of fragments with wrong propaganda techniques, while our method can make more accurate predictions. The 5 propaganda techniques are: 1. Thought-terminating Cliches, 2. Loaded Language, 3. Causal Oversimplification, 4. Flag waving and 5. Repetition. (Best viewed in color)
Figure 4: Visualization of the confusion matrix of our LatexPRO (T+L), where O represents the none of them class.
Figure 3 gives a qualitative comparison example between MGN and our LatexPRO (T+L). Different colors represent different propaganda techniques. The results show that although MGN can predict the spans of fragments correctly, it fails to identify their techniques to some extent. However, our method shows promising results on both spans and specific propaganda techniques, which further confirms that our method can make more accurate predictions.
Table 1: The statistics of all 18 propaganda techniques.
Following Da San Martino et al. (2019), we conduct experiments on two tasks of different granularity: sentence-level classification (SLC) and fragment-level classification (FLC). Formally, in both tasks, the input is a plain-text document d containing a sequence of characters and a set of propagandistic fragments T, where each propagandistic text fragment is represented as a sequence of contiguous characters t = [t_i, ..., t_j] ⊆ d. For SLC, the target is to predict whether a sentence is propagandistic, which can be regarded as a binary classification. For FLC, the target is to predict a set S of propagandistic fragments s = [s_m, ..., s_n] ⊆ d and to assign each s ∈ S to one of the propagandistic techniques.
Table 2: Overall performance on the fragment-level experiments (FLC task) in terms of precision (P), recall (R) and F1 scores on our test set. M_C denotes the metric of consistency between sentence-level and token-level predictions. Full task is the overall task of detecting both propagandistic fragments and identifying the technique, while Spans is a special case of the Full task, which only considers the spans of fragments and ignores their propaganda techniques. Note that (T+L), (T), and (L) denote injecting both textual and logical knowledge, only textual knowledge, and only logical knowledge, respectively. The baseline rows are:
BERT (Da San Martino et al., 2019) - Spans P/R/F1: 50.39/46.09/48.15; Full task P/R/F1: 27.92/27.27/27.60
MGN (Da San Martino et al., 2019) - Spans P/R/F1: 51.16/47.27/49.14; Full task P/R/F1: 30.10/29.37/29.73
Table 3: Detailed performance on the Full task of the fragment-level experiments (FLC task) on our test set. Precision (P), recall (R) and F1 scores per technique are provided.
Table 4: Results on the sentence-level experiments (SLC task) in terms of precision (P), recall (R) and F1 scores on our test set. Random is a baseline which predicts randomly, and All-Propaganda is a baseline which always predicts the propaganda class.
"Computer Science"
] |
Towards a first measurement of the free neutron bound beta decay detecting hydrogen atoms at a throughgoing beamtube in a high flux reactor
In addition to the common 3-body decay of the neutron $n\rightarrow p e^-\overline{\nu_e}$ there should exist an effective 2-body subset with the electron and proton forming a hydrogen bound state with well defined total momentum, total spin and magnetic quantum numbers. The atomic spectroscopic analysis of this bound system can reveal details about the underlying weak interaction, as it mirrors the helicity distributions of all outgoing particles. Thus, it is unique in the information it carries, and an experiment unravelling this information is an analogue to the Goldhaber experiment performed more than 60 years ago. The proposed experiment will search for monoenergetic metastable BoB H atoms with 326 eV kinetic energy, which are generated at the center of a throughgoing beamtube of a high-flux reactor (e.g., at the PIK reactor, Gatchina). Although full spectroscopic information is needed to possibly reveal new physics, our first aim is to prove the occurrence of this decay and learn about backgrounds. Key to the detection is the identification of a monoenergetic line of hydrogen atoms occurring at a rate of about 1 $\rm{s}^{-1}$ in an environment of many hydrogen atoms with a thermal distribution of about room temperature. Two scenarios for velocity (energy) filtering are discussed in this paper. The first builds on a purely electric chopper system, in which metastable hydrogen atoms are quenched to their ground state and thus remain mostly undetectable. This chopper system employs fast switchable Bradbury Nielsen gates. The second method exploits a strongly energy dependent charge exchange process of metastable hydrogen picking up an electron while traversing an argon filled gas cell, turning it into manipulable charged hydrogen. The final detection of hydrogen occurs through a multichannel plate (MCP) detector.
Introduction
The neutron decay has for many years been, and still is, the subject of intense studies, as it reveals detailed information on the structure of the weak interaction [1]. Using the two-body neutron decay into a hydrogen atom and an electron antineutrino, n → H + ν̄_e, the hyperfine populations of the emerging hydrogen atom can be investigated [2]. The challenge lies in the very small branching ratio of 4 × 10⁻⁶ relative to the total neutron decay rate. Hydrogen atoms from this decay have 325.7 eV kinetic energy, corresponding to β = v/c of 0.83 × 10⁻³ (non-relativistic hydrogen atom). Due to conservation of angular momentum, the electron populates only s-states in the hydrogen atom (83.2% H(1s), 10.4% H(2s)). If one applies the standard purely left-handed V−A interaction (the antineutrino helicity H_ν being 1) [3,4], three of the four possible hyperfine spin states are allowed (see Fig. 1).
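The quoted velocity is a direct non-relativistic consequence of the 325.7 eV kinetic energy. A minimal cross-check sketch (Python; rounded CODATA constants, not taken from the paper):

```python
# Non-relativistic velocity of the 325.7 eV bound-beta hydrogen atom.
import math

E_kin_eV = 325.7            # kinetic energy of the recoiling H atom [eV]
m_H_c2_eV = 938.783e6       # H-atom rest energy m_H c^2 [eV] (~ m_p + m_e)
c = 2.99792458e8            # speed of light [m/s]

beta = math.sqrt(2 * E_kin_eV / m_H_c2_eV)   # beta = v/c
print(f"beta = {beta:.2e}")                  # ~8.3e-4, matching 0.83e-3
print(f"v    = {beta * c:.2e} m/s")          # ~2.5e5 m/s
```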
The populations w_1 to w_3 of the allowed spin states depend on λ = g_A/g_V (the ratio of the weak axial-vector and vector coupling constants of the nucleon, λ = −1.27641 ± 0.00056 [5]), and also on the scalar and tensor coupling constants g_S and g_T. Thus, by measuring the populations w_1 to w_3 of these spin states, a combination of g_S and g_T can be obtained. The population w_4 of the spin state shown as configuration 4 in Fig. 1 can only occur if right-handed neutrinos are emitted [6]. Applying the left-right symmetric model with its V+A admixture leads to expressions in x = η − ς and y = η + ς [6]. The parameter η depends on the mass ratio of two intermediate charged vector bosons, and ς is the mixing angle of the boson's mass eigenstates. From the µ⁺ decay, upper limits for these parameters can be deduced [7,8] (η < 0.036 and ς < 0.03; C.L. 90%). The antineutrino helicity can be expressed in terms of these parameters [6].

If one sets ς = 0 and η = 0.036, then the population of the forbidden spin state becomes w_4 ≈ 10⁻⁵, and the helicity of the antineutrino H_ν = 0.997. The goal of the planned BoB experiments is to reduce the upper limit of |g_S| < 6 × 10⁻² (C.L. 68%) [9] by a factor of 10. The helicity of the antineutrino should be determined with an accuracy of 10⁻³. With this accuracy one can set the statistical uncertainty of η to 10⁻², and therefore, via Eq. (5), also the necessary statistical uncertainty of w_4.
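The quoted helicity is consistent with a simple quadratic suppression in the left-right mixing parameters. The explicit formula is not legible in this copy, so the relation used below, H_ν = 1 − (x² + y²) with x = η − ς and y = η + ς, is an assumption chosen because it reproduces the quoted value:

```python
# Consistency check of the quoted antineutrino helicity.
# ASSUMPTION: H_nu = 1 - (x**2 + y**2); this is not the paper's explicit
# expression (lost in this copy), merely a form that matches the numbers.
eta, zeta = 0.036, 0.0
x, y = eta - zeta, eta + zeta
H_nu = 1 - (x**2 + y**2)        # = 1 - 2*(eta**2 + zeta**2) for zeta = 0
print(f"H_nu = {H_nu:.4f}")     # 0.9974, matching the quoted 0.997
```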
It is planned to perform the first experiments at the PIK reactor in Gatchina (Russia). In a first setup (see Fig. 2) we will install, in one of the throughgoing beam tubes of this reactor, an argon gas cell located in the high neutron flux area, close to the fuel element. The metastable hydrogen atoms H(2s) will capture an electron from the argon atoms and be transformed into H⁻ ions. These ions have almost the same energy as the H(2s) atoms (≈ 326 eV). The cross section for electron capture by H(2s) is roughly two orders of magnitude larger than for the reaction with hydrogen atoms in the ground state H(1s) [10]. During the capture process, the 2s state decays, and the energy difference between the 2s and 1s states is transferred to the H⁻ ion (≈ 10 eV [11]) as a gain in kinetic energy.

Inside the throughgoing beam tube, a combination of Einzel lenses [12] will focus the H⁻ ions onto the entrance of a pulsed electrical deflector, outside of the biological shield of the reactor. This deflector (see Fig. 14) will bend the H⁻ ions by 90° out of the direct view into the throughgoing beam tube. This strongly reduces the direct background coming from the beam tube.

The H⁻ ions subsequently pass through two Bradbury Nielsen gates (BN gates), which work as an electrical time-of-flight (TOF) system [13] for charged particles, enabling us to measure the energy of the H⁻ ions with a resolution of about 1.6%. The H⁻ ions are then counted by a multi-channel-plate (MCP) detector. A second method of determining the velocity/energy of the H⁻ ions would be the counter-field method, which is described in detail in Ref. [11]. Detailed simulations of the background emerging from the throughgoing beam tube still have to be performed (MCNP [14]) for the PIK reactor. We have already done such simulations for the situation at beam tube SR6 of the FRM II reactor in Munich. For that beam tube we developed a concept (shielding, collimation) which reduces the neutron and γ background to a level at which the BoB experiment becomes feasible. These results will be published elsewhere. Furthermore, we have investigated the effects of residual gases in the beam tube. It turned out that a cooled insert tube in the throughgoing beam tube is necessary in order to freeze out the gas molecules, which would otherwise disturb the traveling H(2s) atoms.

A second detection scenario for bound neutron β-decays is the use of a velocity filter for metastable hydrogen atoms, again using a system of two switchable gates in the beam tube. These gates act on the 2s state: they quench metastable hydrogen by means of an electric field in the closed mode and leave it untouched in the open mode. Placing two fast-switching gates at a fixed distance with an appropriate phase shift of the open/close states acts as a narrow-band velocity filter (chopper). The detection of surviving metastable hydrogen atoms can proceed with the argon cell described above, or by means of a quenching plate mounted close to an MCP to detect electrons released from the plate in this process.
Bradbury Nielsen gate chopper
A BN gate consists of a layer of wires mounted on insulating frames, as shown in Fig. 3. Opposite electric potentials applied to adjacent wires generate local electric fields between them which deflect charged particles out of the beam, as shown at the right in Fig. 3. Switching off the voltage removes the deflecting effect. One can therefore set up a time-of-flight (TOF) system using two BN gates at a certain distance, combined with fast switching electronics [13]. While in the first step of the neutron BoB experiment this concept will be applied to H⁻ ions, it also works for metastable H(2s) atoms, which are quenched to the 1s state in the vicinity of a charged wire and are thus no longer available for study in the beam. A typical pulse signal applied to a BN gate using the electronic system developed in-house is shown in Fig. 6. Switching BNG1 and BNG2 (see Fig. 9) with short pulses (ns), with a delay for the second gate, selects a defined velocity of the charged particle or H(2s) atom with a good energy resolution of a few percent. Figure 4 depicts one of our BN gates. Its aperture is 1.76 cm × 1.26 cm. The geometrical dimensions of the BN grid wires are shown in Fig. 5.
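The TOF principle is straightforward to put in numbers. A sketch (Python) using the Fig. 7 parameters; the gate separation is inferred from them and is not quoted in the paper:

```python
# Velocity selection by two BN gates: v = sqrt(2*E/m), L = v * delay.
import math

q = 1.602176634e-19       # elementary charge [C]
m_p = 1.67262192e-27      # proton mass [kg]

E_kin_eV = 500.0                            # proton energy [eV]
v = math.sqrt(2 * E_kin_eV * q / m_p)       # proton speed [m/s]
delay = 3.3e-6                              # gate-to-gate pulse delay [s] (Fig. 7)

print(f"v = {v:.2e} m/s")                               # ~3.1e5 m/s
print(f"implied gate separation = {v * delay:.2f} m")   # ~1.0 m (inferred)
```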
For example, if a voltage of U+/− = ±200 V is applied between the wires, an electric field of E = 2.5 × 10⁵ V/m is produced at (x, y) = (0, 0). A proton with 500 eV energy will be deflected by this field by an angle of 14.7° out of the collimated beam direction. As proof of principle, we tested our BN gate system with 500 eV protons coming from a strong plasma source (see the next section for details). The TOF spectrum is shown in Fig. 7. The gates were switched with 500 ns pulses (duty time), which determine the FWHM of the peak.
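The quoted deflection angle follows from the transverse impulse picked up in the gate field. A sketch assuming tan θ ≈ qEl/(2E_kin); the effective field length l below is an assumption (of the order of the wire pitch), chosen so the quoted angle is reproduced:

```python
# Deflection angle of a charged particle crossing the BN-gate field region.
import math

E_field = 2.5e5       # field between the wires at +/-200 V [V/m]
E_kin_eV = 500.0      # proton kinetic energy [eV]
l_eff = 1.05e-3       # ASSUMED effective field length [m]

theta = math.degrees(math.atan(E_field * l_eff / (2 * E_kin_eV)))
print(f"deflection ~ {theta:.1f} deg")   # ~14.7 deg, as quoted
```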
A proton/ion source for R&D experiments towards BoB
For the purpose of developing the necessary tools for a future BoB experiment, we have built an experimental test facility with a commercial ion source of the kind normally used in semiconductor production [15] (see Fig. 8). We have set up a beam line for protons and ions, which are collimated by iris diaphragms. Figure 9 shows a typical setup of our experiments.

If the plasma source is driven with hydrogen gas, a stable proton beam can be produced. The energy of this beam can be set by an extraction voltage. If the pressure in the plasma source is not too high, an almost mono-energetic beam can be achieved. At higher pressures the FWHM line width of the proton source line profile increases, e.g., to ∼30 eV at 5 × 10⁻³ mbar (see Fig. 11). The broadening can be explained by elastic scattering of the produced protons off the abundant H₂ molecules in the source. Furthermore, the yield of the proton source is then three orders of magnitude higher than at the lower pressure.
Electrostatic focusing and pulsed electric deflection
Various focusing elements are needed for the beam transport of the H⁻ ions leaving the Ar cell in the neutron BoB experiment. Einzel lenses are the best choice inside the throughgoing beam tube, whereas outside the biological shield an electrostatic quadrupole doublet can be used.
A photograph of a suitable quadrupole doublet (QPD) is shown in Fig. 12. The device consists of two individual electric quadrupoles (QPs), which are operated in a crossed mode, resulting in a common focal point for all particle trajectories [16]. The focusing of the QPD is independent of the mass of the focused charged particle; only the charge of the particle and its kinetic energy matter. The QPD is compact and light compared to magnetic systems, and it is also cheap in terms of production costs. The common focal point is achieved by operating the second QP at a higher voltage than the first one [16]. This can be explained by a simple physical picture. For one focal plane (A1), the first and second QPs act as a combination of a converging (first) and a diverging (second) lens. The focal length of this configuration is larger than the focal length of the first QP alone for this plane. For the second (crossed) focal plane (A2) it is just the opposite: the first QP acts as a diverging lens, while the second QP is a converging lens. The focal length for this plane is also larger than the pure focal length of the second QP. If both QP voltages are equal, no common focal point is possible [17]. If the voltage of the second QP is increased relative to the first QP, the total focal length of focal plane A1 will increase (the diverging strength of QP2 increases), while the focal length of A2 will decrease, because the focusing effect of QP2 becomes stronger at the higher voltage. Both focal planes will coincide at a certain point z_f for a certain voltage of QP2, if the voltage of QP1 is fixed. The choice of voltages depends on the energy of the charged particle and on the selected focal point z_f. We used this QPD in our BN gate test measurements (see Fig. 7): there we placed the doublet after the BN gates and focused the protons at the point where the MCP detector was installed. A further beam optic device, a deflector which bends the trajectory of the H⁻ ions by 90°, was designed and built [18] (see Fig. 14). Due to the spherically shaped electrodes, the radial and axial focusing lengths of the deflector are equal (see Fig. 13). The electric field of the deflector follows from the condition that the electric force provides the centripetal force on the reference orbit, qE(r) = 2E_kin/r. The bending field E for 500 eV protons at r = 5 cm is E = 2 × 10⁴ V/m. The corresponding voltage is U = 416.7 V, i.e. +208.4 V at the outer and −208.4 V at the inner electrode.
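The deflector numbers can be cross-checked from the centripetal condition. A sketch; the effective electrode gap is inferred from the quoted voltage and is not stated in the paper:

```python
# Spherical deflector: qE(r) = 2*E_kin/r on the reference orbit.
r = 0.05              # reference radius [m]
E_kin_eV = 500.0      # proton kinetic energy [eV]

E_field = 2 * E_kin_eV / r           # [V/m]; the charge cancels with eV units
print(f"E = {E_field:.1e} V/m")      # 2.0e4 V/m, as quoted

U = 416.7                            # quoted electrode voltage [V]
print(f"implied electrode gap ~ {U / E_field * 1e3:.1f} mm")  # ~20.8 mm (inferred)
```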
The deflector was successfully tested at our beam facility with O⁺ ions [19]. The results of these measurements (see Fig. 15) also hold for protons, because the electric deflection depends only on the charge and the kinetic energy of the ion. For the dispersion measurement, the deflector voltage U± was set to ±208 V, which deflects ions with 500 eV kinetic energy. The data were fitted with a Gaussian function. The dispersion peaks at E₀ = 575 ± 8 eV, and the FWHM is 500 ± 20 eV (χ²/DOF = 1.4; R² = 0.98). The wide dispersion is remedied by using the BN gate TOF system (or the counter-field system) for the selection of the BoB H⁻ ions. COMSOL Multiphysics™ [20] simulations (see Fig. 16) approximately confirmed the theoretical model [18] leading to Eq. (7).
BN gate TOF chopper applications
A proton detection system which uses secondary electrons produced in thin foils by protons with typical energies in the region of 5-20 keV was investigated with our BN gate TOF system [22]. This proton detector shall be used in experiments studying the decay of the free neutron, where protons occur in the energy range of 0 to ∼750 eV [23]. These protons are accelerated to higher energies by applying a high voltage and converted to secondary electrons, which can be detected by standard electron detectors.

The measurements were performed at our proton source in the BoB lab. The degrader system (see Fig. 17) was installed 0.58 m (entrance frame) after the second BN gate (see Fig. 9). The distance from the exit frame to the MCP detector was 0.28 m. As an example, Fig. 18 shows a measurement with a carbon foil (17 µg/cm²) coated with 10 Å LiF.

The peak position of the secondary electrons (TOF channel 2370 in Fig. 18) corresponds to a kinetic energy of 18.5 keV. A simple approximate calculation shows that ∼29% of the secondary electrons reach the MCP detector. The detection efficiency for 18 keV electrons is around 20%, while the proton detection efficiency for 500 eV protons is 5% [24]. Using these rough estimates, we get a gain of 3.1 secondary electrons per incident proton. Thorough studies of the electric field distribution and further measurements, including incident beam intensity monitoring, are planned to obtain more accurate secondary electron yield values.
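The rough secondary-electron budget can be reproduced by dividing out geometry and detector efficiency. A sketch; the detected-electrons-per-proton value below is an assumed illustrative input (it is not quoted in the paper), chosen to reproduce the stated gain:

```python
# Secondary-electron yield at the foil from the detected signal.
f_geom = 0.29      # fraction of secondary electrons reaching the MCP
eff_e  = 0.20      # MCP detection efficiency for 18 keV electrons
eff_p  = 0.05      # MCP detection efficiency for 500 eV protons [24]

detected_e_per_incident_p = 0.18    # ASSUMED measured value (illustrative)
yield_at_foil = detected_e_per_incident_p / (f_geom * eff_e)
print(f"secondary electrons per incident proton ~ {yield_at_foil:.1f}")  # ~3.1
```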
Conclusion
A new experiment to detect the bound beta decay of the neutron, n → H + ν̄_e, requires the development of novel methods and technologies. The key requirement is a high rejection of broad-band backgrounds from thermal and fast hydrogen atoms of yet unknown magnitude. These result both from the rest gas and from hydrogen formed by the abundant neutron-decay protons picking up an electron. We have demonstrated these technologies using a laboratory setup producing fast protons and hydrogen atoms at energies up to 500 eV. A system of Bradbury Nielsen gates was built and used as a TOF selector system (chopper) for a narrow-band beam of fast protons. Such a fast-switching electric system can also be used for other measurements in neutron decay. We have operated an argon-filled gas cell to verify the charge exchange process. This cell still needs to be optimized concerning the number density of argon atoms. We have also simulated, built and tested the dispersive ion optics and the ion detector using protons and positive oxygen ions. This system is to be placed downstream of the argon cell, acting as a second velocity-discriminating system. With these technologies at hand, we can now optimize the full setup to be operated with metastable hydrogen atoms, in order to probe the shielding of electric fields and to verify the efficiencies of the individual components. We are thus conceptually prepared to build a first experiment at a high-flux neutron source with a throughgoing beam pipe.
Figure 1. Neutron bound beta decay hydrogen atom hyperfine states (black: momentum, red: spin). Configurations 1, 2 and 3 are allowed within the V−A theory. Configuration 4 requires a right-handed neutrino. F is the total hyperfine spin and m_F its projection.

Figure 2. BoB setup at the Gatchina PIK reactor, consisting of an Ar gas cell, an electrostatic focusing element, a pulsed electric deflector, a BN gate chopper and an MCP detector.

Figure 3. Left: design drawing of a BN gate. Right: BN gate section view and functioning. The device consists of two insulated grids at equal + and − voltages, producing an electric field between the grid wires by means of which charged particles are deflected. At zero voltage, the particles pass the gates undeflected. Neutral particles, e.g. metastable H(2s) hydrogen atoms, are de-excited into H(1s) by the electric field; at zero field they remain H(2s).

Figure 4. Bradbury Nielsen gate consisting of two insulated wire grids, each chargeable to ±500 V. For matching the grid input resistance to the 50 Ω cable between the electronics and the BN gate, the grids are grounded through a serial RC element with R = 50 Ω and C = 100 nF (not shown).

Figure 5. The positively and negatively charged wires of a BN gate belong to two concatenated grids. The schematic shows the geometry of adjacent wires, with dimensions in mil.

Figure 7. TOF spectrum of 500 eV protons. The delay between the pulses applied to the BN gates is 3.3 µs, while the delay t_f between the pulse to BNG1 and the detected signal is 7.2 µs.

Figure 8. Test facility in the BoB laboratory at TUM.

Figure 9. Sketch of an experimental setup at the ion source in the BoB lab.

Figure 10. Plasma proton source line profile at 5 × 10⁻⁴ mbar source H₂ pressure, measured by varying the delay time between the gates and thus scanning the spike T_p over the line width. The error bars denote the statistical error.
Figures 10 and 11 show the plasma source proton line profile for two different pressures. The proton source peak at low pressure (5 × 10⁻⁴ mbar) has a FWHM of roughly 10 eV. The manufacturer of the source (TECTRA) quotes an intrinsic FWHM line width of 10 eV at a pressure of 5 × 10⁻⁴ mbar. At higher pressure, the line broadens to ∼30 eV (see Fig. 11).
Figure 13. Sketch of a 90° electric deflector consisting of two electrodes with curvature radii R₁ and R₂, r being the reference particle radial coordinate. The deflector focuses in both the horizontal and vertical directions.

Figure 14. Pulsed electric deflector consisting of two spherically shaped insulated electrodes, one with a hole for the throughgoing beam.

Figure 15. Dispersion of the electric deflector for O⁺ ions.

Figure 17. Experimental setup for measuring the secondary electron production in thin foils by protons. Different foils were tested (carbon foils coated with MgO and LiF). The foils are placed in the center of the degrader.

Figure 18. Blue: secondary electron production by 500 eV protons, accelerated to 18 keV at the foil position in the degrader. Red: proton (500 eV) distribution at 0 V degrader voltage. The proton source H₂ pressure was 3 × 10⁻³ mbar.
"Physics"
] |
Ion chemistry of phthalates in selected ion flow tube mass spectrometry: isomeric effects and secondary reactions with water vapour
Phthalates are widely used industrially, and their toxicity is of serious environmental and public health concern. Chemical ionization (CI) analytical techniques offer the potential to detect and monitor traces of phthalate vapours in air or sample headspace in real time. Promising techniques include selected ion flow tube mass spectrometry (SIFT-MS), proton transfer reaction mass spectrometry (PTR-MS) and ion mobility spectrometry (IMS). To facilitate such analyses, reactions of H3O+, O2+ and NO+ reagent ions with phthalate molecules need to be understood. Thus, the ion chemistry of dimethyl phthalate isomers (dimethyl phthalate, DMP – ortho; dimethyl isophthalate, DMIP – meta; dimethyl terephthalate, DMTP – para), diethyl phthalate (DEP), dipropyl phthalate (DPP) and dibutyl phthalate (DBP) was studied by SIFT-MS. Reactions of H3O+, O2+ and NO+ with these phthalate molecules M were found to produce the characteristic primary ion products MH+, M+ and M·NO+, respectively. In addition, a dissociation process forming the (M–OR)+ fragment was observed. For phthalates with longer alkyl chains, mainly DPP and DBP, a secondary dissociation channel triggered by the McLafferty rearrangement was also observed. However, this is dominant only for the more energetic O2+ reactions with phthalates, additionally resulting in a recognisable formation of the protonated phthalate anhydride. For the NO+ reagent ions, the McLafferty rearrangement makes only a minor contribution, and for H3O+ it was not observed. Experiments on the effect of water vapour on this ion chemistry have shown that protonated DMIP and DMTP efficiently associate with H2O, forming the DMIP·H+·H2O, DMIP·H+(H2O)2 and DMTP·H+·H2O cluster ions, whilst the protonated ortho DMP isomer, as well as the other ortho phthalates DEP, DPP and DBP, does not associate with H2O. The results indicate that the degree of hydration can be used to identify specific phthalate isomers in CI.
Introduction
Phthalates (esters of phthalic acid) are used in the production of plastics as plasticizers, and their environmental and health impacts are now well understood. Phthalates are characterized as endocrine disruptors that represent a major hazard for pregnant women and children under 3 years [1,2]. Several of the most dangerous phthalates (diethylhexyl phthalate, dibutyl phthalate, benzylbutyl phthalate, diisononyl phthalate, diisodecyl phthalate, and di-n-octyl phthalate) are regulated by EU regulations [3] or tracked by the European Chemicals Agency [4]. However, these regulations only cover toys and are not concerned with other daily products. Phthalates can thus be present in plastic containers [5], cosmetics [6] and toothbrushes [7]. Phthalates have additionally been detected in indoor air and dust [8] and in seawater [9]. A further risk of exposure to phthalates may arise from the import of products from countries without regulations in place.
Several analytical techniques are used for the detection of phthalates, mainly based on gas chromatography - mass spectrometry using electron ionization (EI) at 70 eV [10,11]. The notable feature observed in phthalate mass spectra is a common fragment ion of the protonated phthalate anhydride with the mass-to-charge ratio m/z 149. This mass peak is characteristic of most phthalates with longer alkyl substituents. Whilst the appearance of the m/z 149 peak is a good indicator of the presence of a phthalate, the selectivity between the different phthalate compounds by EI is limited. Chemical ionization (CI) combined with liquid chromatography has been shown to provide better selectivity between different phthalates [12]. The aim of the present study is to investigate the possibilities of analyzing phthalate vapours via proton transfer reaction mass spectrometry (PTR-MS) and selected ion flow tube mass spectrometry (SIFT-MS). These techniques are mainly used in the real-time detection of VOCs present at trace levels [13] and have been successfully applied in several analytical applications including breath research, food flavour analysis, environmental monitoring and homeland security [14-16]. It is, therefore, important to understand the ion chemistry of phthalates relevant to SIFT-MS and PTR-MS, not only to facilitate their analyses, but also to gain insight into the reaction mechanism by observing trends in the changes of reactivity with phthalate molecule size and structure. Recently, atmospheric pressure chemical ionization (APCI) and ion mobility spectrometry (IMS) were combined to study dimethyl phthalate isomers, showing interesting selective behaviour in the formation of protonated phthalate water clusters, where the ortho orientation of the phthalate esters does not form water clusters whilst the other two conformers do [17]. In the present study, we have investigated the H3O+, NO+ and O2+ ion reactions with dimethyl phthalate (DMP), dimethyl isophthalate (DMIP), dimethyl terephthalate (DMTP), diethyl phthalate (DEP), dipropyl phthalate (DPP) and dibutyl phthalate (DBP) via SIFT-MS. Secondary reactions of the protonated products with neutral water molecules were also studied in order to gain an understanding of the formation of their water clusters.
SIFT experiments
The SIFT experiments [22] were carried out using a Profile 3 instrument (Instrument Science, Crewe, UK). The H3O+, NO+ and O2+ reagent ions were generated in a microwave discharge.
One reagent ion type at a time was mass-selected using a quadrupole mass filter and injected into the 5 cm long flow tube, where a constant laminar flow of helium carrier gas was established at a total pressure of 1.5 mbar and a temperature of 300 K. Synthetic air containing controlled amounts of neutral reagent vapours (phthalates and water vapour) was introduced into the flow tube through an inlet port located 1 cm downstream from the ion injector, at a flow rate of 20 mL min⁻¹. Depending on the type of the reagent ions, the ionization of phthalate molecules (M) may occur at thermal energy via several channels: proton transfer, typical for reactions with H3O+ reagent ions, and association forming an ion adduct, typical for primary reactions with NO+ reagent ions or secondary reactions of ions with water: NO+ + M + He → M·NO+ + He.
The product ions were sampled at the end of the flow tube, mass analysed using the downstream quadrupole mass filter and detected using an electron multiplier. Data were collected from full scan mass spectra (MS) and the multi-ion monitoring (MIM) mode was used to monitor the product ion distribution during the controlled humidity change.
To characterize the product ion composition via mass spectrometry, a few mg of phthalate sample was placed at the bottom of a 15 mL glass vial closed by aluminium foil and heated up to T = 370 K. The volume of the vial containing phthalate vapours was then sampled directly via SIFT-MS. The humidification of the sample was difficult in this setup. Thus, we carried out the measurements only with synthetic and laboratory air.
To confirm the identity of the product ions, phthalate vapours were also deposited on the inner surface of a 2 m long, 0.25 mm ID polyether ether ketone (PEEK) capillary heated to T = 360 K. This capillary was then flushed with pure synthetic air at a flow rate of 20 mL min⁻¹. This approach allowed the suppression of highly volatile impurities, and the spectra so obtained were much cleaner, containing only the clean phthalate product ion peaks. Peaks that disappeared (i.e. m/z 57, 75 and 93 for DPP using H3O+, see the ESI†) were considered to originate from volatile impurities.
To study the influence of humidity on phthalate ion chemistry, we used the diffusion tube method [23]. A few mg of phthalate sample was placed in a 2 mL vial closed by a polytetrafluoroethylene (PTFE) septum cap penetrated by a diffusion tube (1/16″ OD × 0.25 mm ID × 5 cm length PEEK capillary). The 2 mL vial was then placed in a 15 mL glass vial closed by a PTFE septum. The headspace of the 15 mL vial was sampled directly via SIFT-MS. Individual samples were heated to T = 370 K to enhance their evaporation. Synthetic air was used to refill the air in the vial sampled via SIFT-MS. The humidity of the synthetic air was controlled using an in-line water reservoir, again using the diffusion tube method. The water temperature within the reservoir was varied between T = 77 K and T = 350 K. The resulting water vapour concentration ranged from 10¹² to 10¹⁴ molecules per cm³. These water vapour concentrations were estimated from the hydronium water cluster distribution in SIFT-MS, as described elsewhere [24,25], and the relative value of the water vapour concentration is expressed as a dimensionless logarithmic factor.
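The explicit form of the logarithmic humidity factor is not reproduced in this copy, so the sketch below shows only a generic proxy built on the same idea: the hydronium hydrate distribution shifts to higher n as the water vapour concentration rises. The weighting scheme is an assumption, not the paper's definition:

```python
# Generic humidity proxy from the H3O+(H2O)n distribution (n = 0..3,
# m/z 19, 37, 55, 73). ASSUMPTION: intensity-weighted mean hydration number.
def mean_hydration(intensities):
    total = sum(intensities)
    return sum(n * i for n, i in enumerate(intensities)) / total

dry = [1000, 50, 5, 0]          # illustrative count rates
humid = [300, 500, 300, 80]
print(mean_hydration(dry), mean_hydration(humid))   # rises with humidity
```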
Molecular properties
The properties of the three DMP isomer molecules and of the DEP molecule are given in Table 1. The quantification of DMP isomers using chemical ionization in the H3O+ reactions is feasible, as their proton affinity (PA) exceeds that of water molecules by >1.5 eV (PA(H2O) = 7.2 eV [26]). Note that proton transfer is possible also from the hydrated hydronium ion, as PA((H2O)2) = 8.56 eV [17]. In addition, charge transfer from O2+ is also possible, as the ionisation energies of the phthalates are lower than that of O2. As the PA of all three DMP isomers exceeds the PA of H2O, the rate constant for proton transfer (k) is equal to the collisional rate constant (k_c) [27]. The k_c can be calculated for the H3O+ reactions using the parametrised trajectory formulation described by Su and Chesnavich [18] (see Table 1), using the polarizabilities and dipole moments of the molecules. These parameters were obtained by quantum chemical calculations (ωB97XD/6-311+G(2d,p)) related to previous IMS studies of these DMP isomers [17].
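The Su-Chesnavich parametrisation itself is compact enough to sketch. The DMP polarizability and dipole moment below are placeholder values (the paper's Table 1 is not reproduced here); substituting the quantum-chemical values would reproduce the tabulated k_c:

```python
# Su-Chesnavich capture rate constant, CGS units.
import math

e_esu = 4.803e-10      # elementary charge [esu]
kB = 1.3807e-16        # Boltzmann constant [erg/K]
amu = 1.6605e-24       # atomic mass unit [g]

def k_capture(alpha_cm3, mu_D_debye, m_ion_amu, m_mol_amu, T=300.0):
    mu = m_ion_amu * m_mol_amu / (m_ion_amu + m_mol_amu) * amu  # reduced mass [g]
    k_L = 2 * math.pi * e_esu * math.sqrt(alpha_cm3 / mu)       # Langevin [cm^3/s]
    x = mu_D_debye * 1e-18 / math.sqrt(2 * alpha_cm3 * kB * T)  # dipole parameter
    f = 0.4767 * x + 0.6200 if x >= 2 else (x + 0.5090)**2 / 10.526 + 0.9754
    return k_L * f

# H3O+ (m = 19) + DMP (m = 194); alpha ~ 20 A^3 and mu_D ~ 2.8 D are ASSUMED.
print(f"k_c ~ {k_capture(20e-24, 2.8, 19, 194):.2e} cm^3/s")    # ~4e-9
```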
Ion molecule reaction products
Using the SIFT method, we have studied the ion chemistry of the individual DMP isomers, DEP, DPP and DBP using H3O+, NO+ and O2+ reagent ions. The observed ion products are summarized in Table 2. All H3O+ reactions led to the formation of protonated phthalates (1) and to a loss of one alkyloxy substituent (OR, where R stands for an alkyl radical). This dissociation channel, forming (M–OR)+ ions, was observed for all phthalate reactions. Protonated phthalate anhydride (m/z 149, PhA·H+) was a minor product only for DPP and DBP. For the NO+ reactions, association (reaction (3)) was observed for all phthalates except DBP, where the adduct mass exceeded the upper limit of the downstream quadrupole mass filter (m/z 300). DBP·NO+ is likely to be a dominant product (as for the smaller phthalates), and thus the product ratio cannot be determined; only the upper limits are given in Table 2. The protonated molecule MH+ was observed for all phthalates and, as will be discussed later, we consider this to be a product of the secondary reaction of M·NO+ with water vapour. Other fragments, including protonated phthalate anhydride and (M–(R–2H))+, were detected at low intensity.
The molecular ion is dominant only for the ortho and para DMP isomers. The (M–OR)+ ion fragment was formed for all DMP isomers. The production of (M–(R–2H))+ becomes dominant for phthalates with longer alkyl chains, and it is accompanied by the formation of protonated phthalic acid (m/z 167, PhA·H+·H2O) and protonated phthalate anhydride (m/z 149, PhA·H+). The observed ion chemistry may be compared with previous studies of chemical ionization reactions of phthalates using CI reagents such as methane and isobutane [28], methane and ammonia [12], and methane [29]. Chemical ionisation reactions involving isobutane and ammonia were found to produce mostly the protonated phthalate ions. For methane, the CI reaction leads to protonated phthalate anhydrides (m/z 149), since the PA of methane (5.6 eV) is much lower than the PAs of the phthalates. Note that in electron ionization [30], this ion (m/z 149) is also often dominant. The formation of protonated phthalate anhydride was well explained by the theoretical calculations of Jeilani et al., who studied protonated [29] and ionised [31] phthalates. Protonated phthalate anhydride is generated from protonated phthalates via two pathways, initiated by the dissociation of alkyl or alcohol. First, the loss of an alcohol molecule (5), common to all phthalates, leads to the formation of protonated phthalate anhydride directly. The second reaction (6) proceeds via a McLafferty rearrangement [32], which requires a C2 or longer alkyl ligand to be present in the phthalate ester group. From there, formation of the (M–(R–2H))+ fragment leads to protonated phthalic acid (m/z 167) and then to m/z 149 by H2O loss. The change of the free energy for the case of DBP favours reaction (6) by 12.2 kJ mol⁻¹ [31], as calculated at the B3LYP/6-311G(d,p) level of theory. Both pathways were identified in the CI spectra of most phthalates before; (M–OR)+ ions are often more intense than (M–(R–2H))+ ions, and the intensity of the (M–OR)+ fragment decreases with increasing alkyl chain length [12]. In contrast to these previous studies, in our present results only traces of the specific products related to the McLafferty rearrangement were observed for DMIP and DBP. For phthalates with longer alkyl chains, even though the calculations indicated the McLafferty rearrangement to be energetically more favourable, the loss of alcohol is a much faster process when H3O+ or methane ions are used.
Similar pathways were described for phthalate ions produced by EI [30,31]:

M+ → (M–OR)+ + OR (7)
M+ → (M–(R–2H))+ + (R–2H) (8)

and these agree with the observed charge transfer reactions (2) of the O2+ reagent ions. The McLafferty rearrangement occurs for (8), and 130 kJ mol⁻¹ is required to produce PhA·H+. Our present studies follow this precedence: fragments related to (8) can be observed for DEP and become dominant for DPP and DBP.
In the NO+ CI reactions, the observed (M–OR)+ fragments cannot result from dissociative charge transfer, as IE(NO) = 9.26 eV [26] is below the IE of the DMP isomers; they must instead be formed via a neutral RONO product from the reaction intermediate M·NO+. A similar process was observed previously for the M·C2H5+ adducts [29]. In our present studies, the only observed fragment adduct ion was PhA·NO+, resulting from the NO+ reaction with DBP. An interesting observation is the presence of a small amount of (M–(R–2H))+ fragments, as these are typical of the McLafferty rearrangement. For M·C2H5+ adducts, this rearrangement occurred in a reaction sequence after the initial dissociation of an alkyl substituent, while in the present study it is a separate NO+ reaction channel. The presence of protonated phthalate was also observed for M·C2H5+ adducts, where it was explained by the dissociation of neutral C2H4 from the adduct. This process is not possible for NO+ ions; however, the protonated phthalate can be formed via secondary reactions with water vapour, as will be explained later.
Secondary reactions with water vapour
In the second part of the work, we have studied the influence of humidity on the ion chemistry of the individual phthalates by changing the water vapour concentration within the flow tube. The presence of water vapour affects the ion chemistry in several ways. First, the reagent ions form water clusters, and this changes their ability to ionise other organic molecules. In SIFT-MS, the H3O+ reagent ions are the most affected, as reactions can form H3O+(H2O)n clusters with n up to 4. This effect is illustrated in Fig. 1, showing the relative distribution of hydronium water clusters as a function of the water vapour concentration. Second, water clusters formed from protonated organic molecules may lead to complicated ion chemistry, compromising SIFT-MS selectivity and detection limits. Finally, at higher concentrations, water vapour may increase the rate of adduct formation. The influence of water vapour on the ion chemistry has been investigated in the present experiments for all reagent ions. For the NO+ reagent ions, we observed an increase in the relative intensity of the MH+ ions (by 5-10%) with water vapour concentration. This MH+ intensity is too great to be produced by proton transfer from the small (≈1%) amount of H3O+ ions present in the flow tube together with the NO+ ions. Reaction (10) can only take place if the proton affinity of the phthalate (M) sufficiently exceeds the NO+ affinity, as the notional reaction is 711 kJ mol⁻¹ endothermic [26]. The proton affinity is known only for the DMP isomers (see Table 1), while the NO+ affinity can only be estimated. The typical NO+ affinities [37] of organic molecules range from 100 to 200 kJ mol⁻¹. A linear correlation between PA and NO+ affinity can be used, but it depends on whether the NO+ association gives a σ or π complex [37]. For DMP, such a correlation estimates a NO+ affinity of 169 kJ mol⁻¹ for the σ complex (222.6 kJ mol⁻¹ for π). Both these estimates are well below PA(DMP) = 935.9 kJ mol⁻¹ and render reaction (10) exothermic by 56.3 (or 2.7) kJ mol⁻¹. For the meta and para isomers, reaction (10) would be endothermic by 4.7 (or 39.4) kJ mol⁻¹. Despite this, the high number density of water molecules in the carrier gas (10¹²-10¹⁴ cm⁻³) can shift the reaction equilibrium in favour of MH+ production. Finally, the secondary reactions of the O2+ products can be discussed. A notable effect was observed only for DEP, DPP and DBP: increasing the water vapour concentration led to a decrease in the fragmentation to protonated phthalate anhydride (m/z 149) by 5-10%, compensated by an increase in the protonated phthalic acid (m/z 167) intensity. For the H3O+ reagent ions, the change in the relative distribution of the phthalate product ions at different water vapour concentrations is shown in Fig. 2. The formation of protonated phthalate water clusters depends strongly on the location of the esters in the phthalate structure. In the ortho position, protonated phthalate hydrates are not produced at all. However, for DMIP (phthalate ester in the meta position), the formation of protonated phthalate water clusters can proceed up to two water molecules per ion. Finally, for DMTP (phthalate esters in the para position), only one water molecule is observed to attach to the protonated DMTP. The observed trend agrees with the APCI-IMS study of DMP isomers, where it was theoretically explained [17].
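The exothermicity figures quoted above amount to simple bookkeeping: exothermicity ≈ PA(M) − A_NO+(M) − 711 kJ mol⁻¹. A sketch reproducing them to within rounding:

```python
# Energetics of reaction (10) from the quantities quoted in the text.
PA_DMP = 935.9                  # proton affinity of ortho-DMP [kJ/mol]
A_sigma, A_pi = 169.0, 222.6    # estimated NO+ affinities (sigma / pi complex)
base = 711.0                    # endothermicity of the notional reaction [kJ/mol]

for label, A in (("sigma", A_sigma), ("pi", A_pi)):
    print(f"{label}: exothermic by {PA_DMP - A - base:.1f} kJ/mol")
# sigma: 55.9, pi: 2.3 -- close to the quoted 56.3 and 2.7 kJ/mol
```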
For DMP·H+, the proton is located between the two carboxyl oxygens of the phthalate esters, independent of the number of H2O molecules.

The formation of the protonated DMP hydrates is energetically possible (see Table 3). However, due to the minimal energy difference between the individual hydration states and the high number density of water molecules, an equilibrium will be established according to the sequential hydration reaction (14), where n > 0.

Using the numerical simulation software KIMI, developed by the first author of this paper [38], it is possible to model the contributions of the individual reactions taking place in SIFT. Taking into account the relative ion distribution of the protonated phthalate ions (Fig. 3), it is possible to interpret the observed profile considering only proton transfer from the hydronium reagent ions (H3O+) and secondary interaction of the protonated phthalates with water according to (14). Thus in CI, where multiple hydronium water clusters are present as reagent ions, higher protonated phthalate water clusters will be formed mainly by the hydration of smaller clusters rather than by direct switching reactions of H3O+(H2O)n. This agrees with previous SIFT results for a series of aldehydes [39]. Finally, the formation of the (M–OR)+ fragment is also affected by different concentrations of water vapour in SIFT. Fig. 2 shows a similar trend for DMP and DEP, where dissociation at higher water concentrations decreases as hydronium ions are replaced by the less reactive hydronium water clusters. For DPP, DBP and DMTP, the dissociation was not affected by the presence of water vapour. The opposite trend can be observed for DMIP, where the fragmentation rate increases with water vapour concentration. As the reactivity of the hydronium water clusters decreases with the level of hydration, the increased fragmentation can be explained by an additional secondary reaction. This reaction is probably initiated by the formation of a protonated phthalate-water complex, observed only for DMIP and DMTP, in a specific electronic state providing a repulsive potential leading to dissociation into (M–OR)+ fragments, where n > 1. As we cannot determine the details of this process, further study is required to fully understand this reaction.
Conclusions
New data were obtained on the kinetics of reactions of H3O+, NO+ and O2+ ions with the phthalate isomers DMP, DMIP and DMTP, and with DEP, DPP and DBP, including information on the primary and some secondary ion products, using SIFT at different concentrations of water vapour. Different ion-molecule reaction channels were observed for the individual reagent ions, including a characteristic dissociation channel forming (M–OR)+ ions. This dissociation channel has been observed in previous CI and EI studies for several phthalates, and its dominance for the DMP isomers and DEP can be explained by the short length of the alkyl substituents. For DPP and DBP, containing longer alkyl chains, products characteristic of the McLafferty rearrangement were dominant only for the O2+ reagent ions.
A strong effect of the DMP isomeric structure on the formation of the protonated phthalate water clusters was revealed. For the ortho DMP molecules, hydration of protonated molecules is not effective due to the small energy difference between the individual hydration levels. The high number density of water molecules moves the reaction equilibrium in favour of the dominant formation of protonated DMP.
For the DMIP and DMTP isomers, the energy levels for water cluster formation differ more, facilitating the formation of the DMIP·H3O+, DMIP·H3O+·H2O and DMTP·H3O+ water clusters. Using numerical simulation, we show that under the given SIFT conditions phthalate water clusters are preferably formed by sequential hydration of protonated phthalates (14) rather than by direct ligand switching from hydronium water clusters (13). An increasing fragmentation rate at increasing water vapour concentrations, observed for DMIP, indicates the presence of an additional dissociation channel producing (M–OR)+ fragments from the generated protonated phthalate clusters. For DMP and DEP (phthalate esters in the ortho position), protonated phthalate water clusters are not observed, and the dissociation rate decreases with increasing water vapour concentration.
This detailed SIFT study of ion chemistry thus demonstrated that it is possible to analyse phthalates using different SIFT-MS ionization mechanisms. In addition, the humidity of the sample does not affect the ion chemistry for the studied ortho phthalates. As the proton is located between the two carboxyl oxygens of phthalate esters, the same effect is expected for the other phthalates as well. The effect of humidity on DMIP and DMTP can be additionally used to differentiate individual phthalate isomers via SIFT-MS.
Conflicts of interest
There are no conflicts to declare.
"Chemistry"
] |
Assessment of Environmental Factors on Corrosion in Reinforced Concrete with Calcium Chloride
Corrosion of steel in reinforced concrete causes severe durability damage by weakening the load-bearing support of reinforced elements. We investigate the impacts of cement fraction and curing method on corrosion progression. The corrosion level is evaluated by measuring carbonation penetration and electrical conductivity in concrete specimens as indicators of corrosion. Two types of cement were used, normal and quick setting. For each cement type, two concrete mixes were used (designed with 3% and 8% C3A). Six levels of CaCl2, ranging from 0.5% to 3%, were used to simulate corrosion. Two curing methods were also compared: liquid water and steam application. Chloride ion penetration progressed faster in low-alumina cement mortar than in high-alumina mortar. The results show a significant increase in carbonation depth for the leaner (less cement) mixes compared with the richer (more cement) mixes. Steam curing also showed less penetration than the normal water curing method. The variation in carbonation penetration between 0.5% and 1% CaCl2 is large, close to a factor of two. The electrical potential of steel in cement mortar is negatively related to increasing calcium chloride content and to increasing cement content. Normal-setting cement also shows better corrosion protection, as demonstrated by the higher measured EC.
Introduction
Cement is the most commonly used construction material worldwide. Concrete is the basic building material used in construction, usually combined with steel in reinforced concrete elements. Concrete and steel in structural elements are exposed to corrosive chemicals that penetrate the concrete with water. This causes severe damage and weakness in the affected elements [1, 2, 3, and 4]. Reinforcement of concrete with steel strengthens the structural element in tension, which concrete alone cannot provide [5, and 6].

Reinforced concrete contains steel bars that give the concrete its tensile strength capacity [6,7]. Water is required for hardening the cement and reaching the proper mix consistency. A major factor that determines concrete properties is the water/cement ratio. A high water fraction causes an increased number of capillary voids, which reduces concrete rigidity.

Calcium chloride is a common additive to the concrete mix since it improves strength development at cold temperatures and shortens the setting time. It is typically used in cold climates because it increases the heat emission during the initial setting [8, and 9].

Freezing of the concrete mix increases the volume of the concrete and reduces the water content available for the chemical reactions, which reduces strength and delays the solidification process.

In structural repair tasks, operators prefer cement-rich mixtures with a low w/c ratio and calcium chloride. However, a side effect is the oxidation of steel. There is a risk of the alkaline environment being degraded by chemical interactions in the presence of calcium chloride in concrete elements [10,11].

Fresh concrete is highly alkaline, which creates a protective environment surrounding the steel bars [12, and 13]. Salts dissolved in water, particularly chloride ions, penetrate the concrete. This hazard affects highway structures, buildings, and foundations in coastal regions. The salt absorbed in the concrete erodes the steel's protective layer. If oxygen and moisture reach the vicinity of the steel bars, the rusting process is initiated and corrosion cracks form, with volume expansion in the affected areas. The concrete cover above the reinforcement is pushed away, resulting in serious concrete damage. The steel embedded in concrete is protected from corrosion by the alkalinity of the cement matrix, which forms a passive film on the steel surface.
Corrosion in concrete is defined as the physical change in a material due to chemical reaction with its environment [4, 10, and 14]. Corrosion of steel bars significantly weakens their support function and may cause elements to fail if not treated [14, and 15].

Corrosion is a serious problem worldwide, with costly repairs reaching billions of dollars annually, in addition to numerous intangible losses such as the energy needed to manufacture replacements for corroded objects [16, and 17].

The protective film erodes away when the Cl⁻ content reaches a critical level (the threshold level), and corrosion of the steel then occurs on reaction with oxygen and water [18, 19, and 20].

Reinforced concrete elements may corrode strongly in proximity to marine environments, through groundwater and airborne droplets. Chloride has been detected in marine environments within 300 m of the ocean, transported by wind, reaching 500 ppm or more [21]. The repair cost of corrosion-impacted structures is a major concern for highway agencies; in the United States it is estimated at more than $20 billion and expected to increase by $500 million per year [22,23].

Steel fully covered by concrete is well protected against corrosion because the cement paste provides an alkaline environment that protects the steel bars via a ferric oxide layer that builds up on the steel surface. This protective film is a few nanometers thick and is stable in an alkaline environment at pH > 11 [24]. The thin protective film can be removed by carbonation of the concrete, especially in the vicinity of chloride ions. The steel becomes unprotected when the pH drops below 10 [23,25].

The weakening of concrete by corrosion of steel is due to the growth of the oxide, which has an increased volume [11]. When the corrosion product hydrates, it swells and becomes porous, increasing the volume at the steel-concrete interface by at least a factor of two. This leads to cracking of the protective concrete cover; the corroding steel becomes rusty, brittle and flaky, which increases the risk of cracks in the concrete [10, and 25]. The protection of the alkaline environment can be damaged when chloride ions are present: the steel bar surfaces become exposed and the dissolution reaction occurs [11].
Corrosion treatment molecules penetrate the concrete through pores and cracks to restore the thin film surrounding the steel and extend the life of the concrete structure [26].

Related research evaluated corrosion progression and found that calcium nitrite is a good protection agent against corrosion in the presence of chloride [27]. Other research investigated the effectiveness of corrosion inhibitors; the authors studied concrete samples containing NaCl soaked in saturated Ca(NO2)2 solutions to simulate strong mortar deterioration [8]. They observed visual deterioration in the form of cracks and bulging in the concrete, but no corrosion of the reinforcing steel, owing to the use of corrosion inhibitors.

In another study, researchers investigated the performance of new types of corrosion inhibitors based on a bipolar mechanism, which can penetrate deep into concrete owing to their high vapor pressure [19]. The inhibitor was applied both as an admixture in the concrete mix and as a coating on hardened concrete. The authors measured half-cell potentials and found a reduction in concrete strength. They observed that adding the inhibitors did not affect mechanical properties such as workability, water absorption, setting time, and compressive strength [19].

In another study, researchers evaluated the corrosion impact on existing reinforced concrete bridge decks. They used suction to remove excess moisture from the concrete elements and then injected inhibitor into the concrete under pressure. This method reduces corrosion by slowing the anodic and cathodic reactions [20].
Steam curing of concrete is advantageous because it provides hardening in a short time. The desired effect is an increase in the compressive strength of the concrete in a short time without cracking. Improved concrete resistance to chemicals such as sodium and magnesium sulfate salts can also be achieved by steam curing.

Chemical additives are used in concrete in small fractions for several reasons, such as void reduction, reduction of the water or cement content, plasticization, and control of setting time.

Calcium chloride additives have several beneficial effects on the physical properties of concrete.

Calcium chloride (CaCl2) is a chemical admixture and a secondary product of the Solvay sodium carbonate process. It is available as flakes, pellets or granules [28, and 29]. Calcium chloride is typically added to the concrete mix in cold climates because it allows the concrete to gain strength to a similar extent as under normal curing temperatures [30].
Chloride ions cause corrosion in concrete at soluble chloride ion levels of 0.1% and higher, equivalent on average to 700 g of chloride per m³ of concrete [17]. The chloride threshold level for corrosion is expressed as the ratio of chloride to hydroxyl ions: corrosion is initiated when the chloride concentration exceeds 0.6 of the hydroxyl concentration. This is equivalent to a level of 0.4% chloride by weight of cement cast into the concrete [29 and 30].
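Expressed per cubic metre, the 0.4%-by-cement threshold depends on the mix proportions. A sketch assuming a typical cement content (the 350 kg/m³ figure is an assumption, not from the text):

```python
# Chloride threshold converted to grams per cubic metre of concrete.
cement_content = 350.0    # kg cement per m^3 concrete (ASSUMED typical value)
threshold_pct = 0.4       # threshold, % chloride by weight of cement (from text)

cl_g_per_m3 = cement_content * threshold_pct / 100.0 * 1000.0
print(f"threshold ~ {cl_g_per_m3:.0f} g Cl per m^3")   # 1400 g/m^3 at this mix
```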
Under normal conditions, calcium chloride is used to shorten the time to proper hardness during the initial setting. A major limitation to the wider use of calcium chloride in reinforced concrete is that it promotes corrosion of the reinforcement if present in large free amounts [31].

There is a chloride threshold concentration that initiates corrosion of metal embedded in concrete and grout. Chloride threshold values are stated in the construction codes of several countries; design codes specify maximum allowable chloride contents in concrete and grout. The American Concrete Institute (ACI), the American Association of State Highway and Transportation Officials (AASHTO) code, and a Post-Tensioning Institute (PTI) code suggest a fraction of 0.06-0.08 percent by weight of cement [20].

In fresh concrete, calcium chloride shortens the time for initial hardening, which reduces the waiting time before support frames can safely be removed by at least 40-50% without reducing the final strength. The positive effect of this reduced waiting time is that it allows moving to the next phases of construction sooner.

The penetration of chloride ions through pores in the concrete reaches the steel bars, weakens the protective layer, and exposes the steel to corrosion in the presence of moisture and oxygen [32].
The chemical damage from chloride compounds comes from Cl reactions with the cementitious matrix, which leach calcium hydroxide through decalcification of the calcium silicate hydrate and form brucite (Mg(OH)2) and magnesium silicate hydrate [30].

The chloride ion is adsorbed inside the porosity voids or joins the hydration process [33]. Chloride binding in concrete elements influences the rate of chloride ingress, which determines the occurrence of chloride-induced corrosion. The chloride in the pore solution controls the diffusion process, which is hindered by the binding reaction [34].

When the binding effect of chloride is strong, the concentration of free chloride is reduced, which means that chloride diffusion decreases simultaneously [35].

Chloride ions react with the calcium aluminates and calcium aluminoferrite in the concrete to form calcium chloroaluminates and calcium chloroferrites, in which the chloride is bound as an insoluble precipitate [31]. Some active soluble chloride remains free in liquid form in the concrete. While the concrete is not carbonated, the free dissolved chloride remains low, about 10% of the total chloride. However, when active carbonation proceeds, the hydrated cement phases break down and the chloroaluminates release chloride ions; thus carbonated concrete contains more free chloride [36]. The chloride binding reaction occurs between chloride ions and the C4AF, C3A, and their hydration products, and is known as the formation of Friedel's salt [37].
The chemical reaction between C3A and chloride ions, leading to the formation of Friedel's salt, is given below:

C3A + CaCl2 + 10H2O → C3A·CaCl2·10H2O (Friedel's salt)

This study aims to investigate the influence of carbonation on the penetration of chloride in concrete and the resulting degree of corrosion. Chloride ions may be transferred into concrete from external sources such as de-icing salt, seawater and groundwater, and from internal sources through contaminants in the concrete such as marine aggregate and chemical admixtures containing chloride ions [3].

Carbonation of concrete is associated with the dissolution of CO2 in the pore solution, where it reacts with the calcium in calcium hydroxide (Ca(OH)2) and calcium silicate hydrate (3CaO·2SiO2·3H2O), producing calcite (CaCO3).
The exposed fresh concrete surface reacts gradually with the CO2 in air. The carbonation front penetrates deeper into the concrete at a rate proportional to the square root of time [29]. After 1 year, the penetration may reach 1 mm if the concrete is dense with a low W/C ratio, or 5 mm if the concrete is porous with a high-porosity cement.
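The square-root law gives a one-line depth model. A sketch with the coefficient fitted to the one-year figures above (1 mm for dense, 5 mm for porous concrete):

```python
# Carbonation depth x = K * sqrt(t), K in mm/sqrt(year) from the text's figures.
import math

for label, K in (("dense", 1.0), ("porous", 5.0)):
    depths = {t: K * math.sqrt(t) for t in (1, 4, 25)}      # years -> mm
    print(label, {t: round(x, 1) for t, x in depths.items()})
# dense:  {1: 1.0, 4: 2.0, 25: 5.0}; porous: {1: 5.0, 4: 10.0, 25: 25.0}
```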
This study aims to investigate the effect of calcium chloride (CaCl2) as an admixture on the corrosion process in concrete.
Materials and Methods
The depth of carbonation is reported as the average of the carbonation depths measured at three points on each cube.

In this research, we add calcium chloride to the concrete mix at up to 3% by weight of cement. We consider two types of cement: 1) ordinary Portland cement, and 2) quick-setting Portland cement (QSC) with 2% of gypsum to improve the setting time, as shown in Table 1. The experimental work included preparing concrete samples in cubic molds measuring 15 × 15 × 15 cm of mortar with a 1:2 ratio of cement to sand (sand retained on a 0.60 mm sieve) (Type 1).

The second mix (Type 2) was composed of a 1:3 cement-to-sand ratio. Both sample types were reinforced with 5 mm diameter steel bars, and the water-to-cement ratio was 1:2.

Six cubic samples (cast dimensions 15 × 15 × 15 cm) were prepared for each cement type. The cement paste was mixed so as to avoid large voids in the samples. Three concrete samples were treated in a special chamber after casting for six hours under ambient pressure at a temperature of 80 °C.

Calcium chloride was added at fractions from 0.5 to 3% of the cement weight: six levels of CaCl2 were used, at 0.5, 1, 1.5, 2, 2.5, and 3 percent.
The selected values cover range of expected CaCl 2 levels expected at different operation conditions. For example , 0.3 and 1% fractions are recommended for concrete not protected and well protected from water as suggested by ACI 318-08 (ASCC, 2021 ). Between 1-3% fractions are selected to produce high corrosion risk and analyze corrosion progress.
A steam curing cycle is followed involves three intervals of curing time, maximum steam temperature and duration at the maximum temperature of 3, 3, 4 hours respectively. Then samples are placed on flame source under ambient humidity and temperature 20 o C. The heating treatment of samples is needed to achieve early resistance that allows handling of concrete production after short time of casting, frames can also be removed in shorter period than typical water treatment, which is beneficial for construction projects. Then the three samples of each type were placed in hardening chamber after removing casts. Steam curing is compared to natural curing, that is in ambient temperature and humidity which takes longer time.
The water steam-curing process consists of three stages: 1) heating the treated area after initial setting, 2) holding the high temperature constant for a designated period, and 3) cooling the treated area. Water steam curing is conducted at ambient atmospheric pressure inside an enclosure installed specifically to keep the applied moisture on the treated area and to reduce heat losses. Tarpaulin sheets are typically used to enclose the treatment area.
Injection of water steam into the enclosure cannot start until initial setting is achieved, which occurs about 3 hours after final placement of the concrete. Typically, a 3 to 5 hour waiting period before steaming provides the maximum early strength [40]. The steam temperature in the enclosure should be held at 60 °C until the desired concrete strength is achieved, and it should not rise beyond 70 °C, to avoid heat-induced delayed expansion and a reduction in final strength [40]. Carbonation depth is measured with a phenolphthalein indicator, which gives a pink color when the solution is alkaline with pH > 9 [38]. The indicator test is conducted by spraying onto freshly exposed or broken concrete surfaces [39].
The phenolphthalein indicator is applied to a fresh concrete fracture surface. A fully carbonated cement mix has a pH of about 8.4. If the indicator turns purple, the pH is above 8.6; if the surface remains colorless, the pH of the concrete is below 8.6. A strong, immediate color change to purple indicates a pH above 8.6, typically between 9 and 10.
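The interpretation logic of the spray test can be summarized as below; the thresholds encode only what the preceding paragraph states.

def interpret_phenolphthalein(color):
    """Map the indicator response on a fresh fracture surface to a pH range."""
    if color == "purple":      # alkaline: pH > 8.6, typically 9-10
        return "pH > 8.6 (typically 9-10): concrete not carbonated"
    if color == "colorless":   # alkalinity lost: pH < 8.6
        return "pH < 8.6: concrete carbonated"
    return "ambiguous response: retest on a freshly broken surface"

print(interpret_phenolphthalein("purple"))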
For the long term (150 days) a different method was used: samples were tested for chloride ion using 1% silver nitrate. The reaction produces insoluble AgCl and nitrate, as written below.
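The underlying precipitation reaction (standard chemistry, written out here because the source omits it) is

$$\mathrm{Ag^+ + Cl^- \longrightarrow AgCl\!\downarrow}, \qquad \mathrm{AgNO_3 + NaCl \longrightarrow AgCl\!\downarrow + NaNO_3}.$$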
The intensity of corrosion of the steel bars was measured by the electrical (half-cell) potential method, using an electrochemical cell in which one electrode is immersed in a saturated silver chloride solution and the second electrode is embedded in the cement cube.
Concrete temperature should be monitored at the concrete surface. Ambient air temperature is not an adequate indicator, because hydration heat can drive the internal concrete temperature above 70 °C. Curing concrete at 60 °C is intended to reduce shrinkage and creep relative to curing at 23 °C for 28 days [40, 41, 42].
Results
The depth of carbonation as a function of w/c ratio for the mixes used in this study is shown in Table 1.
Free chloride ion was evaluated in cement mortars containing CaCl2 at 0.5-3% by weight of cement, by adding 1% silver nitrate solution. After steam curing of the mortar, chloride ion was detected at calcium chloride contents above 2.5%.
Hydrated calcium aluminate chloride forms after the gypsum present in the cement has bound part of the tricalcium aluminate; the remaining tricalcium aluminate reacts with the calcium chloride.
The compounds resulting from the association of chloride ions with ferrous hydroxide are stable.
When water is mixed with the dry ingredients, the hydration reaction is initiated, and the extent to which this reaction is completed affects the concrete's strength and durability. Fresh concrete typically contains more water than is needed for hydration; however, significant water loss by evaporation will prevent adequate hydration. Insufficient hydration tends to occur first at the concrete surface, since it dries first. Hydration is fastest during the initial days, so it is important to maintain excess water in the concrete. During the curing process the cement becomes harder, less permeable, and more resistant to stress.
There are several curing methods, and the appropriate method depends on the intended use of the construction and the waiting time allowed for the concrete to harden. Several curing compounds can be used, or the fresh concrete can be covered with a water layer or wet sheeting.
During cement hydration the internal water content decreases, which causes the cement paste to dry out, so additional water is needed. Water shortage may affect the final concrete properties, especially when the internal humidity drops below 80% during the first 7 days. Membrane-forming curing compounds may therefore not maintain adequate water in the concrete. To secure adequate hydration, fogging and wet-curing treatments are used on site [9, 17]. Moisture fogging after placing the concrete protects against cracking, especially at low water-cement ratios.
If the internal humidity drops to about 80%, hydration stops and the gain in strength ceases. If moist curing is then continued, the strength increase resumes, but the maximum potential strength will not be obtained.
Curing at temperatures that are too warm or too cool should be avoided to prevent undesired shrinkage. The temperature in the enclosure fitted over the concrete surface should be raised and lowered at a rate of up to 33 °C per hour, depending on the configuration and size of the element.
Continuous and sufficient hydration curing during the initial period is therefore essential to reach adequate strength, and concrete surfaces must be kept hydrated during this period. Water loss by evaporation causes shrinkage of the concrete, which creates internal tensile stress; if this stress develops before the concrete has reached adequate tensile strength, surface cracks will occur [18].
At low temperatures, hydration proceeds at a much slower rate; temperatures below 10 °C slow the gain of early strength [17].
Steam curing is preferred when time is limited, early strength gain is needed, and additional heat is required for the hydration process.
The curing temperature in the installed enclosure should be held fixed until the concrete attains its normal strength. The appropriate curing time depends on the concrete type and the water-steam temperature inside the enclosure [36, 40].
The carbonation process is initiated by the reaction of carbon dioxide with hydrated cement in the presence of water. Table 3 shows the carbonation depth in mortar with a mix design of 1:2 cement/sand and w/c = 0.5 (Mix 1); similarly, Table 2 shows the carbonation depth in mortar with a mix design of 1:3 cement/sand and w/c = 0.6 (Mix 2). The excess water released by the carbonation process is used in rehydration and calcium carbonate formation and reduces the porosity of the fresh concrete. However, carbonation and the presence of calcium chloride combine to erode the passive layer on the steel bars and initiate the corrosion process [43, 44].
Conclusions
In this research, two types of cement were used: normal and quick-setting. For each cement type, two concrete mixes were used, with low and medium tricalcium aluminate contents (3% and 8% C3A, respectively). Samples were cast in cubic molds (15 × 15 × 15 cm³). For each concrete mix, six levels of CaCl2 ranging from 0.5% to 3% were used, and natural water curing was compared with steam curing.
Corrosion is assessed through the carbonation penetration depth in the concrete, and the two correlate positively, as shown in Figures 1 and 2. Figures 1 and 2 also indicate that the quick-setting Portland cement (QSPC) maintains better protection when exposed to high levels of CaCl2, while naturally cured concrete shows less corrosion resistance; the steam method consistently inhibits corrosion better than the normal method. At low CaCl2 levels in the mix, the curing method and concrete type show clear variation, and increasing the CaCl2 fraction reduces the carbonation layer thickness. The depth of the oxidation layer was assessed by measuring the electrical potential of the mortar samples. The low-cement-ratio mix (Mix 2) shows deeper corrosion penetration than the higher-cement-ratio mix (Mix 1). For both the normal and quick-setting cement types, steam curing produced higher protection than water curing.
The electrical conductivity of the steel in the reinforced samples is an indicator of the level of protection surrounding the steel bars; it is inversely related to corrosion.
The results shown in Figures 3 and 4 indicate that the electrical potential measured on the steel bars depends on the alumina content of the cement. The electric potential of the steel bars in Type 1 (moderate alumina) is greater than in Type 2. The rusting trend varied among the concrete mixes: corrosion of the steel bars is reduced in the higher-alumina cement compared with the lower-alumina one, a trend attributed to the shortage of free chloride ion. The high-cement-ratio mix (Mix 1, Figure 3) shows significantly higher readings than the lower-cement-ratio mix (Mix 2, Figure 4) for the two cement types used. A high reading in mV means higher protection from corrosion, while the quick-setting type had lower readings and less corrosion resistance.
In summary, the results show significant corrosion of the steel bars in the low-alumina cement (Mix 2). The minimum corrosion is observed in the moderate-alumina cement mortar (Mix 1), even at a calcium chloride content of 3%.
An additional benefit of adding calcium chloride is an increase in the final strength of the concrete. The gain in concrete strength and corrosion protection due to the use of calcium chloride is comparable to curing under wet burlap for three days. Steam curing provided higher protection, as shown by the higher measured potentials. A CaCl2 level of 1% showed the highest level of protection. | 6,461 | 2021-10-01T00:00:00.000 | [
"Materials Science"
] |
Asymptotic Symmetries of Maxwell Theory in Arbitrary Dimensions at Spatial Infinity
The asymptotic symmetry analysis of Maxwell theory at spatial infinity of Minkowski space with $d\geq 3$ is performed. We revisit the action principle in de Sitter slicing and make it well-defined by an asymptotic gauge fixing. As a consequence, the conserved charges are inferred directly by manipulating surface terms of the action. Remarkably, the antipodal condition on de Sitter space is imposed by demanding regularity of the field strength at the light cone for $d\geq 4$. We also show how this condition reproduces and generalizes the parity conditions for inertial observers treated in 3+1 formulations. The expression of the charge is discussed in two limiting cases: null infinity and inertial Minkowski observers. For the separately treated 3d theory, a set of non-logarithmic boundary conditions at null infinity is derived by a large-boost limit.
Introduction
In Lagrangian theories with a Lie group of global symmetries, Noether's first theorem establishes a conserved current for every generator of the corresponding Lie algebra. Noether's method, however, fails to assign conserved currents to gauge symmetries [1]. Instead, several methods have been proposed to associate 2-form currents k^{µν} to gauge symmetries [2][3][4][5], which yield conserved surface charges. In gauge theories (and gravity), the asymptotic symmetry group (ASG) is the group of gauge transformations with finite surface charge, and its elements are called large (or improper) gauge transformations (in gravity, large (or improper) diffeomorphisms). To obtain the asymptotic symmetry group, one fixes an appropriate gauge and imposes certain fall-off behavior on the fields. Large gauge transformations are then the residual gauge transformations, i.e. those which preserve both the boundary conditions and the gauge.
In this work, we study the asymptotic structure of the Maxwell field in arbitrary dimensions at spatial infinity, and identify a set of boundary conditions with non-trivial ASG, generalizing previous works in four dimensions. The ASG with our prescribed boundary conditions is a local U(1) on the celestial sphere S^{d−2}, parametrized by arbitrary functions on S^{d−2}. The surface charges are obtained by manipulating surface terms arising from the variation of the action, circumventing standard methods. To do this, we make the action principle well-defined by making the timelike boundary term vanish, as done in [23][24][25][26]. As shown in [6], demanding that the action principle be well-defined determines the asymptotic gauge almost completely. This condition automatically ensures conservation of the charges for residual gauge transformations.
A key result of this paper is that we provide a rationale for imposing the antipodal matching condition in arbitrary dimensions. Previous works on gauge theories in flat space advocate a matching condition [22] for the fields at spatial infinity i^0, when it is approached from the future and past null boundaries I^+, I^-. On the asymptotic de Sitter space, this condition relates the states at the past and future boundaries I^±. In dS/CFT studies, various antipodal conditions have been proposed to make the Hilbert space well-defined [27]. We show that an antipodal condition is necessary to ensure regularity of the field strength at the light cone for d ≥ 4.
We will work in the de Sitter slicing [21,28] of Minkowski space, which makes the boundary conditions manifestly Lorentz invariant. In the 3+1 Hamiltonian approach of [20], the formalism loses manifest Lorentz symmetry and the ASG is presented as the product of two opposite-parity subgroups. We will show how their results regarding conserved charges and parities are recovered and generalized by focusing on specific slices of de Sitter space.
Finally, the 3-dimensional theory is covered in section 4. Asymptotic symmetries of 3d Einstein-Maxwell theory were studied in [18] at null infinity and in [29] in near-horizon geometries. We will show, by taking the null infinity limit, that the same set of charges (in the Maxwell sector) can be obtained in a non-logarithmic expansion. In addition, our hyperbolic setup fits completely with [30] on BMS_3 symmetry at spatial infinity. Thus, we expect that the combined hyperbolic analysis will reproduce the results of [18] in its non-radiative sector.
Rindler patch, action principle and conserved charges
Given an arbitrary point O in Minkowski space, one can define null coordinates u = t − r and v = t + r. The future light cone L+ of O is the u = 0 hypersurface, while the past light cone L− is at v = 0. L+ and L− intersect at the origin O. We call the set of points at space-like distance from O the Rindler patch and denote it by Rind_{d−1} (see Figure 1). The Rindler patch is conveniently covered by coordinates (ρ, T, x^A), A = 1, ..., d − 2, in which the metric takes the form given below. The origin is at ρ = 0 with T undefined. The future light cone L+ is at (ρ = 0, T = 0) and the past light cone L− at (ρ = 0, T = π). Spatial infinity i^0, defined as the destination of spacelike geodesics, is at (ρ → ∞, 0 < T < π), shown as the intersection of future and past null infinities on the Penrose diagram. The limit (ρ → ∞, T → 0, π) covers the portion of null infinity outside the light cone. The constant-ρ hypersurfaces are (d − 1)-dimensional de Sitter spaces of radius ρ, invariant under Lorentz transformations about O. We denote de Sitter coordinates by x^a, a = 2, ..., d, and the unit dS_{d−1} metric by h_{ab}.
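The line element in these coordinates is lost in the extraction; a reconstruction consistent with the statements above (constant-ρ slices are de Sitter spaces of radius ρ, with light cones at T = 0, π) is

$$ds^2 = d\rho^2 + \rho^2\, h_{ab}\, dx^a dx^b, \qquad h_{ab}\, dx^a dx^b = \frac{-dT^2 + d\Omega_{d-2}^2}{\sin^2 T},$$

where $d\Omega_{d-2}^2$ is the round metric on $S^{d-2}$; the explicit conformal factor should be checked against the original paper.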
The study is restricted to solutions with an asymptotic power expansion in ρ, of the schematic form shown below.

Figure 2. The region where we define the action problem. It is confined by initial and final cones I1 and I2 (e.g. at constant T), intersecting at O. The region is not bounded in the ρ direction, so I1,2 are Cauchy surfaces where initial and final data are fixed. The boundary terms are computed at constant-ρ hyperboloids (B). Dashed lines show the light cone.
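The expansion itself did not survive extraction; schematically, and consistently with the notation explained in the next sentence, it reads

$$\Phi(\rho, x) = \sum_n \Phi^{(n)}(x)\, \rho^{-n}$$

for each field component Φ, with the leading term the one of least n; the precise leading powers for each component are fixed by the boundary conditions of section 3.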
In some cases, we drop the superscript (n) for the leading term (the least n) in each component to reduce clutter.
The action principle
In the Lagrangian formulation of physical theories, the classical trajectories of the dynamical variables Φ_i are stationary points of an action functional for fixed initial and final values. In field theories, the functional derivative of the action is well-defined if the variation of the dynamical fields leaves no boundary terms. In our setup, there are two spacelike boundaries I1,2 and one timelike boundary B lying on the asymptotic de Sitter space (see Figure 2). Data on the spacelike boundaries are fixed, so we must ensure that the boundary term on B either vanishes or is itself a total derivative.
For Maxwell theory, the standard action and the resulting timelike boundary term are written below. We will show that for specific boundary conditions and an asymptotic gauge fixing, the boundary term does vanish.
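In common conventions (a reconstruction; the source equations are lost in extraction), the action and its timelike boundary term read

$$S = -\frac{1}{4}\int d^dx\, \sqrt{-g}\, F_{\mu\nu}F^{\mu\nu}, \qquad \delta S\big|_B = -\int_B d^{d-1}x\, \sqrt{-\gamma}\, n_\mu F^{\mu\nu}\, \delta A_\nu,$$

with γ the induced metric on B and n^µ its outward normal.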
Conserved charges
For the specific example of Maxwell theory, we show that with a well-defined action principle at hand, one can define conserved charges for gauge transformations of the theory, and identify the asymptotic symmetry group as the group of gauge transformations having finite charge.
Consider the variation of the action around a solution to the equations of motion. If the field variation is a gauge transformation (or a diffeomorphism in gravity theories), the I integrands in (2.7) become total derivatives, so the first two terms become codimension-2 integrals on ∂I1 and ∂I2. This can be checked in specific examples, and a proof for the gravity case is given in [?]. If the action principle is well-defined, the B-integral on the timelike boundary either vanishes or is a total derivative on the hyperboloid (so that it becomes a surface integral on the boundaries of B). As a result, the gauge transformation of the action becomes, on shell, the difference of two codimension-2 integrals (2.8). The left-hand side depends on the explicit form of the action. If the action is gauge invariant (δ_λ S = 0), (2.8) shows that the integral of C over ∂I is independent of the surface of integration; thus we can identify the codimension-2 integrals as the conserved charges corresponding to the gauge transformation δ_λ.
Covariant phase space method
Let us compare the procedure above with the covariant phase space method. The symplectic form of the theory is nothing but the variation of the action surface terms for two field variations δ, δ′, where I is defined in (2.7). Taking a second variation of (2.7) shows that in general Ω is not conserved, since its flux at the timelike boundary B is non-vanishing and given by the B integrand defined in (2.7). Therefore, eliminating the symplectic flux is equivalent to making the action principle well-defined. For the conservation of the symplectic form, the flux (2.10) need not vanish strictly: it is enough, if possible, to make it a total divergence, reducing the expression to codimension-2 integrals on ∂B. Finally, Ω_bdry can be added to Ω as a surface term, leading to conserved charges. This subtraction is a Y-ambiguity in covariant phase space terminology [3]. This procedure was carried out in [19] for 4d Maxwell theory; it can be readily generalized to arbitrary dimensions by an appropriate choice of boundary conditions. However, we choose to bypass the symplectic form by working directly with the action.
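The explicit expressions (2.9)-(2.11) are lost in extraction; in standard conventions, the Maxwell symplectic form referred to here is

$$\Omega(\delta, \delta') = \int_I d\Sigma_\mu \left( \delta F^{\mu\nu}\, \delta' A_\nu - \delta' F^{\mu\nu}\, \delta A_\nu \right),$$

a reconstruction consistent with its definition from the boundary term of the action.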
5 Notation: ≈ is equality when equations of motion hold.
Field equations in de Sitter slicing
Written in coordinates (ρ, x^a), the field equations and Bianchi identities take the form shown below, where D is the covariant derivative on dS_{d−1}. Analyzing the solutions suggests appropriate boundary conditions for the theory. Note that F_{aρ} and F_{ab} are distinct Lorentz-invariant components. First we ask whether there are solutions to the equations of motion once either of them is set to zero.
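The equations (2.12a)-(2.12d) themselves are missing from the extraction; decomposing ∇_µF^{µν} = 0 and the Bianchi identity in the slicing metric ds² = dρ² + ρ² h_{ab} dx^a dx^b gives (a reconstruction, with de Sitter indices raised by the unit metric h)

$$D^a F_{a\rho} = 0, \qquad \partial_\rho\!\left(\rho^{\,d-3} F_{\rho}{}^{a}\right) + \rho^{\,d-5}\, D_b F^{ba} = 0,$$
$$\partial_\rho F_{ab} = \partial_a F_{\rho b} - \partial_b F_{\rho a}, \qquad D_{[a} F_{bc]} = 0,$$

which reproduces the properties quoted below: the last identity makes F_{ab} closed on dS_{d−1}, and the third fixes F_{ab} ∝ ρ⁰ when F_{aρ} = 0.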
2. In general, F_{ab} is closed on de Sitter space by (2.12d), and thus locally exact, F_{ab} = (dA)_{ab}. Switching F_{aρ} off fixes the ρ-dependence through (2.12c) to F_{ab} ∝ ρ^0, and the remaining field equation then constrains F_{ab} on dS_{d−1}. Any other solution involves both F_{ab} and F_{aρ}. The solutions with power-law fall-off in ρ correspond to multipoles of electric and magnetic branes. Electric monopoles generate the independent solution (2.13) for F_{aρ}, while magnetic monopoles (branes) generate the independent solution (2.14) for F_{ab}. Their multipoles generate fields of lower fall-off, which mix F_{aρ} and F_{ab}. On the contrary, arranging monopoles to build lines of charge will generate stronger fields at infinity, but in any case mixes F_{aρ} and F_{ab}.

Denote by E the set of solutions for electric monopoles given in (2.13). This space covers electric charges moving in space which pass the origin simultaneously at t = 0, so that their worldlines cross O. The field strength is F_{ρa} ∝ ρ^{3−d} with no subleading terms. For an arbitrary configuration of freely moving charges, the leading component of the asymptotic field is an element of E, but subleading terms are generally present. In other words, the definition of E is Lorentz invariant, but not Poincaré invariant. E encodes the information of the charge values q_n and their velocities β_n. The space E is isomorphic to the space of boost vectors β, that is, R^{d−1}.

Notation: ∼ O(ρ^n) means all powers not exceeding n, while ∝ ρ^n means the n-th power of ρ exclusively. For example, for an electric dipole, F_{ρa} ∝ ρ^{2−d}, and F_{ab} is then fixed by the Bianchi identity away from sources.
The space of conserved electric charges that we will construct is also isomorphic to R^{d−1}; each point of this space, with coordinate vector β, is a conserved charge and gives the total electric charge in space moving with that specific boost.
The set of solutions (2.14) covers magnetic monopoles moving freely in space and crossing the origin at t = 0. In dimensions larger than 4, the magnetic monopoles are replaced by extended magnetic branes, since the dual field strength *F is a (d − 2)-form in that case. We consider boundary conditions which exclude magnetic charges in this work.
Four and higher dimensions
In this section, we explore the asymptotic symmetries of Maxwell theory in dimensions higher than three. First, we present a set of well-motivated boundary conditions on the field strength tensor. Nonetheless, the existence of large gauge transformations demands that the gauge field be finite at infinity. That will necessitate an asymptotic gauge choice to make the action principle well-defined. Finally, we find the conserved charges of the theory at spatial infinity by computing the on-shell action.
Boundary conditions and the action principle
The electromagnetic field of a static electric charge is the Coulomb solution written below. Applying a boost (which belongs to the isometry group of the hyperboloid) turns on other de Sitter components of F_{aρ} with the same fall-off, so one generally has F_{aρ} ∝ ρ^{3−d}. We therefore propose boundary conditions (3.1) for the d-dimensional theory, under which F_{aρ} ∼ O(ρ^{3−d}) and F_{ab} ∼ O(ρ^{3−d}). The F_{ab} components arise because of electric multipoles (cf. §2.3). The leading component of F_{aρ} is in the space E of §2.3. Components of the gauge field that saturate (3.1) decay accordingly; plugging into (2.6), the boundary term falls off like O(ρ^{3−d}). For d > 3, the action principle is then well-defined. However, this choice would make the charges of all gauge transformations vanish. For instance, the Gauss law is regarded as the charge for the gauge transformation with λ = 1, which is excluded if A_a ∼ O(ρ^{3−d}) in dimensions higher than three. The theory enjoys a non-trivial ASG only if δA_a ∼ O(1). Thus, our prescribed boundary condition is as follows: A_a ∼ O(1), but the first few terms in the asymptotic expansion of A_a are pure gauge, such that F_{ab} ∼ O(ρ^{3−d}). Previous works on four-dimensional Maxwell theory allow magnetic monopoles; that would make F_{ab} ∼ O(1), so the leading term of the gauge field would not be pure gauge. Here we do not take magnetic charges into account.
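For reference, the static-charge field invoked at the start of this subsection (its equations, including the footnote, are lost in extraction) is the Coulomb solution

$$F_{tr} = \frac{q}{a_{d-2}\, r^{\,d-2}},$$

where, as the surviving footnote text states, a_{d−2} is the area of a (d − 2)-sphere; in hyperbolic coordinates this monopole field exhibits the fall-off F_{aρ} ∝ ρ^{3−d} used above.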
With the aforementioned boundary condition, the boundary term of the action is finite. After integration by parts, the boundary term of the action vanishes on shell by the equation of motion D²ψ = 0 (up to a total divergence on B). However, we demand off-shell vanishing of the boundary term, since the variational principle must entail the equations of motion, and they cannot be used a priori.
One way out is to fix the asymptotic gauge δ(D^a A^{(0)}_a) = 0, for which the boundary term becomes a total divergence on B after an integration by parts. There are also other possibilities. The Lorenz gauge at leading order, combined with our boundary conditions on the field strength, constrains A^{(0)}_a. Thus, the Lorenz gauge, or its extension to general α, will make the action principle well-defined in dimensions strictly higher than 4. In four spacetime dimensions, A^{(1)}_ρ = ψ (up to a constant, which drops out of derivatives), so it is necessary to add a boundary term to make the action well-defined [6].
Conserved charges from action
The action with a solution to the equations of motion plugged in is a functional of the initial and final field values (or boundary values in Euclidean versions); that is how classical trajectories are defined. For Maxwell theory, γ is the induced metric on I and n^µ is its future-directed normal vector. Varying (3.8) by a gauge transformation δA_µ = ∂_µΛ, and using the field equations after an integration by parts, gives (3.9), where λ = Λ^{(0)}. The explicit form of the Maxwell action (2.5) shows that the left-hand side above is the flux through the spatial boundary. We can make this "charge flux" vanish asymptotically by the additional assumption J^ρ ∼ O(ρ^{−d}). This condition ensures that the system is localized and the charges are conserved. So far we have made the left-hand side of (3.9) vanish; let us look at the other side.
Recall that the action principle necessitated fixing the asymptotic Lorenz gauge (3.6), leaving residual gauge transformations with arbitrary subleading terms. The condition on λ allows us to turn the very last term on the right-hand side of (3.9) into a total divergence on B. As a result, we can prove that the quantity (3.12) is independent of I, i.e. conserved. The general solution on dS_{d−1} is given next.
Light cone regularity and antipodal identification
The solutions are expressed in terms of P^m_l and Q^m_l, the associated Legendre functions of the first and second kind respectively; for ℓ = 0 they reduce to elementary functions. As far as the field equations are concerned, the whole set of solutions in (3.15), with two sets of coefficients, is admissible both for ψ and for λ. In previous works on four-dimensional Maxwell theory, a boundary condition — the antipodal matching condition [22] — was imposed such that one branch of the solutions in (3.15) was allowed for ψ and the other for λ. Here we will provide a rationale for the antipodal matching condition in higher dimensions.
The field strength tensor F, being a physical field, must be regular at the light cone L± (i.e. the u = 0 and v = 0 surfaces in advanced/retarded Bondi coordinates). Recall that in the space E, F_{aρ} = ρ^{3−d}∂_aψ in d dimensions, which diverges at ρ = 0 in dimensions larger than three. Near L+ (located at ρ = 0, T = 0), ψ must decay at least like T^{d−2} to make F_{Tρ} finite.
The light-cone behavior of the solutions (3.15) distinguishes two branches; this is a well-known condition in dS/CFT studies [31]. Gauge parameters with non-vanishing charge (3.12) must reside in the set f⁺. These are even under the de Sitter antipodal map written below. Note that the conditions (3.18) and (3.19) hold on the entire de Sitter space, and in particular at T = 0, relating the fields on the future and past boundaries of the hyperboloid. The fields on the left-hand side live on the past of future null infinity I⁺₋, while those on the right-hand side live on the future of past null infinity I⁻₊. Therefore λ and F_{aρ} = ∂_aψ are both even under the antipodal map between future and past null infinity.
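For definiteness, the de Sitter antipodal map appearing in (3.18)-(3.19) acts, in the coordinates of section 2, as

$$(T, \hat{x}) \;\longrightarrow\; (\pi - T,\, -\hat{x}),$$

a reconstruction consistent with its explicit 3d form (T, ϕ) → (π − T, ϕ + π) given in section 4 and with the parity discussion of the following subsections.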
Charge at null infinity
In the Rindler patch, one can approach the light cone hypersurface L+ ∪ L− from outside. The charge (3.12) takes a simpler form in that limit: the second term in (3.12) vanishes, while the first term survives. Evaluating the leading field strength at null infinity then recovers the familiar expression for surface charges at future null infinity.
Inertial observers
Consider a Minkowski observer with coordinates (t, r, x^A), who advocates a "3+1 formulation" of the d-dimensional theory. Boundary conditions restrict Cauchy data residing on constant-time hypersurfaces at large r. It is implicitly presumed that the time interval ∆t between Cauchy surfaces is much smaller than the radius r beyond which space is conceived as the "asymptotic region". This ∆t/r → 0 condition makes all Cauchy surfaces converge at the T = π/2 "throat" of the asymptotic de Sitter space; infinitesimal Lorentz boosts will incline this surface. The solutions to the (second-order) equations of motion on dS_{d−1} are specified by initial/final data on the past/future boundaries of de Sitter space, I⁻₊/I⁺₋. When an additional antipodal condition is imposed, only one set of data on either boundary suffices (the other is determined by the equations of motion). When the spacetime is restricted to a cylinder around T = π/2, the solution can be specified by a pair of independent data Φ and ∂_TΦ (with higher time derivatives determined by the equations of motion). The antipodal condition then halves the possibilities in each, by a restriction on the angular dependence, as explained below.
Here we focus on the region around the T = π/2 surface and translate the previous results into canonical language. First of all, the coordinates of the two descriptions are related in the standard way near the throat. In four dimensions, A_t receives an additional contribution −ψ(π/2, x̂)/r. The radial components, the field strength, and the "momenta" π_i may be written accordingly; the momenta are symbolic in this discussion, but they equal the true momenta in a Hamiltonian formulation. Finally, the gauge parameter divides into even and odd parts. The antipodal conditions (3.18) and (3.19) imply corresponding conditions on ψ̄ and the momenta. In even spacetime dimensions these are parity conditions, because the antipodal map x̂ → −x̂ reverses the orientation of S^{d−2} (the volume form changes sign). In odd dimensions, however, the map is a rotation about the origin, preserving the orientation. These conditions are preserved under boosts: the connected part of the Lorentz group SO(d−1, 1) commutes with parity and time reversal, so the antipodal conditions (3.18) and (3.19) hold in any Lorentz frame. Explicitly, for an infinitesimally boosted frame, keeping terms at zeroth order in T, the antipodal conditions fix the temporal argument through π − (π/2 − β·(−x̂)) = π/2 − β·x̂. The conserved charge (3.12) can then be rewritten in the boosted frame. One should note that µ transforms like a vector under boosts, for it is the T-derivative of a scalar.
Finite action and symplectic form
Here we show that the symplectic form is finite in dimensions higher than 4. In analogy with mechanical systems, the symplectic 2-form Ω in field theories is defined from the boundary term of the Lagrangian; for Maxwell theory in the Rindler patch it takes this form, with Ω_bdry being a surface term introduced in [19] for d = 4. In four dimensions this is logarithmically divergent. The second term, which corresponds to magnetic monopoles, is excluded by our boundary condition (3.1). The first term, however, has the form ψ∂_Tψ; if the integration surface is T = π/2, this term vanishes by the antipodal condition (3.18), and this remains true in boosted frames. Nevertheless, it is not clear whether the divergence cancels for arbitrary spacelike surfaces I, and we are not aware of any resolution. A similar divergence occurs in computing the on-shell action, where the cancellation around the T = π/2 surface is again ensured by the antipodal conditions.
Three dimensions
This section is devoted to three-dimensional Maxwell theory. The asymptotic symmetry at null infinity was discussed in [18]. The reason for the separate consideration of the three-dimensional case is the simple form of the solutions: dS_2 is conformally flat, and the solution space is a whole set of left- and right-moving scalar modes. For this simplest case, we will also translate the boundary conditions to Bondi coordinates (u, r, ϕ).
Boundary conditions and solution space
The boundary conditions (3.1) for d = 3 become F_{aρ} ∼ O(1) and F_{ab} ∼ O(1), realized by the corresponding fall-off of the gauge field. The asymptotic behavior adopted here allows for moving charges in 2+1 dimensions. At leading order, F_{ρa} = ∂_aψ. The relevant differential operator is the Laplacian on dS_2, which takes a simple form in the coordinates x± = ϕ ± T, in which the metric on dS_2 is conformally flat (see below). The field equation (4.3) becomes ∂_+∂_−ψ = 0. The general solution with the periodic boundary condition ψ(T, ϕ) = ψ(T, ϕ + 2π) is given below.
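Concretely (a reconstruction of the missing equations, fixed by the statements above), the dS₂ metric and the general periodic solution read

$$h_{ab}\,dx^a dx^b = \frac{-dT^2 + d\varphi^2}{\sin^2 T} = \frac{dx^+\, dx^-}{\sin^2 T}, \qquad x^\pm = \varphi \pm T,$$
$$\psi(x^+, x^-) = a + b\,T + \sum_{n \neq 0}\left( c_n\, e^{i n x^+} + d_n\, e^{i n x^-} \right),$$

with reality conditions c_n = c*₋ₙ and d_n = d*₋ₙ; the left- and right-moving structure is exactly what ∂₊∂₋ψ = 0 requires.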
Action principle and charges
The boundary term with the fall-off (4.2) is finite. Integration by parts and fixing the asymptotic gauge D^a A^{(0)}_a = 0 makes the integrand a total divergence. In contrast to higher dimensions, fixing the Lorenz gauge ∇^µ A_µ = 0 is not possible, because it would imply either ψ = 0 or A_a ∼ O(ρ).
The asymptotic gauge fixing leaves residual gauge transformations satisfying D^a D_a λ = 0. The conserved charges are obtained by the same method explained before.
The static Coulomb solution is F_{tr} = q/r; in hyperbolic coordinates the electric field becomes F_{Tρ} = −q.
Antipodal condition
The whole set of solutions (4.6) is regular at the light cone. Nevertheless, we opt to impose the conditions (3.18), which include the physical solutions.
ψ(T, ϕ) = −ψ(π − T, ϕ + π). (4.9)

The antipodal map (T, ϕ) → (π − T, ϕ + π) is equivalent to x⁺ ↔ x⁻. As a result, (4.6) splits into odd and even parts: the even part consists of the modes d_n e^{inϕ} cos nT with d_n = d*₋ₙ (4.10b), and the odd part (4.10a) of the complementary modes. With this boundary condition, the field strength is obtained by taking a derivative of ψ. One can explicitly check that for a boosted electric charge the gauge field lies in (4.10a).
Charge and boundary conditions in the null limit

Close to future null infinity at T = 0, the fields admit a simple expansion. At null infinity, only the second term of the charge remains non-vanishing, so the charge takes a simpler form. To make contact with the results of [18], let us rewrite the boundary conditions near null infinity (ρ → ∞, T → 0) in Bondi coordinates (u, r, x^A). The coordinates are related by u = −ρT/2 and r = ρ/T. Expanding the asymptotic gauge condition D^a A^{(0)}_a = 0 gives (4.14), which can be solved by introducing a scalar α(T, ϕ).
We have to assign a fall-off for α around T = 0. Analyzing the dipole solutions, the appropriate condition is α(T, ϕ) = ᾱT + O(T²). We can now find A_u, A_r and A_ϕ at leading order.
A_ϕ = ᾱ(ϕ) + ∂_ϕ λ̄(ϕ) + O(r⁻¹), (4.16)
A_u = ψ̄(ϕ) + O(r⁻¹), (4.17)
A_r = −(u/r) ψ̄(ϕ) + O(r⁻²). (4.18)

These results must not be interpreted as null infinity boundary conditions. To account for electromagnetic radiation, there should exist one arbitrary function of both u and ϕ, corresponding to the single helicity state of the photon in three dimensions. Nonetheless, (4.16) provides a consistent boundary condition at the past of future null infinity, where the radiation has not yet started.
Discussion
In this paper, we considered asymptotic symmetries of Maxwell theory in three and higher dimensions at spatial infinity. We bypassed standard methods for computing surface charges by making the action principle well-defined, applying a gauge transformation to it, and interpreting the resulting conserved quantity as the charge. This work excludes magnetic charges to avoid technical difficulties, although they are discussed in various four-dimensional treatments.
We showed that regularity of the field strength tensor at the light cone implies a certain antipodal condition on de Sitter space in four and higher dimensions, which is familiar from the dS/CFT context. In addition, the charges depend on the scalar field ψ on de Sitter space in all dimensions. It would be interesting to know whether dS/CFT quantum considerations applied to ψ have implications for Maxwell theory.
In three dimensions, the solution space is more transparent, as the asymptotic de Sitter space is conformally flat. The light cone regularity argument does not work in three dimensions, although the condition is satisfied by the solution for moving electric charges. For this simple model, we could solve the gauge condition and translate the boundary conditions into Bondi coordinates, which are better suited for null infinity discussions.
As an interesting generalization, note that in three dimensions a non-trivial vorticity for the gauge field is possible. The gauge transformations considered here are regular, and so preserve vorticity. The addition of singular gauge transformations, which generate vorticity, might lead to an unexpected relation with the electric charges considered here, as is the case in four dimensions [13,32].
Finally, we compared our treatment with Hamiltonian formulations of the theory. The symplectic form and the on-shell action are finite in d > 4, and their divergences in d = 3, 4 cancel in inertial frames by virtue of the parity conditions. Nonetheless, cancellation on an arbitrary slice of the asymptotic de Sitter space remains elusive. | 6,585.6 | 2019-02-07T00:00:00.000 | [
"Physics"
] |
Characterization of Soil Bacteria with Potential to Degrade Benzoate and Antagonistic to Fungal and Bacterial Phytopathogens
The intensive development of agriculture leads to the depletion of land, a decrease in crop yields and reduced plant resistance to diseases. Large amounts of fertilizers and pesticides are currently used to address these problems. These chemicals can enter the soil and penetrate into groundwater and agricultural plants. Therefore, the primary task is to intensify agricultural production without causing additional damage to the environment. This problem can be partially solved using microorganisms with targeted properties; microorganisms that combine several useful traits are especially valuable. The aim of this work was to search for new microbial strains characterized by the ability to increase the bioavailability of nutrients, phytostimulation, antifungal activity and the decomposition of some xenobiotics. Several isolated strains of the genera Bacillus and Pseudomonas were characterized by high activity against fungal phytopathogens. One bacterial strain, identified as Priestia aryabhattai on the basis of its 16S rRNA gene sequence, was characterized by an unusual cellular morphology and development cycle, significantly different from all previously described bacteria of this genus. All isolated bacteria are capable of benzoate degradation, indicative of the ability to degrade aromatic compounds. The isolated strains were shown to be promising agents for biotechnology.
Introduction
The continuously increasing level of environmental pollution creates a need for the development of biotechnologies for environmental remediation. Soil is an extremely complex habitat, rich in microorganisms and characterized by a high diversity of microbial communities. The number of microorganisms reaches several billion cells per gram of soil, and the biodiversity reaches hundreds of thousands of species of bacteria and archaea [1].
The rhizosphere is the most microorganism-rich part of the soil due to the mutual positive influence of plants and microorganisms on each other.
Soil is not only a depository of biological diversity but also a kind of biochemical reactor, since microorganisms constantly carry out many enzymatic reactions/processes, including the degradation of xenobiotics and pollutants of natural origin [2][3][4][5]. The microbial community is the most important ecological indicator of "soil health", which reflects the state of the soil biocenosis and its response to various influences, including pollution by toxic substances. Soils of agricultural importance are subjected to colossal anthropogenic impact, which leads to a change in their mineral composition, a decrease in the content of soil organic matter, the accumulation of pesticides, the spread of pesticide-resistant phytopathogenic microorganisms and, ultimately, to the depletion of the species and numerical composition of agrobiocenoses [6].
Among the research fields of soil microbiology, it is necessary to note such areas as the assessment of the general state of microbial systems, seasonal fluctuations in the biomass of the microbiocenosis and the spread of microorganisms of various taxonomic groups in soils of different types [7], as well as the study of the effect of the anthropogenic load on the change in abundance and the diversity of soil microorganisms [8]. Special attention is paid to the study of rhizosphere microorganisms, including bacteria that stimulate plant growth (plant growth-promoting rhizosphere bacteria) [9,10]. In this case, not only microbial diversity is investigated, but the features of the interactions of microorganisms with the host plant are also revealed. The mechanisms of the positive effect of bacteria on plants can be roughly divided into two types: direct stimulation of plant growth through the synthesis of phytohormones and improvement of their mineral nutrition and indirect stimulation of plant growth by inhibiting the growth of soil phytopathogenic fungi and bacteria [11].
In recent decades, biological preparations based on a variety of microorganisms and their metabolites are increasingly used to protect plants from pathogens [12,13]. The active agents of biopreparations are components of natural biocenoses, which explains their safety for the environment. The positive aspects of using microbiological preparations are their environmental friendliness, including a decrease in the chemical load on agroecosystems, low cost, the possibility of rotating them with chemical plant protection agents and a fairly high biological effectiveness of action. The use of biological products increases the yield of agricultural crops by 10-40% (depending on the type of plants) and improves the quality of products and their nutritional and feed values. In the Russian market, there are a number of such well-proven microbiological formulations, including inoculants for legumes (Rizotorfin); biofungicides (Flavobacterin, Rizoplan, Gamair and Glyocladin) and growth stimulants (Mizorin, Agrophil and BisolbiMix) [14,15].
Currently, the urgent tasks for increasing the productivity of agricultural crops are the isolation of new effective strains of bacteria to protect and stimulate plant growth and the study of their influence on plant development in various conditions, including in areas contaminated with toxicants, including pesticides. The aim of this work was to search for new microbial strains that are applicable for agricultural production and/or are able to degrade any pollutants.
Bacterial Strains and Cultivation Conditions
Bacterial strains were isolated from chernozem soil of the Belgorod Region, Russia, collected in March 2020 (GPS 50.558336, 36.399521). These soils are used as agricultural soils with crop rotation. To isolate strains, the selected samples (1.0 g) were resuspended in a mineral medium of the following composition (g L⁻¹): Na₂… Soil suspensions were serially diluted to 10⁻⁸, and 100 µL of the 10⁻⁶-10⁻⁸ dilutions were plated on mineral agar medium with 0.2 g L⁻¹ sodium benzoate as the sole source of carbon and energy. Bacteria were grown at 28 °C. Colonies that grew were picked and transferred onto sterile Luria-Bertani (LB) agar [16].
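A plate count from such serial dilutions converts to a titer as sketched below; the colony count is hypothetical, and conversion to CFU per gram additionally requires the suspension volume, which is truncated in the source.

def cfu_per_ml(colonies, dilution, plated_volume_ml):
    """Titer of the undiluted soil suspension from one countable plate."""
    return colonies / (dilution * plated_volume_ml)

# Hypothetical example: 42 colonies on the 10^-7 plate, 100 uL plated.
print(f"{cfu_per_ml(42, 1e-7, 0.1):.1e} CFU/mL of suspension")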
Strains of phytopathogenic fungi Gaeumannomyces graminis var. tritici, Fusarium graminearum and Rhizoctonia solani (anastomosis group 5) were used as test cultures to study the antagonistic activity of the isolates. For the cultivation of fungi, Kanner's medium was used [17].
Determination of the Antagonistic Activity of Soil Strains
To determine antifungal activity, the soil strains were grown in liquid LB medium for 16-20 h. An aliquot of the 18-h culture (5 µL) was applied to the surface of Kanner's agar medium and incubated at 24 °C for 2 days. Then a segment of fungal mycelium (8-10 mm in diameter), preliminarily grown on Kanner's medium at room temperature for 7 days, was placed in the center of the Petri dish. Petri dishes were incubated at room temperature for 7-10 days, and the size of the mycelium growth-inhibition zone was then assessed, measured from the edge of the bacterial colony.
To determine antibacterial activity, 100 µL of the phytopathogenic test culture (density 1-3 × 10⁸ colony-forming units (CFU)/mL) was spread over the surface of LB agar medium. Then 10 µL of the isolate culture was applied on top, and the plates were incubated at 24 °C for 7 days. The diameter of the growth-inhibition zone was assessed over days 2-7, taking into account that the bacterial colonies themselves were no more than 10 mm in size.
16S rRNA Gene Sequencing and Phylogenetic Analysis
Genomic DNA was isolated from cells using a Fungal/Bacterial DNA Kit (Zymo Research, Irvine, CA, USA) according to the manufacturer's recommendations. The 16S rRNA gene was amplified by PCR using the universal prokaryotic 16S rRNA primers 27f (5′-AGAGTTTGATCCTGGCTCAG-3′) and 1492r (5′-TACGGYTACCTTGTTACGACTT-3′) [22]. The amplified DNA was purified using the Zymoclean Gel DNA Recovery Kit (Zymo Research, Irvine, CA, USA). Sequencing of the PCR fragments was performed by the Sanger method on an Applied Biosystems 3130 Genetic Analyzer automatic sequencer (Applied Biosystems, Foster City, CA, USA) [23].
Primary phylogenetic screening of the obtained sequences was performed using the BLAST program [24] against the EzBioCloud database [25]. For the phylogenetic analysis, 16S rRNA gene sequences were taken from the GenBank database [26]. The 16S rRNA gene sequences obtained for the strains were manually aligned with the sequences of the reference strains of the nearest relatives. A phylogenetic tree was constructed from partial 16S rRNA gene sequences by the neighbor-joining method [27] with a bootstrap test of 1000 replicates, performed in MEGA 6.0 [28].
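The paper builds its tree in MEGA 6.0; purely as an illustration of the neighbor-joining step, a minimal Biopython sketch is shown below. The input file name and the simple identity distance model are assumptions, and bootstrap testing is omitted.

from Bio import AlignIO
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

# Hypothetical pre-aligned 16S rRNA sequences (isolates + reference strains).
alignment = AlignIO.read("16S_aligned.fasta", "fasta")
distance_matrix = DistanceCalculator("identity").get_distance(alignment)
nj_tree = DistanceTreeConstructor().nj(distance_matrix)  # neighbor-joining tree
print(nj_tree)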
Microscopy
Light microscopy under phase contrast was carried out under a Nikon Eclipse Ci microscope (Nikon, Minato, Japan) equipped with a Jenoptic ProgResSpeedXTcore5 camera (Jenoptic, Jena, Germany).
For electron microscopy of thin sections, the cell biomass was prefixed with 1.5% (v/v) glutaraldehyde solution in 0.05 M cacodylate buffer (pH 7.2) at 4 °C for 1 h. After three washes with the same buffer, the material was additionally fixed with 1% OsO₄ in 0.05 M cacodylate buffer at 20 °C for 3 h. After dehydration, the material was embedded in Epon 812 epoxy resin. Ultrathin sections were made on an 8800 ULTROTOME III (LKB-Produkter, Stockholm, Sweden), mounted on copper grids covered with a Formvar film, contrasted with uranyl acetate (3% solution in 70% ethanol) for 30 min and then stained with lead citrate [29] at 20 °C for 4 to 5 min. The sections were examined in a JEM-1200EX electron microscope (JEOL, Tokyo, Japan) at an accelerating voltage of 80 kV.
Characterization of the Biochemical Properties of the Isolated Strains
To determine the spectrum of utilized substrates and the enzyme activities of the isolates, we used Analytical Profile Index (API) 20 E and CH 50 strips (bioMérieux, Marcy-l'Étoile, France) according to the manufacturer's instructions, as well as colored Hiss medium with carbohydrates, containing (g L⁻¹): peptone, 10.0; NaCl, 5.0; carbohydrate, 7.0; and 1.6% bromothymol blue solution, 1.0 mL.
Bacterial growth on glyphosate (0.5 g L −1 ) as the sole source of phosphorus was tested as described previously [30].
Statistical Data Processing
Mean values and standard deviations were calculated from the data of three independent experiments using Microsoft Excel 2007 [31].
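Equivalently, in code (the triplicate values below are hypothetical; the paper used Excel):

from statistics import mean, stdev

replicates_mm = [12.0, 14.5, 13.0]  # hypothetical inhibition zones from three experiments
print(f"{mean(replicates_mm):.1f} +/- {stdev(replicates_mm):.1f} mm")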
Antagonistic Activity
By direct plating on agar medium with sodium benzoate as the sole growth substrate, 30 bacterial strains differing in colony morphology were isolated.
Recently, in the development of biological products, preference has been given to strains that combine useful properties such as biocontrol of phytopathogens, stimulation of plant growth and the ability to biodegrade xenobiotics. For this reason, all isolates were tested for antagonistic activity against phytopathogenic fungi and bacteria, which are the most dangerous agents of plant diseases.
Initially, the ability of the soil strains to inhibit the growth of the fungi Gaeumannomyces graminis var. tritici, Fusarium graminearum and Rhizoctonia solani was studied. Only four strains (designated 3, 18, 27 and 28) exhibited pronounced antifungal activity. Of these isolates, strain 3 had the greatest activity against all of the phytopathogenic fungi used; the sizes of the growth-inhibition zones of the test microorganisms were not smaller than those of the control strain P. chlororaphis BS1393. Strains 18, 27 and 28 suppressed the growth of F. graminearum only (Table 1 and Figure 1).
Further, these strains (3, 18, 27 and 28) were studied for their ability to inhibit the growth of phytopathogenic bacteria belonging to the genera Pseudomonas, Pantoea, Ralstonia, Pectobacterium, Xanthomonas, Agrobacterium and Clavibacter. Strain 3 exhibited the highest antibacterial activity and effectively suppressed nine out of the 10 test bacteria (Table 2 and Figure 2). The diameter of the growth-inhibition zones was, as a rule, larger than that of the control strain P. chlororaphis BS1393. A significant difference between strains 3 and BS1393 was found in the suppression of the phytopathogenic bacteria Ralstonia sp. 7-1, X. campestris B-610, A. tumefaciens CBE21 and C. michiganensis Ac-1403.

Table 2. Antibacterial activity of the soil strains.
Diameter of the suppression zone, mm: entries for Pseudomonas chlororaphis 3 and the control Pseudomonas chlororaphis BS1393 against the test bacteria, including Pseudomonas savastanoi B-1546. The table shows the mean values ± standard deviations; the results were obtained from three independent experiments. "−": zone of suppression of the growth of the test bacteria is absent. T: type strain for this species.
Representatives of the genera Pseudomonas and Bacillus, among which are many endophytic and rhizospheric species, have been described [32,33]. These bacteria are characterized by the ability to directly or indirectly influence plant growth, including controlling the growth of phytopathogenic microorganisms [32]. It is known that some strains of fluorescent Pseudomonas bacteria can produce a wide range of antibiotically active metabolites that protect plants from various phytopathogenic microorganisms. An example of such antibiotics are the phenazines, colored heterocyclic nitrogen-containing compounds produced almost exclusively by bacteria in the late exponential and stationary growth phases. The ability of Pseudomonas bacteria to control the growth of phytopathogens by synthesizing phenazine or pyrrolnitrin antibiotics is the reason for the increased interest of researchers in this group of bacteria [34,35]. Strain 3, identified as P. chlororaphis (see Section 3.3, Strain identification), was characterized by a bright orange color of its colonies, which may indicate the production of 2-hydroxylated derivatives of phenazine-1-carboxylic acid (unpublished data). It was previously shown that 2-hydroxyphenazines exhibit strong bacteriostatic and fungistatic activity [36]; the synthesis of phenazine derivatives is probably the reason for the high antagonistic activity of this strain. Some strains of Pseudomonas have been shown to have significant antagonistic effects against many different organisms, such as rootworms, Fusarium oxysporum, F. graminearum, Gaeumannomyces graminis, Phytophthora capsici, Pythium ultimum and Sclerotinia spp. [34,35,37].
Strains 18 and 27 were characterized by lower antibacterial activity and suppressed seven out of 10 phytopathogenic bacteria. Strain 28 inhibited the growth of only A. tumefaciens GV3101(pMP90RK) and A. tumefaciens CBE21. None of the strains inhibited the growth of Pantoea agglomerans ATCC 27155 (formerly Erwinia herbicola ATCC 27155). Bacillus subtilis and B. amyloliquefaciens were among 133 bacterial strains from 11 composted aromatic plant wastes isolated for their ability to inhibit the growth of the mycelium of the soil phytopathogenic fungi Sclerotinia minor and Rhizoctonia solani [38]. Gram-positive bacteria of the genus Bacillus are also among the best-studied antagonist microorganisms for biological control in agriculture. B. subtilis ACB-83 produced two antibiotics, iturin and surfactin, and was successfully used to prevent and control citrus black spot caused by the fungus Phyllosticta citricarpa [39]. Bacillus velezensis 83 was isolated from the mango tree phyllosphere of orchards [9]. This strain can be listed as an example of the ability of bacilli to carry out a protective function, synthesizing compounds that can inhibit the growth of phytopathogens and acting as a producer of compounds that stimulate plant growth and development. "In vivo assays B. velezensis 83 was shown to be able to control anthracnose (Kent mangoes) as efficiently as chemical treatment with Captan 50 PH™ or Cupravit hidro™. The inoculation of B. velezensis 83 to the roots of maize seedlings yielded an increase of 12% in height and 45% of root biomass, as compared with uninoculated seedlings" [9]. Thus, bacteria of the genera Pseudomonas and Bacillus isolated from soil are promising agents for use as the active component of microbiological formulations.
Microscopic Studies
All 30 isolated cultures were microscopically examined for purity and morphological evaluation. Based on the results of the tests for antifungal activity and microscopy, five cultures were selected for further work. The morphometric analysis of phase-contrast images of the cells of the studied bacterial strains made it possible to estimate their sizes and characterize a number of unique morphological features. The vegetative cells of strains 18, 27 and 28 are large rods of regular shape with the following sizes (Figure 3a-c): strain 27: 3.5-5 × 0.7-0.9 µm, strain 28: 2.5-3.5 × 0.9-1.0 µm and strain 18: 3.5-5 × 0.8-1.0 µm. The vegetative cells of strain 3 have characteristic sizes of 1.2-1.8 × 0.7-0.9 µm (Figure 3d). In strain 25, the cells were characterized by an unusual shape and varied greatly in size (2.5-4 µm long and 0.8-1 µm wide), depending on the age of the culture. Strain 25 was selected for further work due to its unique morphological features. Cells of strain 3 have a cell wall structure typical of Gram-negative bacteria, with a characteristic outer membrane, a very thin layer of peptidoglycan (murein) and periplasm. In the nucleoid zone, inclusions of regular spherical shape are often present. In the cytoplasm, and often in the periplasmic space of the cells, multiple small electron-dense inclusions are detected, apparently polyphosphates (Figure 4).
Strain Identification
The nucleotide sequences of the 16S rRNA gene (~1400 base pairs) for strains 18, 27 and 28 were determined. The 16S rRNA gene sequences of the strains differed in single-nucleotide substitutions, and, according to the results of the phylogenetic analysis, strains 18, 27 and 28 belong to the family Bacillaceae, genus Bacillus. The strains showed the highest sequence similarity (99.4%) with the type strain of this species, Bacillus subtilis subsp. subtilis DSM 10T (Figure 5). The sequence analysis of the 16S rRNA gene of bacterial strain 25 showed that it belongs to the family Bacillaceae, genus Priestia, and has a high level of similarity (~99.6%) with Priestia aryabhattai B8 W22. The sequence of the gene encoding the 16S rRNA of P. aryabhattai 25 was deposited in the GenBank database under accession number EF114313 (Figure 6).
A sequence analysis of the 16S rRNA gene of bacterial strain 3 showed that it belongs to the family Pseudomonadaceae, genus Pseudomonas, species Pseudomonas chlororaphis, and has a high level of similarity (~99.6%) with the strain Pseudomonas chlororaphis DSM 50083T. The sequence of the gene encoding the 16S rRNA of P. chlororaphis 3 was deposited in the GenBank database under accession number MW659070 (Figure 7). Table 3. Range of substrates utilized by the soil strains.
[Table 3 lists the ranges of substrates utilized by Priestia aryabhattai 25 and Pseudomonas chlororaphis 3.]
Biochemical Characteristics of Soil Strains
The determination of the biochemical characteristics of the cultures using the API 20 E and API 50 CH tests revealed the following features (Tables 3 and 4). The biochemical profiles of strains 18, 27 and 28 were almost identical to each other and very close to that of the species Bacillus subtilis ATCC 6051T. All three strains were urease-negative. They showed the ability to liquefy gelatin and utilize citrates, as well as a wide range of carbon sources, namely glycerol, L-arabinose, D-ribose, D-xylose, D-glucose, D-fructose, D-mannose, inositol, D-mannitol, D-sorbitol, salicin, D-cellobiose, D-maltose, D-melibiose, D-sucrose, D-trehalose and inulin. The studied strains cleaved potassium 2-ketogluconate and potassium 5-ketogluconate and also showed pronounced hydrolytic activity with respect to starch and glycogen, and weak activity with respect to turanose.
P. aryabhattai strain 25, in contrast to strains 18, 27 and 28, showed the ability to utilize D-galactose and erythritol but not inositol and inulin. The strain had β-galactosidase activity and also possessed the ability to degrade N-acetylglucosamine, which indicates its potential antimicrobial activity. P. chlororaphis strain 3 utilized a wide range of organic substrates but did not hydrolyze starch, glycogen or xylitol and did not exhibit β-galactosidase or N-acetylglucosamine-degrading enzyme activities. This strain differed somewhat in its biochemical characteristics from the P. chlororaphis strain ATCC 9446, which also belongs to the PGPR group of bacteria and is capable of synthesizing phenazine antibiotics [40]. For example, the latter strain fermented inositol, but not arabinose, in contrast to the strain we isolated.
Degradation Potential of Soil Strains
The strains P. chlororaphis 3; B. subtilis 18, 27 and 28 and P. aryabhattai 25 mentioned above were isolated on a mineral medium containing sodium benzoate as the sole source of carbon and energy. Benzoic acid is a naturally occurring aromatic compound widely distributed in the environment. When these strains were cultivated on a mineral medium with benzoate, no yellow coloring of the medium was observed in any case. This indicates that the isolated cultures implement the ortho- rather than the meta-pathway of catechol cleavage [41]; catechol is the intermediate most frequently formed during the microbial degradation of benzoate. Microorganisms capable of degrading benzoate, as a rule, can also convert some other aromatic compounds.
It should be noted that only P. chlororaphis 3 grew on a medium containing glyphosate as the sole source of phosphorus. Glyphosate is the basis of numerous herbicides registered in more than 120 countries under various trademarks. Owing to its widespread application, residual amounts of glyphosate can persist for a long time in plants, soil and groundwater [42]. Therefore, the isolation of new degrader strains, as well as increasing their activity, is an urgent task for the development of modern biotechnologies for the remediation of contaminated ecosystems.
The ability of soil microorganisms to catalyze the degradation of various aromatic compounds can be considered a basic capacity of soil to self-purify and self-repair. As a rule, xenobiotic compounds enter the soil either as a result of an intense technogenic load or as a result of emergencies. On the other hand, aromatic compounds are widespread in nature; benzoate, for example, is a common plant component. "Lignin is a structurally complex, heterogeneous, partly branched polymer synthesized from three main phenylpropane monolignols: coniferyl, sinapyl, and p-coumaryl alcohols. Softwood lignins are mainly composed of guaiacyl units originating from coniferyl alcohol, whereas hardwood lignin has both guaiacyl units and syringyl units originating from sinapyl alcohol" [43]. Thus, the presence of such naturally occurring molecules can be seen as a constant opportunity for bacteria to train their ability to degrade toxic or recalcitrant compounds, which occurs during the coexistence of plants and bacteria.
Features of the Morphology of Priestia aryabhattai 25
A number of unique morphological features of the bacterium P. aryabhattai 25 were identified by the morphometric analysis of phase-contrast images of the cells (Figures 8 and 9). The bacterium is distinguished by an unusual pattern of morphological rearrangements of the cells during the development cycle of the culture. During exponential growth, strain 25 forms chains of cells of irregular shape, which, on the first day of growth on rich nutrient media, are filled with multiple refractory granules of unknown nature (Figure 8a). On the second day of growth, the process of sporulation begins. During this phase of growth, some cells within a cell chain begin to divide in a fragmented manner, forming clusters of multiple small and ultrasmall irregular cell forms (Figure 9). Some of the ultrasmall cells in the clusters form spores (Figure 9d,e) and some form spirally twisted cords in which septa are subsequently formed, followed by fragmentation into ultrasmall cell forms (Figure 9c,d). The formation of chaotically oriented cell walls in the cytoplasm can be seen on an ultrathin section of a growing strain 25 cell. There is uneven division by fragmentation, which is accompanied by the formation of small, up to 0.5 µm, and very small, around 0.3 µm, cells (Figure 9c,d).
The P. aryabhattai strain 25 is of great interest for understanding the processes of survival, the colonization of plant roots and the implementation of the interaction between the bacterium and the host plant. An analysis of the genomes of this bacterial species showed that most of them carry genes responsible for stimulating plant growth [44]. The peculiarity of strain 25 of splitting into multiple ultrasmall forms during its development cycle may contribute to the rapid and effective colonization of the rhizosphere of agricultural plants.
Recently, Muniraj et al. investigated the role of the bacterial isolate Bacillus aryabhattai TFG5 in the production of tyrosinase and its involvement in the production of humic substances from coir pith waste [45]. The authors highlighted the role of the enzymes of this strain, tyrosinase and laccase, in the formation of humic substances in the soil. It is known that soil productivity is determined by its organic matter content, and microorganisms play an important role in the restoration of soil fertility. Until recently, the main role in this process was assigned to fungi, which, owing to their ligninolytic enzyme complex, carry out the destruction of plant residues. The polymerization of phenolic molecules that originate from the degradation of lignin or from microbial synthesis may lead to humic substances that can incorporate a variety of organic and inorganic molecules and elements [46]. Thus, the presence of ligninolytic enzyme and tyrosinase activities in soil bacteria, including Bacillus aryabhattai, may indicate an active role of bacteria of this species in the formation of soil fertility.
Conclusions
As a result of this study, a number of bacterial strains were isolated from the soil according to their ability to degrade benzoate. Some of these strains were proven to be antagonists of fungal and bacterial phytopathogens. Particularly noteworthy is the P. chlororaphis 3 strain, which was characterized by a pronounced ability to control the growth of phytopathogens with an efficiency comparable to that of the best bacterial strains used in biological products. In addition, the microscopic studies allowed us to find the P. aryabhattai 25 strain, which was originally attributed to the genus Bacillus. However, in its 16S rRNA gene sequence as well as in its biochemical properties and morphophysiological characteristics, it differs significantly from all known representatives of that genus, which made it possible to identify it as Priestia aryabhattai strain 25. Thus, several new strains were isolated, which are of interest both as highly active agents controlling the growth of plant phytopathogens and as representatives of new groups of bacteria whose role in the environment still needs to be studied. | 7,866.6 | 2021-04-01T00:00:00.000 | [
"Biology",
"Agricultural And Food Sciences"
] |
Visual model‐predictive localization for computationally efficient autonomous racing of a 72‐g drone
Drone racing is becoming a popular e-sport all over the world, and beating the best human drone race pilots has quickly become a new major challenge for artificial intelligence and robotics. In this paper, we propose a novel sensor fusion method called visual model-predictive localization (VML). Within a small time window, VML approximates the error between the model-predicted position and the visual measurements as a linear function. Once the parameters of this function are estimated with the RANSAC algorithm, the error model can be used to compensate future predictions. In this way, outliers can be handled efficiently and the vision delay can be compensated as well. Theoretical analysis and simulation results show a clear advantage over Kalman filtering when dealing with the occasional large outliers and vision delays that occur in fast drone racing. Flight tests are performed on a tiny racing quadrotor named "Trashcan," equipped with a Jevois smart camera for a total of 72 g. An average speed of 2 m/s is achieved, with a maximum speed of 2.6 m/s. To the best of our knowledge, this flying platform is currently the smallest autonomous racing drone in the world, while still being one of the fastest autonomous racing drones.
be relatively larger for them. Moreover, a cheap, light-weight solution to drone racing would allow many people to use autonomous drones for training their racing skills. When the autonomous racing drone becomes small enough, people may even practice with such drones in their own home.
Autonomous drone racing is indebted to earlier work on agile flight.
Initially, quadrotors made agile maneuvers with the help of external motion capture systems (Mellinger & Kumar, 2011; Mellinger, Michael, & Kumar, 2012). The most impressive feats involved passing at high speed through gaps and circles. More recently, various researchers have focused on bringing the necessary state estimation for these maneuvers onboard. Loianno, Brunner, McGrath, and Kumar (2017) plan an optimal trajectory through a narrow gap with difficult angles while using visual-inertial odometry (VIO) for navigation. Their drone achieves an average maximum speed of 4.5 m/s. However, the position of the gap is known accurately a priori, so no gap detection module is included in their research. Falanga, Mueggler, Faessler, and Scaramuzza (2017) studied aggressively flying a drone through a gap while detecting the gap with fully onboard resources. They fuse the pose estimation from the detected gap with onboard sensors to estimate the state. In their experiment, the platform, with a forward-facing fish-eye camera, can fly through the gap at 3 m/s. Sanket, Singh, Ganguly, Fermüller, and Aloimonos (2018) develop a solution for a drone to fly through arbitrarily shaped gaps without building an explicit three-dimensional model of the scene, using only a monocular camera.
Drone racing represents a larger, even more challenging problem than performing short agile flight maneuvers. The reasons for this are that (a) all sensing and computing has to happen on board, (b) passing one gate is not enough. Drone races can contain complex trajectories through many gates, requiring good estimation and (optimal) control also on the longer term, and (c) depending on the race, gate positions can change, other obstacles than gates can be present, and the environment is much less controlled than an indoor motion tracking arena.
One category of strategies for autonomous drone racing relies on an accurate map of the track, where the gates have to be in the same place. One of the participants of the IROS 2017 autonomous drone race, the Robotics and Perception Group, reached gate 8 in 35 s. In their approach, waypoints were set using the predefined map and VIO was used for navigation. A depth sensor was used to align the track reference system with the odometry reference system. NASA's JPL reported that their drone can finish their race track in a similar amount of time as a professional pilot. In their research, a visual-inertial localization and mapping system is used for navigation, and an aggressive trajectory connecting waypoints is generated to finish the track (Morrell et al., 2018). Gao et al. (2019) propose a teach-and-repeat solution for drone racing. In the teaching phase, the surrounding environment is reconstructed and a flight corridor is found.
Then, the trajectory can be optimized within the corridor and be tracked during the repeating phase. In their research, VIO is employed for pose estimation and the speed can reach 3 m/s. However, this approach is sensitive to changing environments. When the position of the gate is changed, the drone has to learn the environment again.
The other category of strategies for autonomous drone racing employs coarser maps and is more oriented on gate detection. This category is more robust to displacements of gates. The winner of the IROS 2016 autonomous drone race, the Unmanned Systems Research Group, uses a stereo camera for detecting the gates (Jung, Cho, Lee, Lee, & Shim, 2018). When a gate is detected, a waypoint is placed in the center of the gate and a velocity command is generated to steer the drone to be aligned with the gate. The winner of the IROS 2017 autonomous drone race, the INAOE team, uses metric monocular SLAM for navigation. In their approach, relative waypoints are set and the detection of the gates is used to correct the drift of the drone (Moon et al., 2019). S. Li, Ozo, De Wagter, and de Croon (2018) combine gate detection with onboard IMU readings and a simplified drag model for navigation. With their approach, a Parrot Bebop 1 (420 g) can use its native onboard camera and processor to fly through 15 gates at 1.5 m/s along a narrow track in a basement full of exhibits. Kaufmann, Loquercio, et al. (2018) use a trained convolutional neural network (CNN) to map the input images to the desired waypoint and the desired speed to approach it. With the generated waypoint, a trajectory through the gate can be determined and executed, while VIO is used for navigation. The winner of the IROS 2018 autonomous drone race, the Robotics and Perception Group, finished the track at 2 m/s (Kaufmann, Gehrig, et al., 2018). During the flight, the relative position of the gates and a corresponding uncertainty measure are predicted by a CNN. With the estimated position of the gate, waypoints are generated, and a model-predictive controller (MPC) is used to control the drone to fly through the waypoints, while VIO is used for navigation.
FIGURE 1: The IROS autonomous drone race track over the years 2016-2018 (a-c). The rules have always been the same. Flight is to be fully autonomous, so there can be no human intervention. The drone that passes through most subsequent gates in the track wins the race. When the number of passed gates is the same, or the track is fully completed, the fastest drone wins the race. (a) IROS 2016 drone race track; (b) IROS 2017 drone race track; (c) IROS 2018 drone race track.
From the research mentioned above, it can be seen that many of the strategies for autonomous drone racing are based on generic, but computationally relatively expensive navigation methods such as VIO or SLAM. These methods require heavier and more expensive processors and sensors, which leads to heavier and more expensive drone platforms. Forgoing these methods could considerably reduce the required computational effort, but raises the challenge of still obtaining fast and robust flight.
In this paper, we present a solution to this challenge. In particular, we propose a visual model-predictive localization (VML) approach to autonomous drone racing. The approach does not use generic vision methods such as VIO and SLAM and is still robust to gate changes, while reaching speeds competitive with those of the currently fastest autonomous racing drones. The main idea is to rely as much as possible on a predictive model of the drone dynamics, while correcting the model and localizing the drone visually based on the detected gates and their supposed positions in the global map. To demonstrate the efficiency of our approach, we implement the proposed algorithms on a cheap, commercially available smart camera called "Jevois" and mount it on the "Trashcan" racing drone. The modified Trashcan weighs only 72 g and is able to fly the race track at high speed (up to 2.6 m/s). The vision-based navigation and the high-level controller run on the Jevois camera, while the low-level controller, provided by the open source Paparazzi autopilot (Gati, 2013; Hattenberger, Bronz, & Gorraz, 2014), runs on the Trashcan. To the best of our knowledge, the presented drone is the smallest and one of the fastest autonomous racing drones in the world. Figure 2 shows the weight and the speed of our drone in comparison to the drones of the winners of the IROS autonomous drone races.
Problem formulation
In this study, we develop a hardware and software system with which the flying platform can fly through a drone race track fully autonomously at high speed, using only onboard resources. The race track setup can be changed, and the system should adapt to such changes autonomously.
For visual navigation, instead of using SLAM or VIO, we directly use a computationally efficient vision algorithm that detects the racing gates and provides position information. However, implementing such a vision algorithm on low-grade vision and processing hardware results in low-frequency, noisy detections with occasional outliers. Thus, a filter should be employed to still provide high-frequency and accurate state estimation. In Section 3, we first briefly introduce the "Snake Gate Detection" method and a pose estimation method used to provide position measurements. Then, we propose and analyze the novel VML technique, which estimates the drone's states within a time window. It fuses the low-frequency onboard gate detections and high-frequency onboard sensor readings to estimate the position and the velocity of the drone. The control strategy to steer the drone through the racing track is also discussed. The simulation results in Section 4 compare the proposed filter with the Kalman filter in different scenarios with outliers and delay. In Section 5, we present flight experiments in which the drone flies through a race track with displaced gates, different altitudes and a moving gate. In Section 6, the generalization and the limitations of the proposed method are discussed. Section 7 concludes the article.
System overview
To illustrate the efficiency of our approach, we use a small racing drone called Trashcan (Figure 3). This racing drone is designed for FPV racing with the Betaflight flight controller software. In our case, to fly the Trashcan autonomously, we replaced Betaflight with the Paparazzi open source autopilot for its flexibility in adding custom code, its stable communication with the ground station for testing code, and its active maintenance by the research community. In this article, the Paparazzi software only provides the low-level controller. The main loop frequency is 2 kHz. We employ a basic complementary filter for attitude estimation, and the attitude control loop is a cascade control comprising a rate loop and an attitude loop. For each loop, a P-controller is used. The details of Trashcan's hardware can be found in Table 1.
FIGURE 2: The weight and the speed of the approach proposed in this article and of the winners of the IROS autonomous drone races. All weights are either directly from the articles or estimated from the online specs of the used processors.
For the high-level vision, flight planning and control tasks, we use a light-weight (17 g) smart camera called Jevois, which is equipped with a quad-core ARM Cortex A7 processor and a dual-core Mali-400 GPU. In our experiment, two threads run on the Jevois, one for vision detection and the other for filtering and control (Figure 4a). In our case, the frequency of detecting gates ranges from 10 to 30 Hz and the frequency of filtering and control is set to 512 Hz. The gate detection thread processes the images in sequence. When it detects a gate, it sends a signal telling the other thread that a gate has been detected. The control and filtering thread keeps predicting the states and calculating control commands at high frequency. It uses a novel filtering method, explained in Section 3, to estimate the state based on the IMU and the gate detections. In Figure 4b, the Gate detection and Pose estimation module first detects the gate and estimates the relative position between the drone and the gate. Next, the relative position is sent to the Gate assignment module to be transformed into a global position. With the global position measurements and the onboard AHRS readings, the proposed VML filter fuses them to obtain accurate position and velocity estimates. Then, the Flight plan and high-level controller calculates the desired attitude commands to steer the drone through the whole track. These attitude commands are sent to the drone via the MAVLink protocol. On the Trashcan drone, Paparazzi provides the low-level controller to stabilize the drone.
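As a rough illustration of this two-thread split, the following Python sketch mirrors the described architecture: a vision thread pushes timestamped gate detections into a queue at the camera rate, while the filter-and-control thread predicts at 512 Hz and consumes detections whenever they arrive. All object names (camera, detector, filter_, controller, autopilot) are hypothetical placeholders, not part of the actual Jevois or Paparazzi code.

```python
import threading
import time
import queue

detections = queue.Queue()  # gate detections handed from vision to control

def vision_thread(camera, detector):
    # Runs at the camera rate (10-30 Hz in this setup): detect the gate
    # and pass the timestamped measurement to the control thread.
    while True:
        frame = camera.grab()
        gate = detector.detect(frame)            # None if no gate is found
        if gate is not None:
            detections.put((time.time(), gate))

def control_thread(filter_, controller, autopilot, dt=1.0 / 512):
    # Runs at 512 Hz: predict continuously, correct when detections arrive.
    while True:
        while not detections.empty():
            t_meas, gate = detections.get()
            filter_.correct(t_meas, gate)        # VML correction (Section 3)
        state = filter_.predict(dt)              # AHRS-based prediction
        cmd = controller.update(state)           # desired attitude commands
        autopilot.send_mavlink(cmd)              # low-level control (Paparazzi)
        time.sleep(dt)

# Usage sketch (the objects must be supplied by the application):
# threading.Thread(target=vision_thread, args=(camera, detector), daemon=True).start()
# threading.Thread(target=control_thread, args=(filter_, controller, autopilot)).start()
```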
ROBUST VML AND CONTROL
State estimation is an essential part of drones' autonomous navigation. For outdoor flight, fusing a GPS signal with onboard inertial sensors is a common way to estimate the pose of the drone (Santana, Brandao, & Sarcinelli-Filho, 2015). However, for indoor flight, a GPS signal is no longer available. Thus, off-board cameras (Lupashin et al., 2014) or Ultra-Wide Band ranging beacons (Mueller, Hamer, & D'Andrea, 2015) are often used instead. However, the racing scenario has properties that make it challenging for a Kalman filter. Position measurements from gate detections are often subject to outliers, have non-Gaussian noise, and can arrive at a low frequency. This makes the typical Kalman filter approach unsuitable, because it is sensitive to outliers, is optimal only for Gaussian noise, and can converge slowly when few measurements arrive. In this section, we propose a VML technique that is robust to low-frequency measurements with significant numbers of outliers. Subsequently, we also present the control strategy for the autonomous drone race.
Gate assignment
In this article, we use the "Snake Gate Detection" and pose estimation technique as in S. Li et al. (2018). The basic idea of Snake Gate Detection is searching for continuous pixels with the target color to find the four corners of the gate. Subsequently, a perspective-n-point (PnP) problem is solved, using the positions of the four corners in the image plane, the camera's intrinsic parameters, and the attitude estimate, to obtain the relative position between the drone and the ith gate. Figure 5 shows this procedure, which is explained in more detail in S. Li et al. (2018). In most cases, when the light is even and the camera's auto exposure works properly, the gate in the image is continuous and Snake Gate Detection finds it. Although most detections are true positives, there is still a small chance that a false positive occurs. The negative effect is that outliers may appear, which poses a challenge to the filter and the controller.
FIGURE 5: The Snake Gate Detection method and pose estimation method (S. Li et al., 2018). (a) Snake Gate Detection. From one point on the gate P0, the method first searches up and down, then left and right, to find all four corners of the gate. (b) When the four points of the gate are found, the relative position between the drone and the gate is calculated from the points' positions, the camera's intrinsic parameters and the current attitude estimate.
Since for any race a coarse map of the gates is given a priori (cf. Figure 6), we assume that the position of each gate is fixed. Any error experienced in the observations is then assumed to be due to estimation drift on the part of the drone. Namely, without generic VIO, it is difficult to tell the difference between drone drift and gate displacements. If the displacements of the gates are moderate, this approach works: after passing a displaced gate, the drone will see the next gate and correct its position again. We only need a very rough map with the supposed global positions of the gates (Figure 6).
Gate displacements only become problematic if, after passing gate i, gate i+1 would not be visible when following the path from the expected position of gate i to gate i+1.
At the IROS drone race, gates are identical, so for our position to be estimated well, we need to assign a detection to the right gate. For this, we rely on our current estimated global position $\hat{\mathbf{x}}_k = [\hat{x}_k, \hat{y}_k]$. When a gate is detected, we go through all the gates on the map, using Equation (1) to calculate the predicted position $\bar{\mathbf{x}}_k^i = [\bar{x}_k^i, \bar{y}_k^i]$ that the drone would have if the detection came from gate i. Then, we calculate the distance between the predicted position $\bar{\mathbf{x}}_k^i$ and the estimated position $\hat{\mathbf{x}}_k$ at time $t_k$:

$$d_k^i = \lVert \bar{\mathbf{x}}_k^i - \hat{\mathbf{x}}_k \rVert \qquad (2)$$

After going through all the gates, the gate whose predicted position is closest to the estimated drone position is considered the detected gate. At time $t_k$, the position measurement is thus determined by

$$\mathbf{z}_k = \bar{\mathbf{x}}_k^{i^*}, \qquad i^* = \arg\min_i d_k^i \qquad (3)$$

The gate assignment technique (Figure 7) helps us obtain as much information on the drone's position as possible when a gate is detected. Namely, it can also use detections of gates other than the next gate, and it allows multiple gate detections at the same time to improve the estimation. Still, this procedure will always output a global coordinate for any detection. Hence, false positive or inaccurate detections can occur and have to be dealt with by the state estimation filter.
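A minimal sketch of this nearest-gate assignment, assuming the relative detection has already been rotated into the global frame; the function name and the NumPy formulation are ours, not taken from the original implementation:

```python
import numpy as np

def assign_gate(z_rel, x_hat, gate_map):
    """Assign a gate detection to the map gate whose implied drone position
    lies closest to the current estimate (Equations 1-3).

    z_rel    : (2,) relative position drone -> gate, global heading frame
    x_hat    : (2,) current estimated global drone position
    gate_map : (n, 2) global positions of the gates on the coarse map
    """
    x_pred = gate_map - z_rel                   # candidate drone position per gate
    d = np.linalg.norm(x_pred - x_hat, axis=1)  # distance to current estimate
    i_star = int(np.argmin(d))                  # closest candidate wins
    return i_star, x_pred[i_star]               # gate index, position measurement
```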
VML
The racing drone envisaged in this article has a forward-looking camera and an IMU. As explained in the previous section, the camera is used for localization in the environment, with the help of gate detections. Using a typical, cheap CMOS camera will result in relatively slow position updates from the gate detection, with occasional outliers. The IMU can provide high-frequency and quite accurate attitude estimation by means of an AHRS. The accelerations can also be used to predict the change in the translational velocities of the drone. In traditional inertial approaches, the accelerations would be integrated. However, for smaller drones the accelerometer readings become increasingly noisy, due to less possible damping of the autopilot. Integrating accelerometers is "acceleration stable," meaning that a bias in the accelerometers that is not accounted for can lead to unbounded velocity estimates. Another option is to use the accelerometers to measure the drag on the frame, which, assuming no wind, can be easily mapped to the drone's translational velocity (cf. S. Li et al., 2018). Such a setup is "velocity stable," meaning that an accelerometer offset or drag model error leads to a proportional, bounded velocity offset. On really small vehicles like the one we use in the experiments, the accelerometers are even too noisy for reliably measuring the drag. Hence, the proposed approach uses a prediction model that only relies on the attitude estimated by the AHRS, which is an indirect way of using the accelerometer. It uses the attitude and a constant-altitude assumption to predict the forward acceleration, and subsequently the velocity, of the drone. The model is corrected from time to time by means of the visual localization. Although the IMU is used for estimating attitude, it is not used as an inertial measurement for updating the translational velocities. This leads to the name of the method, VML, which is explained in detail in this subsection.
FIGURE 6: The gates are displaced. The drone uses the gates' positions on the map to navigate. After passing through the first gate, it uses the second gate's position on the map for navigation. After seeing the second gate, the position of the drone is corrected.
Prediction error model
As mentioned above, the attitude estimated by the AHRS is used in the prediction of the drone's velocity and position. However, due to the AHRS bias and model inaccuracy, the prediction diverges from the ground truth over time. Fortunately, we have visual gate detections to provide position information. This vision-based localization does not integrate the error over time, but it has a low frequency. Figure 8 illustrates the situation within a time window $[t_{k-q}, t_k]$. At the beginning of this time window, the differences between the ground truth and the prediction are $\Delta x_{k-q}$ and $\Delta v_{k-q}$.
FIGURE 8: The prediction can be done at high frequency with the Attitude and Heading Reference System (AHRS) estimates. The vision algorithm outputs low-frequency unbiased measurements. The prediction curve deviates more and more from the ground truth curve over time because of the AHRS bias and model inaccuracy.
Assuming that there is no wind, and knowing the attitude, we can predict the acceleration along the x and y axes. Figure 9 shows the forces the drone experiences. $\mathbf{T}$ denotes the acceleration caused by the thrust of the drone; together with the pitch angle $\theta$, it provides the forward acceleration. $\mathbf{D}$ denotes the acceleration caused by the drag, which is simplified as a linear function of the body velocity (Faessler, Franchi, & Scaramuzza, 2017):

$$\mathbf{D} = -c\,\mathbf{v}^B$$

where $c$ is the drag coefficient.
According to Newton's second law in the xoz plane (Equation 5), expanding the force balance gives the accelerations in terms of the thrust, gravity and the drag coefficient matrix. If the altitude is kept constant, as in the IROS drone race, the vertical thrust component balances gravity. Since the model in the y axis has the same form as in the x axis, the dynamic model of the quadrotor can be simplified as

$$\ddot{x}(t) = -g\tan\theta(t) - c_x\dot{x}(t), \qquad \ddot{y}(t) = g\tan\phi(t) - c_y\dot{y}(t) \qquad (8)$$

where $x(t)$ and $y(t)$ are the position of the drone, and $\phi$ is the roll angle of the drone. In Equation (8), the movement in the x and y axes is decoupled.
Thus, we only analyze the movement in the x axis; the result can be directly generalized to the y axis. The nominal model of the drone in the x axis can be written as

$$\dot{x}^n(t) = v^n(t), \qquad \dot{v}^n(t) = u(t) - c\,v^n(t), \qquad u(t) = -g\tan\theta(t) \qquad (9)$$

where the superscript $n$ denotes the nominal model. Similarly, with the assumption that the drag factor is accurate, the prediction model driven by the biased AHRS attitude can be written as

$$\dot{x}^p(t) = v^p(t), \qquad \dot{v}^p(t) = u(t) + b - c\,v^p(t) \qquad (10)$$

where $b$ is the input bias, which can be considered constant over a short time. Consider a time window $[t_{k-q}, t_k]$ and discretize with sampling time $T_s$. The predicted states of model (10) are

$$x^p_{j+1} = x^p_j + T_s\,v^p_j, \qquad v^p_{j+1} = v^p_j + T_s\,(u_j + b - c\,v^p_j)$$

Thus, the error between the predicted model and the nominal model, $\Delta x_j = x^n_j - x^p_j$ and $\Delta v_j = v^n_j - v^p_j$, evolves as

$$\Delta x_{j+1} = \Delta x_j + T_s\,\Delta v_j, \qquad \Delta v_{j+1} = (1 - c\,T_s)\,\Delta v_j - T_s\,b \qquad (13)$$

Since the sampling time $T_s$ is small ($T_s = 0.002$ s in our case), we can assume $1 - c\,T_s \approx 1$. Hence, Equation (13) can be approximated by

$$\Delta v_{j+1} \approx \Delta v_j - T_s\,b \qquad (17)$$

Expanding Equation (17) over the window, with $qT_s = t_k - t_{k-q}$ being the time span of the time window, and neglecting the $T_s^2$ terms, we obtain the prediction error at time $t_k$:

$$\Delta x_k \approx \Delta x_{k-q} + (t_k - t_{k-q})\,\Delta v_{k-q} \qquad (19)$$

Thus, within a time window, the state estimation problem can be transformed into a linear regression problem with model Equation (19), where $\beta = [\Delta x_{k-q}, \Delta v_{k-q}]^T$ are the parameters to be estimated.
FIGURE 9: Free body diagram of the drone. $\mathbf{v}(t)$ is the velocity of the drone. The superscript E denotes the north-east-down (NED) earth frame, while B denotes the body frame. $\mathbf{T}$ is the acceleration caused by the thrust and $\mathbf{D}$ is the acceleration caused by the drag, which is a linear function of the body velocity. $g$ is the gravity factor and $c$ is the drag factor, which is positive. $\theta(t)$ is the pitch angle of the drone. Note that since we use the NED frame, $\theta < 0$ when the drone pitches down.
In this simplified linear prediction error model, we use the constant-altitude assumption to approximate the thrust $T_z^B$ on the drone, which may introduce model inaccuracy. During flight, this assumption may be violated by aggressive maneuvers along the z axis. However, if the maneuvers along the z axis are not very aggressive and the time window is small (in our case less than 2 s), the inaccuracy of the prediction error model can be kept in an acceptable range. In the simulation and the real-world experiments shown later, we show that even though the altitude of the drone changes by 1 m within 2 s, the proposed filter still retains very high accuracy under this assumption. Another way to improve the model accuracy would be to estimate the thrust by fusing the accelerometer readings and rotor speeds, which requires a model of the rotors. It should also be noted that we neglect the $T_s^2$ term in Equation (18) to obtain a linear model. To increase the model accuracy, the prediction error model could be made quadratic; in our case, since the time window is small, the linear model is accurate enough.
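To make the prediction side concrete, here is a minimal sketch of one x-axis prediction step under the constant-altitude, linear-drag assumptions of Equation (9). The sign convention (NED, θ < 0 when pitching down) follows the figure caption; the drag coefficient value would have to be identified for the actual platform, and the function name is ours:

```python
import numpy as np

G = 9.81  # gravity, m/s^2

def predict_x(x, v, theta, c_x, dt):
    """One Euler step of the x-axis model (Eq. 9): the AHRS pitch angle gives
    the forward acceleration, drag is linear in velocity (NED frame)."""
    a = -G * np.tan(theta) - c_x * v   # theta < 0 (pitch down) accelerates forward
    return x + dt * v, v + dt * a
```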
Parameter estimation method
The classic way to solve the linear regression problem based on Equation (19) is to use the least squares method (LS Method) with all the data within the time window and estimate the parameters $\beta$:

$$\hat{\beta} = (A^T A)^{-1} A^T \mathbf{e} \qquad (20)$$

where each row of $A$ is $[1,\; t_j - t_{k-q}]$ for a vision measurement at time $t_j$ in the window, and $\mathbf{e}$ stacks the corresponding observed prediction errors. The LS Method in Equation (20) gives an optimal unbiased estimate. However, if there are outliers in the time window $[t_{k-q}, t_k]$, they are weighted equally in the estimation process, and these outliers can significantly affect the estimation result.
Thus, to exclude the outliers, we employ random sample consensus (RANSAC) to improve the performance (Fischler & Bolles, 1981) (Figure 10). In each iteration $i$, a random subset of the data in the window is used to estimate $\hat{\beta}_i$ with Equation (20). Then, $\hat{\beta}_i$ is used to calculate the total prediction error $\varepsilon_i$ over all the data in the time window:

$$\varepsilon_i = \sum_j \min(\epsilon_j, \sigma_{th}) \qquad (21)$$

that is, if a residual $\epsilon_j$ is larger than a threshold $\sigma_{th}$, the threshold is counted as the error. After all the iterations, the parameters $\hat{\beta}_i$ with the least total prediction error are selected as the estimate for this time window:

$$\hat{\beta} = \arg\min_{\hat{\beta}_i} \varepsilon_i \qquad (22)$$

With this Basic RANSAC Fitting (BRF) method, the influence of the outliers is reduced, but there is no mechanism to handle over-fitting, which occurs, for example, when only a few detections are present in the time window. To counter this, a penalty term with a penalty factor/prior matrix $P$ is added to the loss function:

$$J(\hat{\beta}) = \lVert A\hat{\beta} - \mathbf{e} \rVert^2 + \hat{\beta}^T P\,\hat{\beta}$$

To minimize the loss function, we take the derivative of $J(\hat{\beta})$ and set it to 0, which yields the estimated parameters

$$\hat{\beta} = (A^T A + P)^{-1} A^T \mathbf{e} \qquad (26)$$

We call the use of Equation (26) within the RANSAC iterations Prior RANSAC Fitting (PRF). To conclude, in this part we propose three methods for estimating the parameters $\beta$. The first one is the LS Method, which considers all the data in a time window equally. The second method is BRF, which has a mechanism to exclude the outliers.
The third one is PRF, which not only excludes the outliers but also takes prior knowledge into account to avoid over-fitting. In the next section, we discuss and compare these three methods in simulation to see which one is the most suitable for our drone race scenario.
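The following sketch condenses the PRF estimator described above: random subsets are fitted with the prior-regularized least squares of Equation (26), residuals are capped at σ_th as in Equation (21), and the best candidate is kept. Passing a zero matrix for P reduces it to BRF. Function and parameter names are ours; the default values are illustrative, except for the five iterations mentioned in Section 4:

```python
import numpy as np

def prf_fit(t, e, t0, P, n_iter=5, sigma_th=0.5, n_sample=3, rng=None):
    """Prior RANSAC Fitting of the linear error model e_j ~ dx + (t_j - t0)*dv
    (Eq. 19). t, e: vision timestamps and observed prediction errors in the
    window; P: 2x2 penalty/prior matrix. Returns beta = [dx, dv]."""
    rng = rng if rng is not None else np.random.default_rng()
    A = np.column_stack([np.ones_like(t), t - t0])
    best_beta, best_err = np.zeros(2), np.inf
    for _ in range(n_iter):
        idx = rng.choice(len(t), size=min(n_sample, len(t)), replace=False)
        Ai, ei = A[idx], e[idx]
        beta = np.linalg.solve(Ai.T @ Ai + P, Ai.T @ ei)    # Eq. (26)
        res = np.minimum(np.abs(A @ beta - e), sigma_th)    # capped residuals, Eq. (21)
        if res.sum() < best_err:
            best_err, best_beta = res.sum(), beta
    return best_beta
```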
Prediction compensation
After the error model (Equation 19) has been estimated in time window $k$, it can be used to compensate the prediction:

$$\hat{x}_k = x^p_k + \Delta\hat{x}_{k-q} + (t_k - t_{k-q})\,\Delta\hat{v}_{k-q}, \qquad \hat{v}_k = v^p_k + \Delta\hat{v}_{k-q} \qquad (27)$$

Also, at each prediction step, the length $\Delta T = t_k - t_{k-q}$ of the time window is checked, since the simplified model (19) is based on the assumption that the time span of the window is small. If $\Delta T$ is larger than the allowed maximum window size $\Delta T_{max}$, the filter deletes the oldest elements until $\Delta T < \Delta T_{max}$. The pseudo-code of the proposed VML with the LS Method can be found in Algorithms 3 and 4.
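A small sketch of the compensation and window-trimming steps, under the same notation; the buffer layout (a list of (timestamp, error) pairs) is an assumption of ours:

```python
def compensate(x_pred, v_pred, t, t0, beta):
    """Correct the model prediction with the fitted error model (Eq. 27)."""
    dx, dv = beta
    return x_pred + dx + (t - t0) * dv, v_pred + dv

def trim_window(buffer, t_now, dT_max=2.0):
    """Drop the oldest (timestamp, error) samples until the window span
    is below dT_max, as required by the small-window assumption."""
    while buffer and t_now - buffer[0][0] > dT_max:
        buffer.pop(0)
```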
Comparison with Kalman filter
When it comes to state estimation or filtering techniques, it is inevitable to mention the Kalman filter, which is the most commonly used state estimation method. The basic idea of the EKF is that at time $t_{k-1}$, it first predicts the states at time $t_k$ together with the error covariance $P_{k|k-1}$, to obtain prior knowledge of the states at $t_k$.
When an observation arrives, the Kalman filter uses an optimal gain $K_k$, which combines the prior error covariance $P_{k|k-1}$ and the observation covariance $R_k$, to compensate the prediction, which, as a result, leads to the minimum error covariance $P_k$.
According to Diderrich (1985), a Kalman filter is a least squares estimator made into a recursive process by combining prior data with incoming measurement data. The most obvious difference between the Kalman filter and the proposed VML is that VML is not a recursive method. It does not estimate the states at $t_k$ based only on the states of the last step, $\hat{\mathbf{x}}_{k-1}$; it estimates the states considering the previous predictions and observations in a whole time window.
In the VML approach, we use the least squares method within a time window, which looks similar to the classical least squares estimator.
FIGURE 10: In the ith RANSAC iteration, the data in the time window $t \in [t_1, t_9]$ are randomly sampled to estimate the error model over $[t_{k-q}, t_k]$.
The first one is that in the proposed VML, the prediction information is fused to the VML. Secondly and most importantly, we estimate the prediction error model β instead of estimating all the states in the time window as in the least square method. Thus, the VML has its advantages of handling outliers and delay by its time window mechanism and it also has the advantage of computational efficiency to the Least Square Estimation. In Section 4, we will introduce Kalman filter's different variants for outliers and delay and compare them with VML in estimation accuracy and computation load in detail.
Flight plan and high-level control
With the state estimation method explained above, to fly a racing track we employ a flight plan module, which sets the waypoints that guide the drone through the track, and a two-loop cascade P-controller, which executes the reference trajectory (Figure 11).
Usually, the waypoint is just behind the gate. When the distance between the drone and the waypoint is less than a threshold $D_{turn}$, the gate can no longer be detected by our method, and we set the heading of the drone toward the next waypoint. This way, the drone starts turning toward the next gate before arriving at the waypoint. When the distance between the drone and the waypoint is within another threshold $D_{switch\_wp}$, the active waypoint switches to the next point. With this strategy, the drone does not stop at each waypoint but already starts accelerating toward the next one, which helps to save time. The workflow of the flight plan module can be found in Algorithm 5; a sketch of the switching logic is given below.
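The switching logic sketched here, with hypothetical names and a circular waypoint list, is our reading of this paragraph and of Algorithm 5, not the original code:

```python
import numpy as np

def update_waypoint(pos, waypoints, wp_idx, heading_ref, D_turn, D_switch_wp):
    """Start yawing toward the next gate within D_turn; switch the active
    waypoint within D_switch_wp (assumes D_switch_wp < D_turn)."""
    d = np.linalg.norm(waypoints[wp_idx] - pos)
    if d < D_turn:
        nxt = waypoints[(wp_idx + 1) % len(waypoints)]
        heading_ref = np.arctan2(nxt[1] - pos[1], nxt[0] - pos[0])
    if d < D_switch_wp:
        wp_idx = (wp_idx + 1) % len(waypoints)
    return wp_idx, heading_ref
```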
We employ a two-loop cascade P-controller (Equation 30) to make the drone reach the waypoints and follow the heading reference generated by the flight plan module: the outer loop maps the position error to a velocity setpoint, and the inner loop maps the velocity error, rotated into the heading frame by the yaw rotation matrix with entries $\cos\psi$, $\sin\psi$, $-\sin\psi$, $\cos\psi$, to attitude setpoints. The altitude and attitude controllers are provided by the Paparazzi autopilot and are both two-loop cascade controllers.
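A plausible form of such a two-loop cascade P-controller, sketched under our own sign conventions (NED, pitch down for forward acceleration) and with assumed gain names; it should be read as an illustration of the structure, not as the paper's exact Equation (30):

```python
import numpy as np

def cascade_p(pos_err, vel, psi, kp_pos, kp_vel):
    """Two-loop cascade P-control: position error -> velocity setpoint ->
    attitude setpoints, with the velocity error in the heading frame."""
    v_ref = kp_pos * pos_err                        # outer position loop
    R = np.array([[np.cos(psi),  np.sin(psi)],
                  [-np.sin(psi), np.cos(psi)]])     # earth -> heading frame
    ev = R @ (v_ref - vel)                          # velocity error, body axes
    theta_ref = -kp_vel * ev[0]                     # pitch down to go forward (NED)
    phi_ref = kp_vel * ev[1]                        # roll toward lateral error
    return theta_ref, phi_ref
```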
Simulation setup
To verify the performance of VML in the drone race scenario, we first test it in simulation, using an EKF as a benchmark to compare both filters and determine which is more suitable at different operating points. We first introduce the drone's dynamics model used in the simulation (Equation 31),
where $(x, y, z)$ is the position of the drone in the Earth frame.
The remaining terms account for the acceleration caused by other aerodynamic effects, and the last four equations are a simplified first-order model of the attitude dynamics. $k_r$ is a positive constant that adjusts the speed of the drone's yawing to the setpoint. In the real-world experiments and the simulation, the AHRS reading is modelled as the true attitude plus biases: $\phi_b$ and $\theta_b$ are the AHRS biases on $\phi$ and $\theta$, and $B_N$ and $B_E$ are the north and east biases caused by the accelerometer bias, which can be considered constant over a short time. From real-world experiments, they are less than 3°. The vision measurements are generated at a detection frequency $f_v$: we randomly select $n_v$ points of the simulated trajectory to be vision points and generate a detection measurement for each by adding detection noise with $\sigma_* = 0.1$ m (Equation 34). Among these $n_v$ vision points, we also randomly select a few points to be outlier points, which follow the same model as Equation (34) but with a larger noise magnitude. The resulting filter behavior is shown in Figure 13.
FIGURE 11: The flight plan module generates the waypoints for the drone to fly the track. When the distance between the drone and the current waypoint $d < D_{turn}$, the drone starts to turn to the next waypoint while still approaching the current waypoint. When $d < D_{switch\_wp}$, the drone switches the current waypoint to the next one. The cascade P-controller is used for executing the reference trajectory from the flight plan module. The attitude and rate controllers are provided by the Paparazzi autopilot.
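A small sketch of how such vision measurements can be generated from a simulated ground-truth trajectory: Gaussian detection noise with σ = 0.1 m as stated for Equation (34), plus a few injected outliers. The outlier magnitude sigma_out and the uniform-sampling assumption are ours:

```python
import numpy as np

def simulate_detections(traj_t, traj_x, f_v, n_outliers,
                        sigma=0.1, sigma_out=1.5, rng=None):
    """Down-sample a ground-truth trajectory to the detection rate f_v,
    add Gaussian noise (Eq. 34) and inject a few large outliers.
    Assumes traj_t is uniformly sampled."""
    rng = rng if rng is not None else np.random.default_rng()
    step = max(1, int(round(1.0 / (f_v * (traj_t[1] - traj_t[0])))))
    idx = np.arange(0, len(traj_t), step)              # vision sample instants
    z = traj_x[idx] + rng.normal(0.0, sigma, len(idx))
    out = rng.choice(len(idx), size=min(n_outliers, len(idx)), replace=False)
    z[out] += rng.normal(0.0, sigma_out, len(out))     # outlier injection
    return traj_t[idx], z
```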
When there are no outliers, all three filters converge to the ground truth value. However, the EKF has a longer startup period, and BRF overfits after turning, leading to unlikely high velocity offsets (the peaks in Figure 13b). This is because, after the turn, the RANSAC buffer is empty. When the first few detections come into the buffer, RANSAC has a larger chance of estimating inaccurate parameters. In PRF, however, we add a prior matrix $P = \begin{bmatrix} 0 & 0 \\ 0 & 0.3 \end{bmatrix}$ to limit the value of $\Delta v$, and the number of peaks in the velocity estimate is significantly decreased. At the same time, the velocity estimate is closer to the ground truth value.
To evaluate the estimation accuracy of each filter, we first introduce a variable, the average estimation error $\gamma$, as an index of the filter's performance:

$$\gamma = \frac{1}{N}\sum_{i=1}^{N}\sqrt{(\hat{x}_i - x_i)^2 + (\hat{y}_i - y_i)^2} \qquad (35)$$

where $N$ is the number of sample points on the whole trajectory.
$\hat{x}$ and $\hat{y}$ are the states estimated by the filter; $x$ and $y$ are the ground truth positions generated by the simulation. $\gamma$ captures how much the estimated states deviate from the ground truth states; a smaller $\gamma$ indicates a better filtering result.
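In code, the metric is essentially a one-liner; a NumPy version for reference (the function name is ours):

```python
import numpy as np

def average_estimation_error(x_hat, y_hat, x_gt, y_gt):
    """Average estimation error gamma (Eq. 35): mean Euclidean distance
    between estimated and ground-truth positions over the trajectory."""
    return float(np.mean(np.hypot(x_hat - x_gt, y_hat - y_gt)))
```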
We use the running time to evaluate the computational efficiency of each filter. It should be noted that since we need to store all the simulation data for visualization and MATLAB has no mechanism for passing pointers, data access can take considerable computation time.
Thus, we only count the running time of the core parts of the filters, which are the prediction and the correction.
The results are shown in Figure 14. In the simulation, the time window in BRF and PRF is set to 1 s and five iterations are performed in the RANSAC procedure. For each frequency, the filters are each run 10 times and their average $\gamma$ and running time are calculated. It can be seen in Figure 14a that when the detection frequency is larger than 30 Hz, BRF and PRF perform close to the EKF. In terms of calculation time, the EKF is heavier than BRF and PRF when the frequency is lower than 40 Hz. This is because, during the prediction phase, the EKF not only predicts the states but also calculates the Jacobian matrix and the prior error covariance $P_{k|k-1}$ at high frequency, while BRF and PRF only do the state prediction. However, when a detection arrives, the EKF does the correction with a few matrix operations, while BRF and PRF run the RANSAC procedure, which is much heavier. This explains why the EKF's computational load is only slightly affected by the detection frequency, while BRF and PRF's computational load increases significantly at higher detection frequencies.
| Comparison between EKF, BRF, and PRF with outliers
When outliers appear, the regular EKF can be affected significantly.
Thus, outlier rejection strategies are commonly used within an EKF to increase its robustness. A widely used method takes the Mahalanobis distance between an observation and its predicted mean as an index to decide whether the observation is an outlier (Chang, 2014; Z. Li, Chang, Gao, Wang, & Hernandez, 2016). Thus, in this section, we implement an EKF with such an outlier rejection mechanism (EKF-OR) for comparison.

Two examples of the filters rejecting outliers are shown in Figure 15. The first shows a common case in which all three filters reject the outliers successfully. However, in some special cases, EKF-OR is vulnerable to outliers. In Figure 15b, for instance, after a long period of pure prediction, the error covariance P_{k|k−1} becomes large. Once EKF-OR meets an outlier, it has a high chance of jumping to it. The subsequent true positive detections are then treated as outliers, and EKF-OR starts diverging. At the same time, BRF and PRF are more robust to the outliers. The essential reason is that EKF-OR depends on its current state estimate (mean and error covariance) to identify outliers. When the current state estimate is not accurate enough, as after the long prediction-only stretch in our case, EKF-OR loses its ability to identify outliers; in other words, it tends to trust whatever it meets. Worse, after jumping to the outlier, its error covariance becomes smaller, which in turn leads to the rejection of the subsequent true positive detections. For BRF and PRF, in contrast, outliers are determined over a time window that includes history. Thus, after a long period of prediction, when BRF and PRF meet an outlier, they judge it against the detections of the recent past; if there is no other detection in the time window, they wait for enough detections to make a decision.

FIGURE 14: The simulation result of the filters. When the detection frequencies are below 20 Hz, the EKF performs better than Basic RANSAC Fitting (BRF) and Prior RANSAC Fitting (PRF); when the detection frequencies are higher than 20 Hz, BRF and PRF start performing better than the EKF. In terms of computation time, the EKF is only slightly affected by the detection frequency, while the computation load of BRF and PRF increases significantly at higher detection frequencies.

FIGURE 15: In most cases, EKF with outlier rejection (EKF-OR), BRF, and PRF can reject the outliers, but after a long period of pure prediction EKF-OR is very vulnerable to outliers while BRF and PRF still perform well. (a) When outliers appear, EKF-OR, BRF, and PRF reject them. (b) After a long period of pure prediction, EKF-OR has a large error covariance; once it meets an outlier, it has a high chance of jumping to it. As a consequence, the later true positive detections fall beyond the threshold χ_α and EKF-OR treats them as outliers. [Color figure can be viewed at wileyonlinelibrary.com]
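As a concrete illustration of the gating idea (a generic sketch, not the paper's exact implementation), the following flags a measurement as an outlier when its squared Mahalanobis distance to the predicted measurement exceeds a chi-square threshold; the gating level alpha and the linear measurement model are assumptions.

```python
import numpy as np
from scipy.stats import chi2

def is_outlier(z, z_pred, H, P_pred, R, alpha=0.01):
    """Chi-square gating on the innovation, as in common EKF-OR schemes.

    z       -- measurement vector
    z_pred  -- predicted measurement H @ x_pred
    H       -- measurement Jacobian
    P_pred  -- prior error covariance P_{k|k-1}
    R       -- measurement noise covariance
    """
    innov = z - z_pred
    S = H @ P_pred @ H.T + R                  # innovation covariance
    d2 = innov @ np.linalg.solve(S, innov)    # squared Mahalanobis distance
    threshold = chi2.ppf(1.0 - alpha, df=len(z))
    return d2 > threshold
```

Note how a large P_pred inflates the innovation covariance S and shrinks the Mahalanobis distance of any measurement, which is exactly the failure mode described above for long prediction-only stretches.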
With this mechanism, BRF and PRF are more robust than EKF-OR, especially when EKF-OR's estimate is inaccurate. Figure 16 shows the estimation error and the calculation time of the three filters. As stated before, although EKF-OR has a mechanism for dealing with outliers, it can still diverge due to outliers in some special cases. Thus, in Figure 16a, EKF-OR has a large estimation error at both low and high detection frequencies.
In terms of calculation time, there is no significant difference from the non-outlier case.
| Filtering result with delayed detection
Image processing and vision algorithms can be very computationally expensive to run onboard a drone, which can lead to significant delay (van Horssen, van Hooijdonk, Antunes, & Heemels, 2019; Weiss et al., 2012). Many visual navigation approaches ignore this delay and directly fuse the visual measurements with the onboard sensors, which sacrifices the accuracy of the state estimation. A commonly used approach for compensating this vision delay is a modified Kalman filter proposed by Weiss et al. (2012). The main idea of this approach, called the EKF delay handler (EKF-DH), is to keep a buffer storing all sensor measurements within a certain time. At time t_k, a vision measurement corresponding to the states at an earlier time t_s arrives; it is used to correct the states at time t_s, and the states are then propagated again from t_s to t_k (Figure 17a). Although updating the covariance matrix is not needed according to Weiss et al. (2012), this approach still requires updating the history of states whenever a measurement arrives, which can be computationally expensive, especially when the delay and the measurement frequency grow. In our case, since we need the error covariance for outlier rejection, it is also necessary to update the history of error covariance matrices, which increases the computation load further. For VML, in contrast, when the measurement arrives, it is first pushed into the buffer. Then, the error model is estimated within the buffer/time window. With the estimated parameter β, the prediction at t_k can be corrected directly, without the need to correct all the states between t_s and t_k (Figure 17b). Thus, the computational burden does not increase when delay exists. Figure 18 shows an example of the simulation result of the three filters when both outliers and delay exist. In this simulation, the visual delay is set to 0.1 s. It can be seen that although there is a lag between the vision measurements and the ground truth, all the filters estimate accurate states. However, EKF-DH requires much more computational effort. Figure 19 shows the estimation error and the computation time of the three filters.
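A minimal sketch of the VML delay-handling idea, reusing the fit_error_model_with_prior helper from the earlier sketch; the buffer bookkeeping and the scalar state are illustrative assumptions.

```python
from collections import deque
import numpy as np

class VMLDelayHandler:
    """Buffer delayed vision fixes; correct only the current prediction.

    Each vision fix made at time t_s (but arriving at t_k > t_s) is
    paired with the *predicted* state at t_s. The error model beta
    fitted over the window then corrects the prediction at t_k
    directly, with no replay of the states between t_s and t_k.
    """

    def __init__(self, window=1.0):
        self.window = window
        self.buf = deque()  # entries: (t_s, prediction_at_t_s, measurement)

    def add_measurement(self, t_s, pred_at_t_s, z):
        self.buf.append((t_s, pred_at_t_s, z))
        while self.buf and self.buf[0][0] < t_s - self.window:
            self.buf.popleft()

    def corrected_state(self, t_k, pred_at_t_k):
        if len(self.buf) < 2:
            return pred_at_t_k  # not enough data to fit the error model
        t = np.array([b[0] for b in self.buf])
        e = np.array([b[2] - b[1] for b in self.buf])  # prediction error
        # Fit e ~ dp + dv * (t - t[0]); the prior shrinks dv
        # (see the earlier prior-regularized fitting sketch).
        dp, dv = fit_error_model_with_prior(t - t[0], e,
                                            np.diag([0.0, 0.3]))
        return pred_at_t_k + dp + dv * (t_k - t[0])
```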
In Figure 19, we can see that the computation load of EKF-DH increases significantly due to its mechanism for handling delay. Unsurprisingly, EKF-DH is still sensitive to some outliers, while BRF and PRF can handle them.

Outliers also occur in the timing measurements of each step of the onboard process, which can be caused by system interrupts. Thus, we first exclude these outliers with the Interquartile Range Method (Upton & Cook, 1996) and then provide the statistics for each component. The results can be found in Figure 22 and Table 3.
From Table 3, it can be seen that vision takes much more time than the other three parts. Please note, though, that the snake gate computer vision detection algorithm is already a very efficient gate detection algorithm; in fact, it has tunable parameters, such as the number of samples. An advantage of the approach presented in this article is that we do not employ VIO or SLAM, which would take substantially more processing. However, as the snake gate detection provides relatively low-frequency and noisy position measurements, VML needs to run at high frequency and cope with the detection noise to still provide accurate estimates for the controller.

FIGURE 17: Sketches of the EKF delay handler (EKF-DH) and of VML's delay-handling mechanism. (a) The EKF-DH proposed in Weiss et al. (2012): when the measurement arrives at t_k, EKF-DH first corrects the corresponding states at t_s and then updates the states up to t_k. (b) VML's mechanism: when the measurement arrives, it is pushed into the buffer with the corresponding states; the error model is then estimated by the RANSAC approach, and the estimated model is used to compensate the prediction at t_k. There is no need to update all the states between t_s and t_k.

| Flying experiment without gate displacement

The real gate positions and their positions on the map for this experiment are listed in Table 4, where x_g and y_g are the positions of the gates in the real world and x̃_g and ỹ_g are their positions on the map; in this situation, they are the same. The aim of this experiment is to test the filter's performance with sufficient detections. Thus, the velocity is set to 1.5 m/s to give the drone more time to detect the gate. In Figure 23, the blue curve is the ground-truth data from the OptiTrack motion capture system and the yellow curves are the filtering results. From the flight results, it can be seen that the filtered results are smooth and coincide well with the ground-truth position. During periods when detections are not available, the state prediction is still accurate enough to navigate the drone to the next gate; when the drone detects the next gate, the filter corrects the prediction. In this situation, the divergence of the states is caused only by prediction drift. It should also be noted that when outliers appear at 84 s, the filter is not affected by them because of the RANSAC technique in the filter.
| Flying experiment with gate displacement
In this section, we test our strategy under a difficult condition in which the drone flies faster, the gates are displaced, and the detection frequency is low. The real gate positions and their positions on the map are listed in Table 5. The pose estimation is based on the gates' positions on the map; when the gates are displaced, the drone still assumes they are at the positions the map indicates. After the turn, when the drone sees the next gate, which is displaced, it attributes the misalignment to prediction error and corrects the prediction by means of the new detections.
With this strategy, our algorithm is robust to the displacement of the gates.
| Flying experiment with different altitude and moving gate
We also show a more challenging race track where the height of the gates varies from 0.5 to 2.5 m. Also, during the flight, the position of the second gate (at 2.5 m) is changed after the drone passes through it.
In the next lap, the drone can adapt to the changing position of the gate (Figure 26).
The flight result is shown in Figure 27. In this flight, the waypoints are not changed and the gates are deployed without any further adjustment. It is still demonstrated that this lightweight flying platform has the ability to finish the drone race task autonomously. Compared with a regular-size racing drone, the Trashcan has more complex aerodynamics and is more sensitive to disturbances. On the other hand, it has faster dynamics, which can make maneuvers more agile. More importantly, it is much safer than a regular-size racing drone, which may even allow for flying at home. In any case, the present approach represents another direction for autonomous drone racing, one that does not need high-performance, heavy onboard computers. Also, without computationally expensive navigation methods such as SLAM and VIO, the proposed approach is still able to make the drone navigate autonomously at relatively high speed.
However, the proposed approach still has limitations. First of all, we do not estimate the thrust in this approach; instead, we use a non-changing-altitude assumption to approximate the thrust and derive the prediction error model. The simulation and real-world experiments have shown that violating this assumption still yields accurate estimation. However, if the racing track contains more considerable height changes, it may become desirable to estimate the thrust with a model, to obtain a more accurate error model and increase the estimation accuracy, especially in more aggressive flight. Secondly, gate detection is a major bottleneck for increasing the speed of the flight. In the future, we will design a gate detection method based on deep learning to detect the gate in more complex environments; this deep net can then run on the GPU of the Jevois. Higher speeds could then also be attainable.
Thirdly, in this paper we mainly focus on the navigation part of the drone: the guidance is only a waypoint-based method and the controller is a PID controller. To make the drone fly faster, optimal guidance and control methods are needed (S. Li, Ozturk, De Wagter, de Croon, & Izzo, 2019; Tailor & Izzo, 2019; Tang, Sun, & Hauser, 2018). Another direction is to explore joint estimation of the drone's states and the gate positions.
This will become very useful when one assumes that gates are mostly not displaced. Then, over multiple laps, the drone can get a better idea of where the gates are.
In the future, with the rapid development of computational capacity, once more reliable gate detection and online optimal control can be implemented onboard, the speed of this autonomous racing drone can certainly be increased significantly. Compared with regularly sized drones, this tiny flying platform will then be able to perform faster and more agile flight, and the proposed VML approach will still be suitable for providing stable state estimation for the drone.
| CONCLUSION
In this paper, we presented an efficient VML approach to autonomous drone racing. The approach employs a velocity-stable model that predicts lateral accelerations based on attitude estimates from the AHRS. Vision is used for detecting gates in the image, and-by means of their supposed location in the map-for localizing the drone in the coarse global map. Simulation and real-world flight experiments show that VML can provide robust estimates with sparse visual measurements and large outliers. This robust and computationally very efficient approach was tested on an extremely lightweight flying platform, that is, a Trashcan racing drone with a Jevois camera. In the flight experiments, the Trashcan flew a track of three laps with an average speed of 2 m/s and a maximum speed of 2.6 m/s.
To the best of our knowledge, it is the world's smallest autonomous racing drone, with a weight six times lower than that of the currently lightest autonomous racing drone setup, while its velocity is on a par with the currently fastest autonomously flying racing drones seen at the latest IROS autonomous drone race.
"Computer Science",
"Engineering"
] |
THE OPERATOR B∗L FOR THE WAVE EQUATION WITH DIRICHLET CONTROL
In this paper, we primarily make reference to [10, Section 5.2, pages 1117–1120]. At the end, in Section 3 below, we will also examine its impact on [10, Section 7.1], which is a direct consequence of [10, Section 5.2]. Section 5.2 of [10] deals with the regularity of the map g → B∗Lg, where v = Lg is the solution of the two-dimensional wave equation [10, equation (5.2.2)] in the half-space, with zero initial conditions and Dirichlet boundary control g. (See problem (1.9) below for the general case on a bounded domain in Rⁿ, n ≥ 2.) The claim made in [10, Section 5.2] that B∗L ∉ L(L²(0,T;U)) is incorrect, due to a spurious appearance of the symbol “Re” (real part) in [10, equation (5.2.18)]—and, consequently, in [10, equation (5.2.22)]—while in view of the correct [10, equation (5.2.10)], the symbol “Re” should have been omitted. Luckily, the same analysis given in [10, Section 5.2], once the spurious symbol “Re” is omitted from [10, equation (5.2.18)] (as it should be), provides, in fact, a direct proof of the positive result that

B∗L ∈ L(L²(0,T;U)).    (1.1)
Thus the corrected analysis serves a double purpose: (i) on the one hand, it invalidates the negative claim for the two-dimensional half-space problem of [10, Section 5.2]; (ii) on the other hand, it provides its replacement in the addendum, namely the positive statement of Theorem 1.1 below.
(i) In equation (5.2.18), page 1119, suppress the symbol "Re." (ii) As a consequence of (i), in equation (5.2.22), page 1120, suppress the symbol "Re," so that the corrected equation (1.2) holds for the relevant (σ, τ). (iii) As a consequence of (ii), in equation (5.2.23), page 1120, suppress the symbol "Re," so that the corrected equation (1.3) follows by (1.2). The very same argument with "Re" omitted, as it should be, gives, instead of a negative result, the positive result (1.1) in the half-space; in fact, for any n ≥ 2. We will see this below.
Positive result on a half-space, n ≥ 2. The proof is essentially contained in [10, Section 5.2], modulo the corrections stated above. We consider the half-space wave equation problem in [10, equation (5.2.2)]. Let u ∈ L²(0,∞; L²(Γ)). Then, the corresponding version of [10, equation (5.2.10), page 1119] is (1.7). Then, (1.4) and (1.7) yield the desired conclusion, and thus (1.1) holds true for the wave equation on the n-dimensional half-space, n ≥ 2. The argument above is very transparent and shows exactly what is going on in order to gain the additional derivative on the boundary in the present case.
Addendum. We now state the general positive result.
For future reference in the proof of Section 2, we recall (1.11) and (1.12) from [10, equations (5.1.3), (5.1.10), (5.1.13)].

Remark 1.2. The above Theorem 1.1 was first stated in [1] (see estimate (2.7), page 121). We believe that the proof we give below in Section 2 is essentially self-contained and much simpler than the sketch given in [1]. The idea pursued in [1] is based on a full microlocal analysis of the fourth-order operator Δ(D²_t − Δ) (where the extra Δ is used to eliminate Dg from the z-dynamics z_tt = Δz + Dg_t, see [10, equation (5.1.11b)], as ΔDg_t ≡ 0). The subsequent microlocal analysis of [1] considers, as usual [8], three regions: the hyperbolic region, the elliptic region, and the "glancing rays" region. The latter is the most demanding, and it is unfortunate that no details are provided in [1] for the analysis in the glancing region, except for a reference to the author's Ph.D. thesis.
By contrast, our proof in Section 2 below invokes, for the most critical part, the sharp regularity of the wave equation from [5], which is obtained via differential, rather than pseudodifferential/microlocal, methods. In addition, standard elliptic (interior and) trace regularity of the Dirichlet map D is used. Thus, by simply invoking these results in (1.12) above for z_t, we obtain, by purely differential methods, the critical result on ∂z_t/∂ν of Step 1, (2.3). This then automatically provides the desired regularity of ∂z/∂ν microlocally outside the elliptic sector of the d'Alembertian □ = D²_t − Δ, where the time variable dominates the tangential space variable in Fourier space; see (2.11) below.
Thus, the rest of the proof follows from pseudodifferential operator (PDO) elliptic regularity of the localized problem.
Step 2. It remains to show that the L² regularity of ∂z/∂ν holds also in the elliptic sector. This is done by standard arguments using localization of the PDO symbols. We use the standard partition-of-unity procedure and a local change of coordinates by which Ω and Γ are identified (locally) with a half-space and its boundary, the Laplacian taking the form D²_x + r(x,y)D²_y + lot, where the lot (which result from commutators) are first-order differential operators and r(x,y)D²_y stands for the second-order tangential (in the y variable) strongly elliptic operator. Since the solutions v satisfy zero initial data, we can also extend v(t) by zero for t < 0. For t > T we multiply the solution by a smooth cutoff function φ(t) with φ(t) = 0 for t ≥ (3/2)T and φ(t) = 1 for t ≤ T. Thus, in order to obtain the desired solution, it suffices to consider the problem below, where Δ₀ = D²_x + r(x,y)D²_y is the principal part of Δ and v is the original solution v = Lg of problem (1.9). Below, we write w = u + y, where u and y satisfy (2.5) and (2.6), respectively. As a consequence, we will obtain (2.4b). We denote by u the solution of the counterpart regularity statement of (2.1) for v in Ω. Likewise, we introduce a nonhomogeneous problem with forcing term f = lot(v), which results from the presence of the lower-order terms applied to the original variable v in (2.4), that is, in (1.9). Thus, recalling that v ∈ C([0,T]; L²(Ω)) by (2.1), we obtain (2.7). By the principle of superposition, we have w = u + y, as announced above.
Step 3. In this step, we handle the y-problem (2.6). We first recall from (1.10) that our original objective is to show that D∗v_t ∈ L²(Σ) continuously in g ∈ L²(Σ). Moreover, we recall that v in Ω is transferred into w = u + y on the half-space Ω̃ (locally). Thus, by (2.6), (2.7), what suffices to show for y is the regularity property (2.8), whereby D∗y_t is ultimately continuous in g ∈ L²(Σ). However, property (2.8) is known from [5, Theorem 3.11, page 182] and has been used several times in the past. In fact, set A = −Δ₀, with D(A) = H²(Ω̃) ∩ H¹₀(Ω̃), and rewrite (2.6) abstractly as y_tt = −Ay + f. Apply A⁻¹ throughout and set Ψ = A⁻¹y, again by (2.7). Thus, Ψ solves problem (2.9). We further have that A⁻¹y_t ∈ C([0,T]; H¹₀(Ω̃)), again by (2.7). Finally, we recall that D∗AA⁻¹y_t = −(∂/∂ν)Ψ_t (see [9], [10, equation (5.1.9)]). One can simply quote [5, Theorem 3.11, page 182] or [9, equation (10.5.5.11), page 952] to obtain the desired regularity (2.8), where henceforth we take for Q̃ an extended cylinder based on Ω̃ × [−T, 2T]. Indeed, this last inclusion follows since [ᐄ, □] ∈ S¹(Q̃) and the a priori regularity (2.5b) for u imply the corresponding membership. Furthermore, still by (2.5b) and the fact that supp u ⊂ [0, (3/2)T], we have, by the pseudolocal property of pseudodifferential operators, that (ᐄu)(2T) ∈ C^∞(Ω̃) and (ᐄu)(−T) ∈ C^∞(Ω̃). We conclude that ᐄu restricted to ∂Q̃ lies in L²(∂Q̃), a boundary condition to be associated with (2.12). Since ᐄ is an elliptic pseudodifferential operator, classical elliptic theory applied to (2.12) yields (2.13), where the first containment on the right-hand side of (2.13) is due to the boundary term and the second to the interior term. Next, we return to the elliptic problem Δz = −v_t in Q, z = 0 on Σ, from (1.11), with the a priori regularity noted in (1.11). The counterpart of this elliptic problem in the half-space Q̃ (locally) is Δz = −u_t in Q̃, z = 0 on Σ̃ (we retain the symbol z in Q̃), as we are identifying w with u in the present Step 4 (due to the results of Step 3). Applying ᐄ throughout yields (2.14). Hence, by the a priori regularity in (2.5b) for u and in (1.11) for z, we conclude (2.15). Moreover, by virtue of (2.13), (d/dt)ᐄu ∈ H^{(0,−1/2)}(Q̃), where we have used the anisotropic Hörmander spaces H^{(m,s)}(Q̃) [3, Volume III, page 477], in which m is the order in the direction normal to the plane x = 0 (which plays a distinguished role) and (m + s) is the order in the tangential directions t and y. Via (2.15), we are thus led to solving problem (2.16). By elliptic regularity (note that Δᐄ is elliptic in Q̃), we obtain (2.17). Combining (2.17) and (2.11) yields the final conclusion, and Theorem 1.1 is proved.

Proposition 4.1. In addition to the standing hypotheses (i) and (ii) above, assume that (a) A is skew-adjoint, A∗ = −A, so that e^{A∗t} = e^{−At}, t ∈ R; and (b). Then, in fact, (4.5) holds. Finally, recalling L_T in (4.1) and its adjoint L∗_T [9], we rewrite (4.8) in the following attractive form, (4.9), from which (4.5) follows by taking the L²(0,T;U)-inner product with u. Equation (4.9) shows the implication (4.5) ⇒ (4.3).
3. Impact on [10, Section 7.1]

Theorem 1.1 and the decomposition argument in [10, Section 7.1, page 1129] allow one to deduce the analogous positive result for the Kirchhoff plate with moment controls. Indeed, with reference to the model in [10, equations (7.1.1)], we have the following theorem.

Theorem 3.1. Let Ω be as in Theorem 1.1, and let v be a solution to [10, equations (7.1.1)].
Theorem 1.1. Let Ω be a sufficiently smooth bounded domain in Rⁿ, n ≥ 2, and consider the v-problem in [10, equation (5.1.1), page 1114], that is, problem (1.9). Then the map g → (∂/∂ν)v_t is continuous on L²(Σ).

Step 4. Having accounted for the lot(v) in Step 3, which are responsible for the y-problem, we may in this step set y ≡ 0 and thus identify w with u: w ≡ u. It remains to consider problem (2.5) in u, involving only the principal part of the d'Alembertian. Let ᐄ ∈ S⁰(Q̃) denote the PDO ᐄ(x, y, t) with smooth symbol of localization χ(x, y, t, σ, η) supported in the elliptic sector of □ = D²_t − Δ₀; see (2.10).
"Mathematics"
] |
The HST Nondetection of SN Ia 2011fe 11.5 yr after Explosion Further Restricts Single-degenerate Progenitor Systems
We present deep Hubble Space Telescope imaging of the nearby Type Ia supernova (SN Ia) 2011fe obtained 11.5 yr after explosion. No emission is detected at the SN location to a 1σ (3σ) limit of F555W > 30.2 (29.0) mag, or equivalently M_V > 1.2 (−0.1) mag, neglecting the distance uncertainty to M101. We constrain the presence of donor stars impacted by the SN ejecta with the strictest limits thus far on compact (i.e., log g ≳ 4) companions. H-rich zero-age main-sequence companions with masses ≥ 2 M⊙ are excluded, a significant improvement upon the preexplosion imaging limit of ≈ 5 M⊙. Main-sequence He stars with masses ≥ 1.0 M⊙ and subgiant He stars with masses ≤ 0.8 M⊙ are also disfavored by our late-time imaging. Synthesizing our limits on postimpact donors with previous constraints from preexplosion imaging, early-time radio and X-ray observations, and nebular-phase spectroscopy, essentially all formation channels for SN 2011fe invoking a nondegenerate donor star at the time of explosion are unlikely.
INTRODUCTION
The single-degenerate scenario for producing Type Ia supernovae (SNe Ia) invokes a non-degenerate donor star undergoing Roche lobe overflow (RLOF) to transfer mass onto the white dwarf (WD) until it reaches the central densities necessary for carbon ignition (Whelan & Iben 1973; Nomoto 1982). Mass transfer via RLOF restricts the distance between the WD and the donor to a ≲ 3 R⋆ for semi-major axis a and companion radius R⋆ (Eggleton 1983), resulting in the SN Ia ejecta impacting the companion within moments of the explosion (e.g., Wheeler et al. 1975; see Liu et al. 2023 for a recent review of observables). The impact deposits energy into the envelope of the companion (e.g., Marietta et al. 2000; Boehner et al. 2017), heating and expanding the outer layers so that the star becomes overluminous for ∼10³ yr (e.g., Podsiadlowski 2003; Shappee et al. 2013a).
Searches for these post-impact donors have mostly been confined to nearby SN Ia remnants that are ≲ 1000 yr old, when the donor is still expected to be significantly overluminous. Some tentative candidates have been reported for Galactic and Magellanic Cloud SN Ia remnants (e.g., Ruiz-Lapuente et al. 2004; Ihara et al. 2007; Li et al. 2019), but no unambiguous surviving donor stars have been identified thus far (e.g., Schaefer & Pagnotta 2012; Kerzendorf et al. 2013; Pagnotta & Schaefer 2015; Kerzendorf et al. 2018; Shields et al. 2023; see Ruiz-Lapuente 2019 for a recent review). However, this experiment is observationally difficult due to the cost of obtaining deep spectroscopic follow-up for many targets and the limited number of nearby and young SN Ia remnants.
An alternative method for identifying post-impact companion stars is to obtain deep imaging for nearby SNe Ia many years after explosion once the SN has sufficiently faded.However, this is only possible for the most nearby (≲ 10 Mpc) SNe Ia due to crowding and faintness constraints.Do et al. (2021) recently searched for a post-impact donor star for SN 1972E but did not find any candidates in HST imaging ≈ 33 yr after explosion.
In this Letter, we present deep Hubble Space Telescope (HST ) imaging of the nearby and well-studied SN Ia 2011fe (Nugent et al. 2011) ≈ 11.5 yr after explosion to constrain the presence of post-impact companions.We adopt a distance to M101 of 6.4 ± 0.4 Mpc (µ = 29.03± 0.14 mag; Shappee & Stanek 2011) to ease comparisons with previous studies of SN 2011fe.We correct for the small amount of Milky Way reddening (E(B − V ) MW = 0.008 mag; Schlafly & Finkbeiner 2011) but do not correct for any host-galaxy reddening toward SN 2011fe as it is negligible (Patat et al. 2013).The date of explosion is MJD 55797 (Pereira et al. 2013).
HST OBSERVATIONS
We obtained deep imaging of SN 2011fe using the HST Wide Field Camera 3 (WFC3) UVIS module. Imaging was conducted only in the F555W filter due to the expected faintness of SN 2011fe at these epochs. Six images were obtained on UT 2023-03-04 (MJD 60007.55, weighted by the exposure time of each observation), corresponding to 4208 d (11.5 yr) after explosion, with a total exposure time of 8400 s. To characterize the long-term evolution of SN 2011fe, we include previous HST observations published by Shappee et al. (2017) and Tucker et al. (2022b) spanning ≈ 1100–2400 d after explosion.
Individual images were aligned with TweakReg and combined with AstroDrizzle (Avila et al. 2015) before performing point-spread-function (PSF) fitting photometry with Dolphot (Dolphin 2000, 2016). Time-dependent filter zeropoints are taken from the WFC3 headers (Calamida et al. 2022). All images are analyzed simultaneously to ensure the location of SN 2011fe remains consistent across images.
SN 2011fe is formally undetected in the last epoch of HST imaging, obtained 11.5 yr after the explosion, so we validate the reported uncertainties by checking the photometry of nearby sources. The F555W light curve of SN 2011fe is provided in Table 1. Non-detection limits are computed via m > −2.5 log₁₀(3σ_f) + z for a given flux uncertainty σ_f in counts/s and image zeropoint z in magnitudes. Figure 1 shows cutouts around SN 2011fe for several HST epochs, and the long-term F555W evolution is shown in Figure 2.
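A small sketch of this limit calculation; the flux uncertainty and zeropoint below are placeholders, not the paper's values.

```python
import numpy as np

def nondetection_limit(sigma_f, zeropoint, n_sigma=3.0):
    """Limiting magnitude m > -2.5 log10(n_sigma * sigma_f) + z."""
    return -2.5 * np.log10(n_sigma * sigma_f) + zeropoint

# Placeholder flux uncertainty (counts/s) and F555W zeropoint:
m_lim = nondetection_limit(sigma_f=2e-5, zeropoint=25.8)
```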
NON-DETECTION OF A POST-IMPACT COMPANION
We compare the F555W light curve of SN 2011fe to models of post-impact He (§3.1) and H-rich (§3.2) companions. The stsynphot software (STScI Development Team 2020) is used to compute synthetic HST F555W magnitudes (in the Vega system) using the T_eff and L⋆ values taken from the models. When referring to the stellar properties of different models throughout the following sections, we always report the values at the moment of SN explosion (i.e., after mass transfer but before the ejecta impact) to avoid confusion.
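A minimal sketch of how such synthetic magnitudes could be computed, assuming a blackbody spectrum for the post-impact companion. BlackBodyNorm1D (normalized to a 1 R_sun star at 1 kpc), SourceSpectrum, Observation, stsynphot.band, and stsynphot.Vega are standard synphot/stsynphot entry points (stsynphot additionally requires its reference data to be installed); the normalization details of the actual analysis may differ.

```python
import numpy as np
from astropy import units as u
from astropy.constants import sigma_sb, R_sun, L_sun
from synphot import SourceSpectrum, Observation
from synphot.models import BlackBodyNorm1D
import stsynphot as stsyn

def f555w_vegamag(teff_K, L_Lsun, distance_pc):
    """Synthetic WFC3/UVIS F555W Vega magnitude of a blackbody star."""
    # Radius from L = 4 pi R^2 sigma T^4.
    L = L_Lsun * L_sun
    R = np.sqrt(L / (4 * np.pi * sigma_sb * (teff_K * u.K) ** 4))
    # BlackBodyNorm1D is normalized to a 1 R_sun star at 1 kpc,
    # so rescale by (R / R_sun)^2 and (1 kpc / d)^2.
    scale = float((R / R_sun) ** 2) * (1000.0 / distance_pc) ** 2
    sp = SourceSpectrum(BlackBodyNorm1D, temperature=teff_K) * scale
    bp = stsyn.band('wfc3,uvis1,f555w')
    obs = Observation(sp, bp)
    return obs.effstim(flux_unit='vegamag', vegaspec=stsyn.Vega)
```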
He-star Donors
Models of the post-impact evolution of He-burning donors are taken from Pan et al. (2013, models HeWDad; see also Pan et al. 2010, 2012) and Liu et al. (2022, models He01r and He02r; see also Liu et al. 2013), which use the binary evolution results of Wang et al. (2009) to construct their initial binary systems. We also include the simulations of Liu et al. (2021) for a double-detonation explosion triggered by accretion from a low-mass MS He star (models A, B, and C).
Fig. 3 shows that most He-star models are incompatible with the observed F555W light curve of SN 2011fe; only the three lowest-mass He donors (0.30 M⊙, 0.40 M⊙, 0.50 M⊙) from Liu et al. (2021) would not be detected. Thus, we disfavor MS He-burning donors with masses ≳ 1.0 M⊙, because the thermal timescale decreases with increasing mass. The inverse is true for the subgiant (SG) models, where more massive donors have more extended envelopes, which increases the thermalization timescale. The 1.0 M⊙ model from Liu et al. (2022) would not be detected in our HST observations, so we disfavor ≤ 0.8 M⊙ SG He stars.
H-rich Donors
Unlike He stars, most H-burning MS donors remain undetectable (F555W ≳ 32 mag) over the observational baseline (Pan et al. 2012; Rau & Pan 2022). This is attributed to H-rich stars having larger radii at a given
mass compared to He stars, and thus responding more slowly and reaching lower maximum luminosities (e.g., Pan et al. 2012, 2013). However, there are a few cases where H-rich donors can be assessed with the HST non-detection.
Rau & Pan (2022) compute models for zero-age main-sequence (ZAMS) companions assuming RLOF and extend the simulations to binary systems with companions beyond the canonical RLOF separation. While such systems are somewhat contrived, requiring fine-tuning of the mass transfer and of the time of ignition of the C/O WD, the larger separations produce shallower energy deposition, which in turn corresponds to increased post-impact luminosities and shorter thermalization timescales (e.g., Fig. 8 in Rau & Pan 2022).
DISCUSSION
The compressibility of the stellar envelope determines the thermal response timescale (Pan et al. 2012). Stars with high surface gravities confine the energy deposition to the outermost layers, whereas the energy is deposited deeper into the stellar interior for low-log g donors. Assuming an accretion efficiency of 50% (a purely instructive choice, as the true accretion efficiency depends sensitively on the mass transfer rate; e.g., Piersanti et al. 2014; Wu et al. 2017), our results restrict MS He donors to ≲ 1.1 M⊙ at the onset of RLOF. While such systems are observed in the Milky Way (i.e., AM CVn binaries), they are likely the progenitors of faint and spectroscopically peculiar transients. Moreover, He shells with masses ≳ 0.05 M⊙ produce distinct early-time signatures (e.g., Polin et al. 2019; Collins et al. 2022) that are not observed in SN 2011fe.
The SG He donors follow the opposite trend, with more massive companions being harder to detect due to the increasing mass of the envelope, which increases the heating depth and the thermalization timescale. SG He donors with masses ≲ 0.9 M⊙ (again assuming a He-shell mass ≲ 0.05 M⊙ and an accretion efficiency of 50%) are disfavored by our late-time imaging. Higher-mass SG He donors are inconsistent with pre-explosion imaging (Li et al. 2011; Graur et al. 2014), so binaries with SG He stars are disfavored for SN 2011fe.
H-rich donors are less constrained by our latest epoch of HST imaging, but the new limits complement existing non-detections of H-rich companions. All ZAMS H-rich donors with masses ≳ 2 M⊙ are disfavored, representing a distinct improvement on the pre-explosion limit of ≈ 5 M⊙ (Li et al. 2011). Donors of 1.5 M⊙ beyond 4 R⋆ are also disfavored, but smaller separations would not be detected, and ≤ 1 M⊙ donors at < 5 R⋆ are unconstrained by the late-time HST imaging due to their low post-impact luminosity (≲ 10 L⊙).
However, these same H-rich ZAMS donors are already disfavored along separate lines of evidence.The impacting ejecta will strip or ablate material off the surface of the donor and heating from the radioactively-decaying ejecta will produce strong H emission lines in the nebular phase (Mattila et al. 2005;Botyánszki et al. 2018;Dessart et al. 2020).H emission is not seen in the spectra of SN 2011fe (Shappee et al. 2013b) out to 1000 d after explosion (Graham et al. 2015;Taubenberger et al. 2015) and the formal limits on unbound donor material are ≲ 10 −3 M ⊙ (e.g., Fig. 6 in Tucker et al. 2022a).All post-impact ZAMS donors not excluded by our HST observations produce ≳ 0.05 M ⊙ of unbound material, inconsistent with nebular-phase observations (Shappee et al. 2013b;Lundqvist et al. 2015).Our results also restrict 'spin-up/spin-down' scenarios (e.g., Justham 2011;Di Stefano et al. 2011) due to smaller companion radii increasing the surface gravity, decreasing the heating depth, and producing brighter companions after impact (see Fig. 4 and Rau & Pan 2022).Thus, H-rich donors are also disfavored for SN 2011fe.
One potential caveat for future observational and theoretical work on post-impact donor stars is the effect of the donor star's structure. We qualitatively compare the H-rich 2 M⊙ ZAMS model at RLOF from Rau & Pan (2022) to Model B from Pan et al. (2012), which has a mass of 1.92 M⊙ at the time of the explosion. While the right panel of Fig. 4 shows that the former is disfavored by our observations, the latter is unconstrained (F555W > 32 mag). The difference between these models is the density profile of the companion at the moment of impact, as the models of Pan et al. (2012) include mass transfer prior to explosion instead of adopting a ZAMS density profile. The mass transfer reduces the envelope density compared to a ZAMS star of identical mass. This is supported by the higher amount of unbound mass in the Pan et al. (2012) evolved model (≈ 15%) compared to the Rau & Pan (2022) ZAMS model (≈ 10%). This qualitative comparison highlights the differences in post-impact evolution with and without including the effects of mass loss on the donor star structure.
The ≈ 1 mag difference between the 1.2 M⊙ MS He-star models of Pan et al. (2013) and Liu et al. (2022) seen in Fig. 3 further highlights the effect of mass transfer on the donor structure. Despite the donors having similar masses at the moment the WD explodes, they began with different masses and experienced different mass-transfer histories. The observed difference in synthetic F555W (and in the underlying L⋆ estimates) is driven by inherent differences in the donor's internal density profile. Thus, all constraints on post-impact companions depend on the underlying assumptions used to construct the density profile of the companion. We encourage future simulation efforts to explore the dependence on different density profiles produced by realistic mass-transfer histories.
It is worth noting that some constraints on a double-degenerate system can be derived, assuming a companion WD survives the explosion as in the 'D6' scenario (Shen et al. 2018). Shen & Schwab (2017) show that winds can be driven from the WD surface by pollution from radioactive species in the SN ejecta. However, the primary issue with constraining these models using the HST observations of SN 2011fe is the high temperature (T_eff > 10⁵ K), which shifts the majority of the emission to UV wavelengths. The UV photons likely cannot escape the Fe-rich SN ejecta due to the extensive neutral and singly-ionized Fe transitions at these wavelengths (e.g., Pradhan et al. 1996; Bautista 1997). This will likely cause the observed radiation to deviate strongly from a blackbody and complicates reliable comparison to observations. Additionally, the excess luminosity from the polluted WD fades on timescales similar to those of the radioactively decaying ejecta. Thus, one must simultaneously fit the isotopic ratios produced during the explosion (e.g., Tucker et al. 2022b) and the emission contribution from the surviving WD. This should be possible once radiative-transfer calculations can be incorporated into the Shen & Schwab (2017) models.
SN 2011fe has been a boon for understanding the complex physics governing SN Ia explosions.These observations, at 11.5 yr after explosion, provide the strongest limits on He-rich donors in addition to further disfavoring H-rich donors.Assessing our new imaging in conjunction with prior limits on the progenitor system of SN 2011fe (Li et al. 2011;Nugent et al. 2011;Bloom et al. 2012;Margutti et al. 2012;Chomiuk et al. 2012;Brown et al. 2012), almost all non-degenerate donor stars are observationally disfavored.The remaining scenarios that cannot be formally excluded, such as very low-mass (≲ 0.6 M ⊙ ) He donors, are disfavored by rate arguments (e.g., Bildsten et al. 2007;Neunteufel et al. 2019) given that SN 2011fe is a quintessential example of the SN Ia population.
Figure 1. Image cutouts centered on SN 2011fe for several HST epochs. The time between explosion and observation is denoted in the lower-right corner of each panel.

Figure 2. F555W light curve of SN 2011fe, provided in Table 1. The red dashed line shows a simple power-law fit with f ∝ t⁻⁵, highlighting the steadily declining flux. The last observation is not included when fitting the power-law model.
Fig. 4 shows that the 2 M⊙ donor from Rau & Pan (2022) is disfavored for all binary separations. The 1.5 M⊙ model would have been marginally detected (≈ 2σ) at a ≳ 3 R⋆ and undetected if the system was in RLOF. The two lower-mass models from Rau & Pan (2022), 0.8 M⊙ and 1 M⊙, are not constrained by our observations regardless of separation, due to their low peak luminosities (≲ 10 L⊙). The MS models computed by Pan et al. (2012) with masses of ≈ 1.2–1.9 M⊙ remain undetectable for another century or more. The differences between these models and the similar-mass models computed by Rau & Pan (2022) are attributed to the density structure of the donor star at the time of impact. Pan et al. (2012) model the full binary evolution, including mass transfer, when constructing their donor stars, whereas Rau & Pan (2022) assume ZAMS stars to facilitate parameter exploration and dependencies on SN properties. These differences are discussed further below.
Figure 3. Comparison between the light curve of SN 2011fe (black squares) and models of post-impact He-star companions (colored lines). The inverted triangles show the 1σ (open triangle) and 3σ (filled triangle) non-detection limits. The uncertainty in the distance modulus is ≈ 0.14 mag. Left: main-sequence He-star companions from Pan et al. (2013, P13), Liu et al. (2021, L21), and Liu et al. (2022, L22). Right: same as the left panel but for SG He-star companions.
"Physics"
] |
Development of emulsion films based on bovine gelatin‐nano chitin‐nano ZnO for cake packaging
Abstract This research examines the effect of packaging with bovine gelatin, gelatin nanocomposite (GN), gelatin emulsion (GE), two-layer gelatin nanocomposite plus gelatin emulsion (GNE), and polyethylene (PE) films on sponge cake properties during storage at 25°C and 55 ± 2% RH. In this regard, the water vapor permeability (WVP) and oxygen permeability (OP) of the films were compared. Then, the moisture content, acidity, peroxide value, texture profile, organoleptic properties, and fungal growth of the packed cakes were determined. Results showed that the addition of nanoparticles reduced the water vapor permeability from 9.680 ± 0.460 × 10⁻¹⁰ g·m·s⁻¹·m⁻²·Pa⁻¹ for the net gelatin film to 6.067 ± 0.337 × 10⁻¹⁰ g·m·s⁻¹·m⁻²·Pa⁻¹ for the gelatin nanocomposite film, and the oxygen permeability from 39.262 cm³·μm·m⁻²·d⁻¹·kPa⁻¹ for the net gelatin film to 29.645 cm³·μm·m⁻²·d⁻¹·kPa⁻¹ for the nanocomposite film. However, GNE films had the highest barrier properties. The acidity and peroxide values of the cakes confirmed the suitability of GNE films for sponge cake packaging. In addition, the antifungal properties of the nanoparticles led to less fungal growth on cakes packed in GNE films. The cakes packed in GNE films had greater organoleptic and textural acceptability than those packed in the other films. Overall, the results show that GNE films are suitable for packaging preservative-free sponge cakes, because this packaging can prevent fungal growth for a longer time and, moreover, can maintain the cake's chemical and organoleptic quality.
However, a major challenge for the industrial application of these polymers is their low barrier properties against water molecules (Zheng, Tajvidi, Tayeb, & Stark, 2019).
One of the most important sources of biodegradable films is gelatin. This material is of interest because of its low cost, availability, and film-forming properties, but its disadvantage is a high water vapor permeability (Marvizadeh, Oladzadabbasabadi, Nafchi, & Jokar, 2017). There are different ways to improve the barrier properties of biodegradable films, such as different film production processes and the application of cross-linking agents, plasticizers, and filler components such as nanoparticles (Araghi et al., 2015). Recently, researchers have studied the effect of different nanoparticles on the properties of biodegradable films (Marvizadeh et al., 2017; Nafchi, Nassiri, Sheibani, Ariffin, & Karim, 2013). In this regard, nano chitin (N-chitin) is one of the nanoparticles compatible with carbohydrate- and protein-based polymers that not only can improve the physical properties of films but can also add antimicrobial properties to the biopolymer (Sahraee, Milani, Ghanbarzadeh, & Hamishehkar, 2017).
On the other hand, metal oxide nanoparticles such as nano ZnO (N-ZnO), TiO2, MgO, and CaO are of interest for their high stability at process temperatures, low diffusivity from packaging into food, safety for animals and humans, and antimicrobial properties (Nafchi, Moradpour, Saeidi, & Alias, 2014; Shankar, Teng, Li, & Rhim, 2015). Therefore, substituting biodegradable packaging for synthetic polymers in bakery products could be a substantial step toward decreasing waste pollution in the environment and the fuel used to produce olefin polymers. Moreover, because of the high functionality of biodegradable polymers, the addition of preservatives to bakery products may be reduced. The objective of the present research was to apply gelatin nanocomposite films containing N-chitin and N-ZnO to the packaging of sponge cake and to compare the shelf life and quality of the cakes with those packed in polyethylene films.
| Materials
For film preparation materials, bovine gelatin was supplied by Merck Chemical Co. (Darmstadt, Germany), with bloom number of 200 and density of 1,358 kg/m 3 . N-ZnO powder with size of 10-30 nm was purchased from Nano SANY Co. N-chitin gel was bought from Nano-Novin Co. with 1.5% dry matter and particle size of 50-70 nm. Liquid glycerol and 50% glutaraldehyde were provided by Sigma Chemical Co. For cake ingredients, wheat flour (extraction rate of 72%), sugar, eggs, oil, vanilla, and baking powder were provided from a local market, Tabriz, Iran.
| Chemicals
Sodium hydroxide, phenolphthalein, n-hexane, acetic acid, chloroform, sodium thiosulphate, and potassium iodide were purchased from Sigma Aldrich. In order to study antifungal activity, SDS Agar medium was supplied from Quelab.
| Film preparation
An aqueous solution of ZnO nanoparticles was prepared through dispersing 5% (based on dry gelatin) of N-ZnO powder in 100 ml distilled water and was stirred on a magnet stirrer at 30°C for 1 hr. Then, the solution was sonicated using a high-intensity ultrasonic processor (Heidolph) at periodic pulsing of 120 s on and 15 s off and at an amplitude of 80% with 0.5 cycle per second. Subsequently, 5% N-chitin (based on dry gelatin) was added to the solution and mixed for 1 hr further, followed by sonication in an ultrasonic bath (Parsonic 30S, Pars Nahand engineering Co.) for 30 min. Gelatin (4 g/100 ml) was dissolved in this solution by mixing for 30 min at room temperature followed by stirring on a hot plate at 55°C for 30 min. Later, the solution was cooled to 35°C and 30% glycerol as plasticizer and 1% glutaraldehyde as crosslinking agent was added and mixed for 30 min. Finally, the film solution was cast on 16 cm diameter Teflon-coated dishes and dried for 48 hr at room temperature. Also, net gelatin films were produced through the same method without adding nanoparticles. In order to make ready emulsion films, 4 g/100 ml gelatin was swelled in water for 30 min and 30% (based on dry gelatin) corn oil and Tween-80 (2 g/100 g oil) as emulsifier were added to the solution and heated up to 55 ± 5°C for 30 min. Then, the mixture was cooled to 35°C and plasticizer and cross-linking agent were added as described above and mixed for 30 min further. Finally, the solution was cast on Teflon-coated dishes and dried. Four kinds of packaging films were prepared in order to pack the cakes: net gelatin films, 5% N-chitin/ 5% N-ZnO/gelatin films (GN), gelatin emulsion films (GE), and 5% N-chitin/ 5% N-ZnO/gelatin as first layer and gelatin emulsion film as second layer (GNE), and polyethylene films as control (Sahraee, Milani, Ghanbarzadeh, & Hamishekar, 2016).
| Water vapor permeability
Water vapor permeability (WVP) of the films was measured according to ASTM E96/E96M-16 (2016) with some modifications (Zahedi, Ghanbarzadeh, & Sedaghat, 2010). In this method, 3 g of CaSO4 salt was placed in glass vials (1.5 cm diameter and 4 cm depth); this salt induced 0% RH in the vials. Disks of the film samples were then fixed with the vial caps on top of them; each cap had a 4 mm diameter hole to allow gas exchange through the film. The initial weights of the vials were measured, and the vials were then placed in desiccators containing saturated K2SO4 solution (RH = 97%). The vials were weighed at 24 hr intervals. In this way, the water vapor transmission rate (WVTR) was calculated from the slope of the curve of weight change versus time, normalized by the exposed film area. Subsequently, the WVP (g·m·s⁻¹·Pa⁻¹) of the film samples was determined according to equation (1):

WVP = (WVTR × L) / ΔP    (1)

where L is the average thickness of the film and ΔP is the water vapor pressure difference between the two sides of the film.
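A small sketch of this calculation, assuming daily weighings and SI conversions; all numbers below are placeholders, not measured values (the saturation vapor pressure of water at 25°C, ≈ 3169 Pa, times the 97% RH difference gives ΔP).

```python
import numpy as np

def wvp_from_weights(times_s, weights_g, area_m2, thickness_m, dP_Pa):
    """WVP = (WVTR * L) / dP, with WVTR from the slope of weight vs time."""
    slope_g_per_s = np.polyfit(times_s, weights_g, 1)[0]  # g/s
    wvtr = slope_g_per_s / area_m2                        # g / (s m^2)
    return wvtr * thickness_m / dP_Pa                     # g m / (s m^2 Pa)

# Placeholder data: daily weighings of a vial sealed with the film.
t = np.arange(0, 5) * 86400.0                       # seconds
w = np.array([0.0, 0.021, 0.043, 0.062, 0.085])     # grams gained
wvp = wvp_from_weights(t, w,
                       area_m2=np.pi * 0.002 ** 2,  # 4 mm diameter hole
                       thickness_m=80e-6,
                       dP_Pa=0.97 * 3169)
```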
| Oxygen permeability of films
Oxygen permeability (OP) of films was determined according to the standard method of ASTM D3985-17 (2017) with a modular system of Ox-Tran 2/20 Ml (Modern Controls Inc.) at 25°C and 55 ± 2% RH.
The film samples were fixed on a stainless steel mask. One side of the film was exposed to nitrogen gas and the other side was in contact with oxygen gas, with both sides held at the same temperature and humidity. As oxygen permeated the film, it was transferred to a coulometric detector and produced an electrical current whose intensity depended on the amount of oxygen flowing to the detector per unit time (Hong & Krochta, 2006).
| Cake preparation
The cake samples were prepared according to the method of Lu et al. (2010) by some modifications. The ingredients were mixed as following formula: 150 g flour, 150 g sugar, 6 eggs, 40 g oil, 1.70 g vanilla, and 0.90 g baking powder. After preparing the dough, it was transferred to a cake mold and baked for 40 min at 180°C in oven.
After baking, the cake was covered with sterile aluminum foil and held in a laminar-flow hood to cool. Then, 5 × 5 cm² pieces of cake were packaged in the gelatin nanocomposite films and in polyethylene films as a control and stored at 55 ± 2% RH and 25°C. The shelf-life investigations of the packed cakes were performed at 0, 7, 14, 21, and 28 days of storage.
| Moisture content of cake
The moisture content of the cake samples (crumb) was determined according to AACC 44-15A (AACC, 2000). Following this method, 2 × 2 cm² pieces of cake were weighed before and after drying at 103°C for 24 hr, and the percentage of weight loss was reported as the moisture content of the cake.
| Extraction of cake's lipid
Lipid extraction from the cake samples was necessary to determine the peroxide value and acidity of the cakes' lipid during storage. For this purpose, 100 g of cake sample was immersed in 200 ml of n-hexane as solvent and mixed thoroughly to crush the crumb and expose it better to the solvent. The mixture was held until the upper solvent layer became clear and was then filtered through filter paper (Whatman No. 1). Subsequently, the solvent was evaporated in a rotary evaporator at 50°C, and the extracted lipid was used for the subsequent experiments (Lu et al., 2010).
| Peroxide value
In order to determine the rate of lipid oxidation during storage of cakes packed in the different polymers, the peroxide value (PV) was measured and calculated according to equation (2):

PV (mEq peroxide/kg) = (N × V × 1000) / W    (2)

where N is the sodium thiosulfate normality, V is the volume of sodium thiosulfate used for titration, and W is the weight of the lipid.
| Free fatty acids
One symptom of rancidity in a food's lipid is an increase in its free fatty acid content. Accordingly, the acidity of the cakes' lipid was measured at 0, 7, and 14 days of storage according to the AOCS method (AOCS Ca 5a-40, 2017). Following this method, 2 g of lipid was mixed with 30 ml of neutralized ethanol and titrated with sodium hydroxide solution (0.01 N) in the presence of phenolphthalein as indicator. The acidity of the cake lipid was calculated by equation (3):

Acidity (% free fatty acids, as oleic acid) = (V × N × 28.2) / W    (3)

where N is the normality of the sodium hydroxide solution, V is the volume of sodium hydroxide consumed in the titration, and W is the weight of the lipid.
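A minimal sketch of the two titration calculations with the formulas as reconstructed above (the factor 28.2 is the standard oleic-acid conversion in AOCS Ca 5a-40); the example values are placeholders.

```python
def peroxide_value(n_thio, v_thio_ml, lipid_g):
    """PV in mEq peroxide per kg lipid: (N * V * 1000) / W."""
    return n_thio * v_thio_ml * 1000.0 / lipid_g

def acidity_percent_oleic(n_naoh, v_naoh_ml, lipid_g):
    """Free fatty acids as % oleic acid: (V * N * 28.2) / W."""
    return v_naoh_ml * n_naoh * 28.2 / lipid_g

pv = peroxide_value(0.01, 1.2, 5.0)           # ~2.4 mEq/kg
ffa = acidity_percent_oleic(0.01, 0.9, 2.0)   # ~0.13 %
```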
| Texture profile analysis of cakes
In order to study the effect of the packaging polymer on the texture properties of the cake samples after 0, 7, and 14 days of storage, texture profile analysis was performed with an Instron universal testing machine (Texture Pro CT V1.6 Build, Brookfield Engineering Labs, Inc.). Cube pieces of cake (4 × 4 × 4 cm³) were compressed using a cylinder probe (TA25/1,000, D = 1.245 mm) at room temperature. The samples were compressed in two cycles to 40% of their initial height with a load of 100 N and a speed of 1 mm s⁻¹. The springiness, cohesiveness, hardness, and resilience of the cake samples were determined as the mean of triplicate measurements.
| Antimicrobial activity of films
In order to assess the antifungal properties of the packaging films on the cakes, mold and yeast counts were performed according to the methods described in AOAC (2014). Aseptically, the sample was diluted 1:10 with dilution water and stomached for 2 min. About 1 ml of the diluted sample was carefully transferred onto the surface of the solidified medium in a plate. The suspension was then distributed evenly over the medium with gentle movements of the pipette. The plate was closed and left for 1 min to allow the suspension to be absorbed into the medium. Yeasts and molds were counted after 5 days of incubation at 25°C.
| Sensory evaluation
Sensory evaluation of the cake samples was carried out to assess the effect of the different packaging polymers on the quality of the cakes during 7 days of storage at ambient temperature. In this regard, 20 panelists, including 7 men and 14 women, were asked to taste cake samples labeled with three-digit random numbers. Each panelist filled in an evaluation form, ranking the quality attributes of appearance, color, odor, texture, and overall acceptability on a 5-point hedonic scale (1 = dislike extremely, 5 = like extremely). The reported results are the averages of these ratings.
| Statistical analysis
The results of all experiments are stated as mean ± standard deviation. One-way analysis of variance (ANOVA) was applied to analyze the data. For post hoc comparisons, Duncan's test was used; for the gelatin nanocomposite film properties, Tukey's test was applied. SPSS 16.0 (SPSS) was used for the analysis. In all experiments, the significance level was p < .05.
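As a minimal sketch of such a one-way analysis (the study used SPSS; here scipy's f_oneway is shown with placeholder replicate values):

```python
from scipy import stats

# Placeholder WVP replicates for three film types
# (in units of 1e-10 g m / (s m^2 Pa)).
gelatin = [9.2, 9.9, 9.9]
gn      = [5.8, 6.1, 6.3]
ge      = [7.0, 7.3, 7.1]

f_stat, p_value = stats.f_oneway(gelatin, gn, ge)
significant = p_value < 0.05  # reject equal-means hypothesis at alpha = .05
```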
| Water vapor permeability of films
The water vapor permeability of the gelatin, GN, and GE films is shown in Table 1. Statistical analysis of the data showed significant differences between the WVP of the films. Net gelatin films had a high WVP because of the films' hydrophilic nature. However, incorporation of N-chitin and N-ZnO increased the barrier properties of the films against water vapor. Since the WVP of films depends on the solubility and diffusion of water molecules across the film, nanoparticles can reduce this permeability by increasing cross-linking between polymer chains and filling the porosity of the matrix (Rouhi, Mahmud, Naderi, Ooi, & Mahmood, 2013). Also, Kanmani and Rhim (2014) reported that the addition of N-ZnO to different polymers reduced the WVP by increasing the overall hydrophobicity of the films and inducing a tortuous pathway for water vapor.
The addition of oil to the film formulation led to a lower WVP than that of the gelatin and GN films.
| Oxygen permeability of nanocomposite emulsion films
The oxygen permeability (OP) of the films is shown in Table 1.
| Moisture content of cakes
As can be seen in Figure 1, applying a gelatin emulsion film as a second layer on the gelatin film improved the barrier properties of the degradable packaging, and GNE had the best barrier properties after polyethylene. Statistical analysis showed no significant difference in the acid value of cakes packed in net gelatin and GN films after 7 and 14 days of storage. This result may be due to the inefficiency of these polymers at preventing moisture loss from the cakes, since the lack of moisture prevents triglyceride hydrolysis (Soukoulis et al., 2014). However, the acidity of samples packed in GE was higher than that in net gelatin films, a result of the higher moisture barrier of this packaging. On the other hand, the acidity of cake samples packed in polyethylene and GNE did not change after 7 days of storage, which may be because of the similar moisture content of the cakes packed in these films. However, the acidity became significantly different for cakes packed in polyethylene and GNE after 14 days, probably due to more fungal growth in the polyethylene packaging than in the GNE films.
| Peroxide value of cakes
The effect of the different packaging polymers on the cake samples' peroxide value (mEq peroxide/kg of oil extracted from the cakes) is shown in Figure 1c. Generally, lipid oxidation in foods is affected by UV light, temperature, moisture content, metal ions, and oxygen exposure (Wu et al., 2013).
The results showed no significant difference between the peroxide values of fresh cakes and of cakes packed in net gelatin films for 7 and 14 days of storage. The reason may be the good barrier properties of gelatin films against oxygen and UV light (Sahraee et al., 2016), together with the moisture loss of the cakes packed in net gelatin films down to a very low water activity.
| Antifungal properties of films
The antifungal properties of the films were investigated by comparing fungal growth on cake samples packed in gelatin, GE, GN, GNE and polyethylene films (Table 2). Since the cakes packed in net gelatin films dried out after 3 days of storage, with their moisture content reaching 2.91 ± 0.92% and 1.70 ± 0.54% after 7 and 14 days, respectively, no fungal growth occurred at 7, 14, 21 or 28 days. Although incorporation of nanoparticles into gelatin films decreased the water vapor permeability, the moisture content of cakes packed in GN films still decreased to 13.48 ± 0.45% after 14 days; therefore, the water activity of these cakes was below the minimum aw for fungal growth. The comparison of fungal growth on cakes packed in polyethylene, GE and GNE films showed that growth on cakes packed in polyethylene exceeded that on cakes in GE and GNE after 7, 14, 21 and 28 days. The reason for the lower microbial growth on cakes packed in GE films may be the lower moisture content of the cakes and the lower oxygen permeability of GE films.
However, the results clearly showed that GNE films not only preserve the moisture content of cakes but, because they contain N-chitin and N-ZnO, also act as a functional nanocomposite film with antifungal properties that can extend cake shelf life by reducing microbial growth. In this regard, a two-layer packaging combines the water vapor barrier of the gelatin emulsion film with the antifungal action and better physicochemical properties of the gelatin nanocomposite film (Noshirvani, Ghanbarzadeh, Rezaei-Mokarram, & Hashemi, 2017; Sahraee et al., 2017). The results for cohesiveness were in accordance with hardness (Table 3): the cakes packed in polyethylene and GNE films were less cohesive than those in GE and GN films. The effect of storage time on the cohesiveness of cakes packed in the different films was ascending. Different packaging polymers did not affect the springiness of the cakes significantly, but increasing storage time reduced this property.
| Texture profile analysis of cakes
The resilience of the packed cakes, determined as the ratio between the areas of the compression and decompression stages of the first cycle of texture profile analysis, is a criterion of the recoverability of the cakes (Fabra, Lopez-Rubio, & Lagaron, 2015; Guadarrama-Lezama, Carrillo-Navas, Pérez-Alonso, Vernon-Carter, & Alvarez-Ramirez, 2016). Here again, cakes packed in polyethylene and GNE were more resilient than those in GE and GN, respectively (Table 3).
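As a rough illustration of this definition, the snippet below integrates a hypothetical first-cycle force-time curve and takes resilience as the decompression (withdrawal) area divided by the compression area; the synthetic curve stands in for real texture analyzer output.

```python
# Resilience from first-cycle TPA data: decompression area / compression area.
import numpy as np

time  = np.linspace(0.0, 2.0, 201)                        # s, first TPA cycle
force = np.where(time <= 1.0, 10 * time, 6 * (2 - time))  # synthetic force curve (N)

peak = np.argmax(force)                                   # end of compression stage
compression_area   = np.trapz(force[:peak + 1], time[:peak + 1])
decompression_area = np.trapz(force[peak:], time[peak:])

resilience = decompression_area / compression_area        # ~0.6 for this curve
print(f"Resilience = {resilience:.2f}")
```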
| Sensory characterization of cakes
In order to assess the organoleptic characteristics of cake samples packed in gelatin, GE, GN, GNE and polyethylene films, the appearance, taste, odor, color, texture (hardness or softness) and overall acceptability of the cakes were evaluated after 7 days of storage at 25°C (Table 2).
The results showed that cakes packed in net gelatin films had the lowest sensory acceptability. Since net gelatin films were not a good barrier against water vapor, after 3 days of storage the cakes stored in them became dry and stiff and were no longer chewable.
On the other hand, the cakes packed in polyethylene, GE and GNE films did not differ significantly in organoleptic characteristics, although the polyethylene-packed samples had a better texture and acceptability than the other cakes, as can be seen from the results shown in Table 4.
| CONCLUSION
Substituting synthetic laminated films with laminated degradable packaging, such as nanocomposite gelatin/emulsion gelatin films, can offer many advantages for industry and the environment. As the results show, the addition of nanoparticles to gelatin films improved the barrier properties of the films, but not sufficiently. Accordingly, in order to reduce the permeability of gases, especially water vapor, through the packaging, applying a gelatin emulsion film as a second layer is appropriate. In this case, the results for moisture content, peroxide value, acidity, texture analysis and fungal growth confirmed that two-layer nanocomposite emulsion films could be a substitute for polyethylene films.
This work was supported by Sari Agricultural Sciences and Natural Resources University (SANRU).
CONFLICT OF INTEREST
The authors declare that they do not have any conflict of interest.
Table note: the results are means of scores based on a 9-point hedonic scale (1 = dislike extremely to 9 = like extremely); the same superscripts in each column indicate no significant difference between values.
| 4,634 | 2020-01-23T00:00:00.000 | [ "Materials Science" ] |
Treatment of Solanum torvum seeds improves germination in a batch-dependent manner

RESEARCH NOTE

ABSTRACT
The Solanum torvum species can grow in soils with a heavy load of nematodes and pathogenic fungi. It is currently much in demand in intensive agriculture as a rootstock of Solanaceae species, such as eggplant and tomato. This study aimed at comparing treatments, in order to determine the best method to accelerate the germination of S. torvum seed batches. Three seed batches were submitted to four treatments to overcome dormancy (water, potassium nitrate, gibberellic acid and pre-imbibition in gibberellic acid). The first germination count, germination percentage, germination speed index, mean germination time and mean germination speed were assessed. Treatments with gibberellic acid, with either pre-imbibition or only moistened substrate, exhibited the best germination speed index, mean germination time and mean germination speed. The final germination percentage showed a significant interaction between treatments and seed batches. Therefore, the treatments affect the final germination in a batch-dependent manner.
KEYWORDS: Solanaceae; dormancy breaking; gibberellic acid; potassium nitrate.

The Solanum genus is a hyperdiverse taxon. There are around two thousand Solanum species worldwide, distributed primarily in tropical and subtropical areas, with a small portion in temperate zones (Edmonds & Chweya 1997).
The Solanum torvum species is native to Latin America. It is shrubby, reproduces by seeds and is dispersed mainly by birds that feed on its berries. It is widely distributed in Pakistan, India, Malaysia, China, the Philippines and tropical America (Zakaria & Mohd 1994). The species is used in both the pharmacological and agronomic areas, but is little studied and has no methodological description rules for seed testing in Brazil (Brasil 2009). The species is highly vigorous, rustic, wild and known in many equatorial countries as an invader capable of colonizing poor and inhospitable zones. Due to its robust root system, it manages to develop in soils with a heavy load of nematodes and pathogenic fungi, thus recently becoming much in demand in intensive agriculture as a rootstock of Solanaceae species, such as eggplant and tomato (Miceli et al. 2014, Scrimali 2014).
In southern Croatia and part of Montenegro, S. torvum is used in around 70 % of protected crops with positive results, when compared to non-grafted eggplant (Solanum melongena), but grown in soils disinfested with methyl bromide. The species provides a good economic and environmental advantage, since the rootstock vigor allows a biannual eggplant cropping, with significantly lower planting costs and a considerable increase in agricultural sustainability (Scrimali 2014).
This species has also been widely exploited for its chemical constituents. Several parts (fruits, leaves and roots) are used to isolate a vast array of compounds. Its aqueous extracts inhibit pathogenic fungi such as Pyricularia oryzae, Alternaria alternata, Trichoconiella padwickii, Fusarium oxysporum and Fusarium solani (Jaiswal 2012). In pharmacological studies, several Solanaceae species, including S. torvum, have shown hypotensive action in the cardiovascular system (Batitucci 2003).
The main limitation for the practical use of S. torvum as a rootstock in the commercial production of grafted eggplant, as well as in genetic breeding programs, is the poor and irregular germination caused by seed dormancy (Ginoux & Laterrot 1991, Miura et al. 1993, Gousset et al. 2005, Hayati et al. 2005).
Among the procedures that may increase seed germination is seed imbibition in water or in solutions capable of promoting growth, whether by immersion or simply with a moistened substrate (Rosseto et al. 2000). The use of potassium nitrate (KNO3), reported as one of the primary agents to overcome dormancy in numerous species, may cause structural changes in the seeds, decreasing water absorption by the pericarp and thereby increasing germination (Faron et al. 2004). Gibberellins, in turn, play a key role in regulating germination. As endogenous enzyme activators, they are involved in both dormancy breaking and reserve hydrolysis control (Soares et al. 2009).
Methods that make germination more regular and predictable are necessary in production systems that use rootstocks, to synchronize the production of the seedlings to be grafted with that of the rootstocks. Thus, this study aimed at comparing treatments that improve the germinative parameters of S. torvum seed batches, in order to facilitate and accelerate rootstock production.
The study was conducted between April and July 2014, at the University of Bologna, in Bologna, Italy.
The experimental design was completely randomized, in a 3 x 4 factorial scheme. Three S. torvum seed batches in storage were assessed, submitted to four treatments: substrate moistening with water (H2O), with potassium nitrate (0.2 % KNO3) or with gibberellic acid (0.05 % GA3), and seed imbibition in gibberellic acid for 24 h followed by planting on substrate moistened with GA3 (imbibition with 0.05 % GA3).
A total of fifty S. torvum seeds were sown per plate (containing 3 ml of the respective treatment), with three replications per treatment. The substrate used was germination-specific filter paper. The seeds were incubated in chambers with a controlled environment, under 16 h of light at 20 ºC and 8 h of dark at 30 ºC. The Petri dishes were randomly disposed inside the chamber and rotated daily.
Germination was counted daily up to 35 days after sowing (DAS), when the experiment was finalized. Seeds were considered germinated when they exhibited root protrusion of more than 2 mm. The following variables were calculated: first germination count, obtained at 7 DAS by counting the number of seeds with root protrusion; germination (G), calculated by the formula G = (N/50) x 100, where N = number of germinated seeds at the end of the test (Labouriau & Valadares 1976), with results expressed as a percentage; germination speed index (GSI), calculated by the formula GSI = Σ(ni/ti), where ni = number of seeds that germinated at time i and ti = time after starting the test, with i = 1 → 35 days (Maguire 1962), dimensionless; mean germination time (MGT), calculated by the formula MGT = (Σni ti)/Σni, where ni = number of seeds germinated per day and ti = incubation time, with i = 1 → 35 days (Labouriau & Valadares 1976), in days; and mean germination speed (MGS), calculated by the formula MGS = 1/t, where t = mean germination time (Kotowski 1926), in days⁻¹.
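The formulas above translate directly into code. The sketch below computes the first germination count, G, GSI, MGT and MGS from a hypothetical vector of daily germination counts for one 50-seed plate.

```python
# Germination indices from daily counts of newly germinated seeds.
import numpy as np

daily_counts = np.zeros(35, dtype=int)            # ni: seeds germinating on day i
daily_counts[4:12] = [2, 5, 9, 8, 6, 4, 3, 1]     # hypothetical batch of 50 seeds

days   = np.arange(1, 36)                         # ti, days after sowing
n_sown = 50

first_count = daily_counts[:7].sum()                      # first count at 7 DAS
G   = daily_counts.sum() / n_sown * 100                   # germination (%)
GSI = np.sum(daily_counts / days)                         # Maguire (1962)
MGT = np.sum(daily_counts * days) / daily_counts.sum()    # days
MGS = 1.0 / MGT                                           # Kotowski (1926), days⁻¹

print(f"first count = {first_count}, G = {G:.0f} %, "
      f"GSI = {GSI:.2f}, MGT = {MGT:.1f} d, MGS = {MGS:.3f} d⁻¹")
```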
The data were submitted to analysis of variance using the F-test. Data that did not fit an ANOVA assumption were transformed to (x + 1)^0.5. When the F-test was significant, treatment means were compared by the Tukey test at 5 %.
The interaction batch x treatment was significant for all the germination parameters analyzed. The analysis of the first germination count showed a significant difference among treatments only for batch 3 (Table 1), where seeds pre-imbibed in gibberellic acid for 24 h exhibited the largest number of germinated seeds at 7 DAS. When GA3 was used only to moisten the substrate, the response differed statistically from the other treatments, being lower only than the treatment involving pre-imbibition of seeds in gibberellic acid. This response shows the marked effect of gibberellic acid on the germination process, activating hydrolytic enzymes that mobilize reserve substances.
The effects of GA3 on germination depend largely on the differences in physiological condition among seeds caused by their ripening, post-ripening and aging conditions (Suzuki & Takahashi 1968). The positive response to GA3 observed only in batch 3 is possibly due to its better physiological condition, when compared to the others. This could occur due to a larger amount of reserves accumulated in the seeds. Seed size was not measured in this study.
The final germination percentage at 35 DAS did not differ statistically among treatments for batches 1 and 3 (Table 2). However, there was a difference among treatments for batch 2, which exhibited a larger number of germinated seeds when treated with 0.2 % potassium nitrate, not differing statistically from the treatment with substrate moistened with GA3. Lower germination percentages at the end of the assessments were obtained for the treatments with water and pre-imbibition with GA3.
In a study with S. torvum, Ranil et al. (2015) found that treatments with GA3 and KNO3, among others such as immersion in water for 24 h and light irradiation, have highly positive effects on germination stimulation. Similarly, applications of GA3 or KNO3 were also efficient for other Solanum species (Hayati et al. 2005, Wei et al. 2010, Gisbert et al. 2011).
The germination speed index was higher for treatments with GA3, whether only in the substrate or in pre-imbibition, for batches 1 and 2, and only in pre-imbibition for batch 3. Similarly, studies with Genipa americana L. seeds pre-imbibed in liquid gibberellin (4 % GA3) for 12 h obtained a higher germination speed index, when compared to pre-imbibition in water (Prado Neto et al. 2007). The germination speed index for potassium nitrate did not differ from the standard treatment with water. In some species, moistening seeds with potassium nitrate does not produce significant effects on dormancy breaking (Martins et al. 2012).
Although KNO3 is widely used in laboratories to overcome dormancy, its use is recommended mostly in species whose coats are impermeable to gases, since it is believed that contact with substances in the pericarp decreases resistance and facilitates gas exchange (Frank & Nabinger 1996). Applying KNO3 may accelerate water and oxygen capture, as well as improve the nutritional status of seeds (McIntyre et al. 1996). Hayati et al. (2005) observed that low concentrations of KNO3 (0.1 %) were efficient in breaking S. torvum dormancy, and that germination percentages declined significantly with an increase in KNO3. The positive effects of this chemical substance are not always observed, because it decreases the osmotic pressure of the substrate relative to the seeds, thereby precluding imbibition (Xia & Kermode 2000).
Means followed by the same lower case letter do not differ horizontally, while those followed by the same upper case letter do not differ vertically. Data were transformed by the formula (x + 1)^0.5.
The exposure of S. torvum seeds to gibberellic acid decreased the mean germination time for the three batches assessed; that is, fewer days elapsed between the first and last germinated seed (Table 3). The treatment with pre-imbibition of seeds in GA3 required fewer days for germination in two of the batches assessed. Treatments with water and potassium nitrate did not differ statistically, but exhibited higher mean germination time values than the treatments with GA3. Studies with Solanum betaceum indicated no statistical difference in mean germination time with the use of gibberellic acid, when compared to hydro-priming (Kosera Neto et al. 2015).
Mean germination speed data corroborate those observed for mean germination time. The treatment with pre-imbibition in GA3 displayed a higher mean germination speed for two of the batches assessed. However, for the third batch, this treatment did not differ statistically from the use of GA3 only in the substrate. Batch 3 showed greater physiological potential, given that a lower mean germination time and higher germination speed index and mean germination speed values were observed, when compared to the other batches.
Comparing batches, batch 3 obtained a higher germination percentage, germination speed index and mean germination speed, and a lower mean germination time, than the other batches (Table 3), irrespective of treatment. The superiority of batch 3 may be associated with greater physiological vigor. Vigor influences all germinative aspects, particularly characteristics such as speed, uniformity and mass of emerged seedlings (Carvalho & Nakagawa 2000).
The use of gibberellins in the germination phase may improve seed vigor and germination in a number of species, as observed here for S. torvum, but they become more important when the seeds are under adverse conditions (Ferreira et al. 2005, Lopes & Sousa 2008). Gibberellins accelerate the germination and emergence of several species, while for others they promote a slight or no response (Soares et al. 2009). Studies with GA3 in Coffea arabica L. seeds in vitro showed that this regulator did not contribute to accelerating germination or final seedling development, possibly because the seeds already exhibit an adequate level of endogenous gibberellin (Moraes et al. 2012). Dormancy in S. torvum seeds is not attributed to their seed coat acting as a physical barrier to water absorption (Hayati et al. 2005). However, the physical resistance of the endosperm may represent a barrier to root protrusion (Nomaguchi et al. 1995, Leubner-Metzger 2002).
In tomato (Solanum lycopersicum), the embryo is embedded in a rigid endosperm. The region of the endosperm near the root tip weakens to allow embryo emergence (Groot & Karrssen 1987). Enzymes such as expansin, β-1,3-glucanase, endo-β-mannanase and xyloglucan endotransglucosylase/hydrolase are involved in weakening the endosperm cap, and the mRNA transcription levels of the genes that encode these enzymes are induced by gibberellic acid (Chen & Bradford 2000, Nonogaki et al. 2000, Wu et al. 2001, Chen et al. 2002). Thus, gibberellic acid may be involved in weakening the endosperm rigidity, decreasing the resistance to root penetration, stimulating root growth and resulting in accelerated germination. Hayati et al. (2005) concluded that the dormancy mechanisms in S. torvum may involve the mechanical resistance of the endosperm, the presence of inhibitors in seed coats and the physiological status of the embryo.
The treatments with GA3, with either pre-imbibition or only moistened substrate, showed the best germination speed index, mean germination time and mean germination speed. The response for the final germination percentage did not differ among treatments for batches 1 and 3, while, for batch 2, the best treatment was KNO3.
Table 3. Mean germination time and mean germination speed of three Solanum torvum seed batches submitted to different treatments. Means followed by the same lower case letter do not differ horizontally, while those followed by the same upper case letter do not differ vertically.
| 3,368.4 | 2017-01-01T00:00:00.000 | [ "Biology" ] |
Electrical Stimulation Degenerated Cochlear Synapses Through Oxidative Stress in Neonatal Cochlear Explants
Neurostimulation devices use electrical stimulation (ES) to substitute, supplement or modulate neural function. However, the impact of ES on the structures it modulates is largely unknown. For example, recipients of cochlear implants using electroacoustic stimulation experienced a delayed loss of residual hearing over time after ES, even though ES had no impact on the morphology of hair cells. In this study, using a novel model of cochlear explant culture with charge-balanced biphasic ES, we found that ES did not change the quantity or morphology of hair cells but decreased the number of inner hair cell (IHC) synapses and the density of spiral ganglion neuron (SGN) peripheral fibers. Inhibiting calcium influx with voltage-dependent calcium channel (VDCC) blockers attenuated the loss of SGN peripheral fibers and IHC synapses induced by ES. ES increased ROS/RNS in cochlear explants, and the inhibition of calcium influx abolished this effect. Glutathione peroxidase 1 (GPx1) and GPx2 in cochlear explants decreased under ES; ebselen abolished this effect and attenuated the loss of SGN peripheral fibers. These findings demonstrate that ES induced the degeneration of SGN peripheral fibers and IHC synapses in a current intensity- and duration-dependent manner in vitro. Calcium influx and the resulting oxidative stress played an important role in this process. Additionally, ebselen might be a potential protector against ES-induced cochlear synaptic degeneration.
INTRODUCTION
Neurostimulation devices, for example visual prosthetics, auditory prosthetics, deep brain stimulation devices, prosthetics for pain relief, motor prosthetics and brain-computer interfaces, are promising therapeutics for neurological disorders, supplanting or supplementing the input and/or output of the nervous system. These devices were initially designed to bypass neural deficits that occurred as a result of injuries or diseases. Currently, neurostimulation devices are even being developed to modulate existing neural function to improve performance, especially for future brain-computer interfaces. Cochlear implants (CIs) are the most widely used neural prosthetic. Traditional CIs restore hearing perception by delivering electrical signals converted from sound information to spiral ganglion neurons (SGNs), bypassing the defective or missing mechanosensory structures of the organ of Corti, i.e., hair cells. In the last decade, electric-acoustic stimulation (EAS) technology was developed for patients with severe or profound high-frequency hearing loss and residual low-frequency hearing (Von Ilberg et al., 1999; Gantz and Turner, 2003; Kiefer et al., 2005). This technology uses a short electrode array in the basal to middle part of the cochlear duct, leaving the apical part intact to preserve the residual low-frequency hearing. Patients are then able to receive acoustic signals at the apical part of the cochlea and electrical stimulation (ES) at the basal and middle parts of the cochlea simultaneously. Compared to full-insertion CI, EAS technology significantly improves music appreciation and speech recognition in background noise (Turner et al., 2004, 2008; Gfeller et al., 2006). Accordingly, the preservation of residual low-frequency hearing is critical to EAS recipients. Unfortunately, clinical trials showed that 30-75% of EAS recipients experienced delayed progressive loss of residual low-frequency hearing over time after the activation of EAS (Gantz et al., 2009; Gstoettner et al., 2009; Santa Maria et al., 2013). Understanding how existing hearing function deteriorates under EAS might benefit not only the preservation of the residual hearing of EAS recipients but also the protection of the existing neural functions on which other neurostimulation devices depend. However, the mechanism of this delayed hearing impairment is largely unknown. Animal studies suggested that reduced endocochlear potential due to lateral wall or stria vascularis damage (Wright and Roland, 2013) and a disturbed traveling wave due to fibrosis or new bone growth (Choi and Oghalai, 2005) were associated with the hearing loss of EAS recipients. Nevertheless, there is still a lack of strong evidence to support these theories. Previous animal studies demonstrated that ES did not cause any morphological changes in hair cells or SGNs (Ni et al., 1992; Shepherd et al., 1994; Coco et al., 2007; O'Leary et al., 2013). Notably, to the best of our knowledge, the status of the synapses between SGNs and inner hair cells (IHCs) in EAS-induced hearing loss has not yet been investigated, even though the loss of IHC synapses has been shown to play an important role in noise-induced hearing loss (Kujawa and Liberman, 2009; Lin et al., 2011) and in age-related hearing loss (Makary et al., 2011; Sergeyenko et al., 2013).
Cochlear implants use charge-balanced biphasic pulses to stimulate SGNs. The depolarization of the SGN membrane caused by ES results in calcium influx through various types of voltage-dependent calcium channels (VDCCs). Excessive calcium influx could lead to the injuries of SGN (Hegarty et al., 1997;Roehm et al., 2008) and hair cells (Fridberger et al., 1998). Oxidative stress also plays important roles in hearing loss induced by noise, aminoglycoside antibiotics, cisplatin and aging (Choi and Choi, 2015;Sheth et al., 2017;Tavanai and Mohammadkhani, 2017). We postulated that excessive calcium influx through VDCCs and the resulting increase in oxidative stress might be involved in the loss of residual hearing due to chronic ES.
In this study, we used cochlear explant cultures with ES of charge-balanced biphasic pulses to investigate the impact of ES on SGN peripheral fibers, hair cells and their synapses. We demonstrated that CI-type ES could induce the degeneration of IHC synapses and SGN peripheral fibers through calcium influx and the resulting oxidative stress.
Cochlear Explant Culture
All procedures were approved by the Ethics Review Board of Eye and ENT Hospital of Fudan University (No. 2013024). Sprague Dawley rat pups of both sexes, aged 4-6 postnatal days, were provided by Shanghai SIPPR-Bk Lab Animal Co., Ltd. The cochlear explant culture was previously used to investigate the excitotoxic damage of IHC-SGN synapses (Wang and Green, 2011). Briefly, the cochleae were dissected out in ice-cold PBS. The osseous labyrinth, stria vascularis and spiral ligament were carefully removed. With the organ of Corti and modiolus preserved intact, Reissner's membrane and the tectorial membrane were carefully removed with fine forceps. After the upper and basal turns were cut off, the middle turns were cut into small pieces and plated on poly-L-lysine-treated chamber slides. We usually dissected 5 pups and collected 10 cochleae at one time; the middle parts of the cochlear tissues were then pooled together and each was cut into 3-4 small pieces. Six pieces of cochlear tissue were then randomly placed into each chamber. Unless otherwise indicated, throughout the experiments the explants were maintained in a 37 °C humidified incubator with 5% CO2, in high glucose Dulbecco's modified eagle's medium (DMEM, Life Technologies, 11965) with N2 supplement (Life Technologies, 17502-048), 10% fetal bovine serum (Gibco, 10099-141), 10 µg/ml insulin (Sigma-Aldrich, I6634), 50 ng/ml neurotrophin-3 (NT-3, Sigma-Aldrich, N1905) and 50 ng/ml brain-derived neurotrophic factor (BDNF, Sigma-Aldrich, B3795). The explants were first allowed to settle onto the chamber floor for 24 h before the following treatments. Floating explants were discarded, and the adherent ones were used for the subsequent experiments.
Chamber Slide With ES
To investigate the impact of ES on cochlear structures, we established a culture system of cochlear explants under ES (Figure 1). Briefly, two parallel platinum-iridium wires were introduced into a four-well chamber slide system (154526, Thermo Scientific) through four holes at the corners against the chamber floor. The holes were sealed with silicon glue to secure the wires, which were connected to a multichannel charge-balanced biphasic pulse generator (Listent Medical Tech Co., Ltd.). The charge-balanced biphasic pulses used for ES had adjustable amplitudes with a 65-µs pulse width, 8-µs open-circuit interphase gap and 4862-µs short-circuit phase at a frequency of 200 Hz. The distance between the two parallel wires was 1 cm, and the volume of culture medium in each chamber was 0.6 ml. The maximum charge density used in this study was 0.043 µC/cm²/phase, at the maximum current intensity of 400 µA. This charge intensity was far less than the 15 to 65 µC/cm²/phase suggested as the maximum charge intensity in commercial CIs (Zeng et al., 2008).
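As a back-of-envelope check of the quoted charge density, the charge per phase is the current multiplied by the phase width, divided by the electrode surface area. The electrode area below is inferred from the paper's numbers and is therefore an assumption of this sketch.

```python
# Charge density per phase = (current × phase width) / electrode area.
current_a = 400e-6   # 400 µA, maximum intensity used
phase_s   = 65e-6    # 65 µs pulse width per phase
area_cm2  = 0.60     # assumed exposed platinum-iridium wire area (back-calculated)

charge_uc = current_a * phase_s * 1e6   # µC per phase (~0.026 µC)
density   = charge_uc / area_cm2        # µC/cm²/phase
print(f"{charge_uc:.3f} µC/phase -> {density:.3f} µC/cm²/phase")  # ≈ 0.043
```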
Measuring Reactive Oxygen Species (ROS)/Reactive Nitrogen Species (RNS) Activity
The total ROS/RNS activity was measured with a ROS/RNS Assay Kit (Cell Biolabs, STA-347-5) according to the provided protocol. Briefly, cochlear explant cultures under the different conditions were removed and rapidly homogenized under ice-cold conditions. The homogenates were then centrifuged, and the supernatants were reacted with the dichlorofluorescein-based DiOxyQ probe for spectrofluorimetric measurement.
Real-Time PCR
Real-time PCR was conducted using an Applied Biosystems 7500 Real-Time PCR System. Cochlear explants were harvested from the cover slips, and total RNA was purified with an RNeasy Plus Micro Extraction Kit (Qiagen, 74034). The RNA was then reverse transcribed with a High Capacity RNA-to-cDNA kit (TaKaRa, RR036A; Applied Biosystems, Foster City, CA, United States). The following primer pairs were designed using Primer3 software: β-actin, (F) CCTCTATGCCAACACAGT and (R) AGCCACCAATCCACACAG, with an amplicon length of 155 bp; and glutathione peroxidase 2 (Gpx2), (F) AGACACTGGGAAACCGAAGC and (R) AAGGAAATGGGTGGCAGGAA, with an amplicon length of 65 bp.
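The quantification model is not stated in the paper; a common choice for relative expression against a reference gene such as β-actin is the 2^(−ΔΔCt) method of Livak and Schmittgen, sketched below with hypothetical Ct values.

```python
# Relative mRNA expression by the 2^(-ΔΔCt) method (assumed model, not stated
# in the paper). All Ct values are hypothetical placeholders.
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    delta_ct       = ct_target - ct_ref             # normalize sample to β-actin
    delta_ct_ctrl  = ct_target_ctrl - ct_ref_ctrl   # normalize control to β-actin
    delta_delta_ct = delta_ct - delta_ct_ctrl       # relative to non-ES control
    return 2 ** (-delta_delta_ct)

# GPx2 in a 200 µA/48 h explant vs. a non-ES control (hypothetical Ct values)
fold = relative_expression(ct_target=26.8, ct_ref=17.2,
                           ct_target_ctrl=25.1, ct_ref_ctrl=17.3)
print(f"GPx2 fold change vs. non-ES: {fold:.2f}")   # <1 indicates downregulation
```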
Quantitative Analysis of SGN Peripheral Fibers, IHC Synapses and Hair Cells
Digital images of immunostained cochlear explants were acquired with a Leica SP8 confocal microscope. Serial images of each explant at a 0.3 µm interval (z-axis) were recorded to generate a z-stack of images that could be projected onto a single plane (z-projection). Images of hair cells, IHC synapses and SGN peripheral fibers were obtained simultaneously with a 60×, 1.5 numerical aperture objective, while in other experiments hair cells and SGN peripheral fibers were scanned at 40×. The images were then analyzed with ImageJ software. The number of SGN-IHC synapses was determined by counting, slice by slice, the PSD-95 puncta on IHCs that were in contact with NF200-positive neurites. Each punctum was counted in the first slice in which it appeared in focus, to avoid double counting. In the NF200 images, the SGN peripheral nerve fibers near the inner hair cells were crossing and overlapping, making individual fibers hard to distinguish and count. We therefore used the gray value of the immunofluorescence in the NF200 images to quantify the relative density of SGN peripheral nerve fibers. Images of the SGN peripheral fibers were captured using the same exposure time and light intensity and at the same sitting. First, MYO7A and NF200 images from the same location were converted to 8-bit grayscale images and combined into a stack in ImageJ. A rectangular area of 40 × 200 pixels was selected closely against the bases of the inner hair cells in the MYO7A images; that area coincided with the region in which the PSD-95 puncta were distributed. The mean gray value of the same area in the NF200 images, minus that of a background area, was then measured and taken as the relative density of SGN peripheral nerve fibers (Figure 2).
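A minimal NumPy sketch of this density measurement is shown below: the mean gray value of a 40 × 200 px region at the IHC bases in the NF200 channel, minus that of a background region. The image array and the region coordinates are placeholders.

```python
# Relative fiber density = ROI mean gray value - background mean gray value.
import numpy as np

# Placeholder 8-bit NF200 z-projection; in practice this is the exported image
nf200 = np.random.randint(0, 255, size=(1024, 1024)).astype(float)

def mean_gray(img, top, left, h, w):
    """Mean gray value of a rectangular region."""
    return img[top:top + h, left:left + w].mean()

roi_value  = mean_gray(nf200, top=500, left=400, h=40, w=200)  # against IHC bases
background = mean_gray(nf200, top=50,  left=50,  h=40, w=200)  # fiber-free region

relative_fiber_density = roi_value - background
print(f"Relative density of SGN peripheral fibers: {relative_fiber_density:.1f}")
```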
Statistical Analysis
Statistical analysis was performed with GraphPad Prism 7 (GraphPad Software, Inc., CA, United States). Unless otherwise indicated, the significance of differences among conditions was assessed by one-way ANOVA followed by Dunnett's multiple comparisons test.
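The same comparison scheme can be reproduced in code; the snippet assumes SciPy ≥ 1.11 (which provides scipy.stats.dunnett) and uses hypothetical fiber-density measurements, with the non-ES group as the control.

```python
# One-way ANOVA followed by Dunnett's test against the non-ES control group.
import numpy as np
from scipy.stats import f_oneway, dunnett   # dunnett requires SciPy >= 1.11

non_es = np.array([7.8, 7.4, 7.5, 7.6])     # fiber density, control (hypothetical)
es_100 = np.array([4.3, 4.0, 4.2, 4.1])
es_200 = np.array([2.5, 2.2, 2.4, 2.3])
es_400 = np.array([1.2, 1.0, 1.1, 1.1])

print(f_oneway(non_es, es_100, es_200, es_400))      # overall group effect

res = dunnett(es_100, es_200, es_400, control=non_es)
print(res.pvalue)   # one p-value per ES group vs. the non-ES control
```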
ES Decreased the Quantity of SGN Peripheral Fibers and IHC Synapses but Not the Quantity of Hair Cells
To investigate the impact of ES on cochlear structure, we cultured cochlear explants in a chamber slide system with multichannel charge-balanced biphasic pulse generators (Figure 1), as demonstrated in our previous work (Shen et al., 2016). The cochlear explants were electrically stimulated by charge-balanced biphasic electrical pulses with an amplitude of 50 or 100 µA for 8, 24 or 48 h. Cochlear explants cultured for the same durations without ES were used as the respective control groups (non-ES groups). The quantities of outer hair cells (OHCs), IHCs and anti-PSD95-labeled puncta and the density of SGN peripheral fibers (fiber density) near the IHCs were measured after the respective immunofluorescence labeling. The ratio of the number of OHCs to the number of IHCs (OHC/IHC ratio) and the ratio of the number of PSD95 puncta to the number of IHCs (PSD95/IHC ratio) were used to evaluate the quantity of hair cells and IHC synapses, respectively. After 8 or 24 h, there was no statistical difference in the OHC/IHC ratio, fiber density or PSD95/IHC ratio among the non-ES, 50 and 100 µA groups (P values in Table 1 and Figures 3A-C). After 48 h, the PSD95/IHC ratio of the 48 h/50 µA group was also comparable to that of the non-ES group (P = 0.9170, Figures 3C,F,J), but the fiber density was less than that in the non-ES group (P = 0.0097, Figures 3B,E,G,I,K). The OHC/IHC ratio of the ES groups remained comparable to that of the non-ES group (Figure 3A). Additionally, there was no obvious difference between the hair cell morphology of ES explants and non-ES explants (Figures 3D,H,L).
The Quantity of IHC Synapses and SGN Peripheral Fibers Decreased Synchronously Under ES
We further used higher intensities of biphasic charge-balanced pulses to stimulate the cochlear explants for 48 h. Compared to the non-ES group, whose PSD95/IHC ratio was 25.38, the PSD95/IHC ratios of the 100, 200 and 400 µA groups significantly decreased to 20.06, 14.21 and 6.64, respectively (Figure 4S). Additionally, the fiber densities of the 100, 200 and 400 µA groups significantly decreased to 4.17, 2.34 and 1.10, respectively, compared to 7.58 in the non-ES group (Figure 4R). The density of SGN peripheral fibers and the quantity of IHC synapses decreased synchronously with increasing ES intensity (Figures 4E-P). However, there was still no significant difference in the morphology of hair cells or in the OHC/IHC ratios among these groups (Figures 4A-D,Q). There was a significant correlation between fiber density and PSD95/IHC ratio (Pearson test, r = 0.954, P = 0.046, Figure 4T). These results demonstrated that ES synchronously decreased the quantity of IHC synapses and SGN peripheral fibers in a current intensity-dependent manner, but did not change the morphology or quantity of hair cells. Thus, we used only fiber density to evaluate the change in cochlear structure in the following experiments.
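The reported correlation can be checked directly from the four group means quoted above (non-ES, 100, 200 and 400 µA):

```python
# Pearson correlation over the group means quoted in the text; this reproduces
# the reported r = 0.954, p = 0.046.
from scipy.stats import pearsonr

fiber_density = [7.58, 4.17, 2.34, 1.10]    # non-ES, 100, 200, 400 µA
psd95_per_ihc = [25.38, 20.06, 14.21, 6.64]

r, p = pearsonr(fiber_density, psd95_per_ihc)
print(f"r = {r:.3f}, p = {p:.3f}")          # -> r = 0.954, p = 0.046
```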
Inhibition of Calcium Influx Attenuated the ES-Induced Loss of SGN Peripheral Fibers and IHC Synapses
To investigate the role of calcium influx through VDCCs in the ES-induced degeneration of SGN peripheral fibers and IHC synapses, we inhibited calcium influx in 48 h/100 µA cochlear explants by bath application of various VDCC blockers: 10 µM of the L-type Ca2+ channel blocker VPL, 1 µM of the N-type Ca2+ channel blocker GVIA, 1 µM of the P/Q-type Ca2+ channel blocker IVA, or their mixture (CCBM). The fiber density and PSD95/IHC ratio of the 48 h/100 µA group were significantly lower than those of the non-ES group, as described above (P < 0.0001). However, the fiber density and PSD95/IHC ratio of the groups treated with any VDCC blocker were comparable to those of the non-ES group (P values in Table 2 and Figures 5A,C). We also inhibited calcium influx in 48 h/100 µA cochlear explants by maintaining them in Ca2+-free medium or in medium with 10 µM Cd, a non-selective calcium channel blocker. As a result, the fiber density and PSD95/IHC ratio were again comparable to those of the non-ES group (P values in Table 3 and Figures 5B,D). These results suggested that calcium influx through VDCCs is vital to the ES-induced degeneration of SGN peripheral fibers and IHC synapses.
ES Increased the Activity of ROS and RNS in Cochlear Explants
To investigate whether ES caused oxidative stress in cochlear explants by increasing calcium influx, we measured ROS/RNS activity in explants under various intensities of ES for 48 h. ROS/RNS activities in explants under ES with amplitudes of 25, 50, 100, 200 and 400 µA increased to 2.9-, 2.1-, 1.7-, 4.4- and 6.5-fold that of the non-ES group, respectively (P = 0.0020, 0.0442, 0.1606, <0.0001 and <0.001, respectively, compared with the non-ES group, Figure 6A). In addition, ROS/RNS activity increased in an intensity-dependent manner when cochlear explants were stimulated with amplitudes greater than 50 µA (Figure 6B). To investigate the role of calcium influx through VDCCs in this change in ROS/RNS activity, we added a mixture of VPL, GVIA and IVA to the culture medium of 48 h/100 µA cochlear explants. As a result, ROS/RNS activity decreased to a level comparable to that of the non-ES group (P = 0.1072, Figure 6C). These results suggested that ES could increase ROS/RNS activity and cause oxidative stress by increasing calcium influx through VDCCs.
ES Inhibited GPx Expression in Cochlear Explants
We hypothesized that the ES-induced increase in ROS/RNS activity in cochlear explants might be due to the altered expression of oxidative stress-related genes. We therefore evaluated the mRNA expression levels of GPx1 and GPx2 in cochlear explants under various intensities of ES and without ES. Significant decreases in both GPx1 and GPx2 expression levels were observed in 200 µA/48 h- and 400 µA/48 h-treated explants compared with non-ES explants (GPx1: 200 µA, P = 0.0231 and 400 µA, P = 0.0233; GPx2: 200 µA, P = 0.0484 and 400 µA, P = 0.0228; Figures 7A,B). The GPx1 expression level in 100 µA/48 h-treated explants also tended to decrease compared to that in non-ES explants (P = 0.0647, Figure 7A). These results demonstrated that ES could downregulate GPx1 and GPx2 mRNA expression levels.
Ebselen Prevented the Decrease of GPx Expression as Well as the Loss of SGN Peripheral Fibers in Cochlear Explants Exposed to ES
Ebselen is an organoselenium compound that acts as a GPx mimetic and is thereby able to prevent the cellular damage induced by the ROS and RNS generated and accumulated during various cellular processes. To investigate whether the ES-mediated downregulation of GPx and the increase in ROS/RNS activity caused the degeneration of SGN peripheral fibers and IHC synapses, we maintained cochlear explants in medium with 40 µM ebselen for 48 h. As a result, the GPx1 and GPx2 expression levels in 100 µA/48 h-, 200 µA/48 h- and 400 µA/48 h-treated cochlear explants were comparable to those in non-ES explants (Figures 7A,B). Moreover, the density of SGN peripheral fibers in all ES-treated groups was also comparable to that in the non-ES group (Figures 7C-P).
These results indicated that the ES-induced downregulation of GPx1 and GPx2 expression levels caused the degeneration of SGN peripheral fibers in cochlear explants.
Figure 5 (legend, partial). When ES-treated explants were also treated with 10 µM Cd or maintained in calcium-free medium (Ca−), the PSD95/IHC ratio was comparable to that of non-ES explants, n = 9 in each group. (C) The density of SGN peripheral fibers was comparable in ES-treated explants also treated with VPL, GVIA, IVA or CCBM and in non-ES explants, n = 12 in each group. (D) The density of SGN peripheral fibers was also comparable in ES-treated cochlear explants treated with Cd or maintained in calcium-free medium and in non-ES explants, n = 12 in each group. *p < 0.001 compared with any other group in the same experiment, one-way ANOVA followed by Dunnett's multiple comparisons. Data represent the mean + SEM.
Increased Oxidative Stress in Cochlear Explants Induced by H 2 O 2 Treatment Resulted in the Loss of SGN Peripheral Fibers
Treatment of cochlear explants with 250 µM H2O2 for 8 h significantly decreased the density of SGN peripheral fibers compared with explants without H2O2 treatment (Figures 8C,G,K). However, the quantity and morphology of hair cells and the fiber density of explants treated simultaneously with 250 µM H2O2 and 40 µM ebselen for 8 h did not differ (P = 0.3828, Figures 8B,D,F,H,J,L,N) from those of explants without treatment. These results further indicated that oxidative stress could induce the degeneration of SGN peripheral fibers.
Abbreviations: Non-ES/Ca−, cochlear explants without electrical stimulation in calcium-free medium; Non-ES/Cd, cochlear explants without electrical stimulation in medium with cadmium chloride; 100 µA/Ca−, cochlear explants with 100 µA electrical stimulation in calcium-free medium; 100 µA/Cd, cochlear explants with 100 µA electrical stimulation in medium with cadmium chloride.
DISCUSSION
Electrical stimulation is used by CIs and other neurostimulation devices to activate target neurons. The impact of ES on targeted and related neural structures, when neurostimulation devices are used as modulators of existing neural function rather than as substitutes for non-functioning neural tissue, warrants additional attention. As shown in cochlear implant recipients using EAS technology, there is a delayed loss of residual low-frequency hearing function (Von Ilberg et al., 1999; Gantz and Turner, 2003; Kiefer et al., 2005). Here we showed that ES can degenerate the connections between the targeted neurons and the modulated neural structures in vitro. In addition, calcium influx through VDCCs and the resulting oxidative stress played important roles in this effect.
Our study suggested that continuous charge-balanced biphasic ES with an intensity of up to 400 µA for 48 h did not change the number of hair cells in cochlear explants. In accordance with our study, a recent in vitro study also reported that ES could induce synaptic changes in cochlear tissues (Peter et al., 2019). In addition, several previous animal studies found no morphological changes in hair cells and SGNs associated with ES (Ni et al., 1992; Shepherd et al., 1994; Coco et al., 2007; Irving et al., 2013; O'Leary et al., 2013), even though low-frequency hearing deteriorated after ES (O'Leary et al., 2013; Tanaka et al., 2014). A postmortem histopathological study also suggested that there was no significant loss of SGNs and hair cells in EAS recipients with delayed hearing loss (Quesnel et al., 2015). Our study demonstrated that SGN peripheral fibers and IHC synapses in cochlear explants decreased under ES with the charge-balanced biphasic pulses used by CIs. The charge intensities used in this study were far less than the maximum charge intensities allowed in commercial CIs. However, animal studies are warranted to further investigate whether a similar change causes the residual low-frequency hearing loss of EAS recipients.
Electrical stimulation can induce the activation of VDCCs and result in Ca2+ influx. Calcium influx through VDCCs was involved in the inhibition of SGN neurite extension induced by continuous ES or by membrane depolarization accomplished by raising extracellular K+ (Roehm et al., 2008; Shen et al., 2016). Calcium overload has been shown to cause damage to SGNs (Hegarty et al., 1997; Roehm et al., 2008). Our study suggested that blocking various types of VDCCs by bath application of VDCC blockers, by the non-selective VDCC blocker cadmium or by the removal of extracellular Ca2+ attenuated the ES-induced loss of SGN peripheral fibers and IHC synapses. The mixture of VPL, GVIA and IVA also abolished the ES-induced increase in ROS/RNS activity in cochlear explants. These results suggest that calcium influx through VDCCs plays a key role in ES-induced cochlear synaptic degeneration.
The ES-induced loss of SGN peripheral terminals and IHC synapses with preservation of hair cells and SGNs is similar to the changes that appear in the early stage of noise-induced hearing loss (Kujawa and Liberman, 2009; Lin et al., 2011). Previous studies have suggested that excitotoxicity and calcium overload play critical roles in noise-induced hearing loss (Le Prell et al., 2007; Kujawa and Liberman, 2009). Mimicking excitotoxicity in cochlear explant culture by brief treatment with NMDA and kainate also resulted in the loss of IHC synapses and SGN peripheral axons with the organ of Corti and SGNs intact (Wang and Green, 2011). Taken together, these findings show that the manifestations of cochlear explants under ES resemble the findings of animal studies of chronic CI stimulation and noise-induced hearing loss, and of the in vitro study of excitotoxicity in cochlear explants, suggesting that excitotoxicity and calcium overload might play important roles in delayed EAS hearing loss. This theory is supported by our result that the inhibition of calcium influx prevented the loss of IHC synapses and SGN peripheral terminals. Interestingly, the close correlation between EAS hearing loss and a history of noise-induced hearing loss shown in a recent clinical study provides further support for this postulation (Kopelovich et al., 2014).
Our study showed that ES induced an increase in ROS/RNS activity in cochlear explants, and this increase was closely correlated with the intensity of ES. After the increase in ROS/RNS activity was prevented by ebselen, the loss of SGN peripheral fibers in ES-treated cochlear explants was significantly attenuated, to a level comparable to that of non-ES cochlear explants. These results suggested that oxidative stress played an important role in the ES-induced loss of SGN-IHC connections. Oxidative stress has also been reported to play important roles in hearing loss induced by noise, aminoglycoside antibiotics, cisplatin and aging (Choi and Choi, 2015; Sheth et al., 2017; Tavanai and Mohammadkhani, 2017). Excessively high ROS and RNS activity can damage DNA, lipids and proteins, trigger hair cell death and result in hearing loss (Fetoni et al., 2015). We added H2O2 to the culture medium to induce oxidative stress and consequently caused a change similar to the ES-induced loss of IHC-SGN connections.
Figure 7 (legend, partial). (A,B) GPx1 and GPx2 mRNA expression decreased under ES (400 µA, P = 0.0228, compared to the non-ES group; n = 3 in each group, measurements repeated three times). When the cochlear explants were treated with ES and 40 µM ebselen at the same time, the mRNA expression levels of GPx1 and GPx2 were comparable to those in the non-ES group (GPx1: 100 µA/Eb, P > 0.9999; 200 µA/Eb, P = 0.9738; 400 µA/Eb, P = 0.4027; GPx2: 100 µA/Eb, P > 0.9999; 200 µA/Eb, P > 0.9999; 400 µA/Eb, P > 0.9999, compared to the non-ES group). When the cochlear explants were treated with 40 µM ebselen, the OHC/IHC ratio (C; P = 0.7997, P = 0.7629, P = 0.7639 in each group) and the density of SGN peripheral fibers (D; P = 0.7860, P = 0.9025, P = 0.3482, n = 20 in each group) in the 100, 200 and 400 µA/48 h ES groups showed no statistically significant difference from those in the non-ES group. (E-P) Representative images showed no significant loss of hair cells (magenta) or SGN peripheral fibers (green) in ES-treated explants maintained in medium with 40 µM ebselen. Data represent the mean + SEM.
These results further supported our hypothesis that ES induces cochlear synaptic degeneration through calcium influx-induced oxidative stress.
This study demonstrated that GPx1 and GPx2 expression levels significantly decreased after 200 µA/48 h and 400 µA/48 h ES. Interestingly, the GPx1 expression level decreased even after a relatively weak ES of 100 µA/48 h, whereas the GPx2 expression level decreased insignificantly. In accordance with our study, a decrease in GPx1 activity was shown to play an important role in noise-induced hearing loss (Kil et al., 2007). The targeted mutation of the GPx1 gene in mice also increased their vulnerability to noise-induced hearing loss (Ohlemiller et al., 2000). Ebselen can inhibit iNOS (Zembowicz et al., 1993) and mimic the anti-oxidative enzyme GPx (Ohlemiller et al., 2000). Ebselen treatment reduces the severity and duration of noise-induced hearing loss in animals as well as in humans (Pourbakht and Yamasoba, 2003; Kil et al., 2017). In our study, ebselen treatment significantly increased the GPx1 and GPx2 expression levels that were decreased by ES, and the ES-induced loss of SGN peripheral fibers was completely abolished. These results strongly support that the decrease in GPx1 and GPx2 expression levels played a vital role in the ES-induced loss of IHC-SGN connections. Our study also indicated that ebselen might be a promising agent to protect the residual hearing of EAS recipients, although further in vivo studies are needed.
FIGURE 8 | Increased oxidative stress in cochlear explants induced by H2O2 treatment resulted in the loss of SGN peripheral fibers. (A) Treatment of explants with 250 µM H2O2, 40 µM ebselen or both did not cause any significant difference in the OHC/IHC ratio (magenta) from that of explants without these treatments (P = 0.9990, P = 0.9294 and P = 0.8813, respectively; n = 3-5 in each group). (B) Treatment of cochlear explants with 250 µM H2O2 significantly decreased the density of SGN peripheral fibers (green, *P < 0.0001), while treatment with both 250 µM H2O2 and 40 µM ebselen did not (P = 0.3828, n = 8 in each group), compared to explants without H2O2 or ebselen treatment. (C-N) Typical images of cochlear explants treated with 8 h/control (C,G,K), 8 h/Eb (D,H,L), 8 h/H2O2 (E,I,M) and 8 h/H2O2+Eb (F,J,N). The quantity and morphology of IHCs and OHCs (magenta, labeled with anti-Myo7A) were comparable in explants treated with control (C), Eb (D), H2O2 (E) and H2O2+Eb (F). The density of SGN peripheral fibers (green, labeled with anti-neurofilament-200, NF200) was similar in explants treated with control (G), Eb (H) and H2O2+Eb (J), while the density was lower in explants treated with H2O2 (I). Data represent the mean + SEM.
In conclusion, our study demonstrated that ES with charge-balanced biphasic pulses could result in the synchronous degeneration of SGN peripheral fibers and IHC synapses in a current intensity- and duration-dependent manner in vitro. Calcium influx through VDCCs and the resulting oxidative stress played key roles in this effect. Ebselen was shown to be a potential protector against ES-induced cochlear synaptic degeneration. Our study provides novel insights into the delayed hearing loss of EAS recipients as well as the impact of other neurostimulation devices on the neural structures they target. However, only the middle turn of the immature cochlea was used in our study; whether similar changes occur in the mature cochlea in vivo remains to be investigated.
| 6,820.8 | 2019-10-14T00:00:00.000 | [ "Biology" ] |
Clinical and Microbiological Characteristics of a Community-Acquired Carbapenem-Resistant Escherichia coli ST410 Isolate Harbouring blaNDM-5-Encoding IncX3-Type Plasmid From Blood
Objectives: The aim of this research was to investigate the clinical and microbiological characteristics of a case of community-acquired carbapenem-resistant Escherichia coli isolated from a patient with a bloodstream infection in China. Methods: Escherichia coli Huamei202001 was recovered from the first blood culture of a patient hospitalised in China. Antimicrobial susceptibility testing was performed, and the genome was sequenced on an Illumina HiSeq X Ten platform with a 150-bp paired-end approach. The generated sequence reads were assembled using Unicycler, and the whole genome sequence data were analysed using bioinformatics tools. Moreover, the patient and her main family members underwent a faecal screening test for CRE; the positive strain was further isolated, and identification and antimicrobial susceptibility testing were performed. Results: Escherichia coli Huamei202001 belonged to sequence type 410, and a blaNDM-5-encoding IncX3-type plasmid was responsible for the spread of carbapenem resistance. Only the patient had a positive faecal screening test for CRE. Strain Fec01 was identified as E. coli, and its antibiotic susceptibility profile was the same as that of E. coli Huamei202001. Conclusions: Escherichia coli Huamei202001 is defined as community-acquired carbapenem-resistant Enterobacteriaceae. ST410 clones harbouring the blaNDM-5-encoding IncX3-type plasmid are emerging as new high-risk clones globally. Thus, infection control measures should be strengthened to curb the dissemination of IncX3 plasmids.
INTRODUCTION
The emergence and spread of carbapenem-resistant Enterobacteriaceae (CRE) has created an escalating global threat with the dissemination of carbapenemase genes. The most common carbapenemase genes include blaKPC, blaNDM, blaVIM, blaIMP, and blaOXA-48-like (1). A nationwide survey conducted in China showed that acquisition of two carbapenemase genes, blaKPC-2 and blaNDM, was responsible for phenotypic resistance in 90% of the CRE strains tested (58 and 32%, respectively) (2). The incidence of CRE occurring in either community-associated or community-onset patients ranges from 0.04 to 29.5% worldwide; therefore, the presence of CRE in the community poses an urgent public health threat (3). NDM-5-producing ST167 (4), ST290 (5), ST361 (6), and ST410 (7,8) Escherichia coli have been reported. The blaNDM-5-encoding IncX3-type plasmid is responsible for disseminating carbapenem-resistant E. coli ST410, which has developed into a new high-risk clone globally.
According to the Chinese XDR Consensus, the therapeutic options for treating CRE infection are narrow, and the antimicrobials most frequently used in combination therapies include aminoglycosides, carbapenems, colistin, fosfomycin and tigecycline (9). The mortality rate of bloodstream CRE infections approaches 70% (10), but the clinical and microbiological characteristics of E. coli ST410 have not been thoroughly elucidated. This research describes the successful cure of a case of community-acquired bloodstream CRE infection, aiming to investigate its clinical and microbiological characteristics and to provide evidence for the clinical control of CRE.
Isolation and Identification
Escherichia coli Huamei202001 was recovered from the first blood culture collected on January 7, 2020, from a 59-year-old female patient hospitalised at Hwa Mei Hospital, University of Chinese Academy of Sciences, Zhejiang Province, China. The study was reviewed and approved by the Ethics Committee of Hwa Mei Hospital, University of Chinese Academy of Sciences (Approval no. PJ-NBEY-KY-2021-015-01). On January 3, 2020, the patient developed a fever with a peak temperature of 39.7 °C, accompanied by chills, nausea and one episode of vomiting. Two days later, she visited Hwa Mei Hospital and was administered levofloxacin (0.5 g ivgtt, qd) for 2 days, with a white blood cell count of 6.0 × 10⁹/L, 90.0% neutrophils, haemoglobin 128 g/L, platelets 116 × 10⁹/L and C-reactive protein (CRP) 73.0 mg/L. On January 7, as the symptoms continued, the patient was hospitalised (day 0) in the infectious disease ward and diagnosed with sepsis, hypertension and hepatic cysts, with a white blood cell count of 11.3 × 10⁹/L, 90.3% neutrophils, haemoglobin 127 g/L, platelets 60 × 10⁹/L, CRP 250.0 mg/L and procalcitonin 40.37 ng/ml (Supplementary Table 1). A blood culture was collected, and chest and abdominal CT showed some chronic inflammation in both lungs and multiple low-density masses in the liver; thus, an infected liver cyst was considered. Empiric therapy was administered with imipenem (1.0 g q8 h). On day 2, ultrasound-guided percutaneous puncture of the infected liver cyst was performed, and 45 ml of yellow fluid was extracted. The puncture fluid showed a white blood cell count of 1.879 × 10⁹/L and 85.0% neutrophils; however, the puncture fluid culture was negative. Because the blood culture indicated carbapenem-resistant E. coli, tigecycline (100 mg q12 h) and polymyxin B (750,000 U q12 h, after a 1,000,000 U first dose) were administered. On day 9, the patient complained of numbness of the extremities; polymyxin neurotoxicity was considered, and the dose of polymyxin B was reduced to 500,000 U q12 h, with a white blood cell count of 12 × 10⁹/L, CRP 49.47 mg/L and procalcitonin 0.28 ng/ml. On day 13, the patient's creatinine increased progressively and polymyxin nephrotoxicity was considered, so polymyxin was discontinued, with a white blood cell count of 6.4 × 10⁹/L and a CRP level of 16.14 mg/L. On day 18, as antimicrobial susceptibility testing showed that the strain was susceptible to fosfomycin, treatment was changed to tigecycline (50 mg q12 h) and fosfomycin (12 g q12 h), with a white blood cell count of 5.1 × 10⁹/L, a CRP level of 7.45 mg/L and a procalcitonin level of 0.95 ng/ml. On day 24, all antibiotics were discontinued because of the normal laboratory findings, and the patient's condition was closely observed. On day 29, the patient was discharged. Escherichia coli Huamei202001 was identified with a VITEK 2 Compact automated microbiology system (bioMerieux, Marcy-l'Etoile, France).
Antimicrobial Susceptibility Testing
Antimicrobial susceptibility testing (MIC determination) was performed with a VITEK 2 compact automated microbiology system, and fosfomycin was tested by the Kirby-Bauer disc diffusion method (Oxoid, Basingstoke, UK), with susceptibility defined according to the Clinical and Laboratory Standards Institute (CLSI) guidelines (M100-S30). Tigecycline was tested by the broth microdilution MIC determination method, with the broth prepared fresh on the day of use (Oxoid) and susceptibility defined according to the European Committee on Antimicrobial Susceptibility Testing (EUCAST) guidelines (version 11.0, for tigecycline).
Genome Sequencing and Bioinformatics Analysis
The strain was sent to Zhejiang Tianke Hi-Tech Development Co., Ltd. (Tianke, Hangzhou, China) for genome sequencing. Genomic DNA was extracted using a plant genomic DNA kit (DP305, Tiangen, Beijing, China). The library was sequenced on an Illumina HiSeq X Ten platform (Illumina Inc., San Diego, CA, USA), and 150 bp paired-end reads were generated at a depth of 250×. The raw reads of E. coli Huamei202001 were assembled into draft genomes using Unicycler, the contigs were annotated by Rapid Annotation using Subsystem Technology, and whole genome sequence data analyses were performed using standard bioinformatics tools (i.e., ResFinder v.3…).
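For illustration only, the assembly step with Unicycler's documented paired-end interface might be scripted as below; the read file names and output directory are hypothetical placeholders, not the actual files from this study.

import subprocess

# Assemble Illumina paired-end reads into a draft genome with Unicycler.
subprocess.run(
    ["unicycler",
     "-1", "huamei202001_R1.fastq.gz",   # forward reads (placeholder name)
     "-2", "huamei202001_R2.fastq.gz",   # reverse reads (placeholder name)
     "-o", "huamei202001_assembly"],     # output directory for the draft genome
    check=True,
)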
Faecal Sample Screening Test for CRE
To trace whether the source of E. coli Huamei202001 originated from the intestine, a faecal sample screening test for CRE was performed. Faecal samples were collected from the patient and her main family members and then inoculated on CRE screening plates (11). Colonies on CRE screening plates were further isolated, and identification and antimicrobial susceptibility testing were performed as noted above.
Pulsed-Field Gel Electrophoresis
Genomic DNA was prepared as described previously (12). Isolated colonies were harvested from Mueller-Hinton agar plates after overnight incubation at 37 °C, and the suspension was adjusted to a concentration of 10⁹ CFU/ml in cell suspension buffer (100 mM Tris-HCl, 100 mM EDTA, pH = 8). After a short incubation of ∼5-10 min at 37 °C, the bacterial suspension was mixed with an equal volume of 1% Gold Agarose (Lonza, Rockland, MD, USA) and allowed to solidify in a 100-µl plug mould. The DNA block was incubated overnight at 54 °C in 1 ml of cell lysis buffer (50 mM Tris-HCl, 50 mM EDTA, 1% sarcosyl, 100 µg/ml proteinase K, pH = 8). To eliminate the lysed bacterial material and inactivate proteinase K activity, the DNA blocks were washed four times at 50 °C in 4 ml of Tris-EDTA buffer (100 mM Tris-HCl, 1 mM EDTA, pH = 8). A slice of each plug was cut and incubated with XbaI (Takara, Shiga, Japan). Restriction fragments of DNA were separated by pulsed-field gel electrophoresis (PFGE) with a CHEF Mapper apparatus (Bio-Rad, Hercules, CA, USA) through 1% Gold Agarose. Electrophoresis was performed at 6 V/cm and 14 °C. The run time was 20 h, with the pulse time ramping from 5 to 35 s. XbaI-digested DNA of Salmonella enterica serotype Braenderup H9812 was electrophoresed as the size marker.
Analysis of the draft genome sequences demonstrated that carbapenem-resistant E. coli Huamei202001 belonged to ST410, whereas identification of plasmid replicons revealed that it carried a blaNDM-5-encoding IncX3-type plasmid and IncFIA, IncFIB and IncI1 plasmids.
Only the patient had a positive faecal sample screening test for CRE. Isolate Fec01 was identified as E. coli, and its antibiotic susceptibility profile was the same as that of E. coli Huamei202001. Moreover, the PFGE fingerprints of strain Fec01 and strain Huamei202001 were highly similar (Figure 1). In contrast, her main family members all showed negative results.
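PFGE band patterns are commonly compared with a band-matching similarity measure such as the Dice coefficient; the Python sketch below illustrates the idea. The band sizes, matching tolerance and function name are hypothetical placeholders, not data or code from this study.

def dice_similarity(bands_a, bands_b, tolerance=0.05):
    # Dice coefficient: 2 x (matched bands) / (total bands in both lanes)
    matched = 0
    unused_b = list(bands_b)
    for band in bands_a:
        for other in unused_b:
            # Two bands "match" if their sizes differ by less than the tolerance
            if abs(band - other) / max(band, other) < tolerance:
                matched += 1
                unused_b.remove(other)
                break
    return 2 * matched / (len(bands_a) + len(bands_b))

# Illustrative band-size lists (kb) for two lanes of a gel
lane_huamei = [668, 452, 398, 310, 244, 190, 152, 97, 54]
lane_fec01 = [668, 452, 398, 310, 244, 190, 152, 97, 54]
print(f"Dice similarity: {dice_similarity(lane_huamei, lane_fec01):.2f}")  # 1.00 here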
DISCUSSION
Patients who were hospitalised in the 2 weeks before admission or transferred from other hospitals are defined as having hospital-acquired infections (13). A positive culture taken ≤48 h after admission can be classified as a healthcare-associated strain if additional criteria are met (14). A positive culture that meets neither of the criteria above is considered a strictly community-acquired infection (14)(15)(16). Moreover, investigations of CRE have mainly focused on nosocomial infections, and only a few community-acquired CRE infections have been documented (3). Therefore, according to this definition, E. coli Huamei202001 is a community-acquired CRE that belongs to ST410 and carries a blaNDM-5-encoding IncX3-type plasmid, an emerging international high-risk clone (17) reported not only in hospitals in eastern China (7,8), Egypt (18) and Denmark (17), but also in domestic animals in China (19) and South Korea (20,21), and even in the environment, such as rivers in Switzerland (9) and sewage in northeast India (22). These investigations show that IncX3 is a key element in disseminating blaNDM-5 among E. coli and even among various species. Thus, infection control measures should be strengthened to curb the spread of highly transferable plasmid-borne carbapenemases.
We infer that the bloodstream infection by Huamei202001 most likely arose from intestinal colonisation by Fec01, based on the highly similar PFGE fingerprints of strain Fec01 and strain Huamei202001 (Figure 1). Another study also supported a strong association between intestinal colonisation and bloodstream infection (23). However, the origin of strain Fec01 is unclear because of the negative results of the faecal sample screening test for CRE among the patient's main family members.
In addition, from day 0 to day 2, imipenem was administered empirically without antibiotic susceptibility profiles, and it seemed effective, because the levels of white blood cells, C-reactive protein, procalcitonin and creatinine all decreased (Supplementary Table 1). The antibiotic susceptibility profile showed that the MIC of imipenem for E. coli Huamei202001 was ≥16 mg/L, so it is difficult to explain the mechanism by which imipenem was effective. After imipenem was discontinued, antibiotic administration was changed to a combination of tigecycline and polymyxin B according to the Chinese consensus statement (24). However, the possibility of polymyxin neurotoxicity was considered (25) because of the numbness of the extremities, so the dose was reduced, and polymyxin was later discontinued when nephrotoxicity was suspected. Because the strain was susceptible to fosfomycin, administration was changed to a combination of tigecycline and fosfomycin until day 24, when all laboratory findings were normal (Supplementary Table 1) and antibiotic administration was discontinued. Finally, the patient recovered and was discharged, owing to proper antibiotic administration. This is a rare report documenting the complete course of a community-acquired CRE infection in China, from diagnosis through the antibiotic administration process to successful cure. This outcome demonstrates the complexity of antibiotic decision-making in the clinic. Furthermore, clone ST410 harbouring a blaNDM-5-encoding IncX3-type plasmid was analysed by whole genome sequencing and discussed to understand the worldwide spread of the clone.
DATA AVAILABILITY STATEMENT
The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found below: https://www.ncbi.nlm.nih.gov/nuccore/JAENHW000000000.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by the Ethics Committee of Hwa Mei Hospital, University of Chinese Academy of Sciences. The patients/participants provided their written informed consent to participate in this study.
Assessment of HER2 Protein Overexpression and Gene Amplification in Renal Collecting Duct Carcinoma: Therapeutic Implication
Simple Summary
Renal collecting duct carcinoma (CDC) is a rare, but very aggressive, variant histology of kidney cancers. Besides surgery, the other therapeutic options, such as pharmacological or radiation therapy, have a poor impact on survival. Therefore, there is an urgent need to identify novel targets that can open up new avenues for alternative treatments. From this perspective, the aim of our study was to assess HER2 protein expression by immunohistochemistry (IHC) and gene copy number by fluorescence in-situ hybridization (FISH) in a cohort of 26 CDC. According to the 2018 ASCO/CAP guidelines, 2/26 CDC cases (8%) were HER2-positive. The HER2 protein is a well-established target of anti-HER2 mAbs and kinase inhibitors already used for breast and gastric cancer treatment. Thus, this study provides evidence that supports future biomarker-driven clinical trials that could address the lack of therapy, which is still an unmet clinical need for CDC patients.

Abstract
Collecting duct carcinoma (CDC) is a rare and aggressive histology of kidney cancers. Although different therapeutic approaches have been tested, the 2-year survival remains very poor. Since CDC exhibits overlapping features with urothelial carcinoma, the analysis of shared molecular alterations could provide new insights into the understanding of this rare disease and also therapeutic options. We collected 26 CDC cases and assessed HER2 protein expression by immunohistochemistry (IHC) and gene amplification by fluorescence in-situ hybridization (FISH) according to the 2018 ASCO/CAP HER2-testing recommendations. Six out of twenty-six (23%) tumors showed HER2-positive staining; in particular, a 3+ score was present in 2/6 cases (33%), 2+ in 3/6 cases (50%) and 1+ in 1/6 cases (17%). The 6 HER2+ tumors were also analyzed by FISH to assess gene copy number. One out of six CDC with IHC 3+ was also HER2 amplified, showing an average HER2 copy number ≥4.0 (10.85) and a HER2/CEP17 ratio ≥2.0 (5.63), while the remaining 5/6 cases were HER2 negative. Based on the 2018 ASCO/CAP guidelines, overall 2/26 CDC cases (8%) were HER2+. The present study provides evidence for testing, in future studies, HER2 to assess its clinical value as a novel target for the treatment of this highly malignant cancer.
Introduction
Collecting duct carcinoma (CDC) of the kidney, also known as Bellini duct carcinoma, is a rare and aggressive variant histology of renal cell carcinoma (RCC), accounting for 1-2% of all RCC [1]. Patients with early-stage CDC usually undergo radical nephrectomy with curative intent, whereas chemotherapy alone or in combination with radiation therapy in the adjuvant setting is not recommended [2]. Unfortunately, at the time of diagnosis, about half of the cases have already developed metastasis to lymph nodes, bone, lung, liver and adrenal glands [3,4]. In these metastatic patients (mCDC), the median overall survival (OS) is about 13 months after diagnosis [5]. Unlike the other, more common malignant renal cancer histologies, such as clear cell renal cell carcinoma, papillary renal cell carcinoma and chromophobe renal cell carcinoma, metastatic CDC still lacks a standard therapeutic approach [6]. Gemcitabine plus cisplatin chemotherapy is the only recommended therapy for the first-line treatment of mCDC.
Although a phase 2 trial attempted conventional chemotherapy with gemcitabine plus cisplatin in combination with a multitargeted kinase inhibitor such as sorafenib, this first-line regimen achieved a median PFS of only 8.8 months [7].
Overall, besides surgical treatment, other therapeutic approaches, including chemotherapy regimens, targeted therapy, immunotherapy [13] and radiation therapy, have been proposed for metastatic disease, but the survival benefit is still very limited [4,6].
Moreover, the conduct of randomized clinical trials is severely hampered by the low incidence of this RCC histologic variant. CDC arises from the epithelial layer of the distal collecting duct of the kidney and, owing to the common mesonephric origin and anatomical proximity, CDCs share some clinical, radiologic, morphological and molecular features with urothelial carcinoma, although they also exhibit various differences [14][15][16]. Based on these similarities, several attempts have been made to treat CDCs with therapeutic agents already used for urothelial carcinoma [17]. Different studies have provided evidence that protein overexpression and/or gene amplification of human epidermal receptor-2 (HER2) occurs in solid tumors, including breast and gastric cancer, enabling the therapeutic use of anti-HER2 mAbs or HER2 kinase inhibitors [18,19] in these tumor types. Likewise, HER2 overexpression and/or gene amplification has also been observed in 0-25% of urothelial carcinomas [20] and has been considered a target suitable for trastuzumab treatment [21]. HER2 overexpression in CDC has been reported in only a few studies, more specifically in two case reports [22,23] and one small cohort of 11 cases [24]. Given the lack of effective adjuvant treatment, improved molecular characterization of CDCs and the identification of novel targets that can provide new therapeutic options will be crucial to improve patient outcomes from the perspective of a precision medicine approach. In the present study, we describe different morphological features of 26 CDC cases. Furthermore, we conducted immunohistochemistry (IHC) and fluorescence in-situ hybridization (FISH) analyses to assess the level of HER2 protein expression and gene amplification according to the ASCO/CAP 2018 criteria [25]. This study aims to provide preliminary evidence that can guide future clinical studies exploring HER2-targeting drugs in renal collecting duct carcinoma.
Clinical and Pathologic Characteristics of CDC Patients
A total of 26 patients diagnosed with CDC in five medical centers were collected and reviewed to confirm the diagnosis. Table 1 summarizes the clinical and pathologic features of the 26 CDC cases included in the study. Among the 26 patients, 16 (62%) were male and 10 (38%) female. The mean age was 72 years (range, 40 to 84 years). The average tumor size was 6 cm (range, 2.2 to 10.5 cm). Seven (41%) cases presented distant metastasis at the time of surgery (synchronous lesions), whereas in 10 (59%) patients the appearance of metastasis was observed after diagnosis (metachronous lesions). Six (35%) patients had metastatic lesions in multiple sites. Tumors were staged according to the 2017 American Joint Committee on Cancer (AJCC) TNM stage classification: two cases were TNM stage I (8%), none stage II (0%), 15 stage III (58%) and 9 stage IV (34%). Microscopically, different architectural patterns were observed, in particular tubular/solid with confluent solid nests, tubulopapillary, tubulocystic and tubular structures, present in 12 (46%), 11 (42%), 2 (8%) and 1 (4%) cases, respectively. Additional features that supported the diagnosis of CDC were also observed (Table 1), such as necrosis, desmoplastic stromal reaction, dysplastic changes in adjacent non-neoplastic collecting duct epithelium, intraluminal mucin, presence of hobnail nuclei, lymphovascular and perineural invasion, pyelonephritis with glomerulosclerosis, sarcomatoid and rhabdoid areas, and the presence of squamous cells. The inflammatory infiltrates were predominantly represented by lymphocytes and less frequently by the coexistence of lymphocytes and granulocytes (rare eosinophils).

Table 1. Clinical and pathologic characteristics of collecting duct carcinoma (CDC) patients.
HER2 Fluorescence In-Situ Hybridization Analysis in CDC
In this study, all six CDC cases that showed a positive IHC staining score (1+, 2+, 3+) for HER2 protein expression were tested by FISH to assess HER2 gene copy number (Table 2). FISH results were analyzed by counting the fluorescence signals in at least 20 malignant cells in two different areas of the section at 1000× magnification. For each case, the average HER2 copy number and the ratio of HER2 signals to chromosome 17 centromere (HER2/CEP17) signals were calculated according to the ASCO/CAP 2018 guidelines. One out of six CDC patients with IHC 3+ was also HER2 FISH positive, showing an average HER2 copy number ≥4.0 (10.85) and a HER2/CEP17 ratio ≥2.0 (5.63) (Figure 2A). The remaining 5/6 cases were regarded as HER2-negative, exhibiting a HER2/CEP17 ratio <2.0 with an average HER2 copy number <4.0 (Figure 2B). None of the cases analyzed showed HER2-equivocal results (HER2/CEP17 ratio <2.0 with an average HER2 copy number ≥4.0 and <6.0). Overall, the HER2 test was considered positive when the tumor specimens showed HER2 IHC 3+ or positive HER2 gene amplification by FISH. Considering the IHC and FISH results together, we found that 2/26 cases (8%) were HER2 positive.
Discussion
CDC is a rare kidney cancer histotype characterized by an aggressive clinical behavior [1]. Different therapeutic strategies have been tested, including chemotherapies, targeted therapy [7], immunotherapy [10][11][12][13], and radiotherapy; nevertheless, the prognosis still remains very poor [2,6]. Hence, there is an urgent need to provide additional molecular targets and predictive biomarkers, which may be useful for identifying candidate responder patients who may benefit from new treatments. Since CDC exhibits some overlapping features with urothelial carcinoma, different pharmacological agents already tested for urothelial carcinoma [17] have also been attempted in CDC. Because 9-80% of urothelial carcinomas show HER2 overexpression and about 32% exhibit gene amplification [26], different clinical trials that include anti-HER2 therapies, such as trastuzumab, pertuzumab, lapatinib and afatinib, used as single agents or in combination with other drugs, have been conducted in urothelial carcinoma [17]. To the best of our knowledge, only three studies have tried to characterize HER2 in CDC: a retrospective study of 11 CDC cases, in which HER2 amplification, evaluated by competitive PCR, was present in 5 out of 11 cases (45%), with all of these HER2-amplified patients dying within one year [24]; another study that carried out HER2 amplification analysis using FISH alone in one patient [22]; and a case report that performed only IHC analysis, showing a focal, faintly perceptible membrane staining in less than 10% of the tumor cells [23]. Thus far, no study has assessed HER2 expression and amplification status in the same sample cohort of CDC cases. Despite the rarity of the CDC histological subtype, in the present study we had the chance to collect 26 CDC cases from five different institutions. Given the absence of previous studies defining HER2 positivity in CDC, we referred to the most recent ASCO/CAP 2018 guidelines [25] to assess HER2 protein expression by IHC and HER2 gene amplification by FISH in the tumor specimens. Our study revealed that 6 out of 26 patients (23%) exhibited positive IHC staining for HER2, with scores ranging from 1+ to 3+; in particular, 2/6 cases (33%) were 3+, 3/6 (50%) were 2+ and 1/6 (17%) was 1+, and the single CDC patient with IHC 3+ that was also HER2 FISH positive showed an average HER2 copy number ≥4.0 (10.85) and a HER2/CEP17 ratio ≥2.0 (5.63). According to ASCO/CAP 2018, which considers the HER2 test positive when the tumor specimens show HER2 IHC 3+ or positive HER2 gene amplification by FISH, we found that 2/26 cases (8%) were HER2 positive. With the exception of one single case report [22], there are no clinical studies that used anti-HER2 compounds in a single- or multiple-agent approach in CDC. A large body of data indicates that solid tumors with HER2 gene amplification respond to anti-HER2-targeted therapy [26][27][28][29], with an improvement in clinical outcomes. Based on this principle, our study provides preliminary evidence in support of testing anti-HER2 therapy in CDC. However, our study has some limitations. Indeed, due to the rarity of the disease, and despite the inclusion of five different hospitals in the study, the sample size is still small, leaving unmet needs. Larger studies will be crucial to validate the frequency of HER2 overexpression and/or amplification, to define the clinically relevant threshold of the cut-off score, and to identify the subset of CDC cases that are HER2+ and could be sensitive to anti-HER2 treatment.
Indeed, in the present study HER2 positivity was assessed using the breast and gastric cancer HER2 testing criteria, but adopting a pre-specified cut-off value routinely used for other tumors to define IHC positivity might fail to identify those CDC cases that could respond to anti-HER2 therapies. Further studies will therefore need to grade HER2 expression and amplification as values (percentage of stained tumor cells plus staining intensity for IHC, and the average HER2 copy number or HER2/CEP17 ratio for FISH) on a continuous scale, to define cut-offs with clinical significance for CDC. In the era of targeted therapies, a stringent evaluation of gene and/or protein status is needed to significantly improve drug response. Biomarker-driven studies have revolutionized clinical trial design, shortening the time to drug approval. Indeed, the FDA has recently approved an increasing number of biomarker-based novel compounds across several histotypes on the basis of early-stage (phase I or II) non-randomized clinical trials [30].
For a rare and very aggressive tumor such as CDC, the design of clinical trials and the definition of standard therapies are more challenging than for major cancers, due to several factors, such as the difficulty of patient recruitment, randomization, and the limited knowledge of its molecular alterations.
From this perspective, identifying actionable targets is pivotal for biomarker-driven studies that can provide more effective therapeutic options for CDC patients.

After a first histologic examination on hematoxylin-eosin-stained slides carried out in the institution where each case was collected, all tissue specimens underwent a centralized revision by a dedicated uro-pathologist (SS). Only confirmed CDC cases were further considered for the analysis of HER2 protein expression and gene amplification. For each patient, two representative blocks were selected for immunohistochemistry (IHC) analysis. Tumor tissue specimens were formalin-fixed and paraffin-embedded (FFPE), and 3 µm sections were cut from the primary tumor specimens for hematoxylin-eosin staining to confirm the presence of neoplastic cells. Poorly fixed material and/or material with low cellularity (<70% neoplastic cells) was excluded. This study was conducted in accordance with the ethical standards of each institutional research committee and the Declaration of Helsinki. The hospital records were used to describe the clinical and pathological features of the cases included in the study (Table 1).
HER2 Immunohistochemical Analysis
For each patient, two paraffin blocks with at least 70% neoplastic cells were selected, and from each block 3 µm tissue sections were cut and transferred to SuperFrost Plus slides (Menzel-Gläser, Braunschweig, Germany) for immunohistochemical (IHC) analysis. After deparaffinization, rehydration and antigen retrieval in citrate buffer (10 mMol, pH 6.1), tissue sections were stained for HER2 (A0485 polyclonal antibody; Dako, Glostrup, Denmark; dilution 1/200). Immunoreactions were revealed by Bond Polymer Refine Detection on an automated autostainer (Bond™ Max, Leica Biosystem, Milan, Italy). Standard processing steps were performed according to the manufacturer's instructions, with diaminobenzidine used as the chromogenic substrate. HER2 positivity was assessed according to the recommendations of the American Society of Clinical Oncology/College of American Pathologists 2018 scoring system guideline established for breast cancer, evaluating only membranous staining [25]. The interpretation of the results was also based on the negativity of normal collecting duct tissues. The level of HER2 protein expression was semi-quantitatively evaluated, considering the intensity and the percentage of staining, and scored on a scale ranging from 0 to 3+ according to the ASCO/CAP 2018 guidelines. Scores of 0 and 1+ are categorized as negative, 2+ as equivocal, and 3+ as positive.
HER2 Fluorescence In-Situ Hybridization
All specimens presenting a 1+, 2+ or 3+ HER2 protein expression score were further evaluated by fluorescence in-situ hybridization (FISH) using two selected blocks. The analysis was performed on 2 to 3 µm thick paraffin sections of tumor tissues using the PathVysion Kit (Abbott Molecular Inc., Des Plaines, IL, USA), which is designed for the detection of HER-2/neu gene amplification in formalin-fixed, paraffin-embedded human tissue specimens placed on slides, according to the manufacturer's instructions. Before hybridization, paraffin sections were deparaffinized in xylene (3 times, 10 min each), dehydrated by two 5 min washes in 100% ethanol and two 5 min washes in 96% ethanol, and air-dried at room temperature. Tissue sections were then transferred to Vysis Pretreatment Solution (Abbott Molecular Inc., Des Plaines, IL, USA) at 81 °C for 30 min, washed for 3 min in purified water, and treated with protease solution (Vysis Protease Buffer IV, Abbott Molecular Inc., Des Plaines, IL, USA) for 10 min at 37 °C to digest proteins. After brief washing in purified water, the slides were sequentially dehydrated in alcohol (70%, 85%, and 100%) and air-dried at room temperature, followed by hybridization with the Vysis LSI HER-2/neu SpectrumOrange/CEP 17 SpectrumGreen probe (Abbott Molecular Inc., Des Plaines, IL, USA).
Following hybridization, the unbound probe is removed by a series of washes, and the nuclei are counterstained with DAPI (4′,6-diamidino-2-phenylindole), a DNA-specific stain that fluoresces blue. Hybridization of the PathVysion probes is viewed using a fluorescence microscope equipped with appropriate excitation and emission filters to visualize the intense orange and green fluorescent signals. Enumeration of the LSI HER-2/neu and CEP 17 signals is conducted by microscopic examination of the nuclei, which yields a ratio of the HER-2/neu gene to chromosome 17 copy number. The numbers of LSI HER-2/neu and CEP 17 signals per nucleus are recorded, and results from the enumeration of 20 interphase nuclei from tumor cells, conducted in two different areas of the section at 1000× magnification, are reported as the ratio of the total HER-2/neu signals to those of CEP 17. According to the ASCO/CAP 2018 guidelines, HER2 positivity by FISH was defined as an average HER2 copy number ≥4 or a HER2/CEP17 ratio ≥2.0. Cases showing a HER2/CEP17 ratio <2.0 with an average HER2 copy number ≥4.0 and <6.0 were regarded as HER2-equivocal, and cases showing a HER2/CEP17 ratio <2.0 with an average HER2 copy number <4.0 were regarded as HER2-negative. The results of the HER2 test were considered positive when the tumor specimens showed HER2 IHC 3+ or positive HER2 gene amplification by FISH.
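The decision rules above can be captured in a few lines; the following Python sketch reflects one reading of the criteria as applied in this study, and the function names are illustrative rather than part of any existing package.

def classify_fish(avg_her2_copies, her2_cep17_ratio):
    # Classify a FISH result as positive / equivocal / negative
    if her2_cep17_ratio >= 2.0:
        return "positive"
    if 4.0 <= avg_her2_copies < 6.0:
        return "equivocal"   # ratio < 2.0 with copy number in [4.0, 6.0)
    if avg_her2_copies < 4.0:
        return "negative"    # ratio < 2.0 with copy number < 4.0
    return "positive"        # ratio < 2.0 but copy number >= 6.0

def overall_her2_status(ihc_score, fish_result=None):
    # Overall test: positive when IHC 3+ or FISH shows amplification
    return ihc_score == 3 or fish_result == "positive"

# The amplified case reported above: IHC 3+, 10.85 copies, ratio 5.63
print(classify_fish(10.85, 5.63))          # -> positive
print(overall_her2_status(3, "positive"))  # -> True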
Conclusions
This is the first study to provide a comprehensive evaluation of HER2 in a rare, but very aggressive, histotype such as CDC, in agreement with the most recent ASCO/CAP 2018 guidelines. These data may pave the way for future biomarker-driven clinical studies to test anti-HER2 strategies in CDC.
Analysis of Casson Fluid Flow over a Vertical Porous Surface with Chemical Reaction in the Presence of Magnetic Field
Casson fluid flow over a vertical porous surface with chemical reaction in the presence of a magnetic field has been studied. A similarity analysis was used to transform the system of partial differential equations describing the problem into ordinary differential equations. The reduced system was solved using the Newton-Raphson shooting method alongside the fourth-order Runge-Kutta algorithm. The results are presented graphically and in tabular form for various controlling parameters.
Introduction
A fluid in which the viscous stresses arising from its flow are, at every point, linearly proportional to the rate of change of its deformation over time is called a Newtonian fluid. This means that in a Newtonian fluid, the relationship between the shear stress and the shear rate is linear, with the proportionality constant referred to as the coefficient of viscosity. On the other hand, a fluid whose flow properties differ in any way from those of a Newtonian fluid is called a non-Newtonian fluid. Unlike Newtonian fluids, the viscosity of a non-Newtonian fluid is dependent on shear rate history. That is to say, in a non-Newtonian fluid, the relationship between the shear stress and the shear rate is different and can even be time dependent; thus a constant coefficient of viscosity cannot be defined. Some examples of non-Newtonian fluids are salt solutions, molten polymers, ketchup, custard, toothpaste, starch suspensions, paints, blood and shampoo.
It is important to note here that many fluids of industrial importance are non-Newtonian. It is now generally recognized that, in real industrial applications, non-Newtonian fluids are more appropriate than Newtonian fluids, due to their applications in petroleum drilling, polymer engineering, certain separation processes, the manufacturing of foods and paper, and some other industrial processes [1] [2]. Due to the nonlinearity between the stress and the rate of strain for non-Newtonian fluids, it is difficult to express all the properties of several non-Newtonian fluids in a single constitutive equation. This has drawn the attention of researchers to the analysis of the flow dynamics of non-Newtonian fluids. Consequently, several non-Newtonian fluid models [3]- [10] have been proposed depending on various physical characteristics. The most popular among these fluids is the Casson fluid.
Casson fluid can be defined as a shear-thinning liquid which is assumed to have an infinite viscosity at zero rate of shear, a yield stress below which no flow occurs, and a zero viscosity at an infinite rate of shear [11]. The nonlinear Casson constitutive equation has been found to describe accurately the flow curves of suspensions of pigments in the lithographic varnishes used for the preparation of printing inks [12] and silicon suspensions [13].
The shear stress-shear rate relation given by Casson satisfactorily describes the properties of many polymers over a wide range of shear rates [14]. Various experiments performed on blood with varying haematocrits, anticoagulants, temperatures, and the like strongly suggest the behaviour of blood as a Casson fluid [15] [16]. In particular, the Casson fluid model describes the flow characteristics of blood more accurately at low shear rates and when it flows through small blood vessels. Casson fluids are found to be applicable in developing models for blood oxygenators and haemodialysers.
Fredrickson [17] investigated the steady flow of a Casson fluid in a tube. Mustafa et al. [1] studied the unsteady boundary layer flow and heat transfer of a Casson fluid over a moving flat plate with a parallel free stream using the Homotopy Analysis Method (HAM). On the other hand, boundary layer flows of non-Newtonian fluids caused by a stretching sheet have vast applications in several manufacturing processes, such as the extrusion of molten polymers through a slit die for the production of plastic sheets, hot rolling, wire and fibre coating, the processing of foodstuffs, metal spinning, glass-fibre production and paper production [18]. During these processes, the rate of cooling has an important bearing on the properties of the final product. Hence, the quality of the final product depends on the rate of heat transfer from the stretching surface [19] [20].
The viscous fluid flow due to a stretching flat sheet was first investigated by Crane [21], and this pioneering work was extended by Rajagopal et al. [22], who considered a viscoelastic fluid. Siddappa and Abel [23] discussed some other important aspects of the flow of non-Newtonian fluids over stretching sheets. Sankara and Watson [24] studied micropolar fluid flow over a stretching sheet. Troy et al. [25] established the uniqueness of the solution of the flow of a second-order fluid over a stretching sheet. Andersson and Dandapat [26] reported the flow behaviour of a non-Newtonian power-law fluid over a stretching sheet. Recently, Hayat et al. [27] analyzed the mixed convection stagnation-point flow of a non-Newtonian Casson fluid. Most importantly, Bhattacharyya et al. [28] recently investigated the boundary layer flow of Casson fluid over a permeable stretching/shrinking sheet with magnetic field effects.
From the literature, it can be found that not much attention has been given to Casson fluid flow over a porous vertical surface with chemical reaction in the presence of a magnetic field. The increasing use of several non-Newtonian fluids in processing industries has motivated a study to understand their behaviour in several transport processes. Therefore, in this investigation, the steady incompressible Casson fluid flow and mass transfer towards a porous vertical stretching sheet are studied. The governing partial differential equations are converted into a system of nonlinear ordinary differential equations (ODEs) using suitable similarity transformations. The transformed self-similar ODEs are solved by the shooting method, an efficient numerical method for solving boundary value problems [29]- [31]. Then a graphical analysis is presented to show the existence and uniqueness of the solution and to discuss the characteristics of the flow and mass transfer for the varying parameters.
Mathematical Model
Consider a two-dimensional steady incompressible Casson fluid flow over a vertical porous stretching surface at y = 0 in the presence of a transverse magnetic field, as shown in Figure 1. Let the x-axis be taken along the direction of the plate and the y-axis normal to it. The fluid occupies the half space y > 0. The mass transfer phenomenon with chemical reaction is also retained. The flow is subjected to a constant applied magnetic field B_0 in the y direction. The magnetic Reynolds number is considered to be very small so that the induced magnetic field is negligible in comparison to the applied magnetic field. The tangential velocity u_w due to the stretching surface is assumed to vary proportionally to the distance x, so that u_w = ax, where a is a constant.
The rheological equation of state for an isotropic flow of a Casson fluid [32] can be expressed as:

$$\tau_{ij} = \begin{cases} 2\left(\mu_B + \dfrac{P_y}{\sqrt{2\pi}}\right) e_{ij}, & \pi > \pi_c,\\[2mm] 2\left(\mu_B + \dfrac{P_y}{\sqrt{2\pi_c}}\right) e_{ij}, & \pi < \pi_c. \end{cases} \tag{1}$$

In Equation (1), $\pi = e_{ij}e_{ij}$, where $e_{ij}$ is the $(i,j)$-th component of the deformation rate; that is, $\pi$ is the product of the components of the deformation rate with themselves. Also, $\pi_c$ is a critical value of this product based on the non-Newtonian model, $\mu_B$ is the plastic dynamic viscosity of the non-Newtonian fluid and $P_y$ is the yield stress of the fluid. If $u$ and $v$ are the fluid $x$- and $y$-components of velocity, respectively, and $C$ is the concentration field, then the equations governing the steady boundary layer flow of the Casson fluid are:

$$\frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} = 0, \tag{2}$$

$$u\frac{\partial u}{\partial x} + v\frac{\partial u}{\partial y} = \upsilon\left(1 + \frac{1}{\beta}\right)\frac{\partial^2 u}{\partial y^2} - \frac{\sigma B_0^2}{\rho}\,u + g\beta_c\left(C - C_\infty\right), \tag{3}$$

$$u\frac{\partial C}{\partial x} + v\frac{\partial C}{\partial y} = D_m\frac{\partial^2 C}{\partial y^2} - \gamma\left(C - C_\infty\right), \tag{4}$$

subject to the following boundary conditions:

$$u = u_w = ax, \quad v = -v_0(x), \quad C = C_w \ \text{at}\ y = 0; \qquad u \to 0, \quad C \to C_\infty \ \text{as}\ y \to \infty,$$

where $\beta = \mu_B\sqrt{2\pi_c}/P_y$ is the non-Newtonian Casson parameter, $\upsilon$ is the kinematic viscosity, $D_m$ is the mass diffusivity, $\gamma$ is the reaction rate, $v_0(x)$ is the suction velocity at the surface, $C_w$ is the concentration at the surface, $C_\infty$ is the free stream concentration, $\beta_c$ is the solutal expansion coefficient, $\rho$ is the fluid density, $g$ is the gravitational acceleration, and $\sigma$ is the electrical conductivity.
The following dimensionless quantities are introduced:

$$\eta = y\sqrt{\frac{a}{\upsilon}}, \qquad u = axf'(\eta), \qquad v = -\sqrt{a\upsilon}\,f(\eta), \qquad \phi(\eta) = \frac{C - C_\infty}{C_w - C_\infty}. \tag{5}$$

Substituting Equation (5) into (2)-(4) yields:

$$\left(1 + \frac{1}{\beta}\right)f''' + ff'' - \left(f'\right)^2 - Mf' + Gc\,\phi = 0, \tag{7}$$

$$\phi'' + Sc\left(f\phi' - B\phi\right) = 0. \tag{8}$$

The transformed boundary conditions are

$$f(0) = fw, \quad f'(0) = 1, \quad \phi(0) = 1; \qquad f' \to 0, \quad \phi \to 0 \ \text{as}\ \eta \to \infty. \tag{9}$$

The prime symbol denotes differentiation with respect to the similarity variable $\eta$, where $M = \sigma B_0^2/(\rho a)$ is the magnetic parameter, $Gc$ is the local solutal Grashof number, $Sc = \upsilon/D_m$ is the Schmidt number, $B = \gamma/a$ is the reaction rate parameter and $fw = v_0/\sqrt{a\upsilon}$ is the suction parameter.
Numerical Solution
The numerical technique chosen for the solution of the coupled ordinary differential Equations (7)-(8), together with the associated transformed boundary conditions (9), is the standard Newton-Raphson shooting method alongside the fourth-order Runge-Kutta integration algorithm. From the numerical computation, the local skin-friction coefficient and the local Sherwood number, which are proportional to −f″(0) and −ϕ′(0), respectively, are computed and their numerical values presented in tabular form.
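As an illustration of the shooting idea, the sketch below uses SciPy's RK45 integrator and fsolve in place of the hand-coded Newton-Raphson/RK4 pair, assuming the transformed system and boundary conditions written above; all parameter values are placeholders.

import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

M, beta, Gc, Sc, B, fw = 0.5, 1.0, 0.5, 0.62, 0.5, 0.5
eta_inf = 10.0  # numerical stand-in for eta -> infinity

def rhs(eta, y):
    f, fp, fpp, phi, phip = y
    # From (1 + 1/beta) f''' + f f'' - f'^2 - M f' + Gc*phi = 0
    fppp = (fp**2 + M * fp - f * fpp - Gc * phi) / (1.0 + 1.0 / beta)
    # From phi'' + Sc*(f phi' - B phi) = 0
    phipp = Sc * (B * phi - f * phip)
    return [fp, fpp, fppp, phip, phipp]

def residual(guess):
    fpp0, phip0 = guess  # unknown initial slopes f''(0) and phi'(0)
    sol = solve_ivp(rhs, [0.0, eta_inf], [fw, 1.0, fpp0, 1.0, phip0],
                    rtol=1e-8, atol=1e-8)
    # Far-field conditions: f'(eta_inf) = 0 and phi(eta_inf) = 0
    return [sol.y[1, -1], sol.y[3, -1]]

fpp0, phip0 = fsolve(residual, [-1.0, -1.0])
print(f"skin friction -f''(0) = {-fpp0:.4f}, Sherwood number -phi'(0) = {-phip0:.4f}")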
Table 1 shows a comparison of the works of [33]- [35] with the present study for varying values of the reaction rate parameter (B), and it is clear from the table that the present study is consistent with their works. The results of varying parameter values on the local skin friction coefficient and the local Sherwood number are shown in Table 2. It is observed that the skin friction increases with increasing values of M, β, Sc, B and fw, and decreases with increasing values of Gc. This means that the combined effect of the magnetic field, Casson parameter, Schmidt number, reaction rate parameter and suction parameter is to increase the local skin friction, whereas that of the buoyancy force is to decrease the local skin friction at the surface of the plate. Moreover, it is observed that the rate of mass transfer increases with increasing values of fw, Gc, Sc and B, and decreases with increasing values of M and β.
Effects of Parameter Variation on Velocity Profiles
Figures 2-5 show the effects of the magnetic parameter (M), suction parameter (fw), Casson parameter (β) and local solutal Grashof number (Gc), respectively, on the velocity profile f′(η). Generally, the fluid velocity is minimal at the plate surface and increases to the free stream value, satisfying the far-field boundary conditions. The effects of the magnetic parameter (M) and the suction parameter (fw) on the velocity profiles are seen in Figure 2 and Figure 3, respectively. It is observed that the combined effect of M and fw is to decrease the velocity of the flow. This is due to the fact that the transverse magnetic field induces a Lorentz force which tends to resist the fluid flow. Suction also resists the fluid flow, hence the decrease in the velocity profile shown in Figure 3. It is observed in Figure 4 that the velocity decreases when β increases. In practice, increasing β results in an increase in the plastic dynamic viscosity, which produces a resistance to the flow and hence a decrease in fluid velocity. In addition, increasing the local solutal Grashof number (Gc) increases the velocity of the flow, as shown in Figure 5. This can be attributed to the fact that increasing Gc causes the fluid velocity to increase due to the buoyancy effect. We note here that increasing the buoyancy force leads to better flow kinematics.

Effects of Parameter Variation on Concentration Profiles

Figures 6-11 show the effects of the magnetic parameter (M), suction parameter (fw), Casson parameter (β), Schmidt number (Sc), reaction rate parameter (B) and local solutal Grashof number (Gc) on the concentration profile ϕ(η). It is observed in Figure 6 that, by increasing M, the concentration boundary layer thickness increases. This can be attributed to the retarding force of the transverse magnetic field, which retards the fluid flow and thereby increases the concentration. The concentration profile decreases with increasing fw, as shown in Figure 7. This is due to the fact that suction resists the fluid flow by increasing the friction between its layers, hence the decrease in concentration. In Figure 8, the concentration boundary layer thickness increases with increasing values of β. This is a result of the retarding force induced by the plastic viscosity, which increases the concentration. It is noteworthy from Figure 4 and Figure 8 that the Casson parameter β has quite opposite effects on the velocity and concentration profiles. Figure 9 depicts that the concentration boundary layer thickness decreases with increasing values of Sc. In practice, an increasing Schmidt number means increasing momentum diffusion over mass diffusion, which in turn reduces the concentration profile. At a point in the flow where B is zero, there is no chemical reaction. On the other hand, an increase in B means an increase in the chemical reaction rate, which causes a reduction in concentration. Figure 10 affirms this: increasing values of B decrease the concentration boundary layer. Moreover, it is observed in Figure 11 that increasing the buoyancy force due to chemical species concentration has the adverse effect of decaying the concentration boundary layer thickness.
Conclusions
An analysis of Casson fluid flow over a vertical porous surface with chemical reaction in the presence of a transverse magnetic field has been presented. Numerical results have been compared to earlier results published in the literature and a perfect agreement was achieved. Among others, our results reveal that: 1) The velocity decreases with an increase in the values of M, fw and β, and increases with increasing values of Gc.
2) The concentration boundary layer decreases with increasing values of fw, Gc, Sc and B; and increases with increasing values of M and β.
3) The skin friction at the surface increases with increasing values of M, fw, β, Sc and B; and decreases for increasing values of Gc.
4) The rate of mass transfer at the surface increases with increasing values of fw, Gc, Sc and B; and decreases with increasing values of M and β.
Figure 1. Schematic diagram of the problem.
Figure 2. Velocity profiles for varying values of the magnetic field parameter.
Figure 3. Velocity profiles for varying values of the suction parameter.
Figure 4. Velocity profiles for varying values of the Casson parameter.
Figure 5. Velocity profiles for varying values of the local solutal Grashof number.
Figure 6. Concentration profiles for varying values of the magnetic field parameter.
Figure 7. Concentration profiles for varying values of the suction parameter.
Figure 8. Concentration profiles for varying values of the Casson parameter.
Figure 9. Concentration profiles for varying values of the Schmidt number.
Figure 10. Concentration profiles for varying values of the reaction rate parameter.
Figure 11. Concentration profiles for varying values of the local solutal Grashof number.
Table 2. Numerical results of skin friction coefficient and Sherwood number.
"Mathematics"
] |
UConnRCMPy: Python-based data analysis for rapid compression machines
The ignition delay of a fuel/air mixture is an important quantity in designing combustion devices, and these data are also used to validate chemical kinetic models for combustion. One of the typical experimental devices used to measure the ignition delay is called a Rapid Compression Machine (RCM). This paper presents UConnRCMPy, an open-source Python package to process experimental data from the RCM at the University of Connecticut. Given an experimental measurement, UConnRCMPy computes the thermodynamic conditions in the reaction chamber of the RCM during an experiment along with the ignition delay. UConnRCMPy implements an extensible framework, so that alternative experimental data formats can be incorporated easily. In this way, UConnRCMPy improves the consistency of RCM data processing and enables the community to reproduce data analysis procedures.
Introduction
In recent years, there has been a surge in interest in ensuring that research outputs are reproducible across time and personnel [1]. Recognizing that the code used to process experimental data is an important part of the chain from observation to result and publication, this paper presents the design and operation of a software package to process the pressure data collected from Rapid Compression Machines (RCMs). Our package, called UConnRCMPy [2], is designed to analyze the data acquired from the RCM at the University of Connecticut (UConn). Despite the initial focus on data from the UConn RCM, the package is designed to be extensible so that it can be used for data in different formats while providing a consistent interface to the user. Thus, UConnRCMPy offers all of the features required to process standard RCM data, including:
• Filtering and smoothing the raw voltage output generated by the pressure transducer
• Converting the voltage trace into a pressure trace using settings recorded from the RCM
• Processing the pressure trace to determine parameters of interest in reporting the experiments, including the ignition delay and machine-specific effects on the experiment
• Conducting simulations utilizing the experimental information to calculate the temperature at the end of compression (EOC)
Previous software used to analyze RCM data has generally been undocumented and untested code specific to the researcher conducting the experiments. Moreover, the software typically used to estimate the temperature in the experiments is difficult to integrate with the data processing code. To the best of the authors' knowledge, UConnRCMPy is the first package for analysis of standard RCM data to be presented in detail in the literature, and it tightly integrates the temperature estimation routine into the workflow, reducing errors and inefficiencies.
RCM Signal Processing Procedure
The RCMs at the University of Connecticut have been described extensively elsewhere [3,4], and interested readers are referred to those papers for further details. The primary diagnostic on the RCM is the reaction chamber pressure during and after the compression process, measured by a dynamic pressure transducer. The pressure trace is processed to determine the quantities of interest, including the pressure and temperature at the EOC, P_C and T_C respectively, and the ignition delay, τ. These values depend on the pressure and temperature prior to the start of compression (P_0 and T_0, respectively), in addition to the composition of the reactant mixture and the overall compression ratio of the RCM. A single compression-delay-ignition sequence is referred to as an experiment or a run, and a set of experiments at a given P_C and mixture composition is referred to as a condition.
The dynamic pressure transducer outputs a charge signal that is converted to a voltage signal by a charge amplifier with a nominal output of 0 V. In addition, the output range of 0 V to 10 V is set by the operator to correspond to a particular pressure range by setting a "scale factor." The voltage output from the charge amplifier is digitized by a hardware data acquisition system and recorded into a plain text file by a LabView Virtual Instrument. Figure 1 shows a typical voltage trace measured from the RCM at UConn and demonstrates the typical noise in the signal, which requires filtering and further processing to produce a useful pressure trace.
In the current version of UConnRCMPy [2], the voltage is filtered using a first-order Butterworth filter. The cutoff frequency of the filter is chosen automatically by a procedure described in the work of Yu et al. [5] and Duarte [6]. Briefly, this procedure applies low-pass filters of varying cutoff frequencies to the signal and calculates the root mean square residual between the filtered signal and the original signal. Figure 2 shows a typical plot of the residuals versus the cutoff frequency and demonstrates that the residuals are nearly linear for a range of cutoff frequencies. This range tends to start near one-twentieth the Nyquist frequency, as demonstrated by the left-most line labeled "Fitting Edges" on Fig. 2. To determine the right edge of the linear region, a series of linear regressions of the residuals are performed. The y-intercept of the regression with the highest coefficient of determination is used to choose the optimal cutoff frequency. The right-most line labeled "Fitting Edges" in Fig. 2 demonstrates a case where the end point set at 0.15 times the Nyquist frequency produces the best fit. The optimal cutoff frequency is chosen as the frequency at the intersection of the y-intercept and the residuals curve.
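A condensed sketch of this filtering and cutoff-sweep idea is shown below; it is a simplified illustration (a zero-phase filtfilt pass and a plain residual sweep, without the regression over the fitting edges), not the package's exact implementation.

import numpy as np
from scipy import signal

def filter_voltage(voltage, sample_freq, cutoff_hz):
    # First-order Butterworth low-pass filter, applied zero-phase
    nyquist = sample_freq / 2.0
    b, a = signal.butter(1, cutoff_hz / nyquist)
    return signal.filtfilt(b, a, voltage)

def rms_residuals(voltage, sample_freq, cutoffs):
    # RMS difference between the filtered and raw signal for each cutoff,
    # i.e. the quantity plotted against cutoff frequency in Fig. 2
    return np.array([
        np.sqrt(np.mean((filter_voltage(voltage, sample_freq, fc) - voltage) ** 2))
        for fc in cutoffs
    ])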
After filtering, the voltage trace is converted to a pressure trace by correcting for the offset from the nominal initial voltage of 0 V apparent in Fig. 1b, multiplying the voltage by the scale factor from the charge amplifier, and adding the initial pressure P_0. The result is a vector of time-varying pressure values that must be further processed to determine the time of the EOC and the ignition delay.
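The conversion itself is short; a minimal sketch, assuming the filtered voltage is a NumPy array and the scale factor is in bar per volt:

def voltage_to_pressure(filtered_voltage, factor, P0):
    # Correct the offset from the nominal 0 V initial output, apply the
    # charge-amplifier scale factor, and add the initial pressure P0 (bar)
    offset = filtered_voltage[0]
    return (filtered_voltage - offset) * factor + P0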
Once the pressure trace has been constructed, T_C, P_C, and τ can be calculated. In the current version of UConnRCMPy [2], the time of the EOC is determined by finding the local maximum of the pressure prior to ignition. Then, the ignition delay is determined as the time difference between the EOC and the point of ignition, where the point of ignition is defined as the inflection point in the pressure trace due to ignition. The inflection point is found by the maximum of the first derivative of the pressure with respect to time. In the current version of UConnRCMPy [2], the first derivative of the experimental pressure trace is computed by a second-order forward differencing method. The derivative is then smoothed by a moving average algorithm with a width of 151 points. This value for the moving average window was chosen empirically.
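A condensed sketch of that logic follows, using NumPy's gradient and a convolution-based moving average in place of the package's second-order forward difference; the function name and edge handling are illustrative.

import numpy as np

def overall_ignition_delay(time, pressure, window=151):
    dpdt = np.gradient(pressure, time)
    kernel = np.ones(window) / window                # moving-average smoothing
    dpdt_smooth = np.convolve(dpdt, kernel, mode="same")
    ignition_idx = np.argmax(dpdt_smooth)            # inflection point of ignition
    eoc_idx = np.argmax(pressure[:ignition_idx])     # pressure maximum before ignition
    return time[ignition_idx] - time[eoc_idx]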
For some conditions, the reactants may undergo two distinct stages of ignition. These cases can be distinguished by a pair of peaks in the first time derivative of the pressure. For some two-stage ignition cases, the first-stage pressure rise, and consequently the peak in the derivative, are relatively weak, making it hard to distinguish the peak due to ignition from the background noise. This is currently the area requiring the most manual intervention, and one area where significant improvements can be made by refining the differentiation and filtering algorithms. An experiment that shows two clear peaks in the derivative is shown in Fig. 3 to demonstrate the definitions of the ignition delays.
The final parameter of interest presently is the EOC temperature, T_C. This temperature is often used as the reference temperature when reporting ignition delays. In general, it is difficult to measure the temperature as a function of time in the reaction chamber of the RCM, so methods to estimate the temperature from the pressure trace are used. The detailed procedure used in UConnRCMPy is described in the work of Dames et al. [7], and an overview is given here.
In general, the temperature in the RCM reaction chamber as a function of time can be found by integrating the first law of thermodynamics for an ideal gas:

$$c_v \frac{dT}{dt} + p\frac{dv}{dt} + \sum_k u_k \frac{dY_k}{dt} = 0, \tag{1}$$

where c_v is the specific heat at constant volume of the mixture, v is the specific volume, u_k and Y_k are the specific internal energy and mass fraction of the species k, and t is time. In UConnRCMPy, Eq. (1) is integrated by Cantera [8].
Integrating Eq. (1) requires knowledge of the volume of the reaction chamber as a function of time. To calculate the volume as a function of time, it is assumed that there is a core of gas in the reaction chamber that undergoes an isentropic, constant composition compression [9]. The initial entropy of the gas mixture is calculated using Cantera [8]. Subsequently, the state of the mixture is fixed by using the entropy and measured pressure; from this information, the volume is calculated. The initial volume is arbitrarily taken to be V_0 = 1.0 m³. The initial volume used in constructing the volume trace is arbitrary, provided that the same value is used for the initial volume in the simulations described below. However, because the volume is arbitrary, extensive quantities such as the total heat release during ignition cannot be compared to experimental values.
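A minimal sketch of this construction with Cantera, assuming the pressure trace is in bar and the initial state is given in SI units; the function name is illustrative.

import cantera as ct
import numpy as np

def volume_trace(pressure_bar, T0, P0, X, cti_file, V0=1.0):
    gas = ct.Solution(cti_file)
    gas.TPX = T0, P0, X            # T0 in K, P0 in Pa
    s0 = gas.entropy_mass          # entropy is constant during the compression
    rho0 = gas.density
    volumes = np.empty(len(pressure_bar))
    for i, p in enumerate(pressure_bar):
        gas.SP = s0, p * 1.0e5     # fix the state by (s, P); Cantera works in Pa
        volumes[i] = V0 * rho0 / gas.density   # constant mass: V proportional to 1/rho
    return volumes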
Two simulations can be triggered by the user that solve Eq. (1). In the first, the multiplier for all the reaction rates is set to zero, to simulate a constant composition (non-reactive) process. In the second, the reactions are allowed to proceed as normal. Only the non-reactive simulation is necessary to determine T C , which is defined as the simulated temperature at the EOC time.
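Both modes might be sketched with Cantera's reactor network as below; the wall-velocity function (the time derivative of the volume trace) is assumed to be available, and the 'air.xml' file for the inert environment reflects the data files bundled with older Cantera releases.

import cantera as ct
import numpy as np

def run_simulation(cti_file, T0, P0, X, wall_velocity, t_end, reactive=True):
    gas = ct.Solution(cti_file)
    gas.TPX = T0, P0, X
    if not reactive:
        gas.set_multiplier(0.0)    # zero all reaction rates: non-reactive simulation
    reac = ct.IdealGasReactor(gas)
    env = ct.Reservoir(ct.Solution('air.xml'))
    # The moving wall imposes dV/dt from the measured volume trace
    ct.Wall(reac, env, A=1.0, velocity=wall_velocity)
    net = ct.ReactorNet([reac])
    times = np.linspace(0.0, t_end, 2000)
    temps = np.empty_like(times)
    for i, t in enumerate(times):
        net.advance(t)
        temps[i] = reac.T
    return times, temps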
When a reactive simulation is conducted, the user must compare the temperature traces from the two simulations to verify that the inclusion of the reactions does not change T C , validating the assumption of adiabatic, constant composition compression. Although including reactions during the compression stroke does not affect the value of T C , it does allow for the buildup of a small pool of radicals that can affect processes after the EOC [10]. Thus, it is critical to include reactions during the compression stroke when conducting simulations to compare a kinetic model to experimental results.
As can be seen in Fig. 3, the pressure decreases after the EOC due to heat transfer from the higher temperature reactants to the reaction chamber walls. This process is specific to the machine that carried out the experiments, and to the conditions under which the experiment was conducted.
To include the effect of this heat transfer in the simulations, a non-reactive experiment is conducted, in which the O₂ in the oxidizer is replaced with N₂.
To apply the effect of the post-compression heat loss to the simulations, the reaction chamber is modeled as undergoing an isentropic volume expansion after the EOC, and the same procedure is used as in the computation of T_C to compute a volume trace for the post-EOC time. The only difference is that the non-reactive pressure trace is used after the EOC instead of the reactive pressure trace. This procedure has been validated experimentally by measuring the temperature in the reaction chamber during and after the compression stroke. The temperature of the reactants was found to be within approximately ±5 K of the simulated temperature [11,12].
Implementation and Usage of UConnRCMPy
UConnRCMPy is constructed in a hierarchical manner. The main user interface to UConnRCMPy is through the Condition class, the highest level of data representation. The Condition class contains all of the information pertaining to the experiments at a given condition. The intended use of this class is in an interactive Python interpreter (the authors prefer the Jupyter Notebook with an IPython kernel [13]). First, the user creates an instance of the Condition class. The cti_file argument to Condition must point to a file in the CTI format that contains the thermodynamic and reaction information for the species in the mixture. The experiments in the following example were conducted with mixtures of propane, oxygen, and nitrogen [7]. The CTI file necessary to run this example can be found in the Supplementary Material of the work by Dames et al. [7]. Then, the composition of the mixture under consideration must be added to the initial_state parameter of the ideal_gas() function in the CTI file; a representative fragment is sketched below. Ellipses indicate input that was truncated to save space; the truncated input is present in the file available with the work of Dames et al. [7]. The mole_fractions must be set to the appropriate values. The condition in this example is for a fuel-rich mixture, with a target P_C of 30 bar.
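A hypothetical fragment of such a CTI file is shown below; following the convention above, ellipses mark truncated input, and the mole fractions are placeholders rather than the values from Dames et al. [7].

ideal_gas(name='gas',
          elements='C H O N Ar',
          species='...',
          reactions='all',
          initial_state=state(temperature=300.0,
                              pressure=OneAtm,
                              # Placeholder fuel-rich C3H8/O2/N2 composition
                              mole_fractions='C3H8:0.0499, O2:0.0999, N2:0.8502'))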
After initializing the Condition, the user conducts a reactive experiment with the RCM and adds the experiment to the Condition using the add_experiment() method. As each experiment is processed by UConnRCMPy, the information from that run is added to the system clipboard for pasting into some spreadsheet software. In the current version, the information copied is the time of day of the experiment, the initial pressure, the initial temperature, the pressure at the EOC, the overall and first-stage ignition delays, an estimate of the EOC temperature, some information about the compression ratio of the reactor, and the filter frequency used.

# Conduct reactive experiment #1 on the RCM
cond_00_02.add_experiment('00_in_02_mm_373K-1282t-100x-19-Jul-15-1633.txt')
# ... conduct and add other reactive experiments

In general, for a given condition, the user will conduct and process all of the reactive experiments before conducting any non-reactive experiments. Then, the user chooses one of the reactive experiments as the reference experiment for the condition (i.e., the one whose ignition delay(s) and T_C are reported) by inspection of the data in the spreadsheet. The reference experiment is defined as the experimental run whose overall ignition delay is closest to the mean overall ignition delay among the experiments at a given condition. To select the reference experiment, the user sets the reactive_file attribute of the Condition instance. For this case, the reference experiment is the run that took place at 16:33:

cond_00_02.reactive_file = '00_in_02_mm_373K-1282t-100x-19-Jul-15-1633.txt'

Once the reference reactive experiment is selected, the user conducts experiments at the same initial pressure and temperature conditions, but with a non-reactive mixture. The user adds non-reactive experiments to the Condition by the same add_experiment() method, and UConnRCMPy automatically determines whether the experiment is reactive or non-reactive. If the user does not specify the reactive_file attribute, they are prompted for the file name when the first non-reactive case is added.
# Conduct non-reactive experiment #1 on the RCM
cond_00_02.add_experiment('NR_00_in_02_mm_373K-1278t-100x-19-Jul-15-1652.txt')

UConnRCMPy determines that this is a non-reactive experiment and generates a new figure that compares the current non-reactive case with the reference reactive case. For this particular example, the pressure traces are shown in Fig. 3. In this case, the non-reactive pressure agrees very well with the reactive pressure and no further experiments are necessary; in principle, any number of non-reactive experiments can be conducted and added to the figure for comparison. Since there is good agreement between the non-reactive and reactive pressure traces, the user sets the nonreactive_file attribute of the Condition instance:

cond_00_02.nonreactive_file = 'NR_00_in_02_mm_373K-1278t-100x-19-Jul-15-1652.txt'

Once the non-reactive case is chosen, the create_volume_trace() method can be run. This method requires three attributes to be set on the Condition instance: nonreactive_end_time, which controls the end time for volume-trace generation; reactive_end_time, which controls the length of the pressure trace stored in the output file; and reactive_compression_time, which is the length of the compression stroke. All of these values must be supplied in units of milliseconds.
cond_00_02.nonreactive_end_time = 400
cond_00_02.reactive_end_time = 80
cond_00_02.reactive_compression_time = 36

After generating the volume trace, create_volume_trace() writes the volume.csv file, the pressure trace file, and a file called volume-trace.yaml, which contains the values that were set for each attribute.
The final step is to conduct the simulations that calculate T_C and the simulated ignition delay. The user does this by running the compare_to_sim() method, which takes two optional arguments, run_reactive and run_nonreactive; these determine which type(s) of simulation(s) are conducted.
cond_00_02.create_volume_trace()
cond_00_02.compare_to_sim(run_reactive=True, run_nonreactive=True)

UConnRCMPy is documented using standard Python docstrings for functions and classes. The documentation is converted to HTML files by the Sphinx documentation generator [14]. The format of the docstrings conforms to the NumPy docstring format so that the autodoc module of Sphinx can be used. The documentation is available on the web at https://bryanwweber.github.io/UConnRCMPy/. UConnRCMPy also relies heavily on functionality from the NumPy [15], SciPy [16], and Matplotlib [17] Python packages.
Conclusions and Future Work
UConnRCMPy provides a framework to enable consistent analysis of RCM data. Because it is open source and extensible, UConnRCMPy can help to ensure that RCM data in the community can be analyzed in a reproducible manner; in addition, it can be easily modified and used for data in any format. In this sense, UConnRCMPy can be used more generally to process any RCM experiments where the ignition delay is the primary output.
Future plans for UConnRCMPy include the development of a robust test suite to prevent regressions and document correct usage of the framework, as well as the development of a plugin architecture to allow easy implementation of user-defined analysis features. Other issues and directions are listed on the Issues page of the GitHub repository: https://github.com/bryanwweber/uconnrcmpy/issues/.
Acknowledgements
This paper is based on material supported by the National Science Foundation under Grant No. CBET-1402231.
Productive knowledge, scaling, enterprise richness and poverty in a group of small U.S. counties
Abstract The socioeconomic and entrepreneurial characteristics of a group of 68 small (fewer than 120000 persons) U.S. counties exhibit extensive orderliness. There is a geographically-insensitive log-log (power law) relationship between the number of enterprises and enterprise richness (the number of enterprise types) in the counties. Enterprise richness is used as a proxy for productive knowledge, i.e., the explicit and tacit knowledge to produce and deliver things and services. Enterprise richness quantifies the number of times a new business idea is successfully introduced. The numbers of population, enterprises, employees in enterprises, total county employees, and higher-educated persons all scale super-linearly in relation to levels of productive knowledge. Thus, there are significant agglomeration effects associated with increases/decreases of productive knowledge. The ratios between two entrepreneurial types, i.e., new and existing entrepreneurs, have been quantified in relation to the size of the counties. Smaller counties have a greater need for new entrepreneurs than for existing entrepreneurs, and the opposite is the case for larger counties. The notion that the level of poverty is related to the level of productive knowledge in countries led to exploration of the relationship of various characteristics to poverty levels in the selected counties. Poverty is an important moderating factor in the entrepreneurial wellbeing of the 68 counties. The demographic-socioeconomic-entrepreneurial nexus of U.S. counties deserves further research attention.
ABOUT THE AUTHOR
Danie Francois Toerien
The past decade has been dedicated to studies of enterprise structures and their relationships with the demographic and socioeconomic characteristics of South African towns. Inordinate orderliness was detected, expressed in many different regularities/proportionalities. In particular, an enduring and geographically insensitive log-log (power law) relationship was detected between the total number of enterprises and the number of different enterprise types. This relationship, which has been used as a proxy for productive knowledge (i.e. the ability to successfully start enterprises of types not yet present), has provided a platform for investigations of the nexus between demographic, socioeconomic and entrepreneurial characteristics and has yielded much predictability. This research was extended to the demographic, socioeconomic and entrepreneurial domains of smaller (<120000 people) counties of the U.S., a developed country. This study reveals that a seemingly geographically insensitive log-log relationship is present and could well be characteristic of all smaller human settlements in the U.S.
PUBLIC INTEREST STATEMENT
Concerns about distressed communities in the U.S. require investigation of the potential presence of statistically significant log-log (power law) relationships between their demographic, socioeconomic and entrepreneurial characteristics. Publicly-available data sets of a group of 68 smaller U.S. counties (fewer than 120000 residents) have been used to address this issue. Power laws are present, and scaling based on population numbers and on levels of productive knowledge (i.e. the ability to successfully start enterprises of types not previously present) is a common feature. Counties with more productive knowledge have disproportionately fewer enterprises but also disproportionately more people, more employees, more people with higher degrees and also more officially poor people. Smaller counties face different development challenges than larger counties. The possibility that these findings could be generally true for U.S. counties indicates that more of this type of research is needed to enhance efforts to assist distressed communities. Hausmann et al. (2017) developed an atlas of the economic complexity of most countries of the world. The atlas provides estimates of the level of productive knowledge of each country. They stated: "The social accumulation of productive knowledge has not been a universal phenomenon. It has taken place in some parts of the world, but not in others. Where it has happened, it has underpinned an incredible increase in living standards. Where it has not, living standards resemble those of centuries past. The enormous income gaps between rich and poor nations are an expression of the vast differences in productive knowledge amassed by different nations. These differences are expressed in the diversity and sophistication of the things that each of them makes."
Introduction
Productive knowledge is not book learning but knowledge to manufacture products or deliver services stemming from practice and experience. There are two kinds of knowledge: explicit and tacit (Hausmann et al., 2017). Explicit knowledge can be transferred easily by reading a text or listening to a conversation. Tacit knowledge is hard to embed in people. It comes from years of experience more than from years of schooling. Tacit knowledge forms an important part of productive knowledge.
Increasing productive knowledge increases the economic complexity of societies. The mix of products that countries are able to make is important (Hausmann et al., 2017) and is reflected in countries' business diversity. Producing new things is quite different from producing more of the same (Hausmann & Klinger, 2006, p. 1). Schumpeter (1942, p. 83) states: "… the same process of industrial mutation-if I may use that biological term-that incessantly revolutionizes the economic structure from within, incessantly destroying the old one, incessantly creating a new one." Innovation leading to new products and/or services expands business diversity and builds productive knowledge.
In biology, diversity is measured by recording the number of species, by describing their relative abundances or by using a measure that combines the two components (Magurran, 1988). Spellberg and Fedor (2003) suggested that the term "species richness" should be used to refer to the number of natural species in a given area or in a given sample. Toerien and Seaman (2014) adopted use of the term enterprise richness to express the diversity of or number of business types in geographic locations, e.g. in human settlements. (For simplicity's sake in the rest of this contribution, the term enterprise includes the term [business] establishment, often used in U.S. statistical business data).
The abundances of enterprise types across U.S. metropolitan statistical areas (MSAs) (metropolitan cities with at least 50 000 residents) were investigated by Youn et al. (2016). They reported that the number of enterprise types increases logarithmically with the number of enterprises but eventually levels off due to limits inherent to the North American Industry Classification Scheme to fully capture the true extent of economic diversity in megacities. Toerien and Seaman (2014) studied the enterprise richness (i.e. the number of enterprise types) of South African towns. Enterprise types were identified with the help of a database of more than 600 examples of enterprise types. They recorded a statistically significant (p < 0.01) power law (log-log) relationship between total enterprises (a measure of town size) and the enterprise richness of the South African towns. This relationship is not geographically sensitive (Toerien & Seaman, 2014), has endured over some seventy years (Toerien, 2017) and allows calculation of the ratio between two entrepreneurial types as a function of the size of towns: entrepreneurs that develop products or services that have not been present before in a locality (i.e., "new entrepreneurs"), and entrepreneurs that start additional enterprises of types already present (i.e., "existing entrepreneurs") (Toerien, 2015, 2017). Toerien (2018a) argued that enterprise richness could be used as a proxy measurement of the productive knowledge in South African towns because it indicates the number of times entrepreneurs have been able to successfully conceive and implement new business opportunities involving enterprise types new to these towns. In other words, they have gone beyond "more of the same". The quantification of the relationship between enterprise richness and total enterprises, therefore, provided insight into business diversity and successful innovation outcomes in South African towns (Toerien, 2017; Toerien & Seaman, 2014).
Productive knowledge, innovation and people
Productive knowledge obviously involves people. Moretti (2017, p. 12) comments: "Fifty years ago, manufacturing was the driver of growth, the one sector responsible for raising the wages of American workers, including local service workers. Today the innovation sector is the driver." Moretti (2017, p. 56) explained: "[T]he vast majority of jobs in a modern society are in local services. People who work as waiters, plumbers, nurses, teachers, real estate agents, hairdressers, and personal trainers offer services that are produced and consumed locally. This sector exists only to serve the needs of a region's residents and is largely insulated from national and international competition. Economists call this the non-traded sector. Such jobs are 'non-tradable' because they cannot be exported outside the region where they are produced: you need to consume them where you produce them". Moretti (2017, p. 57) added: "By contrast, most jobs in the innovative industries belong to the traded sector, together with jobs in traditional manufacturing, some services-parts of finance, advertising, publishing-and agricultural and extractive industries such as oil, gas, and timber." He explained: "These jobs, which account for about a third of all jobs, are very different because they produce a good or service that is mostly sold outside the region and therefore needs to be competitive in the national and global marketplace". Additionally, Moretti (2017, p. 57) commented: "The paradox is that while the vast majority of jobs are in the non-traded sector, this sector is not the driver of our prosperity. Instead our prosperity mainly depends on the traded sector". The application of productive knowledge that results in tradable products and/or services is important and should be considered in relation to human settlements. Florida (2002, p. xix) pointed out: "For most of human history, wealth came from a place's endowment of natural resources, like fertile soil or raw materials. But today, the key resource, creative people, is highly mobile". Creative people, innovation and employment are tightly linked. Cities are the greatest invention of humankind and are gateways for ideas (Glaeser, 2011). The relationship between human settlements and creative people needs consideration.
For instance, Youn et al. (2016) found that the total number of enterprises in each MSA in the U.S. is linearly proportional to its population size. Toerien and Seaman (2012a, 2012b) reported the same for South African towns. In U.S. cities, there is approximately one enterprise for every 22 people per city, regardless of city size (Youn et al., 2016). The number of employees scales approximately linearly with enterprise numbers, and there are approximately 7.9 employees per enterprise.
The per capita approach is, however, not suitable for the universal characterization and comparison of cities because it ignores the phenomenon of people-based agglomeration of characteristics (Ortman, Cabaniss, Sturm, & Bettencourt, 2014). Cities are more than the linear sum of their individual components, and larger cities with an agglomeration of creative people are disproportionally the centres of innovation, wealth and crime. As the populations of cities grow, major innovation cycles must be generated at a continually accelerating rate to sustain growth and avoid stagnation or collapse (Bettencourt et al., 2007a). On the other hand, smaller cities require relatively more infrastructural development than larger cities (West, 2017).
The agglomeration characteristics of modern cities take a simple mathematical form (Bettencourt et al., 2007a). For example, with population N(t) as the measure of city size at time t, the scaling power law is:

Y(t) = Y_0 N(t)^β (1)

Y can denote material resources (such as energy or infrastructure) or measures of social activity (such as wealth, patents, and pollution); Y_0 is a normalization constant. The exponent, β, reflects general dynamic rules at play across the urban system.
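For illustration, a brief Python sketch (not from the cited works) evaluates Equation (1) with the super-linear exponent β = 1.15 mentioned below and an arbitrary normalization Y_0 = 1:

    # Sketch: evaluate the urban scaling law Y(t) = Y0 * N(t)**beta.
    # beta = 1.15 is the typical super-linear exponent for socioeconomic
    # outputs cited in the text; Y0 = 1.0 is an arbitrary normalization.
    def scaling_output(N, Y0=1.0, beta=1.15):
        """Return Y for a city of population N under the power law."""
        return Y0 * N**beta

    for N in (100_000, 200_000, 400_000):
        print(N, round(scaling_output(N), 1))

    # Each doubling of N multiplies Y by 2**1.15 ≈ 2.22, i.e., about 11%
    # more output per capita -- the increasing returns to scale noted below.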
Robust and commensurate scaling exponents have been recorded across different nations, economic systems, levels of development, and recent time periods for a wide variety of indicators (Bettencourt et al., 2007a). Measures of the physical extent of urban infrastructure increase more slowly than city population size, thus exhibiting economies of scale. They scale sub-linearly (West, 2017). Regardless of where a city is located and regardless of the specific metric used, only about 85% more material infrastructure is needed for every doubling of city populations. On the other hand, various socioeconomic outputs increase faster than population size and thus exhibit increasing returns to scale. They scale super-linearly (West, 2017), which is typical of open-ended complex systems (Ortman, Cabaniss, Sturm, & Bettencourt, 2015). The exponents of the latter power laws for cities are always in the order of 1.15 (West, 2017).
As mentioned before, Toerien and Seaman (2014) and Toerien (2017) recorded statistically significant power law relationships between total enterprises and the enterprise richness (productive knowledge) in South African towns. The coefficients of these power laws are all sub-linear, indicating that the ratio of enterprise types to total enterprises is proportionally higher in smaller towns than in larger towns. Inverting the power law between enterprise richness and enterprise numbers enables investigation of the scaling of enterprise numbers in comparison to changes in enterprise richness. The exponents of such power laws recorded for South African towns (Toerien, 2018b) are super-linear (more than 1.35) and a strong scaling effect was recorded. It is unknown if this is also the case in smaller human settlements in developed countries. Toerien (2018a, 2018b) also demonstrated that a given level of enterprise richness (productive knowledge) could be associated with more than one level of population numbers in South African towns. The use of a community wealth/poverty measurement (the enterprise dependency index) enabled a demonstration that differences in the wealth/poverty states of towns play an important role in entrepreneurial dynamics (Toerien, 2018b). Poorer towns with the same level of enterprise richness (productive knowledge) as richer towns have proportionally larger populations, in step with their poverty levels. It has been stated: "At the start of the twenty-first century, cities emerged as the source of the greatest challenges that the planet has faced since humans became social." Large cities manifest remarkably universal, quantifiable features (Bettencourt et al., 2007a). The South African research reviewed above extends the findings of the Santa Fe research group on large cities by revealing links in South African towns between demography, entrepreneurial dynamics, productive knowledge and poverty.
Purpose and organization of this contribution
Are the links detected in South Africa (a developing country) also present in the towns of developed countries? Unfortunately, the necessary demographic, socioeconomic and entrepreneurial data to investigate this question is not publicly available for towns in the U.S., a developed country. The smallest human settlements for which such data is available are counties, which are used instead. The prime purpose of this contribution is accordingly to provide information about an initial quantitative exploration of the above links in a group of U.S. counties. In particular, the links between productive knowledge (enterprise richness) and other important socioeconomic characteristics of the selected counties, and, the impact of the wealth/poverty states on such links are explored.
The contribution is organized as follows: the methods are described in the following section, and then the relationship between enterprise numbers and enterprise richness is examined. The resulting power law is inverted to examine entrepreneurial activities in relation to enterprise richness. The geographic sensitivity of this relationship is then examined. The enterprise numbers-enterprise richness power law is used to quantify the requirements of two different entrepreneurial types, i.e. "new" and "existing" entrepreneurs, as functions of the total number of enterprises in counties. Thereafter the scaling impacts associated with increases/decreases of productive knowledge (enterprise richness) are examined: firstly, in terms of enterprises and jobs, and, secondly, in terms of other socioeconomic characteristics of the selected counties. This is followed by an analysis of the role of the wealth/poverty states (measured as enterprise dependency indices) of counties on their demographics-entrepreneurship nexuses. A discussion and conclusions then follow.
Selection of the group of counties
Only three criteria were used to select a group of 68 counties for the exploration. Firstly, to avoid the problem encountered by Youn et al. (2016) that 6-digit NAICS data may be inadequate to fully quantify the number of enterprise types present in a county, use is only made of smaller counties (fewer than 120000 people). This reduces the risk that NAICS data contained in the County Business Pattern (CBP) datasets used (described later) would be inadequate for classification purposes. Secondly, to examine the geographic sensitivity of the results, counties were selected from seven states that roughly cover the geographic distribution of states in the U.S. (Table 1). The states are: Alabama (the South), California (the West), Kansas and Missouri (the Mid-West), Maine (the North-East), Maryland (the Eastern Seaboard) and Minnesota (the North). Thirdly, the NAICS data for a selected state was used starting at the first county in its database. If the county met the criteria, its data was used, and then the second county followed, etc. No other considerations entered into the selection of counties. Groups of about 10 counties per state were selected (Table 1) except Maine with only four counties with fewer than 120000 people. In broad terms, the selection process of the group of counties was basically random and the group of 68 is large enough to serve the purposes of the exploration.
For each county, the total number of enterprises and the number of their employees were extracted from the datasets. The number of separate 6-digit enterprise classifications in the dataset of each county provided the quantification of its enterprise types (enterprise richness).
Business pattern datasets
The County Business Pattern (CBP) datasets (U.S. Census Bureau, 2018a) provide essential information for this analysis. CBP is an annual series that provides U.S. subnational economic data by industry. The datasets contain information about all U.S. counties, including the numbers of their enterprises. The NAICS system used in CBP analyses uses a 6-digit system to classify the enterprises in the datasets into different business types and to enumerate the number of enterprises. The CBP 2016 basic dataset is used in this exploration.
Quick Facts (U.S. Census Bureau, 2018b) provides further socioeconomic information on U.S. counties. Information used in this exploration includes: 1. estimated population in 2016, 2. total employment per county (which is more than the number of employees associated with enterprises), 3. total personal income (average personal income for a county multiplied by its population number), 4. the number of poor people per county (the percentage of poor people multiplied by the population number), and 5. number of people with bachelor degrees or higher per county (the percentage of population with such degrees multiplied by the population number).
Normalization of parameters
A meaningful comparison between cities should rely on relative quantities rather than on their absolute values (Bettencourt et al., 2007b). Consequently, the characteristics of the 68 counties were first normalized by division with their average values before being subjected to log-log regression analyses to determine if power laws are present.
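As an illustration of this procedure, the following Python sketch normalizes two characteristics by their group means and fits a power law by ordinary least squares in log-log space; the data arrays are invented placeholders, not the actual county values:

    import numpy as np

    # Illustrative values only -- not the actual county data.
    enterprises = np.array([103., 250., 600., 1200., 2698.])
    richness = np.array([68., 110., 180., 280., 450.])

    # Normalize each characteristic by its average across the group.
    x = enterprises / enterprises.mean()
    y = richness / richness.mean()

    # A power law y = a * x**b is linear in log-log space:
    # log(y) = log(a) + b*log(x); fit by ordinary least squares.
    b, log_a = np.polyfit(np.log(x), np.log(y), 1)
    print(f"exponent b = {b:.2f}, constant a = {np.exp(log_a):.3f}")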
Relationships between enterprise richness and total enterprises
Normalized enterprise numbers and normalized enterprise richness values were subjected to loglog power law analyses. After a statistically significant power law was demonstrated between total enterprise numbers and enterprise richness, the inverse power law between enterprise richness and total enterprise numbers was investigated.
Geographic sensitivity of the enterprise richness-enterprise numbers power law
The geographic sensitivity of the relationship between normalized enterprise richness and normalized enterprise numbers is investigated by plotting the data of all counties in a single graph.
County enterprise richness, entrepreneurial spaces and impacts of enterprise richness
The power law between enterprise richness and enterprises is used to plot the number of enterprise types against total enterprise numbers (see Toerien, 2015, 2017). This enables quantification of the ratio of "new entrepreneurs" relative to "existing entrepreneurs" for differently-sized urban settlements. New entrepreneurs are represented by the number of enterprise types, which indicates how many times a specific enterprise type has been successfully started and has endured in a specific county. Existing entrepreneurs represent the difference between total enterprise numbers and the number of enterprise types for a specific county. Two relationships are used to assess the impacts of changes in enterprise richness (i.e. productive knowledge) on enterprise numbers and employment numbers: 1. the enterprise richness-enterprise numbers power law, and 2. a linear enterprise numbers-total employment relationship. At different levels of urban size (measured as total enterprises), the impact of a single additional new enterprise type is firstly calculated in terms of additional enterprises and then in terms of additional jobs.
Per capita-based linear regularities
Linear per capita indicators are often used to characterize and rank cities. Ordinary least squares regression analyses of per capita data are used where appropriate in this exploration.
Scaling in relation to enterprise richness (productive knowledge)
Scaling impacts associated with increases in the enterprise richness (productive knowledge) of South African towns (Toerien, 2017, 2018a, 2018b; Toerien & Seaman, 2014) necessitated exploration of the scaling of socioeconomic characteristics relative to enterprise richness in the group of counties. Characteristics investigated are: total populations of counties, employee numbers of enterprises in counties, total income in counties, total employment in counties, numbers of (officially) poor persons in counties, and numbers of people with bachelor or higher degrees in counties. The data was derived from Quick Facts (U.S. Census Bureau, 2018b).
Normalized enterprise richness values and normalized values of the selected characteristics were subjected to log-log regression analyses. When power laws were recorded, it was noted whether their exponents are super-linear (exponent > 1), linear (exponent ≈ 1) or sub-linear (exponent < 1).
The impact of wealth/poverty on the demographic and entrepreneurial nexus
The relationship between enterprise numbers and enterprise richness
The enterprise numbers of the 68 counties ranged from 103 to 2698 and their enterprise richness ranged from 68 to about 450. The wide ranges of both characteristics enabled an investigation of the relationship between enterprise numbers and enterprise richness. The normalized enterprise numbers and normalized enterprise richness are linked in a statistically significant (p < 0.01) power law (Figure 1) with a sub-linear exponent of 0.57. Every doubling of the normalized enterprise numbers of the counties corresponds to an increase of 48.2% in the value of the normalized enterprise richness. A statistically significant power law between enterprise numbers and enterprise richness was also reported for South African towns (Toerien & Seaman, 2014). Its exponent was also sub-linear.
The inverse of the above relationship,

Normalized enterprise numbers = 0.0709 × (normalized enterprise richness)^1.7038 (2)

is also a power law, with R^2 = 0.9674 and n = 68. Increases in normalized enterprise richness (or productive knowledge) correspond to proportionally larger increases in normalized enterprise numbers. Scaling in relation to enterprise richness is strongly super-linear (exponent = 1.7038). For every doubling of the normalized enterprise richness, the value of normalized enterprise numbers more than triples (3.26 times). Innovation leading to additional productive knowledge (i.e. the ability to identify and exploit new business opportunities) should have large spill-over effects, as suggested by Moretti (2017). The normalized enterprise richness-normalized enterprise numbers relationship is not geographically sensitive: it holds for counties from widely differing geographic areas in the U.S. (Figure 2). The same was reported for regions in South Africa (Toerien & Seaman, 2014). It is possible that this relationship might be generally true for small U.S. counties, but this suggestion needs further research confirmation.
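Both doubling figures can be checked directly from the quoted exponents; the small difference from the 48.2% above reflects rounding of the 0.57 exponent. A two-line Python check:

    # Doubling behaviour implied by the two fitted power laws:
    # richness ~ enterprises**0.57 and enterprises ~ richness**1.7038.
    print(2**0.57)    # ≈ 1.48 -> richness rises ~48% per doubling of enterprises
    print(2**1.7038)  # ≈ 3.26 -> enterprises more than triple per doubling of richness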
New versus existing entrepreneurship
The fraction of total enterprises formed by the number of enterprise types in the different counties (deduced from Equation (2)) indicates how the entrepreneurial space for new and existing enterprise types increases/decreases with the size of counties (full and broken lines in Figure 3). The fraction corresponds closely with the actual ratios observed in the counties (dots and triangles in Figure 3). It illustrates how innovative new entrepreneurship (i.e. the ability to identify possibilities of new enterprise types that could be successfully launched), as opposed to existing entrepreneurship (i.e. launching just more of the enterprise types already present), contributes to the expansion of the total number of enterprises. Conversely, conditions leading to a loss of enterprises in a county should result in a loss of some enterprise types.
For economic growth in small counties, there is a high requirement for new enterprise types (Figure 3). This requirement is progressively lowered as counties grow. At about 250 enterprises there is still a requirement for 50% new versus 50% existing enterprise types. A Pareto distribution of 20% new to 80% existing enterprise types is reached at about 2200 enterprises. Importantly, expansion of total enterprise numbers in the 68 counties is always associated with the founding of new enterprise types and is, therefore, associated with the expansion of productive knowledge. This is in step with the views of Moretti (2017), West (2017) and others about innovation as a driver of prosperity in the U.S. On the other hand, a loss of productive knowledge in counties should result in fewer enterprises.
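A short Python sketch reproduces these two ratios. It assumes the sub-linear exponent of 0.57 from Figure 1 and calibrates the constant from the (richness = 100, enterprises = 181) pair quoted later in the text; both choices are assumptions for illustration:

    # New entrepreneurs ~ number of enterprise types (richness);
    # existing entrepreneurs ~ remaining enterprises of types already present.
    EXPONENT = 0.57               # from the Figure 1 power law
    C = 100 / 181**EXPONENT       # calibrated so that richness(181) = 100

    def new_fraction(n_enterprises):
        """Fraction of enterprises that are the first of their type."""
        richness = C * n_enterprises**EXPONENT
        return richness / n_enterprises

    print(round(new_fraction(250), 2))   # ≈ 0.48 -> roughly 50% new at ~250 enterprises
    print(round(new_fraction(2200), 2))  # ≈ 0.19 -> roughly the 20/80 Pareto split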
Spill-over impacts associated with increases/decreases of productive knowledge
Equation (2) allows calculation of increases/decreases of the enterprise richness of the counties in relation to their total enterprise numbers (Table 2). In addition, the total number of enterprises and total employment numbers of the 68 counties are linearly correlated (p < 0.01):

Total employment = 14.9 × (total enterprises) − 1852.1 (3)

with r = 0.94 and n = 68. Equation (2) is used to express the number of additional/fewer enterprises that would result from an increase/decrease of a single unit of enterprise richness (i.e. productive knowledge) at different county sizes; the proportionality coefficient of Equation (3) then relates the additional enterprises to additional employment opportunities (Table 2).
The impacts on total enterprise numbers and total employment numbers that correspond with increases/decreases in enterprise richness are large (Table 2). For instance, in a small county (enterprise richness of 100 and 181 enterprises), the addition or loss of one enterprise type will be associated with three enterprises and 45 jobs gained or lost. In a larger county (~2800 enterprises), the addition or loss of one enterprise type will be associated with the gain or loss of nine enterprises and more than 130 jobs. This provides further quantitative proof of the large impact of additional or reduced innovation in U.S. urban settlements (e.g. Moretti, 2017; West, 2017).
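These marginal impacts follow from Equations (2) and (3): differentiating the power law gives roughly 1.7038 × (enterprises/richness) additional enterprises per added enterprise type, each carrying about 14.9 jobs. A minimal Python sketch, calibrated (as an assumption) from the (richness 100, 181 enterprises) pair quoted above:

    BETA = 1.7038                 # exponent of Equation (2)
    JOBS_PER_ENTERPRISE = 14.9    # slope of Equation (3)
    A = 181 / 100**BETA           # constant calibrated from the quoted pair

    def richness_of(enterprises):
        """Invert enterprises = A * richness**BETA."""
        return (enterprises / A)**(1 / BETA)

    def marginal_impact(enterprises):
        """Additional enterprises and jobs per one added enterprise type."""
        d_ent = BETA * enterprises / richness_of(enterprises)
        return d_ent, JOBS_PER_ENTERPRISE * d_ent

    print(marginal_impact(181))   # ≈ (3.1, 46): about three enterprises, ~45 jobs
    print(marginal_impact(2800))  # ≈ (9.5, 142): about nine enterprises, >130 jobs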
Further scaling impacts associated with changes in productive knowledge
Statistically significant (p < 0.01) power laws have been recorded in all comparisons based on enterprise richness (Table 3). All exponents of these power laws are strongly super-linear, ranging from 1.32 (the number of people in poverty) to 1.95 (for total employment and number of degree holders), and are indicative of open-ended systems (Ortman et al., 2015). They also reflect the large impacts from additional or reduced innovation and the benefits/detriments of attracting/not attracting highly qualified people (Moretti, 2017). The R^2-value (0.57) of the power law between enterprise richness and the number of poor people explains less of the variation than those of the other power laws, which range from 0.82 to 0.97 (Table 3). However, it still serves as a further indication that agglomeration of poor people corresponds with higher levels of productive knowledge.
Poverty as a mitigating influence
There is a statistically significant power law relationship between enterprise richness and population in the 68 counties (Figure 4). However, only 82% of the variation is explained by the power law (Figure 4), and the data points of the relationship are not as densely concentrated as those for the enterprise numbers-enterprise richness relationship (Figure 1).
In a study of small towns in South Africa, Toerien (2018b) showed that the wealth/poverty states (measured as enterprise dependency indices) of these towns influenced the relationship between enterprise richness and town populations. Binning the selected counties into three wealth/poverty groups (1. enterprise dependency index < 40; 2. enterprise dependency index 40-60; 3. enterprise dependency index > 60) enabled calculation of the power law for each binned group (Figure 5). The binning reduced the spread of data within the power laws: R^2-values for the binned groups range from 0.91 to 0.97, whilst the R^2-value for all counties is 0.82. The super-linear exponents of the power laws increase from 1.43 (poorer counties) through 1.59 (intermediate counties) to 1.86 (richer counties). Increasing wealth levels (i.e. increasing ability to buy from local enterprises) were proportionally in step with larger power law exponents (Figure 5). Increasing wealth in a community apparently increases spill-over impacts, whilst increasing poverty decreases them.
The constants of the equations also changed progressively from 1.02 (richer counties) through 6.29 (intermediate counties) to 20.43 (poorer counties). Increasing poverty was associated with positional shifts of the power laws in Figure 5.
Are the wealth/poverty impacts important? Table 4 shows that they are, especially in smaller counties. An enterprise richness of 100 corresponds with 181 enterprises in the selected counties. Approximately 2.72 times more people (i.e. 14620 versus 5374 persons) would be needed to "carry" the 181 enterprises at an enterprise dependency index larger than 60 than at an enterprise dependency index of less than 40. This ratio diminishes steadily as enterprise richness and enterprise numbers increase, but at an enterprise richness of 500 and just more than 1920 enterprises, the ratio is still 1.36 (145410 versus 107339 persons) (Table 4). Wealth/poverty levels appear to be important influencers of the local economic dynamics in the 68 counties.
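The bin assignments and the 2.72 ratio can be verified arithmetically, assuming (as its use in the text suggests) that the enterprise dependency index is the number of people per enterprise:

    # Assumed definition: enterprise dependency index (EDI) = people per enterprise.
    def edi(population, enterprises):
        return population / enterprises

    # Figures quoted above for an enterprise richness of 100 (181 enterprises):
    print(round(edi(14620, 181), 1))  # ≈ 80.8 -> the 'index > 60' (poorer) group
    print(round(edi(5374, 181), 1))   # ≈ 29.7 -> the 'index < 40' (richer) group
    print(round(14620 / 5374, 2))     # ≈ 2.72 -> the population ratio in Table 4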
Validity of enterprise dependency index
One can ask whether the enterprise dependency index provides a valid measurement of the wealth/poverty states of counties. For an answer, it is necessary to examine the relationship between the normalized number of poor people (according to official U.S. norms) and the normalized number of enterprises in the counties. This comparison reveals a statistically significant (p < 0.01) power law relationship (Figure 6). The exponent (0.759) is sub-linear, indicating that the rate of increase in enterprise numbers does not match the rate of increase in poverty. In other words, as the number of poor people increases, the population apparently becomes less able to "carry" the same number of enterprises, indicative of a negative impact of poverty. Since enterprise dependency indices are partly based on the number of enterprises, and the latter is related to the number of poor people in a county (Figure 6), the enterprise dependency index should be a valid measure of the wealth/poverty state of U.S. counties.
Discussion
The socioeconomic characteristics of a group of small U.S. counties exhibit extensive orderliness, manifested by a range of statistically significant regression relationships, mostly log-log (power laws). This is broadly similar to what was recorded by the Santa Fe group for cities in the U.S. and elsewhere (Bettencourt, 2013; Bettencourt et al., 2007a, 2007b; Bettencourt, Samaniego, & Youn, 2014; Ortman et al., 2015; West, 2017; Youn et al., 2016). This orderliness enabled the exploration of the links between productive knowledge, a driver of the economic fates of nations (Hausmann et al., 2017), and a number of socioeconomic characteristics of the group of U.S. counties.
The positive correlations that were recorded do not necessarily imply causality between the various characteristics, but merely that the variation patterns are similar when two characteristics are analysed in a regression analysis. However, Mayer-Schönberger and Cukier (2014, p. 14) state: "In a big data world one does not have to be fixated on causality. Instead one can discover patterns and correlations in data that offer novel and invaluable insights. The correlations may not tell one precisely why something is happening, but they alert one to the fact that it is happening." The orderliness detected in this exploration, therefore, alerts to what is happening in the group of small U.S. counties and not necessarily why it is happening. The latter needs further research. Toerien and Seaman (2014) and Toerien (2017) reported a power law relationship between total enterprise numbers and enterprise diversity (i.e., enterprise richness) in South African towns. To avoid the limitations of the six-digit NAICS system (United States, 2017) to define enterprise types adequately as experienced by Youn et al. (2016), the present exploratory analysis of a group of U.S. counties focused on counties with fewer than 120 000 residents (Table 1). This strategy succeeded and a power law relationship between enterprise numbers and enterprise richness was detected (Figure 1). The South African power laws had sub-linear coefficients. This is also the case for the group of selected U.S. counties (Figure 1).
The relationship between enterprise richness and the enterprise numbers of the 68 selected counties is not geographically sensitive (Figure 2). This was also observed for South African towns (Toerien & Seaman, 2014). This phenomenon should be investigated in more counties and if the relationship is indeed geographically insensitive, it would add substantial predictive power about the enterprise dynamics in U.S. counties. However, the limitations of classification systems such as NAICS to adequately identify all business types (Youn et al., 2016) would have to be overcome to assess if the relationship also holds for large counties.
Producing new things is quite different from producing more of the same (Hausmann & Klinger, 2006). Enterprise richness measures the entrepreneurial ability to produce products and services that are not yet present in human settlements. Enterprise richness can, therefore, serve as a proxy measurement of the levels of productive knowledge in human settlements (Toerien, 2018a, 2018b). Inverting the power law in Figure 1 produces Equation (2), which links the concept of productive knowledge (Hausmann et al., 2017) (as enterprise richness) to entrepreneurial activities in human settlements (as total number of enterprises). The recorded power law has a strongly super-linear exponent: scaling is present and larger counties have disproportionally higher numbers of enterprises. This supports the suggestion that inventive activity is an important attribute of larger human settlements (e.g. Bettencourt et al., 2007a, 2007b). This power law is also typical of open-ended complex systems (Ortman et al., 2015; West, 2017).
The approach described above enabled further exploration of the scaling of socioeconomic characteristics of the selected counties in relation to changes in the level of productive knowledge. Increases in the productive knowledge of the counties are associated with disproportionate and sizeable increases (i.e. scaling) in some socioeconomic characteristics, because the exponents of the statistically significant applicable power laws are in the order of 1.5 or higher (Table 3). The numbers of people, enterprises, jobs in enterprises, total jobs in counties, and numbers of highly educated persons agglomerate disproportionally in counties with higher productive knowledge. Total county income and the number of officially poor people also scale super-linearly when compared to increases in enterprise richness. These results seem to be in step with the following ideas: growth is not simply having an economy with a large number of people but rather how their capabilities are integrated by the environment they create and live in (Romer, 1990). Knowledge spill-overs among individuals and firms supply the underpinnings of growth (Romer, 1986), and larger cities are environments that support and sustain more social interactions per unit time (Florida, 2002; Ortman et al., 2014; West, 2017). The creation and repositioning of knowledge in the selected counties apparently increase their attraction for educated, highly skilled, entrepreneurial and creative individuals.
The power law in Equation (2) was used to predict the impacts, on the enterprise and employment numbers of differently-sized counties, associated with the addition/reduction of a single enterprise richness unit to their enterprise richness pools (Table 2). The spill-over impacts on the total number of enterprises and total employment numbers are large and increase/decrease in step with size increases/decreases of counties. This analysis also quantitatively confirms that innovation is enhanced when counties grow larger, as has previously been suggested for cities (Bettencourt et al., 2007a, 2007b; Florida, 2002; Glaeser, 2011; Moretti, 2017; West, 2017). Conversely, when cities regress, there is a loss of innovative capacity.
In the small selected counties (< 250 enterprises), the majority of entrepreneurs must be able to conceive and start businesses of types not yet present (Figure 3). Similar results were reported for South African towns (Toerien, 2017; Toerien & Seaman, 2014). The need for new entrepreneurs in smaller human settlements is a daunting challenge because entrepreneurs in these settlements have limited or no role models to copy or learn from. However, even in the largest selected counties the requirement for entrepreneurs who can conceive and start enterprises of types not yet present remains part of the growth challenge. This emphasises the importance of being able to produce new things (Hausmann & Klinger, 2006) that are tradable (Moretti, 2017). On the other hand, reduction of enterprise numbers in the selected counties should result in losses of some enterprise types.
A question can be raised about the reason why there is such a distinct relationship between enterprise richness and the number of enterprises of South African towns as well as in the selected group of U.S. counties. There is no clear answer to this question yet. This situation is, however, not unknown in geographic economics. Krugman (1996) remarked about the agglomeration phenomenon of people in cities (referred to as Zipf's law): "At this point we are in the frustrating position of having a striking empirical regularity with no good theory to account for it." This is presently also the case for the relationships between enterprise numbers and enterprise richness.
However, West (2017) remarked that scaling analysis, which quantifies how characteristics agglomerate in response to changes in the size of systems, has been a powerful tool across a broad spectrum of science and technology research. Its analytical punch stems from the observation that this response is often a simple, regular, and systematic function over a wide range of sizes, indicating that there are underlying generic constraints at work on the system as it develops (Lobo, Bettencourt, Strumsky, & West, 2013). The number of scaling phenomena when compared to increases/decreases in enterprise richness (productive knowledge) in the selected counties (Table 3) suggests that the generic constraints observed for cities (Lobo et al., 2013) might also play a role in U.S. counties.
A lack of productive knowledge is associated with poverty in countries (Hausmann et al., 2017). Increased poverty should, therefore, equate with lower levels of productive knowledge, as has been demonstrated for a group of South African towns (Toerien, 2018a). The present exploration indicates that increased poverty in the selected U.S. counties is associated with fewer enterprises (Figure 6), which corresponds with a lower enterprise richness (less productive knowledge) (Figure 1). An enterprise dependency index was used as an indicator of the wealth/poverty states of the selected counties (e.g. Figure 5). Enterprise numbers scale sub-linearly relative to increases/decreases in the numbers of officially poor persons in the selected counties (Figure 6). This observation justifies the use of the enterprise dependency index in this comparison because the index relates the number of enterprises in a county to the number of people in the county. Counties with proportionally more poor people "carry" fewer enterprises. It was also shown that increased poverty (shown by higher enterprise dependency indices) alters the power law relationship between enterprise richness and county populations (Figure 5). The exponents and the proportionality constants of the power laws increase/decrease in relation to changes in the wealth/poverty states of the counties. The wealth/poverty states of the selected counties, therefore, have a strong moderating influence on their entrepreneurial success (Table 4), and this might be generally true for all U.S. counties.
Why do poor people agglomerate in larger cities? Large cities are not full of poor people because cities make people poor, but because cities attract poor people with the prospect of improving their lot in life (Glaeser, 2011). They come for jobs (Glaeser, 2011;West, 2017). The migration of rural people to urban settlements, wherever it occurs, is linked to people seeking better opportunities. This also seems to be the case in the group of counties.
Conclusions
The socioeconomic and entrepreneurial characteristics of 68 small U.S. counties exhibit extensive orderliness. This is similar to findings about South African towns. A power law with a sub-linear coefficient quantified the relationship between enterprise richness and total enterprise numbers of these counties. Larger counties have fewer enterprise types relative to total enterprise numbers compared to smaller counties. This power law is not geographically sensitive, which suggests that it might apply widely to U.S. counties.
The inverse of the former power law enables linking of the concept of productive knowledge (Hausmann et al., 2017) with entrepreneurial activities in the 68 counties. In the selected counties, some socioeconomic characteristics scale disproportionally in comparison to increases/decreases of productive knowledge (measured as enterprise richness). These socioeconomic characteristics are: population numbers, enterprise numbers, number of employees associated with the enterprises, total county employment, the number of higher-educated people and the number of officially poor persons. These characteristics scale super-linearly in comparison to increases/ decreases in productive knowledge. Counties with more productive knowledge have proportionally more of these characteristics than counties with lesser productive knowledge, and vice versa. The level of productive knowledge in U.S. counties might be an important driver of their entrepreneurial dynamics.
Two entrepreneurial types were identified in the group of counties: new and existing entrepreneurs. In smaller counties there is a greater need for new entrepreneurs that start enterprises of types not yet present than for existing entrepreneurs that start more enterprises of types already present. The need for new entrepreneurs in smaller human settlements is a daunting challenge due to a lack of role models. There is a greater need for existing entrepreneurs than for new entrepreneurs in larger counties. However, new entrepreneurs are still needed in larger counties for demographic and entrepreneurial growth.
Enterprise numbers scale sub-linearly in relation to increases/decreases in the number of officially poor persons in the selected counties. Poorer counties with a specific population number are less able to carry the same number of enterprises than richer counties with the same population number. Poverty in the group of counties moderates the relationship between productive knowledge and county population numbers and probably is an important factor in their entrepreneurial wellbeing.
This exploration has opened a window on regularities in the demographic-socioeconomic-entrepreneurial interlinkages of smaller human settlements in a developed country. The demographic-socioeconomic-entrepreneurial nexus of U.S. counties clearly deserves further research attention. The County Business Pattern (CBP) datasets are also available for longitudinal studies of the relationships between productive knowledge and important socioeconomic characteristics of U.S. counties.
Applying Effective Software for Controlling Computers Remotely
. Today, many educational institutes, labs, homes and organisations have their own private Local Area Network (LAN), so they need to monitor employees, students in exams, or machines working on the network. This paper aims to design and implement an efficient application that can remotely track between 1 and 10 users (operating systems) working on a LAN, with access to PC, Mac and Linux machines. In other words, it can monitor screens remotely, send a warning message, and shut down a system. It operates at low cost, is suitable for daily use in services of all scales, and reports network performance simultaneously. The implementation was achieved using the JDK front end, the JAVA language, Windows 7, 8 and 10, Visio, Wireshark, Task Manager, Chart Expert, Microsoft Excel, ten computers, the Ethernet and wireless segments of a LAN network, etc. This software has been applied in the real world in student examinations at three schools in Iraq. As a result, the software is able to allocate tasks to clients and restrict them from misusing resources, as well as automate the lab and monitor attendance with performance analysis.
Introduction
The Web Real-Time Communication (WebRTC) framework offers the ability of direct, interactive communication of audio, video and data between two web browsers (peer-to-peer) [1]. This technology requires no registration, downloading, installation, external software (plugins), license, etc. [2]. It covers the aspects of media transport and specifies how the Real-Time Transport Protocol (RTP) is employed in the WebRTC context [3]. WebRTC is supported by Google, Opera, Mozilla, Apple, and Microsoft, and its specifications have been published by the World Wide Web Consortium (W3C) and the Internet Engineering Task Force (IETF) [4].
Screen monitoring, or remote desktop service, as emphasised in [5][6][7][8], is a technique that monitors and creates screenshots of the screens of computers working on the network; this monitoring can capture and record whatever is happening on desktop screens, such as file transfers and printed documents, remotely. The data can be images, videos, diagrams, games, files, etc., so amendment, enhancement, and usage can occur on the data, and the data can also serve as feedback or as a report. In other words, an administrator can access all files and applications and obtain the network performance of other computers precisely [9]. Nowadays, the mechanism of screen monitoring plays an essential role [10] and can support many applications at schools, companies and so on [11]. In addition, [12][13][14] demonstrated that screen monitoring can help invigilators to monitor students in examinations and to improve students' knowledge through feedback on assignment weaknesses, and also to develop writing and academic skills. In [15], it is emphasised that screen monitoring is required to improve studying and to establish a useful learning environment. Moreover, screen monitoring is necessary to protect the network from critical threats, with the network administrator obtaining screenshots. Similarly, screen monitoring becomes important when detection is related to it, especially when investigative data are automatically joined with a screen capture [16]. Furthermore, screen monitoring is required directly in real time to capture the complete screen when wanted [17]. Additionally, screen monitors are still widely utilised, and they are powerful for improving systems that record all dynamic activities on screen [17][18][19]. Besides, [20] illustrated that as long as a remote screen can share system display contents, it is possible to gain device control at the same time. Equally important, screen monitoring achieves an accuracy of more than 70% and reveals valuable information about the communication between clients and computers [21]. Last but not least, parents use screen monitoring to understand how adolescents are using media, which helps them select a better way to manage its implications [22][23].
The primary objectives of this research are to design and test an application that can be used to invigilate students, children and employees who are using between 1 and 10 laptops (operating systems) on the network remotely, given that this software is able to monitor computer screens and obtain different files, such as images, videos, applications, etc.
Accordingly, this application enables control of the clients' usage by sending a warning message and shutting down the system. Moreover, it works at low cost, allows daily use in services of all applications, and reports network performance concurrently.
The application has been deployed in real-world conditions at schools in Iraq. Notably, it was designed and built without commercial servers, external software or hardware; nor is it an extension of a TV structure.
The organisation of this paper is as follows: Section 2 discusses the problem definition; Section 3 presents the methodology, implementation and analysis; Section 4 concludes and outlines future work.
Problem Definition
The studies [24][25][26] illustrated that using sensors for monitoring is expensive to measure or analyse, requires a specific type of exercise, requires selecting a suitable platform for the sensor, and is error-prone in personal use. Moreover, [27][28][29][30] explained that screen monitoring via the cloud leads to downtime, security and privacy issues, and limited control and flexibility; cloud solutions are also still under development and require a certain amount of confidence.
Screen monitoring through the network, however, has the advantage of saving content from a screen, such as images, videos and files. This content can include: (a) data streamed online from platforms or applications, (b) audio and video calls in various forms, such as Skype and Google Hangouts, and (c) live multimedia such as audio or video social media shared among people, whose use has grown because of Covid-19. Therefore, [31][32][33][34] emphasised that many applications require screen-monitoring and network-based system-control capabilities that deliver the following: cost-effectiveness, better security, increased productivity, flexibility, real-time notifications, early detection of problems, monitoring of the different activities of users, support for organisations in monitoring the behaviour of their employees, communication with users to provide instructions and share tasks, and information about network performance. This stage covers the following points: • Data performance.
• Data process and analysis.
• Present the data in an appropriate form.
• Economic feasibility (e.g., if someone attempts to harm the system).
Methodology
This project uses the Java language with the JDK as the front-end platform and C++ (object-oriented concepts) for networking, generating portable executable code for download. Several computers linked through the Ethernet and wireless segments of a LAN were used; the network setup is shown in Figure (1).
Implementation
A network lab was prepared to test the application among different computers via the Ethernet and wireless segments of a LAN. In this implementation, the new application was tested for monitoring other computers remotely by capturing screens, sending warning messages to the clients, and shutting systems down. Various classes were created in Java and C++ to test mechanisation, self-running demos and other applications. The primary classes and preparation steps are as follows (a minimal capture-and-send sketch is given after this list): a) Robot class: to produce natural system and input actions for the exam, such as mechanisation. b) AWTException class: to handle cases where the configuration does not permit low-level input control. c) SecurityException class: to build and adjust the main browser. d) Establishing the socket connection using mnemonics. e) The administrator specifies the valid IP address and DNS server.
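The paper's implementation uses Java's Robot class for screen capture; as an illustrative analogue of the same client-side capture-and-send loop, the sketch below uses Python with the Pillow library (an assumption for illustration, not the authors' code). The server address and capture interval are placeholders.

```python
# Illustrative client-side capture loop: grab the desktop periodically
# and stream each frame, length-prefixed, to the monitoring server.
import io
import socket
import struct
import time

from PIL import ImageGrab  # screenshot support (Windows/macOS)

SERVER = ("192.168.1.10", 5000)  # hypothetical administrator address
INTERVAL = 2.0                   # seconds between screenshots

def send_frame(sock: socket.socket, payload: bytes) -> None:
    # Length-prefix each frame so the server can split the byte stream.
    sock.sendall(struct.pack("!I", len(payload)) + payload)

def main() -> None:
    with socket.create_connection(SERVER) as sock:
        while True:
            shot = ImageGrab.grab()       # capture the whole desktop
            buf = io.BytesIO()
            shot.save(buf, format="PNG")  # compress before sending
            send_frame(sock, buf.getvalue())
            time.sleep(INTERVAL)

if __name__ == "__main__":
    main()
```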
The application provides network-management services and supports the social network in monitoring and maintaining different kinds of files, together with the network's performance. Following the ISO network-management model, it covers: (a) performance management, (b) configuration management, (c) accounting management, (d) fault management, and (e) security management. For performance management, it measures different aspects of the network and shows them in the preferred form. As a result, it can display several parameters: a) bytes sent and received per second; b) packets sent and received per second; c) output and bandwidth consumption.
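The per-second counters listed above can be obtained by sampling the operating system's cumulative interface counters twice and differencing. A minimal sketch (using the psutil package as an assumption, rather than the paper's C++ code) is:

```python
# Sample cumulative NIC counters twice and difference them to obtain
# per-second rates, matching the parameters listed above.
import time
import psutil

def rates(interval: float = 1.0) -> dict:
    before = psutil.net_io_counters()
    time.sleep(interval)
    after = psutil.net_io_counters()
    return {
        "bytes_sent/s": (after.bytes_sent - before.bytes_sent) / interval,
        "bytes_recv/s": (after.bytes_recv - before.bytes_recv) / interval,
        "packets_sent/s": (after.packets_sent - before.packets_sent) / interval,
        "packets_recv/s": (after.packets_recv - before.packets_recv) / interval,
    }

if __name__ == "__main__":
    print(rates())
```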
Use Case Diagrams were utilised in this project to assess the behaviour of the application; each programmed class can thus be shown as a group of use cases and the relationships between them. The administrator logs into the network-monitoring system and then begins monitoring screens over the Ethernet or wireless LAN. The administrator takes the appropriate action depending on the status: preventing the client from running the system, or communicating with the client, as shown in Figure (2).
In contrast, the server presents the captured images and then acts accordingly. Similarly, the administrator can cooperate with the client and help if needed. The application captures images on the client side and delivers them to the server, as shown in Figures (3 & 4); accordingly, the server sends a message back to the client side. Figure (5) shows the software pseudocode.
Analysis
The tests proved that the application enables screen monitoring among different systems over a LAN. Using a new approach, the implementation can control up to ten operating systems, send warning messages and shut systems down. Moreover, it can save images, videos and documents and record network performance for the different clients, and it keeps running even if clients join or leave at any time. In particular, it was created without any commercial server, external software or supporting hardware, and it is not an extension of a TV system. The software has been developed to enable cooperative work and can be regarded as an accomplished tool for monitoring screens and controlling computer devices remotely. On the other hand, it does not support more than ten operating systems. The QoE assessment confirms that the software works well enough to justify additional tests in the future.
Quality of Experience (QoE)
Actual clients were involved in this test through questionnaires, giving their views on the perceived user experience, as displayed in Table (1). The experiment confirmed that the software is productive and can be used in different applications among up to ten clients over the Ethernet and wireless segments of a LAN.
Conclusion and Future Work
In this paper, a productive application for monitoring and controlling operating systems by capturing screens, sending warning messages to users and shutting systems down has been designed and tested. The application can serve many organisations, such as educational institutes, labs and households: everyone working on the network can be located, monitored and restricted from misusing resources, wasting time, annoying others or harming devices. The network administrator can control the various activities of clients, restrict clients from performing illegal tasks, and give them instructions or other duties. Moreover, the application is low-cost, enables daily use in services of all scales, and reports network performance throughout. Future work includes scalability beyond ten operating systems and operation over the Internet. | 2,550.8 | 2022-12-21T00:00:00.000 | [
"Computer Science"
] |
Transition between protein-like and polymer-like dynamic behavior: Internal friction in unfolded apomyoglobin depends on denaturing conditions
Equilibrium dynamics of different folding intermediates and denatured states is strongly connected to the exploration of the conformational space on the nanosecond time scale and might have implications for understanding protein folding. For the first time, the same protein system, apomyoglobin, has been investigated using neutron spin-echo spectroscopy in different states — native-like, partially folded (molten globule) and completely unfolded — following two different unfolding paths: acid or guanidinium chloride (GdmCl). While the internal dynamics of the native-like state can be understood using normal mode analysis based on high-resolution structural information on myoglobin, for the unfolded and even the molten globule states, models from polymer science are employed. The Zimm model accurately describes the slowly relaxing, expanded GdmCl-denatured state, ignoring the individuality of the different amino acid side chains. The dynamics of the acid-unfolded and molten globule states are similar in the framework of the Zimm model with internal friction, where the chains still interact and hinder each other: the first Zimm relaxation time is as large as the internal friction time. Transient formation of secondary structure elements in the acid-unfolded state and the presence of α-helices in the molten globule state lead to internal friction to a similar extent.
Results
Structural properties. According to the circular dichroism (CD) measurements, apoMb in its native-like form at pD 6 contains 49% secondary structure elements (see Fig. 2). Under acid denaturation, apoMb has 25% secondary structure elements at pD 4 and 4.3% at pD 2. At 3 M GdmCl, the protein molecule retains about 6% secondary structure elements. The secondary structure contents of apoMb at pD 2 and of the GdmCl-denatured state are both very small and comparable to each other within the error of the applied technique.
Small-angle neutron scattering (SANS) was used to gain information on these folding states. The labile protons of the protein were exchanged with deuterium and all solvents used were deuterated to decrease the incoherent neutron scattering. The pD value was determined as 0.4 plus the pH-meter read-out. Data from dilute (3-5 mg/mL) protein solutions showing no signs of intermolecular interactions or aggregates were used to characterize the form of the protein molecule in each folding/denaturation state (see Fig. 3). All measurements in this study were performed at 10 °C to minimize the risk of aggregation. The scattering curve of apoMb at pD 6 is well described by a generalized Guinier model 29 , of the form I(q) ∝ q^(−α) exp(−q²Rg²/(3−α)), where Rg is the radius of gyration and α a parameter describing the three-dimensional form of the protein; the model is valid in the range qRg < 1.3. With α = 0, the protein is a spheroid with Rg = 1.5 nm. With a hydrodynamic radius R_H of approximately 2 nm determined by dynamic light scattering (DLS), this state has a high degree of compactness: R_H/Rg = 1.32 (see 30 and the references therein). The theoretical limit of a solid sphere is R_H/Rg = (5/3)^0.5 ≈ 1.29, while the average for a random-coil polymer (or a polymer in θ-solvent) gives a ratio of 0.65 31 .
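As an illustration of how such a fit can be carried out, the sketch below fits the generalized Guinier form quoted above to a low-q SANS curve with scipy; the data arrays and starting values are placeholders, not the measured curves.

```python
# Least-squares fit of the generalized Guinier model
# I(q) = G * q**(-alpha) * exp(-q^2 Rg^2 / (3 - alpha))
# to the low-q part of a SANS curve (valid for q*Rg < 1.3).
import numpy as np
from scipy.optimize import curve_fit

def gen_guinier(q, G, Rg, alpha):
    return G * q**(-alpha) * np.exp(-q**2 * Rg**2 / (3.0 - alpha))

# Placeholder data: replace with measured q (1/nm) and I(q).
q = np.linspace(0.1, 0.8, 40)
I = gen_guinier(q, 1.0, 1.5, 0.0) * np.random.normal(1.0, 0.02, q.size)

popt, pcov = curve_fit(gen_guinier, q, I, p0=(1.0, 1.5, 0.0))
G_fit, Rg_fit, alpha_fit = popt
print(f"Rg = {Rg_fit:.2f} nm, alpha = {alpha_fit:.2f}")
```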
The measured SANS curves of the partially and completely unfolded proteins are well described by the polymer-with-excluded-volume model 32,33 (see Table 1 and Fig. 3). This analytical model has been used to describe various polymer systems 34 . Whereas a Gaussian polymer chain has orientationally uncorrelated links between the beads and the length of these segments follows a Gaussian probability distribution, this model also considers excluded-volume effects, reflected by the excluded-volume parameter ν. This parameter is related to the Porod exponent m through ν = 1/m and is also known as the critical exponent. The statistical segment length of the polymer chain, also known as the Kuhn length l, and the degree of polymerization n can be extracted from the standard excluded-volume relation Rg² = l²n^(2ν)/[(2ν+1)(2ν+2)]. The compactness of a polymer is also related to the excluded-volume parameter ν 30 .

[Figure 3 caption: Normalized Kratky-Porod representation of the SANS data with the models used to obtain the form factor. The apoMb at pD 6 structure shows the characteristic peak of a globular protein; pD 4 is a typical molten globule (reaching a maximum at qRg = 0.2 Å⁻¹ · 25.4 Å = 5); pD 2 and GdmCl data are specific for unfolded states.]

Applying the polymer-with-excluded-volume model to the present denatured protein structures is appropriate, given that the theory behind
it is validated in practice by several techniques. In a simple picture, denaturation by acid occurs because the amino acid side chains become protonated and repel each other, destabilizing the secondary structure elements. ApoMb at pD 4, the molten globule state with 30% content of secondary structure elements, is more compact (R_H/Rg = 1.18 and ν = 0.46) than apoMb at pD 2 (4% secondary structure content, R_H/Rg = 0.67, ν = 0.55). Denaturation by GdmCl occurs through a slightly different process: some of the amino acid units become protonated (the pH-meter read-out for the buffer of the GdmCl-denatured protein is 4.5), and the guanidinium chloride molecules interact with the protein chain, leading to an expansion of the unfolded molecule [35][36][37]. This is reflected in our data: larger Rg and R_H values, and also less compactness compared to the other unfolded states (ν = 0.64). Similar to the urea-unfolded state of apoMb investigated by Eliezer et al. 38 , this could be a mixture of monomer and dimer. In other words, apoMb at pD 6 is a typical globular protein, whereas the partially and completely acid-unfolded forms, apoMb at pD 4 and pD 2, are more compact than the denaturant-unfolded state. The GdmCl-denatured state has a larger size and lacks compactness. ApoMb at pD 2 has the typical R_H/Rg value for a polymer in good solvent 39,40 and the typical ν-value for a chain with excluded-volume interactions 41 .
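To make the roles of the Kuhn length l, bead number n and critical exponent ν concrete, a short numerical sketch of the excluded-volume relation quoted above follows; the numerical values are illustrative placeholders, not the fitted parameters of Table 1.

```python
# Radius of gyration of an excluded-volume chain:
# Rg^2 = l^2 * n^(2*nu) / ((2*nu + 1) * (2*nu + 2))
import math

def rg_excluded_volume(l_kuhn: float, n: int, nu: float) -> float:
    return math.sqrt(l_kuhn**2 * n**(2 * nu) / ((2 * nu + 1) * (2 * nu + 2)))

# Illustrative values: Kuhn length in nm, bead count, critical exponent.
# Larger nu (less compact chain) gives a larger Rg for the same l and n.
for nu in (0.46, 0.55, 0.64):   # compact -> expanded, as in the text
    print(nu, round(rg_excluded_volume(0.9, 150, nu), 2))
```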
The structure factor, which is concentration-dependent, is obtained by dividing the scattering curve of the concentrated solution by the form factor (see SI); the data are smoothed and averaged to remove noise. Whereas the form factor describes the shape of a molecule in solution, the structure factor characterizes the interactions between these molecules, and it is needed to correct the dynamics data reported later. Intermolecular interactions in apoMb are well described by a mean spherical approximation (MSA) structure factor 42,43 , originally developed for macro-ion solutions. The model was implemented using the python package Jscatter 44 , an adaptation of the original Fortran code 45 . ApoMb at pD 6 is close to its isoelectric point (estimated by ExPASy 46 to lie at 7.20), so there is only a slight difference between the numbers of positively and negatively charged residues. The charge on the surface is not distributed uniformly and the monomers attract each other (the structure factor is larger than 1 in the low-q regime). The curve has its minimum at q = 0.07 Å⁻¹, suggesting that monomers start to interact with each other at a typical distance of 2π/q = 90 Å. A radius of gyration of 17 Å (close to the one obtained by fitting the form factor) and a screening length of 30 Å are obtained by fitting the MSA model. In comparison to apoMb at pD 6, the structure factors of the apoMb solutions denatured by acid and GdmCl show that the monomers repel each other; this repulsion can be attributed to the charge state (fit results are available in the SI).
Dynamical properties. Neutron spin-echo spectroscopy (NSE) measures temporal and spatial correlations between different scattering particles and from internal motions within the particles, resulting in the normalized intermediate scattering function (ISF) S(q, t)/S(q, 0). The ISF can be investigated for each q-value, either through its initial slope or as a stretched exponential (Kohlrausch-Williams-Watts); alternatively, the data can be modelled simultaneously for all q-values according to polymer models.
Investigation of the spectra's initial slope. From the initial slope, the effective diffusion coefficient D₁ is obtained via S(q, t)/S(q, 0) = A exp(−D₁t − D₂t²) (see SI). According to the theories of de Gennes 40 and Doi 22 , the overlap concentration c* = M/(N_A · 4πRg³/3) is the border between the dilute and semidilute regimes of a polymer solution. ApoMb has a molecular weight of M = 16951 g/mol (the molecular weight of myoglobin minus the weight of the heme group), and for Rg = 2 nm the calculated overlap concentration is c* = 840 g/L. At 30 mg/mL, the solution is significantly below the overlap concentration and can thus be treated as dilute. With an assumed Rg of 3 nm, the overlap concentration would be c* = 249 g/L, still one order of magnitude larger than the maximum protein concentration used in the experiments presented here. However, this dilution classification is derived for polymer systems and does not account for surface charge or forces between the protein molecules; empirically, it was shown that intermolecular interactions and solvent-mediated interactions have to be considered as well 47 . Intermolecular interactions are represented by the structure factor. Solvent-mediated interactions are represented by the hydrodynamic function H(c, q), which can be approximated as a q-independent constant, given that its value in the low-q regime is close to its value in the high-q regime. At low q-values, H(c, q→0) = D_c S(q→0)/D₀, where D₀ is the diffusion constant extrapolated to infinite dilution, D_c is the diffusion coefficient at concentration c measured by DLS, and S(q = 0.026 nm⁻¹) is the value of the structure factor at the DLS-specific q-value. At large q-values, the hydrodynamic function H(c, q_L) can be approximated as the ratio of the measured viscosities of the dilute and concentrated protein solutions, η_{c=0}/η_conc. For these apoMb solutions, the values of H(c, q→0) and H(c, q_L) are close to each other (see Fig. 3) and we assume that the hydrodynamic function is constant in the q-range of interest.
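A minimal numerical sketch of these two estimates of the hydrodynamic function follows; the values of D_c, D₀, S(q→0) and the viscosities are placeholders (the measured values are in Table 2), and the viscosity-ratio form is the reading of the text stated above.

```python
# Two approximations to the hydrodynamic function H(c, q):
# low q:  H = D_c * S(q->0) / D_0        (DLS + structure factor)
# high q: H = eta_dilute / eta_conc      (viscosity ratio, as in the text)
def h_low_q(D_c: float, S_q0: float, D_0: float) -> float:
    return D_c * S_q0 / D_0

def h_high_q(eta_dilute: float, eta_conc: float) -> float:
    return eta_dilute / eta_conc

# Placeholder numbers; units cancel in each ratio, and the two
# estimates should come out close to each other, as stated above.
print(round(h_low_q(D_c=4.6, S_q0=1.0, D_0=5.0), 2))       # ~0.92
print(round(h_high_q(eta_dilute=1.67, eta_conc=1.80), 2))  # ~0.93
```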
Thereby, the effective diffusion coefficients D_eff for the protein monomers are obtained (Fig. 4). They comprise information on translational diffusion, rotational diffusion and internal dynamics of the single molecule; to a good approximation, these motions can be decoupled 18 . For apoMb denatured by acid (at pD 2 and pD 4) and by GdmCl, D_eff depends linearly on q, which is specific for the Zimm regime of local chain relaxations 48 , whereas for apoMb at pD 6 D_eff depends non-linearly on q (see Fig. 4). The dynamics of the mostly folded protein apoMb at pD 6 thus deviates from that of the more denatured protein solutions and is discussed in the following paragraphs. First, translational and rotational diffusion can be determined in the rigid-body approximation, directly from PDB structures, using HYDROPRO 49 . ApoMb at pD 6 resembles the native structure of myoglobin. Given that no PDB structures of the heme-free form are available, and motivated by the work of Stadler et al. 50 showing that myoglobin and apoMb at pD 6 have similar characteristics in solution, the crystal structure of myoglobin (PDB ID: 2v1k) was used for the calculation. For T = 283.15 K, η = 1.67 mPa·s, φ = 0.720 cm³/g (solute partial specific volume) and ρ = 1 g/cm³ (solution density), the 9×9 diffusion matrix D is obtained, comprising the translational and rotational diffusion matrices, whose traces yield the translational diffusion coefficient of 5.96 Å²/ns and the rotational diffusion coefficient of 9.83 μs⁻¹.
The q-dependence of the coupled rotational and translational diffusion is obtained from the coordinates of the amino acids in the protein, →r, their individual neutron scattering lengths b, the form factor F(q), and the diffusion matrix obtained above, using the formula derived by Ortega et al. 47 . The brackets represent the ensemble average over the remaining variables. While the integration over position space for the single particle is 1, the orientation average can be replaced by an average over q-space. The exchange occurring between the protons at the protein surface and the solvent is also considered. The calculated D₀(q) values are shown in Fig. 5 together with the experimentally derived D_eff(q) values. As can be seen in Fig. 5, the difference ΔD_eff(q) = D_eff(q) − D₀(q) between the measured NSE data points and the calculated translational-rotational contribution accounts for approximately 20% of the total dynamics and can be due to internal α-helix movements or other internal dynamic processes. We performed a normal mode analysis using the MMTK package [51][52][53] and determined the effective diffusion coefficient specific to the first non-trivial mode, mode number 7, as a function of the scattering vector and the temperature T. We observe that the diffusion coefficient of this first non-trivial normal mode of the PDB structure 2v1k, describing the movement of the α-helices which allows access to the heme group, has a similar q-dependence, see inset of Fig. 5.
Investigation of the spectra using stretched exponential functions. Another common practice in NSE data evaluation is modelling with a stretched exponential (Kohlrausch-Williams-Watts) function, characteristic of relaxation processes: S(q, t)/S(q, 0) = A exp[−(t/τ)^β]. The stretching exponent β for apoMb at pD 6 is on average 0.9 over all q-dependent data sets, a value close to 1, so the protein is seen rather as a point-like object whose translational diffusion dominates, the internal dynamics being small in comparison (about 20%). In contrast, the pD 2 data call for polymer models. In the Zimm model, τ_p is the relaxation time characteristic of normal mode p, commonly written τ_p = [η R_E³/(√(3π) k_B T)] · p^(−3ν), with η the solvent viscosity, ν the critical exponent, k_B Boltzmann's constant, T the temperature and R_E the end-to-end distance of the polymer chain. In the exponent of the first term of equation 3 one finds the hydrodynamic function H(c, q) mentioned earlier divided by the structure factor S(c, q). In the Zimm and ZIF models, the normal modes all have the same amplitude: A(p) = 1. Internal friction reflects the intrinsic resistance of a polymer to changes in its conformation and arises from dihedral-angle rotational barriers, hydrogen bonding or intrachain collisions. As opposed to the Zimm model, the ZIF model incorporates the internal friction of the polymer chain as a resistive spring installed in parallel to the entropic spring connecting the beads. Solving the Langevin equation yields a mode-independent relaxation time τ_intern, which is added to each Zimm mode: τ_p,ZIF = τ_p + τ_intern. In this way, the higher-frequency normal modes of the Zimm model are damped in the ZIF model.
The NSE spectra can be simulated based on the equation defining I(q, t). Using the information on translational diffusion from DLS, on the viscosity (from direct measurements), on the hydrodynamic function (see Table 2), and on the critical exponent ν and Rg obtained from the SANS data (see Table 1), the simulation can be performed. In Fig. 6a,b, the dotted lines are simulated NSE spectra of apoMb at pD 4 and pD 2 using the Zimm model, under the assumption that the polymer consists of 20 beads. The simulation reproduces the spectra well, but the large q-values and the longer Fourier times are not described properly by the Zimm model. Without any prior knowledge of the internal friction time, the ZIF model was fitted simultaneously for all q for each sample, with only D and τ_intern as free parameters. The fit results are presented in Table 3. The values obtained for the centre-of-mass diffusion coefficients D are comparable within error bars with the ones obtained via DLS for both pD 2 and pD 4. Although apoMb at pD 4 is a molten globule and has a significantly higher content of secondary structure elements, its dynamics can still be understood similarly to that of the totally unfolded state: the whole structure needs a similar time to relax (τ_Zimm) and both protein states experience a similar internal friction (τ_intern). However, for apoMb at pD 4, the ZIF model deviates significantly from the experimental NSE spectra at longer Fourier times, especially for the ISF at q = 0.07 Å⁻¹, which is reflected in the larger χ² value. This could be because the model does not account for any residual secondary structure content. An interpretation of the experimental NSE data might be achieved by coarse-grained computer simulations, which are beyond the scope of the present manuscript; we refer to future studies to clarify that aspect.
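A minimal numerical sketch of such a simulation follows: the standard mode-sum expression for the ISF of a chain with excluded volume under Zimm dynamics, with the ZIF substitution τ_p → τ_p + τ_intern. Bead number, segment length and time constants are placeholders, not the fitted values of Table 3.

```python
# Sketch of the intermediate scattering function of a Zimm chain with
# internal friction (ZIF): each mode relaxation time is shifted by a
# mode-independent tau_i, which damps the high-frequency Zimm modes.
import numpy as np

def isf_zif(q, t, N=20, b=1.8, nu=0.55, tau1=30.0, tau_i=10.0, D=0.5):
    """S(q,t)/S(q,0) for a chain of N beads. q in 1/nm, t in ns,
    b = effective segment length (nm), tau1 = first Zimm relaxation time,
    tau_i = internal friction time, D = centre-of-mass diffusion (nm^2/ns).
    All numerical values are illustrative placeholders."""
    n = np.arange(N)
    dn = np.abs(n[:, None] - n[None, :])       # |n - m| bead separations
    p = np.arange(1, N)                        # internal mode numbers
    tau_p = tau1 / p**(3 * nu) + tau_i         # ZIF: add tau_intern
    cosn = np.cos(np.pi * np.outer(p, n + 0.5) / N)   # mode patterns
    RE2 = b**2 * N**(2 * nu)                   # end-to-end distance squared

    def msd(tt):
        # mean-square bead-pair displacement: static part + mode relaxation
        relax = (1.0 - np.exp(-tt / tau_p)) / p**(2 * nu + 1)
        modes = np.einsum('p,pn,pm->nm', relax, cosn, cosn)
        return b**2 * dn**(2 * nu) + (4 * RE2 / np.pi**2) * modes

    S0 = np.exp(-q**2 / 6 * msd(0.0)).sum()
    St = np.exp(-q**2 / 6 * msd(t)).sum()
    return np.exp(-q**2 * D * t) * St / S0     # decoupled c.o.m. diffusion

for t in (0.0, 10.0, 50.0, 100.0):
    print(t, round(isf_zif(q=0.7, t=t), 4))
```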
In contrast to these two, apoMb denatured by GdmCl has almost double the Zimm relaxation time and no internal friction time is observed (see Fig. 6c): the dynamics of the GdmCl-denatured apoMb is described very well by the Zimm model alone. This supports the mechanism of denaturation described by Heyda et al. and Huerta et al. 35,37 : GdmCl increases the solubility of hydrophobic residues and the local energetic barriers are lowered. The trends observed in intrinsically disordered proteins (IDPs) denatured at different concentrations of GdmCl 36 are confirmed. Several studies based on the Zimm model support the idea that the centre-of-mass diffusion coefficient of the protein scales with the chain length, or with the bead number, as a power of N 6,26,36 . Those studies were performed on proteins whose chain length was varied, which is not the case in the present work: apoMb always consists of the same number of amino acids, independently of its folding state. The bead number should not be confused with the chain length; the choice of bead number when the protein is considered a polymer is arbitrary, but even when we increase the bead number, with fewer than 7 amino acids per bead, the validity of the ZIF model is unchanged (see Supplementary Information).

Table 2. Values of the hydrodynamic functions in the low (H_c,q0) and large (H_c,qL) q-regimes determined by different methods. The SANS, DLS and viscosity measurements were performed at 283 K. The viscosity η_conc of the apoMb pD 2 solution with the highest concentration could not be determined accurately.

Further polymer models (Zimm with damping of the mode amplitudes 21 , compacted Zimm with internal friction 26 , and the Zimm analogues of the Rouse model with non-local interactions and with anharmonic potentials 54 ) have been considered to interpret these data, but none leads to better results. Some studies claiming that internal friction does not play a role were performed on smaller proteins 28 or with significantly varied solvent viscosity 55 ; in those cases, the friction with the solvent, and not the internal friction, may be the dominant dissipation mechanism. For the data presented in this work for apoMb at pD 2 and pD 4, the ZIF model is the best fit.
Discussion
By comparing two different denaturation routes, we gained insight into the denaturant's effect on the structure and dynamics of the model system apomyoglobin. Both routes started from the native-like form, apoMb at pD 6. The protein in this folding state resembles many structural features of the holoprotein and its dynamics shows internal collective modes, which are no longer seen in any of the other, unfolded states investigated (see Fig. 1). Its internal dynamics, accounting for less than 20% of the total dynamics of the protein, is of biological relevance: the α-helices perform this movement to incorporate the heme group during protein synthesis 56 .
In the case of acid denaturation, apoMb at pD 4 has a high content of secondary structure elements, observed by CD spectroscopy and SANS. Nevertheless, its dynamics can still be described by the same polymer model (ZIF) as the dynamics of the acid-unfolded state, apoMb at pD 2 (see Fig. 1B,C,G): similar Zimm relaxation and internal friction times are obtained, although the data are not modelled quite as well. The GdmCl-unfolded apoMb does not show internal friction, suggesting that this denaturant screens the protein chains, reducing the interaction between them (see Fig. 1D,F). The observations of Zheng et al. 56 , Borgia et al. 36 and Samanta et al. 26 on IDPs are confirmed for apoMb as well: internal friction grows with a considerable increase in protein compactness.
Previous QENS experiments showed that molecular dynamics on the faster ps-to-ns time scale are similar between apoMb at pD 2 and apoMb at pD 4, but differ significantly from apoMb at pD 6 57 . That dynamic picture is corroborated here by NSE for the slower collective dynamics as well. The first folding step in apoMb does not have a significant effect on collective internal dynamics; a fundamental change in the physical nature of the dynamics of Mb due to protein folding occurs only with the following folding step into the native state, where the heme pocket is formed. By comparing the internal friction in apoMb at pD 4 with that of an IDP with a similar content of secondary structure 20 , we see that internal friction dominates the Zimm mode spectrum even more strongly for the IDP than for apoMb at pD 4. This shows that apoMb at pD 4 and apoMb at pD 2 still need to be seen as comparatively soft protein conformations; the formation of the G and H helices in the pD 4 state is therefore not decisive for the motions seen by NSE. Motions in apoMb at pD 2 and pD 4 are rather influenced by the transient formation of secondary structure content. If more information on intermediate states undergoing constant folding/refolding transitions were available, the dynamics of the denatured proteins observed by NSE could be modelled as an equilibrium, an average distribution of the intermediate-state dynamics. Recent single-molecule techniques allow the observation of such intermediate states 58 , whilst theories such as Zimm-Bragg 59 hold that chemical unfolding is a multi-state process over a mixture of conformations. Relating the NSE observations directly to an in-depth understanding of the chemical unfolding of apoMb would require such experiments and theories.
Although proteins are known to adopt their unique structures based on the individuality of their amino acid side chains, coarse-grained polymer models can characterize the nanosecond dynamics. In the case of GdmCl-denatured apomyoglobin, the protein loses all its protein-like features and behaves like a Zimm polymer, mostly because the binding of GdmCl to the side chains removes their individuality, leading to a more polymer-like behaviour. ApoMb at pD 2, which could still exhibit hydrogen bonding and some transient elements of secondary structure, likewise loses its protein-like features, but behaves like a non-ideal polymer with internal friction.
Methods
Sample preparation. ApoMb was prepared from horse-heart myoglobin (Sigma-Aldrich) following the butanone method to extract the heme group (as performed in 50 , adapting the method described in 60 ), and then refolded by dialysis against 20 mM NaH₂PO₄/Na₂HPO₄ (Sigma Life Science, >99.5% and Sigma-Aldrich, >99%) pH 7 buffer and distilled water. Before storage in the freezer, the apoMb solution was lyophilized. To replace the exchangeable protons with deuterium, the freeze-dried apoMb powder was dissolved in heavy water (99.9% ²H, Sigma-Aldrich), incubated for 1 day, and lyophilized again. The obtained powder was stored at −20 °C. To obtain the molten globule state of apoMb, the powder was dissolved in ²H₂O and centrifuged to remove large aggregates. To the supernatant solution of concentration 2 mg/mL and pH 6, 0.1 M ²HCl (Sigma-Aldrich) was added until the pH read-out was 3.6 (monitored with a Metrohm pH meter), corresponding to a pD value of 4. The buffer-exchanged protein solution was concentrated by centrifugation (Heraeus Instruments) to the final concentrations (Vivaspin 3,000 MWCO concentration units, Sartorius, Göttingen, Germany).

Circular dichroism (CD). Circular dichroism was measured on a Jasco J1100 spectropolarimeter (JASCO, Tokyo, Japan) in the range 180-250 nm, with a pitch of 1 nm, a scanning speed of 100 nm/min, and 3 accumulations per measurement. The samples were measured at a concentration of 300 μM in 0.01 cm thick quartz cuvettes under constant nitrogen flow at 10 °C. According to the BeStSel Single Spectrum Analysis 61 , the α-helix composition of apoMb varied as follows: pD 6, 49%; pD 4, 25%; pD 2, 4.3%; GdmCl, 6%. For the GdmCl-denatured solution, only the range 200-240 nm was considered for data analysis because GdmCl absorbs strongly between 180 and 200 nm.
Small-angle neutron scattering (SANS).
The scattering vector q is defined as q = (4π/λ) sin(θ/2), with the incident neutron wavelength λ and the scattering angle θ. The form and structure factors were investigated for apoMb at pD 2 and pD 6 on the instrument KWS-2 at the MLZ in Garching 63 . The in situ DLS option at this instrument confirmed that the samples did not show considerable aggregation during the neutron measurements. Protein concentrations were 3, 6, 15 and 30 mg/mL. The corresponding buffers, empty cells and references were measured as well. Hellma quartz cells of 1 mm and 2 mm were used for high and low protein concentrations, respectively. The neutron wavelength was set to 4.5 Å, and measurements were performed at 3 detector positions: 2, 8 and 20 m. All measurements were performed at 10 °C. For the low-concentration solutions, the background-corrected intensities were linearly extrapolated to infinite dilution to extract the form factor per unit mass. The measured SANS curves of apoMb at pD 2, pD 4 and in GdmCl are well described by a polymer-with-excluded-volume model, while apoMb at pD 6 is globular, so the corresponding SANS curve is described by a Guinier model. The structure factor was obtained by dividing the SANS curve of the highest concentration by the one at the lowest.
Neutron spin-echo spectroscopy (NSE). Solutions of apoMb at pD 2 were investigated at SNS-NSE, the neutron spin-echo spectrometer at Oak Ridge National Laboratory, Oak Ridge, Tennessee, USA 64 . It is a time-of-flight instrument: the Larmor precession of the neutron spin in a preparation zone with magnetic field before the sample encodes the individual velocities of the incoming neutrons into a precession angle. The other samples were measured at J-NSE "Phoenix" at the MLZ, Garching 65 . The instrument covers a q-range of 0.03-1.0 Å⁻¹, reaching Fourier times of 90-250 ns using 8 and 12 Å neutrons. In the experiments presented here, a q-range of 0.03-0.15 Å⁻¹ was explored using 12 and 8 Å neutrons. A graphite powder sample was measured as a scattering reference, followed by the protein sample and the buffer solution. All measurements were performed at 10 °C. NSE data evaluation was performed with the data-reduction software DrSPINE 66,67 .

Viscosimetry. The viscosity of all protein solutions and buffers was measured at 10 °C using a Lovis 2000 M/ME rolling-ball viscometer. Each measurement was performed 3 times and the average value is reported.
UV/VIS spectroscopy. The sample absorption was measured in a cell with a path length of 0.1 mm (Hellma, Germany) using UV/VIS spectroscopy (Cary 300); for very low concentrations (<1 mg/mL), a 5 mm thick Hellma quartz cell was used. The concentrations were determined from the absorption values using the molar extinction coefficient ε(280 nm) = 13980 M⁻¹ cm⁻¹ calculated from the amino acid sequence (ExPASy 46 ). | 6,540.8 | 2020-01-31T00:00:00.000 | [
"Chemistry",
"Biology"
] |
Improvement of multimodal images classification based on DSMT using visual saliency model fusion with SVM
Multimodal images carry information that can be complementary or redundant, and they overcome the various problems attached to the unimodal classification task when this information is modelled and combined. Although such classification gives acceptable results, it still does not reach the level of the human visual perception model, which classifies observed scenes easily thanks to the powerful mechanisms of the human brain.
Introduction
Nowadays, multimodal imaging has gained increasing importance in computer vision applications, and significant efforts have been put into developing methods for different tasks, such as registration [1][2][3][4], data fusion [5], representation learning [6], classification [7] and so on. In the classification task, unimodal images present various problems, such as noisy data, incomplete information and distortion, which often lead to misclassification. These limitations are overcome by using multimodal images, acquired from multiple sensors and taken of the same object or scene. Each image or modality provides different information that can sometimes be redundant, because the same area/scene is captured by a different sensor, and complementary for another modality, owing to the diversity of sensor technologies and their physical interaction mechanisms. Using this set of images together offers a real-world benefit: a given problem can be resolved with several sources of available information, and the fusion of these data yields a better-quality classification.
However, these data are crippled by imperfections such as conflict, ignorance and uncertainty, which must be handled and taken into account by a dedicated formalism insofar as they represent an aspect of reality. Several formalisms exist for this purpose, such as probability theory [8], fuzzy theory [9], the belief function formalism [10] and the Dezert-Smarandache formalism [11][12]. In this work, we benefit from the latter, the most recent theory, which was introduced to deal with highly conflicting and uncertain data thanks to its rich modelization and the combination operators (PCR5 and PCR6) that it integrates.
In classification, belief function theory is widely exploited [13][14][15][16], whereas DSmT, or so-called plausible and paradoxical reasoning, has shown its efficiency in many applications. It was applied to multi-source remote sensing [17] for supervised classification by integrating contextual information obtained from an ICM classifier with constraints and temporal information in a hybrid DSmT process with an adaptive decision rule; the same authors also proposed a new decision rule based on the DSmP transformation for change detection [18]. In [19], the authors present an effective use of DSmT for multiclass classification by combining two SVM OAA (One-Against-All) implementations using the PCR6 combination rule. A new method based on fusing attribute-type information obtained from a Ground Moving Target Indicator and an imagery sensor using DSmT, for tracking and classification, was presented in [20]. Multidate fusion was proposed in [21][22] for the short-term prediction of winter land cover. DSmT was also used for medical case retrieval in [23], where the authors fused heterogeneous features from several sensors for inclusion in CBR systems.
According to our study of the state of the art, the existing works disregard the power of perceptual attention to classify a scene well thanks to the high capacities of the human brain. We benefit from this ability in our approach by integrating the visual perception model, using DSmT, with spectral and dense SURF features obtained from SVM classification, for a significant improvement in classification.
The paper is organized as follows. After a brief presentation of the mathematical background of the DSmT formalism in Section 2, we present the overall system of the proposed method in Section 3. Data and experiments are then given in Section 4 to evaluate the performance of our approach on real image datasets. A conclusion is given in Section 5.
Mathematical Background of DSmT
Dezert-Smarandache theory was proposed jointly by Jean Dezert and Florentin Smarandache [24] as an attempt to overcome the limitations of belief functions by handling highly uncertain and conflicting information. The theory can be described as follows. We denote by Θ = {θ₁, θ₂, …, θ_N} the discernment space of the N-class classification problem, and by D^Θ the hyper-power set [25], i.e. the set of subsets of Θ closed under union and intersection, so that if A, B ∈ D^Θ, then A ∪ B ∈ D^Θ and A ∩ B ∈ D^Θ. Each source contributes its belief mass to D^Θ through the generalized basic belief assignment (gbba) step, satisfying the properties

m(∅) = 0 and Σ_{A ∈ D^Θ} m(A) = 1, (1)

where ∅ is the null set. The size of the hyper-power set presents a real limit in DSmT when N > 6 (N being the number of classes) in the free model [26], which corresponds to the full hyper-power set without any constraints; in contrast, the hybrid model [26] allows integrating constraints, which can be exclusivity or refinement constraints, thereby reducing the size of D^Θ.
The generalized masses obtained from the different sources are then combined and a new mass distribution is assigned to the elements of D^Θ. The combination step is the kernel of the fusion process, and each formalism has proposed several combination operators. In the DSmT formalism, all combination operators are described in detail in [27]; the most used are Smets' rule, the (normalized) Dempster-Shafer operator, Yager's operator, Zhang's operator, the DSmH rule, the Dubois-Prade rule, the PCR5 operator for two sources, and the PCR6 operator for more than two sources. To deal with the large number of sources used in this work and the highly uncertain and conflicting information they provide, we benefit from the performance of the PCR6 combination rule in handling such problems.
The generalized belief functions — credibility, noted Bel(·) or Cr(·), plausibility, noted Pl(·), and the DSmP transformation — are derived from the basic mass function and defined for A ∈ D^Θ with values in [0, 1] as

Bel(A) = Σ_{B ∈ D^Θ, B ⊆ A} m(B), Pl(A) = Σ_{B ∈ D^Θ, B ∩ A ≠ ∅} m(B),

and, following [27],

DSmP_ε(A) = Σ_{B ∈ D^Θ} m(B) · [Σ_{Z ⊆ A∩B, C(Z)=1} m(Z) + ε · C(A ∩ B)] / [Σ_{Z ⊆ B, C(Z)=1} m(Z) + ε · C(B)].

D^Θ can be the full hyper-power set or a reduced one with constraints, depending on the model used (free or hybrid). ε is an adjustment parameter, and C(A ∩ B) and C(B) are the cardinalities of A ∩ B and B, respectively.
The last step in the DSmT process is making the final decision, which presents a real challenge in many applications. Since this work aims at improving classification, a decision must be taken about each pixel's membership of a simple (singleton) class; in this case the decision can be based either on the maximum of the generalized basic belief mass (gbba) or on the generalized belief functions already computed, as follows:
Maximum of credibility Cr(·), widely used in many applications [28], is considered a pessimistic decision.
Maximum of plausibility Pl(·) is considered an optimistic decision.
Maximum of DSmP is a compromise between the above decisions, based on a probabilistic transformation P(·) lying in the interval [Cr(·), Pl(·)] (a numerical sketch of these three rules is given after this list).
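For concreteness, the sketch below evaluates the three decision quantities on a toy two-class gbba over {θ1, θ2, θ1∪θ2}; the mass values are arbitrary illustrations, and the DSmP implementation follows the transformation written above.

```python
# Toy decision step over Theta = {t1, t2} with masses on the singletons
# and on the ignorance t1∪t2 (exclusive classes, so t1∩t2 = ∅).
SINGLETONS = ("t1", "t2")
m = {frozenset(["t1"]): 0.5, frozenset(["t2"]): 0.2,
     frozenset(["t1", "t2"]): 0.3}           # sums to 1, m(∅) = 0

def bel(A):   # credibility: masses of all subsets of A
    return sum(v for B, v in m.items() if B <= A)

def pl(A):    # plausibility: masses of all propositions intersecting A
    return sum(v for B, v in m.items() if B & A)

def dsmp(A, eps=0.001):  # DSmP transformation, evaluated on singleton A
    total = 0.0
    for B, v in m.items():
        num = sum(m.get(frozenset([s]), 0.0) for s in A & B) + eps * len(A & B)
        den = sum(m.get(frozenset([s]), 0.0) for s in B) + eps * len(B)
        total += v * num / den
    return total

for s in SINGLETONS:
    A = frozenset([s])
    print(s, round(bel(A), 3), round(pl(A), 3), round(dsmp(A), 3))
# Decide with max Bel (pessimistic), max Pl (optimistic) or max DSmP.
```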
Pre-processing
Generally, the pre-processing that precedes classification aims to eliminate the imperfections that taint information through a set of operations such as filtering, gradient operations, etc. In classification based on theories of the uncertain, however, these imperfections are preserved, modelled and combined to help make the decision.
Registration is the pre-processing step usually used in a fusion process. It aims at establishing correspondence between two or more images of a scene obtained from one or more sensors, potentially at different spatial positions and scales, by finding optimal spatial and radiometric transformations between the images.
In the case of multimodal images, registration is an issue because of the significant differences between the images [29][30]. An original methodology was proposed in a previous work to address the particular issue of registration with multimodal imaging inputs, in which we exploit the scale- and rotation-invariant SURF descriptors for the identification and description of the interest points and introduce relevance filtering based on both SURF distance and orientation features in the matching step [1].
Feature Extraction
Feature extraction is a pivotal step in the classification process. It aims to underline the relevant features that correspond to the various classes; it is worth stating that an appropriate choice of extracted features improves the performance of the classification step. Spectral, spatial and perceptual features are extracted in this work.
Spectral Information
Spectral information is widely used in classification methods. In this work, we extract the spectral values of each pixel as a vector of attributes and then convert them to the CIELAB colour space for a better correlation with human colour processing.
Dense SURF Description
Speeded-Up Robust Features (SURF), proposed by Herbert Bay [31], is a spatial descriptor originally consisting of two phases: detection and description of keypoints. In a previous work [32] we proposed to skip the detection phase and to apply the description phase to every pixel in the image. This is done by first assigning to each pixel the dominant orientation, calculated by combining the Haar wavelet responses within a circular neighbourhood around the pixel, and then creating 4×4 subregions around the pixel. In each subregion, pixel-wise Haar wavelet responses are computed and summed up to form a 64-element descriptor.
Saliency Information
Based on a comparative analysis of saliency detection performed on our multimodal data [33], we extract the saliency features using the method proposed by Rahtu et al. [34]. This method uses local feature contrasts in luminance and colour, mapped to a feature space that is divided into disjoint bins. A saliency measure is calculated by applying a sliding window divided into an inner window K and a border B, under the hypothesis H₁ that points in K are salient and points in B are not. The measure is the conditional probability S(x) = P(H₁ | F(x)) of a point x with feature value F(x), computed through the Bayes formula from the feature distributions estimated in K and B, with a prior 0 < P(H₁) < 1. A regularized saliency measure is then introduced to make it more robust to noise.
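A simplified sketch of this windowed Bayesian contrast measure follows (histogram-based, greyscale only; the window sizes, bin count and prior are illustrative choices, not the parameters of [34], and the regularization step is omitted):

```python
# Simplified Rahtu-style saliency: for each window position, compare the
# feature histogram of the inner window K against the border B and score
# the centre pixel with P(salient | feature) via Bayes' rule.
import numpy as np

def window_saliency(img, outer=31, inner=15, bins=16, prior=0.5):
    img = np.asarray(img, dtype=float)
    img = (img - img.min()) / (img.max() - img.min() + 1e-9)
    pad = outer // 2
    sal = np.zeros_like(img)
    for y in range(pad, img.shape[0] - pad):
        for x in range(pad, img.shape[1] - pad):
            patch = img[y - pad:y + pad + 1, x - pad:x + pad + 1]
            k = outer // 2 - inner // 2
            K = patch[k:k + inner, k:k + inner]       # inner window
            mask = np.ones_like(patch, dtype=bool)
            mask[k:k + inner, k:k + inner] = False
            B = patch[mask]                           # border pixels
            hK, _ = np.histogram(K, bins=bins, range=(0, 1), density=True)
            hB, _ = np.histogram(B, bins=bins, range=(0, 1), density=True)
            b = min(int(img[y, x] * bins), bins - 1)  # centre pixel's bin
            pF1, pF0 = hK[b] + 1e-9, hB[b] + 1e-9
            sal[y, x] = pF1 * prior / (pF1 * prior + pF0 * (1 - prior))
    return sal
```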
The motivation for integrating saliency information into the fusion process is the fact that visual perception usually succeeds easily in classifying any object or scene.
SVM Pre-Classification
The support vector machine is a supervised classification method introduced by Vapnik [35][36], widely used in classification applications thanks to its ability to deal with high-dimensional data. It is basically designed for binary classes, for which it finds an optimal hyperplane separating two linearly separable classes. For non-linearly separable classes, the feature space is mapped to some higher-dimensional feature space in which the classes are separable, using a kernel function that must fulfil Mercer's conditions. The most used kernel is the Radial Basis Function (RBF), with which the decision function is expressed as

f(x) = sign( Σ_i α_i y_i K(x_i, x) + b ),

where the α_i are Lagrange multipliers, and the associated kernel function is

K(x_i, x_j) = exp(−γ ‖x_i − x_j‖²).

For multiclass problems, two main approaches have been proposed: One-Versus-Rest, in which N binary classifiers are constructed for N-class classification, and One-Versus-One, in which N(N−1)/2 binary classifiers are applied to each pair of classes.
To generate the probabilities for DSmT, we performed a pre-classification [32] based on combining the spectral information (Section 3.2.1) and dense SURF information (Section 3.2.2) using an SVM classifier with an RBF kernel, to handle the non-linear high-dimensional data in our multimodal dataset, and the One-Versus-Rest approach, to deal with the incomplete information provided by the diverse modalities.
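A minimal sketch of this pre-classification, assuming scikit-learn, stacks per-pixel CIELAB values with dense-SURF-like descriptors into one feature matrix; the descriptor extraction itself is stubbed out with random features, since the actual dense SURF code is not given in the paper.

```python
# Per-pixel pre-classification with an RBF SVM in One-Versus-Rest mode,
# returning class-membership probabilities later used as gbba masses.
import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_pixels, n_classes = 2000, 4                 # WO, WR, GO, GR

lab = rng.random((n_pixels, 3))               # stand-in CIELAB values
dense_surf = rng.random((n_pixels, 64))       # stand-in 64-element descriptors
X = np.hstack([lab, dense_surf])              # joint spectral+spatial features
y = rng.integers(0, n_classes, n_pixels)      # stand-in training labels

clf = make_pipeline(
    StandardScaler(),
    OneVsRestClassifier(SVC(kernel="rbf", gamma="scale", probability=True)),
)
clf.fit(X, y)
proba = clf.predict_proba(X[:5])              # P(class | pixel), rows sum to 1
print(proba.round(3))
```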
Mass function estimation
The mass estimation step is crucial in the fusion process, because this is where the imperfections — uncertainty, imprecision, paradox — are introduced. The masses are most commonly generated from the probabilities of a pre-classification. The SVM classification of the images generates matrices of probabilities P(C_k | x) of pixels belonging to the singleton classes of the frame of discernment Θ = {θ₁, θ₂, …, θ_N}; the same holds for the saliency map generated using the method proposed in [34]. Each source (modality or saliency map), noted S_j (j = 1, …, M), gives the probability of belonging to one or two classes and to their complementary classes, the latter carrying the mass of partial ignorance. Based on [19], the gbba of each source assigns to the emphasized class (and, where applicable, its complement) a mass proportional to the corresponding probability, divided by a normalization term that ensures Σ_A m(A) = 1; a sketch of such an assignment is given below.
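The following sketch builds such a gbba from SVM posteriors for one source. The specific assignment — posterior mass on the emphasized class, the remainder on its complement — is an illustrative choice consistent with the description above, not necessarily the paper's exact equation 10.

```python
# Build a generalized basic belief assignment (gbba) for one source that
# emphasizes a single class: its SVM posterior goes to the singleton, the
# remainder to the complementary (partial-ignorance) proposition.
def gbba_from_posterior(posteriors: dict, emphasized: str) -> dict:
    p = posteriors[emphasized]
    complement = frozenset(posteriors) - {emphasized}
    m = {frozenset([emphasized]): p, complement: 1.0 - p}
    total = sum(m.values())                 # normalization term
    return {A: v / total for A, v in m.items()}

# e.g. the UVR modality emphasizes the "WO" class for one pixel:
post = {"WO": 0.82, "WR": 0.08, "GO": 0.06, "GR": 0.04}
print(gbba_from_posterior(post, "WO"))
```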
Combination of masses and decision
The estimated masses must be combined with appropriate rules that handle the conflict generated by the different sources. In this work we use the PCR6 rule [37] in the combination step, because it performed better than all the combination rules cited in the previous section when tested on our datasets. Considering M independent sources, PCR6 first computes the conjunctive consensus

m₁₂…M(X) = Σ_{A₁ ∩ … ∩ A_M = X} Π_j m_j(A_j),

and then redistributes each partial conflicting mass proportionally back to the propositions involved in that conflict. In the two-source case, PCR6 coincides with PCR5:

m_PCR5(X) = m₁₂(X) + Σ_{Y ∈ D^Θ, X ∩ Y = ∅} [ m₁(X)² m₂(Y) / (m₁(X) + m₂(Y)) + m₂(X)² m₁(Y) / (m₂(X) + m₁(Y)) ].
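A compact two-source sketch of this rule — conjunctive consensus plus proportional conflict redistribution, applied to toy masses over two exclusive classes and their ignorance — is:

```python
# PCR5/PCR6 fusion of two sources over D_Theta = {{t1}, {t2}, {t1,t2}}
# with the exclusivity constraint t1 ∩ t2 = ∅.
def pcr5(m1: dict, m2: dict) -> dict:
    out = {A: 0.0 for A in set(m1) | set(m2)}
    for A, vA in m1.items():
        for B, vB in m2.items():
            inter = A & B
            if inter:                        # conjunctive consensus
                out[inter] = out.get(inter, 0.0) + vA * vB
            else:                            # redistribute the conflict
                out[A] += vA**2 * vB / (vA + vB)
                out[B] += vB**2 * vA / (vA + vB)
    return out

t1, t2, ign = frozenset(["t1"]), frozenset(["t2"]), frozenset(["t1", "t2"])
m1 = {t1: 0.6, t2: 0.1, ign: 0.3}            # e.g. an imaging modality
m2 = {t1: 0.2, t2: 0.5, ign: 0.3}            # e.g. the saliency source
fused = pcr5(m1, m2)
print({tuple(sorted(A)): round(v, 3) for A, v in fused.items()})
print(round(sum(fused.values()), 3))         # masses still sum to 1
```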
Once the combination step is achieved, we compute the generalized belief functions and use the probabilistic DSmP transformation, which converts the combined masses into a probability measure using Eq. (6), to make the final decision.
Data
Large sets of multimodal images acquired on the wall paintings of the Germolles palace are used to demonstrate the proposed method. The palace was offered by Duke of Burgundy Philip the Bold to his wife Margaret of Flanders in 1380, and it is the only castle of the Dukes of Burgundy to remain so well preserved; its wall paintings were restored between 1989 and 1991, but no conservation reports of the applied restoration exist. To distinguish original from restored areas, the conservator of Germolles used multimodal images, which have the advantage of being a fast and relatively inexpensive solution for the examination of large areas of wall paintings. This technical photography consists of recording a set of images with a commercial digital photographic camera modified by removing the thermal filter normally positioned in front of the CCD. In this way it is possible to record images of reflected visible light (VIS), reflected infrared light (IRR), reflected ultraviolet light (UVR) and UV-fluorescence (UVF). This set of images provides information about the optical behaviour of the surface under different types of light, and therefore distinguishes the original portions of the wall paintings from recent repainting.
For illustration, we select an area of the south wall of Margaret's dressing room, shown in Figure 1. This area presents a large white P (for Philip) painted over the green that covers the walls, and is captured in four modalities — VIS, UVF, UVR and IRR — each measuring 3744×5616 pixels. The IRR modality shows very well the parts painted over the non-original green surface. The UV-induced fluorescence image shows a relatively strong fluorescence corresponding to the remains of an old/original paint layer over the white. The UVR image helps to identify the repainting over the original white of the letter P.
Experiments
The adopted methodology can be divided into four steps, as illustrated in Figure 2, starting with the pre-processing, in which each image is aligned with the VIS image used as the reference.
Figure 2 A representative illustration of the workflow
In the second step, four topics are identified: white original (WO), white repainted (WR), green original (GO) and green repainted (GR). Spectral and dense-SURF information is then extracted and used jointly as the input of the SVM classifier with the RBF kernel. In parallel, saliency information is extracted using the method proposed in [34]; the resulting maps are shown in Figure 3.
Figure 3 Saliency maps
The third step is the pre-classification, in which the SVM classifier is applied to the images to recover the probability matrices of pixels belonging to the classes. Each modality highlights the presence of one or two classes. The UV-induced fluorescence modality shows a relatively strong fluorescence corresponding to the remains of an old painted layer of the white (WO), reaching an accuracy of 92% with SVM; the UVR modality also emphasizes the WO class, with a classification accuracy of 98%. Infrared light shows very well the original and repainted parts of the green surface and reaches an accuracy of 94% [32]. The resulting maps are presented in Figure 4.
Figure 4 Multimodal SVM Classification
The VIS modality reaches an accuracy of 98% when classifying the two classes GO and GR, whereas this precision drops when classifying all four classes because of the increased conflict. The classified image is presented in Figure 5.
The last step is the fusion process, which starts by defining the frame of discernment Θ = {WO, WR, GO, GR}. Given the information obtained from the SVM classification and the saliency maps, some constraints can be taken into account to match the real situation and reduce the hyper-power set D^Θ: for example, exclusive classes have empty intersections, such as WO ∩ GR = ∅.
Then the mass functions associated with the emphasized class and its complement in each modality are computed using Equation 10. The PCR6 combination rule is used to combine the calculated masses according to Equation 11, and finally the decision is taken using the maximum of DSmP.
The final classified map provided by DSmT alone is given in Figure 6, and the final classified map obtained using DSmT with saliency is shown in Figure 7. The results improve with the integration of the perceptual model into the DSmT process: visual analysis of the classification maps shows that the result of the proposed method agrees much better with the ground truth over the WR and WO classes and appears closer to reality than the result obtained using DSmT alone for the same classes, while the map obtained from a unimodal image is degraded in terms of smoothness and connectivity between classes. To evaluate the performance of the methods and compare the results, we use the overall accuracy (OA), the percentage of correctly classified pixels, and the mean error rate (MER), the percentage of misclassified pixels. Table 1 summarizes the results obtained with the different methods: the proposed method produces a better overall accuracy of 95.39%, compared with 91.46% for the DSmT classification and 86.43% for the SVM classification; in terms of error rate, the proposed method gives the lowest MER of 4.61%, compared with 8.53% and 12.60% for the DSmT and SVM classifications, respectively.
In conclusion, the use of DSmT with the PCR6 combination rule provides better results thanks to its effectiveness in correctly managing the conflicting information provided by the different sources, and shows a significant classification improvement compared with the unimodal SVM classification. The integration of saliency information into the fusion process brings a real benefit, owing to the powerful mechanism of the human brain in classification tasks.
Conclusion
In this paper, we have proposed a new method for multimodal image classification. As a first step, we extract spatial (dense-SURF), spectral and saliency information. The extracted spatial and spectral information is combined and passed to the SVM classifier for the pre-classification step; the SVM classification results obtained from each modality are then fused using DSmT, and the joint use of DSmT and SVM provides better performance than the unimodal SVM classification. In the second step, the extracted saliency information is modelled and combined with the SVM classification results in a DSmT process based on the PCR6 combination rule and the DSmP decision rule; the proposed method yields the best performance in terms of accuracy and error rate compared with the DSmT-SVM classification and the unimodal SVM classification.
Acknowledgement
The authors thank the Château de Germolles managers for providing data and expertise and the COST Action TD1201 "Colour and Space in Cultural Heritage (COSCH)" (www.cosch.info) for supporting this case study. The authors also thank the PHC Toubkal/16/31: 34676YA program for financial support.

[2] S. Y. and J. Z. Jing Huang, "Multimodal image matching using self similarity," Applied Imagery Pattern | 4,636.8 | 2019-01-09T00:00:00.000 | [
"Computer Science"
] |
Genomic surveillance framework and global population structure for Klebsiella pneumoniae
K. pneumoniae is a leading cause of antimicrobial-resistant (AMR) healthcare-associated infections, neonatal sepsis and community-acquired liver abscess, and is associated with chronic intestinal diseases. Its diversity and complex population structure pose challenges for analysis and interpretation of K. pneumoniae genome data. Here we introduce Kleborate, a tool for analysing genomes of K. pneumoniae and its associated species complex, which consolidates interrogation of key features of proven clinical importance. Kleborate provides a framework to support genomic surveillance and epidemiology in research, clinical and public health settings. To demonstrate its utility we apply Kleborate to analyse publicly available Klebsiella genomes, including clinical isolates from a pan-European study of carbapenemase-producing Klebsiella, highlighting global trends in AMR and virulence as examples of what could be achieved by applying this genomic framework within more systematic genomic surveillance efforts. We also demonstrate the application of Kleborate to detect and type K. pneumoniae from gut metagenomes.
Virulence and AMR scores
Genomes are scored according to the clinical risk associated with the AMR and virulence loci that are detected (see Methods). Here we take advantage of the structured distribution of AMR and virulence determinants within the K. pneumoniae population 14 to reduce the genotyping data to simple numerical summary scores that reflect the accumulation of loci contributing to clinically relevant AMR or hypervirulence: virulence scores range from 0 to 5, depending on the presence of key loci associated with increasing risk (yersiniabactin < colibactin < aerobactin); resistance scores range from 0 to 3, based on detection of genotypes warranting escalation of antimicrobial therapy (ESBL < carbapenemase < carbapenemase plus colistin resistance, see Table 1). These simple numerical scores facilitate downstream analyses including trend detection. For example, analysis of a non-redundant subset of 9,705 publicly available K. pneumoniae genomes (see below, Table S2) showed increasing AMR and virulence scores over time (barplots in Figure 1A-B). The virulence and resistance scores were correlated not only with the prevalence of individual components that contribute to the scores, but also with other components that are co-distributed in the population (lines in Figure 1A-B). For example, the […] AMR, virulence or convergence of both traits, such as specific K. pneumoniae lineages or specimen types (see below).

Rapid genotyping of clinical isolates from a large-scale surveillance study

We applied Kleborate to analyse all K. pneumoniae clinical isolate genomes deposited in RefSeq by the EuSCAPE surveillance study (927 carbapenem-non-susceptible, 697 carbapenem-susceptible; see Table S2) 33. Kleborate rapidly and accurately reproduced key findings from the original study, which were originally derived from multi-step analyses comprising five independent tools and four independent databases (each from a different public repository, one with additional manual curation): (i) 70.2% of carbapenem-non-susceptible genomes (n=651/927) carried carbapenemases, mainly KPC-3, OXA-48, KPC-2 and NDM-1; (ii) these were dominated by a few major clones, ST11, ST15, ST45, ST101, ST258, and ST512; (iii) individual countries were associated with specific carbapenemase/clone combinations (see Figure 2A) […] location and year of isolation (see Methods). However, we cannot fully correct for the sampling biases inherent in the public genome data, and even after subsampling, the 30 most common STs accounted for 63.4% of genomes (n≥50 genomes each, n=6,151 total; see Figure S4). Figure 5 shows the distribution of AMR and virulence scores amongst non-redundant genomes from these 30 common K. pneumoniae STs (n>50 per ST), each of which displays high rates of AMR and/or virulence.

AMR determinants

SHV β-lactamases conferring intrinsic resistance to the penicillins were detected in 85.9% of the 9,705 non-redundant K. pneumoniae genomes (ESBL forms of SHV were detected in 10.0%). Acquired AMR was widespread (77.1% of genomes had at least one gene or mutation conferring acquired AMR detected) and 71.6% of genomes were predicted to be MDR (acquired resistance to ≥3 drug classes 48), a much higher rate than is reported in most geographical regions 3,49-51, reflecting the bias within public genome collections.
The majority of genomes had a non-zero resistance score, reflecting the presence of ESBL and/or carbapenemase genes: 22.3%, 37.1% and 5.9% of genomes had resistance scores of 1, 2 and 3 respectively. Mean resistance scores increased through time (Figure 1B). This trend could be an artefact of sampling bias towards the selective sequencing of AMR isolates; however, it is consistent with the increasing AMR rates reported in surveillance studies globally 52-54.

Comparatively higher prevalence of acquired AMR genes was observed in some STs (Figure S4) […] (Figure S5A-B), highlighting their mobile nature. The notable exception was CTX-M-65, which appeared to be largely clone specific, detected in only 9 STs, with ST11 accounting for 96.7% of these genomes.
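A minimal sketch of the ordinal scoring logic described above; the orderings (yersiniabactin < colibactin < aerobactin; ESBL < carbapenemase < carbapenemase plus colistin resistance) are taken from the text, but the exact genotype-to-score mapping below is a hypothetical reading, not Kleborate's actual code or API:

```python
def virulence_score(ybt=False, clb=False, iuc=False):
    """0-5 scale, accumulating risk: yersiniabactin < colibactin < aerobactin."""
    score = 0
    if ybt:
        score = 1
    if clb:
        score = 2
    if iuc:
        score = 3 + int(ybt) + int(clb)  # 3, 4 or 5 as higher-risk loci accumulate
    return score

def resistance_score(esbl=False, carbapenemase=False, colistin=False):
    """0-3 scale: ESBL < carbapenemase < carbapenemase plus colistin resistance."""
    if carbapenemase:
        return 3 if colistin else 2
    return 1 if esbl else 0

print(virulence_score(ybt=True, iuc=True))                  # 4
print(resistance_score(carbapenemase=True, colistin=True))  # 3
```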
Colistin resistance determinants were detected in 8.7% of the non-redundant K. pneumoniae genomes. These were mostly nonsense mutations in MgrB or PmrB (83.5%) rather than acquisition of an mcr gene (15.8%, plus an additional 6 genomes with both acquired mcr and truncated MgrB/PmrB). The rate of detection ranged from 0-25.2% across the 30 most common STs, and was highest amongst ST512, ST437, ST147, ST16 and ST258 (Figure S5C), each of which is also associated with high rates of carbapenem resistance. Porin mutations were detected in 37.9% of genomes (34.0% OmpK35, 20.2% OmpK36, 16.3% both). High prevalence of specific porin defects has been reported previously in some clones 41,42, and this was reflected in our analysis of ST258 and its derivative ST512. We observed OmpK35 truncations in 99.9% of non-redundant ST258 genomes (with or without truncations or substitutions in OmpK36), and truncations in OmpK35 and/or OmpK36 in all ST512 (99.4% with OmpK35 truncations, 94.4% with the OmpK36GD mutation, see Figure S5D).
Virulence loci
The prevalence of acquired siderophore and colibactin loci amongst non-redundant K. pneumoniae genomes was 44.4% ybt, 7.5% clb, 11.2% iuc and 7.0% iro. The loci were found across diverse K. pneumoniae STs (391 STs with ybt, 56 with clb, 144 with iuc, 108 with iro) but were rarely detected in other Klebsiella species (with the exception of ybt among the K. oxytoca species complex, see Figure 4), indicating frequent mobilisation within K. pneumoniae but not between species (Table S6, Figure S6). Mean virulence scores increased through time (Figure 1A). Figure 5B […] (Table S6). Figure S7A shows the frequency of iuc lineages in K. pneumoniae STs with ≥20 non-redundant genomes and at least one genome harbouring iuc. There were four STs for which >60% of genomes harboured iuc, and only a single iuc lineage was detected in each (iuc1 in ST23, ST65, ST86; iuc2A in ST82), consistent with long-term persistence of a specific virulence plasmid in these well-known hypervirulent clones. In contrast, iuc was less frequent among other STs, several of which were associated with multiple iuc lineages (e.g. ST231, ST25, ST35), consistent with more recent and/or transient virulence plasmid acquisitions (mostly iuc1, followed by iuc3 and iuc5). […] (Table S8).
The most common virulence plasmid, KpVP-1 (iuc1 ± iro1), accounted for 54% of virulence plasmid acquisition events (n=45 acquisitions), while iuc3 plasmids, the E. coli derived iuc5 (± iro5) and iuc/iro unknown (i.e. novel or divergent iuc/iro loci) accounted for 21%, 11% and 14%, respectively (Figure 7). AMR acquisitions by hypervirulent clones involved the ESBL/carbapenemase genes that are most common in the general K. pneumoniae population: KPC-2 (26%), OXA-232 (17%) and CTX-M-15 (18%). The majority of convergence events (87%) were associated with just a small number of genomes (i.e. n≤3); however, five events were associated with >20 genomes in the complete dataset, which may indicate clonal expansion and dissemination of the corresponding convergent strains locally and/or between countries. One such event corresponded to the ST11-KPC + KpVP-1 deletion variant strain that was originally reported in 2017 20 and has since been recognized as widely distributed in China 20-24. The complete public genome set (i.e. counting redundant genomes) included 148 genomes corresponding to this specific ST11 convergence event, mostly from China but also from France (n=2). Notably though, this was only one of 50 convergence events that we detected in China, including 8 involving acquisition of iuc1 or iuc5 by ST11 (see Table S8, and interactive tree at […]). Overall, convergent genomes were detected originating from most geographical regions for which genome data was available, but some regions had many more events than others (Figure 7, Table S8). This uneven distribution may stem from a skew in the number of genomes available per region (e.g. due to variation in accessibility or application of genome sequencing). Nevertheless, the numbers of convergent genomes in the eastern, southeastern and southern parts of Asia were noticeably high, driven by the frequency of convergence events detected in China (n=50 events) and Thailand (n=26 events) as well as putative clonal expansions of these strains as discussed above (Figure 7). […]

Another strength of our approach is the rich data output by Kleborate, which facilitates in-depth investigation of population structure, AMR and virulence epidemiology. This allows rapid exploration and understanding of: (i) hypervirulence-associated loci and the molecular drivers of their dissemination (Figures S4 and S7); (ii) molecular mechanisms of complex AMR phenotypes, e.g. carbapenem resistance (Figure 3); (iii) AMR and virulence trends (Figures 1, 5 and 6); (iv) emerging convergent AMR-virulent strains so that they can be targeted for surveillance and infection control (Figure 7); (v) overrepresented STs and genotypes, which may be indicative of transmission clusters that should be targeted for further investigation (as […]); antigen epidemiology, which can inform the design of novel vaccines and therapeutics (Figure 2B-C). Notably, Kleborate can also yield useful genotyping results from metagenomics data (Figure S8), which is gradually being adopted for clinical and surveillance applications relevant to K. pneumoniae. User interpretation of Kleborate's extensive data output can be guided by the accompanying web-based visualization app, Kleborate-Viz. Through this app, many of the analyses and plots presented in this manuscript can be rapidly replicated, and further explored in an interactive manner.
Kleborate is designed to facilitate detection and tracking of clinically relevant AMR and virulence traits from genome data, and analysis of public data not only identified specific clones and genes associated with one or the other of these traits (Figures 5, 6), but also 601 genomes in which the two converge (carrying iuc+ virulence plasmids and ESBL and/or carbapenemase genes; Figure 7). We estimated at least 173 unique AMR-hypervirulence convergence events; the majority were detected within a single isolate (n=119 events), but many others appear to be associated with local outbreaks or larger-scale spread, apparently across multiple countries (Table S8; genome assignments in Table S2). […] [Figure legend fragment] Events are listed in Table S8, with assignment of genomes to events in Table S2. Circles are scaled by the number of total genomes linked to the event and colored to indicate whether convergence is | 2,650.2 | 2020-12-14T00:00:00.000 | [
"Medicine",
"Biology"
] |
The Effect of Gelation on the Apparent Magnetism of ZnFe2O4 Sol-Gel Systems
Experiments have shown that for thixotropic sol-gel systems consisting of ZnFe2O4 nanoparticles without any matrix material, the measured magnetization, or susceptibility, of the gels is greater than that of the sols. The reduced susceptibility of a system with a particle volume fraction φv = 2.0% is lower than that of a system with φv = 1.5%. These results have been interpreted in terms of a magnetization mechanism based on the Brownian rotation of the moments fixed inside the colloidal particles, which would be dramatically affected by the non-magnetic hydrodynamic interaction. For weakly cross-linked gels, the translational degree of freedom is "frozen" while the rotational degree of freedom remains unchanged, so that their hydrodynamic interaction effect is weaker and they are more easily magnetized than the sols, which have both rotational and translational degrees of freedom. The action of gelation in preventing the hydrodynamic interaction from affecting the magnetization process can be referred to as "gelation decoupling". Correspondingly, such behavior of the hydrodynamic interaction in affecting the apparent magnetism can be referred to as a "viscomagnetic effect".
Introduction
Magnetic sols, also known as ferrofluids, magnetic liquids, magnetic colloids, etc., are suspensions of magnetic nanoparticles with a mean diameter of about 10 nm in a non-magnetic carrier liquid. Such systems have been extensively studied since the 1960s due to their novel magnetically-controlled properties (Odenbach, 2009). A remarkable feature of magnetic sols is their ability to change their hydrodynamic (rheological) properties under the action of an external magnetic field, which is referred to as the magnetoviscous effect (Zubarev & Chirikov, 2010). For spin-up of colloidal ferrofluids in a rotating magnetic field, the fluid motion depends on non-equilibrium magnetization (Rosensweig, Popplewell, & Johnston, 1990). In many investigations, the apparent magnetism of magnetic sols has generally been regarded as depending on the magnetic interaction between like particles as dipoles. However, Zhang and co-workers noted an effect of hydrodynamic interaction on alternating-current susceptibility (Zhang, Boyd, & Luo, 1996). Gels sensitive to electric or magnetic fields are of interest as "smart" materials with unique potential applications (Tanaka et al., 1982; Qsada et al., 1992; Barsi et al., 1996; Suto et al., 2009; Leveis et al., 2010). Typically, magneto-sensitive gels consist of magnetic nanoparticles dispersed in an organic or inorganic matrix (Chaput et al., 1993; Li et al., 1999; Bentivegna et al., 1999; Casas et al., 2002; Bohlius et al., 2004; Galicia et al., 2011). As for magnetic sols, the novel magnetic features of such gels stem from the magnetic properties of the nanoparticles. Each particle is a magnetic monodomain; that is to say, it carries a permanent magnetic dipole whose magnitude is fixed, depending on the nature of the constituent material, but whose direction can fluctuate inside the particle. In such magnetic gels, the magnetic phase is captured in the non-magnetic network of the organic/inorganic gel. Such magnetic gels, prepared by suspending magnetic nanoparticles in a non-magnetic matrix, are clearly binary composite systems (Teixeira et al., 2003) rather than conventional gels, and their magnetic properties may be influenced by the matrix material (Bohlius et al., 2004). A few magnetic gels based on iron oxide nanoparticles without any matrix material have been investigated with regard to the dynamics of gelation and have been characterized (Ponton et al., 2002; Liu et al., 2006). Compared with solidified magnetic sols (frozen ferrofluids) (Kötitz et al., 1995; Hrianca, 2002), in a gel the translational motion of the colloidal particles, which is a main feature of magnetic sols, is inhibited to the same extent, but the rotation of the particles themselves is not restricted, since magnetic gels are usually only weakly cross-linked (Jarkova, Pleiner, Müller, & Brand, 2003). Therefore, magnetic gels may exhibit different magnetic behavior compared to fluids (sols) and solids (frozen fluids). The macroscopic behavior of a material depends on its microstructure, and hence studies of the magnetic behavior of magnetic sol-gel systems may not only be valuable for fundamental physical research on complex fluids, but may also lead to potential applications. Nevertheless, reports on investigations of magnetic gels have been far fewer than those on magnetic sols.
A thixotropic fluid has the property of being in a gel state at equilibrium. If it is mechanically sheared or shaken above a given threshold, it becomes a flowing liquid, but the gel regenerates if the sample is left to stand (Ponton et al., 2002). Experiments have shown that colloids based on ZnFe2O4 nanoparticles can self-form a thixotropic sol-gel system (Li et al., 2009). Since it lacks any matrix or additive, such a system's magnetism results only from the colloidal particles, making it particularly suitable for investigating the relation between macroscopic behavior and microstructure. In the work presented herein, thixotropic sol-gel systems based on ZnFe2O4 nanoparticles have been prepared, and the effect of hydrodynamic interactions on the apparent magnetism has been revealed by comparing the magnetization behavior of the sols and the gels.
Experimental
Bulk ZnFe2O4 is an antiferromagnetic material, but its nanoparticles can exhibit superparamagnetic or weakly ferromagnetic properties, since net spins can exist on their surfaces (Schnele & Deetscreek, 1962) or be induced by point defects (Wu, Mao, Ye, Xie, & Zheng, 2010). Spherical ZnFe2O4 nanoparticles were produced from an aqueous mixture of ZnCl2 and FeCl3 by a co-precipitation method. The crystal structure and morphology of the as-prepared particles were characterized by X-ray diffraction analysis (XRD, XR-2) and transmission electron microscopy (TEM, Philips Technai 10), as shown in Figure 1. Statistical analysis indicated that the size of the particles fits a log-normal distribution with a median diameter d_g of 4.22 nm and a standard deviation ln σg of 0.26. The volume-average diameter d_v, obtained from the expression d_v = exp(ln d_g + 1.5 ln²σg) (Granqvist & Buhrman, 1976), was 4.67 nm. The sol-gel systems were synthesized by a method similar to that used to prepare self-formed CoFe2O4 ionic ferrofluids (Li et al., 2007), whereby the ferrofluid is formed through self-ionization of the nanoparticles, letting the released metal ions adsorb on the remaining part of the particles to prevent aggregation by electrostatic repulsion. In this method, dilute aqueous HNO3 solution is used as the carrier liquid to form acid ferrofluids. The concentration of HNO3 (S) required for synthesizing ZnFe2O4 sol-gel systems depends on the volume fraction of the particles φv, i.e.: […] where ρp is the density of ZnFe2O4, Zs is the valence of nitric acid, Mws is the molecular weight of ZnFe2O4, and Q is an experimentally determined parameter. For the formation of thixotropic ZnFe2O4 gels, the Q value is 0.3 when φv is 1.5-2.0%, and the pH is about 1.5, as measured with a pH meter (HDP-9522 BT type). After 24 h, the ZnFe2O4 sol fluids (as shown in Figure 2(a)) transformed into non-fluid gels (as shown in Figure 2(b)). The formation of ZnFe2O4 sol-gels differs from that of SnO2 sol-gels, for which polyvinyl alcohol (PVA) was added to induce gelation (Santos, Santilli, & Pulcineli, 1999); no polymer action need be considered for gelation of the ZnFe2O4 system.
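The quoted volume-average diameter follows directly from the log-normal parameters given above; a quick numerical check of the Granqvist and Buhrman expression (no assumed values, only the figures from the text):

```python
import math

d_g = 4.22          # median diameter (nm)
ln_sigma_g = 0.26   # standard deviation of ln(d)

# d_v = exp(ln d_g + 1.5 ln^2 sigma_g) (Granqvist & Buhrman, 1976)
d_v = math.exp(math.log(d_g) + 1.5 * ln_sigma_g**2)
print(round(d_v, 2))  # 4.67 nm, matching the value quoted in the text
```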
The magnetization curves of the sol-gel systems with φv = 1.5% and 2.0% were measured by means of a vibrating sample magnetometer (VSM, HH-15) by sweeping the magnetic field at room temperature. For the measurements, the sol samples were freshly broken up by mechanical shaking for 0.5 h using an IKA-KS130 basic apparatus set at MOT 720 rpm, and transferred to plastic tubes of diameter 1.5 mm and length 8.0 mm. After sealing the tubes, the sols were left to stand for 24 h to form the gel samples for the measurement of magnetization.
Figure 2. Photographs of (a) the ZnFe2O4 sol with φv = 2.0% and (b) the gel formed after 24 h
Results and Analysis
Figure 3 presents the magnetization results, from which it can be seen that the magnetization curves of the sol and gel samples do not coincide, with the former lying below the latter. Since the same magnetic phase was present in the sol and gel samples, the difference in their apparent magnetization curves implies that the gel was more easily magnetized than the sol. In addition, for the sol-gel system, none of the magnetization curves corresponding to increasing and decreasing field exhibited hysteresis loops, and the magnetization curves were the same when the field was reversed. Thus it is judged that the sols and gels all reached the thermodynamic equilibrium state during the magnetization measurements as the field was swept, and it can therefore be concluded that the respective samples underwent different magnetization processes. The difference in the apparent magnetization behavior can be clearly discerned from the susceptibility curves of χ (= M/H) vs. H, as shown in Figure 4. In the low-field regime (|H| < 50 kA/m), these curves exhibit obvious fluctuations (as shown in the insets in Figure 4), but when |H| exceeds 50 kA/m, the susceptibility curves tend to decrease with field strength. The susceptibilities of the sols were lower than those of the gels.
Discussion
The behavior of the apparent susceptibilities indicates different interparticle interactions affecting the magnetization processes for the sols and gels, as discussed below.
After the application of a magnetic field H, the magnetic moments m fixed inside the particles in the sol-gel systems interact with the field via the potential U = −μ0 m·H, where μ0 is the magnetic permeability of vacuum. As a consequence, these moments tend to align along the direction of the field by Brownian (bulk) rotation and/or Néel (magnetic vector) rotation, which leads to the apparent magnetic behavior (Shliomis & Stepanov, 1994). Because thermal motion tends to destroy the alignment of the moments, the average degree of alignment depends on the ratio of μ0mH to the thermal energy kBT, i.e. on whether the magnetic forces dominate the Brownian random forces (Bossis, Volkova, Lacis, & Meunier, 2002). μ0mH/kBT is defined as the Langevin parameter, which characterizes the system under an external magnetic field (Cerdà et al., 2010). Since the magnetic moment of a particle is proportional to its volume, large particles can be oriented more easily than small ones. The magnetization of the ZnFe2O4 nanoparticles is weak, so thermal agitation has a dramatic effect on the magnetization of the sol-gel system. Thus, in the initial stage of magnetization (|H| < 50 kA/m), the susceptibility of the sol-gel systems exhibited fluctuations. Only when the absolute strength of the applied magnetic field exceeded 50 kA/m did the susceptibility curves of χ vs. H change monotonically with the magnetic field, reflecting the intrinsic effect of the interparticle interaction on the magnetization process. Therefore, the discussion focuses on the field regime H > 50 kA/m. For comparison, the susceptibility curves reduced by φv, i.e. χr (= M/(φv·H)) vs. H at H > 50 kA/m, were obtained, as shown in Figure 5. The basic considerations are believed to be the following three points: (1) The magnetization of the ZnFe2O4 colloidal particles, which occurs on their surfaces, is very weak, so that the magnetic dipole-dipole interaction can be neglected and the apparent magnetism of the sol-gel systems results from the behavior of individual particles.
(2) In sol-gel systems based on antiferromagnetic ZnFe2O4 nanoparticles, the magnetizing mechanism is based on Brownian rotation of the moments, because the local spins at the surface of the particles could be pinned (Du et al., 1987; Kodama et al., 1996) by Fe3+ and/or Zn2+ adsorbed on the outer surface of the particles. This is similar to the adsorption of surfactant molecules on the surface of Fe2O3 particles causing the spins of the iron atoms close to the surface to be pinned (Blanco-Mantecón & O'Grady, 2006).
(3) The colloidal particles have a translational degree of freedom in addition to the rotational degree of freedom that determines the magnetization behavior through Brownian rotation of the moments fixed inside the particles. The finding that the apparent magnetization or susceptibility of the gels is larger than that of the sols might appear contradictory, because a computer simulation has shown that for a ferrosolid consisting of magnetic dipoles frozen at random locations but free to rotate, the susceptibility is considerably lower than for ferrofluids having fluidity (Wang, Holm, & Müller, 2002). This paradox can be explained in terms of hydrodynamic effects. A colloidal particle's motion can be influenced by another particle's motion through the carrier liquid as an intermedium, which produces the so-called "hydrodynamic interaction" mediated by the solvent (Zahn, Méndez-Alcaraz, & Maret, 1997). For magnetic colloids under the influence of a magnetic field, the increased orientation of the magnetic moments of the particles leads to an increase in the effective attraction between colloidal particles, so that the randomly distributed nanoparticles in sols tend to aggregate into chain-like structures by translational motion (Wang, Li, & Gao, 2009). Thus, in the magnetization process, a hydrodynamic interaction among the colloidal particles in a sol (Zhang et al., 1996; Bossis et al., 2002; Liu et al., 1995) can be induced through this translational degree of freedom. Experimental evidence has shown that the hydrodynamic interaction may enhance the self-diffusion of colloidal particles (Zahn et al., 1997). The diffusion coefficient can be described as D = kBT/(3πηd), where η is approximately the solvent viscosity and d is the diameter of the particles (Zahn et al., 1997). Therefore, the enhancement of the diffusion is equivalent to the effective diameter of the particles becoming smaller. Since the magnetic moments of the particles depend on their volume, the hydrodynamic interaction makes the magnetization of the sols difficult. In inorganic gels, the colloidal particles are interlinked through van der Waals forces. Thus, after gelatinization of the sols, the translational degree of freedom can be viewed as "frozen", while the rotational degree of freedom remains the same. Hence, the hydrodynamic interactions in a gel are negligible. Consequently, the gel is more easily magnetized than the sol, and the apparent susceptibility and magnetization of the former are larger than those of the latter. In addition, it can be seen from Figure 5 that the reduced susceptibility curves of the sol-gel system with φv = 2.0% lie below those of the system with φv = 1.5%. This can be explained as follows.
With an increasing volume fraction of particles, the hydrodynamic interaction between the particles is enhanced, since the average interparticle distance decreases. Thus the susceptibility of the sol with φv = 2.0% is less than that of the sol with φv = 1.5%. Also, with increasing volume fraction of particles, the viscous friction increases, which will tilt the moments of the particles away from the field direction if the moments are spatially fixed in the particles (Odenbach, 2003). This effect is stronger for the gel with φv = 2.0% than for the one with φv = 1.5%, and so the susceptibility of the former is less than that of the latter.
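To put rough numbers on the two quantities invoked in this discussion, the Langevin parameter and the Stokes-Einstein diffusion coefficient, here is a small numerical sketch. The particle moment and the water-like solvent viscosity below are assumed illustrative inputs (neither value is given in the paper); only the 50 kA/m threshold and the 4.67 nm diameter come from the text:

```python
import math

kB = 1.380649e-23          # Boltzmann constant (J/K)
mu0 = 4 * math.pi * 1e-7   # vacuum permeability (T*m/A)
T = 298.0                  # room temperature (K)

# Langevin parameter mu0*m*H / (kB*T); the moment m is an assumed value
m = 1e-21                  # particle moment (A*m^2), illustrative only
H = 50e3                   # field strength (A/m), threshold noted in the text
xi = mu0 * m * H / (kB * T)
print(f"Langevin parameter = {xi:.3f}")  # ~0.015 << 1: thermal agitation dominates

# Stokes-Einstein diffusion coefficient D = kB*T / (3*pi*eta*d)
eta = 1.0e-3               # solvent viscosity (Pa*s), assumed water-like
d = 4.67e-9                # volume-average particle diameter (m)
D = kB * T / (3 * math.pi * eta * d)
print(f"D = {D:.3e} m^2/s")  # ~9e-11 m^2/s for a ~5 nm particle
```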
Conclusion
For magnetic colloids with both rotational and translational degrees of freedom, the hydrodynamic interaction between colloidal particles, which hampers magnetization, can play an important role in the Brownian magnetization process. For the ZnFe2O4 sol-gel system, the magnetization mechanism could be Brownian rotation of the moments fixed inside the particles, since the moments could be pinned. Hence, the gels differ from the so-called frozen ferrofluids in the low-temperature regime, in which only Néel rotation is possible (Blanco-Mantecón et al., 2006). Gelation "freezes" the translational degree of freedom and inhibits the hydrodynamic interaction, so that the apparent susceptibility and magnetization of gels are larger than those of sols. This shows that, just as for uniaxial ferrogels, the relative rotations between the moments and the network can be viewed as a magnetic degree of freedom (Bentivegna et al., 1999) for the ZnFe2O4 gels. In other words, the sols have a translational degree of freedom that produces a hydrodynamic interaction, but gelation freezes this translational degree of freedom and inhibits the hydrodynamic interaction, so the gel is more easily magnetized than the sol. The action by which gelation prevents the hydrodynamic interaction effect in the magnetization process, so that the system can be more easily magnetized, may be referred to as "gelation decoupling". The influence of the hydrodynamic interaction is enhanced with increasing φv; hence, the apparent reduced susceptibility of the sol with φv = 2.0% is less than that of the system with φv = 1.5%. Moreover, with increasing volume fraction of particles, the viscous friction is enhanced correspondingly, so that the reduced susceptibility is lower for the gel with φv = 2.0% than for the one with φv = 1.5%.
In addition, besides producing an additional relaxation peak in the complex susceptibility of ferrofluids (Zhang et al., 1996), the hydrodynamic interaction may also be an important physical factor in making the reduced magnetization of ferrofluids less than the magnetization of the dry particles (Chantrell et al., 1978; Berkowitz et al., 1980; Lin et al., 2010). It is possible that nanoparticles of antiferromagnetic bulk materials form Brownian particles particularly easily, that is, so-called magnetically hard particles (Odenbach, 2009), whose magnetization mechanism is Brownian rotation of the moments, since their moments lie on the surface of the particles and are easily pinned to form rigid dipoles, just as magnetic nanoparticles with a large anisotropy constant can be viewed as rigid dipoles (Neveu-Prin, Tourinho, Bacri, & Perzynski, 1993). Colloids consisting of such magnetic Brownian particles may exhibit special features: not only an influence of the magnetic interaction on the hydrodynamic properties, but also an effect of the hydrodynamic interaction on the apparent magnetism. The latter behavior may be referred to as a "viscomagnetic effect". Experiments have shown that, compared to a ferrofluid system without magnetic interaction (i.e. upon dilution), the reduced initial susceptibility of the system with magnetic interaction is smaller, rather than larger as theory predicts (Wang et al., 2002; Taketomi et al., 2002). The difference may be understandable with the help of the "viscomagnetic effect" resulting from the hydrodynamic interaction. In addition, this thixotropic system may have novel applications, which will be investigated further.
Figure 1. X-ray diffraction spectrum of the particles. The inset shows a typical micrograph of the particles (size bar 50 nm)
Figure 5. Susceptibility curves reduced by φv at H ≥ 35 kA/m
"Physics"
] |
Mouse Nudt13 is a Mitochondrial Nudix Hydrolase with NAD(P)H Pyrophosphohydrolase Activity
The mammalian NUDT13 protein possesses a sequence motif characteristic of the NADH pyrophosphohydrolase subfamily of Nudix hydrolases. Due to the persistent insolubility of the recombinant product expressed in Escherichia coli, active mouse Nudt13 was expressed in insect cells from a baculovirus vector as a histidine-tagged recombinant protein. In vitro, it efficiently hydrolysed NADH to NMNH and AMP and NADPH to NMNH and 2′,5′-ADP and had a marked preference for the reduced pyridine nucleotides. Much lower activity was obtained with other nucleotide substrates tested. Km and kcat values for NADH were 0.34 mM and 7 s⁻¹ respectively. Expression of Nudt13 as an N-terminal fusion to green fluorescent protein revealed that it was targeted exclusively to mitochondria by the N-terminal targeting peptide, suggesting that Nudt13 may act to regulate the concentration of mitochondrial reduced pyridine nucleotide cofactors and the NAD(P)+/NAD(P)H ratio in this organelle and elsewhere. Future studies of the enzymology of pyridine nucleotide metabolism in relation to energy homeostasis, redox control, free radical production and cellular integrity should consider the possible regulatory role of Nudt13.
Introduction
Mammalian genomes typically possess 20-25 genes for members of the Nudix superfamily. Nudix proteins hydrolyze or bind a wide variety of nucleotide and other phosphorylated molecules and are involved in many processes including nucleotide pool regulation, metabolic control and RNA decapping [1,2]. Several Nudix hydrolases have broad substrate specificities in vitro, making it difficult to ascertain their functions in vivo [2]. This uncertainty is compounded by the common misannotation of uncharacterized Nudix proteins in online databases as, for example, ADP-ribose pyrophosphatases or antimutator 8-oxo-dGTPases based on sequence similarities to well characterized proteins with these activities; thus, experimental characterization is important. Most mammalian Nudix proteins have been well studied, but a few, such as NUDT13, have not. NUDT13 is annotated in some databases as a mitochondrial NADH pyrophosphohydrolase. This is based on the presence of a putative N-terminal mitochondrial targeting sequence and the sequence motif "SQPWPFPxS" that is found in all characterized NADH pyrophosphohydrolases downstream of the catalytic Nudix box [3]. A mitochondrial location has also been suggested from a proteomic study [4]. Here, we experimentally confirm these predictions for the first time with recombinant mouse Nudt13 expressed in a baculovirus system. This should now allow the potential influence of Nudt13 and its orthologs to be included in studies of nicotinamide dinucleotide metabolism, energy homeostasis, mitochondrial dynamics and disease where it has hitherto not been considered.
Materials
RIKEN clone 3110052E14, a full-length cDNA insert from 13-day mouse embryo head cloned between the XhoI and SstI sites of pBluescript I SK(+), was obtained from RIKEN (the Institute of Physical and Chemical Research), Yokohama, Japan. Bac-N-Blue linear viral DNA, pBlueBac4.5/V5-His vector, Escherichia coli TOP10, Sf21 (Spodoptera frugiperda) and High Five (Trichoplusia ni) insect cells, Sf-900 II SFM medium, Cellfectin and MitoTracker Red CM-H2XRos were from Invitrogen (Thermo Fisher Scientific). pEGFP-N1 and pEGFP-C2 were from Clontech. EX-CELL 405 medium was from Sigma. FuGENE was from Roche. The anti-His.Tag monoclonal antibody was from Merck.
Cloning of Nudt13 from cDNA into Baculovirus Vector
The mouse Nudt13 gene was PCR-amplified from clone 3110052E14 using the forward and reverse primers 5′-CAG ACT CGA GAA TGA ATC GGA CAA TGT CTC-3′ and 5′-CCA TTT AAG CTT AGC AGC CAGGG-3′, which provided an XhoI site at the start of the amplified gene and a HindIII site at the end. After amplification with Pfu DNA polymerase, the Nudt13 PCR product was purified using a Qiagen PCR purification kit and digested with XhoI and HindIII. The digest was gel-purified and the product ligated between the XhoI and HindIII sites of the pBlueBac4.5/V5-His vector. The resulting pBlueBac-Nudt13 construct (10 ng), encoding Nudt13 with a C-terminal His.Tag and V5 epitope under the control of the strong polyhedrin promoter, was electroporated into E. coli TOP10 cells for propagation and its structure confirmed by sequencing.
Recombinant Nudt13 virus was obtained by co-transfection of the pBlueBac-Nudt13 DNA construct with linearized Bac-N-Blue viral DNA in Sf21 cells. pBlueBac-Nudt13 DNA (2 µg) was mixed with 0.5 µg Bac-N-Blue DNA in 1.5 ml Sf-900 II SFM, then 20 µl of Cellfectin was added, mixed for 10 s, and incubated for 45 min at room temperature. Sf21 cells (10⁶ cells/60 mm dish) were washed with 4 ml Sf-900 II SFM and the transfection mixture added. After 4 days at 27 °C, pure recombinant Nudt13 plaques were isolated from the viral supernatant by blue/white color selection and plaque purification using Sf21 cells [5]. The structures of the recombinants were confirmed by PCR analysis of purified viral DNA and a high-titer Nudt13 viral stock (5 × 10⁸ pfu/ml) prepared from purified virus [5].
Expression and Purification of Nudt13
After optimisation of the time and multiplicity of infection (MOI) for expression of Nudt13, High Five™ cells were seeded as a monolayer in EX-CELL 405 medium in 10 × 75 mm² flasks at 10⁷ cells/flask at 27 °C, then infected with recombinant Nudt13 virus at an MOI of 10. After 48 h, the cells were dislodged and centrifuged at 1000×g for 10 min at 4 °C, then washed with PBS. The cells were lysed in 5 ml 50 mM Tris-HCl, pH 8, 50 mM NaCl, 1% (v/v) Triton X-100, 1% (v/v) Nonidet P-40, and 1 mM phenylmethylsulfonyl fluoride. After 2 h at 4 °C, the lysate was sonicated four times, 20 s each time. The extract was centrifuged at 15,000×g for 20 min at 4 °C and the supernatant mixed with 1 ml NiCAM™-HC resin (Sigma) equilibrated in 50 mM Tris-HCl, pH 8.0, 500 mM NaCl and gently shaken for 2 h at 4 °C. The mixture was then poured into a 15 × 50 mm column, the column washed with 2 × 10 ml 50 mM Tris-HCl, pH 8.0, 500 mM NaCl, 10 mM imidazole, and the protein eluted with 3 ml 50 mM Tris-HCl pH 8.0, 500 mM NaCl, 0.25 M imidazole. The purified protein was dialysed overnight against 2 × 1 l of 50 mM Tris-HCl pH 8.0, 50 mM NaCl, 1 mM dithiothreitol.
Nudt13-EGFP Fusion Constructs and Subcellular Localization
The same PCR product used to make the pBlueBac-Nudt13 construct was used to make N- and C-terminal fusions of Nudt13 to enhanced green fluorescent protein (EGFP). It was ligated between the XhoI and HindIII sites of pEGFP-N1 or pEGFP-C2 to give pNudt13-EGFP or pEGFP-Nudt13 respectively. The plasmids were propagated by transformation of E. coli TOP10 cells. HeLa cells, 6 × 10⁴ cells/dish, were seeded into 35 mm glass-bottomed dishes (MatTek, Ashland, MA, USA) in 2 ml complete MEM and transfected after 24 h when at 50% confluence. FuGENE (2.5 µl/µg DNA) was diluted into 100 µl serum-free MEM, incubated for 5 min at room temperature and added dropwise to 1 µg pNudt13-EGFP or pEGFP-Nudt13 in a volume of 10 µl. The mixture was incubated for 45 min at room temperature. The old medium was removed from the dishes and replaced with 2 ml of fresh complete MEM, and then the transfection mixture was added dropwise to the cell monolayer. The cells were incubated for up to 24 h at 37 °C in a humidified incubator containing 5% CO2. Mitochondria were visualized by incubating HeLa cells 16-24 h after transfection with pNudt13-EGFP in complete MEM containing 50-100 nM MitoTracker Red for 45 min at 37 °C in a humidified incubator in 5% CO2. After removal of the dye, cells were observed in the confocal microscope as previously described [6].
Cloning, Expression and Purification of Nudt13
All attempts to obtain recombinant Nudt13 by expression in E. coli yielded insoluble, inactive protein, which may explain the lack of any study reporting the properties of this enzyme so far. However, we were successful with a baculovirus expression system. The Nudt13 sequence was PCR-amplified from a full-length mouse embryo cDNA and inserted into the pBlueBac4.5/V5-His expression vector in frame with the C-terminal His.Tag and V5 epitope, to give a theoretical protein of expected mass 42,679 Da under the transcriptional control of the baculovirus polyhedrin promoter. The nucleotide sequence of the insert was determined to be exactly the same as that submitted to GenBank under accession no. AK014204. Sf21 insect cells were co-transfected with the pBlueBac-Nudt13 DNA construct and Bac-N-Blue viral DNA, and pure recombinant Nudt13 baculovirus was isolated by plaque assay and purification. High Five insect cells were then infected with pure Nudt13 virus. SDS-PAGE analysis of a cell lysate 48 h after infection showed the presence of a major band corresponding to a 42 kDa protein in cells infected with Nudt13 virus, which represented more than 50% of the total cell extract and which was not present in uninfected cells (Fig. 1a). The expression of Nudt13 was confirmed by western blotting using an anti-His.Tag monoclonal antibody, which detected the C-terminal His.Tag of the recombinant Nudt13 (Fig. 1b). It was purified to homogeneity by affinity chromatography on NiCAM-HC resin (Fig. 1a, lane 4).
Substrate Specificity and Reaction Requirements of Nudt13
Among the substrates tested, Nudt13 showed a high degree of specificity towards NADH and NADPH compared with other related nucleotides when assayed at a fixed concentration of 0.5 mM. Low activity was found with Ap2A, NAD+, NADP+, FAD and ADP-ribose and little or no activity with the other nucleotides examined (Table 1). With NADH as substrate, Nudt13 displayed optimal activity at alkaline pH, between pH 7.8 and 8.2, with about 50% activity remaining at pH 7.0 and 9.0. The enzyme was absolutely dependent on a divalent metal cation for its activity, with 2-5 mM Mn2+ proving optimal for all substrates tested. The optimal Mg2+ concentration was unusually high, at between 40 and 100 mM, giving about threefold lower activity than 2 mM Mn2+; only 20% of maximum activity remained at 5 mM Mg2+. The enzyme obeyed simple Michaelis-Menten kinetics with NADH as substrate in the presence of 2 mM Mn2+. Km and kcat values were determined for NADH under optimal assay conditions by non-linear regression analysis of data obtained by HPLC analysis and were 0.34 mM and 7 s⁻¹.
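A minimal sketch of the non-linear regression step used to extract Michaelis-Menten parameters, with made-up initial-rate data chosen to be consistent with the quoted Km of 0.34 mM; only the fitted-constant values come from the text, and the rate units and enzyme-concentration step are illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    """v = Vmax * [S] / (Km + [S])"""
    return vmax * s / (km + s)

# Hypothetical initial-rate data: [NADH] in mM vs. observed rate
s = np.array([0.05, 0.1, 0.2, 0.4, 0.8, 1.6])
v = np.array([0.13, 0.23, 0.37, 0.54, 0.70, 0.82])

(vmax, km), _ = curve_fit(michaelis_menten, s, v, p0=[1.0, 0.3])
print(f"Km = {km:.2f} mM, Vmax = {vmax:.2f}")  # Km close to 0.34 mM here
# kcat = Vmax / [E]total once the molar enzyme concentration is known
```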
Product Analysis
To determine the products of NADH and NADPH hydrolysis by Nudt13, aliquots of reaction mixtures containing each substrate were analysed by HPLC. The disappearance of substrate was accompanied by the appearance of AMP and NMNH in the case of NADH (Fig. 2a), and 2′,5′-ADP and NMNH in the case of NADPH (Fig. 2b).
Subcellular Localization of Nudt13
The subcellular localization of Nudt13 was determined by expression of the protein in HeLa cells as N- and C-terminal fusions with EGFP. HeLa cells transfected with pNudt13-EGFP showed a distinctive pattern of fluorescence characteristic of mitochondria (Fig. 3a), while cells transfected with pEGFP-Nudt13, in which the putative N-terminal mitochondrial targeting signal is masked, showed a diffuse nucleo-cytoplasmic fluorescence (Fig. 3b) similar to EGFP alone (Fig. 3c). The mitochondrial localization of Nudt13 in cells transfected with pNudt13-EGFP (Fig. 3d) was confirmed with MitoTracker Red CM-H2XRos staining (Fig. 3e). Superimposition of both green and red fluorescence resulted in a yellow image with both signals clearly coincident (Fig. 3f).
Discussion
Eukaryotic members of the Nudix hydrolase subfamily possessing the "SQPWPFPxS" sequence motif that have been characterized so far are known or predicted to be peroxisomal: Saccharomyces cerevisiae NPY1 [3,8], Caenorhabditis elegans ndx-9 [3], Homo sapiens NUDT12 [6] and A. thaliana AtNUDX19 [14], with AtNUDX19 having a dual chloroplastic location [11]. Such locations are not surprising, as many reactions in these organelles depend upon reduced pyridine nucleotide cofactors. Another subcellular compartment where such activities would be expected is the mitochondrion. In rat hepatocytes, free mitochondrial NADH has been measured at 300-400 µM and NADPH at 4 mM, while the corresponding figures for NAD+ and NADP+ are 4-6 mM and 1 mM respectively [15]. Compared to other Nudix NADH pyrophosphohydrolases characterized so far, Nudt13 exhibits a strong substrate preference for NAD(P)H over the other substrates tested, and so a role for Nudt13 and its human ortholog NUDT13 in the regulation of NAD(P)H pools can be suggested. Nudt13 might also serve to generate NMNH, which may have a specific function within the mitochondrion [16].
Nudt13 has the same domain architecture as A. thaliana AtNUDX19. The latter enzyme has a marked preference for NADPH over NADH [11], and analysis of pyridine nucleotide levels in nudx19 deletion mutants has shown an increase in intracellular NADPH, but not NADH [17]. This, along with other phenotypic features of the mutant cell lines, has suggested that AtNUDX19 is a key factor in the regulation of NADPH pools and redox control in this organism [17,18]. Nudt13 does not display the same preference for NADPH in vitro as AtNUDX19; however, a more detailed analysis of substrate utilization in vitro than that presented here is unlikely to reveal the true substrate profile and preference of the mammalian NUDT13 subfamily in vivo, given the known difficulties in inferring these from in vitro activities [2,19]. A good illustration of this is the unusual divalent ion requirement of Nudt13. Optimal activity in vitro was obtained in the presence of 2-5 mM Mn2+ or 40-100 mM Mg2+, both of which are highly unphysiological. Free matrix Mg2+ has been measured as 0.67 mM [20,21], while free Mn2+ is unlikely to be greater than 1 µM [22]. The activity of Nudt13 measured at 0.67 mM Mg2+ was only about 1% of the maximum observed and was negligible at 1 µM Mn2+. The microenvironment of Nudt13 within the mitochondrial matrix may of course alter the divalent ion requirement to match the physiological setting. Alternatively, substrates as yet untested may prove to have low Km values and divalent ion optima. By analogy with the MutT and NUDT1 (MTH1) 8-oxo-dGTPases [23,24], these could include ring-oxidized or other non-functional metabolites of the pyridine nucleotides [25], with hydrolysis removing them from the functional pyridine nucleotide pools to prevent toxicity. Another possibility arises from the finding that the E. coli NudC NADH pyrophosphohydrolase [26] removes NMN from NAD+-capped small regulatory RNAs much more efficiently than it hydrolyzes NADH [27,28]. Regulatory micro-RNAs have been detected in mitochondria [29], but there is currently no evidence that they are capped by NAD+. Thus, although a role for Nudt13 in mitochondrial pyridine nucleotide metabolism seems the most likely by analogy with AtNUDX19, a true understanding will require a full phenotypic analysis of a deletion mutant, including measurements of pyridine nucleotide levels.
Assuming that mitochondrial NADH and/or NADPH are the relevant substrates for Nudt13, what might its role be? The NAD(P)+/NAD(P)H ratios are important regulators of the redox state of the cell and of numerous enzymic activities and signalling processes, and may act as redox sensors for transcriptional control [30][31][32]. In particular, mitochondrial NADPH is required for the reduction of oxidised glutathione and for thioredoxin regeneration, while NADH can be used for the generation of reactive oxygen species from the electron transport chain. How cellular responses to oxidative stress might be affected by Nudt13 activity will depend on how it is regulated in response to physiological signals. Induction or activation would favor NAD(P)H hydrolysis and an increase in NAD(P)+/NAD(P)H ratios, while repression or inhibition would have the opposite effect. Such a ratio change could occur independently of redox reactions and could be a transient response, as NADH at least can be regenerated from NMNH and ATP by the mitochondrial enzyme NMNAT3 [16]. Its influence could also extend to the cytosol as a result of the NADH and NADPH shuttles that can transfer reducing equivalents across the mitochondrial membrane [32,33]. That the human NUDT13 gene is subject to regulation has been shown by the direct correlation of its expression with that of the proposed tumor suppressors MFSD4 and occludin (OCLN) and the inverse correlation with that of the metastasis-promoting bone morphogenetic protein 2 (BMP2) in several gastric cancer cell lines [33]. Increased OCLN and decreased BMP2 expression inhibit the epithelial-mesenchymal transition (EMT), an important stage in tumor cell invasion of tissues. This study suggests that both the uncharacterized MFSD4 and NUDT13 may have a role in the regulation of the EMT. Increased NADPH oxidase activity has been associated with induction of the EMT [34,35], so it would be interesting to establish whether up-regulation of NUDT13 can reduce the supply of cytosolic NADPH.
Other Nudix hydrolases known to be located in mammalian mitochondria are the NUDT9 ADP-ribose hydrolase [36,37] and a portion of the NUDT1 (MTH1) 8-oxo-dGTPase [38], while Arabidopsis has confirmed mitochondrial Nudix hydrolases that are active towards coenzyme A derivatives (AtNUDX15) and long-chain diadenosine polyphosphates (AtNUDX13) [11,39]. Recent studies have focussed on the possible role of NUDT9 and the cytosolic NUDT5 in the catabolism of mitochondrial NAD+ and its metabolites [40][41][42], while many other studies have addressed the dynamic regulation of pyridine nucleotides and energy homeostasis in this organelle [43,44]. The essential role of NAD(P)+ and Nudix proteins in DNA damage repair, ageing and neurodegeneration linked to mitochondrial homeostasis is now also well recognized [45,46]. However, none of these studies has considered the possible role of Nudt13 in these processes, most probably because details of its activity are not present in the primary literature. Thus, the simple characterization presented here should now serve to draw attention to this protein and lead to its consideration in future analyses of pyridine nucleotide metabolism and function in the mitochondria and other cellular compartments.
"Biology"
] |
Proteogenomics data for deciphering Frankia coriariae interactions with root exudates from three host plants☆
Frankia coriariae BMG5.1 cells were incubated with root exudates derived from compatible (Coriaria myrtifolia), incompatible (Alnus glutinosa) and non-actinorhizal (Cucumis melo) host plants. Bacterial cells and their exoproteomes were analyzed by high-throughput proteomics using a Q-Exactive HF high resolution tandem mass spectrometer incorporating an ultra-high-field Orbitrap analyzer. MS/MS spectra were assigned with two protein sequence databases derived from the closely-related genomes of strain BMG5.1 and strain Dg1, the Frankia symbiont of Datisca glomerata. The tandem mass spectrometry data accompanying the manuscript describing the database searches and comparative analysis (Ktari et al., 2017, doi.org/10.3389/fmicb.2017.00720) [1] have been deposited to the ProteomeXchange with identifiers PXD005979 (whole cell proteomes) and PXD005980 (exoproteome data).
Subject area: Environmental microbiology
More specific subject area: Frankia comparative proteogenomics
Type of data: Mass spectrometry raw files, Excel tables
How data was acquired: Data-dependent acquisition of tandem mass spectra using a Q-Exactive HF tandem mass spectrometer (Thermo).
Data format: Raw and processed
Experimental factors: Cells were incubated with filter-sterilized root exudates derived from either compatible (Coriaria myrtifolia), incompatible (Alnus glutinosa) or non-actinorhizal (Cucumis melo) host plants, or without exudates for the control. For each condition, three biological replicates were performed. From each condition, cells and supernatants (exoproteomes) were obtained by centrifugation.
Experimental features: The 12 cellular proteomes and 12 exoproteomes were briefly run on SDS-PAGE, followed by trypsin proteolysis. Tryptic peptides were analyzed by nano LC-MS/MS and spectra were assigned with the genome-derived protein sequence databases from strains BMG5.1 and Dg1.
Data source location: CEA-Marcoule, DRF-Li2D, Laboratory "Innovative technologies for Detection and Diagnostics", BP 17171, F-30200 Bagnols-sur-Cèze, France
Data accessibility: Data is within this article and deposited to the ProteomeXchange via the PRIDE repository with identifiers PRIDE: PXD005979 (whole cell proteomes) and PXD005980 (exoproteome data).
Value of the data
The proteogenomics data are an invaluable resource for understanding Frankia/host plant interactions.
A better coverage of Frankia coriariae BMG5.1 proteome is achieved by means of querying two closely-related genomes.
The data have been exploited to decipher the main proteome changes in response to various root exudates. As described in detail in the accompanying manuscript [1], the proteins which are solely induced by Coriaria myrtifolia root exudates are involved in cell wall remodeling, signal transduction and host signals processing.
Experimental design and data
Interpreted tandem mass spectrometry results were acquired with a Q-Exactive HF instrument incorporating an ultra-high-field Orbitrap analyzer. This mass spectrometer allows rapid and deep coverage of proteome samples [2][3][4]. The results of peptide-to-spectrum assignment were formatted in four .xls tables using the Microsoft Excel program. The whole-cell proteome and exoproteome data from the 12 independent conditions were assigned to tryptic peptides against either the Frankia BMG5.1 annotated genome or the Frankia Dg1 annotated genome using the MASCOT 2.3.02 search engine (Matrix Science), with standard parameters: a maximum number of missed cleavages of 2; mass tolerances for the parent ion and the product ions of 5 ppm and 0.02 Da, respectively; carbamidomethylated cysteine residues as a fixed modification; oxidized methionine residues and deamidation of asparagine and glutamine as variable modifications; and selection of peptides of at least 7 amino acids. Peptide-to-spectrum matches with a score above their peptide identity threshold were filtered at p < 0.05.
The use of two databases allows improved coverage of gene products, circumventing some erroneous annotations [5]. Supplementary Tables S1 and S2 list the peptide-to-spectrum matches for whole-cell proteomes queried against the BMG5.1 and Dg1 databases, respectively. A total of 149,629 and 144,213 MS/MS spectra were assigned, respectively. A total of 18,344 MS/MS spectra were specifically assigned with the Dg1 database, highlighting the interest of pan-proteomics [6]. Supplementary Tables S3 and S4 list the peptide-to-spectrum matches with all the tandem mass spectrometry characteristics for the exoproteomes queried against the BMG5.1 and Dg1 databases, respectively. The deposited data correspond to the 24 raw files and the interpreted files.
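A minimal sketch of how the database-specific assignments above could be compared, assuming each search result is exported as a table with a spectrum identifier column; the file and column names are illustrative, not part of the deposited data:

```python
import csv

def assigned_spectra(psm_file, spectrum_col="spectrum_id"):
    """Collect identifiers of MS/MS spectra assigned in one database search."""
    with open(psm_file, newline="") as fh:
        return {row[spectrum_col] for row in csv.DictReader(fh)}

bmg = assigned_spectra("psm_bmg51.csv")  # hypothetical export of Table S1
dg1 = assigned_spectra("psm_dg1.csv")    # hypothetical export of Table S2

# Spectra interpreted only thanks to the second, closely-related genome
dg1_specific = dg1 - bmg
print(len(bmg), len(dg1), len(dg1_specific))
```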
Preparation of Frankia coriariae BMG5.1 samples
Frankia coriariae BMG5.1 cells were grown in BD-N medium supplemented with 2.5 mM pyruvate as a carbon source at 28°C. After ten days of cultivation, the cultures were supplemented with an equal volume of previously filter-sterilized root exudates from each plant species. The cells were incubated for five additional days as described [1]. Cells were harvested by centrifugation. Proteins from the resulting supernatants were precipitated with trichloroacetic acid (10% final, w/vol). Cell pellets and exoproteins were dissolved in lithium dodecyl sulfate β-mercaptoethanol protein gel sample buffer (Invitrogen) and incubated at 99°C for 5 min. They were processed as indicated previously [7]. For statistical purposes, three independent biological replicates were performed for each condition.
Protein extracts and tandem mass spectrometry
The 24 peptide mixtures were analyzed by high-resolution tandem mass spectrometry using a Q-Exactive HF mass spectrometer (Thermo) coupled to an UltiMate 3000 LC system (Dionex-LC Packings) under conditions similar to those previously described [8]. Peptide mixtures (10 μl) were loaded and desalted on-line on a reverse phase precolumn (Acclaim PepMap 100 C18) from LC Packings. Peptides were then resolved on a reverse phase Acclaim PepMap 100 C18 column and injected into the Q-Exactive HF mass spectrometer. The Q-Exactive HF instrument was operated according to a Top20 data-dependent acquisition method as previously described [8], selecting 2+ and 3+ charge states.
Protein sequence database for proteogenomics MS/MS assignment
The recorded MS/MS spectra for the 12 whole-cell proteome samples and the 12 exoproteome samples were searched against the genome-derived protein sequence databases from Frankia strains BMG5.1 and Dg1 with standard parameters for microbial proteomics [9][10][11][12]. The number of MS/MS spectra per protein (spectral counts) was determined for the three replicates of each of the four conditions assayed. | 1,316 | 2017-07-11T00:00:00.000 | [
"Biology"
] |
Interactions of Osteoprogenitor Cells with a Novel Zirconia Implant Surface
Background: This study compared the in vitro response of a mouse pre-osteoblast cell line on a novel sandblasted zirconia surface with that of titanium. Material and Methods: The MC3T3-E1 subclone 4 osteoblast precursor cell line was cultured on either sandblasted titanium (SBCpTi) or sandblasted zirconia (SBY-TZP). The surface topography was analysed by three-dimensional laser microscopy and scanning electron microscopy. The wettability of the discs was also assessed. The cellular response was quantified by morphology (day 1), proliferation (days 1, 3, 5, 7, and 9), viability (days 1 and 9), and migration (0, 6, and 24 h) assays. Results: The sandblasting surface treatment of both titanium and zirconia increased the surface roughness by rendering a defined surface topography, with titanium showing a more apparent nano-topography. The wettability of the two surfaces showed no significant difference. The zirconia surface resulted in improved cellular spreading and a significantly increased rate of migration compared to titanium. However, the cellular proliferation and viability noted in our experiments were not significantly different on the zirconia and titanium surfaces. Conclusions: The novel, roughened zirconia surface elicited cellular responses comparable to, or exceeding, those of titanium. Therefore, this novel zirconia surface may be an acceptable substitute for titanium as a dental implant material.
Introduction
The loss of teeth or edentulism is a debilitating condition affecting approximately 15.5% of the Australian population in its severe form, resulting in fewer than 21 teeth in adult dentition [1]. Edentulism can directly result in physical impairment, functional limitation, psychological disability, and social disability, along with handicap [2]. The successful management of partial and full mouth edentulism involves the placement of dental implants into the jaw bone, which provides anchorage and support for the fixed artificial tooth/teeth (prosthesis).
Modern implant dentistry began in the 1950s when Per-Ingvar Brånemark, a Swedish professor, stumbled upon a phenomenon he called "osseointegration" [3]. The success of endosseous implants is directly related to osseointegration: a process of implant-bone interaction that ultimately leads to bone-to-implant anchorage, which is crucial for the long-term success of the implant [4]. The first patient was successfully treated in 1965, using a titanium screw implant [3]. Since then, millions of patients worldwide have been treated with dental implants, with titanium having established itself as the preferred material [3,5,6].
Titanium is known to possess excellent mechanical strength and is highly biocompatible, whereby the formation of an oxide layer facilitates cellular interaction and osseointegration. This biocompatible material historically enjoys high success rates, making it the most widely used material for osseous implants today [3,[7][8][9]. Despite being the gold standard, titanium has its drawbacks. Its grey hue can be a significant aesthetic issue, and corrosion of the metal can trigger a hypersensitivity reaction or lead to an accumulation of titanium within internal organs [3,5]. Furthermore, a shift in patient preference towards a non-metallic solution has resulted in a demand for alternative implant systems, ushering in the advent of ceramic dental implants [3,5,6,[10][11][12][13][14][15].
Ceramic materials are commonly used in dentistry due to a high biocompatibility and excellent aesthetics, mimicking the appearance of a natural tooth [5]. Ceramics have been used for various applications such as the fabrication of crowns and bridges, orthodontic brackets, and implant abutments [16]. Recently, Yttria-tetragonal zirconia polycrystal (Y-TZP) has been proposed as an alternative material for implants as it is tooth-coloured, resistant to plaque formation, and biocompatible with suitable mechanical properties. However, there is a need to evaluate its properties further [3,4,6,[16][17][18][19][20][21][22].
The micro- and nano-structure of implant surfaces is a significant factor for titanium and zirconia to achieve successful and reliable osseointegration [5,[23][24][25][26][27][28][29]. Hence, various surface modifications have been used to modulate the physical and chemical properties, aiming to improve bone-to-implant interaction [5,17,25,29]. At the molecular level, modified implant surfaces can increase the adsorption of serum proteins, cytokines, and mineral ions and better retain a fibrin clot, subsequently promoting cellular migration and attachment [30][31][32]. Different implant surface treatments and materials possess unique characteristics that can affect the host cellular response, as shown by in vitro studies using several cell lines including human fetal osteoblasts, human mesenchymal stem cells, and mouse calvaria MC3T3-E1 cells [17,31,[33][34][35][36][37][38][39][40]. Hence, cell culture assays are pivotal to understanding the cell response to any new implant material surface [17,33]. The unique surface characteristics of implants may be obtained through various methods of machining, blasting, acid-etching, coating, laser technology, or a combination of procedures. As with titanium, in vitro and in vivo studies have confirmed that zirconia-based ceramic surfaces are chemically inert with minimal local or systemic adverse responses [41].
Creating an optimised zirconia topography without compromising biomechanical stability is a technical challenge that, so far, has resulted in increased failure rates under function, with numerous zirconia implant fractures [18][19][20][42][43][44]. Zirconia's physical characteristics and mechanical properties are a significant impediment to its development for implant fabrication. Of the three crystalline forms in which zirconia exists (monoclinic, tetragonal, and cubic), the desirable mechanical properties are achieved with the advent of partially stabilised zirconia. However, manufacturing processes induce inherent stress and cracks within the material and render it unstable (Figure 1). The addition of various stabilising agents such as magnesia, cerium, or yttria at different concentrations and combinations has yielded various forms of partially stabilised zirconia with a significant enhancement in structural strength due to enhanced resistance to slow crack growth [45]. Currently, yttria partially stabilised zirconia is commonly employed for the dental implants available for clinical use [46,47]. Amended manufacturing processes have aimed to produce a micro-roughened zirconia implant with decreased fracture rates and improved fatigue strength, enhancing the clinical performance of zirconia implants [18]. However, the optimal design for zirconia implant osseointegration is yet to be determined [24,[48][49][50][51]. Zirconia implants have shown superior soft-tissue responses, biocompatibility, and aesthetics with comparable osseointegration to titanium; however, additional research is required to further improve zirconia dental implants and validate them as a viable alternative to the titanium implant [16,23]. Therefore, the rationale of this study was to characterise a novel zirconia implant surface and evaluate its osseointegration potential compared to a titanium surface, using in vitro cell culture assays focused on cellular viability and proliferation, attachment, cytoskeletal changes, and migration.
Figure 1. The three crystalline phases of zirconia. The desirable mechanical properties of zirconia are possessed by a state of tetragonal and cubic forms, stabilised below 1070 °C to retain these properties with the addition of alumina, magnesia, cerium, and/or yttria.
Materials and Methods
All the cellular assays were designed and conducted in compliance with the Minimum Information About a Cellular Assay (MIACA) guidelines [52,53]. Modified Consolidated Standards of Reporting Trials (CONSORT) guidelines for preclinical in vitro studies on the dental materials checklist was utilised to report our findings [54]. Ethics approval was not required for this in vitro study.
Sample Preparation
Commercially procured titanium alloy (CpTi) was used to fabricate discs that were 14 mm in diameter and 3.5 mm in thickness. The yttria-tetragonal zirconia polycrystal (Y-TZP) used was obtained by sintering commercial 3 mol% yttria partially stabilised zirconia powder (30% monoclinic and 70% tetragonal) to produce discs that were 16 mm in diameter and 3 mm in thickness. A ready-to-press powder was uniaxially pressed at 3000 kgf/cm² pressure in a pellet press die followed by sintering at 1450 °C for 2 h with a constant heating rate of 10 °C/min. The sintered discs were characterised by a 100% tetragonal crystalline structure with a bulk density of 6.07 g/cm³. After sintering, all samples were wet ground on silicon carbide abrasive paper and polished to obtain a smooth surface. Prior to surface treatment, the samples were ultrasonically cleaned in a 100% ethanol bath for 15 min and then in distilled water for 10 min to remove any surface contamination or debris from the polishing process.
Surface Characterisation
After sterilisation, discs of CpTi and Y-TZP were used for surface analysis using laser scanning microscopy, scanning electron microscopy (SEM), and contact angle measurement, comparing untreated and sandblasted surfaces to elucidate the effect of sandblasting on surface characteristics.
Laser Scanning Microscopy
To assess the surface topography of the discs, images were acquired with a laser scanning microscope (LEXT OLS4100, Olympus Corporation, Tokyo, Japan). Three discs of each sample type (untreated Y-TZP and CpTi; sandblasted Y-TZP and CpTi) were used to obtain a range of measurements at three randomly selected sites on each disc. The following measurements were recorded with a Gaussian filter to separate the roughness from errors of form or waviness; discs were characterised by height, spatial, and hybrid parameters as described by Wennerberg and Albrektsson [30].
• Sa (µm): arithmetical mean height; the average height deviation (above and below) from the mean plane of the surface; a measure of surface roughness.
The resulting surface topography from sandblasting was also represented by three-dimensional topography models [30,60]; Sa can also be computed directly from a measured height map, as sketched below.
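A minimal computational sketch of Sa, assuming a NumPy array of measured heights (toy values here, in micrometres):

```python
# Hedged sketch: arithmetical mean height (Sa), i.e., the mean absolute
# deviation of the height map from its mean plane. The array is a toy
# stand-in for laser-scanning-microscope data.
import numpy as np

z = np.array([[0.1, -0.3, 0.2],
              [0.4, 0.0, -0.2],
              [-0.1, 0.3, -0.4]])  # heights relative to an arbitrary datum

sa = np.mean(np.abs(z - z.mean()))  # deviation from the mean plane
print(f"Sa = {sa:.3f} µm")
```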
Scanning Electron Microscopy
Topographical disc analysis was also determined with SEM analysis (Phenom™ G2 pro, Phenom-World BV, Eindhoven, The Netherlands). Following gold sputtering (Spi-Module™ Sputter Coater, SPI Supplies, West Chester, PA, USA), three discs of each sample type (sandblasted and untreated CpTi; sandblasted and untreated Y-TZP) were used to obtain images at three randomly selected sites on each disc.
Contact Angle Measurement
The sessile drop method was used for contact angle measurements, whereby 80 µL of purified water was deposited onto both the untreated and sandblasted dry disc (CpTi and Y-TZP) surfaces at room temperature. Three discs of each sample were used to obtain images. Images were calibrated to the scale in each image, and the angles were measured using ImageJ software (version 1.53a, National Institutes of Health, Bethesda, MD, USA); measurement was repeated three times on each side of the sessile drop.
A relationship between surface roughness and wettability is given by Wenzel's equation, r_a (γ_SV − γ_SL) = γ_LV cos θ_W, where r_a is the roughness factor and θ_W is the contact angle of the rough surface. The equation shows that if the roughness factor increases, then cos θ_W will increase, resulting in a decreased contact angle [61].
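Combining Wenzel's relation with Young's equation (γ_SV − γ_SL = γ_LV cos θ_Y) gives cos θ_W = r_a cos θ_Y, which the sketch below evaluates; the input values are illustrative, not measurements from this study.

```python
# Hedged sketch: Wenzel's model. For a hydrophilic surface (Young angle
# below 90°), a larger roughness factor r_a lowers the apparent contact
# angle. Inputs are illustrative only.
import math

def wenzel_angle(theta_young_deg: float, r_a: float) -> float:
    """Apparent contact angle (degrees) on a rough surface per Wenzel."""
    cos_w = r_a * math.cos(math.radians(theta_young_deg))
    cos_w = max(-1.0, min(1.0, cos_w))  # clamp; model breaks down beyond ±1
    return math.degrees(math.acos(cos_w))

print(wenzel_angle(70.0, 1.0))  # smooth surface: 70.0 degrees
print(wenzel_angle(70.0, 1.5))  # rougher surface: ~59.1 degrees
```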
Cell Morphology
The cytoskeletal arrangement of the MC3T3-E1 cells was examined with an inverted epi-fluorescence microscope (Olympus IX53, Olympus Corporation, Tokyo, Japan) to visualise the cytoskeletal protein actin, which was stained with 2% Flash Phalloidin™ red solution (BioLegend, San Diego, CA, USA) according to the manufacturer's instructions. Initially, 1.6 mL of media containing MC3T3-E1 cells at 1 × 10⁶ cells/mL was used to seed cells onto the SBY-TZP (n = 4) and SBCpTi (n = 4) discs over 24 h. Subsequently, cells were fixed for 10 min with 4% paraformaldehyde at room temperature and permeabilised using 0.5% Triton X-100 (Sigma-Aldrich, Castle Hill, NSW, Australia). FBS (5%) was used as a blocking agent prior to incubation with Flash Phalloidin™ red solution for 20 min at room temperature. Stained cells were then imaged and analysed to determine the fluorescence of the Flash Phalloidin™ red stain relative to the background (disc), expressed as a percentage, quantifying the fluorescence of MC3T3-E1 cells on the SBY-TZP and SBCpTi discs [62][63][64].
The cytoskeletal arrangement of the cells was also examined under SEM using a protocol adapted from Fischer et al. [65]. First, 1.6 mL of media containing cells at 1 × 10⁶ cells/mL was seeded onto SBY-TZP (n = 4) and SBCpTi (n = 4) discs and incubated for 24 h. Attached cells were fixed with 3% glutaraldehyde and then dehydrated in graded concentrations of ethanol (25%, 50%, 75%, 95%, 100%) for 5 min at each concentration. Subsequently, discs were placed in a 1:1 solution of hexamethyldisilazane (HMDS) and ethanol for 15 min, followed by 100% HMDS for 5 min. Samples were dried for 4 h within a fume hood before gold sputter coating and SEM evaluation.
Cell Viability and Cell-Covered Area
To evaluate the viability of the cells on the SBY-TZP and SBCpTi discs, live and dead cells were analysed at days 1 and 9. A Cytopainter Cell Plasma Membrane Staining Kit at 20% (ab219941, Abcam, Melbourne, Australia) was used to stain live cells, and 2% propidium iodide stain (ThermoFisher, Scoresby, Australia) was used to counterstain dead cells. First, 1.6 mL of media containing cells at 1 × 10⁶ cells/mL was seeded onto the SBY-TZP (n = 4) and SBCpTi (n = 4) discs. The stain solution was added to each well and incubated for 20 min in the dark. Images were obtained using an inverted epifluorescence microscope and analysed using ImageJ software to determine a live-dead ratio and the cell-covered area [62][63][64].
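A minimal sketch of the quantification step, assuming the green (live) and red (dead) channels have already been thresholded into binary masks; this only loosely mirrors the ImageJ workflow.

```python
# Hedged sketch: live/dead ratio and cell-covered area from thresholded
# fluorescence masks (True = stained pixel). The toy arrays stand in for
# thresholded microscope images.
import numpy as np

live_mask = np.array([[1, 1, 0], [1, 0, 0], [1, 1, 1]], dtype=bool)  # green
dead_mask = np.array([[0, 0, 1], [0, 0, 0], [0, 0, 1]], dtype=bool)  # red

live_dead_ratio = live_mask.sum() / max(dead_mask.sum(), 1)
covered_pct = 100 * (live_mask | dead_mask).sum() / live_mask.size
print(f"live/dead = {live_dead_ratio:.1f}, covered = {covered_pct:.0f}%")
```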
Cell Proliferation
The MC3T3-E1 cells were seeded onto the SBY-TZP (n = 8) and SBCpTi (n = 8) discs, as well as eight culture wells (positive control). First, 1.6 mL of media containing cells at 1 × 10⁶ cells/mL was used to seed cells onto each disc or well. The assay was run with 10% v/v resazurin (Sigma-Aldrich, Castle Hill, NSW, Australia) to determine cellular proliferation at 1, 3, 5, 7, and 9 days. Resazurin solution was added to the wells at each time point and incubated for 5 h in the incubator. Medium from each specimen was transferred to a 96-well plate (in triplicate: 3 wells of 100 µL each), and the absorbance of resorufin (the reduction product of resazurin) at 570 nm and 600 nm wavelengths was recorded using a microplate absorbance reader (iMark™ Microplate Absorbance Reader, BioRad Laboratories, Hercules, CA, USA). The percentage of resorufin was calculated using the values obtained for the stock solution (without cells). The raw data were normalized to disc surface area, since the SBY-TZP discs (16 mm diameter) were larger than the SBCpTi discs (14 mm).
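The background subtraction and area normalization just described can be sketched as below. The simplified percent-signal formula (corrected absorbance relative to the cell-free stock) and all numbers are assumptions for illustration; the study's exact calculation may differ.

```python
# Hedged sketch: resazurin readout with 600 nm background subtraction and
# disc-area normalization. The formula simplification and the numbers are
# illustrative assumptions only.
import math

def corrected(a570: float, a600: float) -> float:
    return a570 - a600  # subtract the 600 nm background

def pct_signal(s570, s600, blank570, blank600) -> float:
    return 100 * corrected(s570, s600) / corrected(blank570, blank600)

disc_area = lambda d_mm: math.pi * (d_mm / 2) ** 2
area_factor = disc_area(16) / disc_area(14)  # SBY-TZP vs SBCpTi disc sizes

raw = pct_signal(0.82, 0.20, 0.35, 0.21)  # toy absorbance readings
print(f"area-normalized signal: {raw / area_factor:.1f}")
```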
Cell Migration
Migration was assessed using a scratch-healing assay whereby cells at 1 × 10 6 cells/mL were seeded on the SBY-TZP (n = 4) and SBCpTi (n = 4) discs (1.6 mL of media) and cells were grown until confluent. Two scratches across each disc were made using a sterile 200 µL pipette tip and followed by thorough washing with phosphate-buffered saline (PBS; Sigma-Aldrich, Castle Hill, NSW, Australia) to remove detached cells. The scratches were imaged using an inverted epifluorescence microscope after 0, 6, and 24 h of incubation with Flash Phalloidin™ Red solution as per the manufacturer's instructions. The images were analysed, with the result represented as a percentage of the initial open area of the scratch covered by cells at each time point using ImageJ software [66].
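The closure metric reduces to simple arithmetic on the measured open areas; a minimal sketch with placeholder area values:

```python
# Hedged sketch: percent of the initial scratch area covered by migrating
# cells at each time point. Open-area values (e.g., in µm²) are
# placeholders for measurements such as those from ImageJ.
def pct_healed(open_area_t0: float, open_area_t: float) -> float:
    return 100 * (open_area_t0 - open_area_t) / open_area_t0

t0, t6, t24 = 1.00e6, 0.85e6, 0.30e6  # toy open areas at 0, 6, and 24 h
for hours, area in [(6, t6), (24, t24)]:
    print(f"{hours:>2} h: {pct_healed(t0, area):.0f}% healed")
```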
Statistical Analysis
IBM SPSS Statistics 20 (IBM SPSS Inc., Chicago, IL, USA) was used for statistical analysis. The Mann-Whitney U-test and ANOVA were used to make comparisons among the groups; results are presented as median ± interquartile range and mean ± standard deviation, respectively. A p-value < 0.05 was considered statistically significant.
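Although the study used SPSS, the same two tests are available in open tooling; a minimal SciPy sketch with placeholder group data:

```python
# Hedged sketch: the two tests named above, run on placeholder data with
# SciPy rather than SPSS. Group values are illustrative only.
from scipy.stats import f_oneway, mannwhitneyu

zirconia = [3.1, 3.4, 3.2, 3.6]
titanium = [3.3, 3.5, 3.4, 3.7]
control = [3.0, 3.2, 3.1, 3.3]

u_stat, p_mwu = mannwhitneyu(zirconia, titanium, alternative="two-sided")
f_stat, p_anova = f_oneway(zirconia, titanium, control)
print(f"Mann-Whitney p = {p_mwu:.3f}; ANOVA p = {p_anova:.3f}")
```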
Sandblasting Affected Surface Topography in a Unique Manner for the Y-TZP and CpTi Discs
The untreated Y-TZP and CpTi surfaces revealed a smooth topography, with limited signs of texture resulting from the manufacturing processes (Figure 2). The untreated Y-TZP and CpTi surfaces showed no significant difference in Sa, Sdr, or Sku values, meaning that both surfaces were of similar roughness, surface area, and peak sharpness (Table 1). The two surfaces were significantly different in Ssk (p = 0.0076) and Str (p = 0.001), confirming that the untreated Y-TZP surface had a significantly more uniform topography and a texture skewed above the mean surface plane compared to CpTi, which had a relatively equal height distribution above and below the mean plane (Table 1).
Figure 2. Three-dimensional laser microscopy images of untreated and sandblasted Y-TZP and CpTi. Images were obtained on three discs at three randomly selected sites using digital laser scanning microscopy, and representative wireframes were generated. Wireframes are shown in micrometres (µm). A and B, representative images (10× magnification) of untreated CpTi (A) and SBCpTi (B). C and D, representative images (10× magnification) of untreated Y-TZP (C) and SBY-TZP (D).
Table 1. Results of the topographical analyses by laser scanning microscope on untreated and sandblasted yttria-tetragonal zirconia polycrystal (Y-TZP) and titanium (CpTi). Data are presented as mean ± standard deviation, n = 3 sites per disc (on 3 discs). Sa, arithmetic mean height; Sdr, developed interfacial area ratio; Ssk, skewness; Sku, kurtosis; Str, texture aspect ratio. * indicates a significant difference (p < 0.001) between untreated surfaces. # indicates a significant difference (p < 0.001) between sandblasted and untreated surfaces. + indicates a significant difference (p < 0.001) between sandblasted surfaces.
Both Y-TZP and CpTi showed a significant increase in the surface parameter Sa after sandblasting compared with the untreated surfaces (p = 0.008 for both CpTi and Y-TZP; Table 1). The other parameters (Ssk, Sku, and Str) show that both treated surfaces had a height distribution skewed below the mean plane, which was spiked in texture and spatially isotropic (Table 1). This was evident in the three-dimensional surface models obtained from laser scanning microscopy of both the SBY-TZP and SBCpTi samples (Figure 2B,D). SBY-TZP had a significant increase in Sa (p = 0.008), Sdr (p = 0.007), Ssk (p = 0.0022), and Str (p = 0.0022) compared with untreated Y-TZP, showing a surface with increased roughness and surface area and a texture skewed below the mean plane, describing a non-uniform surface topography. SBCpTi had a significant increase in Sa (p = 0.008), Sdr (p = 0.002), and Sku (p = 0.001) compared to untreated CpTi, revealing an increased roughness, surface area, and sharp surface topography (Table 1; Figure 2).
The Sa values were comparable for SBY-TZP and SBCpTi, with no statistically significant difference (p = 0.6320); therefore, the surfaces were of similar surface roughness. Titanium showed a sharper surface (higher Sku value; not significant) that was more uniform in texture (lower Str value; not significant) and had a height distribution closer to the mean surface plane (lower Ssk value; not significant) compared to zirconia (Table 1). Although the surface differences were visually apparent (Figure 2), the parameters did not reach statistical significance. SBY-TZP had a height distribution further below the mean plane of the surface (Figure 2), where 10× magnification shows a heavily pitted surface with limited surface features evident between pits. Table 1 and Figure 2 show that the morphology of the SBCpTi surface revealed a more prominent surface topography with a significant difference in Sdr (p = 0.001) compared to the SBY-TZP surface, showing a significant difference in the surface area of the sandblasted surfaces; titanium had a more developed nano-topography. In summary, the same sandblasting procedure resulted in a Y-TZP surface with distinct topographical properties compared to CpTi.
Representative SEM micrographs of the SBCpTi and SBY-TZP discs are shown in Figure 3. Sandblasted surfaces were free from residual particles following the cleaning process, confirming no surface contamination by the Al₂O₃ particles used for the sandblasting treatment. Sandblasted CpTi was shown to have a consistently complex surface topography without a discernible pattern. Sandblasted Y-TZP showed almost untreated areas of the surface interspersed with irregular surface features occurring without any visible pattern. The SEM micrographs showed that whilst the two surfaces received the same surface treatment, the resultant topography was unique for both titanium and zirconia.
Sandblasting Improved the Wettability of both Y-TZP and CpTi Discs
The wettability of the untreated surfaces was significantly different to the wettability of treated surfaces (p = 0.0022) with sandblasting treatment on both titanium and zirconia discs resulting in a reduction in contact angle ( Figure 4). As noted in Table 1 and Figure 2, the surface roughness of the SBCpTi and SBY-TZP increased, and as such, the contact angle decreased. Roughened titanium was seen to have a significant reduction in contact angle compared to the untreated titanium surface: approximately a 34-degree reduction (p = 0.002). However, no significant difference was noted between the two sandblasted surfaces (SBCpTi and SBY-TZP) with contact angles of approximately 56 degrees and 55 degrees, respectively (p > 0.05).
Cells Showed Morphological Differences between Sandblasted Y-TZP and CpTi Discs
After 24 h of incubation, cells were found adhering to both SBCpTi and SBY-TZP discs (Figure 5). Flash Phalloidin™ staining showed that early cellular spread was denser on zirconia, with a marked difference in the number of cell-to-cell contacts compared to titanium. Images from titanium discs revealed a prominent number of spherical cells with limited extensions and cell-to-cell contact. MC3T3-E1 cells cultured on the zirconia surface showed well-organised actin fibres and filopodiae seeking intercellular contact. This was supported by the percentage of actin fluorescence being significantly higher on SBY-TZP compared with SBCpTi (p = 0.0015). SEM confirmed the findings of the adhesion assay, showing SBCpTi to have limited sites of cell-to-cell contact and limited cellular extensions (Figure 6C,D), in contrast to SBY-TZP, on which the extent of cellular spread was greater with a marked increase in intercellular contacts (Figure 6A,B).
[Figure caption fragment] Images were analysed using ImageJ software (E). Results are presented as mean ± standard deviation (n = 3). Lines within the graph show which samples had a statistically significant difference (p < 0.05).
Sandblasted Y-TZP and CpTi Discs Showed Comparable Proliferation and Viability of MC3T3-E1 Cells
Results for cellular proliferation at 1, 3, 5, 7, and 9 days of incubation are shown in Figure 7. A 3 (substrate) × 5 (day) mixed factorial ANOVA was performed to examine the differences in cellular proliferation over time and across substrates (SBY-TZP, SBCpTi, or cell control). There was a significant main effect for time: F(4,24) = 207.76, p < 0.05. Post-hoc analyses indicated a significant difference in cellular proliferation between day 7 and all other days (p < 0.05), and between day 9 and all other days (p < 0.05). There was no significant main effect for substrate, F(2,6) = 0.25, p > 0.05, nor an interaction between time and substrate, F(2,6) = 0.63, p > 0.05. This indicates that cellular proliferation did not significantly differ between SBY-TZP and SBCpTi, nor did its time course depend on the substrate. The cell-covered area on the surfaces of the discs increased significantly across time (p < 0.05) but did not significantly differ between SBCpTi and SBY-TZP at any time point (Figure 7).
The results for cell viability at 1 and 9 days of cultivation are shown in Figure 8. Day 1 showed a limited number of dead cells, with dominant live cell numbers for both SBCpTi and SBY-TZP. Day 9 showed a confluence of cells on the discs with a subsequent increase in dead cell numbers for both surfaces. A 2 (disc material) × 2 (day) mixed factorial ANOVA was performed to examine the differences in cellular viability (live-dead ratio) over time and across disc type (SBY-TZP or SBCpTi). There was no significant main effect for time, F(9, 20) = 0.01042, p > 0.99. However, a significant main effect for disc material was noted, F(1, 20) = 10.45, p < 0.05. Post-hoc analyses indicated no significant difference in cellular viability between SBY-TZP and SBCpTi (p > 0.05). This suggests that cellular viability did not significantly differ across time or across disc material on days 1 and 9.
[Figure 5 caption fragment] The amount of fluorescence was measured to obtain a percentage area of cytoskeletal arrangement using ImageJ software. Results are presented as median ± interquartile range (E). * denotes a significant difference in the percentage of actin fluorescence compared to total area on SBY-TZP discs compared with SBCpTi discs (p < 0.05).
Figure 7. Cellular proliferation expressed as the average percentage of reduced resazurin in cells cultured on SBY-TZP and SBCpTi compared to a positive cell control. MC3T3 cells were seeded onto SBY-TZP (n = 8) and SBCpTi (n = 8) discs and a positive control (n = 8) and incubated for 1, 3, 5, 7, and 9 days. The proliferation assay determined the reduction of resazurin into resorufin, measured at a wavelength of 570 nm with subtraction of the 600 nm background using a microplate absorbance reader. A cell-covered area assay was included as a trendline. Cellular proliferation results are presented as mean ± standard deviation. * denotes a significant difference in the percentage reduction of resazurin on SBY-TZP and SBCpTi compared to the cell control (p < 0.05).
Figure 8. Cells were seeded onto SBY-TZP (n = 4) and SBCpTi (n = 4) discs and incubated for 1 and 9 days. The cells were imaged with epifluorescence microscopy using a Cytopainter Cell Plasma Membrane Staining Kit (green) and propidium iodide (red). A–D, representative images of cellular viability on SBCpTi: day 1 at 10× magnification (A), day 1 at 40× (B), day 9 at 10× (C), day 9 at 40× (D). E–H, representative images of cellular viability on SBY-TZP: day 1 at 10× (E), day 1 at 40× (F), day 9 at 10× (G), day 9 at 40× (H). The amount of fluorescence of the Cytopainter stain (green) and propidium iodide (red) was measured to obtain a live-to-dead number ratio using ImageJ software. Results are presented as median ± interquartile range (I).
Cells Showed Improved Rates of Migration on Sandblasted Y-TZP Compared to CpTi Discs
The healing of the scratch area on each disc at 0, 6, and 24 h, and the healed percentage, is shown in Figure 9. At 6 h, no significant difference in the percentage and subsequent rate of migration was evident for either surface (Figure 9B,E). After 24 h, zirconia had an average 72% healing rate of the scratched area (Figure 9F,G), which was significantly higher (p = 0.016) than the average 51% healing rate for titanium (Figure 9C). This showed that a significantly higher rate of cell migration was facilitated by the zirconia surface compared to the titanium surface.
[Figure 9 caption fragment] Images were analysed, and the percentage of the area of the scratch healed at each time point was calculated using ImageJ software. Results are presented as median + interquartile range. * denotes a significant difference in the percentage of the area covered on SBY-TZP discs compared with SBCpTi discs (p < 0.05).
Discussion
The aim of the present study was to characterise the zirconia implant surface and evaluate the behaviour of MC3T3-E1 cells on novel SBY-TZP and SBCpTi in vitro. The results showed improved viability, cytoskeletal arrangement, and attachment and migration of cells on the SBY-TZP surface compared with similarly modified titanium.
Previous studies have shown that a roughened zirconia surface yields improved in vitro results compared to a machined or polished zirconia surface; therefore, the rationale of this investigation was to evaluate the osseointegration potential of a novel zirconia surface compared to a titanium surface [6,28,32,56,58,67]. In this study, micro- and nano-topographies were created on Y-TZP and CpTi surfaces by sandblasting and evaluated by laser scanning microscopy and SEM. Both Y-TZP and CpTi showed a significant increase in surface topography measures following sandblasting, indicating that the subtractive treatment of sandblasting was successful for both CpTi and Y-TZP (Table 1).
The Sa value of 3.36 µm for Y-TZP in this study was relatively higher than that of other reported sandblasted zirconia ceramics, with previously reported Ra (arithmetical mean height of a line) or Sa values ranging from 0.56 µm to 2.50 µm [29,41,44,55,58,[68][69][70]. Several of these studies used a titanium surface for comparison that had roughness values higher than the sandblasted ceramic, whilst in this study, the roughness of the tested SBCpTi and SBY-TZP surfaces was very similar (Sa values of 3.41 µm and 3.36 µm, respectively; no significant difference). However, it is evident in Figure 2 that whilst the Sa value and, therefore, the surface roughness of the two sandblasted surfaces are similar, the micro- and nano-topography of each surface is unique. The hybrid parameter Sdr showed a significant increase in the surface area of the SBCpTi discs due to the development of a more prominent nano-topography (Table 1). Therefore, the SBCpTi surface was characterised by a higher concentration of nano-topographical features compared to SBY-TZP, despite having had the same surface treatment. This highlights the technical challenge of creating and optimising zirconia topography compared to titanium [18][19][20][42][43][44]. Our study confirmed these findings, showing titanium to have a far more consistently complex surface topography (Figure 3A,B) compared to the isolated areas of surface texture on the sandblasted zirconia surface (Figure 3C,D). This can be explained by the higher plasticity of titanium alloy compared to zirconia, attributable to the inherent differences in toughness and brittleness of the bulk materials [44]. Sandblasting of the zirconia surface can trigger a tetragonal-to-monoclinic phase transformation, which is accompanied by a substantial increase in volume, subsequently inducing a compressive force at the surface. This force closes the fractured area, enhancing the resistance of the surface to further crack propagation. Further development of a nano-topography on the zirconia surface has been facilitated by subsequent acid-etching [6,48,56].
Despite the differing topography, SBCpTi and SBY-TZP had similar water contact angles of approximately 56 and 55 degrees, respectively (Figure 4), classifying both surfaces as hydrophilic [38,61]. Although both SBCpTi and SBY-TZP had significant increases in Sa and Sdr, a significant difference in surface wettability was observed only between untreated titanium and SBCpTi, whilst the surface wettability of SBY-TZP was not significantly different from that of untreated Y-TZP (Figure 4E). Interestingly, there was no significant difference in Sa or Sdr values between the untreated Y-TZP and CpTi surfaces, although there was a significant difference in surface wettability. Consequently, it is apparent that factors other than surface roughness may play a significant role in increasing the hydrophilicity of a surface as well as influencing cellular interaction. The interactions of cells and tissues with foreign materials are governed not only by physical properties, such as roughness and topography, but also by chemical properties of the material surface such as hydrophilicity [30,33,61,71,72].
It was apparent that the differing sandblasted topographies of titanium and zirconia facilitated differing MC3T3-E1 cell morphologies (Figures 5 and 6). After 24 h of incubation, there was a marked increase in the number of cell-to-cell contacts and well-organised cellular extensions on SBY-TZP compared with SBCpTi, indicating improved cellular adhesion on SBY-TZP [73]. This was confirmed with the quantitative analysis of actin fluorescence that was higher (statistically significant) in zirconia discs compared to titanium discs ( Figure 5). This was supported by the findings of the viability assay (Figure 8) that showed a greater number of cellular contacts on zirconia compared to titanium. In contrast, Yamashita et al. compared sandblasted zirconia and titanium surfaces, with comparable surface roughness (Ra 1.01 and 1.03 µm respectively), and found no significant difference in cell attachment or morphology [58]. Han et al. used the same cell line, MC3T3-E1, to compare titanium and zirconia surfaces with similar roughness and also found no significant difference in cell morphology; however, Bergemann et al. found that the surface roughness Ra values from 1.22 to 1.32 µm resulted in reduced cell spreading with shortened actin filaments on zirconia [55,73]. Additionally, Strickstrock et al. tested two sandblasted Y-TZP surfaces of Sa values of 1.01 µm and 2.50 µm, with the more roughened surface not supporting cell adhesion as efficiently [44]. These studies highlight not only the importance of surface characteristics but also the optimisation of these surfaces to obtain ideal cellular interaction and long-term success. In this study, higher roughness values with a Sa value of 3.36 µm resulted in zirconia having improved cellular spreading. This may also have facilitated the significantly increased rate of migration (Figure 9), as cells were able to produce a greater number and length of cellular extensions and filopodiae. Further research into different surface topographies in combination with the chemical state of the implant material surface is important to allow for the complete optimisation of the osseointegration potential of this novel zirconia surface. Chemical properties such as hydrophilic status and charge may have a direct impact on the initial adsorption of proteins and subsequently promote better cell adhesion and spread [33,34,73]. Optimal cell adhesion is mediated by the absorption of cell adhesion-mediating molecules (such as fibronectin and vitronectin), which then makes the surface accessible to cell adhesion receptors [71,72]. Therefore, the improved cell morphology and migration of the MC3T3-E1 cells on the SBY-TZP surface compared to SBCpTi may relate to the differences in the chemical properties of the two surfaces. Future studies of this novel zirconia material will examine the role of its chemical properties in interactions with cells.
No significant difference in cellular proliferation was evident between the SBCpTi and SBY-TZP surfaces (Figure 7). At day 9, cell proliferation began to decline, which was likely a result of contact inhibition as cells reached confluence. Additionally, the higher cell numbers would exhaust nutrients in the media and create an associated build-up of toxic lactic acid, which may result in an increase in cellular death, correlating with the findings of the cell-covered area on day 9 [17,74]. This was also confirmed by a decrease in the live-dead ratio, with increasing dead cell numbers. However, the difference in cellular viability between SBY-TZP and SBCpTi was not statistically significant on days 1 and 9, although titanium did not show any cytotoxic effects. Similarly, other studies reported no difference in cellular proliferation between the tested titanium (Ra of 1.04 to 1.43 µm) and zirconia surfaces (Ra of 0.93 to 1.41 µm) [48,59,70]. Strickstrock et al. also found that zirconia and titanium surfaces of similar surface roughness showed no significant difference in cellular proliferation or viability; however, a more roughened zirconia surface (Sa of 2.50 µm) did show a more pronounced reduction in cell density and proliferation of primary human osteoblasts [44]. This further highlights the importance of surface optimisation to cater for ideal cellular responses [44,48,59,70]. In this study, no significant difference was seen in viability and proliferation, indicating that long-term osseointegration relies on properties other than surface topography alone.
This study showed that the cell-stimulating properties of zirconia were comparable to or exceeded those of titanium. Sandblasting resulted in higher roughness values, with a Sa value of 3.36 µm giving zirconia a greater osseointegration potential due to its significantly increased viability, cellular spreading, and rate of migration compared to titanium. However, this is within the limitations of an in vitro study. Further in vitro and in vivo animal studies would be necessary to further assess this novel zirconia surface as a suitable material for dental implants and an alternative to titanium-based dental implants. Further study of the physicochemical changes of the surfaces would be important for optimisation, to determine whether the improved cellular interactions were due to chemical or topographical changes or, most likely, a combination of both.
Conclusions
Modifying the surface roughness of zirconia and titanium discs by sandblasting resulted in similar surface roughness measures for both materials, although zirconia failed to achieve a nano-topography similar to that of titanium, which had a significantly higher surface area. Despite this, and within the limitations of an in vitro study, sandblasted yttria partially stabilised zirconia was noted to enhance cell viability, migration, and spreading when compared to titanium. However, further research is needed to characterise the chemical properties of this novel zirconia surface and their subsequent effects on the cellular response. Additionally, the negative effect of sandblasting on the mechanical properties of zirconia surfaces should be evaluated further. Collectively, this study confirms the biocompatible nature of this novel zirconia surface and its potential for application as a dental implant material owing to an improved cellular response.
"Medicine",
"Materials Science"
] |
The Med1 Subunit of Transcriptional Mediator Plays a Central Role in Regulating CCAAT/Enhancer-binding Protein-β-driven Transcription in Response to Interferon-γ*
Transcription factor CCAAT/enhancer-binding protein (C/EBP)-β is crucial for regulating the transcription of genes involved in a number of diverse cellular processes, including some cytokine-induced responses. However, the mechanisms that contribute to its diverse transcriptional activity are not yet fully understood. To gain insight into its mechanisms of action, we took a proteomic approach and identified cellular proteins that associate with C/EBP-β in an interferon (IFN)-γ-dependent manner. Transcriptional Mediator (Mediator) is a multisubunit protein complex that regulates signal-induced transcription of cellular genes from enhancer-bound transcription factor(s). Here, we identify the Med1 subunit of the Mediator as a C/EBP-β-interacting protein. Using gene knock-out cells and mutational and RNA interference approaches, we show that Med1 is critical for the IFN-induced expression of certain genes. Med1 associates with C/EBP-β through a domain located between amino acids 125 and 155 of its N terminus. We also show that the MAPKs ERK1/2 and an ERK phosphorylation site within regulatory domain 2 of C/EBP-β, more specifically the Thr189 residue, are essential for its binding to Med1. Last, an ERK-regulated site in the Med1 protein is also essential for up-regulating IFN-induced transcription, although it is not critical for binding to C/EBP-β.
Mouse monoclonal antibodies against the FLAG tag and actin were obtained from Sigma. Rabbit polyclonal antibodies against C/EBP-β; goat polyclonal antibodies against Med1, Med23, Med24, and Med25; and bovine anti-goat IgG-horseradish peroxidase conjugate were purchased from Santa Cruz Biotechnology, Inc. Horseradish peroxidase conjugates of anti-rabbit and anti-mouse IgGs were obtained from GE Healthcare, Inc. ERK1-, ERK2-, and ppERK-specific antibodies (Cell Signaling Technology, Inc.) were used in this report. Rabbit polyclonal antibodies against the phospho-Thr189 form of C/EBP-β were provided by Peter Johnson (NCI-Frederick). The ERK pathway inhibitor U0126 (18) was purchased from Calbiochem. All-trans-retinoic acid (RA) was obtained from Sigma.
Plasmids-Wild type and RBD-2 mutant med1 constructs were generated with Med1-Fwd and Med1-Rev primers (supplemental Table 1) using pSG5-HA-TRAP220 and pSG5-HA-TRAP220/M96 (23), respectively, as templates in PCR. The med1 N-terminal deletions (N1, N2, and N3) and C-terminal deletions (C1, C2, C3, C4, and C5) were generated using specific primers (supplemental Table 1) with pSG5-HA-TRAP220 as template. The PCR fragments were cloned into the NotI and EcoRV sites of the p3×FLAG-CMV-10 vector (Sigma). The shorter C-terminal deletion mutants C60, C73, C89, C123, and C143 were constructed using a 5′ primer specific to the vector and med1-specific primers ending at amino acids 60, 73, 89, 123, and 143, respectively, using p3×FLAG-CMV-C1 as template, and cloned into the EcoRI and XhoI sites of the pcDNA 3.1 vector (Invitrogen). Site-directed mutagenesis was performed with specific primers (supplemental Table 1) using the QuikChange XL kit (Stratagene, La Jolla, CA) as suggested by the manufacturer. All constructs were FLAG-tagged at their N terminus for detection by Western blot analysis. Sequence-verified constructs were used in this study. Expression vectors coding for wild type C/EBP-β and its mutants (24), human RAR-α, and DR5-Luc (25) were also used in these studies.
Lentiviral shRNAs-Lentiviral vectors carrying shRNAs specific for human and mouse med1, and mouse erk1 and erk2, were purchased from Open Biosystems, Inc. Virus stocks were prepared as recommended by the supplier (26). Briefly, to produce lentiviruses, each shRNA expression plasmid (3 µg) was mixed with the pCMV-dR8.2dvpr (2.7 µg) and pCMV-VSVg (0.3 µg) vectors and transfected into HEK-293T cells using the Fugene 6 reagent (Roche Applied Science) as described earlier (26). Thirty-six hours post-transfection, media from these cultures were collected daily for 5 days, pooled, passed through a 0.45-µm filter, and used as the source of lentiviral shRNAs. Knockdown of the target gene product was assessed by Western blot analyses.
Proteomic Analysis-Immunoprecipitation (IP) and proteomic analysis were performed as described earlier (27). To identify the proteins that associated with C/EBP-β, the mouse macrophage cell line RAW264.7 was stimulated with mouse IFN-γ for 2, 4, 6, 12, 16, and 24 h. For each time point, 12 separate samples were employed. At least three separate batches of proteins were prepared for these analyses. Cells were scraped and centrifuged. Pellets were suspended in 50 mM Tris·Cl, pH 7.4, 100 mM NaCl containing protease inhibitors and subjected to five cycles of freeze-thaw lysis. At the end of this, 0.25% Nonidet P-40 was added to the lysates, which were left on ice for 5 min. Samples corresponding to each time point from three different batches were pooled prior to IP with C/EBP-β-specific IgG coupled to Sepharose-4B at 4°C for 12 h. Cell extracts from unstimulated cells and IP reactions performed with IgG alone were used as controls. Protein eluates from IFN-treated samples were pooled, concentrated using Centricon tubes (Amicon, Inc.), and trypsinized. The resultant peptide mixture was subjected to MALDI-TOF analysis at the University of Maryland Proteomics Core Laboratory. Mass fingerprint profiles generated from C/EBP-β-associated peptides of unstimulated cells were compared with those of stimulated cell extracts. Peptide fingerprints present in complex with C/EBP-β in IFN-stimulated cells were chosen for querying the MASCOT fingerprint database to predict the matches.
Reporter Gene Assays-Transfection, β-galactosidase, and luciferase assays were performed as described earlier (28). For the luciferase assay, 500 ng of the luciferase reporter, 50 ng of the pCMV-β-galactosidase reporter, and 200 ng of the effector plasmids were used to transfect cells in 6-well plates using Lipofectamine Plus reagent (Invitrogen). Where required, the total amount of transfected DNA was kept constant by including empty vector. irf9-luc was described elsewhere (28). dapk1-luc contains a 1.2-kb fragment from the mouse dapk1 promoter upstream of the luciferase gene (29). Luciferase activity was normalized to that of β-galactosidase. Triplicate transfections per sample were performed to evaluate the statistical significance of the differences between the various treatment groups. Each experiment was repeated at least three times.
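The normalization step is simple per-well arithmetic; a minimal sketch with placeholder readings (variable names are not from the paper):

```python
# Hedged sketch: normalize firefly luciferase activity to the co-transfected
# beta-galactosidase control per transfection replicate. Readings are toy
# numbers, not data from this study.
luc = [15400, 16250, 14800]  # luciferase counts, triplicate transfections
bgal = [0.82, 0.88, 0.79]    # beta-galactosidase activities, same wells

normalized = [l / b for l, b in zip(luc, bgal)]
mean_activity = sum(normalized) / len(normalized)
print(f"mean normalized activity: {mean_activity:.0f}")
```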
Reverse Transcription-PCR Analyses-RNA was extracted using RNAzol reagent (Tel-Test Inc.) after the appropriate treatments. Total RNA was used for cDNA synthesis with a commercially available kit (Invitrogen). The resultant cDNA was used as template in real-time (quantitative) PCR employing SYBR chemistry (Sigma) with gene-specific primers (supplemental Table 2). Relative levels of specific transcripts were normalized to that of ribosomal protein L32 (rpl32) on the basis of Ct values, as described in our recent studies (30). At least triplicate reactions were performed to evaluate the statistical significance of the differences between samples using Student's t test. Each experiment was repeated with three separate batches of RNA.
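Normalizing a transcript to rpl32 from Ct values is typically a 2^−ΔCt computation; the sketch below assumes that convention (the paper does not spell it out) and uses placeholder Ct values.

```python
# Hedged sketch: relative transcript level normalized to the rpl32
# reference gene via the 2^-dCt convention (assumed). Ct values are
# placeholders, not data from this study.
def relative_level(ct_target: float, ct_rpl32: float) -> float:
    return 2 ** -(ct_target - ct_rpl32)

ct_irf9, ct_rpl32 = 26.4, 18.9  # toy Ct values for one sample
print(f"irf9 relative to rpl32: {relative_level(ct_irf9, ct_rpl32):.2e}")
```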
Western Blot and IP Analyses-After separation on SDS-PAGE (8–10% gels), proteins were blotted onto a nylon membrane and probed with the appropriate antibodies. Unless mentioned otherwise, all primary antibodies were used at 1:1000 dilution, and secondary antibodies were used at 1:2000 dilution for Western blots. Signals were generated using ECL kits (Pierce). IP analyses were conducted as described earlier (31). Briefly, 350 µg of total cellular lysate was incubated with the desired antibody at 4°C overnight and then incubated with protein G-agarose (Santa Cruz) at 4°C for 2 h. Beads were washed, and bound proteins were resolved on 8% SDS-PAGE, transferred to a polyvinylidene difluoride membrane (Millipore), and checked by Western analysis.
In Vitro Interaction Assay-The pcDNA 3.1 vector carrying the mouse wild type cebpb gene was employed as a template for generating an in vitro translated protein using a coupled TNT in vitro transcription-translation kit (Promega); the protein was immunopurified using anti-C/EBP-β IgG-agarose and subsequently used in binding assays. The wild type Med1 gene cloned in the pGEM-7Zf vector (Promega) was used as template for the generation of an in vitro translated 35S-labeled Med1 protein as described earlier (32). The labeled Med1 protein was incubated with immunopurified C/EBP-β (~1 µg) in a buffer containing 50 mM Tris·Cl, pH 7.4, 100 mM NaCl, 0.1 mM EDTA, 0.1% β-mercaptoethanol at 37°C for 1 h. The bound products were washed, and the samples were denatured by heating at 95°C for 10 min and separated by SDS-PAGE. The gels were fluorographed to detect bands.
Chromatin IP (ChIP) Assay-These assays were performed using a commercially available kit (Upstate Biotechnology, Inc.). Briefly, after the appropriate treatments, chromatin was cross-linked using 1% formaldehyde at 37°C for 10 min, and cells were sonicated on ice seven times for 15 s with 30-s intervals using a Branson sonicator. The average fragment size was ~500 bp under these conditions. After removing the debris, soluble chromatin was subjected to IP with specific or control IgG (5 µg) at 4°C overnight. In a typical set-up, soluble chromatin input among the various samples was normalized with gene-specific primers prior to use in ChIP reactions. The DNA recovered from ChIP products was used for quantitative PCR with specific primer pairs (supplemental Table 3). The dapk1 primer pair detects the IFN-induced recruitment of C/EBP-β to a recently identified CRE/ATF site in the dapk1 promoter (29). DNA extracted from soluble chromatin was used as input control.
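ChIP-qPCR signals are commonly expressed as percent input from Ct values; the sketch below uses that standard convention (not stated explicitly in the paper) with placeholder numbers.

```python
# Hedged sketch: percent-input calculation for ChIP-qPCR. Assumes the input
# aliquot is 1% of the chromatin used per IP; the input fraction and Ct
# values are placeholders, and this convention is an assumption here.
import math

def percent_input(ct_ip: float, ct_input: float,
                  input_fraction: float = 0.01) -> float:
    ct_input_adj = ct_input - math.log2(1 / input_fraction)  # scale to 100%
    return 100 * 2 ** (ct_input_adj - ct_ip)

print(f"{percent_input(ct_ip=27.8, ct_input=24.0):.3f}% of input")
```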
RESULTS
Binding of C/EBP-β to Med1-Since C/EBP-β participates in multiple transcriptional processes in response to disparate extracellular stimuli and plays a central role in IFN-γ-induced transcription in a number of cell types (14,19,33), we hypothesized that the transcriptional specificity of C/EBP-β might be controlled by stimulus-specific association with distinct cellular proteins. To identify the proteins that associated with C/EBP-β in response to IFN-γ, we immunoprecipitated total cellular proteins from IFN-stimulated RAW264.7 cells using C/EBP-β-specific IgG. This cell line was chosen because of its exquisite sensitivity to IFN (14). Since we were interested in obtaining a global picture of the IFN-stimulated C/EBP-β-binding proteins, we pooled the IP reaction products of IFN-stimulated samples from different batches. Proteins recovered from these IP reactions were trypsinized, and the resultant peptide mixture was subjected to MALDI-TOF analysis. Mass fingerprints of C/EBP-β-associated peptides from the IFN-stimulated cell extracts were used to query the MASCOT protein fingerprint database to predict the matches. Forty-three tryptic peptides from IFN-stimulated C/EBP-β-associated proteins matched seven different proteins, and six nonoverlapping peptides (Table 1) from this mixture matched the Med1 protein. Mass fingerprints from unstimulated cells and/or the isotypic IgG control did not reveal any Med1-derived peptides.
To further verify the specificity of these interactions and, more importantly, to examine whether these interactions occur between the endogenous proteins in other cell types, isogenic cebpb+/+ and cebpb−/− MEFs were stimulated with IFN-γ, and cell lysates were subjected to IP with C/EBP-β-specific IgG. The IP products were probed for Med1 by immunoblot. The two proteins interacted with each other in the steady state in cebpb+/+ cells, and IFN-γ treatment enhanced this interaction by ~3-fold. Med1 protein was not detected in the IP products of cebpb−/− cells or in the IP products of control IgG. IFN treatment did not induce Med1 levels (Fig. 1A). Similarly, FLAG-tagged Med1 was able to associate with C/EBP-β like native Med1 in a variety of cells, such as HeLa, HEK-293, and hTERT-HME, indicating a cell type-independent interaction (Fig. S1). A kinetic analysis of these interactions revealed that Med1 and C/EBP-β dynamically interacted with each other in the presence of IFN (Fig. S1).
Since Med1 is part of a multiprotein complex, we next investigated whether Med1 directly interacted with C/EBP-β. In vitro translated unlabeled C/EBP-β protein was incubated with in vitro translated 35S-labeled Med1 protein. Protein translated from the Med1-programmed, but not mock, reactions bound to C/EBP-β (Fig. 1B). These interactions appear to be quite weak in vitro. Nonetheless, these data show that C/EBP-β can interact with Med1 in the absence of the other constituents of the Mediator complex.
Med1 Is Required for IFN-induced C/EBP-β-dependent Expression of Certain Cellular Genes-Although the above experiments showed an IFN-induced augmentation of the physical interaction between C/EBP-β and Med1, they did not reveal whether Med1 was required for IFN-induced expression of C/EBP-β-dependent genes. We have shown earlier that IFN-induced expression of irf9 mRNA is regulated by C/EBP-β (14). We have recently found that the death-associated protein kinase 1 gene (dapk1) is also regulated by IFN-γ through C/EBP-β (29). Therefore, we examined whether IFN-γ-induced expression of these two genes was influenced by the loss of Med1 by stimulating isogenic med1+/+ and med1−/− cells with IFN-γ and monitoring the expression levels of irf9 and dapk1 mRNA by real-time PCR. Fig. 2A shows a Western blot analysis of Med1 expression in med1+/+ and med1−/− MEFs. IFN induced the expression of irf9 and dapk1 transcripts in med1+/+ cells (Fig. 2B) but not in med1−/− cells. shRNA-mediated knockdown of Med1 in wild type MEFs (data not shown) and hTERT-HME1 cells (Fig. 2C) yielded similar data. Consistent with the knock-out cell data, IFN-induced up-regulation of irf9 and dapk1 mRNA levels was suppressed by med1-specific shRNAs and not by the controls (Fig. 2D). The steady-state expression levels of these mRNAs were unaffected by med1-specific shRNA. Thus, Med1 is required for the IFN-induced expression of C/EBP-β-driven genes.
We next checked for Med1 recruitment to the dapk1 promoter in an IFN-stimulated manner using ChIP assays. We used primers that could detect C/EBP-β binding to the critical IFN-induced regulatory element, CRE, of the dapk1 promoter (29). Since dapk1 was induced in a delayed manner by IFN-γ, med1+/+ and med1−/− MEFs were stimulated with IFN-γ for 8 h, and soluble chromatin was subjected to the ChIP assay. IFN-induced recruitment of Med1 and C/EBP-β to the dapk1 promoter was seen in med1+/+ cells (Fig. 3A). No PCR product was detected in the controls, showing the specificity of the ChIP reaction. Notably, C/EBP-β was still recruited to the promoter upon IFN treatment in med1−/− cells. This observation was further supported by a quantitative ChIP assay for Med1 recruitment to the dapk1 promoter upon IFN treatment (Fig. 3B). Last, restoration of f-med1, but not an empty vector, into med1−/− cells resulted in IFN-induced recruitment of F-Med1 to the dapk1 promoter (Fig. 3B). When a similar experiment was performed in cebpb+/+ and cebpb−/− MEFs, Med1 was recruited to the dapk1 promoter following IFN-γ treatment only in the presence of C/EBP-β (Fig. 3A). This result was also confirmed by a quantitative ChIP assay (Fig. 3C). Consistent with this, when C/EBP-β was rescued into cebpb−/− cells, Med1 recruitment to the dapk1 promoter was restored (Fig. 3C). Thus, IFN-induced recruitment of Med1 to the dapk1 promoter appears to be C/EBP-β-dependent.
Identification of the C/EBP-β-interacting Domain in Med1-Initial studies using the RBD-2 mutant of Med1, which fails to promote nuclear receptor-induced transcription, indicated that the NR-binding motifs are not essential for IFN-stimulated, C/EBP-β-driven transcription or physical interaction (Fig. S2). Therefore, we next searched for the critical region that binds C/EBP-β using several deletion mutants of Med1. The smallest of these constructs, C1, was able to associate with C/EBP-β. These studies led to the conclusion that the first 155 amino acids of Med1 are critical for binding to C/EBP-β (Fig. S3).
Computer-based searches for conserved motifs in this region did not yield any clues. Therefore, we modeled the first 155 amino acids to obtain a conformation using Raptor protein folding software. The last 29 amino acids within this region have a propensity to fold into a small β-sheet and an α-helix (Fig. 4A). Based on these predictions, we substituted potential phosphoacceptor residues, such as Ser134 and Ser151 (bracketing the α-helical region), and the Val125 and His127 residues (within the small β-sheet) with alanine in the context of C1 (Fig. 4B) and studied the impact of these substitutions on C1 interactions with C/EBP-β. Initial IP analyses were performed in HEK-293 cells. All three mutants were expressed at levels equivalent to that of C1 (Fig. 4C). However, all of them had significantly lost (about 80%) their ability to bind C/EBP-β compared with C1 (Fig. 4C). Unusually, the V125A/H127A double mutant ran slower than C1 in all experiments under the conditions of electrophoresis. The reason for this anomalous migration was unclear, although there were no other sequence differences except for the mutant residues in this construct. In summary, disruptions within the last 29 amino acids of C1, specifically of the Ser134, Ser151, and Val125/His127 residues, severely affected C1 binding to C/EBP-β.
Effect of med1 Mutations on Transcriptional Induction of IFN-induced Genes Driven by C/EBP-β-Based on the information obtained with the C1 construct, we engineered the same substitutions into full-length Med1 and examined their impact on IFN-induced interactions with C/EBP-β vis-à-vis gene expression in med1−/− cells. Although all Med1 mutants were expressed equivalently (Fig. 5A), their IFN-induced binding to C/EBP-β differed significantly from that of wild type Med1. No IFN-induced C/EBP-β binding was observed with the S134A, S151A, and V125A/H127A mutants. However, the proteins encoded by the S134D and S151D mutants bound to C/EBP-β like wild type Med1 upon IFN treatment (Fig. 5A). Similar results were obtained in HEK-293 cells (data not shown).
We next verified the functional significance of these interactions for IFN-induced C/EBP-β-dependent gene expression by measuring their impact on endogenous genes (irf9 and dapk1 mRNAs) and on luciferase reporters driven by the corresponding promoters. The mutant Med1 proteins largely lost their ability to promote IFN-induced expression of dapk1 mRNA (Fig. 5B) and dapk1-luc (data not shown). Similarly, the recruitment of mutant Med1 proteins to the dapk1 promoter was significantly inhibited (Fig. 5C). The S134A and V125A/H127A mutants also exhibited a lower steady-state activity compared with wild type Med1, although this was not discernible in Western blots. Similar results were obtained with the irf9 gene (data not shown). No significant loss of activity was observed with the S134D and S151D mutants, although their activity was somewhat lower than that of wild type Med1 upon IFN stimulation in all three assays analyzed.
To determine whether the effect of the Med1 mutations was specific to IFN-induced genes, we next measured their impact on DR5-luc. All mutants induced luciferase activity upon RA treatment to an extent indistinguishable from that of wild type Med1 (Fig. 5D). We also determined whether a defective association of Med1 with other Mediator subunits could account for its failure to activate transcription. Upon IP with a FLAG tag-specific antibody followed by Western blot analysis, all Med1 mutants were able to associate with other members of the Mediator complex like wild type Med1 (Fig. 5E). Thus, the Med1 mutations did not significantly affect its interactions with the other Mediator subunits.
The ERK1/2-regulated Site in Regulatory Domain 2 (RD2) of C/EBP-β Is Necessary for Its Interaction with Med1-We have previously shown that RD2 of C/EBP-β is necessary for IFN-induced gene expression (33). To further define the critical elements, we engineered mutations into RD2. The RD2 of C/EBP-β contains many serine and threonine residues, some of which are potential sites for phosphorylation (Fig. 6A). In the first set of experiments, we used two mutants: Mut1, which lacked the adjacent serine residues, and Mut2, which lacked the TPSP sequence. Both mutants were transfected into cebpb−/− cells, and their IFN-induced binding to Med1 was compared with that of wild type C/EBP-β (Fig. 6B). Mut1 interacted with Med1 in a manner similar to that of wild type C/EBP-β following IFN treatment. However, Mut2 failed to bind Med1 in response to IFN treatment. Such differential interaction was not due to differences in the expression levels of C/EBP-β or Med1 (Fig. 6B). In the second set of experiments, we used two other mutants of the GTPS motif, a consensus site for ERK1/2 phosphorylation: Mut T3A contained an alanine in place of the threonine, and Mut T3D contained an aspartate in place of the threonine. We have recently shown that this threonine residue is phosphorylated by ERK1/2 in response to IFN-γ treatment (29). Both mutants were transfected into cebpb−/− cells, and their IFN-induced binding to Med1 was compared with that of wild type C/EBP-β (Fig. 6C). Unlike wild type, Mut T3A failed to bind Med1 above the steady-state level upon IFN-γ stimulation. In contrast, Mut T3D bound Med1 readily in the steady state, and IFN treatment enhanced the binding further. Such differential interaction was not due to differences in the expression levels of C/EBP-β or Med1 (Fig. 6C).

FIGURE 3. C/EBP-β-dependent IFN-induced recruitment of Med1 to the dapk1 promoter. A, ChIP assays with the indicated antibodies were performed as described under "Materials and Methods" using isogenic MEFs from wild type and mutant mice lacking med1 or cebpb. Typical PCR patterns after ChIP are shown. IFN treatment was performed as in Fig. 1. B and C, real-time PCR analysis of the ChIP products with dapk1 promoter-specific primers (n = 9/sample). Before use in ChIP assays, soluble chromatin input was normalized by PCR. The right halves of these panels show the rescue of Med1 recruitment to the dapk1 promoter following restoration of med1 and cebpb, respectively. The Western blots below these graphs show the expression of the rescued genes.
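For readers who want to see how quantitative ChIP signals such as those in Fig. 3, B and C, are commonly expressed, the following sketch computes a percent-of-input value from qPCR Ct values; the input fraction and all numbers are hypothetical and are not taken from this study.

```python
import math

# Illustrative percent-of-input calculation for ChIP-qPCR. The input
# fraction (here 1% of chromatin saved before the IP) and all Ct values
# are hypothetical placeholders.

def percent_input(ct_ip, ct_input, input_fraction=0.01):
    """ChIP signal expressed as a percentage of total input chromatin."""
    # Adjust the input Ct as if 100% of the chromatin had been assayed.
    adjusted_input_ct = ct_input - math.log2(1.0 / input_fraction)
    return 100.0 * 2.0 ** (adjusted_input_ct - ct_ip)

# Hypothetical Med1 ChIP at the dapk1 promoter, without and with IFN-gamma
print(f"untreated:   {percent_input(ct_ip=28.5, ct_input=24.0):.3f}% of input")
print(f"IFN-treated: {percent_input(ct_ip=25.0, ct_input=24.0):.3f}% of input")
```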
A Role for ERK1/2 in Regulating the IFN-induced Interplay between Med1 and C/EBP-β-To provide further evidence for a role of ERK1/2 in regulating the interactions between Med1 and C/EBP-β, we initially studied the effect of U0126, a known inhibitor of ERK1/2 activation. This inhibitor not only blocked the IFN-induced phosphorylation of C/EBP-β at Thr189 but also inhibited the binding of Med1 to C/EBP-β (see Fig. S4).
This observation was further complemented by shRNA-mediated knockdown of ERK1/2 proteins. Wild type MEFs were infected with lentiviral particles containing shRNA targeting erk1/erk2 mRNAs. Greater than 95% of ERK1/2 was knocked down by the specific shRNA but not by the controls (Fig. 6D). These cells expressed comparable levels of C/EBP-β and Med1. Loss of ERK1/2 reduced the IFN-induced interaction between C/EBP-β and Med1 to a level mimicking the steady-state binding (Fig. 6E). Last, using ChIP assays, we measured the IFN-induced recruitment of C/EBP-β and Med1 to the dapk1 promoter. Both C/EBP-β and Med1 were recruited to the dapk1 promoter in an IFN-induced manner in the controls but not in the presence of erk1/erk2-specific shRNA (Fig. 6F). Thus, ERK1/2 control the IFN-induced phosphorylation of Thr189 in the GTPS motif of C/EBP-β and the subsequent recruitment of C/EBP-β and Med1 to the dapk1 promoter. Consistent with these results, expression of dapk1 mRNA was also inhibited in cells lacking erk1/erk2 (data not shown).
Like C/EBP-β, Med1 is also regulated by the MAPK pathways. A recent study showed that the Thr1032 and Thr1457 residues of Med1 are phosphorylated in response to thyroid hormone via an ERK-dependent pathway (34). Therefore, we next examined whether these sites are also critical for mediating the protein-protein interactions between Med1 and C/EBP-β by mutating the Thr1032 and Thr1457 residues to alanines. FLAG-tagged wild type and mutant Med1 constructs were transfected into med1−/− cells, which were then stimulated with IFN-γ. Lysates were subjected to IP with a C/EBP-β-specific IgG followed by a Western blot analysis with a FLAG tag-specific antibody. The mutant proteins encoded by the T1032A, T1457A, and T1032A/T1457A constructs bound to C/EBP-β like wild type Med1 (Fig. 7A). Thus, the ERK1/2 phosphorylation sites of Med1 are not critical for its IFN-induced association with C/EBP-β. We next determined whether these residues were required for supporting IFN-stimulated induction of dapk1-luc. All three mutants significantly lost their ability to promote IFN-induced luciferase expression from the dapk1 promoter compared with wild type Med1 (Fig. 7B). The T1032A mutant retained some residual transcriptional activity compared with T1457A; the T1457A mutant failed to activate IFN-induced transcription. These mutants also exhibited a similar regulatory profile in the context of dapk1 mRNA (Fig. 7C) and yielded a similar picture when tested in the context of the irf9 promoter (data not shown). Thus, the Thr1032 and Thr1457 residues of Med1, although not important for binding to C/EBP-β, are necessary for IFN-induced transcription.
DISCUSSION
The physiological diversity of C/EBP-β-driven responses suggests that multiple signal-induced posttranslational modifications and the consequent interactions with cellular proteins govern its cell-, gene-, and signal-specific effects. To understand these pathways, we first sought to identify cellular factors that associate with C/EBP-β in an IFN-γ-dependent manner. Our preliminary proteomic analyses identified several proteins that participate in this process. In our strategy, we used pooled samples for detecting C/EBP-β-interacting proteins. One caveat of this approach is that low-abundance proteins will not be detected because of the pooling. Thus, it is likely that we have not detected all possible cellular proteins present in complexes with C/EBP-β. On a similar note, not all proteins detected with this approach may be bound to C/EBP-β at all times during IFN-γ stimulation. Nonetheless, our studies identified some significant IFN-induced C/EBP-β-interacting proteins.
The critical role of one such protein, Med1, in regulating IFN-induced transcription has been demonstrated in this report using RNA interference, knock-out cells, IP, ChIP, and mutational analyses. The Mediator protein complex regulates transcription from specific gene enhancers in response to hormones and other extracellular signals. Deletion of its constituent subunits leads to the loss of a number of transcriptional events that participate in cell division, differentiation, and metabolism (21, 35-37). We have shown that Med1 dynamically and directly associates with C/EBP-β. The steady-state binding of Med1 to C/EBP-β may regulate other IFN-independent C/EBP-β-regulated cellular genes. It is important to note that different transcription factors associate with distinct subunits of the Mediator for modulating transcription in a signal-specific manner. One recent study showed that IFN-α-induced transcription driven by the STAT2 protein requires its interaction with the MED14 and MED17 subunits of Mediator (38). Thus, C/EBP-β does not appear to associate with the same subunits of the Mediator complex, although it, like STAT2, functions in an IFN-regulated pathway. Previous reports have shown that the N-terminal region of Med1 plays a critical role in mediating its interactions with other transcription factors, such as GATA1, GATA2, and Pit (21, 39). Although the exact contact sites for these transcription factors in the Med1 protein have not been finely mapped, a broad region of Med1 consisting of residues 622-701 appears to form a critical binding domain (39). This domain is distinct from the C/EBP-β binding region mapped in the current study. Last, the nuclear receptor-binding LXXLL motif of Med1 is dispensable for binding to C/EBP-β (Fig. S2). We showed that the Ser134, Ser151, and Val125/His127 residues of Med1 are critical for promoting the IFN-induced interaction with C/EBP-β (Fig. 5). These observations suggest that multiple residues of Med1 contact C/EBP-β to drive the IFN-stimulated transcriptional response. Although the crystal structure of Med1 is unknown at this stage, the Ser134 and Ser151 residues flank a potential α-helix (Fig. 4). A negative charge at positions 134 and 151, probably acquired via phosphorylation, may allow Med1 to interact with C/EBP-β efficiently. This interpretation is consistent with the loss of IFN-induced transcription upon conversion of these residues to alanines and the restoration of transcription following insertion of an aspartate residue. The charged residues at Ser134 and Ser151 might serve as contact points, whereas the α-helix provides sufficient stretch for an interaction. These sites do not appear to be homologous to the consensus phosphorylation sites of known protein kinases. Thus, it is unclear at this stage which kinase(s) might phosphorylate these sites; identifying that kinase is one of our future priorities. The Val125 and His127 residues are located in a predicted β-sheet-like structure that may form an additional interaction point of Med1. The equipotent activation of nuclear receptor-dependent transcription, but not of IFN-induced transcription, by the Med1 mutants, comparable to the wild type protein, suggests that functionally dissociable motifs mediate the interactions of Med1 with specific transcription factors.
We have also shown that the ERK1/2 signaling pathway is critical for promoting the IFN-induced Med1 and C/EBP-β interaction and gene expression. Regulation of transcriptional coactivator proteins, such as CBP/p300, by MAPK and other signaling pathways has been shown earlier (40-42). A role for ERKs in regulating Mediator proteins was also suggested earlier (43, 44). Recently, the Thr1032 and Thr1457 residues of Med1, direct substrates for ERK-dependent phosphorylation, have been shown to play an important role in regulating nuclear receptor-induced transcription (34). Although these sites are not critical for the IFN-induced binding of Med1 to C/EBP-β, they are necessary for driving IFN-induced transcription (Fig. 7). The C/EBP-β- and nuclear receptor-induced transcriptional signals thus seem to functionally converge at these sites of Med1. We have therefore mapped two separate domains of Med1, one that is required for binding to C/EBP-β and another that mediates the IFN-inducible transcriptional response. Like the Med1 protein, C/EBP-β also requires MAPK signaling to promote transcription (19, 33). A threonine residue located within RD2 of C/EBP-β plays an important role in mediating its interactions with Med1. We have shown that IFN-activated ERK1/2 can directly phosphorylate this residue (29). Thus, MAPK signaling controls phosphorylation of C/EBP-β and possibly of Med1. Both of these events are critical for the ensuing transcription from IFN-responsive C/EBP-β-dependent gene promoters.
One recent study (45) showed that C/EBP-β binds to the Med23 subunit of the Mediator, whereas our report identified Med1 as a C/EBP-β-interacting protein. Although the precise nature of this discrepancy is unclear, there are certain important differences between the two studies. The mouse C/EBP-β used in the current study is about 13 kDa smaller than its human counterpart and differs significantly from the chicken C/EBP-β used in those studies. Our studies used IFN-regulated promoters, whereas their studies investigated Ras-inducible C/EBP-β-dependent gene promoters. Another major contributing factor to these differences is the type of response element controlled by C/EBP-β. The irf9 gene is controlled by GATE, a unique response element, which is distinct from the conventional
C/EBP-β-binding sites (14, 28). In the case of dapk1, it is a CRE-like element that binds C/EBP-β in response to IFN-γ (29). These elements are distinct from consensus C/EBP-β-binding sites. In preliminary studies, we have also found that several non-Mediator proteins form a complex with C/EBP-β in the presence of IFN-γ (data not shown). Whether these proteins influence the binding of Mediator to different promoters or its composition is unclear. Until these other C/EBP-β-interacting proteins are fully characterized, we may not know the precise reasons for these differences. Last, the studies of Mo et al. (45) did not rule out a role for additional Med proteins being part of a C/EBP-β-bound complex. Electron microscopy and other studies have shown that the Mediator complex is remarkably flexible with respect to its conformation and exists in different states (46-49), depending on the transcriptional activator. Furthermore, ligand-induced post-translational changes further contribute to these interactions. We have provided evidence for such activities in this report. The other possibility is that both the Med1 and Med23 subunits of the fully assembled Mediator complex contact the C/EBP-β protein, which binds to DNA as a dimer. Such an interaction may be expected, given the observation that STAT2 interacts with the Med14 and Med17 subunits of Mediator in response to IFN-α (38). A number of other transcription factors, such as GR (50), TRα (51-53), HNF-4 (54), Dif (55, 56), p53 (57-59), and HSF (55, 56), also interact with more than one subunit of the Mediator. Notably, GR, TRα, HNF-4, and p53 interact with Med1 and another subunit of Mediator. Also, Mo et al. (45) reported the Ras-induced binding of Med23 to C/EBP-β. In contrast, we have shown earlier that Ras was dispensable for IFN-induced transcription (19, 33). Although the same transcription factor, C/EBP-β, participates in these two apparently distinct responses (Ras-driven pro-oncogenic responses and the IFN-induced growth regulatory response), it is likely that the terminal interacting factors facilitate distinct patterns of transcription. Last, C/EBP-β itself has been suggested to undergo conformational changes following phosphorylation (60, 61). Consistent with a role for Med1 in regulating C/EBP-β-driven responses, another study showed that C/EBP-β-dependent adipocyte differentiation and gene expression were defective in med1−/− cells (37). In summary, we show for the first time that Med1 plays a critical role in regulating C/EBP-β-driven IFN-induced transcription. | 7,368.2 | 2008-05-09T00:00:00.000 | [
"Biology"
] |
System of industrial park functioning indicators reflecting the effect of competitiveness factors of resident enterprises
Recently, industrial parks, demonstrating their effectiveness, have turned into a locomotive of business development, providing resident enterprises with a whole range of competitive advantages. In order to analyze the impact of the factors enhancing the competitiveness of residents of industrial parks, the authors have developed a comprehensive system of indicators of the functioning of an industrial park, which provides a quantitative assessment and a comprehensive account of the aggregate of "intra-park", local, and regional factors. Based on the requirements formulated by the authors, the system of indicators includes two structural blocks: indicators of the industrial park's performance and indicators of the industrial park's base area potential. Options for the practical use of the resulting system of indicators are proposed: on the one hand, to establish targets for the further development of the industrial park by comparing it with competitors' projects, and on the other hand, to design an industrial park development program that ensures the competitiveness of the site's residents.
Introduction
The positive experience of many Russian regions over the past decade indicates that, at the current stage of the country's economic development, industrial parks play a significant role in ensuring the dynamic growth of direct investments in the modernization of industry and in creating conditions for the organization of new competitive production. This allows us to consider industrial parks as a locomotive of business development, creating an effective platform for sustainable long-term growth of the industrial complex of a particular region. At the same time, it should be noted that industrial parks provide enterprises that host production on their territory with a whole range of advantages, the key ones being the proximity of sales markets and labor resources, transport accessibility, a simplified procedure for residents to pass administrative and licensing procedures, and the supply of energy resources and advanced engineering solutions [1-6].
Considering the aforesaid, we note that resident enterprises of industrial parks are influenced by many factors that have a positive impact on their level of competitiveness [7-12].
The whole aggregate of these factors can be divided into three blocks according to their territorial character (Figure 1): "intra-park" factors (conditions created within the boundaries of a specific investment site), local factors (advantages of the territory in the immediate vicinity of the industrial park), and regional factors (preferences for locating production on the territory of a certain subject of the Russian Federation) [13,14]. At the same time, the impact of these factors on the competitiveness of enterprises can be assessed and analyzed only on the basis of a system of quantitative indicators characterizing their influence. This highlights the problem of determining and evaluating a set of basic indicators of the functioning of industrial parks and studying their impact on the level of competitiveness of economic entities. To capture the diversity of characteristics of the activity of industrial parks, it is necessary to form an integrated system of indicators of their functioning, the key requirement for which is to ensure comprehensive accounting of the above-mentioned factors of increasing the competitiveness of enterprises operating on the investment site.
Methods
In practice, the entity interested in analyzing the system of indicators of the functioning of an industrial park, in order to develop a set of management actions aimed at increasing the attractiveness of the site for potential investors, is its management company. In addition, such an analysis makes it possible to formulate a program for the development of an industrial park that considers the needs of the resident enterprises already operating on its territory, which contributes to the creation of favorable conditions for their further growth.
In this regard, the system includes indicators of the functioning of an industrial park that meet the following basic requirements: the indicators should be calculable from the industrial park management data available to the management company; the management company should be able to directly or indirectly influence changes in the indicators; and the indicators should characterize the activity of the industrial park over time and enable analysis of its development trend.
At the same time, it should be noted that the system of indicators of the functioning of an industrial park is open and, depending on the individual features of the site and the volume of available initial data, experts can supplement it with additional indicators as necessary.
The classification of the indicators of the functioning of an industrial park that assess the impact of the competitiveness factors of enterprises operating on its territory is based on their differentiation into two blocks: a block of indicators of the industrial park's performance, reflecting the impact of the complex of "intra-park" factors, and a block of indicators of the industrial park's base area potential, reflecting the impact of the complex of local and regional factors.
The blocks comprise several groups of indicators, each of which has main and additional indicators. The division of indicators into main and additional is carried out according to the degree of their importance (primary or secondary) from the standpoint of their influence on the level of competitiveness of the resident enterprises of the industrial park. The main indicators provide the most comprehensive assessment of competitiveness factors, while the additional indicators allow us to assess individual aspects of the impact of the factors under consideration [15,16].
The system of indicators of the functioning of an industrial park proposed by the authors, which assesses the influence of the competitiveness factors of enterprises operating on its territory, includes seven groups of indicators, listed below; a schematic encoding of this structure is sketched in the code example after the list.
I. Performance of the management company of the industrial park:
1.1. Share of revenue of the management company from managing the functioning and establishment of the industrial park in its total revenue;
1.2. Coefficient of return of expenses of the management company on primary activities;
1.3. Activity coefficient of expanding the activities of the management company;
1.4. Labor productivity of employees of the management company.
II. Quality of the infrastructure support of the industrial park territory:
2.1. Average level of loading of the infrastructural capacities of the industrial park;
2.2. Average degree of wear of the industrial park infrastructure;
2.3. Average rate of growth in tariffs for energy resources consumed by the residents of the industrial park;
2.4. Volume of capital investments in the development of the industrial park per 1 ha of territory provided with new infrastructure; level of the infrastructure provision of the industrial park territory;
2.5. Level of the reserve of territory provided with infrastructure for the further development of the industrial park;
2.6. Level of accidents in the engineering networks of the industrial park.
III. Degree of the industry specialization and cooperation potential of the industrial park:
3.1. Level of specialization of the industrial park in certain activities;
3.2. Level of co-production in the industrial park;
3.3. Number of orders carried out through cooperation between the resident enterprises of the industrial park.
IV. Level of the personnel and scientific potential of the industrial park's base area:
4.1. Share of employees of enterprises resident in the industrial park with higher professional education;
4.2. Share of employees of enterprises resident in the industrial park who received additional training in the framework of targeted educational programs;
4.3. Ratio of the share of innovation and R&D expenditures in the revenues of enterprises resident in the industrial park to the share of innovation and R&D expenditures in GRP for the territory of the subject of the Russian Federation;
4.4. Ratio of the average wage at enterprises resident in the industrial park to the average wage in the territory of the industrial park;
4.5. Ratio of supply and demand in the labor market of the territory of the industrial park;
4.6. Unemployment rate in the territory of the industrial park.
V. Level of the natural resource potential of the industrial park's base area:
5.1. Distance ratio of the industrial park residents from places of concentration of mineral resources;
5.2. Level of provision of the industrial park's base territory with reserves of natural and technogenic mineral resources.
VI. Level of the transport and logistics potential of the industrial park's base area:
6.1. Distance ratio of the industrial park residents from the main areas of sales of products;
6.2. Indicator of the development of the network of hard-surface motor roads and railway tracks in the industrial park's base territory.
VII. Level of the investment potential of the base region of the industrial park:
7.1. Amount of state support given to residents of the industrial park;
7.2. Average level of tax burden on the residents of the industrial park;
7.3. Growth rate of the regional economy;
7.4. Growth rate of the number of inspections of legal persons and sole entrepreneurs in the region;
7.5. Growth rate of the time spent in the region on administrative procedures.
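As referenced above, the following sketch encodes the two-block, seven-group structure of the indicator system as a nested Python dictionary. Only a few indicators per group are reproduced, and the layout is purely illustrative; no indicator is marked as main or additional, since the enumeration above does not label them.

```python
# Schematic encoding of the proposed indicator system: two structural
# blocks holding seven groups of indicators. The selection of indicators
# shown per group is abbreviated for illustration.

indicator_system = {
    "industrial_park_performance": {        # reflects "intra-park" factors
        "I. Management company performance": [
            "share of park-management revenue in total revenue",
            "coefficient of return of expenses on primary activities",
        ],
        "II. Infrastructure support quality": [
            "average loading of infrastructure capacities",
            "average degree of infrastructure wear",
        ],
        "III. Specialization and cooperation potential": [
            "level of specialization in certain activities",
            "level of co-production",
        ],
    },
    "base_area_potential": {                # reflects local and regional factors
        "IV. Personnel and scientific potential": [
            "share of employees with higher professional education",
        ],
        "V. Natural resource potential": [
            "distance to concentrations of mineral resources",
        ],
        "VI. Transport and logistics potential": [
            "distance to the main product sales areas",
        ],
        "VII. Investment potential of the base region": [
            "amount of state support to residents",
            "average tax burden on residents",
        ],
    },
}

for block, groups in indicator_system.items():
    print(f"{block}: {len(groups)} groups")
```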
Results and discussion
I. Performance of the management company of the industrial park. This group unites indicators that in one way or another characterize the quality and efficiency with which the managing company provides the residents of the industrial park with a one-stop service package. Given that the pricing policy of the management company, the balance of the range of services it provides, and their compliance with the real needs of the site's residents have a direct impact on the financial and economic performance of production on the territory of the industrial park, this group of indicators is of paramount importance in terms of its impact on the level of competitiveness of the resident enterprises.
II. Quality of the infrastructure support of the industrial park territory. The second group consists of indicators that characterize the use of the main asset of the industrial park: engineering, transportation, and production infrastructure facilities. Considering that ready infrastructure is the key advantage of industrial parks as a form of organization of investment sites, this group of indicators should be considered a priority when analyzing the activities of the industrial park.
Effective and smooth operation of engineering networks, availability of a reserve of infrastructure capacities, and a balanced cost of energy resources are the key to the stable and efficient operation of enterprises within the boundaries of an industrial park. Indicators of the infrastructure support of an industrial park determine the time and financial expenditures of resident enterprises at the stage of setting up production, having a significant impact on the level of current production costs.

III. Degree of the industry specialization and cooperation potential of the industrial park. This group includes indicators reflecting the potential of the industrial park for residents in terms of the possibility of forming long-term economic relations and conducting joint or technologically related production activities. The possibility of building cooperative ties between companies operating on the territory of the industrial park in the industrial sectors relevant to it contributes to stabilizing the demand for products produced by the residents, increasing the efficiency of using production capacities, and deepening the specialization of the residents' production facilities.
The result is the optimization of production processes, an increase in product quality and labor productivity, and lower costs and higher profitability of the activities of resident enterprises.
IV. Level of the personnel and scientific potential of the industrial park's base area. This group combines indicators characterizing the provision of the territory around the industrial park with labor resources, including highly qualified ones, the degree of development and quality of the education system, the level of development of the scientific and technical sphere, and the innovative and intellectual potential of the territory.
A high labor, scientific, and technical potential of the territory gives the residents of the industrial park the opportunity to staff production with professional workers under conditions of high competition in the labor market, with continuous improvement of their skills and effective implementation of research, development, and innovative solutions in production [17-19].
V. Level of the natural resource potential of the industrial park's base area. This group includes indicators characterizing the supply of the territory adjacent to the industrial park with mineral reserves and technogenic mineral resources that may be involved in economic circulation and that largely determine the industry profile of the production activities that are promising for development on the given territory.
The proximity of the raw materials base minimizes the time costs of transporting and pre-processing raw materials and saves on logistics costs, which ultimately gives enterprises the opportunity to set a competitive price for the products sold.
VI. Level of the transport and logistics potential of the industrial park's base area. This group of indicators characterizes, from various sides, the location of the territory of the industrial park in relation to the transport network, the degree of development of which has a great influence on the prospects for the successful operation of enterprises within its boundaries. Logistic connectivity of the territory provides an opportunity for rapid, unhindered, and cost-effective exchange between economic entities, and the proximity of markets allows companies to sell their finished products at minimum cost and in optimal time.
VII. Level of the investment potential of the base region of the industrial park. The last group of indicators characterizes how favorable the investment climate is at the location of the industrial park, the conditions created in the region for business, and the instruments for supporting investment activities. The effectiveness of the provision of public services for businesses, the degree of administrative pressure on entrepreneurs, the diversity and accessibility of various types of financial and non-financial support for the residents of industrial parks, and the level of investment risks largely determine the stability, predictability, and development prospects of the resident companies of the industrial park.
The proposed system of indicators of the functioning of an industrial park allows management companies to carry out a comprehensive analysis of the activities of the investment site over time, as well as of its key advantages and disadvantages. On the one hand, management companies have an opportunity to compare the values of certain performance indicators of the industrial park with similar indicators of competitor sites. On the other hand, the developed system of indicators, reflecting the impact of the complex of competitiveness factors of resident enterprises of industrial parks, can be used to formulate an industrial park development program whose final goal is growth in the competitiveness of the site's residents.
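The comparison with competitor sites described above can be operationalized as a simple gap analysis; the sketch below flags indicators where the park lags a benchmark. All indicator names, values, and the better-is-higher directions are invented for illustration.

```python
# Illustrative gap analysis against a competitor benchmark. An indicator
# becomes a development target when the park is on the wrong side of the
# benchmark; direction = +1 means "higher is better", -1 the opposite.
# All numbers are invented.

park       = {"infrastructure loading": 0.62, "co-production level": 0.18,
              "tax burden": 0.24}
competitor = {"infrastructure loading": 0.71, "co-production level": 0.15,
              "tax burden": 0.19}
direction  = {"infrastructure loading": +1, "co-production level": +1,
              "tax burden": -1}

for name, value in park.items():
    gap = (value - competitor[name]) * direction[name]
    status = "ahead of competitor" if gap >= 0 else "development target"
    print(f"{name}: {value:.2f} vs {competitor[name]:.2f} -> {status}")
```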
Conclusions
As a result of this work, the key factors for increasing the competitiveness of resident enterprises of industrial parks have been identified. A set of requirements for the content of the system of indicators of the functioning of an industrial park has been determined and justified. A system of indicators of the functioning of an industrial park that assesses the influence of the competitiveness factors of enterprises operating on its territory has been developed, which includes two structural blocks: indicators of the industrial park's performance and indicators of the industrial park's base area potential. It is emphasized that the resulting system of indicators is open in nature and, if necessary, can be supplemented with additional indicators.
Directions for the practical application of the developed system of indicators of the functioning of industrial parks are proposed: comparison of the studied industrial park with competing sites to establish targets for its further development, as well as the formation of an integrated industrial park development program aimed at increasing the competitiveness of the site's residents.
Fig. 1. System of factors of increasing the competitiveness of enterprises operating on the territory of an industrial park. | 3,585.4 | 2018-01-01T00:00:00.000 | [
"Business",
"Economics",
"Environmental Science"
] |
Theoretical Study on Non-Linear Optics Properties of Polycyclic Aromatic Hydrocarbons and the Effect of Their Intercalation with Carbon Nanotubes
Results of a theoretical study devoted to comparing NLO (non-linear optics) responses of derivatives of tetracene, isochrysene, and pyrene are reported. The static hyperpolarizability β, the dipole moment μ, the HOMO and LUMO orbitals, and their energy gap were calculated using the CAM-B3LYP density functional combined with the cc-pVDZ basis set. The para-disubstituted NO2-tetracene-N(CH3)2 has the highest NLO response, which is related to a large intramolecular charge transfer. Adding vinyl groups to the para-disubstituted NO2-tetracene-N(CH3)2 results in an increase in the NLO responses. We further investigated the effect of the intercalation of various push-pull molecules inside an armchair single-walled carbon nanotube. The intercalation leads to increased NLO responses, which depend critically on the position of the guest molecule and/or on functionalization of the nanotube with donor and acceptor groups.
Introduction
The backbone of polycyclic aromatic hydrocarbons (PAHs) contains a sequence of at least two fused benzene rings whereby the way they are linked distinguishes different PAHs [1,2]. PAHs can have an unlimited number of contiguous rings [3][4][5]. This gives rise to a large number of isomers and enriches this family of aromatic hydrocarbons. The main approach for producing PAHs is through an incomplete combustion of organic materials (for instance, fuels and coal) [6][7][8]. PAHs are divided into two classes (light and heavy) according to the number of rings involved in their structures. Each class has its own physicochemical properties [9], which allows for a large variety of different applications including organic field effect transistors [10][11][12], organic light-emitting diodes [13], reinforcing agents in pigment lasers [14], and batteries [15].
The aim of the present study is to use theoretical methods to study the performance of three smaller PAHs, i.e., tetracene, isochrysene (or triphenylene), and pyrene, with special emphasis on their non-linear optics (NLO) responses. The π electrons of these conjugated molecules [16][17][18] facilitate an intramolecular charge transfer (ICT) between electron donor (D) and electron acceptor (A) groups when such groups are attached [19,20]. To study how the NLO responses can be influenced upon functionalization of the system is one purpose of the present work.
A number of recent papers have focused on intramolecular charge transfer in PAHs, including studies on tetracyclic molecules and their derivatives [21][22][23][24][25]. Moreover, it has been shown that purely organic rings can be considered as being more aromatic than BN-containing systems [26] and, accordingly, to have more delocalized π electrons.
Even carbon nanotubes (CNTs) can be considered a special case of extended PAHs, whether single-walled (SWCNTs) or multi-walled (MWCNTs). Since their discovery in 1991 [27], a vast number of studies of their properties have appeared, including studies of their practical applications in, e.g., pharmacy, mechanics, and optoelectronics. They possess high mechanical resistance, high electrical and thermal conductivity, and chemical inertness [28-30]. Because of their optoelectronic properties, they have been used for light-emitting diodes [31]. In addition, functionalization of SWCNTs has been used as a way of improving their properties, as shown, e.g., by Khazaei et al. [32]. The hollow structure of carbon nanotubes, shared by the fullerenes, allows for intercalation, a prospect that has been studied by, e.g., Hirscher et al. [33] and by Chaban et al. [34,35].
In addition, NLO properties of such systems have been at the center of earlier studies [36,37]. However, a more systematic study of the dependence of the NLO properties on the size of the system, on functionalization, and on intercalation is lacking, although this could provide very useful information for experimentalists who aim at designing optimal systems. It is the purpose of the present work to provide results of such a study.
We also study some push-pull molecules interacting with SWCNTs. The push-pull molecules considered in this work are shown in Figure 1. They are based on the pure polycyclic aromatic hydrocarbons tetracene, isochrysene, and pyrene, and they all contain a conjugated bridge with delocalized π electrons [38-40]. We study the effects of substituting the PAHs at different positions with the donors [41] NH2, N2H3, N(CH3)2, OH, and OCH3 and the acceptor group NO2 [42]. For the molecule giving the largest NLO response, i.e., tetracene, we subsequently studied modified versions containing a larger conjugated part, achieved by the addition of vinyl groups at the terminations of tetracene. We then compared the basic molecule (Mol a) and the derivative obtained after this modification (Mol b) in terms of intramolecular charge transfer, the first hyperpolarizability, and the dipole moment.
Subsequently, we considered the effects of intercalating derivatives of a single PAH molecule inside carbon nanotubes. Initially, we constructed a (9,9) armchair nanotube with a diameter of 12.21 nm and a length of 19.69 nm, as shown, e.g., in Figures 2 and 3. Dangling bonds at the ends were saturated with hydrogen atoms. The initial structure of this armchair-type SWCNT was optimized using the B3LYP density functional [43] combined with the 6-31g(d,p) basis set [44]. We investigated the effect of the position of the guest molecule, paranitroaniline (PNA), inside the nanotube by performing single-point calculations using the CAM-B3LYP functional together with the GD3 dispersion correction [45,46] and the 6-31g(d,p) basis set. Various NLO parameters, including the static first hyperpolarizability, the dipole moment, and the HOMO-LUMO energy gap, were calculated for different positions of the guest molecule inside the nanotube by translating the guest along the x-axis (parallel to the tube) with a step length of 2 Å relative to the initial position (denoted position 0, cf. Figure 2).
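The positional scan just described amounts to a rigid-body translation of the guest coordinates along the tube axis, one geometry per step. The sketch below illustrates this; the three-atom "PNA" geometry is a placeholder, not the actual molecule.

```python
# Rigid translation of a guest molecule along the tube (x) axis in 2 A
# steps, yielding one geometry per position for single-point runs.
# The guest coordinates below are placeholders, not the real PNA geometry.

guest = [("N", 0.000, 0.000, 0.000),
         ("C", 1.400, 0.000, 0.000),
         ("O", 2.800, 1.100, 0.000)]

def translate_x(atoms, dx):
    """Shift every atom by dx along x, leaving y and z unchanged."""
    return [(sym, x + dx, y, z) for sym, x, y, z in atoms]

step = 2.0  # Angstrom, as in the scan described above
for i in range(4):  # positions 0, 2, 4, 6 A from the starting point
    print(f"# position {i * step:.0f} A")
    for sym, x, y, z in translate_x(guest, i * step):
        print(f"{sym} {x:8.3f} {y:8.3f} {z:8.3f}")
```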
Subsequently, we considered the effects of intercalation of derivatives of a single PAH molecule inside carbon nanotubes. Initially, we constructed a (9,9) armchair nanotube with a diameter of 12.21 nm and a length of 19.69 nm as shown, e.g., in Figures 2 and 3 Dangling bonds at the ends were saturated with hydrogen atoms. The initial structure of this chair-type SWCNT was optimized using the B3LYP density functional [43] combined with the 6-31g(d,p) basis set [44]. We investigated the effect of the position of the guest molecule, paranitroaniline (PNA), inside the nanotube by performing single-point calculations using the CAM-B3LYP functional together with the GD3 dispersion correction [45,46] and using the 6-31g(d,p) basis set. Various NLO parameters, including the static first hyperpolarizability, the dipole moment, and the HOMO-LUMO energy gap, were calculated for different positions of the guest molecule inside the nanotube by translating the former along the x-axis (parallel to the tube) with a step length of 2 Å relative to the initial position (denoted position 0, cf., Figure 2). After that, we examined the effect of the size of the guest molecule on the intramolecular charge transfer of the system. For that purpose, we considered different push-pull After that, we examined the effect of the size of the guest molecule on the intramolecular charge transfer of the system. For that purpose, we considered different push-pull molecules inserted inside the chair-like nanotube. As guest molecules, we considered PNA, VD, VA, VDA, stilbene, and tetracene, all shown in Finally, we modified the host system, i.e., to the armchair-type nanotube, we attached an NH3 donor on one side and an NO2 acceptor group on the other side, cf., Figure 3. At first, the structure of the isolated host was optimized using B3LYP/6-31g(d,p), after which the push-pull molecule PNA was inserted in the center of the tube and calculations were performed to check the effect of these substitutions on the hyperpolarizabilities and on the total dipole moment.
Figure 4. Structures of the push-pull molecules that were inserted inside the armchair-type carbon nanotube.

Finally, we modified the host system: to the armchair-type nanotube, we attached an NH3 donor on one side and an NO2 acceptor group on the other side, cf. Figure 3. At first, the structure of the isolated host was optimized using B3LYP/6-31g(d,p), after which the push-pull molecule PNA was inserted at the center of the tube and calculations were performed to check the effect of these substitutions on the hyperpolarizabilities and on the total dipole moment.
Computational Details
At first, we emphasize that our study involves several approximations. The size and number of the systems of our interest make it prohibitive to apply the most accurate computational methods for each of those. Instead, our focus is on studying the changes when modifying the systems in one way or another, so that our results should be able to describe those changes, although the absolute numbers will be less accurate. The approximations we employ include a basis set of finite size, the finite lengths of the carbon nanotubes, and the density functional itself.
We focused on the total hyperpolarizability, βtot = (βx^2 + βy^2 + βz^2)^1/2, with βi = βiii + βijj + βikk ({i, j, k} a permutation of {x, y, z}). According to our benchmark study, the CAM-B3LYP functional provides the best agreement with the MP2 reference results. Therefore, this functional was used in the subsequent calculations. This finding agrees with that of Rabah et al. [52].
Subsequently, we performed single-point (SP) calculations using the CAM-B3LYP functional combined with the cc-pVDZ basis set on each molecule. This functional includes a description of long-range corrections [68,69] and, accordingly, it provides a better description of properties related to an intramolecular charge transfer [70].
The dipole moment was calculated according to [71,72] as μ = (μx^2 + μy^2 + μz^2)^1/2. We also used the energy gap between the HOMO and LUMO frontier orbitals, ΔEgap = ELUMO − EHOMO, as a parameter quantifying the NLO properties of our systems.
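A compact numerical illustration of the three descriptors defined above is given below; the component values are arbitrary placeholders, not results from this study.

```python
import math

# Total first hyperpolarizability from its Cartesian components,
# total dipole moment, and HOMO-LUMO gap. All inputs are arbitrary
# placeholder numbers in the units used in the tables of this paper.

def beta_total(bx, by, bz):
    return math.sqrt(bx**2 + by**2 + bz**2)      # 10^-30 esu

def dipole_total(mx, my, mz):
    return math.sqrt(mx**2 + my**2 + mz**2)      # Debye

def homo_lumo_gap(e_homo, e_lumo):
    return e_lumo - e_homo                       # eV

print(beta_total(80.0, 5.0, 0.0))    # ~80.2
print(dipole_total(9.5, 1.2, 0.0))   # ~9.6
print(homo_lumo_gap(-6.7, -2.2))     # 4.5
```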
For the geometric structure, we focused on the BLA (Bond Length Alternation) parameter, i.e., the difference between the average lengths of single and double bonds in a conjugated system [73]. A smaller value of the BLA facilitates an intramolecular charge transfer.
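The BLA can be evaluated directly from the optimized bond lengths along the conjugation path, as sketched below; the bond lengths are hypothetical placeholders.

```python
# Bond Length Alternation: difference between the mean single-bond and
# mean double-bond lengths along the conjugated path. The bond lengths
# (Angstrom) are hypothetical placeholders.

single_bonds = [1.452, 1.448, 1.455]  # formally single C-C bonds
double_bonds = [1.362, 1.358, 1.365]  # formally double C=C bonds

bla = (sum(single_bonds) / len(single_bonds)
       - sum(double_bonds) / len(double_bonds))
print(f"BLA = {bla:.3f} A")  # a smaller BLA facilitates charge transfer
```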
Ia. Selection of the Functional
In this part, we identify the density functional that gives results closest to those obtained with the MP2 method, which is considered reliable for NLO properties. DFT calculations (with the functionals BMK, BHHLYP, CAM-B3LYP, M062X, and PBE0) as well as MP2 calculations were performed in combination with the cc-pVDZ basis set to calculate the first hyperpolarizabilities of ten tetracene derivatives.
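Such a benchmark is easy to script by generating one single-point input per method; the sketch below writes Gaussian-style route lines, but the exact keywords needed to obtain hyperpolarizabilities vary with the method and program version, and the geometry, charge, and multiplicity are placeholders.

```python
# Schematic generation of benchmark inputs, one per method, all with the
# cc-pVDZ basis set. Route-line syntax is Gaussian-style; treat it as a
# template, since the keywords required for beta differ by method.

methods = ["BMK", "BHandHLYP", "CAM-B3LYP", "M062X", "PBE0", "MP2"]

xyz_block = "C 0.000 0.000 0.000\nH 0.000 0.000 1.090\n"  # placeholder geometry

for method in methods:
    text = (f"%chk={method}.chk\n"
            f"#p {method}/cc-pVDZ Polar\n\n"
            f"beta benchmark: {method}\n\n"
            f"0 1\n{xyz_block}\n")
    with open(f"bench_{method}.gjf", "w") as fh:
        fh.write(text)

print(f"wrote {len(methods)} input files")
```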
The results (see Table 1) show that the PBE0 functional overestimates the hyperpolarizabilities. The values obtained with the functionals BMK, BHHLYP, and M062X show a less pronounced difference, whereas the best agreement is obtained with the CAM-B3LYP functional. This is explained by the fact that this functional includes long-range Hartree-Fock exchange interactions. Consequently, the subsequent calculations for the pyrene and isochrysene derivatives were carried out using this functional in combination with the cc-pVDZ basis set. That this combination yields accurate results, particularly concerning trends, is in agreement with our earlier findings [52].

The calculated static first-order hyperpolarizabilities reported in Table 2 show that among the tetracene derivatives, the para-disubstituted NO2-tetracene-N(CH3)2 gives the largest value of β as well as the largest dipole moment μ, and also the lowest energy gap, which is roughly inversely proportional to the intramolecular charge transfer. It should be added that a comparison of the dipole moment or the hyperpolarizability between different molecules is hampered by the fact that these are extensive properties, so, in general, larger molecules have larger values for them. However, the differences we discuss here are larger than what can be explained by this simple fact.

Table 2. Calculated hyperpolarizability (10^-30 esu), dipole moment (Debye), and energy gap (eV) of tetracene derivatives with substitutions at the ortho and para positions, using CAM-B3LYP/cc-pVDZ. The system with the largest value of βtot is highlighted.

We studied all isochrysene derivatives containing the NO2 group at one side of the chromophore and an electron donor (i.e., NH2, N(CH3)2, N2H3, OH, or OCH3) at the other side (cf. Figure 1). In Table 3, we present only the NLO parameters of the derivatives in which the position of the donor N(CH3)2 was varied while that of the NO2 group was kept fixed, as this combination results in a larger ICT compared to the other combinations. From Table 3, we can observe that the charge transfer occurs mainly along the x-axis (the main axis of the chromophore). Indeed, the value of βy is very small compared to that of βx, and the value of βz vanishes. For substitutions at positions 1-6, the largest charge transfer is obtained, as the donor and acceptor groups are then parallel to the dipole moment (x-axis).

Table 4 reports results obtained for pyrene derivatives substituted with N(CH3)2 as a donor and NO2 as an acceptor. The results are very similar to those reported in Table 3, and we again notice that the charge transfer occurs along the x-axis and that substitution at positions 1-6 gives the largest charge transfer.

Table 5 summarizes the results for those derivatives of the three molecules of our interest that possess the highest values of the first hyperpolarizability. We notice that the hyperpolarizability of the tetracene derivative is markedly larger than those of the other two derivatives. The same holds for the dipole moment. The energy gap of the tetracene derivative is smaller, which correlates with the larger charge transfer between donor and acceptor.

Table 4. Calculated hyperpolarizability (10^-30 esu), dipole moment (Debye), and energy gap (eV) of the molecules N(CH3)2-pyrene-NO2, obtained by varying the position of N(CH3)2 from 3 to 9 relative to that of NO2 (position 1), giving mol 1-3 to mol 1-9, using CAM-B3LYP/cc-pVDZ.
The system with the largest value of βtot is highlighted.

As the tetracene derivatives give the highest charge transfer among the three molecules, only this system is considered in the next step, in which the π-conjugated system is extended by adding vinyl groups at either termination of the molecule (see Figure 1), so that the effect of extending the π-chain length on the ICT can be analyzed [74].
This substitution leads to an increase in the first static hyperpolarizability from 85.07 × 10^-30 esu to 229.79 × 10^-30 esu. In addition, the dipole moment, which depends on the ICT, increases from 9.72 to 11.31 Debye.
For the energy gap, we notice only a small decrease, from 4.53 eV for Mol_a to 4.34 eV for Mol_b. The HOMO-LUMO gap is inversely proportional to the ICT [75]. The very similar gap values for the two molecules can be understood from Figure 5: the frontier orbitals are largely localized on the backbone of the molecules. Equivalently, the energies of the HOMO and LUMO orbitals decrease only slightly for the substituted molecules that have the larger conjugation.
The BLA (Bond Length Alternation) parameter is useful in quantifying NLO responses of conjugated molecules. The results reported in Table 6 show an increase in BLA upon an increase in the conjugated bridge, which correlates with the previous results.

Table 6. Comparison between the results for the tetracene and divinyl-tetracene derivatives: first static hyperpolarizability in 10^-30 esu, dipole moment (Debye), energy gap (eV), HOMO and LUMO orbital energies (eV), and BLA (Å), as obtained with CAM-B3LYP and cc-pVDZ.
IIa. Effect of Position
The variation in the different NLO parameters as a function of the position of the paranitroaniline guest molecule inside the carbon nanotube (CNT) is reported in Table 7 and is depicted in Figure 6. According to these results, the charge transfer is largest when the guest molecule is placed at the center of the carbon nanotube, resulting in a maximum value of the static hyperpolarizability; at that position, the guest-host interaction is maximal. The energy gap hardly varies with the position, a finding related to the fact that the two frontier orbitals, HOMO and LUMO, are localized mainly on the finite SWCNT. Finally, the ICT hardly changes with the position of the guest molecule inside the CNT.

In Table 8, we list the values of the first static hyperpolarizability, the dipole moment, and the energy gap for various guests inside the SWCNT. In all cases, the guest is placed at the center of the host. From these results, we observe that the VDA molecule possesses the highest hyperpolarizability despite not being the largest molecule. The same observation holds for the dipole moment. These high values are partly due to the longer conjugation arising from the vinyl groups on either side of the benzene ring in the push-pull molecule, as demonstrated in the first part of this study. We also notice that the HOMO-LUMO gap remains constant for the six systems, which, again, can be explained by the localization of those two orbitals on the finite SWCNT.

Finally, we considered the effects of modifying the SWCNT by adding a donor and an acceptor group to its ends (see Figure 3), which was expected to lead to an increase in the charge transfer properties of the whole system. The results are reported in Table 9. Upon substitution, βtot becomes 3 times larger. Similarly, the dipole moment increases significantly, a behavior observed in all three spatial directions, x, y, and z. After the substitution, the value of the energy gap decreases only slightly, from 1.85 to 1.82 eV.

Table 9. Calculated hyperpolarizability (10^-30 esu), dipole moment (Debye), and energy gap (eV) of SWCNT-PNA with (denoted TA1) and without (denoted TA0) substitution on the SWCNT, as obtained from the CAM-B3LYP/6-31g(d,p) calculations.
Conclusions
The purpose of the present work was to study the effects of functionalization and/or embedding on the NLO properties of some PAHs. Therefore, our focus was not on obtaining very accurate values for specific systems, but on monitoring the changes when modifying the system of interest.
At first, we showed that the functional CAM-B3LYP provided the most accurate description of the properties of interest when using MP2 results as a reference. Furthermore, this was most important for PAHs for which the rings are arranged linearly, as demonstrated in the case of tetracene, a case where long-ranged (exchange) interactions are most pronounced. Moreover, the addition of vinyl groups to the conjugated π bridge led to enhanced NLO responses.
The intercalation of the PAH-derived molecules inside carbon nanotubes also led to increased NLO responses. Finally, the functionalization of the CNT with donor and acceptor groups made it possible to increase the intramolecular charge transfer, leading to increased values of the hyperpolarizability and of the dipole moment but, in parallel, only a slightly reduced energy gap. | 5,544 | 2022-12-23T00:00:00.000 | [
"Materials Science",
"Chemistry",
"Physics"
] |
RNA Probes for Visualization of Sarcin/ricin Loop Depurination without Background Fluorescence
Abstract Protein synthesis via ribosomes is a fundamental process in all known living organisms. However, it can be completely stalled by removing a single nucleobase (depurination) at the sarcin/ricin loop of the ribosomal RNA. In this work, we describe the preparation and optimization process of a fluorescent probe that can be used to visualize depurination. Starting from a fluorescent thiophene nucleobase analog, various RNA probes that fluoresce exclusively in the presence of a depurinated sarcin/ricin‐loop RNA were designed and characterized. The main challenge in this process was to obtain a high fluorescence signal in the hybridized state with an abasic RNA strand, while keeping the background fluorescence low. With our new RNA probes, the fluorescence intensity and lifetime can be used for efficient monitoring of depurinated RNA.
The solvent was removed under reduced pressure and the crude product was purified via column chromatography (DCM/MeOH 95:5, 1% NEt3). Nucleoside 4 was obtained as a white foam. Compound 5: The synthesis was performed according to the literature.[2] Nucleoside 4 (237 mg, 0.39 mmol, 1 eq) was dissolved in 5 mL anhydrous dichloroethane. Compound 6: The synthesis was performed following a procedure known from the literature.[2] TOM-protected nucleoside 5 (160 mg, 0.20 mmol, 1 eq) was dissolved in 5 mL dry dichloromethane. Diisopropylethylamine (96 mg, 0.41 mmol, 2 eq) and 2-cyanoethoxy-N,N-diisopropylaminochlorophosphine (131 mg, 1.01 mmol, 5 eq) were added subsequently under continuous stirring. The mixture was stirred at room temperature for 20 hours under an argon atmosphere. The solvent was removed under reduced pressure and the crude product was purified via column chromatography (cyclohexane/EtOAc, 2:1). The product was obtained as a white foam.
Solid-phase synthesis:
Solid-phase synthesis was performed on an ABI392 instrument. For reverse phase HPLC purification, aqueous buffer and MeOH (Fluka) were used with a gradient from 5% to 100% MeOH in 22 minutes. The 5' DMTr-protecting groups were removed by incubating the oligonucleotides with 80% AcOH (1 mL) at room temperature for 20 minutes. The solvents were removed using a vacuum concentrator and the DMTr-off oligonucleotides were purified via reverse phase HPLC using the same conditions as described above.
Quencher labeling: The oligonucleotide was dissolved in borate buffer (pH 8.45). A 60-fold excess of Dabcyl-NHS (Sigma-Aldrich) in DMSO was added. The volume ratio of buffer to DMSO was 3:1. After incubation for approximately 16 hours at 35 °C, a 60-fold excess of the quencher in DMSO was added again and buffer was supplemented so that the 3:1 ratio was maintained. After incubating again for eight hours at 35 °C, additional quencher in DMSO and buffer was added a third time. After a final incubation for another 16 hours at 35 °C, the excess of quencher was removed by size exclusion chromatography via Sephadex™ G-25 M from GE Healthcare. The oligonucleotides were purified via reverse phase HPLC with a gradient from 5% to 50% MeOH in 13 minutes.
Table S1. Sequences and determined molecular masses of the synthesized oligonucleotide probes 1-8 and hybridization strands SRL and SRL abasic.
Fluorescence measurements
For steady-state fluorescence intensity measurements, the final concentration of the probes was 10 µM (n = 1 nmol) and that of the counterstrands SRL and SRL abasic was 20 µM (n = 2 nmol). The final volume used was 100 µl and the solutions were prepared in CSH brain buffer, with a final salt concentration of 135 mM NaCl, 5.4 mM KCl, 1 mM MgCl2, and 5 mM HEPES at pH 7.4. Measurements were performed on a Tecan infinite M200PRO plate reader at 37 °C. Excitation was at 304 nm and the fluorescence intensity at 408 nm was used for evaluation. Every measurement was repeated 3-5 times.
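As a sketch of how such replicate intensity readings can be summarized, the snippet below computes the mean ± SEM over hypothetical triplicate wells and the turn-on ratio of the abasic-hybridized probe over the free probe. All intensity values and variable names are invented for illustration and do not reproduce measured data.

```python
import numpy as np

# Hypothetical plate-reader intensities at 408 nm (arbitrary units) for one
# probe measured in triplicate: free probe, + SRL, and + SRL abasic.
free_probe = np.array([1020.0, 980.0, 1005.0])
with_srl = np.array([1110.0, 1090.0, 1075.0])
with_srl_abasic = np.array([8950.0, 9120.0, 8870.0])

def mean_sem(x):
    """Mean and standard error of the mean over replicate wells."""
    return x.mean(), x.std(ddof=1) / np.sqrt(x.size)

for name, data in [("free", free_probe), ("+SRL", with_srl),
                   ("+SRL abasic", with_srl_abasic)]:
    m, sem = mean_sem(data)
    print(f"{name:12s}: {m:7.0f} +/- {sem:.0f}")

# Turn-on ratio: signal with the abasic strand over the free-probe background.
print(f"enhancement = {with_srl_abasic.mean() / free_probe.mean():.1f}-fold")
```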
The time-correlated single photon counting (TCSPC) experiments were conducted with an FT100 spectrometer (PicoQuant, Berlin). For excitation, a pulsed LED PLS310 with a central wavelength of 310 nm and a pulse duration of 800 ps was applied, controlled by a PDL800-D driver (PicoQuant, Berlin). Time-resolved fluorescence measurements were acquired with the software TimeHarp260 (PicoQuant, Berlin). The instrument response function (IRF) was measured via the scattered light of TiO2 dispersed in ethanol. For the sample fluorescence measurements, a UVB390 filter was used to cut off the excitation stray light. The concentrations of the labeled probes were 2 µM and the counter strands were provided in excess (3 µM). All of the samples were prepared in CSH brain buffer in 4x10 mm quartz glass cuvettes. Exponential fitting of the data was performed with the software FluoFit 4.6 (PicoQuant, Berlin).[3]
Figure S1. Fluorescence decay curves of the free probes (magenta), the hybridized states with SRL (gray) and with SRL abasic (green) of probes 1-8. The semi-transparent dots represent the data points and the solid lines the obtained multiexponential fits.
Fluorescence measurements with full-length SRL and full-length SRL abasic:
Figure S2. Probes 3, 4, 5 and 7 and their fluorescence properties when hybridized to complementary full-length SRL and full-length SRL abasic RNA. cprobe = 10 µM in CSH brain buffer (135 mM NaCl, 5.4 mM KCl, 1 mM MgCl2, 5 mM HEPES) at 37 °C, 2 eq. counter strand RNA, λex = 304 nm, vges = 100 µL. | 1,201.2 | 2022-11-02T00:00:00.000 | [
"Biology",
"Chemistry"
] |
Evidence for the Effectiveness of Remdesivir (GS-5734), a Nucleoside-Analog Antiviral Drug in the Inhibition of I K(M) or I K(DR) and in the Stimulation of I MEP
Remdesivir (RDV, GS-5734), a broad-spectrum antiviral drug in the class of nucleotide analogs, has been particularly tailored for the treatment of coronavirus infections. However, to what extent RDV is able to modify various types of membrane ion currents remains largely uncertain. In this study, we hence intended to explore the possible perturbations by RDV of ionic currents endogenous to pituitary GH3 cells and Jurkat T-lymphocytes. Our whole-cell current recordings disclosed that, upon membrane depolarization in GH3 cells, exposure to RDV concentration-dependently depressed the peak and late components of elicited I K(DR), with effective IC50 values of 10.1 and 2.8 μM, respectively; meanwhile, the dissociation constant of the RDV-induced blockage of I K(DR), estimated on the basis of a first-order reaction scheme, was 3.04 μM. In the presence of RDV, the steady-state inactivation curve of I K(DR) was shifted, and recovery from block became slowed. However, the RDV-induced blockage of I K(DR) failed to be overcome by the further addition of either α,β-methylene ATP or cyclopentyl-1,3-dipropylxanthine. The addition of RDV also lessened the strength of the M-type K+ current, with an IC50 value of 2.5 μM. The magnitude of the voltage hysteresis of I K(M) elicited by a long-lasting triangular ramp pulse was diminished by adding RDV. The membrane electroporation-induced current in response to large hyperpolarization was enhanced, with an EC50 value of 5.8 μM. Likewise, in Jurkat T-lymphocytes, adding RDV decreased the I K(DR) amplitude concomitantly with a raised rate of current inactivation in response to step depolarization. Therefore, there appears to be an unintended activity of the RDV prodrug on ion channels. Its inhibition of both I K(DR) and I K(M), occurring in a non-genomic fashion, might provide additional but important mechanisms through which in vivo cellular functions are seriously perturbed.
Recent studies have disclosed that RDV and chloroquine (or hydroxychloroquine) could be highly efficacious in the control of SARS-CoV-2 infection in vitro (Dong et al., 2020; Gao et al., 2020; Lai et al., 2020; Li and De Clercq, 2020; Wang et al., 2020). Human studies of RDV efficacy for the treatment of SARS-CoV-2 infection are also available (Beigel et al., 2020). However, no studies have so far addressed the perturbing actions of RDV on membrane ion channels.
The voltage-gated K + (K V ) channels are essential in determining the membrane excitability of electrically excitable or non-excitable cells. Specifically, K V 3 (KCNC) and K V 2 (KCNB), two delayed-rectifier K + channels, are widespread in different excitable cells such as endocrine cells (Lien and Jonas, 2003; Wang et al., 2008; Fletcher et al., 2018; Kuo et al., 2018; Lu et al., 2019; So et al., 2019). The causal link between the delayed-rectifier K + current (I K(DR) ) and K V 3/K V 2 channels has been previously disclosed (Yeung et al., 2005; Wang et al., 2008; Huang et al., 2013; Chang et al., 2019; Lu et al., 2019). The biophysical characteristics of K V 3.1-K V 3.2 channels, which are the dominant determinants of the I K(DR) identified in pituitary tumor (GH 3 ) cells (Lu et al., 2019; So et al., 2019), include a positively shifted voltage dependency as well as a fast deactivation rate. However, whether and how RDV affects the amplitude and gating kinetics of the above-stated types of K + currents still requires investigation.
Furthermore, the KCNQ2, KCNQ3, and KCNQ5 genes are known to encode the main subunits of the K V 7.2, K V 7.3, and K V 7.5 channels, respectively; among them, augmented activity produces the M-type K + current (I K(M) ), which is characterized by slowly activating and deactivating properties (Brown and Adams, 1980; Sankaranarayanan and Simasko, 1996; Wang et al., 1998; Selyanko et al., 1999; Shu et al., 2007; Lu et al., 2019; So et al., 2019; Yang et al., 2019). With growing recognition, targeting I K(M) is regarded as a treatment for various neurologic diseases. How this compound acts on these types of K + currents, however, remains largely uncertain.
Membrane electroporation (MEP) applies an external electrical field in situations where an increase in the electrical conductivity and permeability of the plasma membrane can be produced. Such maneuvers have been applied to the electrotransfer of membrane-impermeant molecules, including DNAs, anticancer drugs, and antibodies, into the internal milieu of cells (Liu et al., 2012; Napotnik and Miklavčič, 2018). Of note, when the electrical field applied to the cells exceeds the electric capacity of the surface membrane, the membrane transiently becomes permeable and destabilized. Consequently, such molecules can readily and efficiently enter the cell (So et al., 2013; Napotnik and Miklavčič, 2018). In this scenario, to facilitate the uptake of antineoplastic or antiviral agents that pass the cell membrane with difficulty, the MEP-induced current (I MEP ) has been viewed as a novel therapeutic maneuver. However, as far as we are aware, no studies have investigated whether the presence of RDV exerts any effect on I MEP .
For the considerations elaborated above, we attempted to inquire into the actions of RDV on different types of ionic currents (e.g., I K(DR) , I K(M) and I MEP ) in GH 3 cells. Whether the I K(DR) identified in Jurkat T-lymphocytes is subject to any modification by RDV was also tested. Noticeably, the present observations unveiled that, in GH 3 cells, RDV is presumably not an inactive prodrug: it is virtually effective in inhibiting I K(DR) and I K(M) with similar potency, while it was noticed to increase the strength of I MEP . The actions demonstrated here tend to be acute in onset and will consequently summate to affect the electrical behaviors of different cell types. Findings from the present observations may conceivably contribute to the understanding of the toxicological and pharmacological actions of RDV occurring in vitro or in vivo.
Chemicals, Drugs, and Solutions Used in This Study
Remdesivir (RDV, development code GS-5734; C 27 H 35 N 6 O 8 P; 2-ethylbutyl (2S)-2-[[[(2R,3S,4R,5R)-5-(4-aminopyrrolo[2,1-f][1,2,4]triazin-7-yl)-5-cyano-3,4-dihydroxyoxolan-2-yl]methoxy(phenoxy)phosphoryl]amino]propanoate) was from MedChemExpress (Bio-genesis Technologies, Taipei, Taiwan), while α,β-methylene ATP (AMPCPP), cyclopentyl-1,3-dipropylxanthine (DPCPX), ivabradine, nonactin, and tetrodotoxin were from Sigma-Aldrich (Merck, Taipei, Taiwan). Chlorotoxin was a gift of Professor Woei-Jer Chuang (Department of Biochemistry, National Cheng Kung University Medical College, Tainan, Taiwan). In this study, the reagent water used in all experiments was obtained with a Milli-Q Ultrapure Water Purification System (18.2 MΩ-cm) (Merck Millipore, Taipei, Taiwan).
The composition of the bath solution (i.e., HEPES-buffered normal Tyrode's solution) used in this study was (in mM): 136.5 NaCl, 5.4 KCl, 1.8 CaCl 2 , 0.53 MgCl 2 , 5.5 glucose, and 5.5 HEPES, adjusted with NaOH to pH 7.4. In attempts to record I K(M) or I K(erg) , we replaced the bath solution with a high-K + , Ca 2+ -free solution (in mM): 130 KCl, 10 NaCl, 3 MgCl 2 , and 5 HEPES, adjusted with KOH to pH 7.4. To measure the different types of K + currents or I MEP , we backfilled the patch electrode with a solution (in mM): 130 K-aspartate, 20 KCl, 1 KH 2 PO 4 , 1 MgCl 2 , 0.1 EGTA, 3 Na 2 ATP, 0.1 Na 2 GTP, and 5 HEPES, adjusted with KOH to pH 7.2. To minimize any contamination by Cl − currents, Cl − ions inside the examined cell were mostly replaced with aspartate. In a different set of recordings for measuring the cation selectivity of ion channels, K + ions in the internal solution were replaced with NMDG + ions.
Cell Culture
GH 3 cells, originally acquired from the Bioresources Collection and Research Center ([BCRC-60015]; Hsinchu, Taiwan), were cultured in Ham's F-12 medium supplemented with 15% (v/v) horse serum, 2.5% (v/v) fetal calf serum and 2 mM L-glutamine, while the Jurkat T cell line, a human T cell lymphoblast-like cell line (clone E6-1), was also from the Bioresource Collection and Research Center ([BCRC-60255]; HsinChu, Taiwan); Jurkat T cells were grown in RPMI-1640 medium supplemented with 10% (v/v) fetal bovine serum. GH 3 or Jurkat T cells were maintained at 37°C in a humidified atmosphere of 95% air and 5% CO 2 . The viability of these cells was routinely judged with the trypan blue dye-exclusion test. The electrical recordings were undertaken five or six days after the cells had been cultured (60-80% confluence).
Electrophysiological Studies
Briefly, before the recordings, we harvested GH 3 or Jurkat T cells and rapidly resuspended an aliquot of the cell suspension in a custom-made cubicle mounted on the fixed stage of a CKX-41 inverted microscope (Olympus; YuanLi, Kaohsiung, Taiwan). We then immersed the cells at room temperature (20-25°C) in normal Tyrode's solution, the composition of which has been described above in detail. We used either a P-97 Flaming/Brown horizontal puller (Sutter Instruments, Novato, CA) or a PP-83 vertical puller (Narishige; Taiwan Instrument, Taipei, Taiwan) to fabricate the recording pipette electrodes, which were made of Kimax-51 glass capillaries (Kimble; Dogger, New Taipei City, Taiwan), and we then fire-polished the electrode tips with an MF-83 microforge (Narishige). The patch electrodes, filled with the different internal solutions, had a tip resistance of 3 to 5 MΩ. In this study, we undertook standard whole-cell patch-clamp recordings at room temperature using either an RK-400 (Bio-Logic, Claix, France) or an Axopatch-200B patch amplifier (Molecular Devices, Sunnyvale, CA). To measure whole-cell data, the junctional voltage between the pipette and bath solution was zeroed once the electrode was bathed, shortly before giga-seal (>1 GΩ) formation. The details of the data recordings and analyses performed in the present work are described in the Supplementary Material.
Curve Fitting Procedures and Statistical Analyses
Curve parameter estimation was achieved either by a non-linear (e.g., Hill or Boltzmann equation, or single-exponential function) or by a linear fitting routine, using the Solver add-in bundled with Excel 2013 (Microsoft, Redmond, WA). The experimental data in the present study are presented as the mean ± standard error of the mean (SEM), with sample sizes (n) representing the number of cells (e.g., GH 3 or Jurkat T cells) examined. Student's t-test and a one-way analysis of variance (ANOVA) were implemented, and the post-hoc Fisher's least-significant difference test was applied for multiple comparisons. However, when the results might violate the normality assumption underlying ANOVA, the non-parametric Kruskal-Wallis test was performed instead. Statistical significance was set at P < 0.05.
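The fits in the study were performed with the Excel Solver; as a rough equivalent, the sketch below fits a Hill-type concentration-inhibition relation with SciPy. The function name, the data values and the initial guesses are all illustrative assumptions, not values from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, imax, ic50, nh):
    """Fractional inhibition of current amplitude vs. drug concentration."""
    return imax * conc**nh / (ic50**nh + conc**nh)

# Hypothetical concentration-inhibition data (uM, fraction blocked):
conc = np.array([0.3, 1.0, 3.0, 10.0, 30.0])
block = np.array([0.08, 0.22, 0.51, 0.78, 0.93])

popt, _ = curve_fit(hill, conc, block, p0=[1.0, 3.0, 1.0])
print(f"Imax={popt[0]:.2f}, IC50={popt[1]:.2f} uM, nH={popt[2]:.2f}")
```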
Inhibitory Effect of RDV on Depolarization-Evoked Delayed-Rectifier K + Current (I K(DR) ) Identified in GH 3 Cells
In the first stage of experiments, we applied the whole-cell configuration of the standard patch-clamp technique to these cells. The experiments were conducted in cells bathed in Ca 2+ -free Tyrode's solution containing 1 μM tetrodotoxin and 10 μM CdCl 2 , and we afterwards backfilled the recording electrode with K + -containing solution. Tetrodotoxin or CdCl 2 in the bathing solution was employed to block voltage-gated Na + or Ca 2+ currents, respectively. As depicted in Figure 1A, when we voltage-clamped the examined cells at −50 mV and then applied a depolarizing command potential to +50 mV with a duration of 1 sec, the delayed-rectifier K + current (I K(DR) ) was robustly evoked, as elaborated previously (Lu et al., 2019). Of note, upon exposure to RDV at various concentrations, the strength of the I K(DR) evoked by the corresponding depolarizing pulse declined dose-dependently; however, the initial peak component of I K(DR) was decreased to a lesser extent than the late component of the current. Based on the modified Hill equation elaborated in the Materials and Methods section, the IC 50 values for the inhibitory effects on the initial peak and late components of I K(DR) were 10.1 and 2.8 μM, respectively (Figure 1B). As such, the experimental observations disclosed that, during GH 3 -cell exposure to this compound, the late component of the I K(DR) elicited by step depolarization from −50 to +50 mV was lessened to a manifestly greater extent than the initial peak component of the current.
Beyond the decreased strength of I K(DR) , as the cells were exposed to different RDV concentrations, an increase in the rate of I K(DR) inactivation in response to protracted depolarization was observed in a time-dependent manner. That is, the relaxation time course of I K(DR) inactivation in the presence of this compound became faster, while the activation time course of the current was unchanged. What is more, we measured the time constants of I K(DR) inactivation at different RDV concentrations. As illustrated in Figure 1C, the time courses of the relative block of I K(DR) , namely, (I control -I RDV )/I control , in the presence of different RDV concentrations were appropriately fitted to a single exponential process. Under the minimal reaction scheme elaborated in the Supplementary Material, the estimated K D value in the presence of RDV amounted to 3.04 μM (as indicated in Figure 1D), which is close to the IC 50 value for the RDV-mediated blockade of the late (or sustained) component of I K(DR) but noticeably lower than that for its depressant action on the initial peak component of the current.
FIGURE 1 | Effect of RDV on delayed-rectifier K + current (I K(DR) ) in pituitary GH 3 cells. Cells were bathed in Ca 2+ -free Tyrode's solution and the recording electrode was backfilled with K + -containing solution. (A) Superimposed I K(DR) traces obtained in the control (1, i.e., RDV was not present) and during exposure to 0.3 μM RDV (2), 1 μM RDV (3) or 3 μM RDV (4). The upper part is the voltage-clamp protocol applied to the cell. (B) Concentration-dependent inhibition by RDV of I K(DR) amplitude measured at the beginning (□) and end (○) of the depolarizing command potential (mean ± SEM; n=8 for each point). I K(DR) amplitudes (i.e., transient or late component) at different RDV concentrations were taken at the beginning or end of a 1-sec depolarizing pulse from −50 to +50 mV. Continuous lines were fitted with the Hill equation as detailed in Materials and Methods. The IC 50 value (indicated by the vertical dashed line) measured for the initial peak or late component of I K(DR) was 10.1 or 2.8 μM, respectively. (C) Relative block (i.e., (I control -I RDV )/I control ) of I K(DR) in the presence of 1 or 3 μM RDV. The smooth line in the presence of 1 or 3 μM RDV denotes the exponential fit with a time constant of 113.5 or 98.9 ms, respectively. (D) Relationship between the RDV concentration and the rate constant (1/τ) (mean ± SEM; n=8 for each point). Based on the minimal kinetic scheme described in Materials and Methods, the values of k +1 * and k -1 were estimated to be 2.01 s −1 μM −1 and 6.12 s −1 , respectively; the K D value (k -1 /k +1 *, i.e., dissociation constant) was thereby 3.04 μM.
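The K D estimate above follows from simple first-order open-channel blocking kinetics, 1/τ = k +1 *·[RDV] + k -1 , so that K D = k -1 /k +1 *. The sketch below reproduces that linear analysis on hypothetical rate data; the numbers are illustrative and only loosely echo the values quoted in the text.

```python
import numpy as np

# Hypothetical inverse time constants of the drug-induced block (s^-1)
# measured at several RDV concentrations (uM):
conc = np.array([1.0, 3.0, 10.0])
rate = np.array([8.8, 11.9, 26.0])   # 1/tau from single-exponential fits

# Linear regression: 1/tau = k_plus * [D] + k_minus
k_plus, k_minus = np.polyfit(conc, rate, 1)
kd = k_minus / k_plus
print(f"k+1* = {k_plus:.2f} s^-1 uM^-1, k-1 = {k_minus:.2f} s^-1, KD = {kd:.2f} uM")
```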
Inhibitory Effect of RDV on Averaged Current-Voltage (I-V) Relationship of I K(DR)
In another separate series of measurements, we voltage-clamped the cells at −50 mV and then delivered command voltage pulses from −60 to +70 mV in 10-mV increments, each with a duration of 1 sec. Under these voltage protocols, a family of I K(DR) could be robustly elicited; the currents displayed an outwardly rectifying property with a reversal potential of −74 ± 2 mV (n = 13) (Lu et al., 2019; So et al., 2019). Of note, one minute after exposure to 10 μM RDV, the I K(DR) strength was depressed, particularly at potentials ranging between −20 and +70 mV. Figures 2A-C depict the I-V relationships of I K(DR) measured at the beginning (initial peak) and end (late or sustained component) of each potential in the control and during cell exposure to 10 μM RDV. The magnitude of the RDV-induced block of I K(DR) measured at the end of the depolarizing pulses (i.e., late I K(DR) ) was noticeably greater than that measured at the beginning of the pulses (i.e., peak I K(DR) ). For instance, at the level of +50 mV, RDV (10 μM) lessened the peak component of I K(DR) by 46 ± 2%, from 976 ± 178 to 527 ± 114 pA (n = 8, P<0.05). However, at the same voltage, RDV at the same concentration distinctly declined the I K(DR) amplitude attained at the end of the depolarizing pulse by 74 ± 3%, from 748 ± 121 to 194 ± 42 pA. After washout of RDV, the peak or late amplitude of I K(DR) returned to 956 ± 168 or 732 ± 114 pA, respectively (n = 7). Meanwhile, under the same experimental conditions, the presence of 10 μM RDV significantly decreased the initial or late component of the macroscopic I K(DR) conductance (measured at voltages from +30 to +70 mV) from the control values of 12.7 ± 0.6 or 8.5 ± 0.5 nS to 9.2 ± 0.2 or 3.5 ± 0.2 nS (n = 8), respectively. In consequence, the RDV-induced block of the late or steady-state I K(DR) during step depolarization was pronouncedly larger than that of the instantaneous peak component of the current.
Comparison Among the Effects of RDV, RDV Plus α,β-Methylene ATP (AMPCPP) and RDV Plus Cyclopentyl-1,3-Dipropylxanthine (DPCPX) on I K(DR) Amplitude
It has been noticed that the binding of muscarinic or purinergic receptors in GH 3 cells is likely to activate K + -channel activity through G-protein modulation (Yatani et al., 1987). We hence examined whether adding AMPCPP or DPCPX, in the continued presence of RDV, was able to adjust the RDV-induced inhibition of I K(DR) detected in GH 3 cells. Surprisingly, as depicted in Figure 3, neither the further application of AMPCPP (30 μM) nor that of DPCPX (1 μM) effectively modified the inhibition of I K(DR) produced by 10 μM RDV, in spite of the ability of RDV alone to depress I K(DR) and to accelerate current inactivation. AMPCPP, a non-degradable ATP analog, has previously been reported to be a P 2X -purinergic-receptor agonist, whereas DPCPX is an antagonist of the adenosine A 1 receptor (Wu et al., 1998). Alternatively, in the continued presence of 10 μM RDV, the further application of 10 μM nonactin, known to be a K + ionophore, could effectively reverse the RDV-induced decrease in current amplitude. Therefore, the RDV-induced perturbation of I K(DR) observed in GH 3 cells is most unlikely to be connected with preferential binding to purinergic or adenosine receptors, although the RDV molecule was thought to be a prodrug of an adenosine nucleoside analog (Lo et al., 2017; Brown et al., 2019; Tchesnokov et al., 2019; Gordon et al., 2020).
The Inactivation of I K(DR) Modified by RDV
As cells were exposed to different RDV concentrations, the I K(DR) in response to membrane depolarization noticeably exhibited an evident peak followed by an exponential decline to a steady-state level. Hence, we further explored the quasi-steady-state inactivation curve of I K(DR) attained in the absence or presence of RDV by using a two-step voltage protocol. In this series of experiments, we immersed the cells in Ca 2+ -free Tyrode's solution and filled the electrode with K + -containing solution during the electrical recordings. Once the whole-cell configuration had been tightly established, we applied a two-pulse protocol, under digital-to-analog conversion, to the examined cells in which different RDV concentrations were present. From least-squares minimization, the inactivation parameters of I K(DR) were appropriately derived in the presence of 3 or 10 μM RDV. As illustrated in Figures 4A, B, we plotted the normalized strength of I K(DR) (i.e., I/I max ) against the conditioning command potentials, and the continuous sigmoidal curve was well fitted with the modified Boltzmann function elaborated under Materials and Methods. In the presence of 3 μM RDV, V 1/2 = −33.4 ± 1.8 mV, q = 4.7 ± 0.3 e (n = 8), whereas in the presence of 10 μM RDV, V 1/2 = −18.5 ± 1.7 mV, q = 4.5 ± 0.3 e (n = 8). Observations from this set of experiments disclosed that, during GH 3 -cell exposure to different RDV concentrations, the V 1/2 value of the I K(DR) inactivation curve attained from these cells could be measurably altered, although no modification of the gating charge was noticed.
RDV on the Recovery of I K(DR) Blockage Identified in GH 3 Cells
Recovery from block by RDV was additionally examined with another two-step voltage-clamp protocol comprising an initial (i.e., conditioning) depolarizing pulse sufficiently long to allow the block to reach a steady-state level. The membrane voltage was thereafter stepped back from +50 mV to −50 mV for a variable time, after which a second depolarizing pulse (test pulse) was applied at the same potential as the conditioning pulse (Figure 5A). The ratios (2nd pulse/1st pulse) of the peak amplitudes of I K(DR) evoked in response to the test and conditioning pulses were employed as a measure of recovery from block, and the values were plotted versus the interpulse interval (Figure 5B). The time course of recovery of I K(DR) from block, with or without the addition of RDV, was well described by a single-exponential function. The time constant for current recovery from inactivation in the control was measured to be 453 ± 17 ms (n = 7), whereas the addition of 1 or 3 μM RDV to the examined cells prolonged the time constant to 687 ± 23 (n = 7, P<0.05) or 867 ± 37 ms (n = 7, P<0.05), respectively. These observations prompted us to suggest that the slowing of recovery caused by adding RDV might principally be owed to block of the open or inactivated state.
RDV on M-type K + Current (I K(M) ) in GH 3 Cells
In another separate set of measurements, we further checked the effect of RDV on the amplitude and gating of another type of K + current (i.e., the M-type K + current [I K(M) ]) endogenous to GH 3 cells (Sankaranarayanan and Simasko, 1996; Selyanko et al., 1999; Yang et al., 2019). The cells were bathed in high-K + , Ca 2+ -free solution, and the K + -containing solution was used to fill the recording electrode. Of note, within 1 min of RDV exposure, the I K(M) strength of GH 3 cells was considerably declined (Figure 6A). For example, as the cells were depolarized from −50 to −10 mV, the addition of 3 μM RDV decreased the I K(M) amplitude from 176 ± 25 to 78 ± 19 pA (n=9, P<0.05), and after removal of RDV, the current amplitude returned to 169 ± 24 pA (n=9). We consequently constructed the relationship between the RDV concentration and the degree of I K(M) inhibition; the IC 50 value was estimated to be 2.5 μM, and at a concentration of 100 μM, RDV nearly fully depressed the current strength (Figure 6B). It is apparent, therefore, that RDV can exert a pronounced inhibitory action on the I K(M) identified in GH 3 cells.
Effect of RDV on I K(M) Triggered by Triangular Ramp Pulse With Varying Durations
Previous experiments disclosed the capability of the I K(M) strength to modulate the patterns of burst firing in central neurons (Brown and Passmore, 2009). Therefore, we wanted to evaluate whether RDV has any propensity to perturb I K(M) in response to long-lasting triangular ramp pulses with varying durations, which were generated by digital-to-analog conversion. In the present experiments, the examined cell was voltage-clamped at −50 mV, and an upsloping (forward) limb from −50 to 0 mV followed by a downsloping (backward) limb back to −50 mV, with varying durations (40-940 ms), was thereafter applied. As demonstrated in Figure 7A, as the slope of the ramp pulse was decreased, the maximal strength of the I K(M) triggered by the upsloping limb of the triangular ramp pulse progressively rose, whereas the peak amplitude of I K(M) was initially elevated and then gradually declined. However, once 3 μM RDV was added, the strength of the current responding to both the rising and falling ramp pulses was noticeably decreased (Figure 7A). For instance, when the duration of the applied triangular ramp pulse was set at 940 ms (i.e., slope = ± 0.1 V/sec), the addition of 3 μM RDV decreased the current amplitude measured at the upsloping or downsloping limb from 150 ± 12 to 83 ± 9 pA (n=8, P<0.05), or from 294 ± 23 to 131 ± 11 pA (n=8, P<0.05), respectively. The experimental results illustrated that the strength of I K(M) in the upsloping limb considerably rose as the duration of the triangular ramp pulse was increased, while that in the downsloping limb gradually declined, and that adding RDV led to a decline of I K(M) in GH 3 cells in a time-dependent manner. The voltage hysteresis of ionic currents has been demonstrated to have an impact on the electrical behavior of action-potential firing (Männikko et al., 2005; Fürst and D'Avanzo, 2015; Hsu et al., 2020). The I K(M) amplitude triggered by the upsloping limb of the triangular voltage ramp was considerably lower than that triggered by the downsloping limb, strongly indicating a voltage-dependent hysteresis of I K(M) , as depicted in Figure 7B, according to the relationship of I K(M) versus membrane voltage. As the duration of the triangular pulse was raised from 40 to 940 ms (i.e., as the slope decreased), the degree of hysteresis of I K(M) decreased. Of note, upon adding RDV (3 μM), the I K(M) evoked in the upsloping limb of the long-lasting triangular ramp decreased to a lesser extent than that measured from the downsloping ramp. For instance, in controls (i.e., RDV was not present), the I K(M) at the level of −20 mV elicited at the upsloping and downsloping ends of the triangular ramp pulse was 78 ± 9 and 301 ± 23 pA (n=8), respectively, values that differed significantly (P<0.05). Furthermore, upon adding 3 μM RDV, the strengths of the forward and backward I K(M) at the same membrane voltage evidently declined to 65 ± 6 and 135 ± 18 pA, respectively. Therefore, the strengths of the RDV-induced current inhibition at the upsloping (forward) and downsloping (reverse) limbs of the triangular ramp differ significantly: the addition of 3 μM RDV decreased the I K(M) amplitude evoked at the upsloping or downsloping limb of the triangular ramp pulse by about 17% or 55%, respectively. As indicated by the dashed arrows in Figure 7B, we furthermore quantified the degree of voltage-dependent hysteresis of I K(M) from the difference (i.e., Δarea) in the area under the curve in the forward (upsloping) and backward (downsloping) directions.
The results showed that the amount of voltage hysteresis in response to the 940-ms triangular ramp pulse was considerably lessened in the presence of RDV. Figure 7C summarizes the data demonstrating the effects of RDV (3 or 10 μM) on the area under this curve. For instance, in addition to its depression of the I K(M) amplitude, the presence of 3 μM RDV decreased the area in response to the long-lasting triangular ramp, as illustrated by a reduction of Δarea from 9.6 ± 1.2 to 2.8 ± 0.8 mV·nA.
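The Δarea metric used above can be computed by integrating the forward and backward I-V limbs separately and taking the difference. The sketch below does this with simple trapezoidal integration on synthetic limbs; the current traces are invented placeholders, not recorded data.

```python
import numpy as np

def hysteresis_area(v_up, i_up, v_down, i_down):
    """Difference between the areas under the I-V curves traced on the
    upsloping (forward) and downsloping (backward) limbs of a triangular
    voltage ramp, a measure of voltage-dependent hysteresis (units follow
    the inputs, e.g. mV*nA)."""
    area_up = np.trapz(i_up, v_up)
    area_down = np.trapz(i_down, v_down)
    return abs(area_down - area_up)

# Synthetic forward/backward I-V limbs between -50 and 0 mV:
v = np.linspace(-50.0, 0.0, 200)
i_fwd = 0.05 * (v + 50.0)   # nA, smaller current on the way up
i_bwd = 0.12 * (v + 50.0)   # nA, larger current on the way down
print(f"delta-area = {hysteresis_area(v, i_fwd, v, i_bwd):.1f} mV*nA")
```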
Mild Inhibition by RDV of erg-Mediated K + Current (I K(erg) ) in GH 3 Cells
Further, we investigated the potential modifications by RDV of another K + current (i.e., I K(erg) ) that is also endogenous to these cells.
Under our experimental conditions, the deactivating, inwardly directed I K(erg) could be robustly elicited from a holding potential of −10 mV by a range of 1-sec voltage pulses from −100 to −10 mV (Wu et al., 2000; Huang et al., 2011; Hsu et al., 2020). When GH 3 cells were exposed to RDV at a concentration of 30 μM, the amplitude of the deactivating I K(erg) was mildly but significantly depressed throughout the entire range of voltage-clamp pulses applied (Figure 8). For example, at the level of −90 mV, the peak amplitude of I K(erg) was noticeably decreased from 565 ± 59 to 383 ± 42 pA (n=9, P<0.05) as the cells were exposed to 30 μM RDV. After the agent was washed out, the strength returned to 554 ± 51 pA (n=8). Alternatively, adding 30 μM RDV lessened the whole-cell conductance of the peak I K(erg) measured between −50 and −90 mV from 8.7 ± 0.8 to 5.8 ± 0.7 nS. Therefore, as compared with I K(DR) or I K(M) , the I K(erg) in these cells is relatively resistant to block by RDV. However, the RDV effect on I K(erg) tends to be rapid in onset, and it should be independent of any perturbing effect on the activity of RNA polymerase.
Stimulation by RDV of I MEP in GH 3 Cells
It has been reported that I MEP can be elicited in response to large membrane hyperpolarization (Dyachok et al., 2010; Liu et al., 2012; Wu et al., 2012; So et al., 2013; Chiang et al., 2014; Chang et al., 2020a). To study whether RDV possibly perturbs this type of ionic current, we bathed the cells in Ca 2+ -free Tyrode's solution and performed whole-cell current recordings. As described in previous observations (Dyachok et al., 2010; Wu et al., 2012; Chang et al., 2020a; Chang et al., 2020b), the cell was voltage-clamped at −80 mV and a 300-ms hyperpolarizing pulse to −200 mV was applied to evoke I MEP . As depicted in Figures 9A, B, when the cells were continually exposed to RDV, the amplitude of the I MEP elicited by such large hyperpolarization was progressively raised. For instance, 3 μM RDV conceivably elevated the I MEP amplitude from 112 ± 21 to 238 ± 35 pA (n=8, P<0.05) at the level of −200 mV. After washout, the current amplitude returned to 124 ± 24 pA (n=8). Additionally, when the K + ions in the internal solution were replaced with equimolar concentrations of NMDG + , this current could still be enhanced by adding 3 μM RDV; however, the current magnitude tended to be smaller. Figure 9B shows the relationship between the concentration of RDV and the degree of I MEP increase. RDV concentration-dependently elevated the amplitude of the I MEP activated during large step hyperpolarization. The half-maximal concentration (EC 50 ) required for the stimulatory effect of RDV on I MEP was noticed to be 5.8 μM.
Our findings disclosed the effectiveness of RDV in generating a stimulatory action on I MEP in GH 3 cells. Figure 9C depicts a summary bar graph showing the effects of RDV, RDV plus ivabradine, and RDV plus LaCl 3 on I MEP . The results indicate that the RDV-stimulated I MEP was overcome by the subsequent addition of LaCl 3 (5 μM), but not by ivabradine (3 μM). Ivabradine or hydroxychloroquine has been demonstrated to be an inhibitor of the hyperpolarization-activated cation current (Capel et al., 2015; Hsiao et al., 2019). The subsequent addition of chlorotoxin (1 μM), a blocker of Cl − channels, was unable to reverse the RDV-induced I MEP (242 ± 38 pA [in the presence of 3 μM RDV] versus 239 ± 41 pA [in the presence of 3 μM RDV plus 1 μM chlorotoxin]; n=8, P>0.05). In consequence, the RDV-stimulated I MEP identified in GH 3 cells is unlikely to result from activation of the hyperpolarization-activated cation current.
DISCUSSION
In this study, we noticed that the presence of RDV depressed, in a time- and concentration-dependent fashion, the strength of the delayed-rectifier K + current (I K(DR) ) in pituitary tumor (GH 3 ) cells. The rate of current inactivation apparently became faster as the RDV concentration increased. From another perspective, the suppression of I K(DR) by RDV is evidently associated with an increasing inactivation rate of the current in response to membrane depolarization. Specifically, the relative block of I K(DR) induced by the different RDV concentrations could hence be fitted by an exponential function. From the minimal reaction scheme (as shown in Supplementary Material (1)), the value of the dissociation constant (K D ) required for the RDV-induced block of I K(DR) in GH 3 cells was 3.04 μM, which is close to the effective IC 50 value (2.8 μM) for the RDV-mediated inhibition of late I K(DR) but lower than that (10.1 μM) for its block of the initial peak I K(DR) . Alternatively, during cell exposure to different RDV concentrations, the inactivation parameter (i.e., the V 1/2 value) of the inactivation curve of I K(DR) in GH 3 cells was evidently adjusted, with no modification of the gating charge. Recovery of I K(DR) from the block induced by RDV (1 and 3 μM) followed single exponentials with time constants of 687 and 867 ms, respectively. In this scenario, the present observations disclose that RDV molecules tend to accelerate I K(DR) inactivation in a concentration- and state-dependent fashion, implying that they reach the blocking site of the channel only when the channel resides in the open conformational state. The EC 50 value of RDV against SARS-CoV-2 in Vero E6 cells was measured to be 1.76 μM, indicating that its working concentration is more than likely achieved in vivo. In the present study, the presence of RDV was also observed to inhibit I K(DR) in Jurkat T-lymphocytes in a time- and concentration-dependent fashion (Supplementary Material (2) and Supplementary Figure 1). Besides its antiviral activity, similar to chloroquine, RDV per se might to some extent exert an immune-modulating activity, possibly through the inhibition of K V channels.
The current observations pointed out that, with an effective IC 50 of 2.5 μM in GH 3 cells, RDV was capable of depressing the strength of I K(M) . Moreover, voltage-dependent hysteretic changes of ionic currents are hypothesized to play an essential role in the behaviors of different types of electrically excitable cells. In the current study, echoing previous observations (Männikko et al., 2005; Fürst and D'Avanzo, 2015; Hsu et al., 2020), the I K(M) endogenous to GH 3 cells was also observed to undergo a voltage-dependent hysteresis, or mode shift, in which the voltage sensitivity of the gating charge movements depends on the previous state. Under long-lasting triangular ramp pulses, RDV noticeably suppressed the strength of the voltage-dependent hysteresis of the elicited I K(M) . As such, we provide experimental results strongly demonstrating a perturbing effect of RDV on this non-equilibrium property of M-type K + channels in electrically excitable cells such as GH 3 cells, although how the RDV-induced changes in the voltage hysteresis of I K(M) are connected with the behaviors of electrically excitable cells is unclear.
The present study discloses that RDV can directly inhibit I K(M) and I K(DR) in pituitary GH 3 cells, suggesting that this compound per se is presumably not an inactive prodrug. The depression of these K + currents would be expected to contribute to its actions on the activities of various types of cells, including GH 3 cells. A recent report notably demonstrated the occurrence of hypokalemia in patients with coronavirus disease 2019. It is reasonable to presume that, apart from its effects on the viral polymerase and the proofreading exoribonuclease (Agostini et al., 2018; Brown et al., 2019; Tchesnokov et al., 2019; Gordon et al., 2020), the extent to which the RDV-induced perturbations of ion channels unexpectedly identified in this study participate in its antiviral actions has yet to be further delineated.
Our results are in accordance with previous findings demonstrating that large hyperpolarization-induced inward currents (i.e., I MEP ) occur in glioma cells, heart cells, pituitary cells, and macrophages (Dyachok et al., 2010; Liu et al., 2012; So et al., 2013; Chiang et al., 2014; Chang et al., 2020a; Chang et al., 2020b). Such hyperpolarization-induced activation, followed by an irregular time course, indicates that I MEP is produced by a transient rupture of the cell membrane caused by the electrical field tied to the large hyperpolarization (Dyachok et al., 2010; Wu et al., 2012; So et al., 2013; Chang et al., 2020a; Chang et al., 2020b). In the current study, the presence of RDV effectively increased I MEP dose-dependently, with an EC 50 value of 5.8 μM.
The further addition of LaCl 3 , but not that of chlorotoxin or ivabradine, was noticed to reverse the RDV-stimulated I MEP .
Previous observations have reported the effectiveness of AUY922, a small-molecule inhibitor of heat-shock protein 90 (HSP90), in stimulating I MEP in glioblastoma cells through a mechanism independent of HSP90 inhibition (Chiang et al., 2014). As a corollary, the stimulation by RDV of I MEP in GH 3 cells also tends to be direct and is unlikely to be mediated through a mechanism linked to its prevailing actions on RNA polymerases. The MEP-perturbed portion of the surface membrane can initiate ion fluxes into and out of the cell, hence producing a massive change in the ionic milieu of the cytosol. This effect has applications in biotechnology and medicine and, hence, has been the subject of both experimental and theoretical work (Gehl, 2003; So et al., 2013; Napotnik and Miklavčič, 2018). Due to the high conductance of MEP-induced channels, significant currents can flow even at a low open probability, thereby altering the electrical behavior of cells (Vernier et al., 2009; Kaminska et al., 2012). Alternatively, previous studies have shown that the activity of MEP-elicited channels could act as a component of trans-plasma membrane electron transport, to which the targeting of the mitochondrial permeability transition pore (mPTP) is closely linked (Del Principe et al., 2011; Bagkos et al., 2015). Therefore, whether the RDV-stimulated perturbations of I MEP in different types of cells can account for its antiviral effectiveness is worth further investigation.
Aconitine, an alkaloid with potential cardiotoxicity, has been described to modify the gating of I K(DR) in lymphocytes and in neural and cardiac cells. Aconite alkaloids from Aconitum carmichaelii were recently demonstrated to exert antiviral activity against cucumber mosaic virus (Xu et al., 2019). Additionally, curcuminoids have been demonstrated to depress I K(DR) and to accelerate I K(DR) inactivation in insulin-secreting cells (Kuo et al., 2018), as well as to possess potent antiviral activities against coronavirus (Wen et al., 2007). Although additional experiments are required to verify the current results, the RDV-induced effects on ionic currents demonstrated here could be a confounding factor and a notable ionic mechanism underlying its modification of cell behaviors occurring in vitro or in vivo. A summary of our findings regarding the possible perturbations caused by RDV is illustrated in Figure 10.
The RDV-induced suppression of I K(DR) or I K(M) demonstrated here is independent of its possible actions on RNA polymerase (Agostini et al., 2018; Brown et al., 2019; Gordon et al., 2020). From another perspective, it is intriguing to investigate whether the modification by RDV of RNA polymerase contributes to its block of membrane I K(DR) or I K(M) , as well as to its stimulation of I MEP , in different cell types. To what extent the RDV-induced perturbations of membrane ionic currents confer its effectiveness in antiviral activities thus remains to be resolved. Following intravenous administration, RDV can readily pass across the blood-brain barrier (Warren et al., 2016; Ferren et al., 2019; Lucey, 2019). Recent studies have demonstrated that CoVs might exert neuro-invasive potential (Ferren et al., 2019; Li H. et al., 2020). Findings from the present observations might shed light on the notion that the effects of RDV on the gating of these currents are intimately tied to its antiviral actions or to variable forms of neurological effects (Ferren et al., 2019); however, the present observations do not preclude
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
AUTHOR CONTRIBUTIONS
S-NW designed the experiments. Z-HG, S-WL, W-KL, and S-NW carried out the experiments. P-YL provided the resources. W-TC and S-NW analyzed the data. W-TC and S-NW wrote the paper. All authors contributed to the article and approved the submitted version.
FUNDING
This study was financially supported by grants from the Ministry of Science and Technology (MOST-108-2314-B-006-094) and National Cheng Kung University (NCKUH-10709001 and D107-F2519), Taiwan. The funders were not involved in the study design, data collection, analyses, or interpretation. | 9,324 | 2020-07-21T00:00:00.000 | [
"Biology"
] |
Linear Electro Optic Effect for High Repetition Rate Carrier Envelope Phase Control of Ultra Short Laser Pulses
This paper is devoted to analyzing the principle and applications of the linear electro-optic (EO) effect for the control of the carrier-envelope-phase (CEP). We introduce and detail here an original method, which relies on the use of an EO dispersive prism pair in a compressor-like configuration. We show that, by choosing an adequate geometry, it is possible to shift the CEP without changing the group delay (isochronous carrier-envelope-phase shifter) or to change the induced group delay without varying the CEP. According to our calculations, when applying an electric field around 400 V/cm to the rubidium titanyl phosphate (RTP) prisms in a double pass configuration (2 × 40 mm total length), one obtains a CEP shift of π rad at 800 nm without inducing a group delay. In contrast, this CEP shift is obtained for an electric field around 1.4 kV/cm in an RTP rectangular slab of the same total length and, in this case, the group delay is of the order of a few fs.
Introduction
The electric field of a laser pulse is generally described by the product of a wave envelope and a carrier wave. In a dispersive medium, phase and group velocities are different, inducing a slippage of the carrier frequency wave inside the envelope. For ultra-short pulses containing only a few optical cycles, laser-matter interactions can drastically depend on the electric field and not only on its envelope. In this case, controlling the carrier-envelope-phase (CEP) is of prime importance [1][2][3].
Because of the difference between effective group and phase velocities in laser cavities and of environmental effects, such as vibrations and thermal drift, the pulses generated by ultra-short chirped pulse amplification (CPA) lasers do not have the same CEP. Various methods have been developed in order to obtain, through the use of a fast control loop, a train of CEP stabilized pulses from mode-locked oscillators [4][5][6]. Different ways also exist to stabilize the CEP of the amplified pulses of a CPA laser system seeded by a CEP stabilized mode-locked oscillator. They are mainly based on a slow feedback loop containing an f-2f interferometer [7], a proportional-integral-derivative (PID) controller and a specific CEP correction technique [8].
Examples of those techniques are the use of a pair of wedges to modify the optical path in the dispersive element composing these wedges [9,10], the modification of one parameter of the compressor or of the stretcher (this parameter can be the distance between the gratings) [11,12], the use of an Acousto-Optic Programmable Dispersive Filter (AOPDF) [13][14][15] or of a 4f system with an adaptive phase modulator device [16].
We recently proposed an original method based on the linear electro-optic (EO) effect in a bulk material (LiNbO 3 -lithium niobate) [17,18] and successfully applied it to the CEP control of a titanium-sapphire CPA laser [19], with stabilization performances of the order of those obtained with classical methods. Compared to the main equivalent CEP shifters (as will be detailed at the end of this paper), the major advantages of EO CEP shifters are that they do not need mechanical displacements and, especially, their high correction bandwidth (>10 kHz). In this paper, we introduce another original method, still based on the use of the EO effect, but in a more complex optical scheme which relies on the use of an EO dispersive prism pair. While experimental demonstration of CEP shifts and CEP control [17][18][19] of CPA amplified pulses with a "longitudinal" shifter (simple rubidium titanyl phosphate (RTP) or LiNbO 3 rectangular slab) has been achieved, no experimental results will be given here concerning the prism pair EO shifter, which will be the topic of a future paper.
The structure of the paper is the following. We first recall analytical results explaining how a CEP shift can be induced in an EO crystal, the former being linked to the static electric field dependence of the difference between the induced group and phase delays in the crystal. We then briefly give the main experimental results already obtained with CEP shifters consisting of an EO material rectangular slab (that will be called the "longitudinal configuration" of the CEP shifter in the rest of the paper) on which a transverse electric field is applied. The new prism pair configuration is then theoretically detailed, and it is shown that it has many advantages compared to the first set-up, such as, in particular, making it possible to shift the CEP without changing the group delay. This may be essential (typically in high resolution pump-probe experiments and especially in experiments making use of attosecond laser pulses) when any change in timing between pulses is to be avoided. In the last part of the paper, we provide some elements to compare the different solutions for CEP stabilization.
Theory
Due to the wavelength dispersion of the refractive index in dispersive media, phase and group velocities have, in general, different values. In a crystal with a non-vanishing EO effect, the refractive index can be linearly modulated by an external electric field, E, leading to a variation of the CEP. We consider a laser pulse whose carrier angular frequency is ω 0 , propagating in a homogeneous dispersive medium of length, L, which is non-centrosymmetric and exhibits a Pockels effect. This is, for example, the case [17,20] in LiNbO 3 (uniaxial crystal, crystalline class 3m) or RTP (biaxial crystal, orthorhombic crystalline class mm2) when choosing appropriate directions for the linearly polarized laser field and the applied static electric field. Figure 1 illustrates how the direction of propagation and the polarization of light are to be chosen in practice and how the voltage is to be applied on the crystal in two interesting cases, LiNbO 3 and RTP. X, Y and Z are the principal dielectric axes, which are parallel to the crystallographic axes. With these choices, the variation of the corresponding electric field-dependent refractive index, Δn, is given as a linear function of the EO coefficient, r, and of the electric field, E. We only consider here the case where one can neglect the change in the crystal length, ΔL, due to the inverse piezoelectric effect, as is the case in LiNbO 3 and RTP [20][21][22].
For the sake of simplicity, we only consider the case where the change of the refractive index due to the linear EO effect depends on a unique r(λ 0 ) coefficient (unclamped EO coefficient at wavelength λ 0 ) and can be written in a scalar form as:
Δn(λ) = −(1/2) n 3 (λ) r(λ) E (1)
where n(λ) is the refractive index without applied field. The CEP acquired over the length, L, is the slippage of the carrier with respect to the envelope, φ CE = ω 0 (τ g − τ φ ), which, using the phase delay τ φ = n L/c and the group delay τ g = (L/c)(n − λ dn/dλ), reduces to:
φ CE = −(ω 0 L/c) λ (dn/dλ) (2)
Neglecting the inverse piezoelectric effect, one obtains for the field-induced CEP shift:
Δφ CE = −(ω 0 L/c) λ 0 (dΔn/dλ) λ=λ0 = (ω 0 L E/2c) λ 0 (d(n 3 r)/dλ) λ=λ0 (3)
This phase change is proportional to the length, L, of the crystal and to the electric field, E, applied. It can be used to make an active correction of the CEP and maintain its value constant with a control loop, despite environmental fluctuations.
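As a numerical illustration of Equations 1-3, the sketch below estimates the field-induced CEP shift by finite differencing Δn(λ) around λ 0. The dispersion model and EO coefficient used here are toy placeholders rather than RTP or LiNbO 3 data, so the printed value will not reproduce the π shift quoted in the abstract.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def cep_shift(n_of_lam, r_of_lam, L, E, lam0, dlam=1e-10):
    """Field-induced CEP shift: Delta n(lam) = -0.5 * n^3 * r * E and
    Delta phi_CE = -(omega0 * L / c) * lam0 * d(Delta n)/d(lam) at lam0."""
    omega0 = 2.0 * np.pi * C / lam0
    dn = lambda lam: -0.5 * n_of_lam(lam) ** 3 * r_of_lam(lam) * E
    ddn_dlam = (dn(lam0 + dlam) - dn(lam0 - dlam)) / (2.0 * dlam)
    return -(omega0 * L / C) * lam0 * ddn_dlam

# Toy dispersion and EO coefficient (placeholders, not RTP/LiNbO3 data):
n = lambda lam: 1.80 + 2.0e-14 / lam**2          # weakly dispersive index
r = lambda lam: 30e-12 * (1.0 + 1.0e-8 / lam)    # ~30 pm/V, mildly dispersive

# 2 x 40 mm double pass, 1.4 kV/cm = 1.4e5 V/m, 800 nm carrier:
print(f"CEP shift = {cep_shift(n, r, L=0.08, E=1.4e5, lam0=800e-9):.3f} rad")
```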
Experiments
This section presents previously published results demonstrating the validity of our model and giving an example of the performances obtained with the EO shifter. Equation 3 was checked in [17] for LiNbO 3 using a femtosecond 800 nm Ti:S laser source and an f-2f interferometer. The results show a good agreement between theory and experiments. Another approach using spectral interferometry with a broadband laser source [18] was applied to measure CEP shifts and confirmed again, with a better accuracy, the above theory. Stabilization of the CEP (slow loop) of a Ti:S femtosecond laser source was finally demonstrated in [19] with very promising results. The arrangement of the CEP-stable 20 W-range kHz laser and of the EO device for CEP control is shown in Figure 2. In this case, a longitudinal LiNbO 3 EO shifter crystal was used.
Introduction
The EO CEP shifter discussed in the previous part of the document is well-suited to stabilizing a high repetition rate CPA system seeded by a CEP-stabilized mode-locked Ti:S oscillator. Nevertheless, for some applications, the CEP shift is not the only relevant parameter. In particular, the electric field-induced group delay Δτ g (Δτ g = 0 for an isochronous system) and the electric field-induced dispersion ΔΦ 2 (ΔΦ 2 = 0 for an iso-dispersive system) may play a role. Furthermore, lowering the applied voltage necessary to obtain the same CEP shift is also clearly of practical interest. Equation 3 shows that the two ways to lower the electric field are to increase the (effective) crystal length or to find a material with optimized characteristics (for example, with a higher EO coefficient).
As can be seen from the theoretical analysis given in paragraph 2.1, the previously described EO CEP shifter configuration is neither rigorously isochronous, nor iso-dispersive, when using LiNbO 3 or RTP crystals. We investigated a combination of two CEP shifters with different crystals, but this set-up proved to be ineffective, as it led to very low CEP shifts when a constant group delay (isochronous system) or a constant dispersion condition (iso-dispersive system) was sought.
The idea we propose here is to combine material dispersion with angular dispersion, as in the case of a prism compressor [23], and to study the characteristics of an "EO prism compressor". The prisms are EO crystals (RTP or LiNbO 3 , for example) on which an electric field is applied (see Figure 4). In this configuration, the polarization of the optical electric field is parallel to the "static" electric field. This leads to an S polarization on the prism surfaces and, thus, requires an anti-reflection coating in order to reduce optical losses. The other point, which will be clarified hereafter, concerns the homogeneity of the "static" field in the prisms.
Theory
The induced spectral phase can be written in terms of Λop, the optical path for a ray of angular frequency, ω, and c, the speed of light in vacuum.
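The expression itself was an equation image lost in extraction; given the definitions just stated, the standard form it takes is almost certainly:

```latex
% Spectral phase as wavenumber times optical path (reconstruction from the
% definitions in the text; the original equation numbering is not preserved):
\varphi(\omega) \;=\; \frac{\omega}{c}\,\Lambda_{\mathrm{op}}(\omega).
```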
General Configuration
We first consider the general case where the angle of incidence can take any value. As the optical path is obviously invariant for parallel incident rays, we calculate the path of a ray going through the apex of the first prism (Figure 5). For a single pass in the system, it can be written as a sum over the successive path segments. Surprisingly, following the method given in [24], a rather simple calculation shows that it can be written in an elegant mathematical form. We consider (Figure 5) a ray NA2 parallel to A1B (NA1 normal to A1B) intercepting the apex A2 of prism 2. This ray is deviated by prism 2 along the path, A2A3. As A1B and NA2 are parallel incident rays, the light paths, Λopt = A1BCD and NA2A3, are identical. Let l be the distance between the apices of the two prisms and ρ the angle (NA2A1) (Figure 5); the optical path, Λopt, can then be written in terms of l and ρ. This last equation leads finally to the result, where (Figure 5) θ4 is the external refraction angle at the exit of prism 1 in air, a the slant distance, A1O, between the output face of prism 1 and the input face of prism 2, b the distance, OA2, and A2A3 the distance between the apex of the second prism and a reflecting mirror (used in a two-pass configuration to remove spatial chirp). A2A3 being a constant, we can remove it, and the spectral phase can thus be written accordingly. This expression is equivalent to that given in [24]. An analogous calculation was also done in [25], but the above expression was not given there. The group delay, τg, and the second-order dispersion, φ(2), have, respectively, expressions obtained by first- and second-order derivation of the spectral phase. Because only the variation of these parameters with the applied static electric field, E, is relevant, we define their field-induced variations. The isochronous CEP shifter condition at the carrier angular frequency, ω0 (CEP shift without induced group delay), corresponds to Δτg = 0. This is to be fulfilled for any value of E. In order to obtain analytical results, we rewrite Equation 2 in an expanded form and, by derivation with respect to ω, one obtains the required relations. Using for the refractive index a first-order Taylor expansion as a function of the static electric field, E, the isochronous condition leads to a relation (Appendix I) between a and b, where θ20 and θ40 are, respectively, the values of θ2 (internal refraction angle in the first prism) and θ4 (external refraction angle in air at the exit of the first prism) for E = 0 (Figure 5), and β is the apex angle of the prisms. Similarly, generation of a group delay without variation of the CEP leads to the condition that the field-induced CEP variation vanishes. This situation, which will be called "Pure Group Delay" (PGD) generation in the rest of this paper, implies that a and b satisfy a corresponding relation (Appendix II). Finally, the iso-dispersive condition, Δφ(2) = 0, leads again to a similar relation. In this case, however, the analytical result is more complex and is not given here for conciseness.
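The "elegant mathematical form" mentioned above was an equation image that did not survive extraction. Given the definitions of l and ρ just stated, the classic prism-pair result of ref. [24] takes the form below; this is a hedged reconstruction, and the paper's exact grouping of terms may differ:

```latex
% Classic prism-pair optical path (standard result; hedged reconstruction):
\Lambda_{\mathrm{opt}} \;=\; l\cos\rho \;+\; A_2A_3,
\qquad
\varphi(\omega) \;=\; \frac{\omega}{c}\,\Lambda_{\mathrm{opt}}(\omega).
```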
As a first conclusion, we see that for a particular set of the ratio, b/a, that is to say, a specific geometry, it is possible to make the system behave like an isochronous CEP shifter, a PGD generator or an iso-dispersive CEP shifter.
Minimal Deviation Configuration
We now suppose that the incident angle corresponds to the prism minimal deviation at the carrier frequency, ω0. This condition, which gives the highest angular dispersion, can be written as a relation between the apex angle and the refractive index at ω0. As these parameters are more relevant from an experimental point of view, instead of using the parameters, a and b, we now switch to the parameters, d2 and d3, corresponding, respectively, to the distance covered between prisms 1 and 2 in air and to the total path inside the prisms for the ray at the carrier frequency (Figure 6). The spectral phase then takes the form given in Appendix III. We will now express the results in the particular case of the central wavelength, λ0, corresponding to the central angular frequency, ω0, and using λ instead of ω. Equation 21 then becomes Equation 22 (Appendix IV). This shows that, at the central wavelength, λ0, the variation of the spectral phase with respect to the applied electric field depends neither on the distance, d2, nor on the apex angle, β. The group delay variation with respect to the applied electric field is written using derivatives taken with respect to the wavelength, λ, corresponding to the angular frequency, ω, instead of ω itself. Contrary to the case of the variation of the spectral phase with the electric field, the variation of the group delay with the field depends on d3 and d2. An analytical expression of the group dispersion with respect to the electric field can also be derived but, as in the general configuration, is not given here for conciseness.
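The minimal-deviation condition itself was an equation lost in extraction. For a prism of apex angle β, the textbook form of this condition at the carrier frequency (a reconstruction, not the paper's verbatim equation) is:

```latex
% Minimum-deviation condition at \omega_0 (textbook form): the internal ray
% is symmetric, \theta_2 = \beta/2, so the incidence angle satisfies
\sin\theta_1 \;=\; n(\omega_0)\,\sin\!\left(\frac{\beta}{2}\right).
```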
Analytical expressions of the ratio, d2/d3, at ω0 for the isochronous CEP shifter and the PGD generator configurations are, respectively, given in the appendices. When the isochronous CEP shifter configuration is chosen, the variation of the CEP is equal to the variation of the spectral phase with the electric field, which is given by Equation 22.
Another interesting parameter to evaluate is the group-delay dispersion (GDD) induced on the beam by the two-prism compressor for E = 0 at ω0. It can be written in our notation in terms of d2 and d3. From this expression, we deduce the ratio, d2/d3, for which the system introduces no dispersion (i.e., φ(2) = 0). Analytical results were checked numerically with a ray-tracing program.
RTP Prism Pair CEP Shifter
This paragraph presents numerical results obtained in RTP crystals. The geometry corresponds to Figure 7, with the path length of the ray at the carrier frequency chosen equal in the two crystals.
This geometry is in practice to be preferred in order to obtain a homogeneous "static" electric field in the crystal (which would not be the case at the tip of a prism). The RTP Sellmeier formula for the refractive index is taken from [26] and the wavelength dispersion of the EO coefficients from [20].
Figure 8 plots the ratio, d2/d3, versus the apex angle, β, of the prisms when the isochronous (blue) and the iso-dispersive conditions are fulfilled. This gives a means of comparing the performances of each configuration. The thick brown line corresponds to a zero-dispersion condition when no electric field is applied (φ(2) = 0) for the set-up. It appears that it is not possible to obtain simultaneously an isochronous and iso-dispersive system, nor an isochronous system without dispersion. In the isochronous configuration, the set-up introduces a positive dispersion (as the red curve is to the left of the brown curve, which corresponds to a zero-dispersive system at zero electric field). Figure 8 also shows that the PGD configuration (no CEP shift) is very close to the iso-dispersive configuration. This implies that the iso-dispersive configuration should generally be inefficient as a CEP shifter (i.e., it will require a far higher voltage to generate the same CEP shift than the isochronous configuration). This is confirmed by looking at the CEP shift, which is plotted (for d3 = 40 mm and E = 1 kV/cm) on the same graph in the iso-dispersive configuration (dotted green line). Finally, the dotted red curve (which corresponds to the isochronous CEP shift as a function of β) illustrates the fact that, in the isochronous configuration, the CEP shift at the central frequency, ω0, is independent of the apex angle of the prism, as can be seen from Equation 22. In order to have an idea of the practical values of d2 corresponding to each configuration, Figure 9 plots the induced CEP shift (red line), the induced group delay (blue line) and the induced GDD (brown line) at the central frequency, ω0, as a function of d2. The group-delay dispersion, φ(2) (GDD), for E = 0 is also plotted (dotted brown line). These results are obtained for a fixed apex angle of the prisms. This graph shows again that the PGD and iso-dispersive configurations correspond to nearly the same value of d2 and that the CEP phase shifter is not efficient in the iso-dispersive configuration. One can also conclude that, at fixed CEP shift and for a given pair of prisms, increasing d2 (i.e., the distance between the prisms) can increase the CEP shift range or reduce the electric field. This, however, cannot be done while maintaining the isochronous condition. In any case, d2 is limited by the size of the beam on the second crystal, which depends on its spectral extent.
Comparison between Different CEP Shifters
As different CEP shifters based on various physical phenomena can be found in the literature, it is interesting to compare their performances with those of the EO CEP shifters. In order to do so, we have restricted ourselves to five systems. These are the AOPDF [4,15], the grating compressor [11,12], the glass wedges system [9,10], the 4f + LCD system [16] and the lens/rotating grating system [27]. Mainly, we chose to compare the induced group delay, Δτg, and the induced GDD of each system and added, when significant, specific data, such as the mechanical displacement for the grating compressor and the electric field for the EO devices. Table 1 gives these parameters, when required, for all the systems, for a CEP phase shift of π radians at a wavelength of 800 nm. In the case of the EO CEP shifters, the single-pass configuration and the double-pass, two-wedged-crystal isochronous configuration are selected for comparison, and a single-pass propagation length, L = 40 mm, is chosen in RTP for the central-wavelength rays. This corresponds to RTP crystal lengths that are commercially available and that lead to relevant CEP shifts at moderate values of the electric field. 1,200 grooves/mm gratings were considered at 37° incidence for the grating compressor. Concerning the glass wedge system, the basic configuration described in [10] is considered, with a displacement of two silica wedges perpendicular to the beam axis to vary the thickness of silica. We also give an estimate of the correction bandwidth, defined as the inverse of the time needed to change from one CEP value to another. It is to be noted that the above bandwidths concerning the grating compressor, the glass wedges and the 4f + LCD systems are only to be considered as "typical values". In practice, the effective bandwidth depends on the specific configuration, and the main result to be kept in mind is that their response time is considerably longer than that of the Dazzler or the EO CEP shifter, for example.
Conclusions
Compared to the longitudinal EO CEP shifter, the major advantages of the new configuration described here are the possibility to work without induced group delay and the lower static electric field needed for a given CEP shift. In addition, the system can also be used to control a group delay without shifting the CEP (PGD generator). The combination of two such systems should allow us to control separately the CEP and the group delay at a very high speed. The induced GDD can be estimated from Table 1 and can be neglected, provided that the corresponding condition on the pulse duration is verified. This shows that corrections can be made even on very short pulses, the most important point being the ability of the system to correct the CEP at a very high speed, which can very probably be extended to the 100 kHz range.
The EO phase shifter cannot be used to stabilize the CEP of the optical pulse train outside the mode-locked (ML) oscillator (see Appendix V). This new set-up should be well suited to stabilizing the CEP of the amplified pulses of a chirped pulse amplification (CPA) laser system seeded by a CEP-stabilized mode-locked oscillator (slow feedback loop), as was demonstrated in the case of the longitudinal EO shifter [19].
The authors strongly believe that this type of system could be of interest for different applications, like coherent control, polarization shaping, stabilization of interferometric systems, coherent combination of fiber amplifiers and frequency synthesizers.
Clearly, the same angles can be defined in Figure 5. Using Snell's law at each interface with elementary geometry, the relations (A.1) are obtained. Equation 9 gives the spectral phase, φ(ω), as a function of the geometrical parameters, a and b, and the angle, θ4 (Figure 5). The variation of the group delay, Δτg, when applying the electric field, E, follows from this expression and can be rewritten in a compact form in which the parameter A is defined in (A.4). In these equations, θ4 and θ40 are, respectively, the output angle of prism 1, as defined in Figure 5, when an electric field is applied and when it is not. The isochronous condition corresponds to Δτg = 0 (no group delay variation due to the application of the electric field).
Calculation of A
Approximate expressions as a function of E are now to be used. Elementary mathematics shows that, to first order in the electric field, E, the quantities entering A take the forms defined below. A method was recently proposed and demonstrated to stabilize the CEP of the optical pulses outside the oscillator cavity [29]. This method is based on an acousto-optic device that shifts the frequency comb. As EO systems are currently used in the ns domain as frequency shifters through a linear temporal phase sweep, it is natural to wonder whether an EO CEP phase shifter could be used to stabilize the CEP of the pulse train outside the ML oscillator by shifting the comb frequencies. To our knowledge, the only way to apply a linear temporal phase shift to every pulse is to use a sinusoidal phase modulator at frequency frep (or an integer multiple of frep), as shown in Figure 11. The electric field at the output of the modulator can then be written as in Equation A.29, where Γ is the modulation index, which depends linearly on the radio-frequency (RF) electric field applied to the crystal, and ωrep is the angular frequency corresponding to frep. Applying the Jacobi-Anger expansion [30] to the last term of Equation A.29 leads to Equation A.30. In the spectral domain, the electric field takes the form A.31, where the symbol * stands for the convolution product and the Jk are Bessel functions. This last expression can be rewritten as Equation A.32. This shows that the EO device shifts the spectral envelope, but not the frequencies of the comb, as is the case with the acousto-optic device. This can be easily understood if one considers, on one side, a single pulse on which a linear temporal phase is applied and whose spectrum is shifted by Γωrep and, on the other side, a single mode of the comb, whose temporal extension is infinite and which experiences a sinusoidal phase modulation, giving rise to adjacent modes separated by a multiple of ωrep. This shows that it is not possible to stabilize in this way the CEP of the optical pulse train outside the ML oscillator with our EO device.
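Equations A.29-A.32 were images in the original. The expansion invoked above is the standard Jacobi-Anger identity, which for a sinusoidal phase modulation of index Γ at ωrep reads (textbook form, not the paper's verbatim equation):

```latex
% Jacobi-Anger identity applied to sinusoidal phase modulation at \omega_{rep}:
e^{\,i\,\Gamma\sin(\omega_{\mathrm{rep}} t)}
\;=\; \sum_{k=-\infty}^{\infty} J_k(\Gamma)\, e^{\,i k \omega_{\mathrm{rep}} t}.
```

Each comb mode thus acquires sidebands at multiples of ωrep rather than a rigid frequency shift, which is exactly the conclusion drawn in the text.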
Figure 1 .
Figure 1. Geometry of the interaction for LiNbO3 and rubidium titanyl phosphate (RTP). X, Y and Z are the principal dielectric axes (parallel to the crystallographic axes).
Figure 3 .
Figure 3. Shot-to-shot (red dots) and 10 ms-averaged (grey dots) measurements of the stabilized CEP drift: (a) over 10 min of amplified pulses at 3 W output without slow feedback control; (b) with the EO feedback loop over 25 min, leading to RMS CEP noise of, respectively, 320 and 130 mrad; and (c) over 7 min of amplified pulses at 20 W output, leading to RMS CEP noise of, respectively, 440 and 250 mrad.
Figure 4 .
Figure 4. EO prism pair CEP shifter. Yellow parts correspond to gold coating.
Figure 5 .
Figure 5. Geometrical arrangement of the EO prism pair.
Figure 6 .
Figure 6. Geometrical arrangement of the EO prism pair CEP shifter at prism minimal deviation.
Figure 7 .
Figure 7. Proposed practical geometry for the prism pair CEP shifter set-up.
Figure 8 .
Figure 8. Ratio, d2/d3, versus apex angle for different configurations and corresponding CEP shift; total path length, d3 = 40 mm, at 800 nm and a static electric field, E = 1 kV/cm.
Parameter A can be calculated to first order in E using the set of Equations A.1.
Figure 11 .
Figure 11. Application of sinusoidal phase modulation at frep.
Table 1 .
Comparison between different CEP shifters. | 6,132 | 2013-02-26T00:00:00.000 | ["Engineering", "Physics"] |
Partially Hydrolysed Whey Has Superior Allergy Preventive Capacity Compared to Intact Whey Regardless of Amoxicillin Administration in Brown Norway Rats
Background It remains largely unknown how physicochemical properties of hydrolysed infant formulas influence their allergy preventive capacity, and results from clinical and animal studies comparing the preventive capacity of hydrolysed infant formula with conventional infant formula are inconclusive. Thus, the use of hydrolysed infant formula for allergy prevention in atopy-prone infants is highly debated. Furthermore, knowledge on how gut microbiota influences allergy prevention remains scarce. Objective To gain knowledge on (1) how physicochemical properties of hydrolysed whey products influence the allergy preventive capacity, (2) whether host microbiota disturbance influences allergy prevention, and (3) to what extent hydrolysed whey products influence gut microbiota composition. Methods The preventive capacity of four different ad libitum administered whey products was investigated in Brown Norway rats with either a conventional or an amoxicillin-disturbed gut microbiota. The preventive capacity of products was evaluated as the capacity to reduce whey-specific sensitisation and allergic reactions to intact whey after intraperitoneal post-immunisations with intact whey. Additionally, the direct effect of the whey products on the growth of gut bacteria derived from healthy human infant donors was evaluated by in vitro incubation. Results Two partially hydrolysed whey products with different physicochemical characteristics were found to be superior in preventing whey-specific sensitisation compared to intact and extensively hydrolysed whey products. Daily oral amoxicillin administration, initiated one week prior to intervention with whey products, disturbed the gut microbiota but did not impair the prevention of whey-specific sensitisation. The in vitro incubation of infant faecal samples with whey products indicated that partially hydrolysed whey products might confer a selective advantage to enterococci. Conclusions Our results support the use of partially hydrolysed whey products for prevention of cow’s milk allergy in atopy-predisposed infants regardless of their microbiota status. However, possible direct effects of partially hydrolysed whey products on gut microbiota composition warrants further investigation.
DNA extraction and amplicon sequencing of the 16S rRNA gene
DNA was extracted from faeces or small intestine content with the DNeasy PowerLyzer PowerSoil Kit (Qiagen, Hilden, Germany) according to the manufacturer's protocol. Mechanical lysis of bacteria was conducted twice at 30 cycles/s for 5 min using an MM300 bead beater (Retsch, VWR, Haan, Germany).
The V3 region of the 16S rRNA gene was amplified using a universal forward primer (PBU 5'-A-adapter-TCAG-barcode-CCTACGGGAGGCAGCAG-3') with a unique 10-12 bp barcode for each sample (IonXpress barcodes as suggested by the supplier, Life Technologies, Carlsbad, CA, US), a universal reverse primer (PBR 5'-trP1-adapter-ATTACCGCGGCTGCTGG-3'), and Phusion High-Fidelity DNA polymerase (Thermo Fisher Scientific, Waltham, MA, US). PCR products were purified with the HighPrep™ PCR Clean-up System (Magbio, Gaithersburg, MD, US) according to the manufacturer's protocol. DNA concentrations were determined with the Qubit HS assay (Life Technologies). Finally, a library was constructed by mixing an equal amount of PCR products from each sample. Sequencing of all samples was performed on a 318 chip for Ion Torrent sequencing using the Ion OneTouch™ 200 Template Kit v2 DL (Life Technologies).
Preparation of defined culture mix
Frozen stocks of Bifidobacterium longum ssp. infantis (NCIMB 702205), Lactobacillus rhamnosus (ATCC 53103) and Enterococcus faecalis (DSM 20478) were thawed and plated on Bifidus Selective Medium (BSM) agar plates for 2 days (B. longum), de Man, Rogosa and Sharpe (MRS) plates for 2 days (L. rhamnosus) or blood agar plates for 1 day (E. faecalis). Single colonies were inoculated in Gifu Anaerobic Medium (GAM) broth and incubated anaerobically overnight. Finally, the three cultures were mixed to equal optical densities, supplemented with glycerol in saline to a final concentration of around 15% (v/v), and frozen at -80°C in aliquots.
Real-time PCR conditions
The 16S rRNA-targeting primers used in this study are listed in Table S2. The total reaction volume was 11 µL, containing 5.5 μL LightCycler® 480 SYBR Green I Master (Roche), 2.2 pmol of each primer (TAG Copenhagen, Denmark), 2 ng template DNA, and nuclease-free, PCR-grade water (Qiagen). The reaction conditions were: pre-incubation at 95°C for 5 min, followed by 45 cycles of 95°C for 10 s, 60°C for 15 s and 72°C for 45 s. Lastly, a melting curve was generated (95°C for 5 s, 68°C for 1 min, and an increase to 98°C at a rate of 0.11°C/s with continuous fluorescence detection). The qPCR was run in 384-well format on a LightCycler® 480 II (Roche Applied Science) and analysed using the LightCycler® 480 software.
Real-time PCR data handling
For each incubation replicate, the mean threshold cycle (Ct) value of the qPCR triplicates was used to calculate the abundance of the genera Bifidobacterium, Lactobacillus and Enterococcus relative to total bacteria (n_target/n_total) using 2^ΔCt, as described elsewhere (4). ΔCt is the Ct value of the bacterial target normalised to the Ct value of the total bacterial population in the same incubation sample. Furthermore, the ratio between the relative abundance of a bacterial target in incubations with the different whey products relative to iW (n_treated/n_iW) was calculated using 2^ΔΔCt, where ΔΔCt is the ΔCt value of a given sample normalised to the median ΔCt of three iW samples.

Relative abundance of the most abundant bacterial genera in the small intestine of individual rats. Only those genera with a relative abundance of more than 0.05 in at least one rat are shown. The remaining genera are grouped into "Other".
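As an illustration of the calculation just described, here is a minimal Python sketch. It is not the authors' code; the Ct values are hypothetical, and the sign convention (ΔCt = Ct_total − Ct_target, so that more abundant targets yield larger 2^ΔCt values) is an assumption that may differ from the paper's exact convention.

```python
from statistics import mean, median

def delta_ct(ct_target_triplicate, ct_total_triplicate):
    # Mean Ct over qPCR triplicates; target normalised to total bacteria.
    # Sign convention assumed: dCt = Ct_total - Ct_target.
    return mean(ct_total_triplicate) - mean(ct_target_triplicate)

def relative_abundance(d_ct):
    # n_target / n_total, the 2^dCt form used in the text.
    return 2 ** d_ct

def treated_vs_iw_ratio(d_ct_sample, d_ct_iw_samples):
    # n_treated / n_iW via 2^ddCt, normalising to the median dCt of the
    # three intact-whey (iW) incubations.
    dd_ct = d_ct_sample - median(d_ct_iw_samples)
    return 2 ** dd_ct

# Hypothetical triplicate Ct values for one target genus and total bacteria:
d_ct_pw = delta_ct([24.1, 24.3, 24.2], [18.0, 18.1, 17.9])
d_ct_iw = [delta_ct([25.0, 25.1, 24.9], [18.2, 18.0, 18.1]) for _ in range(3)]
print(relative_abundance(d_ct_pw))            # abundance in the treated incubation
print(treated_vs_iw_ratio(d_ct_pw, d_ct_iw))  # enrichment relative to iW
```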
"Medicine",
"Biology"
] |
The interplay of DAMPs, TLR4, and proinflammatory cytokines in pulmonary fibrosis
Pulmonary fibrosis is a chronic debilitating condition characterized by progressive deposition of connective tissue, leading to a steady restriction of lung elasticity, a decline in lung function, and a median survival of 4.5 years. The leading causes of pulmonary fibrosis are inhalation of foreign particles (such as silicosis and pneumoconiosis), infections (such as post COVID-19), autoimmune diseases (such as systemic autoimmune diseases of the connective tissue), and idiopathic pulmonary fibrosis. The therapeutics currently available for pulmonary fibrosis only modestly slow the progression of the disease. This review is centered on the interplay of damage-associated molecular pattern (DAMP) molecules, Toll-like receptor 4 (TLR4), and inflammatory cytokines (such as TNF-α, IL-1β, and IL-17) as they contribute to the pathogenesis of pulmonary fibrosis, and the possible avenues to develop effective therapeutics that disrupt this interplay.
Introduction
Pulmonary fibrosis is a chronic restrictive lung disease characterized by a progressive decline in lung volume capacity, resulting from many chronic inflammatory disorders affecting the lung [1][2][3]. The visibility of pulmonary fibrosis, in particular, has significantly increased during the 2020 COVID-19 pandemic [4,5]. Most hospitalized patients with COVID-19 have bilateral interstitial pneumonitis, as indicated by ground-glass opacities [6], and many show signs of fibrosis with their lung capacity reduced by up to 30% [7,8]. In addition to infections such as COVID-19, pulmonary fibrosis can also occur in the contexts of repeated inhalation of foreign particles (such as silicosis and pneumoconiosis) and autoimmune diseases (such as systemic autoimmune diseases of the connective tissue) [9,10]. The prototypical form of chronic fibrotic condition of the lung, however, is idiopathic pulmonary fibrosis (IPF), for which only pirfenidone (Esbriet, Genentech) [11] and nintedanib (Ofev, Boehringer Ingelheim) [12] have been FDA-approved to attenuate the rate of disease progression. IPF's median survival from diagnosis is 4.5 years [13], underlining the urgent medical need for more effective therapeutic approaches. Multiple genomewide association studies (GWAS) have reported genetic association signals in patients with IPF, stressing the importance of host defense, cell-cell adhesions, and DNA repair in the pathogenesis of the disease [14][15][16][17][18]. Furthermore, the altered host defense mechanisms explain not only the possible triggering of pulmonary fibrosis by chronic inflammation and viral infection but also the susceptibility of pulmonary fibrosis patients to viral-induced exacerbations [19].
During the past 10 years, damage-associated molecular pattern (DAMP) molecules have been shown to play a vital role in promoting exacerbation, remodeling, and silent progression of pulmonary fibrosis [20]. Toll-like receptors (TLRs), by virtue of being pattern recognition receptors of DAMPs, have been identified as critical mediators through which DAMPs exert their effect in cellular microenvironments. It is now clear that inflammation, though not the only trigger of fibrosis, plays a key role in the activation of fibroblasts, a cellular process critical in the development of pulmonary fibrosis. The pathogenetic model that we present in this review focuses on how DAMP signaling at the cellular level tilts the scale from remodeling and fibrosis resolution towards self-perpetuating cycles of connective tissue deposition leading to clinically relevant fibrosis.
DAMPs and TLR4 in pulmonary fibrosis
Intermittent episodes of transient inflammation in the lungs triggered by pathogens, chemical irritants, or autoimmunity can result in the necrosis and apoptosis of the epithelial cells and cause the release of intracellular components that act as DAMPs. The released DAMPs then activate homeostatic processes that most often promote the resolution of the insult underlying the inflammatory process. However, during more prolonged pathological states, this process is exaggerated, turning the homeostatic pulmonary environment into a self-perpetuating cycle of inflammation and DAMP release, resulting in pulmonary fibrosis.
Multiple structurally diverse DAMPs have been identified to act as mediators for this vicious cycle [20]. These include intracellular peptides [21], glycoproteins [22,23], phospholipids [24], and even nucleic acids [25,26] that are released to the environment during cell injury and necrosis processes which drive progressive tissue fibrosis. Once released, these endogenous ligands exert their effect mainly through TLRs [22]. TLRs are pattern recognition receptors to which DAMPs bind and, with the help of adaptor proteins, activate intracellular signal transduction cascades eliciting changes in gene expression and altering various cellular activities. Here, we focus on the profibrotic role of TLR4.
Among the TLRs, TLR4 has been shown to have a profibrotic effect in the lung when stimulated by DAMPs [27]. The first series of publications that illuminated the role of the TLR4 pathway in fibroblasts showed that activation of TLR4 enhances the process of fibrosis in the liver by downregulating the transforming growth factor (TGF)-β pseudoreceptor Bambi through the TLR4 → MyD88 → NF-κB pathway, which causes sensitization of hepatic stellate cells (HSCs) to TGF-β1-induced signals and allows unrestricted activation of HSCs and differentiation to extracellular matrix (ECM)-producing myofibroblasts [28]. In this pioneering work, TLR4 was stimulated using lipopolysaccharide (LPS), a well-known and highly sensitive TLR4 activator [29-31]. Almost 11 years later, a similar effect was observed in persistent fibrosis of the lung through TLR4/myeloid differentiation 2 (MD2) complex-related pathways and activation of pulmonary fibroblasts to myofibroblasts [32]. The stimulatory molecules used by Bhattacharyya et al. were tenascin-C (a multifunctional hexameric ECM protein) and fibronectin-extra domain A (Fn-EDA), which are potent TLR4 agonists generated within injured pulmonary extracellular microenvironments [33][34][35][36][37]. The role of TLR4-activating DAMPs in pulmonary fibrosis has been further evaluated with high-mobility group box 1 (HMGB1), a potent inducer of TLR4 [38]. HMGB1 is highly expressed in IPF lungs, and its blockade with antibodies attenuates bleomycin-induced fibrosis [39]. Along the same line is the small heat shock protein alphaB-crystallin (HSPB5), implicated in the TLR4-dependent induction and progression of pulmonary fibrosis [40,41]. Mice deficient in HSPB5 had an attenuated response to bleomycin-induced pulmonary fibrosis [42]. Another category of TLR4 agonists that has recently been identified as involved in the progression of pulmonary fibrosis consists of S100 proteins. Higher levels of S100A4 have been shown to independently correlate with worse disease progression in IPF [43], and S100A4 has been shown to contribute to fibrosis by activating pulmonary fibroblasts [44] (Table 1).
The induction of DAMPs following tissue injury or cell death in chronic inflammatory diseases has been studied extensively [45]. Oxidative stress and ECM stiffness can also damage the microenvironment and contribute to the cycle of sustained fibrosis by the release of DAMPs [46]. However, support for whether this induction happens by direct effects on macrophages or fibroblasts to release DAMPs in the microenvironment is still lacking. Although some studies have suggested that HMGB1 can be induced under these conditions [50], we know of no study that investigated the relationship between ECM stiffness and the induction of profibrotic DAMPs at the cellular level in macrophages or fibroblasts.
Inflammatory cytokines and pulmonary fibrosis
Cytokines are proteins involved in cell signaling, including interferons, interleukins, tumor necrosis factors, and chemokines. Over the past 10 years, much evidence has accumulated on the role of proinflammatory cytokines in fibrogenesis and myofibroblast differentiation [51,52]. Cytokines that did not use to be part of the discussion in pulmonary fibrosis have recently been shown to be integral to several pathways that drive pulmonary fibrosis [53][54][55][56][57]. The overarching mechanisms by which proinflammatory cytokines tip the scale towards fibrogenesis include the recruitment of immune cells, regulation of the fibroblast activation status, and production of other profibrotic cytokines, among which is TGF-β1, the master regulator of fibrosis. Proinflammatory cytokines can be regulated in pulmonary fibrosis by oxidative stress and redox signaling through induction of mitochondria-derived ROS [58][59][60], NADPH oxidase (NOX) [61][62][63][64][65], and antioxidant depletion [60,[66][67][68]. They can also be regulated by ECM stiffness through deposition of collagen [69] and cross-linking with fibronectin [70] in the fibrotic tissue microenvironment. As discussed later, evidence shows that these cytokines can also be induced by DAMP stimulation of macrophages and fibroblasts. Regardless of how they are induced, proinflammatory cytokines have been shown to be profibrotic players in the early phase of fibrosis [51]. At the cellular level, these cytokines exert their effect by three mechanisms: directly inducing fibroblast activation, causing the release of profibrotic cytokines (including TGF-β1) in immune cells/fibroblasts, or promoting the persistent autocrine/paracrine activation of fibroblasts. Among the most studied proinflammatory and profibrotic cytokines are tumor necrosis factor-alpha (TNF-α), interleukin (IL)-1β, and IL-17.
TNF-α
In the case of TNF-α, all three cellular mechanisms of fibrosis have been described [71][72][73][74][75]. The profibrotic effect of TNF-α can be seen in the lungs of patients with IPF, which express high levels of TNF-α [76]. TNF-α released from M1 macrophages (classically activated macrophages, involved in the secretion of proinflammatory cytokines) not only changes the phenotype of other macrophages and fibroblasts from reparative to inflammatory and delays tissue repair [77,78] but also induces the release of TGF-β1 and platelet-derived growth factor (PDGF) from fibroblasts, which in turn mediate fibroblast activation and production [79,80]. Furthermore, even quiescent fibroblasts, which are resistant to activation by TLR agonists, will respond to TNF-α [81,82]. TNF-α stimulates fibroblasts to secrete lumican and to express integrins that promote persistent activation of fibroblasts in an autocrine and paracrine fashion [83][84][85]. Less is known, however, about the release of TNF-α in the fibrotic microenvironment. ROS intermediates regulate the release of TNF-α from macrophages and fibroblasts [86], and NOX-generated ROS participate in TNF-α-induced expression of vascular cell adhesion molecule 1 (VCAM-1) [87], a cell adhesion molecule highly expressed in the lungs of IPF patients [88] that is required for fibroblast activation [89]. The role of ECM stiffness in the release of TNF-α in a cellular fibrotic microenvironment is less clear. ECM stiffness has been shown to increase the release of TNF-α from RAW 264.7 murine macrophages [90]. However, the release of TNF-α was inversely proportional to ECM stiffness in THP-1 human macrophages [91]. Further studies are required to determine the effect of ECM stiffness on TNF-α release in the fibrotic pulmonary microenvironment.
IL-17
Like TNF-α, IL-17 has been shown to play an important role in pulmonary fibrosis. Higher levels of IL-17 are found in lung tissues of IPF patients [92]. The mechanisms by which IL-17 is involved in the induction of fibrosis are likely very similar to those of TNF-α [93][94][95]. Furthermore, TNF-α and IL-17 have been shown to be the leading players in the recruitment of immune cells in the early stages of fibrosis [96]. The combination of these effects means that, overall, TNF-α and IL-17 are involved in sustained and intense activation of fibroblasts [97]. However, evidence has emerged that the effects of IL-17 on pulmonary fibrosis may be temporally distinct from those of TNF-α. While IL-17 has been shown to enhance the proliferation of fibroblasts [98], collagen deposition does not increase in the presence of IL-17 [99], and in fact, the signaling pathway of IL-17 is downregulated during collagen deposition [100]. Nevertheless, the precise role of IL-17 in fibroblast activation remains to be elucidated. The role of oxidative stress in the production of IL-17 has also remained unclear. While ROS induce TNF-α expression in macrophages and fibroblasts [87,101] and aid IL-17 induced proliferation of fibroblasts [102], they have not been shown to increase the expression of IL-17 directly. Furthermore, to our knowledge, no study has yet shown the correlation between ECM stiffness and IL-17 expression on macrophages.
IL-1β
The profibrotic role of IL-1β has long been known: mice overexpressing IL-1β have an exacerbated response to bleomycin-induced lung fibrosis [103]. Like TNF-α, IL-1β is a potent proinflammatory cytokine that induces activation of fibroblasts via the release of profibrotic cytokines like TGF-β1 [104]. Multiple pathways have been studied in connection with the direct effect of IL-1β on fibroblast activation [105][106][107]. Some studies have suggested that IL-1β is a cytokine upstream of IL-17 or that the profibrotic effect of IL-1β is contingent on IL-17 [108][109][110]. Other studies have indicated that the profibrotic effects of IL-1β are mediated through the IL-1 receptor 1 (IL-1R1)/myeloid differentiation primary response 88 (MyD88) pathway [111,112]. Further studies are needed to elucidate the exact mechanism by which IL-1β tilts the immune cells and fibroblasts towards persistent fibrosis in the lung microenvironment.
Connecting DAMPs, TLR4, and proinflammatory cytokines
In the previous sections, we reviewed the profibrotic effects of individual DAMPs and proinflammatory cytokines in the development of fibrosis. However, it should be noted that the interplay between DAMPs and cytokines exerts a critical role in the development and sustainment of fibrosis. The interaction between DAMPs and TLR4 causes the release of numerous proinflammatory cytokines on macrophages and fibroblasts [113,114]. These cytokines can, in turn, activate other macrophages and fibroblasts, as described in the previous section. This interplay has been demonstrated by induction of TNF-α and IL-1β expression in fibroblasts by activating the TLR4 pathway [115] using LPS. HMGB1 has also been shown to induce TNF-α and IL-1β signaling in macrophages through the TLR4-dependent pathway [85,116,117]. Similarly, HSPB5 has been shown to increase IL-1β and the nuclear localization of Smad4 [42,118], which is likely enhanced by TLR4 signaling [118].
One built-in defense mechanism against the development of pathological fibrosis is the induction of negative feedback loops by cytokines and DAMPs. TGF-β1 and IL-10 released by inflammatory macrophages and fibroblasts, for example, are potent inhibitors of inflammation in macrophages and fibroblasts, which can tilt the organ towards resolution of fibrosis [119][120][121] in the late phase of fibrosis [122]. Furthermore, DAMPs can be protective against or involved in the resolution of fibrosis in some TLR signaling pathways. While fibroblast-specific deficiency of TLR4 has been shown to be protective against fibrosis, and TLR2 has been shown to exacerbate bleomycin-induced pulmonary fibrosis by inducing an oxidative response [123][124][125], mice deficient in both TLR4 and TLR2 have been shown to have increased pulmonary fibrosis in response to radiation injury [126][127][128]. There are also antifibrotic TLRs that counteract the effect of DAMPs on profibrotic TLRs [22,129]. TLR3 has been shown to have an antifibrotic effect through downregulation of the TGF-β1 signaling pathway and autocrine induction of interferon (IFN)-β [130][131][132]. Moreover, TLR3 deficiency in fibroblasts has also been shown to increase collagen deposition and profibrotic cytokines, suggesting a role of DAMPs acting through TLR3 in the resolution of fibrosis [133]. Similarly, TLR9-mediated IFN-β induction in fibroblasts has been shown to be protective against pulmonary fibrosis, and TLR9-deficient mice have exacerbated pulmonary fibrosis [134].
When taken together, a picture emerges that juxtaposes the interaction of DAMPs and cytokines through TLR4 promoting persistent fibrosis and, through other TLRs, the resolution of fibrosis. The pathology ensues when the balance is tilted towards the persistent profibrotic pathway by different sections of the pathway perpetuated through positive feedback. This has therapeutic potential in fibrotic diseases of the lung not only by disrupting TLR4 pathways and DAMPs but also by inducing antifibrotic TLRs.
Therapeutic considerations
While the research in therapeutic approaches to pulmonary fibrosis is ongoing, treatment strategies targeting the DAMPs, TLR4, and proinflammatory cytokines pathway have shown promising results in preclinical models ( Table 2). Anti-HMGB1 antibody significantly attenuated lung fibrosis in a mouse model [39]. In addition, there is evidence that inhibition of HMGB1 will diminish fibroblast activation [135] and can disrupt the process of fibrosis [136]. Furthermore, silencing HMGB1 or its downstream signaling has proven successful in inhibiting the fibrotic process in different conditions [137,138]. Anti-S100A4 has been shown to prevent bleomycin-induced pulmonary fibrosis in mice [44]. While the effect of anti-HSPB5 antibody in pulmonary fibrosis has not been studied, HSPB5-deficient mice have attenuated pulmonary fibrosis in response to bleomycin [42]. Among the extracellular TLR4 agonists present in the pulmonary fibrotic microenvironment, neutralizing tenascin-C is a promising target for antifibrotic therapy. Not only do tenascin-C-deficient mice have an attenuated response to bleomycin-induced lung fibrosis, but this process has also been shown to be TLR4 dependent [35].
While studies have looked at the effect of anti-TLR4 in stopping pulmonary fibrosis, many have failed. This is due to the fact that, while TLR4 drives persistent fibrosis and fibroblast activation, TLR4 is also required for the resolution of fibrosis [139]. However, there is promise that targeting the specific TLR4/MD2 signaling complexes responsible for the profibrotic effect of TLR4 can provide potential therapeutic strategies [32,35].
Anti-TNF-α agents embody the most successful therapeutic approaches to fibrotic lung diseases. While multiple studies have shown therapeutic effects in animal models [140][141][142], a double-blind clinical trial of IPF patients treated with etanercept, a soluble TNF receptor fusion protein that neutralizes TNF-α, improved neither the forced vital capacity nor the diffusing capacity of the lungs, although it showed a non-significant improvement in function and quality-of-life measures [143]. The multicentric double-blind ASCEND trial (Assessment of Pirfenidone to Confirm Efficacy and Safety in Idiopathic Pulmonary Fibrosis) showed that pirfenidone, a non-peptide synthetic molecule with anti-TNF-α activity, reduced disease progression in patients with IPF [11]. Additionally, a study combining the results of two previous trials of pirfenidone in IPF patients [144] observed a significant decrease in the risk of death after treatment [145]. While there has not been a trial evaluating the effect of neutralizing IL-1β in IPF, mice deficient in IL-1R1 are protected and develop attenuated bleomycin-induced pulmonary fibrosis [111]. Moreover, a monoclonal anti-IL-1β antibody has also been shown to attenuate silica-induced fibrosis in mice [146]. Along the same lines, blocking IL-17 has been shown to attenuate pulmonary fibrosis in both silica- and bleomycin-induced pulmonary fibrosis models in mice and to promote resolution of fibrosis [93,147].
Conclusion and perspective
Strong evidence has emerged that pulmonary fibrosis results from a cycle receiving positive feedback at multiple checkpoints that are instigated by DAMP induction of proinflammatory cytokines through TLR4 receptors. The process starts with an injury either from a viral infection, chemical/mechanical trauma, or immune-mediated damage that causes the release of DAMPs in the microenvironment (Fig. 1). The DAMPs then reprogram resident macrophages and fibroblasts towards a proinflammatory/ profibrotic phenotype in a TLR4-dependent process. This prompts the deposition of extracellular collagen leading to ECM stiffness and the further release of DAMPs and proinflammatory/profibrotic cytokines along with the secretion of TGF-β1, the master regulator of fibrosis. TGF-β1, in turn, causes autocrine/paracrine activation of other macrophages and fibroblasts in the microenvironment that feeds the vicious cycle of persistent fibrosis. In our not yet published observations, we have discovered that extracellular cold-inducible RNA-binding protein (eCIRP), a DAMP that causes inflammation and organ injury in sepsis, hemorrhagic shock, and ischemia/reperfusion injury [148,149], also plays an important role in the pathogenesis of pulmonary fibrosis. By targeting eCIRP, we may be able to ameliorate the fibrotic process in the lungs.
Table 2 (excerpt):
TNF-α | (see agents discussed in the text) | [11,140-145]
IL-1β | Anti-IL-1β antibody: canakinumab* | IL-1R1 deficiency in mice and a monoclonal antibody have been shown to attenuate fibrosis in mice [111,146]
IL-17 | Anti-IL-17 antibodies: secukinumab*, brodalumab*, and ixekizumab* | Mouse models have shown an attenuated response to fibrosis [93,147]

In this review, we have focused on a selected number of inflammatory cytokines, namely TNF-α, IL-1β, and IL-17, and showed the interplay of TLR4, DAMPs, and these cytokines. There are, however, other cytokines and chemokines that have been shown to be either involved in this interplay or to contribute to processes occurring in the ECM, such as the production of ROS, which can contribute to the cycle of persistent fibrosis in the lung. In the class of interleukins alone, IL-2, IL-6, IL-9, IL-12, IL-13, and IL-27 have critical roles in the regulation of pulmonary fibrosis [150][151][152][153][154][155][156]. Among these, IL-6's contributing mechanisms to the fibrotic process are likely very similar to those of TNF-α and IL-17 [157]. Just like TNF-α, IL-6 is released by M1 macrophages and changes the phenotype of other macrophages and fibroblasts from reparative to inflammatory and delays tissue repair [158]. Some studies suggest that blocking IL-6 can have the opposite effect on lung fibrosis [159]. This effect is due to the protective effect of the IL-6/Stat3 signaling axis against apoptosis in alveolar epithelial cells, which is imperative for the surfactant synthesis necessary for the protection of the lung during injury [160]. Therefore, the timing of the anti-IL-6 strategy in the treatment of lung injury is crucial in antifibrotic therapeutic approaches [161]. While IL-6 is one of the most studied inflammatory cytokines in pulmonary fibrosis, its precise role in regulating the process of fibrosis in inflammatory diseases of the lung remains to be elucidated.
Additionally, we focused on the fibrotic effect of TLR4 in the early phases of fibrosis in this review. However, as mentioned earlier, TLR4 also plays a crucial role in the resolution of fibrosis in the later phases of fibrosis and remodeling [139]. TLR4−/− mice are more susceptible to intratracheal bleomycin-induced lung fibrosis due to (1) impaired renewal of type 2 alveolar epithelial cells, which are critical cells in the fibrosis repair process [162], and (2) impaired activation of autophagy signaling, leading to accumulation of ROS [139].
Although we focused on macrophages and their interactions with fibroblasts in this review, a wide range of immune cell types is also involved in the progression and resolution of fibrosis [163]. Neutrophils are recruited in the early stages of the fibrotic process; mice depleted of neutrophils have an ameliorated response, and failure to recruit neutrophils protects mice from bleomycin-induced pulmonary fibrosis [164,165]. On the other hand, natural killer (NK) cells may have a protective effect against lung fibrosis [166]. Without NK cell recruitment, the pulmonary environment lacks IFN-γ, an important anti-inflammatory cytokine involved in the resolution of fibrosis [167]. This results in an enhanced fibrosis process in the lung [168,169]. Dendritic cells (DCs), however, may play a dual role in pulmonary fibrosis. Like neutrophils, DCs arrive in the early phases of pulmonary fibrosis in significant numbers, and inhibiting the immune activity of DCs attenuates fibrosis [170]. However, it has also been observed that mice deficient in DCs develop more severe fibrosis and, in contrast, mice equipped with an increasing number of DCs develop milder pulmonary fibrosis after the bleomycin challenge [171]. The mechanisms by which DCs exert their pro/antifibrotic role remain to be further elucidated [172]. We believe macrophages are the most pertinent to this review because they are the master regulators of fibrosis across organs, given that they are the primary providers of TGF-β [173]. Additionally, the close interaction of macrophages with fibroblasts is a critical contributor to the cycle described in this review [174,175].

Fig. 1 The interplay of DAMPs, TLR4, and proinflammatory cytokines in pulmonary fibrosis centered around macrophages and fibroblasts. (1) Injury to the cells, either from a viral infection, chemical/mechanical trauma, or immune-mediated damage, causes the release of DAMPs in the microenvironment. (2) DAMPs stimulate and activate macrophages and fibroblasts through a TLR4-MD2 → MyD88-mediated pathway. (3) Activated macrophages release proinflammatory cytokines such as TNF-α, IL-17, and IL-1β in the tissue microenvironment that, (4) along with TGF-β, activate fibroblasts to become profibrotic and deposit collagen and ECM components like fibronectin and tenascin-C. This causes stiffness of the ECM and oxidative stress in the microenvironment, which (5) causes the release of more DAMPs, leading to the vicious cycle of pulmonary fibrosis. DAMP, damage-associated molecular pattern; HMGB1, high-mobility group box 1; eCIRP, extracellular cold-inducible RNA-binding protein; HSPB5, heat shock protein B5; TLR4, Toll-like receptor 4; MD2, myeloid differentiation factor 2; MyD88, myeloid differentiation primary response 88; NF-κB, nuclear factor kappa-light-chain-enhancer of activated B cells; ECM, extracellular matrix.
In this review, we summarized the current state of knowledge regarding the role of DAMPs, selected proinflammatory cytokines, their interplay through TLRs (more specifically TLR4), and their contribution to cellular processes of lung fibrosis. Furthermore, we highlighted knowledge gaps and summarized the therapeutic potential of targeting this vicious fibrotic cycle at every checkpoint. Given that the issue of persistent fibrosis without resolution in COVID-19, IPF, and other profibrotic lung diseases is far from resolved, it is critical to look deeper into these pathways to illuminate not only the connection between the inflammatory reaction and fibrosis but also develop possible therapeutics that can ameliorate pulmonary fibrosis by disrupting the positive feedback pathways involved.
Author contribution SB and MB wrote the manuscript and prepared the figures. MB and PW revised and edited the manuscript. PW conceived the original idea of this review. All authors read and approved the final manuscript.
Funding This study was supported by the National Institutes of Health (NIH) grants R01HL076179 (PW) and R35GM118337 (PW).
Declarations
Ethics approval This research has full compliance with ethical standards as pertinent to this review article.
Competing interests The authors declare no competing interests.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. | 5,831 | 2021-07-13T00:00:00.000 | ["Medicine", "Environmental Science", "Biology"] |
Pharmacogenomic findings from clinical whole exome sequencing of diagnostic odyssey patients
Abstract Background We characterized the pharmacogenomics (PGx) results received by diagnostic odyssey patients as secondary findings during clinical whole exome sequencing (WES) testing as a part of their care in Mayo Clinic's Individualized Medicine Clinic to determine the potential benefits and limitations for this cohort. Methods WES results on 94 patients included a subset of PGx variants in CYP2C19, CYP2C9, and VKORC1 if identified in the patient. Demographic, phenotypic, and medication usage information was abstracted from patient medical data. A pharmacist interpreted the PGx results in the context of the patients' current medication use and made therapeutic recommendations. Results The majority was young, with a median age of 10 years old, had neurological involvement in the disease presentation (71%), and was currently taking medications (90%). Of the 94 PGx-evaluated patients, 91% had at least one variant allele reported and 20% had potential immediate implications for current medication use. Conclusion Due to the disease complexity and medication needs of diagnostic odyssey patients, there may be immediate benefit obtained from early-life PGx testing for many, and most will likely find benefit in the future. These results require conscientious interpretation and management to be actionable for all prescribing physicians throughout the lifetime of the patient.
Introduction
Recent advances in genetics have provided benefit to individuals with inherited disease through the increasing availability of next generation sequencing (NGS) assays. Clinical whole exome sequencing (WES) tests result in a diagnosis for 25-30% of individuals with rare undiagnosed disease (Yang et al. 2013, 2014; Lee et al. 2014; Zhu et al. 2015; Lazaridis et al. 2016; Retterer et al. 2016). These patients on a diagnostic odyssey often have years from the onset of symptoms until they achieve a genetic diagnosis. WES is increasingly being used to evaluate diagnostic odyssey patients to identify the genetic cause of disease when traditional diagnostic testing has failed to resolve the etiology of disease or the symptoms of the patient do not suggest a likely diagnosis. WES interrogates sequence variation across protein-coding regions of the genome, providing an expansive testing platform for diagnosing congenital conditions. WES also allows results secondary to the primary test indication to be reported, including genetic variation known to affect medication efficacy and toxicity. The impact and utility of these secondary results have been understudied in this unique population of diagnostic odyssey patients.
Pharmacogenomics (PGx), the study of genetic contribution to variability in drug response (Weinshilboum 2003;Weinshilboum and Wang 2004;Wang et al. 2011), has benefited from advances in testing platforms as well as the knowledge of genetic variations contributing to specific drug responses. PGx is increasingly utilized clinically to impact treatment decisions in a growing number of patients, with the majority of patients tested having at least one PGx allele that could affect the medication(s) efficacy or toxicity (Ji et al. 2016). The Food and Drug Administration has issued black-box warnings on several medications with gene-drug interactions, and precautions about others (www.fda.gov/drugs/scienceresearch/researchareas/ pharmacogenetics/ucm083378.htm). Currently, >20 genes impact approximately 80 medications with clinical actionability (Relling and Evans 2015). PGx testing is often ordered for adults taking or being prescribed medications impacted by a known PGx gene. Knowledge of an individual's PGx genotypes could decrease the risk of major adverse drug reactions and improve therapeutic response (Relling and Evans 2015).
Secondary PGx results from diagnostic WES testing are often findings of convenience. Genes easily interrogated by NGS technology (e.g., CYP2C9, MIM:601130) with the majority of informative variants in the coding region, are easy to identify from WES data. However, for genes such as CYP2D6 (MIM124030), standard WES does not perform well and it is difficult to achieve highly accurate and informative results (Kramer et al. 2009;Black et al. 2012;Ji et al. 2016). Limiting PGx testing due to technical challenges may lead to an incomplete profile, minimizing the therapeutic benefit achieved by comprehensive testing. This is particularly relevant to medications metabolized by more than one pharmacogene.
The context in which PGx findings are reported in clinical WES testing is arguably dissimilar to a standalone PGx test. The PGx findings in a WES test are secondary to the variants identified in disease-causal genes that may explain the patient's symptomatology and, therefore, may be overlooked. Reported variants related to the primary genetic condition are already challenging to interpret and explain to the patient, making it even more onerous to put due focus on secondary results. Also, the physician ordering the WES test may not be the physician prescribing the patient's medications, adding another layer of complexity to the management and effective use of the PGx findings. WES test reports are often received as scanned static documents, and integrating this data into a record system capable of alerting prescribing physicians of pertinent PGx results is a significant need. PGx results for diagnostic odyssey patients, thus, have the potential to be overlooked with regard to current or future medication prescribing.
To assess the utility of the PGx secondary findings in clinical WES testing, we reviewed a cohort of individuals evaluated in Mayo Clinic's Individualized Medicine Clinic for undiagnosed disease and tested via clinical WES for the purpose of achieving a genetic diagnosis. Here, we report the PGx findings of this cohort, the immediate implications of these results on medication usage, and the unique characteristics and nuances associated with the appropriate management of these data. To the best of our knowledge, this is the first study evaluating the benefit of secondary PGx findings reported by a clinical WES test for patients seeking a genetic diagnosis.
Ethical compliance
The Mayo Clinic Institutional Review Board granted a waiver of consent for this study. To this end, it was the responsibility of the corresponding author of the study and/or his designee to check a patient's Minnesota research authorization status before reviewing any medical records generated from care received in the state of Minnesota for all patients included in this study. No patient included in this study declined Minnesota research authorization.
Patients
All patients included in this study were referred to Mayo Clinic's Individualized Medicine Clinic for a suspected genetic disorder, were evaluated by a medical geneticist, and counseled by a genetic counselor prior to pursuing WES for the purpose of elucidating the genetic etiology of disease. Each patient's current medication usage and demographics were abstracted from chart review. The patient's genetic disease phenotypes were abstracted from the clinical WES report, as reported by the ordering clinical geneticist.
Reported Baylor Genetics Methodology for Whole Exome Sequencing: "Whole exome sequencing (WES): for the paired-end precapture library procedure, genomic DNA is fragmented by sonication and ligated to the Illumina multiplexing PE adapters. The adapter-ligated DNA is then PCR amplified using primers with sequencing barcodes (indexes). For the target enrichment/exome capture procedure, the precapture library is enriched by hybridizing to biotin-labeled VCRome 2.1 in-solution exome probes (Bainbridge et al. 2011) at 47°C for 64-72 h. Additional probes for over 3600 Mendelian disease genes were also included in the capture in order to improve the exome coverage. For massively parallel sequencing, the postcapture library DNA is subjected to sequence analysis on the Illumina HiSeq platform for 100 bp paired-end reads. The following quality control metrics of the sequencing data are generally achieved: >70% of reads aligned to target, >95% of target bases covered at >20X, >85% of target bases covered at >40X, mean coverage of target bases >100X. SNP concordance to genotype array: >99%. This test may not provide detection of certain genes or portions of certain genes due to local sequence characteristics or the presence of closely related pseudogenes. Gross deletions or duplications and changes in repetitive sequences may not be accurately identified by this methodology. As a quality control measure, the individual's DNA is also analyzed by a SNP array (Illumina HumanExome-12v1 array). The SNP data are compared with the WES data to ensure correct sample identification and to assess sequencing quality. Sanger confirmation is noted in the "References/Comments" section of the tables if performed. It should be noted that the data interpretation is based on our current understanding of genes and variants at the time of reporting. Pharmacogenetic variants are limited to CYP2C9*2, CYP2C9*3, CYP2C9*5, CYP2C9*6, VKORC1 c.-1639G>A, CYP2C19*2, CYP2C19*3, CYP2C19*4, CYP2C19*5, CYP2C19*8, CYP2C19*10, and CYP2C19*17." Sanger confirmation of pharmacogenomic variants was not routinely performed.
For each patient with a reported PGx finding, a pharmacist reviewed the PGx results and the patient's current medication usage documented in the electronic medical record (EMR) to provide a clinical interpretation in a pharmacy eConsult. Multiple resources were consulted for reviewing each genotype and gene-drug relationship, including the Clinical Pharmacogenetics Implementation Consortium (CPIC) guidelines (https://cpicpgx.org), UpToDate (https://www.uptodate.com), Micromedex (http://www.micromedexsolutions.com), and AskMayoExpert (Cook et al. 2015). Thus, for drug-gene relationships that lacked CPIC guidelines, multiple resources were consulted and reviewed to assess their relevance prior to providing recommendations. These recommendations were documented in the patient's EMR to serve as a resource for the medical geneticist to act upon.
CYP2C9 variant allele frequencies
The variant allele frequencies were calculated from the Exome Aggregation Consortium data (Lek et al. 2016) for each population represented in the data as well as from a Qatari population using recently published data (Fakhro et al. 2016). The *1 allele (wildtype) was calculated by subtracting the sum of the variant alleles from 1.
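As a minimal sketch of this calculation in Python (the variant allele frequencies below are placeholders, not values from ExAC or the Qatari dataset):

# Hypothetical CYP2C9 variant allele frequencies for one population;
# real values come from ExAC (Lek et al. 2016) / Fakhro et al. 2016.
variant_freqs = {"*2": 0.08, "*3": 0.06, "*5": 0.01, "*6": 0.01}

# The wildtype (*1) frequency is the remainder after the variant alleles
star1_freq = 1.0 - sum(variant_freqs.values())
print(f"*1 allele frequency: {star1_freq:.3f}")  # 0.840 for these inputs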
Results
From September 2012 to November 2015, the Individualized Medicine Clinic saw 98 patients who received clinical WES results for the purpose of identifying the genetic cause of their disease and who could optionally receive secondary PGx results (Lazaridis et al. 2016). This cohort was primarily pediatric (Fig. 1A); the median age at the time of testing was 10 years. A majority of patients had neurological involvement in the disease presentation (71%). Eighty-eight patients (90%) were taking a total of 609 medications including 237 unique medications. The cohort was 77% white, 5% black, and 1% Asian, with 17% having designated their race as "unknown" or "other", and one patient having not disclosed race (Fig. 1B). Importantly, during review of the patients' pedigrees and family histories it was determined that those patients who self-identified as "unknown" or "other" were of Middle Eastern ancestry.
A clinical pharmacist reviewed the EMR of each patient with PGx variant alleles reported for current medication usage and made medication management recommendations based on potential gene-drug interactions. Recommendations were recorded as a clinical note by Pharmacy, and are accessible to any prescribing physician. Nineteen patients (20%) received recommendations for their current medication use as a result of the PGx variant alleles reported in their clinical WES test (Fig. 1C).
Cytochrome P450 2C19 (CYP2C19) metabolizes medications including proton-pump inhibitors (PPIs), antiepileptics, and the antiplatelet medication clopidogrel, among others. For CYP2C19, of the seven variant alleles reported by the testing facility, only the *2 and *17 alleles were identified in our cohort. The *2 variant is a loss-of-function allele and *17 an increased-activity allele. The Baylor Genetics clinical test reports only specific variant alleles when identified in a patient and, therefore, the *1 allele (wildtype) was inferred in the absence of a reported variant allele. For example, a single heterozygous *2 variant identified in CYP2C19 was interpreted as the patient being the *1/*2 genotype. Likewise, a patient with no CYP2C19 variants reported was interpreted as being the *1/*1 genotype. The corresponding drug metabolism phenotype for each genotype, according to the 2016 CPIC term standardization, is shown in Table 1. A distribution of each drug metabolism phenotype across the patient cohort is illustrated in Fig. 2A. Of the 94 patients evaluated for these PGx alleles, 41% were classified as normal, 26% as rapid, 3% as ultrarapid, 24% as intermediate, and 5% as poor metabolizers for CYP2C19. These percentages are consistent with those in the 2013 CPIC guidelines (Scott et al. 2013). The medications for which management recommendations were made by the pharmacist, mainly consisting of antiepileptics, anticonvulsants, and PPIs, are shown in Table 1.
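A minimal sketch of this inference logic (the phenotype mapping follows the standard CPIC metabolizer terms for CYP2C19 as we understand them; treat both the mapping and the names as illustrative, not as the reporting laboratory's algorithm):

# Keys are genotypes in Python string-sort order (note "*17" sorts before "*2").
CYP2C19_PHENOTYPES = {
    ("*1", "*1"): "normal",
    ("*1", "*2"): "intermediate",
    ("*1", "*17"): "rapid",
    ("*17", "*17"): "ultrarapid",
    ("*17", "*2"): "intermediate",
    ("*2", "*2"): "poor",
}

def infer_genotype(reported_alleles):
    """Pad reported variant alleles with inferred *1 wildtype alleles."""
    alleles = reported_alleles + ["*1"] * (2 - len(reported_alleles))
    return tuple(sorted(alleles))

print(CYP2C19_PHENOTYPES[infer_genotype(["*2"])])  # intermediate (*1/*2)
print(CYP2C19_PHENOTYPES[infer_genotype([])])      # normal (*1/*1)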
Cytochrome P450 2C9 (CYP2C9) metabolizes nonsteroidal anti-inflammatory drugs (NSAIDs), antiepileptics, and the anticoagulant warfarin, among other medications. Of the four variant alleles reported by the testing facility, only the *2, *3, and *6 variant alleles were identified in our cohort. As with the genotype interpretation for CYP2C19, we inferred that the *1 allele is present for CYP2C9 in the absence of a reported variant allele. The inferred genotypes and interpreted metabolism phenotypes are shown in Table 1. Of the 94 patients evaluated for PGx variant alleles, 73% were normal, 18% were intermediate, and 8% were poor metabolizers. No patients were currently taking medications impacted by the CYP2C9 variant alleles reported (Fig. 2B and Table 1).
Vitamin K epoxide reductase complex subunit 1 (VKORC1) is responsible for reducing and activating vitamin K, thereby allowing blood clot formation. The anticoagulant warfarin is an antagonist of this enzyme, and a polymorphism in the VKORC1 promoter (c.-1639G>A) alters the dose required for effective anticoagulation. Heterozygous and homozygous carriers of this polymorphism were identified in our cohort of patients. Warfarin is primarily metabolized through CYP2C9; therefore, both VKORC1 and CYP2C9 PGx variant alleles contribute to warfarin dosing recommendations according to current CPIC guidelines (Johnson et al. 2011). The VKORC1 and inferred CYP2C9 genotypes are shown in Table 2, grouped by the warfarin dosing recommendations. The *6 variant allele is interpreted in the same manner as the *3 variant allele, since it is a null allele. These data are summarized in Fig. 2C. The suggested warfarin dosing according to the CPIC guidelines is 5-7 mg/day for 66%, 3-4 mg/day for 27%, and 0.5-2 mg/day for 7% of the 94 patients evaluated for PGx variants. As we interpreted the PGx findings in our cohort and made medication recommendations, we hypothesized that inference of the wildtype (*1) allele for CYP2C19 and CYP2C9 may not always be accurate. Other actionable variant alleles identified in these genes are not included in the subset of variant alleles reported in the clinical test ordered for these patients. Consequently, it is possible we may incorrectly infer a *1 allele when, in fact, a patient has one of these nonreported actionable variants. The inference of the *1 allele, and the metabolizer phenotype interpreted from that genotype, could then lead to inappropriate medication recommendations.
We also determined the variant allele frequencies from a Qatari population using recently published data (Fakhro et al. 2016), since nearly 17% of our cohort is of Middle Eastern descent. Only four of the actionable alleles we assessed were identified in CYP2C9 in the Qatari population data, namely *2, *8, *9, and *11 (Fig. 3 and Table S1). For this population, the nonreported actionable alleles accounted for 1.7% of alleles, which could be incorrectly inferred as *1 in the clinical WES test.
Discussion
Pharmacogenomics (PGx) examines how genetic variation may inform medication response. Here, we have reported PGx findings in a cohort of diagnostic odyssey patients, and these findings may be informative to the clinical care of this population. The reported genes (CYP2C19, CYP2C9, and VKORC1) and genotypes predict a patient's response to the well-known medications warfarin and clopidogrel; in addition, CYP2C19 and CYP2C9 are critical to the activation and clearance of at least 23 other medications (https://cpicpgx.org/genes-drugs/). Thus, the appropriate interpretation of these findings is key to individualizing medication prescription, avoiding medication toxicity, and maximizing therapeutic response.
The interpretation, communication, and management of these results, however, are not without unique challenges. Because PGx results are only informative when particular medications are used, it is imperative they are interpreted in the context of the patient's medication needs. To be clinically actionable, they must be readily available at the time of medication prescription or review. As such, PGx results may be informative not only at the time the clinical test report is returned, but also at any future time the patient is prescribed new medications. Furthermore, additional gene-drug interactions are likely to be discovered, requiring PGx results to be actively maintained and dynamically interpretable.
There are both technical and clinical barriers to the appropriate access and use of PGx results. Considerable effort has been made to implement clinical decision support (CDS) systems with automatic alerts to notify a prescribing physician when a relevant gene-drug interaction is present for a patient, and with educational components to assist the clinician in understanding the alerts (Arwood et al. 2016; Caraballo et al. 2016; Hicks et al. 2016; Hoffman et al. 2016; Manzi et al. 2016; St Sauver et al. 2016). Even with these systems, however, barriers to the successful and efficient integration of PGx results exist. The clinical reports for our patients are PDF files generated by an outside institution and scanned into the patient's EMR, a format that does not allow the institution's PGx CDS system to create alerts from the findings. To ensure the prescribing physicians have access to the PGx results for their patients, a pharmacy consult was conducted for each patient with PGx findings. Without added steps to highlight these secondary findings, institutions could be at significant risk and liability for mismanagement of their patients through failure to recognize PGx results in the medical record.
Importantly, 20% of the 94 patients evaluated for PGx variants were taking a medication potentially impacted by the PGx finding. The majority of the patients in our cohort were pediatric with neurological involvement, often including seizures, behavioral disorders, developmental delay, or intellectual disability. The majority of the prescribed medications with relevant PGx results were for the management of gastroesophageal reflux or seizure disorders; eight patients were taking diazepam and seven patients were taking omeprazole. The other medications with potential PGx variant impact included citalopram, clobazam, esomeprazole, lacosamide, and sertraline. Of the 98 patients who received clinical exome sequencing for a suspected genetic disorder, 21 patients (21%) had seizures included in their primary reason for referral.
There are, however, limitations to the interpretation of the PGx results presented in this cohort. Drug metabolism pathways are complex, and often more than one PGx gene is involved in the metabolism of a particular medication. When only a subset of PGx genes is tested, the pharmacist is limited in what medication management recommendations can be made. For example, 15 individuals were taking diazepam, of whom eight had variant alleles identified in CYP2C19 that may influence its efficacy or toxicity. However, diazepam is a major substrate of both CYP2C19 and CYP3A4 (MIM:124010), which metabolize it into the active metabolites N-desmethyldiazepam, temazepam, and oxazepam; depending on the rate of production of these metabolites, efficacy and toxicity can be affected (Whirl-Carrillo et al. 2012). Therefore, a full understanding of the genetic influence on the efficacy and toxicity of diazepam can only be achieved by evaluating the genetic variation in both genes.
While 91% of patients in this study had at least one PGx variant reported in their clinical WES results, expanding the number of genes tested by only three would increase the number of patients with reported variants to nearly 100%, according to a recent study (Ji et al. 2016). Of the 88 patients (90%) taking medications in the cohort, 60 patients were prescribed a medication with known potential gene-drug interactions, suggesting the utility of expanded testing in this population of patients. If we were to pursue PGx testing based on individual medication usage and according to the current actionable gene-drug pairs for drug-metabolizing genes used at Mayo Clinic (Cook et al. 2015), 43 patients would be tested for CYP3A4/5, 35 for CYP2C19, 22 for CYP2D6, 8 for CYP1A2 (MIM:124060), and 9 for CYP2C9. A recent study from the NIH Undiagnosed Diseases Program (Lee et al. 2016) also established that PGx results were informative for guiding therapy in their cohort of 308 families. Lee and colleagues evaluated single-nucleotide changes that have been reported to impact drug efficacy based on the Pharmacogenomics Knowledgebase (PharmGKB). They report nine patients with potential gene-drug interactions involving the genes HTR2C (MIM:312861), EPHX1 (MIM:132810), OPRM1 (MIM:600018), F13A1 (MIM:134570), and NOS3 (MIM:163729). As the cost of testing continues to decrease, it may be reasonable to expand the breadth of genetic testing for these patients to include more or "all" of the PGx genes.
A common challenge of PGx test interpretation is inferring the presence of the wildtype, or *1, allele, in the absence of a reported result. As we show with the allele frequencies of CYP2C9 across different populations, not reporting all actionable variants could lead to incorrectly inferring a *1 allele when an individual actually carries a nonreported but actionable allele. In the African population in the ExAC data, 15.4% of alleles are actionable but not reported when only reporting *2, *3, *5, and *6 variant alleles. That means on average 16.0% of inferred *1 alleles for this population are incorrect. Recommending additional variant testing may be warranted for individuals from this population if CYP2C9 metabolizer status is important to medications the patient may need. This limitation has been described with regard to warfarin dosing recommendations for the African American population and making dosing predictions without including the common African genotypes was associated with inappropriate dosing (Cavallari et al. 2010;Drozda et al. 2015).
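To make the arithmetic explicit (a back-of-the-envelope check, assuming the reported *2, *3, *5, and *6 alleles account for roughly 4% of alleles in this population): any allele that is not one of the four reported variants is inferred to be *1, so

$$\Pr(\text{inferred } {*1}\text{ is incorrect}) = \frac{f_{\text{nonreported actionable}}}{1 - f_{\text{reported}}} \approx \frac{0.154}{1 - 0.04} \approx 0.160.$$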
The inference of *1 alleles is also potentially problematic for patients from populations who are underrepresented in terms of genetic sequence data. Approximately 17% of our patients are of Middle Eastern descent. There are limited large sequence datasets that include individuals of Middle Eastern descent; consequently, reference databases like ExAC have limited information on these populations. This lack of data makes interpretation of genetic results from individuals with these ethnicities challenging. Analysis of recent data from a Qatari population (Fakhro et al. 2016) as well as from East Asian and Latino populations in the ExAC database (Lek et al. 2016) identified fewer of the known PGx alleles. Further study of the variation present in specific populations contributing to drug metabolism phenotypes will improve our ability to interpret PGx results for these individuals.
Pediatric medication dosing is often difficult to determine due to the paucity of clinical studies focusing on children and the difficulty of translating recommended adult dosing paradigms into pediatric care (Leeder et al. 2014). Although total body size is a contributing factor to achieving appropriate active medication levels, other factors may impact drug response, including body composition, body proportions, and age-related differences in gene expression profiles throughout human development. Pharmacodynamic, pharmacokinetic, and subsequent pharmacogenomic studies are challenging to conduct in children (van den Anker et al. 2011; Neville et al. 2011; Kearns and Artman 2015). Pharmacokinetics is heavily driven by drug metabolism, and our understanding of the development of the drug-metabolizing enzyme system from birth to adulthood is incomplete (Koukouritaki et al. 2004). CYP2C enzyme expression is activated around the time of birth, with enzyme levels at ~30% of adult levels in the first year of life, largely comprising CYP2C9 (Hines and McCarver 2002). The transition of CYP2C expression to adult levels throughout childhood is poorly understood (Treluyer et al. 2000; Hines and McCarver 2002). This is further complicated by recent studies showing that CYP protein expression and enzyme activity can be discordant (Sadler et al. 2016). While we understand that these differences by age and stage of development exist, contributing to therapeutic variability, our ability to predict appropriate dosing requirements from these developmental differences is understudied and limited (Hines and McCarver 2002).
Interpreting an individual's PGx results, and determining their relevance amid the many other intrinsic and extrinsic factors contributing to the efficacy or toxicity of a medication, is a complex undertaking. As the technologies for identifying genetic variation improve and PGx testing becomes more affordable and more widely adopted, our understanding of the meaning of this genetic variation will continue to grow. With it, the CDS systems that notify physicians of gene-drug interactions at the time of prescribing will continue to expand and be refined. A pharmacist trained in PGx may remain a key individual, however, for integrating PGx into the complexities of pharmacotherapy. Factors such as age, body size, disease state, lifestyle choices, and medication compliance must be addressed alongside any potential gene-drug or drug-drug interactions and consideration of possible medication delivery routes. The pharmacist can make recommendations that maximize the therapeutic goals of the physician by addressing the limitations and complexities of the individual patient, including their PGx genotype. This type of consult may be particularly beneficial for patients on a diagnostic odyssey, the majority of whom are children taking many medications for complex symptomatology, often as part of a poorly defined disease. Additionally, while we describe the reactive interpretation and impact on current medication use in this population, these results will continue to inform therapeutic strategies proactively over the patient's lifetime. For the maximal efficacy of PGx testing to be realized, then, early proactive and comprehensive testing with EMR CDS integration of results is ideal.
In this study, we describe the secondary PGx findings from clinical WES testing in a cohort of patients seeking a genetic diagnosis for a suspected Mendelian disease. We show that a significant proportion of this mostly pediatric population had actionable PGx results based on their current medication use. The likely benefit of these results on patient medication management suggests continued, and potentially expanded, PGx testing in this population is warranted. However, it is important to be cognizant of the limitations inherent in PGx testing, as well as the complexities of result interpretation and data management. It is imperative that health-care institutions are aware of such secondary findings and take steps to ensure that PGx findings are properly integrated into the patient's medical record. Such steps should ensure all future prescriptions are properly informed by the PGx findings and recommendations dynamically reflect the continued expansion of PGx knowledge. Because of these challenges, we highlight the need for conscientious interpretation and management of the PGx results to ensure appropriate prescribing decisions can be made with regard to current as well as any future medication needs. | 6,025.6 | 2017-03-19T00:00:00.000 | [
"Biology",
"Medicine"
] |
Application of Discrete Wavelet Transform in Shapelet-Based Classification
Recently, several shapelet-based methods have been proposed for time series classification, which are accomplished by identifying the most discriminating subsequence. However, for time series datasets in some application domains, pattern recognition on the original time series cannot always obtain ideal results. To address this issue, we propose an ensemble algorithm combining time-frequency analysis and shape similarity recognition of time series. Discrete wavelet transform is used to decompose the time series into different components, and shapelet features are identified for each component. According to the different correlations between each component and the original time series, an ensemble classifier is built by weighted majority voting, and the Monte Carlo method is used to search for the optimal weight vector. The comparative experiments and sensitivity analysis are conducted on 25 datasets from the UCR Time Series Classification Archive, which is an important open dataset resource in time series mining. The results show the proposed method has a better performance in terms of accuracy and stability than the compared classifiers.
Introduction
A time series is a data sequence that represents recorded values of a phenomenon over time. Time series data constitute a large portion of the data stored in real-world databases [1]. Time series data exist widely in many fields, such as commerce, agriculture, meteorology, bioscience, and ecology. Data such as meteorological data in weather forecasting, floating currency exchange rates in foreign trade, radio waves, images captured by medical devices, and continuous signals in engineering applications can be regarded as time series [2]. Time series data are more complex to analyse than cross-sectional data due to the way in which measurements change over time [3]. Time series classification (TSC) is one of the important tasks in time series data analysis. TSC builds a classification model based on labelled time series, and the model is then used to predict the labels of unlabelled time series. Unlike traditional classification methods, TSC requires not only the numerical relationships between different attributes but also the order relationship within the data.
In the past ten years, hundreds of methods have been proposed to solve the TSC problem. One of the traditional methods is the 1-nearest neighbor (1NN) classifier, which uses different distance functions. Faloutsos et al. [4] used Euclidean distance for time series matching. The Euclidean distance can only deal with time series of equal length, and it compares time series point-to-point along the time axis, so it cannot match similar shapes if they are out of phase in time. In order to solve these problems, Berndt et al. [5] applied dynamic time warping (DTW) technology from the speech recognition field to pattern detection in time series. The DTW is a much more robust distance measure for time series. The DTW not only eliminates the "point-to-point" matching defect of Euclidean distance but also achieves "one-to-many" matching of time series data points by stretching or compressing the series. The traditional DTW assigns the same weight to each observation value and ignores the phase difference between the observation value and the test value. On this basis, Jeong et al. [6] proposed using weighted DTW for time series classification. This kind of 1NN classification algorithm has high classification accuracy and is easy to implement, but it requires long computation time and has poor interpretability. Many other researchers have focused on the measurement of dissimilarity. Therefore, several dissimilarity metrics, such as normalized eigenvector correlation (NEC) [7], signal directional differences (SDDs) [8], and square eigenvector correlation (SEC) [9], have been proposed recently, which measure the dissimilarity between the features extracted from the distinct path between specific features. These metrics have been verified to be effective in improving the accuracy of the feature matching technique.
Recently, many researchers have used shape similarity to solve TSC problems. The most popular method is shapelet-based classification. A shapelet is a time series subsequence which can be regarded as maximally representative of a class in some sense [10]. Classification algorithms based on shapelets were first proposed by Ye et al. [10, 11]; these algorithms used information gain to measure the split point of the data and built a decision tree by recursively searching for the most discriminating shapelets. This strategy builds a classifier at the same time as shapelets are discovered. In contrast, the other strategy first maps the time series to another space and then builds a classifier. Lines et al. [12] proposed a time series classification method based on shapelet transformation (ST).
This method creates new classification data before constructing the classifier, so that it keeps the explanatory power of shapelets while simultaneously improving classification accuracy.
Ensemble learning strategies have also been applied to time series classification, such as the time series forest (TSF) proposed by Deng et al. [13], the elastic ensemble (EE) method proposed by Lines et al. [14], the Collective of Transformation-based Ensembles (COTE) method proposed by Bagnall et al. [15], and the Hierarchical Vote Collective of Transformation-based Ensembles (HIVE-COTE) method, based on COTE, proposed by Lines et al. [16]. These methods combine multiple subclassifiers, covering distance measures, shapelet identification, spectrum analysis, and other time series feature representation and transformation strategies. Compared to methods with a single classifier, ensemble classification methods have higher accuracy but also higher time complexity. In terms of classification accuracy, Bagnall et al. conducted a comparative experiment with the currently popular time series classification algorithms [17, 18] and found the highest classification accuracies, in order, for HIVE-COTE, COTE, and ST. The ST is an important part of both the COTE and HIVE-COTE algorithms. In other words, the ST is one of the effective methods for solving time series classification.
Generally, new features extracted from time series may help to improve the performance of classification models. Techniques for feature extraction include singular value decomposition (SVD), discrete Fourier transform (DFT), discrete wavelet transform (DWT), and so on [19]. The DWT, as formulated in the late 1980s, has inspired extensive research into how to use this transform to study time series. The DWT is a powerful tool for time-scale multiresolution representation of time series using wavelets. In contrast to other techniques, the DWT is localized in time; hence, the wavelet variance can be readily adapted for exploring processes that are locally stationary with time-varying behaviour [20] and for detecting inhomogeneities in time series [21]. Due to its ability to separate an original time series into its decompositions, the DWT is a powerful tool to help researchers capture trends and patterns in data. At the same time, it is a data transformation technique that concurrently localizes both time and frequency information from the original data in its multiscale representation [22].
In this study, combining the advantages of the DWT and the shapelet approach, we propose a new ensemble method, which embeds the DWT into the shapelet-discovery algorithm to obtain transformed data and then implements an ensemble classifier to train and test on the transformed data. By using the DWT, the original time series data are divided into one low-frequency information component and several high-frequency information components. Each decomposed information component is still in the time domain. The shapelet sets are then selected from each component, respectively.
These shapelet sets reflect the corresponding classification characteristics and are used to convert the original time series into feature vector representations accordingly. These feature vectors contain more features of the original time series. A base classifier is trained with the transformed data. Finally, a weighted majority voting technique is used to integrate the prediction results of the base classifiers, and the Monte Carlo method is used to search for a locally optimal weight vector. We conduct a comparative experiment with other popular time series classifiers and perform qualitative analysis in this study. The experiment is conducted on 25 datasets from the UCR archive [23]. The results show the proposed method has a good performance in terms of accuracy and stability. The paper is structured as follows: Section 2 provides related definitions on time series classification and shapelets; in Section 3, we propose a new method and describe the overall framework and the details of the method; in Section 4, we describe our experimental design and results and perform qualitative analysis of the proposed method; finally, we draw conclusions based on our analysis results in Section 5.
Related Definitions
Univariate time series dataset: a univariate time series is a sequence of data that are typically recorded in temporal order at fixed intervals. The number of real-valued data points is the length of the time series.
A dataset $T = \{T_1, T_2, T_3, \ldots, T_n\}$ has n time series. Each time series $T_i$ has m real-valued ordered data $\langle t_{i,1}, t_{i,2}, t_{i,3}, \ldots, t_{i,m} \rangle$ and a class label $c_i$. Sets of candidate shapelets: every subsequence of a series in dataset T is defined as a candidate. So the set of candidate shapelets is the union of the subsequences of each series in T.
A subsequence of $T_i$ is a contiguous sequence on $T_i$. The length of a subsequence can be 1, 2, 3, ..., m. A subsequence of $T_i$ can be described as $S_{i,p,l} = \langle t_{i,p}, t_{i,p+1}, t_{i,p+2}, \ldots, t_{i,p+l-1} \rangle$, where p is the starting position and l is the length. So the set of all subsequences of length l in the time series $T_i$ is defined as $S_{T_i,l} = \{S_{i,p,l} \mid 1 \leq p \leq m - l + 1\}$.
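A minimal Python sketch of this candidate generation (function and variable names are ours, not from the paper's implementation):

def generate_candidates(series, l):
    """All contiguous subsequences of length l in one time series."""
    m = len(series)
    return [series[p:p + l] for p in range(m - l + 1)]

# Example: the three length-3 candidates of a length-5 series
print(generate_candidates([4.0, 2.0, 5.0, 1.0, 3.0], 3))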
Similarity measures: classification of time series depends on similarity measures between data. Common time series similarity measures include Euclidean distance, dynamic time warping, Fourier coefficients, and autoregressive models. In this study, Euclidean distance [24] is used to compare the similarity between two time series of the same length. For example, consider two m-length time series, S and R, and let the Euclidean distance given by equation (1) be the utilized measure of similarity:

$$d(S, R) = \sqrt{\sum_{i=1}^{m} (s_i - r_i)^2}. \quad (1)$$

Before calculating the distance, the z-normalization method is used to normalize each time series [25] according to equation (2), where $\bar{X}$ and $\sigma_X$ are the mean and standard deviation of the m real-valued ordered data $\langle t_{i,1}, t_{i,2}, t_{i,3}, \ldots, t_{i,m} \rangle$ in each time series $T_i$, respectively:

$$t'_{i,j} = \frac{t_{i,j} - \bar{X}}{\sigma_X}. \quad (2)$$

The similarity between each candidate shapelet and each series is measured, and this sequence of distances with associated class membership is used to assess shapelet quality. The candidate shapelet is short, and the time series is relatively long. When calculating the distance between two time series of different lengths, the short series slides along the long series until the minimum distance between them is found. The distance between a time series $T_i$ and a candidate shapelet S of length l is defined by equation (3): the distances between S and all subsequences of length l in $T_i$ are calculated, and the minimum distance is taken as the distance between S and $T_i$:

$$d(S, T_i) = \min_{1 \leq p \leq m - l + 1} d(S, S_{i,p,l}). \quad (3)$$

Information gain and shapelet: in probability theory and information theory, the information gain (IG) is an asymmetric measure of the difference between two probability distributions. The IG is usually used to determine the quality of a shapelet [10, 11, 26]. After calculating all the distances between a candidate shapelet S and all time series in T, we obtain a set $D_S$ with n distance values. The $D_S$ is sorted, and the IG at each possible split point sp is then assessed for S. Here, a valid split point is defined as the mean value between any two consecutive distances in $D_S$. For each possible split point sp, as shown in Figure 1, the IG is calculated by partitioning all elements of $D_S < sp$ into $A_S$ and all elements of $D_S > sp$ into $B_S$, respectively. The IG at sp is calculated according to the following equation:

$$IG(sp) = H(D_S) - \frac{|A_S|}{|D_S|} H(A_S) - \frac{|B_S|}{|D_S|} H(B_S), \quad (4)$$

where $|D_S|$ is the cardinality of the set $D_S$ and $H(D_S)$ is the entropy of $D_S$. The $H(D_S)$ is defined as follows:

$$H(D_S) = -\sum_{v \in V} p_v \log p_v, \quad (5)$$

where V is the set of class labels and $p_v$ is the probability of each label. The IG of shapelet S, $IG_S$, is calculated as

$$IG_S = \max_{sp} IG(sp). \quad (6)$$

In general, shapelets are extracted with maximum information gain by comparing all the candidate shapelets.
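A compact Python sketch of these computations (a simplified reading of equations (1)-(6); names are ours, and the log base in the entropy is a choice the paper does not fix):

import numpy as np
from collections import Counter

def znorm(x):
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std()  # equation (2)

def sdist(shapelet, series):
    """Equation (3): minimum Euclidean distance over a sliding window."""
    s, t = np.asarray(shapelet), np.asarray(series)
    l = len(s)
    return min(np.linalg.norm(s - t[p:p + l]) for p in range(len(t) - l + 1))

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * np.log2(c / n) for c in Counter(labels).values())

def best_info_gain(dists, labels):
    """Equations (4)-(6): maximize IG over split points of the sorted distances."""
    order = np.argsort(dists)
    y = np.asarray(labels)[order]
    h, n = entropy(y), len(y)
    return max(h - (i / n) * entropy(y[:i]) - ((n - i) / n) * entropy(y[i:])
               for i in range(1, n))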
Method Structure.
The proposed method in this study consists of three major parts: decomposition, feature extraction, and classification. The whole process of the proposed method is outlined in Figure 2.
The three major parts of the proposed method are briefly described as follows. In the decomposition part, the DWT decomposes each original time series into one approximation component and several detail components. In the feature extraction part, shapelets are identified for each component, and the distances to them transform the data into feature vectors. In the classification part, a base classifier is trained on each transformed dataset to predict the class label. Based on the predictive results of the base classifiers, weighted majority voting is implemented to build an ensemble classifier according to the correlation between the components and the original data. The weights are optimized by the Monte Carlo method, and then the final classification result is obtained.
Discrete Wavelet Transform.
The DWT is a technique of mathematical origin and is very appropriate for time-scale multiresolution analysis of time series [22]. The DWT provides an effective way to separate nonstationary signals into signals at various scales. This kind of signal processing is called signal decomposition. Various aspects of nonstationary signals, such as trends, discontinuities, and repeated patterns, are clearly revealed in the signal decompositions. Some time series data have multiscale signal components that are more meaningful in parts than in sum, such as audio signals and patients' ECG heart rates. For these reasons, the DWT is a suitable technique to combine with classification approaches in order to categorize an unknown signal into a predefined type of signal [22]. This section explains how the DWT assists in the classification process. The effective way to implement the DWT is to use filters, as proposed by Mallat in 1988 and well known as the Mallat algorithm.
This algorithm uses filter banks to implement the DWT, which can decompose the signal into several different frequency components; Figure 3 illustrates an example of the two-level wavelet decomposition and reconstruction processes of the decimated DWT.
Generally, a filter bank approach is adopted because of its efficiency. As shown in Figure 3, S(n) is a real signal, h(n) is the high-pass filter which filters out the low-frequency part of the signal, and g(n) is the low-pass filter which filters out the high-frequency part. The half-band filters downsample the signal by a factor of 2 at each level of decomposition. At the first level of decomposition, the input signal is first passed through the wavelet filters, followed by decimation by a factor of two. Then, the output of the low-pass filter is used as the new input signal, and the same filtering and decimation process is reiterated. This is carried out until the desired level of wavelet decomposition, or the allowed maximum level, is reached. The combination of the filtering and decimation processes enables the same filters to be used throughout the entire wavelet decomposition procedure [27]. The outputs of the decomposition process are the approximation coefficients (cA_i) and detail coefficients (cD_i), where i denotes the level of the filter. In practical applications, the appropriate decomposition level is generally selected according to the characteristics of the signal or an appropriate standard.
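As a sketch of this decomposition using the PyWavelets library (the input signal is a placeholder; the paper does not specify its own implementation):

import numpy as np
import pywt

x = np.sin(np.linspace(0, 8 * np.pi, 128)) + 0.1 * np.random.randn(128)

# Two-level decimated DWT with the Haar mother wavelet: returns
# [cA2, cD2, cD1], i.e., the approximation coefficients of the last
# level plus the detail coefficients of each level.
coeffs = pywt.wavedec(x, 'haar', level=2)
print([len(c) for c in coeffs])  # each decomposition level halves the length

# Maximum allowed level for this signal length and wavelet
print(pywt.dwt_max_level(len(x), pywt.Wavelet('haar').dec_len))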
For the reconstruction process, the original signal can be reconstructed from the approximation and detail coefficients at every level by upsampling by two, passing through high- and low-pass synthesis filters, and adding the results. The original signal can be reconstructed from the approximation coefficients of the last level and the detail coefficients of each level.
Similarly, the approximation component (A) and the detail components (D) of the signal can be reconstructed separately from the approximation coefficients and the detail coefficients by omitting the other sets of coefficients. This is best done by setting the other coefficients to zero while keeping the same shape. In this way, each reconstructed component has the same length as the original signal. The approximation component captures rough features that can be used to estimate the original data, while the detail components capture detail features that describe frequent movements of the data. For example, consider a dataset containing n time series with class labels, where each time series has m data points. After choosing the mother wavelet, if the maximum allowed level is R, we obtain an approximation component matrix $A_{n,m+1}$ and R detail component matrices $D_{n,m+1}$. The DWT decomposes a single signal into multiscale signals using wavelet functions. The filter coefficients are determined by the mother wavelet, and the characteristics of the transformation are also impacted by this choice. Commonly used mother wavelets include Haar, Daubechies, biorthogonal, Coiflets, and symlets. The influence of different mother wavelets on classification performance is tested in the following experiments.
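Continuing the sketch above, the per-component reconstruction (zeroing the other coefficient sets so that each component has the original length) could be written as:

import numpy as np
import pywt

def reconstruct_components(x, wavelet='haar', level=2):
    """Return [A, D_level, ..., D_1], each the same length as x,
    by zeroing all coefficient sets except the one being kept."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    components = []
    for keep in range(len(coeffs)):
        masked = [c if i == keep else np.zeros_like(c)
                  for i, c in enumerate(coeffs)]
        components.append(pywt.waverec(masked, wavelet)[:len(x)])
    return components  # components[0] is the approximation component A

# For an orthogonal wavelet such as Haar, the components sum back to the
# original signal: A, *details = reconstruct_components(x);
# np.allclose(A + sum(details), x) holds up to numerical precision.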
Feature Extraction.
We extract features from each component through the shapelet transformation proposed by Lines et al. [12]. The main contribution of the shapelet transformation is to separate shapelet discovery from classifier construction. The transformed data can be used with different classifiers. The corresponding algorithm includes two major steps: Step 1: the algorithm performs a single scan of the data to extract the best k shapelets.
Step 2: by calculating the distance between k shapelets and every time series, an instance with k attributes is obtained; then, a new transformed dataset is created.
Algorithm 1 describes the process of extracting the k best shapelets from the dataset. The min and max parameters limit the length of the candidate shapelets. Each time a candidate shapelet is obtained, the distance between the candidate shapelet and every time series is calculated. The results are sorted to find the split point that yields the maximum information gain. After all the candidate shapelets have been assessed, they are sorted by information gain and self-similar shapelets are removed. Finally, the top k shapelets are retained from the set of non-self-similar shapelets.
Once the best k shapelets have been found, the transform is performed with Algorithm 2. For each data instance $T_i$, the subsequence distance is computed between $T_i$ and $SK_j$, where $j = 1, 2, \ldots, k$. The calculated k distances form a new instance of transformed data, where each attribute corresponds to the distance between a shapelet and the original time series. The subsequence distance calculation is described in equation (3). With the shapelet transformation technology, the selection process of shapelets is optimized, and different classification strategies can be applied flexibly. On this basis, several other shapelet approaches have been proposed, such as logical shapelets [26], fast shapelets [29], binary shapelets [30], and learnt shapelets [31]. The extracted low-frequency and high-frequency information components in the time domain are used as separate new time series to generate candidate matrices. Then, the corresponding shapelets are extracted from each candidate matrix. The distances to the shapelet set extracted from each component are calculated to form a set of new feature vectors. In this step, we obtain R + 1 transformed matrices T′.
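A minimal sketch of the transform step, reusing the sdist function from the earlier snippet (the dataset layout is an assumption):

import numpy as np

def shapelet_transform(dataset, shapelets):
    """Map each series to a k-dimensional vector of distances to the
    k selected shapelets (one attribute per shapelet); see equation (3)."""
    return np.array([[sdist(s, t) for s in shapelets] for t in dataset])

# For each DWT component j: X_j = shapelet_transform(component_series, SK_j)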
Ensemble Classification.
Finally, we build a combined classifier. We train a base classifier on each of the R + 1 transformation matrices, use weighted majority voting to integrate the prediction results of the base classifiers, and then use the Monte Carlo method to optimize the weight vector. This process is described in Algorithm 3.
In order to evaluate the strength and direction of the relationship between each component and the original time series, the Pearson correlation coefficient is calculated. The obtained correlation coefficient matrix is normalized to satisfy equation (7). The mean value for each type of component is taken as the initial value of the weight $\omega_j$, where j can be 0, 1, 2, 3, ..., R. The weights satisfy the condition shown as follows:

$$\sum_{j=0}^{R} \omega_j = 1, \quad \omega_j \geq 0. \quad (7)$$

For a component with high correlation with the original data, its classifier is assigned a larger weight, so as to improve the performance of the ensemble classifier.
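A sketch of this weight initialization (numpy's corrcoef; using the absolute correlation so that the normalized weights stay nonnegative is our assumption):

import numpy as np

def initial_weights(original, components):
    """original: n series (rows); components: list of R+1 arrays of the
    same shape. Returns initial weights normalized per equation (7)."""
    corrs = np.array([
        np.mean([abs(np.corrcoef(o, c)[0, 1]) for o, c in zip(original, comp)])
        for comp in components
    ])
    return corrs / corrs.sum()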
ALGORITHM 1: ShapeletSelection(T, min, max, k).
Input: a list of time series T, the min and max shapelet lengths to search for, and k, the maximum number of shapelets to find.
Output: the best k shapelets.
1: kShapelets ← Φ
2: for all T_i in T do
3:   shapelets ← Φ
4:   for l ← min to max do
5:     for p ← 1 to m − l + 1 do
6:       S_{T_i,l} ← generateCandidate(T_i, l)
7:     for all candidate shapelets S in S_{T_i,l} do
8:       D_S ← subdist(S, T)
9:       quality ← assessCandidate(S, D_S)
10:      shapelets.add(S, quality)
11:  removeSelfSimilar(shapelets)
12:  sortByQuality(non-self-similar shapelets)
13: return kShapelets

ALGORITHM 2.
Input: SK, the set of the best k shapelets generated from the training data, and T, a dataset containing time series and class labels.
Output: a new transformed dataset.

We consider a multiclass classification task with class labels $i \in \{1, 2, \ldots, c\}$ and predict the class label y based on the predicted probabilities p of each base classifier $L_j$, where j can be 0, 1, 2, ..., R. The label y is calculated as follows:

$$y = \arg\max_{i} \sum_{j=0}^{R} \omega_j \, p_{ij}, \quad (8)$$

where $\omega_j$ is the weight of the jth base classifier $L_j$ and $p_{ij}$ is the probability of class i predicted by the jth classifier. The key part of building the ensemble classifier is the selection of the weights. In the proposed method, the Monte Carlo method is used to find the optimal weight parameters, as described in Algorithm 4. It includes the following major steps: Step 1: the Pearson correlation coefficient of each component with the original time series is calculated and normalized. The mean value for each type of component is taken as the initial value of the weight $\omega_j$.
Step 2: the initial weight $\omega_j$ is multiplied by the predicted class probabilities of the base classifier corresponding to each component, and the class with the maximum combined probability is taken as the final class, which yields the accuracy of the ensemble classifier.
Step 3: the extreme deviation of each component's Pearson correlation coefficient, calculated in Step 1, is recorded as $d_j$, where j can be 0, 1, 2, ..., R. New weight combinations are generated by the Monte Carlo method. In each Monte Carlo event, we generate R + 1 uniformly distributed random numbers in the ranges $[\omega_j - d_j, \omega_j + d_j]$. After N simulations, N groups of weight combinations are produced.
Step 4: the N groups of weight combinations are substituted into Step 2 to calculate the accuracy of each. The maximum accuracy is the result of this step.
Each iteration contains N Monte Carlo simulations. If the accuracy does not improve compared to the accuracy of the last iteration, we update $d_j$ to $2d_j$ to broaden the domain of the generated random numbers and increase the Monte Carlo statistics from N to 2N.
Monte Carlo simulation is a computerized mathematical technique to generate random sample data based on a given distribution for numerical experiments. We use Monte Carlo simulation to generate a large set of random weight vectors, and the range of each weight is constrained by $d_j$ so that the prediction results of components with strong correlation are given higher weights. Different weight vectors are evaluated with the above method to obtain different accuracies, and the optimal weight vector and accuracy are obtained after several Monte Carlo iterations.
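A condensed sketch of Steps 1-4 (a simplified reading of the procedure; the scoring and stopping rule follow the description above, with our own names):

import numpy as np

rng = np.random.default_rng(0)

def ensemble_accuracy(weights, probas, y_true):
    """Equation (8): weighted vote over base-classifier probabilities.
    probas has shape (R+1, n_samples, n_classes)."""
    combined = np.tensordot(weights, probas, axes=1)
    return np.mean(np.argmax(combined, axis=1) == y_true)

def monte_carlo_search(w0, d, probas, y_true, n_sim=1000, iters=10):
    best_w, best_acc = w0, ensemble_accuracy(w0, probas, y_true)
    for _ in range(iters):
        improved = False
        for _ in range(n_sim):
            w = rng.uniform(w0 - d, w0 + d)
            w = np.clip(w, 0, None)
            w = w / w.sum()  # keep the constraint of equation (7)
            acc = ensemble_accuracy(w, probas, y_true)
            if acc > best_acc:
                best_w, best_acc, improved = w, acc, True
        if not improved:  # widen the search range and double the statistics
            d, n_sim = 2 * d, 2 * n_sim
    return best_w, best_acc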
In Figure 5, the blue dotted line indicates the termination position of the iterations. The condition for terminating the iterations is that the obtained accuracy no longer increases. Obviously, this method cannot obtain the global optimum, but the weight obtained is closest to the initial weight.

Input: R + 1 transformed matrices T′, the original time series dataset T, a base classifier L, and the number of simulations N.
Output: the optimal weights and the maximum accuracy.
1: get the initial weights ω = ⟨ω_1, ω_2, ..., ω_{R+1}⟩ and step lengths d_j
2: for all T_i′ in T′ do
3:
Experimental Dataset.
In this paper, we use 25 datasets from the UCR repository [23]. These have been commonly adopted by TSC researchers. The basic information of the datasets is shown in Table 1.
The classification labels of multiclass datasets are represented by Arabic numerals. For example, for a 4-class dataset, the classification labels are 1, 2, 3, and 4. As shown in Table 1, the datasets used are diverse, covering sensor data, image contour information, human ECG data, and action data. The lengths also differ; the shortest is 24 and the longest is 512. Therefore, the performance of the algorithm can be comprehensively tested. In order to facilitate the performance comparison, the default training and test set partitions are adopted in this paper; the k value is set to m/2, the min value is set to 3, and the max value is m, where m is the length of the time series. The initial value of N is 1000 in our experiments.
Experiment Design.
Our first objective is to choose a base classifier that has the best performance on the transformed data. For this purpose, we test the performance of five traditional classifiers on the transformed data constructed by the ST method. These classifiers are Naïve Bayes [32], the C4.5 decision tree [33], support vector machines [34] with polynomial kernels, random forest [35, 36] (with 100 trees), and Bayesian networks [37]. These algorithms are commonly used in machine learning. The characteristics of the transformation are impacted by the choice of the mother wavelet and the number of detail levels; thus, the mother wavelet type and the number of detail levels should be taken into consideration in the experiment. We try different mother wavelets and numbers of levels to test the influence of these two parameters on the results.
Finally, we implement a comparative experiment to compare the performance of our method (DSE) with six other time series classifiers: the 1-nearest neighbor classifier using Euclidean distance (1NN-ED) on raw data, the 1-nearest neighbor classifier using dynamic time warping (1NN-DTW) on raw data, the 1-nearest neighbor classifier using dynamic time warping with the window size set through cross-validation (1NN-DTWCV) on raw data, a random forest classifier based on the binary shapelet transform (BinaryST) [30] of the raw data, time series forest (TSF) [13], and elastic ensemble (EE) [14].
Evaluating Indicator.
For a classification problem, classification accuracy is the most important criterion for evaluating algorithm performance. In addition to accuracy, the Friedman test and the Nemenyi test are widely used in machine learning to evaluate the performance of algorithms over multiple datasets. After obtaining the accuracies of K algorithms on N datasets, the Friedman test ranks the algorithms for each dataset separately: the algorithm with the highest classification accuracy is ranked 1, the second highest is ranked 2, and so forth. Algorithms with the same accuracy are assigned the average of the ranks between them. In this way, we obtain an N × K rank matrix, where $r_{ij}$ is the rank of the jth algorithm on the ith dataset, and the average ranks $R_j$ are calculated as follows:

$$R_j = \frac{1}{N} \sum_{i=1}^{N} r_{ij}. \quad (9)$$

Under the null hypothesis, all algorithms are equivalent, so their $R_j$ should be equal. The Friedman statistic is defined by

$$\chi_F^2 = \frac{12N}{K(K+1)} \left[ \sum_{j=1}^{K} R_j^2 - \frac{K(K+1)^2}{4} \right], \quad (10)$$

which is distributed according to a $\chi^2$ distribution with K − 1 degrees of freedom. The research of Demšar [38] shows that the Friedman statistic is too conservative and proposed a better statistic:

$$F_F = \frac{(N-1)\chi_F^2}{N(K-1) - \chi_F^2}, \quad (11)$$

which is distributed according to the F-distribution with K − 1 and (K − 1)(N − 1) degrees of freedom. If the null hypothesis is rejected, indicating significant differences between the algorithms, the differences can be examined by the Nemenyi test, which compares all the algorithms to each other. At a significance level of α, the critical difference (CD) value is defined by the following equation:

$$CD = q_\alpha \sqrt{\frac{K(K+1)}{6N}}. \quad (12)$$

All algorithms are divided into different groups by the CD value so that there is no significant difference in the performance of the algorithms within a group. In this way, performance differences between algorithms can be represented by a critical difference diagram.
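A short sketch of these tests in Python (scipy's friedmanchisquare provides equation (10); the Iman-Davenport correction and the CD follow equations (11) and (12); the Nemenyi q-value 2.949 for K = 7 at α = 0.05 reproduces the CD of 1.8019 reported below):

import numpy as np
from scipy.stats import friedmanchisquare

def friedman_nemenyi(acc, q_alpha=2.949):
    """acc: N x K array of accuracies (N datasets, K algorithms)."""
    N, K = acc.shape
    chi2, _ = friedmanchisquare(*acc.T)             # equation (10)
    f_f = (N - 1) * chi2 / (N * (K - 1) - chi2)     # equation (11)
    cd = q_alpha * np.sqrt(K * (K + 1) / (6 * N))   # equation (12)
    return f_f, cd

# With N = 25 and K = 7, cd evaluates to about 1.8019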
Experiment Results.
The experimental platform used in this paper is Python 3.7; the hardware configuration is a Pentium dual-core CPU (2.5 GHz) with 8 GB memory. Table 2 lists the accuracy results of the five classifiers on the transformed data. Random forest has a good performance, with an average rank of 2.2200 and the best performance on 13 out of 25 problems. The results show that random forest provides reliable predictive performance on different datasets.
Base Classifier Selection.
Random forest [35] refers to an ensemble learning method of training, classifying, and predicting sample data by using multiple decision trees whose outputs are aggregated by majority voting. To classify a new instance, each decision tree provides a classification for input data; random forest collects the classifications and chooses the most voted prediction as the result. e input of each tree is sampled data from the original dataset. In addition, a subset of features is randomly selected from the optional features to grow the tree at each node. Each tree is grown without pruning. Essentially, random forest enables many weak or weakly correlated classifiers to form a strong classifier [36]. It does not need to assume data distribution; it can handle thousands of input variables without variable deletion. It is relatively fast, simple, robust to outliers and noise, and easily parallelized; avoids overfitting; and performs well in many classification problems.
In the following experiments, we chose random forest as the base classifier.
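A sketch of training the base classifier on one shapelet-transformed component (scikit-learn names; the paper does not detail its own implementation beyond the 100-tree setting):

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def base_classifier_probas(X_train, y_train, X_test):
    """X_*: shapelet-transformed feature matrices (one column per shapelet
    distance); returns class probabilities that feed equation (8)."""
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_train, y_train)
    return clf.predict_proba(X_test)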
As shown in Figure 6, for the ECG datasets (ECG200, ECGFiveDays, and TwoLeadECG) and the sensor datasets (DodgerLoopWeekend, SonyAIBORobotSurface1, and ItalyPowerDemand), the choice of parameters has little effect on the results. Generally, the best prediction accuracy can be achieved after one level of decomposition. Increasing the level increases the amount of calculation and may also cause a significant decrease in accuracy. For the image datasets (BeetleFly, Herring, and BirdChicken), the choice of parameters has a significant influence on the results. For example, the highest accuracy is 0.9500 with the Haar wavelet at level 2 on the BeetleFly dataset, the highest accuracy is 0.6562 with the coif4 wavelet at level 2 on the Herring dataset, and the highest accuracy is 1.0000 with the Haar wavelet at level 3 on the BirdChicken dataset. Table 3 lists the classification accuracies of the seven classifiers on the 25 datasets. The last two lines of Table 3 give the average rank of each classifier across datasets and the number of times it performs best, respectively. According to the results shown in Table 3, the EE is the best classifier, with an average rank of 2.38 and the best performance on 12 out of 25 problems. The performance of the DSE proposed in this paper is slightly lower than that of the EE: it wins on 8 out of 25 datasets and has an average rank of 2.64, close to that of the EE. The EE integrates a variety of distance measurement methods, while the DSE only uses Euclidean distance, which could explain the small performance difference between them. However, the DSE is still significantly more accurate than all the other alternatives, including BinaryST.
Comparison Result.
This underlines the utility of decomposing the original time series data. The DWT is effective in improving the accuracy of the shapelet transformation method.
At a significance level of 0.05 with degrees of freedom (6, 144), $F_F = 2.2781 > F_{0.05}(6, 144) = 2.162$. Therefore, at the 0.05 significance level, the null hypothesis is rejected, and the seven classifiers are significantly different. The critical difference diagram is shown in Figure 7. The critical difference for α = 0.05 is 1.8019. Figure 7 depicts the superiority of the proposed method: the EE and the DSE have significantly higher accuracy than the BinaryST, the TSF, the 1NN-DTW, and the 1NN-ED on these datasets. The difference between the performance of the DSE and the EE is not statistically significant.
Based on the above analysis, the results show that the performance of the DSE method proposed in this paper is very close to the EE method and has higher accuracy and better stability than the other five compared classifiers.
Conclusions
In this study, an ensemble method combining time-frequency analysis and shape-similarity recognition of time series is proposed to solve TSC problems. The proposed method embeds the DWT into the shapelet-discovery algorithm to produce transformed data, then trains and tests a base classifier on the transformed data; finally, it applies weighted majority voting to the results of the base classifiers according to the correlation between the components and the original data. The experimental results indicate that the proposed method outperforms the other methods in terms of accuracy. We also study the influence of parameter selection on the results, which yields suggestions on the choice of mother wavelet and number of decomposition levels for different time series data types. According to our experimental comparisons, the proposed method is not only robust and efficient but can also be generalized to different application domains. However, the proposed method is still time-consuming; improving its efficiency will be considered in future work.
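A minimal sketch of the weighted-majority-voting step summarized above; the correlation-based weights and the array shapes are illustrative assumptions, not the authors' exact implementation:

```python
# Per-component classifiers vote; each vote is weighted by the correlation
# between that DWT component (reconstructed to full length) and the
# original series.
import numpy as np

def component_weights(components, original):
    # components: (n_components, series_length) reconstructions
    # original:   (series_length,) the raw series
    return np.array([abs(np.corrcoef(c, original)[0, 1]) for c in components])

def weighted_vote(predictions, weights, n_classes):
    # predictions: (n_components, n_samples) hard labels from base classifiers
    votes = np.zeros((predictions.shape[1], n_classes))
    for preds, w in zip(predictions, weights):
        for i, label in enumerate(preds):
            votes[i, label] += w
    return votes.argmax(axis=1)  # most heavily weighted class per sample
```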
Data Availability
The dataset used to support this study is the open dataset "UCR Time Series Classification Archive," which is available at https://www.cs.ucr.edu/∼eamonn/time_series_data_2018/.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
"Computer Science"
] |
Light Higgs Boson from a Pole Attractor
We propose a new way of explaining the observed Higgs mass, within the cosmological relaxation framework. The key feature distinguishing it from other scanning scenarios is that the scanning field has a non-canonical kinetic term, whose role is to terminate the scan around the desired Higgs mass value. We propose a concrete realisation of this idea with two new singlet fields, one that scans the Higgs mass, and another that limits the time window in which the scan is possible. Within the provided time period, the scanning field does not significantly evolve after the Higgs field gets close to the Standard Model value, due to particle production friction.
I. INTRODUCTION
One of the main remaining puzzles of the Standard Model (SM), the Higgs mass, has led physicists to search for heavy electroweak (EW) charged new physics at the TeV scale, as predicted by various scenarios such as supersymmetry and composite Higgs. An alternative approach to the problem, named cosmological relaxation [1] (see [2][3][4][5][6][7][8][9][10][11][12][13][14][15][16][17] for subsequent developments), does not, a priori, require this to be the case: the new physics can be either very heavy and beyond the reach of current colliders, or very light and very weakly coupled. Given this difference, it seems especially important to examine this new concept theoretically to the greatest possible extent.
The key ingredient of cosmological relaxation scenarios is the coupling of the Higgs to a new spin-zero field, the relaxion. This coupling makes the Higgs mass depend on the relaxion field value. Cosmological evolution of the latter then leads to a scan of the Higgs mass, starting from some generic large value down to the much smaller value which is currently observed. The scan is stopped in the right place by a backreaction of the Higgs on the relaxion evolution. Existing realizations of this mechanism feature a scalar potential characterized by two hierarchically different periods: the larger period is needed for the complete Higgs mass scan, while the smaller one allows the final Higgs mass to settle at the EW scale. Producing a UV completion for such a potential is very nontrivial and requires dedicated model building [18][19][20][21]. Interestingly, another known type of scanning scenario, proposed in [22,23], does not require this feature: instead of producing short-period potential barriers for the scanning field, the whole scanning sector effectively decouples from the Higgs sector close to the SM Higgs mass. The mechanism proposed in this work acts in the same spirit.
We will examine the possibility of terminating the Higgs mass scan by means of a noncanonical kinetic term of the relaxion field φ. For this to happen, we will assume that the field-dependent prefactor of (∂_μφ)² starts growing when the Higgs mass approaches its SM value. The enhancement of the kinetic term coefficient then results in an effective suppression of the relaxion potential and of its coupling to the Higgs boson. With enough suppression, the Higgs mass scan can slow down to an unobservably small speed. The relaxion field gets frozen around the value which gives the desired Higgs mass, which we will call an attractor point. Throughout this paper we will discuss concrete ways of implementing this idea. The main model-building challenge lies in finding a proper way to connect the value of the relaxion kinetic term with the Higgs mass. We start our analysis with a toy model, featuring a Higgs-dependent relaxion kinetic term ∼ 1/hⁿ (∂_μφ)². Although this model straightforwardly realizes the growth of the φ kinetic term at small Higgs vacuum expectation values (vevs), it turns out to be incapable of producing a naturally light SM-like Higgs. Its detailed analysis nevertheless proves useful in explaining some basic features of relaxation with noncanonical kinetic terms and, more importantly, provides a guideline for constructing realistic models. We present one such model in the following, featuring an extra scalar field χ. It is now this new field that is responsible for the growth of the relaxion kinetic term, ∼ 1/χⁿ (∂_μφ)². In our construction, the χ field is not sensitive to the Higgs field, and simply rolls for a fixed amount of time, until it reaches the pole value, where the relaxion evolution is effectively terminated. The desired sensitivity of the relaxion to the Higgs mass is achieved by the h-dependent particle production friction of the relaxion. This friction is initially absent, allowing the relaxion to scan the Higgs mass. Once the Higgs mass approaches its SM value, the friction turns on. After that, the particle production significantly slows down the φ evolution, until it becomes completely shut down by the χ-dependent kinetic term when χ finally approaches the pole.
The structure of this paper is the following. In Sec. II we start by introducing a toy model. In Sec. III we describe a more complex setup, with two singlet fields; we discuss its one-loop structure and its evolution before and after the Higgs mass scan. The details of the scan are discussed in Sec. IV, preceded by a brief review of the particle production friction. Finally we discuss our results in Sec. V.
A. Main idea
As usual in the scanning scenarios, we promote the Higgs mass to a field-dependent variable by coupling the Higgs to another field, a spin-zero SM singlet φ. While we assume the Higgs potential to take a generic form, controlled by a cutoff Λ, the interactions of the φ field are kept under control by imposing the shift symmetry φ → φ + c, which is weakly broken by a dimensionless parameter κ. The leading terms of the resulting scalar potential are given in Eq. (II.1), which makes the Higgs mass parameter depend on the φ vev as in Eq. (II.2). Here and in the following we use h² for h†h, and for conciseness omit most of the order-one factors, as well as the Higgs quartic coupling constant λ and the quartic coupling term itself. In Eq. (II.2) we fixed the φ-independent part of the Higgs mass term to be negative, and the starting φ value is chosen to be less than Λ/κ (φ = 0 for simplicity), such that the Higgs vev is initially of cutoff size. To make the Higgs vev decrease and approach the SM value, we fixed the sign of the leading term of the φ potential, κΛ³φ, so that φ increases with time. Notice that we have not required any fine-tuning of the theory parameters. As for the initial conditions, if we assume a uniform distribution of the relaxion field values over different space points at the beginning of the process, only an order-one fraction of them will give a φ value below Λ/κ. Despite not being able to ensure the needed initial value, we are satisfied with an order-one probability for it. As inflation stretches away the field inhomogeneities, we can assume our initial condition φ = 0 to hold soon after the beginning of inflation everywhere in the given causally connected part of the Universe. All the discussion so far closely followed the original relaxion proposal [1], up to the sign of the initial Higgs mass.
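The displays referenced as Eqs. (II.1) and (II.2) are elided; a plausible reconstruction, inferred from the statements above (Higgs mass term negative at φ = 0, leading slope of size κΛ³, vev vanishing at φ = Λ/κ, with order-one factors and the quartic omitted, and the sign of the slope fixed so that φ rolls towards larger values):

$$V(h,\phi) \simeq \left(-\Lambda^2 + \kappa\Lambda\,\phi\right) h^2 \;-\; \kappa\Lambda^3\,\phi\,, \qquad \text{(II.1)}$$

$$\mu_h^2(\phi) = -\Lambda^2 + \kappa\Lambda\,\phi\,. \qquad \text{(II.2)}$$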
As was anticipated in the Introduction, the core of our scenario is the mechanism allowing us to stop the scan when the Higgs mass approaches the SM value, using the diverging φ kinetic term. For this toy model we will simply take the kinetic Lagrangian (II.3), where Λ_k is a parameter of mass dimension one and n is some positive power. For now we will not comment on possible UV completions producing this type of kinetic term, and we ignore such questions as the naturalness of the choice of kinetic term and scalar potential. The main purpose of this section is to introduce the reader to the dynamics of the relaxion with a noncanonical kinetic term. As can be immediately read off from Eq. (II.3), the prefactor of the φ kinetic term diverges upon approaching h = 0. This means that every additional unit of φ variation takes more and more time, and at some point the φ evolution effectively stops, with the Higgs vev and mass close to zero. Interestingly, the attractor point thus generated, φ = Λ/κ and h = 0, corresponds to neither a local nor the global minimum of the φ potential. Clearly, in order to reproduce the SM we need the attractor to be around the SM Higgs vev, h = v, and not at zero. In this introductory section we will however limit ourselves to a less realistic but simpler case.
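A plausible reconstruction of the elided kinetic term (II.3); the power 2n in the prefactor is an inference fixed by the canonical normalization δφ → φ̃ h₀ⁿ/Λ_kⁿ used in Sec. II B:

$$\mathcal{L}_{\rm kin} = \frac{1}{2}\left(\frac{\Lambda_k^2}{h^2}\right)^{\!n} (\partial_\mu \phi)^2\,. \qquad \text{(II.3)}$$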
The metric on the field space described by the kinetic terms (II.3) is not flat; therefore we are not able to canonically normalize both fields at all times simultaneously.
[FIG. 1: Schematic representation of the scanning field potential and the corresponding Higgs mass, in terms of the initial field φ and after its canonical normalization, as explained in the text.]
To get a first glance at the details of
their evolution we will integrate out the Higgs field using h² → Λ² − κΛφ. We thus arrive at the one-field Lagrangian (II.4), where, for simplicity, we performed a shift κΛφ − Λ² → κΛφ. In the new notation the attractor point simply corresponds to φ = 0 and the initial φ value is negative. We can now switch to a new canonically normalized field φ̃. Depending on the power n, we choose the redefinitions listed below (omitting obvious constant factors), which show how the interval φ ∈ (−∞, 0) maps onto the canonical field φ̃. The n = 1 case is special, as the attractor point is mapped onto a finite value of φ̃. This potentially causes a problem once we try to move the attractor point away from h = 0 to some finite nonzero value. The simplest way to do so is by changing the φ kinetic term to (∂_μφ)²/(h² − Δ²)ⁿ. Now, for n = 1, h² = Δ² corresponds to a finite φ̃; hence it can be reached and also overshot, making the φ kinetic term negative.
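A reconstruction of the elided redefinitions, assuming the kinetic prefactor of Eq. (II.3) with h² = −κΛφ after the shift (constant factors dropped); dφ̃ = (Λ_k²/h²)^{n/2} dφ gives

$$\tilde\phi \;\propto\; \begin{cases} -(-\phi)^{1/2}\,, & n = 1\,, \quad \phi\in(-\infty,0) \;\mapsto\; \tilde\phi\in(-\infty,0]\,,\\ -\ln(-\phi)\,, & n = 2\,, \quad \phi\in(-\infty,0) \;\mapsto\; \tilde\phi\in(-\infty,+\infty)\,,\\ (-\phi)^{1-n/2}\,, & n \ge 3\,, \quad \phi\in(-\infty,0) \;\mapsto\; \tilde\phi\in(0,+\infty)\,, \end{cases}$$

so only for n = 1 is the attractor φ = 0 mapped onto a finite value of φ̃.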
For n ≥ 2, on the other hand, the attractor point is mapped to infinity, and hence φ̃ will be eternally approaching it. This is because the stretching of the φ field outruns the φ time variation as we approach the attractor along the chosen trajectory. The resulting evolution of the h-φ system for n ≥ 2 is schematically depicted in Fig. 1. It is interesting to notice that the same behavior which we observe for φ̃, after setting the Higgs into the minimum of its potential, is used in various models of pole inflation (see e.g. [24] and references therein), with the difference that there the inflaton field evolution goes away from what we call the attractor point.
We will now slightly refine the previous analysis, highlighting a few additional features of the toy model. In particular we will see why the toy model cannot be used to produce a light SM-like Higgs boson.
[FIG. 2: Values of the φ̃₀-independent part of the scalar potential V are shown in grey. In the first case the Higgs field follows the V minimum, which evolves towards h² = 0, while in the second case the Higgs vev is driven towards zero, with the V minimum frozen close to the initial value.]
B. Nonperturbativity
Let us jump straight to the major phenomenological problem of this model, namely the exploding h-φ coupling. To make it apparent, we split φ, φ̃ and h into classical and fluctuation parts as in Eq. (II.6), where the zero subscript denotes a classical background at some time t₀. On top of this we locally (at the time point t₀) canonically normalize the φ field fluctuations, δφ → φ̃ h₀ⁿ/Λ_kⁿ. After these manipulations, the kinetic terms of the field fluctuations are contained in Eq. (II.7), while the scalar potential (II.1) becomes Eq. (II.8), omitting fluctuation-independent terms. We can now, for instance, estimate the amplitude of the Higgs decay into two φ̃'s. Close to the attractor (h₀² → 0) the φ̃ potential is negligible and we can treat φ̃ as massless. The h-φ̃ interaction then arises from the first term of Eq. (II.7). The resulting non-SM Higgs decay amplitude is expected to be sizable, far beyond the experimental bounds, and moreover ill-behaved close to the attractor.
C. Locking of the pole field
Even though the previously discussed problem rules out the toy model, we will still make use of it to explain the locking of the h field. This will be useful in the following, as it also applies to any other field which produces the kinetic pole. After singling out the background component of the φ field (II.6), the kinetic Lagrangian (II.3) also generates a φ̇₀-dependent term contributing to the Higgs potential, Eq. (II.10), which is minimized at h² = 0, thus competing with the rest of the Higgs potential, which prefers h² = Λ² − κΛφ.
In order to understand when this extra term becomes important, we need to find φ̇₀ by solving the equation of motion (e.o.m.) following from the Lagrangians (II.1) and (II.3), Eq. (II.11), where H is the Hubble parameter, which appears after accounting for the metric expansion of the Universe. Besides the usual Hubble friction, the equation contains a friction-like term ∼ ∂_t h² coming from the noncanonical form of the kinetic term. To estimate the maximal φ̇₀ we consider the slow-roll limit, i.e., φ̈₀ negligible compared to the other terms. The maximal value of φ̇₀ is achieved when the friction is minimized, and is hence determined mostly by the irreducible Hubble-expansion contribution, Eq. (II.12). Substituting φ̇₀,max into Eq. (II.10), we can estimate the maximal ∂δV/∂h and conclude that it is negligible compared to the potential (II.1) if the condition (II.13) holds. In the opposite case, δV drives the Higgs vev to zero, thus terminating the φ evolution independently of the value of Λ² − κΛφ. We will call this termination process "locking". The evolution of the Higgs field with and without locking is shown in Fig. 2. δV can therefore significantly distort the evolution of the fields and requires special attention when considering this type of model. For the realistic model of the next section we will have to forbid this behavior for the relaxation to work.
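The displays referenced here are elided; a plausible reconstruction of Eqs. (II.10)-(II.12), assuming the kinetic prefactor (Λ_k²/h²)ⁿ of Eq. (II.3) and dropping order-one factors:

$$\delta V(h) \simeq -\left(\frac{\Lambda_k^2}{h^2}\right)^{\!n}\dot\phi_0^2\,, \qquad \text{(II.10)}$$

$$\left(\frac{\Lambda_k^2}{h^2}\right)^{\!n}\!\left[\ddot\phi_0 + 3H\dot\phi_0 - n\,\frac{\partial_t h^2}{h^2}\,\dot\phi_0\right] + \frac{\partial V}{\partial\phi} = 0\,, \qquad \text{(II.11)}$$

$$\dot\phi_{0,\max} \sim \frac{\kappa\Lambda^3}{H}\left(\frac{h^2}{\Lambda_k^2}\right)^{\!n} \quad \text{(Hubble-friction dominated)}. \qquad \text{(II.12)}$$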
III. FORMULATION OF A TWO-FIELD MODEL
The toy model analyzed in Sec. II was shown to fall into a strongly coupled regime close to the attractor, thus failing to reproduce the Standard Model. We will now show that a tractable, realistic model of the pole attractor can be constructed using one additional spin-zero field χ, which controls the relaxion kinetic term. The main goal of this section is to define the general structure of the two-field (φ and χ) model, while its detailed analysis and numerical results will be presented in Sec. IV.
A. Formulation
The discussion of Section II suggests that the Higgs field cannot simply be put in the denominator of the φ field kinetic term. Hence we will introduce another spin-zero singlet field χ to produce a pole in the φ kinetic term. This allows for more freedom in choosing the pole field properties; in particular, we would like the χ kinetic term to have the same type of pole as that of φ. In the following we will consider the kinetic Lagrangian (III.1).
[FIG. 3: Schematic plot of the φ-χ system evolution. The evolution starts in the upper right corner. The change between the initial (green) and final (red) regimes happens around h ∼ 0 due to the appearance of the particle production friction of the φ field. In the final point, both the φ and χ evolution is effectively stopped by the growing kinetic terms.]
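A plausible reconstruction of the elided Eq. (III.1), inferred from the even-order 1/χ² poles discussed below and from the redefinition χ = −Λ_k exp[−χ̂/Λ_k] of Sec. III B, which renders the χ kinetic term canonical:

$$\mathcal{L}_{\rm kin} = \frac{\Lambda_k^2}{2\chi^2}\Big[(\partial_\mu\phi)^2 + (\partial_\mu\chi)^2\Big]\,. \qquad \text{(III.1)}$$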
This solves the problem of strong coupling pointed out in Sec. II B. We recall that it arises from the ill-behaved expansion of the Higgs field around the classical value h₀ → 0: the resulting interaction terms remain divergent even after φ is canonically normalized, as each φ removes only n powers of h₀. We get rid of this problem by switching to χ, which gets normalized as well, absorbing the remaining poles. The resulting theory is tractable within the usual perturbative approach. Notice that this argument does not strictly require the same order of poles for χ and φ, but we will stick to this option for definiteness. Additionally, with even-order poles we are safe from the problem of negative kinetic terms. Structures of the type (III.1) are also interesting, as they are frequently used and motivated in supergravity models of inflation (see e.g. [25] for a review). In particular, the kinetic Lagrangian (III.1) can be associated with the Kähler potential K ∼ −Λ_k² log[Φ + Φ̄] of a chiral superfield Φ. The scanning fields are then linked to the bosonic components of Φ as Re Φ ∼ χ and Im Φ ∼ φ.
The next step is to construct a mechanism which blocks the φ evolution as the Higgs mass becomes small. To achieve this, we will build a model with the following behavior. First, both χ and φ roll down their potentials from the beginning of inflation, and the φ field scans the Higgs mass in the same way as before. Second, as the Higgs vev reaches the SM value, the relative speed φ̇/χ̇ drops by a large factor. Therefore, during the rest of the evolution, until χ gets close to the pole, the field φ is displaced by a much smaller amount than it was before the speed drop. As χ approaches the pole it blocks the φ evolution. The time dependence of the scanning fields is shown schematically in Fig. 3. To produce the relative speed drop we will use the Higgs-dependent particle production friction. For instance, if φ has particle friction, its time variation will be limited in the simplest case by φ̇ ∼ Hf, where H is the Hubble parameter and f is a mass scale suppressing the particle production. In the absence of sizable particle production one will instead have a larger φ̇, controlled by the Hubble friction: φ̇ ∼ V_φ/H. We postpone the detailed discussion of the different types of friction to Sec. IV A and continue with the general description of the model.
[Footnote 3: Most of the relaxion models only address the little hierarchy problem, i.e., their cutoff Λ is significantly below M_Pl. Therefore the presence of physics capable of explaining Λ ≪ M_Pl, for instance featuring supersymmetry (such as in [5,7]) or Higgs compositeness (see [9]), is necessary at the scales above Λ.]
Within the chosen approach, i.e., changing friction, we need to require that:
• the active scanning region (when χ is away from the pole) is long enough that the Higgs mass can be completely scanned;
• after the Higgs mass reaches the SM value, φ̇/χ̇ has to decrease by at least a factor of v²/Λ², so that the h mass is not changed significantly afterwards.
In the following we will present a model where the φ̇/χ̇ drop originates solely from the growth of the φ friction around h = v, while the χ friction is insensitive to h. The sole purpose of χ is thus to provide a limited time window for the scan, before it gets close to zero. This construction looks very different from the toy model, but the underlying principles, broadly defined, are similar: the kinetic pole slows down φ when h approaches v. The difference is that in the two-field model this backreaction on φ is delayed by the time χ needs to fall to the pole. This results in a certain amount of residual φ displacement, which is suppressed by the high friction.
To fix the conventions, we present the general (tree-level) Lagrangian of our model, Eq. (III.3), omitting for the moment the terms relevant for the particle production, as well as the terms induced by quantum corrections; the κ's are positive dimensionless parameters. The choice of signs of the different terms in the Lagrangian (III.3) already suggests that we will exploit the scanning with an initially large Higgs vev and a growing φ value. As usual, we will assume that the relaxion field changes by an amount ∼ Λ/κ_φ during the scan, and fix for simplicity its initial value at φ = 0. We will also assume that χ starts negative and evolves towards the pole χ = 0. Before analyzing the dynamics of the model we would like to address the stability of the relaxation mechanism against various possible modifications of Eq. (III.3), which could either be dictated by specific UV completions, or arise as quantum corrections in our EFT. First of all, the singular behavior of the kinetic term guarantees that any additional contribution to the kinetic term with a weaker growth around the pole (i.e., weaker than 1/χ²) can be neglected. Secondly, the mechanism is also insensitive to a displacement of the pole from the value χ = 0, which we choose for simplicity and also to make contact with the UV models discussed in [24]. For the mechanism to work, one only needs the χ field to roll towards the pole, irrespective of where exactly it is located. Given these considerations, after analyzing the quantum corrections to the effective action we do not find any contributions which can spoil the desired behavior of the kinetic terms.
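The displayed Lagrangian (III.3) is elided; a plausible reconstruction, combining the kinetic structure (III.1) with the stated sign conventions (initially large Higgs vev, φ growing, χ rolling from negative values towards the pole at χ = 0), up to order-one factors:

$$\mathcal{L} = \frac{\Lambda_k^2}{2\chi^2}\Big[(\partial_\mu\phi)^2 + (\partial_\mu\chi)^2\Big] - \left(-\Lambda^2 + \kappa_h\Lambda\,\phi\right)h^2 + \kappa_\phi\Lambda^3\,\phi + \kappa_\chi\Lambda^3\,\chi\,. \qquad \text{(III.3)}$$

Here the signs of the slope terms are chosen so that both fields roll in the directions described in the text.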
Let us now discuss the structure of the effective potential. The tree-level scalar potential alone features shift symmetries for φ and χ in the limits of vanishing κ_{φ,h} and κ_χ, respectively. Since the interactions described by the h²φ term can induce quantum corrections to the h-independent φ potential, we have to constrain κ_φ ≳ κ_h/(4π)². More importantly, unlike the kinetic Lagrangian, there are quantum corrections which can significantly alter the general form of the effective action and affect the field evolution for certain values of the fields and parameters. They come from the kinetic terms, which explicitly break the χ shift symmetry. The χ potential therefore does not vanish even in the limit κ_χ → 0. The analysis of these quantum corrections and their various implications is given in the two following subsections. Most importantly, we will show that these corrections are irrelevant for the scanning mechanism for a certain choice of model parameters.
B. One-loop potential
We will now discuss quantum corrections to the scalar potential, which are important for the general consistency of the mechanism and also specify the requirements on possible UV completions. This discussion will be qualitative, and we will only obtain the general form of the most important quantum corrections, omitting in particular terms dependent on φ̇, χ̇ ≪ Λ². Before computing the loop corrections, let us first switch to the field χ̂ defined by χ = −Λ_k exp[−χ̂/Λ_k]. The field range χ ∈ (−Λ_k, 0) is then mapped onto χ̂ ∈ (0, ∞). After this redefinition we obtain the Lagrangian (III.4). Notice the presence in Eq. (III.4) of a linear contribution to the χ̂ potential. It appears due to the quartically divergent integral arising from the change of the path integral measure, Eq. (III.5). The kinetic term of the new field χ̂ no longer contains interactions and can induce no quantum corrections, unlike the χ kinetic term. However, the presence of shift-symmetry-breaking interactions associated with the kinetic term of the χ field causes the presence of a κ_χ-independent term in the scalar potential of χ̂. To recover this term, we could alternatively have used the noncanonical variable χ and computed the one-loop potential generated by the interactions contained in the χ kinetic term (we will do this in the following for the interactions contained in the φ kinetic term). Therefore the last term of Eq. (III.4) can effectively be seen as a one-loop contribution. This contribution, as we will see later, is crucial for the χ dynamics. Before discussing it, let us also compute the one-loop effective potential arising from the interactions described by the new Lagrangians (III.4), (III.5). The most important one-loop contribution, Eq. (III.7), arises from the χ̂-φ interactions encoded in the φ kinetic term; it depends linearly on χ̂ for large χ̂, similarly to the last term of Eq. (III.4), and consequently this correction also does not disappear in the limit of vanishing κ_χ, κ_φ. To estimate the relative importance of these loop terms compared to the tree-level potential, we differentiate both with respect to χ̂ and find that the tree-level potential can only be dominant when χ̂ ≲ Λ_k and (4π)²κ_χΛ_k > Λ. We will assume that these requirements are satisfied during the active phase of the Higgs mass scan, as we would like to have control over the χ potential during it. The former constraint, χ̂ ≲ Λ_k, is also needed to ensure that the φ evolution during the scan is unsuppressed by a large kinetic term and is weakly dependent on χ. We can add another condition that Λ_k has to satisfy, namely κ_χχ < Λ and consequently κ_χΛ_k < Λ. This is needed to ensure convergence of the κ_χχ/Λ expansion of our effective field theory. These constraints lead to Λ_k ∼ Λ/κ_χ, which we will assume in the following. The one-loop correction only becomes important for χ̂ > Λ_k, i.e., when the φ kinetic term is already enhanced and the Higgs mass scan has mostly ended. In the following we will discuss the effects of this correction on the evolution outside the scanning window, namely on setting the initial and final conditions for the scan.
C. Final vacuum after the scan
Depending on its sign, the correction (III.7) would either block the χ movement to the pole and thus spoil the mechanism, or make it move towards the pole even faster. Assuming the latter to be the case, the χ potential becomes unbounded from below. Any phenomenologically viable UV completion of this type of model will therefore be required to contain a mechanism regularizing the scalar potential in the vicinity of χ = 0. This can be done, for instance, by adding to the potential an extra piece with a different functional form than (III.7), to balance it and produce a finite minimum of the χ potential close to the pole. As another option, we could shift the kinetic pole by a small constant ε, Eq. (III.9). This shift defines the maximal enhancement of the kinetic term, and thus the minimal slope of the φ potential and the minimal time variation of the Higgs mass. It is thus limited by the requirement that the residual Higgs mass drift be negligible, which for the current age of the Universe, t ∼ 10⁴¹ GeV⁻¹, gives κ(ε/Λ_k) < 10⁻³⁹/(Λ/GeV)², for κ = κ_φ = κ_h. Such a correction will not affect any details of the scanning mechanism, and hence we will not discuss it any further. As a consequence of such a regularization, χ can actually reach the minimum of its potential and stop its evolution there.
Notice that such a regularization would also help to address the problem of the EFT validity of the model close to the pole. This problem is related to the fact that the quantum correction (III.7) grows with χ̂, and can eventually lead to the EFT breakdown. We estimate the breakdown condition from |V^(1-loop)| ∼ Λ⁴, which gives the maximal allowed χ̂ value, Eq. (III.11). However, once the regularization mechanism starts acting, the growth of |V^(1-loop)| stops and the EFT breakdown may not occur. The regularization, changing the χ behavior or the pole structure, should however not happen before χ approaches the pole by an amount which is sufficient to block the residual Higgs mass variation to the acceptable level. The minimal sufficient value of χ̂ can be estimated from Eq. (III.12), which gives χ̂/Λ_k > log[10³⁹ κ (Λ/GeV)²]. (III.13) For the values of Λ and κ obtained in the numerical scan below, we conclude that the χ̂ value of the EFT breakdown (III.11) is larger than the value of χ̂ (III.13) at which the regularizing mechanism is allowed to start acting. And once a mechanism of the type described above turns on, the χ field settles in the minimum of the scalar potential. Hence the system does not arrive at a state violating the EFT validity.
D. Initial conditions
The kinetic poles make the volume of the field space increasingly "stretched" upon approaching χ = 0. Thus, assuming uniformly distributed initial values of the renormalized fields over different patches of the Universe before inflation, we would find that most of the patches have χ very close to zero, almost completely blocking any possible Higgs mass scan. Having χ of the order of Λ_k, which is needed for a successful scan, would instead correspond to a very tuned, nontypical initial condition. We would like to emphasize that this problem only arises if the χ field values are indeed distributed with a weight defined by the size of the kinetic terms. To verify this assumption we would need to know the exact UV completion. It can also be the case that the UV complete theory automatically sets the initial values of χ sufficiently far from the pole. We will now show that we do not necessarily need to rely on this latter possibility, and that there can be ways to successfully complete the scan even with uniformly distributed values of the renormalized fields.
One way to solve this issue would be to add a second kinetic pole at χ ∼ −Λ_k. It would stretch the field space at large |χ| and produce a second attractor value for the initial conditions. It will then be equally probable to start around the first or the second pole. Further, we require that the slope of the χ potential around this new pole repel χ away from it, towards zero. In this way, once χ starts its evolution close to the new pole, it will unavoidably pass the region |χ| ∼ Λ_k, where the scan can happen, and then evolve to zero. If the scalar potential in the vicinity of the poles is determined by its one-loop expression, its monotonic decrease with χ requires a condition on the pole coefficients, where −c_i > 0 is the position of the extra pole. This sign-changing behavior may be achieved if the cutoff physics is sensitive to the vevs of the χ, φ and h fields, whose values change by an amount comparable to the cutoff during the evolution from one pole to another. An interesting consequence of such a construction is that χ becomes almost completely decoupled from all the other fields in the beginning and in the end of its evolution; it is only active in the window around |χ| ∼ Λ_k, when the Higgs mass scan happens. Alternatively, we could use the slowly varying χ field as the dominant source of inflation. In this case χ far from the minimum of its potential, and far from the pole, would be a natural initial condition, and there would be no need for the second pole; a detailed analysis of this possibility lies however beyond the scope of this paper.
IV. RELAXATION IN THE TWO-FIELD MODEL
A. Review of particle production friction
As estimated in Sec. III for the two-field attractor, we need to produce a drop of order v²/Λ² in the ratio φ̇/χ̇ when the Higgs mass approaches the SM value. This section is dedicated to a brief review of the process allowing for this drop: the particle production friction. The results given here are mostly based on the works [13][14][15][26], where the particle friction was applied to relaxion models, and on the original model of inflation with particle production [27]. We would like to emphasize that in the following we rely on analytic estimates of the relaxation dynamics; a comprehensive numerical study, while important, lies beyond the scope of this paper. The results presented in this section will be applied to the two-field attractor dynamics in Sec. IV B.
We will consider an Abelian field A_μ with a mass m_A coupled to one of the scanning fields (φ for definiteness) by means of the interaction (IV.1), where F_{μν} is the corresponding field strength tensor and F̃_{μν} its dual. In the time-dependent φ background, the transverse components of the A field can acquire exponentially growing modes, draining the φ field's kinetic energy; this process is called "particle friction". To see how it appears, we first write down the WKB solutions of the e.o.m. for the two transverse polarizations of A [26], Eq. (IV.2), where ± stands for right and left helicity, a is the scale factor of the expanding Universe, τ is the conformal time (a dτ = dt), and k is the 3-momentum. The approximation (IV.2) is valid for |∂_τω/ω²| < 1. Given that we are looking for exponentially growing gauge fields, one can end up in a space filled with a plasma of particles charged under A. Therefore, in Eq. (IV.2) we have also included the thermal correction to the dispersion relation, Π_t [29,30], with m_D² = g_A²T_p²/6 defining the Debye mass of a plasma with temperature T_p. If the dispersion relation (IV.2) allows for an imaginary ω ≡ iΩ, the vector field can experience exponential growth with time. Let us first notice that Π_t is a positive function for complex ω; therefore the existence of complex solutions of (IV.2) for one of the two polarizations requires the condition (IV.4), where without loss of generality we have assumed φ̇ > 0. We further notice that in the Ω ∼ k limit Π_t saturates around m_D², while for Ω ≫ k we obtain Π_t ∼ m_D²|Ω/k|. Therefore, once the condition (IV.4) is satisfied, the maximal Ω, and hence the maximal instability, is given by Eq. (IV.5), which shows that the instability growth becomes weaker in plasma. In all cases the instability is maximized around a momentum k of order φ̇/f, Eq. (IV.6); a schematic summary in formulas is given after the list below. We have hence identified three regimes of A_μ evolution: no exponential growth, fast growth, and growth slowed down by a high-temperature plasma with T_p ≳ φ̇/f. In the case of a growing instability, we can expect the parametric behavior summarized in Eq. (IV.7). Notice that the time of efficient exponential growth is limited to roughly one Hubble time, since after that the produced gauge field modes become significantly redshifted. In Appendix VI we collect the precise expressions for the quantities listed in Eq. (IV.7). The energy density thus produced and stored in the gauge field can lead to several effects.
• First, the growing gauge field modes backreact on the rolling field φ; the φ e.o.m. is Eq. (IV.8) (a schematic version is given after this list). For the purposes of this section we neglect the noncanonical kinetic terms, as the scan happens around |χ| ∼ Λ_k, i.e., in the regime where the kinetic terms are not significantly enhanced; hence our results remain parametrically accurate. In the case of negligible F_{μν}F̃^{μν}, the φ evolution is driven by the slope of the potential V and the Hubble friction, with a maximal speed given by Eq. (IV.9). If instead the term F_{μν}F̃^{μν} dominates, one can reach a stationary regime with the evolution defined by the last two terms of Eq. (IV.8), Eq. (IV.10). Using the exact dependence of F_{μν}F̃^{μν} on φ̇, we can estimate the maximal φ̇ as in Eq. (IV.11). To find out which of the two types of friction dominates, we simply need to check which of the two gives the minimal φ̇.
• Second, if the gauge field A_μ couples to the Higgs boson at tree level, through g_A²A²h², it can lead to a restoration of the electroweak symmetry due to the effective temperature, which we define via T_eff² ≡ ⟨A_μ²⟩. We therefore do not need the Higgs and the gauge field to enter thermal equilibrium for this mass correction to appear; T_eff simply describes the classical value of the A field.
• Finally, the energy density stored in the gauge bosons can thermalize, leading to the creation of a thermal plasma. This plasma can then slow down the gauge field growth as described above. For thermalization to occur, we need it to be faster than the Hubble expansion rate. We will consider two plasma production channels: perturbative pair production and nonperturbative Schwinger production.
Efficient perturbative pair production of charged fermions happens if the typical gauge quanta energy is higher than the fermion mass m_f, i.e., Ω > m_f, and if the pair production cross section, enhanced by the large multiplicity of gauge quanta N_γ, is higher than the Hubble expansion rate [28], which gives a lower bound on N_γ (Eq. (IV.12), involving the factor 128π), where N_f is the number of produced fermionic degrees of freedom and g_A is the gauge coupling. The plasma temperature can be estimated as an order-one fraction of the overall energy released by the rolling φ field in one Hubble time, T_p⁴ ∼ V_φ φ̇ H⁻¹. The nonperturbative Schwinger production is characterized by Eq. (IV.13), where E stands for the modulus of the electric field analogue for A. For this production channel to be efficient one needs the condition (IV.14), and the maximal plasma temperature can be estimated as an order-one fraction of the energy stored in the electric field, T_p⁴ ∼ E² [15].
Once one of the fermion production channels opens, the A field modes can thermalize. Thermalization happens if there is enough time for the A modes to interact with the plasma before they exit the horizon; concretely, we require the mean free path of the gauge quanta to be less than the Hubble length [15], Eq. (IV.15). We now have at our disposal three different friction regimes (Hubble friction, thermal particle friction, and nonthermal particle friction) with a potentially significant relative speed change between them. We have also identified the criteria leading to switching between the different regimes, which one needs to satisfy when the Higgs mass gets close to its SM value.
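A schematic flat-space summary of the mechanics just reviewed; this is a hedged reconstruction up to order-one coefficients and conformal-time factors, not the paper's exact displayed equations:

$$\mathcal{L} \supset \frac{\phi}{4f}\,F_{\mu\nu}\tilde F^{\mu\nu}\,, \qquad \omega_\pm^2 = k^2 + m_A^2 \mp k\,\frac{\dot\phi}{f} + \Pi_t\,,$$

so one transverse helicity grows exponentially (ω² < 0, ω ≡ iΩ) when k φ̇/f > k² + m_A² + Π_t, which is strongest near k ≃ φ̇/(2f). The backreaction enters the scanning field e.o.m. as

$$\ddot\phi + 3H\dot\phi + \frac{\partial V}{\partial\phi} = \frac{\big\langle F_{\mu\nu}\tilde F^{\mu\nu}\big\rangle}{4f}\,,$$

with φ̇ ≃ V_φ/(3H) when the source term is negligible (pure Hubble friction), and φ̇ set by the balance V_φ ∼ ⟨F F̃⟩/4f when the particle production dominates.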
B. Details of evolution and numerical results
We now present a concrete realization of the general scanning scheme presented in Sec. III A. In this realization we keep the χ field friction constant, which makes this field a spectator whose only purpose is to give a finite amount of time for the relaxion field evolution, until χ reaches the pole. As the φ field rolls down, it initially experiences only Hubble friction. φ couples to the SM gauge bosons, but the Higgs mass squared starts large and negative, making the EW gauge bosons heavy and forbidding the particle friction for φ, until the Higgs vev becomes small enough. Once this happens, φ becomes slow, and during the rest of the time, given by χ, it does not evolve significantly. The friction is produced by the term [13]

(φ/f) (g₂² W_{μν}W̃^{μν} − g₁² B_{μν}B̃^{μν}),  (IV.16)

where W and B correspond to the SM weak and hypercharge gauge bosons and g_{1,2} are their gauge couplings. The combination of field strengths in Eq. (IV.16) is chosen so that it does not contain the photon, which is massless during the whole scan and hence not sensitive to the Higgs mass change.
Notice that non-Abelian gauge bosons develop a so-called magnetic mass in plasma, m_M ∼ g₂² T_p [31], which blocks the instability development, hence weakening the friction. This can happen to the W and even to the Z boson, as in the broken EW phase the latter contains W³. The way out in this case would be a restoration of the EW symmetry immediately after the friction starts, making the Abelian hypercharge boson a mass eigenstate, and hence not mixing with the state possessing m_M. As will be discussed later, the restoration of the EW symmetry is also necessary for another, unrelated reason.
As for χ, we can either leave it with the Hubble friction only, or assign it some other particle friction; in the latter case the easiest option would be to couple it to a dark photon, χ X_{μν}X̃^{μν}/f_X. We will assume inflation to happen in the background of the relaxation process. One important advantage of this is the absence of χ locking (see below), which requires a sizable Hubble scale. Another advantage is that inflation continues after the relaxation and washes away all its by-products, such as the thermal plasma, which simplifies the phenomenology. We are now ready to consider the Higgs mass scan in detail and list the conditions needed for the described mechanism to work.
1. Initially the EW gauge bosons (whose masses we collectively denote m_W) have to be too heavy to be produced, so that φ quickly scans the Higgs mass. The particle friction turns on when m_W approaches the SM value, i.e., when the condition (IV.17) is satisfied, where the initial velocity φ̇_in, acquired before the particle production turns on, is set by the Hubble friction, Eq. (IV.18).
2. As soon as the condition (IV.17) is satisfied, we need to restore the EW symmetry. This makes the Abelian gauge boson B a mass eigenstate, allowing the associated instability to develop without creation of the magnetic mass [13]. The symmetry restoration can be produced immediately by the classical value of the A field, or a bit later by the plasma if it forms (see the second point of Sec. IV A). We find that once the condition (IV.17) is fulfilled, the A contribution dominates over the others; we therefore require the condition (IV.19).
3. The resulting φ̇ drop should be at least ∼ v²/Λ². In this case there will be no significant residual scan of the Higgs mass after it approaches the SM value. From Eq. (IV.11), this gives the constraint (IV.20). This condition is crucial to eventually ensure the naturalness of the weak scale with respect to the cutoff Λ, and it represents a rather tight upper bound on the inflationary Hubble scale (see the end of this section). Therefore we do not improve on this point with respect to other relaxion models operating during inflation, which typically feature analogous constraints.
4. The condition previously given in Eq. (IV.11) actually depends on whether the SM plasma is created or not. In the latter case φ reaches the equilibrium speed φ̇ ∼ fH; in the former the velocity is higher than that, because the plasma decreases the friction efficiency. The fermion plasma can be formed in two ways, through either perturbative or nonperturbative production. Let us now consider under what conditions these plasma formation channels can be active. When both production channels are efficient, the one which gives the higher plasma temperature dominates. For simplicity we assume that the W gauge bosons are never exponentially produced, because of the magnetic mass. In this way we obtain a conservative estimate of the allowed parameter space.
• To allow for perturbative production of fermions at v = 0 at a plasma temperature T_p, we need the condition (IV.21) to hold (see Eqs. (IV.12) and (VI.2)), where N_f ∼ O(100) counts the number of SM fermionic degrees of freedom, m_D² = g²T_p²/6, and ξ ≡ φ̇/(fH) is defined from the balance V_φ ∼ ⟨FF̃⟩/f as in Eq. (IV.22) (for ⟨FF̃⟩ see Eq. (VI.2)). Comparing the expressions (IV.21) and (IV.22), we conclude that the plasma temperature cannot grow above a maximal value T_max, as the plasma stops being produced otherwise. At the same time, the plasma cannot go above the equilibrium temperature T_eq, defined such that an order-one fraction of all the gauge field energy gets thermalized. The equilibrium plasma temperature can be estimated from the total energy density lost by the rolling φ field in one Hubble time, T_p⁴ ∼ δρ ∼ V_φ φ̇/H.
• For the Schwinger production, the fermion plasma is produced if g E > m_f² (m_f = 0 in the unbroken phase) and the corresponding condition holds (see Eq. (VI.3) for E), which defines the maximal plasma temperature above which the production stops. At the same time, the plasma temperature is also limited by the energy stored in the electric field; we can find this equilibrium temperature from T_p² ∼ E.
• Once the fermion plasma is formed (from either perturbative or nonperturbative production), it can only backreact on the gauge bosons and the Higgs field if g⁴T_p/(4π)² ≳ H.
• The stability of the plasma also requires the EW symmetry to remain unbroken when it forms; otherwise we obtain the magnetic mass blocking the instability and the plasma production. Hence either the plasma temperature should be high enough to restore the symmetry (min[T_max, T_eq] > v), or it has to be restored by the ⟨A²⟩ correction (Eq. (VI.5)).
[FIG. 4: Results of the numerical parameter space scan, in terms of Λ, f, H and κ_φ. Blue points correspond to evolution without plasma production, and orange to evolution with plasma production. The green line on the right plot shows the minimal κ_φ below which the φ field excursion becomes trans-Planckian.]
8. Finally, we have the constraint on the vacuum energy density, ensuring that inflation is not affected by our mechanism.
The main result of this section is the scatter plots of Fig. 4, showing the values of the main relevant parameters which satisfy the constraints listed above. For all the points with plasma production, the main production channel is perturbative production. The maximal allowed cutoff scale Λ is around 50 TeV, while without trans-Planckian field excursions it decreases to ∼ 20 TeV. The friction without plasma is generated for a moderate (for this kind of scenario) Hubble scale of O(0.1) GeV, while in the presence of plasma H needs to be several orders of magnitude lower, with the maximal cutoff reached for H ∼ 10⁻⁸ GeV.
V. SUMMARY AND FUTURE DIRECTIONS
We have discussed a new scenario with the dynamical Higgs mass scan, within a class of models pioneered in [22,23] and [1]. The novelty of our proposal lies in employing a noncanonical kinetic term of the scanning field, leading to its decoupling from the Higgs sector at the end of the scan. This scenario therefore does not require Higgs-dependent barriers for the relaxion (and hence new light EW-charged states), or, a priori, the presence of two periods in the relaxion potential.
We have presented a particular realization of the pole attractor idea, in which the scanning field can evolve only during a limited time window during inflation. After that the second field, controlling the kinetic terms, reaches the pole value and blocks the scan. Starting with a large Higgs vev, the scanning field first evolves quickly until the Higgs mass gets close to the SM value. At this point the scanning field evolution is slowed down by a dissipation of energy into SM gauge bosons, hence it displaces by a very small distance before the scanning window closes. We have identified two viable regions of the parameter space, with and without a backreacting plasma. The latter allows for a higher inflationary Hubble parameter, but a slightly lower maximal cutoff. The maximal cutoff of the Higgs sector ∼ 50 TeV is well above the reach of current and near future collider experiments. It is nevertheless not restricted to be that far and can reside as low as the current lower bounds on new physics allow. The relaxion sector fields' couplings to the visible sector are exponentially suppressed making it very difficult to observe their direct effect. It is important to mention that this particular two-field realization of our scenario does feature two different "periods" for the φ field, similarly to the original relaxion models. The first period is given by ∼ Λ/κ φ . The second periodicity φ → φ + 2πf has to be assumed in order to ensure that the underlying physics responsible for the friction term (IV.16) does not induce additional unsuppressed shift symmetry breaking terms to the scalar potential.
All the models discussed in this work need the initial Higgs vev to be large, or in other words the Higgs mass squared to be negative. One can however think of modifications suitable to accommodate a vanishing initial Higgs field value, h = 0, and a large positive mass squared. These should include a change of signs in the scalar potential to provide a scan in the right direction. For the toy model, in order to prevent a singularity in the relaxion kinetic term one would need to shift the pole away from h = 0, as discussed in Sec. II A, which in any case is needed to stop the scan at h = v ≠ 0. For the two-field attractor one can think of assigning the Higgs-dependent particle production friction to χ instead of φ, so that initially χ is slowed down by SM gauge boson production at v = 0, and then the particle friction disappears and χ falls to zero, blocking the scan. The simplest implementation of this last mechanism however leads to a relatively low cutoff, at the TeV scale, so further refinements would be needed to increase it.
We would also like to mention that there is another potential way of implementing the two-field scenario, without using the concept of a limited scanning window and changing friction. This would be more in accord with the spirit of the toy model, where the pole depends on the Higgs vev. One can assign to χ a potential with a minimum (local or global) away from zero, which disappears or displaces to zero when the Higgs mass reaches the SM value. The simplest realization would be a scalar potential with a tadpole, V_χ = h²χ + Λ²χ², which however has the problem that the quantum correction ∼ Λ⁴ log χ quickly becomes more important than the Higgs-dependent tadpole. It would be interesting to further investigate this model-building direction.
The proposed mechanism has a lot in common with the attractor models of inflation, sharing some structural features and, possibly, can find UV completions in analogue type of theories. In general, the existence of an appropriate UV completion for our scenario presents an important question for further studies, in particular because the behavior close to the attractor relies on certain crucial assumptions about the UV physics features.
Finally, we would like to mention that a distinctive phenomenological feature of this type of scenario is a slow change of the theory parameters with time, as the scan never completely stops. While we do not necessarily need this time variation to be detectable with the current experiments, its observation would be an interesting hint for this mechanism, especially given the limited direct experimental access to the relaxion sector, whose couplings to the SM particles are exponentially suppressed.
VI. APPENDIX: PARTICLE PRODUCTION
The combinations of the field strengths can be rewritten in terms of electric and magnetic fields as [27] F_{μν}F^{μν} ∼ E² + B² and F_{μν}F̃^{μν} ∼ EB (VI.1). At zero temperature, they take the form given in Eq. (VI.2).
"Physics"
] |
Selected Plant Metabolites Involved in Oxidation-Reduction Processes during Bud Dormancy and Ontogenetic Development in Sweet Cherry Buds (Prunus avium L.)
Many biochemical processes are involved in regulating the consecutive transitions between the different phases of dormancy in sweet cherry buds. An evaluation based on a metabolic approach has, as yet, only been partly addressed. The aim of this work, therefore, was to determine which plant metabolites could serve as biomarkers for the different transitions in sweet cherry buds. The focus here was on those metabolites involved in oxidation-reduction processes during bud dormancy, as determined by targeted and untargeted mass spectrometry-based methods. The metabolites addressed included phenolic compounds, ascorbate/dehydroascorbate, reducing sugars, carotenoids and chlorophylls. The results demonstrate that the content of phenolic compounds decreases until the end of endodormancy; after a long period of constancy lasting until the end of ecodormancy, a final phase of further decrease follows up to the phenophase "open cluster". The main phenolic compounds were caffeoylquinic acids, coumaroylquinic acids and catechins, as well as quercetin and kaempferol derivatives. The data also support a protective role of ascorbate and glutathione in the para- and endodormancy phases. Consistent trends in the content of reducing sugars can also be elucidated for the different phenophases of dormancy. The untargeted approach with principal component analysis (PCA) clearly differentiates the different timings of dormancy, giving further valuable information.
Introduction
The monitoring of bud dormancy in temperate latitudes is becoming a necessary tool to improve fruit flowering for sweet cherry and the correct implementation of agricultural actions such as protection against frost damage. Three characteristic phases of dormancy (para-, endo- and ecodormancy) are discussed. Many studies have been directed at determining the course of these phenophases [1][2][3][4][5], through which a central role for the phytohormone abscisic acid (ABA) has been proposed [1,3,6]. Leaf fall (LF) generally marks the end of paradormancy and the onset of endodormancy [2]. Subsequently, no visual changes can be noted on the tree to determine the transition from endo- to ecodormancy. The release from endodormancy requires a sufficiently long period of low temperatures providing the necessary chilling requirement, and can be described by chilling models [2,4,7]. Twigs from naturally growing trees are regularly taken to observe their development under controlled environmental conditions, allowing one to determine the transition from endo- to ecodormancy (t1) by bud break [2,8]. It is a shortcoming of phenological models that the beginning of ontogenetic development cannot be accurately predicted. Generally, the end of ecodormancy, and consequently the beginning of ontogenetic development (t1*), is described by forcing models (2-phase models) [2,8].
Many biochemical processes are involved in regulating the consecutive transition of these different phases as determined by protein expression and identification [4]. The expression of many dehydrins, a family of ubiquitous proteins belonging to angiosperms and gymnosperms, can be induced by the phytohormone ABA [9]. Dehydrins help proteins to fold properly and/or prevent their aggregation under heat or freezing stress, providing cryoprotective activity. During investigations on seasonal changes in protein profiles in dormant flower buds of Japanese apricot (Prunus mume Siebold and Zucc.) cultivars, one protein, isolated by two-dimensional polyacrylamide gel electrophoresis of flower bud extracts, was shown by peptide sequencing to be a dehydrin [10]. Proteomic methods applied to follow the events from dormancy to release that occur during the apical bud development of Pinus sylvestris L. showed that the majority of these proteins (57%) are involved in metabolic and other cellular processes [11]. With regard to endodormancy breaking, a recent study on the buds of Japanese pear (Pyrus pyrifolia Nakai) collected in the pre-bud breaking period phase were used to identify the proteins with the result that the majority of the identified proteins (more than 20 proteins) were involved primarily in the oxidation-reduction process [5]. Among these, catalase, L-ascorbate peroxidase and peroxidase were identified, enzymes known to be very closely-involved in oxidoreductase activities [5]. The decomposition of H 2 O 2 by catalase and peroxidase seems to be relevant in the context of the redox reactions involved in the transition of the endodormancy phases [5]. The dormancy related data on protein expression of the Pinus sylvestris L. Var. Mongolica litv. apical buds also reveal that ascorbate peroxidase may be involved in the initiation of bud dormancy, whereas caffeoyl-CoA O-methyltransferase was one of the biomarkers suggested to be involved in the initiation of bud dormancy release [11].
On the other hand, an evaluation based on a corresponding metabolic approach has not yet been fully addressed [12]. Sugars, especially sucrose, oligosaccharides, amino acids, phenolic compounds and organic acids were among the relevant metabolites connected with bud opening in conifer buds under forced deacclimation (artificially induced spring) [12]. Mass-spectrometry-based metabolome analyses are becoming more relevant for the profiling of known and unknown plant metabolites (targeted/untargeted) [13,14]. Accordingly, studies related to cold acclimation in Picea sitchensis and Picea obovata have been addressed using gas chromatography/mass spectrometry (GC/MS)-based metabolite profiling [15][16][17], but there is still a lack of data for trees economically relevant to fruit and berry farming. The present long-term study obtained data on plant metabolites during winter rest and ontogenetic development in sweet cherry buds (Prunus avium L.) of the cultivar "Summit", to determine which of these plant metabolites could serve as biomarkers for the different phase transitions. The focus of this study was on the different metabolites involved in oxidation-reduction processes during bud dormancy. Buds collected from cherry trees of the cultivar "Summit" were therefore analyzed by mass-spectrometry-based methods. We report here on data derived from the seasons 2014/15-2016/17.
Results and Discussion
The different phases of para-, endo- and ecodormancy, and the ontogenetic development of cherry flower buds of the cultivar "Summit", were characterized as recently reported [1,2]. The picking period (PR-LF, PR: picking ripeness, LF: leaf fall) is characterized by the flower buds' inability to bloom. On average, this phenophase lasts for about 4 months, of which the period S1-LF (S1: first sampling date) represents an integral part, lasting 6 weeks (seasons 2014/15-2016/17) [6]. A shorter period of endodormancy (LF-t1) follows, ending with its release at the timing t1, which can be determined by observing harvested twigs under controlled conditions in a climate chamber experiment [1,2]. The period of endodormancy lasts for 21-28 days, as observed for the three seasons 2014/15 to 2016/17 [6]. Subsequently, a relatively long period of ecodormancy (t1-t1*; 63-98 days [6]) can be observed up to the beginning of the ontogenetic development (t1*). This date can be related to a steady and continuous increase of the water content of the buds, connected with rising air temperatures [2]. In the following period, the ontogenetic development-related dates "bud swelling" (SB), "side green" (SG), "green tip" (GT), "tight cluster" (TC) and "open cluster" (OC) were documented [6]. The duration (d) and the average temperatures for the different stages are given in [6] (see also Supplementary Tables S1 and S2).
In the present work, focus has been placed on those metabolites involved in oxidation-reduction processes during the aforementioned phases, and on their changes in the course of dormancy and ontogenetic development for cherry flower buds of the cultivar "Summit".
Characterization of the Phenolic Compounds and the Antioxidative Potential of the Flower Buds
Based on data available from previous studies [5,11,12,17], we first focused on the group of phenolic compounds, which are known to partake in many different redox reactions. A high degree of compartmentation of phenylpropanoid and flavonoid compounds is generally observed, and they may accumulate in vacuoles or be covalently integrated into plant cell membrane-like tissues [18]. The determination of the total phenolic compounds, extracted with 60% aqueous methanol and quantified using Folin-Ciocalteu phenol reagent, is shown in Figure 1 for the seasons 2014/15 to 2016/17 for the period S1-t1*, with the highest values at the beginning of the data sampling (113.4 ± 1.6, 92.6 ± 3.2 and 67.6 ± 1.9 mg catechin equivalents·g−1 DM for the seasons 2014/15, 2015/16 and 2016/17, respectively). This high content (7-11% DM) makes the phenolic compounds one of the major groups of redox metabolites present in the cherry flower buds of the cultivar "Summit". At the beginning of the dormancy, there is a decline in the content of phenolic compounds with a strong correlation (R² = 0.80-0.83) until the developmental milestone t1, indicating that these compounds are either transported towards other tissues, metabolized or degraded. In the following long period of ecodormancy (77 ± 18.5 d) [6], relatively constant values for the content of the total phenolics were observed (mean values were 68.0 ± 3.2 for 2014/15, 59.8 ± 3.8 for 2015/16 and 45.3 ± 2.2 for 2016/17, in mg catechin equivalents·g−1 DM; see also Figure 1). Figure 2 shows the allocation and composition of the specific phenolic compounds in cherry buds. The main groups of phenolic compounds determined by high-performance liquid chromatography (HPLC) were caffeoylquinic acids (chlorogenic acid isomers/derivatives), coumaroylquinic acid (isomers/derivatives), catechins, quercetin and kaempferol derivatives, and one peonidin derivative (Figure 2A-C). Comparing these data for the two methods presented here (HPLC and total phenol content) also indicates that a few relevant phenolic compounds might be present at low concentrations, as discussed later in the results of the untargeted analysis (see also Supplementary Figure S1). In buds of black currants, hydroxycinnamic acids constituted the major group of phenolic acids, with two major phenolic acids, both chlorogenic acid isomers (3-O-caffeoylquinic acid and 5-O-caffeoylquinic acid), just as observed in the present study [19]. The data are also concordant with regard to the prevalent flavonoids (catechin/epicatechin, quercetin and kaempferol derivatives) found in the present study. HPLC separation with tandem mass spectrometry (MS/MS) identification, and the subsequent evaluation of the concentrations of individual phenolics in free, conjugated and bound forms in the fruits of 7 sweet cherry cultivars, additionally revealed that the major compounds were chlorogenic acid isomers/derivatives, catechin and epicatechin [20]. Generally, the phenolic compounds detected in the present study were also found in cherry fruits of different cultivars, although there were differences in their relative levels [20,21]. During the seasons 2014/15 and 2015/16, the chlorogenic acid isomers/derivatives represented the major metabolites. In contrast, for the season 2016/17, higher amounts of catechin and epicatechin were observed. The factors responsible for the altered composition in season 2016/17 have not yet been determined and will require further long-term investigations.
Changes in the composition are also reported to be induced by other environmental factors such as light or nutrient supply [22]. Further investigations of the changes in phenolic compounds, e.g., conversion reactions and glycosylation patterns with respect to dormancy/ontogenetic development, are necessary. Generally, the distribution of the phenolic compounds changes only slightly up until the developmental milestone GT, after which a higher differentiation in the composition was observed. Specifically, for the seasons 2014/15 and 2015/16, the concentration of flavonoid glycosides declined towards the milestone TC and then either increased or declined further, evidently as an interaction with the environment (e.g., temperature; see also Supplementary Table S2). While the content of most compounds decreased over time, suggesting a dilution effect due to growth, the corresponding concentrations of kaempferol-3-rutinoside and quercetin-3-glucoside-7-diglucoside increased. Both are relatively complex compounds, reflecting the loss of antioxidant activity during development [23]. Furthermore, a marked shift in some compounds was seen particularly in the last stages TC and OC, where the catechin and neochlorogenic acid contents were enhanced while chlorogenic acid, known for its high antioxidant activity, was reduced [24]. Peonidin-3-glycoside was present in all developmental stages from SG to OC, consistent with the pinkish color of cherry flowers.
The acquired data do not allow for the determination of a consistent individual marker for the individual milestones when considering all three investigated seasons for the different stages of sweet cherry bud dormancy and ontogenetic development. The total phenol content generally decreases significantly after reaching the milestone SB (Supplementary Figure S2). In the same context, it was shown that the beginning of ontogenetic development was related to a steadily rising water content in the buds, induced by steadily increasing air temperatures above the freezing point. Here, the water content in the bud could be presented as a simple but very effective marker to define t1* [2]. Therefore, the content of total phenolic compounds was related to the water content of the buds for the phenophase t1*-OC, resulting in high negative correlation values (R² = 0.84-0.91, Figure 2D). Rising water content seems to facilitate the dilution of the phenolic compounds.
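To make this correlation analysis concrete, the following is a minimal sketch (not the authors' code) of how such an R² value between bud water content and total phenolic content can be computed; all numbers are hypothetical placeholders, not measured data.

```python
# Illustrative sketch: linear correlation of total phenolics vs. water content,
# mimicking the reported R^2 = 0.84-0.91 for the phenophase t1*-OC.
import numpy as np
from scipy.stats import linregress

water_content = np.array([52.0, 55.0, 58.0, 62.0, 66.0, 71.0])   # % FM, hypothetical
total_phenols = np.array([68.0, 64.5, 60.2, 55.1, 49.8, 44.0])   # mg catechin eq. g^-1 DM, hypothetical

fit = linregress(water_content, total_phenols)
print(f"slope = {fit.slope:.2f}, R^2 = {fit.rvalue**2:.2f}")  # negative slope -> dilution effect
```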
In summary, the results from this section document the de-accumulation of phenolic compounds (transport, metabolism, degradation) until the end of dormancy (t1), after which a long period of constancy can be observed until t1*, followed by a final phase of further decrease towards the phenophase OC, correlating with increasing water content. Catalase, L-ascorbate peroxidase and peroxidase have been identified as closely involved in oxidoreductase activities [5]. The decomposition of H2O2 by catalase and peroxidase seems to be relevant in the context of the redox reactions involved in the transition of the endodormancy phases [5]. The decomposition of hydrogen peroxide to water and oxygen may protect the cell from oxidative damage by reactive oxygen species. The phenolic compounds, with their strong antioxidative properties, may interact in a similar way, partly explaining their decrease in the first phase (S1-LF). Later, after reaching t1*, the increasing water content may facilitate the dilution of the dissolved phenolic compounds. To follow up this line of discussion, antioxidative activity was monitored over the whole investigation period. Figure 3A-C documents the corresponding results for the three seasons 2014/15-2016/17. The antioxidant capacity of the same bud extracts as used for the above-mentioned analysis of the phenolics was evaluated by the TEAC and FRAP assays, both methods functioning on the basis of electron transfer during redox reactions. A positive correlation was previously found between antioxidant activity and total free phenolics of sweet cherry fruits [20]. A similar trend can also be observed for the buds in this study.
The antioxidant capacity decreases with the content of phenolic compounds until t1, remains more or less stable over the period of ecodormancy (t1-t1*), then decreases further during the ontogenetic development of the sweet cherry buds (t1*-OC). The collected data allow these three phases (S1-t1; t1-t1*; t1*-OC) to be differentiated, with common regions of transition periods. The corresponding correlation factors for the observed trends are provided in Supplementary Figure S3. The data also show a better correlation between the content of total phenolic compounds and antioxidative activity when the latter was recorded as ascorbic acid (FRAP) equivalents (R² = 0.86-0.98), as compared to the values determined with Trolox (TEAC) (R² = 0.70-0.81). Further, a negative correlation can be seen from t1* to OC between the TEAC and FRAP values and the rising water content of the buds (Figure 3D,E). This result further underlines that the major group of phenolic compounds imprints itself on the observed changes in the antioxidative activity, especially with regard to the dilution in the latter stages of the ontogenetic development.
Monitoring of Ascorbate/Dehydroascorbate during Dormancy
Previous data have demonstrated an increasing H2O2 content in floral buds of pear cultivars (Pyrus pyrifolia) during the pre-breaking period of endodormancy, and the possible involvement of ascorbate peroxidase as a catalyst for the reduction of H2O2 with ascorbate as an electron donor. Thus, ascorbic acid (AA) and its oxidation product dehydroascorbic acid (DHA) were identified as potential components of redox reactions [5,25]. Both are believed to be involved in preventing free radical toxicity and belong to the protective enzyme system of the glutathione-ascorbate cycle [5]. The amounts of AA and DHA over the three seasons are shown in Figure 4A-C. Though no other data are available from sweet cherry flower buds, it is notable that the AA content of fresh fruits was quantified at 7-15 mg/100 g and also found to vary between the different growing seasons [21]. No consistent pattern of development-orientated changes can be derived for the three seasons with regard to AA and DHA content, except that their content increases from SB onwards (thus providing no marker for t1*) in sweet cherry buds. A positive correlation was found between the rising water content and either DHA or AA in the buds for the phenophase t1*-OC (Figure 4D,E; R² = 0.67-0.83). Therefore, the DHA or AA content may also serve to characterize the period t1*-OC, where oxidative processes may prevail, as indicated by the correspondingly high content of DHA. Similarly, the revival of AA synthesis suggests that AA becomes one of the main redox-active metabolites as the content of phenolic compounds decreases.
A possible involvement of ascorbate peroxidase in catalyzing the reduction of H2O2 with ascorbate as an electron donor has been described for the initiation period of endodormancy [5,25]. For the season 2015/16 (DOY 272-307), an initial decrease in the content of AA up until LF (induction of endodormancy) was noted, but this could not be verified for the seasons 2014/15 or 2016/17. In comparison, no corresponding increases in DHA values were found (more or less constant values) up to LF for the seasons 2014/15-2015/16, although a decrease was noted for the season 2016/17 over the same period. Subsequent to LF, constant values for both AA and DHA (LF-t1) were noted. DHA is generally actively imported, with the help of glucose transporters, into the endoplasmic reticulum of cells, where it can be reduced back to ascorbate by glutathione and/or other thiols. We presume that an equilibrium is reached between the contents of AA and DHA and the corresponding actions of the protective enzyme systems of the ascorbate and glutathione cycle.
Role of Reducing Sugars in Dormancy
The aldehyde functional group in reducing sugars can also partake in redox reactions on the basis of electron transfer. Glucose is generally broken down during cellular respiration via an electron transport chain involving oxidative phosphorylation. It has also been observed that some relevant changes occur to glycolysis during dormancy, where glucose may be converted into pyruvate, generating small amounts of adenosine triphosphate (ATP) and nicotinamide adenine dinucleotide (NADH) [5]. Eight of the enzymes involved in glycolysis were identified as expressed proteins in a study of endodormancy breaking in pears [5]. An additional label-free quantitative proteomics study also indicated the central role of glycolysis, with six enzymes (enolase, triosephosphate isomerase, fructose-bisphosphate aldolase, alcohol dehydrogenase, glyceraldehyde 3-phosphate dehydrogenase, and 3-phosphoglycerate kinase) upregulated during the dormancy stage in terminal poplar buds [26]. In the same study, 74 significantly altered proteins were identified, most of which are involved in carbohydrate metabolism (22%), redox regulation (19%), amino acid transport and metabolism (10%), and stress response (8%) [26]. On the other hand, glycosylation can significantly improve the solubility, stability and bioactivity of phenolic compounds. Such processes may also occur in view of the high number of phenolic compounds/metabolites found in the buds of cherry blossoms [27]. Sugars and other compatible solutes exert cryoprotective properties and accumulate during frost hardening [12]. In this same context, a recent report details the role of abscisic acid and sucrose as two important metabolites that can help to identify the date of endodormancy release in sweet cherries [1]. Based on this background, the next targeted group of redox metabolites was identified as the reducing sugars.
A quantification method using HPLC equipped with an evaporative light-scattering detector was established and complemented by identification and verification via HPTLC. Information regarding the latter is given in the supplementary information (Supplementary Figure S4). The provided data show the derivatization with the aniline-diphenylamine-o-phosphoric acid reagent and visualization with white light illumination and with UV at 366 nm, as well as the HPTLC-ESI+-MS spectra of the main sugars. Initially, only the reducing sugars glucose and fructose were found in the bud samples at the different milestones. After the end of the ecodormancy stage (t1*), maltose was also identified. Figure 5 shows the composition of the reducing sugars at the different developmental milestones as determined by HPLC. The detailed course of the weekly and developmentally orientated changes in the content of reducing sugars is given in Supplementary Figure S5. From t1* to SB, there is a decrease in the content of reducing sugars. Then, with the visually perceptible ontogenetic development of sweet cherry buds (SB-OC), a marked increase in the content of reducing sugars is observed. These data reveal that there is a highly regulated metabolism of glucose until LF, confirming the observations made in other studies regarding glycolysis during dormancy [5,26]. Subsequently, an accumulation of reducing sugars is initiated, possibly resulting from the breakdown of oligo- and polysaccharides (LF-t1*). With the initiation of photosynthesis (accompanied by an increase in the content of the chlorophylls; Figure 6), their synthesis is again revived. Concluding this part, the lowest content of reducing sugars was found at LF, and maltose is unfortunately no marker for t1*, because it only starts to increase significantly after t1* (2015/16) or after SB (2014/15, 2016/17) [1].
Role of Fat Soluble Redox Metabolites in Dormancy
The investigations until now have been limited to redox metabolites of high to medium polarity; now, a few selected, more hydrophobic metabolites with reported antioxidative properties will be considered, such as carotenoids [28] and chlorophyll derivatives [29]. In this context, abscisic acid has also been discussed as one of the central regulators of bud dormancy [3], and its biosynthesis appears to be well connected to the carotenoid pathways, with neoxanthin and violaxanthin participating as precursors [30]. It was also recently shown [6] that the content of neoxanthin and violaxanthin in the seasons 2014/15 to 2016/17 remained unaffected over the different phases of endo- and ecodormancy, and their individual contribution to ABA synthesis was not clearly discernible.
The composition of the main carotenoids and chlorophylls during the course of the developmental stages was determined by HPLC; only data on selected milestones are shown in Figure 6. Their content remained more or less constant during the different stages of dormancy (para-, endo- and ecodormancy) until t1*. Subsequently, with the beginning of visually perceptible ontogenetic development in sweet cherry buds (SB-OC), their content increases sharply, similar to that of the chlorophylls (Figure 6). The picture of the participation of lipophilic metabolites in the redox processes during dormancy remains largely incomplete, since our focus was limited to redox metabolites of high to medium polarity.
Non-Targeted Analysis of the Metabolites
The targeted analysis of selected redox-active metabolites was discussed in the preceding sections. In the following, an untargeted approach was applied for selected developmental milestones of the season 2014/15 (LF, t1, t1*, SB and OC), using a data set of 20 analysis samples representing n = 4 for each milestone, to identify further relevant metabolites. Altogether, 5469 of an initial 13,258 entities (negative mode) and 411 of an initial 1704 entities (positive mode) were selected from the MS profiling. The quality of the filtered entities (based on flag/frequency settings) was further confirmed by statistical one-way ANOVA (p < 0.05). A principal component analysis (PCA) was conducted to differentiate the different milestones for the metabolites and is presented in Supplementary Figure S6. The individual data sets for the different milestones were closely clustered and could be clearly separated from one another (Supplementary Figure S6). Based on their molecular mass, the metabolites were identified using the Mass Hunter Metlin PCD software to give a general overview of their up- and down-regulation, and to identify further focus areas for the corresponding targeted analysis of suitable candidates. Identification was therefore only tentative, since isobaric substances have the same mass but may still be structurally completely different. The data showed a series of different molecular species that could be allocated, for the majority of the structures, to secondary metabolism (especially that concerning the phenolic compounds), carbohydrate, lipid (phospholipid) and protein metabolism (peptides). A few interesting compounds were selected, and data on them are given in Supplementary Table S3. The tentative data on glutathione (reduced and oxidized status) support the observation made for ascorbic acid, suggesting a hand-in-hand involvement of these in the protective enzyme system of the ascorbate and glutathione cycle [5]. Increased glutathione oxidation seems to occur at the milestones LF and t1; with the progression to ecodormancy it decreases, with a correspondingly elevated fold change for reduced glutathione. A series of peptides (many also containing Cys) were also found among the allocated metabolites. One such example (Asp-Met-Trp), given in Supplementary Table S3, may support glutathione in such reactions, since their participation in redox reactions is likely. The data for maltotetraose support the observations reported for sugars and oligosaccharides, as they may exert cryoprotective properties and accumulate during the frost-hardening metabolism of endodormancy [12]. It was only upregulated at the milestone t1 (as compared to LF), after which it was subsequently downregulated. A large number of metabolites were allocated to phenolic compounds (kaempferol, quercetin, myricetin, luteolin, catechin and gallocatechin as well as hydroxycinnamic and hydroxybenzoic acid derivatives/conjugates). The regulation of two of these metabolites is shown as an example in Supplementary Table S3, and exemplifies the need for methods complementary to the targeted method applied in this study. In Norway spruce (Picea abies), p-hydroxyacetophenone caused needle fall, retarded apical growth and inhibited bud sprouting in biological tests [31]. It was upregulated only at LF and may present a good marker for this milestone; further supplementary studies are needed to confirm this observation.
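As an illustration of the PCA step described above (not the authors' code), here is a minimal sketch under stated assumptions: `abundances` stands in for the filtered entity matrix (20 samples, e.g. the 411 positive-mode entities) and `milestones` labels the n = 4 replicates per milestone; the data are randomly generated placeholders.

```python
# Minimal PCA sketch: differentiate developmental milestones from
# (samples x entities) MS intensities. Placeholder data only.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
milestones = np.repeat(["LF", "t1", "t1*", "SB", "OC"], 4)
abundances = rng.lognormal(mean=8.0, sigma=1.0, size=(20, 411))  # hypothetical entities

scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(abundances))
for stage in ["LF", "t1", "t1*", "SB", "OC"]:
    sel = milestones == stage
    print(stage, scores[sel, 0].mean(), scores[sel, 1].mean())  # cluster centroids
```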
Finally, the lipophilic metabolites still need to be evaluated; a few interesting candidates are presented in Supplementary Table S3. More sensitive methods are needed to encompass such significant metabolites of low abundance, and further work will be directed at quantifying them.
Materials and Methods

Chemicals

Liquid chromatography-mass spectrometry (LC-MS) grade solvents were used for the LC-MS/MS analysis. All other chemicals used were of analytical grade (VWR, Darmstadt, Germany). Details of the individual reference substances are given in the corresponding method descriptions.
Study Design and Sampling
The details of the growth conditions and sampling are given in Götz et al. [32]. The experiments were performed at an experimental sweet cherry orchard of the Humboldt-University in Berlin-Dahlem (52°28′ northern latitude, 13°18′ eastern longitude).
Bud and Twig Sampling
Four trees in the middle of the orchard were selected for bud collection. Samples of 3 clusters from each tree (n = 4) were taken weekly at random locations over the whole tree. The release of endodormancy (t1) was estimated by observing twigs under controlled conditions according to [2,7,32]. The first indication for the transition from the dormant stage to the beginning of ontogenetic development (t1*) was determined by evaluating the buds' water content [2,32].
Extraction of Phenolic Compounds
Freeze-dried samples (25 mg) of each sampling date were extracted with 0.75 mL 60% methanol using an ultrasonication treatment (Sonorex RR 100, Bandelin electronic GmbH & Co. KG, Berlin, Germany) for 3 min, followed by incubation at 4 °C overnight (AEG Santo 60240 DT 28, Electrolux Hausgeräte GmbH, Nürnberg, Germany). The extracts were centrifuged at 9300× g for 10 min at 4 °C, the extraction was repeated, and the supernatants were pooled and stored at −20 °C until needed.
Total Content of Phenolic Compounds
The total phenolic content was estimated using the Folin-Ciocalteu procedure [24,33]. The data are expressed as mg catechin equivalents·g−1 DW using an external calibration with catechin (Sigma-Aldrich Chemie GmbH, Taufkirchen, Germany).
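As a sketch of how such an external calibration works (not the authors' code; all numbers, including the assumed 765 nm read-out, are hypothetical):

```python
# Hedged sketch of an external calibration for the Folin-Ciocalteu assay:
# fit a line to catechin standards and convert a sample absorbance into
# mg catechin equivalents per g dry weight.
import numpy as np

std_conc = np.array([0.0, 25.0, 50.0, 100.0, 200.0])   # catechin, mg L^-1 (hypothetical)
std_abs  = np.array([0.02, 0.14, 0.27, 0.53, 1.05])    # absorbance (hypothetical)

slope, intercept = np.polyfit(std_conc, std_abs, 1)

def catechin_equivalents(absorbance, extract_volume_ml, sample_mass_g):
    """mg catechin equivalents per g DW from one measured absorbance."""
    conc_mg_per_l = (absorbance - intercept) / slope
    return conc_mg_per_l * (extract_volume_ml / 1000.0) / sample_mass_g

print(catechin_equivalents(0.48, extract_volume_ml=1.5, sample_mass_g=0.025))
```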
Targeted Analysis of Reducing Sugars
The quantification of the sugars was performed by analyzing the samples with a Shimadzu HPLC system (Shimadzu Europa GmbH, Duisburg, Germany) equipped with evaporative light-scattering detection (Shimadzu ELSD-LT II at 40 °C, Gain = 3). The extraction and quantification details are given in [1]. The sugar identification was complemented via High Performance Thin-Layer Chromatography (HPTLC) [38]; details are given in the supplementary information S1.
Targeted Analysis of the Fat Soluble Redox Metabolites (Carotenoids and Chlorophylls) by UHPLC-DAD-TOF-MS
An Agilent 1290 Infinity UHPLC coupled to an Agilent 6230 time-of-flight (TOF) MS equipped with an APCI ion source in positive ionization mode was applied as previously described [14]. Briefly, 5 mg of each freeze-dried sample (n = 4 for each week) were extracted three times using 0.5 mL of methanol/tetrahydrofuran solution (1:1, v/v). The extracts were shaken at 1000 rpm for 5 min at room temperature and centrifuged at 4000× g for 5 min at 20 °C. The extracts were evaporated in a stream of nitrogen (evaporator, VLM GmbH, Bielefeld, Germany) and then dissolved in 0.02 mL of dichloromethane and 0.08 mL of isopropanol. Prior to further analysis, the solutions were filtered through a 0.2 µm PTFE membrane and kept at 4 °C in the autosampler. The separation was done on a YMC C30 column (100 × 2.1 mm, 3 µm; YMC Co. Ltd., Kyoto, Japan). Methanol, methyl tert-butyl ether and water mixtures (solvent A, 81:15:4; solvent B, 6:90:4) were applied as mobile phases with a flow rate of 0.2 mL·min−1. To improve the ionization, 20 mM ammonium acetate was included in the mobile phases. The following gradient was used: 100% A (10 min isocratic), 100% A to 80% A in 7 min (28 min isocratic), 80% A to 0% A in 10 min (5 min isocratic). The drying gas temperature was 325 °C at a flow rate of 8 L·min−1, the vaporizer temperature was 350 °C, and the nebulizer pressure was 35 psi. The voltage was set to 3500 V and the fragmentor voltage (175 V) was applied at a corona current of 6.5 µA. Identification was achieved by co-chromatography with reference substances and on the basis of the mass-to-charge ratios of the pseudo-molecular ions of the individual carotenoids and chlorophylls [14]. External standard calibration curves with authentic standards were used for quantification by dose-response. β-Carotene and chlorophylls A and B were from Sigma-Aldrich Chemie GmbH (Taufkirchen, Germany). α-Carotene, lutein, neoxanthin, zeaxanthin and violaxanthin were from CaroteNature GmbH (Lupsingen, Switzerland).
Non-Targeted Analysis of the Metabolites by UHPLC-Q-TOF-MS
The metabolites were monitored for the following selected development stages (n = 4 for each stage) of the season 2014/15: leaf fall (LF), release of endodormancy (t1), beginning of ontogenetic development (t1*), "swollen bud" (SB) and "open cluster" (OC). The method was adapted from Errard et al. (2015) [14]. Freeze-dried bud material (10 mg) was extracted with 1.5 mL of a solvent mixture (aqueous methanol with formic acid, 70/30/0.1%; v/v/v) for 5 min on ice with an ultrasonic treatment at full power, followed by shaking for another 5 min at 4 °C (1400 rpm, Thermomixer compact, Eppendorf AG, Wesseling-Berzdorf, Germany). The extracts were centrifuged at 4500× g for 5 min at 4 °C. The supernatant was transferred to a 10 mL flask; the extraction was repeated 4 times and the flask was finally filled up to 10 mL. Aliquots (800 µL) were membrane filtered (0.2 µm PTFE) at 3000× g for 5 min at 4 °C and transferred to vials. The analysis was conducted on the 1290 Infinity UHPLC coupled with an Agilent 6530 Q-TOF LC/MS (Agilent Technologies GmbH, Waldbronn, Germany). Samples (5 µL) were injected into a C18 column (2.1 × 50 mm, 1.8 µm; Agilent Zorbax Extend-C18 Rapid Resolution HT). The samples and the column were kept at 4 and 30 °C, respectively. The eluents (eluent A, 0.01% aqueous formic acid; eluent B, 0.01% formic acid in acetonitrile) were applied with a gradient of increasing eluent B from 2 to 5% over 3 min and from 5 to 85% over 7 min. The flow rate was 0.5 mL·min−1. An electrospray (ESI) source was applied, and spectra were collected in positive and negative ionization modes (acquisition rate, 1 spectrum/s) over an m/z 100-1700 range (capillary voltage, 3.5 kV; source temperature, 300 °C; nebulizer gas flow, 8 L/min at 35 psi; skimmer, 65 V; fragmentor voltage, 175 V). The data obtained were converted and evaluated with Mass Profiler Professional (MPP; version 12.1, Agilent, Santa Clara, CA, USA) using molecular feature extraction (Mass Hunter B.06.00, Agilent, Santa Clara, CA, USA) as described in Errard et al. (2015) [14]. The minimum absolute abundance was set at 3000 counts and the retention window at ±0.2 min, with a mass tolerance of 20 ppm. The data were analyzed with MPP statistical analysis by one-way ANOVA (p ≤ 0.05; fold change ≥2 in negative mode; fold change ≥1.5 in positive mode). Putative identification was performed with Mass Hunter Metlin PCD. A principal component analysis (PCA) was conducted to differentiate the different milestones for the metabolites.
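The statistical filtering described above (one-way ANOVA with p ≤ 0.05 plus a fold-change cut) can be sketched as follows; `intensities` is a hypothetical stand-in for the extracted entities, with one row per milestone and n = 4 replicates, not real data.

```python
# Sketch of the entity filtering: ANOVA across milestones plus a
# fold-change threshold (>= 2, as used in negative mode).
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(1)
intensities = {f"entity_{i}": rng.lognormal(8, 0.5, size=(5, 4)) for i in range(100)}

kept = []
for name, groups in intensities.items():
    p = f_oneway(*groups).pvalue          # groups: one row per milestone
    means = groups.mean(axis=1)
    fold_change = means.max() / means.min()
    if p <= 0.05 and fold_change >= 2.0:
        kept.append(name)
print(len(kept), "entities retained")
```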
Conclusions
The observations made during the course of dormancy and ontogenetic development for cherry flower buds of the cultivar "Summit" support the view that potential metabolites/substrates for redox reactions could be an integral part of the signaling mechanisms in plants, as also reported elsewhere [39]. Many studies have suggested that proteins and genes involved in oxidation-reduction processes, including the antioxidant defense system (e.g., glutathione peroxidase, superoxide dismutase, ascorbate peroxidase), might be involved in dormancy release [5,40,41]. The data on peach buds, for example, reveal that the majority of these proteins are involved in stress response, detoxification, defense, carbohydrate metabolism and energy production [42]. The redox state of electron transport chain components and the reduction of antioxidants such as glutathione and ascorbate, which can act to regulate gene expression at different transcriptional levels, therefore seem to be the most relevant processes [39]. These observations underline the statement that the redox status operates as a major integrator of cellular metabolism and is simultaneously regulated itself by metabolic processes [43]. In the non-targeted LC-MS/MS analyses performed, we did not find any further sugars or organic acids that might indicate how energy metabolism as a whole (glycolysis but also respiration) evolves during the different phases. Therefore, our future work, combining the implications of our study with the analysis of further redox metabolites such as sugars, organic acids and energy equivalents, will shed light on the redox dynamics in cherry blossoms.
Supplementary Materials:
The following are available online. Supplementary information S1: Sugar identification by HPTLC; Table S1: Determination of the different phenological development stages from leaf fall (LF) in DOY (day of the year) for the seasons 2014/15 to 2016/17, as published in [2,6]; Table S2: Duration (D in d) and average temperature (T in °C) observed during the different development stages for the seasons 2014/15 to 2016/17, as published in [2,6]; Figure S1: Comparison of the weekly and development-orientated changes of the content of phenolic compounds for season 2014/15 as determined by HPLC and total phenols by the Folin-Ciocalteu phenol method (TP); Figure S2: Correlation (p = 0.008) of the content of total phenolic compounds for the milestones SB-OC of the three seasons 2014/15-2016/17; Figure S3: Correlation of the content of total phenolic compounds to the anti-oxidative potential (data presented as Trolox (TEAC) or ascorbic acid (FRAP) equivalents) for the three seasons 2014/15-2016/17. Figure
| 9,184 | 2018-05-01T00:00:00.000 | [
"Biology",
"Environmental Science"
] |
Observation of the decay Bs0 → ηcϕ and evidence for Bs0 → ηcπ+π−
A study of Bs0 → ηcϕ and Bs0 → ηcπ+π− decays is performed using pp collision data corresponding to an integrated luminosity of 3.0 fb−1, collected with the LHCb detector in Run 1 of the LHC. The observation of the decay Bs0 → ηcϕ is reported, where the ηc meson is reconstructed in the pp̄, K+K−π+π−, π+π−π+π− and K+K−K+K− decay modes and the ϕ(1020) in the K+K− decay mode. The decay Bs0 → J/ψϕ is used as a normalisation channel. Evidence is also reported for the decay Bs0 → ηcπ+π−, where the ηc meson is reconstructed in the pp̄ decay mode, using the decay Bs0 → J/ψπ+π− as a normalisation channel. The measured branching fractions are

B(Bs0 → ηcϕ) = (5.01 ± 0.53 ± 0.27 ± 0.63) × 10−4,
B(Bs0 → ηcπ+π−) = (1.76 ± 0.59 ± 0.12 ± 0.29) × 10−4,

where in each case the first uncertainty is statistical, the second systematic and the third due to the limited knowledge of the external branching fractions.
Introduction
When a Bs0 meson decays through the b → cc̄s process, interference between the direct decay amplitude and the amplitude after Bs0−B̄s0 oscillation gives rise to a CP-violating phase, φs. This phase is well predicted within the Standard Model (SM) [1] and is sensitive to possible contributions from physics beyond the SM [2-5]. The φs phase is best measured using the "golden" channel¹ Bs0 → J/ψφ [6-10], and the precision of this measurement is expected to be dominated by its statistical uncertainty until the end of LHC running. In addition to Bs0 → J/ψφ, other modes have been used to constrain φs: Bs0 → J/ψπ+π− [6], Bs0 → Ds+Ds− [11], and Bs0 → ψ(2S)φ [12]. In this paper, the first study of Bs0 → ηcφ and Bs0 → ηcπ+π− decays is presented.² These decays also proceed dominantly through a b → cc̄s tree diagram, as shown in figure 1.

¹ The simplified notation φ and ηc is used to refer to the φ(1020) and the ηc(1S) mesons throughout this article.
² The use of charge-conjugate modes is implied throughout this article.

Figure 1. Leading diagram corresponding to Bs0 → ηcφ and Bs0 → ηcπ+π− decays, where the π+π− pair may arise from the decay of the f0(980) resonance.

Unlike in Bs0 → J/ψφ decays, the ηcφ final state is purely CP-even, so that no angular analysis is required to measure the mixing phase φs. However, the size of the data sample recorded by the LHCb experiment in LHC Run 1 is not sufficient to perform time-dependent
analyses of Bs0 → ηcφ and Bs0 → ηcπ+π− decays. Instead, the first measurement of their branching fractions is performed. No prediction is available for either B(Bs0 → ηcφ) or B(Bs0 → ηcπ+π−). The measurements presented in this paper are performed using a dataset corresponding to 3 fb−1 of integrated luminosity collected by the LHCb experiment in pp collisions during 2011 and 2012 at centre-of-mass energies of 7 TeV and 8 TeV, respectively. The paper is organised as follows: section 2 describes the LHCb detector and the procedure used to generate simulated events; an overview of the strategy for the measurements of B(Bs0 → ηcφ) and B(Bs0 → ηcπ+π−) is given in section 3; the selection of candidate signal decays is described in section 4; the methods to determine the reconstruction and selection efficiencies are discussed in section 5. Section 6 describes the fit models. The results and associated systematic uncertainties are discussed in sections 7 and 8. Finally, conclusions are presented in section 9.
Detector and simulation
The LHCb detector [14,15] is a single-arm forward spectrometer covering the pseudorapidity range 2 < η < 5, designed for the study of particles containing b or c quarks. The detector includes a high-precision tracking system consisting of a silicon-strip vertex detector surrounding the pp interaction region, a large-area silicon-strip detector located upstream of a dipole magnet with a bending power of about 4 Tm, and three stations of silicon-strip detectors and straw drift tubes placed downstream of the magnet. The tracking system provides a measurement of momentum, p, of charged particles with a relative uncertainty that varies from 0.5% at low momentum to 1.0% at 200 GeV/c. The minimum distance of a track to a primary vertex (PV), the impact parameter (IP), is measured with a resolution of (15 + 29/pT) µm, where pT is the component of the momentum transverse to the beam, in GeV/c. Different types of charged hadrons are distinguished using information from two ring-imaging Cherenkov detectors. Photons, electrons and hadrons are identified by a calorimeter system consisting of scintillating-pad and preshower detectors, an electromagnetic calorimeter and a hadronic calorimeter. Muons are identified by a system composed of alternating layers of iron and multiwire proportional chambers.
The online event selection is performed by a trigger [16], which consists of a hardware stage, based on information from the calorimeter and muon systems, followed by a software stage, which applies a full event reconstruction.
Samples of simulated events are used to determine the effects of the detector geometry, trigger, and selection criteria on the invariant-mass distributions of interest for this paper. In the simulation, pp collisions are generated using Pythia [17,18] with a specific LHCb configuration [19]. The decay of the B 0 s meson is described by EvtGen [20], which generates final-state radiation using Photos [21]. The interaction of the generated particles with the detector, and its response, are implemented using the Geant4 toolkit [22,23] as described in ref. [24]. Data-driven corrections are applied to the simulation to account for the small level of mismodelling of the particle identification (PID) performance [25]. In the simulation the reconstructed momentum of every track is smeared by a small amount in order to better match the mass resolution of the data.
Analysis strategy
In the analysis of Bs0 → ηcφ decays, the φ meson is reconstructed in the K+K− final state and the ηc meson is reconstructed in the pp, K+K−π+π−, π+π−π+π− and K+K−K+K− final states. For clarity, the three four-body final states are referred to as 4h throughout the paper. In determining the branching fraction, the decay Bs0 → J/ψφ is used as a normalisation channel, where the J/ψ meson is reconstructed in the same decay modes as the ηc meson. A similar strategy is adopted for the measurement of the branching fraction of Bs0 → ηcπ+π− decays. However, due to the higher expected level of combinatorial background compared to Bs0 → ηcφ decays, the ηc and J/ψ mesons are reconstructed only in the pp final state in the measurement of B(Bs0 → ηcπ+π−). In both analyses, a two-stage fit procedure is performed. In the first stage, unbinned extended maximum likelihood (UML) fits are performed to separate signal candidates from background contributions. For the Bs0 → ηc(→ pp)π+π− decay the fit is done to the ppπ+π− mass distribution, while for the decays Bs0 → ηcφ two-dimensional fits to the ppK+K− (4hK+K−) and K+K− mass distributions are performed, as described in section 6. The extended likelihood function can be written as

L(N, a) = exp(−Σj Nj)/n! × Πl=1..n [Σj Nj Pj(ml; a)],   (3.1)

where j stands for the event species, Nj is the corresponding yield and N is the vector of yields Nj, a is the vector of fitted parameters other than yields, n is the total number
of candidates in the sample, and Pj(m) is the probability density function (PDF) used to parametrise the set of invariant-mass distributions m considered. The RooFit package [26] is used to construct the negative log-likelihood function (NLL), which is minimised using Minuit [27]. Using information from these fits, signal weights for each candidate, ωl, are obtained using the sPlot technique [28].
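The paper builds the NLL in RooFit and minimises it with Minuit; purely as a self-contained illustration of the same extended-likelihood idea, the following Python sketch fits a toy one-dimensional mass spectrum (Gaussian signal plus exponential background). All shapes and numbers are illustrative, not the analysis models.

```python
# Toy extended maximum-likelihood fit: yields enter both the Poisson
# normalisation term and the per-event density, as in eq. (3.1).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(2)
lo, hi = 5150.0, 5540.0
m = np.concatenate([rng.normal(5367.0, 15.0, 300),   # toy signal peak
                    rng.uniform(lo, hi, 700)])       # toy combinatorial background

def nll(params):
    """Extended negative log-likelihood for signal + background."""
    n_sig, n_bkg, mu, sigma, slope = params
    if min(n_sig, n_bkg, sigma, slope) <= 0:
        return np.inf
    p_sig = norm.pdf(m, mu, sigma)
    # exponential background PDF normalised on [lo, hi]
    p_bkg = slope * np.exp(-slope * (m - lo)) / (1.0 - np.exp(-slope * (hi - lo)))
    density = n_sig * p_sig + n_bkg * p_bkg
    return (n_sig + n_bkg) - np.sum(np.log(density))

res = minimize(nll, x0=[250.0, 750.0, 5360.0, 20.0, 1e-3], method="Nelder-Mead")
print(res.x)  # fitted yields and shape parameters
```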
In the second stage, for Bs0 → ppπ+π− candidates a weighted UML fit is made to the pp invariant-mass spectrum, and weighted UML fits of the pp and the 4h invariant-mass spectra are done for Bs0 → ppφ and Bs0 → 4hφ candidates, respectively, to disentangle ηc and J/ψ candidates from nonresonant (NR) and remaining background contributions, as described in section 6. For the weighted fits, the NLL function is given by

−ln L = −ζ Σl ωl ln[Σj Nj Pj(ml)],   (3.2)

where ζ = Σl ωl / Σl ωl² ensures proper uncertainty estimates from the weighted likelihood fit [29]. For the observed numbers of ηc and J/ψ candidates in final state f, Nηc,f and NJ/ψ,f, the measured branching fraction is

B(Bs0 → ηc X) = (Nηc,f / NJ/ψ,f) × (εJ/ψ,f / εηc,f) × [B(J/ψ → f) / B(ηc → f)] × B(Bs0 → J/ψ X),   (3.3)

where X refers to either the φ meson or the π+π− pair. The branching fractions B(Bs0 → J/ψ X), B(J/ψ → f) and B(ηc → f) are taken from ref. [13], and the efficiency correction factors, ε, are obtained from simulation. In order to maximise the sensitivity to B(Bs0 → ηcφ), a simultaneous fit to the pp and 4h invariant-mass spectra is performed.
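A numeric sketch of how eq. (3.3) combines yields, efficiency ratios and external branching fractions is given below; every input value is a placeholder, not a measured quantity.

```python
# Illustrative evaluation of eq. (3.3); inputs are hypothetical.
def branching_fraction(n_etac, n_jpsi, eff_ratio, br_norm, br_jpsi_f, br_etac_f):
    """B(Bs0 -> etac X) from yields relative to the J/psi normalisation mode.

    eff_ratio = eps(J/psi, f) / eps(etac, f); br_norm = B(Bs0 -> J/psi X).
    """
    return (n_etac / n_jpsi) * eff_ratio * br_norm * (br_jpsi_f / br_etac_f)

print(branching_fraction(n_etac=90, n_jpsi=210, eff_ratio=1.02,
                         br_norm=1.08e-3, br_jpsi_f=2.1e-3, br_etac_f=1.5e-3))
```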
Event selection
A common strategy for the event selection, comprising several stages, is adopted for all final states. First, online requirements are applied at the trigger level, followed by an initial offline selection in which relatively loose criteria are applied. Boosted decision trees (BDTs) [30], implemented using the TMVA software package [31], are then used to further suppress the combinatorial background arising from random combinations of tracks originating from any PV. Finally, the requirements on the output of the BDTs and on the PID variables are simultaneously optimised for each final state, to maximise the statistical significance of the signal yields.
At the hardware trigger stage, events are required to have a muon with high pT or a hadron with high transverse energy in the calorimeters. The software trigger requires a two-, three- or four-track secondary vertex (SV) with a significant displacement from any PV. At least one charged particle must have a large transverse momentum and be inconsistent with originating from a PV. A multivariate algorithm [32] is used for the identification of secondary vertices consistent with the decay of a b hadron into charged hadrons. In addition, for the 4h final states, an algorithm is used to identify inclusive φ → K+K− production at a secondary vertex, without requiring a decay consistent with a b hadron.
In the initial stage of the offline selection, candidates for Bs0 → ppπ+π− and Bs0 → ppK+K− (Bs0 → 4hK+K−) decays are required to have four (six) good-quality, high-pT tracks consistent with coming from a vertex that is displaced from any PV in the event. Loose PID criteria are applied, requiring the tracks to be consistent with the types of hadrons corresponding to the respective final states. In addition, the Bs0 candidates, formed by the combination of the final-state candidates, are required to originate from a PV by requiring a small angle between the Bs0 candidate momentum vector and the vector joining this PV and the Bs0 decay vertex, and a small χ²IP, which is defined as the difference in the vertex-fit χ² of the considered PV reconstructed with and without the candidate. When forming the Bs0 candidates for Bs0 → ppπ+π− and Bs0 → ppK+K− decays, the pp mass resolution is improved by performing a kinematic fit [33] in which the Bs0 candidate is constrained to originate from its associated PV (that with the smallest value of χ²IP for the Bs0), and its reconstructed invariant mass is constrained to be equal to the known value of the Bs0 mass [13]. No significant improvement of the 4h mass resolution is observed for Bs0 → 4hK+K− decays. In order to reduce the combinatorial background, a first BDT, based on kinematic and topological properties of the reconstructed tracks and candidates, is applied directly at the initial stage of the offline selection of candidate Bs0 → 4hK+K− decays. It is trained with events from dedicated simulation samples as signal and data from the reconstructed high-mass sidebands of the Bs0 candidates as background. In the second step of the selection, the offline BDTs are applied. They are trained using the same strategy as that used for the training of the first BDT. The maximum distance of closest approach between final-state particles, the transverse momentum and the χ²IP of each reconstructed track, as well as the vertex-fit χ² per degree of freedom, the χ²IP and the pointing angle of the Bs0 candidates, are used as input to the BDT classifiers used to select candidate Bs0 → ppπ+π− and Bs0 → ppK+K− decays. For the ppK+K− final state, the direction angle, the flight-distance significance and the χ²IP of the reconstructed Bs0 candidate are also used as input to the BDT, while the pT of the Bs0 candidate is used for the ppπ+π− final state. The difference in the choice of input variables for the ppK+K− and ppπ+π− final states is due to the different PID requirements applied to pions and kaons in the first stage of the offline selection. The optimised requirements on the BDT output and PID variables for Bs0 → ppπ+π− (Bs0 → ppK+K−) decays retain ∼45% (40%) of the signal and reject more than 99% (99%) of the combinatorial background inside the mass-fit ranges defined in section 6.
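The BDTs in the paper are trained with TMVA inside the ROOT framework; purely as an illustration of the training strategy (simulated signal versus high-mass-sideband background), here is a sketch using scikit-learn as a stand-in, with toy feature arrays.

```python
# Toy stand-in for the BDT training step; features and data are synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
# columns mimic inputs such as: max DOCA, track pT, track chi2_IP,
# vertex-fit chi2/ndf, B chi2_IP, pointing angle (all toy values)
X_sig = rng.normal(loc=1.0, size=(5000, 6))   # "simulated signal"
X_bkg = rng.normal(loc=0.0, size=(5000, 6))   # "high-mass sideband background"
X = np.vstack([X_sig, X_bkg])
y = np.concatenate([np.ones(5000), np.zeros(5000)])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
bdt = GradientBoostingClassifier(n_estimators=200, max_depth=3).fit(X_tr, y_tr)
print("test accuracy:", bdt.score(X_te, y_te))
# the cut on bdt.predict_proba would then be optimised together with the
# PID requirements to maximise the signal significance
```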
Dedicated BDT classifiers are trained to select candidate Bs0 → 4hK+K− decays using the following set of input variables: the pT and the IP with respect to the SV of all reconstructed tracks; the vertex-fit χ² of the ηc and φ candidates; and the vertex-fit χ², the pT, the flight-distance significance with respect to the PV, and the angle between the momentum and the vector joining the primary to the secondary vertex of the Bs0 candidate. The optimised requirements on the BDT output and PID variables, for each of the 4h modes, retain about 50% of the signal and reject more than 99% of the combinatorial background inside the mass-fit ranges defined in section 6.
From simulation, after all requirements for Bs0 → 4hK+K− decays, a significant contamination is expected from Bs0 → Ds+3h decays, where the Ds+ decays to φπ+ and 3h is any combination of three charged kaons and pions. This background contribution has distributions similar to the signal in the 4hK+K− and K+K− invariant-mass spectra, while its distribution in the 4h invariant-mass spectrum is not expected to exhibit any peaking structure. In order to reduce this background contamination, the absolute difference between the known value of the Ds+ mass [13] and the reconstructed invariant mass of the system formed by the combination of the φ candidate and any signal candidate track consistent with a pion hypothesis is required to be greater than 17 MeV/c². This requirement is optimised using the significance of Bs0 → J/ψK+K− candidates with respect to background contributions. This significance is stable for cut values in the range 9-25 MeV/c², with a maximum at 17 MeV/c², which removes about 90% of Bs0 → Ds+3h decays with no significant signal loss.
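A minimal sketch of this veto logic, assuming a hypothetical helper that receives the φπ invariant masses already computed for a candidate (the Ds+ mass value below is the known PDG value):

```python
# Illustrative D_s+ veto: reject the candidate if any phi + pion
# combination lies within 17 MeV/c^2 of the known D_s+ mass.
M_DS = 1968.34  # MeV/c^2

def passes_ds_veto(m_phi_pi_combinations, window=17.0):
    """m_phi_pi_combinations: invariant masses of the phi candidate combined
    with each signal track under the pion hypothesis, in MeV/c^2."""
    return all(abs(m - M_DS) > window for m in m_phi_pi_combinations)

print(passes_ds_veto([1955.0, 2050.3]))  # False: the first combination is vetoed
```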
Efficiency correction
The efficiency correction factors appearing in eq. (3.3) are obtained from fully simulated events. Since the signal and normalisation channels are selected with the same requirements and have the same final-state particles with very similar kinematic distributions, the ratios between the efficiency correction factors for Bs0 → ηcX and Bs0 → J/ψX decays are expected to be close to unity. The efficiency correction factors include the geometrical acceptance of the LHCb detector, the reconstruction efficiency and the efficiency of the offline selection criteria, including the trigger and PID requirements. The efficiencies of the PID requirements are obtained as a function of particle momentum and number of charged tracks in the event using dedicated data-driven calibration samples of pions, kaons, and protons [34]. The overall efficiency is taken as the product of the geometrical acceptance, the reconstruction efficiency and the efficiency of the offline selection criteria. In addition, corrections are applied to account for the different lifetime values used in simulation with respect to the known values for the decay channels considered. The effective lifetime for Bs0 decays to the ηcφ (ηcπ+π−) final state, being purely CP-even (CP-odd), is obtained from the known value of the decay width of the light (heavy) Bs0 state [35]. The effective lifetime of Bs0 → J/ψφ (Bs0 → J/ψπ+π−) decays is taken from ref. [35]. The lifetime correction is obtained after reweighting the signal and normalisation simulation samples. The final efficiency correction factors, given in table 1, are found to be compatible with unity, as expected.

Fit models

In this section the fit models used for the measurement of the branching fractions are described: first the model used for Bs0 → ηcπ+π− decays in section 6.1, then the model used for Bs0 → ηcφ decays in section 6.2.
6.1 Model for Bs0 → ηcπ+π− decays

Candidates are fitted in two stages. First, an extended UML fit to the ppπ+π− invariant-mass spectrum is performed in the range 5150-5540 MeV/c², to discriminate Bs0 → ppπ+π− events from combinatorial background, B0 → ppπ+π− decays, and B0 → ppKπ decays where the kaon is misidentified as a pion. The ppπ+π− mass distributions of Bs0 → ppπ+π− and B0 → ppπ+π− candidates are described by Hypatia functions [36]. Both Hypatia functions share common core resolution and tail parameters; the latter are fixed to values obtained from simulation. The distribution of the misidentified B0 → ppKπ background is described by a Crystal Ball function [37], with mode, power-law tail, and core resolution parameters fixed to values obtained from simulation. The combinatorial background is modelled using an exponential function. The mode and the common core resolution parameter of the Hypatia functions and the slope of the exponential function, as well as all the yields, are allowed to vary in the fit to data. Using the information from the fit to the ppπ+π− spectrum, signal weights are then computed and the background components are subtracted using the sPlot technique [28]. Correlations between the pp and ppπ+π− invariant-mass spectra, for both signal and backgrounds, are found to be negligible.
Second, a UML fit to the weighted pp invariant-mass distribution is performed in the mass range 2900-3200 MeV/c². In this region, three event categories are expected to populate the pp spectrum: the ηc and J/ψ resonances, as well as a possible contribution from nonresonant Bs0 → (pp)NR π+π− decays. The pp mass distribution of ηc candidates is described by the convolution of the square of the modulus of a complex relativistic Breit-Wigner (RBW) function with constant width and a function describing resolution effects. The expression of the RBW function is taken as

RBW(m) = 1 / (m²res − m² − i mres Γres),   (6.1)

where mres and Γres are the pole mass and the natural width, respectively, of the resonance. From simulation, in the mass range considered, the pp invariant-mass resolution is found to be a few MeV/c², while Γηc = 31.8 ± 0.8 MeV/c² [13]. Thus, the pp distribution of ηc candidates is expected to be dominated by the RBW, with only small effects on the total ηc lineshape from the resolution. On the other hand, due to the small natural width of the J/ψ resonance [13], the corresponding lineshape is assumed to be described to a very good approximation by the resolution function only. For the ηc and J/ψ lineshapes, Hypatia functions are used to parametrise the resolution, with tail parameters fixed to values obtained from simulation. A single core resolution parameter, σ_res^cc, shared between these two functions, is free to vary in the fit to data. The ηc pole mass and the mode of the Hypatia function describing the J/ψ lineshape, which can be approximated by the pole mass of the resonance, are also free to vary, while the ηc natural width is constrained to its known value [13]. The possible contribution from Bs0 → (pp)NR π+π− decays is parametrised by a constant.
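As an illustration of eq. (6.1), the following sketch builds the ηc lineshape as |RBW|² numerically convolved with a Gaussian resolution function (a simple stand-in for the Hypatia resolution model used in the paper); parameter values are approximate, for illustration only.

```python
# eta_c lineshape sketch: constant-width relativistic Breit-Wigner
# convolved with a Gaussian resolution of a few MeV/c^2.
import numpy as np

def rbw(m, m_res, gamma_res):
    """Relativistic Breit-Wigner amplitude with constant width, eq. (6.1)."""
    return 1.0 / (m_res**2 - m**2 - 1j * m_res * gamma_res)

m = np.linspace(2900.0, 3200.0, 1201)                 # MeV/c^2 grid
intensity = np.abs(rbw(m, 2983.9, 31.8)) ** 2         # approx. eta_c pole mass/width

dm = m[1] - m[0]
kern_x = np.arange(-50.0, 50.0 + dm, dm)
kernel = np.exp(-0.5 * (kern_x / 4.0) ** 2)           # ~4 MeV/c^2 resolution (assumed)
kernel /= kernel.sum()
lineshape = np.convolve(intensity, kernel, mode="same")
print(m[np.argmax(lineshape)])                        # peak close to the pole mass
```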
The angular distributions of the P- and S-waves are characterised by linear combinations of odd- and even-order Legendre polynomials, respectively. In the case of a uniform acceptance, after integration over the helicity angles, the interference between the two waves vanishes. For a non-uniform acceptance, after integration, only residual effects from the interference between ηc(→ pp)π+π− and J/ψ(→ pp)π+π− amplitudes can arise in the pp invariant-mass spectrum. Due to the limited size of the current data sample, these effects are assumed to be negligible. Also, given the sample size and the small expected contribution of the NR pp component, interference between the ηc(→ pp)π+π− and (pp)NR π+π− amplitudes is neglected.
In order to fully exploit the correlation between the yields of ηc and J/ψ candidates, the former is parametrised in the fit, rearranging eq. (3.3), as

N_ηc = [B(B0s → ηc π+π−) B(ηc → pp) ε_ηc] / [B(B0s → J/ψ π+π−) B(J/ψ → pp) ε_J/ψ] × N_J/ψ,

where B(B0s → ηc π+π−) and N_J/ψ are free parameters. The yield of the NR pp component is also free to vary.
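A hypothetical helper mirroring the parametrisation reconstructed above; all argument names are invented, and the efficiency ratio is taken as a single input:

```python
def eta_c_yield(bf_sig, n_jpsi, bf_ref, bf_etac_pp, bf_jpsi_pp, eff_ratio):
    """Tie the eta_c yield to N(J/psi) through ratios of branching
    fractions and efficiencies, so that bf_sig = B(B0s -> eta_c pi+ pi-)
    can float directly in the fit.

    bf_ref    : B(B0s -> J/psi pi+ pi-), external input
    eff_ratio : eps(eta_c) / eps(J/psi), taken from simulation
    """
    return (bf_sig / bf_ref) * (bf_etac_pp / bf_jpsi_pp) * eff_ratio * n_jpsi
```

With this construction the statistical uncertainty on the branching fraction automatically absorbs the correlation with the J/ψ yield.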
6.2 Model for B0s → ηc φ decays

The procedure and the fit model used to measure B(B0s → ηc φ) are based on those described in section 6.1. However, several additional features are needed to describe the data, as detailed below.
The K+K− invariant mass is added as a second dimension in the first-step fit, which allows the contributions from φ → K+K− decays and from nonresonant K+K− pairs to be separated. The first step of the fitting procedure therefore consists of four independent two-dimensional (2D) UML fits to the ppK+K− versus K+K− and 4hK+K− versus K+K− invariant-mass spectra, in the ranges 5200-5500 MeV/c² and 990-1050 MeV/c², respectively. Similar 2D fit models are used for each 4h mode. The 4hK+K− distributions of the B0s → 4hφ signal and B0 → 4hφ background contributions, as well as those of the B0s → 4hK+K− and B0 → 4hK+K− backgrounds, are described by Hypatia functions. The 4hK+K− distribution of the combinatorial background is parametrised using two exponential functions: one for the case in which the K+K− pair arises from a random combination of two prompt kaons, and another for the case in which the K+K− pair originates from the decay of a prompt φ meson. The K+K− distribution of each contribution including a φ in the final state is described by the square of the modulus of an RBW with mass-dependent width, convolved with a Gaussian function accounting for resolution effects. The K+K− distributions of the contributions including a nonresonant K+K− pair are parametrised by linear functions. The expression of the RBW with mass-dependent width describing the φ resonance is the analogue of eq. (6.1), with the mass-dependent width given by

Γ(m) = Γ_φ (q/q_φ)³ (m_φ/m) X²(qr),

where m_φ = 1019.461 ± 0.019 MeV/c², Γ_φ = 4.266 ± 0.031 MeV/c² [13], and q is the magnitude of the momentum of one of the φ decay products, evaluated in the resonance rest frame, such that

q = (1/2) √(m² − 4 m_K²),

with m_K± = 493.677 ± 0.016 MeV/c² [13]. The symbol q_φ denotes the value of q when m = m_φ. The X(qr) function is the Blatt-Weisskopf barrier factor [38, 39] with a barrier radius r, whose value is fixed to 3 (GeV/c)⁻¹. Defining the quantity z = qr, the Blatt-Weisskopf barrier function for a spin-1 resonance is given by

X(z) = √[(1 + z_φ²)/(1 + z²)],

where z_φ represents the value of z when m = m_φ.
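The three ingredients of the φ lineshape (break-up momentum, spin-1 Blatt-Weisskopf factor, and mass-dependent width) translate directly into code. The sketch below uses the numerical values quoted in the text; the function names are mine.

```python
import numpy as np

M_PHI, GAMMA_PHI, M_K = 1019.461, 4.266, 493.677   # MeV/c^2, from the text
R_BARRIER = 3.0e-3            # 3 (GeV/c)^-1 expressed in (MeV/c)^-1

def breakup_q(m):
    """Momentum of one kaon in the rest frame of a K+K- pair of mass m."""
    return 0.5 * np.sqrt(np.maximum(m**2 - 4.0 * M_K**2, 0.0))

def blatt_weisskopf_sq(z_sq, z0_sq):
    """Squared spin-1 Blatt-Weisskopf factor, X^2(z) = (1+z0^2)/(1+z^2),
    normalised so that X = 1 at the pole."""
    return (1.0 + z0_sq) / (1.0 + z_sq)

def mass_dependent_width(m):
    q, q0 = breakup_q(m), breakup_q(M_PHI)
    z_sq, z0_sq = (q * R_BARRIER) ** 2, (q0 * R_BARRIER) ** 2
    return (GAMMA_PHI * (q / q0) ** 3 * (M_PHI / m)
            * blatt_weisskopf_sq(z_sq, z0_sq))

def rbw_sq_phi(m):
    """|RBW|^2 with the mass-dependent width, for the phi lineshape
    (before convolution with the Gaussian resolution)."""
    gm = mass_dependent_width(m)
    return 1.0 / ((M_PHI**2 - m**2) ** 2 + (M_PHI * gm) ** 2)
```

Evaluating `rbw_sq_phi` over 990-1050 MeV/c² and convolving with a Gaussian, as in the earlier lineshape sketch, reproduces the φ model described here.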
The same 2D fit model is used for the pp mode, with an additional component accounting for the presence of misidentified B0 → ppKπ background events. The ppK+K− and K+K− distributions of B0 → ppKπ candidates are described by a Crystal Ball function and a linear function, respectively.
Using the sets of signal weights computed from the 2D fits, the pp and 4h spectra are obtained after subtraction of the background candidates from B0 decays and from B0s decays with nonresonant K+K− pairs, as well as of the combinatorial background. Correlations between the invariant-mass spectra used in the 2D fits and the pp or 4h spectrum are found to be negligible. A simultaneous UML fit is then performed to the weighted pp and 4h invariant-mass distributions, with identical mass ranges of 2820-3170 MeV/c². Different models are used to describe the pp and 4h spectra.
The pp invariant-mass spectrum is modelled similarly to the description in section 6.1. However, as shown in section 7, the fit to the pp spectrum for B0s → ppπ+π− decays yields a contribution of NR pp decays compatible with zero. Thus, here, the contribution of such decays is fixed to zero and only considered as a source of systematic uncertainty, as described in section 8.
For the 4h modes, in addition to B0s → ηc φ and B0s → J/ψ φ decays, other contributions are expected in the mass range considered: B0s → 4hφ decays in which the 4h system is in a nonresonant state with total angular momentum equal to zero, and B0s decays proceeding via intermediate resonant states that decay in turn into two or three particles, for instance B0s → P P′ φ decays, where P and P′ could be any resonances such as K*(892), ρ(770), φ(1020), ω(782), f2(1270), f2′(1525) and a2(1320). Similarly to B0s → Ds+ 3h decays, all these decays are expected to have smooth distributions in the 4h invariant-mass spectra. Therefore, lacking information from previous measurements, all these contributions are merged into one category, denoted (4h)bkg. The 4h nonresonant contribution is denoted (4h)NR. The ηc being a pseudoscalar particle, interference between the B0s → ηc(→ 4h)φ and B0s → (4h)NR φ amplitudes is accounted for in the model for each 4h final state. On the other hand, given the large number of amplitudes contributing to the (4h)bkg event category, the net effect of all interference terms is assumed to cancel. Similarly to the pp fit model, terms describing residual effects of the interference between the J/ψ and the other fit components are neglected. Taking into account the detector resolution, the total function F_tot used to describe the invariant-mass spectra m_f is written as a normalised sum of component shapes,

F_tot(m_f) = Σ_k ξ_k^f F_k(m_f) / [Σ_k ξ_k^f ∫ F_k(m_f) dm_f],    (6.7)

with ξ_k^f = (α_k^f)², where each F_k(m_f) is the corresponding lineshape convolved with the resolution function; in particular, the interference term between the ηc and NR amplitudes takes the form

F_I(m_f) = Re[R_ηc(m_f) e^{iδϕ}] ⊗ R(a(m_f)),    (6.12)

where δϕ is the difference between the strong phases of the (4h)NR φ and ηc(→ 4h)φ amplitudes. The integrals in eq. (6.7) are calculated over the mass range in which the fit is performed. Only the ηc and J/ψ components are used in the expression for F_tot(m_pp). The fit fractions FF_k measured for each component, as well as the interference fit fraction FF_I between the
ηc and the NR amplitudes for the 4h modes, are calculated as the ratio of the integral of each intensity-weighted component to the integral of the total function,

FF_k = ξ_k^f ∫ F_k(m_f) dm_f / [Σ_j ξ_j^f ∫ F_j(m_f) dm_f],    (6.13)

and analogously for FF_I from the integral of the interference term F_I(m_f) (eq. (6.14)). The resolution, R(a(m_f)), is described by a Hypatia function, with parameters a(m_f) that depend on the final state and the invariant-mass region. They are estimated using dedicated simulation samples in two mass regions: a high-mass region around the J/ψ resonance, and a low-mass region around the ηc resonance. As in the model for B0s → ppπ+π− decays, the branching fraction B(B0s → ηc φ) is directly determined in the fit. In this configuration, the squared magnitudes of the ηc amplitudes, ξ_ηc^f, are parametrised as functions of B(B0s → ηc φ), in analogy with the yield parametrisation of section 6.1. In the simultaneous fit to the pp and 4h invariant-mass spectra, several parameters are allowed to take different values depending on the final state: the intensities ξ_k^f (free to vary), the slopes κ_bkg and κ_NR of the (4h)bkg and (4h)NR exponentials, respectively (free to vary), the relative strong phase between the (4h)NR and ηc amplitudes (free to vary), as well as the low- and high-mass resolution parameters (fixed). The ηc pole mass, the mode of the Hypatia function describing the J/ψ, and the branching fraction B(B0s → ηc φ) are common parameters across all final states and are free to vary in the fit. The ηc width is fixed to the world-average value taken from ref. [13]. For each mode, ξ_J/ψ and ϕ_ηc are fixed as references to 1 and 0, respectively.
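Given component shapes and fitted intensities, the fit fractions can be evaluated by numerical integration, for instance with this minimal helper (hypothetical interface, for illustration):

```python
import numpy as np

def fit_fractions(xi, components, m_grid, interference=None):
    """Fit fractions FF_k = xi_k * int(F_k) / int(F_tot), evaluated
    numerically. `components` is a list of callables F_k(m); the optional
    `interference` callable contributes only to the total integral, which
    is why the component fit fractions need not sum to unity."""
    integrals = np.array([x * np.trapz(F(m_grid), m_grid)
                          for x, F in zip(xi, components)])
    total = integrals.sum()
    if interference is not None:
        total += np.trapz(interference(m_grid), m_grid)
    return integrals / total
```

The same helper makes explicit why, for the 4h final states, the sum of fit fractions can differ from unity when the ηc-NR interference term is present.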
7 Results
The yields of the various decay modes, determined from the UML fit to the ppπ+π− invariant-mass distribution and from the 2D fits in the pp(4h)K+K− versus K+K− invariant-mass planes, are summarised in table 2. The mass distributions and the fit projections are shown in appendix A. The ppπ+π− and 2D fit models are validated using large samples of pseudoexperiments, from which no significant bias is observed.
The pp invariant-mass distribution for B0s → ppπ+π− candidates and the projection of the fit are shown in figure 2. The values of the ηc and J/ψ shape parameters, as well as the yields, are given in table 3. The branching fraction for the B0s → ηc π+π− decay mode is found to be

B(B0s → ηc π+π−) = (1.76 ± 0.59 ± 0.12 ± 0.29) × 10⁻⁴,    (7.1)

where the first two uncertainties are statistical and systematic, respectively, and the third uncertainty is due to the limited knowledge of the external branching fractions. The systematic uncertainties on the branching fraction are discussed in section 8.
Table 2. Yields of the different final states, as obtained from the fit to the ppπ+π− invariant-mass distribution and from the 2D fits in the pp(4h)K+K− versus K+K− invariant-mass planes. Only statistical uncertainties are reported. The abbreviation "n/a" stands for "not applicable".

Table 3. Results of the fit to the weighted pp invariant-mass spectrum for B0s → ppπ+π− candidates. Uncertainties are statistical only. The parameter N_NR corresponds to the yield of B0s → (pp)NR π+π− candidates. The ηc yield does not appear since it is parametrised as a function of B(B0s → ηc π+π−), the measured value of which is reported in eq. (7.1).
The significance of the presence of B0s → ηc π+π− decays in the pp invariant-mass spectrum is estimated, as √(−2Δln L), from the difference between the log-likelihood (ln L) values for N_ηc = 0 and for the value of N_ηc that maximises ln L. For the estimation of the significance, N_ηc is not parametrised as a function of B(B0s → ηc π+π−), but is a free parameter in the fit. As shown in figure 3, the significance of the ηc component in the fit to the pp invariant-mass distribution is 5.0 standard deviations (σ) with statistical uncertainties only, and 4.6σ when systematic uncertainties are included. The latter is obtained by adding Gaussian constraints to the likelihood function. This result is the first evidence for B0s → ηc π+π− decays.

The pp and 4h invariant-mass distributions for B0s → ppφ and B0s → 4hφ candidates, and the projections of the simultaneous fit, are shown in figure 4. The values of the shape parameters, of the magnitudes and of the relative strong phases are given in tables 4 and 5. The branching fraction for the B0s → ηc φ decay mode, whose measured value is reported in eq. (7.2), carries first and second uncertainties that are statistical and systematic, respectively, and a third uncertainty due to the limited knowledge of the external branching fractions. This measurement corresponds to the first observation of B0s → ηc φ decays. As a cross-check, individual fits to the pp and to each of the 4h invariant-mass spectra give compatible values of B(B0s → ηc φ) within statistical uncertainties. The precision of the B(B0s → ηc φ) measurement obtained using each of the 4h modes is limited compared to that of the pp mode. This is expected due to the presence of additional components below the ηc and J/ψ resonances in the 4h invariant-mass spectra, and due to the interference between the B0s → ηc(→ 4h)φ and B0s → (4h)NR φ amplitudes. The measurement of B(B0s → ηc φ) from the simultaneous fit is therefore largely dominated by the pp mode.

Table 4. Results of the simultaneous fit to the pp and 4h invariant-mass spectra. Uncertainties are statistical only. The J/ψ and ηc magnitudes do not appear since they are set to unity as reference and parametrised as a function of B(B0s → ηc φ), respectively. In the simultaneous fit, the m_ηc and m_J/ψ parameters are shared across the four modes. The measured value of B(B0s → ηc φ) is reported in eq. (7.2). The abbreviation "n/a" stands for "not applicable".

Table 5. Fit fractions obtained from the parameters of the simultaneous fit to the pp and 4h invariant-mass spectra. Uncertainties are statistical only. Due to interference between the B0s → ηc(→ 4h)φ and B0s → (4h)NR φ amplitudes, for the 4h final states the sum of fit fractions, Σ_k FF_k, may differ from unity. The abbreviation "n/a" stands for "not applicable".
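The significance estimate described above is a likelihood-ratio test, and the arithmetic is one line; as a minimal sketch (with invented NLL numbers):

```python
import numpy as np

def significance_from_scan(nll_min, nll_null):
    """Signal significance from the likelihood-ratio test:
    sqrt(-2 * Delta lnL) between the best fit and the N_etac = 0 fit,
    where nll_* are negative log-likelihood values."""
    return np.sqrt(2.0 * (nll_null - nll_min))

# Illustrative only: a difference of 12.5 NLL units corresponds to 5 sigma.
print(significance_from_scan(nll_min=0.0, nll_null=12.5))  # -> 5.0
```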
8 Systematic uncertainties
As the expressions for B(B0s → ηc π+π−) and B(B0s → ηc φ) are based on ratios of observed quantities, only sources of systematic uncertainty that induce different biases in the numbers of observed ηc and J/ψ candidates are considered. The dominant source of systematic uncertainty is the knowledge of the external branching fractions. These uncertainties are estimated by adding Gaussian constraints on the external branching fractions in the fits, with widths corresponding to their known uncertainties [13]. A summary of the systematic uncertainties can be found in table 6.
To assign systematic uncertainties due to the fixing of PDF parameters, the fits are repeated varying all of them simultaneously. The resolution parameters, estimated from simulation, are varied according to normal distributions, taking into account the correlations between the parameters, with variances related to the size of the simulated samples. The external parameters are varied within normal distributions with means and widths fixed to their known values and uncertainties [13]. This procedure is repeated 1000 times, and for each iteration a new value of the branching fraction is obtained. The systematic uncertainties on the branching fractions are taken from the variance of the corresponding distributions.
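The toy-variation procedure amounts to sampling correlated parameter sets and recording the spread of the refitted branching fractions. A schematic sketch, where `refit` is a hypothetical callable standing in for the full fit:

```python
import numpy as np

rng = np.random.default_rng(42)

def systematic_from_fixed_params(refit, mean, cov, n_toys=1000):
    """Propagate the uncertainty of fixed PDF parameters: draw correlated
    variations from a multivariate normal (mean/cov taken from simulation
    or external knowledge), refit for each draw, and take the spread of
    the resulting branching fractions as the systematic uncertainty."""
    draws = rng.multivariate_normal(mean, cov, size=n_toys)
    bf_values = np.array([refit(p) for p in draws])
    return bf_values.std(ddof=1)
```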
The systematic uncertainty due to the fixing of the values of the efficiencies is estimated by adding Gaussian constraints to the likelihood functions, with widths that are taken from the uncertainties quoted in table 1.
The presence of intrinsic biases in the fit models is studied using parametric simulation. For this study, 1000 pseudoexperiments are generated and fitted using the nominal PDFs, with generated parameter values corresponding to those obtained in the fits to data. The biases on the branching fractions are then calculated as the differences between the generated values and the means of the distributions of the fitted branching-fraction values.
To assign a systematic uncertainty from the model used to describe the detector resolution, the fits are repeated for each step, replacing the Hypatia functions by bifurcated Crystal Ball functions whose parameters are obtained from simulation. The difference from the nominal branching-fraction result is assigned as a systematic uncertainty.

Table 6. Summary of systematic uncertainties. The "Sum" of systematic uncertainties is obtained from the quadratic sum of the individual sources, except the external branching fractions, which are quoted separately. All values are in % of the measured branching fractions. The abbreviation "n/a" stands for "not applicable".
The Blatt-Weisskopf barrier radius r of the φ is arbitrarily set to 3 (GeV/c)⁻¹. To assign a systematic uncertainty due to the fixed value of this parameter, the fits are repeated for different values taken in the range 1.5-5.0 (GeV/c)⁻¹. The maximum differences from the nominal branching-fraction results are assigned as systematic uncertainties.
To assign a systematic uncertainty due to the assumption of a uniform acceptance, the simultaneous fit is repeated after correcting the 4h invariant-mass distributions for acceptance effects. A histogram describing the acceptance effects in each of the 4h invariant-mass spectra is constructed from the ratio of the normalised 4h invariant-mass distributions taken from simulated samples of B0s → (4h)φ phase-space decays, obtained either directly from EvtGen or after processing through the full simulation chain. The simultaneous fit is repeated after applying to each event a weight taken from the central value of its bin in the 4h invariant-mass distribution. The difference from the nominal branching-fraction result is assigned as a systematic uncertainty. No significant dependence on the binning choice was observed.
The systematic uncertainty due to neglecting the presence of a nonresonant pp contribution in the pp spectrum for B0s → ppφ candidates is estimated by repeating the simultaneous fit with an additional component described by an exponential function, whose slope and yield are allowed to vary. The difference from the nominal branching-fraction result is assigned as a systematic uncertainty.
9 Conclusions
This paper reports the first observation of B0s → ηc φ decays and the first evidence for B0s → ηc π+π− decays. The measured branching fractions are those reported in eqs. (7.1) and (7.2), where in each case the first two uncertainties are statistical and systematic, respectively, and the third is due to the limited knowledge of the external branching fractions. The significance of the B0s → ηc π+π− decay mode, including systematic uncertainties, is 4.6σ. The results for B(B0s → ηc π+π−) and B(B0s → ηc φ) are in agreement with expectations based on eqs. (1.1), (1.2) and (1.3).
The data sample recorded by the LHCb experiment in Run 1 of the LHC is not sufficiently large to allow a measurement of the CP-violating phase φs from a time-dependent analysis of B0s → ηc φ or B0s → ηc π+π− decays. However, with the significant improvement of the hadronic trigger efficiencies expected in the future [40], these decay modes may become of interest for adding sensitivity to the measurement of φs.
Acknowledgments
We express our gratitude to our colleagues in the CERN accelerator departments for the excellent performance of the LHC. We thank the technical and administrative staff at the LHCb institutes. We acknowledge support from CERN and from the national agencies:
A Fit projections
The ppπ+π− invariant-mass distribution and the fit projection are shown in figure 5. The four pp(4h)K+K− and K+K− invariant-mass distributions and the corresponding two-dimensional fit projections are shown in figures 6 to 9.
B Correlation matrix
The statistical correlation matrix for the simultaneous fit to the pp and 4h invariant-mass distributions for B0s → ppφ and B0s → 4hφ candidates is given in table 7.

Figure 5. Distribution of the ppπ+π− invariant mass. Points with error bars show the data. The solid curve is the projection of the total fit result. The short-dashed blue, dashed-double-dotted green, dashed-single-dotted yellow and medium-dashed red curves show the B0s → ppπ+π−, B0 → ppπ+π−, B0 → ppK+π− and combinatorial background contributions, respectively.

Table 7. Statistical correlation matrix for the parameters from the simultaneous fit to the pp and 4h invariant-mass spectra for B0s → ppφ and B0s → 4hφ candidates.
Open Access. This article is distributed under the terms of the Creative Commons Attribution License (CC-BY 4.0), which permits any use, distribution and reproduction in any medium, provided the original author(s) and source are credited. | 10,242.4 | 2017-07-01T00:00:00.000 | [
"Physics"
] |
Electron streams in air during magnetic-resonance image-guided radiation therapy
The aim of this study was to investigate the undesired irradiation outside of the treatment field by electron streams in air (air-electron streams) during magnetic-resonance image-guided radiation therapy (MR-IGRT). A custom-made support phantom, which adjusts the angle between the beam central axis (CAX) and the phantom surface (termed the phantom-angle), was used. Using the ViewRay system, a rectangular parallelepiped phantom placed on the support phantom was irradiated with field sizes of 6.3 cm × 6.3 cm (FS6.3) and 12.6 cm × 12.6 cm (FS12.6) at gantry angles of 0°, 30°, and 330°, and phantom-angles of 10°, 20°, and 30°. For each beam delivery, the isocenter was located at the center of mass of the phantom and 3 Gy was delivered to the isocenter (prescription dose = 3 Gy). The doses given by the air-electron streams were measured using EBT3 films on panels placed orthogonal to the direction of the magnetic field at distances of 10 and 17 cm from CAX. Two dose distributions per irradiation were measured: on the panel facing the phantom surface of the incident beam (front panel) and on the panel facing the phantom surface of the beam exit (end panel). We investigated the doses by the air-electron streams by calculating the average doses inside circles drawn around the point of maximum dose with radii of x cm (DRx) from the dose distributions on the panels (x = 1–5 cm). The largest value of DRx was DR1 (1.64 Gy, 55% of the prescription dose) at 10 cm distance from CAX, with FS12.6, at a 30° phantom-angle and a 330° gantry angle. The average difference in DR1 at the end panels (FS12.6) between the calculations and measurements was 1.36 Gy. The average global gamma passing rate with 3%/3 mm on the dose distributions at the end panels (FS12.6) was 40.3%. The calculated dose distributions on both panels were not coincident with the measured dose distributions. The Spearman's rank correlation coefficients between the projected areas and the DRx values were always higher than 0.75 (all with p < 0.001). The doses by the air-electron streams increased with the projected areas of the cross-sections of the treatment beams on the panels.
Introduction
Magnetic-resonance image-guided radiation therapy (MR-IGRT) became clinically available with the release of the first commercial MR-IGRT device, the ViewRay system (ViewRay Inc., Cleveland, OH), which combines an on-board 0.35-T MR imaging system with a radiation therapy system using Co-60 sources [1]. Since this system does not require any extra imaging dose, daily 3D MRI for patient setup can be acquired [2]. This enables daily adaptive radiation therapy (ART), combined with the rapid optimization algorithm and the rapid dose calculation capability of the ViewRay system [3]. Moreover, near-real-time cine planar MRI can be acquired during treatment with the ViewRay system (a single cine image at 4 frames/s or three cine images at 2 frames/s). Therefore, respiratory-gated radiation therapy based on the actual near-real-time tumor motion can be performed without any external surrogate [3]. The ART capability, as well as the gating capability based on the actual tumor motion, has the potential to minimize the target margins, which is beneficial for sparing doses to the normal tissue around the target volume [4,5]. This potentially results in a reduction of the complications induced by radiation therapy. Furthermore, the reduction in the doses given to organs at risk (OARs) by the margin-reduction capability of the ViewRay system has the potential to allow escalation of the prescription doses to the target volumes, which potentially increases the efficacy of radiation therapy. The treatment planning system (TPS) of the ViewRay system is the MRIdian system, whose dose calculation algorithm is based on Monte Carlo simulation [6]. Since a Monte Carlo dose calculation algorithm generally calculates doses at the voxels in the region of interest (ROI), i.e., in voxel phantoms, the MRIdian system calculates dose distributions in the whole ROI, including the air around the patient body as well as the patient body itself. This distinguishes the MRIdian system from other commercial TPSs, which generally calculate dose distributions only inside the body structure. Another feature of the MRIdian system is that dose distributions can be calculated with or without the 0.35-T magnetic field [7]. When we generated treatment plans for accelerated partial breast irradiation (APBI) with the magnetic field, we observed low-dose streams in air in the direction, or opposite direction, of the magnetic field (termed air-electron streams) [7]. When we calculated dose distributions without the magnetic field, the directional nature of the low-dose stream in air disappeared. Therefore, we concluded that it was generated by the secondary electrons scattered in air, i.e., by charged particles. This phenomenon was observed frequently in the treatment plans for APBI, since the target volumes of APBI are generally located close to the surface (sometimes including the patient surface owing to the margin) [7]. Because the energies of the secondary electrons of the ViewRay system are small (the gamma-ray energies of Co-60 are 1.17 and 1.33 MeV), when the target volume is located deep in the patient body, the secondary electrons are absorbed in the body. However, if the target volume is located close to the surface or includes the patient surface (as in APBI treatment), the secondary electrons escape the patient body and scatter in air.
In this situation, if there is a magnetic field, the secondary electrons scattered in air exactly orthogonally to the direction of the magnetic field rotate about the field direction due to the Lorentz force [8,9]. If the scattering directions of the secondary electrons have components along the magnetic field, the electrons follow helical paths in the direction (or opposite direction) of the magnetic field [10]. Because the energies of the secondary electrons are small, less than 1.33 MeV, the radii of the helices are small, and a bunch of these secondary electrons therefore forms the air-electron stream. Depending on the direction of the electron velocity components along the magnetic field, the air-electron stream can be formed in the direction, or in the opposite direction, of the magnetic field. When the air-electron stream was formed in the direction of the magnetic field of the ViewRay system (from the couch to the bore) during APBI with the patient in a head-first supine position, the air-electron stream could sometimes reach the patient's jaw, neck, armpit, or arm [7]. This results in undesired normal-tissue irradiation outside of the treatment field.
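The claim that the helix radii are small can be checked with the standard gyroradius formula r = p⊥/(eB). The sketch below assumes a kinetic energy of about 1 MeV, near the maximum transferred by the Co-60 gamma rays, and gives a radius of roughly 1.4 cm at 0.35 T:

```python
import numpy as np

M_E = 0.511        # electron rest energy [MeV]
B_FIELD = 0.35     # ViewRay magnetic field [T]

def gyroradius_cm(kinetic_mev, pitch_deg=90.0):
    """Radius of the helical electron path, r = p_perp / (e B).
    With momentum in MeV/c and B in tesla, r [m] = p_perp / (299.79 * B)."""
    p = np.sqrt(kinetic_mev**2 + 2.0 * kinetic_mev * M_E)  # total momentum
    p_perp = p * np.sin(np.radians(pitch_deg))
    return 100.0 * p_perp / (299.79 * B_FIELD)

# A ~1 MeV secondary electron moving perpendicular to B circles with a
# radius of only ~1.4 cm, so a bunch of such electrons stays collimated
# into a narrow stream along the field direction.
print(f"{gyroradius_cm(1.0):.2f} cm")
```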
In this study, we calculated and measured the doses outside of the treatment field generated by the air-electron streams under various conditions, utilizing a custom-made phantom. We investigated the doses by the air-electron streams as well as the particular conditions that enhance them.
Phantoms and experimental setup
To investigate the doses by the air-electron stream in the magnetic field, we used a custom-made acrylic phantom with dimensions of 15 cm × 15 cm × 10 cm (density of 1.18 g/cm³), as shown in Fig 1(A).
To vary the angles between the incident beam and the acrylic phantom surface, we designed and fabricated a support phantom, as shown in Fig 1(B) and Fig 2. All parts of the support phantom were made of acrylic for compatibility with the magnetic field. The support for the acrylic phantom (termed the phantom support) was laid over the base of the support phantom, and the two were connected by a rotation axis, as shown in Fig 1(C). We cut grooves in the base to fix the supports for angle adjustment, termed angle supports (Fig 1(D)), to the base. An angle support can be put into the corresponding groove in the base to form a particular angle between the phantom support and the base. By combining an angle support and its corresponding groove, angles ranging from 5° to 30° at intervals of 5° can be generated between the phantom support and the base. This means that the support phantom can adjust the angle between the couch surface and the surface of the acrylic phantom placed on the phantom support from 5° to 30° at 5° intervals, since the acrylic phantom surface is parallel to the surface of the phantom support. Since the angle between the central axis (CAX) and the vector orthogonal to the acrylic phantom surface (termed the phantom angle) is the same as that between the phantom surface and the couch surface, the phantom angles can be adjusted from 5° to 30° at 5° intervals. To measure the doses by the air-electron stream, two panels for the attachment of Gafchromic EBT3 films (Ashland ISP Advanced Materials, Wayne, NJ) were set up vertically on the base, as shown in Fig 1(B). The experimental setup with the phantom and the support phantom is shown in Fig 2. The panels for the attachment of EBT3 films were located parallel to CAX and perpendicular to the direction of the magnetic field. We set the panels to be parallel to each other and located at the same distances from CAX, either 17 cm (long-distance setup) or 10 cm (short-distance setup). The panel in front of the incident-beam cross-section was termed the front panel and that facing the exit-beam cross-section was termed the end panel.
Conditions of beam irradiation
All beam irradiations in the present study were performed with the ViewRay system. For every beam irradiation, the isocenter, located at 105 cm distance from the radiation source (source-to-axis distance, SAD = 105 cm), was placed at the center of mass of the acrylic phantom [1]. To investigate the effect of the phantom angles on the doses by the air-electron stream, the phantom angles tested in this study were 10°, 20°, and 30° (three phantom angles). At each phantom angle, Co-60 beams with square field sizes of 6.3 cm × 6.3 cm and 12.6 cm × 12.6 cm were delivered to the acrylic phantom (two field sizes) to investigate the field-size effect on the doses by the air-electron stream. For each field size, gantry angles of 0°, 30°, and 330° (three gantry angles) were chosen. For each beam delivery, the doses delivered to the surfaces of the front panel and end panel were measured with EBT3 films (two measurements per beam delivery). The surface doses of the front and end panels were measured at distances of 10 and 17 cm from CAX (two distances) under identical beam delivery conditions. Therefore, 36 beams were delivered to the acrylic phantom (3 phantom angles × 2 field sizes × 3 gantry angles × 2 distances from CAX) and 72 dose distributions (36 beams × front and end panels) were measured. To allow the short-distance setup (panels at 10 cm from CAX) while adjusting the phantom angles from 10° to 30°, we chose the acrylic phantom (15 cm × 15 cm × 10 cm) for this study, whose dimensions were smaller than those of commercial solid-water phantoms (30 cm × 30 cm × various thicknesses). For each irradiation in this study, 3 Gy was delivered to the isocenter located at the center of mass of the acrylic phantom; that is, the prescription dose was 3 Gy.
Dose calculations
We acquired CT image sets of the phantom with the support phantom to calculate dose distributions at the surfaces of the front and end panels. Since there were six experimental geometries in the present study (3 phantom angles × 2 distances between the panels and CAX), we acquired six CT image sets of the phantom with the support phantom using the Brilliance CT Big Bore (Philips, Amsterdam, Netherlands) with a slice thickness of 1 mm. With these CT images, dose distributions were calculated with the MRIdian system under conditions identical to those of the measurements. Treatment plans were generated to deliver 3 Gy to the isocenter under each beam delivery condition. Dose distributions were calculated with a dose calculation grid size of 3 mm, which is recommended by the manufacturer for an optimal dose calculation [4,6]. To maintain the dose calculation accuracy of the Monte Carlo dose calculation algorithm of the MRIdian system while keeping calculation speeds appropriate for on-table ART, ViewRay Inc. recommends using a dose calculation grid size of 3 mm in the clinical setting.
Gafchromic EBT3 film measurements
The dose response of the EBT3 films was calibrated with the ViewRay system under the magnetic field to eliminate the magnetic-field effect on the EBT3 films [11]. The dual-channel method for red and blue corrections was applied (spatial resolution = 75 dpi) [12]. Following previous studies, a uniformity correction for the scanner was applied [13,14]. A flatbed scanner, the Epson 10000XL (Epson Canada Ltd., Toronto, Ontario, Canada), was used for scanning the EBT3 films. The films were scanned in 48-bit color mode, i.e., RGB mode, 20 h after irradiation. The scanned dose distributions were analyzed with the RIT 113 software (Radiological Imaging Technology, Inc., Colorado Springs, CO).
Data analysis
In both the measured and calculated dose distributions on the surfaces of the front and end panels, the points of maximum dose were found. Circles were then drawn around the point of maximum dose with radii extending from 1 to 4 cm at intervals of 1 cm, and for each circle the average dose inside it was calculated. The circle with a radius of 1 cm therefore covered the highest-dose region in the dose distributions, because there was a single peak in the dose distributions of the present study. The average dose inside the circle with a radius of x cm was termed DRx. By construction, the value of DR1 was the highest and that of DR4 was the lowest. In addition, we calculated the areas enclosed by the isodose lines of 30% (0.9 Gy), 50% (1.5 Gy), 70% (2.1 Gy), 90% (2.7 Gy), and 100% (3 Gy) of the prescription dose. The area of the isodose line of y% of 3 Gy, in cm², was termed Ay%.
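The DRx metric reduces to averaging a 2D dose array over a circular mask centred on its maximum. A minimal sketch on a synthetic film-resolution grid (75 dpi, hence ~0.34 mm pixels; the synthetic dose values are invented):

```python
import numpy as np

def d_rx(dose, pixel_mm, radius_cm):
    """Average dose inside a circle of radius x cm centred on the pixel
    of maximum dose (dose array in Gy, grid spacing in mm)."""
    iy, ix = np.unravel_index(np.argmax(dose), dose.shape)
    yy, xx = np.indices(dose.shape)
    r_mm = pixel_mm * np.hypot(yy - iy, xx - ix)
    mask = r_mm <= 10.0 * radius_cm
    return dose[mask].mean()

# Synthetic example: low-dose noise plus a 1.5 Gy Gaussian hot spot.
rng = np.random.default_rng(0)
dose = rng.normal(0.05, 0.01, (600, 600))
yy, xx = np.indices(dose.shape)
dose += 1.5 * np.exp(-((yy - 300) ** 2 + (xx - 300) ** 2) / (2 * 60.0 ** 2))
for x_cm in range(1, 5):
    print(f"D_R{x_cm} = {d_rx(dose, 25.4 / 75.0, x_cm):.2f} Gy")
```

As expected from the definition, the printed values decrease monotonically with the circle radius.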
To examine the differences between the calculated and measured doses, we performed a global gamma evaluation with absolute doses. A gamma criterion of 3%/3 mm was used, and points with doses equal to or less than 10% of the maximum dose in the dose distribution were not evaluated, i.e., the threshold dose was 10%. In addition, we calculated the percent differences in the values of DRx between the calculated and measured dose distributions. For Ay%, we took the simple differences, subtracting the Ay% value of the measured dose distribution from that of the calculated one.
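A brute-force global gamma evaluation with the 3%/3 mm criterion and a 10% threshold can be written in a few lines; restricting the search window to the DTA is exact here, since any point farther away already has a distance term above unity. This is a simplified sketch, not the RIT 113 implementation:

```python
import numpy as np

def gamma_pass_rate(ref, ev, pixel_mm, dd=0.03, dta_mm=3.0, threshold=0.10):
    """Naive global gamma analysis between a reference (calculated) and
    an evaluated (measured) 2D dose array defined on the same grid."""
    dose_norm = dd * ref.max()          # global (3% of maximum) criterion
    cutoff = threshold * ref.max()      # 10% low-dose threshold
    w = int(np.ceil(dta_mm / pixel_mm))
    ny, nx = ref.shape
    n_pass = n_eval = 0
    for iy in range(ny):
        for ix in range(nx):
            if ref[iy, ix] <= cutoff:
                continue
            n_eval += 1
            y0, y1 = max(0, iy - w), min(ny, iy + w + 1)
            x0, x1 = max(0, ix - w), min(nx, ix + w + 1)
            yy, xx = np.mgrid[y0:y1, x0:x1]
            dist2 = ((yy - iy) ** 2 + (xx - ix) ** 2) * pixel_mm ** 2
            diff2 = (ev[y0:y1, x0:x1] - ref[iy, ix]) ** 2
            gamma2 = dist2 / dta_mm ** 2 + diff2 / dose_norm ** 2
            n_pass += gamma2.min() <= 1.0
    return 100.0 * n_pass / max(n_eval, 1)
```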
To investigate the tendency of the doses by the air-electron stream, we acquired the percent differences in DRx as well as in Ay% between the front and end panels, between the large and small field sizes (12.6 cm × 12.6 cm vs. 6.3 cm × 6.3 cm), and between the long and short distances from CAX (17 cm vs. 10 cm).
To analyze the doses by the air-electron streams in relation to the projected areas of the cross-sections of the treatment beams at the phantom surface on the panels, we mathematically calculated the projected areas on both the front and end panels under the various conditions of the present study. For the end panels, the cross-sections of the exit beam at the support phantom beneath the acrylic phantom were calculated, since the acrylic phantom was placed on the support phantom and the beam cross-section at the support phantom was where the secondary electrons were scattered into air. Correlations between the doses by the air-electron streams and the projected areas on the panels were then analyzed by calculating Spearman's rank correlation coefficients (r) with the corresponding p values.
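The correlation analysis is a direct application of Spearman's rank statistic; with scipy, for example (the numeric arrays below are illustrative only, not the measured data):

```python
import numpy as np
from scipy import stats

# Hypothetical arrays: projected beam areas on a panel [cm^2] and the
# corresponding measured D_R1 values [Gy].
areas = np.array([6.2, 12.8, 25.1, 38.4, 54.7, 109.4])
d_r1 = np.array([0.11, 0.24, 0.52, 0.78, 1.02, 1.64])
r, p = stats.spearmanr(areas, d_r1)
print(f"Spearman r = {r:.3f}, p = {p:.3g}")
```

Because the rank statistic only requires monotonicity, it is well suited to the question asked here, namely whether larger projected areas systematically yield larger out-of-field doses.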
Doses on panel surfaces (irradiation outside of the treatment field)
End panel surface dose with a field size of 6.3 cm × 6.3 cm. The values of DRx and Ay% with a field size of 6.3 cm × 6.3 cm acquired from the calculated and measured dose distributions on the surfaces of the end panels are shown in Tables 1 and 2, respectively. DR1 with a field size of 6.3 cm × 6.3 cm acquired from the calculated and measured dose distributions on the surfaces of the end panels is plotted according to the phantom angles. By definition, the values of DRx decreased with increasing radii of the circles, and the values of Ay% decreased with increasing y.
The values of DRx as well as Ay% with a gantry angle of 0° were always significantly smaller than those with gantry angles of 30° and 330°. There were large discrepancies in the values of both DRx and Ay% between the calculated and measured dose distributions. With increasing phantom angles, the values of DRx also increased. For the calculated and measured doses, the maximum DRx values were the values of DR1 with a phantom angle of 30°, a gantry angle of 330°, and 10 cm distance from CAX, which were 1.41 Gy (47% of the prescription dose) and 1.02 Gy (34% of the prescription dose), respectively.
In the case of Ay%, isodose lines equal to or larger than 50% of the prescription dose were not observed in either the calculations or the measurements. The values of Ay% with a gantry angle of 0° were always lower than those with gantry angles of 30° and 330°. The maximum value of Ay% from the measured dose distributions was A30% with a phantom angle of 30°, a gantry angle of 330°, and 10 cm distance from CAX, which was 9.2 cm².
End panel surface dose with a field size of 12.6 cm × 12.6 cm. The values of DRx and Ay% with a field size of 12.6 cm × 12.6 cm acquired from the calculated and measured dose distributions on the surfaces of the end panels are shown in Tables 3 and 4, respectively. DR1 with a field size of 12.6 cm × 12.6 cm acquired from the calculated and measured dose distributions on the surfaces of the end panels is plotted according to the phantom angles in Fig 5. For the doses with a field size of 12.6 cm × 12.6 cm, the same tendency as with a field size of 6.3 cm × 6.3 cm was observed, but the absolute values of DRx and Ay% with a field size of 12.6 cm × 12.6 cm were always higher than those with a field size of 6.3 cm × 6.3 cm. The maximum DRx value was DR1 with a phantom angle of 30°, a gantry angle of 330°, and the 10 cm distance (2.43 Gy and 1.64 Gy for the calculated and measured values, respectively). The maximum value of Ay% from the calculated dose distributions was A70% (26 cm²) at the phantom angle of 30°, gantry angle of 330°, and 10 cm distance from CAX; however, the values of A70% from the measured dose distributions were always zero. From the measurements, A50% at a phantom angle of 30°, gantry angle of 330°, and 10 cm distance from CAX was the largest (18.5 cm²).

Front panel surface dose with a field size of 6.3 cm × 6.3 cm. For the doses at the front panel with a field size of 6.3 cm × 6.3 cm, the same tendency as that for the end panel was observed, but the absolute values of DRx and Ay% of the front panel were always smaller than those of the end panel. The maximum DRx values from the calculated and measured dose distributions were DR1 at the phantom angle of 30°, gantry angle of 330°, and 10 cm distance, which were 1.16 Gy (39% of the prescription dose) and 0.36 Gy (12% of the prescription dose), respectively. For the calculated dose distributions, A30% at the phantom angle of 30°, gantry angle of 30°, and 10 cm distance was the largest (20.2 cm²), but the values of A30% from the measured dose distributions were always zero.
Front panel surface dose with a field size of 12.6 cm × 12.6 cm. The values of DRx and Ay% with a field size of 12.6 cm × 12.6 cm acquired from the calculated and measured dose distributions on the surfaces of the front panels are shown in Tables 7 and 8, respectively. DR1 with a field size of 12.6 cm × 12.6 cm acquired from the calculated and measured dose distributions on the surfaces of the front panels is plotted according to the phantom angles in Fig 7. For the doses at the front panel with a field size of 12.6 cm × 12.6 cm, the same tendency as in the other results was observed. The absolute values of DRx and Ay% of the front panel with a field size of 12.6 cm × 12.6 cm were always larger than those with a field size of 6.3 cm × 6.3 cm, but always smaller than those of the end panel with the same field size. The maximum DRx value from the calculated dose distributions was DR1 with a phantom angle of 30°, gantry angle of 330°, and 10 cm distance, which was 2.34 Gy (78% of the prescription dose). The maximum DRx value from the measured dose distributions was DR1 with a phantom angle of 30°, gantry angle of 30°, and 10 cm distance, which was 0.61 Gy (20% of the prescription dose). For the Ay% from the calculated dose distributions, A70% at a phantom angle of 30°, gantry angle of 30°, and 10 cm distance was the largest (28.1 cm²); however, even A30% was always zero in the case of the measurements.
Differences between the calculation and measurement
Results of gamma evaluation. The average global gamma passing rates with 3%/3 mm on the dose distributions at the end panels with field sizes of 6.3 cm × 6.3 cm and 12.6 cm × 12.6 cm were 42.0% ± 23.0% (ranging from 6.8% to 80.3%) and 40.3% ± 9.8% (ranging from 17.0% to 63.1%), respectively. Those on the dose distributions of the front panels were 27.6% ± 12.9% (ranging from 10.0% to 70.4%) and 26.1% ± 13.3% (ranging from 7.9% to 52.9%), respectively. The calculated dose distributions on both panels were not coincident with the measured dose distributions.
Differences between the values of DRx from the calculated and measured dose distributions. The average percent differences between the values of DRx from the calculated and measured dose distributions are plotted in Fig 8. As shown in Fig 8, large percent differences were observed at both the end and front panels. The percent differences between the calculations and measurements at the front panels were much larger than those at the end panels. This resulted from the smaller DRx values at the normalization points (maximum doses) of the front panels compared with those of the end panels. The average absolute differences in DR1 at the front panels with field sizes of 6.3 cm × 6.3 cm and 12.6 cm × 12.6 cm were 26.4 ± 13.0 cGy and 46.4 ± 17.5 cGy, respectively, while those at the end panels were 61.1 ± 23.0 cGy and 136.1 ± 41.7 cGy. Although the absolute differences at the end panels were larger than those at the front panels, the percent differences, normalized to the maximum dose in each dose distribution, appeared larger at the front panels because the maximum doses at the front panels were smaller than those at the end panels. In general, the calculated doses were larger than the measured doses, i.e., the doses by the air-electron streams were generally overestimated by the calculations compared with the measurements.
Measured dose differences between the front and end panels
The average percent differences in the values of DRx from the measured dose distributions between the front and end panels are plotted in S1 Fig, which can be found in the supporting information file. The doses at the end panels were always larger than those at the front panels.
Measured dose differences between the large and small field sizes
The average percent differences in the values of DRx from the measured dose distributions between the field sizes of 12.6 cm × 12.6 cm and 6.3 cm × 6.3 cm are plotted in S2 Fig, which can be found in the supporting information file. The doses with the large field size (12.6 cm × 12.6 cm) were always larger than those with the small field size (6.3 cm × 6.3 cm).
Measured dose differences between the long and short distances from the central axis
The average percent differences in DRx values from the measured dose distributions between the distances of 17 cm and 10 cm from CAX are plotted in S3 Fig, which can be found in the supporting information file. The doses at the long distance from CAX (17 cm) were always smaller than those at the short distance (10 cm).
Correlations between the projected areas and doses on the panels
The calculated values of the projected areas of the beam cross-sections at the phantom surface on the panels, under the various conditions of the present study, are shown in S1 Table, which can be found in the supporting information file.
With increasing phantom angles and gantry angles, the projected areas increased. The projected areas on the end panels were larger than those on the front panels. The smallest and largest areas projected on the panels were 6.2 cm² (gantry angle = 0°, phantom angle = 10°, field size = 6.3 cm × 6.3 cm, on the front panel) and 109.4 cm² (gantry angles = 30° and 330°, phantom angle = 30°, field size = 12.6 cm × 12.6 cm, on the end panel), respectively.
The values of DRx according to the projected areas on the front and end panels are plotted in Figs 9 and 10, respectively.
The r values between the projected areas on the panels and the values of DRx are summarized in Table 9 with the corresponding p values.
Discussion
In the present study, we investigated the undesired irradiation outside of the treatment field by the air-electron stream during MR-IGRT for tumors located in superficial regions of the body, when a slope is generated between the patient's body surface and the direction of the magnetic field, as in MR-IGRT for APBI. This phenomenon occurs for the same underlying reason as the electron return effect during MR-IGRT, namely the Lorentz force [8,9]. Although the air-electron streams are generated for the same reason as the electron return effect, their effect on the treatment is different: the air-electron streams result in undesired irradiation outside of the treatment field, while the electron return effect results in an increased dose deposition at the tissue-air interface [9]. Under various conditions, we investigated which factors increased the doses outside of the treatment field by the air-electron stream. We found that the doses by the air-electron streams increased with increasing phantom angles, increasing field sizes, and decreasing distances from the treatment field. We also found that the doses by the air-electron stream increased at gantry angles of 30° and 330° compared with those at a gantry angle of 0°. In addition, the doses on the end panels were always larger than those on the front panels. The conditions of large phantom angles, large field sizes, measurement at the end panels, and oblique gantry angles all result in an increase in the projected area of the treatment-beam cross-section at the phantom on the panels. It is obvious that increases in the phantom angles and field sizes, as well as beams oblique to the phantom, increase the projected area on the panels. The projected areas on the end panels are also larger than those on the front panels under identical conditions because the treatment beam diverges. Therefore, comprehensively reviewing the results, the increase in the projected area plays the major role in increasing the out-of-field doses by the air-electron streams. This is clearly identified in the correlations between the projected areas and the values of DRx. We found very strong correlations, with r values of up to 0.938, between the projected areas and the values of DRx (all with p < 0.001). Accordingly, the largest dose in the present study was observed on the end panel with a field size of 12.6 cm × 12.6 cm at a gantry angle of 330° and a phantom angle of 30°, which was 164.1 cGy (DR1).
In the case of the exit beams, not only the increase in the projected areas owing to beam divergence but also the vectors of the electrons scattered from the phantom into air might affect the increase in the doses outside of the treatment field by the air-electron stream. Since the electrons generated by the photon beams must escape the phantom to form the air-electron streams, the electrons scattered from the phantom at the beam exit would outnumber those backscattered at the beam entrance, considering the gamma-ray direction into the phantom. This could increase the doses on the end panels by the air-electron stream. However, the gamma rays at the exit were attenuated more than those at the entrance, which results in a decrease in the number of electrons scattered from the phantom. This could decrease the doses on the end panels by the air-electron stream. Therefore, for the end panels, there were causes acting simultaneously to both increase and decrease the doses. Reviewing the results, the doses on the end panels were higher than those on the front panels. For further investigation, Monte Carlo simulation should be performed, and we will therefore investigate the doses at the end panels with Monte Carlo simulation in the future. The doses at the short distance from CAX were always larger than those at the long distance, which is unrelated to the projected-area sizes. This seems to be owing to the low energies of the scattered electrons in this study. The air-electron streams in the present study were generated from the Co-60 gamma rays (maximum energy of 1.33 MeV), and therefore the electron energies should be lower than the maximum energy of the gamma ray [1]. With these low energies, most electrons in the air-electron streams might not propagate far in air. Therefore, as the distance from the treatment beam increased, the doses by the air-electron stream decreased, as shown in the results. If high-energy photon beams were used for MR-IGRT utilizing an MR-linac, such as the MRIdian Linac (ViewRay Inc., Cleveland, OH) with a 6 MV photon beam or the Elekta MR-linac with a 7 MV photon beam (Elekta, Stockholm, Sweden), the air-electron streams could propagate farther than those presented in this study; therefore, high caution is necessary regarding the air-electron streams when using an MR-linac [15,16].
Hackett et al. investigated spiraling contaminant electrons using the Elekta MR-linac, which increase the surface doses outside of the treatment field. The spiraling contaminant electrons are quite similar to the air-electron stream in the present study, since both are electrons given a directional nature by the magnetic field, and both deposit doses outside of the treatment field. However, the spiraling contaminant electrons are generated in the components of the linac head, in the shielding, and in the air column through which the incident beam passes, while the air-electron stream is generated by the secondary electrons scattered into air from a patient. In other words, the former originate from contaminant electrons, while the latter originate from secondary electrons scattered into air from the patient. Therefore, the doses outside of the treatment field by the spiraling contaminant electrons were measured without a phantom in the study by Hackett et al., while the air-electron streams were measured with a phantom in the present study. The surface dose outside of the treatment field (field size = 10 cm × 10 cm) by the spiraling contaminant electrons was approximately 5% of the maximum dose at 5 cm distance from the field edge, which is much smaller than the doses in the present study, although the photon beam energy of the Elekta MR-linac is larger than that of the ViewRay system (7 MV vs. 1.17 and 1.33 MeV).
The calculated doses outside of the treatment field by the air-electron streams were not coincident with the measured ones. One of the reasons for this discrepancy might be the large dose calculation grid size of the MRIdian system in the present study, which was 3 mm. As mentioned above, to maintain the dose calculation accuracy of the Monte Carlo algorithm, i.e., to reduce the uncertainties in the calculated voxel doses due to small numbers of histories, as well as to enable fast dose calculation for on-table ART in the clinic, the dose calculation grid size in the present study was set to 3 mm following the manufacturer's recommendation. However, the doses by the air-electron streams were surface doses, which should be assessed with a fine dose calculation resolution. The doses by the air-electron streams measured with the EBT3 films were doses at depths of approximately 0.14 mm, since the thickness of the EBT3 film is approximately 0.27 mm. Therefore, the doses on the panel surfaces calculated with a dose calculation grid size of 3 mm would differ from those measured with the films. Besides the dose calculation resolution, there might be other reasons for the discrepancy between the calculations and the measurements in this study; this will be investigated further in the future.
In an extreme case, the largest average dose inside a circle with a radius of 1 cm (area of approximately 3.14 cm²) by the air-electron stream at 10 cm distance from CAX was as large as 54.7% of the prescription dose, and the area irradiated by 50% or more of the prescription dose was 18.5 cm². These are clinically significant undesired irradiations, considering that the irradiated regions would be normal tissue far from the treatment volume [17]. However, such high-dose irradiations outside of the treatment field would hardly occur during an actual treatment in the clinic, since multiple beams are generally utilized. In addition, because a patient's body is generally thicker than the phantom used in this study, the exit dose would not be as high as those observed on the end panels in this study. In a previous study, we performed in vivo measurements of the doses by the air-electron streams during APBI and found the average value of DR1 to be approximately 4% of the prescription dose, which is much smaller than the values in the present study [7]. Owing to the low energies of the air-electron streams, we could easily shield these doses with only a 1-cm-thick commercial build-up bolus (Superflab Bolus, Radiation Products Design Inc., Albertville, MN). For the MR-linacs, materials thicker than a 1-cm bolus would be required to shield the air-electron streams owing to the higher photon beam energies compared with the Co-60 source.
Conclusions
The undesired irradiation outside of the treatment field owing to the magnetic field when treating tumors located close to the patient's surface is a unique feature of MR-IGRT, which requires careful attention. As shown in the results, the calculated doses by the air-electron streams are inaccurate compared with the measurements. We found that the doses by the air-electron streams increased with the projected area of the cross-section of the treatment beam on the irradiated surface. In this situation, shielding must be considered to prevent undesirable out-of-field irradiation. The undesired irradiation outside of the treatment field would be more problematic for MR-linacs, which use photons with higher energies than the Co-60 source in the present study, because the ranges in air as well as the penetrating power of the air-electron streams would be larger than those presented in this study. | 9,045.4 | 2019-05-15T00:00:00.000 | [
"Medicine",
"Physics",
"Engineering"
] |
Fractals and self-similarity in economics: the case of a stochastic two-sector growth model
We study a stochastic, discrete-time, two-sector optimal growth model in which the production of the homogeneous consumption good uses a Cobb-Douglas technology, combining physical capital and an endogenously determined share of human capital. Education is intensive in human capital as in Lucas (1988), but the marginal returns of the share of human capital employed in education are decreasing, as suggested by Rebelo (1991). Assuming that the exogenous shocks are i.i.d. and affect both physical and human capital, we build specific configurations for the primitives of the model so that the optimal dynamics for the state variables can be converted, through an appropriate log-transformation, into an Iterated Function System converging to an invariant distribution supported on a generalized Sierpinski gasket.
INTRODUCTION
Mandelbrot, in his seminal work, presented the first description of self-similar sets, namely sets that may be expressed as unions of rescaled copies of themselves. He called these sets fractals, because their (fractional) Hausdorff-Besicovitch dimensions exceed their (integer-valued) topological dimensions. The Cantor set, the von Koch snowflake curve and the Sierpinski gasket are some of the most famous examples of such sets. Hutchinson (1981) and, shortly thereafter, Barnsley and Demko (1985) and Barnsley (1989) showed how systems of contractive maps with associated probabilities, referred to as Iterated Function Systems (IFS), can be used to construct fractal, self-similar sets and measures supported on such sets. These sets and measures are attractive fixed points of fractal transform operators.
After these pioneering papers, applications of IFS theory in several fields have been widely developed, eventually landing, at the end of the last century, also in Economics. As a matter of fact, economists are intrinsically reluctant to accept the idea that economic dynamics may generate fractals. A first breakthrough was introduced by Boldrin and Montrucchio (1986), who showed that complicated (chaotic) optimal dynamics can occur in deterministic concave intertemporal optimization models when the discount factor is small enough. This result opened a new chapter in mainstream Economics, starting a huge literature aimed at studying complexity and chaos in almost all economic fields. Prominent, but by no means exhaustive, references are Montrucchio (1994), Nishimura and Yano (1995), Brock and Hommes (1997) and, more recently, Gardini et al. (2009), who exploited the IFS framework to construct a deterministic OLG model converging to a fractal attractor.
A decade later, complex behavior started to be investigated in stochastic concave intertemporal optimization models as well. Montrucchio and Privileggi (1999) borrowed from the literature on fractal image generation (specifically, from the 'Collage Theorem' by Hutchinson, 1981; Barnsley, 1989; Vrscay, 1991) to show that standard stochastic concave optimal growth models may exhibit optimal trajectories which are random processes converging to singular invariant distributions supported on fractal sets, regardless of the discount factor. Such economies have optimal dynamics defined by IFS with linear maps. Mitra et al. (2004) investigated a simple one-sector growth model with two random shocks whose optimal path is defined by a linear IFS which, for some values of the parameters, converges to a singular distribution supported on a Cantor set. They also characterized singularity versus absolute continuity of the invariant probability in terms of (almost) all parameters' values. Mitra and Privileggi (2004, 2006) further generalized that model and eventually (2009) provided an estimate of the Lipschitz constant for the (nonlinear) maps of the IFS defining the optimal policy in a class of stochastic one-sector optimal growth models in the Brock and Mirman (1972) tradition. This result yields sufficient conditions for the model to converge to a singular distribution supported on a generalized Cantor set directly in terms of the parameters' values.
In this paper we consider a neoclassical stochastic, discrete-time, two-sector growth model in which production of a unique homogeneous good depends on both physical and human capital through a Cobb-Douglas technology, while education requires only human capital, as suggested by Lucas (1988). However, we modify the Lucas (1988) framework by postulating that the marginal returns of the human capital employed in education are decreasing, thus embedding Rebelo's (1991) assumption. Production in both sectors is multiplicatively affected by random i.i.d. shocks taking on a finite number of values. Our main contribution is to provide sufficient conditions on the parameters of the model, namely, on the exponents of the Cobb-Douglas production function and of the human capital production function, and on the values of the random shocks, such that the IFS corresponding to the optimal policy function converges to a unique invariant distribution supported on a (generalized) Sierpinski gasket. Hence, this result can be seen as a further extension of the approach pursued by Mitra and Privileggi (2004, 2006, 2009) from the one-sector growth model to a multi-sector growth model under uncertainty.
In Section "Iterated function systems" the main results from the IFS theory are briefly recalled.In Section "The model", the model is stated and the optimal dynamics are explicitly computed.Section "Conjugate linear IFSP" contains the central contribution of this paper: a linear IFS conjugate to the true optimal dynamics is constructed and sufficient conditions for its attractor to be a Sierpinski gasket supporting the unique invariant distribution of the economy are provided directly in terms of parameters of the model.Finally, in Section "Examples of Sierpinski gasket-like attractors" a few examples of economies converging to differently shaped Sierpinski gaskets are described, while Section "Conclusions" reports some concluding remarks.All proofs are gathered in the Appendix.
ITERATED FUNCTION SYSTEMS
Iterated Function Systems allow one to formalize the notion of self-similarity or scale invariance of a mathematical object. Hutchinson (1981) and Barnsley and Demko (1985) showed how systems of contractive maps with associated probabilities can be used to construct self-similar sets and measures. In the IFS literature, these are called IFS with probabilities (IFSP) and are based on the action of a contractive Markov operator on the complete metric space of all Borel probability measures endowed with the Monge-Kantorovich metric. Applications of these methods can be found in image compression, approximation theory, signal analysis, denoising, and density estimation (see, e.g., Freiberg et al., 2011; Kunze et al., 2007; Iacus and La Torre, 2005a,b; La Torre et al., 2006; La Torre and Mendivil, 2008, 2009; La Torre and Vrscay, 2009; La Torre et al., 2009; Mendivil and Vrscay, 2002a,b). In what follows, let (X, d) be a complete metric space and w = {w_1, ..., w_N} be a family of injective contraction maps w_i : X → X, to be referred to as an N-map IFS. Let c_i ∈ (0, 1) denote the contraction factor of w_i and define c = max_{i∈{1,...,N}} c_i; note that c ∈ (0, 1). Associated with the IFS mappings w_1, ..., w_N there is a set-valued mapping ŵ : K(X) → K(X), defined over the space K(X) of all non-empty compact sets in X as

ŵ(S) = ∪_{i=1}^N w_i(S),    (1)

where w_i(S) = {w_i(x) : x ∈ S} is the image of S under w_i, for i = 1, ..., N. A set S_w ⊂ X is said to be an invariant set of w if it is compact and invariant under Eq. 1, that is, if it satisfies ŵ(S_w) = S_w. If, in addition, the contractive mappings w_i are assumed to be similitudes, i.e., if there exist numbers c_i ∈ (0, 1) such that d(w_i(x), w_i(y)) = c_i d(x, y) for all x, y ∈ X, the invariant set S_w is said to be self-similar. On K(X) it is possible to define the so-called Hausdorff distance d_H between compact sets, which reads as

d_H(A, B) = max{ sup_{x∈A} inf_{y∈B} d(x, y), sup_{y∈B} inf_{x∈A} d(x, y) },

and it can be proved that (K(X), d_H) is a complete metric space (see Hutchinson, 1981).
Theorem 1 (Hutchinson, 1981) ŵ is a contraction mapping on the metric space (K(X), d_H): d_H(ŵ(A), ŵ(B)) ≤ c d_H(A, B) for all A, B ∈ K(X). We have the following corollary from the Banach fixed point theorem.
Corollary 1 There exists a unique compact set A ∈ K(X) such that ŵ(A) = A, which is called the attractor of the IFS w. Moreover, for any S ∈ K(X), d_H(ŵ^n(S), A) → 0 as n → ∞.
The latter property provides a method for approximating a fractal. The equation ŵ(A) = A obviously implies that A is self-tiling, i.e., A is the union of (distorted) copies of itself.
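As an illustration of this approximation property, the following minimal sketch iterates the set-valued map ŵ for the classic 3-map Sierpinski IFS (a standard textbook example chosen for illustration, not a system taken from this paper); the point set ŵ^n(S_0) approaches the attractor in the Hausdorff distance.

```python
import numpy as np

# Deterministic iteration of the Hutchinson operator w-hat for the classic
# 3-map Sierpinski IFS; each map is a similitude with contraction factor 1/2.
maps = [
    lambda P: 0.5 * P,                              # fixes vertex (0, 0)
    lambda P: 0.5 * P + np.array([0.25, 0.5]),      # fixes vertex (1/2, 1)
    lambda P: 0.5 * P + np.array([0.5, 0.0]),       # fixes vertex (1, 0)
]

# S_0 is any non-empty compact set; here, the three vertices themselves.
S = np.array([[0.0, 0.0], [0.5, 1.0], [1.0, 0.0]])
for _ in range(8):                                  # S_{n+1} = w_1(S_n) U w_2(S_n) U w_3(S_n)
    S = np.concatenate([w(S) for w in maps])

print(S.shape)  # (3**9, 2): points approximating the gasket attractor
```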
Let M(X) be the space of probability measures defined on the σ-algebra B(X) of Borel measurable subsets of X and define, for some a ∈ X, the set

M_1(X) = { µ ∈ M(X) : ∫_X d(x, a) dµ(x) < ∞ }.

Notice that the definition of M_1(X) does not depend on the choice of a (if the integral is finite for a certain a ∈ X then it is finite for all a ∈ X). For µ, ν ∈ M_1(X), we define the Monge-Kantorovich distance as

d_M(µ, ν) = sup_{f ∈ Lip_1(X)} [ ∫_X f dµ − ∫_X f dν ],

where Lip_1(X) is the set of all Lipschitz functions with Lipschitz constant equal to 1. It can be proved that (M_1(X), d_M) is a complete metric space under the Monge-Kantorovich metric, provided X is a separable complete metric space. Furthermore, if X is compact, then M(X) = M_1(X) and both are compact metric spaces under the Monge-Kantorovich distance (see Barnsley et al., 2008).
Let p = (p_1, p_2, ..., p_N), 0 < p_i < 1, 1 ≤ i ≤ N, be a partition of unity associated with the IFS mappings w_i, so that ∑_{i=1}^N p_i = 1. Associated with this IFS with probabilities (IFSP) (w, p) is the so-called Markov operator M : M_1(X) → M_1(X), defined as

(Mµ)(B) = ∑_{i=1}^N p_i µ(w_i^{-1}(B)) for all B ∈ B(X).

Corollary 2 There exists a unique probability measure µ̄ ∈ M_1(X), called the invariant measure of the IFSP (w, p), such that Mµ̄ = µ̄. Moreover, for any µ ∈ M_1(X), d_M(M^n µ, µ̄) → 0 as n → ∞.

Note that for any µ-integrable function u : X → R we have ∫_X u d(Mµ) = ∑_{i=1}^N p_i ∫_X (u ∘ w_i) dµ. Let C⁰(X) denote the Banach space of continuous functions on X endowed with the uniform metric d_∞. Associated with the IFSP (w, p), define the following operator T : C⁰(X) → C⁰(X):

(T f)(x) = ∑_{i=1}^N p_i f(w_i(x)).

For a given ν ∈ M_1(X), define the linear functional ⟨f, ν⟩ = ∫_X f dν. Then ⟨T f, ν⟩ = ⟨f, Mν⟩, i.e., T is the adjoint operator of M. The operator T is a contraction on the complete metric space (C⁰(X), d_∞) with contraction factor p̄ = max_{i∈{1,...,N}} p_i < 1. Thus we have ⟨f, µ_n⟩ → ⟨f, µ̄⟩, where µ_n = M^n λ → µ̄ in the Monge-Kantorovich distance and λ is the Lebesgue measure on X.
It is worth mentioning the concept of V -variable fractals recently introduced by Barnsley et al. (2008) allowing for the description of new families of random fractals, which are intermediate between deterministic and random fractals, including recursive as well as homogeneous random fractals.More precisely, given a (not necessarily finite) family of IFSP's, such fractals are the result of random applications of the related set valued mappings and measure valued Markov operators.The parameter V describes the degree of "variability" of the realizations.Roughly speaking, this means that at each construction step we have at most V different fundamental shapes.
THE MODEL
We study an optimal growth model under uncertainty in which the social planner seeks to maximize the representative household's infinite discounted sum of instantaneous utility functions, assumed to be logarithmic, subject to the laws of motion of physical capital, k_t, and human capital, h_t. At each time t, the planner chooses consumption, c_t, and the share of human capital, u_t, to allocate to the production of a unique homogeneous consumption good, which uses a Cobb-Douglas technology combining physical and human capital. Education is assumed to be intensive in human capital, as in Lucas (1988), but the marginal returns of the share of human capital employed in education are decreasing, in accordance with Rebelo (1991).
The final good and the education sectors are affected by exogenous perturbations, z_t and η_t respectively, which enter multiplicatively into both production functions; they are independent and identically distributed, and take on finitely many values: z ∈ {q_1, q_2, 1} and η ∈ {r, 1}, with 0 < q_1 < q_2 < 1 and 0 < r < 1. We assume that only three pairs of shock values can occur with positive probability, (z, η) ∈ {(q_1, r), (q_2, 1), (1, 1)}, each with (constant) probability p_1, p_2 and p_3 respectively, where p_i ∈ (0, 1), i = 1, 2, 3, and ∑_{i=1}^3 p_i = 1. These three shock configurations may be interpreted as 1) a deep financial crisis, typically having wide effects on the economy as a whole and thus involving both the production and education sectors, corresponding to (z, η) = (q_1, r); 2) a sudden surge in raw material (e.g., oil) prices, affecting only the production sector but not education, corresponding to (z, η) = (q_2, 1); and 3) a scenario with no shocks, in which the whole economy evolves at full capacity, corresponding to (z, η) = (1, 1).
The Bellman equation associated with the problem defined in Eq. 2 is Eq. 4. Thanks to the log-Cobb-Douglas specification of the model, both the value function V(·, ·, ·, ·) and the optimal policy of the problem defined in Eq. 2 can be explicitly computed by applying the "guess and verify" method to the Bellman equation (Eq. 4).
Proposition 1
1. The solution V(k, h, z, η) of the Bellman equation in Eq. 4 is given by Eq. 5, where the constants θ_k, θ_h, θ_z and θ_η are computed in the proof and the constant term θ is given by Eq. 6. 2. The optimal policy rules for consumption and for the share of human capital allocated to physical production are given by Eqs. 7 and 8 respectively, while physical and human capital follow the (optimal) dynamics defined by Eq. 9. The proof is reported in the Appendix.
An argument parallel to that described on pp. 273-277 of Stokey and Lucas (1989) establishes that the function V(k, h, z, η) defined in Eq. 5 is actually the value function of the problem in Eq. 2.
CONJUGATE LINEAR IFSP
The optimal dynamics for physical and human capital in Eq. 9 have the form of products of powers, suggesting that a logarithmic transformation of both variables k_t and h_t may yield an equivalent conjugate system which is linear in the transformed variables. Specifically, a suitable transformation of Eq. 9 may lead to a contractive IFSP converging to a unique invariant distribution supported on some fractal attractor, in accordance with Corollaries 1 and 2 of Section "Iterated function systems". The following proposition shows that, for specific sets of values of the parameters α, φ, q_1, q_2 and r, a linear system conjugate to Eq. 9 exists, defining an IFSP that converges to an invariant distribution supported on a (generalized) Sierpinski gasket with vertices (0, 0), (1/2, 1) and (1, 0).
Proposition 2 Assume that α ≠ φ and that the condition in Eq. 10 holds. Then the one-to-one logarithmic transformation (k_t, h_t) → (x_t, y_t) defined by Eq. 11, with coefficients given by Eqs. 12-15, defines a contractive linear IFSP which is equivalent to the nonlinear dynamics in Eq. 9 and is composed of the three maps w_1, w_2, w_3 : R² → R², with probabilities p_1, p_2, p_3 respectively, given by Eq. 16. The IFSP defined by Eq. 16 converges to an invariant distribution supported on a (generalized) Sierpinski gasket with vertices (0, 0), (1/2, 1) and (1, 0).
The proof is reported in the Appendix.
The mild restriction α ≠ φ required in Proposition 2 precludes the possibility of generating the standard Sierpinski gasket with vertices (0, 0), (1/2, 1) and (1, 0) through Eq. 16, as its construction postulates that α = φ = 1/2 must hold. In this sense, we say that the attractor of Eq. 16 is a generalized Sierpinski gasket. As is clear from the proof, the condition in Eq. 10 turns out to be the key restriction needed to construct the dynamics in Eq. 16 equivalent to those in Eq. 9.
EXAMPLES OF SIERPINSKI GASKET-LIKE ATTRACTORS
We consider four different parametrizations of the physical production and human capital production parameters, α and φ. Note that any triple 0 < q_1 < q_2 < 1 and 0 < r < 1 satisfying the condition in Eq. 10 of Proposition 2 does the job; thus we do not set values for these parameters. Similarly, the probabilities p_1, p_2 and p_3 can be any numbers between 0 and 1 summing up to 1. In the first two scenarios, we tackle a framework very close to the benchmark case α = φ = 1/2, corresponding to the standard Sierpinski gasket with vertices (0, 0), (1/2, 1), (1, 0) as the unique attractor of the IFSP in Eq. 16. As Proposition 2 requires α ≠ φ, we set α = 0.5 and φ = 0.49. Fig. 1a shows the first 8 iterations of the map in Eq. 1 when the maps w_1, w_2, w_3 are given by Eq. 16, starting from the triangle with vertices (0, 0), (1/2, 1), (1, 0) as the initial set S_0. While α = 1/2 implies that the two lower triangles of each prefractal have one vertex in common [e.g., point (1/2, 0) after one iteration], the assumption that φ < 1/2 implies that the top vertices of the two lower triangles are disjoint from the bottom vertices of the top triangle. Clearly, whenever α ≥ 1/2 and φ ≥ 1/2 with at least one strict inequality, all triangles in each prefractal overlap, as shown in Fig. 1b for α = 0.5 and φ = 0.52. The last two cases consider a more realistic economy in which the capital share parameter is set to α = 0.333. In the economic literature, the capital share parameter in the output of the physical sector, α, measuring its marginal returns on capital, has traditionally been considered to be close to one third (Mankiw et al., 1992; Barro and Sala-i-Martin, 2004). A clear measure of the marginal returns of human capital in education has never been found in the empirical literature, since the human capital share in education is usually set to 1 in order to generate endogenous growth (Lucas, 1988). However, as argued by Rebelo (1991), we can reasonably assume that the marginal returns of human capital are decreasing too. Probably the most empirically relevant case is the one in which the education sector is relatively intensive in human capital, that is, φ ≤ 1 − α (Barro and Sala-i-Martin, 2004); therefore, in these two scenarios we assume a reasonable φ = 0.5 and a limiting case φ = 1 − α = 0.667. Figs. 2a and 2b plot the first 7 iterations (which are enough in this case) of the map in Eq. 1, again starting from the triangle with vertices (0, 0), (1/2, 1) and (1, 0) as the initial set S_0, for α = 0.333, φ = 0.5 and for α = 0.333, φ = 0.667 respectively. A minimal chaos-game sketch for reproducing such pictures follows.
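A reader wishing to reproduce pictures like Fig. 1a can sample the invariant distribution with the chaos game. The affine maps below are one natural parametrization consistent with the description in the text (contraction factor α in the horizontal direction, φ in the vertical, each map fixing one of the three vertices); they are an assumption made for illustration, not the formulas quoted from Eq. 16.

```python
import numpy as np

# Chaos-game sampling of the invariant distribution (Corollary 2) for a
# generalized Sierpinski gasket with vertices (0,0), (1/2,1), (1,0).
# Assumed parametrization: w_i(x) = scale * x + (1 - scale) * V[i], which
# fixes vertex V[i]; with alpha = phi = 1/2 these are the standard gasket maps.
alpha, phi = 0.5, 0.49            # the near-benchmark case of Fig. 1a
p = [1 / 3, 1 / 3, 1 / 3]         # any probabilities summing to 1 work

V = np.array([[0.0, 0.0], [0.5, 1.0], [1.0, 0.0]])   # fixed points of w1, w2, w3
scale = np.array([alpha, phi])

rng = np.random.default_rng(1)
x = np.array([0.3, 0.3])
pts = np.empty((50_000, 2))
for n in range(len(pts)):
    i = rng.choice(3, p=p)        # pick a map with probability p_i
    x = scale * x + (1 - scale) * V[i]
    pts[n] = x
# A scatter plot of `pts` reproduces prefractal pictures like Fig. 1a.
```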
CONCLUSIONS
In this paper we built a neoclassical, stochastic, discrete-time, two-sector optimal growth model in which the production of a homogeneous consumption good depends on physical and human capital. Our model exhibits two peculiar features: 1) the log-Cobb-Douglas structure of preferences and production allows for a closed-form solution of the Bellman equation, and thus for the explicit computation of the optimal dynamics of the state variables (Proposition 1), and 2) through a simple log-transformation of such dynamics we are able to show that, for a sufficiently rich set of parameter configurations, this economy converges to an invariant distribution supported on a generalized Sierpinski gasket (Proposition 2). The only binding restriction is actually given by the condition in Eq. 10, which relates the value r of the shock affecting the education sector to the two values q_1 and q_2 of the shock affecting the production sector. However, we believe that our approach is sufficiently general, as there is total freedom of choice on the values of two out of three exogenous shock parameters, leaving only the third dependent on the first two.
After investigating the (approximations of the) attractors of some economies in Figs. 1a, 1b, 2a and 2b, one may ask how the degree of overlap among the prefractals affects the singularity properties of the invariant distribution. More precisely, it would be interesting to establish under what conditions on the model's parameters the invariant distribution turns out to be singular, or absolutely continuous, with respect to the Lebesgue measure. This exercise is left for future research.
APPENDIX
Proof of Proposition 1. Assuming the form in Eq. 5 for the value function and dropping the time subscript, the Bellman equation (Eq. 4) can be rewritten as Eq. 18. First-order conditions on the RHS with respect to c and u yield Eqs. 19 and 20 respectively, while the envelope conditions read as Eqs. 21 and 22. From Eq. 19 we get Eq. 23, which, when plugged into Eq. 21, after some algebra leads to Eq. 24. Using Eqs. 23 and 24 in Eq. 22, again after some algebra, yields the constants θ_k and θ_h. From Eqs. 20 and 22 we obtain u = 1 − βφ, which is the optimal human capital share as in Eq. 8, while joining Eqs. 23 and 24 one immediately gets the optimal consumption c as in Eq. 7. The optimal dynamics (Eq. 9) are obtained by substituting Eqs. 7 and 8 into the dynamic constraints (Eq. 3).
Finally, in order to calculate the remaining constants θ, θ_z and θ_η, we substitute θ_k, θ_h, c and u as computed above into Eq. 18, so that the terms in ln k and ln h cancel out and we are left with an equation in ln z and ln η. For this equation to hold, both the terms in ln z and ln η must vanish, which pins down θ_z and θ_η, while θ turns out to be given by Eq. 6.
Proof of Proposition 2. Using Eq. 11, Eq. 17 can be rewritten as Eq. 25. Let us focus on the first equation in Eq. 25. Substituting k_{t+1} and h_{t+1} as in the first equation of Eq. 9, rearranging terms and dropping the common term αρ_a ln k_t, this equation becomes Eq. 26. In order for the constant ρ_c to be independent of h_t in the equation above, we need (1 − α)ρ_a + (φ − α)ρ_b = 0, so that the last term on the LHS cancels out and, under the assumption that α ≠ φ, we obtain Eq. 27. Using Eq. 27, Eq. 26 boils down to Eq. 28. As the LHS in Eq. 28 is constant, we can use the three values γ_t = 0, γ_t = (1 − α)/2 and γ_t = (1 − α), corresponding respectively to (z_t, η_t) = (q_1, r), (z_t, η_t) = (q_2, 1) and (z_t, η_t) = (1, 1) for the original shocks, and write the resulting system. From the second equation, using Eq. 27, we easily get ρ_a and ρ_b as in Eq. 12. Note, however, that the first equation on the left must hold as well, which, consistently with ρ_a = −(1 − α)/(2 ln q_2), is equivalent to the condition in Eq. 10. As a matter of fact, the condition in Eq. 10 is the key assumption that lets Eq. 28, or, equivalently, Eq. 26, be independent of h_t. Substituting γ_t = 1 − α [corresponding to (z_t, η_t) = (1, 1)] and ρ_a as in Eq. 12 into Eq. 28 easily yields ρ_c as in Eq. 13.
As far as the second equation in Eq. 25 is concerned, substituting h_{t+1} as in the second equation of Eq. 9, rearranging terms and dropping the common term φρ_d ln h_t, this equation becomes Eq. 29. As the LHS is constant, we can use the two values ϑ_t = 0 and ϑ_t = (1 − φ), corresponding respectively to η_t = r and η_t = 1 for the original shocks on human capital, and write −ρ_d ln r = 1 − φ, which immediately yields ρ_d = −(1 − φ)/ln r, while ρ_e = 1 + ln(βφ)φ/ln r is obtained by plugging the expression of ρ_d into Eq. 29. Finally, substituting ln r according to Eq. 10 yields ρ_d and ρ_e as in Eqs. 14 and 15.
As 0 < α < 1 and 0 < φ < 1, the IFSP in Eq. 16, or, equivalently, Eq. 17, is a contraction mapping; hence Corollaries 1 and 2 apply, and this is sufficient to show that the conjugate dynamics of system Eq. 9, describing the optimal evolution of the state variables in our economy, have a unique invariant distribution supported on a generalized Sierpinski gasket to which the economy converges in the long run.
"Economics"
] |
Component Order Connectivity in Directed Graphs
A directed graph D is semicomplete if for every pair x, y of vertices of D, there is at least one arc between x and y. Thus, a tournament is a semicomplete digraph. In the Directed Component Order Connectivity (DCOC) problem, given a digraph D = (V, A) and a pair of natural numbers k and ℓ, we are to decide whether there is a subset X of V of size k such that the largest strongly connected component in D − X has at most ℓ vertices. Note that DCOC reduces to the Directed Feedback Vertex Set problem for ℓ = 1. We study the parameterized complexity of DCOC for general and semicomplete digraphs with the following parameters: k, ℓ, ℓ + k and n − ℓ. In particular, we prove that DCOC with parameter k on semicomplete digraphs can be solved in time O*(2^{16k}) but not in time O*(2^{o(k)}) unless the Exponential Time Hypothesis (ETH) fails.
The upper bound O*(2^{16k}) implies the upper bound O*(2^{16(n−ℓ)}) for the parameter n − ℓ. We complement the latter by showing that there is no algorithm of time complexity O*(2^{o(n−ℓ)}) unless ETH fails. Finally, we improve (in dependency on ℓ) the upper bound of Göke, Marx and Mnich (2019) for the time complexity of DCOC with parameter ℓ + k on general digraphs from O*(2^{O(kℓ log(kℓ))}) to O*(2^{O(k log(kℓ))}). Note that Drange, Dregi and van 't Hof (2016) proved that even for the undirected version of DCOC on split graphs there is no algorithm of running time O*(2^{o(k log ℓ)}) unless ETH fails, and it is a long-standing problem to decide whether Directed Feedback Vertex Set admits an algorithm of time complexity O*(2^{o(k log k)}).
Introduction
Motivated by various practical network applications, many different vulnerability measures of undirected graphs have been introduced and studied in the literature. The two most studied of such measures are vertex and edge connectivity of an undirected graph. However, these two measures often do not capture the more subtle vulnerability properties of networks that one might wish to consider, such as the number of vertices in the largest remaining connected component.
While both undirected and directed graphs are of great interest in graph theory, algorithms and applications, undirected graphs have been studied much more than their directed counterparts, arguably due to the simpler structure of undirected graphs. In this paper, we study a number of parameterizations of a problem of interest to both theory and applications, which has so far mainly been studied for undirected graphs.
In many networks, the underlying graph is directed rather than undirected, and the aim of this paper is to study an extension to directed graphs of the ℓ-component order connectivity of an undirected graph G, which is the size of a minimum set X ⊆ V(G) such that mco(G − X) ≤ ℓ, where mco(G − X) is the number of vertices in the largest connected component of G − X (mco stands for maximum component order). By Component Order Connectivity we will denote the following decision problem:

Component Order Connectivity
Input: A graph G = (V, E) and a pair ℓ, k ∈ N of natural numbers.
Question: Is there a subset X of V of size k such that mco(G − X) ≤ ℓ?
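For intuition, a brute-force reference implementation of this decision problem is sketched below; it tries all k-subsets and is exponential in n, unlike the parameterized algorithms studied in this paper. Graphs are represented as adjacency dictionaries, an illustrative choice.

```python
from itertools import combinations

def mco(adj, removed):
    """Largest connected component order of G - removed (adjacency dict)."""
    seen, best = set(removed), 0
    for s in adj:
        if s in seen:
            continue
        stack, size = [s], 0      # DFS over the surviving vertices
        seen.add(s)
        while stack:
            v = stack.pop()
            size += 1
            for u in adj[v]:
                if u not in seen:
                    seen.add(u)
                    stack.append(u)
        best = max(best, size)
    return best

def coc(adj, k, ell):
    """Brute-force Component Order Connectivity: is there |X| = k with mco <= ell?"""
    return any(mco(adj, set(X)) <= ell for X in combinations(adj, k))
```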
For a survey on Component Order Connectivity, see Gross et al. [14]; for more recent research on the problem, see e.g. [11, 16, 17]. A directed graph D is semicomplete if for every pair x, y of distinct vertices of D, there is an arc between x and y. When we require that there is only one arc between x and y, we obtain the definition of a tournament. Clearly, the hardness results for the directed graphs row of Table 1 follow from the corresponding results in the undirected graphs row for the columns n − ℓ and k. Directed Component Order Connectivity[ℓ] is para-NP-complete for semicomplete digraphs, as Directed Component Order Connectivity on semicomplete digraphs is NP-complete for ℓ = 1. This follows from the fact that Directed Feedback Vertex Set is NP-complete even for tournaments, as proved by Bang-Jensen and Thomassen [3] and Speckenmeyer [19].
The FPT result in the directed graphs row of Table 1 was first obtained by Göke et al. [13], as discussed above. The running time of their algorithm is O*(2^{O(kℓ log(kℓ))}). By modifying their algorithm, we obtain an algorithm of running time O*(2^{O(k log(kℓ))}), which decreases the asymptotic dependence of the running time on ℓ. Our modification consists of replacing a branching algorithm in [13] with a randomized algorithm which can be derandomized without increasing the complexity upper bound. Note that Drange et al. [11, Theorem 14] proved that even for Component Order Connectivity on split graphs there is no algorithm of running time O*(2^{o(k log ℓ)}) (here we restrict ourselves to ℓ = k^{O(1)}) unless the Exponential Time Hypothesis (ETH) [15] fails, and it is a long-standing problem to decide whether Directed Feedback Vertex Set admits an algorithm of time complexity O*(2^{o(k log k)}).
The most interesting entry in the semicomplete digraphs row is the non-trivial result that Directed Component Order Connectivity[k] on semicomplete digraphs is FPT. This FPT algorithm boils down to finding a shortest path in a suitably defined auxiliary weighted acyclic digraph. The running time of the algorithm is O(2^{16k} kn²). The other two FPT entries in this row follow from this result (for the parameter n − ℓ this is due to our assumption that k < n − ℓ). We also prove the following lower bounds: no algorithm for Directed Component Order Connectivity[k] on semicomplete digraphs can have time complexity 2^{o(k)} n^{O(1)} unless ETH fails, and no such deterministic algorithm can run in time o(n²) for k = 0 (the last bound is information-theoretic, not depending on any computational complexity hypothesis).
Our paper is organised as follows. The next section is devoted to terminology and notation on directed and undirected graphs, and basics of parameterized algorithms and complexity. In Sect. 3, we describe our improvement on the algorithm of Göke et al. [13]. In Sect. 4, we prove that Directed Component Order Connectivity[k] on semicomplete digraphs admits an algorithm of running time O*(2^{16k}) and show the lower bounds on the running time with parameters k and n − ℓ. We conclude the paper in Sect. 5.
Directed and Undirected Graph Terminology and Notation
In this paper, all directed and undirected graphs are finite, without loops or parallel edges. As is often the case in directed graph theory, an edge of a digraph will be called an arc, and the vertex and arc sets of a digraph D will be denoted by V(D) and A(D), respectively. The out-neighbourhood and in-neighbourhood of a vertex x of a digraph D are denoted by N+_D(x) = {y : xy ∈ A(D)} and N−_D(x) = {y : yx ∈ A(D)}, respectively, and the subscript D will be omitted if D is clear from the context. The out-degree and in-degree of a vertex x of D are d+(x) = |N+(x)| and d−(x) = |N−(x)|, respectively. In this paper all paths and cycles in digraphs are directed, so we will omit the adjective 'directed' when referring to paths and cycles in digraphs. If D = (V, A) is a digraph and S ⊆ V, then we denote by D[S] the subdigraph induced by the vertices in S. A digraph D is strongly connected (or just strong) if there is a path from x to y for every ordered pair x, y of distinct vertices. A strong component of a digraph D is a maximal strong induced subgraph of D. Strong components of D do not share vertices and can be ordered D_1, D_2, ..., D_p such that there is no arc in D from V(D_j) to V(D_i) when j > i. Such an ordering is called an acyclic ordering. Note that if D is a semicomplete digraph, then the strong components of D have a unique acyclic ordering D_1, D_2, ..., D_p, and we have xy ∈ A(D) for every x ∈ V(D_i) and y ∈ V(D_j) with i < j. Basic digraph terminology not introduced in this section can be found in [1, 2].
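The quantities above are straightforward to compute; the following generic sketch (not code from the paper) returns the orders of all strong components of a digraph via Kosaraju's algorithm, producing them in an acyclic ordering, so that mco(D) is simply their maximum.

```python
from collections import defaultdict

def scc_orders(n, arcs):
    """Sizes of the strong components of a digraph on vertices 0..n-1 (Kosaraju).
    Components are emitted in an acyclic ordering; mco(D) = max(scc_orders(...))."""
    g, gr = defaultdict(list), defaultdict(list)
    for u, v in arcs:
        g[u].append(v)
        gr[v].append(u)
    # First pass: iterative DFS on D recording vertices by finish time.
    order, seen = [], [False] * n
    def dfs1(s):
        stack = [(s, iter(g[s]))]
        seen[s] = True
        while stack:
            v, it = stack[-1]
            nxt = next(it, None)
            if nxt is None:
                order.append(v)
                stack.pop()
            elif not seen[nxt]:
                seen[nxt] = True
                stack.append((nxt, iter(g[nxt])))
    for s in range(n):
        if not seen[s]:
            dfs1(s)
    # Second pass: explore the reverse digraph in decreasing finish order.
    comp, sizes = [-1] * n, []
    for s in reversed(order):
        if comp[s] != -1:
            continue
        stack, c, size = [s], len(sizes), 0
        comp[s] = c
        while stack:
            v = stack.pop()
            size += 1
            for u in gr[v]:
                if comp[u] == -1:
                    comp[u] = c
                    stack.append(u)
        sizes.append(size)
    return sizes
```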
Parameterized Complexity
An instance of a parameterized problem is a pair (I, k) where I is the main part and k is the parameter; the latter is usually a non-negative integer. While FPT is a parameterized-complexity analog of P in classical complexity, there are many hardness classes in parameterized complexity and they form a nested sequence starting from W[1]. It is well known [8, Chapter 14] that if the Exponential Time Hypothesis holds then FPT ≠ W[1]. Due to this and other complexity results, it is widely believed that FPT ≠ W[1], and hence W[1] is viewed as a parameterized analog of NP in classical complexity. para-NP is the class of parameterized problems which can be solved by a nondeterministic algorithm in time O(f(k)|I|^c), where f is a computable function and c is an absolute constant. It is well known that if a problem Π with parameter κ is NP-hard when κ equals some constant, then Π is para-NP-hard [12, Corollary 2.16]. It is also well known that FPT = para-NP if and only if P = NP [12, Corollary 2.13].
For more information on parameterized algorithms and complexity, see recent books [8,10,12].
Directed Component Order Connectivity[ℓ + k] on General Digraphs
Göke, Marx and Mnich [13] showed that Directed Component Order Connectivity[ℓ + k] is FPT with a running time of

O*(2^{O(kℓ log(kℓ))}).    (1)

The core of their algorithm is as follows. Begin with the iterative compression version of the problem, where in addition to (D, ℓ, k) the input also contains a solution X_0 with |X_0| = k + 1, which can be used to guide the search for a smaller solution. This is a standard ingredient in FPT algorithms; see, e.g., [8]. At the cost of a simple branching step, we may also assume that we are looking for a solution X with X ∩ X_0 = ∅. Next, they observe that if we knew the strongly connected components of D − X that the vertices of X_0 are contained in, then the problem reduces to a previously studied, simpler problem known as Skew Separator [7], which occurs in the design of the FPT algorithm for Directed Feedback Vertex Set (DFVS) of Chen et al. [7]. Indeed, if the precise strong components containing the vertices of X_0 are known, then the problem can be solved in time O*(4^k k!) using a strategy much like that for DFVS [7, 13]. Hence the bottleneck of the current best algorithm for Directed Component Order Connectivity[ℓ + k] is the guessing of the strong components of X_0 in D − X. Göke et al. [13] solve this via a branching algorithm that they analyse as taking time at most (k + kℓ + ℓ)!. We show a simpler randomized method solving this problem within an improved time bound; the method can be derandomized by standard methods.
Lemma 3.1 Let (D, ℓ, k) be an instance of Directed Component Order Connectivity[ℓ + k], and let X_0 be a solution with |X_0| = k + 1. Let X be an unknown solution with |X| ≤ k such that X ∩ X_0 = ∅. There is a randomized procedure that, with the success probability established in the proof below, produces a set S ⊆ V such that for every x ∈ X_0, the strong components containing x in D − X and in D[S] are identical.
Proof. We declare a guess S a success if Y ⊆ S and X ∩ S = ∅. Since these are independent events, this clearly happens with the claimed probability. Above we used the bound 1 + a ≤ e^a (a ≥ 0), where we set a = 1/ℓ; from this inequality we conclude that the success probability matches the bound in the lemma. Now assume that the guess was successful for some set S and consider the strong components of D[S]. For the derandomization, we employ a cover-free family construction of Bshouty and Gabizon [4]. We get the following lemma.
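The probability calculation in the proof can be illustrated numerically. The sketch below simulates the random-guess idea: keep each vertex independently with probability q and declare success when a hidden set Y lands inside S while the hidden solution X is avoided. The sizes and the value of q are illustrative choices, not the constants of Lemma 3.1.

```python
import numpy as np

# Monte-Carlo illustration of the independent-keep random guess.
rng = np.random.default_rng(2)
n, r, k, q = 200, 12, 4, 0.75              # |V|, |Y|, |X|, keep-probability (illustrative)
Y = np.arange(r)                           # hidden set that must be kept
X = np.arange(r, r + k)                    # hidden solution that must be avoided

trials = 200_000
keep = rng.random((trials, n)) < q         # each row is one random guess S
success = keep[:, Y].all(axis=1) & ~keep[:, X].any(axis=1)
print(success.mean(), q**r * (1 - q)**k)   # empirical rate vs exact q^r (1-q)^k
```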
Lemma 3.2 Let (D, ℓ, k) be an instance of Directed Component Order Connectivity[ℓ + k], and let X_0 be a solution with |X_0| = k + 1. Let X be an unknown solution with |X| ≤ k such that X ∩ X_0 = ∅. There is a deterministic procedure that produces a set F ⊆ 2^V such that for some S ∈ F and every x ∈ X_0, the strong components containing x in D − X and in D[S] are identical.
Proof Let r ≤ s < n be integers. Bshouty and Gabizon (in a slightly non-standard definition) define an (n, (r, s))-cover-free family as a set F ⊆ {0, 1}^n such that for every disjoint pair of sets A, B ⊆ [n] with |A| = r and |B| = s there is a set S ∈ F with A ⊆ S and B ∩ S = ∅. Bshouty and Gabizon [4] show how to compute an (n, (r, s))-cover-free family F of the required size. As in Lemma 3.1, it suffices to guarantee that there is a set S ∈ F such that Y ⊆ S and X ∩ S = ∅. This guarantee is achieved by constructing a cover-free family with parameters n = |V(D)|, r = (k + 1)ℓ and s = k. Here r > s, but we can simply compute an (n, (s, r))-cover-free family and take the complement of every member. Hence we get a family of the required size. The two lemmas of this section and (1) imply that Directed Component Order Connectivity[ℓ + k] on general digraphs can be solved in time O*(2^{O(k log(kℓ))}).
Directed Component Order Connectivity on Semicomplete Digraphs
Let us first summarize the main ideas behind our FPT algorithm before providing more technical details. Let D = (V, A) be a semicomplete digraph, k, ℓ ∈ N, and let X ⊆ V be a set of size k such that mco(D − X) ≤ ℓ. The vertices of D − X can be partitioned into C_1, ..., C_q such that each C_i is the vertex set of a strong component of D − X and 1. for every i ∈ [q] we have |C_i| ≤ ℓ, and 2. for every i, j ∈ [q] with i < j and every x ∈ C_i, y ∈ C_j we have xy ∈ A and yx ∉ A.
In our algorithm, we would like to discover the strong components one by one in ascending order from C_1 to C_q. Now let X_1, ..., X_q be a partition of X into q (possibly empty) parts. (A figure in the original illustrates this decomposition: the arcs uv with u ∈ C_i, v ∈ C_j for i < j are omitted, as are the arcs within X and between X_t and C_t, t ∈ [q]; the set S_i consists of three square vertices, one in each of X_i, X_{i+1} and X_q, and is a minimal vertex cover of the dashed arcs from Z_i to Y_i; the vertex in X_1 is not in S_i, as the arc incident to it with tail in Z_i is already covered by S_i; a hollow circle vertex in X_i is in X only to reduce the size of C_i and as such does not appear in any S_j, j ∈ [q].) Such size-reducing vertices can actually be replaced in X by any vertex in C_{i+1}. It follows that if we are given (Y_1, Z_1, S_1), ..., (Y_q, Z_q, S_q), then we can easily reconstruct a solution of size |X| as ∪_{i∈[q]} S_i plus some arbitrary vertices. Therefore, our goal will be to search for such triples. The first step of our proof is to show that there are at most 2^{8k+2} n triples we need to consider (Lemma 4.4). We will call these important triples valid, and we postpone the precise definition for later. The main reason for the bound is that we only need to consider triples (Y_i, Z_i, S_i) for which |S_i| ≤ k, and that if we fix |Y_i| (and hence also |Z_i|), then vertices with out-degree at least |Z_i| + |S_i| + 1 (resp. in-degree at least |Y_i| + |S_i| + 1) have to be in Y_i (resp. in Z_i) or in S_i, so we can fix these vertices in Y_i (resp. in Z_i). Once we bound the number of triples we need to consider, we can define compatible pairs of triples (Y_1, Z_1, S_1), (Y_2, Z_2, S_2), for which Y_1 ⊂ Y_2; loosely speaking, such a pair can define a strong component of D − X with at most ℓ vertices, namely (Y_2 \ Y_1) \ (S_1 ∪ S_2), where the arcs from Z_2 to Y_1 are all hit by a vertex in S_1 ∩ S_2. This allows us to create an auxiliary acyclic "state" digraph whose vertices are valid triples and whose arcs are the compatible pairs of triples. The paths from (∅, V, ∅) to (V, ∅, ∅) in this graph then define a solution for (D, ℓ, k). Note that our algorithm can equivalently be seen as a dynamic programming which computes, for each valid triple (Y, Z, S), a minimum-size set X such that mco(D[Y] − (X ∪ S)) ≤ ℓ.
The following lemma allows us to show that if we fix |Y| in a triple (Y, Z, S), then only O(k) vertices of D could potentially be in both Y and Z, while all other vertices are fixed. The lemma is an easy consequence of the fact that every semicomplete digraph on at least 2p + 2, p ∈ N, vertices has a vertex of out-degree at least p + 1. We give the proof here for the convenience of the reader.

Lemma 4.1 Let D = (V, A) be a semicomplete digraph and let Y, Z be a partition of V such that for every y ∈ Y and every z ∈ Z we have yz ∈ A. Then for every p ∈ N, (1) there are at most 2p + 1 vertices in Y with d+_D(y) ≤ |Z| + p, and (2) there are at most 2p + 1 vertices in Z with d−_D(z) ≤ |Y| + p.

Proof We first prove Part (1). Let Y_≤ be the set of vertices in Y with out-degree at most |Z| + p in D. Since for every y ∈ Y and every z ∈ Z we have yz ∈ A, every vertex of Y_≤ has at most p out-neighbours inside Y. If |Y_≤| ≥ 2p + 2 then, since D is a semicomplete digraph, some vertex of Y_≤ would have out-degree at least p + 1 within Y_≤ and hence out-degree at least |Z| + p + 1 in D, a contradiction. It follows that |Y_≤| ≤ 2p + 1. Part (2) follows directly from Part (1) by reversing all arcs.

Let D = (V, A) be a semicomplete digraph and t ∈ [n]. We will call a triple (Y, Z, S) t-valid if Y, Z is a partition of V with |Y| = t, S ⊆ V with |S| ≤ k, and every arc of D from Z to Y has at least one endpoint in S.
Lemma 4.3 Let D = (V, A) be a semicomplete digraph, n = |V|, and let t ∈ [n]. If there exists a t-valid triple, then there is a set F of at most 7k + 2 vertices such that every vertex outside F belongs to the same side (Y or Z) in every t-valid triple.
Proof Let us assume that there is at least one t-valid triple and let us denote it (Y , Z , S).
Note that for all y ∈ Y \ S and z ∈ Z \ S it holds that zy ∉ A(D). Since D is a semicomplete digraph, it follows that yz ∈ A(D). By Lemma 4.1 applied to D − S, there are at most 2(k + |Z ∩ S|) + 1 vertices in Y \ S with d+_{D−S}(y) ≤ |Z \ S| + k + |Z ∩ S| = n − t + k, and there are at most 2(k + |Y ∩ S|) + 1 vertices in Z \ S with d−_{D−S}(z) ≤ |Y \ S| + k + |Y ∩ S| = t + k. Thus, |F| ≤ 7k + 2. For the rest of the proof, we assume that we have computed the set F of vertices of D that are not fixed in this way.
Lemma 4.4 There are at most 2^{8k+2} t-valid triples (Y, Z, S). Moreover, if we are given the in- and out-degrees of all vertices in D on the input, then we can enumerate all such triples in time O(2^{8k} kn).

Proof Let (Y, Z) be one of the at most 2^{7k+2} partitions that could lead to a t-valid triple.
We show that we can enumerate all minimal sets S', |S'| ≤ k, such that for all y ∈ Y and z ∈ Z, if zy ∈ A(D), then |{y, z} ∩ S'| ≥ 1. Let G be an undirected bipartite graph such that V(G) = V(D), the partite sets of G are Y and Z, and for every y ∈ Y, z ∈ Z, we have yz ∈ E(G) if and only if zy ∈ A(D). Then S' is a minimal vertex cover of size at most k in G. Moreover, every minimal vertex cover S' in G leads to a t-valid triple (Y, Z, S'). It is well known and easy to show that we can enumerate all minimal vertex covers of size at most k in G in time O(2^k k² + kn). This is done by including all vertices with degree at least k + 1 in every vertex cover and removing every edge they cover. If the resulting graph has more than k² edges, then there is no vertex cover of size at most k [5]. Then we can enumerate all vertex covers of size at most k by using a simple search-tree algorithm that picks an edge, say uv, and recursively enumerates all vertex covers of size at most k − 1 that include u or v, respectively; see the sketch after this paragraph. By the algorithm, it is also easy to see that there are at most 2^k distinct vertex covers of size at most k. For each of these vertex covers, we can easily determine whether it is minimal in O(k²) time by going over all of the at most k² edges: if exactly one endpoint of an edge is in the vertex cover, then we mark this endpoint as important. If all vertices of the cover are marked important, then the vertex cover is minimal; otherwise, any vertex that is not marked important at the end can be removed from the vertex cover, since all its neighbours are already in the vertex cover, and the vertex cover is not minimal.
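A minimal sketch of the branching enumeration just described is given below. The final minimality filter uses simple pairwise subset comparisons, which is less efficient than the marking procedure of the text but easier to read; edges are frozensets of two endpoints, an illustrative representation.

```python
def minimal_vertex_covers(edges, k):
    """Enumerate all inclusion-minimal vertex covers of size <= k by the classic
    branching on an uncovered edge (search tree with at most 2^k leaves)."""
    covers = set()

    def branch(cover):
        uncovered = next((e for e in edges if not (e & cover)), None)
        if uncovered is None:          # every edge has an endpoint in `cover`
            covers.add(frozenset(cover))
            return
        if len(cover) == k:            # budget exhausted, prune this branch
            return
        for v in uncovered:            # branch: take one endpoint or the other
            branch(cover | {v})

    branch(frozenset())
    # Keep only the inclusion-minimal covers (subset filter, simple but O(|covers|^2)).
    return [c for c in covers if not any(d < c for d in covers)]

# Example: a single edge {1, 2} has two minimal covers of size 1.
print(minimal_vertex_covers([frozenset({1, 2})], 1))
```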
We are now ready to present our algorithm. Algorithm. Our algorithm boils down to finding a shortest path in an auxiliary weighted acyclic digraph whose vertex set consists of all the valid triples. The main idea is to find a sequence of valid triples (Y_1, Z_1, S_1), ..., (Y_q, Z_q, S_q) such that S = ∪_{i∈[q]} S_i, together with some additional vertices, is a solution for (D, ℓ, k) whose strongly connected components are determined by consecutive triples. We define the weighted directed acyclic state graph D = (V, A) as follows. The set of vertices V is the set of all t-valid triples for all t ∈ {0, 1, ..., n}. The set of arcs A contains an arc from a t_1-valid triple (Y_1, Z_1, S_1) to a t_2-valid triple (Y_2, Z_2, S_2) if and only if Y_1 ⊂ Y_2, Z_2 ⊂ Z_1, every arc from Z_2 to Y_1 is covered by S_1 ∩ S_2, and the weight of the arc, |S_1 \ S_2| + max(0, |Z_1 ∩ Y_2 \ (S_1 ∪ S_2)| − ℓ), is at most k. This finishes the description of the auxiliary weighted acyclic digraph. In the remainder of the proof we first show that (D, ℓ, k) is a YES-instance if and only if the cost of the shortest path in D from (∅, V(D), ∅) to (V(D), ∅, ∅) is at most k. Afterwards, we bound |V| + |A| by O(2^{16k} n²) and prove that we can construct the auxiliary digraph in O(2^{16k} kn²) time. We can then find a shortest path from (∅, V(D), ∅) to (V(D), ∅, ∅) in linear time, that is, in time O(2^{16k} n²), since D is acyclic (by dynamic programming using an acyclic ordering of the vertices, sketched below), which finishes the proof.
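The final shortest-path step is standard; the sketch below computes the shortest-path cost in a weighted DAG by one pass of dynamic programming over an acyclic ordering, with the valid triples abstracted to opaque vertices. Names and the data layout are illustrative.

```python
from math import inf

def dag_shortest_path(order, arcs, source, target):
    """Shortest-path cost in a weighted acyclic digraph by dynamic programming.
    `order` lists the vertices so that every arc goes forward (acyclic ordering);
    `arcs[v]` yields pairs (u, w) for arcs v -> u of weight w."""
    dist = {v: inf for v in order}
    dist[source] = 0
    for v in order:                      # relax out-arcs in topological order
        if dist[v] == inf:
            continue
        for u, w in arcs.get(v, ()):
            if dist[v] + w < dist[u]:
                dist[u] = dist[v] + w
    return dist[target]

# Example: order = ['s', 'a', 't'], arcs = {'s': [('a', 1)], 'a': [('t', 2)]}
# dag_shortest_path(order, arcs, 's', 't') == 3
```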
Correctness of the Algorithm. Suppose first that (D, ℓ, k) is a YES-instance of Directed Component Order Connectivity[k] such that D is a semicomplete digraph. Let X be a minimum-size solution for (D, ℓ, k), that is, a minimum-size set such that mco(D − X) ≤ ℓ. Since (D, ℓ, k) is a YES-instance and |X| ≤ k, the vertices of D − X can be partitioned into sets C_1, ..., C_q such that 1. for every i ∈ [q] we have |C_i| ≤ ℓ, and 2. for every i, j ∈ [q] with i < j and every x ∈ C_i, y ∈ C_j we have xy ∈ A and yx ∉ A.
Our goal is to define a sequence of valid triples (Y_1, Z_1, S_1), ..., (Y_q, Z_q, S_q) such that each consecutive pair forms an arc in A and the cost of the path in D defined by this sequence is |X|. We will construct these triples from X and C_1, ..., C_q with some additional restrictions that make it easier to show that they indeed define a path in D of cost at most |X|. Namely, we will define them such that for all i, j ∈ [q], i < j, the triples satisfy the properties referred to below as properties 1-7. We first show that a sequence with these properties indeed exists and defer the computation of the cost of the path defined by this sequence until later. Note that, given the above properties, the arc from (Y_i, Z_i, S_i) to (Y_{i+1}, Z_{i+1}, S_{i+1}) exists in D whenever the weight of the arc is at most k. This follows from the argument that the cost of the path defined by this sequence is at most k, which is also deferred.
To obtain this sequence, we need to discuss how to distribute the vertices of X in the sets Y i and Z i and how to compute S i , S j (note that the partition of the vertices in V \ X is fixed by properties 2 and 3).
We distribute the vertices of X between Y_i and Z_i as follows. We start with t_i = |C_1 ∪ ... ∪ C_i| and, while there are more than t_i − |C_1 ∪ ... ∪ C_i| vertices x ∈ X with d+_D(x) > n − t_i + k, we increase t_i by one. Since n − t_i + k > n − (t_i + 1) + k, once d+_D(x) > n − t_i + k holds for a vertex x ∈ X, it remains true for this vertex even after increasing t_i. Moreover, since |X| ≤ k, there is a value of t_i between |C_1 ∪ ... ∪ C_i| and |C_1 ∪ ... ∪ C_i| + k such that there are precisely t_i − |C_1 ∪ ... ∪ C_i| vertices in X with d+_D(x) > n − t_i + k. We put all of these vertices in Y_i and the remaining vertices of X in Z_i. Note that for j ∈ N such that i < j, we will start with t_j = |C_1 ∪ ... ∪ C_j| > |C_1 ∪ ... ∪ C_i|, and observe that if we include x ∈ X in Y_i, then we include it in Y_j as well.
Now |X| ≤ k, and for all y ∈ Y_i \ X = C_1 ∪ ... ∪ C_i and all z ∈ Z_i \ X = C_{i+1} ∪ ... ∪ C_q we have zy ∉ A(D). The set S_i is defined to be those vertices x ∈ X such that one of the following holds: 1. x ∈ Y_i and there exists z ∈ Z_i \ X such that zx ∈ A(D); 2. x ∈ Z_i and there is an arc xy ∈ A(D), y ∈ Y_i, such that y ∉ S_i.
Note that all arcs from Z i to Y i are covered by S i and for each x ∈ S i there is an arc zy from Z i to Y i with {y, z} ∩ X = {x}. Note that if x ∈ Y i \ S i , then x ∈ Y j \ S j for all j > i. On the other hand, if x ∈ Z i ∩ S i , then there is a vertex y ∈ Y i \ S i such that x y ∈ A(D). Moreover, for all j > i, y ∈ Y j \ S j . Therefore, if x ∈ Z j , then x ∈ S j . From the above two properties it follows that if x ∈ S i \ S j , then x / ∈ S j+1 ∪ · · · ∪ S q . This finishes the proof of the existence of a sequence of valid triples (Y 1 , Z 1 , S 1 ), . . . , (Y q , Z q , S q ) with properties 1-7.
We claim that the cost of the path following this sequence is at most k. First note that if x ∈ S_i \ S_{i+1}, then x ∈ Y_{i+1} and for all j ≥ i + 1 it holds that x ∉ S_j; hence every vertex in X is counted in at most one of the sets S_i \ S_{i+1}. Now the set C_i is precisely (Z_{i−1} ∩ Y_i) \ X, and from properties 5, 6 and 7 of the sequence of triples it follows that if |C_i| < ℓ for some surplus vertex x, then X \ {x} would be a smaller solution for the instance (D, ℓ, k), contradicting the minimality of X. The triples (Y_{i−1}, Z_{i−1}, S_{i−1}) and (Y_i, Z_i, S_i) are t_{i−1}-valid and t_i-valid triples, for some t_{i−1}, t_i ∈ [n], respectively. Therefore, there is no arc from Z_j \ X to Y_i \ X for any i ≤ j ∈ [q]. It follows that each strongly connected component of D − X is a subset of (Z_{i−1} ∩ Y_i) \ X for some i ∈ [q]; in particular, the size of each strongly connected component is at most ℓ. Every vertex that appears in S_i for some i ∈ [q] is counted in some |S_j \ S_{j+1}| with j ≥ i, every vertex that appears in T_i for some i ∈ [q] is counted in max(0, |Z_i ∩ Y_{i+1} \ (S_i ∪ S_{i+1})| − ℓ), and the final set X has at most k vertices.
Construction of the Auxiliary Weighted Acyclic Digraph. To build D we must decide, for each pair of valid triples, whether it forms an arc and, if so, compute its weight. First, for every x ∈ S_1 we can check in constant time whether x ∈ S_1 ∩ Z_1. Second, by Lemma 4.2, and since |Y_1| < |Y_2| and |Z_1| > |Z_2|, to check that Y_1 ⊂ Y_2 and Z_2 ⊆ Z_1 we only need to check the vertices that are not fixed. Finally, to compute the weight of the arc, we note that |Z_1 ∩ Y_2| is precisely |Y_2| − |Y_1|, so we only need to check how many of the vertices in S_1 ∪ S_2 are in Z_1 ∩ Y_2 and how many of the vertices in S_1 are also in S_2. Moreover, we only need to compute the surplus term when ℓ < |Y_2| − |Y_1|; otherwise either the weight of the arc is precisely |S_1 \ S_2| or it would be more than k and hence it is not an arc. Hence, we end up spending O(k + log n) time on the computation of the weight of each of at most O(2^{16k} kn) arcs (those for which ℓ < |Y_2| − |Y_1| ≤ ℓ + 2k) and O(k) time on each of at most O(2^{16k} n²) remaining arcs. Since k ≤ n, we can construct D in O(2^{16k} kn²) time.
In the rest of the section, we will show that the dependency on both k and n cannot be significantly improved. More precisely, we will show an unconditional lower bound of Ω(n²) even if k = 0, as we show that we need to read at least Ω(n²) arcs of the input instance in the worst case to distinguish between k = 0 and k = 1. Furthermore, we show that any 2^{o(k)} n^{O(1)} algorithm would imply that the Exponential Time Hypothesis fails. Suppose the algorithm A decides that (D, ℓ, 0) is a YES-instance although there is an arc between two vertices of H_{n/2} that A did not read. Let this arc be xy and let D_{xy} be the graph obtained from D by replacing the arc xy by the arc yx. It follows that D_{xy} is strongly connected and hence (D_{xy}, ℓ, 0) is a NO-instance of Directed Component Order Connectivity. However, because the algorithm A decided that (D, ℓ, 0) is a YES-instance without considering the orientation of the arc between x and y, and the only difference between (D, ℓ, 0) and (D_{xy}, ℓ, 0) is the orientation of the arc between x and y, it follows that A outputs that (D_{xy}, ℓ, 0) is a YES-instance, which contradicts the assumption that A outputs the correct answer for every instance (D, ℓ, 0) of Directed Component Order Connectivity such that D is a tournament. Case 2: ℓ < n/2. The proof is very similar to Case 1; the only difference is the construction of the digraph D. To construct D we first take the disjoint union of q copies of H_ℓ, denoted H_1, ..., H_q, and one copy of H_{n−qℓ}, and we add the arc xy to D if x ∈ H_i and y ∈ H_j with i < j. Finally, we present our O*(2^{o(k)}) lower bound result, based on the well-established Exponential Time Hypothesis (ETH). Our result uses the fact that the classical Vertex Cover problem cannot be solved in subexponential time under ETH. Given the above result by Cai and Juedes, the lower bound then directly follows from the proof of NP-hardness of Directed Feedback Vertex Set by Speckenmeyer [19]. In fact, given a graph G, Speckenmeyer constructs in O(|V(G)|²) time a tournament T with 3|V(G)| − 2 vertices such that for every k the graph G has a vertex cover of size at most k if and only if T has a directed feedback vertex set of size at most k (see Theorem 6 in [19]). Hence, we obtain the following:
Conclusions
Since Directed Component Order Connectivity generalizes Directed Feedback Vertex Set, it would likely be hard to improve our upper bound and obtain a tight lower bound for the time complexity of Directed Component Order Connectivity[ℓ + k] on general digraphs. It seems easier to improve our upper and lower bounds on the time complexity of Directed Component Order Connectivity[k] on semicomplete digraphs.
It would be interesting to consider the time complexity of the problem on well-studied generalizations of semicomplete digraphs: (i) semicomplete multipartite digraphs, which are digraphs that can be obtained from complete multipartite graphs by replacing every edge by an arc with the same end-vertices or by a pair of opposite arcs with the same end-vertices; (ii) quasi-transitive digraphs, which are digraphs in which if xy and yz are arcs such that x, y, z are distinct vertices, then either xz or zx or both are arcs, too (in particular, a transitive digraph is quasi-transitive); (iii) locally semicomplete digraphs, which are digraphs in which for every vertex x, both N+(x) and N−(x) induce semicomplete digraphs (a directed cycle is an example of a locally semicomplete digraph). Chapters 7, 8, and 5, respectively, of the textbook on classes of directed graphs [2] provide extensive surveys on these classes of digraphs.
"Mathematics"
] |
Implementation of Image Technology Based On Algorithm Optimization in Design System
Art design combines image optimization with information optimization. To improve the level of artificial intelligence and user-oriented design, a design method for a visual art design system based on image optimization and information optimization is presented. Image art design optimizes image brightness using a color compensation method, tracks and fuses the image with a pixel quantization method, and de-noises the image with wavelet de-noising technology, thereby completing the optimization of the image art design. The art design system is built on the MapInfo software development platform, and the art design is finally completed under an embedded Linux architecture, where the software-integrated development of the design system is implemented. System tests show that the design system can effectively produce artistic images, improve the output quality of art design images, achieve a high output signal-to-noise ratio, and offer better human-computer interaction.
Introduction
Environmental art design is the comprehensive utilization of the architectural space environment; its purpose is to meet people's needs for daily use and aesthetics [1]. Users place ever higher demands on the quality of design images. Using algorithms to improve graphics and image optimization technology, and applying the results to art design, can improve the artificial-intelligence and real-time optimization capabilities of art design. Research on art design systems based on image optimization technology therefore has broad application prospects [2].
The image optimization technology in art design mainly includes image noise reduction and image fusion filtering. Image purification and optimization are carried out through wavelet noise reduction, median-filter noise reduction, and similar methods [3] to improve the expressive power of image information in art design. Image fusion methods provide the ability to track and recognize image information in art design, and adaptive corner detection and correction methods are adopted to detect and analyze the key feature points in art design, improving the ability to express feature information [4]. For the design of the art design system itself, current methods mainly include the art design system design method based on the Hadoop cloud platform, the embedded art design system based on an ARM core, and the art design method based on the Software-as-a-Service (SaaS) layer. Following the above design principles, the design of design systems based on image optimization has been studied in the related literature, which has practical value for improving the expressive power of art design. Among this work, reference [5] proposes an art design scheme based on image block matching and repair, combining a correlation-dimension search method to match image feature points and thus improving the visual presentation of key information points in art design; reference [6] proposes an image restoration method based on the Criminisi algorithm for image optimization in artistic design, but that system performs no image noise-reduction optimization, resulting in poor exported image quality and poor artistic design effects.
To solve these problems, this paper discusses a design method for image optimization technology based on an improved algorithm. First, the art design image is restored and optimized for brightness equalization using a chromatic-aberration compensation method, fused using a pixel-quantization tracking method, and de-noised using wavelet de-noising technology. The art design system is then designed on the MapInfo development platform, with the optimization algorithm used to improve program loading. Finally, the software development of the design system is completed under the Linux architecture, and simulation analysis of the developed art design system confirms its effectiveness.
Image Optimization Algorithm Design
The design of the optimization algorithm is the basis on which image optimization technology improves the design system. Image optimization mainly includes image noise-reduction optimization, image fusion optimization, and image edge-contour feature-extraction optimization [7]. A grid-matrix block method is used to partition the art design image into a grid; the partitioning mainly adopts the rectangular-block and lasso-block methods. The art design image to be divided is split into several sub-blocks according to affine invariant moments, and for an M×N image the number of blocks is ((M/16)+1)×((N/16)+1). A schematic diagram of the rectangular block partitioning of an image in artistic design is shown in Figure 1.
Figure 1. Rectangular block partitioning of an image in artistic design.
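As a concrete reading of the block-count formula above, the following is a minimal sketch (assuming 16×16 tiles of an M×N grayscale image; the function name and zero-padding strategy are our illustration, not the paper's implementation):

```python
import numpy as np

def partition_blocks(image: np.ndarray, tile: int = 16):
    """Split an M x N image into 16 x 16 rectangular blocks.

    Edge blocks are zero-padded, so the grid has
    ((M // tile) + 1) x ((N // tile) + 1) blocks, matching the
    block count given in the text.
    """
    m, n = image.shape
    rows, cols = m // tile + 1, n // tile + 1
    padded = np.zeros((rows * tile, cols * tile), dtype=image.dtype)
    padded[:m, :n] = image
    blocks = (padded
              .reshape(rows, tile, cols, tile)
              .swapaxes(1, 2))  # shape: (rows, cols, tile, tile)
    return blocks

# Example: a 600 x 400 image yields a (38, 26) grid of 16 x 16 blocks.
img = np.random.randint(0, 256, size=(600, 400), dtype=np.uint8)
print(partition_blocks(img).shape[:2])
```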
In the rectangular block model shown in Figure 1, the brightness of the image is improved using a Jacobi iterative algorithm applied to the conductivity equation of the best-matching block area. On the basis of this image brightness-equalization optimization, image fusion and wavelet noise-reduction optimization are combined with the pixel-point quantization tracking method [8] over the grid points of the art design image area, with an expression equation defined for each newly extracted artistic image feature. Based on this result, the optimization design of the design system is carried out: the image optimization algorithm is loaded into the program-loading function of the system for repeated compilation control, thereby realizing the optimized system design [9].
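The wavelet de-noising step described above can be sketched as follows (a minimal sketch assuming PyWavelets, a Daubechies-4 basis, and soft universal thresholding; the paper specifies none of these choices, so they are illustrative assumptions):

```python
import numpy as np
import pywt

def wavelet_denoise(image: np.ndarray, wavelet: str = "db4", level: int = 2):
    """De-noise a grayscale image via wavelet shrinkage.

    Detail coefficients are soft-thresholded with the universal
    threshold sigma * sqrt(2 * log(n)), where sigma is estimated
    from the finest-scale diagonal band (median absolute deviation).
    """
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745  # noise estimate
    thresh = sigma * np.sqrt(2 * np.log(image.size))
    denoised = [coeffs[0]] + [
        tuple(pywt.threshold(band, thresh, mode="soft") for band in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(denoised, wavelet)
```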
System Software Design
The art design system is designed on the MapInfo software development platform. The image generation process is synchronized with the image input and export processes. The image fusion program of the art system is selected through Map Texture Tools to load the code and the program, whereby the image optimization algorithm is loaded; read and write signals and chip information are controlled via the bus [10]. The software of the art design system designed in this paper mainly comprises program loading, storage and read/write functions, bus transmission, and human-computer interaction. The designs of these functions are described as follows. The program loading of the art design system loads the image optimization algorithms and control instructions. The MVC (Model-View-Controller) pattern is used to construct the control components of the graphics rendering system, and MySQL is used as the default database for program loading of the art design system. The boot loader for system program loading is mainly composed of the user application (Application) of the graph-oriented management module. The system selects SuperViVi as the BootLoader and uses the open-source Linux kernel for algorithm read/write and adaptive image optimization, executing program loading and data updates according to cross-compilation instructions.
System Test and Result Analysis
To test the performance of the designed system in image optimization and artistic design applications, simulation experiments were carried out. The development environment for the experiments was the Windows 10 operating system, using Visual C++ 7.0, Vega Prime, Multigen Creator, and other image optimization tools for the design of the image optimization algorithm. The 3D model library of the art design system includes MFC42D.DLL and MFCD42D.DLL. For the parameter settings of the image optimization algorithm, the selected image sizes are 600×400 and 1200×1200, the structural information similarity is 3.89, and the pixel-level parallax is D = 180. With the simulation environment and parameters set as above, the image optimization tests of the art design system were compared.
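Since the test compares export quality via the signal-to-noise ratio, a standard peak signal-to-noise ratio (PSNR) computation of the kind presumably used can be sketched as follows (the paper does not give its exact formula, so this is an assumption):

```python
import numpy as np

def psnr(reference: np.ndarray, exported: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between a reference and an exported image."""
    mse = np.mean((reference.astype(float) - exported.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```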
The original image to be designed is shown in Figure 2. Using Figure 2 as a design template, it is input into the art design system designed in this paper and combined with color-bar analysis to obtain the design effect diagram shown in Figure 3. Analysis of the design effect diagram in Figure 3 shows that, with the method of this paper, the exported image quality is better and the export signal-to-noise ratio is high, which improves the artificial intelligence of the artistic design.
Conclusion
This paper introduces artificial intelligence technology into the design system and uses algorithmic improvements to raise the export quality of the design system. First, the improved image algorithm adjusts the brightness and noise of the image to a more suitable state; then the design software is developed. The focus of this work is the design of the program-loading, data storage and read/write, bus transmission, and human-computer interaction modules. Comparison tests show clearly that the art design system designed in this paper has good graphics and image optimization capabilities; the export of high-quality visual effects contributes to the expressiveness of artistic works.
"Art",
"Computer Science"
] |
A new member of troglobitic Carychiidae, Koreozospeum nodongense gen. et sp. n. (Gastropoda, Eupulmonata, Ellobioidea) is described from Korea
Abstract A new genus of troglobitic Carychiidae Jeffreys, 1830 is designated from Nodong Cave, North Chungcheong Province, Danyang, South Korea. This remarkable find represents a great range extension and thus, a highly distant distribution of troglobitic Carychiidae in Asia. The Zospeum-like, carychiid snails were recently included, without a formal description, in records documenting Korean malacofauna. The present paper describes Koreozospeum Jochum & Prozorova, gen. n. and illustrates the type species, Koreozospeum nodongense Lee, Prozorova & Jochum, sp. n. using novel Nano-CT images, including a video, internal shell morphology, SEM and SEM-EDX elemental compositional analysis of the shell.
Introduction
It is estimated that the Korean peninsula harbors more than 1,000 caves within its Cambro-Ordovician limestone geology (Kashima et al. 1978, Woo et al. 2001). Of these caves, only one, Nodong cave (36°57.186'N, 128°22.938'E) in North Chungcheong Province, South Korea (Fig. 1), is so far known to contain finds of "Zospeum-like" carychiid microgastropods (Kwon et al. 2001, Lee and Min 2002, Min et al. 2004). The shell shape and microsculpture of these tiny snails most closely resemble the troglobitic genus Zospeum Bourguignat, 1856 (Ellobioidea, Carychiidae) rather than the epigeal Carychium O. F. Müller, 1774 (Prozorova et al. 2010, 2011). Cave-dwelling species are not known from nearby Japan, which was recently found to contain the highest lineage diversity for Carychiidae Jeffreys, 1830 (Weigand et al. 2013a). The present material comprises the first account of troglobitic Carychiidae in Asia. Up to now, subterranean taxa included only members of the genus Zospeum, exclusively known to inhabit karst caves of southern Alpine Europe. (The North American species, Carychium stygium Call, 1897, is no longer considered an exclusively troglobitic species (Weigand et al. 2011, 2013b).) The taxon described here represents an extreme range extension to Asia for subterranean ellobioid snails (Fig. 2).
Open to the public as a tourist attraction, Nodong cave extends approximately 800 m in length and drops 300 m in vertical depth. Geographically, it is located near the Nodongcheon, a branch of the Namhan River (Lee 2012) and near the city of Danyang, a resort town at the base of the extensive Sobaeksan National Park. Other known caves and potential habitats for troglobitic carychiid snails in the immediate vicinity include the public caves, Gosu and Cheondong.
When material, such as the shells of troglobitic carychiids, is particularly limited and rare, contemporary non-destructive techniques for taxonomic assessment are essential. Applied in taxonomy, contemporary methods used primarily in medicine and industry can provide new opportunities for understanding global and local biodiversity. They can also act as catalysts for urgently needed conservation strategies regarding rare populations and for extracting valuable information sequestered in their organic forms. In this work, one of six known Korean carychiid shells has been examined using Nano-CT imaging to assess and compare the internal shell morphology of Koreozospeum nodongense sp. n. with that of its supposed closest relative, the European genus Zospeum. In addition, available shell fragments of K. nodongense sp. n. were examined via scanning electron microscopy (SEM) coupled with energy-dispersive X-ray spectrometry (EDX) to investigate the internal morphology of the shell and to determine the elemental composition of the shell matrix. As a secondary consideration, limited information is available regarding the specific geology and ecology of Nodong cave and the adjacent, potentially contiguous caves (i.e. Gosu cave and Cheondong cave) of North Chungcheong Province. SEM-EDX elemental compositional analysis opens windows for inference about the subterranean ecology of K. nodongense sp. n. and likely the ecology of adjacent caves for future investigation.
In this work, a new subterranean taxon is described from Korea based on characters significantly differentiating from European Zospeum morphotypes. SEM and Nano-CT images of the intact shell of the new species and SEM-EDX graphic images of the elemental composition of selected sections of shell fragments are presented.
Material and methods
Similar to conditions known for Zospeum, carychiid snails were collected live on muddy walls in January 2000 by J.-S. Lee in the dark zone of Nodong cave (Prozorova et al. 2010, 2011). One shell (holotype NMBE 534197/1) available for examination outside of Vladivostok and Korea (99 lost, see below) and six paratypes located in Vladivostok were measured according to fig. 1. The number of whorls was counted according to the method described in Kerney and Cameron (1979). For the species description, shell measurements are expressed as: shell height (SH); shell width (SW); height of the last whorl (HLWH); peristome height (PH); peristome diameter (PD); spire angle (SA); number of whorls (W); widest diameter (WD) (distance from top to bottom). Spire angle (SA) is given in degrees; other measurements are in mm. Measurements of the holotype (NMBE 534197/1) were taken from images obtained using a Leica DFC420 digital camera attached to a Leica M165c stereo microscope, supported by Leica LAS V4.4 software. Measurements of the paratypes (ZIN RAS 1) were taken using a LOMO MBS-10 stereo microscope (Lytkarino, Russia). Qualitative aspects of shell morphology are documented, including peristome shape; whorl profile (whorl convexity); protoconch and teleoconch sculpture; the lamella on the parieto-columellar region of the aperture; the configuration of the columellar lamella; and the independent configuration of the columella.
Since the individuals reported by Prozorova et al. (2011), which were housed in the Min Molluscan Research Institute in Seoul, South Korea, have regrettably been lost to science, as much information as possible was extracted from the holotype (NMBE 534197/1), one paratype (IBSS FEB RAS 7787), and some fragments (paratype NMBE 534361/2) using Nano-CT imaging (whole shell) and SEM and SEM-EDX energy-dispersive X-ray spectrometry (fragments). No individuals were preserved in ethanol, precluding molecular analyses and anatomical examination.
Image acquisition
SEM: Koreozospeum nodongense sp. n. (paratype IBSS FEB RAS 7787; now damaged) was coated with carbon and imaged (Prozorova et al. 2011) at the Centers of Collective Use in IBSS and the Institute of Marine Biology FEB RAS using a Zeiss EVO-40 scanning electron microscope (Jena, Germany) in Variable Pressure (VP) mode. SEM-EDX: The morphology (SEM) and elemental composition (EDX) of fragments of the Koreozospeum nodongense sp. n. paratype (NMBE 534361/2) were assessed using an FEI-Aspex Explorer scanning electron microscope system (Hillsboro, OR, USA) with a BE detector for image generation. Non-coated shell material was placed on a cellulose membrane and mounted on a computer-controlled stage for scanning. Elemental composition was detected (each element shows a multiple-peak pattern in the spectrum) using an emission current of 29 mA, an electron-beam acceleration voltage of 20 kV, a sample pressure of 0.15 Torr, and a working distance of 22.9 mm at RJL Micro & Analytic GmbH, Karlsdorf-Neuthard, Germany. In our analyses, some peaks overlap, so the corresponding element labels overlap as well. Peak height represents the intensity of an element, which is proportional to its mass percentage in the assessed shell region.
Nano-CT: Koreozospeum nodongense sp. n. (NMBE 534197/1) was imaged using a nano-computed tomography (Nano-CT) system manufactured and developed by Bruker-Micro-CT/SkyScan (SkyScan 1172, Kontich, Belgium). The video of K. nodongense sp. n. was created using the SkyScan 1172 scanner at RJL Micro & Analytic GmbH, Karlsdorf-Neuthard, Germany. The scanner is equipped with a sealed micro-focus X-ray source and an 11 Mpx CCD detector. The specimen was scanned with a 4 µm voxel size in rotation steps of 0.6° at 59 kV tube voltage and 167 µA tube current. Reconstruction of cross-sectional images was performed using a modified Feldkamp cone-beam reconstruction algorithm. The resolution of the cross-sectional images was 4 µm isotropic voxel side length with a grey-scale resolution of 8 bit. The animated video was generated using a direct volume rendering method implemented in the software CTvox.
ANSP Academy of Natural Sciences of Philadelphia

Differential diagnosis. Differs from Carychium by its squat ovate-conic form, the absence of major apertural dentition, and its singularly troglobitic ecology; from Zospeum by the oblong, slightly detached, oblique, auriform peristome, the shallow suture, the minimally convex whorls, the interrupted low lamella on the roof of the interior penultimate whorl forming an annular lamella, and the conspicuously pleated lip, which is folded back onto the body whorl and not rolled into the body whorl as in Zospeum.
Derivatio nominis. The name derives from Korea, the land of the type locality and the similarity to European Zospeum.
Distribution. Only known from Nodong cave.

Diagnosis. Shell small, thin, ovate-conic, smooth; fine spiral rows of interconnected pits constant throughout the teleoconch; plicate apertural lip may or may not be present (side profile).
Koreozospeum nodongense
Description. Koreozospeum nodongense sp. n. is characterized by a very small, alabastrine, ovate-conic shell with 5 regular, moderately increasing whorls. The penultimate whorl is slightly angularly shouldered at the uppermost extension of the peristome in left and right profile positions (Fig. 3B-C). Peristome oblong, auriform, oblique to the shell axis, partially adnate to the ultimate whorl, otherwise slightly detached (Fig. 3K), more or less thickened (Fig. 3A, D-E); the lip is folded back onto the body whorl and thickly plicate over 3/4 of the lip side-view height (Figs 3C, 4B, E); deep umbilical notch (Fig. 3H-I) with wrinkles projecting into the notch behind the peristome region (Fig. 4D); robust columellar lamella running into the shell interior (Figs 3A, D-E, 5I-K). The protoconch is obtuse and shows a pattern of spiral interconnected pits (Fig. 4); the teleoconch bears tightly spaced irregular spiral striae of densely interconnected pits (Figs 4, 9) and shows a marbled surface pattern of faint, horizontally elongated chevrons intercalating with each successive whorl (Fig. 5A, C). Suture irregular and shallow, bordered by a white marginal zone at each increasing abapical whorl (Figs 3C, F, 4). Interior perspectives show a parietal structure consisting of a partially discontinuous lamellar ridge on the roof of the penultimate whorl (Fig. 4I-K), which then develops into the uniformly shaped annular lamella running directly under the penultimate whorl into the aperture. The columella is moderately slender and clavate.

Distribution. Only known from the type locality.

Ecology. A suggested mix of volcanic elements in the cave mud of Nodong cave.

Conservation status. A cursory search through the Internet indicates that the region harboring caves encompassed within the administrative boundaries of Danyang County is greatly threatened. Due to the abundance of limestone in the area, cement factories are big industries there. Of more immediate threat, however, is the frequent human traffic that the caves of Nodong, Gosu, and Cheondong receive in light of their popularity as tourist attractions. To exacerbate concerns, a newly built stairway into the deepest, darkest sections of the cave has made Nodong more accessible (Lee 2012). Since K. nodongense sp. n. is known to live in only one locality and the area is potentially declining due to human encroachment, this species is Critically Endangered (CR) under IUCN criteria (IUCN 2014).
Remarks. Koreozospeum nodongense sp. n. appears to be polymorphic with regard to the configuration of a plicate versus non-plicate apertural lip (side view). This elaboration of the lip was apparent in two shells (NMBE 534197/1; ZIN RAS 1) of the five examined shells (one juvenile with an undeveloped lip). We have little doubt that the plicate and non-plicate specimens co-occurring at Nodong cave are conspecific. Prozorova et al. (2011) initially examined the paratype specimen (IBSS FEB RAS 7787) using SEM (Fig. 4). This work revealed microstructural pitting on the protoconch consistent with the concentric pitting pattern reported by Jochum (2011) as a consistent character for the worldwide members of the extant Carychiidae. Protoconch pitting is also known in Eastern European carychiid fossils examined via SEM (Strauch 1977, Stworzewicz 1999, Harzhauser et al. 2014a, 2014b). In congruence with the findings of Prozorova et al. (2011), the fragments of K. nodongense sp. n. examined here show tightly spaced irregular spiral striae of densely interconnected pits with occasional non-pitted patchy zones over the entire teleoconch (Fig. 9). This dense pattern of total teleoconch pitting is also found in Zospeum isselianum Pollonera, 1887 and Zospeum bellesi Gittenberger, 1973 (Jochum, unpublished data).
The SEM-EDX analysis (Fig. 10A-B) of the surface structure located in the central zone of the fragment edge and of the internal surface of the shell shows varying concentrations of the same elements, including calcium (Ca), aluminum (Al), silicon (Si), oxygen (O), and carbon (C), for these two separate regions of the shell. Interestingly for K. nodongense sp. n., the trace elements aluminum (Al) and silicon (Si) might potentially be involved in the biomineralization process of the shell matrix. It is not clearly discernible whether they are intrinsic to the shell or represent contaminants from the substrate. Further study, independent of this work, involving major- and trace-element analysis coupled with isotope-geochemical analysis might show the relatively large variability of elements found in our SEM-EDX analyses to be due to the heterogeneous nature of different magmas mixing at different stages of their evolution in historic volcanic eruptions in South Korea (Brenna et al. 2012). Eroded lava particulates and ash may well constitute the sediment overlying the Ordovician limestone of Danyang County.

We thank Emmanuel Tardy (Museum d'Histoire Naturelle de Genève) and Katharina Jaksch and Anita Eschner (Naturhistorisches Museum, Wien) for their help in providing images of Zospeum type material. We are grateful to Ronald Janssen (Research Institute Senckenberg, Frankfurt am Main) and Eike Neubert (Naturhistorisches Museum der Burgergemeinde, Bern) for valuable discussion and NMBE support of this work. We are indebted to Ron Noseworthy for his help in the initial acquisition of the material and for his insights on Korean malacofauna. We wish to thank the editor, Martin Haase, and the reviewer, Rajko Slapnik, for their constructive input on an earlier version of the manuscript. This work was supported by grant number 15-I-6-069 from the Far Eastern Branch of the Russian Academy of Sciences.
"Biology"
] |
Towards Lean Automation in Construction—Exploring Barriers to Implementing Automation in Prefabrication
As a sustainable alternative to conventional cast-in-situ construction, modular construction (MC) offers several promising benefits concerning energy and waste reduction, shorter construction times, as well as increased quality. In addition, given its high degree of prefabrication, MC offers ideal conditions to solve the industry’s long-lasting productivity problem by implementing manufacturing concepts such as lean production and automation. However, in practice, the share of automation and robotics in the production process is still relatively low, which is why the potential of this construction method is currently far from being fully exploited. An overview of the particular barriers to implementing automation in the context of MC is still lacking. Therefore, a qualitative study was conducted including eight MC manufacturers from Germany, Austria, and Switzerland. Following a comprehensive literature review, expert interviews were conducted based on an academically proven framework. Thereby, seven barrier dimensions with 21 sub-categories could be identified. The findings of this study contribute to the understanding of current barriers to implementing automation in prefabrication and how they can be overcome most effectively. Additionally, recommendations for future research are proposed within a research agenda.
Introduction
With the highest amounts of energy consumption and CO2 emissions among all industries, the building and construction industry is urged to take immediate action to meet the sustainable development goals [1]. However, on the path towards more sustainable operations, there are several problems inherent in the industry's culture that need to be addressed. Since the creation of value in the construction industry is generally project-based, low levels of value chain integration with a large number of ever-changing project participants limit learning effects and productivity gains drastically [2]. Accordingly, it is not surprising that the industry's productivity has been stagnating over the last three decades, while the efficiency of producing goods in the general manufacturing industry almost doubled in the same time [3]. As a result, more often than not, building projects suffer from cost and time overruns [4].
One promising solution to solve these problems could be found in modular construction (MC). As a distinctive form of off-site construction (OSC), it is defined as a modern method of construction that uses pre-finished volumetric units (so-called modules) to assemble the final building on-site [5]. Scholars have shown that applying MC has the potential to reduce construction times [6], as well as improve building quality and working conditions [7,8]. Moreover, environmental sustainability can be achieved by reducing waste and energy consumption [9]. In addition, by relocating the vast majority of construction operations to a controlled factory environment, an integrated value chain can be created, building on concepts such as Design for Manufacture and Assembly (DfMA), lean production, and modularization [10]. Furthermore, MC offers ideal conditions for implementing advanced manufacturing procedures using automation and robotics, which has been regarded as a cornerstone of the recently advocated Construction 4.0 (C4.0) approach [11][12][13].
While these advantages over conventional cast-in-situ construction have led to a considerable uptake of this construction method in several countries worldwide [14], recent studies consistently showed that the adoption of automation in MC and OSC is still relatively low [15][16][17]. Current applications of MC are oftentimes only a mere shift of construction operations to a structured factory environment, where tasks are still carried out manually based on the craftsmanship approach [18]. As a consequence, productivity gains, as could be observed in the general manufacturing industry, remain far from being reached. In addition, although factory-based production enables MC manufacturers to make use of economies of scale, there are currently no significant cost reductions compared to conventional construction [19]. In the heavily cost- and profit-driven construction business, this circumstance prevents big players (i.e., developers and housing corporations) from applying this construction method in their projects [20].
It is therefore crucial to understand why automation has not yet been transferred to the production process of MC, despite the well-known benefits observed in other industries. A comprehensive overview of the barriers that MC manufacturers face when implementing automation in their production is still lacking. Therefore, the following research question is formulated: What Are the Barriers to Implementing Automation into the Production Processes of MC?
Accordingly, this study aims to identify, categorize, and evaluate the respective barriers in a suitable framework to close this research gap. Based on a comprehensive literature review, eight in-depth expert interviews with high-ranked representatives of companies that are actively producing modules in their facilities were conducted. Experts from the respective companies were interviewed using semi-structured interviews. In addition, a considerable number of secondary materials were integrated into the data set.
The remainder of this paper is structured as follows. Section 2 gives an overview of the theoretical background concerning the concepts of LC and C4.0, introduces MC as a potential means to efficiently implement lean automation in construction, and highlights the research gap and contributions of this paper. Section 3 states the applied research methodology. Section 4 presents the results structured along the identified barrier dimensions. Subsequently, the results are discussed in Section 5. Lastly, a conclusion is provided in Section 6.
Theoretical Background
In the following, Section 2.1 gives a brief overview of the concepts of lean construction and Construction 4.0. Section 2.2 introduces MC as a potential means to fully exploit the benefits of the aforementioned concepts by applying lean automation.
Lean Construction and Construction 4.0
The term lean construction (LC) originates from the concept of lean production [21] and refers to the adaptation and application of the underlying principles from manufacturing to the context of construction [22]. Lean production itself has its roots in the Toyota Production System [23], which is based on one core principle: focus on value-adding activities by eliminating all kinds of waste [24]. Lean process design is built on continuous improvement and pull production to reduce lead times and production costs, while increasing the quality of products and the efficiency of the underlying production system [25]. The simplicity of lean production combined with its potential to increase productivity has made it one of the prevailing management approaches over the last three decades [26].
Similarly, the concept of C4.0 originates from the concept of Industry 4.0 (I4.0), which is referred to as the fourth industrial revolution [27]. Driven by widespread digitalization and the emergence of advanced digital technologies, such as Artificial Intelligence (AI), Big Data, and the Internet of Things (IoT), the manufacturing industry is on the edge of a paradigm shift [28]. This shift towards the fourth industrial revolution is characterized by automated, decentralized, and smart value creation networks enabled by IoT technologies [29]. It enables the creation of a cyber-physical environment in which machines interact with each other (machine-to-machine communication) without any human intervention [30]. I4.0, therefore, has the capacity to fundamentally improve processes in every stage of value creation and thereby boost operational effectiveness and productivity [31].
In construction, there have been numerous approaches to adopt and transfer both concepts from manufacturing to construction operations. Accordingly, within LC, several concepts and techniques have evolved to enhance the productivity of construction projects. For instance, frequently applied methods include the Last Planner System [32], application of 5S (sort, straighten, shine, standardize, and sustain) to the construction site [33], and KANBAN for material storage on-site [34]. Among others, applying these techniques has been proven to significantly reduce the risk of project time overruns [35].
Similar to the adaptation of the principles of lean production, there have been efforts to apply the underlying principles of I4.0 to construction projects [36]. While the number of research papers has been continuously increasing over the last years, three scholars have attracted considerable attention [11][12][13]. In 2016, Roland Berger [11] coined the term 'Construction 4.0' to describe the future developments driven by the digital transformation of the industry. In their conceptualization, they listed the following four key factors: automation, connectivity, digital access, and digital data. Sawhney et al. [12] conceptualized those efforts in three transformational trends: industrial production, cyber-physical systems, and digital technologies. Craveiro et al. [13] emphasized that the construction industry would have to transform towards the fourth industrial revolution through the industrialization of the construction process and the general digitization of the construction industry.
Notably, all three conceptualizations include the transformation towards increased use of prefabrication (i.e., automation, industrial production, and industrialization of the construction process). Considering the characteristics of the current value creation process in conventional construction, this development can simultaneously be regarded as a great challenge, as well as a great opportunity [30,37]. More precisely, implementing innovations in the construction industry is hampered by several obstacles: Originating from the project-based structure, the complexity of processes is generally higher compared to other industries [38]. In theory, this complexity has mainly been ascribed to high uncertainty and interdependence in construction projects [39]. Due to many different participants in the overall value creation process, there are numerous interfaces between the distinct construction trades and the respective companies, leading to inefficiencies [34]. Some described this supply chain design as a "loosely coupled system", hindering participants to innovate and making use of learning effects hardly possible [40]. Effectively, pursuing technical innovations in a less integrated supply chain rather hampers collaborations than lets them flourish, since many partners are not capable or willing to take the same path [30]. Koskela [22] stated that characteristics such as temporary project-based collaborations, unique building designs, and on-site work lead to inefficient workflows and the generation of waste, contradicting the main principle of lean production. Therefore, to fully exploit the benefits of best practices from the manufacturing industry in the context of construction, the value creation process including the general supply chain structure would have to be redesigned and re-engineered [34,41]. The construction process should be aligned with manufacturing processes (product-based) [22], rather than improving traditional construction procedures with technological advancements [42]. By industrializing the construction process using high levels of prefabrication, the concepts of lean production and I4.0 could even be implemented simultaneously, which is referred to in the manufacturing industry as lean automation [43]. Recent research found that the combined use of both approaches not only facilitates the implementation of each concept [44], but also leads to additional benefits. Accordingly, I4.0 tools complement lean production by increasing flexibility, as well as higher customization of products, allowing more effective responses to market fluctuations [45]. In addition, further improvements within all three dimensions of the TBL of sustainability could be observed [46], which highlights the immense potential that adoption would have for the construction industry. However, while research on LC and C4.0 has been growing during the last few years [36], research on the actual implementation of automation in OSC production is still scarce.
Introducing Lean Automation in Modular Construction
As a feasible solution to automate construction processes, MC has been intensively studied from various perspectives over the last two decades [47]. The term MC is used interchangeably with denotations such as modular building [5], modular integrated construction (MiC) [10], or prefabricated prefinished volumetric construction (PPVC) [48]. Generally, it can be defined as a distinctive form of OSC with a very high degree of prefabrication. More specifically, it is characterized by fully furnished volumetric units that are manufactured in a factory environment and transported to the building site for final assembly [5]. Researchers reported its superiority over conventional construction from an economic point of view in terms of construction times and technical quality [8]. Furthermore, from an environmental perspective, waste generation and energy consumption can be decreased [9], while resilience and timeliness in the production process can be achieved [49].
With up to ninety percent of the value creation taking place off-site [18], the centerpiece of this construction method is the manufacturing process of the modules. Given the production in a structured factory environment, MC offers the optimal conditions to fully exploit the benefits of lean production [10] and even allows the implementation of more progressive concepts from the manufacturing context, such as I4.0 [50]. In addition, it enables an OEM-like industry structure known from the manufacturing industry [42], as opposed to the fragmented supply chain design in conventional construction [40]. Accordingly, by introducing state-of-the-art production designs based on lean automation, not only tremendous productivity gains, but also a transformation of the entire construction process, can be achieved [18]. Besides shifting the value creation from a cost-driven to a value-driven approach, the use of fully automated and lean processes significantly improves overall transparency and access to relevant information during all stages of the construction process [51].
However, in practice, the potentials of this construction method are far from being fully exploited. Accordingly, Albus and Drexler [15] found in their practice-oriented study with German MC manufacturers that, although the manufacturing process of the modules has high levels of prefabrication, the level of automation is still relatively low. Similar observations were made in the context of New Zealand's OSC market [16]. According to the researchers, most manufacturers still rely on production setups in which the vast majority of tasks are done manually with minimal use of automation. According to Bock and Linner [18], the flow of materials in the production of most OSC approaches is still organized like a workshop rather than a production line, as opposed to state-of-the-art production facilities. Consequently, current approaches to manufacturing modules can mainly be described as a shift from on-site to off-site craftsmanship.
Research Gap and Contributions
Concerning the extant literature focusing on the implementation of automation in construction, two research streams can generally be distinguished. First, there is research on automation in construction on a general level. While there have been numerous relevant studies referring to automating on-site construction operations [17,52,53], only a few scholars have specifically addressed automation in OSC and MC [54]. Most existing studies investigated barriers to prefabrication as an integrated step of the traditional building approach (e.g., prefabrication of building components), rather than as a stand-alone construction approach. Accordingly, Davila Delgado et al. [17] examined the challenges of automation in OSC as only one of four parts forming activities to automate processes in the construction industry. In addition, most research on barriers to implementing automation and robotics in construction has been primarily conducted from the perspective of technology [55], despite recent findings stating that the adoption of automation is rather dependent on environmental and organizational circumstances than on the technology itself [56].
Second, recently, many relevant studies examined hindrances to applying MC and OSC as an alternative to conventional construction [14,19,48]. While the barriers to the widespread adoption of this construction method can be regarded as well-known [20], research on barriers to the implementation of automation and robotics in the underlying production system is still scarce. Darlow et al. [16] devoted a section of their study to the status quo of automation in OSC in New Zealand, while Pan and Pan [57] investigated determinants to implement automation in precast concrete production. However, no study has specifically addressed barriers to applying automation in the context of MC, despite it offering ideal conditions for effective implementation. Consequently, a comprehensive overview of the underlying factors inhibiting the adoption of this technological advancement is still missing.
To close this research gap, this study aims to identify, categorize, and evaluate the underlying barriers to implementing automation in MC. It contributes to the academic literature by introducing a comprehensive framework of current barriers expanding the perspective to various dimensions of inhibiting factors. By revealing the underlying reasons for the currently low level of adoption from the perspective of MC manufacturers, researchers are equipped with numerous starting points for future research to effectively overcome pending barriers. The study, thereby, paves the way to an efficient application of LC and C4.0, resolving the long-lasting problem of stagnating productivity.
Research Methodology
In order to identify the barriers that MC manufacturers face when automating their production processes, a qualitative research approach was applied. As a comprehensive overview of the respective hindrances is still lacking, an explorative study design is particularly suitable in this context [58].
The study is based on a comprehensive literature review and in-depth expert interviews. Experts were interviewed using a semi-structured interview design to allow openness of responses while collecting data in a structured way [58]. The interviews were conducted between December 2021 and February 2022 with eight managers from MC manufacturers in Germany, Austria, and Switzerland. Considering the selection of experts, special emphasis was placed on high expertise in the field of prefabrication (more than 10 years of professional working experience), as well as active involvement in the decision process of developing the module manufacturing process. In addition, the selection of companies was limited to companies operating their own production facility to allow well-founded evaluations of potential barriers and challenges of implementing automation. This restriction can be regarded as the primary reason for the relatively low number of interviewed experts, as MC manufacturers in the aforementioned countries owning a production facility are still scarce. Table 1 gives an overview of the position of interviewed experts in their respective companies, as well as the company size according to the number of employees. For reasons of confidentiality, the names of the interviewees and companies have been anonymized.

The interview guideline consisted of three parts. In the first part, experts were asked about their position, as well as their professional background and working experiences. In the second part, the underlying research context of this study was explained to the interviewees. Ultimately, in the third part, experts were asked to provide their detailed opinion on barriers that they expect to encounter or have already encountered in their practical experience when implementing automation into production processes. Inspired by the overview of risks of adopting Industry 4.0 by Birkel et al. [29], the experts were asked the following questions:
• What are the economic barriers to automating manufacturing processes?
• What are the ecological barriers to automating manufacturing processes?
• What are the social barriers to automating manufacturing processes?
• What are the process barriers to automating manufacturing processes?
• What are the technical barriers to automating manufacturing processes?
• What are the IT barriers to automating manufacturing processes?
• What are the regulatory barriers to automating manufacturing processes?
The expert interviews were conducted via an online meeting tool and lasted between 30 and 53 min. To ensure all relevant information was captured, the interviews were audio-recorded and transcribed. Subsequently, qualitative content analysis [59] was applied to analyze the collected data by identifying common patterns and themes. To structure the data, following Gioia et al. [60], a systematic coding procedure consisting of three steps was applied (see Figure 1). Initially, first-order categories were derived from the interview data. As a second step, these categories were synthesized into second-order concepts inspired by previous findings of the extant literature [61]. Ultimately, the identified second-order concepts were consolidated into seven barrier dimensions. The resulting dimensions, as well as second-order (top-codes) and first-order (sub-codes) items, can be found in Table 2.
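To illustrate the shape of the coded data produced by this three-step procedure, the following is a minimal sketch (the nested dictionary abbreviates Table 2; the names and the reading of the counts as numbers of mentioning experts are our illustration, not the study's instrument):

```python
from collections import Counter

# Dimension -> top-code -> {first-order code: count from Table 2}
# (abbreviated excerpt, for illustration only)
barriers = {
    "Economic": {
        "Financial": {"High initial investment": 7, "Higher fixed costs": 2,
                      "Higher personnel costs": 3},
        "Demand": {"Low production quantity": 3, "Low capacity utilization": 2,
                   "Small project scale": 1},
    },
    "Ecological": {
        "Transport emissions": {"Longer transport distances": 3},
    },
}

# Step 3 of the coding procedure: consolidate counts per barrier dimension.
per_dimension = Counter({
    dim: sum(sum(codes.values()) for codes in top.values())
    for dim, top in barriers.items()
})
for dim, mentions in per_dimension.most_common():
    print(f"{dim}: {mentions} coded mentions")
```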
Table 2. Barrier dimensions with second-order (top-code) and first-order (sub-code) categories, frequency counts in parentheses, and exemplary expert statements.

Economic
- Financial: High initial investment (7); Higher fixed costs (2); Higher personnel costs (3). "The investment costs are disproportionate to the benefits." (E6) "We would need IT specialists that have different wage structures compared to craftsmen." (E8)
- Demand: Low production quantity (3); Low capacity utilization (2); Small project scale (1). "Due to small quantities, automation is not economically viable." (E6) "I would have to ensure high capacity utilization of the production facility." (E2)
- Competition: Profitability of conventional construction (3); Loss of flexibility (4). "This expensive technology cannot compete with the prevalent wage structure of the industry." (E7) "We are customer-oriented and project-based. Creating a repetition effect using robots is therefore difficult." (E1)

Ecological [51,62]
- Transport emissions: Longer transport distances (3). "Through automation, I am more bound to one location. Transport distances will increase." (E8)

Social [63][64][65]
- Corporate culture: Internal resistance (3); Communication (3); Fear (2). "Older employees are skeptical about automation and digitization. And I think there is also great fear." (E4)
- Industry culture: Individual customer requests (6); Negative attitude (4); Traditional thinking (2). "The industry is very traditional. I need my architect, my structural engineer, my landscape planner. Authorities have certain procedures that do not allow industrialized construction." (E7)
- Knowledge: Own knowledge (2); Industry partners' knowledge (3); Authorities' knowledge (1). "Industrial planning is much, much more complex. You have to deal with the system much more. You are almost a mechanical engineer when you plan with modules." (E7)
- Workforce: Job losses (2); Missing qualification (4); New job descriptions (2). "The craftsman who currently screws the drywall panels onto the walls will no longer be able to find a job in an automated production line because he simply cannot handle this relatively complicated technology." (E1)

Process [18,62,66]
- Industry: Late design changes (3); Fragmented industry (5); Tendering not suitable (1). "Also, the consistency. I must control the entire process chain to make it work optimally." (E8) "The construction sector is still very fragmented with thousands of two-person and three-person offices." (E3)
- Production: Low standardization (7); Production sequence (5); Depth of planning (3). "Currently every building is a prototype." (E2) "The process would require restructuring. Flow principle should be implemented." (E4) "Shift from drawing to parameterizing an element." (E8)
- Logistics: Transport restrictions (1); High stock of inventory (1); Space for finished products (1). "If we want to increase quantity, we have to stock more material, as there have been frequent supply shortages recently." (E4)

Technical
- … (6); Availability of machinery (2). "How do I tile or paint a wall? How does outfitting work using automation?" (E6)
- Material: Loss of flexibility (5); Availability of main material (2); Component assembly (2). "I have to commit myself to a building material: I cannot weld wood, but I need to weld steel." (E3)
- Building geometry: Distinct construction sites (2). "Varieties in requirements of clients, construction sites, and federal states make high quantities hardly possible." (E3)

IT [51,54,68]
- IT infrastructure: IT capabilities of partners (2). "Externally, we cannot use BIM because our partners do not know what to do with it." (E1)
- Database: Build-up of a database (2); Maintain a detailed database (2). "Each product that is purchased should be included in a database. [ . . . ] However, the administrative effort is high." (E2)
- Software: Transfer of 3D models to machines (3); Software interfaces (6); Availability of suitable software (2). "Our plans are 3D and purely digital. The question is if we can transfer those to machines adequately." (E2) "There is no seamless IT solution like in the manufacturing industry. We are forced to work with interfaces." (E7)

Regulatory [16,19,69]
- Regulations: Regulations in each state differ (7); Unspecified construction method (4); Outdated industry norms (2). "It would be very helpful to have the same building regulations Germany-wide and all over Europe." (E3) "There are no appropriate regulations. There is no norm that is called 'modular construction'." (E4)
- Permissions: Extensive inspection and testing (4); No standardization of permissions (5); Inefficient public authorities (3). "Every part is produced with the same reinforcement. However, reinforcement acceptance must always take place." (E7) "There is a lack of knowledge among authorities concerning this construction method." (E2)
- Tendering and contracting: Low standardization of tenders (2); Low requirements in contracting (2). "Tender several projects at the same time that are built with the same system." (E2) "Bidders must meet stricter requirements concerning working conditions." (E7)
- Funding: Changing subsidies for clients (2); Lack of financial support (2); Lack of know-how funding (1). "If one energy standard is promoted more than another, I have to adjust the product." (E8) "Know-how funding through consulting services regarding the implementation of automation." (E1)
Results
The results reveal the following seven barrier dimensions: economic, ecological, social, process-related, technical, IT-related, and regulatory. The dimensions are further classified into 21 second-order and 53 first-order categories that hinder the implementation of automation in MC. Table 2 gives an overview of the dimensions and sub-categories, including exemplary expert statements.
Financial

Concerning the financial barriers to implementing automation, it has already been shown in other sectors that replacing manual process steps with automated machinery and robotics comes at high cost [17]. It is therefore not surprising that the high initial investment for setting up fully automated and digitized factories for MC is a major challenge in this context as well. Manufacturers currently cannot estimate the economic benefits of this technological innovation, or they expect the returns to be too low. Besides the large upfront investment, practitioners fear a loss of flexibility in their production system due to higher capital and fixed costs. More precisely, the costs of loans to acquire the machinery, as well as of operation and maintenance, are significantly higher than those of employing craftsmen to assemble the modules manually. In addition, implementing automated production systems would require hiring employees with different job profiles (i.e., IT specialists and mechanical engineers), likely leading to higher personnel costs or, respectively, to intensive training for current workers.
Demand
With regard to the current demand for MC, practitioners emphasized that the production volume is still too low for an economically viable implementation of automation. Accordingly, for a profitable application of robotics, the production output would have to be sufficiently high and stable over many years to ensure high capacity utilization and a reasonably good return on investment, which is currently at least questionable. One problem in this context is the low degree of standardization across individual projects. Oftentimes, orders placed by customers are very individual and of small scale, so the production line has to be re-adjusted with every new project and the repeatability is relatively low. Accordingly, the costs of implementing a highly complex infrastructure, including state-of-the-art machinery and software, are currently not considered to lead to the cost savings (i.e., economies of scale) that would be expected for such an investment.
Competition
Generally speaking, MC manufacturers are competing with traditional contractors. Due to the cost- and profit-driven nature of the construction business, low-wage structures in conventional on-site building projects are the reference. Considering the higher costs of manufacturing buildings in a fully automated factory environment, MC manufacturers can hardly compete with competitors applying the conventional construction approach. Accordingly, E7 stated: "This expensive technology cannot compete with the prevalent wage structure of the industry". Some practitioners additionally claimed that, to make automation a viable business case, the industry would have to change from a cost- to a value-driven approach. More precisely, commercial customers should focus more on other factors, such as the delivered quality, rather than the cheapest offer, when awarding a contract. Another problem MC manufacturers face is the still high economic feasibility of conventional construction: applying conventional operations (the craftsmanship approach) in a factory environment already yields profits that are considered high enough. The risk of changing these procedures and committing to large investments is therefore considered inappropriately high. A further problem lies in the loss of flexibility regarding individual customer requests. Practitioners fear that their high customer orientation could suffer from increased automation of their production processes.
Ecological
Transport Emissions
From an ecological point of view, barriers to automating the MC production process appear to be manageable, which might be due to the general superiority of OSC over conventional construction in terms of environmental sustainability [20]. Nevertheless, practitioners emphasized that, for an economical application of automation, production would have to be bundled at one location to ensure a high production output with high capacity utilization. As a consequence, the total transport distances from the factory to construction sites would very likely increase. Accordingly, the resulting increase in transport emissions compared to closer, non-automated production facilities has to be considered.
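The underlying trade-off can be sketched with simple arithmetic, since transport emissions scale roughly linearly with haul distance. The emission factor, volumes, and distances below are assumed values for illustration only.

```python
# Minimal sketch of the transport trade-off: one centralized automated plant
# vs. several closer regional plants. All numbers are hypothetical.

EMISSION_FACTOR = 0.9   # kg CO2 per truck-km (assumed heavy-goods value)
TRIPS_PER_MODULE = 1    # one truck trip per module (simplification)

def transport_emissions(modules: int, avg_distance_km: float) -> float:
    """Total transport CO2 (kg) for delivering modules over an average haul."""
    return modules * TRIPS_PER_MODULE * avg_distance_km * EMISSION_FACTOR

central = transport_emissions(1000, avg_distance_km=350)   # bundled production
regional = transport_emissions(1000, avg_distance_km=120)  # closer factories
print(f"Centralized: {central / 1000:.0f} t CO2, regional: {regional / 1000:.0f} t CO2")
```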
Corporate Culture
The socio-cultural perspective on barriers can be distinguished between a company-internal and an external, industry-wide perspective. Both share the fact that the construction industry is well known for conservative and risk-averse thinking [63]; challenging the status quo of operating procedures (i.e., craftsmanship vs. automation) traditionally attracts resistance [70]. Concerning internal factors, one problem lies in the resistance of MC manufacturers' employees towards this technological advancement. While shop-floor operators might fear for their job security due to the potential obsolescence of their current manual tasks, managers could view this change in operating procedures with skepticism, as they are oftentimes used to traditional craftsmanship approaches.
Industry Culture
From an external perspective, industry participants mostly view off-site approaches with skepticism, which may be due to a negative attitude or aversion towards change in general. Another problem is the prevailing order mechanism of the industry. Accordingly, customers of MC manufacturers or general contractors are used to ordering highly individualized buildings rather than being offered standardized solutions. One expert stated: "When comparing cars and buildings, with buildings customers are less likely to accept design fixations". Consequently, the customer expectations towards value delivery based on an engineer-to-order approach limit the application of high levels of standardization and automation in the production of MC.
Knowledge
Since most employees, including managers and executives, of MC manufacturers have a professional background in architecture, engineering, and construction (AEC), their knowledge of manufacturing approaches, including the automation of manufacturing processes, might be limited and therefore a considerable barrier to introducing automation. Another problem lies in the industry partners' knowledge. For instance, architects are oftentimes not used to planning with high levels of prefabrication and automated systems. Since the final design has to be fixed earlier in the process and hardly allows later changes, planning can generally be regarded as more sophisticated and time-intensive compared to conventional construction. One expert stated that it would almost require the planner to be a "mechanical engineer".
Workforce
Considering problems related to the workforce, current field operators do not have the qualifications to control and configure an automated production line, as they are mostly craftsmen used to assembling building components manually. Therefore, workers would either have to be adequately trained to fit this new job description or be replaced by workers with other job profiles. While this would create an opportunity to attract younger people, thereby counteracting the problem of an aging workforce, MC manufacturers have to consider the social factor of potential job losses, as well as the time and costs required for training current employees.
Industry
Concerning process-related barriers, practitioners emphasized the unfavorable prevailing value-delivery approach of the construction industry. Since integration along the construction supply chain is very low, many individuals and companies are involved, requiring many interfaces and leading to inefficiencies during the overall construction process. More precisely, the traditional approach includes specialists from different areas, such as architects, structural engineers, and landscape planners, who work independently from each other and are oftentimes organized in small offices. Due to this fragmented industry structure, many smaller players are not capable of innovating and adopting the measures needed to deal with industrialized construction. In addition, practitioners noted that authorities' working procedures (such as permissions) are incompatible with the OSC approach (also see Section 4.7).
Another problem hindering the automation of processes is late design changes. Since customers, as well as project participants, are used to on-site changes of the original design in the traditional approach, they expect this to be possible in MC as well. However, late design changes require adjustments and re-configurations of production lines that disrupt the whole manufacturing process. All in all, the production process of MC is currently oftentimes controlled by external participants requiring late changes to the production facility or, alternatively, manual interventions.
Production
One of the most mentioned barriers to automation is the low level of standardization in current MC production. Practitioners claimed that every building is planned and produced very individually ("Prototype", E2), which requires adapting the production process with every new project. Many MC manufacturers do not have fixed module sizes, which additionally results in higher variability and complexity.
Another problem is the current production set-up. The flow of material is organized like a workshop rather than a production line, which is not favorable for the application of advanced automation technologies. Consequently, the entire factory design and set-up, as well as operating procedures, would have to be re-structured to enable an efficient implementation of robotics. Lastly, practitioners emphasized that the design and the corresponding work instructions are still based on drawings rather than being parameterized. More precisely, automated machines would need digitized information to work with, so drawings would have to be translated into a parametric language readable by machines (computer-aided manufacturing), as sketched below.
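The following is a minimal sketch of such a translation step, deriving an ordered task list for a hypothetical automated framing line from a parametric wall-panel description. The data structure, dimensions, and task names are invented for illustration and do not reflect any particular manufacturer's CAM pipeline.

```python
from dataclasses import dataclass

# Hedged sketch of the "drawings to parameters" step: instead of a 2D
# drawing, a wall panel is described by machine-readable parameters from
# which CAM-style task instructions can be derived. Names are invented.

@dataclass
class WallPanel:
    length_mm: float
    height_mm: float
    stud_spacing_mm: float

def cam_tasks(panel: WallPanel) -> list[str]:
    """Derive a simple ordered task list for an automated framing line."""
    n_studs = int(panel.length_mm // panel.stud_spacing_mm) + 1
    tasks = [f"cut {n_studs} studs to {panel.height_mm} mm"]
    tasks += [f"place stud at x={i * panel.stud_spacing_mm:.0f} mm"
              for i in range(n_studs)]
    tasks.append("fasten sheathing")
    return tasks

for task in cam_tasks(WallPanel(length_mm=3600, height_mm=2400, stud_spacing_mm=600)):
    print(task)
```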
Logistics
Concerning logistics, practitioners emphasized that the high production volume required for economically viable automation would bring several logistical challenges with it. Accordingly, the warehouses in which production materials are stocked would have to be expanded. One expert specifically mentioned that just-in-time construction would hardly be possible due to supply problems of essential components, such as insulation. Another expert claimed that, occasionally, modules that are ready for assembly on-site have to be stored close to the production facility until being transported to the construction site. Limited factory space might therefore hamper a further increase in production output.
Machinery
From a technical perspective, practitioners voiced doubts about how specific operating procedures that are currently done manually could be performed using robotics. For instance, E6 stated: "How do I tile or paint a wall? How does outfitting work using automation?". While the joining of large components such as walls and ceilings is of less concern to most practitioners, the steps inside the module (i.e., the furnishing of the modules) are seen as a major challenge for implementing automated machinery. In this context, the flexibility of the machinery to perform the required tasks is especially questioned. Another problem, according to the experts, can be seen in the general availability of adequate machinery and robotics. Since most machines are designed for purposes other than building a module for residential living, it was questioned whether the required technology even exists.
Material
From the perspective of the materials used for constructing a module, there are traditionally three possible choices: wood, concrete, and steel. When implementing an automated production system, MC manufacturers would have to commit themselves to one main material and therefore lose flexibility. This is considered to be due to the different processing required for each of these materials. E3 put it as follows: "I cannot weld wood, but I need to weld steel." While a change of the main construction material would require various changes to the production facility, it is assumed that a machine cannot easily be adjusted to process other materials. MC manufacturers would therefore lose the ability to respond to changes in regulations or market dynamics for certain production components.
Building Geometry
While the building geometry or, respectively, the geometry of the building site can also be regarded as a barrier towards OSC in general, this factor is of specific concern for implementing an automated production. Accordingly, the requirements of distinct construction sites hinder the introduction of a module with standardized dimensions (height, length, width) and therefore are expected to limit the application of full automation due to frequent adjustments to the production line.
IT Infrastructure
One concern in terms of IT is the digital capability of external stakeholders. Some practitioners reported that the benefits of collaborating with external partners by using sophisticated approaches, such as BIM, are still very limited due to their low level of digitalization. E1 emphasized that this holds for project participants, such as planners of technical building equipment, as well as the clients themselves. E8 added that there is a lack of "continuity of the digital chain" from manufacturing to final delivery of the project to the clients.
Database
Implementing robotics and automation necessarily requires digitization. Accordingly, a database is required that contains all information for each component used in the production process; a sketch of what this could look like follows below. While E2 emphasized that the effort to implement and maintain such a database is extremely high, E4 stated: "Without data, there is no digitalization. The data is needed to communicate with the machines". Consequently, one challenge is to fully digitize current procedures before being able to implement automation.
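A minimal sketch using an in-memory SQLite table is given below; the schema and fields are assumptions for illustration, not the systems used by the interviewed firms.

```python
import sqlite3

# Minimal sketch of a component database as described by the experts:
# every part used in production carries the data a machine would need.
# Schema and fields are illustrative assumptions.

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE component (
        id          INTEGER PRIMARY KEY,
        name        TEXT NOT NULL,
        material    TEXT NOT NULL,      -- e.g., wood, steel, concrete
        length_mm   REAL NOT NULL,
        width_mm    REAL NOT NULL,
        height_mm   REAL NOT NULL,
        process     TEXT NOT NULL       -- machining step, e.g., 'cut', 'weld'
    )
""")
conn.execute(
    "INSERT INTO component VALUES (1, 'wall stud', 'wood', 2400, 60, 120, 'cut')"
)
# A machine controller could query exactly the parameters it needs:
row = conn.execute(
    "SELECT length_mm, process FROM component WHERE id = 1"
).fetchone()
print(row)  # (2400.0, 'cut')
```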
Software
Concerning software, practitioners see three major barriers. First, there is the difficulty of translating the traditionally used 3D models into parameters that robotic applications could use to perform the required manufacturing procedures. While practitioners stated that the design of the modules is already fully digitized, some questioned whether this translation from 3D model to parameters is even possible. Second, there are many interfaces between the software solutions currently used to manufacture the modules. There is currently no software that allows a continuous flow through production. Instead, drawings from architects in the design phase have to be transferred to other software applications for production plans that can be handled by mechanical engineers and craftsmen in the factory. Third, software that would be suitable for automated manufacturing is not designed for construction, but rather for mechanical engineering. According to one expert, software suppliers refrain from adapting their software to match industrialized construction approaches.
Regulations
Concerning regulations, practitioners claimed that the current regulatory construction framework does not favor OSC and MC. Generally, regulations differ not only between countries but even between the states of one country. For instance, in Germany, there are 16 different state building codes with varying requirements for newly built buildings. Consequently, MC manufacturers must follow the code in force in the state in which the building is erected, regardless of where the modules were produced. This means that the production line needs to retain a certain degree of flexibility to meet the different requirements of each state when operating nationwide or beyond, as the toy example below illustrates.
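A minimal sketch, assuming invented per-state requirements (these are not actual provisions of any German state building code):

```python
# Toy sketch of why regional building codes force production flexibility:
# the same module must be checked against the code of the destination
# state. The requirements below are invented, not actual state codes.

STATE_CODES = {
    "State A": {"min_ceiling_height_mm": 2400, "max_module_width_mm": 4000},
    "State B": {"min_ceiling_height_mm": 2500, "max_module_width_mm": 3500},
}

def compliant(module: dict, state: str) -> bool:
    """Check a module specification against one state's (toy) code."""
    code = STATE_CODES[state]
    return (module["ceiling_height_mm"] >= code["min_ceiling_height_mm"]
            and module["width_mm"] <= code["max_module_width_mm"])

module = {"ceiling_height_mm": 2450, "width_mm": 3800}
for state in STATE_CODES:
    print(state, "OK" if compliant(module, state) else "re-engineer")
```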
Another barrier is the lack of an appropriate definition of the construction method itself from a building law perspective. More precisely, there is currently no guide for test engineers, such as structural engineers, on how to provide the required proof of proper execution for buildings constructed using MC. As a consequence, the process of proof testing becomes more complicated, leading to delays. Lastly, there are outdated norms that hamper the automation of production. One expert stated that some norms valid in conventional construction do not match OSC procedures. More precisely, not all norms make sense in the context of MC because their requirements can be met in other ways.
Permissions
In contrast to quality management approaches known from the general manufacturing industry, the granting of permissions in OSC is still oriented towards one-of-a-kind productions (i.e., an individual building). Accordingly, inspections and testing are the same as in conventional construction, although large parts of the underlying structure do not change from project to project and have therefore already been approved. While in conventional construction thorough inspections and testing are necessary to ensure the building's safety, in industrialized construction such repeated checks might be redundant and instead interrupt the production flow. As a consequence, production capacity utilization and the respective production output are affected, which again hampers the potential for automation.
Besides the granting of permissions still being oriented towards conventional construction projects, many practitioners criticized that there are no standardized permissions. Although there is high repetition and modules are produced the same way in every batch, the underlying structure of the modules has to be approved in every project. This is not only time-consuming, but also costly for the builder who has to pay the inspection fees. Lastly, some experts claimed that there are inefficiencies concerning the authorities who grant the permissions to realize a specific construction project. Obtaining the appropriate permits still takes too long and thus delays the construction process.
Tendering and Contracting
Some experts considered current tendering approaches unfavorable for scaling up production with high levels of automation. More precisely, it was stated that many tenders are not suitable for MC because there are many requirements concerning specific parts of the building, such as individual dwelling designs that are fixed late in the process (e.g., the developer lets buyers choose the color of the tiles). E2 emphasized in this context that the first step would be to specifically tender MC, and the second to consolidate similar tenders, which would save time and costs for all involved parties.
Concerning contracting, experts criticized the still prevailing approach of the industry to focus primarily on the price of an offer, rather than placing more emphasis on other evaluation criteria in the decision-making process. E7 claimed that public contracting authorities in particular should act as role models by placing a higher emphasis on decision criteria such as working conditions, quality of the work, or level of digitization when awarding a contract. By implementing higher requirements for bidding companies, contractors would be incentivized to implement automated production systems.
Funding
The funding aspect can be divided into funding received by MC manufacturers themselves and funding received by potential customers. The latter relates to government aid provided to builders if the newly erected building meets certain requirements. For instance, in Germany, buildings with a high level of energy efficiency are incentivized with financial subsidies or low-interest loans. However, this funding is subject to short-term changes that can decrease the attractiveness of a certain building conceptualization from one day to the next. For MC manufacturers, this circumstance results in a certain degree of uncertainty regarding the requirements of the offered product. In practice, the production line would have to be adjusted to meet the new requirements.
Concerning funding directed to MC manufacturers, experts criticized that there are currently no adequate public subsidies to incentivize the implementation of automated production systems. According to some experts, public authorities would either have to offer direct financial subsidies for acquiring the respective machinery and software or have to offer better depreciation options. Besides financial aid, E1 stated that funding for consultancy on an efficient implementation of an automated system would be even more desirable. Since most practitioners have a professional background in AEC, it would be favorable to obtain guidance from professionals with extensive knowledge of automation in manufacturing.
Discussion
Implementing automation into the production processes of MC has the potential to significantly boost productivity and production outputs. However, the adoption of automated production systems in the context of prefabrication is still relatively low. To facilitate widespread adoption, it is decisive to identify and understand the underlying barriers for MC manufacturers. In the following, the key findings of this study are reflected upon and discussed under consideration of the extant literature.
As illustrated in Table 2, numerous factors aggravating the implementation of automation were identified. Based on the identified dimensions and their underlying subcategories, a comprehensive framework illustrating the barriers to automation in MC has been created (see Figure 2). Notably, in addition to the high number of individual factors (53 sub-codes), there are numerous interrelationships between the respective barriers that contribute to the complexity of the framework.
From an economic perspective, the high initial investment, one of the most frequently mentioned factors overall, can be regarded as a severe barrier to implementing automation. To replace craftsmanship operations with automated machinery, significant investments have to be made in terms of technical equipment [71], as well as training or even new personnel [65]. While this observation could also be made in other industries automating processes with advanced technologies [29], and for adopting robotics in construction in general [17], in the case of MC it is highly related to demand and production volume. Since the production output is currently considered too low, implementing an automated production system does not appear to be economically viable. Recent estimations proposed that 1000 units per year would be required to achieve the desired economies of scale with significant cost reductions [72]. In accordance, Bock and Linner [18] reported that productivity and efficiency increase significantly with a higher production output (Performance Multiplication Effect). However, sufficient yearly outputs are yet to be achieved. In addition, the results reveal that practitioners cannot currently estimate the financial benefits of implementing automation, since there is no standardized practice, which has also been reported by Chen et al. [54].
In addition, research on barriers to generally adopting MC has shown that financial barriers are highly influenced by other factors [20]. Therefore, offering adequate funding in the form of financial subsidies and knowledge consulting services to MC manufacturers may lower the economic barrier to implementing automation. Another economic aspect that needs to be considered is competition. According to the experts, the risk of committing to heavy investments is too high considering the well-functioning approach of competitors using conventional construction methods [54]. In addition, it appears that the current approach of most MC manufacturers, using low levels of automation and high levels of manual work, yields sufficiently good returns, which is why the pressure to innovate can be regarded as relatively low. This echoes findings from Davila Delgado et al. [17], who report that a low necessity to improve productivity is among the most prevailing factors limiting the adoption of robotics. The authors assumed that this lack of innovation pressure may be due to easy access to labor. However, given current developments in the construction labor market, including problems related to a shortage of skilled labor and an aging workforce [73], this situation is likely to change in the future.
Contrary to the severity of economic factors, environmental barriers are the least considered by practitioners in terms of the number of mentions. This may be because, although there are considerable amounts of energy required to operate an automated production facility, shorter production and construction times generally reduce the required energy and thereby the environmental impact of the overall project [49]. However, longer transport distances due to consolidations of production volumes in one location need to be taken into account [51].
From a socio-cultural perspective, a major barrier to implementing new technologies goes back to the prevailing culture of the construction industry, which is characterized by conservative and risk-averse thinking with a strong resistance to change [63]. While this circumstance can be regarded as a general challenge for adopting MC at all, even within MC manufacturers this cultural peculiarity poses a significant barrier. Accordingly, internal employees often view the implementation of new technologies with skepticism. One major reason for this might be the fear of being replaced by automated machinery and robotics, resulting in job losses [65]. To reduce this resistance, adequate communication and change management are required, which are currently lacking for the most part [30,50]. Naturally, as a prerequisite of applying change management, there has to be commitment from the top management [56].
Externally, the prevailing industry culture hampers the implementation of automation in multiple ways. The factor most often raised by the experts is the expectation of customers for highly individualized buildings. Due to changing product specifications, the opportunities for standardization are considered low. Since this specific factor is highly interwoven with the process barrier concerning current value creation, it has not received much attention in the extant literature as a stand-alone barrier. However, it is recommended to consider this challenge separately, as it refers to the attitude of customers that would need to change to overcome this barrier. Related findings from the literature include the lack of reference architectures [50] and the requirement of adapting business practices to meet customer expectations using prefabrication [64].
Concerning the process, the results reveal barriers in the context of the industry, production, and logistics processes. While industry and production processes both include challenges that have been mentioned by many experts in this study, problems in terms of logistics were only mentioned by a few practitioners. This may imply that automating production processes does not induce significant logistical restrictions. While there are certainly challenges comparing conventional and industrialized construction [62], barriers specifically referring to automation can mainly be limited to a higher stock of inventory and more space for finished products resulting from an increased production output.
With regard to the industry process, barriers can generally be ascribed to the fragmented industry structure [30]. Since the construction business is characterized by many interconnections and interdependencies between its stakeholders and project participants [22], the successful implementation of innovative technologies relies on collaborating partners taking the same path. However, since there are numerous small offices and medium-sized companies that are either unable or unwilling to financially commit to these innovations, the benefits of automation may not be fully exploited [51]. A countermeasure may be an increasing integration of MC manufacturers along the value chain [54]. Accordingly, by creating a continuous process that incorporates decisive tasks of the overall process, such as design and construction operations, the information exchange can be significantly improved, resulting in less iterative work and fewer reworks on site. In contrast, Davila Delgado et al. [17] reported that the fragmented industry structure cannot be regarded as a severe barrier to implementing robotics, which may be due to their wider perspective including on-site applications of robotics.
In terms of the number of mentions, low standardization in the context of production is among the most challenging factors for implementing automation. According to the experts, current production operations have a low degree of standardization due to the individuality of the ordered buildings. This is indirectly in line with findings from Pan and Pan [56], who reported that introducing product standardization could be a significant driver to integrate robotics into production. Bock and Linner [2] emphasized that the current structure of the final product (i.e., a conventional building) does not fit the production process using automation. Consequently, the product structure would have to be changed towards a robot-oriented design. Similarly, the current production sequence is aligned with conventional construction operations in a workshop-like organization, rather than in a production line, which would require significant changes to the production facility when implementing automation [18].
Lastly, the results reveal that there is a barrier concerning the depth of planning. In this context, a great challenge appears to be machine-ready planning and design by architects and engineers. Accordingly, during the planning and design phase, architects already have to be able to incorporate the requirements for building a modular rather than a conventional building [51]. In this regard, it is decisive to work towards parametric and computational designs that are transferable to manufacturing machines because, otherwise, the "translation" may turn into a bottleneck for the whole production system [68].
From a technical perspective, barriers were encountered concerning the machinery that is supposed to perform the tasks that are currently mostly done manually. Many experts voiced doubts about the feasibility of implementing robotics for assembling parts of a module. Similarly, researchers have reported the immaturity of robotics for handling non-standardized elements [56] or the immaturity of available technologies in general [17]. In line with this finding, Buchli et al. [74] reported that most automation and robotics technologies are not generally applicable, but rather domain-specific. Consequently, technologies used in other production contexts would either have to be adopted or re-engineered to fit the specific context of MC. It is therefore advisable to put these doubts to the test by creating prototype production lines [41]. Since such testing requires considerable financial resources, forming a consortium of MC manufacturers or collaborating with companies from other industries could facilitate conducting such a project.
In addition, experts drew attention to difficulties concerning the choice of the main construction material when implementing automation. Since there are doubts that an automated production line can respond efficiently to a change in main materials (wood, steel, or concrete), there is a loss of flexibility compared to manually performed operations. In this context, Bock and Linner [18] stated that the choice of material is already a restriction in terms of customer preferences. Accordingly, steel framing is mostly used for functional buildings, such as hospitals, hotels, and offices, rather than residential buildings. Since implementing automation is a long-term commitment for MC manufacturers, and a change of materials may occur over time, the compatibility of the robotics with the different materials should be verified in advance.
Concerning IT barriers, one of the most severe challenges investigated in this study is software interfaces. In particular, difficulties were observed in converting the geometric design into parametric and computational information that can be processed by automated machines. This echoes findings from Tibaut et al. [68], who investigated interoperability requirements for applying automated manufacturing systems in construction. According to the authors, the processing of geometric data using computer-aided manufacturing (CAM) software still has some limitations; the level of detail of the resulting machine-readable information is still too low for complex building production. Therefore, there is a need for integrated software solutions that streamline the generation of tasks for automated manufacturing machines and supersede the interfaces when processing geometric and parametric data. However, as also stated by one expert, suppliers of software for robotics and automation in the manufacturing environment have recently shown low interest in cooperating with firms that operate in the building industry [52].
Another barrier again refers to the fragmented supply chain structure of the industry. Accordingly, many industry partners are not able to implement state-of-the-art IT solutions, such as BIM, and therefore interrupt the digital chain [51]. Lastly, the results of this study reveal that there is an additional effort in terms of setting up, as well as maintaining, an appropriate database to implement automation [50]. Interestingly, concerns regarding data and cybersecurity driven by increased digitalization, which have been reported in other studies focusing on the implementation of advanced digital technologies [29,50], were not mentioned by the experts in this study. The reason for this may be the early stage of adoption that the participating companies are in.
With regard to regulatory barriers, the results reveal the dimensions of regulations, permissions, tendering and contracting, and funding. Concerning regulations, most experts emphasized that the lack of a uniform building code is a critical barrier to implementing automation. Due to regional differences in building codes, production has to stay flexible to be able to meet the requirements in each state. However, this is considered to come at the expense of product standardization. Consequently, authorities have to align codes and policies to ease the implementation of automation and robotics. In the extant literature, this barrier has not received much attention, which might be due to the specific regional circumstances in the context of this study.
Considering the granting of permissions, the results reveal a lack of standardized permissions. Experts in this study criticized that, although there is high repetition in their production, the required time and costs for receiving permissions are inappropriately high due to long and complicated approval procedures [16]. This is also in line with findings from Bock and Linner [18], who report that the construction method is not sufficiently defined, but rather considered "nonstandard", which brings further difficulties with it, such as complicating the granting of mortgages by financial institutions to customers. This may result in a lower attractiveness of prefabricated construction and, consequently, further compounds the problem of insufficient demand for implementing automation and robotics.
Within contracting and tendering, current approaches are still heavily directed towards the lowest price of a certain service in the context of construction, rather than placing a higher emphasis on other evaluation criteria. Public authorities may either lead by example by considering criteria such as working conditions, quality of work, and the level of digitization more thoroughly, or implement mandatory regulations [56], defining the respective requirements in terms of the aforementioned criteria. Consequently, by implementing higher requirements for bidding companies, contractors would be incentivized to implement automated production systems.
Lastly, barriers concerning governmental funding have to be considered. As already reported by other researchers [17,56], there is currently a lack of adequate governmental incentives in the form of financial support. Since implementing an automated production system requires high initial investments, as well as high operating costs, MC manufacturers have to be subsidized to facilitate the adoption of this technological innovation. In accordance, Pan and Pan [56] reported that a supportive regulatory environment including incentives is a decisive driver for adopting automation and robotics. In addition, the lack of knowledge support has to be considered. Since most practitioners in the MC business have a professional background in AEC, experts with high expertise in automated manufacturing and robotics are needed to efficiently introduce automation and robotics. Therefore, authorities may subsidize consulting services for MC manufacturers planning to implement automated production systems. Alternatively, cross-industry collaborations could be pursued.
Conclusions
As an OSC approach with a very high level of prefabrication, MC offers ideal conditions to implement manufacturing concepts that are known for fundamentally increasing productivity, such as lean production and automation. However, currently, the share of automation and robotics in the production process of MC is still relatively low. Consequently, the potential of this construction method is far from being fully exploited. Given the well-known benefits of digitizing and automating production processes, questions arise regarding why MC manufacturers have not yet implemented the respective systems and what the barriers to this implementation could be. In the extant literature, a comprehensive overview of the particular barriers is still lacking.
Therefore, this study aimed to systematically investigate the factors hampering the implementation of automation and robotics in MC. Based on a comprehensive review of the extant literature, as well as in-depth expert interviews with highly experienced practitioners, the results of this study reveal a framework of barriers constituting seven dimensions: economical, ecological, social, process-related, technical, IT-related, and regulatory barriers.
From a theoretical lens, this study generally adds to the understanding of the underlying barriers to implementing automation in MC. Considering the developed framework, researchers are provided with plenty of opportunities for future research. For instance, future studies may investigate the identified factors quantitatively, measuring the severity of each barrier to determine which factor should be tackled first or with the most resources. Similarly, the study reveals several interrelationships between the respective barriers (such as funding influencing the economic attractiveness of the implementation). Future studies may investigate the interaction between the respective barriers by applying appropriate research methods, such as multi-criteria decision-making analysis approaches.
From the perspective of practitioners, the results include multiple recommendations for action to efficiently lower the barriers to implementing automation and robotics. Generally, the developed framework can be used as a guideline for decision makers planning to implement the required measures for automating their production. The study thereby paves the way to an increased level of digitization and automation in the construction industry, which is likely to resolve the long-lasting problem of stagnating productivity. While the results provide multiple practical implications, three applicational contributions should be stressed in particular.
First, the study emphasizes the need for MC manufacturers to integrate along the value chain to create a continuous process, lowering the dependencies on project participants that are not capable of or willing to innovate operating procedures. Second, low standardization and individual customer requests were identified as major barriers to implementing automated production. While the latter can only partially be influenced, an increase in standardized production is a necessary condition for introducing economically viable automation. MC manufacturers are therefore advised to reconsider the general product structure of the modules. While current designs are based on conventional construction operations, a design approach is needed that enables high levels of robotic application (robot-oriented design). Third, the results reveal that many practitioners even questioned the technical feasibility of implementing automation in MC production processes, highlighting the severity of this barrier. It is therefore indispensable to engage in close collaboration with either other MC manufacturers or firms from other industries with comparable production processes to enable testing and the creation of small-scale prototypes.
Naturally, this study is not without limitations. As already indirectly mentioned above, since this study is of an explorative and qualitative nature, the identified severity of the barriers, as well as the corresponding interrelationships between the factors, can only partly be assessed. Consequently, quantitative research approaches are needed to deepen the findings of this study. In addition, although countermeasures and recommendations for actions are discussed in this study, future research may investigate possible solutions more thoroughly in a practical context using in-depth case studies. Lastly, expert interviews are limited to eight representatives of companies from German-speaking regions. While the low number of participants can be ascribed to the low number of MC manufacturers in the countries under study in general, some results may be bound to specific regional circumstances. Future studies should, therefore, strive to verify the findings of this study in other regional settings.
Funding: I acknowledge financial support by Deutsche Forschungsgemeinschaft and Friedrich-Alexander-Universität Erlangen-Nürnberg within the funding programme "Open Access Publication Funding".
"Engineering",
"Environmental Science"
] |
Raddeanin A suppresses breast cancer-associated osteolysis through inhibiting osteoclasts and breast cancer cells
Bone metastasis is a severe complication of advanced breast cancer, resulting in osteolysis and increased mortality in patients. Raddeanin A (RA), isolated from traditional Chinese herbs, is an oleanane-type triterpenoid saponin with anticancer potential. In this study, we investigated the effects of RA in breast cancer-induced osteolysis and elucidated the possible mechanisms involved in this process. We first verified that RA could suppress osteoclast formation and bone resorption in vitro. Next, we confirmed that RA suppressed Ti-particle-induced osteolysis in a mouse calvarial model, possibly through inhibition of the SRC/AKT signaling pathway. A breast cancer-induced osteolysis mouse model further revealed the positive protective effects of RA by micro-computed tomography and histology. Finally, we demonstrated that RA inhibited invasion and AKT/mammalian target of rapamycin signaling and induced apoptosis in MDA-MB-231 cells. These results indicate that RA is an effective inhibitor of breast cancer-induced osteolysis.
Introduction
Anemone raddeana Regel has been widely used to treat cancer, rheumatism, and neuralgia [1][2][3] . This traditional Chinese medicinal herb belongs to the Ranunculaceae family and exhibits antitumor efficacy, anti-inflammatory efficacy, and analgesic activity 4 . Raddeanin A (RA), an oleanane-type triterpenoid saponin, has been shown to be the main bioactive constituent of Anemone raddeana Regel [4][5][6] . Recent studies have demonstrated that RA can prevent proliferation, induce apoptosis, and inhibit invasion in various human tumor cells, including gastric cancer cells, hepatocellular carcinoma cells, and nonsmall-cell lung carcinoma cells [6][7][8] . The mechanisms through which RA exerts these effects may be attributed to its ability to inhibit angiogenesis by preventing the phosphorylation of vascular endothelial growth factor receptor 2 and associated protein kinases, including phospholipase C γ1, Janus kinase 2, focal adhesion kinase, Src, and AKT 9 . Further research has indicated that RA can also induce apoptosis and autophagy in SGC-7901 cells 10 . Therefore, RA may be a promising agent with broad antitumor effects.
Breast cancer is the most common cancer in women worldwide and is related to a high frequency of bone metastasis. A previous report demonstrated that bone metastasis occurs in 70% of patients who died from prostate cancer or breast cancer 11 . The mechanism of bone metastasis, sometimes referred to as the "vicious cycle," is complex and involves interactions among metastatic breast cancer cells, osteoblasts, and osteoclasts 12,13 . It is believed that inflammatory cytokines and parathyroid hormone-related protein secreted by breast cancer cells can stimulate osteoblasts to produce receptor activator of nuclear factor-κB (NF-κB) ligand (RANKL) and further enhance osteoclast differentiation and bone resorption 12,14 . In turn, a number of factors with potential chemoattractive properties are released that stimulate breast cancer cell proliferation and migration 15 . Bisphosphonates and denosumab have been shown to slow down the progression of breast cancer-induced osteolysis 16,17 . However, due to adverse events, such as osteonecrosis of the jaw, toothache, and hypocalcemia, and because antiresorptive treatment is only palliative, novel therapies for breast cancer-induced osteolysis should be considered.
The aim of this study was to assess the effects of RA on osteoclasts, osteoblasts, and MDA-MB-231 breast cancer cells. Subsequently, we evaluated the effects of RA in mouse models of Ti-particle-induced calvarial osteolysis and breast cancer-induced osteolysis. The related molecular mechanisms were further determined.
RA inhibited RANKL-induced osteoclast formation in vitro
To explore the effect of RA on RANKL-induced osteoclast differentiation, bone marrow-derived macrophages (BMMs) were treated with 0, 0.2, 0.4, and 0.8 µM RA in the presence of macrophage colony-stimulating factor (M-CSF) and RANKL. RANKL differentiated BMMs into mature tartrate-resistant acid phosphatase (TRAP)-positive multinucleated osteoclasts, but RA inhibited the formation of TRAP-positive multinucleated osteoclasts in a concentration-dependent manner (Fig. 1a, b). We further treated BMMs with 0.4 µM RA for 3, 5, and 7 days. As shown in Fig. 1c, RA significantly suppressed osteoclast formation at day 7. The number of dead osteoclasts was also counted, and osteoclast apoptosis increased with increasing RA doses (Supplementary 1A, B). Cytotoxicity assays on BMMs revealed a slight cytotoxic effect at a dose of 0.391 µM and no significant inhibitory effects at doses below 0.195 µM (Fig. 1e). Collectively, this evidence suggested that RA prevented RANKL-induced osteoclast formation in vitro.
RA suppressed RANKL-induced osteoclast-related gene expression in vitro
To confirm the inhibitory potential of RA on RANKL-induced osteoclast differentiation, we examined osteoclast-related genes, including TRAP (ACP5), cathepsin K (CTSK), calcitonin receptor (CTR), V-ATPase-a3, V-ATPase-d2, and nuclear factor of activated T cells 1 (NFATc1). Compared to the control group treated with RANKL, the expression of CTSK and NFATc1 was dramatically suppressed with the addition of RA (Fig. 2a-f). The protein expression levels of CTSK and NFATc1 were also decreased in the RA treatment group (Fig. 2g). These data confirmed that RA inhibited the expression of osteoclast-related genes.
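As an illustration of how such relative expression values are commonly derived, the sketch below applies the standard 2^-ΔΔCt calculation, assuming qRT-PCR against a housekeeping reference gene was used; all Ct values are invented for illustration.

```python
# Hedged sketch of relative gene expression via the 2^-ΔΔCt method,
# assuming qRT-PCR with a housekeeping reference gene (e.g., GAPDH).
# All Ct values below are invented for illustration.

def relative_expression(ct_gene: float, ct_ref: float,
                        ct_gene_ctrl: float, ct_ref_ctrl: float) -> float:
    """Fold change of a target gene versus the untreated control."""
    delta_ct_treated = ct_gene - ct_ref
    delta_ct_control = ct_gene_ctrl - ct_ref_ctrl
    return 2 ** -(delta_ct_treated - delta_ct_control)

# Example: CTSK in RA-treated vs. RANKL-only cells (hypothetical Ct values).
fold = relative_expression(ct_gene=26.5, ct_ref=18.0,
                           ct_gene_ctrl=23.0, ct_ref_ctrl=18.1)
print(f"CTSK fold change under RA: {fold:.2f}")  # < 1 indicates suppression
```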
RA inhibited osteoclastic bone resorption in a concentration-dependent manner
We performed a pit formation assay to investigate the effect of RA on osteoclastic bone resorption activity. BMMs without RA treatment obviously resorbed the bone surface (Fig. 3a), while the RA treatment groups showed fewer resorption pits, and almost no resorption pits were observed in the 0.8 µM RA group. The average resorption areas in the respective groups were 43%, 18%, 7%, and 1% (Fig. 3b). These results suggested that RA inhibited osteoclastic bone resorption in vitro.
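Quantification of this kind typically reduces to measuring the pit fraction of a segmented image of the bone surface. A minimal sketch, assuming the pits have already been segmented into a binary mask, is given below with toy data.

```python
import numpy as np

# Minimal sketch of quantifying resorption-pit area from a binary mask,
# assuming pits have already been segmented from bone-slice images.

def resorption_percentage(pit_mask: np.ndarray) -> float:
    """Percent of the imaged bone surface covered by resorption pits."""
    return 100.0 * pit_mask.sum() / pit_mask.size

# Toy example: a 100x100 surface with a 20x20 resorbed patch (4% of area).
mask = np.zeros((100, 100), dtype=bool)
mask[10:30, 10:30] = True
print(f"Resorbed area: {resorption_percentage(mask):.1f}%")
```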
RA did not inhibit osteoblast differentiation or osteoblast-related gene expression in vitro
No inhibitory effect was observed on the survival of MC3T3-E1 cells at doses below 0.781 µM (Supplementary 2A). To determine the role of RA in osteoblast differentiation, we further cultured MC3T3-E1 cells and analyzed alkaline phosphatase (ALP) activity at day 7. No significant difference in ALP activity was detected between the control and the 0.2, 0.4, and 0.8 µM RA treatment groups (Supplementary 2B, D). We also evaluated osteoblastic mineralization with Alizarin red staining at day 21 and found that a larger total mineralized area was observed in the 0.2 µM RA group than in the control group (Supplementary 2C, E). Although no significant difference between the RA treatment and control groups was observed in the mRNA expression of osteoblast-specific genes at day 7, secreted protein acidic and rich in cysteine (SPARC) was significantly increased after 14 days of treatment with RA (Supplementary 2F). These results suggested that RA, at the least, had no inhibitory effect on osteoblast differentiation.
RA suppressed Ti-particle-induced osteolysis in vivo
Since RA could inhibit osteoclastic bone resorption in vitro, we further explored its effect on Ti-particle-induced osteolysis in a mouse calvarial model. Micro-computed tomography (micro-CT) revealed massive surface erosion in the vehicle group. In contrast, treatment with a low or high concentration of RA significantly reduced Ti-particle-induced osteolysis (Fig. 4a). We then measured and calculated the ratio of bone volume to total volume (BV/TV), as well as the percentage of total porosity, in the region of interest from three-dimensional (3D) reconstruction images. Compared with the vehicle group, treatment with a low or high concentration of RA significantly increased the BV/TV and decreased the percentage of porosity (Fig. 4b).
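As a minimal sketch of how BV/TV and total porosity can be derived from a binarized micro-CT volume of the region of interest (thresholding and segmentation assumed done beforehand; toy data only):

```python
import numpy as np

# Hedged sketch of BV/TV and total porosity from a binarized micro-CT
# volume of the region of interest (True = bone voxel). Toy data only.

def bv_tv(volume: np.ndarray) -> float:
    """Bone volume fraction: bone voxels over total voxels in the ROI."""
    return volume.sum() / volume.size

def total_porosity(volume: np.ndarray) -> float:
    """Percent of ROI voxels that are not bone."""
    return 100.0 * (1.0 - bv_tv(volume))

roi = np.random.default_rng(0).random((64, 64, 64)) > 0.4  # ~60% "bone"
print(f"BV/TV = {bv_tv(roi):.2f}, porosity = {total_porosity(roi):.1f}%")
```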
Meanwhile, TRAP staining indicated that the number of multinucleated osteoclasts (arrows) lining the eroded bone surface was increased in the vehicle group, but was significantly decreased after low- or high-dose RA treatment (Fig. 4c, d). CTSK immunohistochemical staining showed similar trends (Fig. 4e, f). These results illustrated that RA also inhibited osteoclast formation and function in vivo.
RA inhibited SRC/AKT signaling during osteoclastogenesis
We next focused on elucidating the potential mechanism by which RA inhibits osteoclast formation and function. RAW264.7 cells were cultured with RANKL for different time periods to investigate the mitogen-activated protein kinase (MAPK), NF-κB, and SRC/AKT signaling pathways. We found that the RANKL-induced phosphorylation of AKT was significantly inhibited by RA at 10 and 30 min (Fig. 5a). This inhibitory effect could be partly rescued by the AKT activator SC79. Moreover, SRC expression increased from day 3 after RANKL stimulation, but the addition of RA significantly inhibited this trend at the same time point (Fig. 5b). SC79 could reverse the RA-related decrease in SRC. However, RA did not show any suppressive effect on the RANKL-induced phosphorylation of c-Jun N-terminal kinase (JNK), p38, or extracellular signal-regulated kinase (ERK), or on the degradation of IκBα (Fig. 5c). These results revealed that RA specifically inhibited the SRC/AKT signaling pathway during osteoclastogenesis without affecting the MAPK or NF-κB signaling pathways.
RA inhibits breast cancer-associated osteolysis in vivo
To determine whether RA suppressed breast cancer-associated osteolysis, MDA-MB-231 cells were injected into the tibial plateau of mice, which were then treated with phosphate-buffered saline (PBS) or RA (100 µg/kg) for 28 days. Micro-CT and histology were performed to assess osteolytic bone metastasis. Compared with the RA treatment group, trabecular bone loss in the mouse tibiae was more remarkable in the vehicle group (Fig. 6a).
Quantitative analysis revealed that the RA treatment group had significantly higher BV/TV ratios and smaller trabecular separation (Tb. Sp) compared to the vehicle group (Fig. 6b). Histologically, extensive trabecular bone resorption and disrupted cortical bone could be observed in the vehicle group, while the bone cortex remained intact in the RA treatment group (Fig. 6c). The transferase-mediated dUTP nick end labeling (TUNEL) assay further revealed that the degree of apoptosis was significantly increased in the RA treatment group compared to the vehicle group (Fig. 6d). All of the above data suggested that RA could inhibit breast cancer-induced osteolytic lesions. (Fig. 4 legend fragment: b the BV/TV and percentage of total porosity of each group were measured; c, d TRAP staining was used to assess RA prevention of titanium-particle-induced murine calvarial osteolysis, with TRAP-positive osteoclasts indicated by black arrows and the number of TRAP-positive cells per field determined; e, f CTSK staining was used likewise, with the number of CTSK-positive cells per field calculated; magnifications: ×100; *p < 0.05, **p < 0.01.)
RA inhibits growth and invasion of breast cancer cells in vitro through promotion of apoptosis and inhibition of AKT/mTOR signaling
We further explored the mechanism by which RA regulates the growth of breast cancer cells. Cell Counting Kit-8 (CCK-8) assays were performed on MDA-MB-231 cells after 48 and 96 h of culture, and RA treatment significantly decreased cell numbers at doses higher than 6.25 μM (Fig. 7a). Next, we used ethynyl-2-deoxyuridine (EdU) incorporation assays to determine the effect of RA on proliferation. After RA treatment for 24 h, the proliferation of MDA-MB-231 cells showed a significant decrease at both the 6.25 and 12.5 μM doses (Fig. 7b, c). Flow cytometric analysis revealed that RA could increase the percentage of apoptotic cells (Fig. 7d, e). We used the transwell assay to examine the effect of RA on cell invasion. Our results revealed that RA significantly reduced the invasion of MDA-MB-231 cells in a concentration-dependent manner (Fig. 7f, g). We also used another breast cancer cell line, BCAP37, and obtained similar results (Supplementary 3). Furthermore, MDA-MB-231 cells were cultured with RA for 0, 6, and 12 h to investigate the AKT/mTOR signaling pathway. Both AKT phosphorylation and mTOR expression were significantly downregulated by treatment with RA (3 µM), indicating an inhibitory effect of RA on AKT/mTOR signaling (Fig. 7h). These data suggested that RA could suppress the growth and invasion of breast cancer cells in vitro through inhibition of AKT/mTOR signaling.
Discussion
One of the major causes of cancer-associated death among women is breast cancer, and bone is the major site of metastasis in invasive breast cancer 18 . The mechanisms underlying bone metastasis in breast cancer are still unclear; however, the concept of the "vicious cycle" of bone breakdown and tumor invasion has been widely accepted 19 . It is believed that pro-osteoclastic factors released by tumor cells stimulate osteoclastogenesis, whereas pro-tumorigenic growth factors secreted from the bone matrix promote tumor expansion [12][13][14][15] . Currently, no available treatment is sufficient to treat bone metastasis and the resulting osteolysis 20 . RA is one candidate compound; it is derived from Anemone raddeana Regel and has been demonstrated to suppress the growth of gastric and colorectal tumors 6,9 . Our results revealed that RA exerts inhibitory effects on breast cancer-associated osteolysis through suppression of osteoclasts and breast cancer cells. The possible mechanisms might be that RA inhibits the SRC/AKT signaling pathway in osteoclasts as well as AKT/mTOR signaling in breast cancer cells.
Our study provided evidence for the effects of RA on RANKL-induced osteoclastogenesis. Different doses of RA were used in our experiments, and the number of TRAP-positive multinuclear osteoclasts was significantly decreased after RA exposure. The levels of osteoclast phenotypic markers, including CTSK and NFATc1, were also downregulated following the addition of RA. Furthermore, the results of the bone resorption assays indicated that the area of bone resorption pits was significantly reduced by treatment with RA. The effects of RA on Ti-particle-induced osteolysis were further explored in a murine calvarial model. Micro-CT assessments demonstrated that Ti-particle-induced osteolysis was markedly inhibited in the RA treatment group compared with the control group.
To elucidate the molecular mechanisms underlying the above results, we first investigated the effects of RA on RANKL-initiated signaling pathways, because RANKL has been shown to be a key regulator of osteoclast activation by breast cancer cells 12,14,21 . RANKL-induced signaling pathways include the MAPK, NF-κB, and SRC/AKT pathways, which play a pivotal role in osteoclast differentiation and function [22][23][24] . (Fig. 5 legend fragment: b Western blotting for SRC was analyzed with the cell lysates; c RAW264.7 cells were treated with or without RA (0.8 μM) plus RANKL (50 ng/mL) for 0, 10, or 30 min, and Western blotting for the MAPK and IκBα signaling pathways was analyzed with the cell lysates.) A significant outcome of our study was that the RANKL-related SRC expression in osteoclasts was significantly downregulated after treatment with RA. Previous studies have shown that SRC is essential for the normal function of osteoclasts. Inhibition of SRC suppresses osteoclastogenesis and the formation of resorption pits 25 , which is consistent with our results. Although osteoclast numbers increased compared with those in wild-type mice, Src −/− mice developed osteopetrosis, suggesting a vital role of SRC in osteoclast function rather than differentiation 26 . AKT is a downstream target of SRC in response to RANKL 27 . TNF receptor-associated factor 6 (TRAF6) is recruited upon the activation of RANK by RANKL, which also leads to a complex of c-Src and TRAF6 and ultimately the activation of phosphoinositide 3-kinase (PI3K) and AKT 28,29 . Specifically, the expression of Src251, which lacks the entire kinase domain, inhibits AKT activity and osteoclast survival in transgenic mice 30 . In our study, decreased AKT phosphorylation was observed following the addition of RA, consistent with the above reports. We also found that the addition of the AKT activator SC79 could rescue the inhibitory effect of RA on AKT phosphorylation and SRC expression. Because the MAPK and NF-κB signaling pathways were not affected by RA, it is tempting to speculate that RA may inhibit the formation and function of osteoclasts through downregulation of the SRC/AKT signaling pathway, which may explain why osteolysis was reduced in the RA group.
We then investigated the effects of RA on osteolysis using a breast cancer-associated osteolysis mouse model. Our results revealed that RA reversed the severe osteolysis caused by MDA-MB-231 cells. Moreover, RA significantly increased BV/TV ratios and decreased trabecular separation compared with the vehicle group, in accordance with the results of histological analysis. In TUNEL assays, higher levels of apoptosis were detected in the RA treatment group than in the vehicle group.
Based on the above results, we further explored the direct effects of RA on MDA-MB-231 breast cancer cells. RA inhibited the survival, proliferation, and invasion of MDA-MB-231 cells. Flow cytometric analysis revealed that apoptosis rates in MDA-MB-231 cells increased significantly upon RA treatment, in accordance with the results of the TUNEL analysis. The mechanism may involve inhibition of AKT phosphorylation and mTOR expression. The PI3K/AKT/mTOR pathway is believed to be the main signaling pathway regulating cell proliferation, survival, metabolism, and angiogenesis [31][32][33][34] . Hyperactivation of the PI3K/AKT/mTOR pathway is frequently observed in breast cancer and is often associated with resistance to both anti-ERBB2-targeted and endocrine therapies 35 . Various PI3K/AKT/mTOR inhibitors have been identified as promising antitumor drugs in advanced breast cancer. Everolimus, an inhibitor of mTOR, was found to increase progression-free survival among patients in a phase 3 randomized trial 36 . Therefore, suppression of AKT activation and mTOR expression likely mediated the inhibitory effects of RA on breast cancer cell-associated osteolysis.
Another interesting finding of our study was that RA tended to promote osteoblast differentiation and osteoblast-related gene expression in vitro. This is the first study reporting the potential effects of RA on osteoblast differentiation; however, further studies are required to confirm this effect and its underlying mechanism.
In conclusion, RA exerted protective effects against breast cancer-associated osteolysis by decreasing osteoclast formation and resorption and by suppressing tumor cell proliferation and invasion. Analysis of the mechanisms involved in this process showed that RA inhibited SRC/AKT signaling in osteoclasts and AKT/mTOR signaling in MDA-MB-231 cells. Therefore, RA may serve as a potential therapeutic agent for the treatment of breast cancer-associated bone diseases in the future.
Cell culture
BMMs were isolated from the femoral and tibial bone marrow of 6-week-old female C57BL/6 mice and incubated in α-MEM containing 10% FBS, 100 U/mL penicillin/streptomycin, and 30 ng/mL M-CSF in a T75 flask in a 5% CO₂ atmosphere at 37 °C until they reached 90% confluence. BMMs were then transferred to a 96-well plate at a density of 8 × 10³ cells per well and incubated for further differentiation. The murine RAW264.7 and MC3T3-E1 cell lines were obtained from the American Type Culture Collection. Human breast cancer cell lines (MDA-MB-231 and BCAP37) were gifts from Dr. Linbo Wang (Sir Run Run Shaw Hospital, Zhejiang University). They were cultured in DMEM supplemented with 10% FBS and antibiotics under the conditions mentioned above. Cell culture media were replaced every 2 days.
Cell viability assay
BMMs (8 × 10³ cells per well) were seeded into a 96-well plate; adherent cells were treated with various concentrations of RA in α-MEM containing 10% FBS and 30 ng/mL M-CSF for 48, 72, or 96 h. MC3T3-E1 cells (5 × 10³ cells per well) were seeded into a 96-well plate with DMEM containing 10% FBS and treated with the indicated concentrations of RA for 48 or 96 h. MDA-MB-231 cells (5 × 10³ cells per well) were seeded into a 96-well plate with DMEM containing 10% FBS and various concentrations of RA for 48 or 96 h. The culture medium was replaced every second day. The cytotoxic effect of RA on BMMs, MC3T3-E1, and MDA-MB-231 cells was assessed with a CCK-8 assay: 10 μL of CCK-8 buffer was added to each well, and plates were incubated for an additional 2 h. The absorbance was measured at 450 nm (650 nm reference) using an ELX800 microplate reader (Bio-Tek, USA).
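For orientation, CCK-8 viability is typically expressed relative to untreated controls after blank subtraction. The following minimal Python sketch illustrates that calculation; the function name and the absorbance values are illustrative, not taken from the study.

```python
import numpy as np

def percent_viability(od_treated, od_control, od_blank):
    """Blank-corrected viability relative to untreated controls (CCK-8).

    Each argument is a list of A450 - A650 readings from replicate wells.
    """
    treated = np.asarray(od_treated) - np.mean(od_blank)
    control = np.mean(od_control) - np.mean(od_blank)
    return 100.0 * treated / control

# Hypothetical absorbances for three RA-treated wells vs. controls:
print(percent_viability([0.82, 0.79, 0.85], [1.10, 1.05, 1.08], [0.09, 0.10]))
```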
Bone resorption assay
BMMs (2 × 10⁴ cells per well) were seeded on bovine bone slices in 96-well plates for 24 h and then stimulated with 0, 0.2, 0.4, or 0.8 µM RA in the presence of M-CSF (30 ng/mL) and RANKL (50 ng/mL) for another 3 days. Cells were then fixed with 2.5% glutaraldehyde. Bone slices were visualized under a scanning electron microscope (SEM, FEI Quanta 250; FEI, Hillsboro, OR, USA), and the resorption areas were quantified with ImageJ software (NIH).
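The pit areas were quantified in ImageJ; as a sketch of an equivalent automated pipeline, the Python fragment below thresholds a grayscale SEM image and reports the fraction of the slice occupied by (darker) resorption pits. The Otsu threshold, the speck-size cutoff, and the file name are our assumptions, not details from the study.

```python
from skimage import io, filters, morphology

def resorption_area_fraction(image_path):
    """Fraction of a bone-slice SEM image occupied by resorption pits.

    Assumes pits appear darker than the intact bone surface.
    """
    img = io.imread(image_path, as_gray=True)
    pits = img < filters.threshold_otsu(img)                    # dark regions
    pits = morphology.remove_small_objects(pits, min_size=64)   # drop specks
    return pits.sum() / pits.size

# fraction = resorption_area_fraction("bone_slice_RA_0.8uM.tif")  # hypothetical file
```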
TRAP staining
BMMs were seeded into a 96-well plate at a density of 8 × 10³ cells per well and treated with 0, 0.2, 0.4, or 0.8 μM RA in the presence of 30 ng/mL M-CSF and 50 ng/mL RANKL. The culture medium was replaced every second day until mature osteoclasts were formed. Then, the cells were washed twice with PBS, fixed with 4% paraformaldehyde for 30 min, and stained for TRAP. TRAP-positive cells with five or more nuclei were counted under a light microscope. To assess osteoclast survival, osteoclast ghosts were identified as dead osteoclasts, and the total number in each well was counted 37 .
ALP and Alizarin red staining
MC3T3-E1 cells were seeded into a 12-well plate and incubated with 0, 0.2, 0.4, or 0.8 µM RA in osteogenic medium (1 mM β-glycerophosphate and 5 μM L-ascorbic acid 2-phosphate). On day 7, ALP staining was performed, and the area of positive cells was determined with ImageJ software (NIH). For Alizarin red staining, on day 21, cells were washed twice with PBS, fixed with 4% paraformaldehyde for 30 min, and stained with Alizarin red solution for 10 min at 4 °C. The area of Alizarin red S-stained mineralization nodules was also calculated with ImageJ software (NIH).
EdU incorporation assay
Cell proliferation was evaluated with a Click-iT EdU Cell Proliferation Kit (KeyGEN, Nanjing, China) following the manufacturer's instructions. Breast cancer cells were pretreated with 0, 6.25, or 12.5 µM RA for 24 h. The cells were then incubated with 25 µM EdU for 2 h. Subsequently, the cells were fixed for 20 min with 4% paraformaldehyde. After permeabilization with 0.5% Triton X-100, the cells were incubated with 1× Click-iT EdU reaction cocktail for 30 min. Then, the cells were exposed to 1× Hoechst 33342 solution for 30 min. The cells were washed and observed under a fluorescence microscope.
Flow cytometric analysis
Breast cancer cells were treated with 0, 6.25, 12.5, or 18.75 µM RA in the medium described above for 24 h. Afterwards, the cells were washed twice with PBS and resuspended in binding buffer. The cells were then stained with Annexin V and propidium iodide for 15 min at room temperature in the dark. Flow cytometric analyses were carried out using a flow cytometer, and the data were analyzed with CellQuest software, version 3.0 (BD Biosciences, Sunnyvale, CA, USA).
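CellQuest reports quadrant statistics directly; for readers who prefer to see the arithmetic, the sketch below computes early and late apoptosis rates from exported per-event intensities under assumed gates. The gate values and array names are hypothetical, not settings from the study.

```python
import numpy as np

def apoptosis_rates(annexin, pi, annexin_gate=100.0, pi_gate=50.0):
    """Early (Annexin V+/PI-) and late (Annexin V+/PI+) apoptosis, in %.

    annexin, pi: per-event fluorescence intensities from the cytometer export.
    Gates would normally be set against unstained/single-stained controls.
    """
    annexin, pi = np.asarray(annexin), np.asarray(pi)
    early = np.mean((annexin > annexin_gate) & (pi <= pi_gate)) * 100.0
    late = np.mean((annexin > annexin_gate) & (pi > pi_gate)) * 100.0
    return early, late
```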
Transwell invasion assay
A 24-well invasion chamber system (Corning Inc., New York, NY, USA) was used to evaluate the effect of RA on invasion. Cells were seeded in the upper chamber at a density of 5 × 10⁴ cells in 200 µL of serum-free medium containing different concentrations of RA (0, 6.25, 12.5, or 25 µM). The lower chamber was filled with 500 µL of medium containing 10% fetal bovine serum. The plates were incubated for 24 h at 37 °C. Then, the cells were fixed with methanol and stained with Trypan blue. Cotton swabs were used to remove the non-migrating cells on the upper side. The number of migrating cells was determined by counting one randomly selected field per well.
Micro-CT assessment
The fixed calvariae and tibiae were analyzed with a micro-CT scanner (Skyscan 1072; Skyscan, Aartselaar, Belgium). The scanning protocol was set at an isometric resolution of 9 µm, with X-ray energy settings of 80 kV and 800 μA. 3D images were reconstructed using Cone Beam Reconstruction software (SkyScan). BV, bone mineral density, BV/TV, mean trabecular number, and mean trabecular separation were recorded with the resident reconstruction program (Skyscan).
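The trabecular indices were computed by the scanner's resident software; for orientation, BV/TV reduces to a voxel-count ratio once the reconstructed stack has been thresholded into bone and non-bone, as in this minimal sketch (the array names are ours).

```python
import numpy as np

def bv_tv(bone_voxels, roi_mask):
    """BV/TV from a thresholded micro-CT stack.

    bone_voxels: 3D boolean array, True where a voxel is classified as bone.
    roi_mask:    3D boolean array marking the volume of interest (TV).
    """
    bv = np.logical_and(bone_voxels, roi_mask).sum()  # bone volume, in voxels
    tv = roi_mask.sum()                               # total volume, in voxels
    return bv / tv
```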
Histological analysis
After micro-CT analysis, the calvariae and tibiae were decalcified in 10% EDTA for 3 weeks, followed by paraffin embedding. Hematoxylin and eosin, TRAP, and CTSK staining were performed, after which specimens were examined and photographed under a high-quality microscope. The numbers of TRAP-positive and CTSK-positive multinucleated osteoclasts were counted.
TUNEL assay
Tumor tissues were decalcified in 10% EDTA for 3 weeks, and embedded in paraffin. TUNEL assay was performed with an In Situ Cell Death Detection Kit (Roche Applied Science, Indianapolis, IN, USA) according to the manufacturer's instructions.
Western blotting
Cells were lysed with RIPA buffer (Beyotime, Shanghai, China), the lysate was centrifuged at 12,000 rpm for 10 min, and the protein in the supernatants was collected and quantified. Each protein lysate (30 µg) was resolved by sodium dodecyl sulfate-polyacrylamide gel electrophoresis and transferred to a polyvinylidene difluoride membrane (Millipore, Bedford, MA, USA). Following transfer, membranes were blocked with 5% skim milk for 2 h, probed with primary antibodies at 4 °C overnight, and incubated with appropriate secondary antibodies. Antibody reactivity was detected with an Odyssey V3.0 imaging system (Li-COR Inc., Lincoln, NE, USA).
RNA isolation and real-time PCR analysis
BMMs were cultured in 6-well plates at a density of 2 × 10⁵ cells per well and treated with 30 ng/mL M-CSF, 50 ng/mL RANKL, and 0, 0.2, 0.4, or 0.8 µM RA for 5 days. MC3T3-E1 cells were cultured in osteogenic medium at the same density with the concentrations of RA indicated above for 7 or 14 days. Total RNA was extracted using the RNeasy Mini Kit (Qiagen, Valencia, CA, USA). RT-PCR was performed using a SYBR Premix Ex Taq Kit (TaKaRa Biotechnology, Otsu, Japan) and an ABI 7500 Sequence Detection System (Applied Biosystems, Foster City, CA, USA). The following cycling conditions were used: denaturation at 95 °C for 10 min, followed by 40 cycles of 95 °C for 10 s and 60 °C for 34 s. The quantity of each target was normalized to GAPDH.
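The paper does not name the normalization method; the standard choice for this design is the 2^-ΔΔCt method, sketched below with illustrative Ct values and assuming roughly 100% amplification efficiency for both primer pairs.

```python
import numpy as np

def fold_change_ddct(ct_target, ct_gapdh, ct_target_ctrl, ct_gapdh_ctrl):
    """Relative expression by the 2^-ddCt method, normalized to GAPDH."""
    d_ct = np.asarray(ct_target, float) - np.asarray(ct_gapdh, float)
    d_ct_ctrl = np.mean(np.asarray(ct_target_ctrl, float)
                        - np.asarray(ct_gapdh_ctrl, float))
    return 2.0 ** -(d_ct - d_ct_ctrl)

# e.g. CTSK in RA-treated vs. untreated BMMs (illustrative Ct values):
print(fold_change_ddct([26.1, 26.3], [18.0, 18.1], [24.0, 24.2], [18.0, 17.9]))
```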
Ti-particle-induced calvarial osteolysis mice model
A mouse calvarial osteolysis model was established using 8-week-old male C57BL/6 mice. After anesthesia, 30 mg of Ti particles was embedded under the periosteum at the middle suture of the calvaria in the Ti, low-RA, and high-RA groups. In the sham group, the incision was closed without further intervention. Mice in the low- and high-RA groups were injected with RA at 50 or 100 µg/kg per day, respectively, while mice in the sham and Ti groups received PBS. After 14 days, the mice were sacrificed, and the calvariae were collected for micro-CT assessment and histological analysis.
Breast cancer-induced osteolysis model
The model of human breast cancer bone metastasis was established by injecting MDA-MB-231 cells (1 × 10⁶/mL) into the tibial plateau of 5-week-old BALB/c nu/nu female mice. The mice were then randomly assigned to two groups and treated with PBS (n = 6) or RA (100 µg/kg body weight in vehicle, n = 6) by intraperitoneal injection every other day for 28 days before being sacrificed. The tibiae were scanned by micro-CT and processed for histological and immunohistochemical analysis.
Statistical analysis
SPSS 20.0 software was used to analyze the data, which are expressed as the mean ± SD. Groups were compared using Student's t test. Results with values of P < 0.05 were considered statistically significant.
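As a sketch of the analysis described above (means ± SD compared by Student's t test at α = 0.05), the following Python fragment reproduces the workflow with SciPy; the sample values are illustrative only.

```python
import numpy as np
from scipy import stats

def compare_groups(a, b, alpha=0.05):
    """Two-sample Student's t test with mean ± SD summaries."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    t, p = stats.ttest_ind(a, b)  # equal-variance Student's t test
    for name, x in (("group A", a), ("group B", b)):
        print(f"{name}: mean = {x.mean():.2f}, SD = {x.std(ddof=1):.2f}")
    print(f"t = {t:.3f}, P = {p:.4f}, significant at {alpha}: {p < alpha}")

compare_groups([61.2, 58.9, 63.4, 60.1], [42.3, 45.8, 40.9, 44.1])  # illustrative
```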
Ethical statement
All animal experiments were performed in accordance with guidelines for animal treatment of Sir Run Run Shaw Hospital. All experimental protocols in our study were approved by the Ethics Committee of Sir Run Run Shaw Hospital.
Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Passive Ventilation for Indoor Comfort: A Comparison of Results from Monitoring and Simulation for a Historical Building in a Temperate Climate
When environmental sustainability is a key feature of an intervention on a building, the design must guarantee minimal impact and damage to the environment. The last ten years have seen a steady increase in the installation of highly efficient systems for winter heating, but this trend has not been mirrored for summer cooling systems. Passive ventilation, however, is a means of summer air conditioning with a low financial and environmental impact. Natural ventilation methods such as “wind towers” have been used to achieve adequate levels of internal comfort in buildings. However, the application of these systems in old town centres, where buildings are often of great architectural value, is complex. This study started with the analysis of various ventilation chimneys in order to identify the most suitable system for temperate climes. Ventilation systems were then designed using static analysis of ventilation with specific software, and installed. The results were assessed and monitored using climatic sensors over the summer period, in order to establish the period of maximum functionality to optimize the system’s performance.
Introduction
A third of the world's population lives in areas with hot-dry or hot-humid climates covering a fifth of the planet's surface, whilst internal continental areas, even at high latitudes (50°), are characterized by summer temperatures above comfort levels [1]. The use of air conditioning systems and the related energy costs, especially in Europe, is constantly increasing. This trend risks canceling the benefits produced by the energy conservation incentive policies implemented by European countries and other industrialized countries [2].
If the problem of summer air conditioning is not tackled, electricity consumption will continue to rise. Two of the factors that have led to an increase in air conditioning are:

• global warming due to greenhouse gas emissions;
• the growing economic development of emerging areas of the Asian continent, often without adequate environmental controls.
The need to cool habitable areas is becoming increasingly important in the current building landscape, especially as the increase in efficiency of winter heating has led, in some cases, to less efficient cooling of buildings during the summer [3]. The consequence is therefore an increase in energy costs caused by the need to control the indoor temperature and humidity of homes in the summer, and year round for other buildings. A valid alternative to this trend is represented by the use of design criteria and technologies based on the "passive" air conditioning of buildings, or the use of physical-technical mechanisms, natural or induced, aimed at achieving comfort conditions within a building with minimal or no use of exogenous energy [4,5].
The widespread adoption of passive cooling systems that do not use electrical energy would clearly be advantageous from an environmental and economic perspective [6]. Passive cooling consists of a series of measures adopted to control internal conditions, minimizing energy consumption through the use of local climatic resources and creating low environmental impact systems [7].
The use of wind to provide comfortable living conditions is not new and has been practised for centuries [8,9]. The villas of Costozza are an interesting example; wind was channelled through caves in order to cool the rooms in summer. The caves were man-made, having been excavated in ancient times for stone, creating a series of underground caves and galleries called "covoli". In the seventeenth century, when landowners built their villas on this land, they used the air of the caves to cool the internal environments by connecting their cellars to the caves with tunnels and, within the villas themselves, devising a system that allowed the regulation of the air flow. The system is still in use today: fresh air arrives through the basement and reaches the upper floors, the air flow being generated by the difference in temperature and pressure [10].
Another innovative project of cooling and natural ventilation can be found in the Zisa (Figure 1), the summer residence of kings in the city of Palermo (Italy). The natural ventilation and cooling of the castle were achieved through five elements: the large fishpond in front; the fountain on the ground floor; two ventilation chimneys; large damp towels hanging in the various rooms on the upper floors; and two side towers similar to "ventilation chimneys" connected to all three floors of the structure (Figure 2). The sea breezes were first cooled by the pool and the fountain; the wind then entered the building and began to warm up. Hot air rose through the ventilation chimneys, drawn by the cooler air below (the "chimney effect"). Thus, natural circulation of air was created in all of the rooms, facilitated by a series of holes in the doors. Hot air was also cooled by large damp cloths hanging from the beams, the fittings of which are still visible today. Aesthetic comfort and awareness of the environment make the Zisa an outstanding example of bioclimatic architecture [11,12].
"Wind towers" (common in Arabic architecture) are another example of passive ventilation and well-known structures that apply the principle of natural convection to cooling. A wind tower is constructed from the foundations of a building and undergoes internal subdivision into a series of chimneys and vertical ducts before emerging at the top of the building. Convective motions (adjustable with a series of doors and openings) are triggered between ducts, which influence the circulating air temperatures as desired. The motion of the air accelerates in the presence of wind, and the cooling effect is often increased by "evaporation" through fountains carefully positioned in the passage of the currents; an important contribution to cooling is also made by the humidity arising from the subsoil and the foundations [13,14].
Many studies have looked at solar chimneys through mathematical simulations and experimental investigations: this choice of passive ventilation depends on design parameters and the thermal performance of different geometrical configurations. Research has shown that air speeds in chimneys are influenced by the width of the channel and the angle of inclination of the chimney. Saifi et al. [15] developed an experimental and numerical study of a tilted solar chimney (30° and 45°), whilst Chung et al. [16] studied the performance of a solar chimney in hot and humid climates in order to improve the thermal performance of a terrace house in Malaysia: nine configurations of chimney dimensions were tested and validated using CFD in Design Builder software in order to find the best solution for the case study analyzed. Another CFD study was developed by Baxevanou and Fidaros for a two-story building with a solar chimney: three modifications of the basic 2D geometry were examined in order to exploit the functional design of a solar chimney, which operated better in the morning and afternoon, the worst time being noon in June [17]. Yan et al. [18] compared theoretical research, numerical simulations and experimental results, showing how factors such as heat collection height and width, solar radiation intensity, the inlet and outlet area ratio of the chimney, and air inlet velocity affect solar chimney ventilation.
Despite the large amount of literature on analytical studies of ventilation chimney operation, widely validated by CFD analysis and optimized in geometry, there is a lack of research into the integration of these systems in historical buildings, which represent a large part of the Italian built heritage.
Unfortunately, the potential of wind as a renewable energy source alternative to oil for the production of electricity in Italy is rather limited [11]. Italy's geomorphological characteristics determine widespread winds with a prevailing breeze regime: winds with relatively low average speeds (1-2 m/s), variable frequency, and alternating directions during the day. However, these characteristics, although unfavourable for the production of electricity, are particularly suitable for natural ventilation systems for the renewal of air in confined spaces and the passive cooling of buildings [9]. This use, if exploited, would result in far higher savings in terms of electrical energy than could be obtained directly from wind power generation.
It was therefore with this in mind that this study looked at the integration of ventilation chimneys in historic buildings in central Italy. The goal was to evaluate the benefits in terms of internal comfort after the installation of a passive ventilation chimney in Palazzo Galeota, Poggio Picenze (L'Aquila, Italy). The aspects relating to its operational optimization were not considered in this study: in fact, the integration imposes in most cases constraints on the size of the duct. These constraints should not discourage intervention, which, as this paper shows, can nevertheless guarantee an improvement in the performance of the building.
Method and Tools
This research aims to show how thermo-hygrometric characteristics that already exist within a built volume can be exploited to improve internal thermal comfort and to ensure significant energy savings with respect to the use of air conditioning [19]. The study looks at ventilation chimneys with various openings and analyzes the benefits produced by the installation of a ventilation duct in a historical building in central Italy.
The methodology for this research consisted of five steps: the analysis of ventilation chimneys; the design of a ventilation device; simulation with software; installation and monitoring; and validation of the models, concluding with testing. The methodology of the study is shown in Figure 3. The models for the simulations were created using Design Builder. Only natural ventilation was taken into account, and the results were calculated for the whole day of the 21st of June (summer solstice) at 12:00 p.m. and for the summer period from June to August using average values.
Simulations were carried out assuming that the ventilation chimney was used in a temperate climate, that of Campobasso in Italy. The results of these simulations showed the effectiveness of the passive ventilation models, evaluated according to the Fanger indices. The results are, however, dependent on location; different surroundings may not produce the same results. Indeed, the climatic zone plays a crucial role in establishing how thermo-hygrometric wellbeing can be achieved, and it would be interesting to look at the functionality of these systems in more extreme climatic zones, e.g., tropical climates [9,20].
The testing and validation process consisted of comparing the results of the simulation obtained from the schematic and realistic models with the data collected from the monitoring of a case study, after the installation of the ventilation duct.
Construction Typologies, Functioning Schemes and Models
The most ancient "thermal machines" built by man are chimneys. Chimneys are responsible for the natural ventilation of a building, a phenomenon known, in bioclimatic architecture, as the "stack effect". Often, in fact, our buildings act like gigantic chimneys, in which air circulates according to pressure differences. These differences in pressure are responsible for the natural ventilation of the building and are fundamental for the change of air in internal areas and the thermo-hygrometric wellbeing of the occupants. The pressure difference between the various floors of the building, even if slight, increases with height and with the difference in temperature between outside and inside [1,21].
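The draft driving the stack effect can be estimated from these quantities. The sketch below, assuming ideal-gas air densities and a sharp-edged opening with a discharge coefficient of 0.6, shows how the pressure difference and the induced airflow grow with column height and the indoor-outdoor temperature difference; all numerical inputs are illustrative, not measurements from the case study.

```python
import math

G = 9.81        # gravitational acceleration, m/s^2
RHO_0 = 1.292   # dry air density at 0 °C and 1 atm, kg/m^3

def air_density(t_celsius):
    """Ideal-gas air density at atmospheric pressure."""
    return RHO_0 * 273.15 / (t_celsius + 273.15)

def stack_pressure(height_m, t_inside, t_outside):
    """Stack-effect pressure difference over a column of given height (Pa)."""
    return G * height_m * (air_density(t_outside) - air_density(t_inside))

def stack_airflow(opening_area_m2, height_m, t_inside, t_outside, cd=0.6):
    """Volumetric airflow through an opening, simple orifice model (m^3/s)."""
    dp = abs(stack_pressure(height_m, t_inside, t_outside))
    rho = air_density((t_inside + t_outside) / 2.0)
    return cd * opening_area_m2 * math.sqrt(2.0 * dp / rho)

# A 6 m column of 28 °C indoor air fed by 20 °C cellar air, through a
# 250 mm duct (area ~0.05 m^2); values chosen only for illustration:
print(f"dP = {stack_pressure(6.0, 28.0, 20.0):.2f} Pa")
print(f"Q  = {stack_airflow(0.05, 6.0, 28.0, 20.0):.3f} m^3/s")
```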
In a house with several floors, the warmer air rises to the top floors, creating a pressure that is higher than the atmospheric one, in contrast to what happens on the lower floors, where the pressure is lower than the atmospheric one. Doors and windows play an important role in the regulation of natural ventilation, as do the type of ventilation chimney and the number of floors, and it is their roles that this study seeks to determine through a careful analysis of various examples. There are two types of wind tower: the passive cooling court and the passive cooling tower (see Figure 4).
Different models of ventilation chimneys were studied to obtain solutions compatible with different case studies. After defining several standard models, simulations were carried out using specific software. The rooms were assumed to be of medium size (20 m²) and the buildings located in climatic zone E. The models were created using Design Builder. Only natural ventilation was taken into account, and the results were calculated for the whole day of the 21st of June (summer solstice) at 12:00 p.m. and for the summer period from June to August using average values. The simulations and the results for each model analyzed are described in depth in [22][23][24].
The features of each model analyzed in this work are shown in Table 1 and in Figure 5.
The features of each model analyzed in this work are shown in Table 1 and in Figure 5. Figure 6 shows the ventilation chimney analyzed in the case study, which consists of a wind tower that connects two internal areas of the building, situated on the different levels, with an opening on the lower floor [25,26] as well as ducts designed to achieve passive ventilation based on the systems used in the Renaissance villas of Costozza, Torri del Vento and the Zisa di Palermo. Figure 6 shows the ventilation chimney analyzed in the case study, which consists of a wind tower that connects two internal areas of the building, situated on the different levels, with an opening on the lower floor [25,26] as well as ducts designed to achieve passive ventilation based on the systems used in the Renaissance villas of Costozza, Torri del Vento and the Zisa di Palermo.
Case Study
The case study is a building of elevated historical and architectural value situated in Poggio Picenze (municipality of L'Aquila) and known as "Palazzo Galeota" (Figure 7). Palazzo Galeota was built in the 15th century over a previous underground structure. The building was damaged in the earthquake that hit L'Aquila and the surrounding territory on 6th April 2009 and has not yet been repaired. The building suffered serious damage, including the partial collapse of floors and cracks along the bearing walls; an external structure was also required to secure the external walls that were still standing. The building is constructed from mixed masonry of brick and stone, and the roof covering is wood; these materials have been conserved over the years, and to this day many original features are still present. There was no thermal insulation in the vertical, horizontal or inclined structures, and the windows are the original ones, with wooden frames and single-pane glass.
The Palazzo has an interior court structure consisting of two floors above ground and one below.
There is an open well on the underground floor, as well as rooms and a wine cellar. The floors above ground are characterized by internal loggia, whilst the underground floor can be accessed by a staircase from the internal courtyard. The main entrance to the palazzo is along Via Galeota, which leads into the entrance hall that acts as a horizontal connecting element: from here, the internal court can be accessed, as well as the whole length of the loggia and other rooms.
From an energy point of view, the structure is inefficient during the winter but more efficient in summer, mainly due to the massive envelope of thick walls that guarantees high thermal inertia, therefore maintaining comfortable temperatures. This does not hold for the first floor, however, where rooms directly in contact with the light wooden roof covering are exposed to solar radiation, making internal temperatures uncomfortable during the summer months.
It is possible to assume that there is a different internal temperature on each of the three floors (underground, ground floor and first floor) due to the intrinsic and material characteristics of the building, besides the solar exposure of the whole volume of the roof covering. The underground floor has particular potential with regard to passive cooling, as the average temperature registered during the days from 1-15 April was 6 °C. The humidity from the open well helps to maintain the lower internal temperature, and there is very little variability between day and night regimes, both in summer and winter.
Installation of the Ventilation Duct
As has been fully described elsewhere [22], a duct connects the basement to room 1, transporting cold air from the underground room up to the first floor (Figures 8 and 9). As the original windows were removed as a result of the 2009 earthquake, the openings were sealed with a PVC sheet to guarantee the room the same solar gains. This certainly favored incident solar radiation and an increase in indoor temperature, producing a greenhouse effect similar to that of glazing.
The type of duct used was one compatible with the re-use of existing flues. The tube had a diameter of 250 mm and is made of a conductive (metallic) material to allow external heat exchange, enhancing the movement of the air through differences in temperature and pressure (Figure 10). In addition, during the monitoring campaign, it was verified that insulating the duct does not provide beneficial effects on ventilation capacity. The investigated rooms are oriented to the south in order to simulate a more disadvantageous condition in terms of internal comfort, due to the higher overheating inside caused by exposure.
Measurement and Calibration Instruments
Electronic monitoring enabled us to obtain the data for this study in real time. Sensors for temperature, humidity, rainfall and wind direction, together with an anemometer, were installed (indoor air quality was not monitored in this study but would prove of interest in future studies) (Figure 11). This sensory network was made up of elements able to measure, elaborate and send data to a central station and incorporated a network protocol for communication with the various sensors, an application necessary for the treatment and memorization of the data, an external interface for the consultation and analysis of data, a database, and a web server with a specific web application (Figure 12).
A sensor network structure usually provides several wireless (or, when possible, wired) nodes distributed in a well-defined area, which periodically send the data surveyed through sensors to a collection point (known as a sink, base station or gateway). At the collection point, data are gathered and sent to another remote system for recording and further elaboration. In this setup, the sensors were used to monitor rooms 1 and 2, the cellar and the external climatic conditions, the latter through a weather station installed on the roof.
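The paper does not specify the network protocol used; purely as an illustration of the node-to-sink flow described above, the sketch below posts one reading to a hypothetical gateway endpoint over HTTP using only the Python standard library.

```python
import json
import time
import urllib.request

GATEWAY_URL = "http://192.168.1.10:8080/readings"  # hypothetical sink address

def post_reading(node_id, temperature_c, humidity_pct):
    """Send one temperature/humidity sample to the collection point."""
    payload = json.dumps({
        "node": node_id,
        "temperature": temperature_c,
        "humidity": humidity_pct,
        "timestamp": time.time(),
    }).encode("utf-8")
    req = urllib.request.Request(
        GATEWAY_URL, data=payload,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.status

# Called every 5 minutes, matching the paper's sampling interval:
# post_reading("room1", 27.4, 48.2)
```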
The section in Figure 13 shows the position of the sensors. There are two sensors for temperature and relative humidity in the hypogeum cellar, room 1, room 2 and outside. There is also a weather station positioned on the roof for the recording of atmospheric pressure, rainfall, wind speed and wind direction data.
Simulation Output with Software Design Builder
The building was modeled using Design Builder so that the conditions of the building before and after the duct installation could be analyzed with a simulation tool.
With Design Builder, it is possible to perform accurate studies of thermal masses and natural ventilation flows according to external meteorological conditions, under the control of dynamic simulation programs (EnergyPlus). In particular, Design Builder allows us to determine the mass of air exchanged between internal and external environments, as well as between different zones of the model, through the openings, as a result of wind and differences in pressure. After selecting the checkbox under the natural ventilation header, the Outside air definition method allows us to select the method used to set the maximum outside air natural ventilation rate. In this case, the option used was "by zone", and the zones in the model connected by holes were merged.
Only one wing of the building was modeled (Figure 14). Figure 15 shows the outline for the modeling of the hypogean level and rooms 1 and 2 of the first floor. The duct was simplified to represent a true and proper ventilation chimney. Residential use was assumed for the two rooms in the simulations, so metabolic activity was set to 1.2 met (relaxed) and typical indoor clothing to 0.5 clo.
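For reference, the Fanger indices used to judge comfort in these simulations can be computed with the ISO 7730 PMV/PPD routine. The sketch below is a compact Python port of that standard algorithm (not the Design Builder implementation), called with the paper's 1.2 met / 0.5 clo settings and otherwise illustrative environmental inputs.

```python
import math

def pmv_ppd(ta, tr, vel, rh, met=1.2, clo=0.5):
    """Fanger PMV/PPD per ISO 7730. ta, tr in °C; vel in m/s; rh in %."""
    pa = rh * 10.0 * math.exp(16.6536 - 4030.183 / (ta + 235.0))  # vapour pressure, Pa
    icl = 0.155 * clo             # clothing insulation, m2.K/W
    m = met * 58.15               # metabolic rate, W/m2 (no external work)
    fcl = 1.0 + 1.29 * icl if icl <= 0.078 else 1.05 + 0.645 * icl
    hcf = 12.1 * math.sqrt(vel)   # forced-convection coefficient
    taa, tra = ta + 273.0, tr + 273.0

    # Iterate for the clothing surface temperature.
    tcla = taa + (35.5 - ta) / (3.5 * icl + 0.1)
    p1 = icl * fcl
    p2 = p1 * 3.96
    p3 = p1 * 100.0
    p4 = p1 * taa
    p5 = 308.7 - 0.028 * m + p2 * (tra / 100.0) ** 4
    xn, xf, hc = tcla / 100.0, tcla / 50.0, hcf
    for _ in range(150):
        if abs(xn - xf) <= 0.00015:
            break
        xf = (xf + xn) / 2.0
        hc = max(hcf, 2.38 * abs(100.0 * xf - taa) ** 0.25)
        xn = (p5 + p4 * hc - p2 * xf ** 4) / (100.0 + p3 * hc)
    tcl = 100.0 * xn - 273.0

    # Partial heat losses from the body.
    hl1 = 3.05e-3 * (5733.0 - 6.99 * m - pa)           # skin diffusion
    hl2 = 0.42 * (m - 58.15) if m > 58.15 else 0.0     # sweat evaporation
    hl3 = 1.7e-5 * m * (5867.0 - pa)                   # latent respiration
    hl4 = 0.0014 * m * (34.0 - ta)                     # dry respiration
    hl5 = 3.96 * fcl * (xn ** 4 - (tra / 100.0) ** 4)  # radiation
    hl6 = fcl * hc * (tcl - ta)                        # convection

    ts = 0.303 * math.exp(-0.036 * m) + 0.028
    pmv = ts * (m - hl1 - hl2 - hl3 - hl4 - hl5 - hl6)
    ppd = 100.0 - 95.0 * math.exp(-0.03353 * pmv ** 4 - 0.2179 * pmv ** 2)
    return pmv, ppd

# Illustrative summer-noon conditions: 26 °C air and radiant temperature,
# 0.1 m/s air speed, 50% RH, with the paper's met/clo settings:
print(pmv_ppd(26.0, 26.0, 0.1, 50.0))
```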
Results
Four temperatures were measured: air temperature, radiant temperature, effective temperature and dry bulb temperature.
In order to confirm the validity of the dynamic simulation carried out with the Design Builder software, the results obtained with the software were compared with those of the monitoring.
Simulation Results
Simulating ventilation through the chimney, having modeled the duct, yields a significant fall in the temperature of room 1 with respect to room 2 of about 2.5 °C. Similar benefits were not obtained for humidity. The humidity values in rooms 1 and 2 are due to the presence of the open water well on the hypogeum floor, from which the cool air comes. However, since the operating temperatures registered in room 1 differed from those of room 2, the comfort checks for the two rooms led to different results [27]. See Tables 2 and 3 and Figure 16.
Monitoring Campaign and Experimental Data
After the installation of the sensors, data were recorded and monitored from May through to the first week of September. During this period, the building was monitored under seven different configurations (Table 4), to verify and optimize the system under different weather conditions and surroundings. These configurations related to different periods (Time 0, Time 1, Time 2, etc.), as shown in Table 4 and Figure 17. The data were recorded every five minutes throughout the day; however, the time chosen for the analysis of the duct operation was 12:00 p.m., because this is when the difference between external and internal temperature was at its greatest. During the first monitoring period, T0, the data relating to the two rooms were recorded keeping both rooms insulated and the duct closed, to establish baseline measurements for comparison.
The analysis of period T0 revealed a temperature difference of about half a degree (ΔT = 0.50 °C) and a relative humidity difference of ΔU = 0.32% between the two rooms. These differences were taken into consideration in the subsequent analyses.
The following table shows the data for the various periods monitored with the corrective factors for Room 1 (Table 5): Comparing these recorded data with standard comfort conditions, a relative humidity of between 40-60% and a comfort temperature between 20-26%, the results closest to achieving optimal indoor comfort were obtained in the summer period, from July until the first week of September (T4, T5 and T6).The relative humidity varied significantly between the two rooms, due to the presence of the duct in room 1 that lowers the level of relative humidity in the room.Graph summarizes the differences in air temperature in the seven periods analyzed (Figure 18).The data was recorded every five minutes throughout the day.Although the data were recorded every 5 min during the day, the time chosen for the analysis of the duct operation corresponds to 12:00 p.m. because the difference in external and internal temperature was at its greatest.During the first monitoring period, the data relating to two rooms, at T0, were recorded keeping both rooms insulated and the duct closed to establish basic measurements for comparison.
The graph plots temperature on the y-axis and the days monitored on the x-axis, with the whole period divided into periods T1 to T6. The external temperature is shown in orange, the cellar temperature in blue, Room 1 in red and Room 2 in green. In the first monitoring period, the temperature in room 2 is lower than in room 1, while in the last period the curves overlap and are often reversed: during the first monitoring period the system was not yet fully operational, whereas by the last period it was operating optimally. The following graph illustrates the relative humidity of the two rooms, the cellar and the outside, together with the wind velocity (Figure 19). The red line represents room 1, with the duct, whilst the green line represents room 2. The green line follows the external relative humidity (purple), whilst the relative humidity in room 1 (red) decreases during the hotter months, when the system becomes operative. The intensity of the wind also influences the humidity of room 1: as the wind intensity increases, the relative humidity decreases, thanks to the chimney on the roof that increases the pulling force of the duct. In period 5, the relative humidity sensor in room 1 malfunctioned due to a technical problem.
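To make the noon-reading analysis concrete, here is a minimal Python sketch of the corrected ΔT/ΔRH computation described above. The CSV file and column names are hypothetical; only the 5-minute sampling, the 12:00 selection and the T0 offsets of 0.50 °C and 0.32% come from the text.

```python
# A minimal sketch of the noon-reading analysis, assuming a hypothetical log
# "sensors.csv" with columns: timestamp, period, temp_room1, temp_room2,
# rh_room1, rh_room2 (none of these names come from the paper).
import pandas as pd

log = pd.read_csv("sensors.csv", parse_dates=["timestamp"])

# Keep only the 12:00 reading of each day, when the indoor/outdoor
# temperature difference is at its greatest.
noon = log[log["timestamp"].dt.strftime("%H:%M") == "12:00"].copy()

# Temperature and relative-humidity differences between the two rooms.
noon["dT"] = noon["temp_room2"] - noon["temp_room1"]
noon["dRH"] = noon["rh_room2"] - noon["rh_room1"]

# Correct for the baseline offsets measured in period T0 (duct closed):
# about 0.50 degC and 0.32% RH between the otherwise identical rooms.
noon["dT_corr"] = noon["dT"] - 0.50
noon["dRH_corr"] = noon["dRH"] - 0.32

# Average corrected differences per configuration period T0..T6.
print(noon.groupby("period")[["dT_corr", "dRH_corr"]].mean())
```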
Discussion
The monitoring shows how the ventilation chimney improved comfort conditions during the summer months. It is therefore possible to achieve comfortable indoor conditions in a temperate Mediterranean climate without mechanical means [17,21]. Analyzing the data from the various configurations, it emerges that the best results were obtained in T3, T4 and T5, and that even a slight variation in relative humidity modifies the internal comfort enough for the parameters to be verified inside the room (see Figure 20).
As is clear from Table 6, the values of the models and the monitored data agree within a 6% error margin. In the real installation, however, a significantly different percentage was observed with respect to the schematic model; this was probably due to the climate file of the model simulation containing historic rather than current data.
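As a small illustration of this validation step, the sketch below checks simulated against monitored values with a relative-error threshold; the arrays are placeholders, and only the 6% margin comes from the text.

```python
# Placeholder check of simulation-vs-monitoring agreement; only the 6%
# threshold is taken from the paper, the numbers are illustrative.
import numpy as np

simulated = np.array([24.1, 25.3, 26.0])   # hypothetical Design Builder output
monitored = np.array([23.4, 25.9, 26.8])   # hypothetical sensor averages

rel_err = np.abs(simulated - monitored) / monitored
print(rel_err, "all within 6%:", bool(np.all(rel_err <= 0.06)))
```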
It is possible to affirm that the monitoring allowed us to verify the validity of the simulations involving the ventilation duct during the warmer months: indeed, the research shows that a ventilation duct assists in achieving adequate levels of comfort in residential settings. Furthermore, although the consequent energy savings were not calculated in this study, it is logical to conclude that energy consumption would fall as a result of the reduced use of air-conditioning systems.
In conclusion, although the use of a ventilation chimney cannot always be optimized in historic buildings, the advantages that its installation brings to the thermo-hygrometric conditions of the living environment are nevertheless significant. The installation of the ventilation duct in the historic building "Palazzo Galeota" is an example to be followed, employing passive ventilation and the optimization of existing cavities, or the integration of new devices with a low impact on pre-existing structures.
Research Developments
A further area for research will undoubtedly be indoor air quality and the evaluation of the benefits of natural ventilation and cooling systems [7]. The system installed in this study has shown how to draw on natural resources, such as cool air from hypogeal areas, which have constant thermal characteristics throughout the year, being below ground. However, these hypogeal rooms are often used as cellars and warehouses, and the lack of airflow often results in the growth of microorganisms and moulds harmful to human health. In this case study, the extracted air derives from a winery, which raises the question of air quality and the need to install additional sensors to check it. In fact, in addition to the air temperature and humidity sensors, air quality sensors (CO2, VOC) have also been installed in the room where the duct is present (Figure 21).
These sensors will have the task of checking the indoor air quality in the months of July and August, when the system is at maximum operating speed and the air flow is consequently higher. Sample air particles will also be collected with bio-aerosol cassettes and analyzed in the laboratory. If the air quality is not good, filtering systems will be installed in the duct to improve the parameters.
Conclusions
The installation of natural ventilation systems is commonly included in new buildings, while such systems are rarely used in historic buildings because of the difficulty of integrating them into the original design and in light of their potential visual impact. Further difficulties with architectural integration also arise in the distribution and management of habitable interior spaces. In fact, natural ventilation strategies necessarily require that the airflow "passes" through the confined spaces.
This aspect therefore conditions the internal organization of the building, to the extent that it limits partitions perpendicular to the prevailing airflow. Unfortunately, national and regional regulatory legislation has not taken any steps to encourage the use of technological devices for passive cooling, nor favored their architectural integration, even in more sensitive urban contexts such as historic centers. Nevertheless, a careful study of the building under intervention allows the application of such systems through the reuse of the chimneys of disused fireplaces, or by creating ad hoc passages placed in correspondence with elements of no particular value (floors, walls, etc.). Fortunately, hypogeal environments, caves and cavities are widespread in Mediterranean regions, as are rooms below ground characterized by lower temperatures than the floors above. Therefore, a greater awareness of systems able to exploit the natural flow of ventilation could lead to a more "widespread" application in the most varied contexts, with consequent savings in terms of energy and fossil resources.
The study of cooling systems in relation to the climatic context is essential: this research, developed in a temperate climate, confirms that the use of passive cooling systems allows adequate levels of comfort to be reached without the use of mechanical systems and consequently without cost. Further studies of varying building configurations in differing climatic contexts will lead to more efficient energy use.
Figure 1. Main facade of Zisa in Palermo and the fountain inside.
Figure 2. Outside the wind chimney and inside the uptake air hole.
Figure 4. A cross section of an internal court ventilation chimney (a) and a tower ventilation chimney (b).
Figure 6. Standard historical models of passive ventilation and the case study installation.
Figure 6 shows the ventilation chimney analyzed in the case study, which consists of a wind tower that connects two internal areas of the building, situated on different levels, with an opening on the lower floor [25,26], as well as ducts designed to achieve passive ventilation based on the systems used in the Renaissance villas of Costozza, the Torri del Vento and the Zisa di Palermo.
Figure 7. Image of the case study building's facade.
Figure 8. Model of the ventilation path through the various levels (underground, ground floor and first floor).
Figure 9. Preliminary steps for duct installation: (a) removal of the grid from the cellar window for the passage of the duct, and storage of the duct in the site yard; (b) duct installation phase.
Figure 10. The duct once installed (a) in the cellar; (b) on the ground floor under the arcade.
Figure 12. Weather station, "Arturo" software for the survey of the data.
Figure 13. Building section with the sensors' position in Room 1 (sensors of temperature and relative humidity are also located in Room 2, not visible in this section). It was not possible to insert other sensors inside the duct.
Figure 14. Three-dimensional (3D) model in Design Builder. Image of the complete model.
Figure 15. 3D model in Design Builder. Image of the duct in the hypogeum room and in the monitored room: (a) hypogeum cellar with the opening of the water well and the entrance for the cold air into the chimney; (b) upper floor with the air opening from the chimney in room 1.
Figure 16. Comparison between comfort curves of the rooms analyzed: (a) room 1: verified; (b) room 2: not verified.
Figure 17. Configuration models. Hot air flow in red, cold air flow in blue and insulating material in orange [20].
Figure 18. Graph recording the temperature of the cellar (blue line), room 1 (red line), room 2 (green line) and outside (orange line) during the seven monitoring periods.
Figure 19. The relative humidity of the cellar (blue line), room 1 (red line), room 2 (green line) and outside, and the wind velocity (orange line), during the seven monitoring periods.
Figure 21. From the left: an air quality detection sensor, a formaldehyde detector, a CO2 detector and bio-aerosol cassettes.
Table 1. Description of the models analyzed.
Table 2. Average air temperature, average operative temperature and average relative humidity in rooms 1 and 2 from June to August.
Table 3. The different configurations monitored.
Table 4. Comparison between data monitored: Room 1, Room 2 and external conditions.
Table 5. Comparison of the data obtained from the schematic modeling, realistic modeling and data collected by the sensors installed (room 1).
"Engineering"
] |
Meningococcaemia causing necrotizing cellulitis associated with acquired complement deficiency after gastric bypass surgery: a case report
Background Neisseria meningitidis has rarely been described as an agent of necrotic soft tissue infection. Case presentation We report a case of septic shock with necrotizing cellulitis due to Neisseria meningitidis serogroup W, treated by urgent extensive surgical debridement followed by skin grafts. The invasive meningococcal disease occurred together with a complement deficiency, possibly acquired after bypass surgery that took place 1 year before. Conclusions Necrotic tissue infections should be considered part of the invasive meningococcal disease spectrum and should prompt clinicians to look for complement deficiencies. Gastric bypass surgery-associated malnutrition may be implicated, but further verification is needed.
Background
Neisseria meningitidis is a virulent bacterium known for causing fulminant purpura and purulent meningitis, but unusual presentations have been observed. We report here a rare case of necrotizing soft tissue infection (NSTI) related to meningococcaemia, associated with a recently acquired complement deficiency.
Case presentation
In April 2019, a 50-year-old woman was admitted to our Intensive Care Unit for septic shock related to a necrotizing soft tissue infection.
Her medical history included complicated gastric bypass surgery 1 year earlier, followed by severe malnutrition still requiring enteral feeding supplementation. Earlier diagnoses included arterial hypertension and discoid lupus erythematosus. She had never required immunosuppressive therapy.
A few hours before admission, the patient developed a sudden intense leg pain, associated with malaise. At the emergency room, she presented with hyperaemia and swelling of both anterior thighs and the right abdominal flank (Fig. 1a). Blood pressure was 50/35 mmHg, heart rate 140 bpm and temperature 35.3°C. Arterial blood lactate was 12 mmol/L (N < 2 mmol/L).
Urgent surgical exploration of the skin lesions revealed extensive subcutaneous necrosis not encompassing the fascia. The lesions underwent extensive debridement (Fig. 1b).
All surgical samples and blood cultures returned positive for Neisseria meningitidis, with the first blood culture, drawn at admission, turning positive after 9 h. The strain isolated was identified as serogroup W, subtype W: P1.5,2: F1-1: ST-11 (cc11). Genetic comparison based on Core Genome Multilocus Sequence Typing (cgMLST) using the international Neisseria public database for molecular typing (pubMLST) indicated that the isolate belonged to the UK 2013 lineage. The sequence has been deposited into the European Nucleotide Archive (ENA) database and is available under study accession number PRJEB37139.
Antibiotic treatment was de-escalated to benzylpenicillin based on a minimal inhibitory concentration (MIC) measured at 0.06 mg/L.
A lumbar puncture performed at day 2 after coagulation correction returned normal values.
After 52 days in the ICU, with several complications along the way, the patient underwent successful skin graft repair (Fig. 1c) and was discharged to the hospital ward 10 days later.
Three months before this necrotizing cellulitis, the patient had undergone an immune status workup by a nephrologist for low-level proteinuria. ANA/ANCAs were negative, while C3 and total haemolytic activity (CH50) levels were low, at 55 and 19%, respectively. A normal C4 level was found. Complement had been reported as normal on the occasion of the earlier diagnostic investigation for discoid lupus erythematosus.
Necrotizing cellulitis is distinguished from purpura fulminans, which also leads to skin necrosis, but does so on the basis of confluent petechiae and as a result of endotoxin-related microthrombi [6].
Previously published case reports of N. meningitidis-related necrotizing soft tissue infections were treated with extensive surgical debridement, as was this patient [7,8]. Although the necrosis did not extend beyond the fascia, early surgery in addition to prompt antibiotic treatment may have contributed to the patient's survival.
The meningococcal strain isolated, which belongs to serogroup W of the genotype ST-11, is increasingly reported in many European countries, with patients presenting abdominal symptoms in contrast to the more conventional presentation of meningococcal infections [9,10]. This particular ST-11 strain was first described in South America and later in the UK and is therefore known as the South American/UK lineage. The original UK strain later evolved through further genetic rearrangements to become the UK 2013 strain [11]. This rising incidence led to the promotion of the ACWY vaccination rather than the MenC vaccination (recommended since July 2019 in Belgium).
Acquired deficiency in C5, induced by the therapeutic monoclonal antibody eculizumab (an inhibitor of C5 cleavage), has also been shown to favour IMD [18].
Hypocomplementemia can be due to immune complex formation in antibody-mediated immune diseases such as cryoglobulinemia, systemic lupus erythematosus and endocarditis. However, to our knowledge, this phenomenon has not been linked to an increased susceptibility to IMD [17].
In our case, the CH50 and the C3 were abnormally low 3 months before the IMD though normal several years earlier. We hypothesize that the complement deficiency was acquired following the complicated gastric bypass surgery. This is suggested by the study of Gómez-Abril et al. [19] whose systematic exploration of immunological and laboratory abnormalities following bypass surgery demonstrated low levels of C3.
While vaccination against N. meningitidis in complement-deficient persons is widely recommended [20], it is not clear at this stage how frequently hypocomplementemia occurs after gastric bypass surgery, since our case is likely the first one described. Vaccination of such patients could be offered broadly if complement deficiency proves to be a regular finding.
In conclusion, we report a rare case of N. meningitidis related necrotizing cellulitis, an entity different from fulminant purpura. Meningococcemia was possibly favoured by an acquired classical pathway complement deficiency following a complicated gastric bypass entailing severe malnutrition.
Neisseria meningitidis should be considered among the causes of necrotizing cellulitis. Whether gastric bypass surgery-associated malnutrition impairs complement function deserves further confirmation.
"Medicine",
"Biology"
] |
Normal-Power-Logistic Distribution: Properties and Application in Generalized Linear Model
The applications of the normal distribution in the literature are vast. The modified univariate normal-power distribution is a new distribution that is adequate for modelling bimodal data. Many data sets that would otherwise be modelled by the normal distribution are not, because of their bimodality, since the normal distribution is unimodal. In this paper, a new extension of the normal linear model called the normal-power generalized linear model, derived from the T-Power{Logistic} framework, is presented. The statistical properties of the distribution and the proposed model, such as quantiles, median, mode, robust skewness, robust kurtosis and moments, are derived. The maximum likelihood estimation method is used to obtain the unknown model parameters. Three real data sets are analyzed to demonstrate the flexibility and usefulness of the proposed model. The new model would be very useful as an alternative in cases of skewed or bimodal response variables that are not well fitted by the normal linear model.
Introduction
In probability and statistics, the power function and normal distributions are very useful in their individual applications. Not many authors have thought to combine these two distributions. The normal distribution does not have a shape parameter,
but the power function distribution has one, while the power function distribution does not have a location parameter but the normal does. Both are flexible, so combining them will produce a more flexible distribution. The power function distribution is the inverse of the Pareto distribution (Dallas 1976). It is a special model that can be formed from, or related to, the uniform, Weibull and Kumaraswamy distributions, and is considered one of the simplest and handiest lifetime distributions. Meniconi and Barry (1996) proposed the two-parameter power function distribution as a simple alternative to the exponential distribution for modelling failure data related to mortality rates and component failures. It is a special case of the beta distribution, and one may cite the importance of the distribution in statistical tests such as the likelihood ratio test. The normal distribution, on the other hand, has been combined with other distributions to form more flexible ones, such as the exponentiated-normal (Gupta et al. 1998), beta-normal (Eugene and Lee 2002), gamma-normal (GN) (Zografos and Balakrishnan 2009) and Kumaraswamy-normal (Cordeiro and de Castro 2011) distributions. Estimation of the power function parameters has been carried out by various authors, such as Zaka and Akhter (2013).
Many classical distributions have been extensively used for modelling real data in many areas. However, in many situations there is a clear need for extended forms of these distributions to improve their flexibility and goodness of fit. For that reason, families of continuous distributions are developed by introducing one or more additional shape parameters to the baseline distribution, or by combining two or more distributions to produce new ones. Akarawak et al. (2013) described such new distributions as convoluted distributions. Some authors in recent years have developed frameworks for combining these distributions to form new ones. A good example is the T-R{Y} framework (Aljarrah et al. 2014). Since then, many authors have used it to develop flexible lifetime distributions that are hazard-weighted functions of the baseline distributions. The Weibull-normal distribution (Alzaatreh et al. 2014) was one of the first combinations of the normal distribution with another distribution using the T-R{Y} framework. The Weibull power function distribution (Tahir et al. 2016) combines the power function and Weibull distributions, using the Weibull distribution as the baseline.
The simplicity and usefulness of the power function distribution have compelled researchers to explore its further extensions, generalizations and applications in different areas of science (Arshad et al. 2020; Ekum et al. 2020b). Recently, the Gamma-Power{log-logistic} distribution was proposed by Ekum et al. (2021), who demonstrated its usefulness in modelling skewed data. None of these studies has combined the normal and power function distributions, especially with the power function distribution as baseline, except the normal-power{logistic} distribution (NPLD) proposed in the work of Ekum et al. (2021). Moreover, many properties of the NPLD have not been defined and studied, and it has not been developed into a generalized linear model for predicting relationships in regression applications.
Predicting oil spillage is of major interest to researchers in the fields of geoscience and geological statistics. In Nigeria, oil spillage is a major problem that has devastated the ecosystem and biodiversity of the Niger Delta region. The quantity of oil spilled may be estimated using the estimated spill volume, which in turn may be determined by the duration of clean-up (Whanda et al. 2016; Deinkuro et al. 2021). Also, researchers may want to know whether they can predict their Research Gate score from their citations and research items; these are emerging issues of interest to researchers, especially those in academics (Jordan (2015); O'Brien (2019)). Moreover, the COVID-19 mortality rate per population and its linear effect on the economic wellbeing of Nigerians are also worth studying, because GDP per capita can be affected by COVID-19 mortality. The COVID-19 factor is also an extra burden on the wellbeing of the people (Pak et al. (2020); Iluno et al. (2021)).
In the literature there are some modifications of the normal distribution that produce multimodality (Kundu 2017), with multiple modes and fewer parameters. The modification of the normal distribution developed by Kundu (2017) is a bivariate family of distributions, while the one developed here is a univariate family. Moreover, Kundu (2017) did not extend that distribution to a generalized linear model. The motivation for this work is the modelling of dependent variables in regression that have bimodal features. Other authors, such as Famoye et al. (2018) and Kundu (2017), have developed distributions that are bimodal, but none has extended them to regression modelling. Moreover, real-life variables such as crude oil spill volume, number of citations on Research Gate, GDP per capita, etc., are variables whose maximum values can be estimated, so they are bounded below by zero (non-negative) and above by a real value rather than infinity. Thus, a distribution with bounded support [0, λ], where λ > 0 is a real upper bound, is necessary (Ekum et al. 2020b).
Thus, the aim of this study is to adopt a novel univariate continuous probability distribution called the normal-power-logistic distribution (NPLD), derived from the T-Power{logistic} family proposed and studied by Ekum et al. (2021), and to extend it to a generalized linear model in order to solve real regression problems where the dependent variable is bimodal and skewed with a known maximum value. The model has four parameters: two from the normal distribution and two from the power function distribution, one of which is a shape parameter and the other an upper bound parameter that controls the extremes of the distribution. The scope covers characterizations, properties, the regression model and parameter estimation of the NPLD model. The method of maximum likelihood estimation (MLE) is used to estimate the model parameters. The importance of the new model is demonstrated empirically using three real-life data sets. The proposed model would be very useful in engineering, medicine and all fields of life where the dependent variable to be predicted has bimodal features; it is expected to perform well when the normal distribution fails to fit the data of interest.
Materials and Methods
In this section, the theory and application of the proposed scheme are considered.
The Method of Generating the T-R{Y} Family of Distributions
The method of generating the T-R{Y} family of distributions is considered. T-R{Y} is a general approach for defining W[F(x)] (a non-decreasing differentiable function) using the quantile function of a random variable Y in the T-X framework. Let T, R and Y be three random variables with cdfs F_T(x), F_R(x) and F_Y(x), and let Q_T(x), Q_R(x) and Q_Y(x) be their corresponding quantile functions. It is assumed that T is supported on the interval (a, b) and Y is supported on the interval (c, d), where b > a and d > c are real numbers.
Important Operational Definition of Terms
The following definitions will be very useful in characterising the proposed model.
Definition 5: The cumulative hazard function of a distribution from the T-power{logistic} family is given by H_X(x) = -ln[1 - F_X(x)]. Definition 6: The reverse hazard function of a distribution from the T-power{logistic} family is given by r_X(x) = f_X(x)/F_X(x). Definition 7: The quantile function of the T-power{logistic} family is the inverse function of its cdf, Q_X(p) = F_X^(-1)(p). The quantile function is used in the Monte Carlo method to simulate random variates of a distribution, and it is used to determine measures of partition. Several approaches to quantile approximation when it is not in closed form are available in the literature, of which quantile mechanics is one (Akagbue et al. 2017).
Definition 8 :
The T-power{logistic} family of distributions is derived from the T-R{Y} family proposed by Aljarrah et al. (2014) and Alzaatreh et al. (2014). The relationship among T, R and Y is given by F_X(x) = F_T(Q_Y(F_R(x))). Definition 9: Let R be a non-negative random variable with pdf f_R(x), and let E(R^k) denote the kth moment of R; then the kth moment of X, E(X^k), can be expressed in terms of E(R^k), the survival function [1 - F_Y(.)] of the random variable Y, and the quantile values of the random variable T with respect to f_T(x).
Normal-Power function {logistic} Model
The proposed model is a generalized linear model of the form g(mu_i) = x_i'B, where g(mu_i) is the link function and the right-hand side is the linear predictor. Six goodness-of-fit criteria are used to compare the flexibility of the proposed model with other known models: the log-likelihood (LogL), Akaike Information Criterion (AIC), Kolmogorov-Smirnov statistic (D), Anderson-Darling statistic (A), Cramer-von Mises statistic and Chi-square statistic (chi^2). See (Chen and Balakrishnan 1995) for detailed information on A and the Cramer-von Mises statistic. The lower the value of a criterion, the better the performance of the model. Also, to show the relationship between the observed dependent variable y and the predicted dependent variable y-hat, the coefficient of correlation is used; the better-performing model has the higher correlation coefficient. It is assumed that the dependent variable y has a normal-power distribution.
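For concreteness, here is a minimal sketch of two of the criteria listed above (AIC and the Kolmogorov-Smirnov D); the names `loglik`, `n_par`, `data` and `cdf` are placeholders, not quantities from the paper.

```python
# Model-comparison helpers: AIC and the KS distance for a fitted model.
import numpy as np
from scipy import stats

def aic(loglik, n_par):
    # Akaike Information Criterion: smaller is better.
    return 2 * n_par - 2 * loglik

def ks_statistic(data, cdf):
    # Kolmogorov-Smirnov distance between the empirical cdf of `data`
    # and the fitted cdf passed in as a callable.
    return stats.kstest(data, cdf).statistic
```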
Cumulative Distribution and Probability Density Functions of NPLD
Recall the cdf of the T-power{logistic} family defined by Ekum et al. (2021) and given in Definition (1) as F_X(x) = F_T(t), with t = Q_Y(F_R(x)), where F_T[t] is the cdf of the random variable T. So, T can follow any known distribution.
If T follows a normal distribution with parameters mu and sigma, then the pdf of T is given by
f_T(t) = (1/(sigma sqrt(2 pi))) exp{-(t - mu)^2 / (2 sigma^2)},
and the cdf of T is given by
F_T(t) = (1/2)[1 + erf((t - mu)/(sigma sqrt(2)))].
For the power function baseline, F_R(x) = (x/lambda)^k for 0 < x < lambda, and for the standard logistic, Q_Y(p) = ln[p/(1 - p)]. Therefore
t = ln[(x/lambda)^k / (1 - (x/lambda)^k)].
So, putting this value of t into F_T(t) gives the cdf of the NPLD,
F_X(x) = (1/2)[1 + erf((ln[(x/lambda)^k / (1 - (x/lambda)^k)] - mu)/(sigma sqrt(2)))],
where the error function erf(.) is given by
erf(z) = (2/sqrt(pi)) Integral_0^z exp(-u^2) du.
The corresponding pdf of the NPLD is given by taking the first derivative of F_X(x) with respect to x:
f_X(x) = [k/(x(1 - (x/lambda)^k))] (1/(sigma sqrt(2 pi))) exp{-(ln[(x/lambda)^k / (1 - (x/lambda)^k)] - mu)^2 / (2 sigma^2)}, 0 < x < lambda,   (9)
where mu is a location parameter, k is a shape parameter, sigma is a scale parameter, and lambda doubles as a scale and upper bound parameter. A random variable X follows an NPLD if it can be written as X ~ NPLD(mu, sigma, k, lambda). Figure 1 is the pdf plot of the NPLD, which shows that the NPLD can be bimodal, skewed and of varying kurtosis for some parameter values.
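The reconstructed cdf and pdf translate directly into code. The sketch below assumes the T-power{logistic} construction as reconstructed here (standard-normal T, power-function baseline, standard-logistic quantile) and should be read as an illustration of that derivation, not as the authors' reference implementation.

```python
# NPLD cdf and pdf under the reconstruction above.
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def npld_cdf(x, mu, sigma, k, lam):
    u = (x / lam) ** k                 # power-function baseline cdf
    t = np.log(u / (1.0 - u))          # standard logistic quantile of u
    return norm.cdf(t, loc=mu, scale=sigma)

def npld_pdf(x, mu, sigma, k, lam):
    u = (x / lam) ** k
    t = np.log(u / (1.0 - u))
    jac = k / (x * (1.0 - u))          # dt/dx, the change-of-variable factor
    return jac * norm.pdf(t, loc=mu, scale=sigma)

# Sanity check: the pdf should integrate to 1 on (0, lam).
total, _ = quad(npld_pdf, 1e-9, 2 - 1e-9, args=(0.0, 1.0, 1.0, 2.0))
print(round(total, 4))  # ~1.0
```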
Useful Transformation
Lemma 2.1: If X ~ NPLD(mu, sigma, k, lambda) and W = ln[(X/lambda)^k / (1 - (X/lambda)^k)], then W follows a normal distribution with parameters mu and sigma, with pdf
f_W(w) = (1/(sigma sqrt(2 pi))) exp{-(w - mu)^2 / (2 sigma^2)}.
Proof: Recall the pdf of the NPLD in (9). We want to show that the random variable W follows a normal distribution with parameters mu and sigma. By change of variable, let
w = ln[(x/lambda)^k / (1 - (x/lambda)^k)].
Differentiating w with respect to x, and making dx the subject of the equation, gives
dx = [x(1 - (x/lambda)^k)/k] dw.
Now, changing the support from x to that of w, it follows from the inverse transformation that
f_W(w) = f_X(x) |dx/dw| = (1/(sigma sqrt(2 pi))) exp{-(w - mu)^2 / (2 sigma^2)}, -inf < w < inf,   (14)
which is the pdf of the normal distribution with parameters mu and sigma. Equation (14) completes the proof. ◻ From Lemma 2.1, it follows that the pdf of the NPLD with parameters (mu, sigma, k, lambda) is a proper pdf; no further proof is needed.
Survival and Related Functions of NPLD
The survival function of the NPLD is given by S_X(x) = 1 - F_X(x). The hazard function of the NPLD is given by h_X(x) = f_X(x)/[1 - F_X(x)]. The cumulative hazard function of the NPLD is given by H_X(x) = -ln[1 - F_X(x)]. The reverse hazard function of the NPLD is given by r_X(x) = f_X(x)/F_X(x).
Quantile Function
Theorem 2.2: Let X be a random variable that follows the NPLD with cdf F_X(x); then the inverse function of the cdf, which is the quantile function, exists and is given by
Q_X(p) = lambda [exp(mu + sigma Phi^(-1)(p)) / (1 + exp(mu + sigma Phi^(-1)(p)))]^(1/k).   (21)
Proof: Recall the cdf of the NPLD,
p = Phi((ln[(x/lambda)^k / (1 - (x/lambda)^k)] - mu)/sigma).
Solving for x gives
x = lambda [exp(mu + sigma Phi^(-1)(p)) / (1 + exp(mu + sigma Phi^(-1)(p)))]^(1/k),   (20)
which is the inverse function of the cdf of X and can be written as (21), where Q_X(p) is the quantile function of the NPLD, Phi^(-1)(p) is the inverse of the cdf of the standard normal distribution, and p is a uniformly generated probability value, that is, P ~ U(0, 1). Thus, Equation (21) completes the proof. ◻
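Theorem 2.2 gives inverse-transform sampling essentially for free; below is a sketch under the same reconstructed parameterization.

```python
# NPLD quantile function and inverse-transform sampling.
import numpy as np
from scipy.stats import norm

def npld_quantile(p, mu, sigma, k, lam):
    z = np.exp(mu + sigma * norm.ppf(p))   # exp(mu + sigma * Phi^{-1}(p))
    return lam * (z / (1.0 + z)) ** (1.0 / k)

def npld_rvs(size, mu, sigma, k, lam, rng=None):
    # Draw uniforms and push them through the quantile function.
    rng = np.random.default_rng(rng)
    return npld_quantile(rng.uniform(size=size), mu, sigma, k, lam)

sample = npld_rvs(1000, mu=0.0, sigma=1.0, k=1.0, lam=2.0, rng=42)
print(sample.min() > 0, sample.max() < 2)  # support is (0, lam)
```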
Measures of Partition
The quantile function can be used to derive all the measures of partition, such as the median, quartiles, octiles, deciles and percentiles. The median of the NPLD is Q_X(0.5) = lambda [exp(mu)/(1 + exp(mu))]^(1/k), since Phi^(-1)(0.5) = 0. The 1st quartile of the NPLD, which is the same as the 25th percentile, is Q_X(0.25), and the 3rd quartile, which is the same as the 75th percentile, is Q_X(0.75). Theorem 2.3: Let X be a random variable that follows the NPLD with quantile function Q_X(p); then the quantile-based skewness is robust, because it is a resistant measure that is not affected by extreme values.
Proof: Recall the median, 1st quartile (Q_1) and 3rd quartile (Q_3) of the NPLD given by Q_X(0.5), Q_X(0.25) and Q_X(0.75), respectively. The quantile-based skewness depends only on these central quantiles and is therefore unaffected by extreme values. ◻
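A short sketch of the partition measures follows. The Galton/Bowley coefficient and the Moors octile-based kurtosis used below are the standard quantile-based robust measures, and are our assumption about which robust skewness and kurtosis the text intends.

```python
# Quartile- and octile-based summaries from the quantile function above.
import numpy as np
from scipy.stats import norm

def npld_quantile(p, mu, sigma, k, lam):
    z = np.exp(mu + sigma * norm.ppf(p))
    return lam * (z / (1.0 + z)) ** (1.0 / k)

q1, q2, q3 = (npld_quantile(p, 0.0, 1.0, 1.0, 2.0) for p in (0.25, 0.50, 0.75))
bowley = (q3 - 2 * q2 + q1) / (q3 - q1)            # robust (quartile) skewness

e = [npld_quantile(i / 8, 0.0, 1.0, 1.0, 2.0) for i in range(1, 8)]  # octiles E1..E7
moors = ((e[6] - e[4]) + (e[2] - e[0])) / (e[5] - e[1])  # robust (octile) kurtosis
print(q2, bowley, moors)
```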
The mode can be derived by differentiating the pdf, equating the derivative to zero, and solving for x.
Using the product rule, write the pdf as a product u v, differentiate u and v with respect to x, insert (41), (42), (43) and (44) into (40) and equate to zero to obtain (45); the solution to (45) is the mode of the NPLD. Now, assume that sigma = k = lambda = 1 and mu = 0; then (45) reduces to (46). It is obvious from (46) that the mode of the NPLD is not unique, and it is possibly bimodal. The value of the shape parameter determines whether it is bimodal or multi-modal: if k = 1, it is bimodal; if k = 2, it will have 3 peaks; if k = 3, it will have 4 peaks. However, some of these peaks might not be visible or obvious graphically, because there can be repeated roots of the polynomial equation. The resulting equation for the mode is a polynomial of order k + 1, as shown in Equation (45). ◻
Series Expansion of NPLD
Theorem 2.6: Let X be a random variable that follows the NPLD with parameters mu, sigma, k, lambda. The pdf of X, f_X(x), is a weighted pdf of the power function distribution with parameters k and lambda, that is, f_X(x) = Psi f_R(x), where f_R(x) is the pdf of the power function distribution and Psi is the weight.
Proof: Recall the pdf of the NPLD given in (9). Given the series expansions in (48)-(56), and inserting them into the pdf of the NPLD in (9), gives Equation (57). The binomial expansion (a + y)^n = Sum_{m=0}^{n} C(n, m) a^(n-m) y^m is used in the derivation.
Moment of NPLD
Let X be a continuous random variable with pdf f_X(x); the rth moment is given by E(X^r) = Integral x^r f_X(x) dx. Recall the series expansion form of the NPLD pdf given above. Inserting f_X(x) into Equation (60), and simplifying term by term, gives Equation (65), which is the rth moment of the NPLD.
The likelihood function of the NPLD is given by L(mu, sigma, k, lambda) = Prod_{i=1}^{n} f_X(x_i); taking the log gives the log-likelihood. The maximum likelihood estimates of the parameters of the NPLD are obtained by differentiating the log-likelihood partially with respect to mu, sigma and k, equating the results to zero and solving for each parameter. The equation obtained by setting the partial derivative with respect to k to zero is not in closed form, and the value of the parameter k is found using Newton's numerical procedure provided by the R package (R Development Core Team 2009). The parameter lambda cannot be estimated using the MLE method because the support depends on it; thus, lambda is estimated from the data using lambda-hat = x_(n) + epsilon, where x_(n) is the largest observation and epsilon > 0 is a very small positive number less than 1 chosen by the user. It should be noted that the maximum likelihood estimators of the parameters mu and sigma are in closed form and will always exist provided the values of the parameters k and lambda are known. The value of the parameter lambda cannot be determined by the maximum likelihood estimation method because it is an upper bound, so it is estimated from the data. The parameter k is not in closed form, and a numerical optimization method is used to estimate it. We find the initial value of k used in the numerical optimization by first assuming that the random sample is from a power function distribution; the moment estimate of k is then k-hat = x-bar/(lambda-hat - x-bar), x-bar < lambda-hat, where x-bar is the sample mean (Ekum et al. 2020b), estimated from the data.
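A numerical MLE sketch consistent with the procedure above: lambda is plugged in from the largest observation, k is initialized from the moment-type estimate, and a general-purpose optimizer (here scipy's Nelder-Mead, standing in for the Newton procedure the text mentions) maximizes the likelihood. It reuses `npld_pdf` from the earlier sketch.

```python
# Numerical MLE for (mu, sigma, k) with a plug-in estimate of lambda.
import numpy as np
from scipy.optimize import minimize

def fit_npld(x, eps=1e-3):
    lam_hat = x.max() + eps                    # lambda from the nth order statistic
    k0 = x.mean() / (lam_hat - x.mean())       # moment-type initial value for k

    def nll(theta):
        mu, log_sigma, log_k = theta           # log-params keep sigma, k positive
        pdf = npld_pdf(x, mu, np.exp(log_sigma), np.exp(log_k), lam_hat)
        return -np.sum(np.log(pdf))

    res = minimize(nll, x0=[0.0, 0.0, np.log(k0)], method="Nelder-Mead")
    mu, log_sigma, log_k = res.x
    return mu, np.exp(log_sigma), np.exp(log_k), lam_hat
```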
Numerical Optimization of Parameter k
In the case where the parameter estimated using the Newton approximation is not optimal, a new relationship is derived by an EM algorithm. Let Ω be the parameter space of the NPLD. Recall the pdfs of the normal distribution and the NPLD in (9); substituting these pdfs into Equation (77) gives an updating equation for k, in which mu, sigma and lambda are treated as known, with mu-hat = x-bar, sigma-hat = S and lambda-hat = sup x = x_(n), where x-bar and S are the sample mean and sample standard deviation of the transformed data. Note that x_(n) - x > 0 for all x in X. Here k_1 is the initial value of k, assumed as suggested above, that is, k_1 = x-bar/(lambda-hat - x-bar), x-bar < lambda-hat, so that the updated value k_(l+1) is the new estimate of k and is optimal. Now that the optimal value of k is known, the values of mu and sigma can be estimated using Equations (72) and (73), respectively.
Error Bound and Confidence Interval for NPLD
The error bound for estimating a generic parameter Θ of the NPLD is built from the standard quantile function, where alpha is the level of significance, Θ is the parameter to be estimated, Q*_p is the standard quantile function of the NPLD with p = 1 - alpha, p in [0, 1], and S_Θ is the standard error of the estimate of Θ, that is, the square root of its variance.
The standard quantile function of the NPLD is derived from the quantile function of the NPLD when sigma = k = lambda = 1 and mu = 0, and it is given by Q*_p = exp(Phi^(-1)(p)) / (1 + exp(Phi^(-1)(p))), where Q*_p is the standard quantile function of the NPLD, Phi^(-1)(p) is the quantile function of the standard normal distribution, and p is a uniformly generated probability value. Note that the regulator parameter is adjusted to determine how large the error bound should be; in this research it is taken as 2, to accommodate the population parameter. The level of significance and the regulator are always chosen by the user, and the regulator's value can be 1, 2 or 3, depending on how large the error bound is wanted.
Thus, the 100(1 - alpha)% confidence interval for the parameter Θ is given by the point estimate of Θ plus or minus the error bound above.
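A small sketch of the interval computation, under our reconstruction of the standard quantile Q*_p and with the regulator taken as 2 as in the text; the point estimate and standard error below are placeholders.

```python
# Confidence interval from the standard NPLD quantile (reconstruction:
# Q*_p = exp(Phi^{-1}(p)) / (1 + exp(Phi^{-1}(p))), k = sigma = lam = 1, mu = 0).
import numpy as np
from scipy.stats import norm

def q_star(p):
    z = np.exp(norm.ppf(p))
    return z / (1.0 + z)

alpha, theta_hat, se = 0.05, 1.3, 0.2          # placeholder estimate and SE
half_width = q_star(1.0 - alpha) * 2 * se      # regulator taken as 2
print(theta_hat - half_width, theta_hat + half_width)
```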
Simulation Study of NPLD
The simulation study is presented to show the performance of the maximum likelihood estimators and their consistency. The procedure involves generating a uniform sample of n probabilities p and applying the quantile function defined in Equation (21) to generate NPLD random variates, for sample sizes n = 50, 100, 200 and 300, replicated 1000 times. The parameter values are set as k = mu = sigma = 0.5, k = mu = sigma = 1 and k = mu = sigma = 2, for a fixed lambda = 2. The actual values, mean estimates, standard errors and 95% confidence intervals are presented in Tables 1, 2 and 3, which show that the standard error decreases as the sample size increases, implying that the MLEs are consistent.
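The consistency study can be reproduced in a few lines by combining the earlier `npld_rvs` and `fit_npld` sketches; the replication count is reduced from 1000 to 200 for brevity, and the parameter values below are illustrative.

```python
# Condensed consistency check: estimates should tighten as n grows.
import numpy as np

true = dict(mu=1.0, sigma=1.0, k=1.0, lam=2.0)
for n in (50, 100, 200, 300):
    est = np.array([fit_npld(npld_rvs(n, **true, rng=r))[:3] for r in range(200)])
    print(n, est.mean(axis=0), est.std(axis=0))  # mean estimates, std errors
```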
Generalized Linear Regression Model for NPLD (NPGLM)
Let us assume that the dependent random variable Y of interest in our linear model follows an NPLD given the independent variable(s) X. The linear regression model is called the NPLD Generalized Linear Model (NPGLM). The linear model in matrix form is
Y = XB + e,   (82)
where Y is an n-dimensional vector called the dependent vector for all n observations; X is the set of k independent variables packed into an (n x (k + 1)) matrix called the design matrix; B is a (k + 1)-dimensional vector called the slope vector; and e is the error term packed into an n-dimensional vector called the error vector.
Conditions for NPGLM
The conditions for using the NPGLM to fit the model are as follows:
• Y must be a continuous random variable;
• Y must be a positive real number, strictly greater than zero but strictly less than lambda (the upper bound for Y);
• Y must follow the NPLD;
• the NPLD must be a member of the exponential family.
Exponential Class of NPLD
An exponential family or class is a parametric set of probability distributions having a certain form, chosen for mathematical convenience based on some useful algebraic properties, as well as for generality (Akarawak et al. 2017). It is assumed that each component of Y follows a distribution in the exponential family of the form
f(y) = exp{[T(y) theta - b(theta)]/a(phi) + c(T(y), phi)},   (83)
where a(phi) is a function of a known parameter phi only, b(theta) is a function of a canonical parameter theta, c(T(y), phi) is a function of y and phi only, and T(y) is a function of y known as the sufficient statistic for Y. Let us assume that Y is a random variable that follows the NPLD. Recall the pdf of the NPLD with parameters mu, sigma, k, lambda given in (9), where the parameter lambda is an upper bound. The pdf f(y) is not free of the parameter lambda, and hence might be difficult to express as a member of the exponential family.
However, a simple transformation, as proved in Lemma 2.1, converts data that follow an NPLD to a normal distribution.
Recall the transformed pdf (84). Taking the log of (84) gives (85), and taking the exponential of (85) gives (86). Comparing (86) with (83) identifies the exponential-family components, where w is a function of y, k and lambda given by w = ln[(y/lambda)^k / (1 - (y/lambda)^k)]. Since (86) can be written in the exponential class, we can directly derive the joint sufficient statistics from it. The joint sufficient statistics for mu and sigma are w and w^2, respectively; thus, w and w^2 carry all the information concerning the parameters mu and sigma, respectively.
Maximum Likelihood Estimation of the Parameters of NPLD Regression Model
The log-likelihood of the pdf of the NPLD is formed from (9), and the link function is given by (84). The MLE parameter estimate for b_j is then in closed form, where the value of lambda can be approximated from the data using the nth order statistic, or simply lambda-hat = max(y_i) + s_ybar, where s_ybar is the standard error of y computed from the data. An approximation for k can also be derived from the data using k-hat = y-bar/(lambda-hat - y-bar), y-bar < lambda-hat, where y-bar is the sample mean, derived from Ekum et al. (2020b).
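Putting the pieces together, the transformation lemma suggests the following NPGLM fitting sketch: with lambda and k fixed by the plug-in rules quoted above, w = ln[(y/lambda)^k / (1 - (y/lambda)^k)] is normal with mean XB, so the closed-form estimate of B is ordinary least squares on w. Treat this as an illustration of the model, not the authors' published code.

```python
# NPGLM fit via the normalizing transformation and OLS.
import numpy as np

def fit_npglm(X, y):
    lam = y.max() + y.std(ddof=1) / np.sqrt(len(y))  # max(y_i) + standard error of y
    k = y.mean() / (lam - y.mean())                  # moment-type approximation
    u = (y / lam) ** k
    w = np.log(u / (1.0 - u))                        # normalizing transformation
    Xd = np.column_stack([np.ones_like(y), X])       # add intercept column
    b, *_ = np.linalg.lstsq(Xd, w, rcond=None)       # OLS = MLE for normal w
    return b, k, lam

def predict_npglm(Xnew, b, k, lam):
    w = np.column_stack([np.ones(len(Xnew)), Xnew]) @ b
    return lam * (np.exp(w) / (1.0 + np.exp(w))) ** (1.0 / k)  # back-transform
```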
Application
In this section, applications to three real data sets are provided to illustrate the uses and importance of the NPLD. Three competing models are used to fit the data of interest: the NPLD, normal and gamma GLMs.
Application 1: Estimated Spill Volume (ESV) of Crude Oil in Nigeria
The data on the estimated spill volume (ESV) were collected from 7th January 2011 to 27th December 2019 from the Shell Nigeria website (www.shell.com.ng/sustainability/environment/oil-spills.html). Figure 2 shows that the oil spill data are bimodal, with positive skewness (1.1302) and kurtosis (3.3977).
Fitting the Models to Oil Spill Data
The estimated spill volume of crude oil can be determined by the duration of clean-up (DOC): if the duration of clean-up is known, the spill volume can be estimated from an appropriate model. Thus, the dependent variable is ESV and the independent variable is DOC. Table 4 shows the estimated model parameters, their standard errors and the corresponding P-values. Table 5 shows that the NPLD regression model outperforms the other regression models on all the selection criteria.
Application 2: Total Research Gate Score
Total Research Gate (TRG) score data are cross-sectional data collected from the Research Gate pages of 100 selected researchers in the field of mathematical science (Fig. 3). Figure 3 shows that the TRG score data are bimodal, with positive skewness of 0.1595 and kurtosis of 1.9747.
Fitting the Models to Research Gate Data
The TRG score can be predicted by citations and research items: if citations and research items increase, the TRG score will also increase. Thus, the dependent variable is the TRG score, while the independent variables are citations and research items. Table 6 shows the model parameters estimated using MLE, their standard errors and the corresponding P-values. The fitted NPLD regression model shows that the slope estimates are significant at the 5% level; this is also true for the gamma and normal regression models. Table 7 shows that the NPLD regression model outperforms the other regression models on all the goodness-of-fit criteria.
Application 3: Gross Domestics Product per Capita per COVID-19 Cases
The data used here are daily data collected from the World Health Organisation (WHO) from 1st June 2020 to 31st December 2020, spanning 214 observations, as used by Iluno et al. (2021). The independent variable is a measure of COVID-19, termed COVID-19 Mortality per 1 million persons in the population (CMP), while the dependent variable is the GDP per Capita per COVID-19 cases (RGDPC). The CMP is a proxy for COVID-19 mortality, while the RGDPC is a proxy for the economic wellbeing of a country. Figure 4 shows that the RGDPC data have a positive skewness of 2.317554 and kurtosis of 7.896267; these data are highly skewed and very peaked (leptokurtic).
Fitting the Models to COVID-19 Data
The RGDPC can be predicted by the CMP: if COVID-19 mortality per population is high, it can negatively affect a country's GDP per capita. Thus, the dependent variable is RGDPC and the independent variable is CMP. Competing distributions are used to fit the GLM, and the performance of the three competing models when fitted to the RGDPC data is presented in Tables 8 and 9. Table 8 shows the estimated model parameters, their standard errors and the corresponding P-values. Table 9 shows that the NPLD regression model outperforms the other regression models on all the selection criteria.
Conclusions
This study developed a novel NPLD model using the T-Power{logistic} family of distributions. The cdf, pdf, survival function, hazard rate, cumulative hazard function, reverse hazard function, a useful transformation, the quantile function, mode, robust skewness, robust kurtosis, series expansion and moments were derived, together with the maximum likelihood estimation of the parameters of the distribution and of its generalized regression model. The NPLD regression model was applied to three real-life data sets, namely the estimated spill volume (ESV) of crude oil in the Niger Delta area of Nigeria, the Total Research Gate (TRG) score of selected researchers on Research Gate, and GDP per capita per COVID-19 cases (RGDPC), and its performance compared favourably with the normal and gamma regression models. The goodness-of-fit statistics showed that the NPLD regression model outperforms the other regression models on all the selection criteria, for the ESV model as well as for the TRG score and RGDPC models. Hence, the NPLD regression model can be used effectively to analyze and model crude oil spill volume data, TRG score data, RGDPC and other related data when the normal distribution is not a good fit.
This research therefore makes the following recommendations:
• The NPLD model should be used to estimate the spill volume of crude oil and the total Research Gate score.
• The convoluted NPLD distribution should be used when the normal distribution is not a good fit to emerging data of interest.
• Based on the applications, clean-up of spilled oil should be carried out immediately and completed in record time, because the duration of clean-up can be used to estimate the spilled volume of crude oil.
• Researchers should increase the research items they upload to Research Gate and write quality papers to increase their citations, in order to increase their total Research Gate score.
• COVID-19 mortality should be reduced by providing a medical response to infected individuals, because it can affect the economic wellbeing of the nation.
"Mathematics"
] |
Hamiltonian quantum simulation with bounded-strength controls
We propose dynamical control schemes for Hamiltonian simulation in many-body quantum systems that avoid instantaneous control operations and rely solely on realistic bounded-strength control Hamiltonians. Each simulation protocol consists of periodic repetitions of a basic control block, constructed as a suitable modification of an "Eulerian decoupling cycle," that would otherwise implement a trivial (zero) target Hamiltonian. For an open quantum system coupled to an uncontrollable environment, our approach may be employed to engineer an effective evolution that simulates a target Hamiltonian on the system, while suppressing unwanted decoherence to the leading order. We present illustrative applications to both closed- and open-system simulation settings, with emphasis on simulation of non-local (two-body) Hamiltonians using only local (one-body) controls. In particular, we provide simulation schemes applicable to Heisenberg-coupled spin chains exposed to general linear decoherence, and show how to simulate Kitaev's honeycomb lattice Hamiltonian starting from Ising-coupled qubits, as potentially relevant to the dynamical generation of a topologically protected quantum memory. Additional implications for quantum information processing are discussed.
Introduction
The ability to accurately engineer the Hamiltonian of complex quantum systems is both a fundamental control task and a prerequisite for quantum simulation, as originally envisioned by Feynman [1,2,3]. The basic idea underlying Hamiltonian simulation is to use an available quantum system, together with available (classical or quantum) control resources, to emulate the dynamical evolution that would have occurred under a different, desired Hamiltonian not directly accessible to implementation [4]. From a control-theory standpoint, the simplest setting is provided by open-loop Hamiltonian engineering in the time domain [5,6], whereby coherent control over the system of interest is achieved solely based on suitably designed time-dependent modulation (most commonly sequences of control pulses), without access to ancillary quantum resources and/or measurement and feedback. While open-loop Hamiltonian engineering techniques have their origin and a long tradition in nuclear magnetic resonance (NMR) [8,7], the underlying physical principles of "coherent averaging" have recently found widespread use in the context of quantum information processing (QIP), leading in particular to dynamical symmetrization and dynamical decoupling (DD) schemes for control and decoherence suppression in open quantum systems [9,10,11,12,13,14].
As applications for quantum simulators continue to emerge across a vast array of problems in physics and chemistry, and implementations become closer to experimental reality [3,15,16], it becomes imperative to expand the repertoire of available Hamiltonian simulation procedures, while scrutinizing the validity of the relevant control assumptions. With a few exceptions (notably, the use of so-called "perturbation theory gadgets" [17]), open-loop Hamiltonian simulation schemes have largely relied thus far on the ability to implement sequences of effectively instantaneous, "bang-bang" (BB) control pulses [18,19,20,21,22,23,24,25]. While this is a convenient and often reasonable first approximation, instantaneous pulses necessarily involve unbounded control amplitude and/or power, something which is out of reach for many control devices of interest and is fundamentally unphysical. In the context of DD, a general approach for achieving (to at least the leading order) the same dynamical symmetrization as in the BB limit was proposed in [26], based on the idea of continuously applying bounded-strength control Hamiltonians according to an Eulerian cycle, so-called Eulerian DD (EDD). From a Hamiltonian engineering perspective, EDD protocols translate directly into bounded-strength simulation schemes for specific effective Hamiltonians, most commonly the trivial (zero) Hamiltonian in the case of "non-selective averaging" for quantum memory (or "time-suspension" in NMR terminology). More recently, EDD has also served as the starting point for bounded-strength gate simulation schemes in the presence of decoherence, so-called dynamically corrected gates (DCGs) for universal quantum computation [27,28,29,30].
In this work, we show that the approach of Eulerian control can be further systematically exploited to construct bounded-strength Hamiltonian simulation schemes for a broad class of target evolutions on both closed and open (finite-dimensional) quantum systems. Our techniques are device-independent and broadly applicable, thus substantially expanding the control toolbox for programming complex Hamiltonians into existing or near-term quantum simulators subject to realistic control assumptions.
The content is organized as follows. We begin in Sect. II by introducing the appropriate control-theoretic framework and by reviewing the basic principles underlying open-loop simulation via average Hamiltonian theory, along with its application to Hamiltonian simulation in the BB setting. Sect. III is devoted to constructing and analyzing simulation schemes that employ bounded-strength controls: while Sec. III.A reviews required background material on EDD, Sec. III.B introduces our new Eulerian simulation protocols for a generic closed quantum system. In Sec. III.C we separately address the important problem of Hamiltonian simulation in the presence of slowly-correlated (non-Markovian) decoherence, identifying conditions under which a desired Hamiltonian may be enacted on the target system while simultaneously decoupling the latter from its environment, and making further contact with DCG protocols. Sect. IV presents a number of illustrative applications of our general simulation schemes in interacting multi-qubit networks. In particular, we provide explicit protocols to simulate a large family of two-body Hamiltonians in Heisenberg-coupled spin systems additionally exposed to depolarization or dephasing, as well as to achieve Kitaev's honeycomb lattice Hamiltonian starting from Ising-coupled qubits. In all cases, only local (single-qubit, possibly collective) control Hamiltonians with bounded strength are employed. A brief summary and outlook conclude in Sec. V.
Control-theoretic framework
We consider a quantum system S, with associated Hilbert space H, whose evolution is described by a time-independent Hamiltonian H. As mentioned, Hamiltonian simulation is the task of making S evolve under some other time-independent target Hamiltonian, say, H̃. Without loss of generality, both the input and the target Hamiltonians may be taken to be traceless. Two related scenarios are worth distinguishing for QIP purposes:
• Closed-system simulation, in which case S coincides with the quantum system of interest, S (also referred to as the "target" henceforth), which undergoes purely unitary (coherent) dynamics;
• Open-system simulation, in which case S is a bipartite system on H ≡ H_S ⊗ H_B, where B represents an uncontrollable environment (also referred to as the bath henceforth), and the reduced dynamics of the target system S is non-unitary in general.
In both cases, we shall assume the target system S to be a network of interacting qudits, hence H_S ≃ (C^d)^⊗n, for finite d and n. In the general open-system scenario, the joint Hamiltonian on H may be expressed in the following form,

H = H_S ⊗ I_B + I_S ⊗ H_B + Σ_α S_α ⊗ B_α,   (1)

where the operators H_S (H_B) and S_α (B_α) act on H_S (H_B) respectively, and all the bath operators are assumed to be norm-bounded, but otherwise unspecified (potentially unknown). The closed-system setting is recovered from Eq. (1) in the limit S_α ≡ 0. Likewise, we may express the target Hamiltonian H̃ in a similar form, with two simulation tasks being of special relevance: S̃_α ≡ 0, in which case the objective is to realize a desired system Hamiltonian H̃_S while dynamically decoupling S from its bath B, thereby suppressing unwanted decoherence [11]; or, more generally, H_S → H̃_S and S_α → S̃_α, where the simulated, dynamically symmetrized error generators S̃_α may allow for decoherence-free subspaces or subsystems to exist [13,31].
Formally, the dynamics is modified by an open-loop controller acting on the target system according to

H → H + H_c(t), H_c(t) = Σ_u f_u(t) X_u,   (2)

where the operators {X_u = X_u†} and the (real) functions {f_u(t)} represent the available control Hamiltonians and the corresponding, generally time-dependent, control inputs, respectively. Clearly, if the Hamiltonian (H̃ − H) is contained in the admissible control set, the corresponding control problem is trivial and the desired time-evolution, Ũ(t) = e^{−iH̃t}, t ≥ 0, can be exactly simulated continuously in time. However, this level of control need not be available in settings of interest, including open quantum systems, where control actions are necessarily restricted to the target system S alone, H_c(t) ≡ H_c(t) ⊗ I_B in Eq. (2). Following the general idea of "analog" quantum simulation [3], we shall assume in what follows a restricted set of control Hamiltonians (in a sense to be made more precise later) and focus on the task of approximately simulating the desired time evolution Ũ(t) at a final time t = T̃_f or, more generally, stroboscopically in time, that is, at instants t = t_M ≡ M T, M ∈ N, where T is a fixed minimum time interval. Choosing T sufficiently small allows in principle any desired accuracy in the approximation to be met, with the limit T → 0 formally recovering the continuous limit. Specifically, let U(t) and U_c(t) denote the unitary propagators associated to the total and the control Hamiltonians in Eq. (2), respectively:

U(t) = T exp[−i ∫_0^t (H + H_c(s)) ds],   (3)
U_c(t) = T exp[−i ∫_0^t H_c(s) ds],   (4)

where we have set ℏ = 1 and T indicates time-ordering, as usual. Then, for a given pair (H, H̃), we shall provide sufficient conditions for H̃ to be "reachable" from H and, if so, devise a cyclic control procedure such that the resulting controlled propagator obeys

U(M T_c) ≈ Ũ(M T) = e^{−i H̃ M T},   (5)

where T_c is the cycle time of the controller, that is, U_c(t + T_c) = U_c(t). In general, we shall allow for T_c to differ from T, corresponding to an overall scale factor in the simulated time, as will become apparent later. If, for a fixed input Hamiltonian H, arbitrary target Hamiltonians are reachable for given control resources, the simulation scheme is referred to as universal. In this case, complete controllability must be ensured by the tunable Hamiltonians X_u in conjunction with the "drift" H_S [6]. In contrast, we shall be especially interested in situations where control over S is more limited.
Similar to DD protocols, Hamiltonian simulation protocols are most easily constructed and analyzed by effecting a transformation to the "toggling" frame associated to U_c(t) in Eq. (4) [7,11,14]. That is, evolution in the toggling frame is generated by the time-dependent, control-modulated Hamiltonian

H′(t) ≡ U_c†(t) H U_c(t),   (6)

with the corresponding toggling-frame propagator U′(t) being related to the physical propagator in Eq. (3) by U(t) = U_c(t) U′(t). Since the control propagator is cyclic and H is time-independent, it follows that U(t_M) = U′(t_M) and, furthermore, H′(t) acquires the periodicity of the controller, U′(t_M) = [U′(T_c)]^M. Thus, the stroboscopic controlled dynamics of the system is determined by U′(T_c). Average Hamiltonian theory [7,35] may then be invoked to associate an effective time-independent Hamiltonian H̄ to the evolution in the toggling frame:

U′(T_c) ≡ e^{−i H̄ T_c},   (7)

where H̄ is determined by the Magnus expansion [32],

H̄ = H̄^(0) + H̄^(1) + H̄^(2) + · · ·   (8)

Explicitly, the leading-order term, determining evolution over a cycle up to the first order in time, is given by

H̄^(0) = (1/T_c) ∫_0^{T_c} H′(t) dt,   (9)

with (absolute) convergence being ensured as long as t‖H‖ < π [34]. Subject to the convergence condition, higher-order corrections for evolution over time t can also be upper-bounded by (see Lemma 4 in [33])

‖H̄ − Σ_{k=0}^{κ−1} H̄^(k)‖ t = O[(‖H‖ t)^{κ+1}].   (10)

Ideally, one would like to achieve H̄ T_c = H̃ T, so that equality would hold in Eq. (5) for all M ∈ N. In what follows, we shall primarily focus on achieving first-order simulation instead, by engineering the control propagator U_c(t) in such a way that

H̄^(0) T_c = H̃ T,   (11)

whereby, using Eq. (10) with κ = 1,

H̄ T_c = H̃ T + O[(‖H‖ T_c)^2].   (12)

In general, the accuracy of the approximation in Eq. (11) improves as the "fast control limit", T_c → 0, is approached. Physically, this corresponds to requiring that the shortest control time scale (pulse separation) involved in the control sequence be sufficiently small relative to the shortest correlation time of the dynamics induced by H [35,36].
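As an illustration of Eqs. (6)-(9), the following minimal numpy sketch (our own toy model, not from the paper) takes a single qubit with drift H ∝ Z and a piecewise-constant control propagator alternating between I and X over two equal slots (a spin echo), computes the first-order average Hamiltonian, and compares the exact one-cycle toggling-frame propagator with the first-order prediction:

```python
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)

H = 0.1 * Z            # drift Hamiltonian (weak, so the Magnus series converges)
Tc = 1.0               # cycle time
frames = [I2, X]       # piecewise-constant control propagator over two equal slots

# First-order average Hamiltonian, Eq. (9): time average of Uc(t)^dag H Uc(t)
Hbar0 = sum(Uc.conj().T @ H @ Uc for Uc in frames) / len(frames)

# Exact toggling-frame propagator over one cycle (time-ordered product, later
# slots act on the left) versus the first-order prediction exp(-i Hbar0 Tc)
U_exact = expm(-1j * (X.conj().T @ H @ X) * Tc / 2) @ expm(-1j * H * Tc / 2)
print(np.linalg.norm(Hbar0))                             # ~0: Z is echoed away
print(np.linalg.norm(U_exact - expm(-1j * Hbar0 * Tc)))  # ~0 here (slots commute)
```

In this toy example the two toggled Hamiltonians commute, so the echo is exact; in general the discrepancy scales as O[(‖H‖T_c)²], consistent with Eq. (12).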
While the problem of constructing general high-order Hamiltonian simulation schemes is of separate interest, we stress that second-order simulation can be readily achieved, in principle, by ensuring that the toggling-frame Hamiltonian is time-symmetric over the cycle,

H′(T_c − t) = H′(t), 0 ≤ t ≤ T_c.

Since all odd-order Magnus corrections vanish in this case [35], it follows (by using again Eq. (10), with κ = 2) that H̄ T_c = H̃ T + O[(‖H‖ T_c)^3], correspondingly boosting the accuracy of the simulation.
Hamiltonian simulation with bang-bang controls
BB Hamiltonian simulation provides the simplest control setting for achieving the intended objective, given in Eq. (5). Two main assumptions are involved: (i) First, we must be able to express the target Hamiltonian H̃ as

H̃ = Σ_j w_j U_j† H U_j,   (13)

where {U_j} are unitary operators on S and the {w_j} non-negative real numbers (not all zero). (ii) Second, the available control resources include a discrete set of instantaneous pulses {P_j} on S, whose application results in a piecewise-constant control propagator U_c(t) over [0, T_c], with corresponding toggling-frame propagators {U_j}, U_j ≡ P_j · · · P_1, U_1 = I_S [9,14]. Assumptions (i)-(ii) together allow for the time-average in Eq. (9) to be mapped to a convex (positive-weighted) sum. Eq. (13) may be interpreted as a sufficient condition for the target Hamiltonian H̃ to be reachable from H given open-loop unitary control on S alone. Reachable Hamiltonians must thus be at least as "disordered" as the input one in the sense of majorization [21,14]. Specifically, Eq. (13) leads naturally to the following BB simulation scheme. Given simulation weights {w_j}, define the following simulation intervals and timing pattern:

τ_j ≡ w_j T, t_j = t_{j−1} + τ_j, t_0 ≡ 0.   (14)

A piecewise-constant control propagator for the basic simulation block to be repeated may then be constructed as follows:

U_c(t) = U_j, t ∈ [t_{j−1}, t_j).   (15)

By using Eq. (9), it is immediate to verify that

H̄^(0) T_c = Σ_j τ_j U_j† H U_j = H̃ T,   (16)

resulting in the desired controlled evolution, Eqs. (11)-(12), provided that the convergence conditions for first-order simulation under H are obeyed. Since, in practice, technological limitations always constrain the cycle duration to a finite minimum value T_c > 0, such conditions ultimately determine the maximum simulated time t̃_M up to which evolution under H̃ may be reliably simulated using the physical Hamiltonian H.
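A minimal numerical sketch of Eqs. (13)-(16) (illustrative only, not from the paper): we take a single qubit with input H ∝ Z, toggling frames {I, H_d} with H_d the Hadamard gate (so that H_d† Z H_d = X), and equal weights, and verify that one BB cycle approximates evolution under the simulated target to first order:

```python
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)
Hd = (X + Z) / np.sqrt(2)        # Hadamard: Hd^dag Z Hd = X

H = 0.05 * Z                     # input Hamiltonian
T = 1.0
frames = [I2, Hd]                # toggling-frame propagators U_j
weights = [0.5, 0.5]             # simulation weights w_j, Eq. (13)

# Target Hamiltonian reachable from H: sum_j w_j U_j^dag H U_j
Htarget = sum(w * U.conj().T @ H @ U for w, U in zip(weights, frames))

# One BB cycle: evolve under each toggled Hamiltonian for tau_j = w_j T,
# following the timing pattern of Eqs. (14)-(15)
U_cycle = I2
for w, U in zip(weights, frames):
    U_cycle = expm(-1j * (U.conj().T @ H @ U) * w * T) @ U_cycle

# First-order agreement with the simulated evolution, Eq. (16):
print(np.linalg.norm(U_cycle - expm(-1j * Htarget * T)))  # small, O((|H| T)^2)
```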
In analogy with BB DD schemes, realizing the prescription of Eq. (15) requires discontinuously changing the control propagator from U_j to U_{j+1} = (U_{j+1} U_j†) U_j, via an instantaneous BB pulse U_{j+1} U_j† at the jth endpoint t_j. As a result, despite its conceptual simplicity, BB simulation is unrealistic whenever large control amplitudes are not an option, and the evolution induced by H during the application of a control pulse must be considered from the outset. This demands redesigning the basic control block in such a way that the actions of H and H_c(t) are simultaneously accounted for.
Eulerian simulation of the trivial Hamiltonian
The key to overcoming the disadvantages of BB Hamiltonian simulation is to ensure that the control propagator varies smoothly (continuously) in time during each control cycle. We achieve this goal by relying on Eulerian control design [26]. To introduce the necessary group-theoretical background, we begin by revisiting how, for the special case of a target identity evolution (that is, H̃ ≡ 0, also corresponding to a "noop" gate in terms of the end-time simulated propagator), EDD can be naturally interpreted as a bounded-strength simulation scheme.
In the Eulerian approach, the available control resources include a discrete set of unitary operations on S, say, {U_γ}, γ = 1, ..., L, which are realized over a finite time interval ∆ through application of bounded-strength control Hamiltonians h_γ(t), with associated control propagators

u_γ(t) ≡ T exp[−i ∫_0^t h_γ(s) ds], u_γ(∆) = U_γ.   (17)

Note that the choice of the control Hamiltonians h_γ(t) is not unique, which allows for implementation flexibility. The unitaries {U_γ} are identified with the image of a generating set of a finite group under a faithful, unitary, projective representation ρ [26]. That is, let G ≡ {g} be a finite group of order |G|, such that each element may be written as an ordered product of elements in a generating set Γ ≡ {γ} of order |Γ| = L, let g → ρ(g) ≡ U_g be the representation map [37], and let G ≡ {U_g}. The Cayley graph C(G, Γ) of G relative to Γ can be thought of as pictorially representing all elements of G as strings of generators in Γ. Each vertex represents a group element, and a vertex g is connected to another vertex g′ by a directed edge "colored" (labeled) with generator γ if and only if g′ = γg. The number of edges in C(G, Γ) is thus equal to N ≡ |Γ||G|.
Because a Cayley graph is regular, it always has an Eulerian cycle that visits each edge exactly once and starts (and ends) on the same vertex [38,39]. Let us denote by C ≡ (γ_1, ..., γ_N) the ordered list of generators defining an Eulerian cycle on C(G, Γ) which, without loss of generality, starts (and ends) at the identity element of G.
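As a concrete illustration (our own sketch, not part of the paper), the following Python fragment builds the Cayley graph of G = Z2 × Z2 with respect to the generating set Γ = {x, z} and extracts an Eulerian cycle with Hierholzer's algorithm; the output is an ordered generator list C of length N = |Γ||G| = 8:

```python
# Cayley graph of G = Z2 x Z2 with generating set Gamma = {x, z}: each vertex g
# has one outgoing edge per generator, g -> gamma*g.
group = ['e', 'x', 'z', 'xz']

def mult(gamma, g):
    """Left multiplication gamma*g in Z2 x Z2 (exponents mod 2)."""
    a = (gamma == 'x') + g.count('x')
    b = (gamma == 'z') + g.count('z')
    return ('x' * (a % 2) + 'z' * (b % 2)) or 'e'

# Hierholzer's algorithm: a Cayley graph is regular (in-degree = out-degree),
# so an Eulerian cycle is guaranteed to exist.
unused = {g: ['x', 'z'] for g in group}   # unused outgoing generators per vertex
stack, trail = [('e', None)], []
while stack:
    g, gen = stack[-1]
    if unused[g]:
        gamma = unused[g].pop()
        stack.append((mult(gamma, g), gamma))
    else:
        stack.pop()
        trail.append(gen)
C = [gen for gen in reversed(trail) if gen is not None]
print(C, len(C))   # ordered generator list defining the cycle; N = 8
```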
Once a control Hamiltonian for implementing each generator as in Eq. (17) is chosen, an EDD protocol is constructed by assigning a cycle time as T_c ≡ N∆ and by applying the control Hamiltonians h_γ(t) sequentially in time, following the order determined by the Eulerian cycle C. Thus, for t ∈ [(j−1)∆, j∆],

U_c(t) = u_{γ_j}(t − (j−1)∆) U_c((j−1)∆),   (18)

where U_{γ_j} is the image of the generator labeling the jth edge in C. As established in [26], the lowest-order average Hamiltonian associated to the above EDD cycle has the form H̄^(0) = Π_G[F_Γ(H)], where for any operator A acting on H_S, the map

Π_G(A) ≡ (1/|G|) Σ_{g∈G} U_g† A U_g   (19)

projects onto the centralizer of G (i.e., Π_G(A) commutes with all U_g ∈ G), and

F_Γ(A) ≡ (1/|Γ|∆) Σ_{γ∈Γ} ∫_0^∆ u_γ(t)† A u_γ(t) dt   (20)

implements an average of H over both the control interval and the group generators. Accordingly, bounded-strength simulation of H̃ = 0 is achieved provided that the following DD condition is obeyed:

Π_G[F_Γ(H)] = 0.   (21)

By Schur's lemma, this is automatically ensured if the group representation acts irreducibly on H_S. Formally, the BB limit may be recovered by letting F_Γ(A) ≡ A for all A [26], reflecting the ability to directly implement all the group elements (with no overhead, as if |Γ| = 1) and corresponding to uniform simulation weights w_j = 1/|G|.
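The projection Π_G of Eq. (19) is easy to check numerically. In this minimal sketch (our illustration), G is represented irreducibly by the single-qubit Pauli group, so by Schur's lemma the group average of any traceless operator vanishes, which is exactly the mechanism behind the DD condition of Eq. (21):

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)

def pi_G(A, reps):
    """Group average (1/|G|) sum_g Ug^dag A Ug, Eq. (19)."""
    return sum(U.conj().T @ A @ U for U in reps) / len(reps)

pauli = [I2, X, Y, Z]                 # irreducible projective rep of Z2 x Z2
A = 0.3 * X + 0.7 * Z                 # a traceless single-qubit Hamiltonian
print(np.linalg.norm(pi_G(A, pauli)))        # ~0: traceless input is averaged out
print(np.allclose(pi_G(I2 + A, pauli), I2))  # True: only the identity part survives
```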
Eulerian simulation protocols beyond noop: Construction
We show how the Eulerian cycle method can be extended to bounded-strength simulation of a non-trivial class of target Hamiltonians. We assume that H̃ may be expressed as a convex unitary mixture of the group representatives U_g,

H̃ = Σ_{g∈G} w_g U_g† H U_g, w_g ≥ 0.   (22)

We construct the desired control protocol starting from an Eulerian cycle C = (γ_1, ..., γ_N) on C(G, Γ). Specifically, the idea is to append to each of the N control slots that define an EDD scheme a free-evolution (or "coasting") period of suitable duration, in such a way that the net simulated Hamiltonian is modified from H̃ = 0 to H̃ ≠ 0 as given in Eq. (22). A pictorial representation of the basic control block is given in Fig. 1. As in Eq. (17), let ∆ denote the minimum time duration required to implement each generator, hence, to smoothly change the control propagator from a value U_g to U_{g′} along the cycle. While such "ramping-up" control intervals all have the same length, each "coasting" interval is designed to keep the control propagator constant at U_g for a duration determined by the corresponding weight w_g. Since the control is switched off during coasting, continuity of the overall control Hamiltonian H_c(t) may be ensured, if desired, by requiring that

h_γ(0) = 0 = h_γ(∆), γ ∈ Γ,   (23)

in addition to the bounded-strength constraint. An Eulerian simulation protocol may be formally specified as follows. As before, let the jth time interval be denoted as [t_{j−1}, t_j], j = 1, ..., N, with t_0 = 0 and t_N defining the cycle time T_c. For each j, let τ_{g_j} ≡ w_{g_j} T as in the BB case. The duration of the jth coasting period is then assigned as

Θ_j ≡ τ_{g_j}/|Γ| = w_{g_j} T/|Γ|,   (24)

resulting in the following timing pattern {t_j} [compare to Eq. (14)]:

t_j = t_{j−1} + ∆ + Θ_j, T_c = t_N = N∆ + W T, W ≡ Σ_g w_g.   (25)

As the expression for the cycle time makes clear, the resulting protocol may be equivalently interpreted in two ways: starting from an EDD cycle, corresponding to N∆ and H̃ = 0, we introduce the coasting periods to allow for non-trivial simulated dynamics to emerge; or, starting from a BB simulation scheme for H̃, corresponding to W T, we introduce the ramping-up periods to allow for control Hamiltonians to be smoothly switched over ∆. Either way, bounded-strength protocols imply a time-overhead N∆ relative to the BB case, recovering the BB limit as ∆ → 0 as expected. Explicitly, the control propagator for Eulerian simulation has the form:

U_c(t) = u_{γ_j}(t − t_{j−1}) U_{g_{j−1}}, t ∈ [t_{j−1}, t_{j−1} + ∆],   (26)
U_c(t) = U_{g_j}, t ∈ [t_{j−1} + ∆, t_j].   (27)

The resulting first-order Hamiltonian H̄^(0) under Eulerian simulation is derived by evaluating the time-average in Eq. (9) with the control propagator given by Eqs. (26)-(27). Direct calculation along the lines of [26] yields

H̄^(0) T_c = N∆ Π_G[F_Γ(H)] + Σ_j Θ_j U_{g_j}† H U_{g_j} = N∆ Π_G[F_Γ(H)] + T Σ_g w_g U_g† H U_g,

where the last equality follows from two basic properties of Eulerian cycles: firstly, the list {g_0, g_1, ..., g_{N−1}} (and also {g_1, g_2, ..., g_N}) of the vertices that are being visited contains each element g ∈ G precisely |Γ| times; secondly, in traversing the Cayley graph, each group element g is left exactly once by a γ-labeled edge for each generator γ ∈ Γ. Thus, by recalling the definitions given in Eqs. (19) and (20), and provided that the DD condition of Eq. (21) is obeyed, we finally obtain

H̄^(0) T_c = H̃ T,   (28)

which indeed achieves the intended first-order simulation goal, Eqs. (11)-(12), as long as convergence holds.
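To make the timing pattern concrete, here is a small sketch (our own; the vertex sequence is a hypothetical Eulerian traversal of Z2 × Z2, and the weights anticipate the dipolar example of Sec. 4.1) that tabulates the interval endpoints t_j of Eq. (25):

```python
Delta, T = 0.1, 1.0                 # ramp-up duration and base simulated time
Gamma_size = 2                      # |Gamma| = 2 generators
# Vertices g_1..g_N visited by a (hypothetical) Eulerian cycle on Z2 x Z2:
# each group element appears exactly |Gamma| = 2 times
visited = ['z', 'e', 'x', 'xz', 'x', 'e', 'z', 'xz']
weights = {'e': 0.5, 'x': 0.0, 'z': 1.5, 'xz': 0.0}   # simulation weights w_g

t, endpoints = 0.0, []
for g in visited:
    theta = weights[g] * T / Gamma_size   # coasting duration Theta_j, Eq. (24)
    t += Delta + theta                    # ramp-up plus coasting, Eq. (25)
    endpoints.append(round(t, 3))
print(endpoints)        # cycle time Tc = N*Delta + W*T = 8*0.1 + 2.0*1.0 = 2.8
```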
The simulation accuracy may be improved by symmetrizing U_c^EUS(t) in time. In analogy to symmetrized EDD protocols [9], this can be easily accomplished by running the protocol and then suitably running it again in reverse. Specifically, let the duration of the coasting interval be changed as Θ_j → Θ_j/2. Run the protocol as described above until time t = N∆ + (1/2)W T. Then, from time t = N∆ + (1/2)W T until time t = T_c = 2N∆ + W T, modify Eqs. (26)-(27) so that the control block is traversed in reverse order, for j = N, ..., 1. Provided that one is able to implement the time-reversed control propagators u_{γ_j}(∆ − δ), we again obtain H̄^(0) T_c = H̃ T, with a toggling-frame Hamiltonian that is now time-symmetric over the cycle, hence ensuring that H̄^(1) = 0.
Eulerian simulation while decoupling from an environment
The ability to implement a desired Hamiltonian on the target system S, while switching off (at least to the leading order) the coupling to an uncontrollable environment B, is highly relevant to realistic applications. That is, with reference to Eq. (1), the objective is now to simultaneously achieve H̃_S ≡ H_target, S̃_α ≡ 0, by unitary control operations acting on S alone. Because the first-order Magnus term H̄^(0) is additive [recall Eq. (9)], it is appropriate to treat each summand of H individually, leading to a relevant average Hamiltonian of the form

H̄^(0) = H̄_S ⊗ I_B + I_S ⊗ H_B + Σ_α S̄_α ⊗ B_α,

where for a generic operator A on H_S we let Ā denote its first-order simulated counterpart. We can then apply the analysis of Sec. 3.2 to the internal system Hamiltonian (H_S) and each error generator (S_α) separately, to obtain in both cases a simulated operator of the form given in Eq. (28). Since the task is to decouple S from B while maintaining the non-trivial evolution due to H̃_S = H_target, the reachability condition of Eq. (22) must now ensure that

H̃_S = Σ_g w_g U_g† H_S U_g,   (29)
0 = Σ_g w_g U_g† S_α U_g, for each α.   (30)

Accordingly, it is necessary to extend the DD assumption of Eq. (21) to become

Π_G[F_Γ(H_S)] = 0,   (31)
Π_G[F_Γ(S_α)] = 0, for each α,   (32)

such that Ā = (T/T_c) Ã holds for each of the summands in H. Altogether, we recover first-order decoupled evolution, H̄^(0) T_c = H̃_S T ⊗ I_B + T_c I_S ⊗ H_B. It is interesting in this context to highlight some similarities and differences with DCGs [27], which also use Eulerian control as their starting point and are specifically designed to achieve a desired unitary evolution on the target system while simultaneously removing decoherence to the leading [27,28,30] or, in principle, arbitrarily high order [29]. By construction, the open-system simulation procedure just described does provide a first-order DCG implementation for the target gate Q ≡ exp(−i H̃_S T̃_f): in particular, the requirement that Eqs. (29)-(30) be obeyed together (for the same weights w_g) is effectively equivalent to evading the "no-go theorem" for black-box DCG constructions established in [28], with the coasting intervals and the resulting "augmented" Cayley graph playing a role similar in spirit to a (first-order) "balance-pair" implementation. Despite these formal similarities, a number of differences exist between the two approaches: first, an obvious yet important difference is that DCG constructions focus directly on synthesizing a desired unitary propagator, as opposed to a desired Hamiltonian generator. Second, while the internal system Hamiltonian, H_S, is a crucial input in a Hamiltonian simulation problem, it is effectively treated as an unwanted error contribution in analytical DCG constructions, in which case complete controllability over the target system must be supplied by the controls alone. Although in more general (optimal-control inspired) DCG constructions [30], limited external control is assumed and H_S may become essential for universality to be maintained, emphasis remains, as noted above, on end-time synthesis of a target propagator. Finally, a main intended application of DCGs is realizing low-error single- and two-qubit gates for use within fault-tolerant quantum computing architectures, as opposed to robust Hamiltonian engineering for many-body quantum simulators, which is our focus here.
Eulerian simulation protocols: Requirements
Before presenting explicit illustrative applications, we summarize and critically assess the various requirements that should be obeyed for Eulerian simulation to achieve the intended control objective of Eq. ( 5) in a closed or, respectively, open-system setting: (i) Time independence.Both the internal Hamiltonian H and the target Hamiltonian H are taken to be time-independent (and, without loss of generality, traceless).
(ii) Reachability.The target Hamiltonian H must be reachable from H, that is, there must be a control group G, with a faithful, unitary projective representation mapping g → ρ(g) = U g , such that Eq. ( 22) holds.For dynamically-corrected Eulerian simulation in the presence of an environment, this requires, as noted, that for the same weights {w g }, the desired system Hamiltonian is reachable from H S while the trivial (zero) Hamiltonian is reachable from each error generator S α separately, such that both Eqs. ( 29)-( 30) hold together.
(iii) Bounded control.For each generator γ of the chosen control group G, we need access to bounded control Hamiltonians h γ (t), such that application of h γ (t) over a time interval of duration ∆ realizes the group representative U γ = ρ(γ) = u γ (∆), additionally subject (if desired) to the continuity condition of Eq. ( 23).
(iv) Decoupling conditions.Suitable DD conditions, Eq. ( 21) in a closed system or Eqs. ( 31)- (32) in the open-system error-corrected case, must be fulfilled, in order for undesired contributions to the simulated Hamiltonians to be averaged out by symmetry to the leading order.
(v) Time-efficiency.If the choice of G is not unique for given (H, H), the smallest group should be chosen, in order to keep the number of intervals per cycle, N = |G||Γ|, to a minimum.In particular, efficient Hamiltonian simulation requires that |G| (hence also |Γ|) scales (at most) polynomially with the number of subsystems n.
The key simplification that the time-independence Assumption (i) introduces into the problem is that the periodicity of the control action is directly transferred to the toggling-frame Hamiltonian of Eq. (6), allowing one to simply focus on single-cycle evolution. Although this assumption is strictly speaking not fundamental, general time-dependent Hamiltonians may need to be dealt with on a case-by-case basis (see also [40,41,42]). A situation of special practical relevance arises in this context for open systems exposed to classical noise, in which case H_B ≡ 0 and the system-bath interaction in Eq. (1) is effectively replaced by a classical, time-dependent stochastic field. Similar to DD and DCG schemes, Eulerian simulation protocols remain applicable as long as the noise process is stationary and exhibits correlations over sufficiently long time scales [9,43].
The reachability Assumption (ii) is a prerequisite for Eulerian Hamiltonian simulation schemes. Although BB Hamiltonian simulation need not be group-based, most BB schemes follow this design principle alike. Assumption (iii), restricting the admissible control resources to physical Hamiltonians with bounded amplitude (thus finite control durations, as opposed to instantaneous implementation of arbitrary group unitaries as in the BB case), is a basic assumption of the Eulerian control approach. As remarked, our premise is that the available Hamiltonian control is limited, restricted to only the target system if the latter is coupled to an environment, and typically non-universal on H_S; in particular, we cannot directly express H̃ = H + H_c and apply H_c = H̃ − H, or else the problem would be trivial. In addition to error-corrected Hamiltonian simulation in open quantum systems, scenarios of great practical interest may arise when the control Hamiltonians are subject to more restrictive locality constraints than the system and target Hamiltonians are (e.g., two-body simulation with only local controls, see also Sec. 4.1).
The required decoupling conditions in Assumption (iv) are automatically obeyed if the representation ρ acts irreducibly on H S .This follows directly from Schur's lemma, together with the fact that the map F Γ defined in Eq. ( 20) is trace-preserving, and both H S and S α can be taken to be traceless.While convenient, irreducibility is not, however, a requirement.When the representation ρ is reducible, care must be taken in order to ensure that Assumption (iv) is nevertheless obeyed.It should be stressed that this is possible independently of the target Hamiltonian H. Therefore, if the choice (G, ρ) works for one Eulerian simulation scheme (whether ρ is irreducible or not), then it can be used for Eulerian simulation with any target H that belongs to the reachable set from H, that is, that can satisfy Eq. (22).
We close this discussion by recalling that it is always possible, for a finite-dimensional target system S, to find a control group G for which both Assumptions (ii) and (iv) are satisfied, by resorting to the concept of a transformer [22,14]. A transformer is a pair (G, ρ), where G is a finite group and ρ : G → U(H_S), g → ρ(g) = U_g, is a faithful, unitary, projective representation such that, for any traceless Hermitian operators A and B on H_S with A ≠ 0, one may express B = Σ_g w_g U_g† A U_g for suitable non-negative weights {w_g}. We illustrate this general idea in the simplest case of a single qubit, H = H_S = C². Let X, Y, Z denote the Pauli matrices and R the unitary matrix defined by

R = e^{−i(2π/3)(X+Y+Z)/√3} = −(1/2)[I + i(X + Y + Z)],   (33)

which corresponds to a rotation by an angle 4π/3 about the axis n ≡ (1, 1, 1)/√3. Direct calculation shows that R³ = I and that conjugation by R cyclically shifts the Pauli matrices, i.e., R†XR = Y, R†YR = Z, and R†ZR = X. Consider now the group G generated by elements x, y, z, r subject to the defining relations inherited from the Pauli and rotation algebra. Using the defining relations of this group, its elements can always be written as x^a z^b r^c, where a, b ∈ {0, 1} and c ∈ {0, 1, 2}. Clearly, the assignment ρ given by x → X, y → Y, z → Z, r → R yields a faithful, unitary, irreducible representation, since the Pauli matrices commute up to phase. It is shown in [22] that the pair (G, ρ) defines a transformer in the sense given above, namely, any 2 × 2 traceless matrix B may be reached from any fixed 2 × 2 traceless, nonzero matrix A, for suitable non-negative weights w_g. The irreducibility property for any transformer pair can be easily established by contradiction [44].
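The algebraic properties claimed for R are easy to verify numerically; the following sketch (ours) checks R³ = I and the cyclic permutation of the Pauli matrices under conjugation:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)

# R = exp(-i(2*pi/3)(X+Y+Z)/sqrt(3)): rotation by 4*pi/3 about (1,1,1)/sqrt(3)
R = -0.5 * (I2 + 1j * (X + Y + Z))

print(np.allclose(R @ R @ R, I2))            # True: R^3 = I
print(np.allclose(R.conj().T @ X @ R, Y))    # True: R^dag X R = Y
print(np.allclose(R.conj().T @ Y @ R, Z))    # True: R^dag Y R = Z
print(np.allclose(R.conj().T @ Z @ R, X))    # True: R^dag Z R = X
```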
A drawback of the transformer formalism is that general transformer groups tend to be large, making purely transformer-based simulation schemes inefficient. In practice, given the native system Hamiltonian H_S, the challenge is to find a group G that grants a reasonably efficient scheme while satisfying Assumptions (ii) and (iv), subject to the ability to implement the required control operations. As we shall see next, transformer-inspired ideas may still prove useful in devising simulation schemes in the presence, for instance, of additional symmetry conditions.
Illustrative applications
In this section, we explicitly analyze simple yet paradigmatic Hamiltonian simulation tasks motivated by QIP applications.While a number of other interesting examples and generalizations may be envisioned (as also further discussed in the Conclusions), our goal here is to give a concrete sense of the usefulness and versatility of our Eulerian simulation approach in physically realistic control settings.In particular, we focus on achieving non-local Hamiltonian simulation using only bounded-strength local (singlequbit) control, in both closed and open multi-qubit systems.
Eulerian simulation in closed Heisenberg-coupled qubit networks
Let us start from the simplest case of a system consisting of n = 2 qubits, interacting via an isotropic Heisenberg Hamiltonian of the form

H_iso = J (X₁X₂ + Y₁Y₂ + Z₁Z₂) ≡ J σ₁ · σ₂,

where J has units of energy and the second equality defines an equivalent compact notation. We are interested in a class of target XYZ Hamiltonians of the form

H̃_XYZ = J_x X₁X₂ + J_y Y₁Y₂ + J_z Z₁Z₂.   (34)

For instance, J_x = J_y = ±J, J_z = 0 corresponds to an isotropic XX model, whereas if J_x = J_y with J_z ≠ 0, an XXZ interaction is obtained, the special value J_z = ∓2J corresponding to the important case of a dipolar Hamiltonian. The construction of a simulation protocol starts from observing that Hamiltonians as in Eq. (34) are reachable from H, in the sense of Eq. (22), based on single-qubit control only. Specifically, let G ≡ Z₂ × Z₂ ≡ Z₂², and let the representation ρ map (n, m) ∈ G to X^n Z^m ⊗ I. That is, G is mapped to the following set of unitaries:

G₁ ≡ {I, X₁, Y₁, Z₁} (up to phase).   (35)

Choosing the generators of G to be (1, 0) → γ_{x,1} = X₁ and (0, 1) → γ_{z,1} = Z₁, we assume that we have access to the control Hamiltonians

h_{x,1}(t) = f_x(t) X₁, h_{z,1}(t) = f_z(t) Z₁,

where the control inputs f_x(t) and f_z(t) satisfy f_u(0) = 0 = f_u(∆) and ∫₀^∆ f_u(τ) dτ = π/2, for u = x, z. Recalling Eq. (17), this yields the control propagators u_x(t) and u_z(t), with u_x(∆) = X₁ and u_z(∆) = Z₁ (up to phase), as desired.
Note that, for any single-qubit Hamiltonians A and B, averaging over the unitary group in Eq. (35) results in the following projection super-operator:

Π_G(A ⊗ B) = (1/4) Σ_{U∈G₁} U† (A ⊗ B) U = (tr A/2) I ⊗ B.   (36)

In general, the map F_Γ is trace-preserving and, in this case, it acts non-trivially only on the first qubit; thus, F_Γ preserves tracelessness on the first qubit. Since each term in H is traceless on the first qubit, the decoupling condition Π_G[F_Γ(H)] = 0 follows directly from Eq. (36), even though the relevant representation ρ is, manifestly, reducible. Having satisfied our main requirements for Eulerian simulation, reachability of XYZ Hamiltonians as in Eq. (34) is equivalent to the existence of a solution to the following set of conditions:

J_x = J (w_I + w_{X₁} − w_{Y₁} − w_{Z₁}),
J_y = J (w_I − w_{X₁} + w_{Y₁} − w_{Z₁}),   (37)
J_z = J (w_I − w_{X₁} − w_{Y₁} + w_{Z₁}),

for non-negative weights w_g. While infinitely many choices exist in general, minimizing the total weight W = Σ_g w_g keeps the simulation time overhead to a minimum. For instance, it is easy to verify that a dipolar Hamiltonian of the form

H_dip = J (2 Z₁Z₂ − X₁X₂ − Y₁Y₂)

may be simulated with minimum time overhead by choosing weights w_I = 1/2, w_{Z₁} = 3/2, w_{X₁} = w_{Y₁} = 0 (a numerical check of this choice is sketched at the end of this subsection). The Cayley graph associated with the resulting Eulerian simulation protocol is depicted in Fig. 2, with the explicit timing structure of the control block as in Fig. 1 and N = 2 × 4 = 8 control segments per block. It is worth observing that, although the weights w_{X₁} and w_{Y₁} are zero in the particular case at hand, all group members of G are nonetheless required, and the unitaries X₁ and Y₁ still show up in the simulation scheme (during the ramping-up sub-intervals, as evident from Eq. (26)). This is crucial to guarantee that the unwanted F_Γ term is projected out. The above analysis and simulation protocols can be easily generalized to a chain of n qubits (or spins), subject to nearest-neighbor (NN) homogeneous Heisenberg couplings, that is, a Hamiltonian of the form

H = J Σ_{i=1}^{n−1} σ_i · σ_{i+1},

where for later reference we have introduced the standard compact notation σ_i ≡ (X_i, Y_i, Z_i) and we assume for concreteness that n is even. In this case, we need only change the unitary representation ρ of Z₂ × Z₂ to be defined by the two generators

γ_{x,odd} ≡ ∏_{i odd} X_i, γ_{z,odd} ≡ ∏_{i odd} Z_i.

Physically, the required generators γ_{x,odd} and γ_{z,odd} correspond to control Hamiltonians that are still just sums of 1-local terms, and that act non-trivially on odd qubits only:

h_{x,odd}(t) = f_x(t) Σ_{i odd} X_i, h_{z,odd}(t) = f_z(t) Σ_{i odd} Z_i.

We expect that the design of Eulerian simulation schemes for more general scenarios, where both the input and the target (H, H̃) are arbitrary two-body Hamiltonians (including, for instance, long-range couplings), will greatly benefit from the existence of combinatorial approaches for constructing efficient DD groups [45,41]. A more in-depth analysis of this topic is, however, beyond our current scope.
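The weight assignment above is straightforward to verify numerically. In this sketch (ours; the dipolar form and weights are as reconstructed above), we check that the convex mixture of Eq. (22) over G₁ maps the two-qubit Heisenberg input onto the dipolar target:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)
kron = np.kron

J = 1.0
H_iso = J * (kron(X, X) + kron(Y, Y) + kron(Z, Z))      # two-qubit Heisenberg input
H_dip = J * (2 * kron(Z, Z) - kron(X, X) - kron(Y, Y))  # dipolar target (our reading)

# Weights over G1 = {I, X1, Y1, Z1} solving Eqs. (37): w_I = 1/2, w_Z1 = 3/2
reps = [kron(I2, I2), kron(X, I2), kron(Y, I2), kron(Z, I2)]
weights = [0.5, 0.0, 0.0, 1.5]

H_sim = sum(w * U.conj().T @ H_iso @ U for w, U in zip(weights, reps))
print(np.allclose(H_sim, H_dip))    # True: H_dip is reachable with total weight W = 2
```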
Error-corrected Eulerian simulation in open Heisenberg-coupled qubit networks
Imagine now that the Heisenberg-coupled system S considered in the previous section is coupled to an environment B, and the task is to achieve the desired XYZ Hamiltonian simulation while also removing arbitrary linear decoherence to the leading order. The total input Hamiltonian has the form

H = H_iso ⊗ I_B + I_S ⊗ H_B + Σ_{i=1,2} Σ_{u=x,y,z} σ_u^{(i)} ⊗ B_{u,i},   (38)

where H_B and B_{u,i}, for each i and u = x, y, z, are operators acting on H_B, whose norm is sufficiently small to ensure convergence of the relevant Magnus series, similar to first-order DCG constructions [27,28]. The target Hamiltonian then reads H̃ = H̃_XYZ ⊗ I_B, in terms of suitable coupling-strength parameters J_u as in Eq. (34). As before, we start by analyzing the case of n = 2 qubits in full detail. Our strategy to synthesize a dynamically corrected simulation scheme involves two stages: (i) we first decouple S from B, while leaving the system Hamiltonian H_S = H_iso unaffected; (ii) we then apply the closed-system protocol of Sec. 4.1 to convert H_iso into the target system Hamiltonian H̃_S = H̃_XYZ. Once a suitable group and weights are identified in this way, both stages are carried out simultaneously in application.
A suitable DD group able to suppress general linear decoherence is provided by G_DD = Z₂ × Z₂, under the n-fold tensor power representation yielding (see also [28])

G_GL ≡ {I, X^(all), Y^(all), Z^(all)},

generated, for instance, by the two collective generators γ_{x,all} = X^(all) = X₁X₂ and γ_{z,all} = Z^(all) = Z₁Z₂. In addition to the order of G_GL being minimal, with |G_GL| = 4 independently of n, step (i) above is automatically satisfied for the input Hamiltonian at hand, since

[H_iso, U_h] = 0, for all U_h ∈ G_GL.   (39)

Given a generic operator A on H = H_S ⊗ H_B, we may define the superoperator Φ_DD as

Φ_DD(A) ≡ (1/4) [A + X^(all) A X^(all) + Y^(all) A Y^(all) + Z^(all) A Z^(all)],

corresponding to weights {w_h} given by w_h = 1/4 for each U_h ∈ G_GL. In step (ii), we still rely on the group Z₂ × Z₂, but now under a different representation. We choose the representation yielding the set G₁ of Eq. (35), with the same single-qubit generators γ_{x,1} = X₁, γ_{z,1} = Z₁, and the corresponding weights {w_{g₁}} determined by the solution of Eqs. (37). Define the superoperator Φ₁ to act as Φ₁(A) ≡ Σ_{g₁} w_{g₁} U_{g₁}† A U_{g₁}. Then the combined action of the two superoperators Φ_DD and Φ₁ yields

Φ₁[Φ_DD(A)] = Σ_g w_g U_g† A U_g, w_g ≡ w_h w_{g₁}, U_g ≡ U_h U_{g₁},   (40)

where the combined group is G ≅ Z₂⁴, with unitary representation elements corresponding to the full Pauli group on two qubits:

{U_g} = {I, X₁, Y₁, Z₁} × {I, X^(all), Y^(all), Z^(all)} (up to phase).

The above representation is irreducible, with Π_G implementing the complete depolarizing channel on two qubits, Π_G(A) = (tr A/4) I, for every input A. Together with the fact that all of the system terms in H are traceless and F_Γ is trace-preserving, this ensures that the DD conditions of Eqs. (31)-(32) are satisfied. Since |G| = 16 and |Γ| = 4, the resulting Eulerian simulation cycle will involve in general N = 64 time segments, with the number of non-zero weights (hence the total weight W and the time-overhead of the simulation) being determined by the details of the error model and/or the target Hamiltonian.

A practically important case, where simpler simulation schemes are possible, occurs if qubits couple to their environment along a fixed axis, effectively corresponding to pure dephasing: say, for concreteness, that B_{y,i} = 0 = B_{z,i} for i = 1, 2 in Eq. (38). A smaller DD group suffices in this case [28], namely G_DD = Z₂, represented again in terms of collective qubit rotations, and generated by the single element γ_{z,all}. Clearly, the commutation relationship in Eq. (39) is maintained, still allowing our two-step procedure to be followed. In this case, the combined group for simulation is G ≅ Z₂³, with |G| = 8, |Γ| = 3, reducibly represented as follows on the two-qubit space:

{U_g} = {I, X₁, Y₁, Z₁} × {I, Z₁Z₂} (up to phase).   (41)

Suppose, for instance, that the task is to simulate a dipolar Hamiltonian H_dip as in Sec. 4.1. By following the above general procedure, with weights {w_h} given by w_I = w_{Z₁Z₂} = 1/2 for G_DD alone, it is easy to see that Eq. (40) simplifies, leading to simulation weights w_I = 1/4, w_{Z₁} = 3/4 = w_{Z₂}, w_{Z₁Z₂} = 1/4, with the remaining 4 weights equal to 0. While this implies that the simulation can now be achieved with only N = 8 × 3 = 24 segments per cycle and minimum weight W = 2, care is needed in ensuring that the DD conditions in Eqs. (31)-(32) are still obeyed. This may be checked by inspection. In particular, the fact that Π_G[F_Γ(X_i)] = 0 for i = 1, 2 follows by analyzing the structure of each toggling-frame "error Hamiltonian", u_{γ_j}†(t) X_i u_{γ_j}(t), for γ_j ∈ Γ = {X₁, Z₁, Z₁ + Z₂}, and verifying that no term proportional to Z₂ is generated, which would be left uncorrected by averaging over the representation in Eq. (41). Likewise, the fact that Π_G[F_Γ(H_S)] = 0 for H_S = H_iso may be directly established by a similar calculation, or by using the trace argument in Sec. 4.1 for the two group generators γ_{x,1} = X₁ and γ_{z,1} = Z₁, while also noting that for the third generator γ_{z,all} = Z₁Z₂, we have F_{Z₁Z₂}(H_iso) = H_iso and the latter is decoupled by the representation in Eq. (41), Π_G(H_iso) = 0.
Thus, Eulerian Hamiltonian simulation in the presence of single-axis errors can be efficiently achieved.
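The combined weights quoted above can be checked directly; in this numerical sketch (ours), the four-element average simultaneously maps the Heisenberg term onto the dipolar target and annihilates the single-axis error generators X₁ and X₂:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)
kron = np.kron

H_iso = kron(X, X) + kron(Y, Y) + kron(Z, Z)
H_dip = 2 * kron(Z, Z) - kron(X, X) - kron(Y, Y)

# Combined weights quoted in the text for the single-axis (X-type) error model
sim = {('I', 'I'): 0.25, ('Z', 'I'): 0.75, ('I', 'Z'): 0.75, ('Z', 'Z'): 0.25}
ops = {'I': I2, 'Z': Z}

def avg(A):
    """Weighted group average sum_g w_g Ug^dag A Ug over {I, Z1, Z2, Z1Z2}."""
    return sum(w * kron(ops[a], ops[b]).conj().T @ A @ kron(ops[a], ops[b])
               for (a, b), w in sim.items())

print(np.allclose(avg(H_iso), H_dip))      # True: system part -> dipolar target
print(np.linalg.norm(avg(kron(X, I2))),    # ~0 and ~0: error generators removed
      np.linalg.norm(avg(kron(I2, X))))
```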
Again, the schemes we have just presented for n = 2 can be generalized to a chain consisting of n spins, which interact according to a NN Heisenberg interaction and are each linearly coupled to the environment, according to Eq. (38). In this case, exploiting the results of Sec. 4.1, a useful group for simulation is provided by G ≅ Z₂⁴, under the unitary representation {U_g} ≡ G_GL × G_odd, corresponding to generators γ_{x,all}, γ_{z,all}, γ_{x,odd}, γ_{z,odd}, all of which can be implemented using only 1-local (single-qubit) Hamiltonians. As before, each simulation cycle will consist, in the general case of arbitrary linear decoherence, of N = 16 × 4 = 64 time segments. Despite the reducibility of the above representation (with the full Pauli group on n qubits consisting of 4^n elements), the DD conditions given by Eqs. (31)-(32) remain valid for reasons similar to those outlined for n = 2 under pure dephasing.
Eulerian simulation of Kitaev's honeycomb lattice Hamiltonian
We return to Eulerian simulation in closed quantum systems, but tackle a more complicated Hamiltonian of paradigmatic relevance to topological quantum memories, namely, Kitaev's honeycomb lattice model [46]. Suppose that the target system consists of a network of qubits arranged on a honeycomb lattice and interacting via NN Ising couplings. The relevant Hamiltonian H is graphically displayed in Fig. 3 (left), where vertices represent qubits and edges represent two-qubit couplings of the form Z_k Z_ℓ, with vertices k and ℓ being adjacent in the graph and Z_k indicating, as before, the Pauli Z operator acting non-trivially only on qubit k. The target Hamiltonian H̃ is shown in Fig. 3 (right), where some of the edges are now of the form X_k X_ℓ and Y_k Y_ℓ. In accordance with the figure, we shall also call the XX-edges forward-slashes, the YY-edges back-slashes, and the ZZ-edges verticals henceforth.

The basic idea to accomplish this simulation is to exploit the matrix R given in Eq. (33), in conjunction with the symmetry of our problem: since all Hamiltonian terms are precisely two-local and of the homogeneous form σ ⊗ σ, it will be possible to avoid using the full machinery of a transformer. Consider the group G generated by the three unitaries ρ_X, τ_X, and R_global, where ρ_X, shown in Fig. 4 (left) with σ = X, has X's on every second forward-slash, τ_X, shown in Fig. 4 (center) with σ = X, has X's on every second back-slash, and R_global, shown in Fig. 4 (right), has R applied to every vertex. These unitaries can be generated by one-local Hamiltonians. By repeatedly conjugating ρ_X and τ_X with R_global, we immediately see that we can also perform ρ_σ and τ_σ, shown in Fig. 4, for any σ = X, Y, Z. Note that, up to phase, all such ρ and τ commute. Because conjugation by R maps Pauli matrices to Pauli matrices, for any Pauli σ we have Rσ = (RσR⁻¹)R = σ′R, where σ′ is another Pauli matrix. Thus, up to phase, we can write any element of G in the canonical form

g = τ ρ R_global^a, τ ∈ {I, τ_X, τ_Y, τ_Z}, ρ ∈ {I, ρ_X, ρ_Y, ρ_Z}, a ∈ {0, 1, 2},   (42)

where R_global^a only appears on the right. To construct an Eulerian simulation protocol we must be able to choose w_g so that H̃ is reachable from H, i.e., obeys Eq. (22), while ensuring that the DD condition of Eq. (21) is also fulfilled. We start from the input Hamiltonian H = Σ_{⟨k,ℓ⟩} Z_k Z_ℓ, the sum running over all edges of the lattice. Observe that when U_g = ρ_X, all forward-slash edges connect vertices that are acted upon by either I ⊗ I or X ⊗ X, while all other edges connect vertices that are operated on by X ⊗ I. Consequently, (1/2)(H + ρ_X† H ρ_X) removes all Hamiltonian terms except for those along the forward-slashes; upon conjugating by R_global, we may then convert these surviving ZZ terms to XX terms, as desired. To summarize,

Φ_XX(H) ≡ (1/2) R_global† (H + ρ_X† H ρ_X) R_global

gives the Hamiltonian shown in Fig. 5 (left). Similarly, the effect of (1/2)(H + τ_X† H τ_X) is to leave precisely the back-slash edges, which can be converted from ZZ to YY by conjugation by R²_global. Thus,

Φ_YY(H) ≡ (1/2) (R²_global)† (H + τ_X† H τ_X) R²_global

gives the Hamiltonian shown in Fig. 5 (center). Lastly, it is not hard to see that the product ρ_X τ_X has X's on every second row of verticals; accordingly,

Φ_ZZ(H) ≡ (1/2) [H + (ρ_X τ_X)† H (ρ_X τ_X)]

isolates precisely the verticals, giving the Hamiltonian shown in Fig. 5 (right). In this case, no R-conjugation is necessary, since we wish to maintain ZZ edges along the verticals. Putting all these steps together, we conclude that

H̃ = Φ_XX(H) + Φ_YY(H) + Φ_ZZ(H),

thus providing the desired weights for the Eulerian protocol. Since there are |Γ| = 3 generators and, from Eq. (42), |G| = 4 × 4 × 3 = 48 group elements, each control block consists of N = 144 time intervals. Lastly, we must verify that Eq. (21) holds. Note that F_Γ(H) acts via conjugating each vertex by unitaries (since the generating pulses are one-local), and since such an operation is trace-preserving at each vertex, this necessarily takes the precisely two-local terms in H to precisely two-local terms in F_Γ(H). Since no one-local terms can arise, all terms are of the form σ_u^k σ_v^ℓ, where k and ℓ are adjacent vertices and σ_u, σ_v ∈ {X, Y, Z}. Due to the canonical form of our group elements, Eq. (42), the action of Π_G reads

Π_G(A) = (1/48) Σ_{a=0}^{2} Σ_τ Σ_ρ (τ ρ R_global^a)† A (τ ρ R_global^a),

where τ ∈ {I, τ_X, τ_Y, τ_Z} and ρ ∈ {I, ρ_X, ρ_Y, ρ_Z}, respectively. Just as the map (1/2)(H + ρ_X† H ρ_X) removes all non-forward-slash ZZ terms, the map Σ_ρ ρ† F_Γ(H) ρ depolarizes precisely one vertex of each pair of non-forward-slash vertices, and therefore suppresses all non-forward-slash terms. With only forward-slash terms remaining, Σ_τ τ† [Σ_ρ ρ† F_Γ(H) ρ] τ = 0, since the τ-sum removes all non-back-slash terms. Thus, we conclude that Π_G[F_Γ(H)] = 0, as desired.
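The edge-level algebra used above is easy to confirm numerically. This sketch (ours) restricts R_global and ρ_X to a single edge and checks both the ZZ → XX / ZZ → YY conversions and the filtering effect of the two-term average:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)
kron = np.kron
R = -0.5 * (I2 + 1j * (X + Y + Z))     # same R as in Eq. (33)

ZZ = kron(Z, Z)
R2 = kron(R, R)                        # R_global restricted to one edge
print(np.allclose(R2.conj().T @ ZZ @ R2, kron(X, X)))        # ZZ -> XX
R4 = R2 @ R2
print(np.allclose(R4.conj().T @ ZZ @ R4, kron(Y, Y)))        # ZZ -> YY

# Filtering step: (H + rho_X^dag H rho_X)/2 kills edges where rho_X acts as X (x) I
kill = kron(X, I2)                     # rho_X touching only one endpoint
keep = kron(X, X)                      # rho_X touching both endpoints
print(np.linalg.norm((ZZ + kill.conj().T @ ZZ @ kill) / 2))      # 0: term removed
print(np.allclose((ZZ + keep.conj().T @ ZZ @ keep) / 2, ZZ))     # True: term kept
```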
Conclusion and outlook
We have shown that the Eulerian cycle technique successfully employed in both dynamical decoupling schemes and dynamically corrected gates can be extended to also enable Hamiltonian quantum simulation with realistic bounded-strength controls. For given internal dynamics and control resources, we have characterized the family of reachable target Hamiltonians and provided constructive open-loop control protocols for stroboscopically implementing a desired evolution in the family with accuracy (at least) up to the second order in the sense of average Hamiltonian theory. We have additionally shown how Hamiltonian simulation may be accomplished in an open quantum system while simultaneously suppressing unwanted decoherence, provided that appropriate time-scale requirements and decoupling conditions are fulfilled. The usefulness and flexibility of our Eulerian simulation techniques have been explicitly illustrated through several QIP-motivated examples involving both unitary and open-system dynamics on interacting qubit networks. In all cases, access to purely local (single-qubit) control Hamiltonians is assumed, subject to a finite-amplitude constraint.
It is our hope that our results may be of immediate relevance to ongoing efforts for developing and programming quantum simulators in the laboratory. A number of possible generalizations and further theoretical questions may be worth considering. As an additional simulation problem, dual to the one we analyzed for Heisenberg-coupled spin chains, exploring schemes where a target Heisenberg Hamiltonian is generated out of Ising couplings only would be of special interest, given the experimental availability of the latter in existing large-scale trapped-ion simulators [16]. Likewise, it could be useful to explore whether bounded-strength simulation as proposed here may be made compatible with open-loop filtering techniques for modulating coupling strengths, such as proposed in [47], as well as in [48] in conjunction with non-unitary control via field gradients. Building on existing results for dynamical decoupling schemes [42], the use and possible advantages of randomized simulation schemes in terms of efficiency and/or robustness may be yet another avenue of investigation, especially in connection with large control groups. Lastly, it remains an important open question to determine whether simulation schemes able to guarantee a minimum fidelity over long evolution times may be devised, in the spirit of [49] for the particular case of the zero Hamiltonian.
Figure 1 .
Figure 1. Schematics of an Eulerian simulation protocol. The basic control block consists of N time intervals, each involving a "ramping-up" sub-interval of fixed duration ∆, during which H_c(t) ≠ 0, followed by a "coasting" (free evolution) period of variable duration Θ_k, Eq. (24), during which no control is applied. During the jth ramping-up sub-interval we apply h_{γ_j}, i.e., the control Hamiltonian that realizes the generator γ_j, smoothly changing the control propagator from U_{g_{j−1}} to U_{g_j}. In this way, the control protocol corresponding to Eqs. (26)-(27) is implemented. By construction, a standard EDD cycle with H̃ = 0 is recovered by letting Θ_k → 0 for all k, while in the limit ∆ → 0 standard BB simulation of H̃ is implemented.
Figure 2 .
Figure 2. Cayley graph for the Eulerian simulation of the dipolar Hamiltonian in Heisenberg-coupled qubits. Vertices are labeled by group elements; edges are labeled by group generators. Numbers in parentheses next to vertices indicate the weights w_g of the corresponding group elements g in Eq. (34), each of which is proportional to the time τ_g = w_g T spent at vertex g during the coasting sub-interval; see also Fig. 1.
Figure 3 .
Figure 3. Input and target Hamiltonians on a 2D honeycomb lattice, where qubits are placed at each vertex.Left: The system Hamiltonian H describes a system where all adjacent vertices have ZZ Ising couplings.Right: The target Hamiltonian H realizes Kitaev's honeycomb lattice model, with XX, Y Y , and ZZ couplings depending on the type of the edge.
Figure 4 .
Figure 4. Pictorial representation of different control operations.Left: The unitary ρ σ , with σ on the vertices of every second forward-slash and I on all other vertices, where σ is a fixed X, Y, or Z operator.When σ = X, this is the generator ρ X .Center:The unitary τ σ , with σ on the vertices of every second back-slash, where σ is a fixed X, Y, or Z operator.When σ = X this is the generator τ X .Right: The generator R global , with R at every vertex.
Figure 5 .
Figure 5. Pictorial representation of different simulation superoperators (see text).Left: Action of the superoperator Φ XX , leaving XX terms at forward-slashes only.Center: Action of the superoperator Φ Y Y , leaving Y Y terms at back-slashes only.Right: Action of the superoperator Φ ZZ , leaving ZZ terms at verticals only.
| 12,663 | 2013-10-15T00:00:00.000 | [
"Physics"
] |
A New Compact Octagonal Shape Perfect Metamaterial Absorber for Microwave Applications
The design, numerical simulation, fabrication, and experimental verification of a new compact octagonal-shaped perfect metamaterial absorber (PMA) unit cell based on a simple structure are presented in this paper. The suggested structure comprises three layers, which interact to produce the plasmonic resonances. The finite-integration technique (FIT) based Computer Simulation Technology (CST) microwave electromagnetic simulator was utilized to examine the design parameters and conduct absorption analysis. The design structure exhibited peak absorption values of 99.64% and 99.95% at frequencies of 8.08 GHz and 11.41 GHz, respectively. The absorption characteristics were analysed with respect to the polarization angle of the structure, the layer thickness, the PMA with a resistive load, and the number of rings. An N5227A vector network analyser was used for the measurement. The measured results of the fabricated prototype were in good agreement with the simulation results. The suggested perfect absorber structure enables numerous X-band applications, such as defence, security, and stealth technology.
Introduction
Artificial metamaterials (MMs) are engineered composites consisting of sub-wavelength metallic structures in a host dielectric medium, which are engineered to obtain unconventional properties that are not found naturally. Due to the unconventional electromagnetic properties of numerous metamaterials, namely negative permittivity, negative permeability, negative refractive index, and invisibility, the design and application of MMs has become a priority of vigorous research [1,2]. Nevertheless, MMs are also being extensively studied for different applications, for instance, perfect absorbers across the wide electromagnetic spectrum from millimetre to nanometre wavelengths [3,4], multiband absorbers [5], polarization-insensitive absorbers [6], imagers and detectors [7,8], broadband absorbers [9], smart antennas, and beam-shaping devices [10,11]. A 24 × 24 mm² Jerusalem cross with a meandered load absorber exhibits absorption of more than 95% at 14.75 GHz and 16.1 GHz [12]. Lin et al. recommended a metamaterial unit cell structure with 10.92 × 10 mm² dimensions that was applicable in the microwave regime. The absorption peaks of the structure were 96.5%, 96.8%, and 99.6% at 2.15 GHz, 2.28 GHz, and 2.38 GHz, respectively [13]. Zhao et al. offered a 10 × 10 mm² ultra-broadband perfect absorber based on an electric split-ring resonator (ESRR) loaded with lumped resistors. The design structure displayed absorption of 99.3%, 97.1%, and 98.6% at 5.45 GHz, 15.46 GHz, and 19.48 GHz, respectively [14]. Dincer et al. [15] suggested the design of an absorbing metamaterial element with near-unity absorbance. They designed, fabricated, characterized, and analysed a metamaterial absorber (MA) with an absorbance of around
99.99% at 5.48 GHz and 99.92% at 0.865 THz. However, the dimension of the unit cell structure was 36 × 36 mm². Hossain et al. recommended a design structure of a 12 × 12 mm² composite double-negative metamaterial for multiband operation and reported an effective medium ratio of 7.44 [16]. On the other hand, the above authors analysed double-negative characteristics and compactness, but absorption properties were not analysed in their study. Islam et al. [17] proposed a multi-band split S-shaped metamaterial structure for absorption analysis, where the authors obtained a maximum of 55% absorption. Kim et al. proposed a dual-band multilayer metamaterial absorber in the megahertz region, which achieved absorption of 96% in the 4.0-6.0 GHz range because of the irregular thickness of the resistive sheets [18]. A multi-band perfect metamaterial absorber based on a spiral showed absorption of 99.4%, 96.7%, and 99.1% at the three resonant frequencies 9.86 GHz, 12.24 GHz, and 15.34 GHz, respectively [5]. Wen et al. suggested a dual-band metamaterial absorber in the terahertz region, with two discrete absorptions of 80.8% and 63.4% near 0.45 and 0.92 THz [19]. A dual-band MA design, fabrication, and characterization was offered by Tao et al. [20]. Their MA exhibited absorption peaks of 85% and 94% at 1.4 THz and 3.0 THz, respectively. Polarization-independent MAs were proposed by Kollatou et al. that performed in the microwave regime. The maximum absorption value was 95.81% at 10.31 GHz [21].
In this paper, a compact octagonal-shaped perfect metamaterial absorber exhibiting dual resonance in the X-band is presented. The absorption has been studied by altering the polarization angle of the unit cell, the thickness of the substrate and metallic materials, the number of rings, and the resistive load. Moreover, the size of the proposed unit cell is 10 × 10 mm², which is physically smaller than the metamaterial unit cells stated in the literature [12,13,15,17,[22][23][24]. Moreover, the design structure provides high absorption peaks of 99.64% at 8.08 GHz and 99.95% at 11.41 GHz, respectively. It is observed that the absorption of the proposed design is better than that of the metamaterial unit cells suggested in [12][13][14][17][18][19][20][21][22][23][24][25]. To determine the absorption parameters, the Computer Simulation Technology (CST) microwave studio suite simulator 2015 is used.
Design, Numerical Simulation, and Experiment
The proposed perfect metamaterial absorber (PMA) consists of three octagonal-shaped resonators on a dielectric substrate backed by a ground plane. The structure and its structural parameters are shown in Figure 1. All of the metallic elements of the PMA are made of copper with a conductivity of 5.8 × 10⁷ S/m; the copper resonators are 35 µm thick and are printed on a substrate with a standard relative permittivity ε = 4.3 and loss tangent tan δ = 0.025. The parameters of the structure are Ls = 10 mm, Ws = 10 mm, and W1 = W2 = W3 = 0.8 mm. The double-sided copper-laminated PCB (Printed Circuit Board) (Shenzhen Zhongxinhua Electronics Co., Ltd., Shenzhen, China) is commercially available, and the design can easily be fabricated with an LPKF S63 PCB prototyping machine (LPKF Laser & Electronics, Tualatin, OR, USA) or by a chemical process. Low-cost, highly efficient flame retardant 4 (FR4) epoxy dielectric material (Shenzhen Zhongxinhua Electronics Co., Ltd., Shenzhen, China) was used as the substrate. The finite-integration technique (FIT) based CST Microwave Studio suite simulator was employed to examine the design structure. The electric field and magnetic field are polarized along the x-axis and the y-axis, respectively; thus, the wave propagates along the z-axis. Perfect electric conductor (PEC) and perfect magnetic conductor (PMC) boundary conditions are applied along the x-axis and y-axis, respectively, and two waveguide ports are placed on the positive and negative z-axis. In addition, periodic boundary conditions with the waveguide ports are used in the simulation. The schematic drawing of the proposed structure, the unit cell with resistive load, the fabricated unit cell, the side view of the unit cell, and the array of the structure are shown in Figure 1.
The CST frequency-domain solver was used to determine the reflection coefficient in simulation at 1001 frequency samples. The boundary conditions and measurement setup are shown in Figure 2. The fabricated sample comprises an 18 × 22 array of the copper structure, and its dimensions are 180 × 220 mm². The measurement was performed with two waveguides in a free-space environment. The scattering parameters were measured with a PNA network analyser (N5227, Agilent Technologies Sdn. Bhd., Petaling Jaya, Malaysia) covering a frequency range of 10 MHz–67 GHz. In addition, a calibration kit (Agilent N4694-60001, Agilent Technologies Sdn. Bhd., Petaling Jaya, Malaysia) was used to calibrate the network analyser, so the measurements were completed precisely. The absorption of the design structure is determined using the equation A(ω) = 1 − R(ω) − T(ω), where T(ω), R(ω), and A(ω) are the transmittance, reflectance, and absorption at angular frequency ω, respectively. Absorption depends on the scattering parameters through |S21|² = T(ω) and |S11|² = R(ω). Since the backside copper plane blocks the transmission of the EM wave, the absorption of the incident electromagnetic (EM) wave simplifies to A(ω) = 1 − R(ω). From the absorption equation, it can be seen that minimizing the scattering parameters maximizes the absorption of the metamaterial.
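The absorption calculation from the scattering parameters is simple to script; the following is a minimal NumPy sketch, with illustrative example values rather than the paper's measured data:

```python
# Absorption from S-parameters: A(w) = 1 - |S11|^2 - |S21|^2.
# With the full copper ground plane, S21 is essentially zero.
import numpy as np

def absorption(s11, s21=None):
    r = np.abs(s11) ** 2                               # reflectance R(w)
    t = np.abs(s21) ** 2 if s21 is not None else 0.0   # transmittance T(w)
    return 1.0 - r - t

s11 = np.array([0.06 + 0.02j, 0.50 + 0.10j])   # illustrative reflection data
print(absorption(s11))                          # near unity when |S11| is small
```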
Results and Discussions
The simulations of the unit cell structure and array structure were executed using a full-wave frequency-domain solver based on the FIT, with a waveguide port and periodic boundary conditions. In this paper, the scattering parameters and the absorption of the design structure as functions of angular rotation, substrate thickness, thickness of the radiating patch and ground materials, number of resonators, and resistive load are analysed. The numerical simulation and measured absorption A(ω) of the compact octagonal-shaped PMA are shown in Figure 3. As seen from Figure 3, the resonant frequencies of the proposed design lie in the X-band, and the numerical and experimental results agree with each other; the measured results exhibit the same bands as the numerical ones. In simulation, the absorption values of the structure are 99.64% and 99.95% at 8.08 GHz and 11.41 GHz, respectively, whereas the measured values are 97.74% at 8.00 GHz and 98.97% at 11.30 GHz. The measured absorption is slightly shifted towards lower frequency and reduced by a small amount in magnitude compared with the simulation; this small difference can be attributed to fabrication tolerance and the open-space measurement procedure.
The electric and magnetic field distributions are examined at the resonant frequency of 8.08 GHz to understand the physical mechanism of operation. The field distributions are shown in Figure 4. A high concentration of electric field is observed around the outer side of the rings. The electric field couples strongly with the rings and produces an electric response that acts like an electric dipole moment; hence, the charges on the outer surface are excited along the external electric field. As a result, a magnetic dipole is induced, producing a magnetic response that yields a resonant absorption. In addition, a high concentration of magnetic field is observed around the upper and lower sides of the middle rings. The magnetic fields couple strongly with the rings and induce a magnetic response that acts like a magnetic dipole moment; consequently, an electric dipole is induced, producing an electric response that yields a resonant absorption. The desired electric and magnetic responses occur at this resonant frequency concurrently, resulting in almost complete absorption of the EM wave under the ideal impedance-matching condition Z(ω) = Z0(ω). The EM energy is therefore dissipated in the structure, which produces near-zero transmission and reflection with near-unity absorption. Similar properties are observed for the second resonance at 11.41 GHz, as shown in Figure 5. As seen from Figure 5, the highest concentration of electric field occurs at the right side of the octagonal resonator, and a non-uniform perturbation of the field distribution is also observed. The electric and magnetic field responses are similar to those at 8.08 GHz but with higher intensity; hence, this resonance produces a higher absorption rate than the one at 8.08 GHz.
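The impedance-matching condition Z(ω) = Z0(ω) can be checked numerically from the simulated S-parameters. Below is a hedged sketch using the standard effective-medium retrieval expression for the normalized impedance (a textbook formula, not taken from this paper); matching to free space corresponds to a value close to 1:

```python
# Normalized input impedance z = Z/Z0 retrieved from S-parameters.
import numpy as np

def normalized_impedance(s11, s21):
    return np.sqrt(((1 + s11) ** 2 - s21 ** 2) / ((1 - s11) ** 2 - s21 ** 2))

# Illustrative values near a resonance: small reflection, negligible transmission.
print(normalized_impedance(0.05 + 0.01j, 0.001 + 0.0j))   # close to 1 + 0j
```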
Analysis of Design Structure with Polarization Angle
The numerical results of the compact octagonal-shaped PMA are presented in this subsection, and the absorption A(ω) of the design structure is illustrated in Figure 6. The effect of polarization on the PMA design has been analysed, and the absorption changes only slightly with polarization angle. The absorption values for polarization angles φ = 0°, 10°, 20°, and 30° are shown in Figure 6. The maximum absorption values are 99.64% at 8.08 GHz and 99.95% at 11.41 GHz for 0° polarization; 98.93% at 8.05 GHz and 97.21% at 10.89 GHz for 10°; 98.22% at 7.87 GHz and 99.47% at 10.97 GHz for 20°; and 97.54% at 7.89 GHz and 98.28% at 11.04 GHz for 30°. The design exhibits its highest absorption peak at 0° polarization, where the absorption is high while the transmittance is near zero and the reflectance is also small. The polarization of the electromagnetic wave marginally changes the resonance frequencies and absorption values owing to the altered effective material properties.
Analysis of Design Structure with Different Thickness of FR4 Substrate Materials
Substrate thicknesses of 0.8 mm, 1.6 mm, and 2.4 mm have been considered in the design structure to observe their effect on the absorption. As observed from Figure 7, the absorption values are 89.20% at 7.85 GHz and 95.76% at 10.94 GHz for the 0.8 mm-thick FR4 substrate; 99.64% at 8.08 GHz and 99.97% at 11.40 GHz for the 1.6 mm thickness; and 92.88% at 8.07 GHz and 99.56% at 11.57 GHz for the 2.4 mm thickness. The highest absorption values, 99.64% at 8.08 GHz and 99.97% at 11.40 GHz, are obtained for the 1.6 mm substrate thickness. The absorption rate thus differs at different frequencies depending on the substrate thickness.
The 1.6 mm substrate thickness yields the maximum peak, whereas the lowest peaks are obtained for the 0.8 mm-thick FR4 substrate.
Analysis of Design Structure with Different Thickness of Metallic Materials
The thickness of the resonator and ground plane of the PMA affects the absorption of electromagnetic energy. Thicknesses of 0.1 mm, 0.2 mm, 0.3 mm, and 0.4 mm have been analysed for the design structure.
The variation of the absorption is shown in Figure 8. The absorption values of the structure are 99.04% at 8.32 GHz and 96.98% at 11.59 GHz for a metallic material thickness of 0.1 mm; 97.96% at 8.66 GHz and 97.39% at 11.89 GHz for 0.2 mm; 96.70% at 9.01 GHz and 99.71% at 12.22 GHz for 0.3 mm; and 99.04% at 9.11 GHz and 99.92% at 12.43 GHz for 0.4 mm of copper. The highest absorption peaks, 99.04% at 9.11 GHz and 99.92% at 12.43 GHz, are obtained for the 0.4 mm metallic thickness, while the minimum peak is obtained for the 0.2 mm thickness. The amount of absorption thus differs at different frequencies depending on the metallic thickness, although the variation in absorption is small.
Analysis of Design Structure with Resistive Load
The effect of the resistive load of the octagonal resonators on the absorption has been analysed. The absorption of the design structure for different values of resistive load placed between two rings is shown in Figure 9. The maximum absorption peaks are 99.40% at 8.17 GHz and 99.74% at 11.23 GHz for a 1500 Ω resistive load; 99.54% at 8.19 GHz and 99.77% at 11.41 GHz for 3000 Ω; 99.41% at 8.20 GHz and 99.76% at 11.26 GHz for 4500 Ω; and 99.33% at 8.21 GHz and 99.94% at 11.26 GHz for 6000 Ω. The highest absorption peaks, 99.54% at 8.19 GHz and 99.77% at 11.41 GHz, are obtained for the 3000 Ω resistive load, while the lowest absorption is attained for the 1500 Ω load. The absorption values thus differ with the resistive load, which shows that the absorption performance of the PMA also depends on the lumped elements; the absorbing performance can be optimized by a proper choice of the lumped resistor in the structure.
Analysis of Design Structure with Resonator's Number
The absorption of electromagnetic energy also depends on the number of resonators used in the structure. The effect of the number of resonators on the absorption of the PMA is shown in Figure 10. The absorption peaks are 58.35% at 7.54 GHz and 91.97% at 10.89 GHz for the two inner resonators; 81.77% at 10.99 GHz for the combination of inner and outer resonators; 98.99% at 8.10 GHz and 68.97% at 11.43 GHz for the two outer resonators; and 99.64% at 8.08 GHz and 99.95% at 11.41 GHz for all three metallic resonators, which give the maximum absorption peaks.
The absorption values differ with the number of metallic resonators. The PMA with three metallic resonators shows the maximum peak, whereas the minimum peak, and a single absorption peak, are obtained for the combination of the inner and outer resonators. Hence, the absorption varies with the number of rings.
Table 1 compares the proposed PMA with other reported PMAs in terms of design structure, unit cell size, applicable band, absorption rate, and year of publication. Kollatou et al. [21] proposed a modified square-shaped structure of 8 × 8 mm² and obtained an absorption of 95.81%. Rana et al. [22] suggested a U-shaped absorber for multiband application and achieved 98% absorption with a 15 × 15 mm² design. Borah et al. [23] presented an O-shaped unit cell of 12 × 12 mm² for X-band application and obtained an absorption of 98.90%. In [17], an S-shaped metamaterial absorber was analysed using various substrate materials and different propagation axes of the electromagnetic wave, but only a small amount of absorption was obtained. Sen et al. recommended an L-shaped structure of 9 × 9 mm² for X- and Ku-band operation and attained an absorption of 95% [25]. Mahmood et al. [24] suggested a modified S-shaped structure of 16 × 16 mm² and achieved an absorption of 90%. In this paper, the compact octagonal-shaped PMA attains a higher measured absorption (98.97%) with a compact unit cell (10 × 10 mm²). The proposed metamaterial therefore achieves compactness and high absorption compared with the cited references and is suitable for the microwave regime; its compact size makes it feasible for X-band applications, and the manufacturability and robustness of the design are very good for commercial adoption.
Table 1. Comparison of previously reported PMAs and the proposed PMA.

| Reference | Design structure | Unit cell size (mm²) | Band | Absorption (%) | Year |
| Kollatou et al. [21] | Modified square shape | 8 × 8 | Microwave | 95.81 | — |
| Rana et al. [22] | U-shape | 15 × 15 | Multiband | 98.00 | — |
| Borah et al. [23] | O-shape | 12 × 12 | X-band | 98.90 | — |
| Islam et al. [17] | S-shape | 20 × 20 | S-, X-, Ku-band | 55.00 | 2017 |
| Sen et al. [25] | L-shape | 9 × 9 | X-, Ku-band | 95.00 | 2017 |
| Mahmood et al. [24] | Modified S-shape | 16 × 16 | X-band | 90.00 | 2017 |
| Proposed PMA | Octagonal shape | 10 × 10 | X-band | 98.97 | — |
Conclusions
A new compact octagonal-shaped PMA design was proposed, and its absorbing properties were analysed on the basis of numerical simulation and experimental results. The geometry of the proposed PMA structure is very simple, and it shows high absorption at microwave frequencies; the measured results are in agreement with the numerical ones. The suggested PMA is appropriate for X-band microwave applications. A comparative analysis was also carried out with respect to the polarization angle of the unit cell, different thicknesses of the substrate and metallic materials, the number of resonators, and the energy absorption with a resistive load; for all of these variations, the PMA shows good performance, with absorption values around unity. The open-space measurement method was applied to validate the results obtained with the prototype of the structure. The metamaterial structure is compact in size and has high absorption, which makes it well suited for defence and stealth systems.
Figure 1. (a) The proposed sketch of the perfect metamaterial absorber (PMA); (b) unit cell with resistive load; (c) the fabricated unit cell structure; (d) side view of the PMA; and (e) fabricated array of the PMA.
Figure 3. Simulated and measured absorption of the proposed PMA.
Figure 4. (a) Electric field and (b) magnetic field distribution at the resonance frequency of 8.08 GHz of the PMA.
Figure 5. (a) Electric field and (b) magnetic field distribution at the resonance frequency of 11.41 GHz of the PMA.
Figure 6. Absorption of the design structure with polarization angle.
Figure 7. Absorption of the design structure for different thicknesses of the substrate material.
Figure 8. Absorption of the design structure for different thicknesses of the metallic material.
Figure 9. Absorption of the design structure for different resistive loads.
Figure 10. Absorption of the design structure for different numbers of metallic resonators.
| 9,275 | 2017-12-06T00:00:00.000 | [
"Materials Science"
] |
UOR at SemEval-2021 Task 12: On Crowd Annotations; Learning with Disagreements to optimise crowd truth
Crowdsourcing has been ubiquitously used for annotating enormous collections of data. However, the major obstacles to using crowd-sourced labels are the noise and errors that come with non-expert annotations. In this work, two approaches for dealing with the noise and errors in crowd-sourced labels are proposed. The first approach uses Sharpness-Aware Minimization (SAM), an optimization technique robust to noisy labels. The other approach leverages a neural network layer called the softmax-Crowdlayer, specifically designed to learn from crowd-sourced annotations. According to the results, the proposed approaches can improve the performance of the Wide Residual Network model and the Multi-layer Perceptron model applied to crowd-sourced datasets in the image processing domain. They also achieve results similar and comparable to the majority voting technique when applied to the sequential data domain, where Bidirectional Encoder Representations from Transformers (BERT) is used as the base model in both instances.
Introduction
In recent years, there have been major advances in the use of deep learning for solving artificial intelligence problems in different domains such as sentiment analysis, image classification, natural language inference, speech recognition, and object detection. Deep learning has also been used in numerous other settings where human disagreements are encountered, such as speech recognition, visual object recognition, object detection, and machine translation (Rodrigues and Pereira, 2018). It is, however, an essential requirement for deep learning models to use labelled data when learning representations of the underlying datasets. Such labelled data are often unavailable, so humans frequently need to label the data manually.
In recent years, crowdsourcing has been used for the annotation of large collections of data and has proven to be an efficient and cost-effective means of obtaining labelled data compared to expert labelling (Snow et al., 2008). It has been used to generate image annotations for training computer vision systems (Raykar et al., 2010), to provide the linguistic annotations used for Natural Language Processing (NLP) tasks (Snow et al., 2008), and to collect the relevance judgments needed to optimize search engines (Alonso, 2013).
Crowd-sourced labels are known to suffer from noise and errors because the annotations are provided by annotators with uneven expertise and dedication, which can compromise the practical applications that use such data (Zhang et al., 2016). This paper therefore applies a novel approach to minimize and mitigate the noise and errors in crowd-sourced labels. The aim is to investigate the use of a unified testing framework to learn from disagreements using crowd-sourced labels collected from different annotators.
Related Work
Crowdsourcing has proven to be an inexpensive and efficient way to collect large sets of labels and has attracted much research interest from the machine learning community in addressing the noise and unreliability associated with them. The proposal by Dawid and Skene (1979) to use an Expectation Maximization (EM) algorithm to estimate the error rates of observers providing conflicting responses to medical questions is one of the key pioneering contributions to this field. This work served as the catalyst for many other approaches to aggregating labels from crowd annotators with different levels of expertise, such as the one proposed in Whitehill et al. (2009), which extends Dawid and Skene's model by also accounting for item difficulty in the context of image classification. Similarly, Ipeirotis et al. (2010) proposed using Dawid and Skene's approach to extract a single quality score for each worker, allowing low-quality workers to be pruned. The approach proposed in this paper contrasts with this line of work by allowing neural networks to be trained directly on the softmax output over the noisy labels of multiple annotators, thereby avoiding the need to resort to prior label aggregation schemes. Smyth et al. (1995) also collated the opinions of many experts to establish ground truth, and there has been a large body of research using EM approaches to annotate datasets with labels from many experts (Whitehill et al., 2009; Raykar and Yu, 2012). Rodrigues et al. (2014) likewise used an EM approach for labelling datasets by experts through Gaussian Process classifiers. Rodrigues and Pereira (2018) deployed a crowd layer with a CNN model to capture and correct the biases of different annotators; our approach is the first to be built on the Wide Residual Network (WideResNet) model (Zagoruyko and Komodakis, 2017) and the Bidirectional Encoder Representations from Transformers (BERT) model (Devlin et al., 2019). Our approach also differs from the method of Rodrigues and Pereira (2018) in that our technique first computes the softmax of the output before modelling the crowd responses, whereas their approach works on the responses from the crowd directly.
Systems Description
These systems are proposed for image classification and NLP tasks, with sub-task-specific modifications and training schemes applied to each dataset.
softmax-Crowdlayer
A special type of network layer known as the softmax-Crowdlayer, initially proposed by Rodrigues and Pereira (2018), was used to train a deep neural network directly from the noisy labels of multiple annotators in the crowd-sourced data. It takes the output layer of a deep neural network as its input and is trained to learn an annotator-specific mapping from that output layer to the labels of the different soft-maxed crowd annotators; in doing so, it learns the reliability and biases of each annotator. Figure 1 shows the generalised architecture, encompassing either a Multi-layer Perceptron (MLP), WideResNet, or BERT as its base model, used together with a softmax-Crowdlayer for the respective datasets. The output layer of the deep neural network serves as a bottleneck and as the input for the crowd annotators to learn from. A specialised cross-entropy loss known as the masked multi cross-entropy loss is used during training to handle missing answers from annotators. After the network has been trained with the crowd layer and the specialised loss function, the crowd layer is removed to expose the bottleneck layer, which is then used to make the predictions.
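A minimal Keras sketch of such a layer is shown below. The per-annotator matrices, their identity initialisation, and all names are illustrative assumptions rather than the authors' exact implementation:

```python
# Minimal sketch of a softmax-Crowdlayer in Keras (shapes/names are assumptions).
import tensorflow as tf
from tensorflow.keras import layers

class SoftmaxCrowdLayer(layers.Layer):
    """Maps the bottleneck softmax output to one prediction per annotator."""
    def __init__(self, n_annotators, n_classes, **kwargs):
        super().__init__(**kwargs)
        self.n_annotators = n_annotators
        self.n_classes = n_classes

    def build(self, input_shape):
        # One (classes x classes) matrix per annotator, identity-initialised so
        # each annotator initially mirrors the bottleneck output.
        ident = tf.eye(self.n_classes, batch_shape=[self.n_annotators])
        self.kernel = self.add_weight(
            name="per_annotator_weights",
            shape=(self.n_annotators, self.n_classes, self.n_classes),
            initializer=tf.constant_initializer(ident.numpy()),
            trainable=True,
        )

    def call(self, softmax_output):
        # (batch, classes) x (annotators, classes, classes)
        #   -> (batch, annotators, classes)
        return tf.einsum("bc,acd->bad", softmax_output, self.kernel)
```

Stacked on any base model's softmax output, this yields one prediction per annotator, which a masked loss (see Section 3.3) can compare against the possibly missing crowd labels.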
The intuition behind deploying the crowd layer on top of the base model is that the softmax-Crowdlayer adjusts the gradients from the labels of each annotator depending on their level of expertise, adjusts their weights, and propagates the errors through the entire neural network. Section 3.2 covers the use of WideResNet together with SAM on the CIFAR10-IC dataset, Sections 3.3 and 3.4 cover the use of the softmax-Crowdlayer for image classification, and Section 3.5 explores the use of BERT with the softmax-Crowdlayer to cover the NLP aspect of the task, as visualised in Figure 1. The motivation for preferring the BERT model over the baseline models was to investigate the potential of using BERT, a state-of-the-art model, with the softmax-Crowdlayer.
WideResNet with Sharpness Aware Minimisation (SAM) For Majority Voting
For the CIFAR10 dataset, a model was built with WideResNet, first introduced by Zagoruyko and Komodakis (2017). A widening factor of 12, convolutions of a fixed size, and 16 layers were used. A learning rate of 0.1 with a weight decay of 0.001 and momentum of 0.8 was used with the SAM optimiser, which had Stochastic Gradient Descent (SGD) as its base optimiser. The training epochs for the dataset were scheduled in batches of 1000 for 60, 5, 10, and 20 epochs, respectively. Minimizing commonly used loss functions such as cross-entropy, or the custom masked loss function designed specifically for the crowd layer, was not sufficient to achieve superior results on CIFAR10-IC, since the training loss landscapes of models trained on noisy labels are complex and non-convex, with a multiplicity of local and global minima (Foret et al., 2020). Sharpness-Aware Minimization (SAM) (Foret et al., 2020) was therefore applied to the CIFAR dataset with the WideResNet model to improve generalization; it simultaneously minimizes the loss value and the loss sharpness under the noisy labels from the crowd annotators, and has been shown to be robust to noisy labels (Foret et al., 2020). The inner working of Sharpness-Aware Minimization is that, rather than seeking parameter values that simply have a low training loss, it seeks parameter values whose entire neighbourhoods have uniformly low training loss.
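A minimal sketch of the two-step SAM update in PyTorch follows; `rho` is the neighbourhood radius from the SAM paper, and the function structure is an assumption rather than the exact training code used here:

```python
# Two-step SAM update: perturb weights towards the locally sharpest direction,
# take the gradient there, then step the base optimiser from the original weights.
import torch

def sam_step(model, loss_fn, inputs, targets, base_opt, rho=0.05):
    base_opt.zero_grad()
    # First pass: gradient at the current weights.
    loss_fn(model(inputs), targets).backward()
    grad_norm = torch.norm(torch.stack(
        [p.grad.norm() for p in model.parameters() if p.grad is not None]))
    eps = []
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                continue
            e = rho * p.grad / (grad_norm + 1e-12)
            p.add_(e)                       # climb to the worst-case neighbour
            eps.append((p, e))
    base_opt.zero_grad()
    # Second pass: gradient at the perturbed weights.
    loss_fn(model(inputs), targets).backward()
    with torch.no_grad():
        for p, e in eps:
            p.sub_(e)                       # restore the original weights
    base_opt.step()                         # base SGD step uses the SAM gradient
```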
The SAM optimiser technique was not applied to the NLP tasks because its performance on them was not as good as on the CIFAR-10 dataset.
WideResNet with softmax-Crowdlayer for CIFAR10-IC Dataset
The CIFAR10-IC data consisted of transformed images, each belonging to one of the 10 CIFAR-10 classes (listed in Appendix A). The WideResNet described in Section 3.2 was used as the base model, with a softmax-Crowdlayer added to the output layer; through back-propagation, it was able to correct the errors of the 2571 annotators. A training schedule of 400 epochs with a batch size of 64 was used with this approach. One-hot encoding, together with a specialised function, was used to generate the set of missing annotations, which was then trained with the masked multi cross-entropy loss function to correct errors and make predictions through the weight updates.
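A hedged sketch of such a masked loss is shown below, assuming the (illustrative) convention that missing annotations are encoded as -1 and are excluded from the average:

```python
# Masked multi-annotator cross-entropy: missing annotations (-1) carry no loss.
import tensorflow as tf

def masked_multi_cross_entropy(y_true, y_pred):
    # y_true: (batch, annotators) integer labels, -1 where an annotator gave
    #         no answer; y_pred: (batch, annotators, classes) softmax outputs.
    mask = tf.cast(tf.not_equal(y_true, -1), tf.float32)
    safe_labels = tf.maximum(y_true, 0)  # placeholder index for missing entries
    per_label = tf.keras.losses.sparse_categorical_crossentropy(safe_labels, y_pred)
    # Average only over the annotations that actually exist.
    return tf.reduce_sum(per_label * mask) / tf.maximum(tf.reduce_sum(mask), 1.0)
```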
MLP with softmax-Crowdlayer for LabelMe-IC Dataset
The LabelMe-IC data consisted of VGG16-encoded images, each belonging to one of the following 8 categories: 'highway', 'inside city', 'tall building', 'street', 'forest', 'coast', 'mountain', or 'open country'. This was an image classification task tackled with a standard MLP architecture together with a softmax-Crowdlayer. The MLP comprised 4 hidden layers with 128 ReLU units each, the Adam optimiser, a categorical cross-entropy loss, and a dropout rate of 0.2. A training schedule of 400 epochs with a batch size of 32 was used. The output layer had a softmax activation over the 8 distinct classes listed above. The softmax-Crowdlayer described in Section 3.1 was then connected to this output layer, where the annotators' errors and biases were back-propagated through a training scheme that reduced the noise from the crowd annotators by means of the specialised masked multi cross-entropy loss function for handling crowd annotations.
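A rough Keras re-creation of this base model is sketched below; the layer sizes, dropout rate, and optimiser follow the text, while the placement of the dropout layers and all remaining details are assumptions:

```python
# Sketch of the MLP base model for LabelMe-IC (details are assumptions).
from tensorflow.keras import layers, models

def build_mlp(input_dim, n_classes=8):
    model = models.Sequential()
    model.add(layers.InputLayer(input_shape=(input_dim,)))
    for _ in range(4):                      # 4 hidden layers, 128 ReLU units each
        model.add(layers.Dense(128, activation="relu"))
        model.add(layers.Dropout(0.2))      # dropout of 0.2, as in the text
    model.add(layers.Dense(n_classes, activation="softmax"))
    model.compile(optimizer="adam", loss="categorical_crossentropy")
    return model
```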
BERT with softmax-Crowdlayer for Gimpel-POS and PDIS Datasets
In the Gimpel-POS dataset, each sample consists of a tweeted text, a specific word/token appearing in that text, and a crowd label, which is a list of labels from multiple annotators. The task is to predict the part of speech (POS) of the given token. The POS labels are 'ADJ' (adjective), 'ADP' (adposition), 'ADV' (adverb), 'CCONJ' (coordinating conjunction), 'DET' (determiner), 'NOUN' (noun), 'NUM' (numeral), 'PRON' (pronoun), 'PART' (particle or other functional word), 'PUNCT' (punctuation), 'VERB' (verb), and 'X' (others). Table 1 shows an example from the Gimpel-POS dataset. In this example, 'Texas' is the token to be tagged; it occurs at the beginning of the tweeted text shown in the first row. Considering the crowd label provided, the first and second annotators both labelled this token as a noun, while the last annotator labelled it as a pronoun. For the PDIS dataset, the goal is to predict whether a given noun phrase refers to new information or to old information in a document. Each sample consists of a document (tokenised sentences), a noun phrase appearing in the document, a pre-computed syntactic feature of the noun phrase, and a crowd label. Table 2 shows an example from the PDIS dataset. The document and the noun phrase are in the first and second rows of the table, respectively; the noun phrase is 'The cat' at the beginning of the document. The syntactic feature of this noun phrase is the feature vector shown in the third row, and the fourth row shows its crowd label. The first and second annotators labelled the noun phrase as 0 and 1, respectively, where 0 means that the noun phrase refers to new information and 1 means that it refers to old information.
Document: The cat ate the rat. Thereafter the dog ate the cat.
Noun phrase: The cat
Syntactic feature: [0,1,0,..,0]
Crowd label: [0,1]

In this work, we propose to fine-tune the pre-trained BERT model for both the Gimpel-POS task and the PDIS task based on crowd labels. To do so, the original input format of both tasks was first converted to the conventional BERT format. For each sample in the Gimpel-POS dataset, a tweeted text and a given token were concatenated in the following format:
[CLS] Tweeted text [SEP] Token [SEP]
where the '[CLS]' token is added for classification and the two '[SEP]' tokens identify the boundaries of the tweeted text and the token. Similarly, for the PDIS dataset, a document is concatenated with a noun phrase as follows: [CLS] + Document + [SEP] + Noun phrase + [SEP]. These concatenated texts are used for fine-tuning the pre-trained BERT model. To fine-tune it, a dense layer was added at the end of the pre-trained BERT model. This layer takes the '[CLS]' token embedding from the pre-trained BERT model as input and outputs a vector whose size equals the number of classes in the respective dataset (12 for Gimpel-POS and 2 for PDIS). A softmax activation layer was added after the dense layer to compute the probabilities of each class. These additional layers can be seen as a classifier module added on top of the pre-trained BERT model, which is a common way to fine-tune pre-trained BERT for a specific task with regular labels as targets (Devlin et al., 2019).
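As an illustration, the sentence-pair format above can be produced with the Hugging Face tokenizer API; the checkpoint name, the example tweet, and the sequence length below are assumptions:

```python
# Hypothetical tokenisation matching the [CLS] text [SEP] token [SEP] format.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
tweet = "Texas to get snow today"   # invented tweet, for illustration only
token = "Texas"
enc = tokenizer(tweet, token, truncation=True, padding="max_length", max_length=64)
# enc["input_ids"] now encodes: [CLS] tweet [SEP] token [SEP]
```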
In order to deal with the crowd labels in the datasets, the softmax-Crowdlayer was added after the classifier module, similarly to the MLP model with the crowd layer described in Section 3.4. The proposed model for fine-tuning the pre-trained BERT with the softmax-Crowdlayer is illustrated in Figure 1, using the Gimpel-POS example from Table 1 for demonstration. As previously mentioned, only the '[CLS]' token is passed through the additional classifier module to predict the primary classification output, which is then used as the input of the softmax-Crowdlayer to predict the final output, as described in the previous section. The proposed model can be applied directly to the PDIS dataset by changing the output size of the dense layer in the classifier module to 2. Due to limited resources, the fine-tuning of each BERT model was run for one epoch.
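A hedged sketch of this architecture is given below, reusing the SoftmaxCrowdLayer class from the Section 3.1 sketch; the checkpoint, sequence length, and annotator count are hypothetical:

```python
# Sketch: classifier module plus crowd layer on a pre-trained BERT encoder.
import tensorflow as tf
from transformers import TFBertModel

n_classes, n_annotators, seq_len = 12, 100, 64   # annotator count is hypothetical

bert = TFBertModel.from_pretrained("bert-base-uncased")
input_ids = tf.keras.Input(shape=(seq_len,), dtype=tf.int32)
attn_mask = tf.keras.Input(shape=(seq_len,), dtype=tf.int32)
cls = bert(input_ids, attention_mask=attn_mask).last_hidden_state[:, 0]  # '[CLS]'
probs = tf.keras.layers.Dense(n_classes, activation="softmax")(cls)      # classifier module
crowd_out = SoftmaxCrowdLayer(n_annotators, n_classes)(probs)  # layer from the 3.1 sketch
model = tf.keras.Model([input_ids, attn_mask], crowd_out)
```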
Results and Discussion
The results were evaluated using two metrics: the F1 score, referred to as the hard evaluation, and the cross-entropy, referred to as the soft evaluation. Higher F1 scores and lower cross-entropy values are the desired outcomes for the models.
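For concreteness, both views can be computed with scikit-learn as sketched below; the toy labels and probabilities, and the macro averaging, are illustrative assumptions:

```python
# Hedged example of the hard (F1) and soft (cross-entropy) evaluation views.
from sklearn.metrics import f1_score, log_loss

y_true = [0, 1, 1, 2]
y_pred_labels = [0, 1, 0, 2]
y_pred_probs = [[0.8, 0.1, 0.1], [0.2, 0.7, 0.1],
                [0.5, 0.4, 0.1], [0.1, 0.2, 0.7]]

hard = f1_score(y_true, y_pred_labels, average="macro")   # "hard" evaluation
soft = log_loss(y_true, y_pred_probs, labels=[0, 1, 2])   # "soft" evaluation
print(hard, soft)
```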
As can be seen in Table 3, the use of the MLP together with the softmax-Crowdlayer on the LabelMe-IC dataset achieved the highest F1 score of 0.7839, exceeding the majority voting baseline model provided by the task organisers, and also had the comparatively lowest cross-entropy value of 1.7693. The large difference in performance between majority voting and the softmax-Crowdlayer can be attributed to the handling of missing annotations together with the ability of the softmax-Crowdlayer to learn the true labels from the crowd labels, which corrects the errors and mislabelling of inexperienced annotators through back-propagation. Majority voting lacks this ability and therefore uses the erroneous labels without any such adjustment.
The use of WideResNet together with SAM resulted in superior performance, with an F1 score of 0.7693 and a cross-entropy of 0.8274, compared to WideResNet with the softmax-Crowdlayer, which had an F1 score of 0.4427 and a cross-entropy of 1.9286 when applied to CIFAR10-IC. Its cross-entropy of 1.9286 was nonetheless better than that of the baseline majority method, which was 2.8306. The PDIS data was fine-tuned with a pre-trained BERT model plus the softmax-Crowdlayer. The BERT + softmax-Crowdlayer did not perform comparatively well when applied to the Gimpel-POS data, as it only managed an F1 score of 0.1254 with a corresponding cross-entropy of 2.3318. From Table 3 it can also be seen that BERT + majority voting had the same results as the BERT + softmax-Crowdlayer model, so further investigation is needed to find out why. As can be seen in Table 3, the full base model provided for PDIS and Gimpel-POS by the organisers achieved superior results and should have been used with the softmax-Crowdlayer, but this could not be done because the full base model provided by the organisers was written in the PyTorch framework while the softmax-Crowdlayer was written in Keras. The full base model would therefore have had to be converted to Keras before using the softmax-Crowdlayer and carrying out its eventual evaluation; owing to limited time, this has been reserved for the future work covered in Section 5. Refer to Appendix A for the analysis of the class distribution of the datasets.
Conclusion
This paper used a softmax-Crowdlayer approach combined with deep neural networks to train on noisy labels from multiple crowd annotators. WideResNet together with the softmax-Crowdlayer was applied to the CIFAR10-IC dataset, an MLP combined with the softmax-Crowdlayer was used on the LabelMe-IC data, and BERT combined with the softmax-Crowdlayer was used on the Gimpel-POS and PDIS data, respectively. Future work will explore the effect of the class annotation distribution on labelling accuracy and investigate more efficient ways of combining the BERT model with the softmax-Crowdlayer to further improve the results. It will also involve applying the softmax-Crowdlayer to the Humour dataset, which was not included in this work owing to the time constraints posed by the complicated data points of that dataset.
A Class label distribution analysis
Table 4 summarises the number of annotators, the number of data points, and the number of classes for each dataset used. Figure 2 contains the probability density estimates of how the annotators perceived the class labels to which each respective item belonged. Based on simple majority voting, it can be observed from Figure 2(a) that the distribution was uniform across all classes of the CIFAR10-IC dataset, with the labels encoded as [0: airplane, 1: automobile, 2: bird, 3: cat, 4: deer, 5: dog, 6: frog, 7: horse, 8: ship, 9: truck]. Figure 2(b), depicting the kernel density estimates of the LabelMe-IC data, captures the distribution of how the annotators assigned the data to their respective classes. The majority of the samples were labelled as forest, with the labels encoded as [0: highway, 1: inside city, 2: tall building, 3: street, 4: forest, 5: coast, 6: mountain, 7: open country].
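A sketch of how such a per-dataset label-density plot can be produced with seaborn is given below; the flat list of annotator labels is hypothetical:

```python
# Kernel density estimate of annotator class labels (illustrative data).
import seaborn as sns
import matplotlib.pyplot as plt

labels = [0, 1, 1, 2, 4, 4, 4, 7, 3, 4]   # hypothetical flat list of labels
sns.kdeplot(labels, fill=True)
plt.xlabel("class label")
plt.ylabel("density")
plt.show()
```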
The labels of the PDIS dataset are encoded as [0: refers to new information, 1: refers to old information]. | 4,466 | 2021-01-01T00:00:00.000 | [
"Computer Science"
] |
2-[1-(3-Aminophenylimino)ethyl]phenol
The title compound, C14H14N2O, exists as the enol–imine tautomer. A strong intramolecular hydrogen bond between O and N atoms forms a six-membered ring with an S(6) graph-set motif, which is approximately coplanar with the phenol ring, the interplanar angle being 3.4 (3)°. In the crystal, intermolecular C—H⋯O hydrogen bonds and N—H⋯π interactions link the molecules into infinite chains along [100].
Comment
Schiff bases are some of the most widely used chelating ligands in the field of metal-organic coordination chemistry (Blagus et al., 2010). The Schiff bases derived from ortho-hydroxy aldehydes or ketones and aromatic diamines often have photochromic and thermochromic characteristics (Hadjoudis & Mavridis, 2004). In this work we report the preparation and the crystal and molecular structure of a novel ketimine Schiff base, 2-[1-(3-aminophenylimino)ethyl]phenol (Scheme 1).
The presence of an intramolecular O1—H⋯N1 hydrogen bond [2.540 (2) Å] shows unequivocally that the molecular conformation of compound (I) in the crystalline state is the enol–imine form. As shown in Figure 2, the Schiff base molecules link mutually into a one-dimensional chain, forming a graph-set motif C(5) in the notation of Bernstein et al. (1995) (the centroid referred to is that of the C9–C14 aromatic ring). All bond lengths are within standard values (Allen et al., 1987) and are comparable with those of the similar ketimine Schiff bases cited above (Blagus & Kaitner, 2007).
Experimental
The title compound was prepared by refluxing a methanolic solution of m-phenylenediamine (540 mg, 5 mmol) and 2-hydroxyacetophenone (1.25 ml, 10 mmol) for 4 h at 80 °C. The water formed during the reaction was removed with a Dean-Stark trap. After cooling, the brown solid precipitate was filtered off. Diffraction-quality crystals were obtained by slow evaporation from an ether solution.
Refinement
All N- and O-bound H atoms were located in the difference Fourier map. The positions and isotropic thermal parameters of the N-bound H atoms were refined, while the O-bound H atom was treated as a riding atom. Aromatic H atoms were placed in calculated positions and treated as riding on their parent C atoms, with C—H = 0.93 Å and Uiso(H) = 1.2Ueq(C) for Csp2.
In the absence of significant anomalous scattering effects, Friedel pairs were merged.

Fig. 1. ORTEP-III molecular structure of (I) showing the atom-labelling scheme. Thermal ellipsoids are drawn at the 50% probability level. The intramolecular O—H⋯N hydrogen bond is shown as a thin line.

| 553 | 2011-05-14T00:00:00.000 | ["Chemistry"] |
GlobTherm, a global database on thermal tolerances for aquatic and terrestrial organisms
How climate affects species distributions is a longstanding question receiving renewed interest owing to the need to predict the impacts of global warming on biodiversity. Is climate change forcing species to live near their critical thermal limits? Are these limits likely to change through natural selection? These and other important questions can be addressed with models relating geographical distributions of species with climate data, but inferences made with these models are highly contingent on non-climatic factors such as biotic interactions. Improved understanding of climate change effects on species will require extensive analysis of thermal physiological traits, but such data are both scarce and scattered. To overcome current limitations, we created the GlobTherm database. The database contains experimentally derived species’ thermal tolerance data currently comprising over 2,000 species of terrestrial, freshwater, intertidal and marine multicellular algae, plants, fungi, and animals. The GlobTherm database will be maintained and curated by iDiv with the aim to keep expanding it, and enable further investigations on the effects of climate on the distribution of life on Earth.
Background & Summary
A long-standing challenge in ecology and biogeography is to understand what generates patterns in species diversity and distributions 1 . Undertaking this challenge is of increasing importance if we are to manage the effects of global change on biodiversity 2 . The upper and lower temperature limits to performance, sublethal irreversible damage and molecular degradation are central to determining the geographic distributions and range shifts of species under climate change 3 . Thus, thermal tolerance limits can be used to evaluate the relative contribution of macrophysiology and macroevolution to generating species diversity gradients in terrestrial, coastal, and marine realms 4 .
Inferring species' thermal tolerance limits based on realized climatic niches can be confounded by non-physiological factors including biotic interactions, dispersal ability, and/or habitat patch size 5,6 . Studies using experimentally-derived estimates of species' fundamental climatic niches have significantly advanced our knowledge of how species' ranges conform to thermal tolerance limits at land and sea 7,8 and how thermal physiological traits are asymmetrically conserved through evolution 9 . However, these studies have generally been limited in taxonomic coverage, with only one study focused on trans-realm comparisons 7 .
In order to overcome these limitations and develop unified theories and methodologies on the influence of fundamental thermal niches on the geographic distribution of diversity worldwide and across realms, a comprehensive cross-taxon and cross-realm dataset of thermal tolerance limits is urgently needed. Here we present the GlobTherm database, a large global cross-realm multi-taxon dataset comprising published experimentally-derived species' thermal tolerances for over 2,000 species of multicellular algae, plants, fungi and animals. Experimentally-derived measures of thermal limits provide a direct estimate of relevant aspects of species' fundamental thermal niches 10,11 . Hence, these metrics overcome many of the confounding factors associated with the currently popular but possibly flawed method of inferring species' thermal tolerance limits from realized geographic niches 12,13 .
Thermal tolerance limits are highly relevant to key issues in the current ecological literature, including which taxa have realized niches that are closer to their upper physiological tolerances and may therefore be more vulnerable to climate change 13 . The GlobTherm dataset centralizes data-collection efforts across taxa and synthesizes them in a format ready for researchers to use in order to conduct common analyses in macroecology, macroevolution and macrophysiology. While entries describing "thermal ranges" are often available in other databases (e.g. Fishbase, Mammalbase), the estimate of thermal tolerance is often based on distributional data and is not published alongside information on the methodology used to estimate thermal tolerance. GlobTherm is unique in collating experimentally-derived thermal tolerance data, which are independent of, and thus comparable to, species' realized ranges.
Methods
From November 2015 until October 2016, data were compiled from published experimental estimates of upper and lower temperature tolerance limits following the protocols established by Clusella-Trullas 14 .
Measures of thermal tolerance that allow the greatest across-taxon coverage were targeted; these included (i) critical (threshold) and (ii) lethal temperatures. (i) Critical temperatures mark the loss of key ecological functions, such as locomotion, the ability to gain nutrition, or the maintenance of basal metabolism (as per the thermal neutral zone, TNZ, for endotherms), and are measured as the critical thermal maximum (CTmax) or minimum (CTmin), the TNZ, or the temperature at which performance is reduced by a predefined amount (e.g. 50%, CT50). (ii) At lethal temperatures, mortality occurs in whole individuals or parts thereof (i.e. leaf die-back to a predefined percentage), commonly measured as the lethal temperature for 100% (LT100) or 50% (LT50) after a fixed duration of time. For studies in which data were presented graphically and not stated in the text, values were extracted using Plot Digitizer software, version 2.0 15 . Species names and taxonomy were standardized to the National Center for Biotechnology Information (NCBI) taxonomic system using the 'taxize' package 16 in the statistical program R 17 .
The protocol was as follows. JMB searched for published articles, books and theses using the following search terms: 'critical thermal maximum', 'critical thermal minimum', 'upper thermal tolerance', 'lower thermal tolerance', 'thermal tolerance breadth', 'heat tolerance', 'cold tolerance', 'upper lethal temperature limit', 'lower lethal temperature limit', 'thermal tolerance window', 'species temperature tolerance', 'thermo-neutral zone', and 'frost resistance' in Google Scholar (see Table 1 (available online only)). JMB then examined the abstracts and methods sections of the manuscripts to determine whether they complied with our selection criteria. When insufficient information on experimental methods or sampling locations was provided within a publication, the authors were contacted to request additional information. Measures of thermal tolerance were only recorded if methodology and sampling locations were provided (either in the manuscript or by the author). When reviews that complied with our data quality requirements were found in the literature search, the cited papers or attributed authors were located and the data extracted from these original sources where possible. A total of 567 studies were found to provide data of high enough quality to be included in the dataset, out of the thousands of candidate studies.
Species phenotypes are intrinsically plastic. In particular, thermal limits show a considerable level of plasticity among different life stages and/or populations of the same species living along temperature gradients associated with latitude. To make the estimates of species' thermal limits in the dataset comparable, only estimates from study specimens in their later life stages were used; eggs, larvae, seeds, gametes etc. were all excluded from the present form of our dataset. When multiple estimates of a species' thermal limits were available, to standardize methodologies between estimates as far as possible, priority was given to estimates sharing the greater number of the following attributes, with more weight given to attributes in the following order: (1) thermal limits measured using more common metrics, i.e. CTmax and CTmin over LT50, LT50 over LT100, and LT100 over the supercooling point (SCP) (with the exception of mammals and birds, for which all data were TNZ, and algae, where lethal measures were given preference due to the inconsistency among the methods used to determine critical measures in these taxa); (2) estimates of upper and lower thermal limits in the same population; (3) field-fresh specimens over acclimated specimens, and acclimated specimens over those in long-term captivity; (4) whole individuals over part specimens (i.e. tree branches); (5) measurements taken during active seasons and phases (i.e., diurnal during the day and overnight for nocturnal species); (6) measurements with larger sample sizes; (7) measurements taken from fasted individuals over fed; (8) mean measures over median (due to the paucity of the latter); (9) the loss of righting response and/or locomotion over the onset of spasms (OS) as the end point of CTmax and CTmin in ectothermic animals (due to the rarity of OS); and (10) estimates with stronger supporting information, including location, ramping rate (rate of temperature increase) and acclimation temperature. In all cases, these criteria led to the selection of a single study that optimized comparability between species measures. Despite such precautions, variations in the methods used between studies will add some random error to the estimates; however, our methods should not bias the error in any one direction 14 .
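As a minimal sketch with hypothetical field names (not the project's actual code), the ordered criteria above amount to a lexicographic ranking: candidate estimates are sorted by a tuple that encodes each criterion in priority order, and the top-ranked study is selected. Only the first six criteria are shown:

```python
# Lexicographic prioritisation of candidate thermal-tolerance estimates.
def priority_key(estimate):
    # Lower tuple sorts first; each element encodes one criterion in order.
    metric_rank = {"CTmax/CTmin": 0, "LT50": 1, "LT100": 2, "SCP": 3}
    return (
        metric_rank.get(estimate["metric"], 9),          # (1) common metric first
        0 if estimate["same_population_pair"] else 1,    # (2) paired upper/lower limits
        {"field_fresh": 0, "acclimated": 1, "captive": 2}[estimate["condition"]],  # (3)
        0 if estimate["whole_individual"] else 1,        # (4) whole over part specimens
        0 if estimate["active_season"] else 1,           # (5) active season/phase
        -estimate["sample_size"],                        # (6) larger samples first
    )

def select_estimate(candidates):
    # Returns the single study that optimises comparability for a species.
    return min(candidates, key=priority_key)
```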
Data were excluded if measurements were taken from individuals bred for commercial purposes, such as agriculture, aquaculture, or the pet trade, to reduce confounding issues associated with an artificial selective history. Individuals held in managed populations (i.e., zoos, university laboratory populations and botanical gardens) or those bought from wildlife traders were only used if we were able to ensure the animals were not of commercial origin. If this information was not provided in the manuscript, i.e. if the location of their original wild capture/collection was not given, the authors were contacted before a study was included.
Data Records
This database includes thermal tolerance metrics for 2,133 species of multicellular algae, plants, fungi, and animals in 43 classes, 203 orders and 525 families from marine, intertidal, freshwater, and terrestrial realms, extracted from published studies (Data citation 1, and Figures 1 and 2). The data presented here are available in both Excel and text formats in Dryad (Data citation 1). Updates to the data and metadata will be curated through the iDiv data portal (https://idata.idiv.de/). For example, it is planned in the future to include intraspecific variation in the dataset, providing multiple estimates of thermal tolerance limits for a given species, with estimates determined using the best available methods ranked more highly.
Technical Validation
JMB gathered the data from published and peer-reviewed scientific studies. Differences among experimental methods, observers, and pre-conditions (i.e. season and capture location) are known to generate some variance in estimates of species' temperature tolerance. Information relating to experimental methods was recorded alongside the thermal tolerance limits to enable data users to incorporate these parameters in their analyses and in approaches to validating the data. Provision of such metadata also enables users to filter the data based on their specific needs and research questions.
In particular, the experimental methods used to determine the lethal temperature for algae and the upper boundary of the thermal neutral zone (UTNZ) for mammals and birds may have an effect on the quality of the estimate. We provide the temperature intervals between lethal measurements for algae and information on the quality of the regression used to estimate the UTNZ for mammals and birds (for more information on each column in the dataset please see Table 2 (available online only)). Similar to other assessments of the quality of published UTNZ measures [18-20], we found that only ~50% of the literature compiled contained valid estimates, i.e., evidence that the boundary of the UTNZ was reached in the experiment.
The dataset has wide global spatial coverage (Figure 1), though clear geographical data gaps do exist, for example in central Africa, Russia, India, parts of Canada and the deep ocean. The data gaps present in this study are unfortunately common, as they represent locations that are either hard to access due to geography (i.e. northern Canada and Russia, the deep ocean, the tropics), or where scientific literature is difficult to access due to language and related citation-indexing barriers 21 . The distribution of the data across realms reflects the distribution of known species on Earth, where ~80% of macroscopic species live on land (most being insects) compared to 15% in the ocean (which, however, shows the greatest difference among phyla) despite its much larger area and volume, and 5% in freshwater 22,23 . The dataset contains approximately 0.20% of plants 24 , 0.72% of algae 25 , 0.00024% of insects 23 , 0.55% of fish 26 , 3.33% of reptiles 23 , 6.01% of mammals 27 , and 1.86% of birds 27 currently described. Taxonomically, Chordata are overrepresented in our dataset, while algae, plants, and, to a greater extent, invertebrates are underrepresented given their greater contribution to the world's total number of species. In sum, the GlobTherm dataset reflects both geographic and taxonomic bias in the sampling of thermal tolerances. I.M.C. conceived the idea of developing the database, was a principal investigator on the project and contributed to the writing of the manuscript.
Additional Information
Tables 1 and 2 are only available in the online version of the paper.

| 2,861.6 | 2018-03-13T00:00:00.000 | ["Biology", "Environmental Science"] |
New insights in Trichochloritis Pilsbry, 1891 and its relatives (Gastropoda, Pulmonata, Camaenidae)
Abstract The genus Bellatrachia Schileyko, 2018 was described based on a specimen identified as Helix (Chloritis) pseudomiara Bavay & Dautzenberg, 1909. We concluded that the examined specimen is not that species, but Helix condoriana Crosse & Fischer, 1863. Therefore, (1) the type species of Bellatrachia must be replaced with Helix condoriana; (2) the species Helix (Chloritis) pseudomiara must be re-allocated to the genus Trichochloritis; (3) the erroneous treatment of the genus Trichochloritis by Schileyko (2007) needs to be corrected through the description of a new genus, Dentichloritis gen. nov., based on Helix brevidens Sowerby I, 1841. In addition, Chloritis microtricha Möllendorff, 1898 is treated as a synonym of Helix condoriana, and further information on the genitalia of Chloritis (?) bifoveata (Benson, 1856) is presented.
Introduction
Almost 20 years ago, the second author of this work became fascinated by the enormously rich shell collection of Colonel Messager (see Breure and Páll-Gergely 2019) from northern Vietnam and Laos housed in the MNHN. While many type specimens taken from Messager's collection were distributed through the activities of the describing authors to other institutions, the main body of the collection remained untouched in Paris. At the suggestion of the first author, we started to systematically compile data on the haired camaenid species of Southeast Asia.
This group was traditionally classified in the genera Trichochloritis Pilsbry, 1891 and Trachia E. von Martens, 1860 (Richardson 1985; Schileyko 2011; Wu et al. 2019); however, it was clear from the beginning that haired and non-haired shells are present in many camaenid genera, that the current classification is rather a paraphyletic "wastebasket taxon", and that only the investigation of the morphology of the genital organs in combination with genetic data will recover the correct phylogenetic relationships. Nonetheless, even current modern research can add to the confusion rather than unravelling some of the old errors.
According to Schileyko (2007), the genus Trichochloritis consists of 10-12 species from southern China, the Indochinese Peninsula, and the Philippines. He published an illustration (drawing) of the shell of the type species, H. breviseta (Schileyko 2007: fig. 2032a), and added drawings of the reproductive anatomy of H. brevidens Sowerby I, 1841 (Schileyko 2007: fig. 2032b) as representative of Trichochloritis. However, the morphology of the genital organs of the latter species differs strongly from the conchologically similar genera as used here (Trichochloritis, Bellatrachia) from continental Asia. In 2018, Schileyko described the monotypic genus Bellatrachia, which was introduced based on conchological characters and traits of the genital anatomy of Helix (Chloritis) pseudomiara Bavay & Dautzenberg, 1909. Unfortunately, the anatomically examined specimen, which was collected in the Cat Tien National Park, southern Vietnam, was misidentified: in fact, Schileyko's (2018) specimen is Helix condoriana Crosse & Fischer, 1863. These misidentifications and errors have nomenclatural and taxonomic consequences: (1) the type species of Bellatrachia must be replaced; (2) the species Helix (Chloritis) pseudomiara Bavay & Dautzenberg, 1909 must be re-allocated to the genus Trichochloritis; (3) the erroneous treatment of the genus Trichochloritis by Schileyko (2007) needs to be corrected through the description of a new genus, Dentichloritis gen. nov., based on Helix brevidens Sowerby I, 1841. In addition, the position of two continental species usually assigned to Chloritis Beck, 1837, is discussed.
Materials and methods
An ethanol-preserved specimen of Chloritis (?) bifoveata (Benson, 1856) was dissected under a Leica stereo microscope with a camera attachment to provide photographs of the external genital structure, from which drawings were produced. The inner structure of reproductive organs was illustrated from photographs.
Institutional abbreviations:

Bellatrachia condoriana (Crosse & Fischer, 1863)

Diagnosis. Shell depressed globular, apex not sunken, hairs or hair scars cover the entire shell. Penis rather long, subcylindrical, its inner surface bears longitudinal pilasters; penial verge absent; penial caecum absent; epiphallus slender, long, convoluted; retractor muscle attached at the penis-epiphallus transition; flagellum thick, with attenuated tip, approximately 2-2.5 times shorter than epiphallus; vagina slender, shorter than penis; stalk of bursa copulatrix long, with a thickening at some distance from its origin, shape of bursa unknown (based on Schileyko 2018; see Fig. 4).

Diagnosis. Shell biconvex with a whitish subsutural spiral, narrow umbilicus, and hair scars covering the entire surface.
Description. Shell medium-sized, biconvex, moderately thin-walled; last whorl only slightly expanding and descending abruptly towards the aperture; colour dirty yellowish with a broad pale subsutural spiral band; whorls 4.5-5, separated by a rather shallow suture; body whorl faintly angled; subsutural furrow shallow but present on the complete last whorl; protoconch consists of 1.25-1.5 whorls, very finely squamous, matte; the pattern of hair scars is dense and covers the complete teleoconch; aperture obliquely rounded, and the peristomal rims are close; peristome strongly expanded, somewhat reflected and reinforced by a white lip; parietal side with a very thin, inconspicuous light layer; umbilicus open, of medium size, with a blunt peripheral angulation, and partly covered by the columellar reflection.
Remarks. The syntype of B. condoriana (Fig. 1) is similar to the specimen identified as Helix (Chloritis) pseudomiara by Schileyko (2018) (Fig. 3), but the shell of the latter is somewhat more depressed. The shell of the lectotype of B. microtricha (Fig. 2) is larger and somewhat more globular than that of B. condoriana. However, both taxa agree quite well in other details such as the relative size of the umbilicus, the formation of the lip and aperture, and the microsculpture of the teleoconch. In contrast, absolute dimensions proved to be insufficient traits for species-level distinction. Therefore, we consider Chloritis microtricha a synonym of Bellatrachia condoriana. The subtle differences in shell morphology shown in Figs 1-3 may be part of the overall variation of B. condoriana or may signal a difference at the species level; this question can only be clarified by a revision of a larger number of specimens from the area.

Included species. Helix breviseta L. Pfeiffer, 1862; Trachia penangensis Stoliczka, 1873.

Diagnosis. Shell depressed globular, apex not sunken, hairs or hair scars cover the entire shell. Penis thickened, probably with a penial verge (?) and a slender, relatively long penial caecum; epiphallus slender, shorter than penis; retractor muscle attached at the penis-epiphallus transition; flagellum short; vagina slender, shorter than penis; stalk of bursa copulatrix long, with a thickened base and oval bursa (based on the drawings of Stoliczka 1873: plate 3, fig. 18 and Collinge 1903: plate 12, fig. 17).
Remarks. The anatomy of the genital organs of Helix (Trachia) malayana Möllendorff, 1887 (= Trichochloritis breviseta; see Maassen 2001) was described by Collinge (1903), and that of T. penangensis is known from Stoliczka (1873); both are re-drawn and provided here in Fig. 8 (penangensis) and Fig. 9 (breviseta). Both species possess a penial caecum, which is here considered a diagnostic trait for the genus. Without knowing the full anatomy, it is uncertain how many of the hairy Chloritis-like species of continental Asia belong to this group.

Diagnosis. Shell depressed, unicoloured, yellowish, with permanent hairs; umbilicus funnel-shaped with a blunt peripheral angulation.
Description. Spire only slightly elevated; shell depressed and thin-walled; last whorl bluntly angled, with a subsutural furrow present but insignificant; colour yellowish, spiral band missing; the 4.5 whorls separated by a rather shallow suture; protoconch consists of slightly more than 1.5 whorls, squamous, bearing minute wrinkled hair scars; teleoconch completely covered by a moderately dense pattern of hairs; bristles stiff and durable, sticking to the shell (their apical part breaks off, but a dark brown conical bristle cone is left, making the surface of the shell quite rough); aperture subrectangular with an only slightly oblique columella; peristome reflected and covered by a white lip; parietal region with a very slight, inconspicuous whitish blunt lime layer; columellar reflection small; umbilicus wide and funnel-shaped with a blunt peripheral keel.
Distribution. Malaysia and Thailand.
Type specimens. The types should be in the Zoological Survey of India in Kolkata but were not found during a recent search (S.K. Sajan, pers. comm., December 2018). They were likewise not found in the NHM.

Type locality. "Penang".

Remarks. "Chloritis penangensis has a much more globular shell with less expanded whorls compared to Chloritis breviseta, which has more expanded (perpendicular to the axis) whorls and thus 'wider'-looking shells. These characters appear consistent for each species across Peninsular Malaysia (based on conchological comparisons), although shell size varies within each species." (Junn Kitt Foon, pers. comm., 01 Dec 2018). To illustrate these differences, the shells of both species are figured (Figs 10, 11).
Genus uncertain
Chloritis (?) bifoveata (Benson, 1856)

Remarks. For a detailed description of the shell refer to Sutcharit & Panha (2010). Our data on the reproductive anatomy largely match those of Sutcharit and Panha (2010), with the following two exceptions: the flagellum is relatively long and slender, and the penial verge is not irregularly shaped but conical and deeply grooved, with the folds starting from the epiphallus (Fig. 12).

Helix (Chloritis) pseudomiara Bavay & Dautzenberg, 1909a: 236; Bavay and Dautzenberg 1909b: 181, pl.

Description. Shell rather large, almost flat, with a relatively thick wall; body whorl rounded; last half whorl with or without a very shallow subsutural furrow; the 4.75-5.25 whorls are separated by a shallow suture; colour greyish yellowish, or brown to olive green; protoconch consists of 1.5 whorls, finely granulate, with fine radial lines near the suture of the last half whorl; teleoconch finely, irregularly wrinkled, and covered with very deep hair scars, which are visible to the naked eye on the body whorl as well; hairs not permanent, although we did not have access to live-collected specimens; aperture ovoid; peristome expanded and slightly reflected, and reinforced by a thickened whitish/light brown lip; parietal region with an inconspicuous layer, which is often darker than the rest of the shell; umbilicus widely open, concave and funnel-shaped, slightly covered by the reflected peristome.
Remarks. This species can easily be identified based on the dark green-coloured shell and the deep, widely spaced hair scars that cover the entire teleoconch.
Type species. Helix brevidens
Diagnosis. Shell depressed globular, apex not sunken, hairs or hair scars cover the entire shell, aperture with a basal denticle. Penis very thick-walled, with narrow lumen, internally with very large conic tubercles in main chamber; flagellum and epiphallus absent; vas deferens passes gradually enlarging into penis; retractor muscle inserts at curvature of vas deferens close to its joint with penis; penial sheath thin, surrounds upper two third part of penis; vagina shorter than penis, thick.
Etymology. The name Dentichloritis refers to the presence of a denticle on the basal peristomal lip and the conchological similarity to Chloritis.
Remarks. There are seven Trichochloritis species known from the Philippines (Richardson 1985), and four of them have been photographed by Zilch (1966). They differ from D. brevidens in the open umbilicus and the lack of a denticle on the basal lip. Therefore, we retain them in Trichochloritis until ethanol-preserved specimens become available.
Type locality. Philippines, Puerto Galero (Municipality of Puerto Galera, municipality in the province of Oriental Mindoro).
Diagnosis. A middle-sized, yellowish species with a slender reddish peripheral belt, short hairs on the entire shell, nearly closed umbilicus (only visible in oblique view), and a slight thickening (denticle) on the basal part of peristome.
Description. Shell medium-sized, depressed globular; body whorl rounded with a slight indication of a blunt shoulder; last quarter to half whorl with a very shallow subsutural furrow; the 3.75-4 whorls are separated by a shallow suture; colour yellowish to ochre with a slender reddish belt above the shoulder (midpoint of body whorl); protoconch consists of 1.5-1.75 whorls, finely granulate, with fine radial wrinkles; teleoconch covered by short hairs or hair scars, which are visible to the naked eye as well; aperture semilunar; peristome expanded and slightly reflected, and reinforced by a thickened whitish brown lip; a slight swelling (denticle) visible on the basal part of the peristome, between the midpoint of the basal peristome and the columella; parietal region with an inconspicuous layer, which is more matte than the rest of the shell; umbilicus nearly closed by the columellar reflection, visible only in oblique view.
Anatomy. Penis very thick-walled, with a narrow lumen, internally with short plicae in the basal part and very large conic tubercles in the main chamber; flagellum and epiphallus absent; vas deferens rather long, evenly thin down to the atrium; approximately one third of the way up it is attached to the penis, after which it is enlarged and fusiform, then becomes very thin, thread-like, forming a sharp curvature, and passes into the penis, gradually enlarging; penial retractor attached to the curvature of the vas deferens and continuing as a fine membrane down to the middle part of the penis; penial sheath thin, surrounding the upper two thirds of the penis. Vagina shorter than penis, thick; spermatheca without visible division into stalk and reservoir, not attaining the albumen gland, and provided with an apical ligament (based on Schileyko 2007: 2113-2114, fig. 2032b, c).
Discussion
Based on an anatomically examined specimen from southern Vietnam identified as Helix pseudomiara Bavay & Dautzenberg, 1909, Schileyko (2018) described the genus Bellatrachia Schileyko, 2018. However, that specimen is clearly incorrectly identified. Schileyko's (2018) specimen has a rounded aperture and fine hair scars with a fine silky periostracum; thus, it closely resembles Helix condoriana Crosse & Fischer, 1863, also known from southern Vietnam. In contrast, the true Helix pseudomiara is known only from northern Vietnam, and its shell has characteristic deep and sparsely arranged hair scars. Furthermore, the aperture of the latter is rather oval, not rounded. The reproductive anatomy of the type species of Trichochloritis Pilsbry, 1891, Trichochloritis breviseta (L. Pfeiffer, 1862), was described by Collinge (1903). Although it is not sufficiently detailed (i.e., the inner structure of the penis is unknown), it is useful enough to diagnose Trichochloritis. The anatomy of Trichochloritis penangensis (Stoliczka, 1873) was described in the original generic description, and it largely matches that of T. breviseta. Schileyko (2007) described the genitalia of Trichochloritis brevidens (Sowerby I, 1841), originally described from Mindoro Island, the Philippines, as a representative of Trichochloritis. The reproductive anatomy of that species, however, differs from those of continental (true) Trichochloritis in several important characters. Therefore a new genus, Dentichloritis gen. nov., is erected for T. brevidens. The largely different anatomy, together with biogeographical reasons, suggests that Trichochloritis (continental Asia) and Dentichloritis gen. nov. (the Philippines) are probably not even closely related.
In the original description of Trichochloritis, Pilsbry (1891) claimed that the most closely related genus was Planispira Beck, 1837. The anatomy of the type species of that genus (Helix zonaria Linnaeus, 1758) was described by Schileyko (2003), and is distinguished from Trichochloritis at first sight by the absence of a penial caecum.
It is difficult to interpret the relationship of Trichochloritis with Chloritis, because the reproductive anatomy of the type species of the latter (Helix ungulina Linnaeus, 1758, by subsequent designation of Martens in Albers, 1860, from Ceram Island, Indonesia) is unknown. Chloritis is diagnosed conchologically mainly on the basis of the sunken spire and the hairless shell (Schileyko 2003). Thus, the two continental species assigned to Chloritis, namely Chloritis bifoveata (Benson, 1856) and Chloritis diplochone Möllendorff, 1898, do not even fit, owing to their strongly hairy shells. It is very unlikely that the two species inhabiting Thailand and Malaysia would belong to the same group as a species from Ceram Island. However, we refrain from erecting a genus for C. bifoveata and C. diplochone until we have more information on the anatomy of C. ungulina.

| 3,516.8 | 2019-07-22T00:00:00.000 | ["Biology"] |
Understanding Microwave Heating in Biomass-Solvent Systems
A new mechanism is proposed to provide a viable physical explanation for the action of microwaves in solvent extraction processes. The key innovation is Temperature-Induced Diffusion, a recently-demonstrated phenomenon that results from selective heating using microwaves. A mechanism is presented which incorporates microwave heating, cellular expansion, heat transfer and mass transfer, all of which affect the pressure of cell structures within biomass. The cell pressure is modelled with time across a range of physical and process variables, and compared with the expected outputs from the existing steam-rupture theory. It is shown that steam-rupture is only possible at the extreme fringes of realistic physical parameters, but Temperature-Induced Diffusion is able to explain cell rupture across a broad and realistic range of physical parameters and heating conditions. Temperature-Induced Diffusion is the main principle that governs microwave-assisted extraction, and this paves the way to being able to select processing conditions and feedstocks based solely on their physical properties.
Introduction
Biomass is an attractive alternative to fossil reserves for the production of fuels and platform chemicals [1]. This includes novel routes to platform chemicals using green chemistry and industrial biotechnology approaches [2], but also the extraction of bio-based chemicals with wide-ranging applications including in the pharmaceutical and health industries [3], and the production of functional materials (e.g. adsorbents) from solid residues [4]. The heterogeneity and recalcitrance of biomass materials limits the effectiveness of available processing technologies [5], so the development of novel processes and chemistry is required for large-scale processing to become economically viable.
Microwave processing has been widely reported to accelerate or enhance biomass-upgrading processes [6]. The unique ability of electromagnetic waves to transfer and dissipate energy volumetrically means that microwave processes can be fast, continuous, compact and flexible in operation. They can be portable, potentially operating on farms or food production sites, avoiding the major logistical challenge of transporting distributed feedstocks to a central processing facility, and minimising degradation during transport. Microwaves also heat selectively, which means that they heat different components of heterogeneous systems at different rates, and it is thought that this can lead to rupture of cells within biomass, resulting in higher extraction yields and the ability to treat recalcitrant lignocellulosic materials. Although there is widespread recognition of the potential benefits of microwave processes in the chemical and pharmaceutical sectors, the effects of the unique heating mechanisms on heat and mass transfer, chemical transformations and potential physical rupture of the plant material are poorly understood. It is this lack of mechanistic understanding that poses the major barrier to scale-up [7].
The currently-accepted theory is that microwaves promote rapid loosening of the cell wall matrix [8,9] and cause cell rupture [10][11][12], and this alteration in the biomass microstructure enhances component extraction. To date, although many authors have discussed this phenomenon qualitatively, and some have illustrated structural changes in biomass using microscopic imaging [12][13][14], there is no clear evidence of how (and arguably if) microwave heating leads to structural modification of biomass. There have been two recent attempts to quantify this phenomenon. The first relates to the common hypothesis that rapid and internal heating by microwaves induces vaporisation within the cellular structures, thereby quickly increasing pressure and causing cell rupture [3,[15][16][17][18][19][20][21][22].
A quantitative model has been proposed by Chan et al. [23] to link microwave heating with cell pressure due to intracellular steam generation; it couples microwave heating, water vaporisation and mechanical cell wall properties to predict internal pressure and cell rupture time. The model correctly predicts a rupture time of the order of minutes, which is consistent with empirical observations; however, conventional heat transfer is not considered in this approach. Biomass heats selectively and attains a higher temperature than the surrounding solvent, yet no consideration is made of the heat flow from biomass to solvent, nor of the steady-state temperature that could result during microwave heating. The Chan model represents a major step towards a mechanism for microwave-assisted extraction, but there are missing physical phenomena that need to be included and investigated over a realistic set of conditions before a quantitative assessment of steam rupturing can be presented.
An alternative theory to explain plant cell disruption using microwaves has been proposed by Lee et al. [24]. The extraction of solutes from plant materials is characterised as a mass transfer process [16], involving (i) penetration of the solvent into the solid, (ii) solubilisation-desorption of the solute from the solid matrix and/or hydrolysis, (iii) diffusion to the surface of the biomass, and (iv) external transfer into the bulk solution. The Lee theory proposes that microwave selective heating can fundamentally change these mass transfer processes, and this could lead to disruption of the cellular structure. Solvents such as water flow between the cell and solvent phases down a chemical potential gradient until mass equilibrium is achieved. Chemical potential is a quantity that combines the different driving forces for mass transfer into a single mathematical expression [25]. These driving forces include gradients in pressure, temperature and component activity. Selective heating with microwaves induces temperature gradients between cells and the solvent phase in a biomass-solvent system. This phenomenon, which is absent in conventional heating, acts as an additional driving force for mass transfer. If the intracellular components are heated selectively over the solvent, the chemical potential of these intracellular components is reduced, and this leads to movement of the solvent into cells, inducing higher cell pressures. Liquids such as water are nearly incompressible, and if liquid flows into a cell there will be a subsequent pressure increase due to the resistance to expansion provided by the cell walls, a phenomenon which could lead to disruption of the cellular structure. Lee et al. [24] showed that a temperature difference of just 1 ℃ could potentially lead to equilibrium cell pressures exceeding 100 bar, which the authors stated would be sufficient to exceed the yield stress of most cellular structures. Furthermore, the theory that selective heating can drive mass transfer has recently been validated experimentally; reverse osmosis for water desalination was achieved without the need for the application of pressure as in conventional reverse osmosis processes [26]. Despite the novelty and step-change in understanding, the study by Lee et al. [24] was limited to steady-state, so it is not possible to assess whether the kinetics of Temperature-Induced Diffusion are within the same timeframe as empirical microwave extraction studies.
The aim of this work is to build on the theoretical approaches of Chan and Lee, adding heat transfer and mass transfer kinetics. This will determine whether temperatures high enough for intracellular steam generation can be achieved in the Chan approach, and if the mass transfer kinetics in the Lee model are within empirically-observed ranges, ultimately allowing an assessment of each model as a viable mechanism for microwave-assisted extraction. Two distinct methodologies are required, one for temperature and one for mass transfer.
Temperature distribution during microwave heating
This section presents the methodology and results when a heat transfer element is included with microwave heating of biomass-solvent systems. The temperature distributions established using this approach will subsequently be used to test the steam-rupture hypothesis, and to provide input parameters to investigate Temperature-Induced Diffusion.
System Geometry
Biomass cells were approximated as cuboids, resembling cell types such as onion epidermal cells [27-31] that are well characterised in terms of their mechanical behaviour [29-31]. Individual cells are 100 μm in length, 50 μm in width and 10 μm in height, with a wall thickness of 1 μm [27-31]. When multiple cells are considered, they are assumed to have identical geometry and a regular arrangement, with their interior assumed to consist of an aqueous solution with defined activity. The amount of solvent surrounding a cell cluster is defined by a solvent-to-solid ratio of 100 ml/g.
Heating Rate Equation
Microwaves heat volumetrically due to energy dissipation as the electric field component interacts with the process material. The main mechanisms by which electromagnetic waves heat materials within the microwave frequency range are dipolar polarisation and ionic conduction [32]. The extent to which electromagnetic energy is converted to heat is governed by the dielectric loss factor (ε″), which varies with frequency and temperature [33]. As an extraction process advances, the temperature of both biomass and solvent will increase due to a combination of microwave heating and conventional heat transfer (EQUATION 1). Both the solvent and the cell can be heated with microwaves, and the extent depends on the electric field intensity (E) and the dielectric loss factor (ε″) [34]. E is a function of the applied power and reactor geometry, and also varies as energy is dissipated throughout the process material. Microwave attenuation is accounted for in this work using Lambert's law [35], whereby the strength of the electric field decays within a material as the wave attenuates. Conduction occurs through the cellular structure, and convective heat transfer takes place from the outer surface of the cell structure to the surrounding solvent. A steady state exists when the rate of volumetric energy dissipation in the biomass equals the rate of heat transfer into the surrounding solvent. As this study focussed on ambient-pressure extraction processes, the surrounding solvent is assumed to attain a maximum temperature at its normal boiling point; further energy transfer into the solvent results in vaporisation rather than a temperature increase.
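EQUATION 1 itself is not reproduced here; as a hedged sketch, an energy balance consistent with the description above (conduction plus the standard volumetric dissipation term for dielectric heating, with Lambert's law giving the field decay at depth z) would take the form:

```latex
% Sketch of the energy balance described in the text (assumed form):
% conduction plus volumetric microwave dissipation, with Lambert-law decay.
\rho c_p \frac{\partial T}{\partial t}
  = \nabla \cdot \left( k \nabla T \right)
  + 2 \pi f \varepsilon_0 \varepsilon'' \left| E \right|^2 ,
\qquad
\left| E(z) \right| = \left| E_0 \right| e^{-\alpha z}
```

Here ρ and c_p are the density and specific heat of the phase being heated, f is the microwave frequency, ε₀ the permittivity of free space, and α the attenuation constant.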
Numerical solution
This work employs a finite-difference time-domain (FDTD) method to solve the partial differential equation. EQUATION 1 is solved for a range of input parameters to give a series of temperature-time relationships, which are the required output. Outputs are presented according to the key physical and process variables: the dielectric loss factor (ε″), electric field intensity (E), biomass size and thermal conductivity (k). A single independent parameter was varied at a time, keeping all other parameters constant.
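The following is a minimal one-dimensional sketch of such an explicit finite-difference scheme (not the authors' solver); the material properties, grid and fixed boundary condition at the solvent boiling point are illustrative assumptions:

```python
import numpy as np

EPS0 = 8.854e-12   # vacuum permittivity, F/m
f = 2.45e9         # microwave frequency, Hz
E0 = 1.0e4         # electric field intensity, V/m
eps_loss = 10.0    # dielectric loss factor of the biomass (assumed)
k, rho, cp = 0.5, 1000.0, 4180.0  # thermal properties (assumed water-like)

n, L = 100, 3e-4   # grid points and domain size (m)
dx = L / n
dt = 0.2 * dx**2 * rho * cp / k   # well inside the explicit stability limit
T = np.full(n, 20.0)              # initial temperature, deg C

# Volumetric dissipation, W/m^3 (standard dielectric-heating term).
q = 2 * np.pi * f * EPS0 * eps_loss * E0**2

for _ in range(20000):
    lap = (np.roll(T, 1) - 2 * T + np.roll(T, -1)) / dx**2
    lap[0] = lap[-1] = 0.0     # boundary nodes handled by the fixed condition
    T += dt * (k * lap + q) / (rho * cp)
    T[0] = T[-1] = 100.0       # solvent held at its boiling point

print(f"peak biomass temperature: {T.max():.1f} C")  # a few degrees above 100 C
```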
Effect of Dielectric Loss Factor (ε″biomass, ε″solvent)
The value of ε″biomass was fixed at 25, while ε″solvent was varied from 0 (Δε″ = 25) to 50 (Δε″ = −25). This is not intended to represent a particular physical condition, but spans the full range from a microwave-transparent solvent to one that absorbs much more strongly than the biomass itself. k was fixed at 0.05 Wm⁻¹K⁻¹, E was taken as 10000 Vm⁻¹, which is the limit for single-mode microwave cavities [36], and the solvent was assumed to have a boiling point of 100 °C. FIGURE 2 shows the temperature increase with time for both the solvent and a single cell for the two extreme cases of Δε″. For simplicity, the solvent temperature corresponds here to that of a numerical point at the edge of the cell cluster; in reality the solvent will only have a homogeneous temperature when at the boiling point. FIGURE 2A shows that the cell is at a higher temperature than the solvent during the heat-up period, which is to be expected given that the solvent is microwave-transparent and only heats indirectly through heat transfer from the cell. The solvent temperature plateaus at the boiling point (100 °C) and the cell temperature continues to increase.
The cell temperature plateaus shortly after the solvent temperature, owing to thermal equilibrium between volumetric heating and heat transfer to the surrounding solvent. In FIGURE 2A the cell temperature is around 0.4 °C higher than the surrounding solvent at thermal steady state under these physical conditions. For the microwave-absorbent solvent, FIGURE 2B shows that the solvent temperature is higher than the cell temperature during the heat-up period, which is to be expected given that the solvent absorbs more energy volumetrically than the biomass. The solvent attains its boiling point in around 0.5 seconds in this case, compared to 5 seconds for the case of a microwave-transparent solvent. Whilst the solvent temperature is limited to the boiling point, the same is not true for the cell, so although it absorbs less energy volumetrically than the solvent it can continue to heat beyond 100 °C. In this case the cell temperature plateaus at around 100.4 °C as a thermal steady state is reached, which is identical to the equilibrium temperature in FIGURE 2A. The outcomes shown in FIGURE 2 indicate that the steady-state temperature of a single cell of biomass is independent of the dielectric loss factor of the solvent, and always higher than that of the solvent given that the cell absorbs microwaves.
When multiple cells are considered there is a gradient in temperature, with a maximum in the centre of the cluster and a minimum at the biomass-solvent boundary.
(Table 1: ΔTmax for variable numbers of cells and dielectric loss factors ε″biomass and ε″solvent.)

ΔTmax is a function of the size of the cell cluster and the dielectric loss factor of the biomass (ε″biomass), but is independent of the dielectric loss factor of the solvent (ε″solvent). In this case the largest ΔTmax was just 1.2 °C, for a 1000-cell cluster and a biomass loss factor of 25. The biomass material will attain steady-state temperatures higher than that of the solvent phase provided that ε″biomass is greater than zero.
Effect of Electric Field Intensity (E)
Electric field intensity (E) was varied at a constant k value of 0.6 Wm⁻¹K⁻¹, with ε″biomass = 10 and ε″solvent = 5. The maximum temperature difference between biomass and solvent at thermal steady state (ΔTmax) is shown in TABLE 2.

Table 2: Maximum temperature difference between biomass and solvent at thermal steady state (ΔTmax) for variable electric field intensity (E) and biomass geometry. ε″biomass = 10, ε″solvent = 5, k = 0.6 Wm⁻¹K⁻¹. The solvent temperature is 100 °C and the biomass temperature is higher.
It is shown in TABLE 2 that ΔTmax increases non-linearly with electric field intensity (E), with the effect being more pronounced for larger cell clusters. A higher E results in more volumetric energy dissipation within the biomass, and so larger temperature differences are required between the biomass and solvent phases to increase heat transfer into the solvent phase and achieve thermal steady state. When E = 0 Vm⁻¹ no microwave power is applied, and hence the system resembles the case of conventional heating, with no temperature differential between the biomass and the solvent.
Effect of Thermal conductivity (k)
Thermal conductivity was varied at a constant E = 10000 Vm⁻¹, with ε″biomass = 10 and ε″solvent = 5. The maximum temperature difference between biomass and solvent at thermal steady state (ΔTmax) is shown in TABLE 3.

Table 3: Maximum temperature difference between biomass and solvent at thermal steady state (ΔTmax) for variable thermal conductivity (k) and biomass geometry. ε″biomass = 10, ε″solvent = 5, E = 10000 Vm⁻¹. The solvent temperature is 100 °C and the biomass temperature is higher.
When thermal conductivity (k) is low there is more resistance to heat flow, so materials possessing very small values are expected to attain much higher steady-state temperatures than the solvent. In this case, for a 1000-cell cluster, a temperature difference of over 5.5 °C is apparent when k = 0.05 Wm⁻¹K⁻¹, compared to just 0.28 °C when k = 1.0 Wm⁻¹K⁻¹. Compared to TABLE 1 the magnitude of the temperature variation in this case is much higher, which indicates that thermal conductivity has a more dominant effect on the equilibrium temperature than the loss factor of either the biomass or the solvent.
Assessment of steam rupturing theory
The enhanced understanding of heat flows and temperature during microwave processing can be used to evaluate the likelihood of intracellular steam generation. Using a similar approach to that shown in TABLES 1-3, the physical and process parameters are investigated within ranges that are realistic for microwave-assisted extraction and biomass feedstocks. E is fixed at 10000 Vm⁻¹, which is the limit for single-mode microwave cavities [36]. The upper limit for ε″biomass is set at 35, as the largest reported value is around 30 at 2.45 GHz [37]. The minimum limit for k was set at 0.1 Wm⁻¹K⁻¹, as values as low as 0.2 have been reported for fruits and vegetables with 80% moisture [38]. The biomass geometry has been modified here to better resemble experimental situations: the number of cells in each dimension is defined so as to constitute a cube with dimensions ranging from 0.1-0.5 mm. TABLE 4 shows that the largest ΔTmax is around 84 °C and occurs under the extreme conditions of electric field intensity, particle size, thermal conductivity and loss factor. Under more realistic conditions it is likely that a temperature difference of just 10-20 °C is achieved, and it should also be noted that the highest temperature will occur within the centre of the biomass, with lower temperatures towards the surface where it approaches the solvent temperature. For cell rupture to occur, the internal pressure must exceed the mechanical resistance of the cell walls. If the pressure is generated from steam created during microwave heating, then the boiling temperature must be consistent with the higher internal cell pressure. Under the most extreme conditions the maximum temperature that can occur within biomass during an ambient-pressure extraction process is 184 °C (i.e. the solvent temperature of 100 °C plus the temperature difference of 84 °C). At 184 °C the steam pressure equates to 10 atm, which is sufficient to cause rupture within some types of cell. However, away from the centre of the biomass, and under realistic physical and processing conditions, the temperature and pressure will be much lower. Cell rupture due to intracellular steam generation could therefore occur at the extreme fringes of physical and experimental conditions, but it is highly unlikely to be as widespread as previously proposed. An alternative mechanism must exist for the generation of internal cell pressure during microwave heating.
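The quoted steam pressure can be checked with the Antoine equation; the constants below are commonly tabulated values for water above 100 °C and are an assumption here, since exact constants vary slightly between sources:

```python
import math

# Antoine constants for water (P in mmHg, T in deg C, roughly 99-374 C range).
A, B, C = 8.14019, 1810.94, 244.485

def p_sat_atm(t_celsius):
    p_mmhg = 10 ** (A - B / (C + t_celsius))
    return p_mmhg / 760.0  # convert mmHg to atm

print(f"{p_sat_atm(184.0):.1f} atm")  # ~10.8 atm, consistent with the ~10 atm quoted
```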
Cell pressure during microwave heating
Lee et al. [24] were the first to highlight the possibility of Temperature-Induced Diffusion due to microwave heating. This section extends their initial work to encompass a realistic range of physical and heating conditions, and introduces mechanical and kinetic elements to understand the impact on cell pressure, and ultimately whether cell rupture can occur within the timeframes reported in empirical studies.
Chemical potential and mass transfer
The chemical potential of a component in a mixture is a universal property that dictates the direction and rate of mass transfer. It is assumed that the biomass-solvent system contains two species: the solvent, which exists in pure form in the solvent phase and is able to diffuse into the cells, plus a solute which only exists within the cells. The rate-limiting step is assumed to be diffusion through the cell membrane, and component activity is assumed to equal molar concentration in this case (ideal mixture). The diffusion coefficient, D, is estimated from the molar diffusivity [25], which is taken as 10⁻¹² m²s⁻¹ [39] and varied by an order of magnitude as part of a sensitivity analysis.
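As a minimal sketch of the driving-force argument (not the paper's model), the chemical potential of water can be written as μ = μ°(T) + RT ln a + v_m(P − P°), with the temperature dependence of μ° expanded to first order via the molar entropy; the entropy value below is a near-ambient figure used for illustration only:

```python
import math

R = 8.314      # gas constant, J/(mol K)
V_M = 1.8e-5   # molar volume of liquid water, m^3/mol
S0 = 69.9      # molar entropy of liquid water, J/(mol K) (illustrative)

def mu_water(T, activity, P, T_ref=373.15, P_ref=1.0e5):
    """Chemical potential of water relative to pure water at (T_ref, P_ref)."""
    return -S0 * (T - T_ref) + R * T * math.log(activity) + V_M * (P - P_ref)

# Cell selectively heated 1 K above the solvent, interior activity 0.98:
mu_cell = mu_water(T=374.15, activity=0.98, P=1.0e5)
mu_solvent = mu_water(T=373.15, activity=1.00, P=1.0e5)
print(f"driving force into cell: {mu_solvent - mu_cell:.0f} J/mol")

# Equilibrium cell pressure: raise P until the cell matches the solvent.
P_eq = 1.0e5 + (mu_solvent - mu_cell) / V_M
print(f"equilibrium cell pressure: {P_eq / 1e5:.0f} bar")
```

With these illustrative values, a 1 °C selective-heating gap combined with a cell water activity of 0.98 corresponds to an equilibrium cell pressure of roughly 75 bar, in line with the order of magnitude reported by Lee et al. [24].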
Solving the partial differential mass equation (EQUATION 4)
Cell Expansion
Cells see an increase in pressure if mass influx of liquid takes place, or if the liquid density decreases due to an increase in temperature. The increase in pressure takes place due to the resistance to volumetric expansion provided by the rigid wall that surrounds each cell, with internal pressure directly related to stress in the cell wall.
A cell wall fragment can be envisaged as a polymer constituted of cellulose microfibrils contained within an amorphous matrix [40-42]. Therefore, cell wall fragments are expected to exhibit mechanical behaviour similar to that of typical rubber materials, but with significantly higher elastic moduli due to the reinforcing components present. This has been supported experimentally for hydrated cell wall fragments of onion epidermis [31]. The stress-strain behaviour of a typical non-lignified cell wall fragment can be represented mathematically using an expression for rubber materials (EQUATION 6) [43] that is subsequently fitted to empirical data for biomass to yield the adjustable parameters. The two parameters are manipulated to fit experimental stress-strain curves, with the additional constraint that their product is equal to Y, the elastic modulus of the fragment at zero strain. The value of Y is taken as 800 MPa so as to reproduce experimental findings [31], and is varied as part of a sensitivity analysis (TABLE A2).
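As a minimal sketch of this fitting procedure: the functional form below is a hypothetical stand-in for EQUATION 6 (chosen so that the product of its two parameters equals the zero-strain modulus, as stated), and the "empirical" data are synthetic placeholders:

```python
import numpy as np
from scipy.optimize import curve_fit

Y = 800e6  # zero-strain elastic modulus, Pa [31]

def wall_stress(strain, a, b):
    # Hypothetical rubber-elasticity form; its slope at zero strain equals a*b.
    lam = 1.0 + strain  # stretch ratio
    return (a / 3.0) * (lam**b - lam**(-2.0 * b))

# Synthetic stand-in for a digitised stress-strain curve of a wall fragment.
strain_data = np.linspace(0.0, 0.3, 10)
stress_data = wall_stress(strain_data, 1.6e9, 0.5)

# Enforce the constraint a*b = Y by fitting only b and setting a = Y/b.
popt, _ = curve_fit(lambda s, b: wall_stress(s, Y / b, b), strain_data, stress_data)
print(f"fitted exponent b = {popt[0]:.2f}")  # recovers 0.50 for this synthetic data
```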
As solvent diffuses into the cell the volume expands, leading to a strain in the cell wall that can be calculated from the change in volume. The stress can then be calculated from EQUATION 6 and used to infer the cell pressure. It is assumed that cell expansion is accompanied by a reduction in wall thickness [28,40-42], and that cells only experience strain in their largest dimension [29,42]. A numerical analysis was conducted (APPENDIX B) to construct the cell pressure-volume relationship and thus define the cell expansion mechanics. This relationship is shown in FIGURE B1, from which a cell pressure value can be interpolated if the volumetric strain is known, and vice versa.
Combining heat transfer, mass transfer and cell expansion
The temperature gradient between cell and solvent calculated from EQUATION 1 provides an additional driving force for mass transfer between the biomass and the solvent phase, and this extra driving force is included within the chemical potential gradient (EQUATION 3). The mass transfer rate is subsequently calculated using EQUATION 4, which consequently changes the pressure within the cell (due to volumetric expansion) and the activity (due to dilution). As activity and pressure change the chemical potential gradient also changes, which in turn influences the mass transfer rate. The interdependence of the different parameters is illustrated in FIGURE 4. The novelty of this work is that it combines heat transfer, mass transfer and cell mechanics into a single mathematical framework, which is able to calculate cell temperature and pressures over time for clusters of cells.
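A minimal sketch of the coupled update loop (not the authors' solver) is given below; the membrane coefficient, toy cell composition and linearised wall stiffness are illustrative assumptions standing in for the interpolated pressure-volume relationship of FIGURE B1:

```python
import math

R, V_M, S0 = 8.314, 1.8e-5, 69.9  # gas constant; molar volume and entropy of water
K_MEM = 1e-7                       # lumped membrane transfer coefficient (assumed)
T_SOLV, DT = 373.15, 1.0           # solvent temperature and selective-heating gap, K
Y_WALL = 800e6                     # linearised wall stiffness, Pa (stand-in)

n_solvent, n_solute = 1.0, 0.02    # moles inside the cell (toy composition)
P_cell = 1.0e5

def driving_force(a_cell, P_cell):
    # mu(bulk solvent) - mu(cell contents); positive drives solvent inward.
    mu_cell = (-S0 * DT + R * (T_SOLV + DT) * math.log(a_cell)
               + V_M * (P_cell - 1.0e5))
    return -mu_cell

for _ in range(10000):
    a_cell = n_solvent / (n_solvent + n_solute)        # dilution raises activity
    n_solvent += K_MEM * driving_force(a_cell, P_cell)  # influx step
    P_cell = 1.0e5 + Y_WALL * (n_solvent - 1.0)         # wall stress from volume gain

print(f"equilibrium cell pressure: {P_cell / 1e5:.0f} bar")  # ~70-75 bar here
```

Each pass recomputes the driving force from the updated activity and pressure, reproducing the feedback structure of FIGURE 4.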
Predicted Pressure Distribution
Cell pressure is a function of time, electric field intensity, dielectric properties, thermal conductivity, diffusivity, cell cluster size, elastic modulus and the solvent activity within the cell. The model outputs are presented in the first instance for variable process conditions with physical properties set at realistic values for biomass and solvent extraction. A sensitivity analysis is subsequently carried out to determine the effect of physical properties on cell pressure. TABLE A2 (APPENDIX A) summarises the variables and constants for the different analyses conducted.
Effect of microwave heating parameters
A biomass-solvent system consisting of a single cell was initially considered, with E = 10000 Vm -1 and k = 0.05 Wm -1 K -1 . Under these conditions the cell pressure induced during microwave heating rises to around 50 bar before appearing to reach equilibrium. The pressure build-up occurs in less than two minutes, which is well within the timeframe of numerous experimental microwave extraction studies [18,20,21,24,[44][45][46][47][48][49]].
It has been shown in section 2 that multicellular biomass materials exhibit a temperature distribution during microwave heating. Consequently, for clusters of cells a range of pressures is to be expected, with a maximum around the centre of the cluster where the temperature is highest. For the cluster of 540 cells that comprises the 0.3 mm particle, the pressure within the central cells exceeds 200 bar, whilst pressures at the outer edges of the cluster are still of the order of 50 bar. The pressure in this case is primarily the result of the temperature rise produced by selective heating, and its subsequent effect on mass transfer and cell expansion. FIGURE 7 shows how the maximum pressure (Pmax) within the cluster varies with both total particle size and electric field intensity (E). It is shown in FIGURE 7 that the maximum equilibrium pressure in a cell cluster (Pmax) increases with both the cluster size and E. Pmax is more sensitive to E for larger particles, indicating that while particles of all sizes see the same pressure profile under conventional heating (i.e. E = 0 V/m), they develop increasingly different pressure profiles according to their size and geometry as the input microwave power increases. FIGURE 7 includes a reference pressure of 70 bar as a rupture pressure for cells [24], and it is clearly evident that pressures beyond this threshold are entirely possible for a realistic range of particle sizes and E. When conventional heating is considered, i.e. the cell and solvent temperatures both peak at 100°C, the cell pressure does not exceed 33.9 bar, which is well below the pressure needed for rupture in most cases. The cell pressure for a broader range of process conditions is shown in TABLE 5. It is evident that Pmax increases with any change in the process conditions that increases ΔTmax (see section 2).
Hence, Pmax is more sensitive to E when the dielectric loss factor and the particle size are larger, and when k is smaller. Considering 70 bar as a reference cell rupture pressure [24], it is clear again that pressures beyond this threshold are entirely possible for a realistic range of process conditions.
Sensitivity to physical properties
The sensitivity of the maximum equilibrium cell pressure (Pmax) to a secondary set of parameters was investigated.
These were the initial cell water activity, the diffusivity through the cell-cell wall boundary, and the elastic modulus of the cell at zero strain (Y). For this analysis the electric field intensity was set at 6000 Vm -1 and the thermal conductivity at 0.05 Wm -1 K -1 . The initial cell water activity was varied between 0.97 and 0.99, which is a realistic range for plant cell water activities [24]; the base value used in all previous analyses is 0.98. Varying the diffusivity affects the kinetics rather than the equilibrium pressure: a decrease in diffusivity by an order of magnitude increases t99, the time for the pressure to approach its equilibrium value, to 2890 seconds.
Assessment of Temperature-Induced Diffusion for cell rupture
The model results confirm that Temperature-Induced Diffusion can lead to pressures that are high enough to achieve cell rupture, even when the temperature difference between the biomass and the surrounding solvent is relatively small. High pressures can develop within a realistic timeframe, and with realistic values for electric field strength, dielectric loss factor, thermal conductivity and biomass particle/cell cluster size. The pressures do not show significant sensitivity to water activity or the elastic modulus of the cell wall. Diffusivity through the cell-cell wall boundary has a direct effect on the kinetics of the pressure increase, but not on the equilibrium value, and across a realistic range of diffusivity values the time required for the cell pressures to increase is consistent with the duration of empirical studies carried out by numerous different researchers.
Further developments will be introduced to this model to make it applicable to a wider range of experimental settings. For instance, it is important to remove the current restriction on solvent choice, which in this work is limited to high-purity water. Furthermore, future iterations of this predictive tool should consider extracts originating from the cell wall rather than the cell only. It is therefore necessary to treat the cell wall as a distinct phase in which the kinetics of water and solute flow, and of desorption from the cell wall, are characterised.
The rates of solute generation within the cell wall should also be considered, given that according to recent literature microwave heating is capable of enhancing reaction speeds [50] and overcoming some reaction rate-limiting steps that arise under conventional heating [51]. This work is intended as a first step towards a broader predictive tool that combines both theoretical and experimental approaches.
Conclusion
A new model was developed to describe the action of microwave heating on biomass-solvent systems, which includes microwave volumetric heating, heat transfer, mass transfer and cellular expansion mechanics. The model explains how temperature gradients arise within clusters of cells due to the competing effects of volumetric heating and conventional heat transfer. Electric field strength, dielectric loss factor, thermal conductivity and the number of cells were all found to affect the internal cell temperature. However, in all but the most extreme of cases the magnitude of the temperature difference obtained under microwave heating was less than 40 °C, which is not sufficient to underpin the steam-rupturing hypothesis. The multitude of empirical observations of cell rupture during microwave heating must therefore be caused by another mechanism. The model was applied to the alternative hypothesis of Temperature-Induced Diffusion that was introduced by Lee et al. [24]. The kinetics of the pressure increase due to Temperature-Induced Diffusion are of the order of minutes, well within the timeframes of empirical observations; pressures >70 bar can readily occur, which are high enough to cause cell rupture. It was found that pressures needed to cause cell rupture could be readily achieved within a range of processing conditions that are consistent with previous laboratory studies, and that there was little sensitivity to changes in initial water activity, diffusion coefficient and elastic modulus. The Temperature-Induced Diffusion model developed here provides a significant advance in the mechanistic understanding of microwave heating and mass transfer within biomass, and for the first time allows an experimentally-observed phenomenon to be rationalised with a realistic set of physical parameters. Further work will combine theoretical and experimental approaches to develop this new model into a broader predictive tool that can determine the suitability of different biomass feedstocks for microwave extraction processes based on widely available physical properties.
Appendix A (selected parameter values): liquid density taken from temperature-dependent water data [52]; specific heat capacity 4.182 J g -1 K -1 [53]; microwave frequency 2.45 GHz; permittivity of free space 8.854 × 10 -12 F m -1 ; a length scale of 1 cm, a typical order of magnitude for water [32].
Appendix B: Cell pressure-volume relationship
The cell pressure-volume relationship was determined numerically by defining an input cell pressure array with small increments and computing the stress (by force balance with the internal pressure, per Newton's third law), the strain (using EQUATION 6) and the cell volume for each pressure value in a stepwise manner. An initial condition was defined in which the cell has a known input geometry, atmospheric pressure, and zero stress and strain in the cell wall. A base value of the elastic modulus at zero strain (Y) was taken as 800 MPa, which reproduces experimental stress-strain curves reported for hydrated cell wall fragments of onion epidermis [31]. The resulting relationship is shown in FIGURE B1. Figure B1: Cell pressure-volume relationship.
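A minimal numerical sketch of this stepwise procedure follows. Since EQUATION 6 is not reproduced in the source, a simple linear stress-strain law with modulus Y = 800 MPa stands in for it, and the cell geometry and the thin-wall force balance are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

Y = 800e6       # wall elastic modulus at zero strain, Pa (value from the paper)
R0 = 25e-6      # assumed initial cell radius, m (illustrative)
T0 = 1e-6       # assumed initial wall thickness, m (illustrative)
P_ATM = 101325.0

def strain_from_stress(sigma):
    # Placeholder inverse of EQUATION 6: linear law with modulus Y; in the
    # real model the rubber-material expression would be inverted numerically.
    return sigma / Y

def pressure_volume_curve(p_max_bar=250.0, n=1000):
    pressures = np.linspace(P_ATM, p_max_bar * 1e5, n)
    rel_volumes = np.empty(n)
    for i, p in enumerate(pressures):
        # Wall stress balancing the internal overpressure (thin-wall force
        # balance); wall thinning on expansion is neglected in this sketch.
        sigma = (p - P_ATM) * R0 / (2.0 * T0)
        # Strain is taken in the largest dimension only, as in the paper.
        rel_volumes[i] = 1.0 + strain_from_stress(sigma)
    return pressures, rel_volumes

P, V = pressure_volume_curve()
# Interpolate in either direction, as described for FIGURE B1:
pressure_at = lambda v: np.interp(v, V, P)
volume_at = lambda p: np.interp(p, P, V)
print(volume_at(70e5))  # relative volume at a 70 bar cell pressure
```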
"Environmental Science",
"Engineering"
] |
Study of Coffee Grounds Oil Action in PVC Matrix Exposed to Gamma Radiation: Comparison of Systems in Film and Specimen Forms
Poly(vinyl chloride), PVC, undergoes changes in its physicochemical properties when it is exposed to gamma radiation. Radiolytic stabilization of PVC is therefore one way to obtain a material with radiation resistance. In this work, we studied coffee grounds oil as a radiolytic protector of PVC in two system forms: PVC film and PVC specimen. The systems were irradiated at the sterilization dose for medical devices and viscosity measurements were performed. According to the viscosity assays of the PVC films, the oil promoted a 67% protection of the polymer matrix. On the other hand, the viscosity average molar mass (Mv) of the PVC specimens increased by about 13%, indicating a predominance of the crosslinking effect; however, samples containing the oil showed no significant changes in Mv. Therefore, the oil can be considered a PVC radio-stabilizing substance, opening a way for the use of a sustainable additive in the PVC industry.
Introduction
Recent decades have seen a significant rise in coffee consumption and, consequently, an increase in coffee waste generation. Alternative routes are therefore needed for coffee grounds management, developing new treatment or valorization strategies that are viable both technically and economically. The composition of coffee grounds is very complex, as a wide variety of chemical compounds are present, suggesting that this residue can be used for various applications. Kondamudi et al. (2008) 1 noted that one possible valorization route for coffee waste is the production of sugars from its high lignocellulosic content, which can be fermented into bioethanol for use as fuel or for other purposes. Caetano et al. (2012) 2 found that coffee grounds have an oil content of the order of 10-20 wt%, which can be used for biodiesel. In addition, bioethanol can be used in conjunction with the lipid fraction extracted from coffee to produce biodiesel via a transesterification reaction 2 .
On the other hand, poly(vinyl chloride), PVC, is a polymer widely used for food packaging and medical devices, both of which are sterilized by gamma irradiation. However, when polymer systems are submitted to sterilization by gamma radiation (25 kGy dose), their molecular structures undergo modification, mainly as a result of main chain scission and crosslinking effects 3 . Both processes coexist for PVC molecules and either one may be predominant depending not only upon the chemical structure of the polymer, but also upon the conditions (temperature, environment, dose rate, etc.) under which irradiation is performed. The crosslinking and main chain scissions that take place during irradiation may lead to sharp changes in the physical properties of the PVC 4,5,6 . Furthermore, HCl molecules are also released in the radiolytic process. There are some studies on the radiolytic stabilization of PVC 7,8,9 . For example, reference 7 reported the radioprotective action of a common photo-oxidative stabilizer, HALS (Hindered Amine Light Stabilizer), in PVC films plasticized with DEHP (di-2-ethylhexyl phthalate). The HALS additive is not manufactured for radiation resistance, but its successful use is believed to stem from the interruption of the oxidative propagation reaction by scavenging of the chlorine radicals formed in PVC radiolysis. However, no studies reporting the use of vegetable oils as radiolytic protectors of polymers are known. Accordingly, the preparation of PVC films and PVC specimens containing oil extracted from coffee grounds (OCG), which is a discarded material, is of great interest, and no data on this proposed system have been found. Films and specimens of PVC with OCG were exposed to gamma irradiation and the effects of the oil on the viscosity average molar mass (M v ) of gamma-irradiated PVC were studied. In addition, the free radical scavenger action of OCG, the FT-IR spectra, and the mechanical properties of PVC with OCG, in both forms, are discussed in this study.
Oil extraction from dried coffee grounds was performed in a Soxhlet apparatus using n-hexane as solvent. The 8 h extraction was carried out for total removal of the oil. Solvent was removed from the resulting product by simple distillation at 60°C. The oil was kept away from light and air at 18°C until processing and analysis took place.
Preparation of PVC films and PVC specimens
The studied polymer material was commercial PVC (BRASKEM, Brazil). Films (≈ 60 µm thickness) of PVC and of PVC with added OCG (PVC/OCG) were prepared by solvent casting from methyl-ethyl-ketone (MEK) by slow evaporation in air at room temperature (≈ 27°C), after 48 h of magnetic stirring of the polymer solution (1.8 g of PVC / 40 mL of MEK). The MEK was dried with Na 2 SO 4 and purified by distillation.
The PVC and PVC/OCG specimen samples, on the other hand, were produced by BRASKEM, Brazil. Norvic SP 1300FA resin (K = 71) was used for the production of the PVC specimens. The resin was mixed (Mecanoplast mixer, 9-liter ML9, 1200 rpm) with solid additives such as thermal stabilizers at room temperature (≈ 27°C). The mixture was then heated, and at 80°C the liquid additives, the plasticizer dioctyl phthalate and OCG, were added. The blend was processed on a two-roll calender at a temperature of 150°C for 3 minutes (20 rpm). The specimens were then pressed in a Luxor press at a pressure of 100 kgf/cm² for 2 min and at 200 kgf/cm² for 1 minute, consecutively. Afterwards, the PVC specimens were cooled to 40°C and cut to the type IV tie dimensions of standard ASTM D-638 (≈ 3 mm thickness). According to the manufacturer, the specimens were produced for medical applications (catheters). They were processed with additives such as plasticizers, lubricants, and thermal protectors. For both systems the OCG concentration used was 0.50 wt%. This concentration was obtained in our previous study 10 .
Viscosity measurements
The viscosity measurements of the PVC and PVC/OCG samples were carried out in THF solution at 25.0 ± 0.1°C using an Ostwald viscometer in a thermostatic bath. The intrinsic viscosity of the samples was calculated from the relative viscosity, η rel ≈ ν/ν 0 ≈ t/t 0 , within the range 1.1-1.9, where ν and ν 0 are the kinematic viscosities of the polymer solution and the solvent, respectively, and t and t 0 are the flow times of the solution and the solvent, respectively. Therefore η rel was calculated from the t/t 0 ratio. The specific viscosity (η sp = η rel - 1) and the reduced viscosity (η red = η sp /C), where C is the concentration of the solution (0.6 g/dL), were calculated as well. The intrinsic viscosity [η] was determined by the Solomon-Ciuta equation 8,11 . The viscosity average molar mass, M v , was then calculated from the corresponding [η] values through the Mark-Houwink equation, [η] = K·M v^a 12 . For the Mark-Houwink equation, K and a are 1.5 x 10 -4 dL/g and 0.766, respectively, for the THF-PVC system at 25°C 13 . The radio-stabilizing action of OCG on the PVC matrix can be assessed by comparison of the degradation index (DI), DI = (M v0 /M v ) - 1, for a given irradiation dose, where M v0 and M v are the viscosity average molar masses before and after gamma irradiation, respectively. DI is obtained from the viscosity analysis and reflects the number of main chain scissions per original molecule after irradiation.
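The viscometric workflow above can be summarised in a short script. It uses the Solomon-Ciuta single-point equation and the Mark-Houwink constants quoted for the THF-PVC system at 25°C; the flow times below are invented for illustration only, not measured values from the paper.

```python
import math

K, A = 1.5e-4, 0.766      # Mark-Houwink constants for THF-PVC at 25 C [13]
C = 0.6                   # solution concentration, g/dL

def mv_from_flow_times(t, t0):
    eta_rel = t / t0                       # relative viscosity (should lie in 1.1-1.9)
    eta_sp = eta_rel - 1.0                 # specific viscosity
    # Solomon-Ciuta single-point estimate of the intrinsic viscosity:
    intrinsic = math.sqrt(2.0 * (eta_sp - math.log(eta_rel))) / C
    # Invert Mark-Houwink ([eta] = K * Mv**a) for the viscosity average molar mass:
    return (intrinsic / K) ** (1.0 / A)

def degradation_index(mv0, mv):
    # DI = Mv0/Mv - 1: chain scissions per original molecule after irradiation.
    return mv0 / mv - 1.0

# Hypothetical flow times (s) before and after a 25 kGy dose:
mv0 = mv_from_flow_times(185.0, 120.0)
mv = mv_from_flow_times(176.0, 120.0)
print(f"Mv0 = {mv0:.3e}, Mv = {mv:.3e}, DI = {degradation_index(mv0, mv):.3f}")
```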
Irradiation of samples
The samples were exposed to gamma radiation from a 60 Co source (dose rate of 6.13 kGy/h) at a dose of 25 kGy (sterilization dose), in the presence of atmospheric air and at room temperature (≈ 27°C).
Free radical scavenger action of the coffee grounds oil
The efficiency of OCG in scavenging the 2,2-diphenyl-1-(2,4,6-trinitrophenyl)hydrazyl radical (DPPH) was determined in this study. The standard reaction consists of mixing 0.0024 g of DPPH in 100 mL of ethanol. An appropriate amount of oil was mixed with the DPPH solution and the system was vigorously agitated. The reaction was carried out at ambient temperature (≈ 27°C) for 30 min. After the reaction, the absorbance at 515 nm was measured against a blank of pure ethanol in a UV-vis spectrophotometer (Spectro 22, 108-D, 60 Hz). The radical DPPH scavenging capacity (%SC) was estimated from the difference in absorbance with and without OCG (equation 1): %SC = [(As n - As)/As n ] × 100, where As n and As are the absorbances of the DPPH solution without and with OCG, respectively.
Mechanical properties
The tensile properties of the PVC film samples were determined according to ASTM D-882 using an Instron machine (IMIC, DL-500 N). The crosshead speed was 3 mm/min. The tests were carried out at room temperature (≈ 27°C) and the results shown in this study are the average of four samples with dimensions of 2.5 x 7.5 cm x 0.11 mm. The tensile properties of the PVC specimen samples were determined according to ASTM D-638, also on four samples, using a machine similar to that used for the PVC film samples. These assays were performed under the following conditions: load cell of 500 N, crosshead speed of 2 mm/min, and room temperature (≈ 27°C).
Radiolytic action of coffee grounds oil in PVC matrix
The oil content of the coffee (Coffea arabica L.) grounds was determined to be 10%, which is comparable to that of commercial vegetable oils such as soybean (11-25%) 14 . The oil was incorporated into the PVC matrix and formed a homogeneous material for both the PVC film and the PVC specimens. After irradiation a yellow color was observed for all systems.
The radio-stabilizing results are shown in Table 1, which gives M v for the PVC systems before and after irradiation. The results revealed that M v decreased for irradiated PVC films and increased for PVC specimens. In the radiolysis process, both scission and crosslinking of the polymeric material may occur. The formation of unsaturated linkages, gases, and low molecular weight products is also brought about by radiolysis 4,5 .
Our results for the PVC film showed a predominance of main chain scission, in agreement with the literature on gamma radiation effects on the PVC matrix 4,5,7 . However, the analysis of Table 1 revealed that fewer chain scissions occur in PVC/OCG films at 0.50 wt% concentration. At the sterilization dose (25 kGy) we calculated DI = 0.126 for PVC and DI = 0.041 for PVC/OCG films. These data represent a 67% decrease in scissions per original molecule of PVC, since (0.126 - 0.041)/0.126 ≈ 0.67.
On the other hand, according to Table 1, the PVC specimens undergo an increase of 13% in Mv, which indicates crosslinking as the main effect in the PVC chains. The PVC specimens have a larger thickness than the PVC films, so it is probable that a larger cross-section interacts with the radiation, which can contribute to an increased formation of radiation products. These products then seem to undergo predominantly radiation crosslinking, which leads to the increased Mv. In addition, the phenolic and phosphate antioxidant additives usually present in plasticized PVC preferentially degrade during irradiation because of partial radiolysis 15,16 . Thus, PVC specimens are more sensitive to the irradiation conditions and more susceptible to oxidative degradation than the plain resins, which, like the PVC film, do not contain such additives. However, no significant event (scission or crosslinking) was found for the PVC/OCG specimen systems. This result opens an important route for the radiolytic stabilization of PVC by the use of a sustainable additive.
In addition, the Mv of the PVC specimens is lower than the Mv of the PVC film. This implies a lower intrinsic viscosity, which results from the contraction of the PVC molecular coil in solution. The molecules of some additives can cause contraction of the PVC molecules, for example due to a lack of chemical affinity. The contracted PVC coil (in the PVC specimens) thus has a lower hydrodynamic volume, which facilitates the passage of the polymer solution through the viscometer capillary tube and decreases its viscosity.
On the other hand, further evidence of intermolecular interactions between PVC and OCG can also be obtained from the viscosity analysis. Table 1 shows an increase in M v for PVC/OCG in both the film and specimen systems. These increases in M v mean an expansion of the PVC molecular coil in solution, most probably due to dipole-dipole interactions between some OCG group and the chlorine of the PVC molecule, which cause expansion of the molecule and increase the viscosity. Nevertheless, further discussion of the molecular interactions between OCG and PVC is presented with the FT-IR analyses.
To the best of our knowledge, no information about the use of OCG in the radiolytic stabilization of polymers has been published, and consequently the mechanism of the radiolytic stabilization effect of this oil is not clear. However, some probable reactions may occur when the polymer system is exposed to gamma irradiation. Gamma rays can break covalent bonds in the PVC molecule to directly produce free radicals 17,18 . The efficiency of certain compounds in the stabilization of polymer molecules against radiation may be evaluated by measuring their effect on the free radical population after irradiation, as well as on its rate of decay. Table 2 shows the results obtained by using OCG as a free radical scavenger in the DPPH solution. DPPH is a stable, non-natural free radical that can react with another free radical. The DPPH solution presents a strong absorption in the visible spectrum at a wavelength of 515 nm, characterized by an intense violet coloration due to the presence of free electrons (Fig. 1a). When DPPH is in the presence of substances able to scavenge free radicals, the absorption is inhibited, leading to a stoichiometric discoloration in relation to the number of reduced DPPH molecules 19,20 . The degree of discoloration is directly correlated with the free radical scavenging activity of the evaluated substance 21,22 .
We assume that quenching is the principal stabilizing function of OCG in the PVC films and PVC specimens, but further work is required to provide a better understanding of all the processes involved in the radiolytic action of the oil on the PVC matrix.
Characterization of the PVC film and PVC specimen
FT-IR spectroscopy was used to detect and identify intermolecular interactions between PVC and OCG molecules. The existence of specific C-Cl-oil interactions in the PVC/OCG system could be inferred from a shift in the C-Cl stretching vibrations in the presence of OCG, or from other changes such as broadening of the C-Cl stretching peak, a change in its intensity, or even the formation of a new peak 23,24 . Figure 2 shows the FT-IR spectra of PVC and PVC/OCG for non-irradiated and irradiated samples in the 4000-500 cm -1 wavenumber range, for the PVC film (2a) and the PVC specimens (2b). The band assignments for all systems (irradiated and non-irradiated) of PVC and PVC/OCG are listed in Table 3.
It can be observed in the spectra of the PVC specimen and the PVC film (Figure 2) that the OCG addition is not easily detected visually, because the positions of the absorbance peaks are similar. However, according to Table 3, the vibrational peaks assigned to C-Cl stretching of PVC are shifted in PVC/OCG for both the PVC specimen and the PVC film systems. Moreover, the peak of the PVC specimen at 1423 cm -1 split into two sharp peaks at 1423 and 1460 cm -1 in the non-irradiated PVC/OCG specimen, for example. Similar behavior was observed for the 1354 cm -1 absorption of the (non-irradiated) PVC specimen, which split into two sharp peaks at 1356 and 1381 cm -1 in the PVC/OCG specimen. The assignments of the additional peaks are shown in Table 3. The change in the shape and position of the peaks confirms that interactions occurred between the PVC and OCG molecules 25 . On the other hand, the spectra obtained for irradiated samples did not show significant changes.
In addition, the C=O vibration was observed in all systems except the irradiated PVC/OCG film. In the case of the films, this vibration may be attributed to residual solvent (methyl-ethyl-ketone) used in the casting production of the samples. For the PVC specimens, however, additives such as plasticizers are incorporated into the polymer matrix, and the C=O vibration may be assigned to these molecules. This is supported by the presence of a peak assigned to the C-O-C axial vibration only in the PVC/OCG specimen spectrum (see Table 3).
Mechanical properties
The results of the mechanical measurements for PVC and PVC/OCG are summarized in Table 4. The properties studied were the elongation at break (Eb) and Young's modulus (Ym), reported as mean values. Our results reveal that OCG in the amount of 0.0090 g (equivalent to a concentration of 0.5 wt% in the PVC matrix) has no free radical scavenging action, because the DPPH+OCG solution showed no discoloration (Fig. 1c). On the other hand, the positive control of BHT exhibited radical scavenging capacity through solution discoloration (see Fig. 1b).
Thus, the proposed mechanism of action of OCG in the PVC matrix is quenching. A quencher stabilizer acts by dissipating excess energy through fluorescence, phosphorescence, or conversion to heat, instead of letting it break chemical bonds. The OCG molecule thereby causes a decrease in the formation of free radicals, which are responsible for the scission degradation and crosslinking reactions. The possible mechanism is represented in Scheme 1.
Scheme 1.
Proposed mechanism of OCG action in a PVC molecule exposed to gamma irradiation.
Analyzing first the non-irradiated PVC films, it was found that the value of Ym for PVC/OCG decreases by 3% when compared with the Ym value of PVC. This result means a decrease in the rigidity of the PVC film and consequently explains the 9% increase in the Eb value of PVC/OCG. Generally, PVC shows dipole-dipole attraction as a result of the electrostatic interactions between the chlorine atom of one polymer chain (negative pole) and the hydrogen atom of another polymer molecule (positive pole). The intermolecular interactions between PVC and OCG were discussed in the FT-IR analyses. These interactions could be weakened by the action of OCG, which promotes a decrease in the density of entanglement points of the polymer molecules. In addition, a 26% decrease in the Ym value, with a consequent 16% increase in the Eb value, was found for PVC films irradiated at 25 kGy. The chain scission effect caused by gamma irradiation (Table 1) decreases the average length of the PVC molecule. The density of entanglement points decreases, leading to a decrease in the Ym value as a consequence of PVC radiolytic degradation. The lower molecular weight also makes fibrils less stable and therefore favors brittle fracture 12,17 . On the other hand, a 15% decrease in the Ym value and less influence on Eb were found for irradiated PVC/OCG. These results are explained by the stabilizing action of OCG in the PVC matrix and agree with the viscosity measurements.
Similar results were found for the non-irradiated PVC specimens, i.e., OCG decreased the rigidity of the polymer, lowering the Ym value and consequently increasing the percentage elongation. The results also showed good radiation resistance of the PVC specimens, unlike the PVC film systems, since the radiation did not significantly influence their mechanical properties. On the other hand, an increase in Ym and a decrease in Eb were found in the PVC/OCG specimens as an effect of radiation. It should be noted that the results shown in Table 4 reveal a fairly plasticized material, given the low Ym value and consequently high Eb value. The large amount of plasticizer and other additives in the polymer matrix must have influenced the non-significant action of OCG on the mechanical properties of the PVC specimens.
Conclusions
The oil content of the coffee (Coffea arabica L.) grounds was determined to be 10%. This oil was added to the PVC matrix to form two systems: PVC/oil films and PVC/oil specimens. The viscosity analyses suggest that the oil (0.5 wt%) protected the PVC in both forms against radiolysis via a quenching mechanism. The FT-IR analyses showed specific molecular interactions between the PVC and oil molecules. The incorporation of coffee grounds oil into the PVC film directly influenced its mechanical properties: the material in film form became more plasticized and suffered less gamma-irradiation damage to its mechanical properties. On the other hand, the PVC specimens showed great mechanical resistance, and no significant action of the oil was found for these systems.
Our results confirm that the polymer industry can produce materials with sustainable additives for applications that require resistance to gamma radiation. The pioneering nature of our study opens a fruitful path for new studies that use discarded materials to generate a positive impact on the environment and on polymer science.
"Materials Science",
"Environmental Science"
] |
Black P/graphene hybrid: A fast response humidity sensor with good reversibility and stability
Black phosphorus (BP) materials have attracted considerable attention owing to their ultra-sensitive humidity sensing characteristics, which arise from the natural absorption of water (H2O) molecules on the BP surface caused by its specific 2D layer-crystalline structure. On the other hand, BP-based humidity sensors are poorly repeatable due to the instability of BP in the presence of water molecules, which reduces the stability of the sensor. In this study, this limitation of the BP-based humidity sensor was overcome by preparing a BP/graphene hybrid as a novel humidity sensing nanostructure. The BP/graphene interface improved the stability of the humidity sensor over a period of weeks, with a linear response within the relative humidity (RH) range of 15-70%. The response/recovery of the humidity sensor was extremely fast, within a few seconds. The response (S) of the humidity sensor based on the BP/graphene hybrid is 43.4% at RH = 70%. The estimated response and recovery times of the sensor are only 9 and 30 seconds, respectively, at RH = 70% and room temperature. The experimental investigation reveals that the BP/graphene hybrid not only improves the reversibility and hysteresis factors but also enhances the stability of the humidity sensor.
control provides the latest advantages of 2D electronics, such as low electronic noise, low power consumption and excellent stability interfaces.
To date, exfoliated 2D layers, and BP in particular, could only be transferred in small sizes onto substrates in basic and proof-of-concept studies 12,13 . However, to exploit their unique characteristics in scalable technologies, it is essential to establish a mass production process for 2D material synthesis and sensor device fabrication. This very exciting challenge is addressed in the current work. In this study, a large quantity of BP material was synthesized in powder form (several grams) with a novel high energy ball milling (HEBM) technique. The BP powder was exfoliated by a mild ultrasonication process, and a BP/graphene heterojunction was formed on the graphene surface using an electrospray system for the humidity sensor. The humidity sensing performances of the sensors prepared with pure BP and with the BP/graphene heterojunction were investigated and compared in terms of sensitivity, reversibility, and stability. The role of graphene and of the BP/graphene interface in humidity sensing was investigated in detail. Figure 1 presents the sequence of the humidity sensor fabrication process using BP material on graphene. The fabrication process starts with transferring graphene onto SiO 2 /Si at the wafer level. The platform of the humidity sensor chip then comprises patterned graphene between two gold (Au) electrodes. The BP powder was synthesized by the HEBM method and deposited on the patterned graphene by electrospray (see the Supplementary data for more detail). Figure 2a presents a SEM image of the fabricated humidity sensor based on the BP/graphene hybrid. The patterned graphene was located between two gold (Au) electrodes, with a distance of 100 µm between them. The BP particles were synthesized from a commercially available red P powder by the HEBM technique (Fig. 2b). The BP particles, of size ca. 200 nm, were well deposited on the patterned graphene area by the electrospray system, as shown in Fig. 2b. The well-developed crystalline structure of BP, with its interesting 2D puckered-layer crystal structure, was confirmed by HRTEM (Fig. 2c). HRTEM electron diffraction, as in Fig. 2d, showed that the sample corresponded well to orthorhombic BP. Figure 3a shows the XRD pattern of the BP powder. The major peaks of the pure BP sample were observed at 2θ = 16.44°, 34.8°, and 55.78°, corresponding to the (020), (040), and (060) planes of BP, respectively, as denoted by the International Center for Diffraction Data (JCPDS # 74-1878). Figure 3b presents the Raman spectrum of the BP-graphene heterojunction, where the characteristic A g 1 , B 2g , and A g 2 peaks of BP are clearly visible at 359, 432, and 462 cm −1 , respectively 8,19 . The A g 1 and A g 2 peaks of the pure BP sample were observed at 356 and 459 cm −1 , respectively. The Raman spectrum of single layer graphene consists of two major sharp peaks (G and 2D) at 1584 and 2674 cm −1 . These major peaks originate from the doubly degenerate zone-center E 2g mode and the second-order zone-boundary phonons, respectively 20 . The D peak at 1350 cm −1 is related to the defects present 20 . The features of these peaks are indicators of the quality of graphene, such as doping and strain. The G and 2D peaks in BP/graphene were at 1590 and 2689 cm −1 , respectively. No obvious increase in the D peak in the Raman spectra was observed before and after depositing BP powder on graphene. The as-synthesized single layer graphene has a Raman spectrum with a 2D/G ratio > 1.
However, the 2D/G ratio was reduced (2D/G < 1) after the sensor fabrication process (see Fig. S4, Supplementary data, for detail). Figure 4 shows the transient response of the as-fabricated humidity sensors, based on pure BP and on the BP-graphene heterojunction, and their response after 1 hour at a reproducible relative humidity (RH) of 70%. Figure 4a shows that the response of the humidity sensor using pure BP powder was not repeatable and degraded after 1 hour. This degradation of the BP-based sensor in a high humidity environment is consistent with previously published reports 21,22 . Compared with the pure BP sample, the humidity sensor based on BP-graphene (Fig. 4b) showed a much better response, with the advantages of fast response/recovery, good repeatability and no degradation after 1 hour. For comparison, the response of an as-fabricated humidity sensor using pure graphene can be found in Fig. S5 (Supplementary data). The initial resistance of the humidity sensor was 7.5 kΩ for the BP/graphene sample and 500 kΩ for the pure BP sample. Owing to the good conductivity of single layer graphene (the initial resistance of the humidity sensor using pure graphene was only 0.63 kΩ, see Fig. S5, Supplementary data), the BP/graphene heterojunction sensor had a very low resistance compared with pure BP. Moreover, the signal-to-noise ratio of the BP/graphene heterojunction was higher than that of pure BP, leading to a clearer resistance signal in the BP/graphene sample.
Results and Discussion
The sensor relative response (S) is defined as the percentage resistance change of the resistive sensor upon exposure to humidity, S (%) = [(R a − R h )/R a ] × 100, where R a is the resistance of the sensor in the presence of dry N 2 gas only and R h is the resistance in the presence of humidity at the given concentration. The response time is defined as the time required for the humidity sensor to reach 90% of the resistance change (ΔR) when the sensor is exposed to a given humidity. The recovery time is defined as the time needed to recover to 90% of the initial baseline after the humidity is turned off. The responses S (%) of the humidity sensor based on the BP-graphene heterojunction, calculated from Fig. 5a, were 43.4, 35, 25.1, 13, and 3% at RH values of 70, 55, 40, 25, and 15%, respectively. The response/recovery time of the humidity sensor was 9/30 seconds at a RH of 70%. Compared with other similar studies, humidity sensors using liquid-exfoliated pure BP have reported response/recovery times of 255/10 seconds 21 and 5/5 seconds 12 , while the response/recovery time of our humidity sensor using pure BP was 24/72 seconds. The results show that the humidity sensor using the BP/graphene heterojunction is better than that using pure BP, with a 2-fold faster response time. Moreover, the humidity sensor using pure BP without a passivation method showed large degradation after a few cycles (within 2 hours) 21 . Fig. 5b illustrates the sensing mechanism of the humidity sensors based on the BP/graphene heterojunction. The humidity (H 2 O) molecules withdraw free electrons from the BP flakes and increase the hole density in BP; BP exhibits p-type semiconductor behavior 8,9,15 . The increasing hole density in p-type BP therefore leads to decreasing resistance in the humidity sensor of the pure BP sample (as seen in Fig. 4a). In the BP-graphene heterojunction sample, this encourages free electrons in the graphene to transfer to BP via the BP/graphene interface. Finally, the increasing hole density in the p-type graphene decreases the resistance of the BP/graphene sample (as seen in Fig. 4b). Figure 5c shows the transient response of the as-fabricated BP-graphene heterojunction humidity sensor and the response of the same sensor sample after 2 weeks. In general, there is little difference between these responses, except that the response/recovery is slower after 2 weeks. On the other hand, the sensor showed a large reduction in response after 4 weeks, as shown in Fig. 5d (see also the Supplementary data). Figure 5d confirms the linearity of the humidity sensor in the RH range of 15-70% and the stability of the sensor after 2 weeks. Compared with previous publications on humidity sensing using BP materials 12 , the BP-graphene heterojunction humidity sensor in this study has the advantageous features of a fast response/recovery time, linearity, and a mass production process. The degradation of the BP-only sensor was caused by the lower stability of the BP flake surface in moisture and humidity 14 . In that case, pure BP plays a role both as the humidity sensing material and as the conducting path between the Au electrodes in the sensor device, leading to higher noise and degradation during humidity absorption/desorption. The sensing mechanism of the BP/graphene heterojunction involves charge transfer at the BP/graphene interface, with graphene playing the role of a stable conducting path.
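The definitions above translate directly into a short analysis routine. The resistance trace below is synthetic (an exponential decay chosen to mimic the reported 7.5 kΩ baseline and ≈43% response); only the definitions of S and the 90% response time follow the text, and the same helper applies to the recovery time with the start and end resistances swapped.

```python
import numpy as np

def response_percent(r_a, r_h):
    # S (%) as defined in the text: percentage resistance change on exposure.
    return (r_a - r_h) / r_a * 100.0

def time_to_fraction(t, r, r_start, r_end, frac=0.9):
    # First time at which the resistance has covered `frac` of the total
    # change; used for both the 90% response and 90% recovery definitions.
    target = r_start + frac * (r_end - r_start)
    crossed = np.where((r - target) * np.sign(r_end - r_start) >= 0)[0]
    return t[crossed[0]] if crossed.size else None

# Synthetic exposure: resistance falls from 7.5 kOhm toward 4.2 kOhm.
t = np.linspace(0.0, 60.0, 601)
r = 4.2 + (7.5 - 4.2) * np.exp(-t / 4.0)
print("S =", round(response_percent(7.5, 4.2), 1), "%")
print("t_response =", time_to_fraction(t, r, 7.5, 4.2), "s")
```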
In addition, owing to the good stability of the BP/graphene interface [15][16][17] , the humidity sensor showed better stability and reversibility than the sensor using the BP-only sample. Passivation of the BP surface is essential for sensing applications 12,22 and electronic devices 14,15 based on BP materials. This is the greatest challenge in applying BP as a sensing material, because passivation reduces the sensitivity of the BP surface and complicates the fabrication of the sensor, although sensors based on passivated BP have shown long-term stability of up to one month 22 or three months 12 . In this study, the BP/graphene heterojunction humidity sensor showed high sensitivity and fast response/recovery, owing to the unpassivated BP, together with acceptable stability of up to two weeks, which is better than that of a sensor using pure exfoliated BP material, whose stability lasts only a few hours.
Conclusions
A humidity sensor based on a BP/graphene heterojunction was developed using a large-scale fabrication technique. The role of graphene in the heterojunction was investigated by comparing the sensor performance of the BP and BP/graphene samples. The BP-only samples showed good sensitivity to humidity, with the advantages of a strong response and a rapid response/recovery time, but were less stable, non-linear, and poorly reversible. The BP/graphene heterojunction-based humidity sensor overcomes these limitations and has excellent sensing properties, with a response of 43.4% and a response/recovery time of 9/30 seconds at a RH of 70%. In addition, by forming the BP-graphene heterojunction the humidity sensor achieves good repeatability, low hysteresis and long-term stability over two weeks, solving the instability of the BP-only sensor.
Methods
The BP powder was synthesized using the HEBM technique at ambient temperature and pressure. The detailed experimental conditions for the synthesis of BP powder by HEBM can be found elsewhere (see the Supplementary data) 23 . Wafer-scale single layer graphene was synthesized by chemical vapor deposition (CVD) 20 . The sensor chip (graphene chip), with patterned graphene between two gold (Au) electrodes (distance of 100 µm), was fabricated at the wafer-scale level by a dry-etching technique and a conventional MEMS process (see the Supplementary data). The BP powder, dispersed in dimethyl sulfoxide (DMSO, Sigma-Aldrich) by ultrasonic treatment, was deposited on a graphene chip via an electrospray system to fabricate the humidity sensor. DMSO was used as the solvent for BP in the electrospray experiment, in which BP was deposited on the patterned graphene surface. The flow rate was controlled precisely by a syringe pump (KD 200; KD Scientific). The electric potential was adjusted using a power supply (+0-30 kV; Korea Switching) connected to the metal capillary. The metal capillary was used as an electrospray emitter with an inner diameter of 250 μm. The nozzle tip-to-substrate distance was fixed at 10 mm. For rapid evaporation of the solvent, the surface temperature of the substrate was adjusted using a hot plate (for more details, see the Supplementary data). For comparison, humidity sensors using pure BP powder were fabricated under the same conditions but without graphene between the Au electrodes. In the wire-bonding process, the humidity sensor was mounted on a TO-39 chip to finish the sensor fabrication process.
The surfaces of the BP/graphene hybrid were characterized by field emission scanning electron microscopy (FESEM, JSM-6500F). Transmission electron microscopy (TEM) and high resolution TEM (HRTEM) images of the BP were captured using an ultra-high resolution field emission electron microscope (JEOL JEM-2100). The crystalline characteristics of the BP were investigated by X-ray diffraction (XRD, D/MAX 2500 Rigaku, Japan). Raman spectroscopy (Xplora, Horiba, France) was performed in a back-scattering configuration using a 532 nm laser source and a holographic grating of 1200 grooves/mm. The sensors were mounted inside an enclosed environmental chamber, and a multimeter (Fluke 8846A) connected to a computer was used to record the resistance of the sensors. The humidity in the gas chamber was generated using a water bubbler controller and introduced into the chamber by N 2 gas. A computerized mass flow controller (ATOVAC, GMC 1200) was used to change the humidity and the concentration of N 2 gas. The relative humidity (RH%) was varied from 15 to 70%. N 2 gas with different humidity levels was delivered to the chamber at a constant flow rate of 100 standard cubic centimeters per minute (sccm). The gas chamber was purged with N 2 gas between each humidity pulse, allowing the surfaces of the sensors to return to the standby state.
"Materials Science"
] |
Causes of the Convergence Slowdown in the Countries of Central and Eastern Europe, 2008-2014
In this paper we analyze the growth and real convergence process of the Central and Eastern European countries which joined the European Union in May 2004, namely Poland, the Czech Republic, Slovakia, Hungary, Lithuania, Latvia, Estonia and Slovenia (henceforth CEEC-8), vis-à-vis the European Union (EU) as a whole, individual EU members, and non-EU OECD countries. The analyses cover the period from 1995 to the present. The results of testing for beta-convergence indicate that in the period 2008-2014 the CEEC-8 countries converged to the Mediterranean countries but did not converge to the rich countries of the European Union or to non-EU OECD countries. We estimate the parameters of a dynamic panel model to identify the causes of the convergence slowdown of the CEECs. According to the estimation results, the low level of innovation in the countries under consideration was the main cause of both the slower TFP growth and the convergence slowdown.
Introduction
The main aim of this paper is to identify the causes of the significant growth and convergence slowdown in the 2008-2014 period in the Central and Eastern European countries (CEEC-8) - Poland, the Czech Republic, Slovakia, Hungary, Lithuania, Latvia, Estonia and Slovenia - in relation to both the EU and the non-EU OECD countries, for which noticeable if not significant divergent tendencies can also be observed.
The growth and convergence slowdown, and even the divergence process, are relatively new phenomena for the CEECs, and they have come as a surprise to many economists and experts in the field because they emerged after a slow but steady process of real convergence, observed since these countries' successful systemic transformation after 1995 and especially after their entry into the EU in 2004. Due to these two extremely important factors (systemic transformation and the benefits derived from accession to the EU), the CEECs converged from an average of approximately 40% of EU gross domestic product (GDP) per capita at purchasing power parity (PPP) in 1990 to 55% in 2007 (Czasonis, Quinn, [8]).
Though the literature devoted to growth determinants in the countries of Central and Eastern Europe is very extensive, there is still a gap that should be bridged. Most analyses concentrate on the period of convergence, neglecting the fact that after the outbreak of the global financial crisis a slowdown in convergence was observed. Using data on innovation levels in the countries of Central and Eastern Europe, we point out the most important reasons for their convergence slowdown. Additionally, we contribute to the existing literature by showing the differences between countries in terms of both TFP determinants and TFP growth, and by conducting counterfactual analyses which could provide useful hints for economic policy in the CEE countries.
The paper is divided into two parts. The first is devoted to the analysis of the growth and convergence process in the CEEC-8 group as observed from the beginning of the systemic transformation process in 1995 until 2014. It consists of both statistical analyses of the convergence process and the testing of hypotheses concerning beta-convergence, especially after 2007/2008 when the initial significant growth and convergence process broke down. The convergence process is monitored and analyzed by dividing the period 1995-2014 into three sub-periods. The second part of the paper is devoted to identifying and evaluating the causes of the CEECs' convergence slowdown after 2007/2008. For this purpose, we estimate the parameters of an econometric model. As it is widely recognized that success in real convergence depends on total factor productivity (TFP), we concentrate on the TFP determinants of the CEECs. We estimate the parameters of the dynamic panel model using the Blundell-Bond [5] systemic estimator in order to avoid the problem of endogeneity of some regressors.
The debate concerning middle-income economies, and the middle-income trap in particular, arose somewhat later, a decade or so ago, when some middle-income countries evidently stopped developing, at least in terms of real convergence towards developed ones. A number of valuable research studies were undertaken in relation to some Latin American and East Asian countries (Yusuf and Nabeshima [41]; Felipe [12]). Much of the research work has been done and/or supported by international financial institutions, for example the Asian Development Bank (Felipe [12]), the World Bank Development Research Group and the Inter-American Development Bank (Yusuf and Nabeshima [41]), as well as by the OECD Development Centre (e.g., Jankowska, Nagengast, and Perea [22]).
The middle-income trap is a theorized economic development situation in which a country attains a certain income level but gets stuck at it. A country in the middle-income trap will have lost its competitive edge because wages are on a rising trend while labour productivity increases slowly. Such a country is unable to keep up with more developed economies in high-value-added markets. In order to avoid the middle-income trap, strategies to introduce new processes should be identified and new markets should be found to maintain export growth.
The problem has become especially interesting, if not intriguing, in view of the great success of other ambitious countries -the Republic of Korea, Taiwan, Singapore, and Hong Kong -which tried to catch up and successfully converged, proving that real convergence is possible in the real world, not only in theory, provided the country embarks on an intelligently structured industrial policy, and remains consistent and persistent for a certain, rather long time (Lee [27], Lee et al. [28]).
Though all countries of the CEEC-8 group are classified as high-income countries (WESP [40]), convergence of the Central and Eastern European countries with the developed World slowed down after the global financial crisis took hold, suggesting a typical situation in which the above group of countries has entered the "middle-income economy trap". Indeed many economists from the countries under consideration warn about the middle income trap problem (Radło [36]). As a result, an analysis of the convergence slowdown seems to be crucial and the identification of the causes may provide useful recommendations for economic policy.
Many studies have been devoted to analysing the determinants of the convergence of the CEECs. Baran [3] analysed determinants of the TFP growth of CEEC-4 countries (Poland, Czech Republic, Slovakia and Hungary) in the period 1995-2010. According to the results, TFP contributions to growth were very important in the CEEC-4 in 1995-2006. However, when the global crisis began, a significant slowdown in technological progress was recorded. According to the results obtained by Czasonis and Quinn [8], the countries of Central and Eastern Europe converged to the rich Western European countries very quickly before 2007. Kutan and Yigit [26] found convergence to Germany in industrial production for the new members of the European Union, in the 1993-2000 period. In contrast, Dogan and Saracoglu [10] did not find any evidence of convergence.
Analysis of the Convergence of the CEEC-8
To analyze the phenomenon of convergence over the whole period 1995-2014 and in three sub-periods, we compare the growth rates of GDP per capita for the CEEC-8 group with the growth rates of different groups of OECD member states. We divide the entire period into three sub-periods. The first sub-period encompasses 1995-2003, before the analysed countries joined the European Union. The second sub-period (2004-2007) covers the years after joining the European Union and before the global financial and economic crisis started. The third sub-period (2008-2014) covers the crisis years and their aftermath.
At first glance, one can conclude that most of the CEEC-8 converged towards the EU-15 over the whole period 1995-2014. This progress is most visible in the Baltic countries, Poland and Slovakia, and less impressive in the Czech Republic, Hungary and Slovenia. Another initial observation concerns the visible difference between the sub-periods in the CEECs' convergence process. During the first period, which we call the systemic transformation phase, the CEECs moved from centrally planned economies based on public ownership to increasingly market-oriented economies based on private ownership, as well as shifting from foreign trade state monopolies toward open trade policies run by private companies and individuals. The process, as we know, unleashed a considerable amount of previously frozen entrepreneurial energy and started to reduce inefficiency in old publicly owned industrial structures, but this evolved in different ways in those countries because of the speed and/or nature of the economic policies they adopted after 1990. The results were sometimes unexpectedly negative, for example in terms of the efficiency of the newly formed private companies vis-à-vis old state enterprises, both in manufacturing and in agriculture (see Brada and King [6]; Brada, King, and Ma [7]). The second period in the CEECs' convergence process is symbolically marked by 2004, when the analyzed countries became EU members, and so we call this phase the EU yield period. This is a symbolic distinction because the effects and benefits related to the CEECs' membership started to materialize even earlier (ca. 2000), when the pre-accession agreements indicated clearly that the countries had already embarked on the process of institutional convergence with the EU (acquis communautaire), establishing safe ground for international investors; this ultimately resulted in both a significant FDI inflow into the CEEC-8 and an acceleration of growth.
The other obvious benefits of either expected or actual EU membership included free access to EU markets for CEEC-8 exporters (e.g., Poland attained duty-free access to EU markets when its accession agreement was signed and approved by all EU member countries in 1994), EU assistance programs (which could amount to up to 4% of the CEECs' GDP), and the free movement of people. This latter benefit included systematically implemented legal work permits for the outflow of labor, easing the unemployment problem, which in some countries was dramatic, as well as resulting in substantial money transfers remitted by emigrants to their home countries. As an example, Poland has been receiving US$5-7 billion annually from such remittances since EU membership.
The positive effects of the above factors, in addition to the benefits coming from the increasingly mature systemic transformation process (mostly thanks to progressive privatization, open trade benefits, and on-going institutional adjustments), resulted in a significant acceleration of GDP growth in these countries. As can be seen from Table 1, the CEECs experienced unprecedentedly rapid GDP growth in the period 2004-2007, leading them to be considered "fast growing countries," that is, countries with a GDP per capita average annual growth rate of 3.5% for seven or more years.
The problem is that the happy era of growth in the period 2000-2007, and in the 2004-2007 sub-period in particular, ended dramatically after 2007, bringing most of the CEECs' convergence almost to a stop. This came as a surprise to economists, politicians, and experts alike (see EBRD, 2014). Over time, however, the negative growth tendency in the CEECs started to be better recognized and understood when data showed that unemployment in the CEECs had started to grow (see Table 3). As a result of the slowdown in growth and investment, which brought a rise in unemployment, yet another plague emerged: emigration from the CEECs began to grow rapidly. This trend has eased unemployment on the one hand, but it has a dangerous weak point, namely the age composition of the labor outflow, consisting primarily of young and middle-aged educated people (see Table 3). The development of all these negative factors has resulted in a visible slowdown in the convergence of the CEECs vis-à-vis the total EU. The CEECs' declining GDP growth rates since 2008 have contributed to the convergence slowdown.
Another factor making the CEECs' convergence vis-à-vis the total EU easier to achieve has been the negative and/or relatively slow growth of the EU Mediterranean countries for most of the period 2008-2014. This phenomenon contrasts with the relatively positive growth rates of most of the rest of the EU-15, which we can call the EU-North, comprising the Scandinavian countries, Austria, Germany, Belgium, the Netherlands, the UK, Ireland, and Luxemburg. As we can observe, the Mediterranean countries as a group, having had negative and/or very slow GDP growth rates over that period, are the only EU area against which the CEECs can claim a real convergence process (Table 1). As a result of the simultaneous development of two opposite tendencies, namely relatively faster EU-North growth and negative and/or very slow growth in the Mediterranean countries (see Table 1), and given the visible slowdown in growth in the CEECs in that period, the CEECs' convergence toward the total EU and the EU-15 has been very low, with some minor positive differences with respect to Poland, Slovakia, and Estonia. Examining the non-EU OECD countries, we note that the total EU position has worsened vis-à-vis both the overseas OECD Anglo-Saxon countries and most small and medium-sized OECD countries, such as Switzerland, Chile, Israel, and Turkey. Indeed, GDP per capita differences have been growing in favor of OECD non-EU members, and the EU surplus position has diminished vis-à-vis Chile, Switzerland, Turkey, and Israel.
We can also see that the OECD non-EU members have been growing at a faster pace than most of the countries of Central and Eastern Europe. Similarly negative conclusions may be reached by comparing the CEECs' GDP growth rates with those of the OECD Anglo-Saxon countries as well as Switzerland, Israel, and Turkey, to mention but a few of the fast-moving OECD non-EU members. As a result, we find that a divergence process has been well under way, rather than the convergence experienced by the CEECs before 2008. This may suggest that the CEECs have reached a kind of plateau in their developmental path as measured by the real convergence process. If the convergence process cannot be revived, we must admit that the CEECs have become stuck in the middle-income trap, as did many Latin American and some East Asian countries years before.
Though all countries of the CEEC-8 group are classified as high-income countries (WESP [40]), the convergence of the Central and Eastern European countries with the developed world slowed down after the global financial crisis took hold, suggesting a typical situation in which this group of countries has entered the "middle-income economy trap". Indeed, many economists from the countries under consideration warn about the middle-income trap problem (Radło [36]).
Analysis of GDP growth rates for the period 2015-2016 indicates that the negative tendency did not improve in these years. Average growth rates for Estonia and Lithuania were below the EU average. In the cases of Latvia, Slovenia and Poland the GDP growth rate was somewhat better, and only in the case of Romania was this indicator satisfactory. However, some high-income countries recorded very high growth rates in the analyzed period (e.g., Ireland, Iceland, Malta, Luxembourg), which means that the problem of divergence for some countries of Central and Eastern Europe remains valid.
Econometric Analysis of the Determinants of the Total Factor Productivity (TFP) Growth Rate
According to Solow growth accounting, the growth rate of output is decomposed into (Solow [36]): changes in the quantity of physical capital, changes in the amount of labour, and an unexplained factor reflecting technological progress, called the "Solow residual" or "total factor productivity".
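For reference, with a Cobb-Douglas production function $Y = A K^{\alpha} L^{1-\alpha}$ under constant returns to scale (a textbook formulation, not quoted from the paper), the decomposition reads:

$$\frac{\Delta Y}{Y} = \alpha\,\frac{\Delta K}{K} + (1-\alpha)\,\frac{\Delta L}{L} + \frac{\Delta A}{A},$$

where $\alpha$ is the capital share of income and $\Delta A / A$ is the Solow residual, i.e. TFP growth.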
TFP accounts for a significant proportion of the differences in income across countries. Since obtaining a measure of TFP growth on an economy-wide level is a difficult task, two main alternatives are used: growth accounting and frontier analysis. According to the growth accounting approach, TFP growth is identified as the value of the residual of the production function after accounting for the contribution of the inputs' changes to output growth. Changes in total factor productivity therefore reflect both changes in the efficiency of production and technological progress, while the non-parametric (production-frontier) method enables this decomposition (see Baran [3]). In the empirical research, we used the growth accounting approach to calculate total factor productivity for all countries of the CEEC-8 group for the period 1996-2014. Before the econometric model is specified, economic theories devoted to the determinants of TFP should be mentioned. The following groups of variables are distinguished (see Barro, Sala-i-Martin [4]; Isaksson [20]; Herrendorf, Valentinyi [18]; Danquah, Moral-Benito, Ouattara [9]): variables associated with the creation, transmission, and absorption of knowledge; variables associated with factor supply and efficient allocation; variables associated with institutions, integration, and invariants; and variables associated with competition, social dimensions, and the environment.
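To make the growth-accounting step concrete, the following minimal sketch computes TFP growth as the Solow residual from output, capital and labour growth rates; the capital share of 0.35 is an illustrative assumption, not a value taken from the paper.

```python
import numpy as np

def tfp_growth(y_growth, k_growth, l_growth, alpha=0.35):
    """Solow residual: the part of output growth not explained by inputs.

    y_growth, k_growth, l_growth: annual growth rates (e.g. 0.04 = 4%).
    alpha: capital share of income (0.35 here is an illustrative assumption).
    """
    y, k, l = (np.asarray(v, dtype=float) for v in (y_growth, k_growth, l_growth))
    return y - alpha * k - (1.0 - alpha) * l

# Example: 4% output growth, 5% capital growth, 1% labour growth
print(tfp_growth([0.04], [0.05], [0.01]))  # [0.016], i.e. 1.6% TFP growth
```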
An important factor for TFP growth is an effective innovation system, through which research and development (R&D) is fostered, resulting in new products, processes, and knowledge. Therefore, the value of investment in R&D (or its stock) is very often used as an explanatory variable in models explaining TFP (see, e.g., Guellec and Van Pottelsberghe de la Potterie [17]). As knowledge is created by a small number of countries and most countries need state-of-the-art technology, they must acquire it from elsewhere. Foreign direct investment (FDI) is very often perceived as a key channel for the transfer of advanced technology from highly developed to developing countries. Moreover, it is believed that FDI generates positive externalities in the form of knowledge spillovers to the domestic economy due to linkages with local clients and suppliers, learning from foreign firms, and employee training programs. Therefore, FDI is perceived as a factor that has a positive impact on productivity (see, e.g., Isaksson [20]).
Trade is also considered a carrier of knowledge, and some authors argue that thanks to imports, advanced foreign technology is introduced into domestic production, which in turn positively affects TFP. On the other hand, technology adoption from abroad and the creation of good domestic technology require human capital, and an improvement in human capital results in an increase in TFP. Therefore, trade openness is very often used as an explanatory variable in models explaining TFP (see, e.g., Danquah et al. [9]).
Good education and training help a society increase its ability to acquire and use relevant knowledge. The level of education, which is commonly used as a measure of human capital, has an important effect on TFP, as it plays a very important role in shaping an economy's capacity to carry out technological innovation and adopt new technology (see, e.g., Romer [37]). The level of health in a society influences TFP growth directly through household wealth, and indirectly through labor productivity, investment, savings, and demography. Healthy workers are more productive, and lower mortality rates result in larger savings (Danquah et al. [9]). Infrastructure plays an important role in expanding productive capacity by increasing resources and enhancing private capital productivity. Therefore, the stock of infrastructure is very often used as a factor influencing the level of productivity.
The strength of state institutions has an important impact on TFP in a country. In economies with weak institutions, the availability of funds for investment and capital accumulation is poorer than in economies with strong institutions. As a result, variables associated with the level of development of state institutions should also be used to explain TFP. In economies with developed financial systems, investment opportunities can be seized, the allocation of resources is closer to the optimum, and specialization is promoted. Moreover, financial constraints may prevent poor countries from obtaining the advantages of technology transfer. The role of financial development is to help firms or industries take advantage of growth opportunities by allocating resources to their most productive use. Therefore, a variable associated with the level of development of the financial system should also be used as an explanatory variable in regression analyses explaining TFP. The percentage of state-owned enterprises most likely also plays an important role in explaining TFP, since state-owned enterprises are inefficient compared to private ones (see Isaksson [20]).
The privatization of enterprises therefore results in increased competition, as it reduces management slack. The relative inefficiency of state-owned enterprises might be due to political pressures and the lack of separation between control and ownership. Another explanation for the lower efficiency of state-owned enterprises is the fact that they seldom try to maximize profits and have greater incentives to adopt anti-competitive behavior. All in all, variables associated with the state of privatization should also be considered in models explaining TFP growth.
Some works devoted to the determinants of TFP highlight the role of the social dimension, which denotes income, wealth distribution, and the wealth level in an economy. It seems that the greater the difference between owners of capital and workers, the lower the motivation of workers and their productivity (e.g. Isaksson [20]). Therefore, measures associated with income inequality may also be used in regression analyses of TFP growth.
On the basis of data availability and the significance of variables in explaining technological change (insignificant variables were dropped), we finally estimate the parameters of the following model:

$$TFP\_GR_{it} = \gamma\, TFP\_GR_{i,t-1} + \boldsymbol{\beta}' \mathbf{z}_{it} + \alpha_i + \varepsilon_{it}, \qquad (1)$$

where $\mathbf{z}_{it}$ consists of explanatory variables as defined in Table 4, $\alpha_i$ is a country-specific effect and $\varepsilon_{it}$ an error term. Before estimation, panel unit root tests were conducted. The results of testing the order of integration of the time series are presented in Table 5; they indicate that the variables used in specification (1) are stationary. Non-stationary variables entered in differences, so spurious regression is not a concern. We estimated the parameters of model (1) using the Blundell-Bond system estimator (Blundell, Bond [5]). This method was chosen because certain variables in equation (1) are endogenous: Frankel and Romer [13] show that trade is endogenous, while Dollar and Kraay [11] show that finance is as well. The Blundell-Bond estimator solves the endogeneity problem (Baltagi [2]). Moreover, Monte Carlo simulations indicate that this estimator is the most efficient among dynamic panel data estimators. A similar methodology in the same context was applied by, among others, Khan [23] for the African continent.
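For readers unfamiliar with the estimator, the moment conditions it exploits can be stated in a standard textbook form (following, e.g., Blundell and Bond [5], not quoted from the paper): lagged levels instrument the first-differenced equation, and lagged first differences additionally instrument the levels equation:

$$\mathbb{E}\!\left[\,TFP\_GR_{i,t-s}\,\Delta\varepsilon_{it}\,\right]=0 \;\;(s\ge 2), \qquad \mathbb{E}\!\left[\,\Delta TFP\_GR_{i,t-1}\,(\alpha_i+\varepsilon_{it})\,\right]=0.$$

The second set of conditions is what distinguishes the system ("Blundell-Bond") estimator from the difference ("Arellano-Bond") estimator and improves efficiency when the series are persistent.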
Due to the availability of data, we can estimate the parameters of the dynamic panel model using yearly data for the period 1996-2014. Our sample covers the Central and Eastern European countries that joined the EU in 2004, i.e., Poland, the Czech Republic, Hungary, Slovakia, Lithuania, Latvia, Estonia and Slovenia. Table 6 presents the results of the estimation of the parameters of the dynamic panel model using Blundell and Bond's [5] system estimator, as well as the results of validity testing for the over-identifying restrictions and the presence of autocorrelation of order 2. The positive and significant estimate of the parameter for the variable TFP_GR_{i,t-1} means that the growth rate of TFP is positively correlated with its lag. The positive estimate of the parameter for the variable SAV_GR_{it} means that in the CEEC-8 group an increase in the ratio of savings to GDP raised the chances of an increase in productivity in the years 2008-2013. A higher level of savings results in the accumulation of capital, which is necessary for buying new technology enabling the attainment of higher levels of productivity. The ratio of domestic credit to the private sector to GDP turned out to have a negative impact on the rate of growth of TFP. This means that the "too much finance" hypothesis may be valid for the countries of Central and Eastern Europe (Próchniak, Wasiak [35]; Grabowski, Maciejczyk-Bujnowicz [15], [16]).
It can be noted that the ratio of ICT goods trade to total trade also had a positive impact on the rate of TFP growth in the years 2011-2014. This quantity reflects the level of development of the ICT sector in an economy: the more developed this sector, the higher the rate of TFP growth. The positive and significant estimate of the parameter for the variable FDI means that FDI generated positive externalities in the form of knowledge spillovers to the domestic economy due to linkages with local clients and suppliers, learning from foreign firms, and employee training programs. However, the estimate of the parameter for the variable TO is negative, which is a surprising result. This may be due to the low level of advancement of the majority of goods traded between the Central and Eastern European countries and their trade partners. Indeed, the percentage of high-technology exports in total exports was only about 10% on average in the group of countries analyzed in 2013, much lower than in highly developed countries (Singapore 47%, Switzerland 27%, the Netherlands 20%, Norway 19%). The ratio of ICT goods trade to total trade is very low in the group of countries analyzed, which means that less advanced goods dominate in trade. As a result, an increase in the ratio of trade to GDP does not lead to higher productivity levels.

The ratio of bank capital to assets turned out to have a negative and significant impact on the level of productivity in the year 2009. This reflects the role of stability in the banking sector in shaping investors' perceptions of the level of risk associated with a specific country. A higher level of banking assets signals to investors a higher level of safety of investments in a specific country, leading to greater investment and a higher productivity level. Moreover, a higher level of stability in the banking sector results in higher levels of credit provision to firms, which is necessary for buying new technology.

Two more variables associated with the level of innovativeness of the economy turned out to be statistically significant, exerting a positive impact on the level of productivity in the group of countries analyzed. First, an upward change in the number of scientists in R&D positively affects the change in TFP; governments should therefore spend more money on research and concentrate on branches in which R&D is important. Instead of spending money on less skilled jobs and university faculties not associated with R&D, more money should be spent on faculties in which R&D is essential. Second, a percentage change in trademark applications has a significant and positive impact on the change in TFP. An increase in trademark applications is associated with an increase in the range of products supplied, and the provision of products new to the market requires the use of advanced technology, which ultimately leads to an increase in the level of productivity.
The positive and significant estimate of the parameter for the variable EDUC_EXP_GR_{i,t-2} means that growth in expenditure on education positively affects productivity growth with a lag of two years. This result is in line with expectations, as an increase in educational expenditure leads to an improvement in human capital, which should lead to higher worker productivity. In addition, an increase in expenditure on tertiary education associated with engineering and science faculties should lead to the development of new technologies and an increase in productivity. A change in the proportion of value added in agriculture to total output has a negative impact on the change in TFP. At the beginning of the transformation process, the share of employment in agriculture and the share of value added in agriculture were at relatively high levels in the Central and Eastern European countries. The significance of agriculture started to decrease as a result of the transformation from centrally planned to market economies. Simultaneously, the decrease in the proportion of value added in agriculture to total output resulted in an increase in the significance of services and manufacturing. This might have led to the development of more innovative products in services and manufacturing, and to an increase in TFP growth, which is in line with the estimate.
A variable representing the percentage of women employed as academic staff in tertiary education turned out to be significant, and had a positive impact on TFP growth. This result means that the governments of Central and Eastern Europe should be open to stimulating the academic careers of women and provide conditions that enable maintaining an academic career while also bringing up children. A higher percentage of well-educated women indicates a higher level of gender equality in a given country.
This means that all citizens contribute to the innovativeness of their country.
Two variables associated with the situation on the labour market turned out to be statistically significant: the unemployment rate and the percentage of unemployed with a tertiary education both have a significant negative impact on TFP growth. This means that the governments of the CEE countries should take special care of their labour markets and introduce programmes that reduce the unemployment rate among new university graduates. If new graduates have problems finding appropriate jobs they emigrate, and the percentage of well-educated citizens in the labour force decreases. Estimates of the parameters for the variables INTEREST_PAYMENT and EXPEND show that governments should look after their public finances in order to reduce interest payments, and that the share of public expenditure in GDP should be reasonable. If a government does not look after its public finances, investors' fears about debt sustainability are reflected in higher treasury bond yields; investors are then less inclined to invest in such countries, which leads to slower technological progress. The positive estimate of the parameter for the variable EQ_INDEX is in line with expectations: countries recording an increase in equity prices are wealthier, and their citizens can invest more money in innovative technology.
A variable representing the situation in the banking sector turned out to be statistically significant as well: the higher the bank capital-to-assets ratio, the slower TFP growth. This result is in line with Mero and Piroska [32], who found that banking and economic nationalism in the countries of Central and Eastern Europe prevented economic development and better integration into EU structures.
It should also be noted that some variables turned out to be non-significant, and these variables were not included in the final specification. This especially concerns variables associated with female labor participation and the proportion of women in wage employment in non-agricultural sectors. Because of cultural factors, women in Central and Eastern European countries participate in the labor force quite frequently (in some countries, female labor participation is greater than male labor participation). Therefore, changes in female labor participation do not have a significant impact on changes in TFP. Variables measuring inequality were not included in the final specification due to poor data availability. The results of testing for autocorrelation of order 2 confirm that including further lags of the dependent variable is not justified. The results of the Sargan test confirm that the over-identifying restrictions are valid and the specification of the dynamic panel model is correct.
To identify the main reasons for the convergence slowdown and the weak change in total factor productivity, the performance of the main variable, as well as of variables reflecting the level of innovativeness, is analysed, and data for the countries of the CEEC-8 group are compared with data for highly developed countries. Table 7 presents the average annual changes in total factor productivity and the performance of the innovation variables in the period 2011-2014 in the countries under consideration, as well as in selected highly developed countries. The results in Table 7 indicate that the level of innovativeness in the countries of Central and Eastern Europe is much lower than in Germany and Israel. The number of R&D researchers and the percentage of ICT service exports are very low in comparison with highly developed countries. Moreover, the trends in ICT exports and trademark applications are not optimistic for most of the CEEC-8 group, with the most pessimistic trend observed in Hungary. However, positive cases in this group can be identified as well: a relatively high level of innovativeness is recorded for the Baltic states (except Estonia) and Slovakia. Analysis of the results from Tables 1 and 7 indicates that the countries of Central and Eastern Europe should make efforts to increase their level of innovativeness if they want to catch up. In the countries with the strongest divergence tendencies (the Czech Republic, Hungary, Slovenia), the level of innovativeness is low. Countries that recorded increases or only slight decreases in GDP per capita in the crisis period (Slovakia, Lithuania, Latvia) outperform the other countries of the analysed region in terms of innovativeness.
To evaluate what the situation would have been if the Central and Eastern European countries had made stronger efforts in terms of innovativeness, we compared the empirical level of the TFP growth rate with a theoretical level in the period 2012-2014, assuming that the values of the innovation variables (ICT_TRADE_GR, RD_RES_GR, TRMARK_GR, COMP_EXPORT) had been at the level of two highly developed countries (Germany and Israel) in all countries from 2008. Table 8 presents the empirical and theoretical paths. From Table 8 we also note that the rate of TFP growth would have been significantly higher had the Central and Eastern European countries concentrated more on ICT trade and tried to increase the ratio of R&D researchers in the population. All in all, appropriate measures associated with increasing innovativeness and improving human capital should be implemented to put a stop to the very dangerous tendency toward divergence noted in the years 2008-2014. Because of differences in the innovative effort of the countries under consideration, there are differences in the "TFP growth gap" across countries. The Czech Republic, Hungary and Slovenia would be able to increase their total factor productivity significantly if they made the effort to increase their innovativeness, which, especially in these countries, is very weak.
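As a rough illustration of how such a counterfactual can be computed, the sketch below replaces the innovation regressors with benchmark values and re-evaluates the fitted TFP growth path; all names and numbers here are hypothetical placeholders, not the paper's estimates.

```python
import numpy as np

def counterfactual_tfp(z, beta, innovation_cols, z_benchmark):
    """Empirical vs. theoretical fitted TFP growth when the innovation
    regressors are replaced by benchmark (e.g. German) values."""
    z_cf = z.copy()
    z_cf[:, innovation_cols] = z_benchmark  # impose the benchmark innovation effort
    return z @ beta, z_cf @ beta

z = np.array([[0.01, 0.02],                 # rows: years, cols: regressors
              [0.015, 0.01]])
beta = np.array([0.5, 0.8])                 # hypothetical coefficient estimates
empirical, theoretical = counterfactual_tfp(z, beta, [1], np.array([0.05]))
print(theoretical - empirical)              # the "TFP growth gap" per year
```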
Conclusions
Though all countries of the CEEC-8 group are classified as high-income countries (WESP [40]), the convergence of the Central and Eastern European countries with the developed world slowed down after the global financial crisis took hold, suggesting a typical situation in which this group of countries has entered the "middle-income economy trap". Many economists from the countries under consideration warn about the middle-income trap problem, so identifying the causes of the convergence slowdown is important. The results of the analysis of the CEECs' performance in terms of both GDP per capita growth and the convergence process, as well as the results of the estimation of the parameters of the TFP growth model, support this conclusion.
In the first part of our analyses, based primarily on an in-depth statistical investigation, we find that although the countries under consideration had relatively high GDP growth rates after 2000, and especially after their accession to the EU, their GDP slowdown during the global financial crisis of 2007-2008 was very significant. What is more, the slow convergence of the CEECs toward the total EU in the period 2008-2013 can be attributed predominantly to the very deep recession in part of the euro area, mostly in the Mediterranean countries, rather than to the CEECs' good economic performance in absolute terms. Indeed, measuring the CEECs' performance vis-à-vis the EU-North, namely countries such as Germany or Sweden, during the period 2008-2014, a divergence process can even be observed. Similarly negative results are observed after 2008 when comparing the CEECs' growth results with those of other world economies, such as the OECD non-EU member countries, for example the Anglo-Saxon countries (the USA, Canada, Australia, and New Zealand) and other OECD member states such as Switzerland, Chile, Israel, and Turkey, not to mention the East Asian tigers such as the Republic of Korea and Taiwan, whose GDP growth rates were two or three times higher than those of the CEECs over the 2008-2014 period.
To deepen our statistical analyses and observations, and knowing that GDP growth in the countries analyzed has predominantly been TFP-driven, we implemented econometric modeling to estimate the significance of TFP factors in the development of the CEECs' GDP and convergence. In other words, we propose the estimation of the parameters of the chosen dynamic panel model using the Arellano-Bond [1] and Blundell-Bond [5] estimators. Using this dynamic panel model and these estimators, we come to the conclusion that the CEECs' slowdown in convergence results from relatively low rates of TFP growth in Central and Eastern Europe. This may be due to low investment generally and the low innovativeness of these countries (especially Poland). The numbers of patents and R&D expenditure are relatively low in the Central and Eastern European countries, which leads to slower TFP growth and a slowdown in convergence. These countries' competitiveness has primarily been driven by low labor costs, which is not a sufficient condition for maintaining the long-term ability to compete. Although the human capital index increased very rapidly in these countries, the coefficient may give misleading information, as higher education in the CEECs was not adjusted to market requirements, resulting in a large percentage of people with university education but without the appropriate capabilities. To advance and avoid being stuck in the middle-income trap, these countries should spend less money on consumption and more on investment and R&D. Policy makers in Central and Eastern European countries should concentrate more on adjusting the profiles of higher education to the new challenges of the modern world.
Moreover, the convergence slowdown in the countries under consideration was also due to the poor economic policy choices made by CEEC governments to combat the crisis (Myant, Drahokupil [34]). Banking and economic nationalism prevented economic development and better integration into EU structures (Mero, Piroska [32]).
The analyses showed differences within the CEEC-8 group. The strongest divergence tendencies were observed for the Czech Republic, Slovenia and Hungary, where increases in innovation were very slow. On the other hand, Lithuania, Latvia and Slovakia outperformed the other countries in terms of innovativeness and converged faster toward the highly developed OECD countries. Analysis of the results across countries could also contribute to the discussion of the choice of exchange rate mechanisms in the countries of Central and Eastern Europe. In general, the Baltic states and Slovakia, members of the EMU, perform better than the Czech Republic, Hungary and Poland, which have their own currencies. Large fluctuations in the EUR/PLN, EUR/HUF and EUR/CZK exchange rates might have resulted in a high level of uncertainty associated with doing business in Poland, Hungary and the Czech Republic. Therefore, enterprises in these countries made less innovative effort and used price levels as their main competitive tool. In the short run, a positive GDP growth rate (in local currency) was recorded in Poland at the beginning of the global financial crisis. In the long run, however, the countries that entered the EMU and increased their innovative effort (Latvia, Lithuania, Slovakia) outperformed Poland.
"Economics"
] |
A Generic Workflow for Bioprocess Analytical Data: Screening Alignment Techniques and Analyzing their Effects on Multivariate Modeling
UV chromatographic data in combination with multivariate data analysis (MVDA) have been extensively used for bioprocess monitoring. However, such data usually suffer from shifts along the retention time axis and require preprocessing, since misaligned UV chromatographic data result in inconsistent MVDA models. Numerous preprocessing techniques are available, each varying in the number of meta-parameters to optimize, complexity and computational time. Therefore, we aimed at developing a generic workflow for screening preprocessing techniques. We chose four datasets of increasing complexity containing UV chromatographic data from reverse-phase and size-exclusion HPLC. We aligned all four datasets using three preprocessing techniques, namely the icoshift, PAFFT and RAFFT algorithms. We chose several statistical tools to validate the performance of the preprocessing techniques and to screen for meta-parameters. We validated the performance of the preprocessing techniques in terms of data preservation, complexity and computational time, and identified the optimal ranges of meta-parameters for each dataset. Finally, we established principal component analysis (PCA) models to evaluate the chosen alignment technique. Summarizing, in this study a generic workflow has been developed to validate the alignment of chromatographic data using statistical tools.
INTRODUCTION
UV chromatography is a powerful tool, extensively used in bioprocess analytical techniques for quantitative and qualitative analysis [1,2]. The main advantages of UV chromatography are short analysis time, the ability to generate large amounts of data containing process information, a wide variety of column chemistries and high precision. However, UV chromatographic data are prone to shifts along the retention time axis, which render subsequent automation and the establishment of modeling techniques cumbersome or even impossible. Particularly in biochemical assays done with label-free LC analysis, aligning the various analyte profiles to their respective retention times is of utmost importance [3,4]. HPLC is often coupled with different techniques for biochemical analysis [5-7]. Automation of such assays for extracting valuable process information in bioprocesses for real-time analysis necessitates correcting misalignments in the peak profiles. In the past decades, various alignment techniques have been used to correct shifts along the retention time axis. Peak alignment is necessary for peak identification and quantification, but more importantly for automation and the application of subsequent chemometric models, such as principal component analysis (PCA), hierarchical cluster analysis (HCA) and partial least squares (PLS). For establishing such multivariate models, the chromatographic dataset must contain information about the changes in the process, which are associated with changes in the UV chromatograms. In other words, the retention time of a particular compound must not vary across different samples, as otherwise the predictive ability of the model is compromised [8,9]. A typical UV chromatogram with retention time shifts is shown in Figure 1.
Various peak alignment approaches to correct misalignments in retention time have been proposed in the literature. Most alignment techniques require a reference chromatogram and additional meta-parameters for misalignment correction. These meta-parameters are dependent on the dataset and have to be screened in a case-by-case approach [10]. Various target functions for alignment are also used, the most common being the Pearson correlation coefficient [11], Euclidean distance [12], fast Fourier transform (FFT) cross-correlation [13] and other, even more sophisticated methods. In general, peak alignment techniques use three different correction methods, namely shifting, insertion/deletion and polynomial models. A more detailed collection of various alignment techniques, their modes of function and relevant meta-parameters has been published recently [14].
Although different alignment techniques are available, generic, generally accepted criteria for choosing an alignment technique for processing UV chromatographic data are not available. The three main challenges with aligning chromatographic data are 1) choosing a relevant reference spectrum, 2) defining meta-parameters and 3) data preservation. A more detailed description of these challenges is shown in Table 1.
The reference spectrum, to which all other spectra are aligned, plays a critical role in the overall performance of the alignment technique [10]. It is important that the reference spectrum represents all peaks in the entire dataset. Different approaches have been reported for calculating the reference spectrum, the most common being the average (mean) or median of the entire dataset [15]. In addition to the reference spectrum, each peak alignment technique requires different meta-parameters, such as the segment length or the allowed shifts [16], which are defined prior to the alignment. These meta-parameters depend on the alignment technique and the dataset used and thus have to be screened. For multivariate modeling, the peak shape and intensity must not change during the alignment procedure, as otherwise important information is lost from the dataset.
In this study, we established statistical tools to screen for meta-parameters using correlation analysis, explained variance and the peak factor. We compared the performance of three peak alignment techniques on four UV chromatographic datasets of different complexity, based on the determined meta-parameters, and evaluated the techniques in terms of alignment correlation and peak factor as well as by visualization using heat maps and 2D plots. We chose three peak alignment techniques which use FFT cross-correlation as the target function, namely the interval correlation optimized shifting (icoshift) algorithm [13,17], peak alignment by FFT (PAFFT) and recursive alignment by FFT (RAFFT) [18]. We chose them for their reportedly low computational times and their lower complexity in terms of meta-parameters in comparison to warping algorithms [15]. We investigated different reference spectrum selection techniques for peak alignment and defined the optimal reference spectrum as the one with the highest correlation to each individual spectrum. Furthermore, we analyzed PCA models established on the best and worst aligned UV chromatographic datasets and on the original dataset, to highlight the impact of the peak alignment method on the multivariate models. Finally, we present a generic workflow for screening meta-parameters as well as choosing and evaluating different peak alignment methods for UV chromatographic data.
UV chromatographic datasets
Datasets 1 and 2: UV chromatographic data from size exclusion (SE-) HPLC: Samples from four different E. coli cultivations were used for analyzing protein purity through SEC. UV chromatographic data at 280 nm were acquired using a modular HPLC device (PATfinderTM) purchased from BIAseparations (Slovenia). The setup comprised an autosampler (Optimas), a pump (Azura P 6.1L) and a UV detector (Azura MWD 2.1 L). The samples were loaded onto a Superdex 75 10/300 GL size exclusion chromatography (SEC) column purchased from GE Healthcare (Germany). A loading buffer with 20 mM potassium phosphate, 150 mM sodium chloride, pH 7.0 was used. The flow velocity was kept constant at 0.5 mL/min. The dataset of UV chromatograms at 280 nm from the four E. coli cultivations, with 24 samples and 9,001 data points per chromatogram, is termed Dataset 1.

Samples from downstream unit operations, in particular protein refolding, from E. coli bioprocesses were used for analyzing product yield and purity through SEC. The HPLC setup and analysis conditions were the same as for Dataset 1. The resulting dataset of UV chromatograms at 280 nm, with 15 samples and 12,001 data points per chromatogram, is termed Dataset 2.

Datasets 3 and 4: UV chromatographic data from reverse-phase (RP-) HPLC: Samples of corn steep liquor (CSL), which is used as a media supplement for Penicillium chrysogenum cultivations [14], were analyzed for vitamin composition using a reverse-phase HPLC column (Acclaim PA; Thermo Fisher Scientific, USA). The HPLC setup (Ultimate 3000; Thermo Fisher Scientific, USA) comprised a pump (LPG-3400SD), an autosampler (CTC autosampler), a column oven (TCC-3000SD) and a diode array detector (DAD 3000). Samples were loaded with 25 mM potassium phosphate buffer, pH 3.5 and eluted with acetonitrile. A more detailed explanation of the data acquisition procedure is published elsewhere [19]. The flow rate was kept constant at 1 mL/min. The dataset of UV chromatograms at 260 nm, analyzed for vitamin composition from sixteen different CSL media stocks, is termed Dataset 3 and comprises 16 samples with 4,800 data points each.

Samples from four different E. coli cultivations were used for quantifying metabolite concentrations through an RP-HPLC column (Supelcogel C-610 H, Thermo Fisher Scientific, USA). Samples were loaded with a running buffer comprising 0.1% phosphoric acid in distilled water. The flow rate was kept constant at 0.5 mL/min. The HPLC setup was the same as for Dataset 3. The UV chromatograms at 210 nm were analyzed for metabolite concentrations from the E. coli cultivations; this dataset is termed Dataset 4 and comprises 51 samples with 9,001 data points each. For all datasets, all samples were centrifuged and filtered prior to injection, and a sample volume of 10 µL was injected.

The varying complexity of the datasets arises from the chromatographic method used. The SEC-HPLC datasets (1 and 2) render Gaussian (or "bell") shaped peaks which are broader in resolution, whereas the RP-HPLC datasets are characterized by needle-shaped peaks. Furthermore, the number of peaks differs enormously between the SEC and RP-HPLC datasets. Therefore, four datasets of varying complexity were considered for this study. Exemplary chromatograms highlighting the complexity of all four datasets are shown in Figure 2.

Table 1: Challenges in aligning chromatographic data.
- Choosing a reference spectrum: the reference spectrum must represent all peaks in the UV spectrum.
- Defining meta-parameters: meta-parameters are usually defined on a case-by-case basis, since they depend on each peak alignment technique; the meta-parameters determined for a chosen dataset affect peak alignment.
- Data preservation: the peak alignment technique must not change peak shape, intensity or other important attributes which contain process information.

Reference spectrum selection

The reference spectrum is usually selected based on a priori knowledge of the dataset. The reference spectrum must be representative of the (most) significant peaks in a dataset, which is important for extracting process information using multivariate models. Often, the reference spectrum is either calculated as the mean or median of the entire dataset or chosen as the latest sample in the sequence, which usually represents the highest number of peaks [20,21]. Skov et al. [10] proposed a selection criterion for identifying the reference spectrum by calculating the product of the correlation coefficients between the chosen reference spectrum and each individual sample. The reference spectra and the rationale for selecting them are shown in Table 2; for example, the first injection represents all peaks at the beginning of the process, while the last injection represents all peaks at the end of the process.

Although mean and median measures contain significant peak information, they can be biased towards a few peaks with high maxima. Thus, we also opted for a bi-weighted mean approach, which imposes a bias correction so that maximum peak intensities do not dominate the peak alignment procedure. The maximum of all chromatograms in the dataset captures all maximum values, i.e. all significant peak information, and was therefore also considered as a reference spectrum. In total, seven different reference spectra were used for identifying the optimal reference spectrum for the subsequent peak alignment methods.
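As a minimal illustration of this selection step, the sketch below builds a subset of the candidate reference spectra discussed above and picks the one maximizing the product of correlation coefficients, following the criterion of Skov et al. [10]; the function names are ours, and the bi-weighted mean candidate is omitted for brevity.

```python
import numpy as np

def candidate_references(X):
    """A subset of the candidate reference spectra discussed in the text
    (rows of X are chromatograms, columns are retention-time points)."""
    return {
        "mean": X.mean(axis=0),
        "median": np.median(X, axis=0),
        "maximum": X.max(axis=0),
        "first_injection": X[0],
        "last_injection": X[-1],
    }

def best_reference(X, refs):
    """Pick the candidate maximizing the product of correlation coefficients
    with every individual chromatogram (criterion of Skov et al. [10])."""
    scores = {name: np.prod([np.corrcoef(ref, x)[0, 1] for x in X])
              for name, ref in refs.items()}
    return max(scores, key=scores.get), scores

X = np.random.rand(10, 500)  # toy dataset: 10 chromatograms, 500 points
name, scores = best_reference(X, candidate_references(X))
print(name)
```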
Peak alignment techniques
Three different peak alignment methods were tested in this study. The main properties of the different alignment techniques are shown in Table 3.
All individual chromatograms with buffer artefact peaks were considered outliers and removed prior to the peak alignment procedures, based on the Hotelling's T2 statistic from PCA models established on the raw chromatographic dataset.
Icoshift:
The icoshift algorithm was initially developed for 1D NMR data [17], but it has also been used for UV chromatographic data (e.g., [1,13]). The icoshift algorithm splits each UV chromatogram into segments and aligns these segments to the corresponding segments in the reference spectrum by shifting them sideways to achieve maximum cross-correlation. It is driven by an FFT engine for simultaneous alignment and has been shown to outperform warping algorithms (e.g., COW; [13]). The main advantage of icoshift is its shifting procedure, where the number of shifts of a particular segment can be determined either automatically by the algorithm or by the user. In common warping algorithms, the search for the shift parameter is tedious, as it is powered by dynamic programming (e.g., dynamic time warping (DTW); [20]). Other advantages of the algorithm include its computational speed, user-defined segments and the option to fill in missing values (e.g., through interpolation) [17]. The algorithm is available from [22].
In this study, the number of segments was set between 1 (treating the entire chromatogram of a sample as one segment) and the total number of data points in the dataset (e.g., 4,799 segments for Dataset 3). The maximum number of shifts allowed was not fixed, and the algorithm was allowed to shift until it found the best fit. The chosen values of the different meta-parameters for icoshift are shown in the supplementary information (Table S1). Missing parts at segment edges were replaced by repeating the value of the segment edge.
PAFFT: Similar to icoshift, the PAFFT algorithm corrects misalignments by shifting segments to achieve the highest correlation. The optimal shift size is determined by sliding the segment of a sample over the corresponding segment in the reference spectrum to achieve maximum correlation. PAFFT does not allow filling in missing values with zeros or interpolations; therefore, possible endpoint contamination of the chosen segments may occur. On the other hand, since no extra data points are added to the UV chromatographic data, no artifacts are generated. Additionally, PAFFT provides an option to limit the number of shifts of a particular segment. PAFFT also uses the FFT engine for peak alignment. Since two meta-parameters need to be defined, we used a simple two-factorial screening design for exploring the optimal meta-parameter combinations. The number of segments was chosen between 1 (corresponding to all data points in each chromatogram) and 1/16 of the chromatogram length (where the entire chromatogram is split into 16 parts, with each segment containing a different number of data points depending on the dataset). The number of parts into which the chromatograms were split (16) was chosen arbitrarily and can be changed. The number of shifts allowed by PAFFT depends on the complexity of the dataset, that is, on peak properties such as retention time and peak width; we therefore assumed a maximum shift corresponding to 1 min of retention time. Five combinations of shifts and segments based on the experimental design were chosen for the PAFFT algorithm and are shown in Table S1. The algorithm for PAFFT can be downloaded from [23].
RAFFT: RAFFT is an extensively used peak alignment method which also uses FFT cross-correlation for peak alignment [16,18]. In contrast to PAFFT, the RAFFT algorithm recursively splits the entire spectrum into smaller segments while searching for the highest correlation. The maximum number of shifts allowed for each segment is specified by the user. At the beginning of the alignment procedure, the larger segment is selected for alignment, and this segment is gradually broken down into smaller segments until either the highest correlation is achieved or the maximum number of allowed shifts is reached. RAFFT has also been shown to be faster than other warping algorithms [16]. In this study, the maximum number of shifts allowed was fixed based on retention time, as for PAFFT: we assumed that a segment, comprising a few peaks, should not shift by more than 1 min of retention time. Therefore, we chose fixed values of 61, 121, 181, 241 and 301 shifts, corresponding to 0.2, 0.4, 0.6, 0.8 and 1 min of retention time. The algorithm for RAFFT can be downloaded from [23].
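All three algorithms share the same computational core: finding, for a given segment, the sideways shift that maximizes the cross-correlation with the reference, computed via FFT. The simplified sketch below illustrates only that core (it is not a reimplementation of icoshift, PAFFT or RAFFT); missing edge values are filled by repeating the edge, as icoshift does.

```python
import numpy as np

def fft_shift(segment, target):
    """Integer shift maximizing the cross-correlation between a segment and
    the corresponding reference segment, computed via FFT (zero-padded to
    avoid circular wrap-around)."""
    n = len(segment)
    xcorr = np.fft.ifft(np.fft.fft(target, 2 * n) *
                        np.conj(np.fft.fft(segment, 2 * n))).real
    lags = np.concatenate([np.arange(n), np.arange(-n, 0)])
    return int(lags[np.argmax(xcorr)])

def apply_shift(segment, shift):
    """Shift sideways; missing edge values are filled by repeating the edge."""
    if shift > 0:
        return np.concatenate([np.full(shift, segment[0]), segment[:-shift]])
    if shift < 0:
        return np.concatenate([segment[-shift:], np.full(-shift, segment[-1])])
    return segment

ref = np.exp(-0.5 * ((np.arange(200) - 100) / 5.0) ** 2)  # reference peak at 100
sample = np.roll(ref, 7)                                  # same peak shifted to 107
shift = fft_shift(sample, ref)
print(shift, np.argmax(apply_shift(sample, shift)))       # -7 100 (peak restored)
```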
Evaluation criteria
Correlation analysis: Correlation of the aligned samples from each peak alignment method with the chosen reference spectrum renders similarity measures. If all peaks in the sample dataset are aligned precisely to the reference spectrum, we obtain a correlation value of 1. However, this measure is only a rough estimate of the alignment procedure and depends entirely on the reference spectrum selection.
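A minimal sketch of this criterion (our own helper, not code from the cited toolboxes):

```python
import numpy as np

def alignment_correlation(X_aligned, reference):
    """Correlation of each aligned chromatogram with the reference; a mean
    value of 1 would indicate perfect alignment to the reference."""
    corrs = np.array([np.corrcoef(reference, x)[0, 1] for x in X_aligned])
    return corrs.mean(), corrs.std()
```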
Explained variance: The explained variance measure calculated from the PCA model can be used to evaluate the performance of the alignment method. Perfectly aligned chromatograms have a higher variance explained in the first principal components in comparison to misaligned data. Therefore, the sum of the explained variance of the first principal component(s) was calculated for all aligned datasets by establishing PCA models on all datasets. The explained variance in combination with the correlation analysis indicate the optimal setting for a given peak alignment method.
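This criterion can be computed, for example, with scikit-learn; the helper below is an illustrative sketch.

```python
import numpy as np
from sklearn.decomposition import PCA

def explained_variance_pc1(X):
    """Fraction of total variance captured by the first principal component;
    well-aligned chromatograms concentrate more variance in the first PC."""
    pca = PCA(n_components=min(5, X.shape[0] - 1)).fit(X)
    return pca.explained_variance_ratio_[0]

X = np.random.rand(10, 500)  # toy dataset: 10 chromatograms, 500 points
print(explained_variance_pc1(X))
```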
Peak factor: Skov et al. proposed the peak factor as a measure for analyzing the performance of peak alignment techniques [10]. The peak factor measures absolute changes in the spectroscopic data caused by the peak alignment procedure. This is relevant since the alignment technique must not modify the actual data, as any changes affect the subsequent multivariate models. The peak factor is calculated by comparing the Euclidean length (norm) of a UV chromatogram before and after alignment. For warping algorithms such as DTW, peaks from the original data have been reported to be distorted [14]. If there is no change in the peak shape, the peak factor has a value of 1.
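One plausible reading of this norm comparison (a sketch, not a verbatim reimplementation of the measure of Skov et al. [10]) is a folded ratio of the two norms:

```python
import numpy as np

def peak_factor(original, aligned):
    """Ratio of Euclidean norms before and after alignment, folded into [0, 1];
    a value of 1 means peak shapes and intensities are unchanged."""
    n_orig = np.linalg.norm(original)
    n_alig = np.linalg.norm(aligned)
    return min(n_orig, n_alig) / max(n_orig, n_alig)
```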
Computational time: Although this measure may not be critical for the peak alignment methods used in this study, owing to their fast computation, we included it for reasons of applicability. Chromatographic and spectroscopic data have been successfully used for bioprocess monitoring [24-26], which necessitates preprocessing techniques fast enough to keep pace with bioprocess dynamics [27]. Warping algorithms often have very high computational times [28]. Initially, we considered including dynamic multi-way warping (DMW) as a peak alignment method in this study; however, DMW rendered a 1,000-fold higher computational time than icoshift, PAFFT and RAFFT (data not shown) and hence was not included. Nevertheless, it is practical for the user to have an overview of the time invested in a particular peak alignment method. Therefore, we measured the computational time of the chosen peak alignment procedures. We performed all analyses on a stand-alone PC with an Intel i5-3330 @ 3.00 GHz processor and 8 GB RAM.
Visualization: Visual inspection of the datasets provides a better understanding of the peak alignment methods and contributes to further improvement of the alignment procedure through optimization of the meta-parameters. Heat maps were used in this study for visualizing the UV chromatographic data based on their intensities; strong misalignments can be easily identified in heat maps. For ease of visualization, 2D plots of the original and best alignments were also generated to give the user a clear overview of the alignment procedure.
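Such a heat map can be produced with a few lines of matplotlib; the helper below is an illustrative sketch (the original analyses were done in MATLAB).

```python
import numpy as np
import matplotlib.pyplot as plt

def chromatogram_heatmap(X, title="UV chromatograms"):
    """Samples as rows, retention time as columns; misaligned peaks appear
    as jagged rather than straight vertical bands."""
    plt.imshow(X, aspect="auto", cmap="viridis")
    plt.xlabel("Retention time (data points)")
    plt.ylabel("Sample index")
    plt.title(title)
    plt.colorbar(label="UV intensity")
    plt.show()
```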
Multivariate models
As an application example, PCA models were developed on the original (misaligned) and the best aligned datasets. In general, the PCA models are used to assess the impact of different peak alignment techniques on chemometric models. In short, PCA is an exploratory technique which decomposes the entire chromatographic dataset into a few latent principal components. Each sample is represented as a score and is projected onto the different principal components based on similarities or differences. The resulting score plots from the PCA model can be used to identify possible groupings of, or similarities between, samples in the UV chromatographic data.
Software
All data analyses were done using MATLAB R2016a (Mathworks, US). The PCA models were established in SIMCA v13.0 (Umetrics, Sweden).
RESULTS AND DISCUSSION
In this study we developed a methodology to screen for meta-parameters and to choose a peak alignment technique based on different evaluation criteria, such as correlation analysis, peak factor and computational time. Four UV chromatographic datasets with varying numbers of samples, complexity and data volume were analyzed to show the generic applicability of our workflow.
Reference spectrum selection
Seven reference spectra were generated and correlated to each UV chromatogram from all datasets. The correlation coefficients from all datasets and their respective reference spectrum are shown as boxplots in Figure 3. The line inside the box indicates the absolute correlation of the chosen reference spectrum to all four datasets.
It is interesting to note that the first and last injections from all datasets cannot be used as the reference spectrum. In Datasets 1 and 2, the first and last injections were clearly not representative of all peak information. Similarly, the peak information in the first and last injections represents different vitamin compositions in Dataset 3 and different metabolite profiles in Dataset 4, and renders the lowest correlation. This can be explained by the changes in analyte concentrations over process time, which reflect release (appearance of new peaks) and/or utilization (disappearance of existing peaks) over time. Since the reference spectrum calculated as the arithmetic mean of the UV chromatograms of all samples from Datasets 1-4 rendered the highest correlation, it was chosen as the optimal reference spectrum.
Evaluation criteria
Correlation analysis: Three peak alignment methods, all based on FFT cross-correlation, were chosen for their high-throughput capability and lower complexity in comparison to warping algorithms. Peak alignment was done using the chosen reference spectrum of the respective dataset, and correlation analysis was performed between the reference spectrum and the aligned datasets. For each peak alignment method, five different meta-parameter settings were used. The results of the correlation analysis for all four datasets are shown in Figure 4.
All the chosen methods with the chosen meta-parameters achieved high correlations with the reference spectrum. For Dataset 1, it is interesting to note that the RAFFT algorithm has overall lower standard deviations (as indicated by the error bars) in comparison to the icoshift or PAFFT algorithms. This can be explained by the complete shifts of the chromatogram in the RAFFT algorithm, rather than dividing the chromatographic data into segments as in the icoshift and PAFFT algorithms.
For Dataset 3 (Figure 4), the correlation coefficients between the selected reference spectrum and the icoshift-aligned data increased with a higher number of intervals to be shifted, but started to decline above 1,200 intervals. This indicates that the optimal number of intervals for the icoshift algorithm is close to 1,200.

Explained variance: The explained variance, calculated from a PCA model on each dataset, was used to indicate the degree of alignment. The explained variance of the principal components for all alignment methods and their chosen meta-parameters is shown in Figure 5.
Aligned chromatograms explain a higher variance in the first PC of a PCA model; therefore, the higher the explained variance, the better the alignment of the dataset. For Dataset 1, the results for the explained variance are in agreement with the results of the correlation analysis: the RAFFT algorithm with 121-301 shifts rendered the highest explained variance.

Peak factor: The peak factor indicates net changes in the aligned chromatograms in comparison to the original chromatogram; the optimal value is 1, corresponding to no change. The peak factors for almost all meta-parameter settings and peak alignment methods for Dataset 1 were higher than 0.96 (icoshift: 1,500 intervals), which could be due to endpoint contamination. For Dataset 2, all peak alignment methods resulted in a peak factor of 1, indicating no loss of information or distortion of peaks. The peak factors for Datasets 3 and 4 were higher than 0.97, which indicates that the peak alignment methods used did not alter the chromatographic information significantly (less than 3%). As mentioned earlier, peak shapes are altered mainly when warping or interpolation functions are integrated into the peak alignment procedure; for the shift-based algorithms employed in this study, little to no peak distortion is to be expected.

Computational time: Comparing all four datasets, computation time clearly increases with the number of samples. The PAFFT algorithm always rendered the shortest computational time for all datasets considered in this study; however, it has to be noted that PAFFT performed worse in terms of correlation and explained variance in comparison with the other peak alignment procedures. Warping algorithms are usually 100-fold higher in computational time in comparison to the FFT correlation methods used in this study [10]. Overall, all algorithms used in this study took less than 5 seconds for the peak alignment procedure.
Visualization:
Heat maps or 2D plots can be used to visualize the alignment results. In heat maps, the intensities of the significant peaks are highlighted and possible misalignments are identified. Furthermore, any improvement of a peak alignment method based on a different set of meta-parameters can be seen directly. The heat maps and 2D plots for the chosen methods on Datasets 1-4, showing the unaligned dataset and the best alignments achieved, are shown in Figure S1. The heat maps of the original and best aligned datasets clearly highlight the misalignments in the raw dataset and the alignment efficiency of the algorithm. The 2D plots show the efficiency of the alignment procedure, where the improvement in peak alignment can clearly be seen. Finally, any outliers in the UV chromatographic data (e.g., buffer peaks) can easily be identified by visualizing peak distortions in heat maps.
From all these results, we can see that the correlation analysis and explained variance gave similar indications of peak alignment performance for the chosen meta-parameters. The peak factor yielded consistent results and indicated no interference with the peak properties, and thereby no loss of information. The correlation analysis and explained variance indicated RAFFT with 181 shifts for Dataset 1 and icoshift with 1 interval for Dataset 2. For Dataset 3, the PAFFT algorithm with a segment size of 300 and 61 shifts performed better than all other peak alignment algorithms used in this study, whereas for Dataset 4 the icoshift algorithm with 1 interval, considering the whole chromatogram, outperformed all other algorithms. It is clear that no globally applicable gold-standard preprocessing technique exists for all datasets; a generic strategy such as this must therefore be used to screen different preprocessing techniques to avoid misleading multivariate models. In order to describe deviations in modeling results, we chose the original datasets, the best aligned datasets and the worst aligned datasets for establishing multivariate models.
Multivariate models
PCA models were established on the best and worst alignments achieved with the peak alignment techniques identified for all datasets. PCA models render different model variables, such as scores and loadings, which can be used to extract relevant information from the input datasets. In PCA, the closer the scores are to each other, the more similar they are with respect to the principal components. We analyzed the performance of the peak alignment techniques based on the trends in the score plots from the PCA models. The score scatter plots from the PCA models of Datasets 1-4 are shown in Figures 6A-6D.
In Figure 6, the score plot of the original data shows a wide spread of scores, each representing a UV chromatogram. In the best alignment, we can see a compact trend in which samples similar to each other are projected closer together. This is further highlighted by the score plot of the worst alignment, where the scores are even more scattered than those of the original data, showing strong dissimilarities. We can see a clear improvement between the original dataset and the best alignment with respect to clustering in the score plot, highlighting the peak alignment performance. Similarly, we can clearly see similarities between the original and worst-aligned datasets for all datasets. In Figure 6, the original and worst-aligned datasets yield almost identical results, as suggested by the very similar values of the evaluation criteria (i.e., 36.1% and 36.8% explained variance for the original and worst-aligned datasets, respectively). It is interesting to note that in Dataset 4, the best and worst alignments were achieved with the same algorithm (icoshift) using different meta-parameters (intervals). This further highlights the significance of meta-parameters in peak alignment procedures and the subsequent data-driven models.
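A hedged sketch of this comparison is shown below (an assumed workflow, not the authors' script): PCA scores are computed for the original, best-aligned, and worst-aligned matrices and scattered in the PC1-PC2 plane, where tighter clustering indicates better alignment.

```python
# Scatter the first two PCA score vectors for each version of a dataset,
# mirroring the score plots of Figure 6.
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

def score_scatter(datasets):
    """datasets: dict mapping a label (e.g. 'original', 'best', 'worst')
    to a (samples x retention-time points) matrix."""
    for label, X in datasets.items():
        scores = PCA(n_components=2).fit_transform(X)
        plt.scatter(scores[:, 0], scores[:, 1], s=20, label=label)
    plt.xlabel("PC1 score")
    plt.ylabel("PC2 score")
    plt.legend()
    plt.show()
```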
CONCLUSION
UV chromatographic data are prone to shifts along the retention time axis, which requires preprocessing prior to establishing multivariate models. In this study, we established a generic strategy for screening and validating different preprocessing techniques for UV chromatographic data. We chose different peak alignment techniques with different meta-parameters and evaluated their performance on four datasets. We analyzed the performance using different statistical tools to identify the optimal peak alignment technique and its meta-parameter ranges. The evaluation with these statistical tools illustrated that peak alignment techniques, even when similar in correction methods and target functions, can render different results. The complexity and the number of samples of each dataset also have an impact on the peak alignment procedure. Therefore, it is safe to hypothesize that the performance of a peak alignment technique depends on the initial, raw dataset, and that no global standard exists for all datasets. The meta-parameters of the chosen peak alignment technique affect the model results, which can be highlighted with the score scatter plots from the PCA models. Summarizing, the proposed methodology was used to choose the reference spectrum, screen for meta-parameter ranges, and validate the results using data-driven models. The generic methodology can be used for different chromatographic datasets and has a modular setup which allows the incorporation of any peak alignment technique and any statistical tool as an evaluation criterion. We also envision the proposed workflow being applied to spectroscopic data, which are usually hampered by peak and baseline shifts.
"Computer Science",
"Biology"
] |
A novel approach to quantify metrics of upwelling intensity, frequency, and duration
The importance of coastal upwelling systems is widely recognized. However, several aspects of the current and future behaviors of these systems remain uncertain. Fluctuations in temperature due to anthropogenic climate change are hypothesized to affect upwelling-favorable winds, and coastal upwelling is expected to intensify across all Eastern Boundary Upwelling Systems. To better understand how upwelling may change in the future, it is necessary to develop a more rigorous method of quantifying this phenomenon. In this paper, we use SST and wind data in a novel method of detecting upwelling signals and quantifying metrics of upwelling intensity, duration, and frequency at four sites within the Benguela Upwelling System. We found that indicators of upwelling are uniformly detected across five SST products at each of the four sites and that the duration of those signals is longer in SST products with higher spatial resolutions. Moreover, the high-resolution SST products are significantly more likely to display upwelling signals 25 km away from the coast when signals are also detected at the coast. Our findings promote the viability of using SST and wind time series data to detect upwelling signals within coastal upwelling systems. We highlight the importance of high-resolution data products in improving the reliability of such estimates. This study represents an important step towards the development of an objective method for describing the behavior of coastal upwelling systems.
Introduction
Eastern Boundary Upwelling Systems (EBUS) are characterized as vast regions of coastal ocean occurring along the western shores of continents bordering the Pacific and Atlantic Oceans [1][2][3][4]. Coastal upwelling associated with EBUS is known to have a large influence on the associated ecosystem's primary productivity, and hence the abundance, diversity, distribution, and production of marine organisms at all trophic levels [3][4][5][6][7][8][9][10]. Changes in the upwelling process over time are hypothesized to be strongly affected by anthropogenic climate change. According to the 'Bakun hypothesis', an increase in greenhouse gases facilitates an increase in daytime warming and night-time cooling, ultimately causing an increase in temperature gradients, which will form stronger atmospheric pressure gradients [1,11,12]. These pressure gradients modulate the winds, which ultimately affect the intensity and duration of upwelling [3, 9, 12-17]. Because changes in SST indirectly affect coastal ecosystems and have considerable, often far-reaching economic impacts [2, 3, 18-20], a better understanding of which SST products can most accurately detect upwelling will be important for any studies looking to identify and understand long-term changes to this phenomenon in EBUS [9, 12, 15, 17, 21, 22].
Previous attempts at identifying upwelling 'events' have employed a variety of approaches, incorporating an assortment of coastal temperature and wind variables and Ekman processes to estimate occurrences of upwelling. For example, Fielding and Davis [23] used a combination of wind speed, wind direction, and the orientation of the coast to calculate an alongshore wind component to quantify upwelling occurrences off the Western Cape coast of South Africa. Pfaff et al. [24] derived an upwelling index by contrasting offshore and onshore bottom temperatures in the southern Benguela region. Lamont et al. [25] used wind vectors to quantify upwelling variability along the same coastal region. More recently, El Aouni et al. [26] used SST and wind data together with image processing techniques to detect and quantify upwelling signals. Several other authors made use of various other techniques to determine upwelling signals, such as Cury and Roy [27], Demarcq and Faure [28], Rossi et al. [29], Benazzouz et al. [30] and Jacox et al. [31]. These examples primarily relied on wind data [11] as their main determinant for potential upwelling occurrences, rather than SST data. While wind patterns can act as a strong correlate for the presence of upwelling in many cases [11,27], SST data should arguably be more effective, as they indicate the presence of cold water of deep origin at the sea surface. However, until recently, SST data were limited in several regards concerning data quality and quantity [32][33][34].
SST is regarded as one of the most important variables in the coupled ocean-atmosphere system and is a particularly useful research tool in the scientific fields of meteorology and oceanography [35][36][37][38][39][40][41][42]. For over 150 years, SST data have been collected using in situ measurement techniques [32], with satellite measurements of SST available since the late 1970s [43][44][45][46][47]. Over the past decade, techniques have been developed to allow the assimilation and blending of different SST datasets from various in situ and satellite platforms. These are referred to as Level-3 and Level-4 high-resolution products, with the Level-4 data being gap-free [34], and are widely applied in studies of coastal areas [48][49][50][51]. Previous studies demonstrated that satellite-based SST data are less accurate than in situ data due to the complexity of the oceanic and atmospheric conditions that need to be accounted for in deriving satellite SST products [52][53][54][55][56], and such errors vary both regionally and temporally [57]. However, in comparison to in situ temperature measurements collected from ships or buoys, a major advantage of satellite SST is its global coverage and near-real-time availability. SST datasets with a high level of accuracy, spatial consistency and completeness, and fine-scale resolution are necessary for weather and climate forecasting and are of great importance for reliable climate change monitoring [9,12,17,34,45,51,[58][59][60][61].
For many applications, SST data are not used or provided at the full resolution of the sensors but are averaged over defined areas to produce a gridded product [45,62]. Gridding in this way discards more detailed information, and as a result a gridded SST measurement is taken as an estimate of the average SST across a specific grid cell over a certain time. Small-scale features can evolve during the day, but the sensor sampling during this time is not dense enough for sub-daily global analyses at a high spatial resolution [47,63]. Furthermore, considering that the satellites pass overhead only once every ~24 hours, images are captured only at very specific times during the day. To capture these small-scale features in a gridded analysis, it has been suggested that an improved analysis would provide high resolution for small-scale features in regions of good coverage and lower resolution in areas of poor coverage [47].
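The effect of gridding can be illustrated with a toy block-averaging example (our illustration, not the processing chain of any of the cited products): each coarse cell simply becomes the mean of the fine-resolution pixels it contains, so features smaller than the block are smoothed away.

```python
# Degrade a fine-resolution 2-D SST field to a coarser grid by averaging
# over non-overlapping factor x factor blocks.
import numpy as np

def block_average(sst: np.ndarray, factor: int) -> np.ndarray:
    ny, nx = sst.shape
    ny, nx = ny - ny % factor, nx - nx % factor        # trim to a multiple
    blocks = sst[:ny, :nx].reshape(ny // factor, factor, nx // factor, factor)
    return blocks.mean(axis=(1, 3))

# e.g. degrade a 0.01-degree field to ~0.25 degrees with factor = 25:
fine = np.random.default_rng(1).normal(15.0, 1.0, size=(500, 500))
coarse = block_average(fine, 25)                        # shape (20, 20)
```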
Here, we aimed to test the utility of a new method for detecting upwelling signals and characterizing them in terms of the intensity, frequency, and duration of upwelling events in an objective manner. Our approach is analogous to the marine heatwave methodology proposed by Hobday et al. [64]; in fact, it uses the same algorithm. By assessing increases in south-easterly wind with concomitant decreases in coastal SST, we can more reliably estimate the likelihood of an upwelling event. Given the importance of upwelling to coastal productivity [65,66], regional climate, and marine ecology, the ability to measure upwelling metrics such as the frequency, duration, and intensity of upwelling signals, in addition to the occurrence of the signals themselves, allows us to quantify patterns of upwelling dynamics over time in a manner that offers the potential to link these metrics to measures of ecosystem function. Furthermore, since the increase in global temperature driven by climate change directly influences global SST and will also manifest as changes in the upwelling process, the ability to subject a variety of upwelling metrics to trend analysis will be important for ecosystem management decisions.
To this end, this study aimed to observe patterns and trends in upwelling signals in the Benguela Upwelling System (BUS) across a range of localities and spatial scales off the South African West Coast. The BUS is divided into the northern (NBUS) and southern (SBUS) Benguela Upwelling Systems by a zone of intense perennial upwelling activity at Lüderitz in the Namibian region [25,26,[67][68][69]. Meteorologically, these regions are distinct. In the south, wind-induced upwelling reaches a maximum during spring and summer, whereas the northern region exhibits relatively less seasonal variation [67,[70][71][72]. Coastal upwelling commonly occurs between Cape Agulhas, in the south, and southern Angola in the north. We selected the SBUS for this study because this physical process provides a strong seasonal signal of increasing and decreasing SST that is strongly localized to known centers of upwelling, and which relates to the coastal wind field that drives the offshore advection of water mass [71][72][73]. We apply our new method for identifying upwelling signals to data representative of this region. Because upwelling is such a well-characterized oceanographic process, the resultant fluctuating SST signal should be observed across independent SST products. Here we assess blended SST products covering a range of spatial grid resolutions from 0.05° × 0.05° to 0.25° × 0.25°. We hypothesized that the higher resolution data should have a greater fidelity at detecting these upwelling signals, some of which might be confined to smaller spatial scales or localized closer to the shore.
Site description
The western region of the South African coastline is dominated by the Benguela Current, which forms the foundation of the Benguela Upwelling System (BUS) [74] and provides a natural laboratory for this study. Seasonal upwelling is controlled by south-easterly trade winds, with intense upwelling occurring throughout the summer months. This creates distinct temperature variations, with much lower temperatures within the upwelling cells over a narrow continental shelf from the Cape Peninsula to Cape Columbine. To assess upwelling within the BUS, four sites from the South African Coastal Temperature Network (SACTN) dataset [61,75] were selected as points of comparison (see below). Each site was situated along the West Coast of South Africa, and shore-normal transects were used to sample the data at 0, 25 and 50 km (Fig 1), where the 0 km pixels were those closest to their corresponding in situ site.
Upwelling processes in the southern Benguela are highly influenced by bottom topography [76]. The continental shelf that forms the eastern boundary of the Cape Basin, defined roughly by the 200 m isobath, varies in width from 10 km at prominent capes to 150 km near Port Nolloth. In the vicinity of the Cape Peninsula and Cape Columbine, the coastline is irregular, and two canyons associated with these features cut into the shelf, parallel to the coast [76]. The dynamic topography of the area is such that the Agulhas Current water is fed into the Benguela systems from south of the Agulhas Bank. Upwelling in the BUS occurs in several distinct upwelling cells that form at locations of maximum wind stress curl, and where there is a change in the orientation of the coastline. Lutjeharms and Meeuwis [77] distinguished eight different cells: Cunene, Namibia, Walvis Bay, Lüderitz, Namaqua, Columbine, Cape Peninsula, and the Agulhas cell. Shannon and Nelson [78] included three more upwelling cells along the south coast. Given that this research study is restricted to the southern Benguela, discrete upwelling cells at Cape Columbine and the Cape Peninsula will be discussed [76]. The Cape Columbine and Cape Peninsula upwelling cells are identified as two distinct bands of cold water on the inner and mid-continental shelves at a depth of 0-100 m, where upwelling is generally more intense during summer [76]. This cold water is apparent along the length of the inner (0-100 m) and mid-continental (100-200 m) shelves [79]. In the Cape Peninsula region, a change in Sea Surface Temperature (SST) is present at Port Nolloth notably owing to the combined effects of being at the point of the southern limit of the Cape Peninsula upwelling cell and the sudden broadening of the inner shelf immediately to the south of the Peninsula.
Datasets
This study uses four Level-4 remotely sensed temperature datasets compiled by several organizations. Product 1 is the AVHRR-only (Advanced Very High-Resolution Radiometer) Optimally Interpolated Sea Surface Temperature (OISST) dataset, which has been providing global SST for nearly four decades [80]. OISST is a global 0.25° × 0.25° gridded daily SST product that assimilates both remotely sensed and in situ sources of data to create a gap-free product [81]. The second product is the Group for High Resolution Sea Surface Temperature (GHRSST) Canadian Meteorological Center (CMC) Level-4 0.2° × 0.2° version 2; it combines infrared satellite SST at numerous points in the time series from the AVHRR, the European Meteorological Operational-A (METOP-A) and Operational-B (METOP-B) platforms, as well as the microwave SST data from the Advanced Microwave Scanning Radiometer 2 in conjunction with in situ observations of SST from ships and buoys from the ICOADS program. The third dataset is the Multi-scale Ultra-high Resolution (MUR) SST Analysis, which is produced using satellite instruments with datasets spanning 1 June 2002 to present times. MUR provides SST data at a spatial resolution of 0.01° × 0.01° and is currently among the highest resolution SST datasets available. The final dataset is the GHRSST analysis produced daily using a multiscale two-dimensional variational (MS-2DVAR) blending algorithm on a global 0.01° grid, known as G1SST. This product uses satellite data from a variety of sensors, such as AVHRR, the Advanced Along Track Scanning Radiometer (AATSR), the Spinning Enhanced Visible and Infrared Imager (SEVIRI), the Moderate Resolution Imaging Spectroradiometer (MODIS), and in situ data from drifting and moored buoys. We acknowledge that not all products are completely independent as they share the use of AVHRR SST data, but the amount of subsequent blending, the incorporation of other SST data sources, the different blending and interpolation approaches used, and the differing final grid resolutions make them acceptably different for this study.
These SST products are compared against in situ temperature records from the South African Coastal Temperature Network (SACTN). This dataset consists of coastal seawater temperatures at 129 sites along the South African coastline, measured daily from 1972 until 2017 [61,75]. Of these, 80 were measured using hand-held thermometers and the remaining 49 were measured using underwater temperature recorders (UTRs). For this analysis, the data were combined and formatted into standardized comma-separated values (CSV) files, which allowed a fixed methodology to be used across the entire dataset. In situ SST measurements were collected using a thermometer at a depth of 0 m for the four sites used in this study. The objective of this study was to identify upwelling signals using a variety of separate SST products for the period from 2011-01-01 to 2016-12-31. We specifically selected this range of years as it provides a sufficient overlap in time series between the four remotely sensed SST datasets and the in situ dataset, thereby offering candidate years for points of comparison.
An advantage of using in situ data over satellite data is that they may provide a more realistic representation of the thermal properties closer to the coast, where satellite data may fail to accurately capture and represent temperature properties within the same spatial context. The result is that in situ data may be better at revealing upwelling signals within the coastal inshore environment. Further, Smit et al. [54] have shown that satellite data along the South African coastline may have a warm bias of as much as 6°C relative to in situ temperatures within the nearshore. Time series for each of the remotely sensed SST data products were created at the nearest pixel to each in situ station, and at each pixel along the shore-normal transects from these stations at 25 and 50 km from the coast (Fig 1). Wind speed and direction data were provided by the South African Weather Service (SAWS) at a three-hour resolution. The wind stations closest to each of the in situ stations were used to calculate the upwelling index (see below).
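A sketch of the pixel-extraction step is shown below. It assumes each SST product is available as a netCDF file readable with xarray; the variable and coordinate names ("sst", "lat", "lon") and the due-west shore-normal are simplifying assumptions for illustration, not the study's actual file layout or transect geometry.

```python
# Extract SST time series at the pixel nearest the in situ site and at
# offsets along an (assumed due-west) shore-normal transect.
import numpy as np
import xarray as xr

def transect_series(path, site_lat, site_lon, offsets_km=(0, 25, 50)):
    ds = xr.open_dataset(path)
    km_per_deg_lon = 111.32 * np.cos(np.radians(site_lat))  # local scaling
    series = {}
    for d in offsets_km:
        lon = site_lon - d / km_per_deg_lon  # move offshore (westward)
        series[d] = ds["sst"].sel(lat=site_lat, lon=lon, method="nearest")
    return series
```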
Defining and detecting upwelling
To detect and analyze upwelling at the four sites within the BUS, it was first necessary to define when upwelling occurred. To accomplish this, a set of threshold values for identifying when the phenomenon was taking place was required. For the wind component, we parsed alongshore wind events at each site, limited to alongshore winds stronger than 5 m/s [11,27], since upwelling tends to occur only when winds exceed this speed. We then used several parameters of those winds to inform an upwelling index calculated using the formula presented by Fielding and Davis [32]:

upwelling index = μ · cos(θ − 160°)

where μ represents the wind speed (m/s), θ represents the wind direction in degrees, and 160° is the orientation of the west coast in degrees [82]. An upwelling index < 0 represents downwelling, whilst an upwelling index > 0 represents upwelling [32]. For the temperature component, we evaluated coincident drops in SST at each site when the upwelling index was greater than 0. If temperature dropped to or below the seasonally varying 25th percentile of SST for a particular site, we deemed this confirmation of the occurrence of an upwelling event at that site; see Schlegel et al. [61] for a similar threshold used to detect marine heatwaves and cold-spells. With these thresholds established, it was then necessary to identify the number of consecutive days that must be exceeded for an upwelling signal to qualify as a discrete event. It must be noted that upwelling is known to vary on a seasonal basis and may also occur on sub-daily (hourly) time scales. Therefore, the minimum duration for the classification of an upwelling signal was set to one day, the rationale being that the SACTN data as well as the satellite remotely sensed SST data are collected only at a daily resolution, preventing a temporally finer definition. With the upwelling index, SST data, and minimum duration established, the detect_event() function from the heatwaveR package [83] was used to calculate metrics for the upwelling signals. Because upwelling signals were calculated relative to percentile exceedances, rather than a fixed temperature threshold, upwelling signals could occur at any time of the year; however, upwelling was, as expected, more dominant during the summer months (December, January, and February). This method of determining upwelling signals is novel in that it considers both SST and wind parameters and provides a descriptive statistical output, which includes three metrics that define the properties of each detected signal (Table 1).
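A minimal sketch of this detection logic is given below (in Python rather than the R heatwaveR package used in the study; the cosine projection implements the alongshore-wind formula as given above, and the event-joining logic of detect_event() is omitted): winds weaker than 5 m/s are ignored, and a day counts toward an upwelling signal only when the index is positive and SST is at or below the seasonally varying 25th percentile.

```python
# Flag candidate upwelling days from daily wind and SST series.
import numpy as np
import pandas as pd

COAST_ORIENTATION = 160.0  # degrees; orientation of the west coast [82]

def upwelling_index(speed, direction):
    """Positive values are upwelling-favorable, negative downwelling."""
    ui = speed * np.cos(np.radians(direction - COAST_ORIENTATION))
    return np.where(speed > 5.0, ui, 0.0)  # discard weak winds

def upwelling_days(sst: pd.Series, ui: pd.Series) -> pd.Series:
    """sst: daily SST with a DatetimeIndex; ui: aligned daily index values."""
    doy = sst.index.dayofyear
    # seasonally varying 25th percentile (day-of-year climatology)
    q25 = sst.groupby(doy).transform(lambda x: x.quantile(0.25))
    return (ui > 0) & (sst <= q25)
```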
ANOVAs were used to compare the upwelling metrics against three main effects: site, product, and distance. Upwelling metrics as a function of satellite product type were assessed using product as the main effect, with distance nested within site. To establish whether differences existed between sites or distances from the shore, the upwelling metrics were assessed as a function of site or distance independently for each satellite product. Restrictions in the experimental design prevented testing interaction effects within product types. These analyses sought to test whether significant differences occurred between sites and data products. A Pearson product-moment correlation was used to identify whether the same upwelling signals detected at 0 km from the coastline were also regularly detected at 25 and 50 km from the coastline; signals were matched by start and end date within the same data product. The average numbers of upwelling signals detected by each individual data product across all sites were then compared using an ANOVA. Finally, a Chi-squared analysis was used to compare the number of upwelling signals detected when including and excluding an SST filter in the determination of upwelling signals.
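The sketch below illustrates these comparisons with scipy stand-ins for whatever statistical software was actually used; the tidy table `metrics` and its column names are placeholders, not the study's data structures.

```python
# Compare upwelling metrics between sites and test event counts with and
# without the SST filter.
from scipy import stats

def site_anova(metrics, product, var="duration"):
    """metrics: DataFrame with columns site, product, distance_km, duration."""
    sub = metrics[metrics["product"] == product]
    groups = [g[var].to_numpy() for _, g in sub.groupby("site")]
    return stats.f_oneway(*groups)  # returns (F statistic, p value)

def filter_chi2(counts_with_sst, counts_wind_only):
    """Chi-squared test on a 2 x k table of event counts per data product."""
    table = [counts_with_sst, counts_wind_only]
    chi2, p, dof, expected = stats.chi2_contingency(table)
    return chi2, p
```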
Results
One-way ANOVAs indicated no significant difference in upwelling duration between sites within each respective data product (Fig 2A). The Sea Point site displayed the longest mean duration of upwelling signals, while Lamberts Bay had the shortest; in particular, the Lamberts Bay data from the SACTN dataset showed the shortest-duration upwelling signals.
A significant difference was found in the mean intensity of upwelling between sites in the OISST (d.f. = 3, F = 5.82, p < 0.001) and SACTN (d.f. = 3, F = 7.39, p < 0.001) products. Conversely, no significant difference was found in the CMC (d.f. = 3, F = 1.04, p > 0.05), MUR (d.f. = 3, F = 2.48, p > 0.05) and G1SST (d.f. = 3, F = 2.66, p > 0.05) products (Fig 2B). There was no significant difference in the cumulative intensity of upwelling between sites in the CMC product (d.f. = 3, F = 0.58, p = 0.62) (Fig 2C). The mean intensity of upwelling signals was highest in Saldanha Bay and Sea Point for the MUR and G1SST data. We found a significant difference in the cumulative intensity of upwelling signals between sites only when using the SACTN dataset. The cumulative intensity of upwelling signals was highest in Saldanha Bay and Sea Point for all of the products.
An ANOVA showed no significant difference in the duration of upwelling signals detected at different distances from the shore during the summer season in the CMC (d.f. = 2, F = 1.03, p = 0.35) and G1SST (d.f. = 2, F = 2.55, p > 0.05) products. However, a significant difference was present in the MUR (d.f. = 2, F = 3.33, p < 0.05) and OISST (d.f. = 2, F = 5.17, p < 0.05) products. The MUR and G1SST products often yielded the longest duration of upwelling signals at 0 and 25 km from the shore (Fig 3A). Significant differences in the mean intensity of upwelling signals were present across different distances from the shore in the G1SST (d.f. = 2, F = 15.38, p < 0.001), MUR (d.f. = 2, F = 5.12, p < 0.001) and OISST (d.f. = 2, F = 5.17, p < 0.05) products. The MUR and G1SST products displayed the highest mean intensity of upwelling signals at 0 km from the coast (Fig 3B). The mean intensity of upwelling decreased further away from the coast in the higher resolution products.
A one-way ANOVA showed a significant difference in the cumulative intensity of upwelling signals detected at different distances from the shore in the G1SST (d.f. = 2, F = 7.03, p < 0.05) and MUR (d.f. = 2, F = 4.69, p < 0.05) data products (Fig 3C). The CMC (d.f. = 2, F = 0.33, p > 0.05) and OISST (d.f. = 2, F = 0.06, p > 0.05) products showed no significant difference in cumulative intensity. The OISST, MUR and G1SST products yielded the highest cumulative intensity at 0 km from the coastline, and the cumulative intensity of upwelling signals for all products decreased further from the coast. The results of a nested ANOVA showed a significant difference in the duration of upwelling signals detected amongst the data products (nested ANOVA, d.f. = 3, F = 3.01, p < 0.02); the G1SST product had the longest duration of upwelling signals while the OISST product had the shortest. We found a significant difference in the mean intensity of upwelling signals between data products (nested ANOVA, d.f. = 3, F = 49.93, p < 0.001); the G1SST and MUR data products showed the highest mean intensity while CMC had the lowest. We also found a significant difference in the cumulative intensity of upwelling signals between the data products of different resolutions (nested ANOVA, d.f. = 3, F = 5.71, p < 0.05); the G1SST product showed the strongest cumulative intensity of upwelling and the CMC data the weakest.
Pearson correlations revealed that the likelihood of observing the same upwelling signal at 0, 25, and 50 km from the coast varied across the individual data products at each of the four sites (Table 2). Overall, we found that upwelling occurred simultaneously at 0 km and 25 km considerably more frequently than at 0 km and 50 km from the coastline. In addition, the likelihood of detecting upwelling signals at 50 km from the coastline was notably lower throughout all pairwise comparisons. The individual data products yielded different counts of upwelling signals at distances of 0 km, 25 km, and 50 km from the coastline. There was no significant difference in the number of upwelling signals detected at the different sites (one-way ANOVA: F = 1.73, d.f. = 3, SS = 520, p > 0.05). However, there was a significant difference in the number of signals detected between products (F = 146.611, d.f. = 3, SS = 40638, p < 0.001), but not at different distances from the coastline (F = 0.76, d.f. = 2, SS = 141, p > 0.05).
Comparisons of the number of upwelling signals detected when including and excluding SST data revealed that significantly more upwelling events were present across sites and data products when using only wind data (Table 3; χ2 = 141.18, p < 0.001). The results of a Chi-squared test comparing the mean number of upwelling events between filtered and non-filtered counts per data product showed that, on average, the filtered data had fewer upwelling events than expected when assessing each dataset individually. These differences in the count of upwelling events were significant in all of the products (Table 3). Similarly, site-specific comparisons revealed that upwelling events at all sites showed significant differences between filtered and unfiltered counts, with unfiltered counts being notably higher in all cases.
Fig 2. Boxplots showing the upwelling A) duration, B) mean intensity, and C) cumulative intensity for the upwelling signals detected with the four satellite products and the SACTN in situ collected data at the different sites during summer months (December, January, and February), over a six-year period. The lower and upper hinges correspond to the first and third quartiles, and outliers are shown as points. The notches offer a guide to significant differences in medians, i.e., if the notches of two box plots overlap, it suggests that there is no statistically significant difference between the medians being compared. https://doi.org/10.1371/journal.pone.0254026.g002
Detection of upwelling signals
Over the past few decades, upwelling has mainly been described and determined in general terms using a variety of upwelling indices derived from diverse combinations of wind, SST, and Ekman transport variables [2-26, 29-31, 84]. We demonstrate that our novel approach, which characterizes upwelling events using SST in combination with wind variables to determine metrics that objectively and quantitatively describe the upwelling process, offers a similarly versatile means of detecting changes in upwelling dynamics associated with climate change. We calculate a set of summary statistics (i.e., the metrics) for each upwelling 'signal', including its intensity, duration, and frequency, by making use of the marine heatwave algorithm [61,64]. Time series of these metrics are intuitively understood and allow upwelling signals to be uniquely described and compared across space and time, even between upwelling regions. The performance of this approach is not independent of the nature of the data, and here we explore this for SST.
Data products
Our analysis showed that differences exist between SST products and sites when comparing the upwelling metrics. The highest resolution data, MUR and G1SST, which are available on a 0.01° grid, yielded the longest duration and highest cumulative intensity of upwelling signals compared to the coarser resolution data products. The MUR product consistently yielded upwelling signals of the greatest intensity. Upwelling signals were most intense at the shore in all the SST products. Analysis of the CMC and SACTN datasets revealed that signals did not often exceed a duration of 10 days, whereas in the OISST, MUR and G1SST products signals were detected for up to 14 days, and even longer in some rare cases. Moreover, most of the signals detected in the CMC and SACTN products lasted only three days. This was similar for the higher resolution data products (G1SST and MUR), which also showed a high prevalence of signals lasting just four days. In most cases, the number of signals detected at 0 km was higher than the number detected at 50 km for the data products with the highest resolution. We also noted differences in mean intensity between products and distances from the site. The highest numbers of signals detected were recorded in the OISST and CMC products. The results show that the use of wind data without corresponding SSTs is likely to produce exaggerated estimations of upwelling; incorporating SST data reduces type I errors, i.e., false positives, and the overall likelihood of erroneously claiming an upwelling event based on wind data alone when the corresponding SSTs are not cooling. Level-4 gridded SST datasets obtained from satellite imagery have provided an important understanding of offshore oceanographic processes. Their utility often stems from the fact that they are spatially complete. However, coastal features such as upwelling cells are often smaller than the highest resolution of most SST products [54]. In this study, estimates of upwelling duration, mean intensity and cumulative intensity may have been overestimated in the MUR and G1SST data products when comparing them to the in situ collected SACTN data. These products are more likely to be susceptible to errors relating to limitations and data collection biases associated with satellite-derived sampling [85,86]. The overestimated metrics of upwelling may be due to errors from different sources introduced at each of the successive data processing levels [86]. SST accuracy refers to the retrieval error produced at Level-2 (derived SSTs on a pixel basis), but Level-3 (binned, gridded, and averaged Level-2 values) and Level-4 fields are extensively used in climate and modeling studies, mainly because of the desirable features of being "gridded and gap-free" [86].
It is important to note that the data sources are intrinsically different in the ways in which they were obtained or recorded. Consequently, discrepancies between datasets are to be expected. For example, the SACTN in situ collected data will reflect the actual temperature of the water being measured, but instrumental differences between a thermometer and an electronic sensor will result in inconsistencies. This is particularly relevant because satellite temperatures are collected remotely, and the sensors do not contact the water. Smit et al. [54] showed that warm and cold biases exist along the southern and western coastal regions of South Africa, and that the juncture between upwelling and non-upwelling regions tends to influence the variability and magnitude of the SST bias. While flagging techniques are known to occasionally flag 'good' values [87], it was found that flagging may be too vigorous for EBUS [88]. For example, the flagging method used in an OISST reference test induces a warm coastal bias in data from both the MUR and G1SST products during summer [88]. This phenomenon can be explained by the strong coastal SST gradients in these upwelling regions; here, pixel-based corrections developed for oceanic applications often fail or are inappropriate due to the strong thermal gradients associated with upwelling.
Flagging techniques used to de-cloud data are also known to reduce strong biases at a monthly scale where strong horizontal SST gradients exist, especially in upwelling systems [54]. Missing pixels at the land/sea edge, or 'land bleed' (i.e., pixels not flagged as missing but which are influenced by land temperatures 'mixing' with the actual sea temperatures), may also influence the temperature data obtained. Factors such as data resolution, proximity to the coastline, and the presence or absence of upwelling cells or embayments contribute to the magnitude of the differences in upwelling signals detected between the different SST products.
SST generally shows a high degree of correspondence with measurements obtained by buoys and other sources of in situ seawater temperature measurements [54,89]. However, although SST products developed offshore and within the open ocean are being applied to coastal regions, reports exist advising users to exercise caution when using SST datasets in these coastal regions [90]. Many upwelling pulses may be localized and of short duration (i.e., lasting for a few hours or days; Duncan et al. [91], Sawall et al. [92]), which may contribute to the higher resolution products (MUR and G1SST) yielding more signals lasting for longer periods when compared to the coarser resolution products (e.g., OISST). Prior investigations quantifying the durations of upwelling events across the globe have adopted several approaches, with estimates derived using various methodologies. For example, Wang et al. [93] used wind-driven Ekman transport indices to estimate that upwelling events in the southern hemisphere last fewer than 10 days on average. Contrastingly, Iles et al. [94] used PFEL indices to estimate upwelling duration as > 6 days. Here we estimate upwelling as lasting only 3-6 days on average, considerably shorter than previous estimates elsewhere. Both MUR and G1SST have a limited time series length (MUR: 2002-Jun-01 to present, G1SST: 2010-Jun-09 to 2019-Dec-09) and for this reason are not well suited to climate change studies, which require time series of at least 30 years in duration; in this case, the OISST dataset would be more suitable. The adoption of a consistent definition and metrics for upwelling will facilitate comparisons between different upwelling signals, across seasons and at regional scales. It will also facilitate the comparison of observed signals against modelled projections, which will be useful in understanding future changes in upwelling signals. Confidence in the robust detection of upwelling signals will only be achieved with the use of high-quality datasets and a verifiable method.
Oceanography
At the latitude of the Cape Peninsula, cooler upwelled water (<14°C) is confined primarily to the narrow inner shelf, and this is evident in our data as we observe the most intense upwelling signals closer to the shore. It is also evident that the high resolution G1SST and MUR data sampled at Lamberts Bay, Saldanha Bay and Sea Point show the highest number of upwelling signals detected over the narrow inner shelf, with fewer signals detected over the mid-continental shelf. Our findings further show that the coarser resolution (OISST) product fails to detect signals further offshore, as seen at Sea Point. Currie [95] and Hart and Currie [96] further explain that the BUS consists of a series of anticyclonic eddies of interlocking cool and warm water, which is in a constant state of change. This allows upwelling cells or patches, formed by water that originates from between 200 and 300 m deep, to be non-uniform along the coast. From the topography it is evident that, although upwelling may not be visible at the surface, subsurface upwelling is possible [76]. This further suggests that, in cases where the same signal was detected at the shoreline and at 25 km from the coast but no corresponding signal was identified at 50 km, this may be explained by sub-surface upwelling.
While the SST data may be satisfactory for the interpretation of regional phenomena, they nevertheless suffer from several drawbacks when applied within the coastal region. Here, the interaction of hydrodynamic and atmospheric forces creates a complex system which is influenced by larger variability at smaller spatial scales than further offshore [88]. Hydrodynamic regimes, such as stratified water columns, may break down at the coast in very shallow waters, and seawater temperatures measured there may not directly relate to SSTs sampled further from the coast at the ocean's surface [97]. These inshore hydrodynamics may be described by a) the injection of turbulence through breaking waves, which increases the breakdown of the mixed layer; b) convective mixing due to cooling through evaporation, which occurs during winter months under cool dry air; c) tidal mixing, which minimizes the vertical thermal gradient; and d) velocity-driven mixing, often caused by wind-driven currents. Together, these processes homogenize the first few meters of the water column and therefore minimize the difference between the surface temperature and the deeper bulk temperature [98]. In hydrodynamically active zones, such as the BUS, the absence of shallow stratification would result in water cooler than the bulk surface waters of the open ocean against which satellite SSTs have been referenced. Thermal heating of coastal waters may also be exaggerated due to proximity to the coast [88]. This type of heating is commonly seen in embayments, which reduce water exchange and limit wave activity, ultimately affecting the deepening of the thermocline. These processes are highly variable on spatial and temporal scales depending on the coastal bathymetry and wind regime.
Conclusions
Overall, in a rapidly changing climate, the detection, characterization, and prediction of upwelling signals will become increasingly important. The impact of climate change on upwelling is an emerging area of interdisciplinary research with potential for collaborative initiatives in understanding coupled phenomena across physical oceanographic, ecological, and socio-economic areas of inquiry. The metrics of upwelling that we introduce here (the intensity, duration, and frequency of upwelling signals) provide a consistent framework that lends itself to being quantitatively coupled to metrics of change indicative of aspects of the regional biology, ecological impacts, and trends in the societal aspects of stakeholders whose livelihoods and businesses are coupled with the functioning of upwelling systems. Our approach not only provides a new method of detecting upwelling signals, which is useful for observing trends in upwelling signals over time, but also emphasizes the importance of selecting the correct data product in concert with knowledge about the nature of the physical phenomena being studied.
"Environmental Science",
"Mathematics"
] |
Epoxidation of Tall Oil Fatty Acids and Tall Oil Fatty Acids Methyl Esters Using the SpinChem® Rotating Bed Reactor
Tall oil fatty acids are a second-generation bio-based feedstock finding application in the synthesis of polyurethane materials. This study reports the epoxidation of tall oil fatty acids and their methyl esters in a rotating packed bed reactor. The chemical structures of the synthesized epoxidized tall oil fatty acids and epoxidized tall oil fatty acids methyl ester were studied by Fourier-transform infrared spectroscopy. The average molecular weight and dispersity were determined from gel permeation chromatography data. The feasibility of multiple uses of the Amberlite® IRC120 H ion exchange resin as a catalyst was investigated. Gel permeation chromatography chromatograms of epoxidized tall oil fatty acids clearly demonstrated the formation of oligomers during the epoxidation reaction. The results showed that methylation of tall oil fatty acids allows an epoxidized product to be obtained with a higher relative conversion to oxirane and a much lower viscosity than neat tall oil fatty acids. Epoxidation in a rotating packed bed reactor simplified the process of separating the catalyst from the reaction mixture. The Amberlite® IRC120 H catalyst exhibited good stability in the tall oil fatty acids epoxidation reaction.
Introduction
The synthesis of polymeric materials with the principles of sustainability and cleaner production has been a widely researched topic in recent years. These principles are intended to reduce the environmental impact of products and production by reducing the use of fossil-based raw materials and replacing them with bio-based or waste/recycled resources; reducing energy consumption through the use of more efficient processes and equipment; reducing or eliminating toxic and harmful raw materials; reducing the amount and toxicity of waste [1].
A widely available bio-based raw material with high potential in chemical synthesis is crude tall oil (CTO). CTO is a by-product of the wood pulp industry, generated at an average rate of 30-50 kg per 1000 kg of processed wood [2]. World production of CTO is between 1.6 and 2 million tonnes/year, of which approximately 650 000 tonnes/year is produced in Europe [3]. CTO contains 30-50 wt.% of free fatty acids (mainly oleic and linoleic acid), 15-35 wt.% of rosin acids, and residues composed of sterols, fatty alcohols, phenols and hydrocarbons. CTO can be burned as an alternative to heavy fuel oil. However, CTO can also be used as a high-value feedstock for chemical syntheses after separation into various fractions, i.e. tall oil fatty acids (TOFA) and tall oil rosins (TOR). TOR are used as an ingredient in printing inks, adhesives, soaps, detergents, emulsifiers, sealing waxes and soldering fluxes [4]. TOFA is mainly used as a feedstock to produce tall oil fatty acids methyl ester (TOFAME), an alternative to diesel fuel [5]. TOFA can also be converted to hydrocarbons by hydrodeoxygenation/decarboxylation reactions [6,7].
Moreover, TOFA also has been investigated as a potential raw material for the synthesis of bio-based polyols (bio-polyols). Polyols, conventionally petrochemical based, are one of the main components for the production of polyurethanes [8,9]. The global market for polyols in 2019 was US$26.2 billion, and further growth is expected [10]. Commercially produced polyols are mainly made from non-renewable petrochemical feedstocks. In recent years, there has been an increase in the availability of commercial bio-polyols made from vegetable oils such as castor oil, soybean oil and palm oil [10,11].
TOFA has several advantages as a feedstock in comparison to vegetable oils. A significant advantage of TOFA is its high iodine value (about 155 g I2/100 g) compared to vegetable oils (e.g., the iodine value of palm oil is 44-58 g I2/100 g, that of rapeseed oil 94-120 g I2/100 g, and that of soybean oil 117-143 g I2/100 g [12]). A higher iodine value indicates more unsaturated double bonds in the structure of the fatty acids that can be chemically modified [13]. Moreover, TOFA is a second-generation feedstock and does not pose a concern about competition with food and feed supplies.
The most commonly used method for synthesizing bio-polyols from TOFA is a two-step process of epoxidation followed by oxirane ring-opening with proton donors [14]. The classical Prilezhaev epoxidation method uses peroxycarboxylic acids formed in-situ to oxidize the double bonds; formic acid or acetic acid and hydrogen peroxide are most commonly used in this process [15]. The main disadvantage of the epoxidation of fatty acids is that the carboxyl groups of the fatty acids react with hydrogen peroxide to form peroxy fatty acids, which act as oxygen carriers, leading to extensive oxirane ring-opening and the formation of oligomeric products [16,17]. The use of heterogeneous catalysts such as acidic ion exchange resins helps to reduce the occurrence of oxirane ring-opening side reactions compared to the use of homogeneous catalysts such as H2SO4 [18,19]. Heterogeneous catalysts can be easily separated from the reaction mixture, washed and reused, thus reducing process costs [20,21].
A modern type of reactor that can facilitate the separation of the catalyst from the reaction mixture is the rotating packed bed reactor (RBR), in which the catalyst is confined separately from the rest of the reaction mixture. Mixing occurs due to the centrifugal force generated by the rotating catalyst container. The use of the RBR reduces the energy consumption and the amount of water needed to separate the catalyst from the reaction mixture. The literature describes studies where the RBR was used for the epoxidation of vegetable oils using ion exchange resin under conventional heating [22], of oleic acid, TOFA and distilled tall oil under microwave irradiation [23], and of oleic acid in the presence of ultrasound irradiation [24].
The Polymer Laboratory at the Latvian State Institute of Wood Chemistry has previously studied the epoxidation of TOFA. Kirpluks et al. studied the epoxidation process of TOFA under conventional heating [9,17]. Studies on the epoxidation of TOFA using in-situ formed peracetic acid, catalyzed by the Amberlite® IRC120 H ion exchange resin, have confirmed that the resulting epoxidized TOFA is a mixture of monomers, dimers, trimers and oligomers [14,17,19]. Thus, the bio-polyols synthesized from ETOFA exhibited high viscosity, which significantly limits their potential application. The high viscosity of bio-polyols is undesirable as it complicates the large-scale production of rigid polyurethane foams [14].
The objective of this article was to compare the epoxidation of neat TOFA and of their methyl ester in an RBR. Esterification of TOFA could help to reduce the occurrence of undesirable side reactions. The epoxidation reactions were carried out using catalyst contents of 10, 15, 20 and 25 wt.%. The stability and reusability of the catalyst were also tested. The following characteristics were determined for the obtained products: epoxy value, acid value and viscosity. The chemical structures of epoxidized TOFA and epoxidized TOFAME were studied by Fourier-transform infrared spectroscopy and gel permeation chromatography.
Synthesis of TOFAME
The methylation of TOFA was carried out in a 2 l three-necked round-bottom flask. The flask was immersed in a water bath and equipped with a stirrer, a thermocouple, and a reflux condenser. The reaction conditions were chosen based on literature data [25]. The reaction temperature was 55 °C, and the reaction time was 30 min. The molar ratio of methanol to TOFA double bonds was 6:1. The catalyst content was 0.5 wt.% of TOFA. At first, 900 g of TOFA was added to the flask. The flask was immersed in the water bath, and the catalyst-methanol mixture (4.5 g of H2SO4 and 600 g of MeOH) was added to the TOFA and stirred (100 rpm) under reflux. The reaction start time was taken as the moment the mixture reached the set temperature of 55 °C. After the reaction was completed, the mixture was poured into a separating funnel, and about 100 ml of EtOAc was added. The bottom aqueous waste phase was drained off. The upper organic phase, consisting of TOFAME, was washed four times with warm distilled water at a temperature of 55 °C and then dried using a rotary vacuum evaporator.
Synthesis of Epoxidized TOFA and Epoxidized TOFAME
Epoxidation was carried out using TOFA and TOFAME, resulting in epoxidized tall oil fatty acids (ETOFA) and epoxidized tall oil fatty acids methyl ester (ETOFAME). The synthesis scheme is given in Fig. 1.
The epoxidation of TOFA was carried out in a 1200 ml RBR, model V3, manufactured by SpinChem® (Sweden). The reaction vessel is made of borosilicate glass. The rotating bed, with a diameter of 70 mm and a height of 30 mm, the catalyst separation filter with a porosity of 104 µm, and the shaft are made of stainless steel. The RBR was equipped with a heating/cooling jacket and a bottom drain. A thermocouple, a dropping funnel, and a reflux condenser were attached to the 5-neck lid. The rotating bed filled with ion exchange resin also served as the stirrer. The epoxidation of TOFA was carried out using peroxyacetic acid generated in-situ by the reaction of AcOH and H2O2, with ion exchange resin as the catalyst. The molar ratio of TOFA double bonds to H2O2 to AcOH was 1.0:1.5:0.5. The mass of the catalyst was kept constant (40 g), while catalyst contents of 10, 15, 20, and 25 wt.% in relation to TOFA were obtained by varying the amount of TOFA.
At first, the calculated amounts of TOFA and AcOH were added to the reactor. The initial set temperature of the jacket was 40 °C. The speed of the RBR was set to 400 rpm, and stirring was started. The calculated mass of H2O2 was added to the dropping funnel. After the reaction mixture reached a temperature of 40 °C, the H2O2 was added over 60 min. The reaction temperature was then increased by 5 °C at intervals of 15 min, finally setting the reaction temperature to 60 °C and running the reaction for 7 h in total. The temperature of the reaction mixture did not exceed the set temperature by more than 2 °C. During the epoxidation, small amounts of product were collected every hour through the bottom drain of the RBR for analysis. Products were washed by adding EtOAc and warm distilled water at a temperature of 55 °C. The organic phase was washed 3 times with distilled water in a separating funnel. Products were dried using a rotary vacuum evaporator to remove water and EtOAc residues. Fresh ion exchange resin was used for every reaction. Reagent weights for TOFA or TOFAME epoxidation are given in Table 1.
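For illustration, the snippet below computes the reagent charges implied by the 1.0:1.5:0.5 molar ratio stated above; the H2O2 solution strength and the example TOFA charge are assumptions, not values from Table 1.

```python
# Reagent masses for one epoxidation batch from the double-bond content.
M_H2O2, M_ACOH = 34.01, 60.05   # molar masses, g/mol
H2O2_WT_FRACTION = 0.35         # assumed ~35 wt.% aqueous H2O2 solution

def reagent_masses(m_tofa_g, n_db_mol_per_g):
    n_db = m_tofa_g * n_db_mol_per_g                  # mol C=C in the charge
    m_h2o2 = 1.5 * n_db * M_H2O2 / H2O2_WT_FRACTION   # as aqueous solution
    m_acoh = 0.5 * n_db * M_ACOH
    return m_h2o2, m_acoh

# e.g. 200 g TOFA with ~0.0061 mol double bonds per gram (IV ~ 155, Eq. 2):
print(reagent_masses(200.0, 155.0 / (100.0 * 253.8)))
```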
The epoxidation synthesis of TOFAME was carried out in the same way as the TOFA epoxidation reaction. The overall reaction time was extended to 9 h to obtain additional information about the synthesis.
Reusability of Amberlite® IRC120 H Ion Exchange Resin in the Epoxidation of TOFA
Ten epoxidation reactions were performed to determine the reusability of the catalyst. Syntheses of ETOFA were carried out as described in Sect. "Synthesis of Epoxidized TOFA and Epoxidized TOFAME", with the difference that the catalyst was not replaced between syntheses. The catalyst load for TOFA epoxidation was 20 wt.%. The reaction time was reduced to 4 h. The design of the RBR allows the catalyst to be separated from the reaction mixture without any losses or additional operations such as filtration, washing and drying, as is the case when a batch reactor is used [19]. Separation of the reaction mixture from the catalyst in the RBR involves solely pouring it out through the bottom drain.
Fig. 1 The synthesis scheme of epoxidized TOFA or TOFAME
Methods of Analysis
The iodine value (IV) was determined according to ISO 3961:2018 and is calculated using Eq. (1):

IV = 12.69 · c_t · (V_b − V_s) / m_s (1)

where V_b and V_s are the volumes of sodium thiosulfate required for the blank and the sample, in ml, c_t is the concentration of sodium thiosulfate, in mol/l, m_s is the mass of the sample, in g, and 12.69 is the conversion factor from milliequivalents of sodium thiosulfate to grams of iodine.
The iodine value was used to determine the fatty acid unsaturation (n_db, moles of double bonds per gram of oil) using Eq. (2):

n_db = IV / (100 · M_I2) (2)

where M_I2 is the molar mass of I2, in g/mol. The epoxy value (EV) (the content of oxirane rings) was determined according to the ASTM D1652-11(2019) standard. The epoxy group content in moles per 100 g of oil was calculated using Eq. (3):

EV = (V_t · c_t) / (10 · m_s) (3)
where V_t is the volume of titrant used, in ml, c_t is the titrant concentration, in mol/l, and m_s is the mass of the sample, in g.
The percentage of relative conversion of unsaturated bonds to oxirane (RCO) was calculated by Eq. (4) [26]:

RCO = (OO_ex / OO_th) × 100 (4)
where OO_ex is the experimentally determined oxirane oxygen content (%), calculated by Eq. (5), and OO_th is the theoretical maximum oxirane oxygen content in 100 g of fatty acids (%), calculated by Eq. (6):

OO_ex = A_o · EV (5)

where A_o is the atomic mass of oxygen, and

OO_th = [ (IV_o / (2 · A_i)) / (100 + (IV_o / (2 · A_i)) · A_o) ] · A_o · 100 (6)

where A_i is the atomic mass of iodine and IV_o is the initial iodine value of the fatty acid sample.
The relative ethylenic unsaturation (REU) was calculated by Eq. (7):

REU = (IV_ex / IV_o) × 100 (7)

where IV_o is the initial iodine value and IV_ex is the remaining iodine value during synthesis.
The acid value (AV) was calculated by Eq. (8):

AV = (V_s − V_b) · c_t · 56.106 / m_s (8)

where V_b and V_s are the volumes, in ml, of potassium hydroxide required for the blank and the sample, respectively, c_t is the concentration of KOH, in mol/l, m_s is the mass of the sample, in g, and 56.106 is the molar mass of KOH, in g/mol. Equivalently, in terms of the net volume of titrant consumed, Eq. (9):

AV = V_t · c_t · 56.106 / m_s (9)

where V_t is the volume of titrant used, in ml, c_t is the concentration of KOH, in mol/l, m_s is the mass of the sample, in g, and 56.106 is the molar mass of KOH, in g/mol.
From the determined relative conversion to oxirane (Eq. 4) and relative ethylenic unsaturation (Eq. 7), the selectivity (S) of the TOFA and TOFAME epoxidation reactions was calculated according to Eq. (10):

S = RCO / (100 − REU) (10)
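As a worked illustration of Eqs. (1)-(7) and (10) as written above, the helper below turns titration volumes (ml), concentrations (mol/l) and sample masses (g) into the reported quantities; it is our sketch, not the authors' calculation script.

```python
# Titration-derived epoxidation metrics.
A_I, A_O = 126.9, 16.0  # atomic masses of iodine and oxygen, g/mol

def iodine_value(v_blank, v_sample, c_thio, m_sample):
    return 12.69 * c_thio * (v_blank - v_sample) / m_sample    # Eq. (1)

def epoxy_value(v_titrant, c_titrant, m_sample):
    return v_titrant * c_titrant / (10.0 * m_sample)           # Eq. (3), mol/100 g

def rco(ev, iv0):
    oo_ex = A_O * ev                                           # Eq. (5)
    x = iv0 / (2.0 * A_I)                                      # mol C=C per 100 g
    oo_th = 100.0 * x * A_O / (100.0 + x * A_O)                # Eq. (6)
    return 100.0 * oo_ex / oo_th                               # Eq. (4)

def selectivity(rco_pct, iv_now, iv0):
    reu = 100.0 * iv_now / iv0                                 # Eq. (7)
    return rco_pct / (100.0 - reu)                             # Eq. (10)
```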
The viscosity was measured at 25 °C using a Thermo Scientific HAAKE medium-high range rotational viscometer (Thermo Fisher Scientific, Waltham, MA, USA).
The spectroscopic analysis of the chemical structure of the precursors and products was carried out using a Fourier-transform infrared spectrometer (FTIR) model iS50 (Thermo Fisher Scientific, Waltham, MA, USA) at a resolution of 4 cm −1 (32 scans) in the infrared range of 4000-500 cm −1 . The FTIR data were collected using an attenuated total reflectance (ATR) accessory with a diamond crystal.
An Agilent Infinity 1260 HPLC system (Agilent Technologies, Inc., Santa Clara, CA, USA) with a degasser, autosampler, refractive index (RI) detector, and MALS (miniDAWN) detector was used to perform gel permeation chromatography (GPC) analysis. The analysis was performed using two GPC analytical columns connected in series: PLgel Mixed-E (3 µm, 300 × 7.5 mm). The flow rate was 1 ml/min, and the temperature of the RI detector was 35 °C. A total of two duplicate trials were carried out.
Results and Discussion
The epoxidations of TOFA and TOFAME were carried out using peracetic acid generated in-situ by the reaction of acetic acid and hydrogen peroxide. The kinetic curves of RCO increase depending on the applied catalyst content are presented in Fig. 2.
In the case of TOFA epoxidation (Fig. 2a), the application of 10 wt.% catalyst content resulted in RCO of 42.5% over a period of 6 h and a decrease in RCO to 40.1% at the seventh hour of the reaction, which corresponds to an EV of 0.237 mol/100 g and 0.223 mol/100 g, respectively. The reduction in the RCO implies that the oxirane ring-opening reactions occurred more intensively than the formation of new epoxy groups after the sixth hour. The increase in the catalyst content led to an increase in RCO to 47.5% after 7 h of the reaction (EV of 0.262 mol/100 g), 49.1% (EV of 0.275 mol/100 g), and 50.4% (EV of 0.281 mol/100 g) for 15, 20 and 25 wt.% of the catalyst content, respectively. The maximum RCO was reached after 5 h of reaction for the 15 and 20 wt.% of the catalyst content, while for the 25 wt.% of the catalyst content after 4 h of reaction. In a comparable experiment conducted in a batch reactor, the maximum RCO value achieved after 5 h of reaction at 20 wt.% catalyst content was 42.9% [19].
The kinetic curves of TOFAME are shown in Fig. 2b. The obtained epoxidized TOFAME exhibited a significantly higher RCO than TOFA, reaching 65.2% (EV = 0.337 mol/100 g). Methylation of TOFA contributed to reducing the side reactions caused by the opening of oxirane rings by carboxyl groups. All kinetic curves except that of the reaction catalyzed by 10 wt.% catalyst showed virtually no increase in RCO after 7 h of the epoxidation reaction, which may be caused by side reactions occurring due to the presence of small amounts of fatty acids or by reactions with acetic acid, peracetic acid, hydrogen peroxide or water [27].
For the TOFAME epoxidation (Fig. 2b), an RCO of about 50% is achieved between 2 and 3 h of reaction at catalyst content > 15 wt.%. In comparison, the same RCO was achieved between 4 and 6 h for TOFA epoxidation at the same catalyst concentrations. The shorter epoxidation time leads to lower costs of the epoxidation process.
The intensity of side reactions is affected by the content of carboxyl groups in the fatty acid. The synthesized TOFAME had an average AV of 39.95 mg KOH/g (the average AV of TOFA was 198.03 mg KOH/g), indicating that not all carboxyl groups were esterified. Figure 3 shows the change in AV during the epoxidation reactions of TOFA (Fig. 3a) and TOFAME (Fig. 3b). The decrease in AV during epoxidation confirmed that carboxyl groups took part in side reactions by opening oxirane rings. The most significant changes in AV were observed for the TOFA epoxidation catalyzed with 25 wt.% catalyst content.
Application of heterogeneous catalysts such as functionalized acidic ion exchange resin in epoxidation reaction provides higher selectivity and reduces side reactions compared to homogeneous catalysts [26]. Small molecules of organic acids can easily diffuse into the structure of porous acidic ion exchange resin, where the formation of peracetic acid occurs. Larger-sized molecules, such as triglycerides, can penetrate the catalyst structure much more restrictedly; thus, the generated oxirane rings are protected from attack by protons confined to the catalyst matrix [28]. The intensive occurrence of side reactions in TOFA epoxidation may suggest that due to their small size, TOFA molecules (about 3 times smaller than triglyceride) penetrated the catalyst structure.
A study on the kinetics of the epoxidation of a high-linolenic triglyceride catalyzed by an ion exchange resin showed that the chemical groups (unsaturated and oxirane) at the 9th and 12th positions possess lower reactivity than the same groups at the 15th position. Chemical groups at the 15th position are not affected by the steric and electronic effects of the glycerol center, which strongly affect the closer groups (at the 9th and 12th positions). The opening of the epoxy group at the 15th position can cause steric hindrance affecting the epoxidation of the remaining double bonds, but it also prevents any interaction between the organic acid and the epoxy groups, thus preventing their cleavage [29]. TOFA and TOFAME do not contain a glycerol center that could interact sterically and electronically with chemical groups at the 9th, 12th and 15th positions. However, in the case of TOFA epoxidation, intensive oligomerization reactions leading to an increase in molecular weight can cause steric hindrance that prevents epoxidation of the remaining unsaturated bonds. This phenomenon, together with the side reactions of oxirane ring-opening, can be responsible for the low RCO of TOFA epoxidation. The lower AV in the case of TOFAME reduces the formation of the dimers and trimers responsible for steric hindrance; thus, a significantly higher RCO is achieved.
According to La Scala and Wool, the rate constants for the epoxidation of fatty acid methyl esters increase as the level of unsaturation increases; therefore, oleic acid should undergo relatively slower epoxidation than linoleic and linolenic acids. An explanation for this phenomenon is that as the number of unsaturated bonds increases, the electron density increases, resulting in a higher reaction rate constant [30].
The change in REU over time at different catalyst content is presented in Fig. 4. The REU of TOFA (Fig. 4a) ranged from 10 to 27% and decreased with increasing catalyst content after 7 h of reaction. The lower the REU value, the more double bonds have reacted. At the same time, low RCO of TOFA (Fig. 2a) (from 40 to 48% depending on catalyst content) combined with low REU confirmed the effect of a high content of carboxyl groups (high AV) on the intensity of the oxirane ring-opening side reaction. The REU of TOFAME (Fig. 4b) ranged from 20 to 33.5% after 7 h of reaction. It was observed that increasing the catalyst content for TOFAME above 15 wt.% has no significant effect on the REU value.
The chemical structures of TOFA during the epoxidation reaction were investigated using FTIR spectroscopy. The overall FTIR spectra of TOFA are shown in Fig. 5a. The =C-H double bond stretching peak with a maximum at 3009 cm⁻¹ (Fig. 5a) disappeared during the reaction, while the stretching peak at 823 cm⁻¹, originating from the -C-O-C- epoxy groups, appeared (Fig. 5a). The close-up of the C-O-C oxirane ring stretching vibration peak (Fig. 5c) shows the gradual increase of epoxy group content in the TOFA structure. The intensity of the epoxidation reaction decreased with time, which correlates with the RCO data (Fig. 2a). The gradual decrease of the =C-H stretching band at 3009 cm⁻¹ is shown in Fig. 5b; the peak practically disappeared, confirming the earlier REU results (Fig. 4a). Figure 5d shows the decreasing intensity of the -C=O carboxyl group stretching peak at 1707 cm⁻¹, accompanied by a growing ester-group band. This indicates that the carboxyl group of TOFA opens the epoxy group with the formation of an ester group in dimers and in molecules with masses that are multiples of the TOFA molecular weight. FTIR analysis of TOFAME and its epoxidation products was also performed; the overall FTIR spectra are shown in Fig. 6a. Similar to TOFA, the =C-H double bond stretching peak at 3009 cm⁻¹ (Fig. 6b) disappears almost completely during the reaction, while the -C-O-C- epoxy group stretching peak at 823 cm⁻¹ increases (Fig. 6c). Figure 6d shows a close-up of the carboxyl and ester group band region. The disappearance of the -C=O carboxyl group band at 1707 cm⁻¹ results in the gradual uncovering of the ester group band and a shift towards lower wavenumbers.
The initial presence of carboxyl group peak bands confirms their incomplete conversion during the esterification process. Nevertheless, the occurrence of side reactions affecting the increase of molecular weight and viscosity of the products was significantly reduced. Figure 7a shows the change in TOFA viscosity during the epoxidation with different catalyst content. The initial viscosity of TOFA was 27.26 mPa•s. The resulting epoxidized TOFA synthesized with a catalyst content of 10 wt.% had a viscosity of 956.93 mPa•s. Higher catalyst content led to products with higher viscosities of 2168.00, 1736.10 and 1836.50 mPa•s for synthesis with catalyst content of 15, 20 and 25 wt.%, respectively.
The change in viscosity during TOFAME epoxidation is shown in Fig. 7b. The initial viscosity of TOFAME was 7.36 mPa·s. Even after 9 h of reaction (2 h longer than for TOFA), the viscosity of the epoxidized TOFAME remained far lower than that of the epoxidized TOFA (Fig. 7b).

TOFA and TOFAME samples synthesized at 25 wt.% catalyst content were analyzed by GPC. Following the changes in molecular weight was necessary to analyze the course of the synthesis and to determine its optimal duration. The GPC chromatograms are shown in Fig. 8. During TOFA epoxidation (Fig. 8a), a significant reduction in the intensity of the peak with a retention time of ~15.30 min, corresponding to the monomer content, is observed. At the same time, the intensity of the peaks characterizing the content of dimers (retention time ~14.15 min) and trimers (retention time ~13.10 min) increased significantly. This indicates side reactions occurring during epoxidation that involve oxirane ring-opening by the carboxyl group of the fatty acid. The chromatogram also showed a growing peak at retention time ~14.90 min during the process, corresponding to by-products formed by oxirane ring-opening with AcOH. Figure 8b characterizes the content during TOFAME epoxidation. The changes in peak intensity are relatively small compared to the TOFA epoxidation process. Although the peak corresponding to the dimers is clearly observed, its intensity is significantly lower than in the case of TOFA epoxidation, meaning that dimers form in much lower content; significantly fewer by-products, such as dimers and trimers, are formed overall. The formation of dimers and trimers increases the viscosity of the product and thus makes it more difficult to use in further processing; preventing the formation of these undesirable by-products during the synthesis is therefore a significant benefit.

Polydispersity characterizes the molecular weight distribution, and its change is shown in Fig. 9. In the case of TOFA epoxidation, the polydispersity increased significantly during the first hours of epoxidation, reaching 1.8 in the 4th hour. In contrast, the change in dispersity during the epoxidation of TOFAME was less pronounced, indicating greater homogeneity of the TOFAME epoxidation products and a narrower molecular weight distribution. Table 2 summarizes the physico-chemical properties of epoxidized TOFA and TOFAME at the optimal synthesis time, which is taken to be the time at which the reaction reached its highest RCO.
The selectivity of the TOFAME epoxidation reaction was about 0.98-1.00, while the TOFA epoxidation selectivity was 0.61-0.67; the latter was lower due to the occurrence of side reactions involving oxirane ring-opening by the carboxyl group of the fatty acid. It was found that a catalyst content of 20 wt.% is sufficient to obtain products with high EV; increasing the catalyst content to 25 wt.% did not lead to a significant increase in EV for either TOFA or TOFAME epoxidation. A lower catalyst content can reduce the cost of the epoxidation process and is consistent with cleaner production principles. The reusability and easy separation of the catalyst from the reaction products are especially important for the cost-effectiveness of an industrial-scale process [21]. The further investigation consisted of conducting ten epoxidation reactions and determining the feasibility of using the ion exchange resin as a catalyst multiple times. Figure 10 summarizes the RCO and REU of 10 successive TOFA epoxidation reactions at 20 wt.% catalyst content. The results indicated good catalyst stability: the RCO ranged from 48.2 to 41.9%, and the REU from 32.2 to 36.3%. In a comparable experiment conducted in a batch reactor at 20 wt.% catalyst content, the RCO value after the 10th reuse of the catalyst decreased from 41.5 to 35.3% [19]. In the study by Aguilera et al. [23], the RCO of tall oil epoxidation in an isothermal batch reactor over four consecutive reactions was 42-45%.
By performing a linear approximation of the RCO and assuming a minimum value of 30% (EV approx. 0.18 mol/100 g), it can be concluded that the reusability of the catalyst in the TOFA epoxidation reaction is about 25 reactions. The use of TOFAME for the preparation of epoxidized derivatives with similar EVs to TOFA can lead to a significant extension of the catalyst lifetime.
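The extrapolation in the preceding paragraph can be reproduced with a simple linear fit; the per-run RCO series below is interpolated from the reported 48.2-41.9% range, not the actual measured values.

```python
import numpy as np

runs = np.arange(1, 11)
rco = np.linspace(48.2, 41.9, 10)        # placeholder trend over 10 reuses
slope, intercept = np.polyfit(runs, rco, 1)
n_max = (30.0 - intercept) / slope       # run at which RCO hits the 30% floor
print(f"RCO reaches 30% after ~{n_max:.0f} reactions")
# ~27 with this placeholder trend; the paper estimates about 25 from its data.
```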
Conclusions
A series of TOFA and TOFAME epoxidation reactions were carried out in the RBR reactor using the Amberlite® IRC120 H catalyst. The results showed that methylation of TOFA allows obtaining a product with a higher relative conversion to oxirane, and the selectivity of the TOFAME epoxidation reaction was higher than that of the TOFA epoxidation. It was found that 20 wt.% of Amberlite® IRC120 H is sufficient to obtain products with a high epoxy value; increasing the catalyst content to 25 wt.% did not significantly increase the epoxy value for either TOFA or TOFAME epoxidation. Conducting the epoxidation reaction in the RBR reactor facilitated the separation process and made it possible to reuse the catalyst. The Amberlite® IRC120 H catalyst exhibited good stability in the TOFA epoxidation reaction: the relative conversion to oxirane decreased from 48.2 to 41.9% over 10 subsequent reactions. The conversion of double bonds to oxiranes in the TOFA epoxidation reaction carried out in the RBR reactor was higher than when a batch reactor was used. The produced TOFAME epoxy derivatives, characterized by very low viscosity and high epoxy value, are more suitable as a raw material for the synthesis of bio-polyols than the TOFA epoxy derivatives.
Conflict of Interest
The authors declare that they have no competing interests as defined by Springer, or other interests that might be perceived to influence the results and/or discussion reported in this paper.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. | 6,984.6 | 2022-08-24T00:00:00.000 | [
"Chemistry",
"Engineering",
"Environmental Science"
] |
Hadronic medium effects on $Z_{cs} (3985)^-$ production in heavy-ion collisions
In this work we study the interactions of the multiquark state $Z_{cs} (3985)^-$ with light mesons in a hot hadron gas. Using an Effective Lagrangian framework, we estimate the vacuum cross sections as well as the thermal cross sections of the production processes $ \bar{D}_{s}^{(*)} D_{s}^{(*)} \rightarrow Z_{cs}^- X \, (X=\pi , K, \eta) $ and the corresponding inverse reactions. The results indicate that the considered processes have sizeable cross sections. Most importantly, the thermal cross sections for $Z_{cs}$ annihilation are much larger than those for production. This feature might produce relevant effects on some observables, such as the final $Z_{cs}$ multiplicity, measured in heavy ion collisions.
I. INTRODUCTION
Recently, the BES-III Collaboration has observed an excess of events in the $K^+$ recoil-mass spectrum of the reaction $e^+e^- \to K^+(D_s^{*-}D^0 + D_s^-D^{*0})$ for events collected at the center-of-mass energy $\sqrt{s} = 4.681$ GeV, with an estimated statistical significance of 5.3σ [1]. By using an amplitude model based on the Breit-Wigner formalism, this peak has been fitted to a resonance with mass and width given by $M = (3982.5^{+1.8}_{-2.6} \pm 2.1)$ MeV and $\Gamma = (12.8^{+5.3}_{-4.4} \pm 3.0)$ MeV, respectively, and has been denoted as $Z_{cs}^-(3985)$. Its minimum valence quark content should most likely be $c\bar{c}s\bar{u}$, giving it the status of the first candidate for a charged hidden-charm tetraquark with strangeness.
Since the experimental discovery of the $Z_{cs}^-(3985)$ state (or simply $Z_{cs}^-$), the hadron spectroscopy community has been intensely debating its internal structure and the possible mechanisms of its decay and production. Because of its proximity to the $D_s^{*-}D^0$ and $D_s^-D^{*0}$ thresholds, the hadronic molecular interpretation for the $Z_{cs}^-$ seems natural; along this line, this new state would be the strange partner of the $Z_c(3900)$ [3-9]. Notwithstanding, other possible interpretations have also been proposed, namely: a compact tetraquark configuration resulting from the binding of a diquark and an antidiquark [10,11,13]; a virtual pole state [12]; a kinematic effect caused by triangle singularities [14,15]; a resonance [16]; and so on. More experimental and theoretical studies are clearly needed.
A new and promising scenario in which to investigate the properties of exotic states is heavy-ion collisions (HICs). They are characterized by the formation of a locally thermalized state of deconfined quarks and gluons (the quark-gluon plasma, or QGP). At the end of the QGP phase, quarks coalesce to form conventional bound states and also exotic states. The latter will exist in a hadron gas and interact with other light hadrons. As pointed out in previous studies, exotic states can be destroyed in collisions with the comoving light mesons, as well as produced through the inverse processes [28-35]. Their final yields depend on the interaction cross sections, which, in turn, depend on the spatial configuration of the quarks. In the study of the most famous exotic state, the X(3872), it has been shown that the molecular configuration (i.e., the bound state $D\bar{D}^* + \mathrm{c.c.}$) is spatially larger than the diquark-antidiquark configuration $[(cq)(\bar{c}\bar{q})]$ by a factor of about 3-10 [30]. Consequently, meson molecules have larger cross sections and are expected to be both more easily produced and more easily destroyed than compact tetraquarks in a hadronic medium.
The recent observation of the X(3872) in Pb-Pb collisions at $\sqrt{s_{NN}} = 5.02$ TeV by the CMS Collaboration [36] has opened a new era for the study of exotic states. This observation strengthens our belief that HICs provide a unique and promising experimental environment to study the nature of exotic hadrons.
The present contribution is part of a series of works devoted to the production of exotic states in heavy-ion collisions. In the following sections we analyze the interactions of the $Z_{cs}^-$ state with light mesons. In Section II we present our effective Lagrangian formalism. In Section III we use it to calculate the $Z_{cs}^-$ production and absorption cross sections, and in Section IV we compute the corresponding thermal averages. Finally, Section V is dedicated to the summary and concluding remarks.
II. THE FORMALISM
To understand how the $Z_{cs}^-$ behaves in a surrounding hadronic medium, we study its interactions with the lightest pseudoscalar mesons π, K and η. More precisely, we focus on the reactions in which a $\bar{D}^{(*)}_{(s)} D^{(*)}_{(s)}$ pair converts into $Z_{cs}\,X$ ($X = \pi, K, \eta$), as well as the inverse processes. In Fig. 1 we present the lowest-order Born diagrams contributing to these processes, without specifying the charge of the particles.
In the evaluation of the reactions in Fig. 1, we make use of the effective-theory approach. The couplings involving the π, $K^{(*)}$, $D^{(*)}$ and $D_s^{(*)}$ mesons are based on the effective formalism in which the vector mesons are identified as dynamical gauge bosons of the hidden $U(N)_V$ local symmetry; the corresponding Lagrangians, collected in Eq. (1), are properly explained in Refs. [28-32]. In Eq. (1), τ denotes the Pauli matrices in isospin space; π denotes the pion isospin triplet; and $D^{(*)} = (D^{(*)0}, D^{(*)+})$ and $K = (K^+, K^0)^T$ represent the isospin doublets for the pseudoscalar (vector) $D^{(*)}$ and K mesons, respectively. The coupling constants in Eq. (1) describe pseudoscalar-pseudoscalar-vector and vector-vector-pseudoscalar vertices and are given in Refs. [29-32] in terms of the universal hidden-gauge coupling

$$g = \frac{m_V}{2 f_\pi},$$

where $m_V$ is the mass of the vector meson, taken here as the mass of the ρ meson, and $f_\pi$ is the pion decay constant. As pointed out in Ref. [29], the factor $m_{D^*}/m_{K^*}$ in the coupling $g_{PPV}$ is introduced in order to reproduce the experimental decay width of the process $D^* \to D\pi$, and comes from heavy-quark symmetry considerations. The couplings involving the $Z_{cs}^-$ are introduced assuming that it is an S-wave bound state engendered by the superposition of the $D_s^{*-}D^0$ and $D_s^-D^{*0}$ configurations with quantum numbers $I(J^P) = \frac{1}{2}(1^+)$. As a consequence, the effective Lagrangian describing the interaction between the $Z_{cs}^-$ and the $D_s^{*-}D^0$ and $D_s^-D^{*0}$ pairs is given by [22]

$$\mathcal{L}_{Z_{cs}} = g_{Z_{cs}}\, Z_{cs}^{\mu\,\dagger} \left( \bar{D}^{*}_{s\mu} D + \bar{D}_s D^{*}_{\mu} \right) + \mathrm{h.c.},$$

where $Z_{cs}$ denotes the field associated with the $Z_{cs}^-$ state (this notation will be used henceforth), and $\bar{D}^*_{s\mu} D$ and $\bar{D}_s D^*_\mu$ denote the $D_s^{*-}D^0$ and $D_s^-D^{*0}$ components, respectively. The effective coupling constant $g_{Z_{cs}}$ is taken in the range $g_{Z_{cs}} = 6.0$-$6.7$ in order to describe the $Z_{cs}$ width, as discussed in Ref. [22].
Based on the effective Lagrangians introduced above, the amplitudes of the processes shown in Fig. 1 can then be calculated. Their explicit expressions involve the isospin factor $\tau_I$ associated with the particles in the PPV and VVP vertices; the momenta $p_1$ ($p_3$) and $p_2$ ($p_4$) of the initial (final) state particles; and the Mandelstam variables

$$t = (p_1 - p_3)^2, \qquad u = (p_1 - p_4)^2.$$

The isospin coefficients $\tau_I$ of the reactions listed in Eqs. (5) are determined by considering the charges $Q_{1f}$ and $Q_{2f}$ of the two particles in the final state, whose combination gives the total charge $Q = Q_{1f} + Q_{2f} = 0, -1$. There are two possible charge configurations $(Q_{1f}, Q_{2f})$ for each process in Eq. (5); the values of $\tau_I^{(i)}$ for the possible configurations are listed in Table I.
III. CROSS SECTIONS
The isospin-spin-averaged cross section in the center of mass (CM) frame for the processes in Eq. (5) is given by

$$\sigma(s) = \frac{1}{64\pi^2 s}\, \frac{|\vec{p}_{cd}|}{|\vec{p}_{ab}|}\, \frac{1}{g_I\, g_S} \int d\Omega \sum_{S,I} \left| \mathcal{M} \right|^2,$$

where $\sqrt{s}$ is the CM energy; $|\vec{p}_{ab}|$ and $|\vec{p}_{cd}|$ stand for the three-momenta of the initial and final particles in the CM frame, respectively; and the symbol $\sum_{S,I}$ denotes the sum over the spins and isospins of the particles in the initial and final state, weighted by the isospin and spin degeneracy factors $g_I = (2I_1 + 1)(2I_2 + 1)$ and $g_S = (2S_1 + 1)(2S_2 + 1)$ of the two particles forming the initial state. Finally, as usual, we have introduced a form factor F to account for the composite nature of hadrons and their finite extension observed at increasing momentum transfers. The form factor suppresses the high-momentum region and therefore tames the artificial growth of the cross sections. We use a monopole-like expression, defined as [33,34]

$$F(q) = \frac{\Lambda^2}{\Lambda^2 + q^2},$$

with q the momentum of the exchanged particle in a t- or u-channel in the CM frame, and Λ the cutoff, chosen in the range $m_{min} < \Lambda < m_{max}$, taking $m_{min}$ ($m_{max}$) as the mass of the lightest (heaviest) particle entering or exiting the vertices. In the present approach we fix Λ = 2.0 GeV. For a detailed discussion of the role and choice of the form factor, we refer the reader to Ref. [34]. Using the detailed-balance relation, we can also evaluate the cross sections of the inverse processes, which lead to the absorption of the $Z_{cs}^-$ state. The calculations of the present work are done with the isospin-averaged masses reported in the PDG [37]. Since we use a range of values for the coupling $g_{Z_{cs}}$ (in order to take the uncertainties into account), the results are shown as bands.
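To make the role of the cutoff concrete, here is a small numeric look at the monopole form factor as reconstructed above, with the Λ = 2.0 GeV used in the text; the F⁴ column assumes one factor of F per vertex in the squared amplitude, a common convention rather than something stated here.

```python
import numpy as np

Lam = 2.0                          # cutoff, GeV
q = np.linspace(0.0, 2.0, 5)       # momentum of the exchanged particle, GeV
F = Lam**2 / (Lam**2 + q**2)       # monopole form factor
for qi, Fi in zip(q, F):
    print(f"q = {qi:.1f} GeV -> F = {Fi:.2f}, F^4 = {Fi**4:.2f}")
```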
The cross sections for $Z_{cs}^-$ production as functions of the CM energy $\sqrt{s}$ are plotted in Fig. 2. Excluding the contribution of the channel $\bar{D}^*_s D^* \to Z_{cs}\pi$, all the cross sections are endothermic, having a substantial increase near the threshold and thereafter a weak dependence on $\sqrt{s}$. In the region close to the threshold we note that the distinct channels have magnitudes of the order of $\sim 10^{-4}$-$10^{-2}$ mb. For the $Z_{cs}^-$ production induced by kaons and η mesons, the channels with final states $D_s \bar{D}^*_s$ and $D_s \bar{D}_s$ have maximal cross sections at smaller CM energies. This pattern remains at moderate CM energies (i.e., 500 MeV above the threshold) for the channels involving $Z_{cs} K$, η production, whereas those of $Z_{cs}\pi$ have closer magnitudes.
Let us now examine the inverse processes. Their cross sections as functions of the CM energy $\sqrt{s}$ are plotted in Fig. 3. All these absorption cross sections are exothermic, becoming very large near the threshold. The exception is the case of $Z_{cs}\pi \to \bar{D}^*_s D^*$, which has a distinct behavior: it starts small at the threshold but rapidly increases and becomes very large, and after that decreases as in the other cases. From the region close to the threshold up to moderate energies, we observe that the cross sections are of the order of $\sim 10^{-3}$-$10^{-1}$ mb.
The comparison between $Z_{cs}^-$ absorption and production by comoving light mesons can be made more easily when the different contributions are added up. The total cross sections for $\mathrm{All} \to Z_{cs}^- X$ and $Z_{cs}^- X \to \mathrm{All}$ ($X = \pi, K, \eta$) as functions of $\sqrt{s} - \sqrt{s_0}$ ($\sqrt{s_0}$ being the mass threshold for each channel) are plotted in Fig. 4. The results suggest that the cross sections $\sigma_{\mathrm{All} \to Z_{cs}^- X}$ have similar magnitudes and a weak dependence on $\sqrt{s} - \sqrt{s_0}$. This fact reflects the dynamics as well as the choice of the values of the coupling constants. In the case of the absorption processes, this similarity is less pronounced and the dependence on $\sqrt{s} - \sqrt{s_0}$ is stronger. The most important information contained in Fig. 4 is that, for the energy values most relevant to heavy-ion collisions ($\sqrt{s} - \sqrt{s_0} < 0.6$ GeV), $\sigma_{Z_{cs}^- X \to \mathrm{All}} > \sigma_{\mathrm{All} \to Z_{cs}^- X}$, i.e., the absorption cross sections are greater than the production ones.
In order to better understand this behavior, it is useful to rewrite the ratio of momenta in Eq. (8) in an expanded and more instructive form:

$$\frac{|\vec{p}_{cd}|^2}{|\vec{p}_{ab}|^2} = \frac{\left[s-(m_c+m_d)^2\right]\left[s-(m_c-m_d)^2\right]}{\left[s-(m_a+m_b)^2\right]\left[s-(m_a-m_b)^2\right]}. \qquad (11)$$

Now let us consider the processes with the largest cross sections: $Z_{cs}\pi \to \bar{D}^*_s D^*$ and the corresponding inverse process $\bar{D}^*_s D^* \to Z_{cs}\pi$. Assuming, just for the sake of the discussion, that $m_\pi = 0$, $m_{\bar{D}^*_s} = m_{D^*} = m$ and $m_{Z_{cs}} = 2m$, and substituting these masses in (11), we find that the ratio is $s/(s-4m^2)$ for $Z_{cs}$ absorption and $(s-4m^2)/s$ for $Z_{cs}$ production. We see then that the difference between these two processes comes to a large extent from the phase space and can be big.
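A quick numerical check of this schematic phase-space argument (the masses are the toy values of the text, not physical ones):

```python
import numpy as np

m = 2.0                                     # common D*(s) mass scale, GeV
s = np.linspace(4.1 * m**2, 6.0 * m**2, 4)  # CM energies above threshold
ratio_absorption = s / (s - 4 * m**2)       # Eq. (11) with the toy masses
print(ratio_absorption)          # >> 1 near threshold: absorption enhanced
print(1.0 / ratio_absorption)    # << 1 near threshold: production suppressed
```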
Apart from the ratio of momenta, differences can also be due to the degeneracy factors. In the absorption process, the initial state is the $Z_{cs}$-$\pi$ system, for which the isospin ($g_I$), spin ($g_S$) and total ($g_T^a$) degeneracy factors are $g_I = (2\cdot\tfrac12+1)(2\cdot 1+1) = 6$, $g_S = (2\cdot 1+1)(2\cdot 0+1) = 3$ and $g_T^a = g_I\, g_S = 18$. For the production process, we have $\bar{D}^*_s$ and $D^*$ in the initial state, and the corresponding degeneracy factors are $g_I = 1 \times 2 = 2$, $g_S = 3 \times 3 = 9$ and $g_T^p = 18$. In this example $g_T^a = g_T^p$ and the difference between absorption and production comes solely from the phase space. However, in other processes $g_T^a$ and $g_T^p$ can differ by one order of magnitude.
IV. THERMAL CROSS SECTIONS
Motivated by the results of the previous section, we turn our attention to the heavy-ion collision environment. Keeping in mind that the temperature of the hadronic medium drives the collision energy, it is convenient to evaluate the thermal cross sections, defined as convolutions of the vacuum cross sections with thermal momentum distributions of the colliding particles. This thermal average leads to a strong suppression of the kinematical configurations very close to the thresholds, and therefore threshold effects will not play a relevant role in the presence of a hot hadronic medium.
The cross section averaged over the thermal distribution for a reaction involving an initial two-particle state going into two final particles, $ab \to cd$, is given by [28,30-32,38]

$$\langle \sigma_{ab\to cd}\, v_{ab}\rangle = \frac{\int d^3p_a\, d^3p_b\, f_a(p_a)\, f_b(p_b)\, \sigma_{ab\to cd}\, v_{ab}}{\int d^3p_a\, d^3p_b\, f_a(p_a)\, f_b(p_b)},$$

which, evaluated with Boltzmann-like distributions, reduces to

$$\langle \sigma_{ab\to cd}\, v_{ab}\rangle = \frac{1}{4\,\beta_a^2 K_2(\beta_a)\,\beta_b^2 K_2(\beta_b)} \int_{z_0}^{\infty} dz\, K_1(z)\, \sigma(s = z^2 T^2)\, \left[z^2 - (\beta_a+\beta_b)^2\right]\left[z^2 - (\beta_a-\beta_b)^2\right],$$

where $v_{ab}$ denotes the relative velocity of the two initial interacting particles; the function $f_i(p_i)$ is the Bose-Einstein distribution; $\beta_i = m_i/T$ (T being the temperature); $z_0 = \max(\beta_a + \beta_b, \beta_c + \beta_d)$; and $K_1$ and $K_2$ are the modified Bessel functions of the second kind. In Figs. 5 and 6 we show the thermal cross sections for $Z_{cs}$ production and absorption plotted as functions of the temperature. The results reveal that, in general, the thermal cross sections for $Z_{cs}$ absorption do not change much in this range of temperature, staying almost constant. On the other hand, in the case of $Z_{cs}$ production, most of the cross sections grow significantly with the temperature.
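A numeric sketch of this thermal average in the Boltzmann limit follows; it is our implementation, and the flat toy cross section and the masses (chosen to mimic $Z_{cs}\pi \to \bar{D}^*_s D^*$) are stand-ins for the actual curves of Fig. 3.

```python
import numpy as np
from scipy.special import kn       # modified Bessel functions K_n
from scipy.integrate import quad

def thermal_avg(sigma, m_a, m_b, m_c, m_d, T):
    """<sigma v> for ab -> cd with Boltzmann-like distributions."""
    ba, bb = m_a / T, m_b / T
    z0 = max(ba + bb, (m_c + m_d) / T)
    def integrand(z):
        s = (z * T) ** 2
        return (kn(1, z) * sigma(s)
                * (z**2 - (ba + bb)**2) * (z**2 - (ba - bb)**2))
    num, _ = quad(integrand, z0, z0 + 30.0)   # K1 cuts the integral off fast
    return num / (4.0 * ba**2 * kn(2, ba) * bb**2 * kn(2, bb))

sigma_toy = lambda s: 0.1                     # mb, flat toy cross section
# Z_cs pi absorption: m_a = m_pi, m_b = m_Zcs, products ~2.112, 2.010 GeV
print(thermal_avg(sigma_toy, 0.14, 3.985, 2.112, 2.010, T=0.15))
```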
These features can be understood from the energy dependence of the cross sections shown in Figs. 2 and 3. As can be seen, all the cross sections of $Z_{cs}$ production (with one exception) grow with the CM energy; as the temperature increases and the charmed mesons in the initial state become more energetic (surpassing the threshold), the thermal production cross sections grow with T.
We emphasize that our most important result is that the thermal cross sections for $Z_{cs}$ absorption are greater than those for production, by at least one order of magnitude. For instance, the cross section of $Z_{cs}\pi \to \bar{D}^*_s D^*$ is bigger than that of the corresponding inverse reaction by one order of magnitude; in the case of the channel $Z_{cs} K \to D_s \bar{D}_s$ and its inverse, the difference is of at least two orders of magnitude, depending on the temperature.
This result might have important implications for the observed final yield of the Z cs state in heavy ion collisions. The Z cs multiplicity at the end of the quark-gluon plasma phase (which may be estimated via the coalescence model) might go through sizeable changes because of the interactions during the hadron gas phase. The different magnitudes of the thermal cross sections for the Z cs annihilation and production by comoving hadrons might lead to a suppression of Z cs .
V. CONCLUDING REMARKS
In this work we have investigated the interactions of the multiquark state $Z_{cs}$ with light mesons in the hadron gas phase, making use of an effective Lagrangian framework. The vacuum cross sections as well as the thermal cross sections for the $Z_{cs}$ absorption and production processes involving light mesons X ($X = \pi, K, \eta$) have been estimated.
Our results have uncertainties coming from the coupling constants and from the form factors (with the corresponding cut-off). Nevertheless, they clearly show that the thermal cross sections for $Z_{cs}$ annihilation are larger than the corresponding ones for production. It would be tempting to conclude that there will be a reduction of the multiplicity of this state due to absorption by the hadron gas. However, the rate equation which controls the evolution of the $Z_{cs}$ abundance contains gain and loss terms, and they depend on the initial numbers of $D^{(*)}$'s and $D_s^{(*)}$'s. Since these mesons are much more abundant than the $Z_{cs}$'s, it is not clear a priori what the final outcome will be. A similar feature was also observed for other multiquark states, such as the $T_{cc}^+$: in that case, it was observed in [35] that the rise or fall of the initial abundance depended on several factors, including the internal structure (compact tetraquark or large meson molecule). This is certainly a very interesting question, and work in this direction is already in progress. | 4,356.8 | 2022-06-07T00:00:00.000 | [
"Physics"
] |
Anisotropic leaky-like perturbation with subwavelength gratings enables zero crosstalk
Electromagnetic coupling via an evanescent field or radiative wave is a primary characteristic of light, allowing optical signal/power transfer in a photonic circuit but limiting integration density. A leaky mode, which combines both evanescent field and radiative wave, causes stronger coupling and is thus considered not ideal for dense integration. Here we show that a leaky oscillation with anisotropic perturbation rather can achieve completely zero crosstalk realized by subwavelength grating (SWG) metamaterials. The oscillating fields in the SWGs enable coupling coefficients in each direction to counteract each other, resulting in completely zero crosstalk. We experimentally demonstrate such an extraordinarily low coupling between closely spaced identical leaky SWG waveguides, suppressing the crosstalk by ≈40 dB compared to conventional strip waveguides, corresponding to ≈100 times longer coupling length. This leaky-SWG suppresses the crosstalk of transverse–magnetic (TM) mode, which is challenging due to its low confinement, and marks a novel approach in electromagnetic coupling applicable to other spectral regimes and generic devices.
various PICs, transverse-magnetic (TM) mode, whose dominant electric field is vertical to the chip surface, doubles chip capacity and plays important roles in biochemical and gas sensing with its extended fields in the vertical direction 11,38,39 . Despite its significance, TM is difficult to confine due to a low height-to-width aspect ratio (for easy etching) and exhibits larger crosstalk than TE. The eskid waveguide also causes a stronger coupling for TM mode with increased skin depth 40 , and this large TM crosstalk issue still remains a challenge, impeding progress toward high-density chip integration.
As illustrated in Fig. 1b, a leaky mode can be formed by coupling a guided waveguide mode to the continuum of radiation modes in the surrounding infinite clad media 12,41,42. While the mode is propagating, the spread of these radiations enables coupling with other devices even when they are far apart. This radiative coupling provides a major advantage in directional couplers 43-45 and polarization splitters 46, as the coupling length remains short with increasing separation distance. But it becomes a drawback when it comes to unwanted waveguide crosstalk, as the cladding radiation significantly enhances the coupling strength between waveguides. Leaky modes, therefore, are not considered ideal for dense integration. However, by orienting the SWGs perpendicular to the propagation direction (Fig. 1c), we can form a leaky mode for TM polarization and achieve zero crosstalk. This counter-intuitive approach relies on the anisotropic nature of SWGs, for which each field component (i.e., E_x, E_y, and E_z) in the radiative waves is weighted differently than in the isotropic cladding case (Fig. 1b). Each component can be engineered anisotropically to cancel out the overall coupling strength by changing the homogenized optical indices of the SWG metamaterials. For practical use of the anisotropic leaky mode, the SWG lengths can be truncated to a finite width, as in Fig. 1d, forming a leaky-like mode. Despite the reduced cladding width, this guided mode still exhibits anisotropically oscillating fields in the cladding. These oscillating patterns are the primary characteristic of a leaky mode, and the field perturbations can be engineered depending on the finite width of the SWGs, which corresponds to the spacing between the two identical waveguides. The finite SWG width also removes the radiative losses that are due to leakage through the cladding.
In this work, we show that an anisotropic leaky-like oscillation realized by SWG metamaterials (as in Fig. 1d) can cancel crosstalk completely, i.e., achieve zero crosstalk. The leaky-like oscillation and zero crosstalk are realized with TM polarization, the bottleneck for chip integration due to its lower confinement. Starting by looking into the modal properties of leaky SWG modes, we apply coupled-mode analysis to reveal the unique dielectric perturbation of the anisotropic leaky-like mode, finding zero crosstalk between closely spaced identical SWG waveguides. Then, using Floquet boundary simulations, we design practically implementable SWG waveguides on a standard silicon-on-insulator (SOI) platform and experimentally demonstrate near-zero crosstalk, drastically increasing the coupling length of the TM mode by more than two orders of magnitude.
Anisotropic leaky-like oscillation with SWGs
To see the modal properties, we first simulated the fundamental TM (TM0) mode of a single waveguide and plotted the field components in each direction. Figure 2a-c shows the cross-sections of the strip, infinite-SWG, and finite-SWG waveguides, respectively. In order to model the anisotropic SWGs, we used the effective medium theory (EMT) with permittivities $\varepsilon_x = \varepsilon_y = \varepsilon_{\parallel}$ and $\varepsilon_z = \varepsilon_{\perp}$ given by 47

$$\varepsilon_{\parallel} = \rho\,\varepsilon_{\mathrm{Si}} + (1-\rho)\,\varepsilon_{\mathrm{air}}, \qquad \frac{1}{\varepsilon_{\perp}} = \frac{\rho}{\varepsilon_{\mathrm{Si}}} + \frac{1-\rho}{\varepsilon_{\mathrm{air}}}, \qquad (1)$$

where ρ is the filling fraction of silicon (Si) in the cladding, and $\varepsilon_{\mathrm{Si}}$ and $\varepsilon_{\mathrm{air}}$ are the permittivities of Si and air, respectively.

(Fig. 1 caption: the red lines illustrate the fundamental TM0 modes (E_y). a A typical strip waveguide supports a guided mode with exponentially decaying evanescent fields. b Placing infinite slabs adjacent to the strip results in a leaky mode with radiative loss into the slabs. c A perpendicular array of infinite SWGs can replace the slab, supporting a leaky mode with anisotropic field oscillations. d Truncating the SWGs makes the mode guided, without radiative losses, while preserving its leaky-like oscillations in the anisotropic SWG claddings; when coupled with other waveguides, this anisotropic leaky-like oscillation exhibits a non-conventional anisotropic perturbation and can result in zero crosstalk.)

The electric field profiles of each waveguide scheme are shown in Fig. 2d-f, from top to bottom, plotting the normalized Re(E_y), Re(E_x), and Im(E_z) (see Supplementary Information Fig. S1 for the magnetic field components). The strip waveguide (Fig. 2a, d) supports a well-confined, guided TM0 mode, exhibiting a dominant E_y field. On the other hand, the infinite-SWG waveguide (Fig. 2b, e) shows a leaky mode with laterally radiating waves; now E_x and E_z are not negligible due to the radiative waves, while E_y is still dominant in the core. Truncating the SWG cladding layers to a finite width w_swg (Fig. 2c) makes the mode confined, but the oscillating fields in the SWG claddings remain, exhibiting leaky-like field patterns (Fig. 2f). These oscillating waves in the SWGs can be controlled by changing w_swg (see Supplementary Information Fig. S2), introducing nontrivial dielectric perturbations once coupled with other waveguides. The gap g is introduced between the Si core and the SWG claddings to minimize scattering losses from a sharp corner in the experiment, but the modal properties show similar trends even without the gap (see Supplementary Information Fig. S3).
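Evaluating the EMT of Eq. (1) for the Si/air gratings used later in the paper makes the anisotropy explicit; the index values below are typical numbers near 1550 nm that we assume, not values quoted in the text.

```python
n_si, n_air = 3.476, 1.0            # assumed refractive indices near 1550 nm
eps_si, eps_air = n_si**2, n_air**2
rho = 0.45                          # Si filling fraction (as in "Methods")

eps_par = rho * eps_si + (1 - rho) * eps_air           # eps_x = eps_y
eps_perp = 1.0 / (rho / eps_si + (1 - rho) / eps_air)  # eps_z
print(eps_par, eps_perp)   # ~5.99 vs ~1.70: strongly anisotropic cladding
```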
Zero crosstalk in leaky-like SWG TM modes
To examine the coupling effect, we simulated the coupled modes of two identical waveguides and compared their coupling lengths. The cross-sections and geometric parameters of the coupled strip and SWG waveguides are depicted in Fig. 3a, b, respectively. The EMT models in Eq. (1) represent the finite, perpendicular SWG claddings in Fig. 3b. The coupling length L_c is used to quantify the crosstalk; it defines the minimal length over which optical power is maximally transferred from one waveguide to the other 48. The coupling length is a critical metric for comparing the degree of waveguide crosstalk (i.e., the ratio of power exchange), as the degree of crosstalk varies with waveguide length. The simulated effective indices of the coupled TM0 symmetric (red, n_s) and anti-symmetric (blue, n_a) modes are plotted in Fig. 3c (strip) and Fig. 3d (SWG) as a function of the SWG width w_swg, with their corresponding coupling lengths shown in Fig. 3e, f, respectively. The coupling lengths are normalized by the free-space wavelength λ0 = 1550 nm and are evaluated using 48,49

$$L_c = \frac{\lambda_0}{2\,\Delta n}, \qquad (2)$$

where Δn = |n_s − n_a| is the index difference between the symmetric and anti-symmetric modes. With the coupled strip waveguides (Fig. 3a), a typical trend is seen in which n_s is larger than n_a and the two approach each other as w_swg increases (Fig. 3c), giving a limited L_c/λ0 of less than 100 waves (Fig. 3e). This very short coupling length is due to the weaker confinement of the TM0 mode for the given separation distances, making the TM0 mode difficult for dense integration. For comparison, a typical L_c/λ0 of the fundamental TE mode for the same separation distance ranges approximately between 10^3 and 10^4 waves (see Supplementary Information Fig. S4). However, the coupled SWG waveguides (Fig. 3b) show a non-trivial coupling region where n_s < n_a (gray-shaded region, Fig. 3d). Moreover, at the transition point from the trivial coupling (n_s > n_a) to the non-trivial one (n_s < n_a), the index difference Δn becomes zero (n_s = n_a), which indicates an infinitely long coupling length, L_c → ∞ (from Eq. 2). This infinitely long coupling length is directly seen in Fig. 3f. It is worth noting that the TM0 mode of the SWG waveguides supports leaky-like radiative waves in the cladding, which would be expected to exhibit larger crosstalk (and thus a shorter coupling length) were it not for this non-trivial coupling.
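The divergence of L_c at the n_s = n_a crossing follows directly from Eq. (2); a short numeric illustration with made-up supermode indices:

```python
lam0 = 1.55e-6                                   # free-space wavelength, m
for n_s, n_a in [(1.6505, 1.6500), (1.650001, 1.650000)]:
    L_c = lam0 / (2 * abs(n_s - n_a))            # Eq. (2)
    print(f"dn = {abs(n_s - n_a):.0e} -> L_c/lambda0 = {L_c / lam0:.0f} waves")
# As n_s -> n_a the splitting vanishes and L_c diverges: zero crosstalk.
```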
Anisotropic dielectric perturbation with SWGs
To further understand the role of the leaky-like SWG mode in achieving zero crosstalk, we investigate each coupling scheme using coupled-mode analysis 48,49. The coupling coefficients κ_x, κ_y, and κ_z from each field component (E_x, E_y, and E_z) are calculated separately and then summed to obtain the total coupling coefficient |κ| = |κ_x + κ_y + κ_z| (see "Methods"). Figure 3g, h shows the calculated coupling coefficients of the coupled strip and SWG waveguides, respectively, as a function of w_swg: the normalized κ_x, κ_y, and κ_z (dashed lines, left axis) and |κ| (solid red line); the corresponding normalized L_c/λ0 of the coupled strip and SWG waveguides are plotted in Fig. 3i, j, respectively. For a guided TM0 mode (Fig. 3a, g), κ_y is dominant with a high E_y field, while the other components κ_x and κ_z are negligible. As w_swg enlarges, all the coupling coefficients decrease due to the exponentially decaying evanescent fields in the cladding, reducing the dielectric perturbation strength between the coupled waveguides.
On the other hand, in the coupled SWG waveguides (Fig. 3b, h), κ_x and κ_z show a non-conventional trend: their magnitudes increase with w_swg. The oscillating fields in the leaky SWG are responsible for this non-conventional dielectric perturbation, which allows the negative κ_x and κ_z to counteract the positive κ_y component, leading to complete cancellation of the total coupling coefficient, |κ| = 0, at a certain point (Fig. 3h). The corresponding L_c approaches infinity at this |κ| = 0 point, as seen in Fig. 3j. The results closely match the full numerical simulations in Fig. 3f. A small difference between Fig. 3f and Fig. 3j is noted, likely because the strong perturbation of the leaky-like SWG mode is not adequately accounted for in the coupled-mode analysis. It is important to note that the coupled-mode analysis is an approximation that assumes small perturbations, such as exponentially decaying evanescent coupling in the case of guided modes. Despite this deviation, the results in Fig. 3h provide valuable insight into the zero-crosstalk behavior of the leaky-like SWG mode. The shaded regions in Fig. 3h, j show the non-trivial coupling regimes where κ < 0, which correspond to the n_s < n_a region in Fig. 3d, f. Note that this exceptional coupling achieving zero crosstalk is due to the anisotropic dielectric perturbations of the leaky-like oscillations realized by SWGs, where Δε_x = Δε_y > Δε_z. With a conventional isotropic (Δε_x = Δε_y = Δε_z) leaky mode, such complete zero crosstalk is impossible to achieve, as |κ| is always greater than zero. In the case of a conventional well-confined guided mode, including an eskid waveguide for TE mode 19,20, the coupling coefficient components κ_x, κ_y, and κ_z typically decrease as the separation distance increases due to the exponentially decaying evanescent field.

(Fig. 3 caption: c, d numerically simulated effective indices of the coupled symmetric (n_s, red) and anti-symmetric (n_a, blue) TM0 modes for c strip and d SWG waveguides; e, f their corresponding normalized coupling lengths L_c/λ0; g, h normalized coupling coefficients κ_x (purple dashed), κ_y (blue dashed), κ_z (green dashed), and total |κ| = |κ_x + κ_y + κ_z| (red solid); i, j corresponding L_c/λ0 for the coupled strip and SWG waveguides, respectively. The gray-shaded areas represent the non-trivial coupling region, where d, f n_a > n_s and h, j κ < 0. The free-space wavelength is λ0 = 1550 nm, and the other parameters are h = 220 nm, w = 530 nm, and g = 65 nm.)

However, as shown in Fig. 3h, our leaky-like SWG mode exhibits an unconventional trend in which κ_x and κ_z increase even as the separation distance (here, w_swg) increases. The anisotropic perturbation realized by this oscillating leaky trend is the key to achieving zero crosstalk for the TM mode. This stands in contrast to the highly confined eskid approach for TE mode 19,20, as summarized in Table 1. These coupling singularities via anisotropic dielectric perturbations can also vary with different core widths w, as shown in Supplementary Information Fig. S5. Furthermore, this anisotropic perturbation approach can easily be extended to arrays of multiple waveguides (see Supplementary Information Fig. S6) and can be optimized to reduce the width of the SWGs, potentially pushing the limits of the separation distance between the two waveguides.
Experimental results
In order to verify our findings, we fabricated the coupled SWG waveguides and experimentally characterized their crosstalk. We fabricated the SWG devices on a 220 nm-thick SOI wafer using a standard electron-beam nanolithography process (see "Methods"). Figure 4a shows the scanning electron microscope (SEM) images of the fabricated devices with a schematic experimental setup for measuring the crosstalk. As the ideal EMT model and practical SWGs differ in effective indices, we used Floquet modal simulations to optimize structures with realistic parameters (see "Methods"). Figure 4b shows schematics of the simulation domains (top: perspective view; bottom: top view), and Fig. 4c shows the mode profiles (E_y) of the coupled TM0 symmetric (top) and anti-symmetric (bottom) modes. Figure 4d shows the simulated crosstalk spectra of the coupled SWG waveguides (solid lines) for different core widths w = 565 nm (red), 570 nm (blue), and 575 nm (green). For comparison, the crosstalk of the coupled strip waveguides without SWGs is also plotted (dashed lines). As expected from the previous modal simulations using an ideal EMT, complete zero-crosstalk dips are seen, resulting in infinitely long coupling lengths, as in Fig. 4e. The zero-crosstalk phenomenon depends strongly on the anisotropic properties of the SWGs, which can be manipulated by varying the filling fraction of the grating structures; this allows the zero-crosstalk wavelength to be engineered, as shown in Supplementary Information Fig. S8. For the experimental characterization, we sent light I_0 through one of the coupled waveguides and measured the output power ratio I_2/I_1, which defines the crosstalk. Grating couplers are used for interfacing the chip and fibers. Figure 4f, g shows the experimentally characterized crosstalk and L_c/λ0 corresponding to the results in Fig. 4d, e, respectively. The crosstalk of the coupled SWG waveguides is drastically suppressed, down to as low as −50 dB (Fig. 4f), approximately 40 dB lower than for coupled strip waveguides. In terms of the coupling length (Fig. 4g), the maximum L_c/λ0 of the SWG waveguides is ≈10^4 waves, about two orders of magnitude longer than the strip case (these data are for a range of w_swg = 400-750 nm; see Supplementary Fig. S7 for detailed data). Unlike the ideal SWG simulations, there is a practical limit in measuring the minimum crosstalk due to background noise in the chip, from either sidewall-roughness scattering or cross-coupling at the strip-to-SWG transition. Still, to our knowledge, the TM0 crosstalk suppression shown here is the lowest recorded, with a coupling length encompassing ≈10^4 waves. To explicitly show the effectiveness of our approach, we summarize key performance factors in Table 2 and compare them with other TM crosstalk suppression approaches 15,18,50-52. (Notes to Table 2: the operating wavelength of ref. 18 is 1310 nm, while all the others are at 1550 nm; the detailed loss value varies per design.)
It is worth noting that the zero crosstalk condition is sensitive to geometric parameters, and therefore the zero crosstalk bandwidth is limited. There is a trade-off between bandwidth and L c /λ 0 ; quantitatively, the bandwidth for L c /λ 0 > 1000 waves is ≈20.1 ± 3.0 nm (see Supplementary Information Fig. S9). This bandwidth can be broadened by tailoring the modal dispersions or by smoothly tapering the widths of SWGs or core, but this may come at the cost of reduced peak crosstalk suppression.
Discussion
In summary, we uncovered that anisotropic leaky-like oscillations can achieve complete zero crosstalk by engineering dielectric perturbations anisotropically to cancel out the couplings from each field component. We realized such anisotropic leaky-like oscillations using the perpendicularly arrayed SWGs and optimized via Floquet numerical simulations. We experimentally demonstrated the extreme suppression of TM crosstalk on an SOI platform, achieving ≈40 dB crosstalk suppression and two orders of magnitude longer coupling lengths than typical strip waveguides. Our work directly provides a practical and easily applicable waveguide platform for overcoming the integration density limit of TM mode and should be pivotal for advancing PIC technologies in applications like on-chip biochemical/gas sensing and polarization-encoded quantum/signal processing. Furthermore, our proposed method of using anisotropic SWGs to achieve zero crosstalk reveals a novel coupling mechanism with a leaky mode, easily extendable to other integrated photonics platforms and covering visible to mid-infrared and terahertz wavelengths beyond the telecommunication band.
Coupled-mode analysis
The coupling coefficient components κ_i (i = x, y, z) of the coupled strip and SWG waveguides were calculated using coupled-mode theory 48,49, in the standard overlap-integral form

$$\kappa_i = \frac{\omega \varepsilon_0}{4} \iint \Delta\varepsilon_i\, E_{1i}^{*}\, E_{2i}\; dx\, dy,$$

where E_1i and E_2i are the unperturbed normalized electric fields of the TM0 modes of the waveguides taken in isolation (without coupling), and Δε_i is the dielectric perturbation imposed by the presence of the individual waveguides on each other. The total coupling coefficient |κ| between the coupled waveguides was obtained by adding the individual components, and the corresponding coupling length is given by L_c = π/(2|κ|). The analysis was carried out at a free-space wavelength of λ0 = 1550 nm.
Numerical simulations
For the conceptual studies conducted in Figs. 2 and 3, we used the EMT to account for the anisotropic properties of SWGs and ran 2D modal simulations. However, since there is a mismatch between the EMT and real SWGs, we used the Floquet modal approach for designing real experimental devices. The method of the Floquet approach is described below.
Floquet modal simulations
We used a commercially available finite element method simulator (COMSOL Multiphysics) to model and simulate the practically implementable SWG waveguides. We simulated the eigenfrequencies of the structure with alternating Si layers perpendicular to the waveguide propagation direction (z-axis), applying Floquet boundary conditions. The structure is spatially repeated with period Λ = 100 nm by imposing the Floquet boundary conditions at each end of the simulation domain (see Fig. 4b). For setting the Floquet periodicity, we defined the wave vector as $k_z = \frac{2\pi}{\lambda} n_{\mathrm{eff}}$, where n_eff is the effective index of the TM0 mode at a particular wavelength λ. The simulations were carried out for different core widths, indicated by w = 565 nm (red), 570 nm (blue), and 575 nm (green) in Fig. 4d, e. The Floquet simulations can reasonably estimate the geometric parameters required to achieve complete zero crosstalk. These optimized parameters are fixed at height h = 220 nm, SWG width w_swg = 570 nm, and gap g = 65 nm, with a filling fraction of 0.45. The edges of the SWGs are also rounded, matching the fabricated devices shown in the SEM images.
Device fabrication
The photonic chips were fabricated on an SOI wafer with a 220 nm-thick Si layer and a 2 μm SiO2 substrate, using a JEOL JBX-6300FS electron-beam lithography (EBL) system. The operating conditions were 100 keV energy, 400 pA beam current, and 500 μm × 500 μm field exposure. A solvent rinse was done initially, followed by O2 plasma treatment for 5 min. Hydrogen silsesquioxane resist (HSQ, Dow-Corning XR-1541-006) was spin-coated at 4000 rpm and pre-exposure baked on a 90 °C hotplate for 5 min. The exposure dose used was 2800 μC/cm². During shot-shape writing, the machine-grid shape placement, the beam stepping grid, and the spacing between dwell points were 1 nm, 4 nm, and 4 nm, respectively. The resist was developed in 25% tetramethylammonium hydroxide (TMAH) heated to 80 °C for 30 s, then rinsed in flowing deionized water for 2 min and isopropanol for 10 s, and blow-dried with nitrogen. The die was placed in an O2 plasma asher at 100 W for 15 s with 10 sccm O2 flowing into the system. The unexposed top Si device layer was etched using a Trion Minilock III ICP-RIE etcher at 50 W RF power and 6.2 mTorr pressure, with Cl2 and O2 flowing into the chamber at 50 sccm and 1.4 sccm, respectively. An active cooling system kept the stage temperature stable at 10 °C during the entire etching process.
Crosstalk characterization
The crosstalk of the strip and SWG coupled waveguides was characterized by measuring their respective output power ratio. Light from a tunable laser source with optical power I_0 was coupled to the input port using grating couplers (see Fig. 4a). A Keysight 81608A tunable laser was used as the source, and an angle-polished (8°) fiber array was used to couple light into the grating coupler. A polarization controller ensured the input light polarization was TM. By simultaneously measuring the output powers I_1 and I_2 at the through and coupled ports, the crosstalk was calculated as the ratio I_2/I_1. A Keysight N7744A optical power meter was used to detect the output powers. The coupling length L_c was extracted from the standard two-waveguide transfer relation 48

$$\frac{I_2}{I_1} = \tan^2\!\left(\frac{\pi L}{2 L_c}\right),$$

where L = 30 μm is the length of the coupled waveguides. The measurements were taken for core widths w = 580 nm (red), 585 nm (blue), and 590 nm (green) (see Fig. 4f, g).
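Back-of-the-envelope extraction of L_c from a measured crosstalk, using the tan² transfer relation assumed in the reconstruction above; the −50 dB figure is from the text, everything else is illustrative.

```python
import numpy as np

L = 30e-6                      # coupled-waveguide length, m
lam0 = 1.55e-6                 # free-space wavelength, m
xt_db = -50.0                  # measured crosstalk I2/I1, dB
ratio = 10 ** (xt_db / 10.0)
# Invert I2/I1 = tan^2(pi L / (2 L_c)) for L_c:
L_c = np.pi * L / (2.0 * np.arctan(np.sqrt(ratio)))
print(f"L_c/lambda0 ~ {L_c / lam0:.0f} waves")   # ~1e4, consistent with Fig. 4g
```
 | 5,408.2 | 2023-06-02T00:00:00.000 | [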
"Physics"
] |
Local null controllability of a fluid-rigid body interaction problem with Navier slip boundary conditions
The aim of this work is to show the local null controllability of a fluid-solid interaction system by using a distributed control located in the fluid. The fluid is modeled by the incompressible Navier-Stokes system with Navier slip boundary conditions and the rigid body is governed by the Newton laws. Our main result yields that we can drive the velocities of the fluid and of the structure to 0 and we can control exactly the position of the rigid body, provided that its shape is not a disk. One important ingredient consists in a new Carleman estimate for a linear fluid-rigid body system with Navier boundary conditions.
Introduction
Let Ω be a bounded, nonempty open subset of R² with a regular boundary. We assume that Ω contains a rigid body and an incompressible viscous fluid. At each time t > 0, the domain of the rigid body is denoted by S(t) ⊂ Ω, which is assumed to be compact, with nonempty interior, and regular. The fluid domain is denoted by F(t) = Ω \ S(t) and is assumed to be connected.
We consider the following system describing the evolution of the fluid, which is governed by the incompressible Navier-Stokes system:
∂_t U + (U · ∇)U − ∇ · T(U, P) = v* 1_O in (0, T) × F(t), ∇ · U = 0 in (0, T) × F(t), (1.1)
where T(U, P) = −P I_2 + 2ν D(U) is the Cauchy stress tensor and D(U) = (∇U + (∇U)*)/2 is the symmetric gradient,
where ν is the viscosity of the fluid. For each time t, we denote the position of the structure by h(t) ∈ R^2 and by R_{θ(t)} the rotation matrix of angle θ(t) of the solid, defined by
R_{θ(t)} = ( cos θ(t)  −sin θ(t) ; sin θ(t)  cos θ(t) ).
Then, the flow of the structure is given by X_S(t, ·) : S → S(t), where
X_S(t, y) = h(t) + R_{θ(t)} y, t ∈ (0, T), y ∈ S, (1.2)
where S is a fixed, nonempty, compact subset of R^2 with regular boundary. We notice that X_S(t, ·) is invertible and a C^∞-diffeomorphism; we denote its inverse by Y_S(t, ·) : S(t) → S. Thus, the Eulerian velocity of the structure is given by
U_S(t, x) = ∂_t X_S(t, Y_S(t, x)) = h'(t) + R'_{θ(t)} R^{-1}_{θ(t)} (x − h(t)).
We denote by a^⊥ the vector (−a_2, a_1) for any a = (a_1, a_2) ∈ R^2. We notice that R'_{θ(t)} R^{-1}_{θ(t)} is a skew-symmetric matrix, so the Eulerian velocity of the structure writes
U_S(t, x) = h'(t) + ω(t) (x − h(t))^⊥,
where ω(t) = θ'(t) represents the angular velocity of the rigid body.
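The skew-symmetry claim and the resulting formula for the Eulerian velocity can be checked symbolically; the following SymPy sketch (ours, for illustration only) verifies both identities.

```python
import sympy as sp

t = sp.symbols('t')
theta = sp.Function('theta')(t)
h1, h2 = sp.Function('h1')(t), sp.Function('h2')(t)

R = sp.Matrix([[sp.cos(theta), -sp.sin(theta)],
               [sp.sin(theta),  sp.cos(theta)]])

# R'_theta R_theta^{-1} should equal omega * J, with J the 90-degree rotation generator
S = sp.simplify(R.diff(t) * R.inv())
J = sp.Matrix([[0, -1], [1, 0]])
assert sp.simplify(S - theta.diff(t) * J) == sp.zeros(2, 2)   # skew-symmetric, as claimed

# Eulerian velocity: differentiate X_S(t, y) = h(t) + R_theta y and substitute
# y = Y_S(t, x) = R_theta^{-1}(x - h(t)); result must be h'(t) + omega (x - h)^perp.
x1, x2 = sp.symbols('x1 x2')
h = sp.Matrix([h1, h2])
x = sp.Matrix([x1, x2])
y = R.inv() * (x - h)
U_S = sp.simplify(h.diff(t) + R.diff(t) * y)
perp = sp.Matrix([-(x2 - h2), x1 - h1])                       # (x - h)^perp
assert sp.simplify(U_S - (h.diff(t) + theta.diff(t) * perp)) == sp.zeros(2, 1)
print("R'R^{-1} is skew and U_S = h' + omega (x - h)^perp: verified")
```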
We denote by S_{h,θ} the set S_{h,θ} = h + R_θ S, and we define the corresponding fluid domain F_{h,θ} = Ω \ S_{h,θ} for any h ∈ R^2, θ ∈ R. Then, with these notations, we have S(t) = S_{h(t),θ(t)} and F(t) = F_{h(t),θ(t)}. We point out that the fluid domain depends on the displacement of the solid structure and, consequently, on time.
The motion of the structure is governed by the balance equations for linear and angular momenta:
m h''(t) = − ∫_{∂S(t)} T(U, P) n dΓ, t ∈ (0, T),
J ω'(t) = − ∫_{∂S(t)} (x − h(t))^⊥ · T(U, P) n dΓ, t ∈ (0, T). (1.3)
We complete (1.1) and (1.3) by the Navier slip boundary conditions. In order to write these boundary conditions, we need to introduce some notations. Let τ be a tangent vector to ∂F(t). We denote by a_n and a_τ the normal and tangential parts of a ∈ R^2: a_n = (a · n) n, a_τ = a − a_n.
Then, the boundary conditions write as follows:
U · n = 0, [2ν D(U) n + β_Ω U]_τ = 0 on (0, T) × ∂Ω,
(U − U_S) · n = 0, [2ν D(U) n + β_S (U − U_S)]_τ = 0 on (0, T) × ∂S(t), (1.4)
where β_Ω ≥ 0 and β_S ≥ 0 are the friction coefficients.
Let h^0, ℓ^0 ∈ R^2, θ^0, ω^0 ∈ R and u^0 ∈ [H^1(F_{h^0,θ^0})]^2. We furnish the initial conditions
U(0, ·) = u^0 in F_{h^0,θ^0}, h(0) = h^0, h'(0) = ℓ^0, θ(0) = θ^0, ω(0) = ω^0, (1.5)
such that the following compatibility conditions are satisfied:
∇ · u^0 = 0 in F_{h^0,θ^0}, u^0 · n = 0 on ∂Ω, (u^0 − u^0_S) · n = 0 on ∂S_{h^0,θ^0}, (1.6)
where u^0_S(x) = ℓ^0 + ω^0 (x − h^0)^⊥. Without loss of generality, we assume that the center of gravity of S is at the origin. Then, h(t) is the position of the center of mass of the rigid body S(t).
Our main objective in this paper is to look for a control v* acting on O such that, for any (h_T, θ_T) ∈ R^2 × R satisfying the geometric condition (1.7), we get h(T) = h_T, θ(T) = θ_T, and the velocities of the fluid and of the rigid body equal to 0 at time T. The main result of this paper is stated below: Theorem 1.1. Assume that β_S > 0 and let (h_T, θ_T) satisfy (1.7). Then, there exists ε > 0 such that, for any initial data (u^0, h^0, ℓ^0, ω^0, θ^0) satisfying (1.6) and of size at most ε, there exists a control v* ∈ L^2(0, T; [L^2(O)]^2) such that U(T, ·) = 0 in F_{h_T,θ_T}, h(T) = h_T, h'(T) = 0, ω(T) = 0, θ(T) = θ_T.
Without loss of generality, we can always assume that h T = 0, θ T = 0, and thus S h T ,θ T = S, F h T ,θ T = F.
In fact, in general we have X_S(t, y) = h(t) + R_{θ(t)−θ_T}(y − h_T), t ∈ (0, T), y ∈ S, and in this case, up to replacing h(t) by h(t) − h_T and θ(t) by θ(t) − θ_T, we are reduced to the case (1.2). Thus, by a translation of vector −h_T and a rotation of angle −θ_T, one can reduce the controllability problem to the case h_T = 0 and θ_T = 0. In what follows, the vectors n and τ stand respectively for the outer unit normal and the unit tangent vector to ∂F. Several works were devoted to the study of fluid-rigid body interaction systems, in particular when the fluid is governed by the Navier-Stokes system. Existence results concerning this kind of system with Dirichlet boundary conditions were considered in [9,12,13,20,24,28,29,30], among others. For the case of the Navier slip boundary conditions (1.4), the existence of weak solutions is proved in [18] and the existence of strong solutions is obtained in [31]. In [19,31], the authors proved that, under some assumptions on the solid geometry, collisions between the rigid body and the cavity boundary can occur in finite time.
Concerning the controllability, let us mention [16,25], where the authors obtained the local exact controllability of the 2D and 3D Navier-Stokes equations with Dirichlet boundary conditions using distributed controls. The local exact controllability of the Navier-Stokes system with nonlinear Navier boundary conditions and distributed controls was studied in [22]. Moreover, in [23], the authors established the local controllability with N − 1 scalar controls. With Navier slip conditions on the fluid equations, global null controllability is obtained for the weak solutions in [11], with controls located only on a small part of the domain boundary. Concerning controllability results for fluid-structure systems with Dirichlet boundary conditions in dimension 2, we mention the paper [7], where the authors proved the null controllability in velocity and the exact controllability for the position of the rigid body, assuming some geometric properties of the solid and provided that the initial conditions are small enough; more precisely, a smallness condition on the H^3 norm of the initial fluid velocity is needed. The authors used Kakutani's fixed point theorem to deduce the null controllability of the nonlinear system. We also have the paper [26], where the authors considered a structure given by a rigid ball; their result relies on semigroup theory. In the latter paper, only a smallness assumption on the H^1 norm of the initial fluid velocity is needed. In dimension 3, we mention [6], where the same result was proved without any assumption on the solid geometry, while a smallness condition on the H^2 norm of the initial fluid velocity is needed. We also mention [27], where the authors considered the interaction between a viscous incompressible fluid modeled by the Boussinesq system and a rigid body of arbitrary shape, and proved the null controllability of the associated system. For the stabilization of fluid-solid interaction systems, we refer to [2,3].
In this paper, we prove the local null controllability of the system (1.1), (1.3), (1.4), (1.5), that is, the case of the Navier slip boundary conditions in the presence of a rigid structure of arbitrary shape. We follow the same method as [26]: we use a change of variables to write our system in a fixed domain and a fixed point argument to reduce our problem to the null controllability of a linear fluid-rigid body system, namely a Stokes system coupled with ODEs for the structure velocities. To do this, we derive a Carleman estimate for the corresponding system.
One of the main difficulties in obtaining such an estimate is to manage the boundary conditions and, more precisely, to obtain estimates of the rigid velocity with the right weights. An important step in this computation is a Carleman estimate for the Laplacian equation with divergence-free condition and Navier slip boundary conditions, which is given in Section 4. We emphasize that this is the first result concerning the null controllability of a fluid-structure interaction system with boundary conditions different from the standard no-slip ones. Note that, with the Navier boundary conditions considered here, one of the additional difficulties with respect to the Dirichlet boundary conditions lies in the fact that, in the Carleman estimate, it is more complicated to estimate the structure velocities from the fluid velocity. There are several possible extensions of this work. First, let us recall that in [11], the authors obtain the global exact controllability of the Navier-Stokes system with Navier boundary conditions. One of their ingredients is the local exact null controllability of [22]. Here, one can also consider the global exact controllability, but the arguments of [11] may be difficult to adapt due to the presence of the structure. Second, one can also consider a heat conducting fluid and replace the Navier-Stokes system by the Boussinesq system. This has been done, for instance, in [27] with a rigid body and Dirichlet boundary conditions. Our method should adapt to this case and yield a similar result. Finally, one can try to reduce the number of controls, as is done in [23] for the Navier-Stokes system with Navier boundary conditions. However, let us note that, due to the presence of the structure velocities in the boundary conditions, some parts of the proof in [23] might be difficult to adapt, mainly the manipulation of the curl of the fluid velocity on the boundary.
The outline of this paper is as follows: in Section 2, we give some preliminaries. We emphasize that one of the main difficulties in this problem is that we are dealing with a coupled system set on a non-cylindrical domain. Then, in Section 3, we remap the problem into an equivalent system posed in a fixed geometry. In Section 4, we prove a Carleman estimate for the Laplacian problem with Navier slip boundary conditions. In Section 5, we establish a new Carleman inequality for the linearized system. In Section 6, we prove the null controllability of the linearized system. Finally, in Section 7, we prove Theorem 1.1 and deduce the null controllability of the system by applying a fixed-point argument.
Preliminaries
In this section, we prove some regularity results for an associated linearized problem. We consider the linear system (2.1), where w_S(y) = ℓ_w + k_w y^⊥, completed with the boundary conditions (2.2) and the initial conditions (2.3). We have the following regularity result for the system (2.1), (2.2) and (2.3), which is proved in [31].
Theorem 2.1. There exists a unique solution to problem (2.1), (2.2) and (2.3) with the natural regularity; moreover, it satisfies the corresponding energy estimate. Proof. The proof of the above theorem is based on semigroup theory. For the sake of completeness, we just recall the main ideas. We note that w^0 and w are extended by ℓ^0_w + k^0_w y^⊥ and ℓ_w + k_w y^⊥ on S, respectively. Let us define the appropriate Hilbert spaces; we notice that the condition D(w) = 0 on S is equivalent to w = w_S ∈ R on S, where R denotes the set of rigid velocity fields. For w, v ∈ H, we define an inner product on H, and we also introduce the orthogonal projector P : [L^2(Ω)]^2 → H.
The system (2.1), (2.2) and (2.3) can then be recast as an abstract evolution equation for an operator A. In Lemma 3.1 of [31], it is proved that the operator A is self-adjoint and generates a semigroup of contractions on H. Thus, we deduce Theorem 2.1 (see [31], Prop. 3.3).
We note here that, since A is a self-adjoint operator, a corresponding energy identity holds for any w ∈ D(A). We also need some further regularity results on the linear system (2.1), (2.2), (2.3).
Change of variables
To treat the free boundary problem (1.1), (1.3), (1.4), (1.5), we consider an equivalent system written in a fixed domain using a change of variables that was already introduced in [29]. In fact, we construct an extension of the structure flow (1.2) over Ω by a regular and incompressible flow. First, we need to control the distance between the structure and the boundary ∂(Ω\O).
The condition (1.7) implies that there exists d > 0 such that the solid remains at distance at least d from the boundary ∂(Ω \ O); in other words, we only assume that no collision occurs between the structure and the boundary ∂(Ω \ O) at time T. In fact, if the initial data are small enough, then the displacement of the structure remains small, so that (3.2) is satisfied and no contact can occur between the solid and the boundary for any t ∈ [0, T]. Following [29], we can construct a change of variables X and Y with the following properties:
- for any t ∈ [0, T], X and Y are C^∞-diffeomorphisms from Ω into itself;
- the function X is invertible, with inverse Y;
- in a neighborhood of S = S(T), X(t, y) = X_S(t, y) = h(t) + R_{θ(t)} y;
- in a neighborhood of ∂Ω and of O, X(t, y) = y;
- det ∇X(t, y) = 1 for all y ∈ Ω;
- in a neighborhood of S, ∇X(t, y) = R_{θ(t)} and ∇Y(t, X(t, y)) = R^{-1}_{θ(t)}.
Moreover, X and Y satisfy estimates whose constant C depends on T. Now, we set u(t, y) = Cof(∇X(t, y))* U(t, X(t, y)) and π(t, y) = P(t, X(t, y)).
Then, the transformed unknowns (u, π) satisfy a system of the form (3.4)-(3.6), posed on the fixed fluid domain F, where n and τ respectively stand for the normal and tangential vectors on ∂F. Finally, we set the initial conditions u(0, y) = u^0(y) for y ∈ F.
Carleman estimate for the Laplacian problem with Navier slip boundary conditions
We first prove a Carleman inequality for the Laplacian problem with non-homogeneous Navier boundary conditions. From Lemma 1.1 of [8], we can construct a function η ∈ C^2(F̄) such that
η > 0 in F, η = 0 on ∂F, |∇η| > 0 in F \ O_η, (4.1)
for a nonempty open set O_η ⊂⊂ O. Let λ > 0 and set α = e^{λη}. We have the following proposition.
satisfies the inequality for any s ≥ s_1 and λ ≥ λ_1; see below for a complete proof.
Proof. The proof is inspired by [22]; in our case, we need to take into account the non-homogeneous Navier slip boundary conditions, and thus one needs to manipulate carefully the surface integrals that appear.
We first record several intermediate estimates. Using the inequality in Theorem II.4.1 of [17] with r = 2, q = 2, we obtain a bound for I_2; applying the same arguments, we get the analogous bound for I_3. Combining all these inequalities yields the announced estimate. Since ψ is divergence free, we have used a vector-calculus identity valid for any scalar function a, together with the Green formula
∫_F Δψ · φ dy = −∫_F ∇ψ : ∇φ dy + ∫_{∂F} (∇ψ n) · φ dΓ.
Thus, the last term in the right-hand side of the inequality (4.32) can be treated. To absorb the second term of the right-hand side, we proceed as in [15, inequality (1.62)], which shows that the integral of e^{2sα} α^2 |∇ψ|^2 over O_η can be estimated by that of e^{2sα} α^4 |ψ|^2 over a larger set O. Indeed, we define θ ∈ C^2_0(O) such that θ ≡ 1 in O_η and 0 ≤ θ ≤ 1, and we obtain the conclusion.
Carleman estimate for the linearized system
We consider the adjoint system associated with the linearized problem. Let η ∈ C^2(F̄) verify (4.1) with O_η ⊂⊂ O a nonempty open set. Let λ > 0, and define the Carleman weights accordingly, with N > 0 an integer to be fixed later on.
Using Theorem 2.1, we obtain the required regularity. Step 2: In this part, we are going to obtain a Carleman estimate for the system (5.8) by following the proof in [6]; however, we need to deal with the Navier boundary conditions (5.9). We apply the curl operator to the first equation of (5.8) in order to eliminate the pressure, and we obtain a one-dimensional heat equation for ∇ × ϕ. We recall the definition of the weights and apply Proposition A.1 with ψ replaced by ∇ × ϕ. Arguing as in pages 7-8 of [6], we treat the local terms appearing in the right-hand side of (5.13) and obtain, for λ ≥ C and s ≥ C(T^N + T^{2N}), the estimate (5.14). We notice that ϕ satisfies the problem (5.15), where we have used that a = ϕ_S 1_{∂S} and b = β_S (ϕ_S)_τ 1_{∂S}. We replace s in (5.16) by s e^{2Nλ ||η||_{L^∞(Ω)}} and integrate over (0, T). Applying the estimates obtained in Theorem 2.2 of [1], we get (5.18); we then multiply (5.18) by s^3 λ^4 e^{−2s β̂} (ξ*)^3 to obtain (5.19). Adding (5.19), (5.17) and (5.14), we deduce (5.20). Taking (s, λ) large enough, the fifth term in the right-hand side of (5.20) can be absorbed by the left-hand side. Indeed, since ϕ_S is rigid, from Lemma 2.2 of [27] we have
∫_F |ϕ(t, ·)|^2 dy ≤ C ||ϕ_S(t) · n||^2_{H^{3/2}(∂S)},
for any shape of the body S. Moreover, we have the relation
∇ × ϕ = (∇ϕ n) · τ − ((∇ϕ)* n) · τ on ∂F, (5.22)
where τ = (−n_2, n_1). On the other hand, using the boundary conditions (5.9), we can write β_S (ϕ_S)_τ = ν(∇ϕ n + (∇ϕ)* n)_τ + β_S ϕ_τ on ∂S.
Step 3: Now, it remains to treat the two remaining boundary terms. Using (5.22), (5.23), and the identity established above, we obtain a first estimate, which implies a second one. The second and third terms in the right-hand side of the resulting inequality can be absorbed, using (5.27), by the left-hand side of the inequality (5.32). To absorb the first term in the right-hand side of the inequality (5.36), we use the elliptic estimate for the system (5.15) and obtain (5.37). The terms in the right-hand side of (5.37) can be absorbed by the left-hand side of (5.32); moreover, the last term in the right-hand side of (5.36) can be manipulated as in (5.27) and thus can also be absorbed by the left-hand side of (5.32). To estimate the second term in (5.33), observe that, from (5.34) and (5.35), we may take ζ_2(t) = s^{−1/2} λ^{−1/2} e^{−s β̂(t)} (ξ*)^{−1/2}(t) and consider the corresponding system with the associated boundary condition. We notice that in this system all final conditions are equal to zero, so all the compatibility conditions mentioned in Proposition 2.2 are satisfied. To absorb the last two terms in the right-hand side of (5.38), we use L^2 regularity results for the system satisfied by ζ_2 ϕ. In fact, we note that |ζ_2'| ≤ C s^{1/2} λ^{−1/2} (ξ*)^{1/2+1/N} e^{−s β̂} and |ζ_2' ρ| ≤ C s^{1/2} λ^{−1/2} (ξ*)^{1/2+1/N} e^{−s β̂} ρ.
Using the trace theorem, we obtain (5.44). Let us estimate the terms in the right-hand side of (5.44). We first get (5.45); the terms appearing in its right-hand side can be absorbed by the left-hand side of the Carleman inequality (5.32). On the other hand, by an interpolation argument, we get (5.46). We rewrite the right-hand side of the inequality (5.46) and, applying again Young's inequality, we get for N ≥ 4 the estimate (5.48). The first term in the right-hand side of (5.48) can be absorbed by the left-hand side of the Carleman inequality (5.32), while the second term is absorbed by the left-hand side of (5.44).
Using again an interpolation argument, we obtain similarly (5.49). The left-hand side of (5.49) can be rewritten as (5.50). Then, for N ≥ 4, we get (5.51). The first term in the right-hand side of (5.51) can be absorbed by the left-hand side of the Carleman inequality (5.32), while the second term is absorbed by the left-hand side of (5.44).
On the other hand, the remaining term can be rewritten as (5.53). Then, for N ≥ 4, we get (5.54). The first term in the right-hand side of (5.54) can be absorbed by the left-hand side of the Carleman inequality (5.32), while the second term is absorbed by the left-hand side of (5.44). On the other hand, we notice that |ζ_2' ρ| ≤ C s^{1/2} λ^{−1/2} e^{−s β̂} (ξ*)^{1/2+1/N} ρ, and we get the analogue of (5.38). Using the decomposition (5.7) and the regularity estimate (5.11), we deduce (5.55) from the above inequality. The first and second terms in the right-hand side of (5.55) can be absorbed by the left-hand side of the Carleman inequality (5.32). Since ρ' = −(3/2) s (β̂)' ρ, we obtain the corresponding bound, where we have used the previous estimates. Using interpolation arguments and the Young inequality, we find, as for ζ_2 ϕ, the analogous estimates. Combining (5.32), (5.59) and (5.7), we finally get (5.6) for N ≥ 4, λ ≥ C and s ≥ C(T^N + T^{2N}).
Therefore, the couple (v, q) satisfies the corresponding Stokes system. Moreover, from (6.19), we have v = 0 in (0, T − ε) × O. Then, using the unique continuation property of the Stokes system (see for instance [14]), we get v = 0 in (0, T − ε) × F. The boundary conditions then lead, since β_S > 0, to ℓ_v = 0 and k_v = 0 in (0, T − ε). Then, we obtain in particular that γ_2 = (ℓ, k) = 0 from the equations of the structure motion, which contradicts (6.18).
Fixed point
In this section, we prove Theorem 1.1 by applying a fixed-point argument. For this purpose, we follow the same steps as [26]. First, we give some estimates on the terms appearing in the system (3.4), (3.5) and (3.6). We have the following lemma that is proved in [29].
Lemma 7.1. Let X and Y satisfy the properties given in Section 3. Then, for all (u, π) ∈ [H^2(F)]^2 × H^1(F) and all t ∈ [0, T], the corresponding estimates hold. We also have: Lemma 7.2. Let X and Y satisfy the properties given in Section 3. Then, for all (u, π) ∈ [H^2(F)]^2 × H^1(F) and all t ∈ [0, T], the corresponding difference estimates hold. Now, we are in a position to prove Theorem 1.1.
Proof of Theorem 1.1. For all r > 0, let us set K_r to be the closed ball of radius r in the appropriate space of source terms. Let F ∈ K_r, and assume that the smallness condition (7.2) holds. From Proposition 6.1, the solution (u, π, h, θ) of the linear system (6.1), (6.2), (6.4) with v* = E_T(Z^0, a^0, F) satisfies h(T) = 0, θ(T) = 0 and the associated estimates. Using the condition (7.2), we can construct the change of variables defined in Section 3. We can thus define the mapping Φ : K_r → K_r which, to F ∈ K_r, associates the nonlinear terms evaluated at (u, π, h, θ), where (u, π, h, θ) is the solution of the linear system (6.1), (6.2) and (6.3). Combining Lemma 7.1 with the estimate (7.2), for r small enough we get Φ(K_r) ⊂ K_r. Similarly, using Lemma 7.2, we get that Φ admits a fixed point in K_r, which concludes the proof of Theorem 1.1.
"Mathematics"
] |
Holographic constraints on Bjorken hydrodynamics at finite coupling
In large-$N_c$ conformal field theories with classical holographic duals, inverse coupling constant corrections are obtained by considering higher-derivative terms in the corresponding gravity theory. In this work, we use type IIB supergravity and bottom-up Gauss-Bonnet gravity to study the dynamics of boost-invariant Bjorken hydrodynamics at finite coupling. We analyze the time-dependent decay properties of non-local observables (scalar two-point functions and Wilson loops) probing the different models of Bjorken flow and show that they can be expressed generically in terms of a few field theory parameters. In addition, our computations provide an analytically quantifiable probe of the coupling-dependent validity of hydrodynamics at early times in a simple model of heavy-ion collisions, which is an observable closely analogous to the hydrodynamization time of a quark-gluon plasma. We find that to third order in the hydrodynamic expansion, the convergence of hydrodynamics is improved and that generically, as expected from field theory considerations and recent holographic results, the applicability of hydrodynamics is delayed as the field theory coupling decreases.
Introduction
Hydrodynamics is an effective theory [1-15] of collective long-range excitations in liquids, gases and plasmas. Its applicability across energy scales has made it a popular and fruitful field of research for over a century. A particularly powerful aspect of hydrodynamics is the fact that it provides a good effective description over a vast range of coupling constant strengths of the underlying microscopic constituents. This is true so long as the mean-free-time between microscopic collisions t_mft is smaller than the typical time scale (of observations) over which hydrodynamics is applicable, t_mft ≪ t_hyd. At weak coupling, the underlying microscopic dynamics can be described in terms of kinetic theory [16-24], which relies on the concept of quasiparticles. On the other hand, at very strong coupling, the applicability of hydrodynamics to the infrared (IR) dynamics of various systems without quasiparticles has been firmly established much more recently through the advent of gauge-gravity duality (holography) [25-28]. In infinitely strongly coupled CFTs with a simple holographic dual, the mean-free-time is set by the Hawking temperature of the dual black hole, t_mft ∼ ℏ/k_B T. In a CFT in which temperature is the only energy scale, this implies that hydrodynamics universally applies to the IR regime of strongly coupled systems for ω/T ≪ 1, where the frequency scales as ω ∼ 1/t_hyd (and similarly for momenta, q/T ≪ 1). A natural question that then emerges is as follows: how does the range of applicability of hydrodynamics depend on the coupling strength of the underlying microscopic quantum field theory? Qualitatively, using simple perturbative kinetic theory arguments (see e.g. a recent work by Romatschke [29] or Ref. [30]), one expects the reliability of hydrodynamics to decrease (at some fixed ω/T and q/T) with decreasing coupling constant λ. The reason is that, typically, the mean-free-time increases with decreasing λ. From the strongly coupled, non-perturbative side, the same picture recently emerged in holographic studies of (inverse) coupling constant corrections to infinitely strongly coupled systems in [31-34], which we will further investigate in this work.
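To put a scale on this statement, the following sketch evaluates the strong-coupling estimate t_mft ∼ ℏ/(k_B T) for temperatures typical of a quark-gluon plasma; the O(1) prefactor is model-dependent and set to one here as an assumption.

```python
# Order-of-magnitude estimate of the mean-free-time t_mft ~ hbar/(k_B T)
# at strong coupling, in natural heavy-ion units.
HBAR_C_MEV_FM = 197.327            # hbar*c in MeV*fm

def t_mft_fm_over_c(T_mev):
    """t_mft ~ hbar/(k_B T), in fm/c, for a temperature T given in MeV."""
    return HBAR_C_MEV_FM / T_mev

for T in (200.0, 300.0, 500.0):
    print(f"T = {T:.0f} MeV -> t_mft ~ {t_mft_fm_over_c(T):.2f} fm/c")
```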
In holography, in the limit of an infinite number of colors N_c of the dual gauge theory, inverse 't Hooft coupling constant corrections correspond to higher-derivative gravity (α′) corrections to the classical bulk supergravity. In maximally supersymmetric N = 4 Yang-Mills (SYM) theory, dual to the IR limit of ten-dimensional type IIB string theory, the leading-order corrections to the gravitational sector (including the five-form flux and the dilaton) are given by the action of Eq. (1.1) [37-41] compactified on S^5, where γ = α′^3 ζ(3)/8, κ_10 ∼ 1/N_c, and the term W is proportional to fourth-power (eight derivatives of the metric) contractions of the Weyl tensor, Eq. (1.2). The 't Hooft coupling λ of the dual N = 4 CFT is related to γ by γ = λ^{−3/2} ζ(3) L^6/8, where L is the anti-de Sitter (AdS) length scale. For this reason, perturbative corrections in γ ∼ α′^3 are dual to perturbative corrections in 1/λ^{3/2}. Another family of theories, which have proven to be a useful laboratory for studies of coupling constant dependence in holography, are curvature-squared theories [31-34, 42, 43], with the action given by Eq. (1.3). Although the dual(s) of (1.3) are generically unknown, one can treat curvature-squared theories as invaluable bottom-up constructions for investigations of coupling constant corrections to dual observables of hypothetical CFTs. From this point of view, it is natural to interpret the α_n coefficients as proportional to α′. Since the action (1.3) results in higher-derivative equations of motion, the α_n need to be treated perturbatively, i.e. on the same footing as the γ ∼ α′^3 corrections in N = 4 SYM. The latter restriction can be lifted if one instead considers a curvature-squared action with the α_n coefficients chosen such that α_1 = −4α_2 = α_3. The resulting theory, known as the Gauss-Bonnet theory, Eq. (1.4), has second-derivative equations of motion, therefore enabling one to treat the Gauss-Bonnet coupling, λ_GB ∈ (−∞, 1/4], at least formally, non-perturbatively. Even though this theory is known to suffer from various UV causality problems and instabilities [47-64], one may still treat Eq. (1.4) as an effective theory which can, for sufficiently low energy and momentum, provide a well-behaved window into non-perturbative coupling constant corrections to the low-energy part of the spectrum. This point of view was advocated and investigated in [31, 34, 42, 43], where it was found that a variety of weakly coupled properties of field theories, including the emergence of quasiparticles, were successfully recovered not only from the type IIB supergravity action (1.1) but also from the Gauss-Bonnet theory (1.4). An important fact to note is that these weakly coupled predictions follow from the theory with a negative λ_GB coupling (increasing |λ_GB|).
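As a concrete illustration of the γ ↔ λ dictionary (with L = 1, so γ = ζ(3) λ^{−3/2}/8), the small sketch below converts between the two couplings; it reproduces the pairing γ = 6.67 × 10^{−3} ↔ λ ≈ 7.98 quoted later in this paper.

```python
from scipy.special import zeta

def gamma_from_lambda(lam, L=1.0):
    """gamma = zeta(3) * alpha'^3 / 8, with alpha'^3 / L^6 = lambda^{-3/2}."""
    return zeta(3) * L**6 * lam**-1.5 / 8.0

def lambda_from_gamma(g, L=1.0):
    """Invert the relation above for the 't Hooft coupling lambda."""
    return (zeta(3) * L**6 / (8.0 * g)) ** (2.0 / 3.0)

print(gamma_from_lambda(7.98))      # ~6.67e-3
print(lambda_from_gamma(6.67e-3))   # ~7.98
```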
We can now return to the question of how coupling dependence influences the validity of hydrodynamics as a description of IR dynamics by using the above two classes of top-down and bottom-up higher-derivative theories. The first concrete holographic demonstration of the failure of hydrodynamics at reduced (intermediate) coupling was presented in [31]. The same qualitative behaviour was observed in both N = 4 and (non-perturbative) Gauss-Bonnet theory. Namely, as one increases the size of the higher-derivative gravitational couplings (decreases the coupling in a dual CFT), there is an inflow of new (quasinormal) modes along the negative imaginary ω axis from −i∞. Note that at infinite 't Hooft coupling λ, these modes are not present in the quasinormal spectrum. However, as λ decreases, the leading new mode on the imaginary ω axis monotonically approaches the regime of small ω/T. In the shear channel, which contains the diffusive hydrodynamic mode, the new mode collides with the hydrodynamic mode, after which point both modes acquire real parts in their dispersion relations. Before the modes collide, to leading order in q, the diffusive and the new mode have the dispersion relations given in [31, 34], where the imaginary gap ω_g, the shear viscosity η, energy density ε, and pressure P depend on the details of the theory [31, 34]. Note also that both the IIB coupling γ and the Gauss-Bonnet coupling −λ_GB have to be taken sufficiently large in order for this effect to be well described by the small-q expansion (see Ref. [34]). In the sound channel, the analogous dispersion relations involve the conformal speed of sound c_s = 1/√3 and Γ = 2η/(3(ε + P)). In both channels, it is clear that the IR is no longer described by hydrodynamics. To quantify this, it is natural to define a critical coupling-dependent momentum q_c(λ) at which Im|ω_1(q_c)| = Im|ω_2(q_c)| in the shear channel, and Im|ω_{1,2}(q_c)| = Im|ω_3(q_c)| in the sound channel. With this definition, hydrodynamic modes dominate the IR spectrum for frequencies ω(q) so long as q < q_c(λ). To leading order in the hydrodynamic approximation, in N = 4 theory, q_c scales as q_c ∼ 0.04 T/γ ∼ 0.28 λ^{3/2} T, while in the Gauss-Bonnet theory, q_c ∼ −3.14 T/λ_GB. Even though these scalings are approximate, they nevertheless reveal what one expects from kinetic theory: the applicability of hydrodynamics is limited at weaker coupling by a coupling-dependent scaling, whereas at strong coupling, hydrodynamics is only limited to the region of small q/T, independent of λ ≫ 1. Understanding of hydrodynamics has been important not only for the description of everyday fluids and gases, but also for a nuclear state of matter known as the quark-gluon plasma that is formed after collisions of heavy ions at RHIC and the LHC. Hydrodynamics becomes a good description of the plasma after a remarkably short hydrodynamization time t_hyd ∼ 1 − 2 fm/c measured from the moment of the collision [66-71]. In holography, heavy-ion collisions have been successfully modelled by collisions of gravitational shock waves [72-79], including the correct order of magnitude result for the hydrodynamization time (at infinite coupling). Coupling constant corrections to holographic heavy-ion collisions were studied in perturbative curvature-squared theories (Gauss-Bonnet) in [32], which found that, for narrow and wide gravitational shocks respectively, the hydrodynamization time is given by Eq. (1.9), where T_hyd is the temperature of the plasma at the time of hydrodynamization. For λ_GB = −0.2, which corresponds
to an 80% increase in the ratio of shear viscosity to entropy density, we thus find a 25% and a 290% increase in the hydrodynamization time [32]. Thus, t_hyd was found to increase for negative values of λ_GB, which is consistent with expectations for the behavior of hydrodynamization at decreased field theory coupling. Consistent with these findings, the investigations of [33, 80] further revealed that for negative λ_GB, the isotropization time of a plasma also increases, again reproducing the expected trend in transitioning from infinite to intermediate coupling.
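As a sanity check of the quoted 80% figure, the snippet below uses the standard Gauss-Bonnet result η/s = (1 − 4λ_GB)/(4π) (an input we supply, not derived here) to evaluate the increase at λ_GB = −0.2.

```python
import math

def eta_over_s_gb(lam_gb):
    """Shear viscosity to entropy density ratio in Gauss-Bonnet gravity."""
    return (1.0 - 4.0 * lam_gb) / (4.0 * math.pi)

lam_gb = -0.2
increase = eta_over_s_gb(lam_gb) / eta_over_s_gb(0.0) - 1.0
print(f"eta/s at lambda_GB = {lam_gb}: {eta_over_s_gb(lam_gb):.4f} "
      f"({100 * increase:.0f}% above 1/4pi)")   # -> 80%
```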
In this paper, we continue the investigation of coupling-constant-dependent physics by studying the simplest hydrodynamic model of heavy-ion collisions, the boost-invariant Bjorken flow [81], in higher-derivative bulk theories of gravity. The Bjorken flow has been widely used to study the evolution of a plasma (in the mid-rapidity regime) after the collision. While the velocity profile of the solution is completely fixed by symmetries, the relativistic Navier-Stokes equations need to be used to find the energy density, which is expressed as a series in inverse powers of the proper time τ. The details of the solution will be described in Section 2.
In N = 4 SYM at infinite coupling, the energy density of the Bjorken flow to third order in the hydrodynamic expansion (ideal hydrodynamics and three orders of gradient corrections) takes the form of Eq. (1.10) [82-88], where w is a dimensionful constant. Physically, the energy density of the Bjorken flow must be a positive and monotonically decreasing function of the proper time τ, capturing the late-time expansion and cooling of the fluid. For a conformal, boost-invariant system, the energy density (1.10) uniquely determines all the components of the stress-energy tensor. Energy conditions then imply that the solution becomes unphysical at sufficiently early times, when (1.10) is negative. For instance, by considering the first two terms in (1.10), it is clear that the solution becomes problematic at times τ < τ^{1st}_hyd, with τ^{1st}_hyd given by Eq. (1.11). Physically, the reason is that for τ < τ_hyd, the first viscous correction becomes large and the hydrodynamic expansion breaks down, making the Bjorken flow unphysical. Ref. [90] further analyzed the evolution of non-local observables in a boost-invariant Bjorken plasma, finding stronger constraints on the value of the initial τ for the Bjorken solution. For instance, equal-time two-point functions and space-like Wilson loops are expected to relax at late times as exponentials controlled by functions f and g such that f(τ w^{3/2}) → 0 and g(τ w^{3/2}) → 0 as τ → ∞. In the hydrodynamic regime, both f and g must be positive and monotonically decreasing functions of τ, implying that, as the plasma cools down, these non-local observables relax smoothly from above to the corresponding vacuum values. Such exponential decays have indeed been observed in the full numerical evolution of shock wave collisions [91, 92]. The interesting point here is that, if we were to truncate the hydrodynamic expansion to include only the first few viscous corrections, then f and g may become negative or non-monotonic at some τ_crit > τ_hyd, imposing further constraints on the regime of validity of hydrodynamics. In [90], it was found that a much stronger constraint (approximately 15 times stronger than (1.11)) for first-order hydrodynamics comes from the longitudinal two-point function, while for Wilson loops the constraint was weaker. In addition, Ref. [90] also studied the evolution of entanglement (or von Neumann) entropy in a Bjorken flow, but found that the bound obtained in that case was equal to τ^{1st}_hyd given by Eq. (1.11), i.e. weaker than the two constraints above. The reason for this equality is that, in the late-time and slowly varying limit considered for the computation, the entanglement entropy satisfies the so-called first law of entanglement, ΔS_A = V_A Δε/T_A, where V_A is the volume of the subsystem and T_A is a constant that depends on its shape. Such a law holds for arbitrary time-dependent excited states provided the evolution of the system is adiabatic with respect to a reference state [93].
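To illustrate the positivity criterion numerically, the following sketch locates the largest proper time at which a truncated gradient expansion of ε(τ) changes sign; below that time the truncation becomes unphysical. The coefficients c are hypothetical placeholders standing in for the actual combinations of transport coefficients in Eq. (1.10), so the printed numbers only illustrate the procedure.

```python
import numpy as np
from scipy.optimize import brentq

# Truncated Bjorken energy density in units where w = 1:
# eps_n(tau) = tau^(-4/3) * (c0 + c1 tau^(-2/3) + c2 tau^(-4/3) + ...).
c = [1.0, -0.7, 0.1, -0.02]     # ideal, 1st, 2nd, 3rd order (hypothetical values)

def eps(tau, order):
    """Energy density truncated at the given hydrodynamic order."""
    u = tau ** (-2.0 / 3.0)
    return tau ** (-4.0 / 3.0) * sum(ck * u**k for k, ck in enumerate(c[:order + 1]))

def tau_crit(order, grid=np.logspace(-3, 3, 4001)):
    """Largest tau at which the truncated eps changes sign (0.0 if none)."""
    vals = eps(grid, order)
    flips = np.nonzero(np.sign(vals[:-1]) != np.sign(vals[1:]))[0]
    if len(flips) == 0:
        return 0.0                               # positive everywhere on the grid
    i = flips[-1]                                # rightmost sign change
    return brentq(lambda t: eps(t, order), grid[i], grid[i + 1])

for order in (1, 2, 3):
    print(f"order {order}: eps(tau) > 0 for tau > {tau_crit(order):.3f} (w = 1 units)")
```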
In this paper, we ask how higher-order hydrodynamic and coupling constant corrections affect the critical time τ_crit after which the Bjorken flow yields physically sensible observables. In particular, we extend the analysis of [90], focusing on equal-time two-point functions and expectation values of Wilson loops. From the point of view of our discussion regarding viscous corrections and their role in keeping ε(τ) positive, it seems clear that at decreased coupling, when the viscosity η becomes larger, the Bjorken solution should become applicable only at larger τ. Our calculations provide further details regarding the applicability of hydrodynamics. As a result, we will be computing an observable that is related to a coupling-dependent hydrodynamization time [32], but is analytically tractable and therefore significantly simpler to analyze, albeit limited in realistic applications to the regime of validity of the Bjorken flow model. In this way, we obtain new holographic coupling-dependent estimates for the validity of hydrodynamics, analogous to the statement of Eq. (1.9), which allow us to compare top-down and bottom-up higher-derivative corrections.
We will consider both the effects of higher-order (up to third order [94]) hydrodynamics and of coupling constant corrections. Up to third order in the gradient expansion, we find no surprises, as the Bjorken flow observables become well defined in higher-order hydrodynamics at earlier times. In other words, no effects of asymptotic expansion divergences [95] are found up to third order. As for the coupling dependence, we find that the most stringent constraints arise from the calculation of a longitudinal equal-time two-point function, i.e. one with spatial insertions along the boost-invariant flow direction. For the two higher-derivative theories, to first order in the coupling and to second order in the hydrodynamic expansion, we obtain an explicit expression for the critical initial proper time τ_crit. At γ = 6.67 × 10^{−3} (λ = 7.98, having set L = 1) and at λ_GB = −0.2 (each increasing η/s by 80%), we find that τ^{2nd}_crit w^{3/2} increases by 92.3% and by 150% in N = 4 and in a linearized dual of Gauss-Bonnet theory, respectively (see Tables 1 and 2 for other numerical estimates). In a fully non-perturbative Gauss-Bonnet calculation, the increase is instead found to be 145%, which shows a rather quick convergence of the perturbative Gauss-Bonnet series for this observable to the full result at λ_GB = −0.2 (see also [32]). Thus, our results lie inside the interval of increased hydrodynamization times found for narrow and wide shocks in non-linear shock wave simulations [32].
The paper is structured as follows: In Section 2, we discuss higher-order hydrodynamics and the details of the hydrodynamic Bjorken flow solution, including all the holographic transport coefficients that enter the solution. In Section 3, we discuss the construction of holographic geometries dual to Bjorken flow. We focus in particular on the case of the Gauss-Bonnet theory which, to our understanding, has not been considered in previous literature. In Section 4, we analyze the relaxation properties of two-point functions and Wilson loops, extracting the relevant critical times at which the hydrodynamic approximation breaks down. Finally, Section 5 is devoted to the discussion of our results.
Hydrodynamics and Bjorken flow
We begin by expressing the equations that describe the boost-invariant evolution of the charge-neutral, conformal relativistic fluids studied in this work. In the absence of any external sources, the equations of motion (relativistic Navier-Stokes equations) follow from the conservation of the stress-energy tensor, ∇_a T^{ab} = 0 (2.1). The constitutive relations for the stress-energy tensor of a neutral, conformal (Weyl-covariant) relativistic fluid can be written as in Eq. (2.2) (see e.g. [97]), where we have chosen to work in the Landau frame. The transverse projector Δ^{ab} is defined as Δ^{ab} ≡ g^{ab} + u^a u^b, with u^a being the velocity field of the fluid flow. In four spacetime dimensions, the pressure P and energy density ε are related by the conformal relation P = ε/3. The transverse, symmetric and traceless tensor Π^{ab} can be expanded in a gradient expansion (in gradients of u^a and of a scalar temperature field); to third order in derivatives, the expansion is given by Eq. (2.3) [94,98,99], where we use the longitudinal derivative D ≡ u^a ∇_a and an angle-bracket short-hand which ensures that any tensor A^{⟨ab⟩} is by construction transverse, u_a A^{⟨ab⟩} = 0, symmetric, and traceless, g_{ab} A^{⟨ab⟩} = 0. The tensor σ^{ab} is the one-derivative shear tensor, σ^{ab} = 2 ∇^{⟨a} u^{b⟩}, and the vorticity Ω_{μν} is defined as the anti-symmetric, transverse and traceless tensor Ω_{μν} = (1/2) Δ_μ^α Δ_ν^β (∇_α u_β − ∇_β u_α). The transport coefficients appearing in (2.3) are the shear viscosity η, five second-order coefficients ητ_Π, κ, λ_1, λ_2, λ_3, and 20 (subject to potential entropy constraints) conformal third-order transport coefficients λ_i^{(3)}, which multiply the 20 linearly independent, third-order Weyl-covariant tensors O_i^{ab} that can be found in [94]. The boost-invariant Bjorken flow [81] is a solution of the hydrodynamic equations (2.1) and has been widely used as a simple model of relativistic heavy-ion collisions (see [77]). Choosing the direction of the beam to be the z axis, the Bjorken flow is boost-invariant along z, as well as rotationally and translationally invariant in the plane perpendicular to z (denoted by x_⊥). Introducing the proper time τ = √(t^2 − z^2) and the rapidity parameter y = arctanh(z/t), the velocity field, which is completely fixed by symmetries, and the flat metric can be written as
u^a ∂_a = ∂_τ, (2.7) ds^2 = η_{ab} dx^a dx^b = −dτ^2 + τ^2 dy^2 + dx_⊥^2. (2.8)
Note that the solution is also invariant under the discrete reflection y → −y. What remains is for us to find the solution for the additional scalar degree of freedom required to fully characterize the flow. In this case, it is convenient to work with a proper-time-dependent energy density ε(τ) and to write Eq. (2.1) as in [98], Eq. (2.9). Using the conformal relation P = ε/3 and the fact that the only non-zero component of ∇_a u_b is ∇_y u_y = τ, Eq. (2.9) then determines the evolution of ε(τ), with Π^{yy} from Eq. (2.3) expanded to third order in gradients as in Eq. (2.11), an expression involving the second-order coefficients and the third-order coefficients λ_n^{(3)}. Each transport coefficient appearing in (2.11) can only be a function of the single scalar degree of freedom, the energy density, with the dependence on ε determined uniquely by its conformal dimension under local Weyl transformations, Eq. (2.12) [94,98], where C̄, η̄, τ̄_Π and λ̄_n^{(3)} are constants. Finally, the Bjorken solution of Eq. (2.1) for the energy density, expanded in inverse powers of τ, becomes the series of Eq. (2.13) with ν = 2/3. Terms at order O(τ^{−2−3ν}) are controlled by the hydrodynamic expansion at fourth order, which is presently unknown.
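As a consistency check on the structure of this series, the sketch below verifies symbolically that, assuming the standard first-order Bjorken equation dε/dτ = −(4/3)ε/τ + (4/3)η/τ^2 with the conformal scaling η = η̄ ε^{3/4} of Eq. (2.12), the truncated solution ε(τ) = C τ^{−4/3}(1 − 2η̄ C^{−1/4} τ^{−2/3}) satisfies the equation up to the expected higher-order remainder. The form of this ODE is our assumption here, not a formula quoted from the paper.

```python
import sympy as sp

u, C, etab = sp.symbols('u C etabar', positive=True)

# Work in the expansion variable u = tau^{-2/3}; then d/dtau = -(2/3) u^{5/2} d/du.
b = -2 * etab / C**sp.Rational(1, 4)
eps = C * u**2 * (1 + b * u)                     # first-order Bjorken energy density

# Assumed first-order Bjorken equation with eta = etabar * eps^{3/4}, rewritten in u:
residual = (-sp.Rational(2, 3) * u**sp.Rational(5, 2) * eps.diff(u)
            + sp.Rational(4, 3) * eps * u**sp.Rational(3, 2)
            - sp.Rational(4, 3) * etab * eps**sp.Rational(3, 4) * u**3)

# The coefficient of the matched order u^{9/2} must cancel identically:
leading = sp.simplify(residual / u**sp.Rational(9, 2))
print(sp.series(leading, u, 0, 2))   # -> O(u): no constant term, solution verified
```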
In this work, we will not look beyond third-order hydrodynamics. What is important to note is that the gradient expansion is believed to be an asymptotic expansion, similar to perturbative expansions. As a result, the Bjorken expansion in proper time formally has zero radius of convergence [95]. In practice, this means that at some order the expansion in inverse powers of τ breaks down, and techniques of resurgence are required for analyzing long-distance transport (see e.g. [95, 107-112]).
Gravitational background in Gauss-Bonnet gravity
In this section, we begin our analysis of holographic duals to Bjorken flow. Throughout this paper, we will be interested in three separate cases: • Einstein gravity. Bjorken flow in N = 4 SYM at infinite coupling, expanded to third order in the hydrodynamic series.
• α′-corrections. Bjorken flow in N = 4 SYM including the leading α′^3 ∼ 1/λ^{3/2} coupling corrections, expanded to second order in the hydrodynamic series.
• λ GB -corrections.Bjorken flow in a hypothetical dual of Gauss-Bonnet theory with λ GB coupling corrections, expanded to second order in the hydrodynamic series.
In the first case, the holographic dual geometry is well known (see Refs. [82-88]). What one finds is that in the near-boundary region, which is the only region relevant for computing the non-local observables studied in this paper (two-point correlators of operators with large dimensions, and Wilson loops), the geometries are specified by symmetry and by the hydrodynamic transport coefficients at the relevant order. As we will see, the same conclusions can also be drawn in higher-derivative theories. As a check, we derive here the full geometric Bjorken background in non-perturbative Gauss-Bonnet theory. All details of the perturbative calculations in type IIB supergravity with α′ corrections will be omitted, but we refer the reader to [96] for the explicit derivation.
Static background
Equations of motion for Gauss-Bonnet gravity in five dimensions can be derived from the action (1.4) and take the form (3.1). This set of differential equations admits a well-known (static) asymptotically AdS black brane solution (3.2), with a corresponding emblackening factor. In the near-boundary limit, the asymptotically AdS region exhibits the usual scaling towards the flat metric η_{ab}, with the AdS curvature scale L̃ related to the length scale L set by the cosmological constant through a λ_GB-dependent relation. The Hawking temperature, entropy density and energy density of the dual theory are then given by the corresponding standard expressions. In what follows, we will set L = 1 unless otherwise stated.
To make the metric manifestly boost-invariant along the spatial coordinate z, we transform (3.2) by introducing the proper time coordinate τ = √(t^2 − z^2). Next, we perform an additional coordinate transformation to write the metric in terms of ingoing Eddington-Finkelstein (EF+) coordinates, which gives the metric (3.10). It should be noted that the EF+ time, τ+, mixes the proper time, τ, and the bulk coordinate r; at the boundary, however, τ+ reduces to τ. A static black brane with a constant temperature cannot be dual to an expanding Bjorken fluid, which has a temperature that decreases with proper time, T_fluid ∼ τ^{−1/3}. As in the fluid-gravity correspondence [99], where the black brane is boosted along the spatial directions, here one may make an informed guess and allow the horizon to become time-dependent by substituting r_h → w τ+^{−1/3}, where w is a constant and τ+ is the fluid's proper time at the boundary. The Hawking temperature then decreases as τ+^{−1/3}, and the static black brane metric (3.10) takes the form (3.14). Of course, as in the fluid-gravity correspondence, Eq. (3.14) is not an exact solution to the Gauss-Bonnet equations of motion. As will be shown below, however, the background solution asymptotes to (3.14) at late times, i.e. Eq. (3.14) is (approximately) dual to Bjorken flow in the regime dominated by ideal hydrodynamics.
Bjorken flow geometry
The full (late-time) geometry is systematically constructed following the procedure outlined in Ref. [113] (see also [114]). In EF+ coordinates, the most general metric respecting the symmetries of Bjorken flow is of the form (3.15), where a, b, c are functions of r and τ+ and the boundary geometry is expressed in proper time-rapidity coordinates (see the discussion above Eq. (2.8)).
At late times, the equations of motion (3.1) can be solved order-by-order in inverse powers of τ+ at fixed u. To perform the late-time expansion, we change coordinates from {τ+, r} → {v, u}, the appropriate scaling variables, and assume that the metric functions a, b and c can be expanded order-by-order in u. We then solve the equations at each order in u and impose Dirichlet boundary conditions (at the boundary) at every order. At a given order i, the equations of motion form a system of second-order differential equations for a_i, b_i and c_i, along with two constraint equations. We therefore have six integration constants at each order. One integration constant is related to a residual diffeomorphism invariance of our metric under the coordinate transformation r → r + f(τ+) (3.20), and can be freely specified without affecting the physics of our boundary field theory, a feature that will be exploited to simplify the solutions. Three of the five remaining integration constants can be fixed by requiring the bulk geometry to be free of singularities (apart from at v = 0) and by imposing the asymptotically AdS boundary conditions above. In practice, to the order considered, we find that the integration constant which ensures bulk regularity can be set by requiring ∂_v c_i to be regular at a particular value of v. The remaining integration constants are specified by the two constraint equations. For i > 0, one of the constraint equations specifies a constant at order i, while the other specifies a constant at order i − 1.
Solutions
We now present the full zeroth- and first-order solutions in the late-time (hydrodynamic gradient) expansion. At second order, we were unable to find closed-form solutions that extend analytically throughout the entire bulk. However, solutions sufficient for the purposes of this work can be found non-perturbatively in λ_GB near the boundary, or perturbatively in the full bulk.
Zeroth Order
At zeroth order in the hydrodynamic expansion (ideal fluid order), the equations of motion are solved by the boosted black brane functions. One can see immediately that the zeroth-order solution is the boosted black brane metric given by Eq. (3.14). Near the boundary, we find the corresponding expansions.
First Order
At first (dissipative) order, our equations of motion are solved by explicit closed-form functions, with c_1 given in the integral representation (3.24). (With the next section in mind, we require lim_{v→w+} ∂_v c_i < ∞. We also note that the zeroth-order solution quoted above is not the most general solution to the equations of motion at that order: there is an additional non-physical integration constant corresponding to a gauge degree of freedom. A simple coordinate transformation [113] brings the solution into the form presented here; similar remarks apply to the first-order solution.) An explicit evaluation of the integral in (3.24) would result in an Appell hypergeometric function (see Ref. [34]). Near the boundary, the first-order functions admit the expansions given next.
Second Order
As in Gauss-Bonnet fluid-gravity calculations [34], at second order in the hydrodynamic expansion one is required to solve non-homogeneous differential equations with sources depending on complicated expressions involving Appell hypergeometric functions. For this reason, we were only able to find non-perturbative solutions (in λ_GB) near the boundary, and to solve the full equations perturbatively.
Near the boundary, we find the expansions (3.26), where A_2 and C_2 are, as yet, unspecified constants. To determine them, we would need to know the full bulk solutions, and the constants would then follow from horizon regularity. Instead, as will be shown below, we will use known properties of the dual field theory (the transport coefficients and energy conservation) to fix their values. The full perturbative first-order (in λ_GB) solutions are presented in Appendix A; here, we only state their near-boundary forms.
Stress-energy tensor and transport coefficients
We can now compute the boundary stress-energy tensor by following the well-known holographic procedure (see e.g. [34,115,116]), which we review here. First, we introduce a regularized boundary located at r = r_0 = const. The induced metric on the regularized boundary is given by γ_{μν} ≡ g_{μν} − n_μ n_ν, where n^μ ≡ δ^μ_r/√(g^{rr}) is the outward-pointing unit vector normal to the r = r_0 hypersurface. The boundary stress-energy tensor is then given by the standard expression, where E_{μν} is the induced Einstein tensor on the regularized boundary, K_{μν} is the extrinsic curvature, and J = g^{μν} J_{μν}. The constants δ_1 and δ_2, fixed by holographic renormalization, take definite values. For the background derived in Section 3.3, the non-zero components of the four-dimensional boundary stress-energy tensor, T_{ab}, are found to be as follows, where we identify τ+ with the proper time, τ, at the boundary.
Before analyzing T_{ab}, we note three immediate observations: 1. T_{ab} is traceless: η^{ab} T_{ab} = 0, with η_{ab} given by Eq. (2.8).
2. Conservation, ∇^a T_{ab} = 0, implies a relationship between A_2 and C_2. 3. The stress-energy tensor is completely specified by a single time-dependent function, ε(τ) ≡ T_{τ+τ+}. The three properties above are the defining properties of the hydrodynamic description of a relativistic, conformal Bjorken fluid. The only quantity that remains to be specified is the single integration constant A_2 (see the discussion below Eq. (3.26)). Now, the energy density of a Bjorken fluid, given by Eq. (2.13), can be written to second order in the hydrodynamic gradient expansion in a form in which Σ^{(2)} represents the relevant linear combination of second-order transport coefficients. By comparing the energy density of the Gauss-Bonnet fluid derived in the previous section with that of the Bjorken fluid, we identify the corresponding constants. At zeroth order in the hydrodynamic expansion, the energy density of our plasma is, as required, that of an ideal conformal fluid, where we have used Eq. (3.13) to express the answer in terms of T. The shear viscosity then follows and agrees with Eq. (2.20). At second order, we find the value of Σ^{(2)}. Collecting our results, the energy density as a function of proper time takes its final form.
Breakdown of non-local observables
In this section we study various non-local observables in the boost-invariant backgrounds described above.As advertised in the Introduction, we will see that requiring a physically sensible behavior for the observables leads to several constraints on the regime of validity of hydrodynamic gradient expansions at a given order.
Two-point functions
According to the holographic dictionary [117,118], bulk fields φ are dual to gauge-invariant operators O with conformal dimension Δ, specified by their spin s, mass m and the number of dimensions d. For scalar fields, the relation is given by Δ(Δ − d) = m^2. The equivalence between the two sides of the correspondence can be made more precise by the identification Z_bulk[φ] = Z_CFT[φ_0]. (4.1) The left-hand side of this equation is the bulk partition function, where we impose the boundary condition φ → ε^{d−Δ} φ_0. The right-hand side is the generating functional of correlation functions of the CFT, where the boundary value φ_0 acts as a source of the dual operator O. The equivalence (4.1) becomes handy upon treating the bulk path integral in the saddle-point approximation: in this regime, the above relation becomes e^{−S_on-shell} ≃ Z_CFT[φ_0], where on the left-hand side we have the bulk action evaluated on-shell and the right-hand side is the generating functional of connected correlation functions of the CFT. For instance, two-point functions can be computed by differentiating twice with respect to the source. For operators with large conformal dimension Δ (or, equivalently, bulk fields with large mass m), the problem simplifies even further: it can be shown that, in this limit, the computation of the relevant two-point functions reduces to the computation of geodesics in the given background geometry [119,120], i.e.
⟨O(x) O(x′)⟩ ≈ e^{−Δ S_reg(x, x′)}, where S_reg is the regularized length of a geodesic connecting the boundary points x and x′.
Perturbative expansion: Eddington-Finkelstein vs. Fefferman-Graham
We can now compute the late-time behavior of scalar two-point functions probing the out-of-equilibrium Bjorken flow. In order to do so, we will follow the approach of [90]. Consider the functional L[φ(y); α] for the geodesic length, i.e. S ≡ ∫ dy L[φ(y); α]. Here, φ(y) denotes collectively all of the embedding functions, y is the affine parameter and α is a small parameter related to the hydrodynamic gradient expansion in which the perturbation is carried out; its precise definition will be given below. We can expand both L and φ(y) as: L[φ(y); α] = L^(0)[φ(y)] + α L^(1)[φ(y)] + O(α^2), φ(y) = φ^(0)(y) + α φ^(1)(y) + O(α^2). (4.5) The functions φ^(n)(y) can in principle be found by solving the geodesic equation order-by-order in α. However, the embedding equations are in most cases highly non-linear, making closed-form solutions difficult to find. The key point here is that, at first order in α, S^(1) = ∫ dy L^(1)[φ^(0)(y)], (4.6) so we only need φ^(0)(y) to obtain the first correction to the geodesic length.
Let us now discuss the expansion parameter α in more detail. In particular, what we will see is that there is a natural choice for α depending on whether we work in Eddington-Finkelstein or Fefferman-Graham coordinates, so we must proceed with some care before interpreting our results. Let us start with the Fefferman-Graham expansion, which was first considered in [90]. In this case, the metric coefficients can be expanded as in Eq. (B.8), so each hydrodynamic order is suppressed by a factor of the dimensionless quantity ũ = τ^{−2/3} w^{−1}, where w is the same dimensionful parameter that appears in the energy density. On the other hand, the near-boundary expansion stipulates that we can alternatively expand all metric coefficients in powers of ṽ ≡ z τ^{−1/3} w. This is the expansion that will be relevant for our perturbative calculation (4.6). Notice that when ṽ → 0, we recover pure AdS, for which the embedding function φ^(0)(y) is analytically known. The first correction in this expansion enters at order O(ṽ^4), so we can identify α ∼ ṽ^4. Now, according to the UV/IR connection [122-124], the bulk coordinate z can roughly be mapped to a length scale ℓ in the boundary theory, z ∼ ℓ. In our setup, the only length scale of the problem is given by the separation of the two points (x, x′), so ℓ ∼ Δx ≡ |x − x′|. Therefore, in terms of CFT data, our expansion parameter in Fefferman-Graham coordinates is given by α = ℓ^4 τ^{−4/3} w^4 (Fefferman-Graham). (4.7) As mentioned already in Appendix B, the leading correction to the metric in the near-boundary expansion receives contributions at all orders in hydrodynamics, so one can obtain non-trivial results by studying contributions to the two-point correlators to only first order. For instance, as found in Ref. [90], in order to have a well-behaved late-time relaxation of longitudinal two-point functions, first-order hydrodynamics puts a constraint on the regime of validity of ũ; namely, the approximation breaks down when the bound (4.8) is violated. In this work, we are interested in studying both i) higher-order hydrodynamic corrections and ii) (inverse) coupling constant corrections in the N = 4 plasma and in a hypothetical dual of Gauss-Bonnet theory.
In Eddington-Finkelstein coordinates, the hydrodynamic expansion is performed in terms of u, and the near-boundary expansion in terms of v, both given in Eq. (B.3). However, notice that these definitions involve τ+ instead of τ, which at leading order is related to τ through Eq. (B.6). If we perform an analysis similar to that in Fefferman-Graham coordinates, we find that in Eddington-Finkelstein coordinates the expansion parameter is given by α ∼ v^{−4}, or equivalently, α = ℓ^4 (τ − ℓ)^{−4/3} w^4 (Eddington-Finkelstein). (4.9) Notice that in this case, truncating the expansion (4.6) at leading order in α is problematic for τ < ℓ. Furthermore, if we expand (4.9) for τ ≫ ℓ, even the first subleading term is not complete since, due to the coordinate mixing, we would require higher-order terms in the near-boundary expansion to obtain a full result at the given order in ℓ/τ. Thus, in Eddington-Finkelstein coordinates the results can only be trusted in the limit ℓ/τ → 0. To avoid this issue, we will first convert to Fefferman-Graham coordinates and perform our calculations in that chart. Explicit expressions for the metric functions are given in Appendix B.1.
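To see how the two charts differ quantitatively, here is a small sketch comparing the two expansion parameters, Eqs. (4.7) and (4.9), for sample values of ℓ/τ (all numbers illustrative):

```python
def alpha_fg(ell, tau, w):
    """Expansion parameter in Fefferman-Graham coordinates, Eq. (4.7)."""
    return ell**4 * tau**(-4.0 / 3.0) * w**4

def alpha_ef(ell, tau, w):
    """Expansion parameter in Eddington-Finkelstein coordinates, Eq. (4.9);
    it becomes ill-defined once tau < ell."""
    return ell**4 * (tau - ell)**(-4.0 / 3.0) * w**4

w = 1.0
for ell_over_tau in (0.01, 0.1, 0.5, 0.9):
    tau, ell = 10.0, 10.0 * ell_over_tau
    print(f"l/tau = {ell_over_tau}: alpha_FG = {alpha_fg(ell, tau, w):.3e}, "
          f"alpha_EF = {alpha_ef(ell, tau, w):.3e}")
```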
Transverse correlator
In Fefferman-Graham coordinates, a generic bulk metric dual to Bjorken hydrodynamics can be written as in Eq. (4.10), where {ã, b̃, c̃} are functions of (τ, z) that can be expanded in terms of ũ = τ^{−2/3} w^{−1} and ṽ = z τ^{−1/3} w ≪ 1 as in (B.10), i.e., ã(ṽ, ũ) = ã_4(ũ) ṽ^4 + ..., and similarly for b̃ and c̃. Notice that we have set the AdS radius to unity, L = 1. The AdS radius generally depends on the cosmological constant Λ as well as on all the higher-derivative couplings of the gravity theory under consideration. Since L is just an overall factor of our metric, it will only appear as an overall factor in the various observables we study, and can easily be restored via dimensional analysis.
Let us begin by considering space-like geodesics connecting two boundary points separated in the transverse plane: (τ₀, x) and (τ₀, x′), where x ≡ x₁ and all other spatial coordinates are identical. Because the metric (4.10) is invariant under translations in x, we can parameterize the geodesic by two functions τ(z) and x(z), satisfying the following UV boundary conditions:
τ(z → 0) = τ₀ ,   x(z → 0) = ± Δx/2 .   (4.11)
²² For longitudinal correlators, this would imply that only the Δy → 0 limit is valid. Fortunately, this is exactly the limit for which the constraint (4.8) was found.
²³ We explicitly checked that the results in both coordinate systems agree at leading order in ℓ/τ.
At the end of the calculation, we can shift our coordinate x → x + x₀, where x₀ = ½(x + x′), and express the results in terms of Δx = |x − x′|, for any x and x′. The length of such a geodesic is given by:

S = 2 ∫₀^{z_*} (dz/z) √( 1 − (1 + ã) τ′(z)² + (1 + c̃) x′(z)² ) .   (4.12)

We can now use (B.10) and expand the above as S = S^{(0)} + S^{(1)} + ⋯, where the first-order piece S^{(1)} is given in (4.13). The first term is just the pure AdS contribution, which is UV divergent. To see this, we can use the zeroth order embeddings:

τ(z) = τ₀ ,   x(z) = ± √(z_*² − z²) ,   (4.14)

with z_* = Δx/2. Integrating from ε → 0 to z_* and subtracting the divergence S_div = −2 ln ε, we obtain S^{(0)} = 2 ln Δx, which is the expected result for a two-point correlator in the vacuum of a CFT. At next order, the correlator can be written as follows:

⟨O(x) O(x′)⟩ ≈ Δx^{−2Δ} e^{−Δ S^{(1)}} ,

where S^{(1)} is given in (4.13). The functions {ã₄(ũ), b̃₄(ũ), c̃₄(ũ)} are generically theory-dependent (see Appendix B.1 for explicit expressions) and contain information about all orders in hydrodynamics. On general grounds, we expect S^{(1)} to be positive definite at late times, so the correlator relaxes from above as the plasma cools down. Below, we will use the explicit form of {ã₄(ũ), b̃₄(ũ), c̃₄(ũ)} to put constraints on the regime of validity of hydrodynamics, at each order in the derivative expansion.
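As a concrete check of the zeroth-order result, the regularized geodesic length can be evaluated numerically. The following minimal sketch (in Python, with illustrative parameter values) integrates along the pure AdS semicircle embedding from a UV cutoff ε and verifies that, after subtracting S_div = −2 ln ε, the result converges to 2 ln Δx:

```python
import numpy as np
from scipy.integrate import quad

def geodesic_length_regulated(dx, eps):
    """Length of the boundary-anchored semicircle geodesic in pure AdS
    (L = 1), integrated from the UV cutoff z = eps to the turning point
    z_* = dx/2; the factor 2 accounts for the two symmetric halves."""
    zs = dx / 2.0
    integrand = lambda z: 1.0 / (z * np.sqrt(1.0 - (z / zs) ** 2))
    val, _ = quad(integrand, eps, zs, limit=200)
    return 2.0 * val

dx = 3.0
for eps in (1e-3, 1e-4, 1e-5):
    S_reg = geodesic_length_regulated(dx, eps) + 2.0 * np.log(eps)
    print(eps, S_reg, 2.0 * np.log(dx))  # converges to 2 ln(dx)
```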
For the transverse correlator, there is a very drastic simplification: since τ(z) = τ₀ along the zeroth order embeddings (4.14), ũ is constant along the geodesic, and once we change variables to x = z/z_*, S^{(1)} reduces to an integral proportional to c̃₄(ũ₀) with a manifestly positive coefficient, where ũ₀ = τ₀^{−2/3} w^{−1}. Therefore, the positivity of S^{(1)} follows directly from the positivity of c̃₄(ũ). Let us specialize to the particular cases of interest: Einstein gravity (which is dual to a Bjorken flow at infinite coupling), and higher derivative gravities with α′- and λ_GB-corrections (two different models of Bjorken flow with finite coupling corrections).
• Einstein gravity. The function c̃₄(ũ) is known up to third order in hydrodynamics and is given by equation (B.12). Up to first order in hydrodynamics c̃₄(ũ) is positive definite, but it becomes negative for τ < τ^{2nd}_{crit} and τ < τ^{3rd}_{crit} in second- and third-order hydrodynamics, respectively; the explicit values are collected in (4.18). It is interesting to note that for this particular observable, the above criterion would naively imply that third-order hydrodynamics is more constraining than second-order hydrodynamics. However, as we will see below, the most stringent bound on the applicability of hydrodynamics will come from the longitudinal correlator, which decreases at each order in hydrodynamics (up to third order), as expected.
• α′-corrections. The function c̃₄(ũ) is known to linear order in γ = α′³ ζ(3)/8 = λ^{−3/2} ζ(3) L⁶/8, and up to second order in hydrodynamics, and is given by equation (B.13). The coefficient c̃₄(ũ) is positive definite for first-order hydrodynamics, but becomes negative for τ < τ^{2nd}_{crit}(γ) in second-order hydrodynamics; the explicit value is collected in (4.19). Finite coupling corrections (γ > 0) are shown to increase τ^{2nd}_{crit}, which is in accordance with our expectations that they should reduce the regime of validity of hydrodynamics. As we will see below, the most stringent bound will again come from the longitudinal correlator.
• λ_GB-corrections. The function c̃₄(ũ) is known non-perturbatively in λ_GB and up to second order in hydrodynamics, and is given by equation (B.14). c̃₄(ũ) is positive definite for first-order hydrodynamics, but becomes negative for τ < τ^{2nd}_{crit}(λ_GB) in second-order hydrodynamics. Negative values of λ_GB tend to increase τ^{2nd}_{crit}, so they reduce the regime of validity of hydrodynamics. This is indeed the expected behavior as we flow from strong to weak coupling. It is also interesting to study the full dependence of τ^{2nd}_{crit} on λ_GB ∈ (−∞, 1/4], which we plot in Figure 1. For negative λ_GB, we observe that τ^{2nd}_{crit} increases monotonically. However, for positive λ_GB, τ^{2nd}_{crit} is non-monotonic. We note that, also for this case, the true bound will come from the longitudinal correlator.

Finally, it is worth noting that the results above can be expressed generically in terms of a few theory-specific constants {Σ̃, Σ̃(γ), Σ̃(λ_GB), Λ̃}, which can be found in Appendix C. At second order in the hydrodynamic expansion, the critical time follows from the Λ̃ → 0 limit of the equation below, giving τ^{2nd}_{crit} = Σ̃^{3/4} w^{−3/2}. Expressing our coupling constants γ and λ_GB collectively as β, first-order corrections to τ^{2nd}_{crit} then take the form of a shift linear in β. The expressions for τ^{3rd}_{crit} are complicated, but correspond to the smallest real root of the equation

1 − Σ̃ ξ^{4/3} − 2 Λ̃ ξ² = 0 ,

where ξ = τ₀^{−1} w^{−3/2}.
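Once the theory-specific constants are known, this root-finding step is straightforward to carry out numerically. Below is a minimal sketch, with placeholder values for Σ̃ and Λ̃ (the true values are tabulated in Appendix C), that brackets and solves for the smallest positive root:

```python
import numpy as np
from scipy.optimize import brentq

# Placeholder values for the theory-specific constants; they are NOT
# the values of Appendix C, and serve only to illustrate the method.
Sigma, Lam = 1.0, 0.5

f = lambda xi: 1.0 - Sigma * xi ** (4.0 / 3.0) - 2.0 * Lam * xi ** 2
# f(0) = 1 > 0 and f decreases monotonically for xi > 0, so a single
# positive root exists and can be bracketed directly
xi_crit = brentq(f, 1e-9, 10.0)
w = 1.0                                  # arbitrary units for the energy scale
tau_crit = 1.0 / (xi_crit * w ** 1.5)    # from xi = tau^{-1} w^{-3/2}
print(xi_crit, tau_crit)
```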
Longitudinal correlator
We are now interested in a space-like geodesic connecting two boundary points in the longitudinal plane: (τ₀, y) and (τ₀, y′) for any y and y′. We can make use of the invariance under translations in y and parameterize the geodesic by functions τ(z) and y(z) with boundary conditions

τ(z → 0) = τ₀ ,   y(z → 0) = ± Δy/2 .

At the end, if desired, we can simply shift our rapidity coordinate y → y + y₀, where y₀ = ½(y + y′), and express our results in terms of x₃ = τ₀ sinh(y₀ + Δy/2) and x₃′ = τ₀ sinh(y₀ − Δy/2). The length of such a geodesic is given by:

S = 2 ∫₀^{z_*} (dz/z) √( 1 − (1 + ã) τ′(z)² + (1 + b̃) τ² y′(z)² ) .

We can now use (B.10) and expand the above as S = S^{(0)} + S^{(1)} + ⋯, where S^{(1)} is given in (4.26). Again, the first term gives the pure AdS contribution. To see this, we can use the zeroth order embeddings, which in this case are given by:
τ(z) cosh y(z) = τ₀ cosh(Δy/2) ,   τ(z) sinh y(z) = ± √(z_*² − z²) .   (4.27)
Integrating from ε → 0 up to z_* = Δx₃/2 = τ₀ sinh(Δy/2) and subtracting the divergent part S_div = −2 ln ε, we obtain:

S^{(0)} = 2 ln Δx₃ .   (4.28)

At zeroth order, the longitudinal correlator depends only on |x₃ − x₃′|. This is expected because this is the result for a two-point correlator in the vacuum of a CFT, which is translationally invariant. At next order, the correlator can be written as in the transverse case, where S^{(1)} is now given in (4.26). Again, we expect S^{(1)} to be positive definite at late times, so the correlator relaxes from above as the plasma cools down. However, we will see below that there are crucial differences with respect to the transverse case, which will ultimately lead to stricter bounds on the regime of validity of the hydrodynamic expansion. The next step is to evaluate S^{(1)} using the zeroth-order embeddings (4.27) and then use the explicit forms of {ã₄(ũ), b̃₄(ũ), c̃₄(ũ)}, which are theory-dependent. Defining a dimensionless variable x = z/z_*, we arrive at an expression in which ũ now varies along the geodesic, ũ(x) = ũ₀ (1 + x² sinh²(Δy/2))^{−1/3}. To proceed, we expand the coefficients in powers of ũ,

ã₄(ũ) = Σ_k ã₄^{(k)} ũ^k ,   b̃₄(ũ) = Σ_k b̃₄^{(k)} ũ^k ,

for some numbers {ã₄^{(k)}, b̃₄^{(k)}}. Different values of k correspond to contributions from different orders in hydrodynamics; for example, k = 0 corresponds to the perfect fluid approximation, k = 1 corresponds to first-order hydrodynamics, and so on. Therefore, we can rewrite S^{(1)} as a sum over k of terms proportional to cosh²(Δy/2) I₊^{(k)}(Δy) and sinh²(Δy/2) I₋^{(k)}(Δy), with coefficients built from {ã₄^{(k)}, b̃₄^{(k)}}; this is Eq. (4.33). Both integrals can be performed analytically for any value of k, although we refrain from writing them out here, since they are not particularly illuminating. Nevertheless, it is interesting to study the Δy → 0 limit, from which we can extract τ_crit at different orders in hydrodynamics [90]. A simple observation is that both of I^{(k)}_± are positive definite and decrease monotonically as Δy increases. In the limit Δy → 0, both integrals are finite and independent of k. However, it is clear that the first term of (4.33) dominates, since in this limit cosh(Δy/2) → 1 while sinh(Δy/2) → O(Δy). Putting everything together, we find that for Δy → 0 the leading behavior of S^{(1)} is proportional to b̃₄(ũ₀); this is Eq. (4.36), where ũ₀ = τ₀^{−2/3} w^{−1}. Therefore, in this limit the positivity of S^{(1)} follows directly from the positivity of b̃₄(ũ). In the cases we considered, this criterion was enough to guarantee the positivity of S^{(1)} for any other value of Δy. However, this does not trivially follow from (4.33): at finite Δy, the value of S^{(1)} will generally depend on the interplay between the coefficients {ã₄^{(k)}, b̃₄^{(k)}}. In the following, we will study in more detail the behavior of S^{(1)} as a function of Δy and τ₀ w^{3/2}, specializing to the particular cases of interest: Einstein gravity and higher derivative gravities with α′- and λ_GB-corrections.
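Since the explicit integrands of I^{(k)}_± are theory-dependent and not reproduced here, the following sketch uses a hypothetical integrand, built from the weight ũ(x)^k/ũ₀^k = (1 + x² sinh²(Δy/2))^{−k/3} and an illustrative positive measure, to demonstrate numerically the three properties used in the text: finiteness of the Δy → 0 limit, k-independence of that limit, and monotonic decrease with Δy:

```python
import numpy as np
from scipy.integrate import quad

def I(k, dy, u0=1.0):
    """Hypothetical stand-in for the I^(k)_± integrals: the weight
    (1 + x^2 sinh^2(dy/2))^(-k/3) mimics u(x)^k along the geodesic,
    while x^3/sqrt(1-x^2) is an illustrative positive measure."""
    s2 = np.sinh(dy / 2.0) ** 2
    integrand = lambda x: u0 ** k * x ** 3 / np.sqrt(1.0 - x ** 2) \
                          * (1.0 + x ** 2 * s2) ** (-k / 3.0)
    val, _ = quad(integrand, 0.0, 1.0, limit=200)
    return val

# dy -> 0: finite and k-independent (every weight reduces to 1)
print([round(I(k, 1e-8), 6) for k in range(4)])        # ~2/3 for every k
# monotonic decrease with dy at fixed k
print([round(I(2, dy), 6) for dy in (0.0, 0.5, 1.0, 2.0)])
```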
• Einstein gravity. The functions ã₄(ũ) and b̃₄(ũ) are known up to third order in hydrodynamics and are given in (B.12). With these functions at hand we can extract the numbers ã₄^{(k)} and b̃₄^{(k)} and then use formula (4.33). Figure 2 (left) shows some representative curves for S̄^{(1)} ≡ S^{(1)} τ₀^{4/3}/(w⁴ Δx₃⁴) as a function of Δy for various values of ξ = τ₀^{−1} w^{−3/2} = {0, 0.15, 0.3, 0.45, 0.6}, depicted in blue, orange, green, red and purple, respectively. The solid lines correspond to third-order hydrodynamics; the dashed and dotted lines correspond to second- and first-order hydrodynamics, respectively. For ξ = 0.45 the dotted curve becomes negative for small Δy, indicating that first-order hydrodynamics is no longer valid. For ξ = 0.6 both the dotted and dashed curves are negative for small Δy. This indicates that second-order hydrodynamics is also invalid at this time. Finally, for all values of ξ that were plotted, the solid lines are always positive, so third-order hydrodynamics is valid for these values. However, if we keep on increasing ξ, the solid lines will become unphysical for small Δy at some point. We observe the following behavior for any finite value of ξ (in the range of parameters that we plotted): the value of S̄^{(1)} increases up to a maximum S̄^{(1)}_{max} > 0 and then decreases monotonically to zero as Δy → ∞. This implies that the positivity of S̄^{(1)} at Δy = 0 is enough to guarantee a good physical behavior for any Δy. In Figure 2 (right) we show the behavior of S̄^{(1)}(0) as a function of ξ for first-, second- and third-order hydrodynamics, depicted in blue, orange and green, respectively, and we indicate the times at which it becomes negative (a numerical sketch of this sign-change search is given after this list). From the Δy → 0 limit of the correlator (4.36) we obtain the critical times. These bounds are stricter than the ones derived from the transverse correlator (4.18), and decrease at each order in hydrodynamics, as expected.
Figure 2: Left: S̄^{(1)} ≡ S^{(1)} τ₀^{4/3}/(w⁴ Δx₃⁴) for various values of ξ = τ₀^{−1} w^{−3/2} = {0, 0.15, 0.3, 0.45, 0.6}, depicted in blue, orange, green, red and purple, respectively. The solid lines correspond to third-order hydrodynamics; the dashed and dotted lines correspond to second- and first-order hydrodynamics, respectively. Right: Plots of S̄^{(1)}(0) for first-, second- and third-order hydrodynamics, depicted in blue, orange and green, respectively. The dashed vertical lines correspond to the critical times at each order in hydrodynamics.

• α′-corrections. The functions ã₄(ũ) and b̃₄(ũ) are known to linear order in γ = α′³ ζ(3)/8, and up to second order in hydrodynamics, and are given in (B.13). With these functions at hand we can extract the numbers ã₄^{(k)} and b̃₄^{(k)} and then use the formula (4.33). Figure 3 (left) shows some representative curves for S̄^{(1)} ≡ S^{(1)} τ₀^{4/3}/(w⁴ Δx₃⁴) as a function of Δy for various values of ξ = τ₀^{−1} w^{−3/2} = {0, 0.12, 0.25, 0.38, 0.5}, depicted in blue, orange, green, red and purple, respectively. The solid lines correspond to γ = 0 (Einstein gravity) while the dashed lines correspond to γ = 10^{−3}, both for second-order hydrodynamics. For all the ξ that were plotted the solid lines are well behaved because we have chosen ξ < ξ^{2nd}_{crit}(γ = 0) = 0.503. For ξ = 0.5 the dashed curve becomes negative for small Δy, indicating that second-order hydrodynamics becomes invalid faster at finite coupling. We observe the same behavior as in Einstein gravity, namely that the positivity of S̄^{(1)} at Δy = 0 is enough to guarantee a good physical behavior for any Δy. In Figure 3 (right) we show the behavior of S̄^{(1)}(0) both for γ = 0 and γ = 10^{−3} as a function of ξ for first- and second-order hydrodynamics, depicted in blue and orange, respectively, and we indicate the times at which it becomes negative. From the Δy → 0 limit of the correlator (4.36) we obtain the corresponding critical times. These bounds increase as we increase the value of γ and are stricter than the ones derived from the transverse correlator (4.19). Based on this, we can conclude that finite coupling corrections indeed tend to reduce the regime of validity of hydrodynamics.
Figure 3: Left: S̄^{(1)} for various values of ξ = τ₀^{−1} w^{−3/2} = {0, 0.12, 0.25, 0.38, 0.5}, depicted in blue, orange, green, red and purple, respectively. Solid lines correspond to γ = 0 (Einstein gravity) while the dashed lines correspond to γ = 10^{−3} (α′-corrections), in both cases for second-order hydrodynamics. Right: Plots of S̄^{(1)}(0) for first- and second-order hydrodynamics, depicted in blue and orange, respectively. Solid lines correspond to γ = 0 while dashed lines correspond to γ = 10^{−3}. The dashed vertical lines correspond to the critical times at each order in hydrodynamics, including the leading α′-corrections.

• λ_GB-corrections. The functions ã₄(ũ) and b̃₄(ũ) are known non-perturbatively in λ_GB and up to second order in hydrodynamics, and are given in (B.14). With these functions at hand we can again use formula (4.33). For small and negative values of λ_GB we observe qualitatively the same behavior as for the γ-corrections: the critical time below which first- and second-order hydrodynamics break down increases, which is the expected behavior for a theory that flows from strong to weak coupling. On the other hand, positive values of λ_GB behave in the opposite way, and thus appear unphysical for λ_GB interpreted as a coupling constant. From the Δy → 0 limit of the correlator (4.36) we obtain the corresponding critical times. It is interesting to consider the behavior of the correlator for negative values of λ_GB in the non-perturbative regime. Figure 4 (left) shows S̄^{(1)} ≡ S^{(1)} τ₀^{4/3}/(w⁴ Δx₃⁴) plotted as a function of Δy for a few representative values of ξ = τ₀^{−1} w^{−3/2} = {0, 0.1, 0.2}, depicted in blue, orange and green, respectively. The solid lines correspond to λ_GB = 0 (infinite coupling limit) while the dashed and dotted lines correspond to λ_GB = −0.5 and λ_GB = −2, respectively, all for second-order hydrodynamics. For all the ξ that were plotted the solid lines are well behaved because we have chosen ξ < ξ^{2nd}_{crit}(λ_GB = 0) = 0.503. For ξ = 0.2 the dashed curve becomes negative for small Δy, indicating that second-order hydrodynamics becomes invalid faster for λ_GB = −0.5. As mentioned earlier, this is what is indeed expected as the theory flows to weak coupling. However, the dotted curves are always positive in this range of ξ, which means that something qualitatively different is happening for sufficiently negative values of λ_GB. In Figure 4 (right) we investigate this behavior in more detail.
In this plot we show the behavior of ξ^{1st}_{crit} and ξ^{2nd}_{crit} as a function of λ_GB. The blue curve corresponds to ξ^{1st}_{crit} and has precisely the expected behavior: it decreases monotonically as we decrease the value of λ_GB. However, we observe something different for ξ^{2nd}_{crit}: it has two branches for each value of λ_GB, depicted in orange and green, respectively, which merge at two values of the coupling, λ_GB = −1.657 and λ_GB = 0.073. For values of the coupling within the ranges λ_GB ∈ (−∞, −1.657] and λ_GB ∈ [0.073, 1/4] the correlator is always positive, however non-monotonic with respect to ξ. In these ranges of λ_GB we can find ξ^{2nd}_{crit} by requiring monotonicity of the late-time correlator. The result of applying the latter criterion is depicted in red in Figure 4 (right). Combining these two criteria, we find that ξ^{2nd}_{crit} decreases monotonically as λ_GB varies from 0 to −1.583, but then increases again as λ_GB goes from −1.583 to −1.657. Moreover, the derivative of ξ^{2nd}_{crit} is discontinuous at λ_GB = −1.657. Such behavior does not match the expectations for a theory that flows from infinite to zero coupling. It is likely that the inclusion of higher-than-second-derivative terms in the gravity action (beyond R² Gauss-Bonnet terms) or a higher-order hydrodynamic expansion would cure these problems. As a result, we conclude that the qualitative resemblance between non-perturbative λ_GB-corrections and (non-perturbative) finite coupling corrections to the longitudinal two-point correlator, to second order in the hydrodynamic gradient expansion, is restricted to the range λ_GB ∈ (−1.583, 0]. The critical times found for the longitudinal correlator can also be expressed generically in terms of a few theory-specific constants {η̃, Σ̃, Λ̃}, which can be found in Appendix C. Expressing our coupling constants γ and λ_GB collectively as β, first-order corrections to τ^{1st}_{crit} and τ^{2nd}_{crit} again take the form of shifts linear in β. The expression for τ^{3rd}_{crit} now corresponds to the smallest real root of the analogous equation, which also involves the first-order coefficient η̃, where ξ = τ₀^{−1} w^{−3/2}.
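As a numerical illustration of the combined criterion referred to above, the sketch below scans a model of S̄^{(1)}(0; ξ), declares ξ_crit at the first sign change, and otherwise falls back to the loss of monotonic decay; the expansion coefficients are placeholders, not the values implied by (B.12)-(B.14):

```python
import numpy as np

def S1_zero(xi, coeffs):
    """Hypothetical truncated model for S^(1)(dy=0) as a function of
    xi = tau0^{-1} w^{-3/2}: a polynomial in u0 = xi^{2/3}, mimicking
    the hydrodynamic expansion of b4(u0)."""
    u0 = xi ** (2.0 / 3.0)
    return sum(c * u0 ** k for k, c in enumerate(coeffs))

def xi_crit(coeffs, xi_max=2.0, n=20001):
    xis = np.linspace(1e-6, xi_max, n)
    vals = np.array([S1_zero(x, coeffs) for x in xis])
    neg = np.where(vals < 0)[0]
    if neg.size:                        # positivity criterion
        return xis[neg[0]], "sign change"
    grow = np.where(np.diff(vals) > 0)[0]
    if grow.size:                       # fallback: monotonicity criterion
        return xis[grow[0]], "loss of monotonic decay"
    return None, "no critical point in range"

print(xi_crit([1.0, -0.9, -0.4]))   # placeholder coefficients, sign change
print(xi_crit([1.0, -0.2, 0.8]))    # stays positive but turns around
```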
Wilson loops
Wilson loops are another phenomenologically relevant non-local observable that can be studied within the framework explored in this work. The Wilson loop operator is a path-ordered integral of the gauge field, defined as

W(C) = (1/N) Tr P exp( i ∮_C A ) ,

where the trace runs over the fundamental representation and C is a closed loop in spacetime. In AdS/CFT, the recipe for computing the expectation value of a Wilson loop in the strong-coupling limit is given by [125]

⟨W(C)⟩ = e^{−S_NG(Σ)} ,

where S_NG = (2πα′)^{−1} × Area(Σ) is the Nambu-Goto action and Σ is an extremal surface with boundary condition ∂Σ = C.
Here, we consider two separate cases. The first case consists of a rectangular loop in the plane transverse to the boost-invariant direction of the Bjorken flow, with x₁ ∈ [−Δx/2, Δx/2], x₂ ∈ [−ℓ/2, ℓ/2] and ℓ → ∞. In the second case, we consider a rectangular loop with two sides extended along the longitudinal (beam) direction, y ∈ [−Δy/2, Δy/2], x₁ ∈ [−ℓ/2, ℓ/2] and ℓ → ∞. The calculation of the Wilson loop is qualitatively similar to that of the two-point function, so we will omit some of the redundant details below.
Transverse Wilson loop
The Nambu-Goto action for the transverse Wilson loop in the Fefferman-Graham chart follows from evaluating the area of the extremal surface that is extended along x₂ and has a U-shaped profile x₁(z) in the metric (4.10). Using Eq. (B.10), we can expand this expression as S_NG = S^{(0)}_NG + S^{(1)}_NG + …, where we used α′ = λ^{−1/2}. The first term is the pure AdS contribution, which we can see by using the zeroth-order embeddings:

τ(z) = τ₀ ,   x′(z) = ± z²/√(z_*⁴ − z⁴) ,

with z_* = Δx Γ[1/4]²/(2π)^{3/2}. Integrating from ε → 0 to z_*, and subtracting the divergent part, S_div = √λ ℓ/(π ε), we obtain

S^{(0)}_NG = −(4π² √λ ℓ)/(Γ[1/4]⁴ Δx) ,

which gives the vacuum expectation value of the Wilson loop,

⟨W(C)⟩ ≈ exp( (4π² √λ ℓ)/(Γ[1/4]⁴ Δx) ) .   (4.54)

At next order, after using the zeroth-order embeddings and defining a dimensionless variable x = z/z_*, we find that S^{(1)}_NG depends linearly on c̃₄(ũ₀), where ũ₀ = τ₀^{−2/3} w^{−1}, similarly to S^{(1)} for the transverse two-point function. Therefore the resulting values of τ^{2nd}_{crit} and τ^{3rd}_{crit} will be the same as those obtained in that case, for both Einstein gravity and the higher derivative gravities with α′ and λ_GB corrections. As a result, the transverse Wilson loop provides no new bounds on the validity of the hydrodynamic description.
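The numerical constant relating z_* and Δx can be checked directly; the following sketch evaluates the turning-point integral for the U-shaped string in pure AdS and compares it with the closed form Γ[1/4]²/(2π)^{3/2} ≈ 0.835:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

# Turning point of the U-shaped string in pure AdS:
# dx/2 = z_* * I, with I = int_0^1 y^2 / sqrt(1 - y^4) dy,
# so z_*/dx = 1/(2 I); the closed form is Gamma(1/4)^2 / (2 pi)^{3/2}.
I, _ = quad(lambda y: y ** 2 / np.sqrt(1.0 - y ** 4), 0.0, 1.0, limit=200)
print(1.0 / (2.0 * I))                            # numerical value
print(gamma(0.25) ** 2 / (2.0 * np.pi) ** 1.5)    # ~0.8346, matches
```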
Longitudinal Wilson loop
The Nambu-Goto action for the longitudinal Wilson loop is constructed analogously, now for a surface extended along x₁ with profile functions τ(z) and y(z); expanding via (B.10) gives S_NG = S^{(0)}_NG + S^{(1)}_NG + …. Again, the first term gives the pure AdS contribution when we use the zeroth-order embeddings (4.59)-(4.60), with z_* fixed by Δy and τ₀. Integrating from ε → 0 to z_*, and subtracting the divergent part S_div = √λ ℓ/(π ε), we find the same result as in the transverse case, Eq. (4.54). The next step is to evaluate S^{(1)}_NG using the zeroth-order embeddings (4.59)-(4.60) along with the explicit forms of {ã₄(ũ), b̃₄(ũ), c̃₄(ũ)}. Defining the dimensionless variable x = z/z_* and expanding {ã₄(ũ), b̃₄(ũ), c̃₄(ũ)} as in (4.32), we find an expression of the same form as (4.33), now given in (4.62), with worldsheet analogues of the integrals I^{(k)}_±. We can extract τ_crit at different orders in the hydrodynamic expansion by studying the Δy → 0 behavior of S^{(1)}_NG. In this limit, the sinh²(Δy/2) term of (4.62) vanishes and the relevant I^{(k)} integrals are finite and independent of k. Collecting our results, we find that for Δy → 0 the positivity of S^{(1)}_NG(0) itself does not provide a useful criterion for establishing the regime of validity of the hydrodynamic description at all orders in the hydrodynamic expansion, so we have to also impose monotonicity. The positivity criterion is enough only at first order; however, S^{(1)}_NG(0) is strictly positive at second and third order in the backgrounds we consider. In these cases, we find that S^{(1)}_NG(0) decreases with decreasing τ until it reaches some minimum value, S^{(1)}_{NG,min}(0, τ = τ_min), and then turns around and grows without bound (this behavior is demonstrated in Figure 5 for Einstein gravity). Therefore, for τ < τ_min, the longitudinal Wilson loops are unphysical. This will be our criterion for establishing τ_crit for the higher order hydrodynamic descriptions.
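The turning-point criterion can be implemented as a one-dimensional minimization. The sketch below uses a placeholder model for S^{(1)}_NG(Δy → 0) as a function of τ, positive, decaying at late times and growing without bound at early times, and locates τ_min numerically:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def S1_NG_zero(tau, w=1.0):
    """Placeholder model for S^(1)_NG(dy -> 0): positive, decreasing at
    late times and growing without bound at early times, mimicking the
    behavior shown in Figure 5 (coefficients are illustrative)."""
    u0 = tau ** (-2.0 / 3.0) / w
    return 1.0 - u0 + 0.8 * u0 ** 2

res = minimize_scalar(S1_NG_zero, bounds=(0.05, 20.0), method="bounded")
tau_min = res.x    # below tau_min the longitudinal loop is deemed unphysical
print(tau_min, S1_NG_zero(tau_min))
```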
In the following, we will study the full behavior of S^{(1)}_NG as a function of Δy and ξ = τ₀^{−1} w^{−3/2} for our three cases of interest. In each case, the bounds on the validity of the hydrodynamic description are less constraining than those coming from the longitudinal correlator.
• Einstein gravity. Using the expansions of ã₄(ũ), b̃₄(ũ) and c̃₄(ũ) up to third order in hydrodynamics in (B.12), we evaluate S^{(1)}_NG via (4.62), and plot the results for some representative values of ξ in Figure 5. From the Δy → 0 limit of S^{(1)}_NG, we find the corresponding critical times.

• α′-corrections. Using the expansions of ã₄(ũ), b̃₄(ũ) and c̃₄(ũ) up to second order in hydrodynamics in (B.13), we evaluate S^{(1)}_NG via (4.62), the results of which are shown in Figure 6. The solid lines correspond to γ = 0 (Einstein gravity) while the dashed lines correspond to γ = 10^{−3}.

Figure 5: Left: S̄^{(1)}_NG ≡ S^{(1)}_NG τ₀^{4/3}/(w⁴ √λ Δx₃³) for various values of ξ = τ₀^{−1} w^{−3/2} = {0, 0.15, 0.3, 0.45, 0.6}, depicted in blue, orange, green, red and purple, respectively. The solid lines correspond to 3rd order hydrodynamics; the dashed and dotted lines correspond to 2nd and 1st order hydrodynamics, respectively. Right: Plots of S̄^{(1)}_NG(0) for 1st, 2nd and 3rd order hydrodynamics, depicted in green, orange and blue, respectively. The dashed vertical lines correspond to the critical times at each order in hydrodynamics.

• λ_GB-corrections. Using the expansions of ã₄(ũ), b̃₄(ũ) and c̃₄(ũ) up to second order in hydrodynamics in (B.14), we evaluate S^{(1)}_NG via (4.62), the results of which are shown in Figure 7. The solid lines correspond to λ_GB = 0 (Einstein gravity) while the dashed lines correspond to λ_GB = −0.2. Following the same line of reasoning as in the previous two cases, we find the corresponding critical times.

Finally, the critical times found above can be expressed generically in terms of the theory-specific constants defined in Appendix C. Expressing our coupling constants γ and λ_GB collectively as β, first-order corrections to τ^{1st}_{crit} and τ^{2nd}_{crit} again take the form of shifts linear in β.
Discussion
This work provides a new tile in the mosaic of recent developments on coupling-dependent thermal physics from the point of view of holography. With a view towards a better understanding of heavy ion collisions, the goal of this program has been to uncover qualitative and quantitative features of physical phenomena across a wide range of coupling constants, an understanding of which will likely require an interpolation between weakly-coupled perturbative field theory and strongly-coupled holographic techniques. Non-linear shock wave collisions were recently analyzed in perturbative Gauss-Bonnet theory to, for the first time, numerically model coupling-dependent heavy ion collisions [32] and, for example, compute the corrected hydrodynamization time. The extension of those results to either non-perturbative Gauss-Bonnet gravity or to type IIB supergravity is technically demanding. Therefore, it is useful to also study other, simpler models and probes of phenomena related to hydrodynamization. In this paper, we studied the gravity backgrounds dual to a boost-invariant Bjorken flow, which are good models for the late time dynamics of heavy ion collisions, at least in the regime of mid-rapidities. We considered non-perturbative Gauss-Bonnet gravity, studied in the present context for the first time, and type IIB supergravity (to leading order in α′), both to second order in hydrodynamics. Following up on [90], we provided an example of an analytically-tractable computation of a critical time defined through relaxation properties of non-local observables (equal-time correlators and Wilson loops), after which hydrodynamics becomes a good description.
Numerical estimates of the critical times obtained for second-order hydrodynamics, computed to leading order in inverse 't Hooft coupling corrections in N = 4 theory and non-perturbatively in λ_GB in Gauss-Bonnet theory, are summarized in Tables 1 and 2, where we show the increase of the critical time at decreased field theory coupling corresponding to a 10% and an 80% increase of η/s compared to its infinitely strongly coupled value of η/s = 1/4π. In both theories, the most stringent critical time is set by the longitudinal two-point correlator, ⟨φφ⟩_∥.

Table 1: Critical times in N = 4 SYM and in a dual of Gauss-Bonnet theory at λ_GB = −0.025. Both choices of the coupling correspond to a 10% increase of η/s. We use ⊥ and ∥ subscripts to denote transverse and longitudinal operators, respectively.
Several interesting features can be extracted from our analysis. One is the possibility of a direct comparison between the size of the effects of the 't Hooft coupling in N = 4 SYM and of λ_GB in the hypothetical dual of Gauss-Bonnet gravity. Such results should come in handy when using Gauss-Bonnet theory for phenomenologically relevant studies.

Table 2: Critical times in N = 4 SYM and in a dual of Gauss-Bonnet theory at λ_GB = −0.2. In this case, the choices of the coupling correspond to an 80% increase of η/s.

The second is the comparison between the sizes of perturbative and non-perturbative corrections in Gauss-Bonnet theory. As noted before, in both N = 4 SYM and Gauss-Bonnet gravity, the strictest bound on the regime of validity of hydrodynamics comes from the longitudinal two-point correlator. Since all other bounds are weaker, their non-convergent behavior in terms of the gradient expansion (third-order hydrodynamics giving a stricter bound than second-order hydrodynamics for ⟨φφ⟩_⊥, ⟨W(C)⟩_⊥ and ⟨W(C)⟩_∥) and in the perturbative λ_GB expansion should not be taken seriously: at their respective critical times, the hydrodynamic description assumed in the derivation is no longer valid. What is important, however, is that for the critical time derived from the longitudinal ⟨φφ⟩_∥, the perturbative λ_GB corrections converge remarkably quickly to the non-perturbative results, even for the increase of η/s by 80%. While perhaps surprising at first, this observation is compatible with the results of [32].
Another interesting consequence of our analysis is the emergent restriction on the range of the (non-perturbative) Gauss-Bonnet coupling for the second-order hydrodynamic approximation to a boost-invariant flow. While Gauss-Bonnet theory with negative λ_GB reproduces very well the expected behavior of a thermal CFT at finite coupling [31-34, 43], it is also known that the theory suffers from instabilities and UV problems for large (or finite) values of λ_GB. For the non-linear setup studied in this work, our computations suggest that the range of the non-perturbative coupling needs to be restricted to the interval λ_GB ∈ (−1.583, 0]. If we continue to decrease the Gauss-Bonnet coupling, then the bound on hydrodynamics becomes weaker, which is incompatible with the expectations for the behavior of a theory that flows from infinite to zero coupling. As is usual in holographic higher-derivative theories, we expect that in order to (reliably) flow from an infinitely coupled theory dual to Einstein gravity to a free thermal CFT, one would need to include an infinite tower of higher-order curvature corrections, beyond the R² terms considered in Gauss-Bonnet theory or the R⁴ terms derived from type IIB string theory. We leave the investigation of these issues, along with phenomenologically relevant applications of the non-local observables and of the bounds on the validity of hydrodynamics investigated in this work, for the future.
A Second order solutions in perturbative Gauss-Bonnet gravity
As discussed in Section 3, the Gauss-Bonnet equations of motion can be solved at second order in the late-time expansion and to first order in λ_GB by writing the metric functions a₂, b₂ and c₂ as a₂ = a₂⁰ + λ_GB â₂ (and similarly for the other two functions) and expanding the equations of motion to first order in λ_GB. The resulting system of equations can then be integrated directly; we have presented the solutions for c₂⁰ and ĉ₂ as first-order derivatives due to the complexity of their integrated forms. Upon integration, the resulting integration constants are set by imposing AdS boundary conditions (see Eq. (3.19)).
B.1 Explicit expansions in Fefferman-Graham coordinates
We will consider three gravity solutions dual to Bjorken flow: Einstein gravity including 3rd order hydrodynamics, perturbative α′-corrections up to second order in hydrodynamics, and non-perturbative λ_GB-corrections up to second order in hydrodynamics.

• Einstein gravity. The full gravity solution is known analytically only up to second order in hydrodynamics. However, the near-boundary metrics can be easily obtained for 3rd order hydrodynamics from the expected stress-energy tensor and the corresponding transport coefficients [94]. In particular, we find the coefficients given in (B.12).

• α′-corrections. The near-boundary coefficients at this order are given in (B.13), where γ = α′³ ζ(3)/8 = λ^{−3/2} ζ(3) L⁶/8. As we can see, in the limit of infinite 't Hooft coupling λ → ∞ (or γ → 0) we recover the coefficients for second-order hydrodynamics in Einstein gravity (B.12).
• λ_GB-corrections. The full gravity solution including non-perturbative λ_GB-corrections and first-order hydrodynamics was obtained for the first time in the present paper.
Since the transport coefficients are known non-perturbatively up to second order in hydrodynamics [34], we can reconstruct the near-boundary coefficients explicitly. We find the coefficients given in (B.14), where γ_GB = √(1 − 4λ_GB). For λ_GB → 0 (or γ_GB → 1) we recover the coefficients for second-order hydrodynamics in Einstein gravity (B.12).
C Useful definitions
We can express the critical times found in the previous sections generically in terms of a few theory-specific constants {η̃, Σ̃, Λ̃}, which correspond to contributions from first-, second- and third-order hydrodynamics, respectively.
Figure 1: Behavior of τ^{2nd}_{crit}(λ_GB), non-perturbative in λ_GB, coming from the transverse correlator. Negative values of λ_GB resemble qualitatively the expected behavior as we flow from strong to weak coupling.
Figure 4: Left: Plots of S̄^{(1)} ≡ S^{(1)} τ₀^{4/3}/(w⁴ Δx₃⁴) for some representative values of ξ = τ₀^{−1} w^{−3/2} = {0, 0.1, 0.2}, depicted in blue, orange and green, respectively. Solid lines correspond to λ_GB = 0 (infinite coupling result) while the dashed and dotted lines correspond to λ_GB = −0.5 and λ_GB = −2, respectively, all cases for second-order hydrodynamics. Right: Plot of ξ^{1st}_{crit} (blue) and the two branches of ξ^{2nd}_{crit} (orange and green) as a function of λ_GB. In the ranges λ_GB ∈ (−∞, −1.657) and λ_GB ∈ (0.073, 1/4], the correlator is positive but non-monotonic as a function of ξ. Here, ξ^{2nd}_{crit} is found instead by requiring a monotonic decay at late times and is depicted in red. The dashed blue and orange lines correspond to the perturbative results to leading order in λ_GB. The vertical line indicates the maximum allowed value λ_GB = 1/4. The behavior observed for negative values of λ_GB in the range λ_GB ∈ (−1.583, 0) is what is expected for a theory that flows from strong to weak coupling, i.e. ξ^{2nd}_{crit} decreases as the coupling decreases. However, ξ^{2nd}_{crit} increases in the range λ_GB ∈ (−1.657, −1.583). The small square on top of the figure is a zoomed-in version of the same around this region. The dashed vertical line there signals the value λ_GB = −1.583 for which dξ^{2nd}_{crit}/dλ_GB = 0. The discontinuous jump in the derivative of ξ^{2nd}_{crit} at λ_GB = −1.657 is likely to be an artifact of a truncated hydrodynamic gradient expansion or a truncated gravitational derivative expansion.
"Physics"
] |
On the observation of magnetic events on broad-band seismometers
The objective of this contribution is to gain new insights into the effects of magnetic field variations of natural and anthropogenic origin on broad-band seismic stations. Regarding natural sources of magnetic perturbations, we have investigated whether the Sudden Storm Commencements (SSC) cataloged during the 24th solar cycle (2008-2019) can be systematically identified at broad-band seismic stations distributed worldwide. The results show that the 23 SSC events with a mean amplitude above 30 nT, and most of those with lower energy but still clearly identified in the magnetometer detection network, can be observed across the broad-band station network using a simple low-pass filter. Although the preliminary impulse of those signals is usually stronger at stations located at high latitudes, major SSC are observed at seismic stations distributed worldwide. Regarding anthropogenic sources, we focus on the short-period seismic signals recorded in urban environments which are correlated with the activity of the railway transportation system. We have analyzed collocated measurements of the electric field and seismic signals within Barcelona, evidencing that significant changes in the electric field following the activity of the transportation systems can be attributed to leakage currents transmitted to the soil by trains. During space weather events, electric currents in the magnetosphere and ionosphere experience large variations, inducing telluric currents near the Earth's surface, which in turn generate a secondary magnetic field. In the case of underground trains, leakage currents are transmitted to the soil, which in turn can result in local variations of the magnetic field. The observed signals in modern seismometers can be related to the reaction of the suspension springs to these magnetic field variations or to the effect of the magnetic field variations on the force transducers used to keep the mass fixed.
Introduction
This contribution presents examples of electromagnetic signals, of both natural and anthropogenic origin, which are recorded regularly by permanent and temporary broad-band seismometers. The effect of magnetic events on seismic signals has been known since the beginning of modern broad-band instruments and is related to the mechanism of signal generation in the devices (Wielandt 2002). A limited number of contributions have studied this effect, focusing on its suppression to enhance the identification of signals in the normal mode band (0.3-3 mHz) (Forbriger 2007; Forbriger et al. 2010). Our aim here is to analyze whether these magnetic perturbations are a widespread feature systematically affecting seismic instruments distributed worldwide and to discuss the mechanism explaining the detection of magnetic field variations by broad-band sensors.
Different electromagnetic signals of natural and man-made origin can potentially affect seismic instruments, including magnetic storms, auroral electrojets, lightning during meteorological storms, magnetic fields produced by local supply currents, perturbations due to the passage of moving magnetic or electrical elements, or leakage currents associated with transportation systems. We will focus here on two kinds of magnetic signals: the Sudden Storm Commencements (SSC), often preceding the arrival of large magnetic perturbations due to solar storms, and the electromagnetic fields associated with the activity of public transportation systems within the city of Barcelona. In the first case, we will show that large SSCs are recorded regularly at global and local seismic networks distributed worldwide, confirming previous observations. In the second case, we will document, using collocated electric and seismic sensors, that the activity of an underground railway and a surface tramway results in an electrical field that can be recorded at distances of hundreds of meters, generating a perturbation that dominates the seismic spectra for frequencies below 10 mHz.
SSC recordings on broad-band seismometers
Sudden commencements (SC) are defined as abrupt increases of the horizontal component of the Earth's magnetic field due to the compression of the magnetosphere, which may or may not be followed by a magnetic storm (Park et al. 2015). Following the initial onset, SC signals show an increase of the horizontal magnetic intensity (H) which can last from 1 to 10 min but is usually limited to 3-4 min (Maeda et al. 1962). It has been proposed that the sudden increase of the magnetic field should be designated by the general term sudden commencement (SC), which can be named an SSC if it is followed by a magnetic storm or a sudden impulse (SI) if it is not (Curto et al. 2007). However, the term SSC is usually used to refer to both subcases. The amplitude of the magnetic field during SSC episodes varies quickly, from 10-15 nT to several hundreds of nanoteslas (Nishida 1978), making these events good candidates to be detected by different kinds of instruments. Due to their nature, SSC are global geophysical phenomena which can be detected everywhere on Earth, although the primary impulses are stronger at high latitudes and can produce steep responses in the magnetograms. The origin of SSC is related to sudden increases in the solar wind dynamic pressure. According to Araki's model of SSCs (Araki 1977, 1994), the magnetic perturbation is attributed to the combination of two current contributions: (i) one due to the increased pressure on the magnetopause when the hydromagnetic wave hits the magnetosphere; and (ii) the other as a consequence of the conducting ionosphere's reaction to a transient dusk-to-dawn electric field transmitted from the compressional wavefront through the geomagnetic field lines down to the polar upper atmosphere. The first one is not only maximum at the geomagnetic equator, but is also noted in the low- and middle-latitude north component of the ground magnetic records. On the contrary, the second contribution is produced by two successive current vortices with reversed polarity which are observed to develop in the polar cap ionosphere, moving toward the flanks of the polar cap starting from a location closer to noon (e.g., Marsal et al. 2017).
Following the arrival of the magnetic perturbations generated by solar storms, the magnetospheric and ionospheric currents experience large variations, which in turn create secondary magnetic fields that are systematically recorded by magnetometers. These variations produce the so-called Geomagnetically Induced Currents (GIC) in conductors operating at or near the surface of the Earth (e.g., Ngwira et al. 2015; Pulkkinen 2015). Although the larger magnetic field variations associated with those events are observed at high latitudes, GIC are also recorded at mid-latitudes during major storms (e.g., Torta et al. 2012). The monitoring of GIC is of great economic interest, as they can result in the degradation of high-voltage power transformers, increase the corrosion of pipeline steel or disturb seafloor fiber optic systems (e.g., Oughton et al. 2017).
Since 1976, the SSC determination lists are compiled by the Service of Rapid Magnetic Variations held by the Observatori de l'Ebre, and distributed by the International Service of Geomagnetic Indices (http://isgi.unistra.fr). The detection of those signals is based on the visual recognition of the signals in the magnetograms from five selected low-latitude observatories, although some automatic detection methods have been proposed in the last decade (Khabarova et al. 2006; Segarra and Curto 2013). The SSC catalog qualifies each event according to the clarity of its identification at the reference observatories and provides the mean amplitude value for each event. The events with a very sharp change of rhythm, large amplitude values and remarkable morphology are classed as "unmistakable" events in the catalog.
SSC observations on global seismic networks
To show an example of seismic records contemporary to SSCs, we will first discuss the 7th September 2017 event, reported in the IAGA Bulletin with a mean amplitude of 31.9 nT. We have selected a group of 145 broad-band seismic stations from some of the main worldwide-scale seismic networks, including the Global Seismograph Network (Albuquerque Seismological Laboratory ASL/USGS 1988), the IRIS/IDA seismic network (Scripps Institution of Oceanography 1986), Geoscope (Institut De Physique Du Globe De Paris 1982) and Geofon (GEOFON Data Centre 1993). The raw seismic data have been corrected for the instrumental response and expressed as ground acceleration. This procedure, common in seismological practice, removes the effect of the recording instrument, allowing measurement of the effective movement of the soil. Data processing is very simple, as it only includes the application of a low-pass filter with a corner frequency of 0.01 Hz to suppress the high-frequency signals related to oceanic waves and human activities (e.g., Díaz 2016). We identify a detection when a long-period pulse is observed at the time of the reported SSC, clearly standing out above the filtered signal of the previous minutes. For this example, the SSC can be identified at around 55% of the available sites.
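For readers who want to reproduce this processing chain, a minimal sketch using ObsPy is given below; the station code and onset time are illustrative placeholders, and the 5-sigma detection threshold is an assumption rather than the criterion used in this study (which is based on visual inspection):

```python
import numpy as np
from obspy import UTCDateTime
from obspy.clients.fdsn import Client

client = Client("IRIS")
t0 = UTCDateTime("2017-09-07T23:00:00")   # approximate SSC onset (illustrative)

st = client.get_waveforms("IU", "ANMO", "00", "LHZ",
                          t0 - 3600, t0 + 600, attach_response=True)
st.remove_response(output="ACC")          # counts -> ground acceleration (m/s^2)
st.filter("lowpass", freq=0.01, corners=4, zerophase=True)

tr = st[0]
i0 = int((t0 - tr.stats.starttime) * tr.stats.sampling_rate)
noise = np.std(tr.data[:i0])              # pre-SSC background level
pulse = np.max(np.abs(tr.data[i0:]))      # amplitude of the candidate pulse
print("detection" if pulse > 5.0 * noise else "no detection")
```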
As observed in Fig. 1, the stations showing a clear signal are not limited to higher latitudes but are spread all over the world. Although there are some outliers, larger amplitudes are mostly recorded, as expected, at high latitudes. Figure 2 provides more details on the signals recorded for the same event at stations located at different latitudes, including the Arctic Circle, the tropics, the mid-latitude southern hemisphere and Antarctica, hence denoting the widespread character of the recordings. In each case, we show the horizontal magnetic intensity (H) as recorded by the closest geomagnetic observatory of the INTERMAGNET network and the seismic acceleration after applying a 0.01-Hz low-pass filter. The lower panels show the spectrograms, diagrams showing the temporal evolution of the frequency content of the filtered signals. The spectrograms show that an increased energy level can be identified for at least 45 min after the SSC. Hence, broad-band seismometers do not only record the SSC but are sensitive to the whole magnetic storm.
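A spectrogram of the corrected and filtered trace can be computed with standard tools; in this sketch the input array is a stand-in for the 1-Hz vertical trace obtained above:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import spectrogram

fs = 1.0                          # LHZ channels are sampled at 1 Hz
data = np.random.randn(7200)      # stand-in array; replace with tr.data

f, t, Sxx = spectrogram(data, fs=fs, nperseg=512, noverlap=448)
plt.pcolormesh(t, f * 1e3, 10.0 * np.log10(Sxx + 1e-20))
plt.ylim(0, 10)                   # focus on the band below 10 mHz
plt.xlabel("time (s)"); plt.ylabel("frequency (mHz)")
plt.colorbar(label="PSD (dB)"); plt.show()
```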
As stated in "Introduction" section, our objective in this work has been to verify if SSC signals are recorded systematically at broad-band seismic stations distributed worldwide. To get a representative database, we have inspected all the SSC with mean amplitudes above 30 nT and the events of lower amplitude but qualified as "unmistakable" in the catalog during the 24th solar cycle, spanning between 2008 and 2019. For each of the 34 events selected, we have recovered the data of the 145 seismic stations described above and inspected the occurrence of the SSC signals. We have inspected the seismic horizontal components, but in a large majority of the cases the SSC-related signal is not visible, probably because the higher level of noise affecting the horizontal components at low frequencies, mostly related to tilt effects, masks the eventual recording of small signals as those related to SSCs. Therefore, we have focused on the analysis of the vertical seismic components.
As reported in Table 1, all major SSC events in the IAGA catalog can be identified in global-scale seismic networks, with a percentage of observations ranging between 10 and 65% of the locations, with a mean value of 48%. Figure 3a shows the relationship between magnetic amplitudes and the number of detections in the seismic records, which show a strong coincidence, although some outliers can still be identified. If only SSC events with a reported mean amplitude above 40 nT are considered, the number of broad-band stations where SSC can be identified ranges between 50 and 95, that is, 35% to 65% of the inspected sites. The SSC events with the largest number of detections on broad-band seismometers are those occurring on the 16th July 2017, the 7th September 2017, the 17th March 2015 and the 23rd December 2014, all of them detected at more than 80 seismic stations and presenting mean amplitudes between 32 and 52 nT (Table 1). Figure 3b shows the number of SSC observations during the 2008-2019 period at each of the investigated sites. As discussed previously for a particular example, the observation of SSC is not limited to high latitudes. The stations detecting a large majority of the SSC events are those located near the South Pole and in the northern part of Canada, but many stations at mid-latitudes of the northern hemisphere have detected more than 50% of the 34 inspected SSC events, while stations in South America and the southern part of Africa show the lowest number of detections. As observed in Fig. 3b, the region with a low number of seismic detections of SSC events matches closely the South Atlantic Anomaly (SAA), a very weak magnetic intensity minimum localized in the South Atlantic and due mainly to the contribution of the quadrupole component of the main field (e.g., Olsen et al. 2007).

Fig. 3 Magnetic amplitude of the SSC events vs. seismic observations. a Mean amplitude of the significant SSC events during the 24th solar cycle (red bars) compared with the number of observations of each event in the broad-band stations (black bars). b Total magnetic intensity (NCEI Geomagnetic Modeling Team and British Geological Survey 2019) for year 2018 compared to the number of SSC observations for each investigated broad-band seismic station. F isolines are shown every 1000 nT below 35,000 nT and every 5000 nT above this value to better constrain the SAA anomaly. The size and gray saturation of the circles represent the number of observations at each seismic site, with small white dots representing seismic sites without observations.
On the other hand, most of the sites with positive identifications are located near coastlines, but the pattern is not clear, as there are positive cases at locations far from the coast, while a relevant number of island locations within large oceanic basins show no positive identifications.
We have checked the occurrence of large earthquakes that could produce surface waves liable to be misinterpreted as SSC. Table 1 also reports, in the last column, the earthquakes of magnitude greater than 5 with origin time less than 1 h before the SSC. As we require the seismic pulse at the time of the SSC to be clearly larger than the previous signals, the possibility of misinterpreting a surface wave as an SSC is very low. It is in fact more probable that SSCs occurring during the propagation of the surface waves of large earthquakes get masked by them. This is the case for the 08/07/2019 SSC event, occurring 30 min after a magnitude 5.9 event in Indonesia, for which we have not identified any clear record. The same feature can explain the low number of observations of the 26/01/2017 and 08/03/2018 SSCs, both occurring some minutes after significant earthquakes.
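This catalog cross-check is straightforward to automate; a sketch using the FDSN event service follows, where the catalog choice and the SSC onset time are illustrative:

```python
from obspy import UTCDateTime
from obspy.clients.fdsn import Client

client = Client("USGS")
ssc_time = UTCDateTime("2019-07-08T11:00:00")   # placeholder onset time

cat = client.get_events(starttime=ssc_time - 3600, endtime=ssc_time,
                        minmagnitude=5.0)
for ev in cat:
    origin = ev.preferred_origin() or ev.origins[0]
    mag = ev.preferred_magnitude() or ev.magnitudes[0]
    # events whose surface waves could mask (or mimic) the SSC pulse
    print(origin.time, mag.mag)
```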
Observations in local networks
To better illustrate the characteristics of the seismic recording of magnetic events and to analyze its robustness, we have inspected the signals recorded by local networks of different spatial scales: the collocated sensors at station QSPA near the South Pole, the Norwegian Seismic Array Network, located in a high-latitude region (southern Norway), and the ICGC network in NE Iberia, covering a mid-latitude region.
South Pole: QSPA station
Station QSPA is located at the South Pole Remote Earth Science Observatory, 8 km from the geographic South Pole (89.929° S, 144.438° E). The station, belonging to the IU network (Albuquerque Seismological Laboratory (ASL)/USGS 1988), sits over a glacier 2.5 km thick and is considered one of the quietest seismic stations in the world. Its equipment includes three broad-band instruments installed in boreholes at depths of 270 m (KS54000), 255 m (CMG3-T) and 146 m (CMG3-T), and two additional seismic instruments installed in a cylindrical vault with its floor 3 m below the surface of the snow (STS-2.5 and STS-1V/VBB). The boreholes, excavated in solid ice, are dry and not cased. This site is an excellent choice to compare how SSC are recorded by different seismic sensors, as its location at the South Pole makes it very sensitive to small magnetic field variations and it is free from anthropogenic seismic noise. 24 out of the 34 SSC investigated in this study have been clearly identified in the seismic data recorded at this site. Figure 4 shows the magnetogram recorded at the magnetic station SBA, located at the Scott Base in Antarctica (−77.5° S, 166.78° E), and the seismic records of the 23rd December 2014, when a strong SSC event with a reported mean amplitude near 50 nT happened. The seismic traces are represented normalized in amplitude in Fig. 4a and using a common amplitude scale in Fig. 4b. In both cases, the data have been corrected for the instrumental response and low-pass filtered below 0.01 Hz. As can be observed, the different traces have a similar waveform but differ strongly in amplitude. The two deeper sensors, located at 270 and 255 m, differ in amplitude by a factor of 3, with larger amplitudes in the deeper sensor. The sensor at 146 m depth has an amplitude almost 5 times larger than the same instrument model buried at 255 m. This fact is remarkable, as electromagnetic signals are expected to be attenuated with depth following an exponential law. The two sensors installed in the vault close to the snow surface have opposite polarizations, and the trace recorded by the STS-1V has an amplitude an order of magnitude lower than the rest. As the data have been corrected for the instrument responses, these large differences are unexpected. We have verified that these waveform differences between sensors remain stable for all the SSC-related signals during the analyzed time interval (Additional file 1: Figure S1), regardless of the maximum amplitude of each event. Figure 4c shows the records at the same sensors of a magnitude 6.3 earthquake with epicenter in the Indonesia region, represented also in true amplitude. In this case, the data recorded by each sensor have the same amplitude once corrected for the instrumental response. Therefore, we must conclude that the physical process explaining the recording of magnetic signals is not accounted for by the instrument response, which would have to be modified to include this effect. This confirms that, if seismic sensors had to be used to quantify SSC events, an individual calibration of each instrument would be needed (Forbriger 2007).
High latitudes: NORSAR array
Regarding the high-latitude network, we have inspected the data recorded at stations of the NORSAR PS27 array, covering an area of about 70 × 70 km located 120 km north of Oslo in southern Norway. As in the previous case, the response of each instrument is removed and the data are filtered with a low-pass filter at 0.01 Hz.
The data presented in Fig. 5 show not only that this SSC is clearly observed at the 7 stations, but also that significant differences occur between sites located a few tens of kilometers apart. Amplitude-normalized traces (Fig. 5, left panel) show two families of traces with opposite polarization. Stations NC602, NAO01 and NC204 (top traces) share the same polarity, while NB201, NBO00, NC303 and NC405 have similar waveforms but opposite polarities. The inspection of the traces in true amplitude (Fig. 5, right panel) evidences large amplitude variations between neighboring stations, with sites NC602 and NAO01 having the lowest amplitudes, while sites NBO00 and NC204 have values up to five times larger.
As in the discussion of the collocated sensors at the QSPA site, these variations cannot be related to the location of the sites or to the different models of sensors, as proven by the fact that the records of seismic events do not show polarity reversals or amplitude variations. This confirms that the SSC records on broad-band seismometers depend on the sensitivity to magnetic variations of each particular sensor.
Mid-latitudes: ICGC array
Finally, we have checked the SSC observations in the regional, mid-latitude CA network (Institut Cartogràfic i Geològic de Catalunya 2000). The array is formed by 22 broad-band seismometers covering an area of about 32,000 km² in NE Iberia, at latitudes ranging between 40° N and 42.5° N. This case is of interest because only at three of the sites can SSC events be systematically identified in the seismic records. These sites, all of them in an area of 25 × 30 km in or near the Ebro River delta, are EBR, located within the Observatori de l'Ebre, and CBUD and CFAR, both located in the Ebro River delta, in areas recently gained from the sea. As observed in Fig. 6, the signals related to SSC have large amplitude and better signal-to-noise ratio at CBUD and CFAR, are still clearly identified at site EBR and are not detected at site CMAS, located on the foothills of the Caro Mount, part of the Catalan Coastal Ranges.
Near the edge of the ocean, the abrupt change in conductivity can produce a substantial enhancement of the electric field on the landward side. This so-called "coast effect" is taken into account in geomagnetic sounding studies of Earth conductivity and in the evaluation of the effects of magnetic storms on seafloor fiber optic telecommunication systems and power grids. The difference in the amplitude of the SSC seismic signals can be related to the geological materials at each site, which in turn affect the electrical conductivity of the ground (map inset in Fig. 6). CFAR and CBUD, the sites with maximum amplitude, are located over Quaternary sand and silt terrains, where low electrical resistivity not exceeding 4 Ω m has been observed in the uppermost 50 m, underlying a thin (3 m) more resistive layer (Bellmunt et al. 2018). EBR, where the signal is clear but has smaller amplitude, lies over an alluvial fan formed by unconsolidated sedimentary materials, where resistivity is expected to still be low. Although resistivity measurements near the seismic site are not available, Bellmunt et al. (2018) have shown a clear increase of resistivity toward the inner part of the Ebro River basin, reaching values between 5 and more than 30 Ω m over the first 50 m of depth in the marginal area. On the contrary, CMAS lies over Jurassic black dolomites with low electrical conductivity. At the two sites located in the Ebro River delta (CFAR and CBUD), the sensitivity of the instruments to magnetic events can be enhanced by the large volume of marine saline intrusions documented in the area (Palanques and Guillén 1998). Therefore, it seems clear that there is a relationship between ground conductivity and the sensitivity of the broad-band sensors to magnetic effects.
Low-frequency signals associated with anthropogenic sources
Human activities, in particular in urban environments, generate electromagnetic fields propagating in the solid earth and the ocean basins. The most important sources of man-made electromagnetic fields are high-voltage direct current (HVDC) cables, although significant contributions can arise from cell tower base antennas or leakage currents related to transportation systems. We will focus here on the effect of these leakage currents near our recording site. DC electric railways (subway, tramway) produce magnetic fields both from the intended traction currents and from the stray currents leaking to the Earth, although the first ones are only relevant near the train (Lowes 2009).
Stray currents associated with transportation systems
Subway and tramway systems often use the running rail as the return path of the traction loop. As the insulation is not complete, part of the current flows into the earth, forming what are known as stray or leakage currents. The study of these currents is of interest from an engineering point of view, as they cause electrochemical corrosion of metal structures close to the subway system. An updated review of current-distribution models can be found in Wang et al. (2018). However, modeling stray currents is difficult because all the metallic structures around the railway need to be considered, rail and rail-to-earth resistances can change locally, and the grounding system can be more complex than the usually accepted resistor-network model. This makes it problematic to realistically evaluate the intensity of the stray currents at a particular location.
Stray currents can produce electromagnetic disturbances that strongly affect magnetotelluric measurements at distances of tens of kilometers (de Pádua et al. 2002) and have even been used as a source to measure ground resistivity at distances of around 16 km (Tanbo et al. 2003). As an example of the effect of these currents on different measuring systems, the leakage currents associated with the passage of TGV trains at distances of 1-3 km were identified as noise sources in the CERN Large Electron Positron collider (LEP) near Geneva (Bravin et al. 1998). Díaz et al. (2017) analyzed the sources of background seismic noise for a broad-band station located within the city of Barcelona and noted a periodic change in the amplitude of the seismic noise clearly correlated with the subway system. The subway system in Barcelona operates from 05:00 to 23:59 (local time) Monday to Thursday, from 05:00 to 02:00 on Friday, and continuously from 05:00 on Saturday until 00:00 on Sunday. A tramway line, running directly over the subway tunnel, follows the same timetable, except during Saturday-to-Sunday nights, when the tramway stops between 02:00 and 05:00 while the subway remains active. The subway operates on 1200 V DC, while the tramway uses a 750 V DC electrification system. This rather complex activity pattern allows easy comparison with the signal amplitude variations in the seismic data, in particular during weekends. In the 20-40 Hz frequency band, the individual passage of trains can be identified in the seismic data recorded at about 150 m from the subway tunnel. From inspection of the high-frequency seismic records, it can be observed that trains circulate for around 45 min after the end of the service (when the last train starts from the ends of the line) and that train circulation starts around 20 min before the official opening time.
Observation of leakage currents on broad-band seismometers
Surprisingly, the authors noted that the seismic energy variation at low frequencies (8-50 mHz, 20-125 s) mimics the subway activity cycles. Figure 7 shows the vertical seismic component filtered within this frequency band for a period of 4 weeks, evidencing a time pattern matching the subway operating timetable. Although the low-frequency signal associated with subway activity could result from the deformation generated by the weight of the trains, Díaz et al. (2017) related its origin to the stray currents leaking into the Earth from the subway system, which generate a magnetic field disturbing the broad-band sensor measurements.
Collocated electric field measurements
To test the hypothesis relating these signals to leakage currents, we have measured the electric field close to the site of the broad-band seismometer using electric dipoles. The acquisition system consisted of two orthogonal dipoles oriented parallel and perpendicular to the subway and tramway lines (N20W, N70E). Each dipole spanned close to 10 m. Voltage measurements were sampled at a rate of 250 samples per second and stored in a datalogger equipped with a GNSS-based timing system. The acquisition has been active in two periods: first during a week in June 2018 and then over a longer interval, from September to December 2018. (Fig. 7 caption: Low-frequency recording of the subway activity at the ICJA broad-band seismometer. Displayed data correspond to the vertical component of the seismic acceleration, bandpass filtered between 0.008 and 0.05 Hz. Each line shows 1 day, between September 27 and October 26, 2017. The different timetable of the subway system during the Friday and Saturday nights (orange and red boxes) is clearly shown by the seismic data. The night between October 11 and 12 shows the same pattern as Fridays, as October 12 is a bank holiday in Spain.) Figure 8 shows the electric field in one of the dipoles and the seismic acceleration in the 0.005-0.05 Hz band between October 18 and 23, 2018. As can be observed, the correlation between both signals is very high. The periods with significant electric field amplitude match the operation times of the transportation systems; during the nights of working days, both the electric field and the seismic data are close to zero during the period without subway activity (00:40-04:40 local time). During Friday night, the time interval with low amplitudes is limited to 3 h (00:00-03:00 UTC). Finally, during the Saturday-to-Sunday night, when the subway remains operative but the tramway does not, the electric field above 0.01 Hz shows a minimum only slightly higher than during weekday nights. It seems very clear that the amplitude variations for frequencies between 0.01 and 0.1 Hz in both the electric field and the seismic signal are correlated with the activity of the subway and tramway systems. This corroborates the hypothesis relating the seismic signals often observed at low frequencies to variations of the magnetic field in the ground.
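Where the text above reports a very high correlation between the band-passed electric field and seismic acceleration, the check can be sketched as follows. This is a minimal illustration, assuming both series have already been synchronized on a common time base; the random arrays stand in for the real datalogger and seismometer records.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 250.0  # samples per second, as in the dipole acquisition described above

def bandpass(x, fs, f_lo=0.01, f_hi=0.1, order=4):
    """Zero-phase Butterworth bandpass over the band where both signals correlate."""
    sos = butter(order, [f_lo, f_hi], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

# Placeholders: with real data these would be the datalogger voltage and the
# seismometer acceleration, resampled to the same rate and time window.
rng = np.random.default_rng(0)
e_field = rng.standard_normal(int(fs * 3600))
seis_acc = rng.standard_normal(int(fs * 3600))

e_f = bandpass(e_field, fs)
s_f = bandpass(seis_acc, fs)

# Zero-lag correlation coefficient; with the real records this is the number
# behind the statement that the correlation between both signals is very high.
r = np.corrcoef(e_f, s_f)[0, 1]
print(f"correlation in the 0.01-0.1 Hz band: {r:.2f}")
```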
The corresponding spectrograms (Fig. 8c) confirm this interpretation and allow some further details to be explained. The time-variation pattern in the seismic and electric data is very similar between 0.01 and 0.1 Hz. The electric field data are dominated by signals related to subway activity up to frequencies above 1 Hz, while the seismic data above 0.1 Hz reflect the well-known variations related to oceanic wave activity (e.g., Díaz 2016). Note that the large peak observed on October 22 in the seismic data corresponds to the arrival of the seismic waves from a series of three earthquakes near Vancouver (Canada), with magnitudes 6.5, 6.8 and 6.5 and origin times 05:39, 06:16 and 06:22 UTC.
Below 0.01 Hz, the electric field variations behave differently, with large peaks lasting 2-3 h appearing on Thursday, Saturday and Sunday during the first part of the day. From inspection of the complete dataset, we have concluded that this low-frequency signal appears on approximately 60% of the inspected days, most of the time during morning hours, but without any systematic pattern. We relate its origin to the use of electric devices close to the dipoles, although this feature, which does not affect the main discussion in this contribution, should be studied further. The relative minimum observed in the electric field during the Saturday-to-Sunday night suggests that the electric field measured at the surface is more sensitive to the leakage current from the tramway line (not operating at night) than to the leakage from the subway system (working continuously from Saturday morning to Sunday night). On the contrary, the amplitude variations recorded at the broad-band seismometer, installed in the basement of the building approximately 3 m below the surface, closely follow the activity of the subway system. Differences in the resistivity of the uppermost level of the subsoil may explain this observation, although additional observational and theoretical efforts are needed to explain the feature.
Discussion and conclusions
We have shown that broad-band seismometers are widely sensitive to variations in the Earth's magnetic field, of both natural and anthropogenic origin. Large SSCs affect broad-band instruments worldwide, although large signals are better recorded at high latitudes and a reduced number of detections is observed in South America and Africa. We have documented that not only the SSC but often the whole magnetic storm is recorded by seismic instruments. These observations confirm the early observations made by Wielandt (2002) and Forbriger (2007) and prove that the seismic detection of SSCs is a worldwide phenomenon.
Our observations prove that magnetic signals are present in a wide range of broad-band sensors, including the STS-1, STS-2, Trillium 240 and Trillium 120, as well as post-hole and borehole instruments (Trillium 120 Post-Hole, Geotech KS-54000 borehole). Therefore, the sensitivity of broad-band sensors to magnetic signals is a generic feature, affecting instruments whose sensors measure the vertical and the two horizontal components directly, as well as those built following the symmetrical or Galperin arrangement, with three orthogonal sensors mounted obliquely, each of them sensing the same proportion of the gravitational acceleration (e.g., Townsend 2014). Analyzing data from five collocated sensors at station QSPA near the South Pole, we have also noted that the relative variations in amplitude and polarity between the sensors remain stable during the investigated period, spanning from 2011 to 2019.
Analyzing regional seismic networks, we can see that differences in amplitude and polarity do exist among close sites. Detections can differ markedly in the sharpness of the onset, polarity, dominant frequency, or signal-to-noise ratio. To explain this, apart from the constructive peculiarities of each instrument, we must turn to the contribution of the telluric currents induced by the SSC signals, which depend on the local resistivity structure of the Earth's crust below the sites, including lateral heterogeneities such as land-ocean interfaces. In this sense, we have shown that the areas of low electrical resistivity in the Ebro delta in NE Iberia seem to enhance the magnetic perturbations, allowing their systematic registration by seismic instruments.
In urban environments, we have shown that the leakage currents from public transportation systems such as tramways or subways are detected by broad-band sensors, dominating the spectra below 0.01 Hz. To verify this origin, we have recorded the seismic signal and the electric field simultaneously, and the results show a large correlation between both datasets in the frequency range 0.01-0.1 Hz. It can be noted that the electric field measured at the surface is more sensitive to the leakage currents from the surface tramway, while the seismic sensor, located in the basement approximately 3 m below the surface, seems more consistent with the subway activity. Low-frequency time variations in the seismic energy of this kind had previously been reported by Green et al. (2017) for the London Underground and by Sheen et al. (2009) near subway lines in Seoul and at some more sparse worldwide sites, but their interpretation remained unclear. The collocated electric field measurements clearly support the hypothesis of a magnetic field alteration due to stray currents.
To provide a tentative explanation of the physical mechanisms linking these phenomena, we must first consider the way a seismometer is built. In broad-band seismometers, the displacement of an inertial mass is detected by a capacitive displacement transducer, converted to an electric signal and transmitted to a feedback coil, which in turn restores the position of the mass by applying a compensatory magnetic force. The electric current generating this compensatory force is proportional to the ground-motion acceleration. To obtain a broad-band response, the sensors include a large capacitor acting as an integration stage, and the final output is then a voltage proportional to ground velocity (Havskov and Alguacil 2016).
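The force balance just described can be written compactly. The notation below is ours (G for the motor constant of the feedback coil, m for the inertial mass) and is meant only to show why an extraneous magnetic force on the mass is indistinguishable from a real ground acceleration:

```latex
% Balance of forces on the inertial mass m: the feedback coil force G*I_fb
% cancels the inertial force, so the feedback current tracks ground
% acceleration; an extra magnetic force F_mag is read as apparent acceleration.
\[
  G\,I_{\mathrm{fb}} = m\,a_{\mathrm{ground}}
  \;\;\Longrightarrow\;\;
  I_{\mathrm{fb}} = \frac{m}{G}\,a_{\mathrm{ground}},
  \qquad
  a_{\mathrm{app}} = \frac{F_{\mathrm{mag}}}{m}.
\]
```

Any force of magnetic origin acting on the mass or suspension therefore enters the output scaled exactly like ground acceleration, which is why it can dominate at low frequencies, where true ground accelerations are small.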
Broad-band seismometers require materials with low thermal expansion coefficients for the suspension springs of the inertial mass. These springs are usually built from Elinvar, a nickel-iron-chromium or nickel-iron-molybdenum alloy (Guillaume 1967) which has the required small thermal coefficient but is sensitive to magnetic fields (Rau 1977). Forbriger (2007) presented seismograms showing clear signals corresponding to a couple of SSC events recorded at stations of the German Regional Seismic Network and proposed that the magnetic field disturbances affect the suspension springs of the inertial mass, resulting in apparent accelerations proportional to the variations of the magnetic field which, at low frequencies, can be larger than the mechanical accelerations generated by soil vibration. Alternatively, Kozlovskaya and Kozlovsky (2012) proposed that the magnetic signals generated by geomagnetic pulsations in seismic records have their origin in the feedback system of the sensor. Under this hypothesis, the geomagnetic field variations would induce a current modifying the electrical current flowing through the large capacitor and the feedback coil, resulting in apparent accelerations not related to ground motion.
It is well known that spurious signals related to electric field variations unrelated to geomagnetic field variations are often observed in seismic recordings. Examples include the anomalous apparent acceleration due to the increased electric current supply during hard-disk access in the Quanterra baler recording system (Forbriger et al. 2010), the checking of sensor leveling in some OBS equipment (Stähler et al. 2017), and the spikes generated by poor filtering of the charge regulators used to connect solar panels to the instrument battery (Havskov and Alguacil 2016). These effects are also observed in high-frequency geophones equipped with neither force-balance systems nor sophisticated suspension springs. In these sensors, ground motion is measured by a wire coil moving within a magnetic field, which produces an electrical signal proportional to ground velocity. Sudden variations in the electric field can modify this output voltage, leading to the observed spurious signals.
From the previous points, we propose a working hypothesis in which the imprint of magnetic signals in seismic records is mostly due to currents generated by the magnetic field variations, which modify the current applied by the force-balance system (broad-band instruments) or the voltage produced by the moving mass (geophones). This hypothesis would explain the enhancement of the signals in areas of high conductivity, where the telluric currents induced by SSC events are expected to be relevant, as well as the low-frequency seismic signal detected during subway activity in urban environments and related to the effect of leakage currents. However, the fact that the region of the South Atlantic Anomaly (SAA), where the intensity of the main magnetic field is much lower, coincides with the region where magnetic signals are most difficult to observe in seismic stations leads us to think that the magnetization of the spring should not be disregarded. Forbriger (2007) suggested that the Earth's permanent field adds a magnetization bias to the overall spring magnetization. It appears that only seismometers located in regions where the sum of the constant magnetization plus that due to field variations exceeds a threshold are sensitive to them. Therefore, for the events analyzed in this study, seismometers located within the SAA would hardly reach the spring magnetization that triggers the effect. Further experimental and theoretical effort is needed to fully understand how these processes generate signals related to magnetic events in seismometers.
We have seen that the SSC signals recorded by broad-band seismometers are affected by multiple factors depending on each sensor and on specific details of its location, which makes it difficult to use them to obtain quantitative measurements of these magnetic features. However, as pointed out by Forbriger et al. (2010), it is possible to calibrate the seismometer response to the magnetic field using a nearby (within a few hundred kilometers) magnetometer. The time stability of the signals recorded by the different seismic sensors at the South Pole station seems to confirm the feasibility of this approach. Although a worldwide magnetic network is available nowadays, the number of available broad-band seismic stations is much larger than the number of magnetic observatories, which is on the order of a few hundred. Therefore, seismic recordings could be used as a complementary tool to monitor the occurrence of magnetic field disturbances in areas far from magnetic observatories.
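As a rough illustration of the calibration idea attributed to Forbriger et al. (2010) above, a transfer function between a nearby magnetometer and the seismometer can be estimated with standard cross-spectral tools. The sketch below uses placeholder arrays and illustrative names; the sampling rate and segment length are assumptions, not values from the study.

```python
import numpy as np
from scipy.signal import csd, welch

fs = 1.0     # Hz; an LP-channel rate, assumed for the sketch
nper = 4096  # segment length for the spectral estimates

rng = np.random.default_rng(1)
b_field = rng.standard_normal(86_400)                     # magnetometer record (placeholder)
seis = 0.3 * b_field + 0.1 * rng.standard_normal(86_400)  # seismometer record (placeholder)

# Transfer-function estimate T(f) = S_bs(f) / S_bb(f) from cross- and auto-spectra
f, s_bb = welch(b_field, fs=fs, nperseg=nper)
_, s_bs = csd(b_field, seis, fs=fs, nperseg=nper)
t_f = s_bs / s_bb

# |T(f)| is the apparent seismic amplitude per unit magnetic field; applied to
# the magnetometer record it predicts the magnetic imprint in the seismogram.
print(f"|T| at {f[10]:.5f} Hz: {np.abs(t_f[10]):.3f}")  # ~0.3 for this toy data
```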
"Geology",
"Physics",
"Environmental Science"
] |
Mapping of physics problem-solving skills of senior high school students using PhysProSS-CAT
Evaluation using computerized adaptive tests (CAT) is an alternative to paper-based tests (PBT). This study aimed at mapping physics problem-solving skills using PhysProSS-CAT on the basis of item response theory (IRT). The study was conducted in Sleman Regency, Yogyakarta, involving 156 Grade XI students of senior high school. Sampling was done using a stratified random sampling technique. The results of the study show that PhysProSS-CAT is able to measure physics problem-solving skills accurately. Students' competences in physics problem solving can be mapped as 6% in the very high category, 4% in the high category, 36% in the medium category, 36% in the low category, and 18% in the very low category. This shows that the majority of the students' competences in physics problem solving lie within the medium and low categories.
Introduction
One of the 21st-century learning and innovation skills is the ability related to critical thinking, problem solving, technology, and information (Daryanto & Karim, 2017). Technology is an integral aspect of the development of a nation. The more advanced the culture of a nation, the more varied and complicated the technology that is used. Problem solving is a cognitive process directed at the attainment of an objective when there is a solution method to solve a problem (Bueno, 2014). Physics learning greatly needs problem-solving skills; it is, therefore, necessary to have evaluation as one of the efforts to elevate learners' thinking skills. Nitko and Brookhart (2011, p. 3) define evaluation as a process of obtaining information for making decisions concerning learners, curriculum, programs, schools, and educational policy. Evaluation instruments used in learning cover tests and non-tests (Nitko & Brookhart, 2011). Test-type instruments can be further grouped into objective tests and non-objective tests. Objective tests can be in the form of multiple choice, short answers, matching, and objective essays. Non-objective tests can be open essays, work performance or observation, and portfolios or project tasks (Mundilarto, 2010, p. 52). Multiple-choice test items can be used to assess more complex learning outcomes concerned with the aspects of recall, understanding, application, analysis, synthesis, and also evaluation (Arifin, 2016, p. 138). The administering of a test can be done in two modes: paper-pencil and computer-based testing (CBT). The paper-pencil test is the paper-based test (PBT) as has long been done, while CBT is computer-based (Pakpahan, 2016, p. 24).
PBT is based on the assumption that learners of the same age and education level have the same level of competences. In reality, however, there is significant variation (Bagus, 2012, pp. 45-46). The PBT model has many shortcomings, especially related to deviant behaviors such as fraud, discussion, sharing of answer keys, or even teachers or schools giving out answer keys so that they are not regarded by society as failing in the running of education and learning (Balan, Sudarmin, & Kustiono, 2017, p. 37). Further, Retnawati (2014, p. 190) states that Indonesia is a big archipelago consisting of tens of provinces. As such, the distribution of test packages from the centre to the regions faces many obstacles, for example during the national examination (NE). This causes, among others, test administration to be compromised and test results to be invalid in that they do not represent the real competences of the students. These limitations of PBT can be overcome by testing using the computer.
Computer-based testing has some advantages, including that there is no need to wait for weeks for testees to receive their scores; scores can be obtained immediately. CBT also provides the facility of giving each testee pre-arranged test items, giving the testee the freedom to select the next test item (Miller, Linn, & Gronlund, 2009, p. 12). According to Luecht and Sireci (2011), each model has its own advantages and disadvantages. CBT gives more advantages than PBT does in that, among others, its scoring system is automatic and it reduces the burden on the testees (Riley & Carle, 2012). However, CBT is similar to PBT in that it may not be able to measure testees' abilities accurately, since there is still a potential for fraud in its administration. CBT makes testees respond to all of the items, so there is inefficiency in the use of time.
There are two theories in assessment that have been developed empirically and technologically: classical test theory (CTT) and item response theory (IRT). CTT and IRT broadly represent two different frames of assessment. In the view of CTT, the scoring of a test is done partially, using the steps that need to be taken to answer a test item correctly. Scoring is conducted step by step, each testee's item score is obtained by summing the scores of each step, and achievement is estimated from raw scores. This scoring model may not be appropriate since the difficulty level of each step is not taken into consideration (Istiyono, Mardapi, & Suparno, 2014, p. 4). At the item level, the CTT model is relatively simple; CTT does not demand a complex theoretical model to relate a testee's success in responding to a test item. Rather, CTT collectively considers a group of testees for a particular item. IRT has been developed as an important complement to CTT in the design, interpretation, and evaluation of a test or examination. IRT has a strong mathematical basis and relies on complex algorithms calculated more efficiently on the computer (Adedoyin, 2010, p. 108). IRT supports the use of the computer in educational testing. IRT can be used to provide any item saved in the computer independently, so that the computer can select a test from item banks, manage the item-administration procedure, or design a model for a new computer-based item-response test (Masters & Keeves, 1999, p. 139; van der Linden & Glas, 2003). Thus, a test which uses CAT fits very well with item response theory (IRT).
Hambleton, Swaminathan, and Rogers (1991, p. 9) propose three assumptions underlying item response theory: (1) the chance of answering an item does not depend on that of another item (local independence), (2) an item measures one competence dimension (unidimensionality), and (3) the response pattern of each item can be represented by an item characteristic curve. The weaknesses of the classical theory are tackled by these three assumptions. Hambleton et al. (1991) also point out the limitations of the classical theory. First, item statistics depend on the group from which they are obtained; i.e., they depend on the group and the test. Second, reliability is defined by parallel-test concepts, which are difficult to realize in practice. This is due to the fact that individuals can never be the same in a second test, since they may forget, gain new competences, or have different motivation and anxiety levels. Third, standard errors of measurement are assumed to be the same for all subjects, and variability in errors is not considered. Fourth, the classical theory focuses on test-level information and puts item-level information aside. Test-level information is an additive process, that is, the amount of information summed across items, while item-level information is the information for certain items only. These limitations show that the classical theory deals with individual total scores and not with each testee's competences at the item level.
A CAT is based on item response theory. Hambleton and Swaminathan (1985, p. 48) state that there are three types of scoring systems: dichotomous, polytomous, and continuous. Of the three, the dichotomous system is the most used in educational evaluation. The models that can be used for dichotomous data are latent linearity, the perfect scale, latent distance, the one-, two-, and three-parameter normal ogive models, the one-, two-, and three-parameter logistic models, and the four-parameter logistic model (Barton & Lord, 1981; Guttman, 1944; Lazarsfeld & Henry, 1968; Lord, 1952). The dichotomous model is only suitable for items with two-category scores such as true/false. For items with more than two score categories, the polytomous system is used.
The polytomous scoring system has a number of models, such as the nominal response model, the graded response model, the partial credit model, and others (Bock, 1972; Masters, 1982; Samejima, 1969). The partial credit model (PCM) was developed to analyze test items which require multiple-step responses, wherein the items follow partial credit patterns so that individuals with higher competences will score higher than those with lower competences (Istiyono, 2017, p. 2). Therefore, it is reasonable that the partial credit model is used for multiple-choice tests.
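For reference, the PCM of Masters (1982) gives the probability that a person with ability θ obtains score x on item i with step difficulties δ_ij as:

```latex
\[
  P_{ix}(\theta) =
  \frac{\exp\Big(\sum_{j=0}^{x}(\theta-\delta_{ij})\Big)}
       {\sum_{k=0}^{m_i}\exp\Big(\sum_{j=0}^{k}(\theta-\delta_{ij})\Big)},
  \qquad x = 0,1,\ldots,m_i,
\]
```

with the convention that the sum for j = 0 is zero and m_i the maximum score of item i.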
A CAT is based on the principle that items must be selected in such a way that they measure the testee's competences. Generally, an item is selected because it gives the most information for estimating the testee's competence. Then, based on the true/false response pattern, the competence level is re-estimated and the next item is selected on the basis of the newly estimated competence. These processes continue until a certain precision of the estimated competence is reached (Hambleton & Zaal, 1991). Based on these facts, a need is felt for the development of a test that measures testees' competences in problem solving. The computerized adaptive test (CAT) has been developed as an alternative to CBT and PBT, providing better-targeted items and shorter tests adjusted to each testee. CAT is a testing system more advanced than CBT (Hadi, 2013, p. 12). According to Suyoso, Istiyono, and Subroto (2017), computer-based evaluation is increasingly needed and can help teachers in conducting evaluation in their subject-matter teaching. In the 21st century, more emphasis is placed on the higher-order thinking cognitive domain, such as Bloomian HOTS, Marzonian HOTS, critical thinking, creative thinking, and problem solving (Brookhart, 2010; Heong et al., 2011; Schraw & Robinson, 2011). Testees interact directly with the computer containing the test items of the subject matter. They work on answering test items through the computer as they do in PBT through writing. In CBT, the number of items is the same as in PBT, and item characteristics do not function as they do in CAT (Pakpahan, 2016, pp. 26-27).
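The adaptive loop just described (select the most informative item, re-estimate ability, stop at a target precision) can be sketched in a few lines. The sketch below uses the dichotomous Rasch model for brevity, whereas PhysProSS-CAT scores polytomous PCM items; the item bank, stopping rule, and estimation details are illustrative assumptions only.

```python
import numpy as np

def p_correct(theta, b):
    """Rasch (1-PL) probability of answering an item of difficulty b correctly."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def item_information(theta, b):
    """Fisher information of a Rasch item at ability theta: p(1-p)."""
    p = p_correct(theta, b)
    return p * (1.0 - p)

def run_cat(bank, answer, theta0=0.0, se_target=0.4, max_items=25):
    theta, used, responses = theta0, [], []
    info = 1e-6
    while len(used) < max_items:
        # 1. Give the unused item that is most informative at the current theta
        free = [i for i in range(len(bank)) if i not in used]
        nxt = max(free, key=lambda i: item_information(theta, bank[i]))
        used.append(nxt)
        responses.append(answer(nxt))  # 0/1 response from the testee
        # 2. Re-estimate theta with a few Newton-Raphson steps on the likelihood
        for _ in range(10):
            p = p_correct(theta, bank[used])
            grad = np.sum(np.asarray(responses) - p)
            info = np.sum(p * (1.0 - p))
            theta = float(np.clip(theta + grad / max(info, 1e-6), -4.0, 4.0))
        # 3. Stop once the standard error of theta is small enough
        if 1.0 / np.sqrt(max(info, 1e-6)) < se_target:
            break
    return theta, used

# Toy usage: a simulated testee of true ability 0.8 and a 50-item bank
rng = np.random.default_rng(42)
bank = np.linspace(-2.0, 2.0, 50)  # item difficulties
respond = lambda i: int(rng.random() < p_correct(0.8, bank[i]))
theta_hat, administered = run_cat(bank, respond)
print(f"estimated theta {theta_hat:.2f} after {len(administered)} items")
```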
The use of CAT does not require a great number of items, since the computer is able to give items in accordance with the testees' competence levels. In contrast, PBT, which is developed from classical theory, needs a great number of items since it must measure the testees' optimum competences repeatedly (Gregory, 2014). According to Weiss (2004, p. 82), CAT is a technology with the potential to give better assessment, in less testing time, for various applications in counseling and education. In these two fields there is a clear need for measurement. Among the many varieties of evaluation applications, the ones able to exploit the advantages of good and efficient assessment are those which apply CAT technologies.
Method
The study was conducted in state senior high schools in Sleman Regency, Yogyakarta Province, during the even semester of the 2017/2018 academic year. The subjects of the study were 156 physics students selected by a stratified random sampling technique, taking the high, medium, and low groups into consideration based on the students' National Examination scores in Physics. The size of the sample was determined from the population using the 1-PL guideline, which calls for 150 to 250 students (Linacre, 2006).
Data collection was conducted using a test to map students' problem-solving competences in the field of physics. The research participants were asked to take the PhysProSS-CAT test, which was the product of this research and development.
The PhysProSS-CAT consists of items that have undergone development in the form of multiple-choice items with reasons. The material covers the equilibrium of rigid bodies, elasticity and Hooke's law, static fluids, dynamic fluids, and temperature and heat. The development of the instrument was based on the revised 2013 Curriculum, covering the aspects and sub-aspects of problem-solving skills (Ministry of Education and Culture, 2013). The aspects included identification, planning, implementation, and evaluation. The sub-aspects included identifying, differentiating, planning, formulating, sequencing, connecting, applying, checking, and criticizing. The test was developed into four sets of test items, 180 in total, with nine anchor items.
The test items had characteristics that fulfilled the requirements for testing, as follows: (a) based on the results of content validation by evaluation experts, the test was valid in content, with an Aiken's V value of 0.97; (b) based on the empirical evidence, the test fitted the Partial Credit Model (PCM) for polytomous data with four categories, with a mean INFIT MNSQ and standard deviation of 1.00 ± 0.25; (c) based on the Cronbach's alpha reliability estimate, all items were regarded as reliable at a value of 0.93; (d) based on the difficulty levels, the test was regarded as good, with a range of -1.23 to 1.50; and (e) based on the information function and SEM, the test was able to estimate competences in the range between -2 and 1.6.
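For point (a), Aiken's V for a single item is V = Σ(r - lo) / (n(c - 1)), computed from n expert ratings r on a scale of c categories whose lowest point is lo. A small sketch with made-up ratings:

```python
# Aiken's V content-validity index for one item; the example ratings below
# are invented purely to show the computation, not taken from the study.
def aikens_v(ratings, lo=1, c=5):
    n = len(ratings)
    return sum(r - lo for r in ratings) / (n * (c - 1))

# e.g. three experts rating an item 5, 5, 4 on a 1-5 relevance scale
print(round(aikens_v([5, 5, 4]), 2))  # -> 0.92
```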
The scoring of the test used the partial credit model (PCM), a development of the 1-PL model belonging to the Rasch family. Meanwhile, the results of the physics problem-solving test administered through the computerized adaptive test (CAT) were categorized into levels adapted from Azwar (2010). The categories are shown in Table 1.
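A sketch of such a five-level categorization of theta is given below. The cut points follow the common Azwar-style convention of mean ± 0.5 SD and ± 1.5 SD with the theta scale's ideal mean 0 and SD 1; the actual thresholds in Table 1 may differ.

```python
# Five-level categorization of theta scores; cut points are an assumed
# Azwar-style convention (mu +/- 0.5 sd and +/- 1.5 sd), not the study's table.
def categorize(theta, mu=0.0, sd=1.0):
    if theta > mu + 1.5 * sd:
        return "very high"
    if theta > mu + 0.5 * sd:
        return "high"
    if theta > mu - 0.5 * sd:
        return "medium"
    if theta > mu - 1.5 * sd:
        return "low"
    return "very low"

print(categorize(0.7))   # -> "high"
print(categorize(-1.8))  # -> "very low"
```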
Findings
The level of item difficulty presented is directly matched with the level of students' problem-solving competence. The higher the students' theta values, the more difficult the items given; the lower the theta, the lower the item difficulty. Students respond to items whose difficulty level is comparable with their competence level. The first item is one with a medium level of difficulty. If the students answer it correctly, the test gives them a more difficult item; if they get it wrong, the test gives them a less difficult item. The exposed items have been fitted to the problem-solving aspects, namely identification, planning, implementation, and evaluation. The presentation of an item using CAT can be seen in Figure 1. In Figure 1, a PhysProSS-CAT test item can be seen in the multiple-choice format with reasons. The testees are asked to select the correct answer and give the reason for selecting it. After a testee completes the test on the CAT, a recapitulation report from the computer appears on the screen, as presented in Figure 2.
The recapitulation report can be seen immediately by the administrator, teacher, and student. The administrator can see the reports of all the test takers. The teacher can see only the reports of his or her students. The report is in the form of theta scores representing the students' competences. The students' competence level (θ) is categorized as very high, high, medium, low, or very low on a five-level scale (Azwar, 2010, p. 63), as can be seen in Table 2. As shown in Table 3 and Figure 3, of the 156 students taking the CAT test, ten are in the very high category, six in the high, 56 in the medium, 56 in the low, and 28 in the very low. In percentages, 6% of the students are in the very high category, 4% in the high, 36% in the medium, 36% in the low, and 18% in the very low. It means that most students' competence levels are in the medium and low categories. Mapping was done for the three schools based on the scores obtained from the national examination (NE) in Physics, categorized as high, medium, and low. The results of the mapping are presented in Table 4, Table 5, and Table 6.
Discussions
Based on the findings of the research, it is clear that the PhysProSS-CAT test has been able to map students' competences in physics problem solving quite well and accurately. The CAT-based instrument has been able to select items in accordance with the students' competence levels. In this case, students of School A, which is high in the national examination, are dominantly in the medium category but have the highest scores in the problem-solving test. In School B, which is medium in the national examination, the students are dominantly in the medium and low categories. Meanwhile, in School C, with a low level of national examination results, the students are dominantly low in their problem-solving competence. This means that the mapping has matched test items well with students' levels of competence.
The results of the overall mapping of the 156 students participating in the study show that many of the students are in the medium category. This can be traced to the factors of students' motivation, instructional processes, and evaluation practices. Here, only the evaluation factor will be discussed further. Accurate evaluation will be able to support students to learn using higher-order thinking (Istiyono et al., 2014, p. 2). The learning processes and evaluation are supposed to deal with higher-order thinking, including problem solving, so that students' problem-solving skills improve. In time, the need is felt to develop evaluation that is able to measure these students' skills. Ultimately, this will help in realizing students' learning achievements. Meanwhile, Figure 8 presents the recapitulation report of the test results, which consists of scores, test items answered, and time. During the test administration, most students completed 18 to 25 test items, in 35 to 50 minutes, out of the total of 154 items. The minimum number of items completed was nine, and the shortest time was 14 minutes. The maximum number of items completed was 25, and the longest time was 58 minutes. Students did not need to complete all the items, only those within their competences. This is in line with Gregory (2014), stating that CAT testing does not need too many items since, in computer-based testing, the computer provides test items that are within the range of the testee's competences.
Departing from the weaknesses of the paper-based testing (PBT) mode, in which all testees take all items without consideration of their skill differences, the computer-based testing (CBT) can instead be designed using the adaptive mode. In this mode, the next items are given on the basis of the testee's competence in completing the previous items (Istiyono, 2013). It is, therefore, reasonable to use the computerized adaptive test (CAT) as an alternative technique for testing, since it gives a better estimation result using a shorter test adjusted to the testee's competence. Further, testees do not have to answer all questions, and this saves testing time. In accordance with Huang, Chen, and Wang (2012), the superiority of CAT over PBT is that CAT is able to achieve the same precision with fewer items and a shorter time. In CAT, the testee only needs to click on the answers until the computer finds its most accurate estimate of his or her competences, terminates the test, and gives the score. CAT is most suitable for selection tests and large-scale testing.
The use of PhysProSS-CAT can minimize fraud, since testees do different items and need different numbers of items to complete the test; the CAT program gives different items to testees in accordance with their levels of competence. The safety and confidentiality of the items are guarded. In turn, the results of the testing will be reliable. In PBT and CBT testing, chances for fraud abound for the opposite reason: testees take the same test with relatively the same items. The PhysProSS-CAT can perform its testing functions safely, quickly, and accurately, showing the actual competences of the testees. For this reason, the test helps much in competence mapping for various purposes. The immediate issuance of the test results helps the teacher map the students' competences in a short time. The teacher can also immediately evaluate and plan further programs.
In line with the opinions proposed by van der Linden and Glas (2003), a number of reasons for switching to the CAT type are: (1) CAT makes it possible for testees to schedule their own testing in accordance with their preferences; (2) testing is administered in a comfortable atmosphere with fewer people around than in conventional paper-pencil testing; (3) CAT processes the data and gives out the results fast; and (4) test items and materials are more varied in levels and sizes.
It is possible for teachers to select a test from a variety of choices, but testing must be done in accordance with the needs and situation. In a school with adequate computer facilities, CAT-type testing is preferable. For the assessment of higher-order thinking skills, more specifically, the CAT model is more appropriate since it measures competences accurately and efficiently and saves energy and administration time. This is supported by Jiao, Macready, Liu, and Cho (2012), stating that computerized adaptive testing achieves higher measurement accuracy and provides efficient administration of the assessment. In view of the superiority of PhysProSS-CAT, it is suitable for testing individuals' competences in settings such as selection testing and final examinations. The test saves time and energy and minimizes fraud.
Conclusion
Based on the results of the study, it can be shown that the PhysProSS-CAT is able to accurately map the students' competences in problem solving in the field of physics. In percentages, students' competences can be rated as very high (6%), high (4%), medium (36%), low (36%), and very low (18%). This means that the majority of the students' competences are within the medium and low categories. On average, of the 154 items provided in the test, students completed between 18 and 25 test items in a time range of 35 to 50 minutes. Meanwhile, the minimum number of items responded to was 9, with a time of 14 minutes, and the maximum number was 25, with a maximum time of 58 minutes. Therefore, PhysProSS-CAT is able to map problem-solving competences accurately and efficiently, saving time and energy.
Suggestions
In the administering of CATs, including PhysProSS-CAT, it is recommended that administrators provide items with difficulty levels that are more normally distributed. In relation to technical facilities, it is suggested that administrators use adequate numbers of items to anticipate trouble in the computer network, since testees access the same items at the same time.
Figure 3. Mapping results of competence levels in three state senior high schools
Figure 4. Mapping of problem-solving competence levels in Senior High School A. As shown in Figure 4, of the 64 students in State Senior High School A, 8% are in the very high category, 6% high, 36% medium, 30% low, and 20% very low, indicating that most students' competences in this school are in the medium category.
Figure 5. Mapping of problem-solving competence levels in Senior High School B
Figure 6. Mapping of problem-solving competence levels in Senior High School C. As seen from Figure 6, of the 32 students from State Senior High School C who participated in the study, 10% are in the very high category, 6% high, 28% medium, 47% low, and 9% very low, indicating that most students in this school are in the low category.
Figure 7. Mapping of the students' problem-solving competences in three schools
Figure 8. Recapitulation report of the PhysProSS-CAT test results
Table 1. Intervals of students' problem-solving skills
Table 3. Mapping results of competence levels in three state senior high schools
Table 4. Mapping of problem-solving competence levels in Senior High School A
Table 5. Mapping of problem-solving competence levels in Senior High School B
Table 6. Mapping of problem-solving competence levels in Senior High School C
"Physics"
] |
The Impact of Digitalization and Policy Changes on Savings Instruments (Savings Certificates) in Bangladesh: A Response from the Investors
Investment in National Savings Certificates (NSCs) has been the most popular savings instrument among the people of Bangladesh, providing guaranteed returns with tax savings. The government of Bangladesh issues NSCs mainly to collect money from the small and scattered savings of the general public. They bring marginal and special populations into the government's social safety net programs to help ensure an equitable and poverty-free society. Recently the authorities have introduced automation and regulatory deterrents, such as the mandatory submission of the e-TIN, national identity cards, bank accounts, cheque transactions, and increased deduction at source. My research has attempted to identify the impact of these policy changes on investors' minds and how they react. This study suggests that the recent policy changes and the mandatory documents required to purchase NSCs have had no impact on the investment decision, as people still consider this the most attractive and secure means of investment.
Introduction
The National Savings Certificate (NSC) is a popular and safe small-savings instrument that combines tax savings with guaranteed returns. Sanchayapatra, as the National Savings Certificate is known in Bangladesh, is regarded as a risk-free investment. Over the years, it has become part of the savings mobilization scheme of the Government of the People's Republic of Bangladesh (Ministry of Finance, 2019). Sanchayapatra (National Savings Certificate) encompasses different types of savings schemes operated by the National Savings Department, Bangladesh. This project is monitored and supervised by the Internal Resources Division of the Ministry of Finance of the Government of Bangladesh. Savings certificates are considered a form of loan to the government, because it has to pay monthly interest to savings certificate holders (National Budget Speech, 2021). NSCs are issued mainly to collect money from the small and scattered savings of the general public. They were started to bring marginal and special populations into the government's social safety net programs for ensuring an equitable and poverty-free society (Ministry of Finance, 2019). It is presumed that selling NSCs to those populations may help develop their savings habit. Besides, deficit financing through NSCs controls the fear of inflation, as it does not require printing money, and it reduces dependency on foreign loans (World Bank, 2021). The main aim of these savings certificates is to protect women, retired government employees, senior citizens, non-resident Bangladeshis, and disadvantaged marginalized citizens. In the absence of a well-functioning pension and social security system, these savings certificates have been working as a social safety net for many people (National Budget Speech, 2021). In recent years, the government has introduced several mandatory requirements, such as the e-TIN certificate, bank account, and NID number, for investors to be able to invest in national savings certificates. At the same time, the rates of return on investment have been reduced and the Tax Deduction at Source (TDS) has been increased. The study aims to investigate the impact of these recent changes on investors' minds and how they react in response to the policy changes.
Literature Review
Savings are indispensable for economic growth. Countries with high levels of savings tend to have lower inflation, higher levels of investment, and sustainable economic growth. In all the countries of the world, household savings contribute substantially to national savings. Households' savings are an important source of capital to fund investment and growth in the economy (Alade, 2006). Moreover, some recent studies have tried to identify the determinants of savings at the regional, national, and cross-country level, using mostly secondary data. Aric (2015) analysed secondary panel data from the World Bank for sixteen APEC member countries during 2000-2013 by the pooled OLS method and found that income, age dependency ratio, young population, rural population, and urban population influence savings positively, while financial depth affects savings negatively. The study also found that inflation and old population play no role in savings in APEC countries. Bhandari et al. (2007) observed that government expenditures and past savings have a significant negative role in determining private savings in five South Asian countries, namely Bangladesh, India, Nepal, Pakistan, and Sri Lanka. However, financial development and rising income per capita encourage people to save more. The dependency rate, localization level, and real interest rates appear to have less impact on private savings in these countries. Imran et al. (2017) showed that inflation, tax, and gross domestic product have statistically significant positive impacts on gross domestic savings, while per capita income, interest, money supply growth, and age dependency ratio have a non-significant effect on gross domestic savings in six South Asian countries. Similarly, Das and Ray (2012) analysed panel data for the 1990-2007 period for six developing Asian economies with high savings rates (China, India, Indonesia, Malaysia, the Philippines, and Thailand) and observed that high growth, low age dependency, an increasing degree of financial deepening, the presence of liquidity constraints, remittances, terms-of-trade shocks, and human capital formation are the leading determinants of savings for these countries. Another study was conducted by the Research Department of Bangladesh Bank on NSCs with a total sample of 1,336, based on a field survey in 2011 in seven divisional cities of Bangladesh. The study intended to find out the basic socio-economic characteristics of investors in national savings certificates that influenced investment decisions in NSCs, problems in encashment and other related services, and what features could make NSCs attractive to investors, as they serve as a very important window of financing for the government budget deficit. The report identified that most NSC buyers were male (52.5 percent), that investors mostly resided in urban areas (83.15%), and that only a marginal number of investors were from rural areas (16.5%). Most of the investors (77.4%) found investment in NSCs attractive, mainly due to safety and security, while a large number of NSC buyers stated that they found investment in NSCs unattractive due to its lower return compared to the FDR rate in commercial banks.
Moreover, the study recommended rationalizing and adjusting the interest rate of NSCs in line with commercial banks' FDR rates; creating special savings schemes for persons retired from private organizations and for widows; ensuring the availability of the scripts and forms in all state-owned and private commercial banks and post offices; upgrading service standards; and introducing online sales, profit withdrawal, and encashment facilities for NSCs.
Types of National Saving Certificates
The government securities market of Bangladesh consists of tradable and non-tradable securities. Non-tradable securities include National Savings Certificates, i.e. Sanchayapatras and Sanchaya bonds, which are only for retail investors. Different types of savings schemes are conducted by the National Savings Directorate of Bangladesh under the supervision of the Internal Resources Division of the Ministry of Finance of the Government of Bangladesh. There are various types of National Savings Certificates, of which the most popular are as follows.
Family Savings Certificates: this savings scheme, also known as Paribar Sanchayapatra, is a 5-year savings scheme specially designed for women. Any woman who is more than 18 years old can invest in Paribar Sanchayapatra. The minimum investment for this scheme is Tk. 10,000 and the maximum limit is Tk. 45,00,000. This scheme typically provides a return of around 11.52% if encashed after maturity (Department of National Savings, 2021).
Pensioner Savings Certificates: this scheme, also known as Pensioner Sanchayapatra, is open to any retired government or semi-government employee with a minimum employment period of 20 years. Generally, the returns are around 11.56% for withdrawal after 5 years. The returns vary for premature withdrawal, depending on the number of years of investment in the scheme (Bangladesh Bank, 2021).
Quarterly Profit-based Savings: this scheme, known as Tin Mash Ontor Munafa Vittik Sanchayapatra, requires a minimum investment of Tk. 1,00,000 and has a maturity period of 3 years. A single person can invest a maximum of Tk. 30,00,000, and for joint owners the maximum limit is Tk. 60,00,000. Generally, the returns of this scheme are around 11.04% for the complete tenure of 3 years. The returns vary for premature withdrawal, depending on the number of years of investment in the scheme (Bangladesh Bank, 2021).
Bangladesh Savings Certificates: in this scheme, the maximum investment limit is Tk. 30,00,000 for individual investors and Tk. 60,00,000 for joint investors. This is a five-year scheme and provides a return of around 11.28% at maturity. The returns vary if the investor withdraws before maturity, depending on the number of years of investment in the scheme (Department of National Savings, 2021).
Ceilings for investing in Savings Instruments
Sales of savings certificates picked up in recent times, and in response, the government reduced the ceilings. The government lowered the maximum investment ceilings for purchasing three types of savings certificates (Akter, D. 2020): the Five-Year Bangladesh Sanchayapatra, the Three-Month Profit-Based Sanchayapatra, and the Paribar Sanchayapatra (family savings certificate). Small investors can now purchase these savings certificates up to Tk. 5.0 million in total in a single name and Tk. 10 million in joint names, whereas the previous ceilings were Tk. 10.50 million and Tk. 12 million respectively. Previously, an individual could buy the three types of savings certificates worth up to Tk. 1.05 crore in total.
Return on investment
(Figure: returns on Family Savings Certificates, Pensioner Savings Certificates, Quarterly Profit-based Savings, and Bangladesh Savings Certificates.)
Both policymakers and economists have suggested discouraging investment in savings certificates to ease the burden of this costly borrowing tool and to help banks lower their interest rates. Earlier, an investor could buy Bangladesh Sanchayapatra worth up to Tk. 3.0 million, Three-Month Profit-Based Sanchayapatra up to Tk. 3.0 million, and Paribar Sanchayapatra worth up to Tk. 4.5 million in a single name. Investors were also allowed to purchase Bangladesh Sanchayapatra worth up to Tk. 6.0 million and Three-Month Profit-Based Sanchayapatra up to Tk. 6.0 million in joint names (NEWAGE, 2021).
Increased Tax Deduction at Source (TDS) for the NSC
The tax on savings certificate profits has been increased from 5 percent to 10 percent in the current fiscal year to discourage purchases and keep the government's borrowing from this high-interest-bearing instrument within target. For investments not exceeding Tk. 5 lakh, source tax is deducted at the rate of 5 percent (NBR, 2021). According to the Bangladesh Bank, profit along with principal is treated as a fresh investment in the case of auto-reinvestment in five-year national savings certificates (NSCs), and the applicable source tax is determined based on the invested amount. For auto-reinvestment of five-year national savings certificates, source tax at the rate of 10 percent is deducted if the investment exceeds Tk. 5 lakh.
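The deduction rule just described can be expressed directly; the threshold and rates are taken from the text, while the function name and interface are illustrative:

```python
# Source-tax rule on NSC profit: 10% TDS when the invested amount exceeds
# Tk. 5 lakh (500,000), otherwise 5%, as described in the text above.
def tds_on_profit(invested_amount, profit):
    rate = 0.10 if invested_amount > 500_000 else 0.05
    return profit * rate

# e.g. Tk. 60,000 profit on a Tk. 6 lakh investment vs. a Tk. 4 lakh one
print(tds_on_profit(600_000, 60_000))  # -> 6000.0
print(tds_on_profit(400_000, 60_000))  # -> 3000.0
```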
Making the e-TIN mandatory for NSCs
The Electronic Taxpayer Identification Number (e-TIN) has been made mandatory for purchasing national savings certificates and for opening a postal savings account exceeding Tk. 200,000 (NBR, 2021). Since many savers were not familiar with the online process and had no electronic Tax Identification Number (e-TIN), they could not purchase such instruments (Arafat Ara, 2020). One has to submit multiple documents, including the e-TIN, to buy these borrowing instruments. Besides, national identity cards, bank accounts, mobile numbers, and cheque transactions are also mandatory when investing in these risk-free instruments. The government considers the measure a means to prevent money laundering (Mowla, 2019).
Nature of the Research
This is descriptive research in which a survey questionnaire was used to collect quantitative data to understand the challenges and opportunities of the NSC policy changes. To be precise, this study involved a cross-sectional design in which there were multiple respondents, and information from each respondent was collected only once. The study uses both primary and secondary data. A field survey was conducted during January-March 2021 through face-to-face interviews with investors in NSCs to gather the primary data.
A total of 513 investors in NSCs were interviewed on a random basis from the Tangail, Gazipur, and Dhaka districts in Bangladesh.
Research Questions
The researcher asked several research questions guided by the specific objectives of the study. The collected data have been analysed based on the true responses of the respondents. The researcher has been unbiased and objective in analysing the collected data to reach a conclusion. A theoretical framework as well as an analytical model has been adopted in this study. The research questions were as follows: (1) to investigate the impact of the increased Tax Deduction at Source (TDS) on NSC investment; (2) to explore the difference in waiting time before and after the introduction of the online system; (3) to apprehend the investors' feelings regarding making the e-TIN certificate a mandatory document; (4) to recognize the investors' reaction to the introduction of the online recording system; and (5) to explore the impact of the tax credit opportunity on investing decisions.
Data Analysis Techniques
For data analysis, the researcher relied upon a graphical approach and some descriptive approaches using SPSS version 25.0 and Excel output. Through the graphical approach, I have presented some parameters that changed the investors' behaviours and views following the introduction of automation and the policy changes.
Results of the data Analysis
The information was collected through personal interviews of the investors, and the offices of the Bangladesh National Savings Bureau were visited to collect the data. The pie chart shows the percentage of males and females who responded to the questionnaire. In a sample of 513 people, 55% were males and 45% were females.
GENDER ORIENTATION
Through the analysis, the types of investments and the account holders have been identified. Based on the respondents' status, it is clear that people are most interested in the "3 Monthly Interest Bearing Sanchayapatra" investment, at 44.64%, divided into 32.19% self-owned account holders and 12.27% joint account holders. For the other three types of investments, the percentages are almost the same, varying between 16% and 22% across both types of account holders. It has been observed that the majority of "3 Monthly Interest Bearing Sanchayapatra" investors maintain self-owned accounts.
My first objective is to establish the relationship between the increased tax deduction at source (from 5% to 10%) and the investors' decision to withdraw. To find the result, I ran a chi-square test with the following hypotheses. Null hypothesis: the investors' decision was not affected by the increased deduction at source. Alternative hypothesis: the investors' decision was affected by the increased deduction at source. At the 5% level of significance, the test statistic follows a chi-square distribution with 1 degree of freedom: the critical value χ²(0.05, 1) = 3.84 exceeds the calculated statistic χ²_cal = 3.283, and the P-value = 0.07 > 0.05. So I fail to reject the null hypothesis; at a 5% level of significance, I can say that the investors' decision was not affected by the increased deduction at source.
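A minimal sketch of this chi-square test of independence with SciPy is shown below. The 2×2 contingency table is purely illustrative, since the paper does not report the raw counts; only the decision rule mirrors the one used above.

```python
# Hedged sketch: hypothetical counts, not the study's data.
from scipy.stats import chi2_contingency

# Rows: deduction at source (5%, 10%); columns: (kept investment, withdrew).
observed = [[140, 120],
            [130, 123]]

chi2, p, dof, expected = chi2_contingency(observed, correction=False)
print(f"chi2 = {chi2:.3f}, df = {dof}, p = {p:.3f}")
# Reject H0 at the 5% level only if p < 0.05
# (equivalently, chi2 > 3.84 for df = 1).
```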
This graph represents the rejection region of the null hypothesis (the grey part); in the white part, I fail to reject H0. As can be seen, the observed value (3.283) falls in the white part, where the null hypothesis cannot be rejected, given that the critical value is taken from the chi-square table with 1 degree of freedom at a 5% risk level.
My second objective is to see whether there is any difference in waiting time before and after the introduction of the online system. To run the study, I asked respondents how long on average they had to wait before and how long they have to wait now to get their paperwork done. Before running a paired t-test, I needed to verify the assumption of normality of the difference in waiting times before and after the introduction of the online system. From the curve shown on the histogram and the Q-Q plot, the difference appears normally distributed. Normality can also be assumed from the values of skewness and kurtosis, where |skewness| = 0.53 < 0.8 and |kurtosis| = 0.704 < 2. A paired t-test was then run to see whether a significant difference exists between the waiting times before and after the introduction of computers, with the following hypotheses. Null hypothesis: the average waiting time before and after the introduction of the computer does not differ significantly (m1 = m2). Alternative hypothesis: there is a significant difference between the average waiting times (m1 ≠ m2). The correlation R = 0.254 indicates a weak relationship between the waiting times then and now. At a 5% level of significance, the paired t-test gave a P-value = 0.260 > 0.05, so we fail to reject the null hypothesis; consequently, there is no significant difference between the waiting times before and after the introduction of the online system. Since the Paired Samples Statistics box revealed that the mean waiting time before the computer (79.77) was almost the same as the mean waiting time after (77.78), we can conclude that the introduction of the online system has not made a meaningful difference in reducing the waiting time.
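The normality screen and the paired t-test above can be reproduced with SciPy as sketched below; the two arrays are placeholders for the reported waiting times, since the survey data are not available in the text.

```python
import numpy as np
from scipy import stats

# Placeholder waiting times in minutes, one pair per respondent.
before = np.array([90, 75, 80, 85, 70, 95])
after  = np.array([85, 80, 70, 88, 72, 90])
diff = before - after

# Rough normality screen using the thresholds cited above.
print("skew:", abs(stats.skew(diff)))          # accept if < 0.8
print("kurtosis:", abs(stats.kurtosis(diff)))  # accept if < 2 (excess kurtosis)

t_stat, p_val = stats.ttest_rel(before, after)
print(f"t = {t_stat:.3f}, p = {p_val:.3f}")    # fail to reject H0 if p > 0.05
```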
Next, I have sought to understand the investors' reaction to the introduction of the computerized online recording system.
My analysis indicates that 55.36% of people considered that the computerized online recording system provides more accurate and quicker service, whereas 44.63% did not find it helpful, answering that it has made no difference or has created hassles for them. Furthermore, I have run a study to understand the investors' feelings regarding making the e-TIN certificate a mandatory document. It was found that 55.56% of people dislike the introduction of the e-TIN certificate as a mandatory document, since they did not hold one before it was made mandatory; they say it has created an extra burden.
On the other hand, 16.07% of people, who already had an e-TIN, expressed their satisfaction with the introduction of the digitalized system. Finally, a minority of 8.73% of investors did not know about it before but have no problem with making it mandatory.
Later, I tried to find out the main reasons people withdraw their investment from the NSCs and found that 55.44% of people withdrew their investment due to the increased deduction at source. Some people, representing 21.76%, withdrew their investment to take an opportunity to invest elsewhere. A small number of investors withdrew their savings to invest in their own business or for personal reasons.
Finally, an effort has been made to figure out the impact of the tax credit opportunity on decisions to invest in national savings certificates.
It was investigated whether the tax credit opportunity has any influence on investors' decisions to invest in the NSC, and the result is clear: it does not have a large influence. It affected the decision of 35.26% of investors, while 64.74% of people are not concerned about it.
Conclusion & Recommendations
National savings certificates are a savings scheme for small savers. The Department of National Savings (DNS) explicitly states that bringing small savers under this scheme is one of its goals (Five Year Plan, 2015). Another important goal is to provide a social and financial safety net for certain groups of people, such as women, retired government employees, senior citizens, non-resident Bangladeshis, and physically challenged people. Although it was suspected that the sale of National Savings Directorate (NSD) certificates would come down significantly due to the automation and regulatory deterrents, this study suggests that the recent policy changes and the requirement for mandatory documents have little effect on the investment decision.
There are two main reasons behind people's preference for buying savings certificates. First, no inquiries are made regarding the origin of the buyers' funds. Second, interest rates on savings certificates are higher than those on other deposits. Investors are attracted to NSCs as risk-free investments with high returns (Agrawal, M., 2009). In addition, the decline in bank deposit rates is pushing savers towards national savings certificates: the interest rate on NSCs is much higher (on average 11%) than the interest rate on bank FDs, which is now as low as 4 to 4.5% (Hasan, M., 2018). Furthermore, the pandemic has made it difficult for people to invest elsewhere, and the inefficient stock market remains unattractive due to the high risk involved. Savings certificates are a means of ensuring social well-being and the overall development of the country's economy, and the government should encourage small savers to invest in them by ensuring a favourable environment. The following initiatives could be taken to make the investors more confident in their investment: • The middle-income group comprises the major investors in NSCs, and they seek a stable source of income. Many families depend on the income from the investment, and reducing the rate of return would put pressure on these families' lifestyles. The government should keep the interest rates attractive so that investors feel secure.
• The government should ensure that the target group, especially small savers from rural areas, gets easy access to purchase NSCs via post offices or rural bank branches, which will effectively broaden the social safety net coverage and benefit the masses.
• There is no provision for recording the sources of income of NSC buyers. The authority may ask for information about the sources of income of NSC buyers, with proper documentation, so that black or illegal money cannot enter the NSC market. This kind of documentation may help maintain single-person exposure limits as well as ensure the safety-net purpose of the government.
• Those who come to buy NSC certificates face a long queue, in most cases due to the shortage of staff. The authority should recruit more employees to offer better service to investors. In addition, to reduce the suffering of NSC buyers during the waiting time, the NSC selling places can be modernized; for example, a digital display board may be introduced so that sick or elderly people can sit patiently with their eyes on the display board.
• A national integrated online database keeping records of NSC investors and their close relatives (e.g., a spouse) may help prevent buying NSCs beyond the prescribed limits. Concerned authorities may ask for the National Identity Card (NID) of the investor and of the person whose income is used. All the offices/institutions where NSCs are sold may be interconnected through a network so that a database of investors can be established.
• It is apparent that people invest in national savings certificates for the better return. If the government keeps reducing the rate of return and increasing the deduction at source, investors will be demotivated and may withdraw their savings in response. As savings certificates are issued for the welfare of society and the overall economic development of the country, they should provide some privileges to the investors. | 5,398.4 | 2021-08-28T00:00:00.000 | [
"Economics",
"Business",
"Political Science"
] |
Corrosion resistance evaluation of carbon steel plates protected by zirconium and titanium nanoceramic coatings
Metal surface pre-treatment is a well-known process used to increase corrosion performance and to improve adhesion between the substrate and the paint layer. The present paper evaluated the corrosion resistance of carbon steel before and after treatment with nanoceramic coatings. The comparison was between a pure zirconia nanoceramic compound (Bonderite NT-1), the same compound with the addition of a dispersant (polyacrylic acid), and another nanoceramic coating developed from titanium oxide. Additionally, salt spray, open circuit potential (OCP), polarization, and impedance tests were performed to obtain a methodology for quantitatively assessing the quality of protection. The zirconia coating presented better corrosion protection than the titanium coating and the uncoated carbon steel: its corrosion potential was about two times lower than that of uncoated carbon steel, while for the titanium coating it was about 1.5 times lower. The addition of the dispersant produced no significant improvement in corrosion resistance, performing similarly to uncoated carbon steel, possibly due to the high concentration used.
Introduction
Metals are fundamental materials for infrastructure development and everyday products. Hence, it is of great relevance to technological progress to obtain greater resistance to aggressive conditions, expanding the range of applications of these materials (Gentil, 2011).
Carbon steel excels in industrial applications owing to its mechanical properties.
However, the occurrence of corrosion in harsh environments and media is unavoidable (Guo, Kaya, Obot, Zheng, & Qiang, 2017). This iron alloy contains between 0.05 and 2.0% carbon by mass, besides small quantities of other elements, and it receives considerable attention in the search for solutions to corrosion problems (Bossardi, 2007; Guo et al., 2017).
Phosphating and chromating are common processes used for this purpose; however, they may present various environmental concerns and problems with low surface coverage (Popić et al., 2011; Ramanauskas et al., 2015). While phosphating requires heating, resulting in an energy cost, chromate is known to be a toxic and carcinogenic compound (Milošev & Frankel, 2018). In light of the need for sustainable processes, the technology of nanoceramic surface coatings is proposed and applied to reduce the environmental impacts caused by conventional treatments.
The main advantages of this new process encompass the high reactivity of the nanostructured compounds, better material utilization, and lower residue generation. The high reactivity enables a greater reduction of processing time in cold or room-temperature processes, while the low residue generation solves one of the serious problems of phosphating. Together, these two advantages lead to savings of water and energy. Roman and contributors (2011) performed a comparative study of the temperature used during the formation stage of a thin nanofilm and the corrosion resistance obtained. It was verified that the higher the temperature, the greater the corrosion resistance obtained. However, the room-temperature bath presented satisfactory quality, a fact of great importance for industrial applications given the cost of maintaining a heating energy supply.
Ramanathan and Balasubramanian (2016) studied the mechanism of nanoceramic coatings on low-carbon steels. The researchers found a deposition of hydrated nano-zirconia on the carbon steel surface. The formation of this thin and uniform oxide layer provides, after application of the paint layer, special properties comparable to those of the process involving phosphate and zinc.
According to Droniou and Fristad (2005), the high spontaneity of the process results [...] of the deposition quality with the increase of the dispersant or with the use of the titanium coating. Thus, the impedance and polarization curve analyses become extremely relevant; the salt spray test was not followed up because the impedance and polarization results already show the difference in corrosion resistance between the coatings.
Open Circuit Potential (OCP) Test
The Open Circuit Potential (OCP) test has the purpose of verifying whether the analyzed system is in equilibrium after one hour of metal immersion; it seeks the dynamic equilibrium of the system (metal-solution) before the current and potential variation tests start. Figure 1 presents the OCP curves for the unprotected carbon steel, the zirconia coating with and without dispersant addition, and the titanium-based coating. Based on Figure 1, it can be noticed that the potential reached the expected dynamic equilibrium, since at the end of the 1 h immersion the potential did not present considerable variations. According to Wolynec (2003), for unbalance or passive-layer breakdown to occur, there must be a variation greater than 0.5 V in the final 5 minutes of the test, which did not happen in the performed tests.
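The stability criterion cited from Wolynec (2003) can be checked programmatically. The sketch below assumes logged OCP data as time/potential arrays; the synthetic decaying curve is only for demonstration.

```python
# Minimal sketch of the OCP equilibrium check: the potential is taken
# as stable if it swings by less than 0.5 V over the final 5 minutes
# of the 1 h immersion. `t` in seconds, `E` in volts.
import numpy as np

def ocp_is_stable(t, E, window_s=300, max_swing_v=0.5):
    """True if the potential swing in the last `window_s` seconds
    stays below `max_swing_v`."""
    t, E = np.asarray(t), np.asarray(E)
    tail = E[t >= t[-1] - window_s]
    return tail.max() - tail.min() < max_swing_v

t = np.linspace(0, 3600, 721)          # 1 h of data, one point every 5 s
E = -0.65 + 0.01 * np.exp(-t / 600)    # synthetic decaying OCP curve
print(ocp_is_stable(t, E))             # True -> dynamic equilibrium reached
```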
It must be pointed out that the lower the Open Circuit Potential, the more active the material at the site and the lower the corrosion resistance (Hadinata et al., 2013). From Figure 1, it is noted that the corrosion potentials are similar among the coated steels and higher than that of the bare carbon steel. Consequently, the coated steels proved more resistant to corrosion.
Although the steel with zirconia coating and that with titanium coating presented similar behavior, the Open Circuit Potential of the zirconia coating was greater at the end of the test. Therefore, this coating has better corrosion resistance than the titanium coating.
The Open Circuit Potential of the zirconia coating with dispersant addition decreased with time and reached the corrosion potential value of the uncoated carbon steel. This behavior is attributed to the dispersant addition, which produced a very thin coating that did not resist corrosion for long and eventually left the steel uncovered.
Linear Polarization Tests
For the polarization analyses, the arithmetic mean of the data from the triplicate of each coating was taken. Figure 2 summarizes the polarization results obtained for the carbon steel without coating, with zirconia coating, with zirconia coating plus dispersant, and with titanium coating; the current density is represented on the abscissa (Source: Authors). The carbon steel without coating, with nanoceramic zirconia coating plus dispersant, and with titanium coating have corrosion potentials that are high in magnitude, around -0.89 V, -0.88 V, and -0.75 V, respectively. The carbon steel with nanoceramic zirconia coating presented the greatest corrosion resistance, with a corrosion potential of -0.47 V. The poorer titanium performance is explained by the literature, which reports that TiCl4 baths produce more uniform layers than those produced by H2TiF6 baths (Milošev & Frankel, 2018); consequently, those layers are more resistant to corrosion.
The carbon steel without coating presented a high current density and therefore greater corrosion rates (Behzadnasab, Mirabedini, & Esfandeh, 2013; Ramanathan & Balasubramanian, 2016). The polarization curves show that the carbon steel with titanium coating presented a lower current density, while the carbon steel with nanoceramic coating plus dispersant presented a greater density; thus, the dispersant acted negatively on the substrate protection. No current spikes were observed, indicating the absence of localized corrosion in the analyzed samples.
Similarly to Ramanathan and Balasubramanian (2016), the carbon steel with nanoceramic zirconia coating presented a smaller corrosion potential and lower current densities when compared to the uncoated steel, making it the best-performing coating in the tests carried out in this work.
Impedance Tests
Figures 3 and 4 exhibit the impedance test results through the Nyquist and Bode plots, respectively.
"Materials Science"
] |
Remote Inflammatory Preconditioning Alleviates Lipopolysaccharide-Induced Acute Lung Injury via Inhibition of Intrinsic Apoptosis in Rats
Background. Acute lung injury (ALI) always leads to severe inflammation. As inflammation and oxidative stress are the common pathological basis of endotoxin-induced inflammatory injury and ischemic reperfusion injury (IRI), we speculate that remote ischemic preconditioning (RIPC) can be protective for ALI when used as remote inflammatory preconditioning (RInPC). Method. A total of 21 Sprague-Dawley rats were used for the animal experiments. Eighteen rats were equally and randomly divided into the control (NS injection), LPS (LPS injection), and RInPC groups. The RInPC was performed prior to the LPS injection via tourniquet blockage of blood flow to the right hind limb, adopting three cycles of 5 min tying followed by 5 min untying. Animals were sacrificed 24 hours later. Two rats in the LPS group and one in the RInPC group died before the end of the experiment; supplementary experiments in the LPS and RInPC groups were conducted to ensure that 6 animals in each group reached the end of the experiment. Results. In the present study, we demonstrated that the RInPC significantly attenuated the LPS-induced ALI in rats. Apoptotic cells were significantly reduced by the RInPC, with simultaneous improvement of apoptosis-related proteins. MPO and MDA levels were significantly reduced and SOD activity significantly increased by the RInPC. The LPS-induced increases in TNF-α, IL-1β, and IL-6 were inhibited, while IL-10 was significantly increased by the RInPC compared to the LPS group. Conclusion. RInPC could inhibit inflammation and attenuate oxidative stress, thereby reducing intrinsic apoptosis and providing lung protection in LPS-induced ALI in rats.
Introduction
Acute lung injury (ALI) is a life-threatening parenchymal lung disease caused by various pathogenic factors. The ALI is characterized by hypoxemia, lung gas and blood barrier damage, bilateral pulmonary inflammatory infiltration, and noncardiogenic interstitial edema. It often progresses to acute respiratory distress syndrome (ARDS) and requires mechanical ventilation. Uncontrolled inflammation is the main cause of death, with a mortality rate of over 30% [1]. At present, the treatment for ALI/ARDS is mainly supportive, and novel therapeutic strategies are urgently needed.
Sepsis is the most common cause of ALI. Lipopolysaccharide (LPS), the endotoxin derived from the outer membrane of Gram-negative bacteria, which is believed to be one of the most frequent triggers of sepsis, is a powerful causative agent of systemic inflammation. The LPS can directly damage the alveolar-capillary barrier, lung epithelial cells, and pulmonary vascular endothelial cells [2]. Alveolar macrophages (AM) activated by LPS can release cytokines such as TNF-α and IL-1β to initiate the inflammatory cascade, producing a large number of inflammatory mediators and factors, and reactive oxygen species (ROS). The ROS can destroy the gas and blood barrier by damaging pulmonary vascular endothelial cells and alveolar epithelial cells, increasing their permeability, and causing pulmonary edema; it can also upregulate the expression of inflammatory factors and induce inflammation [3]. It has been elucidated that several different forms of programmed cell death (PCD), including autophagy, apoptosis, and pyroptosis, have been correlated with the LPS-induced ALI in rat models [4][5][6].
Pyroptosis is triggered in response to infection. The LPS has been reported to directly stimulate the activation of caspase-11, which cleaves gasdermin D (GSDMD) resulting in membrane rupture and cell lysis in rodents [7]. The innate immune response can be activated by LPS through the activation of TLR4 receptors [8], leading to the transcription of MyD88-dependent genes, which encode proinflammatory cytokines including inactive proforms of IL-1β and inflammasome components [9]. Multiple studies elucidated the role of the Fas/FasL system in the extrinsic epithelial apoptosis in LPS-induced ALI [6]. DNA damage, hypoxia, and metabolic stress can induce intrinsic apoptosis, which begins with mitochondrial outer membrane permeabilization (MOMP) and leads to the release of mitochondrial proteins into the cytosol [10]. The ROS may stimulate the cell death pathways and trigger inflammation, resulting in inflammasome activation, pyroptosis [11], and intrinsic apoptosis.
Ischemia-reperfusion injury (IRI) refers to the irreversible tissue damage caused by insufficient oxygen supply following tissue ischemia and subsequent restoration of blood supply. Oxidative stress, inflammation, and calcium ion overload were involved with the ischemia-reperfusion injury [12]. Ischemic preconditioning (IPC) is currently known as an effective protection strategy against the IRI. Remote IPC (RIPC) can be used to offer a protective effect to the target organ by transient ischemic interventions in organs or tissues far away from the target. In previous studies, the protective effect of the RIPC against myocardial IRI and cerebral IRI has been demonstrated in rat models [13,14]. Its protective mechanism was related to the reduction of oxidative stress and the alleviation of intrinsic apoptosis.
Based on these results, we speculate that the RIPC can also be used as a novel protective strategy in LPS-induced ALI via alleviating intrinsic apoptosis. To facilitate the distinction, RInPC, a short-term ischemic intervention in organs or tissues far away from the target organ before inflammation occurs, is the term used to stand for remote inflammatory preconditioning, as distinguished from RIPC. The LPS-induced ALI rat models were used with the RInPC during the preinflammatory stage to verify this hypothesis and explore its intrinsic apoptosis-related mechanisms. The animal experiments took place at the animal experimental center of the Biofavor Biotech Company in Wuhan, Hubei, China. Animals were maintained in an air-conditioned atmosphere at 25°C with a 12-hour light-dark cycle and were provided with free access to pelleted food and water ad libitum. After a one-week acclimation, the animals were randomly assigned into three groups, six rats per group. The first group was maintained as the control. The second group (LPS group) received the LPS intravenous injection. The third group (RInPC group) was treated the same as the LPS group, with an additional 30 minutes of remote stimuli before the LPS injection. Two rats in the LPS group and one rat in the RInPC group died before the end of the experiment; supplementary experiments for the LPS and RInPC groups were conducted so that 6 animals reached the end of the experiment in each group.
Drugs.
The LPS (O127: B8; Sigma, St. Louis, MO, USA) used in this study was derived from Escherichia coli (O127) endotoxin, and it was dissolved in sterile saline.
Experimental Protocol
The animal model of LPS-induced ALI was developed with some modifications as described by Hagiwara et al. [15]. Briefly, the rat model was created by injection of LPS (5 mg/kg) via the tail vein. The same volume of normal saline (NS) was administered to the animals in the control group through the same route. All animals were injected intravenously under ether inhalation anesthesia.
The RInPC was performed for 30 minutes ahead of the LPS injection via tourniquet blockage of blood flow to the right hind limb and adopted three cycles of 5 min tying followed by 5 min of untying. Circulatory arrest in the limbs was identified by observing the empurpled limb skin and confirmed using a vascular Doppler. This method has been developed and standardized in a previous study [16].
Twenty-four hours after the injection, the animals were sacrificed following heart blood sampling under overanesthesia. The serum was separated by centrifugation of the blood sample at 3000 g for 15 minutes. Lung samples were collected with inflation after the chest was opened. The left lungs were used to measure the wet/dry ratio. The right upper lungs were stored in 4% paraformaldehyde for histological studies. And the right lower lungs were stored at −80°C for biochemical assay and protein analysis by western blotting.
2.5. Histology and Morphology. Complete random cross-sections of the rat lungs were fixed in 4% neutral phosphate-buffered formaldehyde, embedded in paraffin, sectioned (5 μm), and stained with hematoxylin and eosin (H&E). The sections were viewed by an experienced morphologist who was blinded to the sample identity. Ten randomly chosen microscopic fields (×200) were viewed for each lung sample, and all 6 samples were viewed for each animal group. Histological evidence suggesting ALI was also evaluated by a blinded investigator according to Hofbauer and colleagues' method [17], in which alveolar membrane thickness and cellularity are evaluated by estimating the fraction of the microscopic field occupied by parenchymal tissue as opposed to empty alveolar spaces. The average values of ALI were represented by a histological index of quantitative assessment (IQA) using the following criteria: samples were graded from normal to severe as 0 (<15% of the space occupied by tissue and >85% by alveolar space), 1+ (15%-25% tissue, 75%-85% alveolar space), 2+ (25%-50% tissue, 50%-75% alveolar space), 3+ (50%-75% tissue, 25%-50% alveolar space), and 4+ (75%-100% tissue, 0%-25% alveolar space).
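This grading scheme maps directly to a small scoring function. The sketch below is one possible reading of the criteria, assuming boundary values fall into the higher grade; the field fractions used in the example are illustrative.

```python
# Minimal sketch of Hofbauer-style IQA grading: map the fraction of a
# microscopic field occupied by parenchymal tissue to a grade of 0-4.
def iqa_grade(tissue_fraction: float) -> int:
    """Grade one field from its tissue fraction (0.0-1.0)."""
    if tissue_fraction < 0.15:
        return 0   # <15% tissue, >85% alveolar space
    if tissue_fraction < 0.25:
        return 1   # 15-25% tissue
    if tissue_fraction < 0.50:
        return 2   # 25-50% tissue
    if tissue_fraction < 0.75:
        return 3   # 50-75% tissue
    return 4       # 75-100% tissue

# Average over the ten random fields scored per lung sample.
fields = [0.10, 0.30, 0.55, 0.20, 0.40, 0.60, 0.35, 0.25, 0.45, 0.50]
iqa = sum(iqa_grade(f) for f in fields) / len(fields)
print(iqa)
```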
2.6. Lung Wet-to-Dry Weight Ratio Measurement. To assess tissue edema, the weight of rat lungs (six lungs per group) was measured, followed by a drying step of the lungs in an oven at 80°C for 48 h until the weight of the samples became constant. Then, the lung wet-to-dry weight ratio was calculated.
2.7. Assay of Serum Lactate Acid. Serum lactate measurement was performed in all groups using a lactate assay kit (Nanjing Jiancheng Bioengineering Institute, Nanjing, Jiangsu, China), according to the manufacturer's instructions.
2.8. Enzyme-Linked Immunosorbent Assay (ELISA). The levels of TNF-α, IL-1β, IL-6, and IL-10 in serum were detected using the specific mouse or human ELISA kits (Elabscience Biotechnology Co. Ltd., Wuhan, Hubei, China). The optical density was measured at 450/540 nm wavelength using an automated ELISA reader (Flexstation3, Molecular Devices, LLC, Sunnyvale, CA, USA). All standards and samples were run in triplicate.
2.9. Assays of Malondialdehyde (MDA), Myeloperoxidase (MPO), and Superoxide Dismutase (SOD). These three oxidative stress indicators were detected in serum, as previously reported by using commercial assay kits (Nanjing Jiancheng Bioengineering Institute), according to the manufacturer's instructions [18]. The unit of measurement for MDA was nmol per milligram of protein. MPO and SOD activities were expressed as units per milligram of protein.
2.10. Terminal Deoxynucleotidyl Transferase-Mediated dUTP Nick End Labeling (TUNEL) Assay. The TUNEL technique was carried out using the "In Situ Cell Death Detection Kit." Briefly, the lung sections on the microscopic slides were dewaxed and incubated with proteinase K. Then, the slides were stained using a TUNEL kit (Biovision Inc., Mountain View, CA, USA), according to the manufacturer's instructions. Subsequently, the slides were examined under a fluorescence microscope (Olympus BX53, Olympus, Japan). Images were captured to determine the percentage of positive cells and intensity of staining and then used to calculate the percentage of positive nuclei in three representative areas from three samples per group as the apoptotic index for statistical analysis.
2.11. Western Blotting Analysis. The right lower lung specimens (approximately 100 mg each) were dissected out and stored at -80°C. The protein expressions of Bcl-2, Bax, Cyt-c, AIF, caspase-3, cleaved caspase-3, caspase-9, and cleaved caspase-9 in the lung were detected by western blotting analysis, as described in the literature [19]. Briefly, the protein concentration was determined by the Bicinchoninic Acid (BCA) method. The protein sample was boiled and denatured; then, SDS-PAGE gel electrophoresis was performed. The protein was transferred onto a nitrocellulose membrane. Next, the proteins were blocked with 5% skim milk at 37°C for 1 h. The membranes were incubated overnight at 4°C with diluted primary antibody and GAPDH primary antibody (1:1000). The next day, the membrane was washed three times with TBST and incubated with a secondary antibody diluted with the blocking solution at 37°C for 2 hours. The blots were developed with enhanced chemiluminescence (ECL), and the protein bands were photographed after washing. The integral optical density (IOD) of each target band was determined using Bandscan 5.0 software (Bio Marin Pharmaceutical, San Rafael, CA, USA). The expressions of the target proteins were normalized by the ratio of the IOD of each protein to the IOD of GAPDH.
The expressions of Cyt-c and AIF in the mitochondria were normalized by the ratio of the IOD of proteins to the IOD of COX4.
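The normalization described above is a simple ratio of integrated optical densities. A minimal sketch follows, with illustrative IOD values; GAPDH serves as the whole-lysate loading control and COX4 as the control for the mitochondrial fractions.

```python
# Hedged sketch of band-intensity normalization; values are illustrative.
def normalize_iod(target_iod: float, control_iod: float) -> float:
    """Relative expression = target band IOD / loading-control IOD."""
    return target_iod / control_iod

# (target IOD, GAPDH IOD) per protein -- placeholder numbers.
bands = {"Bcl-2": (1520.0, 2100.0), "Bax": (1830.0, 2100.0)}
rel = {name: normalize_iod(t, c) for name, (t, c) in bands.items()}
print(rel, "Bcl-2/Bax ratio:", rel["Bcl-2"] / rel["Bax"])
```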
2.12. Statistical Analysis. The significant differences were calculated using one-way ANOVA among multiple groups with the Prism 8.0 software (GraphPad Software, Inc., San Diego, CA, USA). Results were expressed as means ± standard deviation (SD). Values are shown using a column diagram. P < 0.05 was considered significant.
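The paper used GraphPad Prism; an equivalent one-way ANOVA can be run with SciPy as sketched below, using placeholder values for one measured variable across the three groups. Note that pairwise group statements (e.g., LPS vs. RInPC) would additionally require a post-hoc test.

```python
# Hedged sketch: placeholder measurements, not the study's data.
from scipy import stats

control = [3.6, 4.1, 3.9, 4.2, 3.8, 4.3]
lps     = [18.5, 19.9, 20.1, 18.7, 19.6, 19.2]
rinpc   = [8.0, 8.6, 7.9, 8.5, 8.1, 8.2]

f_stat, p_val = stats.f_oneway(control, lps, rinpc)
print(f"F = {f_stat:.2f}, p = {p_val:.4f}")  # significant if p < 0.05
```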
RInPC Attenuated the LPS-Induced ALI in Rats.
The survival percentages of the three groups of models were 100% (6/6), 75.0% (6/8), and 85.7% (6/7), respectively (Figure 1(a)). Histological evaluations of lung tissue changes by H&E staining were compared among the three groups. Similar to the description by Du et al. [20], the morphology in the control group was normal with no fluid in the alveolar space. No evidence of inflammatory cell infiltration or hemorrhage on the alveolar wall was found. Diffuse edema in alveolar spaces, inflammatory cell infiltration, and thickened interlobular septa were found in both the LPS and RInPC groups. A significantly higher ALI score represented by IQA was observed in the LPS group compared to the others. The IQA score of the RInPC group was significantly lower than that of the LPS group (control vs. RInPC vs. LPS: 0.71 ± 0.24 vs. 1.96 ± 0.10 vs. 3.00 ± 0.16, P < 0.001) (Figures 1(b) and 1(e)). The wet/dry lung weight ratio was significantly increased in the LPS group (8.66 ± 2.34 vs. 6.02 ± 0.60, P < 0.05) compared to the control group. The wet/dry ratio in the RInPC group was between the control and LPS groups, without any significant differences (Figure 1(c)). The value of lactate acid in both the LPS group and the RInPC group was significantly increased compared to the control, while the value in the RInPC group was significantly lower than that in the LPS group. The values of the three groups were 3.98 ± 0.33, 19.33 ± 1.03, and 8.22 ± 0.51, respectively (P < 0.001) (Figure 1(d)).
RInPC Prevented Apoptosis via an Intrinsic Pathway in LPS-Induced ALI in Rats.
To determine the protective effects of the RInPC against LPS-induced apoptosis, TUNEL was performed. In vivo, LPS-challenged animals exhibited a significant increase in green fluorescence apoptotic cells, which was significantly reduced by the RInPC (Figures 2(a) and 2(b)).
Although the values of both caspase-3 and caspase-9 were not changed in the lung specimen, cleaved caspase-3 and cleaved caspase-9 were upregulated significantly (P < 0.001 compared with the control group). The RInPC inhibited the LPS-induced upregulation of cleaved caspase-3 and cleaved caspase-9 (P < 0.01 compared with the LPS group) (Figures 2(c) and 2(d)).
The intrinsic pathway of apoptosis, i.e., mitochondria-dependent apoptosis, is mediated through the release of cytochrome c (Cyt-c) and apoptosis-inducing factor (AIF), ultimately leading to caspase activation. In the present study, significantly increased Cyt-c in the cytoplasm and decreased Cyt-c in the mitochondria were observed (P < 0.001 compared with the control group), which was alleviated by the RInPC (P < 0.001 compared with the LPS group). Simultaneously, increased AIF both in the cytoplasm and the mitochondria was observed (P < 0.001 compared with the control group), which was also alleviated by the RInPC (P < 0.001 compared with the LPS group) (Figures 2(f) and 2(g)).
Additionally, the present study investigated the changes in the expression levels of the Bcl-2 family proteins (Bcl-2 and Bax) in lung tissue. The LPS injection resulted in the downregulation of the antiapoptotic protein Bcl-2 and upregulation of the proapoptotic protein Bax. Although no significant differences in Bcl-2 and Bax were observed among the three groups, a significantly decreased Bcl-2/Bax ratio was observed (P < 0.001 compared with the control group), and the RInPC prevented this decrease (P < 0.001 compared with the LPS group). These results indicated that intravenous administration of LPS induced lung cell apoptosis, which was significantly alleviated by treatment with the RInPC (Figures 2(c) and 2(e)).
RInPC Palliated the Oxidative Stress in Lung Induced by LPS Injection.
To determine the antioxidative effects of the RInPC against LPS-induced ALI in rats, the MDA, MPO, and SOD levels in serum were measured. The LPS injection induced a 2.30-fold elevation of MDA level, a 2.13-fold elevation of MPO activity, and a 71.0% reduction of SOD activity, respectively, compared with the control group. In contrast, these oxidative markers were significantly improved by the RInPC in the LPS-injected rats. The MDA and MPO were reduced to levels close to the control group, and SOD was elevated to a level which was almost 84.5% of the control group (Figures 3(a)-3(c)).
The RInPC Reduced Proinflammatory Cytokine Secretion Induced by LPS
To investigate the anti-inflammatory effects of the RInPC in the lung of LPS-intoxicated rats, TNF-α, IL-1β, IL-6, and IL-10 levels were measured. The LPS injection induced a 4.22-, 3.28-, 3.11-, and 2.20-fold elevation of TNF-α, IL-1β, IL-6, and IL-10 levels, respectively, compared with the control group. Conversely, proinflammatory cytokines were significantly improved by the RInPC in LPS-injected rats. The TNF-α, IL-1β, and IL-6 levels were improved to a level which was less than half of the level in the LPS group, with a significant increase of the anti-inflammatory cytokine IL-10 to a level which was more than 2-fold of the level in the LPS group (Figures 4(a)-4(d)).
Discussion
In this study, we demonstrated that the RInPC significantly attenuated the LPS-induced ALI in rats, possibly via an inhibition of intrinsic apoptosis, associated with reductions in both oxidative stress and proinflammatory cytokines.
Although investigations on the inhibition of pyroptosis [7] and extrinsic apoptosis [6] in the LPS-induced ALI have been reported previously, we have not found a similar research result about intrinsic apoptosis and LPS-induced ALI.
Gram-negative bacteria have been associated with approximately 50% of infectious ALI, usually from pneumonia or sepsis [21]. The LPS, as a common endotoxin, is critical for organ dysfunction and mortality associated with severe Gram-negative infections [22,23]. It has been well established that intravenous administration of LPS can induce a model of ALI [24][25][26].
The RIPC was originally one of the strategies to alleviate organ IRI. It has been reported to exert a protective effect against ischemia/reperfusion injury in rat hearts, brains, and other organs, which may be associated with inhibiting the opening of the mPTP [27,28]. It has also been demonstrated to regulate human myocardial apoptosis and inflammation, which is associated with the caspase cascade [29]. The protective mechanism is known to be related to inhibiting inflammation, reducing oxidative stress, and reducing intrinsic apoptosis. Although different cell death mechanisms are involved in IRI and ALI, the RInPC, our abbreviation for remote inflammatory preconditioning, was suspected to be protective in this rat model of LPS-induced ALI through its effect on the inhibition of intrinsic apoptosis.
The animal model was established through intravenous injection of LPS (5 mg/kg) in the present study, based on previous reports [30,31]. Significant lung injury and dysfunction were observed following LPS administration, evidenced by the deterioration of histopathology, increased wet/dry weight ratio of the lung, and elevated serum lactate, consistent with other studies [26,[31][32][33]. The ALI in rats was attenuated by the performance of RInPC, as reflected by improved histopathological changes and decreased wet/dry ratio and serum lactate compared to the LPS group. Although the value of PaO2/FiO2 was not determined, lactate has been certified as an indicator of organ anoxemia, especially in lung injury. The lactate level has long been used as a marker of resuscitation, for risk stratification, and as a mortality prediction tool in sepsis, with the commonly held belief that elevated lactate levels in sepsis occur as a consequence of anaerobic metabolism from tissue malperfusion [34]. Cytopathic hypoxia and direct mitochondrial impairment have been proposed as causes of hyperlactatemia, although the exact mechanism remains incompletely understood [35].
Through TUNEL detection, it was confirmed that apoptosis of lung cells existed in the ALI model and that the RInPC significantly reduced its occurrence. Intrinsic apoptosis, i.e., mitochondria-dependent apoptosis, is activated through the mitochondrial release of Cyt-c, AIF, and Smac [36]. When Cyt-c enters the cytoplasm, the apoptosome assembles from apoptotic protease-activating factor 1, ATP, and procaspase-9, leading to cellular apoptosis via the activation of caspase-3 and caspase-7 [37]. To further elucidate the present hypothesis of intrinsic apoptosis, the Cyt-c and AIF levels in the cytoplasm and mitochondria were measured. The RInPC was demonstrated to attenuate the mitochondrial release of Cyt-c into the cytoplasm and thus the expression of AIF.
The apoptosis-related proteins play pivotal roles in apoptosis. The caspase-3 and caspase-9 are activated and regulated by the apoptotic pathway mediated by the Bcl-2/Bax ratio [38,39]. The present results demonstrated that the RInPC significantly downregulated the expression of caspase-9 and caspase-3, the proapoptosis protein, and the executive protein of apoptosis in vivo. In addition, the antiapoptosis protein Bcl-2 and the proapoptosis protein Bax, both involved in the regulation of the opening of mitochondrial permeability transition pore (mPTP), were also analyzed. The values indicated that the RInPC could attenuate the opening of mPTP through regulation of the Bcl-2/Bax ratio to inhibit the release of Cyt-c and AIF.
To explore the ability of the RInPC in regulating oxidative stress, we tested the contents of MDA and MPO and the activity of SOD. The MDA indirectly reflects the severity of the cells being attacked by free radicals. The MPO activity is an indicator of neutrophil infiltration in the lung. The SOD is an important oxygen-free radical scavenger [40]. It was shown that the LPS injection caused an increase in MDA production, MPO secretion, and SOD consumption in rats, suggesting an induced imbalance of oxidative stress. It was also demonstrated that the RInPC was found to be a good alleviator for the imbalance of oxidative stress induced by the LPS.
In this rat model of LPS-induced ALI, it was observed that the secretion of proinflammatory cytokines, including TNF-α, IL-1β, and IL-6, as well as the anti-inflammatory cytokine IL-10, was all increased significantly after the administration of the LPS, consistent with previous studies [15,41]. Monocytes and macrophages secrete cytokines such as TNF-α, IL-1β, and IL-6 during the early stage of the inflammatory response when activated by the LPS, which play an important role in the occurrence and development of ALI/ARDS [32,42,43]. TNF-α is a primary mediator of inflammation [32,43]. The IL-1β also appears in the early stage of ALI and cooperates with TNF-α to promote the inflammatory response. Levels of IL-6 positively correlate with mortality in experimental models of sepsis; measuring IL-6 levels in at-risk patients can accurately predict individuals who are at significant risk of death as a result of sepsis [44]. The IL-10 inhibits the expression of proinflammatory cytokines, chemokines, and chemokine receptors as well as mediating allergen tolerance in allergen-specific immunotherapy [42]. The RInPC significantly suppressed the secretion of TNF-α, IL-1β, and IL-6 while promoting the secretion of IL-10, which suggested that the RInPC could reduce the inflammatory response in this ALI model.
Pyroptosis exerts a cell type-dependent role in inflammation and immunity. The caspase-11-dependent noncanonical pyroptosis was activated by cytosolic LPS from invading Gram-negative bacteria in macrophages, monocytes, or other cells in rodent animals [7]. As intrinsic apoptosis is always induced by DNA damage, hypoxia, and metabolic stress; we speculated that the intrinsic apoptosis may have been secondary to noncanonical pyroptosis in the LPS-induced ALI models, and further research is needed.
Some limitations exist in this study because of the experimental design. First of all, the protective effect of the RInPC on ALI was examined only in rodent in vivo models; to determine whether there is a similar effect in other animals or humans, further investigation is warranted. Second, in vitro experiments have not been applied to explore whether cells treated with hypoxia and reoxygenation can better resist endotoxin damage. Another limitation is that the wet/dry ratio showed a significant difference between the control group and the LPS group, but that of the RInPC group showed no significant differences compared to the other two groups; measurement of the protein level in BALF may be a better choice in future experiments. Finally, the finding that the protective effects of RInPC on LPS-induced ALI correlate with intrinsic apoptosis is still observational, and the mechanism mediating this protection has not been fully investigated.
Conclusion
In the present study, the RInPC inhibited the inflammatory response and attenuated the oxidative stress, thereby reducing intrinsic apoptosis and ultimately providing lung protection in the LPS-induced ALI model in rats. If a similar effect could be found in other animal models or human beings, we may get a new strategy to fight against ALI and ARDS.
Data Availability
The datasets used and/or analyzed during the current study are available from the corresponding authors on reasonable request.
Ethical Approval
This study was performed in agreement with the ARRIVE guidelines. The ethics approval has been obtained from the Ethics Committee of the Central Hospital of Wuhan affiliated to Tongji Medical College, Huazhong University of Science and Technology. Great efforts were made to minimize the suffering of animals.
Consent
No consent was necessary.
Conflicts of Interest
All authors declare that they have no conflict of interest.
Authors' Contributions
Yong Liu and Baojun Chen contributed to the study conception and design. Material preparation, animal operation, data collection, and analysis were performed by Yong Liu, Jiahang Xu, Liang Zhao, and Jing Cheng. The first draft of the manuscript was written by Yong Liu, and the final version of the manuscript was revised by Baojun Chen and Yong Liu. All authors have read and approved the final version of the manuscript. | 6,001.8 | 2021-09-20T00:00:00.000 | [
"Biology",
"Medicine"
] |
ChiTeSQL: A Large-Scale and Pragmatic Chinese Text-to-SQL Dataset
Introduction
In the past few decades, a large amount of research has focused on searching for answers in unstructured texts given natural questions, which is also known as the question answering (QA) task (Burke et al., 1997;Kwok et al., 2001;Allam and Haggag, 2012;Nguyen et al., 2016). However, a lot of high-quality knowledge or data are actually stored in databases in the real world. It is thus extremely useful to allow ordinary users to directly interact with databases via natural questions. To meet this need, researchers have proposed the text-to-SQL task with released English datasets for model training and evaluation, such as ATIS (Iyer et al., 2017), GeoQuery (Popescu et al., 2003), WikiSQL (Zhong et al., 2017), and Spider (Yu et al., 2018b).
Formally, given a natural language (NL) question and a relational database, the text-to-SQL task aims to produce a legal and executable SQL query that leads directly to the correct answer, as depicted in Figure 1. A database is composed of multiple tables and denoted as DB = {T 1 , T 2 , ..., T n }. A table is composed of multiple columns and denoted as T i = {col 1 , col 2 , ..., col m }. Tables are usually linked with each other by foreign keys.
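This notation translates naturally into code. The sketch below models a database, its tables, and foreign keys as plain Python data structures; the table and column names are illustrative, not taken from the dataset.

```python
# Minimal sketch of the DB = {T1, ..., Tn}, Ti = {col1, ..., colm} notation.
from dataclasses import dataclass, field

@dataclass
class Table:
    name: str
    columns: list[str]

@dataclass
class Database:
    tables: list[Table]
    # (table, column) -> (referenced table, referenced column)
    foreign_keys: dict[tuple[str, str], tuple[str, str]] = field(default_factory=dict)

db = Database(
    tables=[Table("student", ["term_id", "name", "birthday"]),
            Table("enrollment", ["student_id", "course"])],
    foreign_keys={("enrollment", "student_id"): ("student", "term_id")},
)
print(len(db.tables), "tables,", len(db.foreign_keys), "foreign key(s)")
```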
The earliest datasets include ATIS (Iyer et al., 2017) , GeoQuery (Popescu et al., 2003), Restaurants (Tang and Mooney, 2001), Academic (Li and Jagadish, 2014), etc. Each dataset only has a single database containing a certain number of tables. All question/SQL pairs of train/dev/test sets are generated against the same database. Many interesting approaches are proposed to handle those datasets (Iyer et al., 2017;Yaghmazadeh et al., 2017;Finegan-Dollak et al., 2018).
However, real-world applications usually involve more than one database, and require the model to be able to generalize to and handle unseen databases during evaluation. To accommodate this need, the WikiSQL dataset is then released by Zhong et al. (2017). It consists of 80,654 question/SQL pairs for 24,241 single-table databases. They propose a new data split setting to ensure that databases in train/dev/test do not overlap. However, they focus on very simple SQL queries containing one SELECT statement with one WHERE clause. In addition, Sun et al. (2020) released TableQA, a Chinese dataset similar to the WikiSQL dataset. Yu et al. (2018b) released a more challenging Spider dataset, consisting of 10,181 question/SQL pairs against 200 multi-table databases. Compared with WikiSQL and TableQA, Spider is much more complex due to two reasons: 1) the need of selecting relevant tables; 2) many nested queries and advanced SQL clauses like GROUP BY and ORDER BY.
As far as we know, most existing datasets are constructed for English. Another issue is that they do not refer to the question distribution in real-world applications during data construction. Taking Spider as an example: given a database, annotators are asked to write many SQL queries from scratch, the only requirement being that the SQL queries cover a list of SQL clauses and nested queries; meanwhile, the annotators write NL questions corresponding to the SQL queries. In particular, all these datasets contain very few questions involving calculations between rows or columns, which we find are very common in real applications. This paper presents DuSQL, a large-scale and pragmatic Chinese text-to-SQL dataset, containing 200 databases, 813 tables, and 23,797 question/SQL pairs. Specifically, our contributions are summarized as follows.
• In order to determine a more realistic distribution of SQL queries, we collect user questions from three representative database-oriented applications and perform manual analysis. In particular, we find that a considerable proportion of questions require row/column calculations, which are not included in existing datasets.
• We adopt an effective data construction framework via human-computer collaboration. The basic idea is automatically generating SQL queries based on the SQL grammar and constrained by the given database. For each SQL query, we first generate a pseudo question by traversing it in the execution order and then ask annotators to paraphrase it into a NL question.
• We conduct experiments on DuSQL using three open-source parsing models. In particular, we extend the state-of-the-art IRNet model to accommodate the characteristics of DuSQL. Results and analysis show that DuSQL is a very challenging dataset. We will release our data at https://github.com/luge-ai/luge-ai/ tree/master/semantic-parsing.
SQL Query Distribution
As far as we know, existing text-to-SQL datasets mainly consider the complexity of SQL syntax when creating SQL queries. For example, WikiSQL has only simple SQL queries containing SELECT and WHERE clauses. Spider covers 15 SQL clauses including SELECT, WHERE, ORDER BY, GROUP BY, etc, and allows nested queries. However, to build a pragmatic text-to-SQL system that allows ordinary users to directly interact with databases via NL questions, it is very important to know the SQL query distribution in real-world applications, from the aspect of user need rather than SQL syntax. Our analysis shows that Spider mainly covers three types of SQL queries, i.e., matching, sorting, and clustering, whereas WikiSQL only has matching queries. Neither of them contains the calculation type, which we find composes a large portion of questions in certain real-world applications.
To find out the SQL query distribution in reallife applications, we consider the following three representative types of database-oriented applications, and conduct manual analysis against user questions. We ask annotators to divide user questions into five categories (see Appendix B for details), i.e., matching, sorting, clustering, calculation, and others.
Information retrieval applications. We use Baidu, the Chinese search engine, as a typical information retrieval application. Nowadays, search engines are still the most important way for web users to acquire answers. Thanks to progress in knowledge graph research, search engines can return structured tables or even direct answers from infobox websites such as Wikipedia and Baidu Encyclopedia. From one day of Baidu search logs, we randomly select 1,000 questions for which one of the returned top-10 relevant websites is an infobox website. Then, we manually classify each question into the above five types.
Customer service robots. Big companies build AI robots to answer questions of customers, which usually require the access to industrial databases. We provide a free trial API 1 to create customer service robots for developers. With the permission of the developers, we randomly select 1,500 questions and corresponding databases from their created robots. These questions cover multiple domains such as banks, airlines, and communication carriers, etc.
Data analysis robots. Every day, innumerable tables are generated, such as financial statements, business orders, etc. To perform data analysis over such data, companies hire professionals to write SQL queries. Obviously, it is extremely useful to build robots that allow financial experts to directly perform data analysis using NL questions. We collect 500 questions from our data analysis robot. Figure 2 shows the query distributions of the three applications. It is obvious that calculation questions occupy a considerable proportion in all three applications. For customer service robots, users mainly try to search for information, and therefore most questions belong to the matching type; yet, 8% of questions require calculation SQL queries to be answered. For data analysis robots, calculation questions dominate the distribution, since users try to figure out useful clues behind the data.
To gain more insights, we further divide calculation questions into three subtypes according to the SQL syntax, i.e., row calculation, column calculation, and calculation with a constant. Figure 3 shows some examples. [Figure 3 examples: column calculation, "What is the population density of Hefei?"; calculation with a constant, "How old is Jenny?" (SELECT curdate - birthday FROM student WHERE name = 'Jenny') and "How far is Beijing's population from 23 million?".]
Corpus Construction
Building a large-scale text-to-SQL dataset with multi-table databases is extremely challenging. First, though there are a large amount of independent tables on the Internet, connections among the tables are usually unavailable. Therefore, great efforts are needed to create multi-table databases. Second, it is usually difficult to obtain NL questions against certain databases. Third, given a question and the corresponding database, we need proficient annotators to write a SQL query for the question who understand both the database schema and the SQL syntax.
Different from previous works, which usually rely on humans to create both NL questions and SQL queries (Yu et al., 2018b), we build our dataset via human-computer collaboration, as illustrated in Figure 4. The key idea is to automatically generate SQL queries paired with pseudo questions given a database. The pseudo questions are then paraphrased into NL questions by humans. Finally, to guarantee data quality, low-confidence SQL queries and NL questions are detected according to their overlap and similarity metrics and are further checked by humans.
Database Creation
Most of mature databases used in industry are not publicly available. So we collect our databases mainly from the Internet. However, databases available on the Internet are in the form of independent tables, which need to be linked with other tables. We create databases in three steps: table acquisition, table merging, and foreign key creation.
We collect websites to crawl tables, ensuring that they cover multiple domains. As the largest Chinese encyclopedia, Baidu Baike contains more than 17 million entries across more than 200 domains. We start with all the entries in Baike as the initial sites and extend the collection based on the reference sites in each entry page, keeping sites from which tables are crawled. The final collection contains entries from Baike, annual report websites, vertical domain websites, and other websites such as community forums. Table 1 shows the data distribution regarding database sources.
To make a domain correspond to a database, we merge tables with the same schema into a new table with a new schema, e.g., tables about China cities with the schema {population, area, ...} are merged into a new table with the schema {termid, name, population, area, ...}, where termid is randomly generated as the primary key and name is the name of the city. Meanwhile, we add a type for each column according to the form of its values, the column types being text, number, and date.
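A minimal sketch of this merging step with pandas follows: same-schema tables are concatenated, a generated termid primary key is added, and each column receives a coarse type. The table contents and the use of pandas are assumptions for illustration, not the paper's actual pipeline.

```python
# Hedged sketch: merge same-schema tables, add a primary key, type columns.
import uuid
import pandas as pd

hefei = pd.DataFrame({"name": ["Hefei"], "population": [9370000], "area": [11445]})
wuhan = pd.DataFrame({"name": ["Wuhan"], "population": [11210000], "area": [8569]})

cities = pd.concat([hefei, wuhan], ignore_index=True)
cities.insert(0, "termid", [uuid.uuid4().hex for _ in range(len(cities))])

def column_type(s: pd.Series) -> str:
    """Coarse typing into the three categories used by the dataset."""
    if pd.api.types.is_numeric_dtype(s):
        return "number"
    if pd.api.types.is_datetime64_any_dtype(s):
        return "date"
    return "text"

print({c: column_type(cities[c]) for c in cities.columns})
```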
We create foreign keys between two tables via entity linking, e.g., a table named "Livable cities in 2019" with the schema of {city_name, ranker, ...} joins to a table named "China cities" with the schema of {term_id, name, area, ...} through the links of entities in "city_name" and "name". According to the foreign keys, all tables are split into separate graphs, each of which consists of several connected tables and serves as one database.
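The entity-linking step can be approximated by value overlap between text columns, as in the sketch below; the 0.8 overlap ratio and the set-based matching are illustrative assumptions, not the paper's actual linker:

    def create_foreign_keys(db, min_ratio=0.8):
        """Propose foreign keys between columns of different tables when
        most values of one column resolve to entities in the other."""
        fks = []
        for t1, cols1 in db.items():
            for t2, cols2 in db.items():
                if t1 == t2:
                    continue
                for c1, v1 in cols1.items():
                    for c2, v2 in cols2.items():
                        if v1 and len(v1 & v2) / len(v1) >= min_ratio:
                            fks.append((t1, c1, t2, c2))
        return fks

    db = {"Livable cities in 2019": {"city_name": {"Hefei", "Beijing"}},
          "China cities": {"name": {"Hefei", "Beijing", "Shanghai"}}}
    print(create_foreign_keys(db))
    # [('Livable cities in 2019', 'city_name', 'China cities', 'name')]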
Automatic Generation of SQL Queries
Given a database, we want to generate as many common SQL queries as possible. Both manually writing SQL queries and quality-checking them take a significant amount of time. Fortunately, SQL queries can be automatically generated from the grammar. We therefore utilize production rules from the grammar to generate SQL queries automatically, instead of asking annotators to write them. According to the difficulty and semantic correctness of a SQL query, we prune the rule paths in the generation. Then, we sample the generated SQL queries according to the distribution in Figure 2 and carry out the follow-up work based on them.
As illustrated in Figure 5, the SQL query can be represented as a tree using the rule sequence of {SQLs = SQL, SQL = Select Where, Select = SELECT A, Where = WHERE Conditions, ...}, all of which are production rules of the grammar. Guided by the SQL query distributions in real applications, we design production rules to ensure that all common SQL queries can be generated, e.g., the rule of {C = table.column mathop table.column} allows calculations between columns or rows. By exercising every rule of the grammar, we can generate SQL queries covering patterns of different complexity.
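A toy sketch of rule-based generation over such a grammar follows; the rules shown are a simplified, assumed fragment of the grammar in Figure 5, not the full production-rule set:

    import random

    # Nonterminals map to lists of alternative right-hand sides.
    GRAMMAR = {
        "SQLs": [["SQL"]],
        "SQL": [["Select"], ["Select", "Where"]],
        "Select": [["SELECT", "A"]],
        "Where": [["WHERE", "Conditions"]],
        "Conditions": [["C", "=", "value"], ["C", ">", "value"]],
        # Mirrors {C ::= table.column mathop table.column}, which allows
        # calculations between columns.
        "A": [["C"], ["C", "mathop", "C"]],
        "C": [["table.column"]],
    }

    def expand(symbol):
        """Recursively expand a nonterminal by sampling production rules."""
        if symbol not in GRAMMAR:  # terminal: emit as-is
            return [symbol]
        out = []
        for s in random.choice(GRAMMAR[symbol]):
            out.extend(expand(s))
        return out

    random.seed(0)
    print(" ".join(expand("SQLs")))  # e.g. SELECT table.column mathop table.column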
We consider two aspects in the automatic SQL generation: the difficulty and the semantic correctness of a SQL query. To control the difficulty of the generated queries, we impose some restrictions based on our analysis of real-life questions: first, a SQL query contains at most one nested query; second, there are no more than three conditions in a where clause and no more than four answers in a select statement; third, a SQL query has at most one math operation; fourth, most text values are taken from the databases (this reduces the difficulty of SQL prediction; we plan to remove the restriction in the next release of DuSQL). To ensure the semantic correctness of a generated query, we abide by the preconditions of each clause and expression in the generation, e.g., the expression {A > SQL} requires that the nested SQL returns a number value. The full list of preconditions is shown in Appendix C.
Under these requirements, we generate a large number of candidate SQL queries against 200 databases. Among them, only a tiny proportion of SQL queries are of the calculation type, since only a few columns support calculation operations. We keep all calculation-type queries, randomly select sorting- and clustering-type queries of the same size, and select matching-type queries (including combinations of the matching type with other types, e.g., the SQL query {SELECT ... WHERE ... ORDER BY ...} combines the matching and sorting types) of three times the size. We make sure that these selected queries are spread across all 200 databases. Then these queries are used as input for the follow-up work.
Semi-automatic Generation of Questions
For each SQL query, we automatically generate a pseudo question to explain it. The pseudo questions are then shown to annotators, who can understand them and paraphrase them into NL questions without looking at the databases or SQL queries.
We generate a pseudo question for a SQL query according to its execution order. As shown in Figure 6, the entire pseudo question of the SQL query consists of pseudo descriptions of all clauses according to their execution orders. The pseudo description of a clause consists of pseudo descriptions of all its components. We give a description for each component, e.g., list for SELECT, average for the aggregator of avg. Appendix D shows the descriptions for all components. To ensure that the pseudo question is clear and reflects the meaning of the SQL query, intermediate variables are introduced to express sub-SQL queries, e.g., "v1" in the example of Figure 6 represents the result of the nested query and is used as a value in other expressions.
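A minimal sketch of this clause-by-clause verbalization in execution order; the component descriptions and the simplified SQL representation here are illustrative assumptions (Appendix D lists the actual descriptions):

    # A few illustrative component descriptions
    DESC = {"select": "list", "avg": "average of", "sum": "total of",
            "where": "whose", ">": "is more than"}

    def pseudo_question(sql):
        """Verbalize a simplified SQL dict clause by clause, starting from
        the clauses executed first (FROM/WHERE), as in Figure 6."""
        conds = [f"{DESC['where']} {col} {DESC[op]} {val}"
                 for col, op, val in sql.get("where", [])]
        agg, col = sql["select"]
        head = f"{DESC['select']} {DESC[agg]} {col}" if agg else f"{DESC['select']} {col}"
        return " ".join([head, "of", sql["table"], *conds]).strip()

    sql = {"table": "cities", "select": ("avg", "population"),
           "where": [("area", ">", "10000")]}
    print(pseudo_question(sql))
    # list average of population of cities whose area is more than 10000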
We ask two annotators (full-time employees who are familiar with the SQL language and have extensive experience annotating QA data) to reformulate the pseudo questions into NL questions, rewriting some values in the SQL queries as synonyms, and to filter out two kinds of questions: 1) incomprehensible ones, which are semantically unclear; 2) unnatural ones, which humans would not ask; e.g., "When province is Sichuan, list the total rank of these cities." for the SQL query {SELECT sum(rank) FROM T2 WHERE province = 'Sichuan'} is considered unnatural, as a total of ranks would not be asked by humans. During the paraphrasing process, 6.7% of question/SQL pairs are filtered out, among which 76.5% are complex queries. We then ask other annotators to check the correctness of the reformulated questions, and find that 8% of the questions are inaccurate.
Review and Checking
To guarantee data quality, we automatically detect low-quality question/SQL pairs according to the following evaluation metrics.
• Overlap. To ensure the naturalness of our questions, we calculate the overlap between the pseudo question and the corresponding NL question. A question with an overlap higher than 0.6 is considered to be of low quality (see the sketch after this list).
• Similarity. To ensure that the question contains enough information for the SQL query, we train a similarity model based on question/SQL pairs. The question with a similarity score less than 0.8 is considered to be of low quality.
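As an illustration of the overlap check above, the following sketch flags pairs whose paraphrase stays too close to the template; the token-level Jaccard formulation is an assumption, since the paper does not specify the exact overlap measure:

    def overlap(pseudo_q, natural_q):
        """Token-level Jaccard overlap between the pseudo question and its
        human paraphrase; character n-grams would be used for Chinese."""
        a, b = set(pseudo_q.split()), set(natural_q.split())
        return len(a & b) / max(len(a | b), 1)

    def is_low_quality(pseudo_q, natural_q, threshold=0.6):
        # High overlap suggests a lazy, unnatural rewrite of the template.
        return overlap(pseudo_q, natural_q) > threshold

    print(is_low_quality("list average of population of cities",
                         "what is the mean population across all cities?"))  # False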
In the first round, about 18% of question/SQL pairs are of low quality. We ask annotators to check these pairs and correct the erroneous ones. This process iterates through human-computer collaboration until the above metrics no longer change; it took two iterations in the construction of DuSQL.
Dataset Statistics
We summarize the statistics of DuSQL and other cross-domain datasets in Table 2, and give some examples in Figure 7. DuSQL contains enough question/SQL pairs for all common types. WikiSQL and TableQA are simple datasets, containing only matching questions. Spider and CSpider (Min et al., 2019) mainly cover matching, sorting, clustering, and their combinations. They contain very few questions of the calculation type, and all of them only need column calculations. Spider does not focus on questions that require common knowledge and math operations. According to our analysis in Figure 2, the calculation type is very common, accounting for 8% to 65% of questions in different applications. DuSQL, a pragmatic industry-oriented dataset, conforms to the distribution of SQL queries in real applications. Meanwhile, DuSQL is larger, twice the size of other complex datasets. DuSQL contains 200 databases, covering about 70% of entries in Baike and more than 160 domains, e.g., cities, singers, movies, animals, etc. We provide content for each database.
All the values of a SQL query can be found in the database, except for numeric values. All table and column names in the database are clear and self-contained. In addition, we provide an English schema for each database, including table names and column headers.
Benchmark Approaches
All existing text-to-SQL works focus on English datasets. Considering that DuSQL is most similar to Spider, we choose the following three representative, publicly available parsers as our benchmark approaches, to understand the performance of existing approaches on our new Chinese dataset.
[Table 2 (dataset statistics, with columns such as Dataset, Size, and DB) appears here in the original.]

We also extend the state-of-the-art IRNet model to accommodate the two characteristics of our data, i.e., calculation questions and the need for value prediction.
Seq2Seq+Copying (Zhong et al., 2017) incorporates the database schemas into the model input and uses a copying mechanism in the decoder.
SyntaxSQLNet (Yu et al., 2018a) proposes a SQL syntax tree-based network to generate SQL structures, and uses generation path history and table-aware column attention in the decoder.
IRNet (Guo et al., 2019) designs an intermediate representation called SemQL for encoding higher-level abstraction structures than SQL, and then uses a grammar-based decoder (Yin and Neubig, 2017) to synthesize a SemQL query. At present, IRNet reports the state-of-the-art results on the Spider dataset.
Both SyntaxSQLNet and IRNet utilize a grammar to guide SQL generation and conduct experiments on the Spider dataset. However, neither of their grammars can handle calculation questions. Another major difference between our dataset and Spider is that our evaluation metric (see Section §5) also considers value prediction, since the values in a SQL query come from the corresponding question or database, both of which are available inputs to the model. Please refer to our discussion in Section §3 for details. Due to these characteristics of our dataset, all three models perform poorly on DuSQL. Therefore, we extend the IRNet model to accommodate DuSQL as follows.
First, we extend the grammar of SemQL to accommodate the two characteristics of our dataset, as shown in Figure 8. The production rules in bold are added to parse calculation questions. Other production rules are modified based on the original rules to support value prediction (due to space limitations, we attach the full list of the extended grammar in Appendix F). Then we use all the n-grams of length 1-6 in the question to match database cells or numbers/dates in order to determine candidate values for the predicted SQL query. The values are used in the same way as the columns and tables in the IRNet model.
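A sketch of this candidate-value extraction step; whitespace tokenization is an assumption (character n-grams would be used for Chinese questions), and number/date matching is reduced to a simple numeric pattern:

    import re

    def candidate_values(question, db_cells, max_n=6):
        """Collect candidate SQL values: question n-grams (length 1-6)
        that match a database cell, plus numbers appearing in the text."""
        tokens = question.split()
        cands = set()
        for n in range(1, max_n + 1):
            for i in range(len(tokens) - n + 1):
                gram = " ".join(tokens[i:i + n])
                if gram in db_cells:
                    cands.add(gram)
        cands |= {t for t in tokens if re.fullmatch(r"\d+(\.\d+)?", t)}
        return cands

    cells = {"Beijing", "Hefei", "Sichuan"}
    print(candidate_values("How far is Beijing population from 23 million ?", cells))
    # {'Beijing', '23'}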
Experiments
Data Settings Following WikiSQL, we split our dataset into train/dev/test such that the databases are non-overlapping among the three subsets. In other words, all question/SQL pairs for the same database are in the same subset. This is also referred to as the cross-domain parsing problem, since some database schemas in dev/test do not appear in train. In the end, the 200 databases are split into 160/17/23, and the 23,979 question/SQL pairs are split into 18,602/2,039/3,156.
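A minimal sketch of this database-disjoint split; the random shuffling and the db_id field are assumptions about the data layout, while the 160/17/23 database counts follow the text:

    import random

    def split_by_database(pairs, n_dev_db=17, n_test_db=23, seed=42):
        """Split question/SQL pairs so that no database is shared across
        train/dev/test (the cross-domain setting)."""
        db_ids = sorted({p["db_id"] for p in pairs})
        random.Random(seed).shuffle(db_ids)
        dev_dbs = set(db_ids[:n_dev_db])
        test_dbs = set(db_ids[n_dev_db:n_dev_db + n_test_db])
        train = [p for p in pairs if p["db_id"] not in dev_dbs | test_dbs]
        dev = [p for p in pairs if p["db_id"] in dev_dbs]
        test = [p for p in pairs if p["db_id"] in test_dbs]
        return train, dev, test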
Evaluation Metrics Evaluation metrics for the text-to-SQL task include component matching, exact matching, and execution accuracy. Component matching (Yu et al., 2018b) uses F1 score to evaluate the performance of the model on each clause. Exact matching, namely the percentage of questions whose predicted SQL query is equivalent to the gold SQL query, is widely used in text-to-SQL tasks. Execution accuracy, namely the percentage of questions whose predicted SQL query obtains the correct answer, assumes that each SQL query has an answer.
We use exact matching as the main metric, and follow Xu et al. (2017) and Yu et al. (2018b) to handle the "ordering issue". Finally, we report the model performance with (w) and without (w/o) value evaluation.
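A sketch of order-insensitive exact matching over a simplified SQL representation; real evaluators (e.g., the Spider evaluation script) decompose queries far more finely, so this only illustrates the idea:

    def exact_match(pred, gold, with_values=True):
        """Clause-wise exact matching that ignores the order of SELECT
        items and WHERE conditions (the 'ordering issue'); ORDER BY stays
        order-sensitive. with_values=False gives w/o-value scoring."""
        def norm(cond):
            col, op, val = cond
            return (col, op, val if with_values else None)

        if set(pred["select"]) != set(gold["select"]):
            return False
        if {norm(c) for c in pred["where"]} != {norm(c) for c in gold["where"]}:
            return False
        return pred.get("order_by", []) == gold.get("order_by", [])

    gold = {"select": ["name", "population"],
            "where": [("area", ">", "10000"), ("province", "=", "Sichuan")]}
    pred = {"select": ["population", "name"],
            "where": [("province", "=", "Sichuan"), ("area", ">", "10000")]}
    print(exact_match(pred, gold))  # True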
Main results. Table 4 shows the performance of the benchmark approaches. The performance of Seq2Seq+Copying is the lowest. It uses the copying mechanism to reduce errors caused by out-of-domain words in the databases of the test set, but it predicts many invalid SQL queries with grammatical errors, since its decoder does not consider SQL structures at all. SyntaxSQLNet and IRNet outperform Seq2Seq+Copying by utilizing a grammar derived from SQL structures to guide SQL generation. In particular, IRNet utilizes SemQL as an abstract representation of SQL queries. However, neither of the two vanilla models handles calculation questions and value prediction properly. The basic IRNet achieves only 34.2/15.4 accuracy on the test set w/o and w/ value evaluation.
We can see that by simply extending IRNet to parse calculation questions and predict values, the IRNetExt model achieves much higher accuracy (54.3/50.1).
Ablation study. We perform an ablation study to gain more insight into the contribution of our extensions. As shown in Table 4, the accuracy on the test set drops by 4.5 points when the added production rules are excluded from the grammar of SemQL; the accuracy on the calculation type, which makes up 20.7% of the questions in the test set, falls to 0. After excluding the prediction of values, the test performance drops significantly for two reasons. First, a large number of questions contain values, accounting for about 75% in the dev set and 70% in the test set. Second, the generation of where clauses can be improved by leveraging the column-cell relationship. Table 3 shows the performance on different SQL query types. First, the grammar extension is effective: the accuracy of all types is significantly improved. Second, the accuracy of the calculation type is lower than that of other types, as many calculation questions require incorporating common knowledge, e.g., age = dateOfDeath - dateOfBirth. How to represent and incorporate such knowledge into the model is very challenging. Third, models perform poorly on questions requiring common knowledge, as these need understanding rather than matching, such as mapping "the oldest" to "age".
Related Work
Semantic parsing. Semantic parsing aims to map NL utterances into semantic representations, such as logical forms (Liang, 2013), SQL queries (Tang and Mooney, 2001), Python code (Ling et al., 2016), etc. In order to facilitate model training and evaluation, researchers have released a variety of datasets. ATIS and GeoQuery are two popular early datasets originally in logical forms, later converted into SQL queries (Iyer et al., 2017; Popescu et al., 2003). As two recently released datasets, WikiSQL (Zhong et al., 2017) and Spider (Yu et al., 2018b) have attracted extensive research attention. It is also noteworthy that Min et al. (2019) propose the CSpider dataset by translating the English questions of Spider into Chinese.
Data construction methods. As discussed in Section §3, creating a large-scale semantic parsing dataset is extremely challenging. To construct Spider, Yu et al. (2018b) ask annotators to write both questions and SQL queries given a database. Both Iyer et al. (2017) and Herzig and Berant (2019) assume that the database and questions are given and try to reduce the effort of creating semantic representations. Our data construction is most closely related to Overnight (Wang et al., 2015), which proposes to automatically generate logical forms based on a hand-crafted grammar and to ask annotators to paraphrase pseudo questions into NL questions. Overnight focuses on logical form (LF) based semantic representations, while our work focuses on the SQL representation. The differences are two-fold. First, the databases of Overnight are much simpler, composed of a set of entity-property-entity triples. Second, the LF operations of Overnight are much simpler, consisting only of matching and aggregation operations, such as count, min, max. Our dataset is more complex and thus imposes more challenges on the data construction.
Text-to-SQL parsing approaches. Seq2Seq models achieve the state-of-the-art results on single-database datasets such as ATIS and GeoQuery (Dong and Lapata, 2016). With the release of the WikiSQL dataset, researchers have made efforts to handle unseen databases by using the database schema as input. Two mainstream approaches are the Seq2Seq model with a copy mechanism (Sun et al., 2018) and the Seq2Set model (Xu et al., 2017). With BERT representations (Devlin et al., 2019), the execution accuracy exceeds 90% (He et al., 2019). For the more challenging Spider dataset with multi-table databases, IRNet (Guo et al., 2019) introduces an intermediate representation (SemQL) for SQL queries and uses a grammar-based decoder to generate SemQL, reporting state-of-the-art performance. Bogin et al. (2019) propose to encode the database schema with a graph neural network. Recently, Wang et al. (2019) propose RATSQL, which uses relation-aware self-attention to better encode the question and database schema simultaneously.
Conclusion
We present the first large-scale and pragmatic Chinese dataset for cross-domain text-to-SQL parsing. Based on the analysis of questions from real-world applications, our dataset contains a considerable proportion of questions that require row/column calculations. We extend the state-of-the-art IRNet model on Spider to accommodate DuSQL, and obtain a substantial performance boost. Yet, there is still large room for improvement, especially on calculation questions, which usually require incorporating common-sense knowledge into the model. For future work, we will continually improve the scale and quality of our dataset, to facilitate future research and to meet the needs of database-oriented applications.

A Production Rules for SQL Generation

Figure 9 shows the production rules used for SQL generation.
B Query Type Definition
Question classification is mostly based on the operations used in the corresponding SQL queries. Matching means the answer can be directly obtained from the database. Sorting means we need to sort the returned results or return only the top-k results. Clustering means we have to perform aggregations (count, min/max, etc.) on each cluster. Calculation means we need to calculate between columns or rows to get the answer. Other usually corresponds to questions requiring reasoning, or subjective questions, e.g., "Is Beijing bigger than Shanghai?" and "Is the ticket expensive?". Figure 10 shows some examples for the types in Figure 2, except for the calculation type (shown in Figure 3) and the other type, which does not have corresponding SQL queries.

[Figure 10: Examples of types in Figure 2, all based on the database in Figure 1. Matching: "List cities with a population less than 10 million." Sorting: "Give the top 5 cities with the largest population." → SELECT name FROM T1 ORDER BY population DESC LIMIT 5. Clustering: "Give the total population of each province." → SELECT province, sum(population) FROM T1 GROUP BY province.]
C Preconditions in SQL Generation
To ensure the semantic correctness of the generated SQL query, we define the preconditions for each production rule, and abide by these preconditions in the SQL query generation.
• For the generation of SQL queries with multiple SQLs, e.g., {SQLs ::= SQL union SQLs}: the columns in the select clause of the previous SQL match the columns in the select clause of the subsequent SQL, i.e., the columns of the two select clauses are the same or connected by foreign keys.
• For the rule of generating GroupC: the C is generated from the rule of {C ::=
D Descriptions of SQL Components
We provide a description for each basic component. The descriptions for columns, tables, and values are simply the columns, tables, and values themselves.
Meanwhile, we provide a description for each production rule, as shown in Figure 12.

E Dataset Statistics

Table 5 shows the statistics of our dataset and other cross-domain datasets in the manner of Spider. We provide enough examples for both advanced SQL clauses and the calculation type.
F The extended grammar of SemQL
We extend the grammar used in the IRNet model to accommodate DuSQL, as shown in Figure 11. Figure 8 shows the main changes.
"Computer Science"
] |
Dynamic 3D Measurement without Motion Artifacts Based on Feature Compensation
Phase-shift profilometry (PSP) holds great promise for high-precision 3D shape measurement. However, when measuring moving objects, PSP requires multiple images to calculate the phase, so the movement of the object causes artifacts in the measurement, which in turn has a significant impact on the accuracy of the 3D surface measurement. We therefore propose a method to reduce motion artifacts using feature information in the image, and demonstrate it with the six-step phase-shift method as a case study. The simulation results show that the phase of the object is strongly affected when the object is in motion and that the phase shift due to motion can be effectively reduced using this method. Finally, artifact optimization was carried out in vibration experiments on a copper tube at a measurement frequency of 320 Hz. The experimental results confirm the effectiveness of the method.
Introduction
Due to various benefits such as excellent precision, fast measurement in multiple dimensions, and the ability to automate the process, non-contact optical 3D shape measurement technology has gained popularity [1][2][3][4]. This technology has been extensively researched and applied in various fields under computer control [5][6][7]. One of the most widely used methods for measuring optical 3D structures is fringe projection profilometry (FPP), which relies on phase calculation. FPP is well known for its accurate measurements and high spatial resolution [8,9]. Within FPP technology, Fourier transform profilometry (FTP) and phase-shifting profilometry (PSP) have gained significant research attention in recent years [10,11]. FTP is a single-frame raster projection method based on spatial filtering, which requires only one image to reconstruct the target information [12]. However, spectrum aliasing can affect the accuracy of 3D reconstruction [13,14], so when studying object motion, FTP may not meet the requirements for reconstruction accuracy [15,16]. PSP, on the other hand, is the most widely studied method and can obtain highly robust and high-precision pixel-wise phase unwrapping [17,18]. However, PSP may encounter difficulties in scenes with dynamic motion, as the movement of objects distorts the phase and leads to errors. This issue is particularly prominent when the motion of the object between inter-frame times is significant [19,20]. To address these issues, research on dynamic 3D shape measurement using PSP has focused on reducing the number of patterns required for each 3D reconstruction and on improving the quality of the measurement to reduce motion artifacts.
Measurement efficiency in fringe projection profilometry can be enhanced by reducing the number of projections, as proposed by various researchers [21][22][23]. Nonetheless, this approach may result in phase ambiguity due to the periodicity of the sinusoidal signal [24]. One possible solution is to utilize a temporal phase unwrapping (TPU) algorithm with auxiliary patterns such as Gray codes or multi-wavelength fringes [25,26]. Another approach is the use of composite phase-shifting schemes such as dual-frequency PSP, which can resolve phase ambiguity without significantly increasing the number of patterns. However, PSP requires at least three fringe patterns to achieve high-precision pixel-wise phase measurement [27,28]. These methods often compromise measurement accuracy by relying on low-frequency fringes for reliable phase unwrapping. Therefore, improving measurement accuracy is still a major challenge in dynamic 3D shape measurement using FPP.
Numerous studies have aimed to improve measurement precision and reduce motion artifacts in FPP. Weise et al. (2007) introduced a method based on least-squares fitting to estimate the phase shifts caused by movement [29]. Pistellato (2019) introduced a probabilistic framework aimed at mitigating the influence of errors; however, this framework does not address errors caused by motion artifacts [30,31]. Lu (2016) developed an iterative least-squares algorithm to correct the unknown phase offset induced by the 3D rigid motion of the object [32]. The authors of [33] summarized the methods shaping the development of projection measurements and also proposed a strategy to reduce motion errors in three-step projection measurements. These techniques assume that each pixel undergoes uniform motion, which may not hold for objects exhibiting different motion patterns. The authors of [34] proposed a four-step phase-shifting contour method that compensates for error by utilizing the intermediate phase of two results. This method is effective in the presence of uniform motion but fails to handle non-uniform and non-rigid objects. Wang (2018) utilized the Hilbert transform to alleviate motion error, which is effective for periodic motion [35]. Nevertheless, current methods still face challenges in dealing with motion artifacts, particularly in scenarios where the target object exhibits a high movement frequency or a large movement amplitude.
In this paper, a new technique is introduced to suppress motion-induced artifacts in the reconstruction of 3D point clouds of freely moving objects, thereby improving the accuracy of 3D measurements. The proposed algorithm is a new feature-based phase optimization algorithm that reduces motion errors and optimizes the artifacts arising from the non-periodic motion of flexible copper tubes; the feasibility and effectiveness of the method are verified by simulation and experiment. In the simulation, this study uses a 512 × 512 pixel image containing a hemispherical object to test our artifact suppression algorithm. In the experiments, this study uses structured light projection to measure the free motion of a copper tube struck by a small hammer. In a supplementary experiment, we measure the 3D point cloud of a free-falling ping-pong ball bouncing back after hitting the ground, and optimize the results. Finally, our method is compared with other optimization algorithms for periodic errors, and its superior performance in reducing reconstruction artifacts is demonstrated.
The rest of the paper is organized as follows: Section 2 illustrates the principles of the 3D measurement method for reducing motion-induced errors and simulates the associated experimental effects. Section 3 presents experimental results to validate the proposed method. Section 4 summarizes and discusses the features of the proposed method.
Motion-Induced Error for Six-Step Phase-Shifting Method
For a generic N-step phase-shifting algorithm, the intensity distribution of the nth fringe pattern can be described as:

I_n(x, y) = A(x, y) + B(x, y)·cos(Φ(x, y) − 2π(n − 1)/N), n = 1, 2, ..., N,

where A(x, y), B(x, y), and Φ(x, y) denote the average intensity, intensity modulation, and phase map, respectively, and I_n(x, y) is the intensity recorded by the camera. In dynamic measurements, according to the sampling theorem, fewer projection patterns are preferable, but the number of projection patterns also determines the quality of the reconstructed phase. Therefore, in order to balance the effect of dynamic measurement against the quality of the reconstructed phase, this paper takes the six-step projection measurement algorithm as an example for error calculation and compensation.
For a standard six-step phase-shifting method with a π/3 phase step, the wrapped phase can be computed using the following equation:

Φ(x, y) = arctan[ Σ_{n=1}^{6} I_n(x, y)·sin(2π(n − 1)/6) / Σ_{n=1}^{6} I_n(x, y)·cos(2π(n − 1)/6) ].

The N-step phase-shifting algorithm can obtain an accurate phase map Φ(x, y) if the phase shift 2π(n − 1)/N is precise. If the measured object is moving, the phase shift at each pixel in the captured images carries an additional unknown phase-shift error ε_n(x, y) [n = 1, 2, 3, ..., N − 1] due to the object's motion. The error ε_n(x, y) distorts the recorded patterns,

I_n(x, y) = A(x, y) + B(x, y)·cos(Φ(x, y) − 2π(n − 1)/N + ε_n(x, y)),

and hence the phase calculation. For a small phase-shift error ε, sin(ε) ≈ ε and cos(ε) ≈ 1, which allows each ε_n to be expressed in terms of the actual phase and the measured intensities; the relationship between each error and the actual phase can then be calculated. It follows that, if the error differs from projection to projection, the actual phase depends on the error introduced by each projection. In order to reduce the motion error, local feature matching is used to calculate the displacement changes across the six phase-shifted images, and the motion is then compensated to reduce the generation of artifacts.
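A minimal numpy sketch of the six-step relations above, verifying that the least-squares wrapped phase recovers a synthetic phase map; the fringe amplitudes and the 51-pixel period are arbitrary choices:

    import numpy as np

    N = 6
    deltas = 2 * np.pi * np.arange(N) / N  # phase shifts 2*pi*(n-1)/N

    def simulate_patterns(phase, A=128.0, B=100.0):
        """Ideal N-step fringes: I_n = A + B*cos(phase - delta_n)."""
        return [A + B * np.cos(phase - d) for d in deltas]

    def wrapped_phase(images):
        """Least-squares wrapped phase of an N-step sequence."""
        num = sum(I * np.sin(d) for I, d in zip(images, deltas))
        den = sum(I * np.cos(d) for I, d in zip(images, deltas))
        return np.arctan2(num, den)  # in (-pi, pi]

    y, x = np.mgrid[0:512, 0:512]
    true_phase = np.angle(np.exp(1j * 2 * np.pi * x / 51.0))
    rec = wrapped_phase(simulate_patterns(true_phase))
    print(np.allclose(rec, true_phase))  # True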
Local Feature Matching
The measured phase consists of the object's own phase and the projected phase, and the recorded gray value is related to both. When motion occurs, it changes the gray value corresponding to the projected phase, as shown in Figure 1.
When the motion error of the object is taken into account, the projections at different moments are affected differently by the projected fringes, as shown in Figure 2.
In vertical projection measurement, the phase obtained in the first projection is P1, and the phase obtained in the second projection is P2. Since P2 is affected by the movement of the object, we denote by P2′ the phase that would have been obtained in the second projection had the object not moved.
The difference between the two phases is a shift DY in the Y-direction, which can be located from the extremum (maximum or minimum value) of the object's features: A1 is the maximum value of phase P1, with corresponding coordinate Y1; A2 is the maximum value of phase P2, with corresponding coordinate Y2. The optimized phase P2′ can then be obtained by the following equation:

P2′(x, y) = P2(x, y + (Y2 − Y1)).

The above is the phase optimization of a single column, while the phase optimization of a whole image can be carried out row by row:

P2′(n, y) = P2(n, y + (Y2^n − Y1^n)),

where Y2^n represents the feature phase position of the second image in row n, and Y1^n represents the feature phase position of the first image in row n, taken as the main feature. The same equation can be extended to the third through sixth images:

Pi′(n, y) = Pi(n, y + (Yi^n − Y1^n)), i = 2, ..., 6,

where Pi′(x, y) represents the optimized phase of the ith image and Yi^n represents the Y coordinate of its corresponding feature location; this offset is used to calculate the compensated pixel value of the image. The main features are obtained from the phase of the first image, so images 2-6 are all phase-optimized against the features of the first image, which removes the phase effect of motion on the projection measurements and ultimately yields better point cloud data.
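A minimal sketch of this row-wise feature compensation, assuming the feature position is simply the location of the row maximum and that shifts are whole pixels (the paper notes that sub-pixel matching is not achieved):

    import numpy as np

    def compensate(phase_img, ref_img):
        """Shift each row of phase_img so that its feature position (the
        column of the row maximum) matches the same feature in ref_img,
        i.e. Pi'(n, y) = Pi(n, y + (Yi^n - Y1^n)) row by row."""
        out = np.empty_like(phase_img)
        for n in range(phase_img.shape[0]):
            y_ref = np.argmax(ref_img[n])    # Y1^n: feature in image 1
            y_cur = np.argmax(phase_img[n])  # Yi^n: feature in image i
            out[n] = np.roll(phase_img[n], y_ref - y_cur)
        return out

    # Toy check: a pattern shifted by 7 columns is realigned to the reference
    ref = np.tile(np.hanning(128), (4, 1))
    moved = np.roll(ref, 7, axis=1)
    print(np.allclose(compensate(moved, ref), ref))  # True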
Simulation
In order to validate the present method, simulations are conducted. First, a 512 × 512 pixel image with a grating width of 51 pixels is designed, and a hemispherical shape with a size of 100 pixels is placed in the center of the image. The grating period is the side length in pixels divided by 10, which matches the design used in our actual measurements. The simulated experimental image is shown in Figure 3. The first set of experiments was performed to verify the artifacts produced by uniform motion and their elimination. On this basis, the position of the object is moved: each image is shifted by 5.5 pixels in the X direction, and six images are regenerated and phase decoded.
As shown in Figure 4, the left side is the phase information obtained when the object is not moving, and the right side is the phase information with artifacts obtained under the influence of motion error. As shown in Figure 5a, the phase of the simulated 100 × 100 pixel hemisphere is used. The phase of the projection obtained without considering errors is shown in Figure 5b. The added error is an offset of 5.5 pixels in each image starting from the second image, and the result is shown in Figure 5c. Figure 5d shows the result after optimization using the method proposed in this paper. From the figure, it can be seen that, before optimization, the phase of the object exhibits displacement artifacts in the Y direction. This has a significant impact on the reconstruction of the target, whereas the optimized phase is much closer to the true phase. To quantify this, we extracted the phase values of the target object at x = 250 pixels and compared them with the actual values. As can be seen from Figure 6, at an x-coordinate of 250 pixels the phase values of the object show a periodic variation consistent with 2π wrapping; the motion errors produce phase values with a large discrepancy, and the optimized data are clearly better than the data before optimization. The standard image presents a phase change from pixel 150 to 350, with other parts remaining stable. The optimized phase also conforms to this feature, but there are some errors at the transitions, which should be caused by the fact that the optimization cannot achieve sub-pixel-level matching (in the simulation experiments verifying the effects of irregular motion, the motion parameters of the six images all differ and include a 0.5-pixel sub-pixel component).

The second set of experiments had exactly the same parameters as the first set, with the difference that the pixel displacements were not uniform, moving (7.8, 10.2, −2.5, −6, 4.3) pixels from frame 2 to frame 6, respectively.
From the above figure, it can be seen that the effect on the per-column phase is greater than the effect on the row phase under non-uniform motion. The comparison of Figures 6 and 7 shows that the column phase error is more obvious under non-uniform motion. The optimization method in this paper gives good results for both uniformly and non-uniformly moving objects.
Decoding and Reconstruction
The details of the experiment are shown in Figure 8. The first step is to project a combination of fringes and speckle onto the object, generate six phase-shifted images of the measured object, and then compensate the phase for motion by means of feature optimization. After the wrapped phase is obtained, the speckle image embedded in the fringe pattern is used to solve the phase-ambiguity problem. The six-step phase-shift grating and the binary digital speckle image are combined into a composite grating by directly adding the gray value of the speckle image to the average gray value of the phase-shift grating:

I′_n(x, y) = I_n(x, y) + S(x, y), n = 1, 2, ..., 6,

where I_n(x, y) is the six-step phase-shift fringe pattern and S(x, y) is the digital speckle image. Embedding the digital speckle image into the phase-shift fringe pattern means that phase correlation and digital image correlation can be combined.
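A short sketch of composing the six fringe patterns with a binary speckle image per the equation above; the gray-level amplitudes and speckle contrast are arbitrary choices:

    import numpy as np

    rng = np.random.default_rng(0)
    h, w = 512, 512
    x = np.arange(w)
    speckle = (rng.random((h, w)) < 0.5) * 10.0  # binary speckle S(x, y)

    # Composite patterns: I'_n(x, y) = I_n(x, y) + S(x, y)
    fringes = [128 + 100 * np.cos(2 * np.pi * x / 51 - 2 * np.pi * n / 6)
               for n in range(6)]
    composite = [f + speckle for f in fringes]  # broadcasting expands rows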
Experiments
To evaluate the effectiveness of the newly proposed method for compensating motion-induced errors in six-step phase-shifting profilometry, we developed an experimental measurement system. This system consisted of two cameras with an imaging resolution of 1280 × 1024 pixels and two 12 mm imaging lenses. The experiment used a digital projector (LightCrafter 4500, Texas Instruments Incorporated, Dallas, TX, USA) with a resolution of 912 × 1140 pixels. To ensure synchronization between the cameras and the projector, the cameras were triggered by a signal from the projector. In this experiment, the projection and capture rates were set to 320 frames per second. For the measurements, we focused on a 10 mm copper pipe, which was set vibrating by tapping it with a hammer. This simulates the free vibration of a copper tube with one end clamped and fixed to a vibration table.
The setup for the experiment is shown in Figure 9. The left and right cameras were first calibrated to obtain the internal and external parameters of both cameras. The composite fringes were then projected onto the copper tube by the projector, and vibration was induced by striking the tube with a small hammer. Finally, the cameras were triggered by the projector to collect images of the copper tube as a time series. The experimental images obtained from the left and right cameras were then used to create a 3D point cloud. In order to verify the effectiveness of this method, we selected the six-step phase-shift method for testing under large changes in motion amplitude. Six consecutive images were selected from the acquired data for phase reconstruction, with the following results.
Figure 10a-f show the results captured by the camera, where the color corresponds to the grayscale value at each location. Figure 10g-l show the corresponding images optimized by the feature algorithm of this paper. From the comparison in the figure, it can be seen that the closer a region is to the left side of the image, the greater its movement, and the more obvious the corresponding optimization effect. The wrapped phases are obtained from these six images separately, and the results and comparisons are as follows.
The comparison of phase results shows that the optimized algorithm in this paper clearly eliminates the influence of motion on the phase. To highlight the advantages of this scheme, this paper compares and discusses the reconstruction results of other schemes. As can be seen from Figure 11, the phase obtained by the six-step phase-shift method is affected by the motion, resulting in a shift in the Y direction of the image and hence large errors. After the motion is compensated by the method in this paper, however, there is essentially no significant artifact in the phase range, showing that the method works well. We select the main axis of the pipe, extract the phase at this location, and compare it with Wang's method [35], obtaining the following figure.

[Figure 11. Comparison of measured and optimized results: (a) phase before optimization; (b) phase optimized by this method.]
As shown in Figure 12, the phase information at pixel coordinate 398 in the image is selected. The Y-direction represents the phase value of the image, in the range [0, 2π]. It can be seen from the figure that the phase cannot be reconstructed without any correction, as the phase information is not distinct. After optimization using Wang's method, some extreme phase changes occur and some erroneous phase information is obtained. Only when the method in this paper is used for optimization can the phase information be clearly distinguished and the phase reconstruction be carried out. By matching the phases reconstructed by the left and right cameras, the 3D point cloud images of these six images at a single moment are obtained. From the coordinates of the three reconstructed point clouds, as shown in Figure 13, it can be seen that the 3D surface reconstructed from the original phase exhibits large-area fractures, so that a copper tube that should be continuous produces a discontinuous point cloud. With Wang's method, fractures also occur and some point cloud coordinates are distorted. With the method in this paper, the 3D point cloud reconstructed from the optimized phase is continuous in the same coordinate system, and the result can be regarded as reliable.
Free-Fall Experiment
In order to verify that this experimental method has a stable optimization effect, a free-falling ball was selected for motion-artifact elimination experiments. As shown in Figure 14, the ball falls freely, hits the ground, rebounds, and then continues to fall. Due to the influence of gravity, the falling and bouncing of the ball constitutes non-uniform free motion. Because the ball falls quickly, the sampling frequency of the camera was set to 256 Hz so that the trajectory of the ball landing and bouncing could be captured as completely as possible. The results of reconstructing the phase and 3D point cloud from a selected sequence of six frames after the bounce are as follows.
From analyzing the coordinates of the three reconstructed point clouds depicted in Figure 15, it is evident that the 3D point cloud derived from the original phase does not exhibit a spherical shape. This distortion is a result of the ball's movement during its ascent. A similar distortion can be observed when applying Wang's method, as shown in Figure 15b. However, the 3D point cloud generated by the method proposed in this paper clearly exhibits a spherical surface, strong evidence that the results obtained by this method are trustworthy. By fitting the point clouds to a sphere, the results in Table 1 are obtained. The table compares the standard deviation and mean distance of the three sets of point clouds fitted to the sphere. For the point cloud generated from the original data, the standard deviation reaches 6.675 mm; after optimization using the features proposed in this paper, the standard deviation is reduced to 1.176 mm, which is much better than the original result. Better still, the optimized 3D point cloud fits the sphere with an average distance of 0.245 mm.
Conclusions
The PSP method is used to measure dynamic scenes in which the motion of the object between frames is not negligible, which leads to phase errors and thus motion artifacts. This paper proposes a method to eliminate motion artifacts using object feature information, taking one frame of the object as the basis upon which to phase-optimize the other frames. This method has the following advantages over other methods:
• The method addresses motion artifacts directly and can optimize the measurement results even when the motion amplitude is large.
• Compared with methods targeting periodic moving objects, this paper targets flexible copper tubes moving at non-uniform velocities, and the proposed solution offers better optimization results.
• Whereas most other work only handles object motion at around 60 Hz, this paper achieves 3D point cloud reconstruction at 320 Hz, a frequency better suited to engineering applications.
Despite these advantages, the proposed method has some limitations. First, this paper targets objects with simple textures; for objects with complex textures, the feature phases do not match well and are prone to mismatching. Second, the optimization in this paper does not achieve sub-pixel accuracy, so local errors remain in the optimized phases, as shown in Figure 13. In the future, we will therefore incorporate more effective methods, such as machine learning, in the hope of further improving its application in dynamic measurements.
"Engineering",
"Computer Science"
] |
Global patterns of climate change impacts on desert bird communities
The world’s warm deserts are predicted to experience disproportionately large temperature increases due to climate change, yet the impacts on global desert biodiversity remain poorly understood. Because species in warm deserts live close to their physiological limits, additional warming may induce local extinctions. Here, we combine climate change projections with biophysical models and species distributions to predict physiological impacts of climate change on desert birds globally. Our results show heterogeneous impacts between and within warm deserts. Moreover, spatial patterns of physiological impacts do not simply mirror air temperature changes. Climate change refugia, defined as warm desert areas with high avian diversity and low predicted physiological impacts, are predicted to persist in varying extents in different desert realms. Only a small proportion (<20%) of refugia fall within existing protected areas. Our analysis highlights the need to increase protection of refugial areas within the world’s warm deserts to protect species from climate change.
more realistic assessments of how desert species will likely respond to climate change 17 .
Here, we combine a microclimate model and a physiologically explicit biophysical model with climate change projections and biodiversity maps to address the following questions: (a) How will the world's warm deserts be affected by climate change, and do the projected impacts vary between and within major desert realms? (b) Does a physiological model of climate change impacts on desert birds produce spatially different results from models based solely on air temperature (Tair)? (c) Which areas within each of the world's warm deserts are likely to serve as refugia for desert birds in the face of climate change? (d) To what extent do these refugia fall within the boundaries of existing protected areas (PAs) ? We focus on birds because of their diurnality and limited ability to use thermally buffered microsites such as burrows [18][19][20] , which makes them particularly exposed to extreme climates relative to other taxa. Additionally, birds have among the highest mass-specific evaporative water loss rates of any terrestrial animals, which may render them more sensitive to further warming 13,21 . Indeed, researchers have already documented major declines in avian abundance in some desert regions as a result of climate change [22][23][24] .
Of the various aspects of avian physiology that are potentially sensitive to climate change, water balance is critically important to desert birds because of the trade-off between maintaining body temperatures below lethal limits by increasing evaporative water loss and avoiding dehydration 13,25 . Thus, we use two physiological metrics of climate change impact: total evaporative water loss (TEWL; i.e., water loss in an average day of the typically hottest month of the year) and acute dehydration risk (ADR; i.e., maximum water loss per gram of mass in three continuous hours in an average day of the typically hottest month of the year), which have been shown to determine species' likelihood of surviving under long-term warming 20,26 and extreme heat waves, respectively 13,19,27 . We focus on two future climate change scenarios in which global mean temperatures are, on average, 2°C (main text) and 4°C warmer (Supplementary Information) than pre-industrial values. The climate projections account for geographic patterns in warming and for changes in radiation, humidity, and wind speed. Conclusions hold for both scenarios. We found that heterogeneity in predicted climate change impacts exist both between and within major warm deserts. The physiological model of climate change impacts produced spatially different results from models based solely on air temperature. Most identified climate change refugia, which were the areas with high desert bird diversity and low climate change impact, lie close to coastlines. Only a very small proportion of identified refugia fall within the borders of existing PAs.
Heterogeneity in predicted climate change impacts on desert birds
Our analysis, based on three model species representing desert birds that fall within three size categories, revealed considerable heterogeneity in predicted climate change impacts on birds between and within global warm deserts (Fig. 1). According to climate models and our projections, the largest change in mean values of air temperature (Tair) and TEWL will occur in the Saharo-Arabian desert realm, while that of ADR is similar among desert realms (desert realm locations shown in Fig. 1a). We estimated the "proportion of overlap" (overlapping area of kernel density estimations) between current and future values of Tair, TEWL and ADR, as it considers not only the change in mean but also the overall variance between years. The smallest proportion of overlap for Tair, TEWL, and ADR occurs in the Saharo-Arabian desert realm (Supplementary Fig. 4; p < 0.001). We used the proportion of overlap between current and future values of the two physiological metrics (TEWL and ADR) to represent climate change impact (less overlap means larger impact) hereafter. The probability distributions of climate change impact vary between desert realms ( Supplementary Fig. 4). Sensitivity analyses (see "Methods") show our results are robust to potential interspecific variation in morphological and physiological parameters.
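The overlap between current and future distributions can be computed as the overlapping area of two kernel density estimates. A minimal sketch, assuming Gaussian KDEs and the pointwise-minimum definition of overlap (the paper's exact estimator may differ):

    import numpy as np
    from scipy.stats import gaussian_kde

    def overlap_coefficient(current, future, grid_size=512):
        """Overlapping area of two KDEs: 1 = identical distributions,
        0 = fully disjoint. Used here for current vs. future TEWL/ADR."""
        lo = min(current.min(), future.min())
        hi = max(current.max(), future.max())
        grid = np.linspace(lo, hi, grid_size)
        dens_cur = gaussian_kde(current)(grid)
        dens_fut = gaussian_kde(future)(grid)
        # Integrate the pointwise minimum of the two densities
        return np.trapz(np.minimum(dens_cur, dens_fut), grid)

    rng = np.random.default_rng(1)
    cur = rng.normal(10.0, 1.0, 1000)  # e.g. current TEWL (g/day)
    fut = rng.normal(11.5, 1.2, 1000)  # e.g. future TEWL (g/day)
    print(round(overlap_coefficient(cur, fut), 2))  # roughly 0.5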
Climate change refugia for desert birds and PA coverage
We defined desert birds as species having ≥90% of their habitat within warm deserts. By overlaying the distribution of desert bird diversity (measured as rarity-weighted species richness 28 ) with our projections of climate change impacts on desert birds, we generated bivariate heat maps for the world's warm deserts that place each pixel along axes of the two variables (Fig. 2a, c). We then classified each pixel as falling into one of four categories, based on whether it falls in the top ("high") or bottom ("low") 25% of values for desert bird diversity and climate change impact (Fig. 2b, d).
The area that falls within each of the above categories varies from realm to realm based on the degree to which avian diversity and climate change impacts are spatially aligned. The choice of TEWL or ADR for estimating climate change impact changes the area of warm desert that falls into each category. For example, the percentage of desert area in the Neotropical desert realm that falls into the "High-Diversity/ Low-Impact" category when using ADR to estimate climate change impact is about two-thirds of the value obtained when considering TEWL instead. This mismatch is due to the difference between the probabilistic distributions of climate change impact in each desert realm measured using TEWL and ADR.
We defined refugia as areas in each desert realm with relatively high diversity and low climate change impact measured using either TEWL or ADR. We assumed that these three variables are equally important when considering biodiversity conservation, and therefore used the same threshold for all three to identify refugial areas. For example, a threshold of 75% means that we selected pixels in a given desert realm that had bird diversity values (larger values indicate higher diversity) and proportion of overlap in TEWL or ADR (larger values indicate lower impacts) larger than the 75th percentile of pixels for that realm. As the distributions of the three variables vary spatially within desert realms, using a fixed threshold for different desert realms could result in very different percentages of desert area being identified as refugia. Thus, we ran separate analyses in which we specified that a fixed percentage area of each desert realm must qualify as refugia, calculated by adjusting a "floating" threshold until that percentage area target was met. We did so under the assumption that every desert realm has unique value for biodiversity and therefore is worth protecting, notwithstanding differing impacts of climate change among realms. Results comparing the fixed and floating thresholds are shown in Fig. 3.
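A floating-threshold search of the kind described here can be sketched as follows; the step size, starting percentile, and strict inequalities are assumptions rather than the paper's exact procedure:

    import numpy as np

    def floating_threshold(diversity, overlap, target_frac=0.05, step=0.005):
        """Lower a common percentile threshold until at least target_frac
        of a realm's pixels qualify as refugia (high diversity AND high
        overlap, i.e. low climate change impact)."""
        q = 0.75  # start from the fixed 75% threshold
        while q > 0.0:
            refugia = ((diversity > np.quantile(diversity, q)) &
                       (overlap > np.quantile(overlap, q)))
            if refugia.mean() >= target_frac:
                return q, refugia
            q -= step
        return 0.0, np.ones(diversity.shape, dtype=bool)

    rng = np.random.default_rng(7)
    div = rng.random(10_000)                    # rarity-weighted richness
    ovl = 0.5 * div + 0.5 * rng.random(10_000)  # partly correlated overlap
    q, mask = floating_threshold(div, ovl)
    print(round(q, 3), round(mask.mean(), 3))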
Using a fixed threshold of 75% for avian diversity and climate change impacts for all six desert realms (Fig. 3a), we found notable differences between desert realms with respect to the percentage area identified as refugia. The Australian desert realm has the largest percentage (6.6%) of its area identified as refugia, while the Neotropical desert realm has just 1.7% (see the vertical dotted line in Fig. 3c). Next, we adjusted these thresholds such that at least 5% of each desert realm's area was identified as refugia (see Fig. 3b). The Australian, Afrotropical, and Saharo-Arabian desert realms maintain stricter thresholds (>75%) than the other three desert realms to meet the goal of 5% refugia (see the horizontal dotted line in Fig. 3c). To understand the extent to which these refugia are currently protected, we overlaid the boundaries of existing PAs with the identified refugia. The PA coverage for refugia in each desert realm is generally low (<20%), although percentages vary depending upon the threshold values used to define refugia (Fig. 3d).
Comparing refugia identified using physiological metrics and those identified using Tair

Although our physiological modeling of climate change impacts (TEWL and ADR) on desert birds was positively correlated with impacts estimated using the overlap between current and future values of Tair, and negatively correlated with mean current Tair, the relative magnitude of impacts estimated using these metrics was not spatially aligned (Weighted Jaccard Index ≤75.5% 29 ; Supplementary Tables 4 and 5). The relative magnitudes of predicted climate change impacts between desert realms are contingent on whether or not physiological responses are considered.

[Fig. 1 | Climate change impacts for desert birds when global mean temperatures are 2°C warmer than pre-industrial values. Impacts are shown as estimated changes in mean values (panels b, d, and f; "Δ" denotes value changes; warmer colors indicate higher impact) and as the proportion of overlap between current and future values (panels c, e, and g; cooler colors indicate higher impact) of air temperature (Tair; °C), total evaporative water loss (TEWL; g/day), and acute dehydration risk (ADR; percent of body mass) during the hottest month (July for the Northern Hemisphere, January for the Southern Hemisphere). Panel a shows the locations of the six major realms containing warm deserts ("desert realms", represented by colors) and desert birds (bird species having ≥90% of their habitat within warm deserts). We assumed that a bird actively shifts between open and shaded habitat to minimize its rate of water loss. See Supplementary Fig. 6.]
Physiological modeling requires more than just climate data. A critical question, then, is whether refugia identified using the proportion of overlap between current and future values of Tair differ spatially from refugia identified using physiological metrics. If the use of Tair alone would result in both low under-protection (i.e., does not omit many of the refugia identified using physiological metrics, equivalent to false negatives) and low overprotection (i.e., does not identify as refugia extensive areas that are not identified as such using physiological metrics, equivalent to false positives), then Tair could provide a reasonably good proxy for the more appropriate but costlier physiological metrics. Unfortunately, when comparing refugia identified using Tair and those identified using physiology, we found considerable under-protection and over-protection in all desert realms ( Supplementary Fig. 5). Under-protection and overprotection can both involve up to 60% of predicted refugia area in some desert realms.
Discussion
Gaining a nuanced understanding of how climate change will affect species in the world's warm deserts requires an integrated consideration of the spatial patterns of both biodiversity and physiological impacts. To better understand the distribution of physiological impacts to species due to climate change, we combined microclimate data, climate change projections, and physiologically explicit biophysical models to predict climate change impacts to birds across the world's warm deserts for when global mean temperatures are 2°C warmer than pre-industrial values. We found considerable heterogeneity of climate change impacts both between and within major warm deserts. Climate change refugia, areas with high desert bird diversity and low climate change impact, are predicted to differ markedly in total area between desert realms. Alarmingly, only a very small proportion of these refugia fall within the borders of existing PAs. Species within climate change refugia that occur outside PAs are exposed to potential harm from land-use change, overexploitation, and other direct human impacts 30,31 . Compared with projections based solely on air temperature, physiological models produced markedly different spatial patterns of climate change impacts on desert birds. Using air temperature as a proxy for physiological metrics results in under-protection of future refugia and overprotection of areas that are not expected to function as refugia. These conclusions hold even for high-risk scenarios wherein global mean temperatures are 4°C warmer than pre-industrial levels or in which birds have no access to shade, extreme cases that indicate our findings are relatively robust.
Interestingly, no matter which method is used, most identified refugia lie close to coastlines, which may be related to the oceanic buffering of terrestrial warming 32 . However, sea-level rise 33 and increasing human disturbance 34 may threaten these coastal refugia.

[Fig. 2: Climate change impacts are measured as the proportion of overlap between current and future values of TEWL (panels a and b) or ADR (panels c and d) per pixel (higher overlap implies lower impact) when global mean temperatures are 2°C warmer than pre-industrial values. We defined desert birds as bird species with ≥90% of their global habitat area falling within warm deserts. Diversity is calculated as rarity-weighted species richness, where species are weighted by the size of their global Area of Habitat (AOH). Panels a and c are bivariate heatmaps that place each pixel along axes of TEWL/ADR overlap and diversity value (from 0 to 100 percentiles; mapping is done for each desert realm). Correspondingly, panels b and d show the percentages of area in each desert realm falling within the four categories defined by TEWL/ADR overlap and diversity value ("High" and "Low" are defined by whether the pixel value falls in the top or bottom 25% of all pixel values within that desert realm, respectively). For each pixel, we averaged the results for birds in three body mass categories (see "Methods"), weighted by the number of bird species in each category, to calculate TEWL and ADR values. We assumed that a bird actively shifts between open and shaded habitats to minimize its rate of water loss. See Supplementary Fig. 9 for results assuming a bird always stays in the open. See Supplementary Figs. 10, 11 for results for a scenario in which global mean temperatures are 4°C warmer than pre-industrial values.]
to 30% by 2030 35 , we propose that refugia we have identified be considered in future conservation efforts as a way to ensure that desert species persist in the face of climate change. Of the six desert realms, the Neotropical desert realm has the lowest proportion of its predicted refugia currently within the boundaries of PAs. We note that our focus here is protecting sites that are likely to retain the greatest richness of desert birds in the future. Models for other taxonomic groups should be developed to determine whether their refugia overlap significantly with the avian refugia we have identified. We emphasize that our results in no way imply that desert areas falling outside the boundaries of refugia are unworthy of conservation attention. For example, an additional reasonable objective would be to reduce harmful land-use changes in high diversity areas that are predicted to suffer greatly from climate change to minimize additional anthropogenic stressors to the species living there. Nor do we wish to imply that our results represent the "final word" as to which places will function as climate refugia for birds. Future refinements to the models we used can enhance their predictive value. Finally, our study highlights the value of using physiologically explicit biophysical models parameterized with microclimate data to predict how organisms will actually experience climate change, and it provides a physiologically relevant framework for prioritizing desert areas for future protection.
Global warm deserts and terrestrial zoogeographic realms
We created a map of global warm deserts (with a resolution of 50 km) by choosing desert-related habitat types from a global map of terrestrial habitat types 36, based on the Habitat Classification Scheme of IUCN (version 3.1). The habitat types we chose included: hot desert, temperate desert, subtropical/tropical dry shrubland, subtropical/tropical dry lowland grassland, and dry savanna. We further refined this map by restricting it to areas with less than 500 mm of annual precipitation (using averaged data for 1970-2000; WorldClim V2.1 37), a widely used threshold for identifying arid or semi-arid regions 38. We divided global warm deserts into six major realms (desert realms) using an updated map of Wallace's zoogeographic regions of the world 39.
Climate data and microclimate model
We used historical and projected future monthly climate data from TerraClimate 40 (50 km spatial resolution) for our simulations; the dataset provides maximum temperature, minimum temperature, precipitation, soil moisture, vapor pressure, downward surface shortwave radiation, and wind speed. Two future climate scenarios were considered: (1) global mean temperatures 2°C warmer than pre-industrial values, and (2) global mean temperatures 4°C above pre-industrial values. The climate change scenarios were derived from 23 CMIP5 global climate models and downscaled using a pattern-scaling approach described in Qin et al. 2020 41. The 'micro_terra' function of NicheMapR then disaggregated the monthly climate data to hourly values, following methods described in ref. 42. Using this function, temperature data are elevation- and terrain-corrected, spline-interpolated to daily values, and then downscaled to hourly values by imposing a latitude- and longitude-dependent diurnal cycle on the data. Hourly relative humidity is then determined from the vapor pressure and the modeled diurnal variation in air temperature. Radiation is interpolated to hourly values by computing the clear-sky fraction in each month and then computing hourly clear-sky radiation. See Supplementary Table 1 for parameter values of the microclimate model. We extracted monthly climate data for global warm deserts for what is typically the hottest month (July and January for the Northern and Southern Hemispheres, respectively).
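The core idea of the hourly downscaling step can be illustrated with a toy version of the diurnal-cycle imposition. This is only a schematic sketch, not NicheMapR's actual implementation; the peak hour is a placeholder that in the real model depends on latitude, longitude and day of year:

```python
import numpy as np

def hourly_tair(tmin, tmax, t_peak=15.0):
    """Impose a simple diurnal cosine cycle on daily Tmin/Tmax (toy model)."""
    hours = np.arange(24.0)
    mean, amp = 0.5 * (tmin + tmax), 0.5 * (tmax - tmin)
    # Sinusoid peaking at t_peak hours; the trough falls 12 h later.
    return mean + amp * np.cos(2 * np.pi * (hours - t_peak) / 24.0)

print(hourly_tair(18.0, 42.0).round(1))  # hourly temperatures for one day
```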
Model species and physiological model
Desert birds exhibit diversity in their morphology, behavior, and physiology [43][44][45], which may affect their sensitivity to climate change. Except for body size and morphological traits that scale with body size, empirical data for many traits are available for only a few species, which precludes running the physiological model for every single desert species. However, previous studies have suggested that body size significantly affects TEWL 26 and ADR 13, so we created three model species weighing 13 g, 39 g, and 185 g, representing small (0-33rd percentiles), medium (33rd-66th percentiles) and large (66th-100th percentiles) desert birds (Supplementary Data 1). Specifically, we first created the medium-size model species by using size-related traits (body mass, plumage depths, feather lengths) of the desert-dwelling Cactus Wren (Campylorhynchus brunneicapillus; body mass 39 g). Other parameters were taken either from the Cactus Wren and other well-studied species or were based on our best estimates (Supplementary Data 2). We then created the small-size and large-size model species by adjusting the size-related traits (plumage depths and feather lengths, which were scaled in proportion to body mass to the power of 1/3). Model results generated using the three model species were averaged for each desert grid cell, weighted by the number of species falling within each size category.
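The trait scaling can be sketched as follows; only the mass^(1/3) proportionality is taken from the text, while the reference trait values below are hypothetical placeholders:

```python
# Scale size-related traits from the 39 g reference species (Cactus Wren)
# to other body masses in proportion to mass^(1/3).
REF_MASS_G = 39.0
REF_TRAITS = {"plumage_depth_mm": 6.0, "feather_length_mm": 25.0}  # hypothetical

def scale_traits(mass_g, ref_mass=REF_MASS_G, ref_traits=REF_TRAITS):
    factor = (mass_g / ref_mass) ** (1.0 / 3.0)
    return {k: round(v * factor, 2) for k, v in ref_traits.items()}

for m in (13.0, 39.0, 185.0):   # small, medium, large model species
    print(m, scale_traits(m))
```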
We calculated the hourly water loss (cutaneous water loss + respiratory water loss) of bird species using a customized version of the "endoR_devel" function in the R package "NicheMapR" 46. This function implements a biophysical model for calculating heat and mass exchange between an individual endotherm and a given environment, and for simulating the postural and physiological thermoregulation required to maintain minimal metabolic rates. Our model found a solution for maintaining minimal metabolic rates at all desert sites. Based on the physiology of desert birds 44,45, we revised the sequence of thermoregulatory events in the face of heat stress for all our model species, by modifying the source code of "endoR_devel", as follows: (1) reduce ptiloerection; (2) stretch the body; (3) increase flesh conductivity; (4) simultaneously raise core temperature (up to 44°C) and respiratory rate (up to 7.5 times the resting level). We also converted the function to Fortran and ran it there for faster execution. We assumed that birds sit 1.5 m above the ground and shift between open (0% shade) and shady (90% shade) areas to minimize their hourly water loss. As deep shade may not be widely available in deserts and shade-seeking behavior may involve trade-offs with other behaviors such as foraging 47, we also considered a scenario in which the bird always stays in an open area.
We calculated the TEWL as the total water loss on an average day of the typically hottest month of the year 20,26, and the ADR as the maximum water loss per gram of body mass over three consecutive hours on an average day of the typically hottest month of the year. The ADR reflects the risk of a bird dying from acute dehydration, as previous studies have suggested that birds are unlikely to survive when accumulated water loss reaches 15% of body mass within three hours 13,19,27. We projected maps of air temperature and physiological results using the Eckert IV equal-area projection to ensure that each pixel represents the same area.
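Given a vector of 24 hourly water-loss values, the two metrics can be computed as in the following sketch; the hourly values and body mass are made up for illustration, and the 15% / 3 h lethal threshold is the one cited in the text:

```python
import numpy as np

# Hypothetical hourly water-loss output (g/h) for an average day of the
# hottest month, for a 39 g bird.
hourly_loss = np.abs(np.random.default_rng(1).normal(0.05, 0.03, 24))
body_mass_g = 39.0

tewl = hourly_loss.sum()                                    # TEWL, g/day
three_hour = np.convolve(hourly_loss, np.ones(3), "valid")  # rolling 3-h sums
adr = 100.0 * three_hour.max() / body_mass_g                # ADR, % body mass

print(f"TEWL = {tewl:.2f} g/day, ADR = {adr:.2f}% of body mass")
# Reference used in the text: ~15% of body mass lost within 3 h is lethal.
```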
Model validations
We validated model predictions of body temperature and water loss rate at a series of air temperatures against empirical data for well-studied species from four orders and nine families (only size-related traits were adjusted for each species, based on body mass). The results indicate good performance of our physiological models (see Supplementary Figs. 1, 2). We collected from the literature empirical data on core body temperature and evaporative water loss rate measured at a series of air temperatures using flow-through respirometry systems 44,45,48,49. In total, data for nine bird species from four orders and nine families were collected. We predicted the core body temperature and evaporative water loss at these air temperatures using the physiological model (customized "endoR_devel" from NicheMapR) and the parameters that we used for the main analysis. To simulate the experimental conditions, we used a wind speed of 0.1 m/s, a relative humidity of 5%, and zero radiation. We adjusted only five size-related parameters to account for size variation among species: body mass, plumage depth (dorsal and ventral) and feather length (dorsal and ventral), which is the same method we used in the main analysis. The body mass (AMASS) of each species was taken from the literature, and we scaled plumage depths and feather lengths (known for Campylorhynchus brunneicapillus) in proportion to AMASS^(1/3) 50,51. The results suggested that our model predicts the core body temperature and evaporative water loss of bird species well (Supplementary Figs. 1, 2). Note that our model slightly overestimated the body temperature and water loss rate of some non-passerine species, because we used a passerine metabolic rate equation for all species. We decided to use one equation for basal metabolic rate (from McNab et al. 2009, for passerines) for all desert birds for the following reasons: (1) additional sensitivity analysis suggested that using the QBASAL equation for passerines or non-passerines does not largely change the identified safe sites (see Supplementary Tables 2, 3); (2) over 77% (118 out of 152) of desert bird species (bird species having ≥90% of their habitat within warm deserts) are passerines; (3) some non-passerine species show high metabolic rates, while some passerine species show low metabolic rates 52.

[Displaced caption of Fig. 3] Predicted locations of climate change refugia for desert birds in global warm deserts and their current protection status. The figure considers a climate change scenario in which global mean temperatures are 2°C warmer than pre-industrial values. Panel a shows the refugia identified using a fixed threshold of the 75th percentile (i.e., top 25%) for ADR overlap, TEWL overlap, and avian diversity, while panel b shows the refugia identified using a floating threshold such that at least 5% of desert area in each realm is identified as refugia (see text for details). Panel c shows the relationship between the threshold used and the percentage of desert area in each realm identified as refugia (see "Source_data_Figure_3c" for source data). Panel d shows the relationship between the threshold used and PA coverage for refugia identified in each realm (see "Source_data_Figure_3d" for source data). We assumed that a bird actively shifts between open and shaded habitats to minimize its rate of water loss. See Supplementary Fig. 12 for results assuming a bird always stays in the open. See Supplementary Figs. 13, 14 for results for a scenario in which global mean temperatures are 4°C warmer than pre-industrial values.
We […] 26, and we scaled plumage depths and feather lengths in proportion to AMASS^(1/3) 50,51. The results suggested that our model predictions of changes in TEWL (p = 0.028) and ADR (p = 0.031) were negatively correlated with the change in occupancy (linear model; Supplementary Fig. 3).
Sensitivity analysis
We conducted sensitivity analyses to test whether the conclusions generated from our models were robust to interspecific variation in species traits that might affect water loss rates. Because the aim of this study is to identify the areas within deserts that are safer for birds under climate change relative to other desert areas, results generated using the model species can be relied upon to identify climate change refugia provided that variation in species traits does not affect the relative rankings of these desert areas. Therefore, we performed a sensitivity analysis by rerunning the model using much lower or higher (but nonetheless realistic) parameter values for desert birds and then identifying "safe sites" (defined in each case as the top 25% of sites showing the largest overlap between current and future values of TEWL and ADR). We then noted the sites that consistently appear in this top quarter. The results of this analysis indicate that our conclusions are robust to potential interspecific variation in traits (Supplementary Tables 2, 3).
We ran sensitivity analyses by calculating the overlap between current (1986-2015) and future (pseudo years 1986-2015, commensurate with a climate future in which global mean temperatures are 2°C warmer than pre-industrial values) values of the total evaporative water loss (TEWL; g/day) and acute dehydration risk (ADR; % mass) in the hottest month at desert sites in Australia (1627 sites), using different parameter values. We tested the sensitivity of the identified 25% of sites with the largest TEWL overlap or the largest ADR overlap (safe sites) between the two time periods to potential interspecific variation in model parameters (Supplementary Tables 4 and 5). For each model parameter, we reran the models using extreme parameter values that were lower and higher (but realistic for desert birds) than the values we used for the model species, while keeping the other parameters unchanged. For the assumed sitting height, we conducted the sensitivity analysis using a lower value, considering that some desert birds use only terrestrial habitats (e.g., Otidiformes). For basal heat generation, we used a function of body mass for non-passerine species in the sensitivity analysis. For the assumed onset of panting, we conducted the sensitivity analysis for a scenario in which the bird starts panting only after reaching its maximum body temperature. We also conducted sensitivity analyses for combinations of parameters set at extreme values that would minimize or maximize the water loss rate. The results suggested that at least 69.3% (over 90% in most cases) of the safe sites identified using the model species are consistently predicted using lower or higher parameter values, or combinations of parameter values that maximize or minimize the water loss rate.
Rarity-weighted richness and PAs
The maps of the spatial distribution of bird species were downloaded from BirdLife 54 and refined to Area of Habitat (AOH) 55 , based on species-specific habitat and elevation requirements listed by BirdLife. We defined desert bird species as bird species with more than 90% of their AOH falling within warm deserts (152 species). We then calculated the rarity-weighted species richness (RWR) of desert bird species for each grid cell by summing the inverse of each species' AOH for all species occurring in that cell (following Kier et al. 2009 28 ). RWR (sometimes called endemism richness) better captures the relative importance of an area for global biodiversity than does unweighted species richness (which can be dominated by common, widespread species), by assigning higher values to species with smaller ranges, therefore incorporating aspects of both richness and endemism 28 .
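A minimal sketch of the RWR calculation described above follows; the AOH values and cell memberships are invented for illustration:

```python
# Rarity-weighted richness: each species present in a grid cell contributes
# the inverse of its global Area of Habitat (AOH), so range-restricted
# species count for more than widespread ones.
aoh_km2 = {"sp_A": 1.0e4, "sp_B": 2.5e5, "sp_C": 8.0e3}   # hypothetical AOHs
cell_species = {
    "cell_1": ["sp_A", "sp_B"],
    "cell_2": ["sp_B", "sp_C"],
}

rwr = {cell: sum(1.0 / aoh_km2[s] for s in spp)
       for cell, spp in cell_species.items()}
print(rwr)  # cell_2 scores higher: it holds the most range-restricted species
```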
We calculated the overlapping area of kernel density estimates (using the "overlap" function in the R package "overlapping" 56) for current and future values of TEWL and ADR for the three modeled birds with small, medium, and large body masses. To compare rarity-weighted richness with these physiological responses under climate change scenarios, we calculated the average predicted TEWL overlap and ADR overlap for the avian community occurring at each location, weighted by the number of species in each of the three body mass categories.
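The overlap statistic can be reproduced schematically in Python, mirroring the idea of the R "overlap" function: fit kernel density estimates to the current and future distributions and integrate the pointwise minimum of the two densities. The sample distributions below are synthetic:

```python
import numpy as np
from scipy.stats import gaussian_kde
from scipy.integrate import trapezoid

rng = np.random.default_rng(2)
current = rng.normal(5.0, 1.0, 500)   # e.g., current TEWL values (g/day)
future = rng.normal(6.2, 1.1, 500)    # future TEWL values

grid = np.linspace(min(current.min(), future.min()),
                   max(current.max(), future.max()), 1000)
d_now, d_fut = gaussian_kde(current)(grid), gaussian_kde(future)(grid)
overlap = trapezoid(np.minimum(d_now, d_fut), grid)  # value in [0, 1]
print(f"overlap = {overlap:.2f}")   # higher overlap implies lower impact
```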
To assess PA coverage, we overlaid refugia with a global map of PAs provided by the World Database on Protected Areas (WDPA) 57, refined following Butchart et al. 2015 58. We considered only strictly protected areas in the categories "Ia", "Ib", "II", "III" and "IV", as defined by the IUCN Protected Area Categories System 59.
Statistical analysis
We used Kruskal-Wallis tests to compare the changes in, and overlap between, the distributions of current and future values of Tair, TEWL and ADR between desert realms, and used Epsilon-Squared (R package "rcompanion" 60) as the corresponding effect size statistic. We used pairwise Pearson correlation tests to estimate correlations between projected climate change impacts based on TEWL, ADR, and Tair, respectively. To compare the spatial similarity between maps of projected climate change impacts, we used a weighted version of the Jaccard similarity index 29. The Jaccard index is a measure of the proportion of shared elements between two maps, and the weighted version allows for the comparison of two maps with values along a continuous gradient 61. We compared maps of the predicted overlap between current and future values of TEWL, ADR, and Tair in pairs. We also used the above methods to compare the climate-change impacts estimated using TEWL and ADR with mean current air temperature.
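A sketch of the weighted Jaccard index for two maps of non-negative continuous values (the toy arrays below stand in for real per-pixel overlap maps):

```python
import numpy as np

def weighted_jaccard(a, b):
    """J_w = sum(min(a_i, b_i)) / sum(max(a_i, b_i)); values must be >= 0."""
    a, b = np.asarray(a, float).ravel(), np.asarray(b, float).ravel()
    return np.minimum(a, b).sum() / np.maximum(a, b).sum()

map_tewl = np.array([[0.9, 0.4], [0.7, 0.1]])   # toy TEWL-overlap map
map_tair = np.array([[0.8, 0.5], [0.2, 0.1]])   # toy Tair-overlap map
print(weighted_jaccard(map_tewl, map_tair))
```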
Software
Maps of AOH, warm deserts and rarity-weighted richness were created using Google Earth Engine 62 . All other analyses were performed in R 4.0.3 63 .
Inclusion & ethics
Our research has included researchers from countries around the world that contain warm deserts. Roles and responsibilities were agreed amongst collaborators ahead of the research. We have taken local and regional research relevant to our study into account in citations.
Reporting summary
Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.
Data availability
Habitat Classification Scheme of IUCN (version 3.1) […]. Source data are provided for Fig. 3c (Supplementary Data 3) and Fig. 3d (Supplementary Data 4). Climate change impact data, bird diversity data, and protected area coverage data generated in this study are provided as Supplementary Data 5.
"Environmental Science",
"Biology"
] |
Fixed-point oblivious quantum amplitude-amplification algorithm
The quantum amplitude amplification algorithms based on Grover's rotation operator need to perform phase flips for both the initial state and the target state. When the initial state is oblivious, the phase flips become intractable, and we need to adopt the oblivious amplitude amplification algorithm instead. Without knowing exactly how many target items there are, oblivious amplitude amplification also suffers from the "soufflé problem", in which iterating too little "undercooks" the state and iterating too much "overcooks" it, both resulting in a mostly non-target final state. In this work, we present a fixed-point oblivious quantum amplitude-amplification (FOQA) algorithm by introducing damping, based on methods proposed by A. Mizel. Moreover, we construct the quantum circuit to implement our algorithm under the framework of duality quantum computing. Our algorithm avoids the "soufflé problem" while keeping the quadratic speedup of quantum search, and can serve as a subroutine to improve the performance of quantum algorithms containing an oblivious amplitude amplification procedure.
Quantum amplitude amplification algorithms 1-3 have a vast variety of applications in the field of quantum computing, such as quantum state preparation [4][5][6], quantum probability algorithms 7,8, quantum counting 9-11 and so on. As a generalization of Grover's quantum search algorithm [12][13][14][15], quantum amplitude amplification can be described as rotations in the 2-dimensional Hilbert plane spanned by the target state |t⟩ and the original source state |s⟩. For an unsorted database containing N items with M target items, the quantum amplitude amplification algorithm can amplify the amplitude of the target states to O(1) through O(√(N/M)) Grover iterations, while classical methods need approximately O(N/M) queries. The necessary condition for the quantum amplitude amplification algorithms based on Grover's rotation operator to work is the ability to perform phase flips on both the target state |t⟩ and the original state |s⟩. In some cases, the original state is oblivious; for example, the unsorted database may contain both an index register and a content register (the state of which is usually oblivious). In such cases we should use the oblivious amplitude amplification algorithm 16.
The oblivious amplitude amplification method was first introduced by Berry et al. in 2014 and used to deal with the sparse Hamiltonian simulation problem 16,17. The method can be used for the unsorted database searching problem in which the database contains both an index register and a content register. As shown in Fig. 1, the two registers are in entangled states during the process of searching. The oblivious amplitude amplification algorithm works by amplifying the amplitude of the index state together with the target state. Consider a unitary operator U which can implement another unitary V (generally unknown) with some probability. The oblivious amplitude amplification method can implement V with high probability through a version of amplitude amplification similar to the original Grover search algorithm. Specifically, suppose U and V are unitary operators on n + 1 and n qubits respectively, and let θ ∈ (0, π/2). For an arbitrary n-qubit state |ϕ⟩, we have

U(|0⟩|ϕ⟩) = sin(θ)|0⟩V|ϕ⟩ + cos(θ)|1⟩|φ⟩,

where |φ⟩ is an n-qubit state that depends on |ϕ⟩. Let R := (|0⟩⟨0| − |1⟩⟨1|) ⊗ I and S := −URU†R; then for any k ∈ N,

S^k U(|0⟩|ϕ⟩) = sin((2k + 1)θ)|0⟩V|ϕ⟩ + cos((2k + 1)θ)|1⟩|φ⟩.

The oblivious amplitude amplification method suffers from the same "soufflé problem" 18,19 as the methods based on Grover's rotation operator. When the exact number of target items in the database is unknown, there is no way of knowing when to stop the iteration. Iterating too little "undercooks" the state and too much "overcooks" the state, both leaving a mostly non-target final state. In 2005, Grover presented the fixed-point quantum search algorithm to tackle this problem 20. The rotation phase of the original Grover operator is modified to π/3. As a result, the amplitude of the target state increases monotonically as the number of iterations grows. However, the amplification efficiency is sacrificed: where the original Grover algorithm needs O(2^(n/2)) iterations, the fixed-point version needs O(3^n) iterations. Other fixed-point search algorithms have been proposed subsequently, and the quantum quadratic speedup regained 19,[21][22][23][24][25][26]. In 2009, Mizel 22 adopted the damping of dissipative systems to deal with the "soufflé problem", and the overhead is only 1.25.
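The amplification law above can be checked numerically. The sketch below constructs one valid U satisfying the promise (the block form is our own choice for illustration, not the construction used in the paper) and verifies that k applications of S leave amplitude sin((2k+1)θ) on the |0⟩ ancilla branch:

```python
import numpy as np

n = 2                                    # number of qubits V acts on (example)
theta = 0.2
s, c = np.sin(theta), np.cos(theta)
rng = np.random.default_rng(3)
V, _ = np.linalg.qr(rng.normal(size=(2**n, 2**n)))   # random real unitary V

# For any unitary V, U = [[s*V, c*V], [c*V, -s*V]] is unitary and satisfies
# U|0>|phi> = s|0>V|phi> + c|1>V|phi>, i.e. the OAA promise with |varphi>=V|phi>.
U = np.block([[s * V, c * V], [c * V, -s * V]])

I = np.eye(2**n)
R = np.block([[I, 0 * I], [0 * I, -I]])  # R = (|0><0| - |1><1|) (x) I
S = -U @ R @ U.conj().T @ R

phi = np.zeros(2**n); phi[0] = 1.0
state = U @ np.concatenate([phi, np.zeros(2**n)])    # U |0>|phi>
for k in range(1, 4):
    state = S @ state
    amp = np.linalg.norm(state[:2**n])               # weight on |0> branch
    print(k, amp, np.sin((2 * k + 1) * theta))       # the two values match
```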
In this paper, we present the fixed-point oblivious quantum amplitude-amplification (FOQA) algorithm based on the damping method. We construct an explicit quantum circuit to implement the algorithm in the framework of duality quantum computation. The core iteration operator is implemented via a linear combination of unitaries (LCU), which is one of the major methods for quantum algorithm design. The new algorithm avoids the "soufflé problem" while maintaining the quantum quadratic speedup. For an unsorted database with N items and M target items, the FOQA algorithm uses approximately O(1.5√(N/M)) iterations. The rest of this paper is organized as follows. In "The framework of duality quantum computation", we introduce the main idea and framework of duality quantum computation (or LCU); in "Mizel's fixed-point quantum search", we review Mizel's fixed-point search method based on damping; we present the FOQA algorithm and its quantum circuit implementation in "FOQA algorithm and its circuit implementation"; finally, we give the conclusion in "Conclusion".
The framework of duality quantum computation
The quantum operations performed by a general quantum computer are unitary transformations 27, while a duality quantum computer can implement a class of more generalized operators 28, namely linear combinations of unitaries (LCU), which are usually non-unitary. Inspired by the wave-particle duality of microscopic particles, Long proposed the duality quantum computing model based on the double-slit interference phenomenon in 2006 [28][29][30][31][32][33]. The core functions of this computing model are accomplished by two kinds of generalized computing gates, namely quantum wave division (QWD) and quantum wave combination (QWC), which are two unitaries acting on the ancillary qubits. Consider a non-unitary operator H = (1/C) Σ_{l=0}^{L−1} β_l U_l, where the β_l > 0, the U_l are unitaries, and C = Σ_l β_l is the normalization coefficient. This operator, a linear combination of unitaries, can be realized in four main steps:

i. Wave division. Suppose the work register is initialized to |ψ_0⟩. The ancillary qubits are prepared in the initial state |ψ_i⟩ = Σ_{l=0}^{L−1} √(β_l/C) |l⟩ by the QWD operation. The number of qubits needed in the ancillary register is m = ⌈log₂ L⌉, where the symbol ⌈a⌉ means rounding up to an integer not less than a. Here the state of the whole quantum register is |ψ_i⟩|ψ_0⟩.

ii. Entanglement generation. In this step, a series of ancilla-controlled operators Σ_{l=0}^{L−1} |l⟩⟨l| ⊗ U_l is applied to the work register. The ancillary register and the work register become entangled, and the state of the system is transformed into Σ_{l=0}^{L−1} √(β_l/C) |l⟩ U_l|ψ_0⟩.

iii. Wave combination. The QWC operation is implemented to integrate the quantum states in the m qubits of the ancillary register, and the L wavelets in the subspace are integrated into the initial state |0⟩^⊗m of the ancillary register.

iv. Post-processing and measurement. If the ancillary register is measured directly, the state |0⟩^⊗m is obtained with probability p_m.

Notice that the whole LCU process is successful only when the ancillary register is in state |0⟩^⊗m. When m is big, the probability p_m can be very small, so the process needs post-processing before measurement. The oblivious amplitude amplification method can be used to enlarge the amplitude of the state |0⟩^⊗m. It is easy to observe that a general quantum operator H = (1/C) Σ_{l=0}^{L−1} β_l U_l can be implemented in the framework of duality quantum computation. Since this operator is usually not unitary, duality quantum computation, or the linear combination of unitaries (LCU), opens broader applications of quantum computing. More details about duality quantum computation and LCU can be found in 28-33.
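A minimal numerical sketch of the four steps for a two-term LCU, taking QWC as the inverse of QWD (the operators and coefficients below are toy choices, not from the paper):

```python
import numpy as np

# H = (beta_0 U_0 + beta_1 U_1)/C with C = beta_0 + beta_1, one ancilla qubit.
beta = np.array([0.6, 0.4])
C = beta.sum()
U0 = np.array([[0., 1.], [1., 0.]])           # Pauli X
U1 = np.array([[1., 0.], [0., -1.]])          # Pauli Z

a = np.sqrt(beta / C)
W = np.array([[a[0], -a[1]],                  # QWD: |0> -> a0|0> + a1|1>
              [a[1],  a[0]]])
ctrlU = np.block([[U0, np.zeros((2, 2))],
                  [np.zeros((2, 2)), U1]])    # sum_l |l><l| (x) U_l

psi0 = np.array([1.0, 0.0])                   # work register state
state = np.kron(W @ np.array([1.0, 0.0]), psi0)   # step i: wave division
state = ctrlU @ state                             # step ii: entanglement
state = np.kron(W.conj().T, np.eye(2)) @ state    # step iii: wave combination

work_given_0 = state[:2]                      # ancilla-|0> block (unnormalized)
H = (beta[0] * U0 + beta[1] * U1) / C
print(np.allclose(work_given_0, H @ psi0))    # True: the block equals H|psi0>
print(f"p_m = {np.linalg.norm(work_given_0)**2:.3f}")  # postselection prob.
```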
Mizel's fixed-point quantum search
In 2009, Mizel observed that the transformation between quantum search and classical search can be realized by adjusting the magnitude of damping 22. When the magnitude of the introduced damping is large enough, the algorithm becomes a purely classical search without any quantum speedup, while if the damping is sufficiently small, the search process becomes the quantum Grover algorithm. There is a critical damping value that enables a fixed-point quantum search, for which the number of Oracle calls becomes only 1.5 times that of the original Grover algorithm. Consider an unsorted database of N items, M of which are the target items; the value M/N is unknown. The main tool of the algorithm is an Oracle that can recognize the target states by flipping their phases while leaving the phases of the other quantum states unchanged. First, initialize the register into the equal superposition state

|ψ⟩ = cos(ξ/2)|γ⟩ + sin(ξ/2)|β⟩.

Here sin(ξ/2) = √(M/N), |β⟩ is the equal superposition of the M target states, and |γ⟩ is the equal superposition of the N − M non-target states. The Grover search process can be described as a series of rotations in the 2-dimensional Hilbert plane spanned by |γ⟩ and |β⟩. In this plane, the Pauli operators can be defined as X̄ = |γ⟩⟨β| + |β⟩⟨γ|, Ȳ = i|β⟩⟨γ| − i|γ⟩⟨β|, Z̄ = |γ⟩⟨γ| − |β⟩⟨β|. Here the Z̄ operator is the Oracle of the Grover algorithm, used to rotate the phases of the target states. Another important operator in the Grover algorithm is the so-called "inversion about the mean", which can be constructed as Ē = 2|ψ⟩⟨ψ| − I. The whole Grover rotation operator is then G = ĒZ̄ = exp(−iξȲ). After k Grover iterations, the system becomes

|ψ_k⟩ = cos((2k + 1)ξ/2)|γ⟩ + sin((2k + 1)ξ/2)|β⟩.

Mizel introduced an ancillary qubit to indicate the proportion of the target states. Assuming the ancillary qubit is initialized to |1⟩, Mizel's fixed-point search algorithm can be described as follows.
Step 1. If the state of the work register is a target state, rotate the ancillary qubit by e^(−iαY). Here the phase angle α is used to control the magnitude of the damping; the value of the damping changes with the iteration, and more details about the changing rule for α can be found in 22. This step calls the Oracle Z̄ and can be expressed as e^(−iαY) ⊗ |β⟩⟨β| + I ⊗ |γ⟩⟨γ|.

Step 2. If the ancillary qubit is in its original state |1⟩, apply the Grover rotation to the work qubits. This step can be described as |1⟩⟨1| ⊗ G + |0⟩⟨0| ⊗ I.

Step 3. Measure the ancillary qubit. If it is in the state |1⟩, return to the first step and go on to the next iteration. Otherwise, return the target state.
Here Y and Z are single-qubit Pauli operators, namely the Y gate and the Z gate.
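The damped search loop can be simulated in the 2-dimensional {|γ⟩, |β⟩} plane together with the ancillary qubit. In the sketch below the damping angle α is held constant as a placeholder; Mizel's critical-damping schedule (Ref. 22) varies it with the iteration. Following Step 3 above, measuring the ancilla in |0⟩ heralds the target state:

```python
import numpy as np

def ry(t):   # exp(-i t Y) is a real rotation for Pauli Y
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

M_over_N = 0.01
xi = 2 * np.arcsin(np.sqrt(M_over_N))
G = ry(xi)                                    # Grover step in the 2-D plane
I2 = np.eye(2)

gamma, beta = np.array([1., 0.]), np.array([0., 1.])
work = np.cos(xi / 2) * gamma + np.sin(xi / 2) * beta
anc1 = np.array([0., 1.])                     # ancilla starts in |1>
psi = np.kron(anc1, work)                     # ordering: ancilla (x) work

alpha = 0.1                                   # placeholder damping angle
O = np.kron(ry(alpha), np.outer(beta, beta)) + np.kron(I2, np.outer(gamma, gamma))
CG = np.kron(np.diag([1., 0.]), I2) + np.kron(np.diag([0., 1.]), G)

p_fail = 1.0                                  # P(ancilla reads |1> every round)
for n in range(200):
    psi = CG @ (O @ psi)                      # Step 1 then Step 2
    p0 = np.linalg.norm(psi[:2]) ** 2         # Step 3: prob. of reading |0>
    p_fail *= 1.0 - p0
    psi = np.kron(anc1, psi[2:] / np.linalg.norm(psi[2:]))  # continue on |1>
print(f"failure probability after 200 rounds: {p_fail:.3e}")
```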
FOQA algorithm and its circuit implementation
In order to realize the fixed-point oblivious quantum amplitude-amplification (FOQA) algorithm via Mizel's damping method, we refer to the transformation technique in 25 and implement the algorithm using a linear combination of unitaries (LCU). As shown in Fig. 2, the amplification process requires three quantum registers, namely the single-qubit ancillary register, the index register and the content register. The latter two registers are collectively referred to as the work register. In the application, the unsorted database state to be retrieved is prepared by a unitary transformation U applied to the work register; this operation entangles the index states with the content states. Then, FOQA is constructed by iterating the LCU circuit (see Fig. 3 for details). After each iteration, the ancillary register is measured. If the result is |1⟩, the process is complete and we obtain the target state in the work register; otherwise, we proceed to the next iteration. The process of the whole algorithm is as follows:

I. Initialize the circuit. Prepare the three registers in the state |0⟩|0⟩|ϕ⟩.

II. Generate the database state by applying the U operator to the work register. We have U|0⟩|ϕ⟩ = sin(θ)|0⟩V|ϕ⟩ + cos(θ)|1⟩|φ⟩, where V|ϕ⟩ is the target state for searching and the corresponding index state is |0⟩.

III. Utilize the LCU circuit to amplify the amplitude of the target state. The LCU circuit changes the phases of the index states together with the content states.

IV. Measure the ancillary register. If it is |0⟩, return to step III and go on to the next iteration. Otherwise, return the target state V|ϕ⟩.
The above is the workflow of the FOQA algorithm. We now design the quantum circuit based on linear combinations of unitary operators, drawing on the methods of Lei et al. 22,25. Following "Mizel's fixed-point quantum search", the quantum operation corresponding to Mizel's damped fixed-point search can be constructed accordingly, where I_i denotes the identity on the i-th (i = 1, 2, 3) register and the subscript n denotes the n-th iteration. We have used the identity e^(−iα_n Y) = ZV_n†ZV_n in the transformation of Eq. (10). Furthermore, using the identities V_nV_n† = I_1 and UU† = I_2 ⊗ I_3, the LCU operator can be further simplified. The quantum circuit implementation of this operator is shown in Fig. 3. After the n-th iteration, measure the ancillary qubit; if we obtain |0⟩, move forward to the (n + 1)-th iteration. At this point, suppose the work register is in the state t_n|0⟩V|ϕ⟩ + s_n|1⟩|φ⟩, where t_n denotes the amplitude of the target state |0⟩V|ϕ⟩ after the n-th iteration, and s_n the amplitude of the non-target state |1⟩|φ⟩.
Step 1. Wave division. Apply the operator V_n to the ancillary qubit.

Step 2. Entanglement generation. If the index qubit is in |0⟩, apply the −Z operator to the ancillary register. The equivalent unitary transformation of this step is −Z ⊗ |0⟩⟨0| ⊗ I_3 + I_1 ⊗ |1⟩⟨1| ⊗ I_3.

Step 3. Apply the U† operator to the work register, using the results in 16 on the action of U†.

Step 4. Entanglement generation. If the ancillary qubit is in |0⟩, apply the Z operator to the index register. The equivalent unitary transformation of this step is |0⟩⟨0| ⊗ Z ⊗ I_3 + |1⟩⟨1| ⊗ I_2 ⊗ I_3.

Step 5. Wave combination. Apply the V_n† operator to the ancillary register and implement the U operator on the work register.

The ancillary register is then measured, and the state |1⟩ is obtained with probability p_{n+1}. In that case, according to Eq. (4), the quantum state of the work register is |0⟩V|ϕ⟩, which means the amplification process has succeeded; otherwise, we proceed to the next iteration until the state |1⟩ is obtained. The above is the process of the FOQA algorithm implemented by LCU quantum circuits. The failure probability of the algorithm after n iterations equals the probability of obtaining |0⟩ every time the ancillary register is measured; thus, the failure rate after n iterations is q_n = ∏_{k=1}^{n} (1 − p_k). When the damping values α_n are selected according to Ref. 25, the theoretical success rate of the algorithm in this paper is the same as that in Ref. 22. According to the analysis in 22,25, the failure probability approaches zero as the number of iterations increases. The average number of iterations required before we obtain |1⟩ on the ancillary qubit is O(1.5√(N/M)).
"Computer Science",
"Physics"
] |
Partial regularity for minimizers of a class of discontinuous Lagrangians
We study a one-dimensional Lagrangian problem including the variational reformulation, derived in a recent work of Ambrosio-Baradat-Brenier, of the discrete Monge-Ampère gravitational model, which describes the motion of interacting particles whose dynamics is ruled by the optimal transport problem. The more general action-type functional we consider contains a discontinuous potential term related to the descending slope of the opposite squared distance function from a generic discrete set in R^{d}. We exploit the underlying geometrical structure provided by the associated Voronoi decomposition of the space to obtain C^{1,1} regularity for local minimizers out of a finite number of shock times.
Introduction
In recent years, action functionals of the form

I_f(γ) = ∫₀¹ ( ½ |γ̇_t|² + ½ |∇f(γ_t)|² ) dt    (1.1)

have received the attention of many authors, due to their appearance in several areas of mathematics.
In the theory of gradient flows, for instance, they correspond to the integral form of the energy dissipation (see [5]), and they are also related to the so-called entropic regularization of the Wasserstein distance, when f is a multiple of the logarithmic entropy, defined on the space of probability measures with finite quadratic moment (see [9]). In all these cases, the main obstruction to the application of standard results of the Calculus of Variations stems from the lack of differentiability, or even continuity, of the Lagrangian with respect to γ.
When f : X → (−∞, +∞] is a λ-convex function defined on a metric space (X, d), the term |∇f(x)| has to be interpreted as the descending slope of f at x, namely

|∇⁻f|(x) := limsup_{y→x} max{f(x) − f(y), 0} / d(x, y),    (1.2)

and, if X is a Hilbert space, it also coincides with the norm of the minimal selection in the subdifferential ∂f(x), known as the extended gradient of f at x (see [5]). In this general framework, the stability of the functionals I_f with respect to Γ-convergence of the functions f was investigated in [2], [3] and [4].
In particular, [2] addressed a rigorous derivation, along the lines of [8], of a dynamical system of interacting particles strictly related to the optimal transport problem, known as the discrete Monge-Ampère gravitational (MAG) model (see subsection 2.2). What emerged from this work is that the dynamics of MAG can be conveniently studied as the Euler-Lagrange equation associated to an action functional of type (1.1), where f = f_K is the (−1)-convex function given by the opposite squared distance from a specific discrete set K ⊂ R^d, namely

f_K(x) := −½ dist²_K(x) = −½ inf_{p∈K} |x − p|².    (1.3)

Clearly, f_K is not everywhere differentiable in general, so that ∇f_K, denoting the extended gradient of f_K, is not even continuous, and standard results of the Calculus of Variations are not directly applicable in this case. The present work aims at a systematic analysis of the properties of local minimizers of the functional I_{f_K}, where K is a generic discrete set in R^d. Our results apply in particular to solutions of the discrete MAG model, thereby addressing the general n-dimensional case, left open in [2], where the most involved part of the analysis was carried out only in dimension 1.
The plan of the paper is the following. In section 2 we present the general framework and the motivations of our work. More precisely, in subsection 2.1 we provide a contextualization of the problem in the general Hilbertian setting, with particular emphasis on the variational properties of functionals of type (1.1) when f is a λ-convex function; the main references for this part are [3] and [5]. Then, in subsection 2.2, we introduce the Monge-Ampère gravitational model in the flat torus T^n as a modification of classical Newtonian gravitation in which the linear Poisson equation is replaced by the fully nonlinear Monge-Ampère equation (see (2.7)). Following the ideas of [8], we underline the intriguing link of this dynamical system with the optimal transport problem, whose powerful tools can be used to derive a Lagrangian reformulation of MAG that is particularly meaningful in the discrete setting, where additional foundation for the model is given by the results in [2]. By means of a least action principle, we then interpret solutions of the discrete MAG model as local minimizers of the functional I_{f_K}, where f_K is the opposite squared distance function from a discrete set K ⊂ R^d, as defined in (1.3). Section 3, being the core of the paper, is then devoted to the analysis of local minimizers for the Lagrangian problem associated to I_{f_K} when K is a finite collection of points in R^d. We crucially consider the Voronoi partition of the space carried by K, which encodes the underlying geometrical structure of the problem, and exploit it in order to obtain, in Proposition 3.8, the existence of specific directions along which momentum is locally conserved by the dynamics. As a byproduct, we show in Corollary 3.9 that a local minimizer γ is regular as long as it stays in a single Voronoi cell, possibly developing singularities only at those times at which the optimality class changes. As shown later, the set K also carries a partition of R^d into "potential zones" (see Proposition 3.6). This second partition is in general less fine than the Voronoi one, and coincides with it when the set K is "balanced", as is the case for a cubic lattice. We then define S(γ) to be the set of "shock times", at which the curve γ jumps from one Voronoi cell to another, and NDS(γ) ⊆ S(γ) the set of "nondegenerate shock times", at which γ not only changes Voronoi cell but also potential zone. With this in mind, our main regularity results, Theorem 3.15 and Corollary 3.16, can be collected in a single statement as follows:

Theorem (Partial regularity). Let γ be a local minimizer of I_{f_K} with endpoint constraints. Then

i) γ has a finite number of nondegenerate shock times, out of which it is C^{1,1}.
ii) Under the additional assumption that K is balanced, γ has a finite number of shock times, out of which it is C^∞.
This result in particular provides an extension to arbitrary space dimension of [2, Theorem 13], where regularity out of a finite number of shock times was proved for minimizers of a one-dimensional version of the MAG model.
Acknowledgements. I wish to thank Prof. Luigi Ambrosio for introducing me to the present research topic and for providing me with very useful suggestions throughout the preparation of this work.
2 General framework and motivations

It is easily seen that λ-convex functions are precisely those functions that satisfy the perturbed convexity inequality

f((1 − t)x + ty) ≤ (1 − t)f(x) + tf(y) − (λ/2) t(1 − t)|x − y|²  for every x, y and t ∈ [0, 1].

By ∂f(x) we denote the Gateaux subdifferential of f at x ∈ dom(f), namely the (possibly empty) closed convex set of vectors ξ ∈ H such that f(y) ≥ f(x) + ⟨ξ, y − x⟩ + o(|y − x|) as y → x. We denote by dom(∂f) the domain of the subdifferential. For a λ-convex function, we can exploit the monotonicity of difference quotients to derive the equivalent non-asymptotic description of the subdifferential

∂f(x) = { ξ ∈ H : f(y) ≥ f(x) + ⟨ξ, y − x⟩ + (λ/2)|y − x|²  for every y ∈ H }.

Whenever x ∈ dom(∂f), there exists a unique element ξ with minimal norm in ∂f(x), obtained by projecting 0 onto ∂f(x). This element is called the extended gradient of f at x, and is denoted by ∇f(x).
The concept of extended gradient is strictly related to that of the descending slope of f at x ∈ dom(f), defined in (1.2). In fact, for λ-convex functions, it can be proved that ∂f(x) is not empty if and only if |∇⁻f|(x) < +∞, and that, in this case, the following equalities hold (see [5]):

|∇⁻f|(x) = |∇f(x)| = min { |ξ| : ξ ∈ ∂f(x) }.    (2.1)

In this paper we deal with the following specialization of the above setting. Given a closed set K ⊆ H, we consider the opposite squared distance function from K, namely

f_K(x) := −½ inf_{y∈K} |x − y|².    (2.2)

The infimum is not attained in general, unless K is either convex or compact, or H is finite dimensional. By defining the convex function g_K(x) := f_K(x) + ½|x|², we derive from the equality g_K(x) = sup_{y∈K} ( ⟨x, y⟩ − ½|y|² ) that g_K is a supremum of affine functions, hence convex, so that f_K is (−1)-convex.

A class of action functionals. We now introduce, in the general Hilbertian setting, the class of action functionals that we are going to study throughout the paper. We fix a function h : [0, +∞] → [0, +∞] representing a "potential shape". Then, for δ > 0 and f : H → (−∞, +∞] proper, λ-convex and lower semicontinuous, we consider the functional I^δ_f : C([0, δ], H) → [0, +∞] defined by

I^δ_f(γ) := ∫₀^δ ( ½ |γ̇_t|² + ½ h(|∇f(γ_t)|²) ) dt,    (2.3)

set equal to +∞ if γ is not absolutely continuous. Compared to the functionals of type (1.1) studied so far in the literature, we consider here the enriched class of functionals in which the potential shape h is allowed to be different from the identity. In the sequel we assume h to be continuous, and C¹ when restricted to [0, +∞).
Since we intend to study this type of functionals from the variational point of view, it is crucial to realize that (2.3) is lower semicontinuous with respect to the C([0, δ], H) topology. This easily follows from the lower semicontinuity of the classical action and the above characterization of the extended gradient (2.1). Then, for x₀, x_δ ∈ H, the infimum of I^δ_f among curves joining x₀ to x_δ is attained under suitable coercivity conditions. Note in particular that this is the case if H is finite dimensional.
Due to the lack of continuity of the potential term, however, very little is known about the regularity of minimizers of this type of functionals, even in the finite dimensional case. One could ask, for instance, whether some higher regularity, or at least a sort of Euler-Lagrange equation like (2.4), could be formally derived for local minimizers. In the very specific case in which f is the opposite squared distance function from a discrete set in R^d, we will prove in the sequel that local minimizers are piecewise C^{1,1}, and that, out of a finite number of singularities, (2.4) holds taking the modulus on both sides and replacing the equality with a ≤ sign (see Theorem 3.15). Nevertheless, in the general setting, one can exploit the fact that the functional I^δ_f is autonomous in order to perform "horizontal" variations of the independent variable, and eventually derive the Dubois-Reymond equation for a local minimizer γ (see [1]), which holds in the sense of distributions in (0, δ). Equivalently, there exists a constant c ∈ R such that

½ |γ̇_t|² − ½ h(|∇f(γ_t)|²) = c    a.e. in (0, δ).

This implies in particular that every local minimizer of I^δ_f is Lipschitz continuous, provided that |∇f| is bounded on bounded sets.
We end this part by quoting a result from [3] addressing the matter of stability for the class of functionals considered so far. Adding endpoint constraints x₀, x_δ ∈ H, we define the functional I^δ_{f,x₀,x_δ}, equal to I^δ_f on curves γ with γ(0) = x₀ and γ(δ) = x_δ, and +∞ otherwise.

Theorem 2.1 (Stability, [3]). Let f_j, f be uniformly λ-convex functions, and let x_{j,0}, x_{j,δ}, x₀, x_δ ∈ H. Suppose that

i) f_j → f w.r.t. Mosco convergence;

ii) x_{j,0} → x₀ and x_{j,δ} → x_δ.
Then I^δ_{f_j,x_{j,0},x_{j,δ}} Γ-converge to I^δ_{f,x₀,x_δ} in the C([0, δ], H) topology. As a byproduct, under an additional equi-coercivity assumption, this theorem grants convergence of minimal values to minimal values and of minimizers to minimizers. Notice that Theorem 2.1 is stated in [3] for h = id, but the same proof is seen to work in the general case with only minor modifications.
The Monge-Ampère gravitational model
In a periodic spatial domain like the flat torus T^n = R^n/Z^n, we can describe the classical Newtonian gravitation of a unit mass in a "parametric" way as follows. We first choose a reference metric probability space (A, λ) of labels for the gravitating particles. Then we assign to each particle a ∈ A its position X_t(a) ∈ T^n at time t. Typical choices for the reference space are the unit cube [0, 1]^n with the n-dimensional Lebesgue measure in the continuous case, and a finite set of points with the renormalized counting measure in the discrete case. Denoting by µ_t := (X_t)_# λ the image measure of λ through X_t, the Newtonian model can be written as

Ẍ_t(a) = −∇φ_t(X_t(a)),    Δφ_t = µ_t − 1.    (2.6)

Here φ_t is the gravitational potential generated by µ_t, defined on T^n. Note that, due to the periodicity of the space, the average density 1 has been removed from the right hand side of the Poisson equation, in order to let the uniform measure L^n be a stationary solution of the system. This is a perfectly meaningful assumption, because, by symmetry, the attractive force of the uniform density has to be zero everywhere on T^n. In this section we are interested in the related Monge-Ampère gravitational model (MAG in short), which is simply obtained from (2.6) by replacing the Poisson equation with the fully nonlinear Monge-Ampère equation:

Ẍ_t(a) = −∇φ_t(X_t(a)),    det(I + D²φ_t) = µ_t.    (2.7)

Notice that (2.6) can be recovered from (2.7) by expanding the determinant in the Monge-Ampère equation and keeping only the linear term: det(I + D²φ_t) ≈ 1 + Δφ_t. We refer to [8] and the references therein for a broader introduction to this dynamical system, as well as for a comparison with the classical Newtonian model.
The MAG model in optimal transportation terms. System (2.7) appears to have an intriguing geometrical interpretation if we look at it from the optimal transportation point of view.
In order to better illustrate this link, we first quote the following specialization to the flat torus T^n of the classical Brenier-McCann theorem on the existence and uniqueness of optimal transport maps on Riemannian manifolds (see [7], [10], [12]). Let us begin with some notation. We denote by π : R^n → T^n the projection to the quotient. We say that a vector field F : R^n → R^n […]. If this is the case, we make a little abuse of notation by considering F also as a vector field from T^n to itself. Given a Borel probability measure λ on T^n, we consider the Hilbert space H_λ […] and its closed subset K_λ given by all the λ-preserving vector fields. Finally, we recall that f_{K_λ} denotes the opposite squared distance function from K_λ, as defined in (2.2).
Theorem 2.2 (Existence and uniqueness of optimal transport maps in T^n). Let µ and λ be Borel probability measures on T^n, and suppose that µ ≪ L^n. Then

i) There exists a locally Lipschitz convex function ψ : […], and T := ∇ψ : T^n → T^n is the unique optimal transport map from µ to λ.
ii) If µ = ρL^n and λ = ηL^n are both absolutely continuous w.r.t. the Lebesgue measure, then φ solves the Monge-Ampère equation in the almost everywhere sense. Furthermore, if ρ and η are of class C^{0,α}, then φ is of class C^{2,β}, for 0 < β < α, and solves (2.8) in the classical sense.

iii) […], where W₂ is the Wasserstein distance in the space P₂(T^n). Moreover, the map […].

In order to reformulate the Monge-Ampère gravitational model in optimal transportation terms, we look at the continuous case, in which the reference space is given by (T^n, L^n). Fix then λ = L^n in the theorem above, and consider a parametrization X_t : T^n → T^n such that (X_t)_# λ = µ_t and µ_t = ρ_t L^n is absolutely continuous w.r.t. the Lebesgue measure. If Y_t ∈ H_λ is any lifting of X_t, that is to say a map that satisfies π ∘ Y_t = X_t, then Theorem 2.2 grants that the Kantorovich potential φ_t solves the Monge-Ampère equation […], where T_t is the unique optimal transport map from µ_t to λ. So we see that (2.7) reduces to (2.11), with T_t equal to the unique optimal transport map from µ_t to λ, suggesting an interpretation of (2.11) as the Euler-Lagrange equation associated to the functional (2.12). This variational reformulation appears natural in the attempt to give a meaning to system (2.7) also in the discrete setting, where, as is well known, Theorem 2.2 fails.
The discrete MAG model. As already mentioned, one of the aims of this work is to go deeper into the analysis of the discrete version of the Monge-Ampère gravitational model, first introduced in [8] and then formalized in [2]. Here we choose as reference measure a uniformly weighted sum of Dirac masses, where the a_i's are distinct points on T^n (think for instance of a regular lattice approximating the uniform measure). In this case, the space H_λ is easily seen to be finite dimensional, and isomorphic to R^{nm}, through the identification of a map Y ∈ H_λ with the m-tuple (Y(a₁), ..., Y(a_m)). By regarding, a bit improperly, the a_i's as elements of [0, 1)^n, the set K_λ can be written as the union of m! cubic lattices in R^{nm}. In this discrete scenario, the MAG model describes the motion of m particles of equal mass 1/m in the torus T^n, whose dynamics is ruled by the optimal transport problem as follows. The position of the i-th particle at time t is denoted by x_i(t) = X_t(a_i), and a lifting of […]. The equivalent of (2.11) in this setting is, at least formally, (2.13), where […]. The system (2.13) is easily seen to be ill posed, because of the general non-uniqueness of the projection on K_λ, ultimately due to the non-uniqueness of the optimal transport map in the discrete setting, in contrast with the absolutely continuous one. As already pointed out in [2], in order to fix this problem, it is convenient to switch to a variational reformulation of the dynamical system, by considering an action functional of type (2.12). Therefore, relying on a least action principle, we say that y ∈ AC([0, δ], R^{nm}) is a solution of the discrete MAG model if it is a local minimizer, subject to endpoint constraints, of that functional with f = f_{K_λ}. In the next section, we are going to study a more general functional in which K_λ is replaced by a generic discrete set K in R^d.
Before concluding this part, we would like to briefly turn the attention of the reader to an analogous Lagrangian problem in the space of probability measures (P(T^n), W₂). This can be obtained from MAG by dropping the parametric description of the gravitating matter, required by the Hilbertian setting of subsection 2.1, and directly considering the evolution of a probability measure µ_t in T^n.
A related Lagrangian problem in (P(T^n), W₂). Far from being limited to the Hilbertian context, functionals of type (1.1) can be considered in a much more general metric setting, provided that we interpret |∇f(x)| as the descending slope |∇⁻f|(x) defined in (1.2), and |γ̇| as the metric derivative of an absolutely continuous curve γ : [0, 1] → X. We avoid repeating all the constructions in this new scenario (see [4] for a systematic introduction), and prefer to immediately specialize to our case of interest. We take (X, d) = (P(T^n), W₂), the space of probability measures on T^n endowed with the Wasserstein distance induced by the optimal transport problem with quadratic cost. As is well known, (X, d) is compact, geodesic and positively curved (see [5]). Given a "reference" probability measure λ ∈ P(T^n), we consider the opposite squared distance function from λ, namely f_λ(µ) := −½ W₂²(µ, λ). Since (X, d) is positively curved, we easily deduce that f_λ is (−1)-convex (in the metric setting, convexity has to be intended along geodesics). Moreover, we can bound the descending slope of f_λ at µ as in [5, Theorem 10.4.12]; the bound (2.14) involves the minimal L² norm of the barycentric projection of optimal transport plans. Inspired by the MAG model, and in particular by formulas (2.9) and (2.10), one could study the Lagrangian problem associated to the lower semicontinuous functional I^δ_{f_λ,µ₀,µ_δ} : C([0, δ], X) → [0, +∞], defined in analogy with the Hilbertian case. From the compactness of X, we immediately obtain the existence of minimizers of I^δ_{f_λ,µ₀,µ_δ}. In addition, by exploiting a generalization of Theorem 2.1 to the general metric setting provided by [4, Theorem 17], as well as the bound on the descending slope (2.14), we obtain the following stability result:

Proposition 2.3. Let λ_j, λ ∈ P(T^d) be reference measures, and µ_{j,0}, µ_{j,δ}, µ₀, µ_δ ∈ P(T^d) be endpoints. Suppose that

i) λ_j → λ in (P(T^d), W₂);

ii) µ_{j,0} → µ₀ and µ_{j,δ} → µ_δ in (P(T^d), W₂).

Then I^δ_{f_{λ_j},µ_{j,0},µ_{j,δ}} Γ-converge to I^δ_{f_λ,µ₀,µ_δ} in the C([0, δ], X) topology. Moreover, we have convergence of minimal values to minimal values and of minimizers to minimizers.
The case of the opposite squared distance function in R d
In this section the main results of the paper will be derived. We study functionals of type (2.3) in the special case in which H = R^d and f = f_K is the opposite squared distance function from a closed subset K ⊆ R^d. Motivated by the variational reformulation of the discrete MAG model derived in the previous section, we will in particular focus on the case in which K is a discrete collection of points in R^d. In this last setting, we will exploit the geometrical structure given by the associated Voronoi decomposition of the space in order to get regularity for local minimizers out of a finite number of "shock times".
Given a closed set K ⊆ R^d, we consider the opposite squared distance function from K, defined by

f_K(x) := −½ dist²_K(x) = −½ min_{p∈K} |x − p|².    (3.1)

Notice that the infimum in (2.2) is always attained here, due to the local compactness of the ambient space. The convex function g_K(x) := f_K(x) + ½|x|² can again be written as a supremum of affine functions (3.2), thus implying the (−1)-convexity of f_K. We fix a potential shape h : [0, +∞) → [0, +∞) of class C¹ and consider the action functional I^δ_{f_K} as in (2.3). We stress that ∇f_K has to be intended as an extended gradient, because f_K is differentiable only at those points at which the projection on K is unique.
In order to get a useful characterization of ∇f_K, we need a well known lemma of convex analysis providing an explicit formula for the subdifferential at x of the maximum of a family of convex functions, under suitable assumptions (see [11]).

Lemma 3.1 (Subdifferential of the sup function). Let {g_α : R^d → R}_{α∈A} be a collection of convex functions indexed on a compact metric space A, and suppose that α ↦ g_α(x) is upper semicontinuous for every x ∈ R^d. We consider the supremum function g(x) := sup_{α∈A} g_α(x). Then, if the supremum in the definition of g(x) is attained, the following formula holds for the subdifferential of g at x:

∂g(x) = conv ( ⋃ { ∂g_α(x) : g_α(x) = g(x) } ).

We call opt_K(x) the compact subset of K containing all the points that minimize the distance from x:

opt_K(x) := { p ∈ K : |x − p| = dist_K(x) }.

In the sequel we will refer to opt_K(x) as the optimality class of x. Applying Lemma 3.1 we get:

Proposition 3.2 (Subdifferential of the opposite squared distance function). Let K ⊆ R^d be a closed set, and let f_K, g_K be defined as in (3.1) and (3.2). Then

i) The subdifferential of g_K at x is given by ∂g_K(x) = conv(opt_K(x)).

ii) The subdifferential of f_K at x is given by ∂f_K(x) = conv(opt_K(x)) − x. Moreover, denoting by η_K(x) the unique projection of x on the closed convex set conv(opt_K(x)), the following formula holds for the extended gradient of f_K at x:

∇f_K(x) = η_K(x) − x.    (3.3)

iii) The point η_K(x) depends only on the optimality class of x. That is to say, η_K(x) = η_K(y) whenever opt_K(x) = opt_K(y).
Proof. Point i) easily follows from Lemma 3.1 if K is compact. To deal with the general case it is enough to notice that, for every x ∈ R^d and every radius R > dist_K(x), we have g_K = g_{K∩B_R(x)} in a neighborhood of x. The formula for ∂f_K(x) is a consequence of the rule for the subdifferential of the sum of two functions, one of which is smooth. Then, by definition, ∇f_K(x) is the projection of 0 on ∂f_K(x) = conv(opt_K(x)) − x, and formula (3.3) follows after a translation by x. Let us now address point iii). Suppose that x and y share the same optimality class, opt_K(x) = opt_K(y). Consider the affine space A spanned by opt_K(x) and its orthogonal space B passing through x. From the hypothesis on x and y we deduce that y also belongs to B. Then, denoting by p the point of intersection of A and B, by orthogonality we have, for every z ∈ conv(opt_K(x)),

|x − z|² = |x − p|² + |p − z|²  and  |y − z|² = |y − p|² + |p − z|².

Hence, both distances are minimized by the point z obtained by projecting p on conv(opt_K(x)), thus η_K(x) = η_K(y).
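Formula (3.3) lends itself to a direct numerical check: compute the optimality class of x, then project x onto its convex hull by solving a small quadratic program over convex-combination weights. The following sketch is our own illustration, not from the paper:

```python
import numpy as np
from scipy.optimize import minimize

def extended_gradient(x, K, tol=1e-9):
    """Return grad f_K(x) = eta_K(x) - x for a finite point set K."""
    x, K = np.asarray(x, float), np.asarray(K, float)
    d = np.linalg.norm(K - x, axis=1)
    opt = K[d <= d.min() + tol]            # optimality class opt_K(x)
    k = len(opt)
    # Project x on conv(opt): minimize |sum_i w_i p_i - x|^2 over the simplex.
    obj = lambda w: np.sum((w @ opt - x) ** 2)
    cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)
    res = minimize(obj, np.full(k, 1.0 / k), bounds=[(0, 1)] * k,
                   constraints=cons)
    eta = res.x @ opt                      # eta_K(x), projection on the hull
    return eta - x                         # minimal-norm subgradient

K = np.array([[1.0, 0.0], [-1.0, 0.0]])
print(extended_gradient(np.array([0.5, 0.3]), K))  # unique projection: (0.5, -0.3)
print(extended_gradient(np.array([0.0, 0.3]), K))  # bisector: (0, -0.3), shorter
                                                   # than the distance to K
```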
Remark 3.3. From (3.3) we deduce that the potential term |∇f_K(x)|² is always less than or equal to −2f_K(x) = dist²_K(x), and equality holds if and only if x has a unique projection on K. It is interesting to see what this means for the MAG model. Using the notation of subsection 2.2, the potential of a configuration (y₁, ..., y_m) ∈ R^{nm} is always smaller than W₂(µ, λ), where, setting […], W₂ is the Wasserstein distance in P₂(T^n). Moreover, equality holds if and only if there exists a unique optimal transport map from µ to λ. So we see that in the context of the MAG model, the potential term should be interpreted as a "measure of the ambiguity in the optimal transport problem". Typical manifestations of ambiguity in the discrete scenario appear when two or more particles collapse, thus sharing the same position in T^n. Compare also this phenomenon with the continuous framework of Theorem 2.2, where this ambiguity does not occur.
To end this part, we briefly come back to the matter of stability in this specialized context, stating the following corollary of Theorem 2.1:

Corollary 3.4. Let K_j, K be closed subsets of R^d, and let x_{j,0}, x_{j,δ}, x_0, x_δ ∈ R^d. Suppose that

i) K_j → K in the sense of Hausdorff on every compact set;

ii) x_{j,i} → x_i for i = 0, δ.
Then the functionals I^δ_{f_{K_j},x_{j,0},x_{j,δ}} Γ-converge to I^δ_{f_K,x_0,x_δ} in the C([0, δ], R^d) topology. Moreover, we have convergence of minimal values to minimal values and of minimizers to minimizers.
As a consequence, the functional associated to a closed set K can be approximated by the functionals associated to sets K_j, where each K_j is a finite collection of points in R^d. The aim of the following subsection is to focus on this simpler situation.
The discrete case
From now on, we restrict our analysis to the case in which K is given by a collection of N distinct points in R^d: K = {p_1, ..., p_N}.
We are particularly interested in studying properties of local minimizers for I^δ_{f_K,x_0,x_δ} because of the link with the variational reformulation of the discrete MAG model (see the discussion above). There, K was an infinite discrete set, but we can clearly restrict our analysis, which is essentially local, to the case in which K is finite, due to the compactness of the range of every continuous curve γ : [0, δ] → R^d. Let us fix K, so as to be allowed to omit all the subscripts involving it. Then, for instance, we will write f, g, η, opt in place of f_K, g_K, η_K, opt_K.
Polyhedra, Voronoi cells and potential zones. We say that P ⊆ R^d is a polyhedron if it is a nonempty closed convex set admitting a representation of the form P = { x ∈ R^d : T_j(x) ≤ 0 for every j = 1, ..., ℓ }, (3.4) where ℓ ∈ N and T_j : R^d → R are affine functions. A bounded polyhedron is called a polytope.
The Voronoi partition associated to a finite collection of points K = {p_1, ..., p_N} is the finite decomposition {V_H}_{H∈P(K)} of R^d, indexed by the set P(K) of the parts of K, and such that V_H = { x ∈ R^d : opt(x) = H }. We call V_H the Voronoi cell corresponding to the optimality class H. The following are well-known facts about this remarkable cellular decomposition of the space (see [6]).

Proposition 3.5 (Properties of the Voronoi partition). Let H ∈ P(K) be such that the Voronoi cell V_H is nonempty. Then

i) V_H is a convex set. Moreover, denoting by A_H the affine space spanned by H, and by B_H the affine space { x ∈ R^d : |x − p_i| = |x − p_j| ∀ p_i, p_j ∈ H }, we have that A_H is orthogonal to B_H, they have complementary dimensions in R^d, and V_H is relatively open in B_H. We call p_H the unique intersection point of A_H and B_H.

ii) The closure of V_H is a polyhedron, whose relative boundary in B_H is precisely given by the disjoint union of all the Voronoi cells V_L with H ⊊ L.
In order to introduce the second fundamental decomposition associated to K, we also need to define, for every η ∈ R^d, the sets Q_η := { x ∈ R^d : η(x) = η } and P_η := { x ∈ R^d : η ∈ ∂g(x) }. The following proposition encodes the underlying geometrical structure conferred on our variational problem by the particular choice we made for the potential. It will be of fundamental importance in deriving regularity results for local minimizers of the functional I^δ_{f_K,x_0,x_δ}.

Proposition 3.6 (Voronoi cells and potential zones). The following facts hold:

i) The map η is constant in each Voronoi cell, and hence has a finite range, which we denote by E.
ii) {Q_η}_{η∈E} is a partition of R^d, and x ∈ Q_η if and only if ∇f(x) = η − x. In the sequel we will refer to the Q_η's as potential zones.
iii) For every η ∈ E, both Q_η and P_η are unions of Voronoi cells, and Q_η ⊆ P_η.

iv) For every η ∈ E, P_η is a polyhedron.

v) If x ∈ Q_η ∩ P_{η′} for some η′ ∈ E with η′ ≠ η, then |η′ − x| > |η − x|.

Proof. Points i), ii) and iii) are direct consequences of Proposition 3.2. To prove point iv) it is enough to notice that P_η is a closed convex set that can be written as the union of a finite number of polyhedra (the closures of the Voronoi cells contained in P_η). Finally, point v) easily follows from the fact that η(x) = η is the unique projection of x on the closed convex set ∂g(x), which contains η′.
So we see that K carries two partitions of R^d, one finer than the other: the first into Voronoi cells and the second into potential zones. Simple examples show that for a general K they do not coincide (see for instance Example 3.14 hereafter). If they coincide, we say that K is balanced. In such a case, the map η defines a bijection between Voronoi cells and potential zones; that is to say, η(x) = η(y) ⟺ opt(x) = opt(y).
Clearly, a sufficient condition for K to be balanced is given by opt(η(x)) = opt(x) for every x ∈ R d . (3.7) It is worth noting that in dimension d = 1 every K is balanced, and that the same is true in any dimension for cubic lattices.
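Condition (3.7) can be probed numerically. At a generic point the optimality class is a singleton and (3.7) holds trivially, so the sketch below (illustrative, not from the paper; it reuses eta_K from the earlier snippet) samples points lying on bisector hyperplanes, where ties actually occur, and checks opt(η(x)) = opt(x) there.

```python
import numpy as np
from itertools import combinations

def opt_idx(x, K, tol=1e-7):
    """Indices of the optimality class of x, as a set."""
    d = np.linalg.norm(K - x, axis=1)
    return frozenset(np.flatnonzero(d <= d.min() + tol))

def probe_balanced(K, trials=200, seed=0):
    """Test the sufficient condition (3.7) on bisector hyperplanes."""
    rng = np.random.default_rng(seed)
    for i, j in combinations(range(len(K)), 2):
        mid, n = (K[i] + K[j]) / 2.0, K[j] - K[i]
        for _ in range(trials):
            v = rng.normal(size=K.shape[1])
            v -= (v @ n) / (n @ n) * n          # now |x - p_i| = |x - p_j|
            x = mid + v
            if opt_idx(eta_K(x, K), K) != opt_idx(x, K):
                return False, x                 # (3.7) violated at x
    return True, None

square = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])  # cell of a cubic lattice
print(probe_balanced(square)[0])   # True, consistent with the remark above
```

Of course a finite sample can only disprove (3.7); a True outcome is evidence of balancedness, not a proof.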
Conserved quantities. In this paragraph we underline the presence of some conserved quantities for local minimizers of our variational problem. They arise naturally by testing the local minimality against variations along some specific directions. The following lemma collects two crucial observations needed in order to suitably perform such variations:

Lemma 3.7 (Local properties of the Voronoi diagram). The following facts hold:

i) If x ∈ V_H, for some H ⊆ K, then there exists a neighborhood U of x such that opt(y) ⊆ H for every y ∈ U.
ii) If x ∈ V H , then x + ǫv ∈ V H , provided that the vector v is parallel to B H , and ǫ is sufficiently small.
Proof. Point i) follows from the fact that a point of K which is not optimal for x is not optimal for y either, provided that y is chosen close enough to x. Point ii) is instead a direct consequence of the fact that V_H is relatively open in the affine space B_H.
By using very classical variational arguments together with Lemma 3.7, we derive the following

Proposition 3.8 (Conservation laws). Let γ be a local minimizer of I^δ_{f_K,x_0,x_δ}. Then

i) (Conservation of the energy). There exists a constant c ∈ R such that the Dubois-Reymond identity (2.5) holds along γ. In particular, γ is Lipschitz continuous.
ii) (Local conservation of momentum). Let (t_1, t_2) ⊆ [0, δ] be a time interval. Suppose that there exists an optimality class H ⊆ K, with V_H ≠ ∅, such that for every s ∈ (t_1, t_2) the inclusion opt(γ(s)) ⊆ H holds. Let γ_H be the curve obtained by projecting γ on the affine space B_H. Then γ_H is a C^{1,1} curve in (t_1, t_2), where it satisfies the Euler-Lagrange equation of the action restricted to B_H. In particular, for each time t ∈ (0, δ), denoting H = opt(γ(t)), there exists a neighborhood of t in which the component γ̇_H of the momentum parallel to B_H is continuous.
Proof. Point i) states that γ solves the Dubois-Reymond equation (2.5). This can be shown by testing the local minimality through "horizontal" variations of the independent variable, of the form γ_ǫ = γ ∘ ρ_ǫ^{−1}, where ρ_ǫ = id + ǫϕ, ϕ ∈ C_c^∞((0, δ)), and ǫ is small enough so that ρ_ǫ is a diffeomorphism. To get point ii), instead, we need to perform "vertical" variations of the form γ_ǫ = γ + ǫϕv, where ϕ ∈ C_c^∞((t_1, t_2)) and v is any vector parallel to the affine space B_H. We then use point ii) of Lemma 3.7 to get η(γ_ǫ) = η(γ) for ǫ sufficiently small.

Point ii) of the previous proposition implies in particular that γ is regular as long as it stays in a single Voronoi cell. More precisely:

Corollary 3.9 (Regularity inside a Voronoi cell). Let γ be a local minimizer of I^δ_{f_K,x_0,x_δ}; then γ is C^{1,1} on every open interval during which it remains inside a single Voronoi cell.

As a natural consequence, any singularity of a local minimizer of the functional I^δ_{f_K,x_0,x_δ} can appear only when the optimality class "changes". In the next paragraph we will try to give a more precise meaning to this statement.
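Under the action reconstructed above, the conservation identities can be written explicitly (again an assumption consistent with, but not verbatim from, the statement):

```latex
% energy (Dubois--Reymond):
\tfrac12\,\lvert\dot\gamma(t)\rvert^{2}
  - h\bigl(\lvert\nabla f(\gamma(t))\rvert^{2}\bigr) = c
  \qquad \text{for a.e. } t \in [0,\delta];
% Euler--Lagrange inside a potential zone $Q_\eta$, where $\nabla f(x)=\eta-x$:
\ddot\gamma(t) = 2\,h'\bigl(\lvert\eta-\gamma(t)\rvert^{2}\bigr)\,
  \bigl(\gamma(t)-\eta\bigr).
```

In particular, with h = id the motion inside a potential zone would be governed by the linear ODE γ̈ = 2(γ − η), so the only sources of singularities are indeed the changes of zone.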
Shock times and minimal deviation. Given a curve γ : [0, δ] → R^d and a time t ∈ [0, δ], we say that

• t is a shock for γ if opt(γ) is not constant on I for every neighborhood I of t in [0, δ];
• t is a nondegenerate shock for γ if η(γ) is not constant on I for every neighborhood I of t in [0, δ].
• t is an effective shock for γ if there are two distinct potential zones Q_η, Q_{η′} and two distinct Voronoi cells V_H ⊆ Q_η, V_{H′} ⊆ Q_{η′} such that, for some ǫ > 0, one of the following holds: γ((t − ǫ, t)) ⊂ V_H and γ([t, t + ǫ)) ⊂ V_{H′}. In this case, H ⊊ H′ and we say that t is a left effective shock.
γ((t − ǫ, t]) ⊂ V_H and γ((t, t + ǫ)) ⊂ V_{H′}. In this case, H′ ⊊ H and we say that t is a right effective shock.
We denote by S(γ), NDS(γ) and ES(γ) respectively the sets of shocks, nondegenerate shocks and effective shocks for γ. Notice that ES(γ) ⊆ NDS(γ) ⊆ S(γ), and that S(γ) and NDS(γ) are compact. According to the definitions above, during a shock there must be a change of Voronoi cell, while during a nondegenerate shock there is also a change of potential zone. Clearly we have S(γ) = NDS(γ) provided that K is balanced. Finally, we have an effective shock when a neat passage occurs from a Voronoi cell to an adjacent one with a different potential. By conservation of the energy, we expect the dynamics to develop a singularity in the kinetic term there. This is the content of the following proposition, which is a direct consequence of the conservation laws stated in Proposition 3.8.

Proposition 3.10 (Minimal deviation during an effective shock). Suppose that the potential shape h is strictly increasing, and let γ be a local minimizer of I^δ_{f_K,x_0,x_δ}. Then at every left effective shock the derivative of γ must jump, with a deviation bounded from below in terms of the potential gap between the two zones. Clearly an analogous result holds for right effective shocks.
Remark 3.11. Notice that, in the specific case of a superadditive shape h (for instance when h = id, as in the MAG model), we derive a uniform lower bound on the jump of the derivative during an effective shock, expressed in terms of the quantity β defined in (3.5).
Remark 3.12. In the MAG dynamics, shocks typically happen when two or more particles collide or separate, generating an instantaneous change in the optimality class. For instance, an effective shock occurs when two particles collide and remain stuck together. Notice that Proposition 3.8 tells us that energy and momentum are conserved in a collision.
To end this part, we show, through a couple of simple examples, that all three types of shock times defined in this paragraph may occur for a minimizer of I^δ_{f_K,x_0,x_δ}.

Example 3.13 (Effective and noneffective shocks). Take d = 1, with endpoints x_0 = −c and x_δ = c for some c > 0, and K symmetric about the origin, so that {0} is the interface between two Voronoi cells (for instance K = {−1, 1}). It is easily seen that a minimizer γ of I^δ_{f_K,x_0,x_δ} has to be nondecreasing, and thus γ^{−1}(0) = [t_0, t_1] is a closed interval (possibly degenerate if t_0 = t_1), and S(γ) = NDS(γ) = {t_0, t_1}. Then, there are two possible qualitatively different behaviours of γ, according to whether t_0 = t_1 or not. If t_0 = t_1, then γ has a single nondegenerate, noneffective shock time. If instead t_0 < t_1, then t_0 and t_1 are respectively left and right effective shocks. Now, direct computations show that the first case occurs if, for instance, c = 1. On the other hand, we can prove that the second case necessarily occurs if c is chosen sufficiently small. As a matter of fact, for c < 1, by the monotonicity of γ, the minimum value Γ(−c, c) of the functional can be bounded from below; at the same time, Corollary 3.4 provides an asymptotic upper bound, and the two are incompatible with t_0 = t_1 when c is small. Therefore, t_1 > t_0, provided that we choose c small enough.
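A discretized minimization makes the dichotomy of Example 3.13 visible. The sketch below is illustrative: the data of the example are taken as d = 1, K = {−1, 1}, h = id and δ = 1 (assumptions consistent with, but not stated in, the text), and the action is minimized over piecewise-linear curves from −c to c.

```python
import numpy as np
from scipy.optimize import minimize

K = np.array([-1.0, 1.0])
delta, n = 1.0, 200
dt = delta / n

def grad_f(x):
    """Extended gradient in d = 1 for K = {-1, 1}: eta(x) - x."""
    eta = np.where(x < 0, -1.0, np.where(x > 0, 1.0, 0.0))
    return eta - x

def action(interior, c):
    g = np.concatenate(([-c], interior, [c]))      # endpoints are fixed
    kinetic = 0.5 * np.sum(np.diff(g) ** 2) / dt
    potential = np.sum(grad_f(g[1:]) ** 2) * dt    # h = id
    return kinetic + potential

for c in (1.0, 0.05):
    start = np.linspace(-c, c, n + 1)[1:-1]
    gamma = minimize(action, start, args=(c,), method='L-BFGS-B').x
    print(f"c = {c}: fraction of time near 0 = {np.mean(np.abs(gamma) < 1e-2):.2f}")
```

For small c one should observe the minimizer dwelling on the interface {0}, where the potential vanishes, in line with the two effective shocks t_0 < t_1 of the second case; for c = 1 the dwelling interval essentially degenerates, in line with the first case.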
Regularity results
Here we state and prove our main regularity results. Recall that K = {p_1, ..., p_N} is a finite collection of points in R^d, and h : [0, +∞) → [0, +∞) is a C^1 potential shape.

Theorem 3.15 (C^{1,1} regularity out of a finite number of nondegenerate shock times). Let γ be a local minimizer of I^δ_{f_K,x_0,x_δ}, and suppose that h is strictly increasing. Then:

i) The set NDS(γ) of nondegenerate shock times of γ is finite. That is to say, there is a finite number of times 0 < t_1 < ... < t_ℓ < δ containing all the nondegenerate shocks of γ.

ii) Setting t_0 = 0 and t_{ℓ+1} = δ, γ is C^{1,1} regular in the interval [t_i, t_{i+1}] for every i ∈ [0 : ℓ].
Moreover, if we let η_i ∈ E be such that γ((t_i, t_{i+1})) ⊆ Q_{η_i}, then the C^{1,1} norm of γ on [t_i, t_{i+1}] can be estimated explicitly. Actually, if K is balanced and h is smooth, Theorem 3.15 can be improved to reach piecewise smooth regularity for any local minimizer γ. In fact, since K is balanced, the equality NDS(γ) = S(γ) holds, and point i) of Theorem 3.15 implies that γ has a finite number of shock times. On the other hand, Corollary 3.9 together with the smoothness of h ensures that γ is smooth in each connected component of [0, δ] \ S(γ), where clearly opt(γ(t)) is constant.
Corollary 3.16 (C^∞ regularity out of a finite number of shock times). Let γ be a local minimizer of I^δ_{f_K,x_0,x_δ}. Suppose that K is balanced and h is C^∞ and strictly increasing. Then:

i) The set S(γ) of shock times of γ is finite; that is to say, there is a finite number of times 0 < t_1 < ... < t_ℓ < δ containing all the shocks of γ.

ii) Setting t_0 = 0 and t_{ℓ+1} = δ, γ is C^∞ in the interval [t_i, t_{i+1}] for every i ∈ [0 : ℓ]. Moreover, if t_{i+1} > t_i, we denote by H_i the optimality class of γ in the interval (t_i, t_{i+1}).

Remark 3.17. Corollary 3.16 offers a generalization of [2, Theorem 13], in which smooth regularity out of a finite number of shock times was derived for solutions of a 1-dimensional version of the discrete MAG model. In their framework, h was simply the identity, while K consisted of all the m! points of R^d obtainable by permuting the components of a fixed vector. Exploiting the rearrangement inequality provided by the order structure of R, it is not difficult to show that hypothesis (3.7) holds in this case, therefore implying that K is balanced.
Let us now give an overview of the proof of Theorem 3.15 before entering into the details of the forthcoming paragraphs. We start by discussing how point ii) can be derived from point i). We need to show that if η(γ) = η is constant in an interval (s, t), then γ is C^{1,1} regular in [s, t]. We clearly have that γ([s, t]) ⊂ P_η. Then we observe that, for every absolutely continuous curve ρ : [s, t] → P_η, due to the monotonicity of h and inequality (3.6), the comparison inequality (3.8) holds (a plausible form of (3.8) is recorded below). Then, by a comparison argument, knowing that γ is a local minimizer of the functional on the left-hand side of (3.8), and that ρ = γ saturates the inequality, we get that γ is a local minimizer for the functional on the right-hand side of (3.8), if one restricts to curves living in the closed convex set P_η.
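A plausible form of inequality (3.8), reconstructed here from the surrounding discussion (for x ∈ P_η one has η ∈ ∂g(x), hence |∇f(x)| = |η(x) − x| ≤ |η − x|, while h is nondecreasing), is:

```latex
\int_s^t \Bigl( \tfrac12\,\lvert\dot\rho\rvert^{2}
    + h\bigl(\lvert\nabla f(\rho)\rvert^{2}\bigr) \Bigr)\,du
\;\le\;
\int_s^t \Bigl( \tfrac12\,\lvert\dot\rho\rvert^{2}
    + h\bigl(\lvert\eta-\rho\rvert^{2}\bigr) \Bigr)\,du
\qquad \text{for every a.c. } \rho:[s,t]\to P_\eta, \tag{3.8}
```

with equality whenever ρ takes values in Q_η, where ∇f(ρ) = η − ρ. The right-hand side is precisely a constrained functional with C^1 Lagrangian, to which Lemma 3.18 applies.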
On the other hand, we are going to prove in Lemma 3.18 that any such constrained local minimizer is necessarily C^{1,1} regular. We now pass to the proof of point i) of the theorem. First notice that we can restrict ourselves to proving the following equivalent local statement:

Claim (3.9): for each time t ∈ (0, δ], there exists ǫ > 0 such that η(γ) is constant in (t − ǫ, t).
From claim (3.9) and the analogous one for right intervals (obtainable by exploiting the time-invariance of the functional), it clearly follows that NDS(γ) is discrete. Then, by compactness, we derive that NDS(γ) is finite. The proof of claim (3.9) will be accomplished by elementary means, through a quite intricate series of "cut and paste" constructions of competitors. Let us first outline the general heuristic idea. We divide the proof into a few steps:

Step 1. Suppose by contradiction that t is a cluster point for the "jumps" in the potential. Then, approaching t from the left, γ would infinitely often visit high and low potential zones. If only one low potential zone were visited asymptotically, then it would be convenient to remain in it for good. Therefore we can assume that γ infinitely often visits at least two different low potential zones, approaching t from the left. Moreover, in alternating between different low potential zones, γ necessarily spends a non-negligible amount of time in high potential ones.
Step 2. Approaching t from the left, the percentage of time spent by γ in high potential zones tends to zero, thus forcing at least two different low potential zones to be very near each other, in order to make it possible for the Lipschitz curve γ to jump from one to the other in a short time.
Step 3. By Lemma 3.20 we will eventually be allowed to slightly deviate γ so as to reach an even lower potential zone (the interface between the two), hence contradicting the local optimality of γ and reaching the desired contradiction.
In the following paragraph we prove Lemmas 3.18 and 3.20. As a byproduct of Lemma 3.18, we obtain that a local minimizer is C^{1,1} regular as long as it stays in a single potential zone (see Corollary 3.19), hence addressing point ii) of Theorem 3.15. Subsequently, in the final paragraph, we will provide the rigorous proof of claim (3.9) along the lines of the heuristics above, thus concluding the proof of Theorem 3.15.
Two Lemmas. The first lemma, of independent interest, concerns the regularity of local minimizers of action functionals restricted to curves living in a given closed convex set. Because of boundary effects, such constrained minimizers are in general not C^2, even if the Lagrangian is smooth. Nevertheless, they must be at least C^{1,1} whenever the Lagrangian is C^1.

Lemma 3.18 (Regularity for a problem with a convex constraint). Let P ⊆ R^d be a closed convex set, and let Ψ : R^d → [0, +∞) be of class C^1 in a neighborhood of P. Given two points x_0, x_δ ∈ P, we consider the functional G : C([0, δ], R^d) → [0, +∞] defined by G(γ) := ∫_0^δ ( (1/2)|γ̇(t)|^2 + Ψ(γ(t)) ) dt if γ is absolutely continuous, γ(0) = x_0, γ(δ) = x_δ and γ([0, δ]) ⊆ P, and G(γ) := +∞ otherwise. Then every local minimizer of G is of class C^{1,1}.
Proof. We call π_P the projection on P. We start by defining, for each point x ∈ P, the "blow up" of P at x, namely the closed convex cone P_x given by the closure of ⋃_{ǫ>0} (P − x)/ǫ. We then call S_x the projection on P_x, which turns out to be positively homogeneous and 1-Lipschitz. It is not difficult to realize that (P − x)/ǫ → P_x in the sense of Hausdorff on every compact set, as ǫ → 0⁺. (3.10) Since the inclusion (P − x)/ǫ ⊆ P_x always holds, in order to show the convergence (3.10) we only need to check, using a separation argument, that for every ǫ_j → 0⁺ and every point z ∈ P_x there exists a sequence y_j ∈ (P − x)/ǫ_j converging to z. Hence, the projection on (P − x)/ǫ pointwise converges to S_x as ǫ → 0⁺. By exploiting also the homogeneity of S_x, we eventually obtain that, for every v ∈ R^d, π_P(x + ǫv) = x + ǫ S_x(v) + o(ǫ) as ǫ → 0⁺. (3.11) Now, let γ be a local minimizer for the functional G. Given a test function ϕ ∈ C_c^∞((0, δ); R^d), we consider the following competitors: γ_ǫ := π_P(γ + ǫϕ), for ǫ > 0.
Using the previous pointwise expansion (3.11), as well as the dominated convergence theorem, we obtain a first-order expansion (3.12) for G(γ_ǫ). From the local minimality of γ, it follows immediately that the right-hand side in (3.12) is nonnegative. Finally, we can use the contractiveness of the projections S_x to obtain the desired inequality, and the thesis follows.
As already shown above, in the comparison argument accompanying inequality (3.8), from Lemma 3.18 follows the following

Corollary 3.19 (Regularity in a potential zone). Let h be a nondecreasing potential shape, and let γ be a local minimizer of I^δ_{f_K,x_0,x_δ}. Suppose that there exists an η ∈ E such that γ((s, t)) ⊆ Q_η, where (s, t) ⊂ [0, δ]. Then γ is C^{1,1} regular on [s, t] and satisfies a corresponding quantitative estimate.

Thus, point ii) of Theorem 3.15 is proved. We can now address the second lemma, which will turn out to be crucial in the proof of point i) of Theorem 3.15.

Lemma 3.20 (Reciprocal distance of intersecting polytopes). Let A, B ⊂ R^d be two polytopes with A ∩ B ≠ ∅. Then there exists a sufficiently large constant M > 0 such that dist_{A∩B}(x) ≤ M dist_B(x) for every x ∈ A.
Proof. Let P ⊂ R^d be a polytope endowed with a representation of the form (3.4), where we assume without loss of generality that each of the affine functions T_j has Lipschitz constant equal to 1. Let us consider the associated vector-valued function z_P : R^d → [0, +∞)^ℓ defined by z_P(x) = (T_1(x)⁺, ..., T_ℓ(x)⁺). (3.13) As a first step we show that there exists a constant c_P > 0 such that T_j(x)⁺ ≤ dist_P(x) ≤ c_P |z_P(x)|_1 for every j ∈ {1, ..., ℓ} and every x ∈ R^d. (3.14) The left inequality follows by the 1-Lipschitz assumption on T_j, while the right one can be obtained using a compactness argument as follows. Suppose by contradiction that no such constant exists, so that there is a sequence of points x_n with dist_P(x_n) ≥ n |z_P(x_n)|_1. Up to replacing each x_n with a suitable rescaling along the segment joining its projection π_P(x_n) to x_n, we can assume that dist_P(x_n) = 1. Now, if x is any cluster point for x_n, we obtain that dist_P(x) = 1 and z_P(x) = 0, which is clearly a contradiction. We are now in the position to prove the lemma. Let ℓ ∈ N be the number of affine functions in a representation of B of the form (3.4), with 1-Lipschitz affine functions T_j. By exploiting the estimates in (3.14), for every x ∈ A we can bound dist_{A∩B}(x) ≤ c_{A∩B} |z_{A∩B}(x)|_1 ≤ c_{A∩B} Σ_{j=1}^{ℓ} T_j(x)⁺ ≤ ℓ c_{A∩B} dist_B(x), where c_{A∩B} is a positive constant (for x ∈ A, the affine functions representing A are nonpositive, so only those representing B contribute to z_{A∩B}(x)). The thesis follows by choosing M = ℓ c_{A∩B}.
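A quick numerical sanity check of Lemma 3.20 (illustrative only): for two concrete polytopes with nonempty intersection we estimate the ratio dist_{A∩B}(x)/dist_B(x) over x ∈ A, approximating all three sets by dense point clouds.

```python
import numpy as np

rng = np.random.default_rng(1)

def dist_to(cloud, x):
    """Approximate distance from x to the set sampled by `cloud`."""
    return np.linalg.norm(cloud - x, axis=1).min()

# A = [0,1]^2, B = [0.8,1.8] x [0,1]; the intersection is the strip [0.8,1] x [0,1].
A  = rng.uniform([0.0, 0.0], [1.0, 1.0], size=(40000, 2))
B  = rng.uniform([0.8, 0.0], [1.8, 1.0], size=(40000, 2))
AB = rng.uniform([0.8, 0.0], [1.0, 1.0], size=(40000, 2))

ratios = [dist_to(AB, x) / dist_to(B, x)
          for x in A[:2000] if dist_to(B, x) > 1e-2]   # skip the 0/0 region near B
print(max(ratios))    # stays bounded (close to 1 here), as the lemma predicts
```

The bound M depends on the mutual position of the polytopes: wedge-shaped intersections can force M to be large, but the lemma guarantees it is always finite.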
Proof of the regularity result. In this paragraph we are going to complete the proof of our main regularity result, Theorem 3.15, following the heuristic idea outlined above. It remains to prove point i), and we have already reduced it to showing claim (3.9). Throughout the proof, γ will be a fixed local minimizer of I^δ_{f_K,x_0,x_δ} and L the Lipschitz constant of γ. Moreover, for the sake of simplicity, we indicate I^δ_{f_K,x_0,x_δ} as F. As a preliminary observation, notice that we can choose an R > 0 large enough such that γ([0, δ]) ⊂ B_R, and then, by point v) in Proposition 3.6, we can find an α > 0 small enough such that |η′ − x| ≥ |η − x| + α for every x ∈ B_{2R} ∩ Q_η ∩ P_{η′} and every η, η′ ∈ E with η ≠ η′. (3.15)

Proof of claim (3.9). First of all, by translation invariance, we can assume without loss of generality that γ(t) = 0, thus simplifying some further computations. Then we consider the asymptotic lowest potential threshold a := lim inf_{s→t⁻} |η(γ(s))|. We call Ẽ the subset of E indexing the potential zones that are visited by γ infinitely often before t; namely, Ẽ := { η ∈ E : γ((t − ǫ, t)) ∩ Q_η ≠ ∅ for every ǫ > 0 }. Notice that the thesis is equivalent to #Ẽ = 1. The set Ẽ can be partitioned into the two subsets Ẽ_1 := { η ∈ Ẽ : |η| = a } and Ẽ_2 := { η ∈ Ẽ : |η| > a }. The set Ẽ_1, which is clearly nonempty by the definition of a, corresponds to those potential zones which are infinitely often visited by γ before t and that share the asymptotic lowest potential threshold a.
We expect the curve γ to spend most of the time there. We make the following technical choices of constants in order to simplify later arguments. We fix r, µ, ǫ > 0 small enough so that the following requirements are satisfied:

R1) For every x ∈ B_r we have opt(x) ⊆ opt(0).

R5) For every choice of η, η′ ∈ E, exactly one of a prescribed list of alternatives holds.

Notice that, thanks to R1), for every x ∈ B_r the segment [x, 0) will be entirely contained in the Voronoi cell V_{opt(x)}, while R2) assures us that the curve belongs to this good area. Conditions R3), R4) and R5) will be useful to effectively distinguish between different potential zones. Finally, condition R6) implies that the potential zones touched by γ in the interval (t − ǫ, t) are exactly those touched asymptotically; thus, in particular, we have a = min { |η| : η ∈ Ẽ }. By condition R6), the interval (t − ǫ, t) can be partitioned into the two sets C_1 := { s : |η(γ(s))| = a } and C_2 := { s : |η(γ(s))| > a }. We observe that C_1 is closed in (t − ǫ, t): indeed, if s_j ∈ C_1 converge to s ∈ (t − ǫ, t), then s ∈ C_1 as well. Remember that the thesis is equivalent to #Ẽ = 1. We now assume that #Ẽ ≥ 2 and try to find a contradiction.
Step 1. (Reduction to the case in which #Ẽ_2 ≥ 1 and #Ẽ_1 ≥ 2). We first show that #Ẽ_2 ≥ 1. If this were not the case, then we would have |η(γ(s))| = a for every s ∈ (t − ǫ, t) and #Ẽ_1 ≥ 2. Then we could find two distinct η, η′ both belonging to Ẽ_1 and a time s ∈ (t − ǫ, t) such that x := γ(s) ∈ Q_η ∩ P_{η′}; now, (3.15) yields a contradiction. We next show that #Ẽ_1 ≥ 2. Suppose by contradiction that Ẽ_1 = {η}. Then we can build a better competitor γ̃ by performing arbitrarily small perturbations of γ in the following way. We choose s ∈ (t − ǫ, t), as close to t as we want, such that γ(s) ∈ Q_η. Then we modify γ only in the interval [s, t], replacing it there with its projection on the closed convex set P_η. Namely, denoting by π_η the projection on P_η, we define γ̃(u) := γ(u) for u < s or u > t, and γ̃(u) := π_η(γ(u)) for u ∈ [s, t]. One then checks that γ̃ has strictly smaller action than γ, contradicting local minimality; hence #Ẽ_1 ≥ 2.
Step 2. (Two distinct low potential zones, containing Voronoi cells V_H ⊆ Q_η and V_{H′} ⊆ Q_{η′}, come very close to each other near t). Notice that the closures of V_H and V_{H′} are two polyhedra whose intersection contains 0. Possibly replacing them with their intersections with a large d-dimensional cube, we can assume that they are polytopes. Then we can apply Lemma 3.20 to deduce the existence of a constant M > 0 and of a sequence of points x_ℓ ∈ V̄_H ∩ V̄_{H′}, associated with suitable times s_ℓ → t⁻, with |x_ℓ − γ(s_ℓ)| controlled by M times the distance of γ(s_ℓ) from V̄_{H′}.

Step 3. (A slight deviation of γ through V̄_H ∩ V̄_{H′} reduces the action, thus contradicting its local minimality). We call ǫ_ℓ := c_4 (t − s_ℓ) and assume that ℓ is large enough so that ǫ_ℓ ∈ (0, 1). We crucially consider the following competitor: δ_ℓ(u) := γ(u) for u < s_ℓ or u > t; δ_ℓ(u) := γ(s_ℓ) + [(x_ℓ − γ(s_ℓ)) / (ǫ_ℓ (t − s_ℓ))] (u − s_ℓ) for u ∈ [s_ℓ, s_ℓ + ǫ_ℓ (t − s_ℓ)); and, on the remaining interval [s_ℓ + ǫ_ℓ (t − s_ℓ), t], the affine interpolation from x_ℓ to 0. Notice that in the interval [s_ℓ, t] the curve δ_ℓ is simply a piecewise-linear modification of γ, going from γ(s_ℓ) to x_ℓ in time ǫ_ℓ (t − s_ℓ), and then from x_ℓ to 0 in the remaining time. We will see that δ_ℓ has strictly less action than γ for ℓ large enough, thus reaching the desired contradiction. We first want to be sure that V̄_H ∩ V̄_{H′} is a very low potential zone, so that we can lower the action of γ by a slight deviation through it. This can be seen as follows. For sure η and η′ both belong to ∂f(x_ℓ), thus also (η + η′)/2 ∈ ∂f(x_ℓ). But then |η(x_ℓ)| ≤ |(η + η′)/2| = (a^2 − |η − η′|^2/4)^{1/2} < a, where we used in the very last equality that |η| = |η′| = a. Therefore, for ℓ large enough, we can assume that x_ℓ ∈ B_r and |η(x_ℓ)| < a. The latter in particular implies that |η(x)| ≤ a − 3µ for every x ∈ [x_ℓ, 0] and ℓ large enough.
We can also estimate the velocity of δ_ℓ in terms of the Lipschitz constant L of γ and of the constants introduced above. Let us then compare the action of δ_ℓ with that of γ, starting from the kinetic part and then integrating; here ℓ is chosen large enough so that c_0(t − s_ℓ) ≤ µ/2. Finally, collecting all the estimates together, we obtain a lower bound on the gap F(γ) − F(δ_ℓ), and the contradiction comes from the fact that this quantity is strictly positive for ℓ large enough. This concludes the proof.
Distribution and biological role of the oligopeptide-binding protein (OppA) in Xanthomonas species
In this study we investigated the prevalence of the oppA gene, encoding the oligopeptide-binding protein (OppA) of the major bacterial oligopeptide uptake system (Opp), in different species of the genus Xanthomonas. The oppA gene was detected in two Xanthomonas axonopodis strains among the eight Xanthomonas species tested. The generation of an isogenic oppA-knockout derivative of the Xac 306 strain showed that the OppA protein neither plays a relevant role in oligopeptide uptake nor contributes to the infectivity and multiplication of the bacterial strain in leaves of sweet orange (Citrus sinensis) and Rangpur lime (Citrus limonia). Taken together, these results suggest that the oppA gene has a recent evolutionary history in the genus and does not contribute to the physiology or pathogenesis of X. axonopodis.
Introduction
Oligopeptides play important roles in bacterial nutrition, representing important sources of nitrogen, carbon and other elements. They are also involved in intercellular signaling processes, such as those involved in chemotaxis, conjugation, spore formation and the development of the competence state (Detmers et al., 2001). Three distinct oligopeptide uptake systems, specifically committed to the transport of dipeptides (Dpp), tripeptides (Tpp) and oligopeptides (Opp), have been characterized in Escherichia coli, Salmonella enterica sv. Typhimurium and Sinorhizobium meliloti (Hiles et al., 1987; Nogales et al., 2009).
In gram-negative bacteria, the Opp system, belonging to the ATP-binding cassette (ABC) transporter family, comprises various functional and structural domains: the substrate-binding protein in the periplasm (OppA), two transmembrane pore-forming proteins (OppB and OppC), and two membrane-associated ATPases (OppD and OppF), which generate the energy from ATP hydrolysis required for the transport process. The opp genes are frequently organized as a polycistronic operon (oppABCD/F), in which the binding component (OppA) is usually expressed at a higher stoichiometric ratio compared to the other components (Higgins and Hardie, 1983; Hiles et al., 1987; Monnet, 2003). Besides a role in nutrition, the Opp system participates in various physiological processes, such as the recycling of cell-wall peptides, quorum sensing, adhesion to host cells, genetic competence and sporulation (Rudner et al., 1991; Cundell et al., 1995; Alloing et al., 1998; Claverys et al., 2000; Detmers et al., 2001). Opp-encoding genes are found in approximately 50% of the bacterial species with available genomic sequences although, as yet, their specific roles, either in physiology or in pathogenesis, have been investigated in only a few cases.
In the present study, we investigated the distribution of the oppA gene among different Xanthomonas species. In addition, we evaluated the putative physiological role of OppA, the oligopeptide-binding component, in the Xac 306 strain, based on the generation of a specific knockout mutant. The present evidence demonstrates that the oppA gene is restricted to two X. axonopodis strains and does not play a specific role in the virulence of this phytopathogen.
Bacterial strains and growth conditions
All the bacterial strains used in this work are listed in Table 1. Bacterial strains were routinely cultivated in Circle Grow (CG) (Bio 101) broth at 30 °C. For the oligopeptide uptake assays, the strains were cultivated overnight in M9 minimal medium supplemented with proline, methionine, histidine and tryptophan (100 mg mL⁻¹ each), or in the same medium with proline replaced by the YPLG peptide (0.5 mg mL⁻¹).
Gene screening and expression studies
Genomic DNA from all the Xanthomonas isolates was isolated according to the procedure described by Llop et al. (1999). The oppA gene was amplified using genomic DNA as template and the oligonucleotides Fw 5'-CGGCGCTCGGGTACCGTGGCGTTGGCGGTGCTG-3' and Rv 5'-GGCGGATCTAGATCAGTGGTGGTGGTGGTGTTTGCTCACCCAGGCGTC-3', based on the reported opp operon sequence (Silva et al., 2002). For Southern-blot analysis, genomic DNA from the tested samples was digested with PstI and then transferred to a nylon membrane (Hybond-N, Amersham Biosciences). A probe for the oppA gene was synthesized with [α-³²P]-dCTP, by random-primer labeling, using the PCR-generated DNA fragment described above. The labeled probe was hybridized with the membrane at 42 °C for 16-20 h before exposure to autoradiography films (Kodak T-MAT G/RA film). Western-blot analyses were carried out with whole-cell proteins sorted in 12.5% (w/v) acrylamide gels, followed by transfer to nitrocellulose membranes (Millipore) and development of bands reacting with a polyclonal monospecific anti-OppA antibody, as previously described (Moutran et al., 2004). The anti-OppA serum was raised in mice parenterally immunized with a His-tagged protein produced in E. coli transformed with a recombinant pET-28a vector carrying the Xac oppA gene without the native signal peptide (Moutran et al., 2004).
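The screening just described can be mirrored in silico when a genome sequence is available. The sketch below is illustrative and not part of the study's methods: since the primers quoted above carry engineered 5' tails (a KpnI site in Fw; an XbaI site and His-tag codons in Rv), only their 3'-terminal cores are matched against the genome, and "xac306_genome.fasta" is a hypothetical file name.

```python
FW = "CGGCGCTCGGGTACCGTGGCGTTGGCGGTGCTG"
RV = "GGCGGATCTAGATCAGTGGTGGTGGTGGTGTTTGCTCACCCAGGCGTC"

COMP = str.maketrans("ACGT", "TGCA")

def revcomp(s):
    """Reverse complement of a DNA string."""
    return s.translate(COMP)[::-1]

def find_amplicon(genome, fw, rv, core=18):
    """Locate the 3' core of fw on the plus strand and of rv on the minus strand."""
    start = genome.find(fw[-core:])
    rv_site = genome.find(revcomp(rv[-core:]))
    if start == -1 or rv_site == -1:
        return None
    end = rv_site + core
    return start, end, end - start            # begin, end, predicted product size

with open("xac306_genome.fasta") as fh:       # hypothetical input file
    genome = "".join(l.strip() for l in fh if not l.startswith(">")).upper()

hit = find_amplicon(genome, FW, RV)
print("no product predicted" if hit is None else f"amplicon (start, end, size): {hit}")
```

A strain whose genome yields no predicted product would be expected to score negative in the PCR and Southern-blot assays above.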
Oligopeptide uptake assay
Xac 306 and XoppA2 cells, grown overnight, were harvested by centrifugation, washed twice with saline and inoculated (1:100) into M9 medium without proline and with the tetrapeptide YPLG (Sigma) (0.5 mg mL⁻¹). After various incubation periods at 28 °C, aliquots of the culture supernatant were harvested and dried by vacuum centrifugation. The peptides were extracted with cold methanol and submitted to RP-HPLC separation (LC 10A-VP binary Shimadzu HPLC system). Each fraction was analyzed in an ESI mass spectrometer (LCQ Duo, ThermoFinnigan, USA), equipped with a nanospray source and connected to a nanoHPLC system (UltiMate HPLC System, LC Packings, Dionex, USA). The samples were diluted in a solution of 5% acetonitrile and 0.2% formic acid and introduced into the spectrometer at a flow rate of 1 µL/min. The spray voltage was kept at 1.8 kV, the capillary voltage at 46 V, the capillary temperature at 180 °C and the tube lens offset at −5 V. MS spectra were collected in centroid mode in the 50 to 2000 m/z range.
Protease assays
Cell-free culture-medium aliquots of the Xac 306 and XoppA2 strains, prepared in M9 medium, were incubated with YPLG (0.5 mg mL⁻¹) for different periods at 20-28 °C. The presence of the tetrapeptide was monitored by RP-HPLC fractionation, and the samples were analyzed by mass spectrometry as described above. The same experiments were repeated in the presence of EDTA (ethylenediaminetetraacetic acid) added to the medium aliquots at a final concentration of 1 mM.
Plant infection experiments
Cells of the Xac 306 and XoppA2 strains were diluted in sterile distilled water to a final absorbance (OD₆₀₀) of 0.3. Aliquots (100 µL) of the cell suspensions were inoculated into the leaves of two sensitive citrus hosts, namely sweet orange (Citrus sinensis) and Rangpur lime (Citrus limonia), through injuries in the leaf surface. Infiltrations were carried out at the lower part of the leaf using a needleless syringe, as previously reported by Laia et al. (2009).
Studies of in vivo growth kinetics
In vivo growth tests of the Xac strains were carried out with sweet orange (C. sinensis). The whole procedure followed the experimental conditions described by Laia et al. (2009). Five discs from at least six different leaves were assayed for each time point and for each tested bacterial strain.
Results
Opp genes have so far been discovered only in the genome of the Xac 306 strain (Silva et al., 2002). Moreover, no orthologs have been detected in the genomes of the Xcc ATCC33913, 8004 and B100 strains (Silva et al., 2002; Qian et al., 2005; Vorhölter et al., 2008), the Xcv 85-10 strain (Thieme et al., 2005), or the Xoo strains KACC10331 (Lee et al., 2005), PX099A (Salzberg et al., 2008) and MAF311018 (GenBank accession number AP008229). These results indicate that opp genes are restricted to Xac and may have a specific physiological role, such as host specificity, in this bacterial species. To further investigate the distribution of the Opp system in Xanthomonas species, the oppA gene was screened in eight additional Xanthomonas species without known genome sequences, including one additional Xcv strain, six isolates of different Xanthomonas species (X. bromi, X. codiaei, X. sacchari, X. pisi, X. theicola, and X. melonis), and two X. axonopodis pv. aurantifolii (Xaa) strains (409 and 381). As shown in Figure 1, the oppA gene was detected only in the Xac 306 and Xaa 381 strains, previously classified within the citrus pathovar but currently, according to the low DNA homology with Xac strains, considered as belonging to a different pathovar altogether (Schaad et al., 2005). The oppA gene detected in both strains was actively transcribed and translated during in vitro growth. This was demonstrated by Western-blot analysis carried out with a specific anti-OppA polyclonal antibody generated in mice immunized with purified recombinant OppA protein from the Xac 306 strain. As shown in Figure 1C, the anti-OppA serum recognized a reactive protein band in the periplasm fraction of the two X. axonopodis strains (Xac 306 and Xaa 381), but not in extracts of any other Xanthomonas species or strain. The expression of the OppA protein was apparently constitutive, since no significant difference in its expression was detected in whole-cell extracts of cells kept under different culture-medium conditions, supplemented or not with oligopeptides or with leaf extracts of susceptible citrus hosts (data not shown).
The restricted distribution of the oppA gene in the Xanthomonas genus suggests that the Opp system plays a specific physiological role in these two strains, but not in the other tested species and strains. In order to investigate the putative role of OppA in the uptake of oligopeptides by X. axonopodis strains, we generated an isogenic oppA-defective knockout mutant strain (XoppA2) by a gene-replacement approach in which the native gene was replaced by a non-functional knockout allele.
The uptake of a synthetic tetrapeptide (YPLG) was monitored by mass spectrometry, using both the parental Xac 306 strain and the OppA-deficient XoppA2 strain, based on the removal of the synthetic peptide from the culture medium during bacterial cell growth. The choice of the substrate peptide was based mainly on molecular modeling studies and docking analyses of different peptides to the Xac OppA structural model (Moutran et al., 2007). In order to validate the experimental approach, we employed two E. coli strains: one (the SS320 strain) proficient in the opp operon, and the other (the SS5013 strain) an isogenic derivative deleted in the entire opp operon (Andrews and Short, 1985). As indicated in Figures 2A and 2B, the oligopeptides were efficiently removed from the culture supernatant by the E. coli SS320 strain, but not by the opp-deficient SS5013 strain. On the other hand, when the same experiment was repeated with Xac 306 and its isogenic oppA-deficient derivative, XoppA2, no tetrapeptide was detected in the culture supernatants of either strain after incubation periods ranging from 12 to 24 h at 28 °C (Figure 2C,D). This might be explained either by the presence of an alternative Opp-independent oligopeptide uptake system in the Xac 306 strain, or by the production and secretion of proteases into the culture media.
In order to discover the reason for the rapid removal of the tetrapeptide from Xac cultures, the oligopeptide was incubated with aliquots of the culture supernatant of both Xac strains. No tetrapeptide was detected after incubation with culture supernatants of either the parent or the mutant strain (Figure 3B,D). In addition, EDTA prevented the in vitro peptide degradation following incubation with culture supernatants of both Xac strains (Figure 3A,C). This result indicated that the apparent uptake of the tetrapeptide was mainly attributable to proteolytic attack by secreted proteases produced by the Xac 306 strain. Attempts to block protease activity with EDTA in culture supernatants of actively growing Xac strains failed, probably due to the large amount of secreted proteases (data not shown).
In order to evaluate the putative role of OppA in Xac pathogenesis in citrus hosts, we infected two susceptible citrus hosts, viz., sweet orange (C. sinensis) and Rangpur lime (C. limonia), with the Xac 306 and XoppA2 strains, and compared the induction of leaf lesions and the multiplication in leaf tissues. Up to 10 days following infection, no difference was detected in the symptoms inflicted by the two Xac strains in either citrus host (Figure 4). Furthermore, no significant difference in the growth curves of the two strains was detected during their multiplication in sweet orange leaf tissues (Figure 4). These results clearly indicated that deletion of the oppA gene did not impair the growth of Xac in susceptible citrus hosts.
Discussion
In the present study we investigated the prevalence of the oppA gene, and of the corresponding OppA protein, in different Xanthomonas species, an important group of phytopathogens inflicting heavy losses on several economically relevant crops. The present results demonstrated that, in contrast to other bacterial groups, such as enterobacterial species and lactic acid bacteria, the oppA gene was detected in only two of the three tested X. axonopodis strains. Furthermore, screening studies revealed that oppA is also absent from the genomes of the Xcc 8004 and Xcv 85-10 strains, as well as from two Xoo strains with reported genome sequences (Lee et al., 2005; Thieme et al., 2005). We also demonstrated that early-branching Xanthomonas species, including the ancestral X. sacchari and X. theicola, do not carry the oppA gene. Similarly, as recently defined by Parkinson and colleagues (2007), three other established Xanthomonas phylogenetic groups, represented by X. bromi, X. melonis, X. pisi and X. codiaei, also do not carry the oppA gene. In spite of our previous observations that the opp operon does not present biased codon usage, distinct GC content or adjacent insertion sequences and transposase-encoding genes (Moutran et al., 2004), the present results indicate that the opp genes were acquired in a recent evolutionary event in the Xanthomonas genus and remained restricted to some X. axonopodis strains.
Generation of an oppA-deficient strain led us to conclude that the Opp system does not play a significant role in the uptake of a tetrapeptide, the most likely substrate of the OppA encoded by the Xac 306 strain, as previously determined with molecular modeling tools (Moutran et al., 2007). Monitoring of the peptide uptake by mass spectrometry showed that, in contrast to E. coli, the tetrapeptide is quickly degraded by secreted proteases produced by both the Xac 306 and XoppA2 strains. Hence, it may be deduced that the abundant production of extracellular proteases by Xac, as well as by other Xanthomonas species, could constitute an abundant source of amino acids derived from the proteolytic degradation of host proteins. Under such conditions, the function of an oligopeptide uptake system would be dispensable, since free amino acid residues are actively transported by different dedicated uptake systems. This conclusion is supported by the lack of the oppA gene in most Xanthomonas species and by the presence of a stop codon located 129 bp downstream of the first structural codon in the Xac 306 oppD/F cistron, which encodes the ATPase component required for the generation of energy for the transport process (Silva et al., 2002; Moutran et al., 2004). The finding that eight other Xanthomonas species, besides one Xaa strain, do not carry opp genes lends further support to the notion that, in contrast to other bacterial species, Xanthomonas oppA genes really represent pseudogenes on the way to disappearing from the genomes of these strains.
In accordance with this idea, no measurable difference in colonization, infection or generation of leaf lesions was observed in two susceptible citrus hosts infected with either the Xac 306 or the isogenic oppA-deficient strain. The absence of any significant pathogenic impact of the mutation in various citrus hosts lends further support to the conclusion that the Opp system is not functional in the Xac 306 strain and does not contribute to the pathogenesis of this bacterium in different citrus hosts. Collectively, the present evidence indicates that the Xac OppA, and consequently the Opp system, in contrast to other bacterial species, does not play a relevant physiological role.
"Biology"
] |
Fighting the Cause of Alzheimer’s and GNE Myopathy
Age is a common risk factor for both neurodegenerative and neuromuscular diseases. Alzheimer's disease (AD), a neurodegenerative disorder, causes dementia that progresses with age, while GNE myopathy (GNEM), a neuromuscular disorder, causes muscle degeneration and progressive loss of motor function with age. Individuals with mutations in the presenilin or amyloid precursor protein (APP) genes develop AD, while mutations in GNE (UDP-N-acetylglucosamine 2-epimerase/N-acetylmannosamine kinase), the key sialic acid biosynthesis enzyme, cause GNEM. Although GNEM is characterized by degeneration of muscle cells, it shows disease hallmarks similar to those of AD, such as aggregation of Aβ and accumulation of phosphorylated tau and other misfolded proteins in the muscle cell. Similar impairments in cellular functions have been reported in both disorders, such as disruption of the cytoskeletal network, changes in glycosylation pattern, mitochondrial dysfunction, oxidative stress, upregulation of chaperones, the unfolded protein response in the ER, autophagic vacuoles, cell death, and apoptosis. Interestingly, AD and GNEM are two diseases with similar phenotypic features affecting neurons and muscle, respectively, yet resulting in entirely different pathologies. This review presents a comparative outlook on AD and GNEM that could help target common mechanisms and find a plausible therapeutic for both diseases.
INTRODUCTION
Aging is a process that initiates with subclinical changes at the molecular level, including accumulation of mutations, telomere attrition and epigenetic alterations resulting in genome instability (López-Otín et al., 2013). These changes multiply at a very fast rate, ultimately leading to the morphological and functional deterioration of the brain through progressive loss of neurons, reduction in the levels of neurotransmitters at the synaptic junction and disruption of the integrity of the brain (Sibille, 2013). In addition to neurons, muscle cells are also affected with age. Loss of muscle mass and reduction in muscle fiber size and number are observed in muscles with age, which decreases muscle strength (Narici and Maffulli, 2010; Siparsky et al., 2014). Thus, age is a common risk factor for both neurodegenerative and neuromuscular diseases, which progress with time.
Neurodegenerative disorders like Alzheimer's disease (AD), Parkinson's disease, Huntington's disease and amyotrophic lateral sclerosis (ALS) share a similar pattern of brain alterations and have been related to each other at the subcellular level in numerous studies (Garden and La Spada, 2012; Montie and Durcan, 2013). Oxidative stress, altered Ca²⁺ handling and mitochondrial dysfunction cause neuronal damage with age (Thibault et al., 1998, 2001). Further, neurons do not divide (with rare exceptions), and thus cellular damage tends to accumulate with age (Sibille, 2013). Similarly, neuromuscular disorders such as multiple sclerosis, muscular dystrophy, GNE-related myopathy, myasthenia gravis, spinal muscular atrophy and ALS show subcellular damage in muscle cells, where oxidative stress, altered calcium and mitochondrial function, and ER stress are observed (Kanekura et al., 2009; Roussel et al., 2013; Stone and Lin, 2015; Xiang et al., 2017). Muscle cells are also among the least-dividing cells, with an average lifespan of 15 years, sometimes reaching four decades. Owing to their long lifespan, as in neurons, cellular damage in muscle also accumulates in due course of time. As age progresses, the satellite cells of muscle decline, reducing the capacity to regenerate healthy muscle in place of affected cells (Narici and Maffulli, 2010). Whether there is any correlation between cellular damage in neurons and in muscle cells that could serve as a common therapeutic target is not known.
Indeed, some disorders, such as ALS, can be placed in either of the two categories, as they affect both neurons and muscle cells. Several neuromuscular disorders, including the muscular dystrophies, have been reported to involve degeneration of neurons in the brain and to affect cognitive function, leading to memory loss (Anderson et al., 2002; Ricotti et al., 2011). In ALS, loss of motor neurons affects the movement of various muscles of the body, leading to muscle wasting and paralysis, along with cognitive impairment (Taylor et al., 2016). Interestingly, a novel missense mutation (histidine to arginine at amino acid 705) in the GNE gene (UDP-N-acetylglucosamine 2-epimerase/N-acetylmannosamine kinase) was observed in a familial ALS patient (Köroglu et al., 2017). Mutations in the GNE gene cause GNEM, a rare neuromuscular disorder with a completely different pathology compared to ALS (Huizing and Krasnewich, 2009). This raises the possibility of a missing link between the two disorders, where the pathomechanisms might merge at a common target.
In this review, we have correlated and compared Alzheimer's disease, a neurodegenerative disorder, with GNEM, a neuromuscular disorder, and put forth how these diseases share common pathological events like aggregation of misfolded proteins, oxidative stress, mitochondrial dysfunction, autophagy and cell death. This will help in finding a common therapeutic approach for the treatment of these diseases.
EPIDEMIOLOGY
Among the various neurological disorders, Alzheimer's disease is the most common form of dementia, accounting for 60-80% of all cases of dementia, with a worldwide prevalence above 45 million. It is more prevalent in the Western European and North American populations. On the other hand, GNEM is a rare genetic neuromuscular disorder with a worldwide prevalence of 1-9 per million (Orphanet). GNEM has been reported in the Irish, Jewish, Japanese and Indian populations. There are also reports of GNEM from North America, Europe (United Kingdom and Scotland) and other Asian countries like Thailand (Bhattacharya et al., 2018).
CAUSES, CHARACTERISTICS AND GENETIC PREDISPOSITION
AD is a multifactorial disease without any single cause. The main characteristic features of AD are senile plaques, composed mainly of extracellular amyloid-β (Aβ) peptides, and neurofibrillary tangles (NFTs) formed by accumulation of intracellular hyperphosphorylated tau (Serrano-Pozo et al., 2011). GNEM is caused by autosomal recessive mutations in the GNE gene, responsible for sialic acid biosynthesis. The characteristic features of GNEM involve weakness in the distal muscles sparing the quadriceps, presence of rimmed vacuoles in muscle fibers and tubulofilamentous inclusions of aggregated proteins such as Aβ and phosphorylated tau (Jay et al., 2009). Despite the difference in the tissues affected in the two diseases, accumulation of aggregates of amyloid-β and tau is a common characteristic of both.
The initial symptom of AD is a gradual loss in the person's ability to remember new information (Souchay and Moulin, 2009). The greatest risk factor for the development of AD is age, as its pathological features increase exponentially with age (doubling every 5 years after the age of 65) (Querfurth and LaFerla, 2010). In GNEM, the initial symptoms include foot drop and weakness in the distal muscles, which gradually worsen with age toward wheelchair dependence. In GNEM, unlike AD, brain function has been reported to be normal (Anada et al., 2014). The onset of AD is in late adulthood, while GNEM onset is in early adulthood, during the second or third decade of life. How aging leads to the sudden onset of GNEM is not known.
Besides aging, AD is caused by mutations in either the presenilin genes or the amyloid precursor protein (APP) gene (Goate et al., 1991; Hutton and Hardy, 1997; Holtzman et al., 2011). There is also an increased risk of AD in individuals suffering from Down's syndrome, because chromosome 21 carries the gene encoding APP (Wiseman et al., 2015). The epsilon-4 allele of the apolipoprotein E gene (APOE), located on chromosome 19, is a risk factor for AD (Reiman et al., 2005). People with a history of diabetes, hypertension, obesity, smoking, head injury leading to memory loss, or a family history of AD in close relatives are at greater risk of AD (Barnes and Yaffe, 2011). The prevalence of AD is higher in women and in less-educated populations (Letenneur et al., 2000).
On the other hand, GNEM is caused by mutations in the GNE (UDP-GlcNAc 2-epimerase/ManNAc kinase) gene, whose product catalyzes the first two rate-limiting steps in the biosynthesis of sialic acid (Jay et al., 2009). Whether hyposialylation is the only cause of GNEM is still unknown. GNEM is a genetic disorder and is not known to be associated with lifestyle diseases. No gender bias has been reported for GNEM. A complete comparison of the characteristics of AD and GNEM is given in Table 1.
DISEASE PATHOLOGY
Under normal conditions, neuronal cells release soluble Aβ after cleavage of a cell-surface receptor called APP. In AD, the cleavage is abnormal, leading to the precipitation of Aβ into dense beta sheets and the formation of senile plaques (Zhang et al., 2011). To clear the amyloid aggregates, an inflammatory response is generated by astrocytes and microglia, leading to the destruction of adjacent neurons and their neurites (Norfray and Provenzale, 2004; Querfurth and LaFerla, 2010).
The tau protein is a microtubule-stabilizing protein with a role in intracellular transport (both axonal and vesicular). In its abnormally hyperphosphorylated form, tau forms intracellular aggregates called neurofibrillary tangles (NFTs), interfering with the normal axonal transport of molecules along microtubules (Norfray and Provenzale, 2004).
In GNEM, the main pathological feature is the formation of rimmed vacuoles, which comprise aggregated proteins such as Aβ and tau (Nalini et al., 2010). Cytoplasmic and nuclear inclusion bodies have also been observed by electron microscopy in muscle biopsies; these contain degradative products from the membrane, cytoplasmic tubulofilaments and mitochondria of irregular size and shape (Huizing and Krasnewich, 2009). Since GNE is the key sialic acid biosynthetic enzyme, mutation in GNE affects the sialylation of proteins (Noguchi et al., 2004). Immunohistochemistry of GNEM muscle samples revealed upregulation of αβ-crystallin, NCAM, MHC-1 and iNOS levels (Fischer et al., 2013). NCAM is hyposialylated in GNEM and has been proposed as a diagnostic marker for GNEM (Ricci et al., 2006). In the aging brain and in AD, the expression and function of NCAM and MHC-1 are altered, which may result in synaptic and cognitive loss (Aisa et al., 2010). A reduced polysialylated-NCAM load has also been reported in the entorhinal cortex in AD (Murray et al., 2016). Thus, NCAM sialylation can be a common target in the pathology of AD and GNEM, in addition to Aβ and tau accumulation.
DIAGNOSIS
The medical and family history of individuals, including psychiatric history and changes in behavior and cognitive functions, helps in the diagnosis of AD. Amyloid plaques and the presence and distribution of NFTs in the brain are used to establish the disease by autopsy-based pathological evaluation. The clinical diagnosis of AD is about 70-90% accurate relative to the pathological diagnosis (Beach et al., 2012).
GNEM is clinically characterized by weakness in the tibialis anterior muscles with a unique sparing of the quadriceps, leading to foot drop and gait abnormalities, with mild or no elevation in serum creatine kinase levels and no involvement of cardiac muscle, usually in the second or third decade of life (Nalini et al., 2013). Pathologically, GNEM is characterized by the presence of rimmed vacuoles in muscle biopsies, without inflammation (Argov and Yarom, 1984). Confirmation of GNEM relies mainly on the identification of bi-allelic mutations in the GNE gene. As more than 190 mutations in GNE have been identified worldwide, complete sequencing of the gene is generally required.
Effect of Glycosylation, Particularly Sialylation, in AD and GNEM
Glycosylation is the process of incorporation of glycan units, either monosaccharides or oligosaccharides, onto protein and lipid moieties (Spiro, 2002). A role for glycosylation in AD was first suggested when impaired glucose metabolism was found to increase Aβ toxicity and to affect glycosylation patterns (Ott et al., 1999; Peila et al., 2002; Chornenkyy et al., 2018). Several key proteins involved in the Aβ deposition cascade, such as APP, BACE-1 (β-secretase), γ-secretase, nicastrin and neprilysin (NEP), undergo altered glycosylation in AD (Kizuka et al., 2017). Deletion of N-glycosylation sites of the APP protein results in its reduced secretion (Schedin-Weiss et al., 2014). APP trafficking from the trans-Golgi network to the plasma membrane and non-amyloidogenic processing are enhanced by O-GlcNAcylation of APP (Chun et al., 2015). Interestingly, enhanced sialylation of APP increased APP secretion and Aβ production (Nakagawa et al., 2006). A defect in sialic acid biosynthesis due to mutation in GNE affects the sialylation of glycoproteins in GNEM. Several proteins, such as the neural cell adhesion molecule (NCAM), α-dystroglycan, integrin, IGF-1R and others, have been found with altered sialylation in the absence of functional GNE (Huizing et al., 2004; Ricci et al., 2006; Grover and Arya, 2014; Singh et al., 2018). However, changes in the glycosylation pattern of APP or Aβ have not been studied in GNEM, despite elevated levels of APP being reported in ALS and GNEM (Koistinen et al., 2006; Fischer et al., 2013). Thus, there is a need to investigate whether hyposialylation of muscle cells, as an effect of mutation in GNE, affects the glycosylation pattern and sialylation of accumulated glycoproteins and proteins like Aβ, presenilin-1, etc. Proper glycosylation of nicastrin (a subunit of γ-secretase) affects its trafficking to the Golgi apparatus and its proper binding to presenilin-1, thereby modulating APP processing and γ-secretase substrate preference (Yang et al., 2002; Xie et al., 2014; Moniruzzaman et al., 2018). The expression of glycosylated NEP, a protein involved in Aβ clearance, is also reduced in AD (Reilly, 2001). Interestingly, in GNEM as well, the glycosylation and sialylation of neprilysin are dramatically reduced, affecting its expression and normal enzymatic activity (Broccolini et al., 2008). The reduced activity of NEP in GNEM may lead to a failure to clear Aβ from muscle. Additionally, it has been reported that the enzyme GNE undergoes O-GlcNAcylation, thereby modulating its enzymatic activity (Bennmann et al., 2016). Thus, it would be of interest to study the effect of altered sialylation due to GNE mutation on the glycosylation pattern of aggregating proteins.
Several reports indicate alteration of protein sialylation to be a leading cause of AD (Wang, 2009; Schnaar et al., 2014). Binding of Aβ to cells is sialic acid-dependent, as its binding to the cell surface is mediated through sialylated gangliosides, glycolipids and glycoproteins (Ariga et al., 2001). The levels of sialyltransferases decline with age, which may contribute to altered sialic acid levels (Maguire et al., 1994; Maguire and Breen, 1995). In addition, clearance of Aβ by microglia is enhanced in the absence of the sialic acid-binding immunoglobulin-like lectin CD33 (Siglec-3) (Jiang et al., 2014; Siddiqui et al., 2017). This suggests that sialylation is important for Aβ uptake and accumulation.
Interestingly, altered levels of the sialyltransferases ST3Gal5 and ST8Sia1 were reported in HEK293 cells overexpressing wild-type recombinant GNE, resulting in increased levels of the gangliosides GM3 and GD3 (Wang et al., 2006). Thus, GNE may affect sialyltransferases through an as-yet-unknown mechanism. Molecules affecting sialyltransferase levels may influence Aβ uptake in both GNEM and AD. Thus, changes in the sialylation pattern of Aβ deposition cascade proteins in muscle cells may affect rimmed vacuole formation in GNEM and offer a new therapeutic approach.
Role of Cytoskeleton Network in AD and GNEM
Cytoskeletal proteins are important functional proteins in both neuronal and muscle cells. In muscle, they drive contraction and movement, while in neurons they play a vital role in the neuronal plasticity that underlies learning and memory. Cytoskeletal proteins include actin, tubulin, and lamin, which provide mechanical support to the cell and modulate its internal dynamics.
Tau, the first microtubule-associated protein to be identified, is one of the important hallmarks of AD along with Aβ. Tau directly assists the self-assembly of microtubules from tubulin. In AD, tau is hyperphosphorylated at different sites than in the normal brain (Gong et al., 2005; Hanger et al., 2007), and the extent of tau aggregation correlates with the amount of phosphorylation at these sites (Iqbal et al., 2008). Increased autoantibodies against tubulin and tau have also been found in the serum of AD patients, suggesting a robust target for disease diagnosis (Salama et al., 2018). In GNEM, phosphorylated tau has been observed to accumulate in rimmed vacuoles (Nogalska et al., 2015), but whether the aggregated tau is hyperphosphorylated relative to the normal form has not yet been studied.
Actin dynamics and the modulation of G-actin and F-actin are important for neuronal plasticity and memory development (Penzes and Rafalovich, 2012). Impaired cognitive function has been reported in AD pathology, where cofilin-1, an actin depolymerizer, was found to be inactive (Barone et al., 2014). Inactivation of cofilin-1 contributes to actin-dependent impairment of synaptic plasticity and thus of learning (Rust, 2015). Further, cofilin-1 inactivation is γ-secretase dependent, which controls Aβ peptide production. Cofilin-actin rods also result in synaptic loss in AD (Bamburg et al., 2010). Small GTPases like RhoA, Rac1, and Cdc42 regulate APP, Aβ formation, and neurotoxicity (Boo et al., 2008; Wang et al., 2009). Phosphorylation of collapsin response mediator protein-2 (CRMP-2) in AD disrupts its binding with kinesin, hampering axonal transport and resulting in neuronal defects (Mokhtar et al., 2018). Rho GTPases also play important roles in muscle differentiation and muscle contraction (DeHart and Jones, 2004; Zhang et al., 2012). Interestingly, GNE has been shown to interact with CRMP-1, α-actinin-1, and α-actinin-2, key cytoskeletal regulatory proteins (Weidemann et al., 2006; Amsili et al., 2008; Harazi et al., 2017). Since α-actinin-1 and α-actinin-2 are actin-binding proteins, their binding to GNE raises the possibility of impaired actin function in GNEM. Differential cytoskeletal protein expression was observed in muscle biopsy samples of GNEM patients (Sela et al., 2011). Upstream of actin, the function of FAK (focal adhesion kinase) and integrin (an extracellular matrix receptor) was affected in mutant GNE cells (Grover and Arya, 2014). It has also been reported that induction of Aβ led to increased expression of FAK and its autophosphorylation at Tyr397 (Han et al., 2013). However, the roles of RhoA, actin, and cofilin need to be further elucidated in GNEM. Taken together, these studies point to cytoskeletal proteins as a common target regulating Aβ production and as candidates for therapeutic intervention.
Mitochondrial Dysfunction in AD and GNEM
Mitochondria are self-dividing organelles that undergo fission and fusion inside the cell. They are the powerhouse of the cell, providing energy through oxidative phosphorylation coupled to the TCA cycle. Neurons and muscle cells have a particularly high demand for mitochondria to support neuronal processes and muscle contraction, respectively. Different cytoskeletal proteins have been reported to assist the motility of mitochondria in the cytoplasm (Lackner, 2013). Accumulation of Aβ and increased cellular death have been reported in post-mortem brains of AD patients (Cha et al., 2012). Further, Aβ accumulation in mitochondria precedes amyloid plaque formation, indicative of early-stage AD (Ankarcrona et al., 2010). In the early stages of AD, the number of mitochondria in the affected neurons is greatly reduced, leading to decreased glucose metabolism and impaired TCA cycle enzyme activity (Bubber et al., 2005; Mosconi, 2005). Additionally, elevated levels of oxidative damage and a significant increase in mutations of mtDNA and cytochrome c oxidase have been reported in AD patients (Castellani et al., 2002). Impaired mitochondrial trafficking has also been observed in rat hippocampal neurons upon exposure to sub-cytotoxic levels of Aβ (Rui et al., 2006). Altered calcium homeostasis affects ATP generation and causes mitochondrial dysfunction (Supnet and Bezprozvanny, 2010; Swerdlow, 2018).
In GNEM, upregulation of a number of mitochondrial genes and transcripts encoding mitochondrial proteins, such as cytochrome c oxidase (COX), ATPases, and NADH dehydrogenase, has been reported in patient muscle biopsies (Eisenberg et al., 2008). Vacuolar and swollen mitochondria, indicative of structural and functional dysfunction, have been observed in HEK cells with mutated GNE (Eisenberg et al., 2008). Since mitochondrial function depends on mitochondrial structure, the increased branching of mitochondria observed in cells of GNEM patients could lead to oxidative stress (Eisenberg et al., 2008). Thus, both GNEM and AD show mitochondrial dysfunction. It would be of interest to determine the stage at which mitochondria are affected in GNEM and whether any Aβ accumulation occurs in mitochondria besides rimmed vacuoles.
In an AD mouse study, COX gene knockout reduced oxidative stress by reducing Aβ plaque formation (Fukui et al., 2007). Inhibition of COX-2 function protects neurons and reduces the accumulation of Aβ in neurons of AD transgenic mice (Woodling et al., 2016). In GNEM, the COX7A protein is reported to be upregulated (Eisenberg et al., 2008). Thus, inhibiting COX in GNEM may reduce mitochondrial oxidative stress and inhibit Aβ aggregate formation in GNE-deficient cells, and COX could serve as an important therapeutic target.
Effect of Oxidative Stress in AD and GNEM
Oxidative stress is a key player in many neurodegenerative diseases. With age, oxidative stress in the brain increases due to an imbalance of redox potential, leading to the generation of reactive oxygen species (ROS) (Andreyev et al., 2005; Wang and Michaelis, 2010). When more ROS are produced than can be scavenged by ROS defense mechanisms, oxidative stress results and causes cell damage (Feng and Wang, 2012). Reports suggest that Aβ(1-42) accumulation is associated with oxidative stress in hippocampal neurons and in C. elegans (Yatin et al., 1999). Phosphorylation of tau is also reported to increase during oxidative stress via activation of glycogen synthase kinase 3-β (Lovell et al., 2004). Aberrant S-nitrosylation at cysteine residues of proteins such as ApoE, Cdk5, and PDI leads to oxidative stress and neuronal destruction. In fact, oxidation of the neuronal proteins that control Aβ solubilization and tau hyperphosphorylation severely affects the progression of AD.
In GNEM, upregulation of cell stress molecules such as Aβ oligomers and αB-crystallin, which signal to elevate APP protein, was reported (Fischer et al., 2013). Upregulation of the iNOS enzyme suggested that cell stress in GNE myopathy is mainly due to NO-related free radicals (Fischer et al., 2013). In GNEM patients and mouse models, proteins were found to be highly modified by S-nitrosylation (Cho et al., 2017). In AD, the generation of NO correlates with the activation of iNOS in glial cells; NO generation by iNOS is robust and renders neurotoxicity, contributing to neuronal death and injury. Atrogenes and oxidative stress response proteins are highly upregulated under hyposialylated conditions, and supplementation with sialic acid restores ROS levels in muscle cells (Cho et al., 2017). Additionally, in a HEK293 cell-based model system for GNEM overexpressing a pathologically relevant GNE mutation, Prdx IV, an ER-resident peroxiredoxin, was found to be downregulated. The downregulation of Prdx IV may disturb the redox state of the ER, affecting proper protein folding and eventually leading to ER stress (Chanana et al., 2017). The expression levels of Prdx I and Prdx IV are also substantially decreased in post-mortem AD brains, with higher levels of protein oxidation (Majd and Power, 2018). These studies suggest that oxidative stress may be common to both disorders, and ER-based peroxiredoxins may play an important role in the pathology of both diseases.
Role of Endoplasmic Reticulum and Chaperones in Protein Aggregation
The endoplasmic reticulum (ER) is an important cellular organelle involved in the proper folding and processing of proteins. Perturbation of ER function leads to protein misfolding and eventually protein aggregation, a key feature of several neurodegenerative diseases. Accumulation of misfolded proteins in the ER elicits ER stress and the unfolded protein response (UPR), which triggers apoptotic cell death to eliminate cell toxicity (Tabas and Ron, 2011). Misfolded proteins retained in the ER undergo proteasomal degradation via ER-associated degradation (ERAD) (Smith et al., 2011). Activation of UPR proteins such as IRE1 and the chaperone GRP78 has been reported in the cortex and hippocampal tissue of AD brains (Hoozemans et al., 2005; Lee et al., 2010a). Activation of the UPR proteins IRE1α, PERK, and ATF6 has also been reported in AD by Xiang et al. (2017). GNEM muscle biopsies likewise revealed upregulation of different UPR proteins, including the ER-resident chaperones GRP78/BiP, GRP94, calnexin, and calreticulin; the same study showed localization of GRP78/BiP and GRP94 with Aβ in the ER (Li et al., 2013). Upregulation of the chaperone GRP94 is also reported in a HEK cell-based model of GNEM (Grover and Arya, 2014). Since chaperone upregulation is observed in GNEM, chaperones may play an important role in protein aggregation and subsequently in rimmed vacuole formation. Thus, small molecules that modulate chaperone activity to enhance proper protein folding and inhibit protein aggregation offer a promising therapeutic approach for GNEM.
Interestingly, calreticulin, a molecular chaperone that modulates Ca2+ homeostasis, is downregulated in cortical neurons of AD patients and is used as a negative biomarker for AD progression (Lin et al., 2014). Another study reported that calreticulin co-localizes with both Aβ and APP and assists in the proper folding of Aβ (Johnson et al., 2001). Stemmer et al. showed that calreticulin binds directly to presenilin and nicastrin, molecular components of γ-secretase, along with Aβ (Stemmer et al., 2013). The binding of calreticulin to γ-secretase may direct the proper binding and cleavage of APP to Aβ. With calreticulin downregulated in neurons, γ-secretase loses its proper cleaving activity, leading to misfolded Aβ and its accumulation in neurons. Altered calreticulin levels could also affect protein folding in GNEM, as calreticulin interacts with protein disulfide isomerase (PDI) to serve its chaperone function in the ER, and PDI in turn interacts with peroxiredoxin IV, which is downregulated in GNE-deficient cells (Chanana et al., 2017). Thus, the role of calreticulin as a molecular chaperone in GNEM warrants further investigation.
Heat shock proteins (HSPs) present in the cytosol also help proteins achieve their native structure and avoid aggregation (Franklin et al., 2005; Paul and Mahanta, 2014). Elevated levels of HSP70 and HSP27 were found in brain tissues of AD patients (Perez et al., 1991; Renkawek et al., 1993). HSP70 has been reported to interfere with the secretory pathway of APP by binding to APP and reducing Aβ production. Along with HSP70, HSP90 has been shown to degrade Aβ oligomers and tau via the proteasomal degradation pathway (Lu et al., 2014). Overexpression of HSP70 and HSP90 helps maintain tau homeostasis and increases its solubility, thereby preventing aggregation (Petrucelli et al., 2004). Overexpression of these chaperones also prevents the activation of caspases, which otherwise may lead to neuronal death due to the accumulation of aggregated proteins (Sabirzhanov et al., 2012). A proteomic study of GNEM patient biopsies also indicates increased HSP70, crystallin, and HSPB1 levels (Sela et al., 2011). Thus, more intensive research is needed to explore chaperones as therapeutic drug targets for GNEM that could reduce protein aggregation and inhibit rimmed vacuole formation.
The rimmed vacuoles observed in GNEM pathology are also described as clusters of autophagic vacuoles and multilamellar bodies, which contain congophilic amyloid proteins, ubiquitin, and tau (Nonaka et al., 2005). Higher expression of lysosomal-associated membrane proteins (LAMPs), LC3, and various other lysosomal proteins involved in the autophagic pathway was observed in the skeletal muscle of a GNEM mouse model (Malicdan et al., 2007). Differential regulation of BCL2 in GNEM also suggests that some proteins of the autophagy pathway in AD may play a role in GNEM autophagic vacuole formation. A comparison of the autophagic mechanisms in AD vs. GNEM is shown in Figure 1. Thus, it would be of interest to identify novel targets driving autophagy in GNEM; several autophagy-stimulating drugs developed for AD may serve as therapeutic options for the myopathy.
Cell Death and Apoptosis
Cell death is the most common feature of neurodegenerative diseases and occurs massively. In AD, neuronal loss occurs mainly in the cerebral cortex and limbic lobe (Alzheimer's Association, 2017). There are two major pathways of apoptosis: the extrinsic pathway and the intrinsic pathway. The extrinsic pathway involves cell surface receptors like TNF receptors; whether Aβ or Aβ oligomers bind these receptors directly remains to be established, but the pattern of activation of the downstream caspases of the extrinsic pathway (e.g., Caspases 2 and 8) is mediated by Aβ (Ghavami et al., 2014). In the intrinsic pathway, Aβ plays an important role: its intracellular accumulation in the ER causes ER stress, and its binding to a mitochondrial alcohol dehydrogenase leads to mitochondrial stress, followed by activation of downstream apoptotic markers (Lustbader et al., 2004). The upstream mediators of these apoptotic processes are yet to be determined, but caspases are activated in the process and cleave the tau protein, leading to NFT formation (Dickson, 2004). Therefore, in AD, proteolysis of both APP and tau takes place, producing abnormal proteins that aggregate and form fibrillar lesions extracellularly and intracellularly. Direct involvement of caspases in the apoptosis of neurons is not yet established, but many caspases have been found to play a role in the regulation of neuronal death upon Aβ accumulation (Behl, 2000; Dickson, 2004). Aβ(1-42) exposure leads to downregulation of anti-apoptotic proteins like Bcl-2 and upregulation of pro-apoptotic proteins like Bax, cytochrome c, and cleaved caspases in PC12 cells (Chen et al., 2018). Altered levels of various microRNAs that target neuropathological mechanisms have been reported in AD (Ma et al., 2017; Dehghani et al., 2018). Activation of programmed necrosis leading to cell death is reported in the brains of AD patients (Caccamo et al., 2017). Suppression of apoptotic cell signaling pathway proteins such as p38 MAPK can rescue tau pathology in AD (Maphis et al., 2016). These studies suggest that effector molecules targeting signaling proteins in the apoptotic pathway can play a role in preventing cell apoptosis caused by Aβ accumulation or tau dysfunction, and hence are potential drug molecules for AD.

FIGURE 1 | Comparison of autophagy mechanisms in AD and GNEM. Mutations in the genes for presenilin and APP in AD, and in GNE in GNEM, lead to the accumulation of proteins such as Aβ and tau. This accumulation causes oxidative stress and ER stress/UPR activation with upregulation of chaperones, ultimately leading to dysfunctional autophagy as the numbers of autophagosomes, lysosomes, and rimmed vacuoles increase.
In GNEM, degeneration is seen in the myofibrils of patient muscle biopsies, which might lead to rimmed vacuole formation (Yan et al., 2001). Similar to AD, activation of Caspases 3 and 9 was observed in myoblasts from a GNEM patient carrying the M743T kinase-domain mutation (Amsili et al., 2007). Along with this, increased pAKT levels were observed, suggesting impairment of the apoptotic program (Amsili et al., 2007). Mitochondria-dependent apoptosis and disruption of both mitochondrial structure and function were observed in a HEK cell-based model system of GNEM overexpressing a pathologically relevant GNE mutation (Singh and Arya, 2016). Activation of PTEN and PDK1 was also observed in the myoblasts, which might lead to muscle loss; upon stimulation with insulin, PI3K and downstream signaling through AKT are activated, triggering the cell survival pathway (Harazi et al., 2014). Increased anoikis, apoptosis due to loss of anchorage to the extracellular matrix, was observed in pancreatic carcinoma cells when the GNE gene was silenced; additionally, CHOP levels have been reported to increase in GNE-deficient cells, indicative of apoptosis through the ATF4-ATF3-CHOP pathway (Kemmner et al., 2012). Increased apoptosis due to internalization of Aβ peptides was observed in hyposialylated C2C12 myotubes and in the skeletal muscle of GNEM patients (Bosch-Morató et al., 2016). This suggests that sialylation has a role in Aβ uptake and cell apoptosis, and that molecules of the apoptotic pathway can be therapeutic targets. Thus, the molecular and cellular phenomena of apoptosis in AD and GNEM appear to overlap despite the difference in cell types, neurons vs. muscle cells, respectively.
A comparison of the apoptotic mechanisms in AD vs. GNEM is shown in Figure 2. Interestingly, treatment of GNE-deficient cells with insulin-like growth factor appears to rescue the apoptotic phenotype and hence could be a potential therapeutic avenue that shifts apoptotic cells toward survival (Singh et al., 2018). In summary, proteins and drug molecules that rescue the cell death phenotype in AD by targeting common proteins can be explored for GNEM therapy.
FIGURE 2 | Comparison of apoptosis mechanisms in AD and GNEM. In GNEM, the intrinsic pathway is mediated through mitochondria as a consequence of mitochondrial dysfunction, with release of cytochrome c and activation of the executioner caspases (Caspase-3 and Caspase-9). In addition, mutation in GNE causes hyposialylation of Aβ, leading to its aggregation and progression toward ER stress, where the AKT pathway is impaired and sarcoplasmic calcium is released into the cytoplasm, eventually leading to apoptosis. In the extrinsic pathway, hyposialylation of the IGF1R receptor leads to impairment of the ERK pathway and activation of BAD, inhibiting anti-apoptotic Bcl-2 and thus leading to apoptosis. In AD, in the intrinsic pathway, Aβ accumulation and tau phosphorylation in the ER lead to ER stress. In the mitochondria of AD patients, Aβ binds to Aβ-binding alcohol dehydrogenase (ABAD), leading to mitochondrial dysfunction. Both ER and mitochondrial stress lead to activation of effector caspases 3, 6, and 7, which may cleave tau, leading to the formation of NFTs. In the extrinsic pathway, accumulated Aβ binds to the ligand TNF, leading to activation of Caspases 2 and 8.
A complete comparison of the molecular and cellular changes in AD and GNEM is listed in Table 2.
TREATMENT
There is no cure for AD to date, as the available medications only help control the symptoms. AD drug therapy includes drugs that target the neurotransmitter systems of the brain, such as acetylcholinesterase (AChE) inhibitors, which increase neurotransmitter levels at synaptic junctions (Schenk et al., 2012). Three FDA-approved acetylcholinesterase inhibitors are available: rivastigmine, galantamine (for mild AD), and donepezil (for all stages of AD) (Schenk et al., 2012). Memantine, an antagonist of the N-methyl-D-aspartate (NMDA) receptor, is also used in combination with AChE inhibitors. None of these pharmacological drugs can stop the damage and destruction of neurons, leaving the disease fatal.
Since Aβ accumulation is one of the major causes of the disease, drugs that can lower the amount of Aβ accumulating in the brain are of prime importance. Secretase inhibitor drugs block the cleavage of APP into Aβ, thereby minimizing its accumulation (Imbimbo and Giardina, 2011). Another set of drugs, used as a passive vaccination strategy in the form of antibodies, helps in the clearance of Aβ species (Schenk et al., 2012). Several such drugs completed Phase III clinical trials but failed to demonstrate efficacy in patients, and the passive vaccination strategy against tau also proved ineffective (Wischik et al., 2014). A major limitation of the anti-amyloid drugs was thought to be late diagnosis of the disease; thus, research focusing on the stage at which amyloid formation is initiated could offer better drug targets. Indeed, aducanumab, a human monoclonal antibody selective for the aggregated form of Aβ, showed reduced amyloid uptake and improved cognitive function in early AD patients (Scheltens et al., 2016).
For GNEM also, there is no available treatment that can reverse disease progression and stop muscle degeneration. Administration of N-acetylmannosamine, neuraminic acid, and sialyllactose in mouse models of GNEM improved survival by reducing rimmed vacuole formation and β-amyloid deposition (Yonekawa et al., 2014). Gene therapy by intravenous infusion of a GNE gene lipoplex led to improved muscle strength and increased cell surface sialylation in patients (Nemunaitis et al., 2010). The FDA-approved chemical chaperone 4-PBA (4-phenylbutyrate), which aids protein folding, has been proposed for GNEM (Krause, 2015). Bimagrumab (BYM338), an antibody against the activin type II receptor, has been found helpful in preventing muscle atrophy in GNEM (Krause, 2015). Some compounds have been in clinical trials, such as the sialic acid precursor N-acetylmannosamine (ManNAc) and an extended-release form of sialic acid, aceneuramic acid; however, due to lack of statistical significance in the patient cohort study, the latter compound was discontinued by Ultragenyx (Mori-Yoshimura and Nishino, 2015; Argov et al., 2016). Recent studies in GNEM indicate that sialic acid supplementation alone may not be sufficient to rescue the disease phenotype. As discussed above, several other cellular phenomena affect GNEM, including the accumulation of aggregated proteins such as β-amyloid and tau. Sialic acid has been shown to affect β-amyloid uptake in C2C12 myoblasts, indicating a role of sialic acid in β-amyloid uptake (Bosch-Morató et al., 2016). Thus, drug molecules affecting β-amyloid uptake and the initiation of Aβ accumulation may serve as better therapeutic targets and offer a common mechanism for AD as well as GNEM.
CONCLUSION
While much is known about AD, GNEM is a poorly understood rare disease. The scarcity of patient samples limits GNEM studies, and the absence of an appropriate animal model, as GNE−/− mice are embryonically lethal at day E8.5, restricts the understanding of the genotype-phenotype correlation. Some interesting leads from AD studies could help explore GNEM pathomechanisms. While both diseases share many similarities at the cellular level, such as Aβ amyloid deposition, protein aggregation, and autophagic vacuoles, the major difference is that in AD the brain/neurons are affected, whereas in GNEM only muscles, in particular the tibialis anterior muscle cells, are affected; no changes in the neurons of GNEM patients have been reported. It would be of interest to study the stage of Aβ deposition in GNE-deficient cells and whether protein aggregation could be prevented to slow GNEM disease progression. Also, whether there is any genetic predisposition to AD or GNEM in patient families would be important for understanding the epigenetics of these disorders. Future studies could be directed toward deciphering common therapeutic targets for these disorders.
AUTHOR CONTRIBUTIONS
SD and RY wrote the first draft of the manuscript. PC and RA revised and improved the first draft. RY prepared the tables and Figure 2. PC prepared Figure 1. RA edited and finalized the version. All authors have seen and agreed on the final submitted version of the manuscript.
FUNDING
This work was supported by grants from UPOE-II (Project ID: 16), University Grants Commission, India; DST PURSE II (DST/SR/PURSE Phase II/11), Department of Science and Technology, Government of India; and SERB (Science and Engineering Research Board) EMR/2015/001798, Government of India. We acknowledge Jawaharlal Nehru University, New Delhi, for providing financial assistance toward publication and infrastructure.
"Biology"
] |
Verapamil inhibits efflux pumps in Candida albicans, exhibits synergism with fluconazole, and increases survival of Galleria mellonella
ABSTRACT The emergence of resistance requires alternative methods to treat Candida albicans infections. We evaluated the efficacy of the efflux pump inhibitor (EPI) verapamil (VER) with fluconazole (FLC) against FLC-resistant (CaR) and FLC-susceptible (CaS) C. albicans. The susceptibility of both strains to VER and FLC was determined, as well as the synergism of VER with FLC. Experiments were performed in vitro on planktonic cultures and biofilms, and in vivo using Galleria mellonella. Larval survival and fungal recovery were evaluated after treatment with VER and FLC. Data were analyzed by analysis of variance and Kaplan-Meier tests. The combination of VER with FLC at sub-lethal concentrations reduced fungal growth. VER inhibited the efflux of rhodamine 123 and showed synergism with FLC against CaR. For biofilms, FLC and VER used alone reduced fungal viability, and the combination of VER with FLC at sub-lethal concentrations also reduced biofilm viability. In the in vivo assays, VER and FLC used alone or in combination increased the survival of larvae infected with CaR. A reduction in fungal recovery was observed only for larvae infected with CaR and treated with VER plus FLC. In summary, VER reverted the FLC resistance of C. albicans and showed synergism with FLC against CaR; it also increased the survival of G. mellonella infected with CaR and reduced the fungal recovery.
Introduction
Oral candidiasis is the most common fungal infection of the oral cavity, and its main etiological agent is Candida albicans [1,2]. In immunosuppressed patients, this infection can spread to the bloodstream, causing candidemia, one of the main nosocomial infections, with high mortality rates ranging from 25% to 60% [3].
Microbial infections, including oral candidiasis, are strongly associated with biofilms, which are communities of microorganisms attached to a biotic or abiotic surface and embedded in an extracellular polymeric matrix [4][5][6]. Compared to their free-floating (planktonic) counterparts, cells growing as part of biofilms exhibit distinct phenotypic properties and have a greater tolerance toward antimicrobial agents [7,8].
The misuse and overuse of conventional antifungal agents have raised the problem of antifungal resistance [9]. According to the World Health Organization, antimicrobial resistance threatens public health and is a global concern [10]. Persistent infections caused by resistant strains are difficult to treat and costly due to long hospital stays. Some resistance mechanisms of C. albicans have been identified, especially against azole drugs; these include genetic mutations and chromosomal aberrations [7], overexpression of plasma membrane multidrug transporters (efflux pumps, EPs), and signaling via cellular stress response pathways [7,9,11].
EPs, or microbial efflux systems, are membrane proteins that transport toxic substances out of the cell and have been widely recognized as the main mediators of microbial resistance toward several classes of antimicrobial drugs [9,12]. In C. albicans, two important classes of efflux systems are responsible for drug resistance: the energy-dependent ATP-binding cassette (ABC) transporter superfamily and the major facilitator superfamily (MFS) [13][14][15][16]. The ABC transporters Cdr1p and Cdr2p and the MFS transporter Mdr1p are responsible for azole resistance [13][14][15][16][17][18]. EPs can export a wide range of structurally unrelated compounds, such as antifungal drugs, herbicides, steroids, lipids, and fluorescent dyes [19]. Thus, the inhibition of EPs is considered an important method for combating microbial resistance [20].
Several approaches have been proposed to address antimicrobial drug resistance mediated by EPs, such as the direct pharmacological inhibition of efflux systems [21][22][23]. Studies have shown that certain drugs, such as verapamil (VER), can inhibit EPs localized at the fungal plasma membrane [24][25][26]. VER is a calcium channel blocker of the phenylalkylamine class and is used to treat hypertension [27] and angina pectoris [28]. In C. albicans, VER inhibits the metabolic activity of biofilms, shows synergism with fluconazole (FLC) [24], inhibits fungal filamentation [26], and reduces the expression of genes responsible for cellular adhesion and the oxidative stress response [24][25][26]. Therefore, VER is a promising efflux pump inhibitor (EPI), and its combination with FLC can increase the in vitro susceptibility of FLC-resistant C. albicans to antifungal inactivation [24]. However, the in vivo effect of VER on FLC-resistant C. albicans is not known. In this study, we investigated the inhibition of EPs for the reversion of FLC resistance in C. albicans in vitro and in vivo using the greater wax moth Galleria mellonella.
Materials and methods
Initially, we investigated the use of curcumin (CUR) and VER as EPIs against FLC-resistant C. albicans (CaR). CUR was also used as a photosensitizer (PS) for antimicrobial photodynamic therapy against CaR, as other PSs such as methylene blue are substrates for EPs [21,22,29]. Because VER showed better results than CUR as an EPI, the results obtained with CUR are described in the Supplemental Material.
Preparation of drugs
FLC (Sigma-Aldrich, St. Louis, MO) was added to Yeast Nitrogen Broth (YNB; Difco, Detroit, MI, USA) with 2.5% DMSO, which was not toxic toward C. albicans ( Figure S1). VER hydrochloride (Sigma-Aldrich, St. Louis, MO) was used as an inhibitor of the fungal efflux system and was dissolved in sterile ultra-pure water immediately before using.
Candida albicans strains and growth conditions
An FLC-susceptible strain (CaS; ATCC 90028, American Type Culture Collection, Rockville, MD, USA) and an FLC-resistant standard C. albicans strain (CaR; ATCC 96901) were evaluated. The strains were stored at −80°C in YNB with 50% glycerol. Each strain was individually thawed and plated onto Sabouraud Dextrose Agar (SDA; Acumedia Manufacturers Inc., Lansing, MI, USA) containing 0.05 mg/mL chloramphenicol. After incubation at 37°C for 48 h, five colonies were transferred to YNB medium containing 100 mM glucose (YNBg) and incubated at 37°C overnight. Next, each fungal suspension was diluted 1:20 in fresh YNB medium and incubated at 37°C until the optical density at 540 nm (OD540; Bioespectro SP 220, Equipar Ltda, Curitiba, PR, Brazil) indicated that the cells were in the mid-log phase of growth, before the planktonic culture and biofilm assays were performed. At this growth point, the mean ± standard deviation (SD) OD540 was 0.658 ± 0.091 and 0.514 ± 0.123 arbitrary units (a.u.) for the CaS and CaR strains, respectively, corresponding to 4.14 × 10^6 ± 2.29 × 10^5 and 3.61 × 10^6 ± 9.16 × 10^5 colony forming units per milliliter (CFU/mL), respectively.
Susceptibility test
The minimum inhibitory concentration (MIC) and minimum fungicidal concentration of each agent (VER and FLC) were evaluated by the microdilution method, based on the recommendations of the Clinical and Laboratory Standards Institute (CLSI, M27-A3) [30] and the European Committee on Antimicrobial Susceptibility Testing (EUCAST) [31], with some modifications. Briefly, 100 μL of each drug (VER and FLC) was serially diluted two-fold in YNBg in 96-well, U-bottom microtiter plates (TPP Techno Plastic Products, Trasadingen, Switzerland). The final concentrations of the drugs used for both strains are shown in Table 1. Next, the fungal suspensions were diluted to 10^3 CFU/mL, and 100 μL of each strain was added to each well at a final concentration of 0.5-2.5 × 10^3 CFU/mL.
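The read-out of such a microdilution plate is mechanical enough that a short script captures it. The sketch below (the OD values and the growth threshold are hypothetical, not data from this study) builds a two-fold dilution series and reads off the MIC as the lowest concentration with no measurable growth:

```python
def twofold_series(top, n):
    """Concentrations (high to low) of an n-step two-fold serial dilution."""
    return [top / 2 ** i for i in range(n)]

def mic(concentrations, growth_od, threshold=0.05):
    """MIC = lowest concentration whose blank-corrected OD stays at/below threshold.
    Assumes `concentrations` and `growth_od` are paired, ordered high to low."""
    inhibited = [c for c, od in zip(concentrations, growth_od) if od <= threshold]
    return min(inhibited) if inhibited else None

flc = twofold_series(64.0, 8)                          # 64, 32, ..., 0.5 ug/mL
od = [0.01, 0.01, 0.02, 0.03, 0.40, 0.45, 0.50, 0.52]  # hypothetical readings
print(mic(flc, od))                                    # -> 8.0 (ug/mL)
```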
Inhibition of fungal efflux systems
Non-lethal (sub-MIC) concentrations of VER (2 mg/mL for CaS and CaR) were combined with FLC (0.25 μg/mL for CaS and 64 μg/mL for CaR) for each strain, as described above, to evaluate the inhibition of the efflux system. Briefly, 50 μL of VER was added to 50 μL of FLC in the wells of a microtiter plate, and the fungal suspension was added at a final concentration of 0.5-2.5 × 10^3 CFU/mL. The drug combination was prepared so that the final concentration of each drug in the fungal inoculum was at the sub-MIC level. The samples were incubated at 37°C for 24 h, the OD540 was determined, samples were plated on SDA, and the plates were incubated at 37°C for 48 h. The controls were fungal suspensions in YNB with the drug vehicles alone and blank medium with drugs but without fungal suspensions.
Interaction of EPIs with FLC
The checkerboard microdilution assay was performed to evaluate the interaction of VER with FLC following the standards of the CLSI [30] and the EUCAST [31], with some modifications. Two-fold serial dilutions of FLC (50 µL) and VER (50 µL) were distributed along the rows and columns, respectively, of a 96-well, U-bottom microtiter plate (Kasvi, São José dos Pinhais, Brazil). The final drug concentrations used for each strain are shown in Table 2. An aliquot of 100 µL of CaS or CaR was individually added at a final concentration of 0.5-2.5 × 10^3 CFU/mL. The control consisted of fungal inoculum without drug (vehicle only). After 24 h of incubation at 37°C, the OD540 was determined, and the control and the samples with a lower OD value than the control were diluted and plated on SDA for colony counting.
To assess the interaction between the drugs, the fractional inhibitory concentration index (FICI) [32] was determined as the sum of the FICs of the two agents, FICI = FIC_VER + FIC_FLC, where the FIC of each agent is the MIC of that agent in combination divided by its MIC alone, i.e., FIC_A = MIC_(A in the presence of B) / MIC_(A alone). The FICI value was interpreted as follows: FICI < 0.5, synergism; 0.5 ≤ FICI ≤ 4.0, no interaction; and FICI > 4.0, antagonism [33]. In addition, the Bliss independence model [34][35][36] was used due to the deficiencies of the FICI method [34]. The Bliss model is based on the idea that each drug acts independently of the other and is calculated by the equation E_IND = E_A + E_B − E_A × E_B for a combination of drug A at concentration a and drug B at concentration b, where E_A and E_B are the percentages of growth inhibition observed for drug A or B alone at concentration a or b, respectively, and E_IND is the expected percentage of growth inhibition of a non-interactive combination of drug A at a with drug B at b. The difference ΔE = E_OBS − E_IND between the observed growth inhibition percentage (E_OBS) and the expected percentage (E_IND) describes the drug interaction at each concentration pair as follows: when ΔE and its 95% confidence interval (CI) were > 0, synergism was concluded; if ΔE and the 95% CI were < 0, antagonism was concluded for that combination; and Bliss independence was concluded when the 95% CI of ΔE overlapped 0 [35,36]. Experiments were performed thrice, and the FICI and Bliss independence analyses were performed for each drug combination using OD540 values. The mean ΔE values were used to build a three-dimensional surface graph, where peaks above the zero plane correspond to synergism, valleys below zero correspond to antagonism, and the zero plane indicates no statistically significant interaction.
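Because both interaction measures are purely arithmetic, a short sketch makes them concrete (the numeric inputs below are hypothetical examples, not values from this study; inhibition is expressed as fractions in [0, 1] rather than percentages):

```python
def fici(mic_a_alone, mic_b_alone, mic_a_combo, mic_b_combo):
    """FICI = MIC_A(in combination)/MIC_A(alone) + MIC_B(in combination)/MIC_B(alone)."""
    value = mic_a_combo / mic_a_alone + mic_b_combo / mic_b_alone
    if value < 0.5:
        return value, "synergism"
    if value > 4.0:
        return value, "antagonism"
    return value, "no interaction"

def bliss_delta(e_a, e_b, e_obs):
    """Bliss independence: E_IND = E_A + E_B - E_A*E_B; returns dE = E_OBS - E_IND."""
    e_ind = e_a + e_b - e_a * e_b
    return e_obs - e_ind

# Hypothetical checkerboard well: VER alone inhibits 20%, FLC alone 30%, together 65%
print(fici(4.0, 128.0, 1.0, 32.0))    # -> (0.5, 'no interaction')
print(bliss_delta(0.20, 0.30, 0.65))  # dE = 0.65 - 0.44 = 0.21 > 0, suggesting synergy
```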
For biofilm formation, after an initial 24 h of incubation, 100 μL of the content of each well was removed and replaced with 100 μL of fresh RPMIg, and the plates were incubated for a further 24 h [37].
Susceptibility testing
After biofilm formation, samples were washed twice with PBS and 200 μL of the drugs were added. The final concentrations of drugs for both strains are shown in Table 3. Control biofilms were not treated with any drug and received the same volume of drug vehicle. All samples were incubated at 37°C for 24 h. After incubation, biofilms were washed twice with PBS and mechanically disrupted using a pipette tip and 200 μL of PBS for serial dilutions, which were plated on SDA and incubated at 37°C for 48 h, and the resulting colonies were counted.
Inhibition of fungal efflux systems
The highest non-lethal concentration of VER was combined with the highest non-lethal concentration of FLC to verify the potential reversal of FLC-resistance. The concentrations of VER and FLC used were 4 mg/mL and 1 μg/mL, respectively, for CaS, and 4 mg/mL and 64 μg/mL, respectively, for CaR. The biofilms were washed, a final volume of 200 μL of the combined drugs VER + FLC was added, and the mix was incubated at 37°C for 24 h. The control samples received the drug vehicle. After the incubation, the biofilms were washed twice with PBS, disrupted, and plated on SDA for colony counting as described above.
FLC and VER on the survival of G. mellonella infected with C. albicans
Larvae in the final stage of development (sixth instar) and of average size (approximately 150 to 200 mg) were selected. Separate 10 µL Hamilton microsyringes (Fisher Scientific, Buenos Aires, Argentina) were used to inject the fungal suspensions and drugs into the larvae, which were previously cleaned for 10 min using 10% bleach, then 100% ethanol, distilled water, and finally sterile PBS [38]. FLC was prepared with 2.5% DMSO and sterile saline. VER was diluted in sterile saline. Suspensions of CaR and CaS were centrifuged (6,000 ×g, 10 min, 4°C), washed twice, and resuspended in sterile saline. The mean OD540 values for CaS and CaR were 0.655 ± 0.098 a.u. and 0.560 ± 0.140 a.u., corresponding to 1.57 × 10^7 ± 6.99 × 10^6 CFU/mL and 1.49 × 10^7 ± 2.40 × 10^6 CFU/mL, respectively. For fungal inoculation, each larva was handled with light pressure and 10 μL of CaS or CaR was injected into the last left pro-leg [38,39]. The larvae were incubated at 33°C and, after 2 h, 10 μL of the drugs (VER and FLC), alone or in combination, was injected into the last right pro-leg. The following groups were evaluated (n = 10 each): control (fungal inoculum and saline); FLC (fungal inoculum and FLC); VER (fungal inoculum and VER); VER + FLC (fungal inoculum and VER combined with FLC). Each drug was used at its MIC. In another group (saline), larvae were injected with sterile saline in both the right and left pro-legs to assess the effect of the injection trauma. After the injections, the larvae were kept in separate Petri dishes according to group, incubated at 33°C, and observed daily for survival until no larvae were left or they became pupae. To assess larval survival, larvae were lightly touched to verify the lack of response to the stimulus [38][39][40].
Fungal recovery from G. mellonella
The fungal load was determined at 5 days after infection. Larvae were classified into the same groups (n = 10) as described above and, every 24 h, two larvae from each group were selected. Each larva was homogenized in 1 mL of sterile saline [40], and serial dilutions were plated on SDA. The plates were incubated at 37°C for 48 h for colony counting.
Statistical analysis
Each in vitro experiment was performed in quadruplicate on three or five occasions (n = 3 or 5 for each group). The data [log10(CFU/mL)] were analyzed using the Shapiro-Wilk and Levene tests to verify the normality of the distribution and the homogeneity of variances, respectively. The data were then analyzed using two-way ANOVA (with strain and treatment as independent variables). For homoscedastic data, the post-hoc Tukey's test was used; when data were heteroscedastic, they were evaluated using the post-hoc Games-Howell test. The survival curves of G. mellonella were analyzed via the Kaplan-Meier method and log-rank tests, and the fungal loads were analyzed using three-way ANOVA (strain, treatment, and recovery day as independent factors). The level of significance was 5%, and SPSS software (version 25.0, SPSS Inc., Chicago, IL, USA) was used for all statistical analyses.
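For readers who prefer open-source tooling over SPSS, the same decision flow can be sketched in Python with scipy and lifelines (the replicate values and survival times below are made up for illustration; the factorial ANOVAs themselves would use statsmodels' `ols` + `anova_lm`, omitted here for brevity):

```python
import numpy as np
from scipy.stats import shapiro, levene
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical log10(CFU/mL) replicates for two groups (control vs. VER + FLC)
control = np.array([6.1, 6.3, 6.0, 6.2, 6.1])
treated = np.array([2.0, 2.4, 1.9, 2.2, 2.1])

# Normality (Shapiro-Wilk) and homogeneity of variances (Levene), as in the workflow above
print(shapiro(control).pvalue, shapiro(treated).pvalue)
print(levene(control, treated).pvalue)

# Kaplan-Meier survival and log-rank test for two larval groups (days; 0 = censored/pupated)
days_ctrl  = [1, 2, 2, 3, 3, 4, 5, 5, 5, 5]; event_ctrl  = [1] * 10
days_combo = [3, 4, 5, 5, 5, 5, 5, 5, 5, 5]; event_combo = [1, 1, 0, 0, 0, 0, 0, 0, 0, 0]
km = KaplanMeierFitter().fit(days_ctrl, event_observed=event_ctrl, label="control")
print(km.median_survival_time_)
res = logrank_test(days_ctrl, days_combo,
                   event_observed_A=event_ctrl, event_observed_B=event_combo)
print(res.p_value)
```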
Susceptibility test
The MIC values of VER and FLC were estimated for the CaS and CaR strains, and the reductions in log10(CFU/mL) values are shown in Table 4.
Efflux of Rh123
The CaS cells showed intracellular retention of Rh123 (green fluorescence), whereas the CaR cells did not exhibit Rh123 fluorescence, suggesting that Rh123 was a substrate for the EPs of the CaR strain (Figure 1).
CaR cells treated with VER at 2 mg/mL showed intracellular retention of Rh123 (green fluorescence), suggesting that VER prevented the efflux of Rh123 by the EPs of the CaR strain (Figure 2).
Inhibition of fungal efflux systems
After establishing the MICs and the concentrations of drugs that inhibited the growth of both C. albicans strains, VER was combined with FLC at sub-MIC values, i.e., 2 mg/mL VER was used for both strains in combination with 0.25 and 64 μg/mL FLC for CaS and CaR, respectively.
The two-way ANOVA indicated a significant interaction (p < 0.001) between strain and treatment with VER and FLC. The combination of VER and FLC resulted in a greater growth reduction for CaR (4.08 log10, p < 0.001) than for CaS (0.60 log10, p < 0.001) compared with the respective controls (Figure 3).
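As a quick check of what a reduction on this scale means in raw counts, note that the log10 reduction is simply the difference of the log-transformed CFU values (the numbers below are illustrative, not measurements from this study):

```python
import math

# Illustrative CFU/mL values only
cfu_control, cfu_treated = 4.9e6, 4.1e2

reduction = math.log10(cfu_control) - math.log10(cfu_treated)
print(f"{reduction:.2f} log10 reduction")  # -> 4.08, the scale reported here for CaR
```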
Interaction of EPIs with FLC
The checkerboard assay with VER and FLC for CaS showed no interaction between the drugs (FICI values ranging from 0.625 to 1.000) and high mean CFU/mL values. For CaR, the FICI values ranged from 0.508 to 1.000, corresponding to no interaction between VER and FLC, and the plated samples showed mean values ranging from 4.47 × 10^3 to 4.92 × 10^6 CFU/mL.
Susceptibility test
The use of VER or FLC alone did not eradicate biofilm growth. The CFU/mL values obtained with FLC and VER for biofilms of both CaS and CaR are shown in Figure 5. Significant (p ≤ 0.001) reductions in CFU/mL were observed after incubation with FLC at concentrations of ≥ 2 μg/mL and ≥ 128 μg/mL for CaS (0.84 to 1.30 log10) and CaR (0.84 to 1.23 log10), respectively, compared with the respective controls (without drug; Figure 5a,b). VER at concentrations of 8 and 16 mg/mL promoted significant (p ≤ 0.001) reductions of 1.96 and 3.19 log10(CFU/mL), respectively, for CaS biofilms (Figure 5c), and of 1.29 and 1.81 log10(CFU/mL), respectively, for CaR biofilms (Figure 5d), compared with the respective controls.
Inhibition of fungal efflux systems
After identification of the drug concentrations that reduced the growth of fungal biofilms, VER was combined with FLC at non-lethal concentrations: 4 mg/mL VER for both strains of C. albicans, with 1 and 64 μg/mL FLC for CaS and CaR, respectively. No significant interaction effect (p = 0.716) was observed between strain (CaS, CaR) and treatment with VER and FLC; however, a significant effect was observed for strain (p = 0.043) and for treatment (p < 0.001). A significant (p < 0.001) growth reduction (1.11 log10) was observed for biofilms treated with the combination of VER and FLC (Figure 6) compared with the control biofilms (without drug).
Effect of FLC and VER on the survival of G. mellonella infected with C. albicans
The in vivo assays showed a reduction in the survival of G. mellonella larvae infected with the CaS and CaR strains (p ≤ 0.010) relative to the saline-injected group.
The survival analysis for G. mellonella infected with CaS (Figure 7a) indicated that the control group did not show a significant difference (p ≥ 0.235) in survival outcomes relative to the drug-treated larvae infected with CaS, and the larval groups treated with VER and VER + FLC did not show a significant difference (p ≥ 0.071) relative to the saline group.
For larvae infected with CaR, the control group showed the shortest survival times (p ≤ 0.012), and all treatments (VER, FLC, VER + FLC) increased the survival time of larvae infected with CaR; however, the difference for the FLC-treated group was not significant (p = 0.137) relative to the saline-treated group (Figure 7b).
Fungal recovery from G. mellonella
At 5 days after infection with the C. albicans strains, the fungal load of larvae treated or untreated with VER and FLC was determined by the recovery of CaS and CaR. No C. albicans was recovered from the saline-treated group. A three-way ANOVA did not show a significant interaction among the factors (p ≥ 0.091); however, each factor considered alone (strain, treatment, and recovery day) had a significant effect (p ≤ 0.004). The recovery of the CaR strain [3.41 ± 0.70 log10(CFU/mL)] was greater (p < 0.001) than that of the CaS strain [2.94 ± 0.53 log10(CFU/mL)]. The combination of VER and FLC significantly (p < 0.001) reduced the fungal recovery by 0.58 log10 compared with the control (Figure 8a). The fungal recovery was significantly (p = 0.016) lower on the fourth day than on the first day after infection (Figure 8b).
Discussion
To analyze antifungal resistance in vivo, we investigated the inhibition of the efflux system that acts as the main mechanism underlying the resistance toward FLC in C. albicans [13][14][15][16][17][18]. VER has been used as an EPI in vitro in C. albicans [24][25][26] and may serve as an important strategy for combating antifungal resistance.
Before conducting the in vivo assays, we investigated the MIC of all drugs in vitro, the inhibition of the efflux system, and the interaction between VER and FLC. VER showed the same MIC (4 mg/mL) for both strains and higher concentrations (8 and 16 mg/mL) reduced the biofilm viability for CaS and CaR. Another study also reported a reduction in the metabolic activity for biofilm formation and pre-formed biofilm in C. albicans treated with VER at concentrations ranging from 40 to 1280 μg/mL [24]. These results may be explained by the inhibitory effect of VER on the virulence of C. albicans. The use of VER at concentrations ranging from 20 to 640 μg/mL inhibited C. albicans filamentation, adherence to polystyrene surfaces and buccal epithelial cells, expression of the HWP1 (hyphal wall protein 1) gene, and the gastrointestinal colonization of mice [26]. Moreover, the use of 80 μg/mL VER increased the susceptibility of C. albicans toward oxidative stress by reducing the fungal oxidative stress response [25].
Further, we demonstrated that the combination of VER with FLC reversed the resistance of C. albicans toward FLC, as a sub-MIC of FLC (64 μg/mL) promoted significant reductions in log10(CFU/mL) when combined with sub-MICs of VER (2 and 4 mg/mL for planktonic cells and biofilms, respectively). An analysis of the effect of VER and FLC on biofilm formation in C. albicans showed that the MIC50 of VER was reduced from 160 mg/L to 20 mg/L [24]. On preformed biofilms, the MIC50 of VER and FLC was reduced from 320 to 80 mg/L and from > 256 to 0.5 mg/L, respectively [24]. In fungal cells growing as part of biofilms in the presence of VER at concentrations ranging from 160 to 1280 mg/L, the metabolic activity of the biofilms was reduced by more than 60% [24]. In contrast, our results showed that only higher concentrations of VER (8 and 16 mg/mL) reduced biofilm viability. This difference may be attributed to the method used to evaluate the effect of the drugs, as in this study we quantified colonies instead of cellular metabolic activity.
An FLC-susceptible strain was also evaluated in this study as a control. However, the combination of VER with FLC had a stronger effect on the FLC-resistant strain than on the FLC-susceptible one. This result was expected, since the susceptible strain does not overexpress the efflux systems that are the target of the inhibitors. As a limitation of our investigation, we did not evaluate the expression of the CDR1, CDR2, and MDR1 genes to determine the exact mechanism of resistance of C. albicans. This evaluation could better explain our susceptibility results and determine whether VER is specific to the ABC and/or MFS EP families.
The results of the accumulation/efflux assays showed that Rh123 was retained in CaS but not in CaR cells, indicating that Rh123 is a substrate for the EPs. We observed that VER increased the intracellular accumulation of Rh123 in CaR, which likely results from a decrease in efflux pump activity. A previous study showed that the accumulation of Rh123 in planktonic C. albicans was higher during the earlier than the later phases of growth; therefore, the mid-log phase of growth was used to standardize the assay [41]. However, the authors of that study also reported that 10 μM VER did not increase the accumulation of Rh123 in FLC-resistant C. albicans, probably due to the shorter exposure and lower concentration used [41]. Another investigation showed higher accumulation of Rh123 in early-phase (6 h) biofilms than in intermediate (12 h) and mature (48 h) biofilms and planktonic cultures of C. albicans, indicating that the azole resistance of C. albicans biofilms mediated by EPs occurs only at the early stage of biofilm growth [6].
The combination of VER with FLC at the sub-MIC level showed synergism in reducing CaR viability, which indicates that VER reversed the resistance toward FLC. Although it did not evaluate VER, another study reported synergism of other calcium channel blockers (amlodipine, nifedipine, benidipine, and flunarizine) with FLC against C. albicans by Bliss independence analysis [42]. Altogether, our in vitro results demonstrated that VER was an effective EPI and increased the susceptibility of the CaR strain to FLC.

FIGURE 7 | Survival curves for G. mellonella infected with the CaS (a) and CaR (b) strains upon treatment with VER and FLC. The groups evaluated were: Saline (sterile saline alone); Control (fungal inoculum and saline); FLC (fungal inoculum and 0.5 and 128 μg/mL of FLC for the CaS and CaR strains, respectively); VER (fungal inoculum and 4 mg/mL VER); VER + FLC (fungal inoculum and 4 mg/mL VER combined with 0.5 and 128 μg/mL of FLC for the CaS and CaR strains, respectively). Censored observations are indicated with a plus sign (+) (data collection was stopped when the larvae became pupae).
Our in vivo results demonstrated that treatment with both VER and FLC increased larval survival and reduced fungal recovery for CaR but not for CaS. This suggests that the combinatorial use of the drugs was effective in treating the infection caused by CaR, in accordance with the in vitro results that demonstrated a greater growth reduction for CaR than for CaS. Another study showed that proton pump inhibitors (omeprazole, lansoprazole, pantoprazole, rabeprazole, esomeprazole, and ilaprazole) inhibited the efflux pump activity of FLC-resistant C. albicans [43]. The combination of these inhibitors with FLC increased the survival of larvae and reduced the black lumps containing yeast and hyphae observed in histological sections [43]. The combination of licofelone (a dual microsomal prostaglandin E2 synthase/lipoxygenase inhibitor) with FLC also increased the survival of G. mellonella infected with FLC-resistant C. albicans and decreased the fungal burden in CFU counts and histological sections, although no effect on the efflux pump was observed [40]. In those studies, larvae treated with the combined drugs showed the greatest survival, whereas in our investigation no difference was observed among the groups treated with the drugs alone or together. This may be explained by the dose used for each drug: we used the MIC, because the fungal inoculum had to be increased to more than 1 × 10^7 CFU/mL to reduce larval survival (see Supplemental Material).
Another limitation of our study is that we used only one reference strain of FLC-susceptible and -resistant C. albicans. Clinical isolates were not evaluated, which may lead to different outcomes owing to distinct virulence activities.
In conclusion, our in vitro experiments showed that VER reverted the FLC resistance of C. albicans and exhibited synergism with FLC against CaR. The drug also increased the survival of G. mellonella infected with CaR and reduced the fungal recovery. These results pave the way for future in vivo studies and clinical trials aimed at combating antifungal resistance using VER as an EPI to reverse FLC resistance. Because VER is an approved drug for clinical use, repurposing it may shorten the path to the clinical treatment of resistant infections.
Disclosure statement
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
"Medicine",
"Biology"
] |
On the vanishing contact structure for viscosity solutions of contact type Hamilton-Jacobi equations I: Cauchy problem
We study representation formulae for the fundamental solutions and viscosity solutions of Hamilton-Jacobi equations of contact type. We also obtain a vanishing contact structure result for the relevant Cauchy problems, which can be regarded as an extension of the vanishing discount problem.
INTRODUCTION
In previous work ([21] and [22]), the authors developed an analogue of weak KAM theory for contact systems on compact manifolds. This leads to a representation formula, via an implicit variational principle, for the viscosity solutions of the stationary Hamilton-Jacobi equation H(x, u(x), Du(x)) = c (HJ_s), where M is a C^2 connected closed manifold and c lies in the set of critical values. This celebrated work has proved powerful for understanding such systems from a much wider and deeper viewpoint. The main purpose of this paper is to understand the limit of the viscosity solutions of (HJ_e) in the case M = R^n when H_u is uniformly bounded and tends to 0. For the special case in which H has the form of a standard Tonelli Hamiltonian H_0 with a discount factor λ > 0, i.e., H = λu + H_0(x, p), this problem has been widely studied. From the point of view of the calculus of variations and optimal control, the associated Lagrangian is L = −λu + L_0(x, v), where λ > 0 and L_0 is a Tonelli Lagrangian. A classical problem in ergodic control consists of studying the limit behavior of the optimal value u_λ of a discounted cost functional with infinite horizon as the discount factor λ tends to zero. In the literature, this problem has been addressed under various conditions ensuring that the rescaled value function λu_λ converges uniformly to a constant limit.
In recent works, for instance [12], [18], and [19], the behavior of the vanishing discount limit has been widely studied in the compact manifold case, especially by applying Aubry-Mather theory and weak KAM theory. There is ample evidence that the method we use in this paper also offers a way to understand the vanishing contact structure limit, by developing Aubry-Mather theory for contact type systems (HJ_s) under suitable conditions. We will use a Lagrangian approach to the solutions of (HJ_e) in the viscosity sense, developed in [5] using the generalized variational principle proposed by Gustav Herglotz in 1930 (see [5] and the references therein).
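To see concretely why the contact setting generalizes the discounted one, the following standard computation (our illustration, assuming that (1.2) is the Carathéodory equation of Herglotz' principle, i.e., the ODE for u_ξ along a fixed curve ξ) solves (1.2) explicitly in the special case L = −λu + L_0:

```latex
% Herglotz ODE (1.2) in the discounted case L(x,v,u) = -\lambda u + L_0(x,v):
\dot u_\xi(s) = -\lambda\, u_\xi(s) + L_0(\xi(s), \dot\xi(s)), \qquad u_\xi(0) = u_0 .
% Multiplying by the integrating factor e^{\lambda s} and integrating on [0,t]:
u_\xi(t) = e^{-\lambda t} u_0 + \int_0^t e^{-\lambda (t-s)}\, L_0(\xi(s), \dot\xi(s))\, ds .
```

Minimizing u_ξ(t) over admissible curves thus reduces, in this special case, to minimizing the classical infinite-horizon discounted cost functional, which is why the vanishing contact structure limit can be viewed as a generalization of the vanishing discount problem.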
Such a Lagrangian approach also leads to a very clear explanation of the representation formulae of the value function of the associated problem from the calculus of variations. That is, due to Proposition 1.1, we conclude that the relevant fundamental solution and u_ξ are uniquely determined by (1.2) in the classical sense. By solving the ordinary differential equation (1.2), we can obtain new representation formulae for the viscosity solution u(t, x) of (HJ_e), where u_ξ is uniquely determined by (1.2) with u_ξ(t) = u. This approach also leads to a result on the vanishing contact structure limit problem, which can be regarded as a generalization of the vanishing discount problem in PDE and control theory.
Main Result I: Suppose that {L_λ}_{λ>0} is a family of Tonelli Lagrangians satisfying conditions (L1), (L2) and (L3') at the beginning of Section 2, with {H_λ} the family of associated Tonelli Hamiltonians. Let each u_λ be the unique viscosity solution of (HJ_e) with respect to H_λ, and let u, defined by (2.14), be the unique viscosity solution of (HJ'_e). If φ is Lipschitz and bounded, then u_λ(t, x) converges to u(t, x) for every (t, x) ∈ (0, +∞) × R^n.
Main Result II: Under the same assumptions as above, but replacing (L3') by (L3'') (at the beginning of Section 2), u_λ tends to u uniformly as λ → 0+ on any compact subset of (0, +∞) × R^n.
This paper is organized as follows. In Section 2.1, we give a representation formula for the equation (HJ_e). In Section 2.2, we discuss our vanishing contact structure results for (HJ_e).
Acknowledgments. This work is partly supported by the National Natural Science Foundation of China (Grant Nos. 11631006, 11790272 and 11471238). The authors thank Qinbo Chen and Hitoshi Ishii for helpful discussions.
REPRESENTATION FORMULA AND VANISHING CONTACT STRUCTURE
We will study (HJ_e) when M = R^n. It is not difficult to see that the associated Lagrangian L defined in (1.1) is a function of class C² and that it satisfies the following conditions: (L2) for each r ∈ R, there exist two superlinear and nondecreasing functions θ_r, θ̄_r; (L3) there exists K > 0 such that the required bound holds. Let {L_λ}_{λ>0} be a family of Tonelli Lagrangians satisfying conditions (L1)-(L3). We denote by H_λ the associated Hamiltonians.
For the family {L_λ}_{λ>0} we also need the following conditions: in particular, L_λ tends to L_0 uniformly as λ → 0+ on any compact subset of R^n × R^n.
2.1. Representation formulae for fundamental solutions and viscosity solutions. In this section, we give a new representation formula for the viscosity solutions of the Hamilton-Jacobi equation (HJ_e), with H satisfying conditions (H1)-(H3). Such a representation formula for Tonelli systems appeared first in [21] and [22], using an implicit variational principle and a fixed point method. In [5], the authors gave an alternative approach based on Herglotz' variational principle. Our new representation formula for the fundamental solutions is motivated by the multiplier rule (see [10]).
where u_ξ is uniquely determined by (1.2).
Therefore, the curve u_ξ given by (1.2) does not appear in the representation formula above except through the initial point u_ξ(0). Equivalently, since A(t, x, y, u) = inf_{ξ∈Γ^t_{x,y}} u_ξ(t), we obtain (2.1).
Theorem 2.3. If L satisfies conditions (L1)-(L3), or equivalently, H satisfies conditions (H1)-(H3), and φ is a bounded and Lipschitz real-valued function on R^n with Lipschitz constant Lip(φ), then u is a solution of (HJ_e) in the viscosity sense, where A(t, x, y, u) is given by (2.1).
We will postpone the proof of Theorem 2.3. To verify that the infimum in (2.2) is indeed a minimum, we want to show the boundedness of the set Λ^x_t defined in (2.3). For this purpose we need a refinement of Lemma B.1. Notice that when one works on a closed manifold instead of R^n, the infimum is achieved automatically. But a quantitative estimate on the size of the ball containing Λ^x_t is of independent interest.
Proof. Fix any x, y ∈ R^n and t > 0. Let ξ_y ∈ Γ^t_{y,x} be a minimizer of A(t, y, x, φ(y)) and let u_{ξ_y} be the unique solution of (1.2) with initial condition u_{ξ_y}(0) = φ(y). Based on the estimates of the lower bound of A(t, y, x, φ(y)) and the upper bound of A(t, x, x, φ(x)) in Lemma 2.4, we have to estimate the lower bound of e^{Kt}φ(y) − e^{−Kt}φ(x) in the most difficult case, when φ(y), φ(x) ≤ 0. Indeed, we have the corresponding bound, where C_0 = sup_{x∈R^n} |φ(x)|. Since there exists C_1 > 0 such that 1 − e^{−2Kt} ≤ C_1 t for all t ≥ 0, then, for C_2 = C_0 C_1, we conclude the desired estimate. Now, suppose that φ(y) ≤ 0 and φ(x) ≤ 0; then by Lemma 2.4 and (2.12), we have that, for any k > 0, the quantity −Lip(φ)|y − x| + k|y − x| − (c_0 + C_2 + C + e^{−2Kt}θ*_0(ke^{2Kt}))t gives a lower bound. Choosing k = Lip(φ) + 1 and taking µ(t) = c_0 + C_2 + C + e^{−2Kt}θ*_0(e^{2Kt}(Lip(φ) + 1)), we conclude that the set Λ^x_t defined in (2.3) is contained in B(x, µ(t)t). Therefore Λ^x_t is compact and the infimum in (2.2) is indeed a minimum. Moreover, (2.11) is a consequence of (2.3). The other sign cases can be dealt with in a similar way.
This completes the proof.
Remark 2.7. It is not clear whether Lemma 2.6 holds true without the assumption that φ is bounded in general, while it does hold for Lagrangians of the form L(x, u, v) = −λu + L_0(x, v), λ > 0, i.e., the Lagrangian associated with the well-known discounted Hamiltonian (see, for instance, [12]). Lemma 2.6 not only ensures that the infimum in (2.2) is indeed a minimum if φ is a bounded and Lipschitz continuous function on R^n, but it also plays an essential part in the applications to the study of the global propagation of singularities of the associated Hamilton-Jacobi equations ([3], [4], [8] and [2]).
Lemma 2.8. Let x, y ∈ R^n, t > 0 and u ∈ R. For any ξ ∈ Γ^t_{x,y} that is a minimizer of (1.3), we denote by u_ξ(s, u) the unique solution of (1.2) with u_ξ(0, u) = u. Then, for any 0 < t' < t, the restriction of ξ to [0, t'] is a minimizer of the corresponding problem, and A(s_1 + s_2, x, ξ(s_1 + s_2), u) = A(s_2, ξ(s_1), ξ(s_1 + s_2), u_ξ(s_1)) for any s_1, s_2 > 0 with s_1 + s_2 ≤ t.
Proof. Suppose x, y ∈ R^n, t > 0 and u ∈ R. Let ξ ∈ Γ^t_{x,y} be a minimizer of (1.3) and let u_ξ(s) = u_ξ(s; u) be the unique solution of (1.2).
By letting |t_2 − t_1| → 0, this gives rise to the desired inequality. As an application of Fenchel-Legendre duality, and since u(t_0, x_0) = u_ξ(t_0), we obtain the required relation, which shows that u is a subsolution. Now we turn to the proof that u is a supersolution. Let ϕ be a C¹ test function touching u from below at (t_0, x_0), with V an open neighborhood of (t_0, x_0). Due to Lemma 2.6 and Lemma 2.8, there exists a C² curve ξ with the required properties.
It follows that the supersolution inequality holds at (t_0, x_0).
Finally, fix x ∈ R^n and let y_{t,x} be any minimizer as in Lemma 2.6; we conclude that lim_{t→0+} y_{t,x} = x. Thus, it remains to bound |L(ξ(s), u_ξ(s), ξ̇(s))|.
By Propositions B.1 and B.2, and since φ is bounded, for 0 < t ≤ 1 we conclude that there exists C_1 > 0, independent of x, t and u, such that the corresponding bound holds. It follows that there exists C_2 > 0 such that max_{s∈[0,t]} |L(ξ(s), u_ξ(s), ξ̇(s))| ≤ C_2. This leads to our conclusion that lim_{t→0+} u(t, x) = φ(x) and completes the proof.
2.2. Vanishing contact structure. Let u_λ be the viscosity solution of (HJ_e) with respect to H_λ, defined by Herglotz' variational principle (1.3) under the constraint (1.2). If H_0 is the Fenchel-Legendre dual of L_0, then u defined by
(2.14) u(t, x) = inf_{y∈R^n} {φ(y) + A_t(y, x)}, x ∈ R^n, t > 0,
is a viscosity solution of (HJ'_e), where A_t(x, y) is the fundamental solution, or least action, with respect to L_0. In this section, we begin with an easier problem, to show that for the Cauchy problem the vanishing discount problem mentioned in the introduction can be generalized to that of the vanishing contact structure.
Lemma 2.9. Suppose L satisfies conditions (L1)-(L3). Given x ∈ R^n, t, R > 0, u ∈ R and |y − x| ≤ R, if ξ ∈ Γ^t_{y,x} is a minimizer for A_t(y, x) and u_ξ is determined by (1.2) with respect to L_λ and ξ, then the stated bound holds.
Proof. Denote by ξ_0 ∈ Γ^t_{y,x} the straight line segment defined by ξ_0(s) = y + s(x − y)/t for any s ∈ [0, t]; in view of (L2), we obtain an estimate with κ(r) = θ_0(r) + 2c_0. By Gronwall's inequality, we then obtain the bound, which leads to our conclusion.
Theorem 2.10. Let each u_λ be the unique viscosity solution of (HJ_e) with respect to H_λ and u defined by (2.14) be the unique viscosity solution of (HJ'_e). If φ is Lipschitz and bounded, then u_λ(t, x) → u(t, x) for every (t, x) ∈ (0, +∞) × R^n.
Remark 2.11. For the uniqueness of the viscosity solutions of both (HJ_e) and (HJ'_e), see, for instance, [1].
Proof. It is similar to the proof of Theorem 2.10.
APPENDIX A. CARATHÉODORY EQUATIONS
Let Ω ⊂ R^{n+1} be an open set. A function f : R × R^n → R^n is said to satisfy the Carathéodory conditions on Ω if
- for any x ∈ R^n, f(·, x) is measurable;
- for any t ∈ R, f(t, ·) is continuous;
- for each compact set U of Ω, there is an integrable function m_U(t) such that |f(t, x)| ≤ m_U(t) for all (t, x) ∈ U.
The classical problem for the Carathéodory equation
(A.1) ẋ(t) = f(t, x(t)), a.e. t ∈ I,
is to find an absolutely continuous function x defined on a real interval I such that (t, x(t)) ∈ Ω for t ∈ I and (A.1) is satisfied.
Proposition A.1 (Carathéodory).
If Ω is an open set in R^{n+1} and f satisfies the Carathéodory conditions on Ω, then, for any (t_0, x_0) in Ω, there is a solution of (A.1) through (t_0, x_0). Moreover, if the function f(t, x) is also locally Lipschitzian in x, with a Lipschitz coefficient that is a measurable function of t, then the uniqueness of the solution remains valid.
For the proof of Proposition A.1 and more results related to Carathéodory equation (A.1), the readers can refer to [11] and [14].
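The interplay of the conditions above can be illustrated numerically. The sketch below (not from the paper; the right-hand side and parameter values are invented for the illustration) integrates an equation whose right-hand side is only measurable in t, a step function, yet Lipschitz in x, so Proposition A.1 guarantees a unique absolutely continuous solution.

```python
# Minimal numerical illustration of Proposition A.1 (illustrative only):
# f(t, x) = a(t) * x with a(t) piecewise constant, i.e. merely measurable
# (discontinuous) in t but linear, hence Lipschitz, in x.
import numpy as np
from scipy.integrate import solve_ivp

def f(t, x):
    a = 1.0 if int(np.floor(t)) % 2 == 0 else -2.0  # step function of t
    return a * x

# Small max_step so the integrator resolves the jumps of a(t).
sol = solve_ivp(f, (0.0, 4.0), [1.0], max_step=1e-3)
print(sol.y[0, -1])  # the unique Caratheodory solution at t = 4 (exactly e**-2)
```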
APPENDIX B. REGULARITY RESULTS
In this section, we collect some fundamental estimates, mainly from [5]. We always suppose that conditions (L1)-(L3) are satisfied.
"Mathematics"
] |
Mathematical Modeling of Gas-Solid Two-Phase Flows: Problems, Achievements and Perspectives (A Review)
: Mathematical modeling is the most important tool for constructing theories of different kinds of two-phase flows. This review is devoted to the analysis of the application of mathematical modeling to two-phase flows in which solid particles serve as the dispersed phase. The main problems and features of the study of gas-solid two-phase flows are included. The main characteristics of gas flows with solid particles are discussed, and a classification of two-phase flows is developed based on these characteristics. The Lagrangian and Eulerian approaches to modeling the motion of a dispersed phase (particles) are described. A great deal of attention is paid to numerical simulation methods that describe turbulent gas flow at different hierarchical levels (RANS, LES, and DNS), different levels of description of interphase interactions (one-way coupling (OWC), two-way coupling (TWC), and four-way coupling (FWC)), and different levels of interface resolution (point-particle (PP) and particle-resolved (PR)). Examples of studies carried out on the basis of the identified approaches are included, as are examples of the mathematical modeling of various classes of gas-solid two-phase flows.
Introduction
Continuum flows carrying dispersed admixtures include [1-4] sandstorms, tornadoes, volcanic eruptions, forest fires, and precipitation in the form of hail, snow, etc. Examples of technical devices that use two-phase flows include the ducts of solid-fuel jet engines, pneumatic devices, and many others.
Today, we can state that there is continual growth of interest among researchers in the study of two-phase flows. This seems to be due to two factors. First, in recent years, there has been tremendous growth in the possibilities of both the mathematical and physical (experimental) modeling of two-phase flows. Second, the range of problems under study for various types of two-phase flows is expanding. The second circumstance largely stems from the first.
This review differs from other reviews on this topic, which have largely been devoted to narrower problems (for example, the problem of the influence of particles on gas turbulence, the problem of particle clustering, etc.). This review attempts to analyze the state of mathematical modeling in a broader sense. In the review, a classification of two-phase turbulent flows according to particle inertia is constructed. This classification covers almost the entire range of particle inertia. It is of great prognostic value, since it offers new dimensionless criteria that allow one both to analyze existing results at different qualitative levels and to conduct new studies of the various classes of two-phase flows determined by the concentration and inertia of particles.
The inertia of particles (which is primarily determined by their size and density) can vary over a colossal range (many orders of magnitude). Single-phase flows are characterized by a number of space-time scales that are determined by the magnitude of the inherent flow velocity, the flow regime (laminar, transitional, or turbulent), the flow geometry, etc. For the accurate modeling of particle motion, it is necessary to consider particles' interactions at different scales; these interactions are determined by (1) the averaged motion, (2) large-scale fluctuation motions, (3) fine-scale fluctuation motions, (4) different instabilities (for example, Tollmien-Schlichting instability in boundary layers, Taylor-Görtler instability in pipes, and Kelvin-Helmholtz instability in free shear layers), etc.
It is important to note that many of these forces, in one form or another, contain the velocity of the carrier phase u(τ), which is a random variable in a turbulent flow. Therefore, a question often arises regarding the applicability of a particular expression, obtained theoretically or empirically for other conditions (for example, for laminar flow or in the absence of velocity shear), to calculating the influence of these forces.
Multiplicity of Modeling Parameters
The main parameters are as follows: (1) three components of the average velocity (U_i, U_j, and U_k); (2) three components of the fluctuation (rms) velocity (⟨u_i²⟩^{1/2}, ⟨u_j²⟩^{1/2}, and ⟨u_k²⟩^{1/2}); (3) average temperature (T); (4) fluctuation (rms) temperature (⟨t²⟩^{1/2}); (5) double correlations of the various components of the fluctuation velocities (components of the Reynolds stress tensor) (⟨u_i u_j⟩); (6) double correlations of fluctuation velocity and fluctuation temperature (⟨u_i t⟩, ⟨u_j t⟩), etc. This multiplicity is explained by the fact that analogous parameters of the dispersed phase are added to the parameters indicated above (for example, averaged (V_i, V_j, V_k) and fluctuation velocities (⟨v_i²⟩^{1/2}, ⟨v_j²⟩^{1/2}, and ⟨v_k²⟩^{1/2}) and particle temperatures (T_p, ⟨t_p²⟩^{1/2})), including their size, size distribution, averaged and fluctuation (rms) concentrations (Φ, ⟨φ²⟩^{1/2}), as well as the parameters of the carrier phase in the presence of particles, the heat of phase transitions, and many others.
Multiplicity of Collision Processes
The main factors contributing to the occurrence of collisions between particles are listed as follows: (1) polydispersity, which leads to a difference in the averaged velocities; (2) the influence of the gradient of the averaged velocity of the carrier phase; (3) the influence of gravity (Archimedes); (4) the turbulent transport effect, which leads to the appearance of relative velocity between nearby particles; (5) the effect of clustering, i.e., an increase in the concentration of the dispersed phase in local regions of space; (6) electrostatic interactions; and (7) Brownian motion.
Multiplicity of Phase and Chemical Transformations
Phase transformations are not considered in this review because its subject is gas flows with solid particles (gas-solid two-phase flows). However, for nonisothermal two-phase flows, particle melting may occur during interphase heat exchange. The melting of particles in the gas stream leads to the transition of a gas-solid two-phase flow into a gas-liquid two-phase flow. The subsequent process of the crystallization (solidification) of droplets can cause the flow to "return" to its initial state.
Multiplicity of Dimensionless Parameters
An example of such parameters is the numerous Stokes numbers (see Section 3.3), which characterize the inertia of the dispersed phase through ratios of the particle relaxation time to the various time scales of the carrier gas, the Reynolds number of the particle, etc.
Main Characteristics of Two-Phase Flows
This section presents the main characteristics of two-phase flows and the classifications developed on their basis.
Particle Concentrations
The possible ranges of particle concentration (a classification) are given in [8,9]. There are three classes of two-phase flows: (1) dilute two-phase flows without the reverse effect of the dispersed phase; (2) dilute two-phase flows with the reverse effect of the dispersed phase; and (3) dense two-phase flows with intense collisional interactions between particles.
One-Way Coupling
To model the motion of particles in dilute two-phase streams (volume fraction of the dispersed phase Φ ≤ 10⁻⁶), that is, streams with a small Φ, "one-way coupling" (OWC) is applied.
Four-Way Coupling
Further growth (Φ > 10⁻³) requires the inclusion of the contribution of interparticle interactions to the momentum and energy transfer of the dispersed phase [15-18]. The chaotic motion of particles during their interaction is called "pseudoturbulence" to distinguish it from the actual turbulent fluctuations of particle velocities associated with their involvement in the turbulent motion of the carrier flow.
It should be noted that there is a clustering phenomenon in two-phase flows, which consists of a sharp increase in the concentration of particles in local areas. This significant rise in Φ leads to an increase in the probability of particle collisions, even in a dilute two-phase flow.
In reference to what has been discussed above, it is clear that even in flows with a small mass content of the dispersed phase, in which particles do not undergo collisions and do not have a reverse effect on the flow of the carrier continuous medium, clustering phenomena can lead to flow restructuring. The formation of local areas of increased particle concentration has been revealed experimentally or via calculation in various flows, including homogeneous isotropic turbulence [19,20], shear flows in pipes (channels) [21,22], flows in boundary layers [23], jet flows, wakes behind streamlined bodies, flows around blunt bodies [24], and free, concentrated vortices [25-27].
Particles' Dynamic Relaxation Time
The measure of particle inertia is the time of dynamic relaxation τ_p, expressed through the following quantities: ρ_p, the physical density of the particle material; τ_p0, the dynamic relaxation time of a Stokes particle; and µ, the dynamic viscosity of the carrier gas.
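The explicit expression is not reproduced above; the sketch below assumes the standard Stokes form τ_p0 = ρ_p d_p²/(18µ), consistent with the quantities just listed, and is an illustration rather than the review's own formula.

```python
def stokes_relaxation_time(rho_p, d_p, mu):
    """Dynamic relaxation time of a Stokes particle,
    tau_p0 = rho_p * d_p**2 / (18 * mu)  (assumed standard form).

    rho_p : particle material density, kg/m^3
    d_p   : particle diameter, m
    mu    : dynamic viscosity of the carrier gas, Pa*s
    """
    return rho_p * d_p**2 / (18.0 * mu)

# Example: a 50 um glass particle (2500 kg/m^3) in air (mu ~ 1.8e-5 Pa*s)
print(stokes_relaxation_time(2500.0, 50e-6, 1.8e-5))  # ~1.9e-2 s
```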
Stokes Numbers
Three dimensionless criteria are listed in [28,29]: Stk_f, Stk_L, and Stk_K, representing Stokes numbers in the averaged, large-scale, and small-scale fluctuation motions, respectively. Each has the form Stk = τ_p/T_i, where T_i denotes the corresponding characteristic time of the carrier phase.
Stokes Number in Time-Averaged Motion
For the averaged motion, the Stokes number is Stk_f = τ_p/T_f, where T_f is the characteristic time of the carrier phase in the averaged motion.
Stokes Number in Large-Scale Fluctuation Motions
In this case, the Stokes number assumes the form Stk_L = τ_p/T_L, where T_L is the characteristic time of the carrier gas in large-scale fluctuation motion (the temporal Lagrangian integral turbulence scale).
Stokes Number in Small-Scale Fluctuation Motions
The inertia of particles in small-scale fluctuation motions is characterized by the Stokes number Stk_K = τ_p/τ_K, where τ_K is the Kolmogorov time scale of turbulence.
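Taking the definitions above at face value (Stk = τ_p/T for each carrier-phase time scale), the three criteria can be evaluated together; the numbers below are arbitrary illustrative values, not data from the review.

```python
def stokes_numbers(tau_p, T_f, T_L, tau_K):
    """Stokes numbers in averaged (Stk_f), large-scale (Stk_L) and small-scale
    (Stk_K) fluctuation motions, assuming the definition Stk = tau_p / T."""
    return {"Stk_f": tau_p / T_f, "Stk_L": tau_p / T_L, "Stk_K": tau_p / tau_K}

# Illustrative carrier-phase time scales (T_f > T_L > tau_K):
print(stokes_numbers(tau_p=1e-3, T_f=1e-1, T_L=1e-2, tau_K=1e-3))
# {'Stk_f': 0.01, 'Stk_L': 0.1, 'Stk_K': 1.0} -- this particle follows the mean
# and large-scale motions but no longer tracks Kolmogorov-scale fluctuations.
```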
Classification of Turbulent Two-Phase Flows According to Particle Inertia
We will briefly describe the classification of turbulent two-phase flows according to particle inertia (see Figures 1 and 2), depending on the Stokes numbers [28,29]. Flow around fixed "frozen" particles. In this case, the particles have an extremely large amount of inertia: they remain completely static and their temperature does not change. An analogue of such a hypothetical class of two-phase flows is a single-phase flow in heat exchangers, where fixed pipes, through which the working fluid moves, act as such particles.
Lagrangian and Eulerian Modeling of Two-Phase Flows
Mathematical modeling methods play an important role in the study of the motion of solid particles. At the same time, excessively detailed modeling of a large number of processes, for which the available information is not always reliable, can decrease the overall reliability of the resulting model.
Reasons for Considering the Two-Phase Nature of Tornadoes
It is clear that attempts to describe all the varieties of two-phase flows with a single model can hardly be justified. As a consequence, for certain classes of flows (see Section 3), which are characterized primarily by the concentration of the dispersed phase and the Stokes numbers, specific models should be preferred.
Lagrangian Modeling
The system of equations is as follows (example taken from [30]), where x_p is the position vector (radius vector) of the particle, v is the instantaneous velocity vector of the particle, ω_p is the angular velocity vector of the particle, m_p is the mass of the particle, and I is the moment of inertia of the particle. Equation (8) describes the change in the angular velocity of the particle due to viscous interactions with the surrounding gas.
Due to the viscosity of the carrier fluid, a torque T acts on a rotating particle.
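The displayed system itself is not reproduced above. A minimal translational sketch, assuming the standard point-particle form dx_p/dt = v and dv/dt = (u − v)/τ_p + g (Stokes drag plus gravity; lift, Magnus forces and the torque equation are left out), is:

```python
import numpy as np

def advance_particle(x_p, v, u_gas, tau_p, g, dt):
    """One explicit-Euler step of the translational point-particle equations,
    dx_p/dt = v,  dv/dt = (u_gas - v)/tau_p + g,
    keeping only Stokes drag and gravity (an assumed minimal model)."""
    a = (u_gas - v) / tau_p + g
    return x_p + dt * v, v + dt * a

x_p, v = np.zeros(3), np.zeros(3)
u_gas = np.array([1.0, 0.0, 0.0])       # assumed uniform carrier-gas velocity
g = np.array([0.0, 0.0, -9.81])
for _ in range(10000):                  # integrate for 1 s with dt = 1e-4 s
    x_p, v = advance_particle(x_p, v, u_gas, tau_p=0.02, g=g, dt=1e-4)
print(v)                                # ~ u_gas + tau_p * g
```

After many relaxation times the velocity tends to u_gas + τ_p g, i.e., the particle acquires the Stokes settling velocity relative to the gas.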
Eulerian Modeling
Let us briefly consider the current approaches to constructing continuum equations for the motion of the dispersed impurity and analyze the features of describing its behavior for different classes of two-phase flows.
Algebraic and differential models. There are two main approaches to determining the velocity correlations of the dispersed phase. One of them is presented in [31,32], where A is the coefficient of a particle's involvement in the gas fluctuation motion.
The other approach consists of applying gradient relations, like the Boussinesq relations for a single-phase flow [33], or relations in the form presented in [34,35], where ν_p is the turbulent viscosity coefficient of the dispersed phase. Different methods for determining the value of ν_p have been described in the literature [34,35].
According to the first method, to transform stochastic equations of the Langevin type into a kinetic equation for a group of particles, a probability density function (PDF) describing the distribution of particle coordinates x, velocities v, and temperatures t_p is introduced, where averaging is carried out not over time but over realizations of the random fields of the carrier gas flow.
The equation for the PDF is then used, as presented in [36]. Here, T_pL and T_pLt are the times of interaction between particles (droplets) and energy-containing fluctuations of velocity and temperature, respectively; for a non-inertial impurity, T_pL = T_L and T_pLt = T_Lt.
The system of Equations (13)-(15) is not closed, as the equations contain quantities related to particle involvement in the fluctuating motion: the turbulent stresses ⟨v_i v_j⟩ and the turbulent heat flux ⟨v_j t_p⟩ in the dispersed phase, as well as the turbulent diffusion of momentum and heat arising from non-uniform particle concentration.
A mathematical description of momentum and heat transfer processes in the dispersed phase, of varying complexity, was developed in [36]; the central quantity is the energy of the fluctuations of the dispersed phase velocity.
The second method. The methods presented in [48,49] allow for the derivation of equations for the joint PDF distributions of the velocity and temperature of the dispersed impurity [50].
The third method. This method consists of constructing a closed kinetic equation based on the expansion of the characteristic functional into a series of cumulants [51,52].
Advantages and Limitations of Lagrangian and Eulerian Modeling
Let us consider Euler-Lagrange and Euler-Euler models with respect to describing the motion of flows of continuous media with solid particles, droplets, and bubbles [36].
The advantage of Euler-Lagrange (trajectory and stochastic) models is their ability to provide detailed statistical information about the motion of individual particles by integrating their equations of motion.
It should be noted that with an increase in the concentration of the dispersed phase, there are also difficulties in using Euler-Lagrange models [53].
Description of the Gas Flow Carrying the Particles
An increase in the volume fraction of the dispersed phase can affect the carrier medium (see Section 3.1.2). Let us consider the motion of a continuous medium (gas) in the presence of particles when the particles begin to have a reverse influence on the medium's characteristics.
Actual Equations
The relevant equations are the continuity, momentum, and energy equations (Equations (17)-(19)). The continuity equation (17) has a form similar to that of the single-phase flow equation.
Time-Averaged Equations
The resulting averaged continuity, momentum, and energy equations are Equations (20)-(22). Let us assume that the distributions of the averaged velocities and particle concentrations are known. We then need to determine the turbulent gas stresses ⟨u_i u_j⟩ and the turbulent heat flux ⟨u_j t⟩, as well as the correlations between particle concentration fluctuations and gas velocity and temperature fluctuations, ⟨φ u_i⟩ and ⟨φ t⟩, which can be represented in the form given in [54-56].
Equations for the Reynolds Stresses
Subtracting Equations (20)-(22) from Equations (17)-(19) yields the equations for the fluctuating motion. Equation (25) differs from the corresponding equation of a single-phase flow by the presence of the last group of terms on the right-hand side, which takes into account the dynamic influence of particles on the carrier flow.
The system of Equations (20), (21), (23), and (25) is unclosed, because Equation (25) contains unknown triple correlations of the velocity fluctuations of the carrier phase, as well as correlations involving fluctuations of particle concentration and velocity.
One-parameter models. The model based on the turbulence energy equation is the most common (as in the case of single-phase flow). To derive it, one multiplies the equation of fluctuating motion by u_i, sums over i, and then averages the result, which yields Equation (26). Equation (26) can be rewritten concisely as Equation (27), where the additional dissipation ε_p, caused by the presence of the dispersed phase, assumes the form (28). There have been several studies (e.g., [60-62]) in which the authors attempted to estimate the magnitude of the terms on the right-hand side of (28) for different classes of two-phase flows. They indicate that the second and third terms on the right-hand side of (28) are small compared to the first term. Thus, for quasi-equilibrium and non-equilibrium flows (see Figures 1 and 2), the first term on the right-hand side of (28) plays the determining role in the dissipation process. Considering this mechanism [63-65] leads to Equation (27) assuming a form in which P_p is the additional generation caused by the presence of particles. Two-parameter models. As in the study of single-phase turbulent flows, the two-parameter k-ε turbulence model has become the most widespread.
By analogy with the equation for a single-phase flow, in the case of two-phase flow we obtain an equation in which ε_εp is the reduction in dissipation due to the presence of particles. The equation for ε_εp is most often represented as in [66,67], where the constant C_ε3 can assume the following values: C_ε3 = 1.0 [68], C_ε3 = 1.2 [66], and C_ε3 = 1.9 [69].
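The closure itself is not reproduced above. A form frequently quoted in the literature for the particle-induced term in the dissipation equation, stated here only as an assumption about what the lost display contained, is:

```latex
% Commonly quoted closure (an assumption; the review's own display was lost):
\varepsilon_{\varepsilon p} \;=\; C_{\varepsilon 3}\,\frac{\varepsilon}{k}\,\varepsilon_{p},
\qquad C_{\varepsilon 3} = 1.0,\ 1.2,\ \text{or}\ 1.9 .
```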
Methods of Numerically Modeling Two-Phase Flows
The main methods of numerically modeling two-phase flows are presented below.
Particle-Resolved DNS
Particle-resolved direct numerical simulation (PR DNS) is the method that most fully describes the physics of two-phase flows. In this method, the flow around each individual particle is resolved. The behavior of each particle is then determined both by the external acting forces and by the aerodynamic drag force from the carrier gas (determined in the course of the calculation). This method is also applicable to the calculation of more complex two-phase flows carrying droplets or bubbles, where the interfacial surface may deform; this deformation is calculated using the aerodynamic force determined in the calculation process.
A well-known limitation of this method is the following circumstance. It is possible to calculate the movement of gas around each particle only when the step of the computational grid is small compared to the particle size. The application of this method is complicated when the particle size exceeds the size of the smallest turbulent vortices (the Kolmogorov microscale) and the number of particles is large.
To date, various numerical methods and algorithms have been developed to implement PR DNS. In [70], this method was used to calculate the force acting on a single stationary particle in decaying homogeneous isotropic turbulence (DHIT). Effective methods of implementing PR DNS include the immersed boundary method [71], which uses a Cartesian grid throughout the computational domain, and the lattice Boltzmann method [72], which also uses a Cartesian grid that is not aligned with the particle shape. Another method is Physalis [73], which uses a local analytical solution to determine the flow around each particle.
Particle Point Methods
Lagrangian methods of description are the oldest methods of describing the motion of particles. These methods can be used to calculate the motion of millions of particles. The condition for the applicability of Lagrangian approaches is the smallness of the particle size compared to the Kolmogorov spatial scale. In this case, the particles can be considered as point particles.
The most important characteristic of particle inertia is the dynamic relaxation time τ_p. For small values of τ_p, the instantaneous velocity of the particle is close to the corresponding velocity of the carrier gas, and the particles behave as tracers. In this case, an equilibrium flow is realized (Section 3). With an increase in τ_p, the particles can no longer fully track the turbulent fluctuations of the gas, and a quasi-equilibrium flow is realized. In this case, to describe the motion of particles, it is necessary to integrate the equations of their motion.
Lagrangian models can involve different levels of description of the turbulence of the carrier gas, ranging from the Reynolds-averaged Navier-Stokes equations (RANS), wherein only fields of averaged turbulence characteristics are calculated, to large-eddy simulation (LES) and direct numerical simulation (DNS), wherein only large vortices or vortices of all scales (down to the Kolmogorov scale) are resolved, respectively (see Figure 3). The particle concentration determines the required level of description of the interfacial interaction (see Figures 1 and 2): (1) the regime of motion of single particles, where their presence does not have a reverse effect on the characteristics of the carrier gas (one-way coupling, OWC); (2) the regime of weakly dusty flow (dilute two-phase flows), with a reverse effect of particles on the gas (two-way coupling, TWC); and (3) the regime of highly dusty flow (dense two-phase flows), where collisions of particles with each other play a significant role (four-way coupling, FWC).
Direct Numerical Simulation
To date, a significant amount of work has studied various problems in the physics of two-phase flows using DNS, describing the interphase interactions and the interphase boundary at various levels.
One of the first papers in which the behavior of point particles (PP DNS) in decaying homogeneous isotropic turbulence (DHIT) was studied was [74]. In this study, the motion of 432 particles was studied at a very small Reynolds number (Re_λ < 35). Only linear aerodynamic drag was taken into account in the equations of particle motion.
In later studies [75,76] devoted to particle motion, both in forced homogeneous isotropic turbulence (FHIT) and in decaying homogeneous isotropic turbulence (DHIT), emphasis was placed on the study of various methods of interpolation (linear interpolation, high-order Lagrangian interpolation, and high-order Hermite interpolation) of the gas velocity at the location of the particle.
A more complex case of turbulent two-phase flow, turbulent flow in a channel, is considered in [77,78]. In [77], in addition to the aerodynamic drag force, the Saffman lift force was also taken into account, and in [78], a more advanced Fourier-Chebyshev pseudo-spectral method was used to interpolate the gas velocity at the particle's location. To date, there have been numerous studies using the PP OWC DNS method for two-phase flows in channels [79], in pipes [80-82], under FHIT [19,20], and under DHIT [83].
With an increase in the concentration of particles, the particles begin to have a reverse effect on the characteristics of the carrier gas flow (see Section 3), so TWC DNS is necessary. This introduces additional difficulties in mathematical modeling. Firstly, in the equation of motion of a particle, it is not the initial (inherent in a single-phase flow) velocity of the gas that should be present but the "new" velocity of the flow modified by the presence of particles. In [84], it was suggested that the difference between these velocities is small if the diameter of the particles is smaller than the size of the numerical grid, d_p < L. This condition is almost always satisfied in the case of PP DNS. Secondly, it is necessary to introduce a source term in the equations of gas motion [85]. If the particle is smaller than the Kolmogorov scale (d_p < η_K), then there are no special problems. Otherwise (d_p > η_K), the question arises of the relevance of the point-particle assumption. In [86,87], calculations of a two-phase flow containing very many small particles at Φ = O(10⁻⁴) were carried out, and the number of particles was commensurate with the number of cells of the computational grid.
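The source term mentioned above can be illustrated with a particle-source-in-cell style deposition. The one-dimensional sketch below is a hypothetical minimal illustration of the idea, not the scheme of [85]; all names and values are invented for the example.

```python
import numpy as np

def deposit_drag_reaction(x_parts, f_drag, n_cells, box_len):
    """Accumulate the reaction force -f_drag of each point particle into the
    momentum source (per unit volume) of the cell containing it; a minimal
    1D sketch of two-way-coupling deposition on a periodic domain."""
    dx = box_len / n_cells
    cell_vol = dx                           # per unit cross-section in 1D
    source = np.zeros(n_cells)
    for x, f in zip(x_parts, f_drag):
        i = int(x / dx) % n_cells           # index of the containing cell
        source[i] -= f / cell_vol           # Newton's third law: gas feels -f
    return source

print(deposit_drag_reaction(np.array([0.10, 0.52, 0.53]),
                            np.array([1e-6, 2e-6, 2e-6]), 16, 1.0))
```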
Examples of studies in which PP TWC DNS modeling was performed include [88-91]. In [90], turbulent flow in a channel was studied. The volume concentration of particles was Φ ≈ 10⁻⁴. It was assumed that the particles were Stokesian (obeying the linear drag law). It was found that small particles (d_p < η_K) suppressed turbulence, whereas relatively large particles (d_p > η_K), on the contrary, caused turbulence intensification. In [89,90], a two-phase flow in a channel at Re_τ = 180, determined from the half-height of the channel, was simulated. It was revealed that the presence of particles reduces resistance and leads to an increase in longitudinal fluctuations of the gas velocity. At the same time, the presence of particles caused a decrease in gas velocity fluctuations in the other two directions and significantly reduced the Reynolds stresses. In [91], a two-phase turbulent flow in a channel was simulated for the same Reynolds number (Re_τ = 180), taking into account the nonlinearity of the particle drag law (non-Stokesian particles). It was found that particles with small Stokes numbers increased the intensity of turbulence, the Reynolds stresses, and the level of viscous dissipation, while particles with large Stokes numbers led to a decrease in the intensity of turbulence.
The number concentration of particles N_0 and the number of particles in a Kolmogorov vortex N_η are related as N_η = N_0 η_K³. The calculations in [92] allowed for the clear identification of two regimes. At Stk_K < 1, the presence of particles slows the decay of turbulent energy (first regime). At Stk_K > 1, particles accelerate the decay of turbulence (second regime). In [93], results of PR TWC DNS of a turbulent two-phase upward flow in a vertical channel are presented.
A further increase in particle concentration necessitates the consideration of interparticle collisions (see Section 3), which requires FWC DNS. Intense interparticle collisions influence the statistics of particle motion and, consequently, their back-reaction on the gas flow. This greatly complicates the mathematical modeling. Currently, several stochastic approaches have been developed in order to move away from simple deterministic calculations of pairwise particle collisions, which require immense computational time.
Examples of studies in which PP FWC DNS modeling was performed include [94,95]. In [94], the mathematical modeling of turbulent two-phase flow in a vertical pipe in the presence of small heavy particles was carried out over a wide range of particle mass concentration (M = 0.1-30). Various modeling techniques for real wall roughness were used to better match the results with experimental data. It was found that the results depend strongly on the wall roughness model used rather than on the variation of the parameters characterizing the inter-particle collision process. The calculations also revealed a decrease in turbulence intensity with an increase in particle mass concentration. In [95], the modeling of turbulent two-phase downward flow in a channel was performed at Re_τ = 642 and a particle mass concentration M = 0.8. The calculations were carried out for smooth and rough walls, where roughness was modeled by placing fixed tiny particles on the wall. It was discovered that rough walls enhance the suppression of turbulence caused by the presence of particles in the flow.
In [96], the interaction between a stationary homogeneous isotropic turbulent (HIT) flow and inertial particles, accounting for inter-particle collisions (PP FWC DNS), was studied via direct numerical simulation. The calculations were performed for a periodic cubic box with 128³ nodes for two values of the Taylor Reynolds number (Re_λ = 35.4 and Re_λ = 58), while varying the volume concentration of particles (from Φ = 1.37 × 10⁻⁵ to Φ = 8.22 × 10⁻⁵) and the Stokes number (Stk_K = 0.19-12.7). Elastic spherical particles with a diameter of d_p = 67.6 µm, corresponding to d_p/η_K = 0.1, served as the dispersed phase. The Stokes number was varied by changing the particle density over a wide range, namely, ρ_p = 150-18,000 kg/m³. The results of [96] showed that dissipation decreases by up to 32% with an increase in the Stokes number and the volume concentration of particles. It was shown that this maximum reduction in dissipation is overestimated by 7% when accounting for inter-particle collisions. The spectral analysis revealed a transfer of energy from large to small scales due to the particles, which explains the difference in dissipation.
Large Eddy Simulation
The use of the Reynolds-averaged Navier-Stokes (RANS) equations requires far fewer computational resources, which is their undeniable advantage. This approach has been successfully used in practical calculations. However, the Reynolds equations and the turbulence models used to close them do not have acceptable universality and, therefore, cannot be used to solve a wide range of applied problems.
Large eddy simulation (LES) is a compromise between DNS and RANS. This approach resolves the flow only on scales exceeding some given value. In LES, the Navier-Stokes equations filtered in space are solved, and only the large eddies are resolved directly. Small eddies have a more universal structure and are modeled using subgrid-scale models.
The LES-based solution contains richer information than the RANS-based solution. It contains not only the average flow characteristics (velocity, temperature, pressure, and concentration fields) and Reynolds stress distributions but also spectral characteristics (velocity, temperature, and pressure fluctuation spectra), two-point moments, and temporal and spatial scales of turbulence. Many of these characteristics are important for engineering applications. For example, temperature fluctuations play a fundamental role in the calculation of chemically reacting flows.
LES is similar to DNS, but the grid used is much coarser. Small vortices are approximated using a subgrid-scale model of turbulence. The most commonly used model is the dynamic Smagorinsky eddy-viscosity model [97]. Other well-known models are based on the scale-similarity assumption [98], Taylor series expansion [99], or approximate deconvolution [100].
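As a concrete illustration, the classical (static) Smagorinsky closure evaluates the subgrid eddy viscosity as ν_t = (C_s Δ)²|S̄|. The snippet below evaluates this for a pure shear, with an assumed typical value of the Smagorinsky constant; it is only a schematic sketch of the closure, not code from the works cited.

```python
import numpy as np

def smagorinsky_nu_t(dudy, delta, c_s=0.17):
    """Static Smagorinsky subgrid eddy viscosity, nu_t = (c_s * delta)**2 * |S|.
    For a pure shear du/dy, the strain-rate magnitude |S| = sqrt(2 S_ij S_ij)
    reduces to |du/dy| (since S_12 = S_21 = dudy / 2)."""
    return (c_s * delta) ** 2 * np.abs(dudy)

# Filter width 1 mm, resolved shear 100 1/s, assumed c_s = 0.17:
print(smagorinsky_nu_t(dudy=100.0, delta=1e-3))  # ~2.9e-6 m^2/s
```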
One of the early works that used the PP OWC LES method was [101]. In this work, particle dispersion was investigated for the case of homogeneous shear flow. The authors did not use the term LES, but they considered the spatially averaged Navier-Stokes equations for the gas and used time- and space-varying coefficients for the small-scale vortices. The calculations were carried out for only 48 passive particles, and the influence of subgrid scales on their motion was not considered.
The work presented in [102] investigated particle dispersion in a turbulent pipe flow using the PP OWC LES and DNS methods for different Reynolds numbers. The equation of particle motion considered the drag force, lift force, and buoyancy force. Due to the very low particle volume concentrations, their back-reaction on the gas and interparticle collisions were not considered. Moreover, the influence of the subgrid scales of the gas velocity was also not considered. The main conclusion of this work was that the dynamic relaxation time of particles plays an important role in their sedimentation.
The authors of [103] studied particle motion in a vertical channel with a very low particle volume concentration using the PP OWC LES method. The dynamic Smagorinsky approach, previously developed in [104], was used as the subgrid-scale model. A comparison of the results with those of DNS-based modeling showed good agreement. It should be noted that this work investigated the influence of subgrid-scale velocities on particle settling. For this purpose, an additional transport equation for the kinetic energy of subgrid-scale turbulence was used, revealing only a minor effect on the calculation results.
In [105], the authors performed calculations of a two-phase flow for the case of forced homogeneous isotropic turbulence (FHIT), in which the reverse influence of particles on the gas was taken into account, i.e., using the PP TWC LES method. The authors applied various subgrid-scale models to the equations of motion of the carrier gas. A very important conclusion was drawn: an increase in particle mass concentration leads to a decrease in the weighting coefficients in the dynamic eddy-viscosity model. As a consequence, the calculation error due to the use of subgrid-scale models was reduced for the two-phase flow compared to the single-phase flow.
The PP FWC LES method was used to account for particle collisions in [106] in the study of two-phase flow in a vertical channel at Re_τ = 644 and a volume concentration of up to Φ = 1.4 × 10⁻⁴. The impact of the drag force, gravitational force, and lift forces (due to the presence of gas velocity shear and particle rotation) on particle behavior was considered. A deterministic model was used to account for particle collisions. Conclusions were drawn about the significant influence of inter-particle collisions on the statistical characteristics of particle motion, including the concentration magnitude.
In [107], two-phase flow calculations were performed using the PP FWC LES method in a channel with a very high particle volume concentration, Φ = 1.3 × 10⁻². Among all the forces, only the drag force and gravitational force were considered. The calculations showed that the particles have a colossal effect on turbulence, leading to a thinning of the boundary layer, an increase in gas velocity fluctuations in the longitudinal direction, and, conversely, a reduction in gas fluctuations in the two other directions.
In [108], the parameters of a two-phase flow in a channel were calculated at a particle volume concentration of Φ = 4.8 × 10⁻⁴ and a flow Reynolds number of Re = 42,000, based on the height of the channel. The authors separately considered the effects of particle back-influence on the gas and of inter-particle collisions (PP TWC LES and PP FWC LES). They also compared various particle collision models (hard-sphere and soft-sphere), different wall conditions (smooth and rough), and different subgrid viscosity models (the Smagorinsky model and a dynamic model). The calculation results showed that the differences between the collision and subgrid models were insignificant. At the same time, accounting for particle collisions and wall roughness leads to better agreement with the available experimental data.
In [109], PP FWC LES was performed for a two-phase flow with particles at a volume concentration of Φ = 7.3 × 10⁻⁵ and a flow Reynolds number of Re = 11,900, based on half of the channel height. The authors used a subgrid model, developed earlier in [110], for the particle motion equation, as well as a deterministic model to calculate inter-particle collisions. It was shown that at such a small volume concentration of particles, their influence on gas turbulence is negligible. At the same time, it was found that particle collisions play a significant role. Good agreement was found between the results and the DNS data of [86] as well as the experimental data.
The authors of [109] later performed PP FWC LES simulations of a two-phase flow [111] in a horizontal pipe at a Reynolds number of Re = 120,000, based on the pipe diameter. The peculiarity of this study was the consideration of particle polydispersity and particle rotation, as well as the inclusion of not only the drag force but also the Saffman lift force and the Magnus force. Wall roughness was modeled by introducing coefficients of normal and tangential velocity restitution that differ from unity and by taking into account the so-called shadow effect at small wall collision angles.
In [112], PP FWC LES of a two-phase flow in a channel was performed in the presence of particle agglomeration effects. The main technique that allowed for the consideration of the appearance of particle agglomerates in the flow after collisions was the introduction of van der Waals forces, which are responsible for the phenomenon of cohesion. Various aerodynamic and energy systems can serve as examples of the future use of two-phase flows [113-119]. It should be mentioned that mixing and chemical reactions can occur in a two-phase flow; the coupling of CFD with chemistry can be used to evaluate the performance of devices [120-122].
The following conclusions can be drawn from the above description and analysis of works devoted to the mathematical modeling of two-phase flows.
As is known, the Reynolds number is the most important criterion for single-phase flows, and high Reynolds number values limit the use of the DNS method, as the requirements for computing power increase sharply. The Reynolds number of a particle, Re_p, determines the regime of flow around the particle (from the Stokes regime to the regime of formation of turbulent wakes behind moving particles) and is the most important criterion for two-phase flows. It is important to note that the particle Reynolds number can be determined not only from the difference in average velocities but also from the difference in fluctuating velocities between the carrier gas and the particles.
In the overwhelming majority of works, the ratio of a particle's diameter to the Kolmogorov spatial scale of turbulence is used as the only "two-phase" criterion. This is obviously not sufficient, especially in the case of flows in channels (pipes), where there may be a difference in the average velocities of the gas and the particles. It seems appropriate to use other criteria more widely, such as the particle Reynolds number Re_p, the Stokes number in averaged motion Stk_f, and the Stokes number in large-scale fluctuating motion Stk_L. This will allow for the mathematical modeling of various classes of flows in accordance with the developed classification of two-phase flows (see Figures 1 and 2).
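For reference, the particle Reynolds number and one widely used non-Stokesian drag correction (the Schiller-Naumann correlation, quoted here as an outside assumption rather than a formula from this review) can be evaluated as follows:

```python
def particle_reynolds(rel_speed, d_p, nu):
    """Re_p = |u - v| * d_p / nu, with nu the kinematic viscosity of the gas."""
    return rel_speed * d_p / nu

def schiller_naumann_factor(re_p):
    """Assumed Schiller-Naumann drag correction f = 1 + 0.15 * Re_p**0.687
    (commonly used for Re_p up to ~800); tau_p = tau_p0 / f, and f -> 1
    recovers the Stokes regime."""
    return 1.0 + 0.15 * re_p ** 0.687

re_p = particle_reynolds(rel_speed=0.5, d_p=100e-6, nu=1.5e-5)
print(re_p, schiller_naumann_factor(re_p))  # Re_p ~ 3.3, f ~ 1.34
```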
Conclusions
Two-phase flows are extremely widespread in nature and are widely used in practice. The extreme complexity of the physics of two-phase turbulent flows is determined by the factors described in detail in Section 2.2. In addition, in two-phase systems it is necessary to consider the processes of mixing and chemical reactions [120-122], which are of great importance for the operation of a wide range of technical devices. All of the above complicates the mathematical modeling of such flows.
In the last 20-30 years, there has been tremendous growth of interest among researchers in the numerical modeling of two-phase flows with particles. As a result, there has been significant progress in improving the methods and approaches for the mathematical modeling of such flows. Currently, there are advanced methods such as particle-resolved direct numerical simulation (PR DNS), which allows for the determination of local gas velocities influenced by the presence of particles and of the interphase interaction forces. This method has well-known limitations associated with the small number of particles and their "coarseness". Moreover, DNS is a very computationally intensive method for solving practical problems. Therefore, in the near future, RANS and LES methods, and the modeling of particle motion based on the Eulerian approach, are likely to be more in demand and to require further improvement.
In conclusion, we formulate what are, in our opinion, the following promising directions for further progress in the field of the mathematical modeling of two-phase flows with particles: (1) The development of mathematical modeling methods for two-phase flows with relatively large particles (non-equilibrium flows) that interact only with large energy-carrying vortices and are characterized by dynamic slippage (velocity difference) relative to the averaged motion. (2) The development of mathematical modeling methods for two-phase flows with large particles, which form turbulent wakes behind them. With an increase in particle concentration, these turbulent wakes will interfere with each other, and the particles will undergo collisions. (3) The development of mathematical modeling methods for two-phase flows containing particles of different sizes (polydisperse particles). Such flows are of interest to practicing engineers. Particles of different sizes will have different velocities and different effects on the gas flow and will tend to collide with each other at lower concentrations.
(4) The development of mathematical modeling methods for two-phase flows with particles complicated by phase transitions (melting and subsequent evaporation) and chemical reactions (primarily combustion reactions).
Nomenclature
d_p particle diameter, m
η_K Kolmogorov length scale, m
x_p particle radius vector, m
u vector of actual gas velocity, m/s
v vector of actual particle velocity, m/s
u_i, u_j, u_k actual velocity components of gas, m/s
v_i, v_j, v_k actual velocity components of particle, m/s
U_i, U_j, U_k time-averaged velocity components of gas, m/s
V_i, V_j, V_k time-averaged velocity components of particle, m/s
u_i', u_j', u_k' fluctuation velocity components of gas, m/s
v_i', v_j', v_k' fluctuation velocity components of particle, m/s
τ_K Kolmogorov time scale of turbulence, s
Figure 1. Classification of turbulent two-phase flows depending on particle inertia.
Figure 2. Simplified schemes of turbulent two-phase flows of different classes depending on particle inertia: (a) equilibrium flow, (b) quasi-equilibrium flow, (c) nonequilibrium flow, (d) flow with large particles, and (e) flow around fixed "frozen" particles.
Figure 3. Classification of approaches to the numerical simulation of two-phase flows depending on different levels of turbulence description and interphase coupling.
"Physics"
] |
Fitting a function to time-dependent ensemble averaged data
Time-dependent ensemble averages, i.e., trajectory-based averages of some observable, are of importance in many fields of science. A crucial objective when interpreting such data is to fit these averages (for instance, squared displacements) with a function and extract parameters (such as diffusion constants). A commonly overlooked challenge in such function fitting procedures is that fluctuations around mean values, by construction, exhibit temporal correlations. We show that the only available general-purpose function fitting methods, the correlated chi-square method and the weighted least squares method (which neglects correlation), fail at either robust parameter estimation or accurate error estimation. We remedy this by deriving a new closed-form error estimation formula for weighted least squares fitting. The new formula uses the full covariance matrix, i.e., rigorously includes temporal correlations, but is free of the robustness issues inherent to the correlated chi-square method. We demonstrate its accuracy in four examples of importance in many fields: Brownian motion, damped harmonic oscillation, fractional Brownian motion and continuous time random walks. We also successfully apply our method, weighted least squares including correlation in error estimation (WLS-ICE), to particle tracking data. The WLS-ICE method is applicable to arbitrary fit functions, and we provide publicly available WLS-ICE software.
Time-dependent ensemble averages appear in several scientific fields. Examples include particle tracking experiments where mean square displacements (MSD) are measured at different sampling times 1 , human travel dynamics where the dispersal distance as a function of time is measured 2 , single-molecule pulling experiments 3 , applications of fluctuation theorems 4 such as the Jarzynski equality 5 , measurements of the time-dependence of donor-acceptor distance dynamics 6 , tracer particle dynamics in complex systems 7 and correlation functions in spin systems and lattice gauge theories 8 . The final step when interpreting ensemble averages is often to fit a function to these averages in order to extract parameters.
Fitting a function to data is done so readily in science that one seldom considers the correctness of the standard go-to solution of the (linear and non-linear) weighted least squares (WLS) method [9][10][11] . One of the crucial implicit assumptions of the "standard" version of this method is that the fluctuations around mean values are independent. However, since for time-dependent ensemble averages the data is sampled along trajectories, this independence assumption is in general not satisfied when analyzing ensemble averages; heuristically, if in one trajectory an observable, such as the square displacement, was smaller than its ensemble averaged value at some time, it is typically still so at the next time step. For an illustrative example, see Figure S1 in Supplementary Information, which shows the time-evolution in simulations of fractional Brownian motion (FBM). Thus, the fluctuations around an ensemble averaged (time-dependent) observable will in general exhibit temporal correlations. Herein, the term trajectory is used in its widest sense: an observable (such as squared displacement) is chosen, and a trajectory is then measurements of this observable at different consecutive sampling times.
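To see this correlation concretely, a few lines of Python suffice (an illustration we add here; all variable names are our own): simulate Brownian trajectories, record the squared displacements at successive sampling times, and inspect the correlation of the fluctuations around the ensemble mean at neighbouring times:

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, dt, D = 2000, 50, 1.0, 0.5          # trajectories, times, time step, diffusion constant

# 1-D Brownian trajectories: cumulative sums of Gaussian increments
x = np.cumsum(rng.normal(0.0, np.sqrt(2 * D * dt), size=(M, N)), axis=1)
sq = x**2                                  # squared displacement per trajectory and time

# Correlation of fluctuations around the ensemble mean at two nearby times
fluct = sq - sq.mean(axis=0)
corr = np.corrcoef(fluct[:, 30], fluct[:, 31])[0, 1]
print(f"correlation between neighbouring sampling times: {corr:.2f}")
# Prints a value close to 1: the fluctuations are strongly correlated in time,
# violating the independence assumption of standard least squares.
```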
The question now arises of how severe the consequences of neglecting the temporal correlations in least squares fitting are. We demonstrate that such neglect leads to unreliable error estimation for parameters and can in some cases lead to underestimated errors for fitted parameters (such as diffusion constants) by more than one order of magnitude for our prototype systems (see below). The unreliability of the estimated errors can have detrimental effects when statistically interpreting the data: The 1σ (2σ) rule for Gaussian statistics states that 68% (95%) of the observed data should (on average) fall within ±1 (±2) σ from the estimated mean. For this rule to be meaningful one must have a correct estimator for the variance in estimated parameters, σ 2 .
To our knowledge, the only previous method for dealing fully with correlation in data when fitting functions to ensemble averages is the correlated chi-square method (CCM) 12,13 . This method is known to the lattice quantum chromodynamics community, but does not seem to have found widespread use. This could partly be because, while mathematically sound, the method has known numerical robustness issues 14,15 . We here carefully examine the CCM method and demonstrate that in general it only provides correct parameter estimation in a small region of the "phase space" (N, M), where N is the number of sampling times and M is the number of trajectories. Thus, it appears that the CCM is of limited general purpose use for fitting of time-dependent ensemble averages to a model function.
Although the least squares and WLS methods are common techniques for parameter estimation from ensemble averages, alternative methods exist, e.g., for inferring parameters from trajectories for biological systems [16][17][18] . In particular, for Brownian motion (BM) an optimal estimator for the diffusion constant has recently been derived [19][20][21] . Bayesian methods 11,[22][23][24][25][26] have also been used for parameter estimation for certain classes of systems. In general, when they apply, these methods give more precise parameter estimates than the WLS method. However, these newer approaches require as input a full stochastic model of the process, and we refer to this type of approach as model matching methods. By a full stochastic model we here refer to a model from which (in principle) any probability or average of a measured observable can be calculated. A simple example is BM, where the time-evolution is described by a Langevin equation with a noise term for which the statistics is fully specified. In contrast, the WLS and CCM methods are parametric function fitting 27 type methods, which can be used even if a full stochastic model is not available to describe the data at hand. An example from single-particle tracking, where function fitting is useful, is if one wants to determine a power-law exponent for the scaling of the mean-square displacement with time. In this situation, a function fitting procedure such as WLS can be used, without making any assumption about the underlying dynamics. Also, even if a full stochastic model is indeed available, it might be impractical to carry out a full model matching procedure.
In this article, we derive a mathematically rigorous expression for the variance and covariance of estimated parameters in WLS fitting. Our new error estimation formula for fitted WLS parameters takes into account the temporal correlations, which are intrinsic to ensemble averages based on trajectories. To avoid confusion we term the "standard" WLS method 9-11 (i.e., weighted least squares neglecting correlation) as WLS-ECE (Weighted Least Squares Excluding Correlation in Error estimation), whereas our new approach is referred to as WLS-ICE (Weighted Least Squares Including Correlation in Error estimation). In figures and discussion where we only consider parameter values and not the associated errors, we only use the term WLS. In contrast to the previous two methods (WLS-ECE and CCM), our new method has the desirable unique features of providing both (1) robust parameter estimates in the full phase space (N, M) with mean parameter values in agreement with theory for our prototype systems; (2) error estimates that reproduce the observed spreads in our fitted parameters.
As prototype models we use BM, damped harmonic oscillation (DHO) in a heat bath, FBM and continuous time random walks (CTRW). These have been identified as important model systems in a wide range of fields. BM is of interest to many fields of science [28][29][30] . Variants of DHO appear in physics, engineering and chemistry 31 . FBM has been applied, for instance, to protein dynamics 6 , in financial modeling 32 , for analyzing climate time series 33 , to describe tracer particle diffusion 7,34 and for modeling earthquake phenomena 35 . Recent applications of CTRW 28,36 include modeling of human travel patterns 2 and of molecular motions in cells and cell membranes 34,37 . However, we point out that our model systems are merely convenient examples for illustrating our WLS-ICE function fitting procedure, which can be applied to arbitrary fit functions. Our four model systems provide ideal test beds for our method, because the functions to be fitted, the mean position and the MSD, are known analytically for these systems. Moreover, trajectories are fast to generate for these systems, which facilitates stringent testing of the fitting methods based on a relatively large number of trajectories.
We finally point out two restrictions on the scope of our study: First, we do not concern ourselves with the model selection problem 11,38 , i.e., how to choose the "best" model or "best" form for the fit function. Second, in single particle tracking (one of the application fields of our results), it is common to separate between time-averaged observables (such as the time-averaged MSD) and ensemble averaged observables 39,40 . In certain cases, these averages are described by the same functional form, but this is not always so 40 . In this study our sole focus is on ensemble averaged observables.
Methods
In what follows, we provide a ready-to-use method, which is further motivated and detailed in Section A in Supplementary Information. The WLS-ICE procedure. In experiments or simulations one records a set of trajectories, here indexed by m. The task at hand is to fit some functional form f(t_i; θ) = f_i(θ), with K free fitting parameters θ = θ_1, …, θ_K, to some ensemble averaged observable ȳ_i = (1/M) Σ_{m=1}^{M} y_i^(m), where the index i runs over the N sampling times T = T_1, …, T_N (with N ≥ K). Herein, we use bold symbols to denote vectors or matrices. For BM, FBM and CTRW (see Results), which are all zero-mean processes, the observable used is the squared displacement, i.e., y_i^(m) = |x^(m)(T_i) − x^(m)(0)|², where x^(m)(t) is the position (a vector with d components, where d is the number of spatial dimensions) at process time t for trajectory m, and the start time for the simulation/experiment is t = 0. For DHO, our non-zero-mean prototype process, we instead use the position directly as the relevant observable, y_i^(m) = x^(m)(T_i). It is important to point out, however, that in the fitting procedure the quantity y_i^(m) can be any observable for trajectory m at sampling time T_i. We shall consistently use a 'bar' to denote a sample estimator (we only make use of sample means and sample covariances). The challenge in function fitting procedures 10 is to fit some function f_i(θ) to the data ȳ_i and thereby extract the model parameters, θ. This problem has previously been tackled using the WLS-ECE or CCM methods (reviewed in Section B in Supplementary Information).
Our approach, the WLS-ICE method, extends the WLS-ECE procedure with a correct error estimation formula which takes correlations in fluctuations around ensemble averages into account (see Introduction). For completeness and ease of application, we here provide the full details of the proposed WLS-ICE fitting procedure. We start by introducing a cost function, χ², based on the difference between the sample average and the fit function,

χ²(θ) = Σ_{i,j=1}^{N} [ȳ_i − f_i(θ)] R_ij [ȳ_j − f_j(θ)],   (2)

where R is a symmetric positive definite matrix. This cost function is to be minimized with respect to θ in order to determine the best parameter values, θ̂_a (a = 1, …, K) 41 . We use a 'hat' to denote parameters which have been estimated through minimization of the χ² cost function above and for the estimated (co)variance of such parameters. In the WLS method one uses the weights R_ij = δ_ij/Q̄_ii, where δ_ij is the Kronecker delta, and the (unbiased) sample "covariance matrix of the mean" is defined as

Q̄_ij = [1/(M(M−1))] Σ_{m=1}^{M} [y_i^(m) − ȳ_i][y_j^(m) − ȳ_j].   (3)

While this specific choice of R is used in our applications, we note that the results in this section, including the new error formula below, are valid for arbitrary choices of R. In Section A in Supplementary Information we elaborate on one "non-conventional" choice of R particularly adapted for BM.
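As a concrete illustration of Equations (2) and (3), the following minimal Python sketch (not the authors' released code; the function and variable names are our own) builds the ensemble average, the sample covariance matrix of the mean and the WLS weights from an (M × N) array of trajectory observables, and minimizes the χ² cost with scipy:

```python
import numpy as np
from scipy.optimize import minimize

def wls_fit(y, f, theta0):
    """Weighted least squares fit of f(theta) to ensemble-averaged data.

    y : (M, N) array, one row per trajectory, one column per sampling time
    f : callable f(theta) -> length-N model values at the sampling times
    Returns fitted parameters, the ensemble average, the sample covariance
    of the mean Q (Eq. (3)), and the diagonal weight matrix R.
    """
    M, N = y.shape
    ybar = y.mean(axis=0)                      # ensemble average
    dy = y - ybar
    Q = dy.T @ dy / (M * (M - 1))              # covariance of the mean, Eq. (3)
    R = np.diag(1.0 / np.diag(Q))              # WLS weights R_ij = delta_ij / Q_ii

    def chi2(theta):                           # cost function, Eq. (2)
        r = ybar - f(theta)
        return r @ R @ r

    res = minimize(chi2, theta0, method="Nelder-Mead")
    return res.x, ybar, Q, R
```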
The parameters, θ̂_a, obtained by minimizing χ² in Equation (2), have a (co)variance Δ_ab = 〈(θ̂_a − θ*_a)(θ̂_b − θ*_b)〉, where 〈…〉 denotes ensemble average. Throughout this study we use a 'star' to denote exact parameter values, i.e., estimated values as M → ∞. The variances of the fitted parameters are σ_a² = Δ_aa. As noted in the Introduction, this covariance depends on the temporal correlations. For a stationary process, it is well-known how to estimate the variance of a mean in the presence of temporal correlations, typically by expressing the variance in terms of the sum or integral of the auto-correlation function 42,43 . In the present context, such an estimation corresponds to fitting to a constant, f_i(t) = θ_1, and assuming all correlation functions only depend on time differences.
We here extend the above-mentioned results to non-stationary processes and arbitrary fit functions by deriving the analogous expression for Δ_ab using the full multivariate probability density for the fluctuations around mean values. Briefly, the covariance for the estimated parameters is defined as Δ_ab = ∫ [θ̂_a(ȳ) − θ*_a][θ̂_b(ȳ) − θ*_b] ρ(ȳ; θ*) dȳ, where the integral denotes an average over the multivariate probability density ρ(ȳ; θ*). We note that the dependence of the estimated parameters θ̂ on ȳ is implicitly determined by the minimization condition ∂χ²/∂θ_a = 0. Now, because all ȳ_i are averages over M identically distributed random numbers, for large M it immediately follows from the multivariate central limit theorem that ρ takes a multivariate Gaussian form with covariance matrix Q 44 . Two complications that occur in evaluating Δ_ab in closed form are that the ȳ-dependence of θ̂ is implicit and, in general, non-linear. Both of these challenges are solved by making a Taylor series expansion of θ̂_a − θ*_a in terms of ȳ − y* and implicitly using the minimization condition. The full derivation is given in Section A in Supplementary Information. The final result is the following estimator:

Δ_ab = 4 [H⁻¹ JᵀR Q R J H⁻¹]_ab,   (4)

where the indices a, b = 1, …, K, J_ia = ∂f_i(θ)/∂θ_a is the Jacobian of the fit function, and H_ab = ∂²χ²/∂θ_a∂θ_b is the Hessian of the cost function, both evaluated at θ = θ̂ (to lowest order in 1/M, H ≈ 2JᵀRJ). Equation (4) gives a mathematically rigorous expression (to lowest order in 1/M) for the covariance of the estimated parameters, and is our key result. It allows us to accurately estimate the covariance of any parameter fitted by minimizing the cost function in Equation (2). Notice that the correlations in fluctuations around mean values enter through the quantity Q, which is estimated using the usual sample estimate above. In practice, our general formula, Equation (4), is simple to implement and computationally fast.
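A minimal sketch of Equation (4) in the notation above (again our own illustrative code, using a numerical Jacobian and the lowest-order approximation H ≈ 2JᵀRJ) could look as follows:

```python
import numpy as np

def wlsice_errors(theta_hat, f, Q, R, eps=1e-6):
    """Covariance of fitted parameters, Eq. (4): Delta = 4 H^-1 J^T R Q R J H^-1.

    theta_hat : fitted parameters (length K)
    f         : callable f(theta) -> length-N model values
    Q         : (N, N) sample covariance matrix of the mean
    R         : (N, N) weight matrix used in the chi-square cost
    """
    K = len(theta_hat)
    # Numerical Jacobian J_ia = df_i / dtheta_a at theta_hat (central differences)
    J = np.empty((len(f(theta_hat)), K))
    for a in range(K):
        dth = np.zeros(K); dth[a] = eps
        J[:, a] = (f(theta_hat + dth) - f(theta_hat - dth)) / (2 * eps)
    H = 2.0 * J.T @ R @ J                  # lowest-order Hessian of chi^2
    Hinv = np.linalg.inv(H)
    delta = 4.0 * Hinv @ J.T @ R @ Q @ R @ J @ Hinv
    return delta                           # sigma_a = sqrt(diag(delta))
```

Note that zeroing the off-diagonal elements of Q here reproduces the WLS-ECE error estimate, while substituting R = Q⁻¹ reproduces the CCM covariance, matching the limits discussed next.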
The new error estimation formula, Equation (4), reduces to previously known results in specific limits. (i) Neglecting the off-diagonal elements of Q above, we recover the WLS-ECE error estimation formula 9 . (ii) By setting R = Q̄⁻¹ above, we recover the covariance estimation formula for CCM 10,12 . (iii) For a stationary process one seeks to fit a constant, f_i(θ_1) = θ_1, to data. For such a case, the minimization procedure (solving ∂χ²/∂θ_1 = 0) yields θ̂_1 = Σ_i R_ii ȳ_i / Σ_j R_jj, i.e., the parameter estimate is the (weighted) mean of the data. The error estimation, Equation (4), then reduces to the usual result 42,43 , σ_1² = Σ_{i,j} R_ii Q̄_ij R_jj / (Σ_k R_kk)².

Validation procedure. We tested the different fitting procedures on simulation data for our four prototype systems (generated as described in Section D in Supplementary Information). Estimated parameters, θ̂_a, were compared to their known exact values θ*_a (see Section C in Supplementary Information). For BM, the MSD behaves as f_BM(θ, t) = θ_1 t, with θ_1 = 2dD, where D is the diffusion constant. For DHO (at critical damping and with the initial conditions x(0) = x_0 and v(0) = 0), the mean position has the form 〈x(t)〉 = x_0(1 + ωt)e^(−ωt). For validating the WLS-ICE estimator for Δ_ab, we generated S simulation sets (with S = 500), each consisting of M trajectories. Using these S × M trajectories, we obtained S parameter estimates θ̂_a. From these S estimates we calculate the covariance Δ_ab (using sample estimators), which then serves as the true Δ_ab ("ground truth"). This true Δ_ab is then compared to estimates based on the WLS-ICE error formula, Equation (4) (which requires only one set of simulations), and the corresponding error estimates for WLS and CCM.
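The ground-truth comparison described above is straightforward to script; a compact sketch (illustrative only; `simulate_trajectories` and `fit` are hypothetical stand-ins for the generators of Section D and a fitting routine such as the ones sketched earlier) is:

```python
import numpy as np

def validate(simulate_trajectories, fit, S=500, M=100, N=50):
    """Compare the observed spread of fitted parameters over S independent
    simulation sets ("ground truth") with the predicted error estimates."""
    estimates, predicted_var = [], []
    for _ in range(S):
        y = simulate_trajectories(M, N)      # (M, N) array of observables
        theta_hat, delta = fit(y)            # parameters and covariance estimate
        estimates.append(theta_hat)
        predicted_var.append(np.diag(delta))
    true_var = np.var(np.array(estimates), axis=0, ddof=1)
    return true_var, np.mean(predicted_var, axis=0)
```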
Code availability. Computer codes (Python, Octave/MATLAB, and Lisp) which perform the associated fitting (determining θ̂_a) and error estimation (calculating Δ_ab), using a set of measured observables for different trajectories and at different times as input, are freely available under the GNU General Public License (GPL) 45 at http://cbbp.thep.lu.se/activities/wlsice/.
Results
Our first test of the fitting methods involves comparing histograms of fitted parameters for our four prototype systems (the number of trajectories, M, and the number of sampling times, N, were kept fixed). For both CCM and WLS, the S fitted values of a given parameter were binned into a histogram, see Fig. 1, and compared to a Gaussian centered on the mean of the estimated parameters with a variance from the average of the error estimates, using either the WLS-ECE or WLS-ICE method. For WLS, the histogram of fitted parameters is centered close to the true value (see also Figure S3 in Supplementary Information). However, only the WLS-ICE method gives a correct error estimation, Equation (4), as the predicted width from the WLS-ECE method, see Section B in Supplementary Information, is much too narrow. Clearly, the new error estimation of the WLS-ICE method performs extremely well. By contrast, the WLS-ECE method does not provide correct errors of the estimated parameters; this result extends beyond the chosen parameters for (N, M) in Fig. 1, and holds true under rather general conditions, see Fig. 2 (the exception is the prefactor for CTRW for very small M). Notice that while the parameters from the WLS-ICE and WLS-ECE methods are centered on the analytical prediction, this is not true for parameters from the CCM method, which show a strong bias (Fig. 1) for BM, FBM and CTRW (but not for DHO). Thus, WLS-ICE is the only method which yields an acceptable bias and correct error estimation for all model systems. Note that for the ensemble size used here, the distribution of fitted parameters is well described by a Gaussian; see Section F in Supplementary Information for a discussion of this topic. For a smaller ensemble size there are deviations from a Gaussian distribution, see Figure S2 in Supplementary Information, in particular for the prefactor for CTRW. From Fig. 2 we notice that the variance of the estimated parameter does not approach zero as N → ∞. Hence, the only way to decrease the variance of the estimated parameters further is to increase M (the WLS estimator is consistent with respect to M).
As we have seen (Fig. 1), the CCM method gives a pronounced bias in the parameter estimate for a specific choice of the number of sampling times N and trajectories M for the BM, FBM and CTRW systems, but not for DHO. In order to understand the generality of these findings, we numerically quantified the bias for an extended range of (N, M) values, and find that the pronounced bias for BM, FBM and CTRW (and lack of bias for DHO) is rather general, see Figure S3 in Supplementary Information. In Section E in Supplementary Information we investigate the expected bias for the CCM method further by analytical means. Indeed, we find that the parameter estimate from CCM fitting is unbiased for DHO. Mathematically, this result follows from the fact that the observable (mean position) used for the fitting is a linear function of the noise (this is in contrast to BM, FBM and CTRW, where the squared displacements are used as relevant observables). For BM, our analytical calculation in Section E in Supplementary Information shows that for large N the bias for CCM fitting becomes 〈θ̂_1〉 = θ*_1[1 + G(N)/M], where G(N) is the first-order bias term.

[Figure 1 caption, fragment: "... yields an acceptable bias for all model systems and correct error estimation, Equation (4). The rather large number of trajectories (M = 1000) was used in order to avoid ill-conditioning and major bias issues for the CCM fitting, compare to Fig. 3. Results for a smaller ensemble size are found in Figure S2 in Supplementary Information, where we see that also for FBM there can be pronounced bias effects for CCM fitting. For simulation parameters, see Section D.5 in Supplementary Information."]

[Figure 2 caption, fragment: "... In conjunction we show the true standard deviation of each of these methods computed from the parameters from the fit (lines), i.e., the width seen in Fig. 1, but for an extended range of N. It is evident that the standard deviation from the WLS-ECE fit is far too small for almost all N. Error bars show the standard error of the mean. For panels a-d there are small biases for M = 20 and M = 80 in the observable σ, as compared to the actual standard deviation. These biases can be removed using the jackknife procedure applied to Equation (4b), see Section G in Supplementary Information. For panel e, M = 20, there is a discrepancy between the WLS-ICE estimate σ and the actual standard deviation; we assign this to slow convergence towards the asymptotic form of the multivariate distribution ρ (see Methods) for CTRW (see also Figure S2 in Supplementary Information). For simulation parameters, see Section D.5 in Supplementary Information."]

[Figure 3 caption, fragment: "... and (e,f) continuous time random walk (CTRW), we investigate for which number of sampling times N and number of trajectory realizations M the fitting is more than 10% off from its analytical value, averaged over S = 500 simulations. As indicated, CCM is only reliable in a limited region (large M, small N), which can be extended by a first-order jackknife correction. For BM we also include when the analytically predicted first-order bias term for CCM, G(N), see Section E in Supplementary Information, gives a bias that is 10% of the true parameter value. We also show the boundary for when more than half of the S generated covariance matrices become ill-conditioned. Interestingly, for the CCM a second-order jackknife generally does more harm than good compared to the first order, which we elaborate on in Figure S4 in Supplementary Information. In contrast to CCM (non-jackknifed), the parameter estimations for the WLS method are acceptable for most N, M (region above the green curve), and can be extended even further using a jackknife approach (data not shown). For simulation parameters, see Section D.5 in Supplementary Information."]

In order to further investigate practical implications of the pronounced bias for CCM fitting, as well as other known issues with the CCM method 14,15 , we quantified in what parts of the phase space (N, M) the CCM and WLS-ICE fitting provide "acceptable" (see below) parameter estimation, see Fig. 3. First, we find that for large N and moderate to small M, the sample estimate for the covariance matrix C is ill-conditioned (the condition number exceeds the reciprocal of the machine precision). In practice this means that it cannot be numerically inverted, as required in the CCM parameter estimation procedure, without uncontrollable numerical errors. Second, for parts of phase space where ill-conditioning is not an issue, we, rather generously, defined an acceptable fit as one where the bias is smaller than 10% (compared to the analytic value, θ*_a). We find that for BM, FBM and CTRW there is indeed a thin region of the (N, M) phase space (large M and small N) where CCM works. For DHO, the bias effect is negligible, as previously noted. However, the ill-conditioning issue is as pronounced for DHO as for BM, FBM and CTRW. In contrast, for WLS ill-conditioning is not a problem (no matrix inversion is required in this procedure), and the bias in the parameter estimation is acceptable for most parts of the phase space. The bias inherent in the CCM method (for observables which are not linear functions of the noise, i.e., the MSD for BM, FBM and CTRW) can be reduced by applying the common jackknife procedure 46 , which removes bias terms proportional to 1/M, see Section G in Supplementary Information. By applying the (first-order) jackknife procedure to BM, FBM and CTRW (Fig. 3), we find that the bias is reduced, which somewhat expands the region of the phase space where the CCM method may be used reliably. Note that the computational time is a factor g (i.e., the number of groups into which the trajectories are pooled) larger for the first-order jackknife procedure compared to the non-jackknife case. Finally, the jackknifing procedure can be extended to remove higher-order bias terms (proportional to 1/M^n, with n = 2, 3, …) 46 . However, for the present case there is no guarantee that these higher-order terms have this functional form with respect to M, see Section E in Supplementary Information. Also, our results show that the second-order jackknife increased, rather than decreased, the bias in the parameter estimations for most parts of the phase spaces (Fig. 3). For BM, Figure S4 in Supplementary Information indicates that the reason for this is that the third-order term (the term proportional to 1/M³) is generally larger in amplitude (but of opposite sign) than the second-order one. Higher-order bias reduction comes at a computational price, since the number of numerical evaluations required for a second-order jackknife is g(g + 1)/2 times that of non-jackknifed parameter estimation. Due to these findings and the lack of a formal functional form for the bias beyond the 1/M term (see above), we do not recommend applying the jackknife procedure beyond first order. Finally, we point out that the new error estimation formula, Equation (4), remains valid also for jackknifed parameters, see Section G in Supplementary Information.
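The first-order grouped jackknife mentioned above is simple to implement; a generic sketch (our own illustration, not the paper's released code) for bias-correcting any estimator that maps a set of trajectories to a parameter vector is:

```python
import numpy as np

def jackknife_first_order(y, estimator, g=10):
    """First-order grouped jackknife (removes the bias term proportional to 1/M).

    y         : (M, N) array of trajectory observables
    estimator : callable mapping an (m, N) array to a parameter vector
    g         : number of groups into which the trajectories are pooled
    """
    M = y.shape[0]
    groups = np.array_split(np.arange(M), g)
    theta_full = estimator(y)                       # estimate from all data
    # Leave-one-group-out estimates
    theta_loo = np.array([estimator(np.delete(y, idx, axis=0))
                          for idx in groups])
    # Standard first-order jackknife combination
    return g * theta_full - (g - 1) * theta_loo.mean(axis=0)
```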
In Figure S5 in Supplementary Information we investigated the "goodness of fit" for the WLS and CCM procedures using a standard R² metric (see Section I in Supplementary Information). Examples of fitted curves are found in Figure S6 in Supplementary Information. A good fit is characterized by R² ≈ 1. We find that, in this sense, the new method provides "good" fits. In contrast, the CCM method provides "bad" fits for BM, FBM and CTRW, with R² ≪ 1 for large N. We point out that for the present type of data, R² is only a heuristic goodness-of-fit metric; its distributional properties are not known for general fit functions and correlated data.
When computational times are not a concern, error estimation using bootstrap resampling (or the related jackknife error estimation procedure) is a common approach (see Section H in Supplementary Information) 47 . We here find that bootstrap resampling in general performs as well as WLS-ICE for our four models (jackknife error estimation is slightly worse), see Figure S7 in Supplementary Information. Thus, our numerical results indicate that for the type of observables and fit functions used in our model systems, the bootstrap can be used for calculating the variance of parameters estimated through χ² minimization. However, we point out that such resampling techniques require us to repeat the χ² minimization several (herein, 100) times (the WLS-ICE method requires only one χ² minimization). Such minimization can be computationally costly, especially when the number of unknown parameters is large. Moreover, one must bear in mind that the bootstrap method is in general a heuristic method (there are cases where it does not apply 47 ).
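For comparison, bootstrap error estimation resamples whole trajectories with replacement and repeats the fit each time; a minimal sketch (illustrative; `fit` is assumed to return the fitted parameter vector) is:

```python
import numpy as np

def bootstrap_errors(y, fit, n_boot=100, seed=0):
    """Bootstrap standard errors: resample trajectories (rows of y) with
    replacement and refit; the spread of the refitted parameters estimates
    the parameter uncertainty."""
    rng = np.random.default_rng(seed)
    M = y.shape[0]
    thetas = np.array([fit(y[rng.integers(0, M, size=M)])
                       for _ in range(n_boot)])
    return thetas.std(axis=0, ddof=1)
```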
As a final alternative to the WLS-ICE method, we now briefly turn to error estimation using subsampling 43 . Subsampling refers to choosing the sampling times sufficiently sparsely to make the data points essentially uncorrelated (the "brute force" method in Figure S1 in Supplementary Information is an extreme case of subsampling where only one data point per trajectory is kept). After subsampling, error analysis is performed using standard error analysis for independent data. In order to properly choose N within this method, N is systematically decreased until the variance saturates to a constant, which is assumed to be the true variance 43 . Notice that for stationary time series, rather than reducing the number of sampling times, one can make full use of the data through the blocking method 42 . For non-stationary processes, however, the blocking method cannot be used. Figure 2 shows how estimated errors from our WLS-ECE and WLS-ICE analyses depend on the number of data points used, N. We find that the temporal correlations are so strong that the WLS-ECE method underestimates the errors down to very small N. Moreover, finding a sufficiently small N is difficult, since the error does not in general saturate to a constant level as N is reduced. These problems are circumvented by instead using the error estimation from the WLS-ICE method (i.e., using Equation (4) instead of the WLS-ECE equations in Section B in Supplementary Information).
As a final test of our method, we now turn to "real world" data. To that end, we use particle tracking data from a competition for testing particle tracking software, in which 14 teams worldwide participated 48 . We chose to analyze this data set for two reasons. First, it served as standard benchmark data within the particle tracking community. Second, since these movies are based on noisified and pixelated simulations (aiming to mimic actual experimental data), we know the values of the underlying model parameters. We used their Supplementary Videos 1 (medium particle density), 5 (low particle density) and 6 (high particle density). All these movies correspond to BM of vesicles, for which the expected MSD is 〈[x(t) − x(0)]²〉 = f_BM(θ, t) = θ_1 t, with θ_1 = 2dD = 8. For particle detection in the movies and linking of particle positions into trajectories we used Method 1 48 , i.e., the tracking method described by Sbalzarini et al. 49 , implemented as the ImageJ plugin "Particle Tracker" by the MOSAIC group 50 . Parameter settings for the plugin are listed in Section J in Supplementary Information. For each video we extracted trajectories, which were subsequently cut into trajectories consisting of 7 discrete process times (there is no memory in BM, so the start time is inessential). Notice that for the higher particle densities, fewer sufficiently long trajectories were produced as compared to the low density scenario (values for M are listed in Table 1). We subsequently divided the trajectories for each movie into two data sets, each with M trajectories. For the fitting procedures the first process time point, t_0 = 0, was discarded (since at t_0 the position is precisely known, the variance is zero and cannot be used as a weight in Equation (2)), thus leaving us with N = 6 sampling times. Results for the estimated parameter, θ̂_1, and the associated standard deviation, σ, are found in Table 1. We notice that the CCM method fails at predicting the correct parameter for the high and medium particle densities. This finding is simply due to the smaller ensemble size for these cases, which, in turn, is a result of the tracking software's inability to track and link particles in high and medium density settings. Comparing the WLS-ECE and WLS-ICE methods, we see that WLS-ECE underestimates the error by a factor of ≈2 for all movies. While this underestimation may seem minor, it will affect conclusions drawn from particle tracking data (see discussion in Introduction); in particular, it is noteworthy that for the WLS-ECE method only 2 out of 6 estimates fall within 2σ (confidence level 95%) of the expected result (= 8). In contrast, for WLS-ICE all six parameter estimates for θ_1 fall within 2σ of the expected value.
Let us finally briefly discuss how well one can expect to estimate a parameter from experimental/simulation data. For model matching procedures (see Introduction), the Cramér-Rao bound is useful, providing an expression for the smallest possible variance of the estimated parameter 10 . For the case of BM, optimal estimators (i.e., estimators which reach the Cramér-Rao bound) based on the measured displacements have been derived for model matching type fitting [19][20][21] . For function fitting, the question is rather whether an optimal cost function, i.e., an optimal weight matrix R, can be found (see Equation (2)). If the covariance matrix of the process is independent of the inferred parameters (up to a proportionality constant), and for linear fit functions, the generalized least squares method can be shown to be optimal among unbiased WLS methods 51 . Since the generalized least squares method requires as input the inverse of the true covariance matrix, it can be viewed as a hybrid method in between model matching and function fitting. In Figure S8 in Supplementary Information we show results of generalized least squares for BM (we use the term BMALS, Brownian motion adapted least squares), where we see that, indeed, the variance of the estimated parameter value is smaller for BMALS than for WLS-ICE, although the difference is not dramatic. Also notice that for M and N values where the CCM "works" (acceptable bias, see Fig. 3), the variances of the estimated parameters for CCM and BMALS agree, as they should.
Discussion, Conclusion and Outlook
A common task in many fields of science is that of fitting a model to the time-evolving mean of some observable. Since fluctuations around observed mean values, calculated based on trajectories, are in general correlated in time, the error estimates provided by a "standard" weighted least squares (WLS-ECE) fit can be more than one order of magnitude too small, see Fig. 2. Further, the correlated chi-square method (CCM), involving numerical inversion of a noisy covariance matrix, often shows numerical instabilities (ill-conditioning) or a strong bias in the fitted parameters, see Fig. 3. To overcome these problems, we derived a new error estimation formula, see Equation (4), for weighted least squares fitting, which does not require inversion of a noisy covariance matrix. With this formula at hand, a simple, yet accurate, function fitting procedure, WLS-ICE, can be followed: (A) perform a weighted least squares fit to the data, (B) use the new formula to estimate the errors. We demonstrated on four simulated prototype systems that the WLS-ICE method provides robust results, with a negligible bias in the fitted parameters and accurate error estimates. Our method's estimated errors are comparable to errors estimated using bootstrap and jackknife resampling for the four model systems. A strength of our method is that the fitting procedure does not have to be repeated multiple times. We separated between two types of parameter estimation procedures: model matching, where a full stochastic model is matched to the data, and function fitting, in which a full stochastic model is not known and one rather seeks to fit a function to the chosen ensemble-averaged observables. The weighted least squares method is a procedure of the function fitting type.

Table 1. Results of the three fitting methods for "real world" particle tracking data. Particle trajectories were extracted from the "Vesicle" Supplementary videos from the article by Chenouard et al. 48 using the "Particle Tracker" software (MOSAIC group). The trajectories were cut into shorter trajectories, all of length 7 discrete process times. The short trajectories were then divided into two independent sets of size M. We then performed fitting using the WLS-ICE, WLS-ECE and CCM methods for BM, discarding the first process time point, resulting in N = 6 sampling times. The expected parameter value is θ_1 = 8 (data are noisified and pixelated simulations with known properties). Since M was very small for video S6, we applied the jackknife procedure in both parameter and error estimation (all videos). Results before jackknifing are found in Table S1 in Supplementary Information. We notice that the CCM method gives ill-conditioning issues for the high-density movie, where few trajectories could be extracted. The WLS-ECE method underestimates the error as compared to the WLS-ICE method.
We have in this study not discussed methods for dealing with experimental errors, such as missing data etc. Such errors depend on the experimental setup and typically have to be dealt with in different ways depending on setup. For the single-particle tracking field (one of the application fields of our results), two major sources of experimental errors are: effects due to the finite size of pixels in cameras used to record the trajectory and motional blur effects (in a single time frame, a fluorescent molecule moves while being imaged). Methods for correcting these types of errors are discussed by Savin et al. 52 , Martin et al. 53 , Berglund 19 and Calderon 54 .
Parameter estimation through χ² minimization is ubiquitous throughout many fields of science, and we hope that our method and publicly available software will be found useful in these fields. | 8,534.4 | 2018-05-03T00:00:00.000 | [
"Physics"
] |
Dayside flow bursts and high-latitude reconnection when the IMF is strongly northward
The characteristics of dayside ionospheric convection are studied using Northern Hemisphere SuperDARN data and DMSP particle and flow observations for a period when the interplanetary magnetic field (IMF) was strongly northward, 13:00–15:00 UT on 2 March 2002. Although IMF Bx was positive, which is believed to favour Southern Hemisphere high-latitude reconnection at equinox, a four-cell convection pattern was observed and lasted for more than 1.5 h in the Northern Hemisphere. The reconnection rate derived from an analysis of the Northern Hemisphere SuperDARN data illustrates that the high-latitude reconnection was quasi-periodic, with a period between 4 and 16 min. A sawtooth-like and reverse-dispersed ion signature was observed by DMSP-F14 in the sunward cusp convection at around 14:41 UT, confirming that the high-latitude reconnection was pulsed. Accompanying the pulsed reconnection, strong antisunward ionospheric flow bursts were observed in the post-noon LLBL region on closed field lines, propagating with the same speed as the plasma convection. DMSP flow data show that a similar flow pattern and particle precipitation occurred in the conjugate Southern Hemisphere.
When the IMF is southward, reconnection at the low-latitude magnetopause between the closed magnetospheric field lines and the IMF field lines in the sheath, which has been referred to as low-latitude reconnection, results in the formation of open flux tubes, anti-sunward flow in the polar cap, twin-cell convection in the polar ionosphere, and a concurrent expansion of the polar cap (Siscoe and Huang, 1985; Cowley and Lockwood, 1992). Since impulsive dayside reconnection was first observed (Haerendel et al., 1978; Van Eyken et al., 1984; Goertz et al., 1985), it has been believed to be the primary mechanism for the transfer of flux from the Earth's magnetosheath to the magnetosphere, and episodes of such flux transfer are referred to as flux transfer events (FTEs, Russell and Elphic, 1978). Several different pulsed ionospheric signatures associated with FTEs have been studied in detail, such as poleward moving auroral forms (PMAFs) (e.g. Sandholt et al., 1998), rapid velocity transients or temporally enhanced flow velocities called "flow channels" or pulsed ionospheric flows (e.g. Pinnock et al., 1995; Yeoman et al., 1997; Provan et al., 1998; Milan et al., 1999, 2000; Lockwood et al., 2000; Davies et al., 2002), pulsed large-scale convection associated with cusp auroral transients (Moen et al., 1995), and poleward moving radar auroral forms (Wild et al., 2001).
When the IMF is northward, lobe reconnection, or high-latitude reconnection between lobe field lines and the IMF, begins, resulting in a multi-celled flow pattern with a region of sunward flow in the dayside polar cap (e.g. Dungey, 1963; Russell, 1972; Reiff and Burch, 1985; Bristow et al., 1998; Sandholt et al., 2000, 2001).

[Figure 1 caption, fragment: "... SuperDARN radars overlaid on the potential map at 13:30 UT, the middle time of the considered interval in this paper. This is plotted on a geomagnetic grid from the pole to 60°, with 12 MLT at the top and 18 MLT to the left. The data from the beams highlighted with blue lines will be studied in detail."]

Lobe reconnection is expected for IMF clock angles of less than ∼30°–40° (e.g. Sandholt et al., 1998). Moore et al. (2002) and Sandholt et al. (2003) found that subsolar reconnection will weaken and disappear for nearly northward B_z. There have been a wide range of proposed reconnection topologies that might exist for northward IMF B_z (Dungey, 1963; Russell, 1972; Cowley, 1981, 1983; Crooker, 1992; Onsager and Lockwood, 1997; Lockwood and Moen, 1999; Sandholt et al., 2000; Onsager et al., 2001), but there are only two basic types. The first type is lobe reconnection, which takes place only in one hemisphere, referred to as "lobe stirring" (Reiff, 1982), or lobe reconfiguration, which results in circulatory lobe convection cells in the polar cap. Another type, often referred to as lobe merging, occurs first in one hemisphere, and then the overdraped field lines reconnect in the opposite hemisphere. The flow cells with sunward polar cap flow now cross between the open and closed field line regions and do not remain in the region of open flux. Open flux may thus be converted to closed flux, and the convection flow streamlines in the merging cells cross the polar cap boundary. Flux is lost from the polar cap and hence the polar cap shrinks.
There is no doubt about the existence of lobe reconnection, resulting in sunward convection within the polar cap ionosphere (Maezawa, 1976), "four-cell" convection (Freeman et al., 1993), higher-latitude type 2 (north) auroras (Øieroset et al., 1997) and a "reverse" cusp ion dispersion signature (Matsuoka et al., 1996). However, some controversies remain over the temporal variability of lobe reconnection and the related convection signatures. For example, the merging process has been believed to be quasi-stationary (e.g. Onsager et al., 1995; Fuselier et al., 2000; Frey et al., 2002, 2003; Chang et al., 2004), while Chisham et al. (2004) noted a reconnection potential with transient enhancements. Lu et al. (2004) observed intermittent magnetic reconnection at the high-latitude magnetopause, and Provan et al. (2005) suggested that the dayside reconnection rate was modulated by variations in the solar wind dynamic pressure and the IMF B_z component.
In this paper we present a case study in which the IMF B_z was strongly northward; a four-cell convection pattern and antisunward flow bursts were observed in the Northern Hemisphere by SuperDARN and lasted for at least 1.5 h. A similar flow pattern and a reversed cusp ion dispersion signature were also observed by DMSP satellites. It gives us an excellent opportunity to study the temporal variations of lobe reconnection and its associated ionospheric convection in detail.
SuperDARN radar
The SuperDARN coherent HF radars (Greenwald et al., 1995) are designed to investigate field-aligned ionospheric plasma density irregularities (radar aurora) and large-scale ionospheric convection. The spectral characteristics of power, line-of-sight Doppler velocity, and spectral width (Hanuise et al., 1993) can be derived from the autocorrelation function of the returned signals. The Doppler velocity gives an estimate of the radar line-of-sight component of the plasma convection velocity (Ruohoniemi et al., 1987). Large-scale maps of the high-latitude convection can be derived from multiple radars using the "Map Potential" analysis method developed by Ruohoniemi and Baker (1998). In this method, the line-of-sight velocities are mapped onto a polar grid to determine a solution for the electrostatic potential, which is expressed in spherical harmonics, and the statistical model of Ruohoniemi and Greenwald (1996), parameterized by concurrent IMF conditions, is used to stabilize the solution where no data are available. In this study, an eighth-order spherical harmonic fit is employed to represent the Northern Hemisphere data. The flow vectors are derived using the SuperDARN line-of-sight velocity measurements and the transverse velocity component from the spherical harmonic fit.
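To illustrate the idea behind such a fit (this is not the actual Map Potential software, whose grids, basis functions and regularization are considerably more sophisticated), the problem can be posed as linear least squares: each radar measurement constrains only the line-of-sight projection of the drift, which is itself linear in the potential coefficients. A toy sketch with hypothetical basis functions:

```python
import numpy as np

# Toy "Map Potential"-style fit: express the ionospheric potential in a
# truncated basis, Phi(r) = sum_k a_k * phi_k(r), so the drift velocity
# (and hence each line-of-sight projection) is linear in the coefficients
# a_k, which are then found by least squares.

def fit_potential(positions, los_dirs, v_los, basis_grads):
    """positions : (P, 2) measurement locations
    los_dirs     : (P, 2) unit vectors along each radar line of sight
    v_los        : (P,) measured line-of-sight velocities
    basis_grads  : list of callables, grad phi_k(r) -> (2,) vector
    (The drift is taken proportional to a rotated potential gradient;
    constant physical factors are absorbed into the coefficients here.)"""
    rot = np.array([[0.0, -1.0], [1.0, 0.0]])      # 90-degree rotation
    A = np.array([[los.dot(rot @ g(r)) for g in basis_grads]
                  for r, los in zip(positions, los_dirs)])
    coeffs, *_ = np.linalg.lstsq(A, v_los, rcond=None)
    return coeffs
```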
During the period of interest, eight of the Northern Hemisphere radars were operated with excellent data coverage. Figure 1 shows the fields of view of the radars in Altitude Adjusted Corrected Geomagnetic (AACGM) coordinates (Baker and Wing, 1989), looking down on the geomagnetic north pole at 13:30 UT, the middle time of the interval under study. Starting from the dusk side and moving clockwise, the fields of view in the map are those of the Finland (F), Iceland East (E), Iceland West (W), Goose Bay (G), Kapuskasing (K), Saskatoon (T), Prince George (B) and Kodiak (A) radars. The highlighted beams are beam 9 of the Finland (F) radar, beam 5 of Iceland East (E) and beam 6 of Goose Bay (G), whose data will be analysed in detail. During this interval, the CUTLASS radars (F and E) covered the antisunward flow region, while the Goose Bay radar covered the sunward flow region. It should be noted, however, that the MLT locations of the fields of view of the radars changed over the interval. The fields of view extend over more than 18 h of magnetic local time, covering the whole dayside ionosphere with very rich backscatter, which makes the potential map reliable, especially for dayside convection. There was a gap in SuperDARN coverage in the evening sector, where the IMF-parameterized model data are used to stabilize the solution, which might have some influence on nightside convection but no obvious influence on our results concerning dayside convection.
On 2 March 2002, all eight radars were operated in a standard mode in which each radar scans through 16 beams of azimuthal separation 3.24°, with a 7 s dwell time for each beam and a total scan time of 2 min. Each beam is divided into 75 range gates of length 45 km, so in each full scan the radars cover 52° in azimuth and over 3000 km in range. The CUTLASS components, the Finland (F) and Iceland East (E) radars, are stereo radars (Lester et al., 2004), which have two identical channels. Channel A of these radars was operated in a standard mode, whereas Channel B was fixed on the beam directed towards Svalbard, namely beam 9 and beam 6, respectively, giving a much higher time resolution of 7 s compared to the other beams. Although most of the Southern Hemisphere radars were also in operation during that period, they received very little ionospheric backscatter. It is thus difficult to derive the large-scale convection pattern using SuperDARN data in the Southern Hemisphere to compare with the Northern Hemisphere.
DMSP particle and flow data
Measurements of ion and electron fluxes by the SSJ/4 (Hardy et al., 1984) and SSIES instruments on board the DMSP F13 and F14 spacecraft have been employed to investigate the pattern of particle precipitation and its relationship to the plasma flow. The DMSP spacecraft are in polar orbits (fixed in local time), sampling the ionospheric plasma at about 840 km. F13 is in a roughly dawn-dusk orientation, while F14 is in a 09:30-21:30 local time orientation. SSJ/4 points toward zenith at all times and provides 1 s resolution spectra of ion and electron flux between 30 eV and 30 keV, while SSIES provides snapshot pictures of ionospheric convection during the interval, allowing us to compare the flow characteristics in both hemispheres.
ACE spacecraft
Upstream interplanetary conditions for the period under study were monitored by the ACE spacecraft (Stone et al., 1998), located at GSM coordinates (X, Y, Z) = (225, -37, -3) Re during this interval. The solar wind and the IMF are measured by the SWEPAM and MAG instruments, respectively (McComas et al., 1998; Smith et al., 1999). The time lag of field changes from ACE to the dayside ionosphere has been estimated to be 67±5 min, using the algorithm of Khan and Cowley (1999). This estimate includes the propagation time between the ACE spacecraft and the subsolar bow shock, the transit time for the shocked and slowed solar wind across the subsolar magnetosheath, and the Alfvénic propagation time along open field lines from the subsolar magnetopause to the cusp ionosphere, in which the solar wind proton number density and speed are taken to be ∼10 cm⁻³ and ∼390 km s⁻¹, respectively, and assuming that the reconnection occurred at the subsolar magnetopause. During the interval of interest, the active reconnection was at lobe field lines, and thus the time lag is expected to be influenced by the polarity change in B_x and might be variable and underestimated.
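As a rough sanity check on this lag (our own back-of-the-envelope estimate, not the Khan and Cowley algorithm itself, which treats the bow shock, magnetosheath and field-aligned legs separately), the ballistic solar wind travel time alone accounts for most of the 67 min:

```python
R_E = 6371.0            # Earth radius, km
x_ace = 225 * R_E       # ACE upstream distance, km
v_sw = 390.0            # solar wind speed, km/s

t_ballistic = x_ace / v_sw / 60.0   # minutes
print(f"ballistic travel time: {t_ballistic:.0f} min")
# ~61 min; the remaining ~6 min of the quoted 67 +/- 5 min lag comes from
# the slowed magnetosheath crossing and the Alfvenic propagation along
# open field lines to the cusp ionosphere.
```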
Solar wind and IMF conditions
On 2 March 2002, IMF B_z and B_x were positive for the whole day except for some very short negative excursions, which gives us a very good opportunity to study dayside reconnection and the associated ionospheric convection and particle precipitation during a prolonged period of positive IMF B_z. In Fig. 2 we present data from the ACE spacecraft, lagged by 67 min via the method described above, from 12:00 to 16:00 UT on 2 March 2002. This is an extended interval describing the upstream interplanetary conditions for 1 h on either side of the interval of specific interest, namely from 13:00 to 15:00 UT, which is highlighted by the vertical solid lines. The top two panels show the solar wind proton number density and speed. The following three panels show the GSM components of the IMF. The clock and elevation angles of the field, defined as the angles of the IMF vector with the GSM z axis in the y-z and z-x planes, respectively, are shown in the next two panels.
It is clear that the solar wind and IMF conditions had become very stable during the main interval of interest, with variable IMF restricted to about 20 min before it. During 13:00-15:00 UT, the solar wind proton number density and velocity were about 10 cm⁻³ and 390 km s⁻¹. IMF B_x was positive with an average value of 4 nT, while B_y was small and negative, and B_z was strongly positive with a very constant value of 10 nT. The clock angle is mainly negative and its magnitude is less than 15°, while the elevation angle is positive with a value of about 20°. The change in IMF orientation at 12:40 UT is noted here, although it is not included as part of the main period. Before 12:40 UT, when positive IMF B_y was dominant, the Northern Hemisphere radars observed a two-cell convection pattern with a flow enhancement in the post-noon sector, associated with dayside magnetopause low-latitude reconnection, which demonstrates that a 67±5-min time delay is acceptable for magnetopause low-latitude reconnection. Around 12:48 UT, the Goose Bay radar observed that the high-latitude ionospheric flow changed from antisunward to sunward, which marks the beginning of the lobe reconnection associated with strongly positive B_z. Therefore, for the ionospheric convection related to lobe reconnection, a 75-min time lag may be more reasonable, with the appropriate lag being variable during the 2-h period. However, the IMF conditions were very stable for over 3 h after the northward turning, so that any errors in the calculated time lag should have no detrimental influence on our results.
Large-scale convection
In order to investigate the large-scale ionospheric flow, we have derived a sequence of convection maps using all the available SuperDARN data in the Northern Hemisphere, with the Map Potential analysis method. Before the IMF changed from B_y dominant to B_z dominant at 12:40 UT, the convection was dominated by a large circular dusk cell and a crescent-shaped dawn cell, with a flow enhancement in the post-noon sector, which is consistent with the Dungey-cycle flow driven in the presence of strong positive IMF B_y (Reiff and Burch, 1985; Ruohoniemi and Greenwald, 1996). As the IMF abruptly turned strongly northward around 12:40 UT, localised sunward flows began to appear within the polar cap, first appearing in the post-noon sector and then moving or expanding to the pre-noon sector, consistent with the negative turning of IMF B_y. The global convection pattern also changed from a distorted two-cell into a multi-cell configuration, and eventually into a clear four-cell pattern at 13:00 UT.
In Fig. 3 we present four representative examples of the flow pattern, imaged at 13:00, 13:30, 14:00 and 14:28 UT, which employ 2-min resolution data from the eight radars indicated in Fig. 1. Figure 3 shows that the four-cell convection pattern and the sunward flows in the polar cap lasted for at least 1.5 h, and also that there are some strong flow bursts occurring in the dayside post-noon sector (see the map at 13:30 UT).
To examine more closely the variability of the dayside flow, in the four bottom panels of Fig. 2 we extract parameters from the Map Potential plots during 13:00-15:00 UT, in which horizontal dashed lines show the average values. Figure 2h shows the total transpolar voltage in the flow maps, which lies typically between ∼15 and ∼40 kV and exhibits a decreasing trend. Figure 2i illustrates the potential difference between the foci of the two lobe cells, which is believed to be related to the lobe reconnection rate at the high-latitude magnetopause when the IMF is northward. It should be noted that this potential difference is of the opposite sense to the total transpolar voltage. The lobe potential difference varied between -11 and -23 kV, with some transient enhancements with periods between 4 and 16 min. This suggests that dayside high-latitude lobe reconnection during positive IMF B_z is a transient phenomenon, just like dayside low-latitude reconnection. The magnitude of the lobe potential difference also had a decreasing trend.
Inspection of the flow maps, such as those presented in Fig. 3, shows that, in addition to the temporally varying lobe cell activity, there were also strong flow bursts occurring in the post-noon sector, mainly in the antisunward direction. Figure 2j shows the peak antisunward flow speed measured in this post-noon sector, occurring in each 2-min Map Potential plot. In order to provide a representative value, we have averaged the vector velocities in bins of four "pixels" of the Map Potential algorithm, corresponding to 2° of latitude and typically ∼7° of longitude. Figure 2j demonstrates the bursty nature of the flow, with 6-24 min periods evident. The averaged value of the peak flow speed over the interval, 625 m s⁻¹, is indicated by the horizontal dotted line, emphasizing the peaks in the flow. The onsets of these peaks are marked by the vertical dashed lines. Comparison with the data in Fig. 2i shows that most of the onsets of flow bursts corresponded to the onsets of the lobe potential enhancements, although not all lobe potential enhancements had flow bursts associated with them. In Fig. 2k, the latitude of the antisunward flow burst peak velocity is presented, which illustrates that the position of the flow burst had a poleward moving trend and there is some evidence for a poleward jump shortly after each flow burst event.
Radar parameter plots
To examine the dayside convection flows in detail, Fig. 4 presents the line-of-sight velocity and spectral width of the beams highlighted in Fig. 1, in magnetic coordinates. The scales of the parameters are illustrated with the colour bars to the right of the related panels. It should be noted that, in these plots, only ionospheric backscatter is plotted, and positive (negative) velocities correspond to plasma drift toward (away from) the radar. The vertical dashed lines illustrate the onsets of the flow bursts identified in Fig. 2.
Figure 4a and d give the magnetic latitude-time-parameter plots for Beam 9 of the CUTLASS-Finland radar Channel B with a 7 s resolution. A small region of backscatter appeared for a very short period around 12:30 UT between magnetic latitudes 76°-78°, which is related to the positive B_y dominant IMF. Stronger and more extensive backscatter appeared after 12:50 UT at higher latitudes of 78°-82°, which occurred about 10 min after IMF B_z turned strongly northward. Subsequently, the antisunward flow bursts in the post-noon region propagated poleward, with broad spectral width.
Figures 4b and e give the magnetic longitude-time-parameter plots for Beam 5 of the CUTLASS-Iceland radar Channel B at 7-s resolution. Similar to Finland Beam 9, some weak backscatter appeared for a very short period around 12:30 UT near noon, and stronger ionospheric backscatter appeared in the post-noon sector after 12:50 UT; these are related to the B y-dominant and strongly northward IMF, respectively. After 12:50 UT, flow burst structures (seen clearly as sloping velocity structures in Fig. 4b) occurred in the region of antisunward flow and wider spectral width, propagating eastwards, away from the radar, at a speed of 600 m s^-1. It is interesting that the direction and magnitude of the propagation speed are similar to those of the flow itself.
Figures 4c and f give the magnetic latitude-time-parameter plots for Beam 6 of the Goose Bay radar, which probed the midday region during this interval, at 2-min resolution. Figure 4c shows that a strong antisunward flow appeared for a short period around 12:30 UT. At 12:48 UT the flow became sunward (towards the radar). The sunward flow persisted in the field of view for more than 2 h. This switch is related to the time-lagged IMF orientation change at 12:40 UT. DMSP particle data show (see section below) that the sunward flow region is the cusp.
DMSP flow and particle data
In Fig. 5a, the DMSP F13 track and the horizontal cross-track velocity data during 13:40-13:55 UT are overlaid on the Northern Hemisphere potential map at 13:46 UT. The DMSP velocity observations agree well with SuperDARN: sunward flow occurred in the polar cap and enhanced antisunward flows occurred at lower latitude; these regions are highlighted by red and blue coloured segments of the track, respectively. Because the convection changed during the ∼5 min it took F13 to cross the sunward flow region, the region observed by F13 was wider than that indicated by the Map Potential measurement.
Figure 5b shows the particle data observed by the DMSP SSJ/4 instrument and the plasma flow data from SSIES during the same period as Fig. 5a. As shown in the top two panels of Fig. 5b, the average energy and energy flux for electrons associated with the antisunward flows were about 200 eV and 10^10-10^11 eV (cm^2 s sr)^-1, respectively. The third panel shows that the electron energy spectrum in this region had spiky structures with sub-keV peaks. The coloured lines below this panel indicate the plausible source regions of the precipitation. The bottom panel presents the horizontal cross-track component of the flow data, in which positive is sunward. Red and blue lines at the bottom of this panel highlight the periods of antisunward and sunward flow identified in Fig. 5a. The characteristic energy and energy flux indicate that these particles, related to the antisunward convection, were mainly from the low-latitude boundary layer (LLBL) (Newell and Meng, 1992), interrupted by BPS or accelerated LLBL particles. Although the SSJ/4 instrument on F13 has had degraded low-energy ion detectors (below 1 keV) since 1995, we can still distinguish that there was no dispersed low-energy ion cutoff in the ion energy spectrum, which suggests that DMSP was on field lines that had been merged for a very long time or were closed (Chang et al., 2004). The convection reversals are located within the LLBL. The sunward flow region at higher latitude was mainly a region void of precipitation, interrupted by some spiky structures with BPS or mantle features in the electron spectrum. The spikes in this region were surrounded by a diffuse background much weaker than that in the lower-latitude antisunward flow regions, which suggests that the high-energy electrons might have been accelerated in the magnetosphere; this implies that the antisunward and sunward flow regions in the ionosphere were related to the LLBL and the polar cap/mantle, respectively.
Figure 6a shows the track of DMSP F14 during its passage over the northern polar region from dusk to the pre-noon sector during 14:30-14:45 UT, overlaid on the potential map and the spectral width data of the Iceland West radar at 14:40 UT. The segment of the track in red highlights the passage through the cusp region during 14:41:38-14:42:11 UT, which will be discussed later. The flow patterns observed by the two different systems are again similar. The flow direction was sunward in the cusp and changed to antisunward when the satellite had passed through the cusp region into the lower-latitude region. The spectral width is wider in the cusp region, and in fact the backscatter power (not shown) is also stronger there than in other regions.
The particle and plasma flow data from DMSP F14 during 14:41-14:44 UT are presented in Fig. 6b. During 14:41:38-14:42:11 UT, the same period as the red track segment in Fig. 6a, the average energy and energy flux were approximately 100 eV and 10^10-10^11 eV (cm^2 s sr)^-1 for electrons, and 1 keV and 10^10-10^11 eV (cm^2 s sr)^-1 for ions, indicating that these particles were from the cusp (Newell and Meng, 1992). The ion energy spectrum in the bottom panel of Fig. 6b shows that a low-energy ion cutoff and a reversed ion dispersion structure also occurred in this region, which suggests that lobe reconnection indeed happened at the high-latitude cusp (Matsuoka et al., 1996). Most interesting is that the ion dispersion structure was sawtooth-like (Morley and Lockwood, 2003), confirming that the lobe reconnection was pulsed, which will be discussed later. When DMSP F14 passed from the cusp into the lower-latitude region, the low-energy cutoff in the ion spectrum disappeared and the electron and ion average energies increased, indicating that the satellite had entered the LLBL region (interrupted by mantle precipitation), where the flow direction changed to antisunward.
In the equinox season, positive IMF B x is believed to favour lobe reconnection and four-cell convection in the Southern Hemisphere (Lockwood and Moen, 1999). However, too little ionospheric backscatter was observed by the southern SuperDARN radars to determine the southern convection pattern. Fortunately, the DMSP satellites measured flow data during two of their southern passages. In order to compare the conjugate hemispheric convection, in Fig. 7 the Southern Hemisphere DMSP data are overlaid on the Northern Hemisphere convection maps at the central times of the passages. To eliminate the asymmetry caused by IMF B y in the two hemispheres, we plot the Northern Hemisphere convection maps and the DMSP Southern Hemisphere tracks in AACGM coordinates (Baker and Wing, 1989), looking down on the geomagnetic north pole and south pole, respectively, so that the horizontal coordinates for the DMSP track run opposite to those of the northern potential map: 06:00 MLT is to the right for the potential map and to the left for the DMSP track. Figure 7a demonstrates that the sunward flow region was mainly in the post-noon (pre-noon) sector of the central part of the northern (southern) polar region, consistent with the convection pattern for small negative B y around 12:56 UT. Figure 7b shows the antisunward flow enhancement in the post-noon (pre-noon) sector of the Northern (Southern) Hemisphere polar region and the sunward flow at the central parts of the polar regions when IMF B y was negative. Thus a similar dayside convection pattern occurred in both hemispheres, suggesting that magnetopause reconnection occurred at both lobe boundaries during this period.
The post-noon flow bursts
The peak flow speed in the post-noon sector shows that the antisunward flow bursts occurred with periods of 6-24 min (see Fig. 2j). Viscous interaction may always produce some antisunward flow in the low-latitude boundary layers, independent of any reconnection. However, during the interval of interest, neither the solar wind plasma density nor the solar wind speed showed any obvious variation, indicating that such interaction contributed little to the flow bursts. On the other hand, the coincidence of the flow burst onsets with the enhancements in the lobe cell potential difference suggests that they are closely related (see Fig. 2). There are two alternative interpretations of this relationship: in the first, transient lobe reconnection caused both the lobe potential enhancements and the flow bursts; in the second, low-latitude reconnection in the post-noon sector caused the antisunward flow bursts, which in turn influenced the lobe cell potential. Both the SuperDARN and the DMSP observations support the former interpretation.
In the flow burst region, the spectral width was broad (see Fig. 4), which has been used as a criterion for open field-line regions under southward IMF. In this case, however, when IMF B z was strongly northward, the electron energies and energy fluxes observed by DMSP in this region suggest that these particles were from the low-latitude boundary layer (LLBL) (Newell and Meng, 1992). There was no low-energy ion cutoff in the ion energy spectrum, which suggests that DMSP was on field lines that had been merged for a very long time or were closed (Chang et al., 2004).
The radar observations demonstrate that the flow bursts were not caused by low-latitude reconnection in the post-noon sector. The Iceland East radar, whose beam was directed antisunward during the highlighted interval, observed flow burst structures, seen clearly in Fig. 4b, which propagated in the antisunward direction at a speed similar to the flow speed, around 600 m s^-1. These structures were located in the post-noon sector, between the high-latitude anti-clockwise lobe cell and the low-latitude anti-clockwise Dungey cell in the large-scale flow pattern. This suggests that the flow bursts and related precipitation were caused by reconnection occurring elsewhere; otherwise, the transients would propagate at a phase velocity in the ionosphere rather than at the plasma flow velocity. Provan et al. (1998) observed that the velocity of transients at the reconnection region was much higher than that of the ionospheric flow.
The latitude of the peak flow in the post-noon sector exhibits abrupt poleward jumps just after the flow burst onsets (see Fig. 2k), which might imply that the polar cap contracted when the flow bursts occurred, even though the DMSP data show that these structures were on the equatorward edge of the open/closed field-line boundary. Had low-latitude reconnection occurred, it would have opened closed field lines and enlarged the polar cap, resulting in equatorward rather than poleward jumps. Polar cap contraction, not only at the nightside boundary (Lester et al., 1990) but also at the dayside boundary (Moen et al., 2004), may effectively be controlled by tail reconnection (Cowley and Lockwood, 1992); however, no evidence of coincident tail reconnection is seen in the nightside magnetometer data, and the agreement between the antisunward flow bursts and the lobe potential variation implies that the poleward jumps in the flow burst structure were caused not by tail reconnection but by lobe reconnection occurring in both hemispheres.
Although DMSP F13 observed antisunward flow enhancements in both the post-noon and pre-noon sectors (see Fig. 5), the Map Potential analysis demonstrates that most of the dayside flow bursts occurred in the post-noon sector, associated with prolonged, small negative IMF B y. Lobe reconnection excites sunward flows in the cusp region; then, as a result of magnetic tension, the reconnected field lines slide around either the dawn or the dusk flank before being returned to the tail lobe by the magnetosheath flow. The prolonged negative IMF B y, although small in magnitude, led to more field lines moving towards the dusk flank, and therefore the flow bursts occurred periodically in the post-noon sector rather than the pre-noon sector.
Transient characteristics of the lobe reconnection
During 13:00-15:00 UT on 2 March 2002, both the northern SuperDARN radars and the DMSP satellites observed a four-cell convection pattern and sunward flow in the central part of the polar region, which shows that lobe reconnection indeed occurred in the Northern Hemisphere; the reversed cusp ion dispersion signature observed by DMSP F14 confirms this further. These observations also provide evidence that the lobe reconnection was transient.
We take the midpoint (P m) of the straight line between the foci of the two lobe cells, located at 12.0 MLT in AACGM coordinates. These ionospheric locations may be mapped to the outer magnetosphere with the T96 model (Tsyganenko, 1997), in which the solar wind plasma density, velocity, IMF B y, B z and D st are taken as 10 cm^-3, 390 km s^-1, -2 nT, 10 nT and -12 nT, respectively. The limiting and average locations of P m are mapped in Fig. 8, which shows the model field lines in the GSE XY and XZ planes, along with the estimated T96 magnetopause. This mapping suggests that the reconnection occurred on lobe field lines during this period and that the reconnection line was extended or variable in the GSE Y direction, although clearly the accuracy of the field-line model may be limited.
As a first-order estimate, the potential difference between the foci of the lobe cells, whose sense is opposite to that of the total potential difference across the whole polar region, represents the lobe reconnection rate. During the interval of interest, the lobe cell potential was variable (see Fig. 2i), with periods of 4-16 min, which suggests that the lobe reconnection was also transient with the same periods. This time scale is similar to the observations of Chisham et al. (2004).
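To make the first-order identification explicit (an assumed idealization, using the fact that 1 V = 1 Wb s^-1): the focus-to-focus voltage approximates the rate at which magnetic flux is transferred across the lobe merging gap,

$$ \Phi_{\mathrm{lobe}} \;=\; \int_{\mathrm{merging\ gap}} \mathbf{E}\cdot \mathrm{d}\boldsymbol{\ell} \;\approx\; \frac{\mathrm{d}\Phi_{B}}{\mathrm{d}t}, $$

so the observed 11-23 kV focus-to-focus difference corresponds to open flux being re-closed at roughly 1.1-2.3 × 10^4 Wb s^-1.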
The polar cap potential decreased during the interval of interest. Lockwood et al. (1999) noted that much of the voltage associated with antisunward flow during northward IMF is due to field-line closure in the tail and is caused by neither a viscous-like interaction nor field-line opening at the dayside magnetopause. No substorm signatures were observed during the periods of prolonged northward IMF. Wygant et al. (1983) showed that the range of residual transpolar voltages during northward IMF dropped progressively with time since the northward turning of the IMF. This indicates that such voltage is associated with open flux produced by the prior period(s) of southward IMF.
In addition to the periodic flow bursts discussed above, the DMSP particle data also demonstrate that the related lobe reconnection was transient. During subsolar reconnection under southward IMF (B z < 0), "stepped" and "sawtooth" signatures have been shown to be caused by pulsed reconnection (Morley and Lockwood, 2003). During lobe reconnection under northward IMF (B z > 0), the sunward convection results in a reversed cusp ion dispersion (Woch and Lundin, 1992). As the ion precipitation evolves with time elapsed since reconnection, the particle dispersion process is just as for subsolar reconnection: newly reconnected field lines allow particles from the magnetosheath to precipitate through the magnetospheric cusp into the ionosphere. Velocity dispersion as the particles travel along the field lines from the point of injection means that the more energetic particles of any species reach the ionosphere before the less energetic ones, so the low-energy ion cutoff is directly related to the time elapsed since reconnection. Since the convection direction is opposite to that during southward IMF, the ion dispersion is reversed. The discontinuity of the ion dispersion observed here (see Fig. 6) confirms that the lobe reconnection was transient.
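The cutoff-time relation can be written down explicitly. In a simple time-of-flight picture (a sketch, not the authors' derivation), ions injected at the reconnection site at time $t_r$ and travelling a field-aligned distance $d$ arrive at the ionosphere at time $t$ with minimum speed $d/(t-t_r)$, so the low-energy cutoff is

$$ E_{c}(t) \;\approx\; \tfrac{1}{2}\,m_i \left(\frac{d}{t-t_r}\right)^{2}, $$

which decays monotonically after each reconnection pulse; a fresh pulse resets $t_r$ and jumps $E_c$ back up, producing the sawtooth-like dispersion seen by F14.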
Although the flow speed observed by the Goose Bay radar in the sunward flow region was not particularly variable (see Fig. 4c), the reconnection is not necessarily steady. When lobe reconnection occurs, the sunward flow is driven by the magnetic tension, which opposes the sheath flow, so the sunward flow speed and its variations are suppressed. On the other hand, after the newly reconnected field lines become overdraped to the dusk or dawn side, the magnetic tension and the sheath flow act in the same direction, so the variations in the antisunward flow are amplified. Owing to the small negative B y, most of the reconnected field lines were overdraped towards the post-noon sector, causing flow bursts in this region.
The IMF effect on lobe reconnection
The time lag is usually very important in any analysis of the IMF effect on solar wind-magnetosphere-ionosphere coupling. The lag may vary and differs for reconnection at different locations, especially between low-latitude and high-latitude reconnection; it is therefore very difficult to obtain an exact IMF time lag. In this study, we use a constant IMF time lag of 67 min, calculated with the algorithm of Khan and Cowley (1999) designed for subsolar reconnection, although the high-latitude ionospheric convection switching from antisunward to sunward observed by the Goose Bay radar demonstrates that this lag is underestimated for high-latitude reconnection. However, since prolonged, stable IMF conditions prevailed over the interval under study, the uncertainty in the lag has no significant influence on our results.
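For orientation, such a lag is dominated by solar-wind advection from the upstream monitor to the bow shock, plus a few minutes each for magnetosheath transit and Alfvénic propagation down to the ionosphere. The sketch below reproduces the right order of magnitude with illustrative values; the 67 min quoted above comes from the full Khan and Cowley (1999) algorithm, not from this simplification:

```python
# Crude three-stage delay estimate; every component value here is an
# illustrative assumption, not the authors' exact calculation.
RE = 6371e3  # Earth radius in metres

def imf_lag_minutes(x_sc_re=220.0, v_sw_ms=390e3,
                    shock_x_re=14.0, sheath_min=4.5, alfven_min=2.0):
    # advection from the spacecraft (~220 RE upstream) to the bow shock
    advection_min = (x_sc_re - shock_x_re) * RE / v_sw_ms / 60.0
    # plus assumed magnetosheath transit and Alfven-wave travel times
    return advection_min + sheath_min + alfven_min

print(f"estimated lag ~ {imf_lag_minutes():.0f} min")  # ~63 min for these inputs
```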
During the interval of interest, the IMF conditions were quite stable, with strongly positive B z, positive B x, and small negative B y. Moore et al. (2002) calculated the shape of the X line for the full range of clock angles using a generic T89 internal magnetic field model. They found an X-line shape very similar to that inferred by Sandholt et al. (2003) for dominant IMF B y, but also found that subsolar reconnection weakens and disappears for nearly northward B z (small clock angles). As the clock angle increases from near zero, the X line traverses the subsolar region, producing some weak antisunward flow but mainly azimuthal flows corresponding to overdraping effects, as we note here.
At equinox there is no dipole tilt. Since B x was positive and B y negative, the phase plane containing the IMF first hits the magnetopause poleward and duskward of the southern cusp, conditions believed to favour southern lobe reconnection (Lockwood and Moen, 1999). The DMSP flow data from the southern polar passes show that lobe reconnection and a four-cell convection pattern occurred in both the Northern and the Southern Hemisphere. Unfortunately, too little ionospheric backscatter was observed by the southern SuperDARN radars to study the convection pattern in the Southern Hemisphere. In contrast, the excellent coverage of ionospheric scatter from the northern SuperDARN radars showed that the four-cell convection pattern in the Northern Hemisphere lasted for at least 1.5 h. The sunward ionospheric flow occurred preferentially at pre-noon.
An interpretation of the proposed magnetopause reconnection geometry is illustrated in Fig. 9. Figure 9a is a view of the magnetosphere from the dusk flank, in which points X 1 and X 2 are reconnection locations and the dotted line is the open/closed field-line boundary. Figure 9b is a view of the magnetopause from the Sun. When B x is positive and the elevation angle is about 20°, the strongly positive B z and small IMF B y cause reconnection to occur first at the southern dusk-side lobe in Fig. 9a. The newly reconnected field lines are draped sunward and westward by magnetic tension, and the draped interplanetary field lines from the Southern Hemisphere then reconnect with the northern lobe field lines on the dawn side, resulting in sunward and eastward flow in the northern high-latitude region. In other words, lobe reconnection might first occur in the open-field region in the Southern Hemisphere and then on the open/closed field-line boundary in the Northern Hemisphere, producing four-cell convection in both hemispheres and the closure of open magnetic flux.
During the interval, the locations of the flow bursts had a poleward-moving trend, which indicates that the polar cap tended to contract. At the onset of the flow bursts, the peak location of the bursts also jumped poleward, which suggests that the polar cap contraction might be caused partly by the re-closing of open field lines through lobe reconnection occurring in both hemispheres.
Summary
In this paper we present SuperDARN and DMSP satellite observations obtained when the IMF was strongly northward. Although the IMF B x was positive during most of the interval of interest, which is believed to favour Southern Hemisphere lobe reconnection at equinox, a four-cell convection pattern occurred in the Northern Hemisphere and lasted for at least 1.5 h, with periodic flow bursts in the post-noon antisunward convection region associated with LLBL precipitation on closed field lines. The flow burst structures propagated at a speed similar to the plasma convection, and the location of the flow bursts jumped poleward at the onset times, which suggests that the flow bursts were not caused by low-latitude reconnection. The flow burst onsets were instead related to the onsets of the lobe potential enhancements, which implies that transient lobe reconnection produced the periodic flow bursts. A sawtooth-like reversed cusp ion dispersion signature observed by DMSP F14 in a sunward flow region confirms that transient lobe reconnection was the source of the sunward convection in the four-cell pattern.
Flow data from DMSP during its Southern Hemisphere passes show that the four-cell convection also occurred in the Southern Hemisphere. During most of the interval, IMF B x was positive, which favours southern lobe reconnection, so the lobe reconnection might occur in the Southern Hemisphere first, with field lines subsequently overdraped and re-closed by the northern lobe reconnection. Poleward jumps in the position of the flow bursts imply that the flux re-closure caused by lobe reconnection contributed to the polar cap contraction.
Fig. 1. Fields of view of the eight Northern Hemisphere SuperDARN radars overlaid on the potential map at 13:30 UT, the middle of the interval considered in this paper. This is plotted on a geomagnetic grid from the pole to 60°, with 12 MLT at the top and 18 MLT to the left. The data from the beams highlighted with blue lines are studied in detail.
Fig. 2. Upstream interplanetary observations from the ACE spacecraft during 12:00-16:00 UT on 2 March 2002, lagged by 67 min to account for the propagation delay to the ionosphere, and parameters derived from potential maps observed by the Northern Hemisphere SuperDARN radars. The top two panels show the solar wind density (a) and velocity (b). The following three panels show IMF B x (c), B y (d) and B z (e) in GSM coordinates. The next two panels show the clock (f) and elevation (g) angles of the IMF. The bottom four panels show the parameters derived from the Map Potential analysis, namely the total transpolar potential difference (h), the potential difference between the foci of the two lobe cells (i), and the magnitude of the peak flow speed (j) and the latitude (k) of the flow bursts in the post-noon sector. The vertical dashed lines illustrate the onset times of the flow bursts in (j).
Fig. 3. Streamlines and vectors of the ionospheric flows derived from the Northern Hemisphere SuperDARN velocity measurements, shown on geomagnetic grids and obtained from the Map Potential algorithm. Maps are shown at 13:00, 13:30, 14:00 and 14:28 UT. The direction and magnitude of the lagged IMF are indicated in the upper right-hand corner of each map. The average auroral oval for Kp = 1 and the Heppner-Maynard convection boundary are also overlaid on each map.
Fig. 4. SuperDARN line-of-sight velocity and spectral width measurements from the radar beams indicated in Fig. 1, shown for the interval 12:00-15:00 UT on 2 March 2002. The vertical dashed lines show the onset times of the five pulsed flow features illustrated in Fig. 2j.
Fig. 5. (a) The track and horizontal cross-track velocity data of DMSP F13 during 13:40-13:55 UT, overlaid on the potential map at 13:46 UT. The segments of the track in the sunward and antisunward flow regions are highlighted by red and blue block lines. (b) Ion and electron data from the SSJ/4 instrument and particle flow data from SSIES on board the DMSP F13 spacecraft during 13:40-13:55 UT. From top to bottom: electron and ion energy flux in eV (cm^2 s sr)^-1, electron and ion average energy in eV, electron energy-time spectrogram, and ion energy-time spectrogram. The coloured lines under panel (3) illustrate the plausible source regions of the precipitation. The red and blue lines under the bottom panel illustrate the intervals when the satellite passed through the sunward and antisunward flow regions, respectively.
Fig. 6. (a) The track and horizontal cross-track velocity data of DMSP F14 during 14:30-14:45 UT, overlaid on the potential map and the polar plot of the spectral width data from the Iceland West radar at 14:40 UT. The track segment where DMSP F14 observed a reversed cusp ion dispersion signature is in red. (b) Ion and electron data from the SSJ/4 instrument and particle flow data from SSIES on board the DMSP F14 spacecraft during 14:41-14:44 UT, in the same format as Fig. 5b. The coloured lines under panel (3) illustrate the plausible source regions of the precipitation.
Fig. 7. The track and horizontal cross-track velocity data from the Southern Hemisphere passages of DMSP F13 during 12:50-13:05 UT (a) and DMSP F14 during 13:43-13:58 UT (b), overlaid on the Northern Hemisphere potential map at the central time of each passage. The Northern Hemisphere potential map and the DMSP southern passage data are plotted in AACGM coordinates, looking down on the geomagnetic north pole and south pole, respectively.
Fig. 8. A field-line trace of the average and limiting locations (see text for details) of the high-latitude reconnection signatures observed with SuperDARN, using the T96 model. The field lines are projected onto the GSE XY and XZ planes. The lobe field lines map close to the estimated T96 magnetopause and span a wide range of GSE Y values.

Fig. 9. Schematic illustrations of the evolution of lobe reconnection for strongly positive IMF B z, small negative B y and positive B x at equinox: (a) viewed from the dusk flank, lobe reconnection occurs first in the Southern Hemisphere at X 1, and overdraped lines are re-closed by subsequent lobe reconnection in the Northern Hemisphere at X 2; (b) viewed from the Sun, the southern lobe reconnection occurs at the dusk flank and results in sunward and westward flow, while the subsequent northern lobe reconnection occurs at the dawn flank and results in sunward and eastward flow.
The exact solutions and approximate analytic solutions of the (2 + 1)-dimensional KP equation based on the symmetry method
In this paper, we obtain the exact solutions and the approximate analytic solutions of the (2 + 1)-dimensional KP equation based on Lie symmetry, the extended tanh method and the homotopy perturbation method. In the first part, we obtain the symmetries of the (2 + 1)-dimensional KP equation based on the Wu-differential characteristic set algorithm and reduce the equation. In the second part, we construct abundant exact travelling wave solutions by using the extended tanh method. These solutions are expressed by hyperbolic functions, trigonometric functions and rational functions, respectively. It should be noted that when the parameters take special values, some solitary wave solutions are derived from the hyperbolic function solutions. Finally, we apply the homotopy perturbation method to obtain the approximate analytic solutions based on four kinds of initial conditions.
have individual ranges of application. Therefore, summarizing these existing methods, adopting their advantages and discarding their disadvantages, is an effective way to investigate them further. At the same time, it is worthwhile to obtain more new solutions of NLPDEs by using Lie symmetry together with other methods.
As we all know, the symmetry method is the most universal method, and many traditional methods are special cases of it. At the end of the nineteenth century, in order to unify and extend the methods used to solve ordinary differential equations (ODEs), the Norwegian mathematician Sophus Lie (1842-1899) first proposed the symmetry theory of differential equations (Lie 1881). Investigations of symmetry theory and methods have important theoretical and practical significance in modern mathematics, physics, mechanics and other fields, where many successful applications have emerged (Bluman and Kumei 1989; Bluman et al. 2009; Noether 1918; Ma 1990; Clarkson and Kruskal 1989; Lou and Tang 2001; Ma and Chen 2009; Ma 2013). At present, combining the symmetry method with others, such as analytic solution methods, approximate analytic solution methods and numerical methods, with due consideration of their mutual complementarity and availability for solving NLPDEs, is a new research subject.
The premise of applying the symmetry method is to determine all kinds of symmetries of the partial differential equations (PDEs). The main approach to determining symmetries is the infinitesimal transformation method proposed and constructed by Lie, called Lie's algorithm. Lie's algorithm transforms the problem of determining symmetries into that of determining the corresponding infinitesimal vectors, whose infinitesimal functions are found as solutions of an over-determined system of PDEs called the determining equations (Lie 1881). Determining symmetries involves tedious, mechanical computations, and the order relation of the unknown quantities is not considered in the conventional Lie algorithm, which results in many problems, such as infinite loops on computers and a large amount of work. According to previous investigations, the differential-form Wu method is an effective way to remove these defects of Lie's algorithm. The Wu-differential characteristic set algorithm, extended and constructed by Temuer Chaolu, can partially solve the above-mentioned problems (Temuer 1999; Temuer and Bai 2010). This algorithm has been successfully applied to classical symmetries, nonclassical symmetries, high-order symmetries, approximate symmetries, potential symmetries, conservation laws and the symmetry classification of PDEs, which has promoted the investigation of the symmetry theory of PDEs (Bluman and Temuer 2006; Temuer et al. 2007; Temuer and Bai 2009; Temuer and Pang 2010; Sudao et al. 2014). Recently, we have investigated applications of the symmetry method to boundary value problems of nonlinear PDEs based on the Wu-differential characteristic set algorithm, using the symmetry method and the homotopy analysis method to solve such problems (Sudao et al. 2014; Sudao 2011). Other investigators have used the symmetry method, the variational iteration method and the homotopy perturbation method to solve boundary value problems based on the Wu-differential characteristic set algorithm (Lu and Temuer 2011a, b; EerDun and Temuer 2012).
In this paper, we construct the exact solutions and the approximate analytic solutions of the (2 + 1)-dimensional KP equation by using Lie symmetry, the extended tanh method and the homotopy perturbation method. The Wu-differential characteristic set algorithm plays an important role in calculating the symmetries of the (2 + 1)-dimensional KP equation. This investigation explores a new approach to applying Lie symmetry to NLPDEs; it also effectively broadens the range of application and improves the efficiency of these methods.
The symmetries and symmetry reduction of the (2 + 1)-dimensional KP equation
We consider the (2 + 1)-dimensional KP equation (Ding and Ji 2008), which describes the motion of water waves in (2 + 1)-dimensional space as well as plasmas in magnetic fields. Next, we give the process of calculating the symmetries and the reduction of Eq. (1).
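For reference, a commonly cited form of the (2 + 1)-dimensional KP equation is the following; the precise coefficients used by Ding and Ji (2008) may differ, so this should be read as a representative form rather than as the paper's Eq. (1):

$$ \left(u_t + 6\,u\,u_x + u_{xxx}\right)_x + 3\,\sigma^2\,u_{yy} = 0, \qquad \sigma^2 = \pm 1, $$

where $\sigma^2 = -1$ gives the KP-I equation and $\sigma^2 = +1$ the KP-II equation.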
The symmetries of the (2 + 1)-dimensional KP equation
The symmetry group of Eq. (1) is generated by a vector field whose infinitesimal functions ξ, µ, τ, η generate the symmetry. According to the Lie algorithm, we obtain the determining equations of symmetry (2), but it is too difficult to solve them directly. However, using the Wu-differential characteristic set algorithm (Temuer 1999), we can obtain the following system of equations, corresponding to the characteristic set, which is equivalent to the determining equations.
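In this notation, the generator takes the standard form (a sketch consistent with the infinitesimals ξ, µ, τ, η named above, acting on the variables x, y, t and the dependent variable u):

$$ X \;=\; \xi(x,y,t,u)\,\frac{\partial}{\partial x} \;+\; \mu(x,y,t,u)\,\frac{\partial}{\partial y} \;+\; \tau(x,y,t,u)\,\frac{\partial}{\partial t} \;+\; \eta(x,y,t,u)\,\frac{\partial}{\partial u}. $$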
By solving the above PDEs, we get the infinitesimal functions, where c 1, c 2, c 3, c 4, c 5 are arbitrary constants; the corresponding infinitesimal vector then has the following form. Obviously, X has five one-parameter point symmetries, and the corresponding infinitesimal vectors are as follows:
The reduction of Eq. (1)
To facilitate solving Eq. (1), we reduce it by using the invariant form method. The resulting reduced PDE has fewer independent variables than Eq. (1).
The exact travelling wave solutions of (8) based on the extended tanh method
Recently, the extended tanh method was introduced by Xie et al. (2005) as an effective approach to seeking exact solutions of nonlinear evolution equations. The method was further improved through the generalized Riccati equation and the introduction of its twenty-seven new solutions, which are expressed by hyperbolic functions, trigonometric functions and rational functions, respectively. When the parameters take special values, some solitary wave solutions are derived from the hyperbolic function solutions. Taking Eq. (8) from the symmetry reduction equations as an example, we obtain its exact travelling wave solutions by the extended tanh method; the process consists of the following four steps.
Step 1. Apply the travelling wave transformation. In order to look for the travelling wave solutions of Eq. (8), we introduce the travelling wave transformation, where k, c are constants and ξ 1 = x − t, ξ 2 = y. We thereby reduce Eq. (8) to an ODE (13) for U(ξ).

Step 2. Choose the expression of the solution. By considering the homogeneous balance between the highest-order derivative U (4) and the nonlinear term UU ′′ appearing in Eq. (13), we choose the expression of the solution (14), where α 0, α 1, α 2 are undetermined coefficients. The function φ = φ(ξ) satisfies the ODE (15), where λ, δ, ν are constants. The ODE (15) has four cases of solutions.
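Concretely (an assumed reconstruction of (14)-(15), consistent with the discriminant δ² − 4λν that appears in the solutions below): with the Riccati relation, each differentiation raises the degree in φ by one, so for deg U = m we have deg U⁗ = m + 4 and deg UU″ = 2m + 2; balancing m + 4 = 2m + 2 gives m = 2, hence the quadratic ansatz

$$ U(\xi) = \alpha_0 + \alpha_1\,\phi(\xi) + \alpha_2\,\phi^2(\xi), \qquad \phi'(\xi) = \lambda + \delta\,\phi + \nu\,\phi^2 . $$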
Step 3. Determine the coefficients. Substituting (14) into Eq. (13) and using ODE (15), then collecting all terms with the same order of φ together, the left-hand side of Eq. (13) is converted into a polynomial in φ. Equating the coefficient of each power to zero yields a set of nonlinear algebraic equations for α i (i = 0, 1, 2), k, c, λ, δ and ν. With the aid of Mathematica, we obtain the solutions (16). Analysis of (16) shows that these solutions apply to all cases of the general solutions φ 1 -φ 27 of ODE (15).
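The mechanics of Step 3 can be sketched in a few lines of SymPy. The reduced ODE used below, U⁗ + UU″ = 0, is a hypothetical stand-in for Eq. (13), whose exact coefficients are not reproduced here, and the Riccati relation for φ is the assumed form given above:

```python
import sympy as sp

xi = sp.symbols('xi')
a0, a1, a2 = sp.symbols('alpha0 alpha1 alpha2')
lam, dlt, nu = sp.symbols('lambda delta nu')

phi = sp.Function('phi')(xi)
phi_prime = lam + dlt*phi + nu*phi**2        # assumed Riccati form of ODE (15)
U = a0 + a1*phi + a2*phi**2                  # the ansatz (14)

def d(expr):
    # differentiate w.r.t. xi, then eliminate phi' via the Riccati relation
    return sp.expand(sp.diff(expr, xi).subs(sp.Derivative(phi, xi), phi_prime))

U2 = d(d(U))                                 # U''
U4 = d(d(U2))                                # U''''

residual = sp.expand(U4 + U*U2)              # hypothetical stand-in for Eq. (13)

# collect powers of phi and equate every coefficient to zero
coeffs = sp.Poly(residual, phi).all_coeffs()
print(sp.solve([sp.Eq(c, 0) for c in coeffs], [a0, a1, a2], dict=True))
```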
Step 4. Acquire the exact travelling wave solutions. Substituting (16) and the general solutions φ 1 -φ 27 of ODE (15) into (14), we obtain the exact travelling wave solutions (17), which comprise 27 different cases expressed by hyperbolic functions, trigonometric functions and rational functions, respectively. Solitary wave solutions are obtained (see Fig. 1) when the parameters take special values.
The approximate analytic solutions of Eq. (8) based on the homotopy perturbation method
The homotopy perturbation method was proposed by He (1999) and has been successfully applied to solve many types of linear and nonlinear functional equations. The method, which combines homotopy from topology with classical perturbation techniques, provides a convenient way to obtain analytic or approximate solutions for a wide variety of problems arising in different fields. In recent years, the application of the homotopy perturbation method to nonlinear problems has been developed further by scientists and engineers (He 2003, 2006; Olga 2011; Ebaid 2014; Najafi and Edalatpanah 2014).
According to the homotopy perturbation method (He 1999), we construct the homotopy (19), whose solutions take the form (20), where p is an embedding parameter and V 1 (ξ 1, ξ 2), V 2 (ξ 1, ξ 2), . . . are undetermined. For convenience of computation, we choose the initial value approximation (21). Substituting (20) and (21) into Eq. (19) and collecting terms in p i (i = 1, 2, . . .) yields the expansion (22), where V m,n denotes the derivative of V m (m = 1, 2, . . .) with respect to the n-th (n = 1, 2) variable. We choose the initial conditions as follows: . . .
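Before working through the four cases, it may help to recall He's standard homotopy construction (assumed here, since Eq. (19) itself is not reproduced): a simple linear problem is continuously deformed into the full one,

$$ H(V,p) \;=\; (1-p)\big[L(V) - L(V_0)\big] \;+\; p\,A(V) \;=\; 0, \qquad p\in[0,1], $$
$$ V \;=\; V_0 + p\,V_1 + p^2 V_2 + \cdots, \qquad u \;=\; \lim_{p\to 1} V \;=\; \sum_{i\ge 0} V_i, $$

where $L$ is a linear operator, $A(V)=0$ is the full equation and $V_0$ is the initial approximation (21); collecting powers of $p$ yields the successive corrections $V_1, V_2, \ldots$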
Case 1. The second-order approximate solutions of Eq. (8) can then be achieved from (25) and (26).

Case 2. When j = 13, with the initial conditions (28), the solutions obtained from (22) and (28) are of the form V 1 (x, t, y) = k^2 − 3c^2 + 2k^4 δ^2 − 8k^4 λν + 3k^4 √(δ^2 − 4λν) tan(· · ·); the second-order approximate solutions of Eq. (8) can then be achieved from (29) and (30).

Case 3. When j = 25, with the initial conditions (32), the solutions are obtained from (22) and (32); the second-order approximate solutions of Eq. (8) can then be achieved from (33) and (34).

Case 4. When j = 27, with the initial conditions (36), the solutions are obtained from (22) and (36); the second-order approximate solutions of Eq. (8) can then be achieved from (37) and (38).

Table 1 shows the error comparison between the solutions (17) (j = 1) and (27) for k = 0.1, c = 0.2, δ = 3, λ = 1, ν = 2. The figure and table successfully demonstrate the accuracy of the homotopy perturbation method.
Conclusion
In this paper, we have studied the construction of the exact solutions and the approximate analytic solutions of NLPDEs by using Lie symmetry, the extended tanh method and the homotopy perturbation method. Specifically, we constructed abundant exact travelling wave solutions and approximate analytic solutions of the (2 + 1)-dimensional KP equation using these three methods and obtained high-precision approximate solutions, as shown by the error analysis.
Lie symmetry, the extended tanh method and the homotopy perturbation method are effective methods for solving PDEs, and their comprehensive use increases their utility. The Wu-differential characteristic set algorithm is a key factor influencing the calculation of the symmetries of PDEs. At present, combining the Wu-differential characteristic set algorithm, the symmetry method and other methods to solve NLPDEs is a hot research topic that has widened the application of symmetry and of the Wu-differential characteristic set algorithm. This investigation is valuable for further research and development.
"Mathematics"
] |