A New Limit on the 129Xe Electric Dipole Moment

We report on the first preliminary result of our 129Xe EDM measurement performed by the MIXed collaboration. The aim of this report is to demonstrate the feasibility of a new method to set limits on nuclear EDMs by investigating the EDM of the diamagnetic 129Xe atom. In our setup, hyperpolarized 3He serves as a comagnetometer to suppress magnetic field fluctuations. The free induction decay of the two polarized spin species is measured directly by low-noise DC SQUIDs, and the weighted phase difference extracted from these measurements is used to determine a preliminary upper limit on the 129Xe EDM.

Introduction

The baryon asymmetry of the Universe is one of the big unsolved questions of cosmology. Most scenarios developed to explain it involve modifications of the Standard Model (SM) that generate additional CP-violating interactions, as postulated by one of the three Sakharov criteria [1,2]. This is one reason to search for CP-violating interactions beyond the SM. Such interactions also generate electric dipole moments (EDMs) of elementary particles, which are experimentally detectable. The search for EDMs of elementary particles is very favourable in this context because the generation of EDMs is a high-order process in the SM and therefore strongly suppressed; detection of a nonzero EDM would be a direct hint of new physics, since SM contributions are far too small and can be neglected in all current experimental searches [3,4]. For a neutral particle like the neutron, the direct EDM measurement of the free particle is a promising path, pursued by many collaborations. The other two particles stable enough for precise EDM measurements, the proton and the electron, are charged, and the Lorentz forces caused by the interaction of the charge with a magnetic field are a serious obstacle to a direct EDM measurement. Bound in a neutral atom or molecule, however, an indirect measurement via an atomic or molecular EDM becomes feasible. The EDM of the electron can be extracted from an EDM measurement of a paramagnetic atom [5] or molecule [6], in the latter case amplified by orders of magnitude due to molecular dipole fields. The limit derived from molecular EDM measurements is already quite stringent, d_e ≤ 8.7 × 10^-29 e·cm (90% confidence level) [6], and has kept improving in recent years. For diamagnetic atoms, nuclear EDMs such as the EDM of the neutron or proton induce atomic EDMs. Unfortunately, these induced EDMs are reduced by Schiff screening, named after Schiff's theorem [7], which states that for a nonrelativistic system of point charges interacting electrostatically with each other and with an arbitrary external field, the shielding is complete. For light nuclei, Schiff screening is essentially perfect and prevents nuclear EDM measurements. For heavy nuclei, however, the nuclear EDM is not completely screened but only suppressed by 2 to 3 orders of magnitude due to relativistic and finite-size effects. The result for the 199Hg EDM, d_Hg ≤ 7.4 × 10^-30 e·cm (95% confidence level) [8], is the most precise limit on an EDM of a diamagnetic atom to date, providing the most stringent constraints on flavour-conserving CP-violating phases.
This upper limit on the 199Hg EDM also gives the best indirect constraints on the proton EDM, d_p ≤ 2.0 × 10^-25 e·cm [9], and on the neutron EDM, d_n ≤ 1.6 × 10^-26 e·cm [10], whereas the present upper limit from direct neutron EDM searches is d_n ≤ 2.0 × 10^-26 e·cm [11]. This example shows that the high measurement precision achieved in diamagnetic-atom EDM measurements can provide severe constraints on CP violation in purely hadronic interactions, despite the fact that Schiff screening reduces the EDM sensitivity by 2 to 3 orders of magnitude. Investigating the 129Xe EDM is a complementary approach to improving the EDM sensitivity in the hadronic sector. Technically, our measurement sensitivity benefits from the extraordinarily long spin coherence times that can be achieved under dedicated experimental conditions (for further details see [12]).

Setup

In our setup (Fig. 1) we directly measure the free induction decay of transversely polarized 3He and 129Xe nuclei by means of Superconducting Quantum Interference Devices (SQUIDs) in a gradiometer configuration. The low-noise DC SQUIDs are housed in a metal-free fibreglass cryostat to keep them at their operating temperature of about 4 K. The gradiometer configuration enables us to operate the SQUIDs in a 400 nT magnetic field with a resolution of a few fT/√Hz, limited by noise. Not only within the cryostat but throughout the whole setup close to the SQUIDs and the EDM cell we avoid any metal, because the Johnson noise of the free electrons inside a metal would induce severe magnetic noise in the SQUIDs. Below the cryostat, the housing of the EDM cell is mounted; it is made of glass covered by a slightly conductive layer to avoid charging up. The EDM cell of 10 cm diameter is placed between high-voltage electrodes which provide an electric field of 800 V/cm with switchable polarity. We use a very moderate high voltage in this setup in order to be on the safe side concerning sparks and leakage currents. Below the EDM cell, a pressurized-air-controlled valve allows the EDM cell to be filled from outside. As shown in the level scheme of a 129Xe atom inside a homogeneous magnetic holding field of 400 nT (Fig. 2), the energy shift caused by a hypothetical EDM is about 12 orders of magnitude smaller than the Zeeman splitting in the holding field. Therefore comagnetometry is mandatory in order to get rid of magnetic field drifts. Here the co-located nuclear-polarized 3He atoms serve as an ideal magnetometer, because any possible interaction of the electric field with an EDM of the 3He nucleus is completely suppressed by Schiff screening. Technically, we have to measure the 129Xe frequency relative to the 3He frequency. We achieve this by calculating the frequency difference weighted by the gyromagnetic ratios of the two nuclear spin species. While frequency shifts caused by magnetic field drifts cancel in the weighted frequency difference Δω, shifts due to a hypothetical 129Xe-EDM interaction with the electric field remain. For practical reasons we evaluate Eq. (2), the integral of Eq. (1) over time, i.e. the weighted phase difference. For an EDM below the sensitivity of our experiment, this phase should be constant. A closer look at Eq. (2) reveals further corrections, shown in Eq. (3), besides the phase shift φ_EDM(t) caused by a nonzero EDM. The most prominent one in our case is Earth's rotation (coefficient a_lin).
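The explicit forms of Eqs. (1)-(3) did not survive the text extraction, so the following is only a schematic sketch of the weighted frequency and phase differences as they are commonly written for 3He/129Xe comagnetometry; the sign conventions, the weighting direction, and the detailed correction terms of the original paper may differ.

Δω(t) = ω_Xe(t) − (γ_Xe / γ_He) · ω_He(t)    (schematic form of Eq. (1); a common magnetic field B(t) cancels because ω = γ·B for both species)

Δφ(t) = φ_Xe(t) − (γ_Xe / γ_He) · φ_He(t) = φ_EDM(t) + c + a_lin · t + (Ramsey-Bloch-Siegert terms)    (schematic form of Eqs. (2)-(3), the quantity actually fitted)

Under this convention a nonzero 129Xe EDM adds a contribution φ_EDM(t) that grows linearly in time during each period of constant electric field and changes sign whenever the field is reversed.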
During the course of one day, we accumulate an additional phase of 60% of 2π because we are in a rotating frame (i.e., the detector rotates around the precessing spin sample). Furthermore, the chemical shift, which depends on the partial pressures of all gas species inside the measurement cell including the buffer gases, changes the effective value of the gyromagnetic ratio at the level of 10^-7. Both effects contribute to the linear correction term a_lin, which we treat as a free parameter because we cannot calculate the chemical shift precisely enough (only to an absolute level of order 10^-10). Finally, we have to correct for the generalized Ramsey-Bloch-Siegert shift, which consists of two components. One is caused by the self-interaction of each spin species (coefficients a_He and a_Xe) within the given magnetic field, while the other follows from the interaction between the two different spin species, which we call cross talk (coefficients b_He and b_Xe). The cross talk is an analogue of the classical Bloch-Siegert shift in NMR. Here this shift is caused by the tiny magnetization of the other spin species precessing at a different frequency, in contrast to classical NMR, where the shift is caused by the counter-propagating component of the linear RF field. As in NMR, this shift depends quadratically on the magnetization amplitude (∝ e^(−2t/T2*)). Because of the unavoidable presence of magnetic field gradients, the Larmor frequency is slightly different in different parts of the gas cell. A first-order approximation following Ramsey's paper [13] leads to a frequency shift depending linearly on the magnetization amplitude (∝ e^(−t/T2*)) of the same spin species. Therefore the functional time dependence of all corrections appearing in Eq. (2) is well known, and the relevant parameters, in this case the relaxation times T2* of 3He and of 129Xe, are well determined by fitting exponential functions to the measured decaying amplitudes of the SQUID signal shown in Fig. 4. However, we cannot determine the amplitudes of these phase corrections well enough by independent measurements. Therefore we include these amplitudes as free parameters in the fit of Eq. (2), together with a parameter for a linear shift (a_lin) which accounts for Earth's rotation and the chemical shift, as discussed above. The raw data from the SQUID gradiometers (Fig. 1), sampled at 250 Hz, are divided into sub-cuts of 4 s duration. These sub-cuts are then fitted with sine and cosine functions at the corresponding frequencies. From the coefficients of the sine and cosine terms we calculate the average amplitudes and phases of 3He and 129Xe, with their corresponding errors, for each sub-cut. Further details of these corrections and of the whole data analysis can be found in [14]. The whole setup, located at the Research Center Jülich, is shown in Fig. 3. (Fig. 3 caption: The setup is placed inside a two-layer magnetically shielded room (MSR) with an additional mu-metal cylinder (4) to reduce magnetic field gradients. A coil system consisting of a cosine coil (5) and a solenoid (6) generates a homogeneous magnetic guiding field. Four additional shimming coils (shown in the top-left corner) are used to compensate gradients. A fibre-reinforced plastic tube (7) acts as a rigid mounting structure for all components inside the MSR. The gas mixture of hyperpolarized 3He, 129Xe and buffer gases is prepared outside the MSR (8) and then transferred through filling lines equipped with solenoids (3) to the EDM cell.)
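As an illustration of the sub-cut analysis described above, the following minimal Python sketch (illustrative only, not the collaboration's analysis code) extracts the amplitude and phase of one spin species from a 4 s sub-cut of 250 Hz data via a linear least-squares fit to sine and cosine terms; the frequency, noise level, and variable names are assumptions made for the example.

import numpy as np

def fit_subcut(signal, f_larmor, fs=250.0):
    # Fit a*sin(2*pi*f*t) + b*cos(2*pi*f*t) + offset to one sub-cut and
    # return the amplitude and phase of the precession signal.
    t = np.arange(len(signal)) / fs
    design = np.column_stack([np.sin(2 * np.pi * f_larmor * t),
                              np.cos(2 * np.pi * f_larmor * t),
                              np.ones_like(t)])
    coeffs, _res, _rank, _sv = np.linalg.lstsq(design, signal, rcond=None)
    a, b, _offset = coeffs
    return np.hypot(a, b), np.arctan2(b, a)

# Example: one 4 s sub-cut of synthetic 3He-like data (roughly 13 Hz in a 400 nT field).
fs, f_he = 250.0, 13.0
t = np.arange(int(4.0 * fs)) / fs
data = 1.7 * np.sin(2 * np.pi * f_he * t + 0.4) + 0.05 * np.random.randn(t.size)
print(fit_subcut(data, f_he))   # amplitude close to 1.7, phase close to 0.4 rad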
Inside a double-layer magnetically shielded room we placed a µ-metal cylinder, 85 cm in diameter and 1.9 m in height, to provide additional magnetic shielding from outside. Its main task is to reduce magnetic field gradients from 300 pT/cm to 50 pT/cm. The µ-metal cylinder also serves as a return yoke for the cos-coil, which generates a very homogeneous horizontal magnetic field. A magnetic field along the cylinder axis is generated by a set of solenoids optimized for field homogeneity. In order to reach long transverse relaxation times, it is necessary to further minimize the magnetic field gradients actively. In the past we already used measurements of the free induction decay time T2* to determine small magnetic field gradients [15]. Here we employ T2* measurements as the control element in a feedback loop. This is achieved with additional shimming coils that produce inhomogeneous fields: anti-Helmholtz coils along the cylinder axis and saddle coils perpendicular to it, which compensate magnetic field gradients (shown in the upper left part of Fig. 3). The following online method is used: the EDM cell is filled with ca. 30 mbar of hyperpolarized 3He aligned with the magnetic field. After a non-adiabatic π/2 spin flip, the Larmor precession signal is monitored. The transverse relaxation time of helium is maximized by systematically varying the coil currents according to a downhill simplex algorithm. For each setting of the coil currents, the transverse relaxation time T2* is measured for several minutes. The fully automated optimization procedure takes approximately twelve hours and improves T2* from 7500 s to 40000 s (see Fig. 4). This corresponds to a reduction of the magnetic field gradients from 50 pT/cm to below 10 pT/cm. The method has the advantage that an EDM measurement run can directly follow the gradient optimization procedure without any modifications of the setup. The individual steps of a single EDM measurement run are as follows: A gas mixture of hyperpolarized 3He, 129Xe and buffer gases is prepared outside the MSR. The solenoids of the filling line are switched on, as is the cos-coil, which serves as a guiding field in the x-direction. The gas mixture expands through the filling lines into the evacuated EDM cell. The solenoids of the filling line are ramped down, and the magnetic guiding field of the EDM setup is slowly rotated into the z-direction by decreasing the current through the cos-coil and simultaneously increasing the current through the solenoid. A non-adiabatic π/2 spin flip back to the x-direction starts the spin precession. Then the electric field is ramped up and regularly inverted. The signal of the precessing magnetization is monitored by low-temperature SQUIDs (very sensitive, low-noise magnetometers) and recorded for offline evaluation. The individual He and Xe phases are extracted from the data, and the weighted phase difference is calculated as explained above.
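The gradient-shimming loop described above (measure T2* for a given setting of the shim-coil currents, then let a downhill simplex propose the next setting) can be pictured with the following minimal Python sketch. It is purely illustrative and not the experiment's control software: measure_T2_star is a hypothetical stand-in for the several-minute free-induction-decay measurement, and all numbers are invented.

import numpy as np
from scipy.optimize import minimize

def measure_T2_star(shim_currents):
    # Hypothetical placeholder: set the shim-coil currents, record several
    # minutes of 3He free induction decay and return the fitted T2* in seconds.
    # A toy model with an optimum near (0.2, -0.1, 0.05, 0.0) A is used here.
    optimum = np.array([0.2, -0.1, 0.05, 0.0])
    gradient_penalty = np.sum((np.asarray(shim_currents) - optimum) ** 2)
    return 40000.0 / (1.0 + 50.0 * gradient_penalty)

# Downhill simplex maximization of T2* (i.e. minimization of its negative),
# starting from the un-shimmed coil currents.
result = minimize(lambda currents: -measure_T2_star(currents),
                  x0=np.zeros(4), method="Nelder-Mead",
                  options={"xatol": 1e-3, "fatol": 10.0})
print("optimal shim currents [A]:", result.x)
print("optimized T2* [s]:", -result.fun)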
Results

In one week of measurement time we were able to perform 6 data-taking runs (about 10 hours each) after shimming of the field gradients. Each run was subdivided into sub-cuts of 4 s duration, and we followed the procedure described above. Figure 5 shows the resulting phase difference for run number 4 after applying different types of corrections. The upper part shows the phase difference after correcting for the linear phase shift due to Earth's rotation and the chemical shift (coefficients c and a_lin of Eq. (2)). Because we analyse the phase difference, the periodic switching of the electric field direction results in a triangular phase shift versus time, as shown in the lower part of Fig. 5 for a hypothetical EDM of d_Xe = 4.0 × 10^-25 e·cm (the φ_EDM(t) term in Eq. (2)). The middle part shows the residuals of the fit. Because of the decaying amplitude, mainly of the Xe signal (see Fig. 4), the signal-to-noise ratio becomes worse with time, which leads to increasing error bars. By fitting this function simultaneously with all corrections to the phase difference (Eq. (2)), we determine an EDM value together with its error. These EDM values are shown for all runs in Fig. 6. The EDM runs also served to optimize the parameter settings, i.e. the respective partial pressures of He, Xe and the buffer gases under the given gradient conditions and the T1 relaxation times in the EDM cell; the achievable sensitivity also depends on the actual system noise. The strong fluctuations of the error bars of the individual runs reflect these delicate dependencies and show that we are still in the phase of parameter optimization. Together with the values of the individual runs, the preliminary result and the 1σ error of the mean over all 6 runs are shown. The result is consistent with zero. For comparison, the world's best upper limit on an atomic EDM [16] is also shown. In order to arrive at a final result, two points are still missing. According to the standards of high-energy physics, we should verify the correctness of our fitter and of the implemented method in combination with our model, either by evaluating several (more than ten) Monte Carlo simulated data sets and recovering the values put into the simulations, or by evaluating the data with an independent method. The correctness of the fitter implementation has already been tested by two independent implementations at Heidelberg (Mathematica) and Groningen (Python). Because some of us have to do this job for other experiments as well, we decided to follow the second option: in addition to our classical frequentist analysis (result in Fig. 6), we are performing a Bayesian analysis employing Markov chain Monte Carlo calculations to integrate over all nuisance parameters. This additional analysis will be finished soon. Furthermore, we are investigating the electric field strength inside our spherical cell by optical methods under the conditions under which we run the experiments; the final results of these investigations are also within reach. With respect to the electric field distribution, a cylindrical cell would be the preferred shape. We nevertheless use a spherical cell for our measurements because, for a cylindrical cell, additional phase shifts beyond those in Eq. (2) arise from demagnetization effects of the magnetized sample itself. We were not able to get these additional phase shifts under control in a deterministic way, so we changed our setup to run with a spherical cell. Furthermore, we investigate other possible sources of false EDM signals, listed in Table 1; one major contribution at the present sensitivity level comes from geometric phase effects, which are discussed in detail in [17]. The second major contribution, the Ramsey-Bloch-Siegert shift, is a consequence of the approximation we used.

Conclusion and Outlook

Our result shows that our method, measuring the free induction decay of co-located spin species with low-noise DC SQUIDs, is competitive with other methods, such as maser techniques [16], in setting improved upper limits on the 129Xe EDM.
Because our first approach was in some respects not yet fully optimized, improvements of our setup are possible, as shown in Table 2. An improvement by 2 to 3 orders of magnitude is necessary to become competitive with the 199Hg EDM and to bolster the search for a proton EDM, for which the relative sensitivity of 129Xe is higher. Besides pushing the upper limit on the 129Xe EDM further down, we will also use our setup, with slight modifications, to search for axion-like dark matter and for violations of Lorentz invariance, as we did in the past [14].
Transfer Learning for Sentiment Analysis Using BERT-Based Supervised Fine-Tuning

The growth of the Internet has greatly expanded the amount of opinionated data that users produce across multiple platforms. The availability of these different worldviews and individuals' emotions empowers sentiment analysis. However, sentiment analysis becomes even more challenging due to a scarcity of standardized labeled data in the Bangla NLP domain. The majority of existing Bangla research has relied on deep learning models built on context-independent word embeddings, such as Word2Vec, GloVe, and fastText, in which each word has a fixed representation irrespective of its context. Meanwhile, context-based pre-trained language models such as BERT have recently revolutionized the state of natural language processing. In this work, we utilized BERT's transfer-learning ability in a deeply integrated CNN-BiLSTM model for enhanced decision-making performance in sentiment analysis. In addition, we also applied the transferred representations to classical machine learning algorithms for a performance comparison with CNN-BiLSTM. Additionally, we explore various word embedding techniques, such as Word2Vec, GloVe, and fastText, and compare their performance to the BERT transfer-learning strategy. As a result, we show state-of-the-art binary classification performance for Bangla sentiment analysis that significantly outperforms all other embeddings and algorithms considered.

Introduction

Sentiment classification is the process of examining a piece of text to determine the orientation of an individual's attitude toward an event or topic. Sentiment is usually analyzed in terms of text polarity; typically, a sentiment classifier categorizes text as positive, negative, or neutral [1]. Sentiment extraction is the backbone of sentiment categorization, and considerable research has been conducted on it. The next crucial step is sentiment mining, which has grown tremendously in recent years in line with the growth of textual data worldwide. People now share their ideas electronically on various topics, including online product reviews, book or film reviews, and political commentary. As a result, evaluating diverse viewpoints becomes essential for interpreting people's intentions. In general, sentiment refers to two distinct kinds of opinion, positive or negative, across the many platforms where mass opinion has value. For example, internet merchants and food suppliers constantly enhance their services in response to customer feedback. For instance, Uber and Pathao, Bangladesh's most popular ride-sharing services, leverage consumer feedback to improve their services. However, the difficulty here is going through the feedback manually, which takes far too much time and effort. Automatic Sentiment Detection (ASD) can resolve this issue by categorizing the sentiment polarity associated with an individual's perspective. This enables more informed decision-making in the context of one's input. Additionally, it may be utilized in various natural language processing applications, such as chatbots [2]. As a result of numerous revolutionary inventions and the persistent efforts of researchers, the area of NLP has flourished. Deep Learning (DL) approaches have become increasingly popular in recent years as processing power and the quantity of freely accessible data on the Web have increased.
As word embedding improves the efficiency of neural networks and the performance of deep learning models, it has been used as a foundation layer in a variety of deep learning methods. Earlier attempts to implement sentiment analysis in Bangla have relied on non-contextualized word embeddings (Word2Vec and fastText), which provide static word representations without considering the different contexts in which a word can occur. However, the recent advent of Bidirectional Encoder Representations from Transformers (BERT) has tremendously advanced the contextualization strategy [3]. As the trend switched toward transformer-based architectures built from attention heads, BERT has established itself as the most impressive NLP model, capable of performing superbly in almost any NLP task given proper fine-tuning for the specific downstream task. BERT is a pre-trained state-of-the-art (SOTA) language model that is deeply bidirectional and has been trained on a large English Wikipedia corpus [4]. A generic multilingual model, mBERT, covers 104 languages [5]. Since it often does not do well on individual language tasks, researchers have developed language-specific BERT models that perform quite similarly to the original BERT model. Consequently, we employ such a BERT model for Bangla sentiment analysis. Bangla is spoken by around 250 million people and is the world's fifth most widely spoken language. However, due to a scarcity of resources, pre-trained models such as transformer-based BERT were previously unavailable for Bangla tasks. This issue was addressed by developing a monolingual Bangla BERT model. To obtain the best possible result on this sentiment analysis dataset, we fine-tuned the Bangla-BERT model (https://huggingface.co/Kowsher/bangla-bert (accessed on 1 February 2022)), which had been trained on the largest BanglaLM dataset (https://www.kaggle.com/datasets/gakowsher/bangla-language-model-dataset (accessed on 1 February 2022)) [6], and then connected it to a Convolutional Neural Network (CNN) and a Long Short-Term Memory (LSTM) network. This research examined an extensive sentiment dataset of reviews from various domains across the internet and social networking sites, including politics, sports, products, and entertainment. To do this, we first fine-tuned BERT, then utilized its aggregated output layer as the text embedding, and finally developed a deeply integrated CNN-BiLSTM model for decision-making. We present two kinds of comparison of our proposed work: the first compares word embedding techniques such as Word2Vec, GloVe, and fastText with BERT, and the second compares various machine learning and deep learning algorithms to confirm the best performance of the hybrid integrated CNN-BiLSTM model. This work will assist merchants in rapidly integrating a classification model into their own systems for the purpose of tracking customer feedback.
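As a concrete illustration of the transfer-learning setup described above, the following is a minimal sketch (not the authors' code) of loading the referenced Bangla-BERT checkpoint with the HuggingFace transformers library and extracting contextual token embeddings for a downstream classifier; the placeholder sentence and the maximum length are assumptions made for the example.

import torch
from transformers import AutoTokenizer, AutoModel

MODEL_NAME = "Kowsher/bangla-bert"   # checkpoint referenced in this paper

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
bert = AutoModel.from_pretrained(MODEL_NAME)

sentences = ["<a Bangla review text goes here>"]
batch = tokenizer(sentences, padding=True, truncation=True,
                  max_length=128, return_tensors="pt")

with torch.no_grad():
    outputs = bert(**batch)

# Per-token contextual embeddings with shape (batch, seq_len, hidden_size);
# these token vectors are what a CNN-BiLSTM head would consume.
token_embeddings = outputs.last_hidden_state
print(token_embeddings.shape)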
The main contributions of this paper are the following:
• We build a hybrid integrated CNN-BiLSTM model and use it in combination with monolingual BERT to address sentiment analysis in Bangla;
• We compare Word2Vec, GloVe, fastText, and BERT, demonstrating how the transformer architecture exceeds prior state-of-the-art approaches and becomes the new state of the art with proper fine-tuning;
• To do this, we developed a Bangla pre-trained BERT model for transfer learning (HuggingFace: Kowsher/bangla-bert).
Section 2 discusses related work. Section 3 presents our proposed methodology, while Sections 4 and 5 discuss the embedding and classification algorithms. We report the results in Section 6 and conclude with a discussion and recommendations for further study in Section 7.

Related Work

Sentiment analysis is a well-known problem that involves assessing the polarity of an individual's viewpoint. The SA procedure entails extracting features from a text corpus, building a classifier model, and assessing its performance [7]. This usual procedure has been applied to a variety of sentiment classification tasks, including the categorization of movie reviews [8], online product reviews [9], and Twitter tweets [10]. Akshay et al. [11] developed a model for detecting positive and negative sentiments in restaurant reviews, with a maximum accuracy of 94.5%. Another analysis achieved an accuracy of 81.77% for smartphone reviews when the researchers employed an SVM as the classifier [12]. Sentiment analysis on Twitter data for the Portuguese language has been described in [13]. Ombabi et al. demonstrated a sentiment classifier for the Arabic language with an accuracy of 90.75%, outperforming the state of the art [14]. The blending of two languages to create a new one is a phenomenon regularly addressed in NLP; such work has been conducted on vernacular Singaporean English, a product of the coalescence of Chinese and Malay [15]. However, the majority of work on sentiment categorization focuses on English and other widely spoken languages. The biggest constraint on Bengali sentiment analysis research is a lack of appropriate resources and datasets. Numerous deep learning techniques have been developed for a variety of domains, including microblogs, product reviews, and movie reviews. To classify sentiment polarity in those domains, SVMs, maximum entropy [16], and Multinomial Naive Bayes (MNB) [17] classifiers have been utilized. Hossain et al. [18] created a Bangla book review dataset, applied machine learning approaches, and found that MNB reached an accuracy of 88%. A similar study using SVM on a Bangladesh cricket dataset achieved 64.60% accuracy [19]. Sarker et al. suggested a sentiment classifier for Bengali tweets that outperforms n-gram and SentiWordNet-based features by 45%. The sentiment categorization of Bengali film reviews shows a range of performance values when using various machine learning techniques; amongst them, the SVM and LSTM models achieve 88.89% and 82.41% accuracy, respectively [20]. Pre-trained language models have notably become pivotal in a variety of NLP applications since they can leverage massive amounts of unlabeled data to obtain general language representations; ELMo [21], GPT [22], and BERT [4] are a few of the best examples. Among them, the BERT model receives the most attention due to its unmatched bidirectionality and attention mechanism.
As a result, researchers are tracking its effect on downstream NLP tasks. Since the original BERT was trained exclusively on English, researchers create language-specific BERT models to obtain higher precision on their tasks, as it has been demonstrated that language-specific BERT models outperform the generic mBERT model. Recent research has also shown outstanding performance in sentiment analysis tasks [23,24] that attempt to uncover aspects and the opinions related to them. Numerous researchers from various countries have developed BERT models for their respective languages to evaluate the sentiment analysis task. The Arabic BERT model AraBERT scored 99.44 on its sentiment analysis experiment [25], while the Persian (PersBERT) [26], Dutch (BERTje) [27], and Romanian (RobBERT) [28] models scored 88.12, 93.00, and 80.44 on their corresponding sentiment analysis experiments. Researchers in Russia (Ru-BERT) [29], China [30], and several other countries have developed language-specific BERT models to obtain greater accuracy across all NLP domains, including sentiment analysis. They compare their models' accuracy to that of the mBERT model and find that their results are significantly higher than the mBERT values. This demonstrates that, for sentiment analysis, monolingual BERT produces the state-of-the-art (SOTA) outcome, surpassing all previous attempts and methods.

Methodology

Though BERT can be used as a feature extraction model, we chose the fine-tuning technique. In this technique, we have extended the Bangla-BERT model with two distinct end-to-end deep network layers: CNN and LSTM. BERT generates contextualized embedding vectors for each word, which are then fed through the two deep network layers, CNN and then LSTM, as described in Figure 1. The feature vector is constructed by concatenating the output neurons for each word from the intermediate layer. Each vector is then processed through a densely connected neural network to reduce its dimension, and softmax is used to classify the final reduced vector. Additionally, three further learning approaches with pre-trained word embeddings were incorporated: Word2Vec, GloVe, and fastText. Word2Vec has proved to be very effective for sentiment analysis in a variety of languages, including Bengali [31]. Meanwhile, fastText has gained widespread interest in Bengali text analysis owing to its use of subword n-grams [32]. (Figure 1: Representative mechanism of Bangla-BERT to CNN-BiLSTM in sentiment analysis. First, BERT embeds the input tokens; the CNN layer then extracts information; next, the LSTM builds a sequence from the extracted information; finally, an FNN makes the decision by calculating the loss.) Data gathering and labeling were the initial steps in this classification work. The data acquired from social media were carefully labeled, and a relevant domain expert validated the manual labeling. The data were then subjected to a pre-processing approach that included the handling of missing values, noise removal, and spelling correction, as well as feature extraction and dimension reduction. Following that, the data set was partitioned into training and test segments at a 7:3 ratio. We trained and evaluated the model using supervised learning: the trained model was fed the test data, and the prediction accuracy was compared to the ground truth. The whole methodology of this work is depicted in Figure 2.
(Figure 2: Whole workflow of sentiment analysis. The first phase is data collection and labelling, the second is data pre-processing, and the last is decision-making by modelling.)

Data Source

We gathered data for the corpus from a range of sources, including internet sites and social networking sites where individuals share their opinions. A substantial portion of the data was gathered from Facebook, Twitter, and YouTube comments. Apart from that, online stores have grown to be a significant part of digital marketing, so we also gathered data from online retailers' product reviews. Additionally, some film and book reviews have been included in the corpus. Table 1 presents an overview of the dataset.

Data Collection

We collected a total of 8952 samples from the sources referred to above, of which 4325 samples are positive and the rest are negative. For labeling, ten native speakers annotated the samples using the web annotation software doccano. Each participant individually annotated 30% of the dataset by assigning positive or negative labels. We then applied kappa statistics to assess agreement between the annotators and used majority voting over the native speakers' labels. The annotation tool is depicted in Figure 3.

Data Preprocessing

Data preparation is essential in machine-learning-based classification, as the model's accuracy is heavily dependent on the quality of the input data [33]. We employ this procedure to prepare the data for machine use. The next subsections describe the procedures involved in data preprocessing.

Missing Value Check

We began our data processing phase by addressing the dataset's missing values, of which we encountered two distinct sorts: some samples are complete omissions, while others provide less information than is required. If all information was absent, we eliminated the sample by erasing the entire row. If there was insufficient information, the value was manually adjusted using the value of a similar observation.

Noise Removal

After correcting for missing values, we enhanced the dataset by removing noise from the samples. Non-Bangla letters or characters, meaningless special symbols, and emoticons are all considered noise. Though emoticons can express a wide variety of emotions, we observed that only a small percentage of the data contains emoticons; as a result, the cleaning operation includes the removal of emoticons. Table 2 illustrates the processing steps with an example. (Table 2 caption: Data pre-processing methods, step by step.)

Spelling Correction

Since the data were gathered from various people, some words may have been mistyped or misspelled. We used the Bangla Academy's available dictionary (AD) database [34] to determine the most suitable form of a word. From the sentiment corpus SC = {d1, d2, d3, ..., dn}, where di is a text sample, each word that does not appear in AD is deemed misspelled. The correct word was then obtained from AD and substituted for the incorrect one. Table 2 details the workflow applied to the sample data.

Feature Extraction

Feature extraction, alternatively referred to as word embedding, represents words in such a way that related terms receive similar representations [35]. We employed four distinct word-embedding approaches in this analysis to examine which technique performs best on Bangla sentiment. We explored the most commonly used methods for word embedding, Word2Vec, GloVe, and fastText, as well as the state-of-the-art model BERT.
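Before turning to the individual embedding methods, the noise-removal step described above can be made concrete with a small Python sketch (not the authors' code): it keeps Bangla characters, whitespace, and basic punctuation and strips emoticons, non-Bangla letters, and special symbols; the exact character ranges and the set of retained punctuation are assumptions for illustration, and the real pipeline may differ.

import re

# The Bangla Unicode block is U+0980-U+09FF; keep it together with whitespace,
# the danda/double danda (U+0964, U+0965) and a few basic punctuation marks.
NON_BANGLA = re.compile(r"[^\u0980-\u09FF\s\u0964\u0965.,!?]")
MULTISPACE = re.compile(r"\s+")

def clean_text(text):
    # Remove emoticons, non-Bangla letters and special symbols, then
    # collapse repeated whitespace (illustrative noise-removal step).
    text = NON_BANGLA.sub(" ", text)
    return MULTISPACE.sub(" ", text).strip()

# A Latin-only example is stripped almost entirely, which is the intended behaviour
# for non-Bangla noise such as URLs, emoticons and stray characters.
print(clean_text("some noisy text :) http://example.com 123"))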
We trained Word2Vec, fastText, and GloVe using the skip-gram model rather than the CBOW model, as it better represents rarely occurring words. Section 4 describes the feature extraction techniques in detail.

Encoding Algorithm

We used the preprocessed data to train the word embedding models [36]. We examined the performance of each model independently using a variety of window sizes, vector sizes, and numbers of iterations over the dataset. The models were created with the Gensim tool, an efficient toolkit for a variety of typical natural language processing tasks that includes implementations of the Word2Vec, fastText, and GloVe models, whereas to train BERT we used the HuggingFace open-source tools.

Word2Vec

Word2Vec is an extensively used word embedding method. It uses a neural network to ascertain the semantic similarity of words from their contexts [37]. Word2Vec implements two complementary architectures: continuous bag of words (CBOW) and Skip-Gram. Skip-Gram is an unsupervised learning architecture used to discover semantic concepts from their context [38]. Skip-Gram maximizes the average log probability of Equation (1), (1/N) Σ_{n=1..N} Σ_{−c≤m≤c, m≠0} log p(w_{n+m} | w_n), over the provided training words w_1, w_2, w_3, ..., w_N, where c denotes the context size, also known as the window size, and E is the embedding size. The probability p(w_{n+m} | w_n) is calculated using Equation (2), p(w_o | w_i) = exp(u'_o · u_i) / Σ_{w=1..V} exp(u'_w · u_i), where V is the vocabulary and u_i and u'_o are the 'input' and 'output' vector representations of the words i and o, respectively. CBOW forecasts the target word using the semantic information available in a window of the given text [39]. It makes use of distributed continuous contextual representations: CBOW constructs a fixed window over a word sequence and then, using a log-linear classifier trained on the surrounding (previous and upcoming) words, predicts the window's middle word. The greater the value of Equation (3), the more likely the word w_t is to be inferred; here V and c have the same meaning as in the Skip-Gram model. Figure 4 illustrates both models.

GloVe

GloVe, or Global Vectors, derives word embeddings from word co-occurrences [40]. The co-occurrence matrix, designated C, has rows and columns corresponding to the vocabulary; each element C_ij indicates how frequently word j occurs in the context of word i. Larger co-occurrence counts lead to greater vector similarity.

FastText

FastText is a highly robust word embedding algorithm that takes advantage of subword information [41]. The model learns embeddings from the character n-grams of the training words. As a result, the embedding of a word that does not exist in the vocabulary can be constructed from its constituent n-grams. This resolves a constraint of Word2Vec and GloVe, which cannot provide vectors for words outside the training vocabulary. In the fastText classifier, the first weight matrix, A, is a look-up table over the words. The word representations are averaged into a text representation, which is then fed into a linear classifier; the text representation is a hidden variable that can potentially be reused. This structure is similar to Mikolov's CBOW model [42], except that the middle word is replaced by a label. The softmax activation function f is used to estimate the probability distribution over the predefined set of classes.
For a set of N documents, this amounts to minimizing the negative log-likelihood over the classes, −(1/N) Σ_{n=1..N} y_n log(f(B A x_n)), where x_n is the nth document's normalized bag of features, y_n is its label, and A and B are the weight matrices. The model is trained on multiple CPUs concurrently using stochastic gradient descent with a linearly decaying learning rate.

BERT

BERT is the first pre-trained bidirectional and entirely unsupervised language representation approach, trained on a massive English Wikipedia corpus [4]. It is an open-source language representation model developed by Google AI. BERT reads a text (or series of words) in both directions at once, which is superior to single-direction techniques. With fine-tuning, BERT surpasses all other word embedding algorithms, attaining state-of-the-art (SOTA) results in multiple NLP applications. BERT employs the Transformer, an attention-based architecture that discovers the semantic relations between words (or sub-words) in a text. The attention mechanism of the Transformer is the core component of BERT: it helps extract the semantic meaning of a term in a sentence, which is frequently tied to its surroundings. The context information of a word serves to strengthen its semantic representation, and at the same time other terms in the context frequently play multiple roles in expanding that representation. An attention mechanism can enhance the semantic representation of the target sentence by evaluating contextual information. In contrast to prior word embedding approaches, BERT employs two distinct pre-training strategies: masked language modeling (MLM) and next sentence prediction (NSP). The Masked Language Model (MLM) is used to predict randomly masked tokens: 15% of the tokens are picked at random for this purpose, and of the selected tokens 80% are replaced by a special [MASK] token, 10% by a random token, and 10% are left unmodified. In the Next Sentence Prediction (NSP) task, the model is fed pairs of sentences and trained to predict whether the second sentence in the pair corresponds to the sentence that follows the first in the original text. According to the original BERT research, excluding NSP from pre-training can decrease the model's performance on specific tasks. Some research explores the possibility of leveraging BERT's intermediate layers, but the most typical choice for boosting the efficiency of fine-tuning is to utilize the last output layer of BERT. We carry out this sentiment analysis research using a pretrained Bangla-BERT model. This BERT model is comparable in performance to the BERT model proposed by Devlin et al. [4] because it was trained on the largest Bangla dataset yet created, and it demonstrates state-of-the-art results that outperform all preceding ones. The key component of this transformer architecture is the BERT encoder. It is based on a feed-forward neural network and an attention mechanism. Multiple encoder blocks are layered on top of one another to form the encoder; each encoder block consists of feed-forward layers and a self-attention layer that operates in both directions [44]. Three phases of processing are performed on the input: tokenization, numericalization, and embedding. Each token is mapped to a unique number in the corpus vocabulary; this step is known as numericalization. Padding is essential to ensure that the input sequences in a batch all have the same length.
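The tokenization, numericalization, and padding steps just described can be illustrated with the HuggingFace tokenizer of the Bangla-BERT checkpoint named earlier; this is a hedged sketch for illustration only, with placeholder texts and an assumed maximum sequence length.

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Kowsher/bangla-bert")

texts = ["<short Bangla review>", "<a somewhat longer Bangla review text>"]

# Tokenization splits each text into (sub)word tokens, numericalization maps
# every token to its vocabulary id, and padding brings all sequences in the
# batch to a common length.
encoded = tokenizer(texts, padding=True, truncation=True, max_length=64,
                    return_tensors="pt")

print(encoded["input_ids"].shape)        # (batch_size, padded_sequence_length)
print(encoded["attention_mask"][0])      # 1 for real tokens, 0 for padding
print(tokenizer.convert_ids_to_tokens(encoded["input_ids"][0].tolist()))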
When data travel through the encoder blocks, each input sequence is represented by a matrix of dimensions (input length) × (embedding dimension), with positional information provided via positional encoding. The encoder's N blocks are connected in sequence to obtain the output; each block is in charge of building relationships between the input representations and encoding them in its output. The structure of the encoder is based on multi-head attention: it performs h attention calculations with different weight matrices and then combines the outcomes [43]. Each of these simultaneous attention calculations produces a head, and the subscript i is used to denote a certain head and its associated weight matrices. Once all of the heads have been calculated, they are concatenated, forming a matrix with dimensions input_length × (h · d_v). Finally, a linear layer with weight matrix W^O of dimensions (h · d_v) × embedding_dimension is applied, resulting in an output with dimensions input_length × embedding_dimension. In mathematical terms, MultiHead(Q, K, V) = Concat(head_1, ..., head_h) W^O, where head_i = Attention(Q W_i^Q, K W_i^K, V W_i^V) and Q, K, and V are placeholders for the input matrices. Each head is defined by three unique projections (matrix multiplications) determined by the matrices of the scaled dot-product mechanism, W_i^Q, W_i^K, and W_i^V, each with dimensions d_emb × d_k (or d_v for the value projection). The input matrix X is projected individually through these weight matrices, giving Q_i = X W_i^Q, K_i = X W_i^K, and V_i = X W_i^V, each with dimensions input_length × d_k (or d_v). We use these Q_i, K_i, and V_i to calculate the scaled dot-product attention, Attention(Q_i, K_i, V_i) = softmax(Q_i K_i^T / √d_k) V_i. To assess the similarities of token projections, the dot product of the Q_i and K_i projections is used: considering m_i and n_j as the ith and jth tokens' projections via Q_i and K_i, their dot product (Equation (6)) reflects the relationship between n_i and m_j. Next, for scaling purposes, the resulting matrix is divided elementwise by the square root of d_k. The following step applies softmax row by row, so that each row consists of values between 0 and 1 that sum to 1. Finally, this result is multiplied by V_i to obtain the head [4].

Convolutional Neural Networks (CNN)

A Convolutional Neural Network (CNN) is a type of deep feed-forward artificial neural network extensively employed in computer vision problems such as image classification [45]. CNNs were introduced by LeCun in the early 1990s [46]. A CNN is similar to a multilayer perceptron (MLP); because of its unique structure, its architecture allows the CNN to exhibit translational and rotational invariance [47]. In general, a CNN is made up of one or more convolutional layers with associated weights, pooling layers, and a fully connected layer. The convolutional layer exploits the local correlation of the information to extract features.

Convolution Layer

The convolution layer uses a kernel to compute a dot product (convolution) over each segment of the input data, then adds a bias and passes the result through an activation function to build a feature map for the next layer [48,49]. Suppose the input is a vector of n samples; the output values are then calculated using Equation (7), y_j^l = h( b_j^l + Σ_{m=1..M} w_{j,m} x_{i+m−1}^{l−1} ). Here, l is the layer index, h is the activation function used to add nonlinearity to the layer, and b_j is the bias term for the jth feature map.
M specifies the kernel/filter size, while w_{j,m} is the weight of the jth feature map at filter index m.

Batch Normalization

The training data are processed batch by batch. As a result, the batch distributions are nonuniform and unstable and must be fitted by the network parameters in each training cycle, which severely delays model convergence. To solve this issue, each convolutional layer is followed by batch normalization, an adaptive reparameterization approach. The batch normalization approach calculates the mean µ_D and variance σ²_D of each batch of training data and then shifts and scales the original data to zero mean and unit variance. Additionally, a learnable weight and bias are applied to the normalized data x̂_l to improve its expressive capacity. The calculations are given by Equations (8)-(11). The reparameterization introduced by batch normalization substantially simplifies coordinating updates across the layers of the neural network.

Max Pooling Layer

The pooling layer is also called the sub-sampling layer. The proposed method employs a 1D max-pooling layer after the 1D convolutional layer and the batch normalization layer; it performs a downsampling operation on the features to reduce their size [48]. It collects small rectangular chunks of data and produces a single output for each chunk. This can be done in several ways; in this study, max pooling is used, which takes the largest value in a set of neighbouring inputs. The pooling of a feature map inside a layer is defined by Equation (12) [49], where the pooling window size is denoted by R and the pooling stride by T. Following several convolutional and max-pooling layers, the obtained features are flattened into a single one-dimensional vector for classification. These classification layers are fully connected, with each output corresponding to a single classification label. A CNN needs fewer experimental parameter values and less preprocessing and pre-training than other approaches, such as deep feed-forward neural networks [50]. As a result, it is a very appealing framework for deep learning.

Bidirectional Long Short-Term Memory Model

Since deep learning is the most advanced form of machine learning available today, there is an increasing range of neural network models available for real-world settings. A successful deep learning method was used in this study to exploit its problem-solving capabilities; because of its memory-oriented characteristics, it is known as long short-term memory. The Bi-LSTM is a deep learning algorithm that analyzes data quickly and extracts the critical characteristics required for prediction. The method is an extension of the Recurrent Neural Network (RNN). To tackle the "vanishing gradient" problem of the plain RNN structure, the LSTM network structure was devised [51]. The LSTM cell has an input gate, an output gate, a forget gate, and a memory unit [52]. Figure 5 shows the architecture of the gates. In mathematical notation the gates are the forget gate f_t, the input gate i_t, and the output gate o_t at time t. For a given input x_t and hidden state h_t at time t, the LSTM computation is as follows. The forget gate in the memory block structure is controlled by a one-layer neural network.
The activation of this gate is determined by Equation (13), where x_t represents the input sequence, h_{t−1} the previous block output, C_{t−1} the previous LSTM block memory, and b_f the bias vector; σ indicates the logistic sigmoid function, and W signifies separate weight matrices for each input. The input gate uses a basic neural network with the tanh activation function, together with the effect of the prior memory block, to generate fresh memory; these operations are computed using Equations (14) and (15) [53]. The long-term dependency problem is avoided by deliberately constructing and remembering long-term information, which is the default behaviour of the LSTM in practice. The one-way LSTM relies only on previous data, which is not always sufficient. In the Bi-LSTM, the data are analyzed in two directions: the bidirectional LSTM's hidden layer holds two values [54], one used in the forward computation and the other in the backward computation. These two values define the BiLSTM's final output, which tends to improve prediction performance [55].

One-Dimensional CNN-BiLSTM Proposed Method

The one-dimensional CNN (1D CNN) is the same as the classic 2D CNN, except that the convolution operation is conducted along only one dimension, resulting in the deep architecture shown in Figure 1. Hence, it can easily be trained on a normal CPU or even on embedded development boards [56]. The convolution technique facilitates the development of significant hierarchical features for classification from a dataset. To estimate the dimensions of the output features after a 1D convolution, apply x = (w − f + 2p)/s + 1, where x is the output dimension, w the size of the input features, f the size of the filter used for the convolution, p the padding (values added to the border before conducting the convolution), and s the stride (the distance the filter travels between convolution steps). Because one-dimensional convolution is a linear operation, it cannot by itself categorize nonlinear data; the majority of real-world datasets are nonlinear, requiring a nonlinear operation after the convolution. This nonlinear function is the activation function. The most commonly used activation functions are the sigmoid, the hyperbolic tangent, the rectified linear unit (ReLU), and the exponential linear unit (ELU). The suggested CNN architecture uses the ELU activation function, which is easy to implement and allows for fast processing. Furthermore, it addresses some of the issues with ReLUs while retaining their favourable aspects, and it has no difficulties with vanishing or exploding gradients. Finally, the whole method is integrated with BERT: BERT provides the embedding layer feeding the CNN-BiLSTM for decision-making, as described in Figure 1.

Experiment

According to some studies, fine-tuning mBERT with a text classification model produces lower results than a comparable language-specific architecture [57]. Another study reveals that, when a classification algorithm is fine-tuned together with BERT, the results improve relative to the original BERT fine-tuning approach [58]. The proposed model is a hybrid of BERT and CNN-LSTM. We use the BERT output as the LSTM's input; the LSTM layer then extracts features from the BERT representations, and a CNN is connected as the following layer.
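To make the hybrid architecture concrete, here is a minimal Keras sketch of a CNN-BiLSTM classification head operating on BERT token embeddings. It follows the ordering of Figure 1 (BERT embeddings → 1D CNN → BiLSTM → dense softmax); the sequence length, hidden size, filter counts, and other hyperparameters are illustrative assumptions, not the authors' exact settings.

from tensorflow.keras import layers, models

SEQ_LEN, HIDDEN = 128, 768   # assumed maximum sequence length and BERT hidden size

# Input: contextual token embeddings produced by the (fine-tuned) Bangla-BERT encoder.
bert_embeddings = layers.Input(shape=(SEQ_LEN, HIDDEN), name="bert_token_embeddings")

# A 1D convolution extracts local n-gram-like features, followed by batch
# normalization and max pooling, as in the layers described above.
x = layers.Conv1D(filters=128, kernel_size=3, activation="elu", padding="same")(bert_embeddings)
x = layers.BatchNormalization()(x)
x = layers.MaxPooling1D(pool_size=2)(x)

# A bidirectional LSTM builds a sequence representation from the CNN features.
x = layers.Bidirectional(layers.LSTM(64))(x)

# Dense layers reduce the dimension; softmax classifies the two sentiment polarities.
x = layers.Dense(64, activation="relu")(x)
x = layers.Dropout(0.3)(x)
outputs = layers.Dense(2, activation="softmax", name="sentiment")(x)

model = models.Model(bert_embeddings, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()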
As we use BERT as a sentence encoder, this state-of-the-art model can acquire precise semantic information, and we then integrate CNN models for text classification. The architecture of Bangla-BERT for sentiment analysis is depicted in Figure 1. To examine various characteristics of the suggested method, we combined four embedding methods and thirteen classification methods. The next section includes tables that summarize the classification models' performance. The model with the highest scores has been made publicly available.

Prediction and Performance Analysis

We developed the complete project entirely in Python 3.7. We used Google Colaboratory for GPU support because the data set was substantially larger than average and required the implementation of various deep learning architectures. Scikit-learn and Keras (with the TensorFlow backend) were utilized as the machine learning and DNN frameworks, respectively. Additionally, we included another machine learning framework, Impact Learning. After training, we tuned the critical hyperparameters to obtain a fine-tuned model with the corresponding assessment metrics, and we evaluated the models on the test set. We assessed each model's accuracy, precision, recall, F1 score, Cohen's kappa, and ROC AUC, and we summarized the findings in Tables 3-6. Table 3 presents the outcome of Word2Vec embedding with each classification technique. As shown in Table 3, the CNN-BiLSTM algorithm achieves the maximum accuracy of 84.93%, while the SVM method achieves the second-highest accuracy of 83.83%; the ANN performed the worst, with an accuracy of 76.84%. Because CNN-BiLSTM also leads in F1 score, it is the best choice when Word2Vec embedding is used. In Table 4, we used fastText for the embedding together with all the algorithms used earlier for classification. As seen in the table, CNN-BiLSTM has the highest accuracy of 88.35%, while LDA has the second-highest accuracy of 86.38%; the Naive Bayes classifier performed the worst, with an accuracy of 78.16%. With an F1 score of 85.97%, Impact Learning is the best match when fastText embedding is used. In Table 5, we used GloVe for the embedding and all previous methods for classification. As can be seen from the table, the CNN-BiLSTM method is once again the winner with an accuracy of 84.53%, followed by Decision Trees with an accuracy of 82.93%; with an accuracy of 74.93%, Logistic Regression produced the lowest performance. As illustrated in Table 6, we used Bangla-BERT for the embedding and all previous algorithms for sentiment classification. As can be seen from the table, the CNN-BiLSTM approach wins with an accuracy of 94.15%, which exceeds all previous scores for the other embedding methods and places this model above all previous models. SVM comes second with an accuracy of 92.83%, and Naive Bayes performed the worst, with an accuracy of 87.81%. Word2Vec and GloVe with LSTM classification perform almost identically in this task, while fastText improves on them by about 4% with an 88.35% score using the Impact Learning method; however, fine-tuned Bangla-BERT with the LSTM classification model outperforms it. These experimental findings indicate that fine-tuning Bangla-BERT with LSTM and CNN results in a large improvement compared to the other embedding approaches, and the size of the improvement reveals possibilities for further advances. Table 8 reports the classification score for each dataset.
Among the traditional techniques, SVM beats RF, except on the ABSA cricket dataset, where RF outperforms SVM by 2.6%. Deep learning models provide consistent improvements. Both Skip-Gram (Word2Vec) and GloVe embeddings with a CNN perform comparably well. FastText performs better than the CNN on three datasets, and ultimately FastText outperforms the CNN overall. Transformer-based models such as BERT (the multilingual version) gradually surpass FastText, although mBERT shows a reduction of 0.6% on the ABSA cricket dataset. Here, the proposed Bangla-BERT outperforms all prior results, except on the BengFastText dataset. Bangla-BERT outperforms all other techniques in terms of average F1 score, establishing this transformer-based model as the state-of-the-art approach compared with all other methods.

Conclusions and Future Work

This paper compares and contrasts various machine learning and deep learning algorithms for classifying texts according to their sentiment. We have demonstrated how transfer learning, the new revolution in natural language processing, can surpass all previous architectures, and we have shown that transformer models such as BERT, with proper fine-tuning, can play a crucial role in sentiment analysis. Additionally, a CNN architecture was developed for this classification task. A very reliable pre-trained model was prepared for ease of use and made accessible as an open-source Python package. Because deep learning requires a rather large amount of data, we will continue to work on expanding the dataset. Additionally, we want to provide a Python API compatible with any web framework, and we wish to investigate dedicated word extraction algorithms for Bangla text classification. We found that combining Bangla-BERT and LSTM leads to an accuracy of 94.15%, and this combination gives the most significant overall result across all four word embedding systems. We worked with an unbalanced dataset; a well-balanced dataset improves performance significantly, so we want to apply the proposed deep learning algorithm to a more enriched and balanced dataset in the future. Additionally, we offer an approach for assessing the performance of the proposed model in real-world applications.
Charmonium-like resonances with $J^{PC}=0^{++},2^{++}$ in coupled $D\bar D$, $D_s\bar D_s$ scattering on the lattice We present the first lattice investigation of coupled-channel $D\bar D$ and $D_s\bar D_s$ scattering in the $J^{PC}=0^{++}$ and $2^{++}$ channels. The scattering matrix for partial waves $l=0,2$ and isospin zero is determined using multiple volumes and inertial frames via L\"uscher's formalism. Lattice QCD ensembles from the CLS consortium with $m_{\pi}\simeq280$ MeV, $a \simeq 0.09 $ fm and $L/a=24,~32$ are utilized. The resulting scattering matrix suggests the existence of three charmonium-like states with $J^{PC}=0^{++}$ in the energy region ranging from slightly below $2m_D$ up to 4.13 GeV. We find a so far unobserved $D\bar D$ bound state just below threshold and a $D\bar D$ resonance likely related to $\chi_{c0}(3860)$, which is believed to be $\chi_{c0}(2P)$. In addition, there is an indication for a narrow $0^{++}$ resonance just below the $D_s\bar D_s$ threshold with a large coupling to $D_s\bar D_s$ and a very small coupling to $D\bar D$. This resonance is possibly related to the narrow $X(3915)$/$\chi_{c0}(3930)$ observed in experiment also just below $D_s\bar D_s$. The partial wave $l=2$ features a resonance likely related to $\chi_{c2}(3930)$. We work with several assumptions, such as the omission of $J/\psi\omega$, $\eta_c\eta$ and three-particle channels. Only statistical uncertainties are quantified, while the extrapolations to the physical quark-masses and the continuum limit are challenges for the future. Introduction Since the discovery of the J/ψ meson in 1970s a multitude of charmonium bound states and resonances have been found with energies ranging up to almost 5 GeV. A simple cc quark model provides a reasonable description of the levels below the strong decay thresholds and also some of the states above, however, there are clearly too many states to fit into this picture. Some mesons, such as the charged Z c states certainly have additional quark content, while for other states the interpretation is not so clear. On the theory side the nature of these states is being explored in tetraquark, molecular, and hybrid meson models, among others, while on the experimental side insight is provided by establishing their quantum numbers, decay modes and widths. Lattice QCD studies of the charmonium spectrum have a significant role to play in terms of guiding experimental searches, determining the quantum numbers of the states not well established as well as investigating their internal structure. In this work we focus on the isoscalar channel I(J P C ) = 0(0 ++ ) in the region up to 4.13 GeV for which there are a number of open questions. The ground state, χ c0 (1P ), found well below the DD threshold is interpreted as the 3 1P 0 cc level of the quark model and is the only well established state. In the energy region around 3.9 GeV, above the threshold, one expects a corresponding excited state. So far, three hadrons have been observed with the possible assignment of J P C = 0 ++ : the X(3860), a broad resonance detected by Belle [1,2], and two narrow resonances just below the D sDs threshold -the χ c0 (3930) discovered in the DD channel by LHCb [3,4] and the X(3915) observed through it's decay into J/ψω [5][6][7][8] (with the assignment of J P C = 0 ++ or 2 ++ ). While the latter two resonances could be the same state, their narrowness may indicate exotic content, where X(3915) has been interpreted as ccss meson in Ref. [9]. 
Predictions have also been made for an additional, as yet unobserved, bound state just below the DD threshold [10,11]. The determination of the low lying charmonium spectrum on the lattice is relatively straightforward, with the energy levels being directly accessed from correlation functions measured on the configurations generated in the Monte-Carlo simulation. Systematics arising from finite lattice spacing and simulating with unphysical light (sea) quark masses must be addressed by carrying out a continuum and quark-mass extrapolation. Near and above threshold, the analysis is considerably more challenging with information on the masses and (for resonances) also the widths being inferred from scattering amplitudes which can be obtained from the finite volume spectra via the Lüscher method [12][13][14]. Two-particle interpolators must be included in the basis of operators for the construction of the correlation functions in order to reliably determine these spectra. Simulating charmonia in flight provides additional levels with which to probe the scattering matrix, however, the identification of the continuum spin and parity quantum numbers of the levels is complicated due to the reduced symmetry on the lattice. In addition, for the energy range of interest both the DD and D sDs thresholds must be considered leading to a coupled-channel scattering analysis. So far, the coupled-channel scattering matrix has been extracted for several lightmeson systems, for example, πK, ηK [15,16], πη, KK [17] and ππ, KK, ηη [18] by the Hadron Spectrum Collaboration. In the heavy sector, there has been one investigation of Dπ, Dη, D sK scattering in isospin-1/2 [19] and a recent analysis of the Z c (3900) via D * D , J/ψπ scattering [20]. The HALQCD Collaboration has also investigated the Z c (3900) using a different approach which involves solving the Schrödinger equation with potentials determined on the lattice [21,22]. Pioneering works such as these were limited to a single lattice spacing and unphysical light-quark masses. The charmonium scalar channel has previously been studied by some of the authors considering only DD scattering with total momentum zero [23]. Here we present a lattice study of scattering in the coupled-channels DD and D sDs with quantum numbers I = 0 and J P C = 0 ++ , 2 ++ . This represents the first determination of the coupled-channel scattering matrix from lattice QCD in the charmonium system with isospin zero. Two lattice volumes are employed for the charmonium system at rest and in flight. This analysis uses the same lattice setup as our previous article on the identification of the spin and parity of the single hadron spectrum [24] and the investigation of single channel DD scattering for J P C = 1 −− and 3 −− [25]. While the present study represents a significant improvement on previous work, some simplifications remain and a comparison of the results for the masses and widths with experiment is qualitative. Within the energy range of interest, additional scattering channels, such as the J/ψω, η c η and those involving three particles, could in principle also be relevant. The effects of these channels will be investigated in the future, along with systematics associated with finite lattice spacing and unphysical light quark masses. The remainder of the paper is organized as follows. We begin by reviewing the essential general aspects of one-channel and two-channel scattering in Section 2. 
The details of the lattice setup and methodology are given in Section 3 and the single-and two-meson interpolators used in the correlation functions are discussed in Section 4. Simplifying assumptions made in this study are summarized in Section 5. The first step in extracting the scattering amplitudes is to compute the finite-volume spectra from the correlation functions. Our analysis and the final spectra are presented in Section 6. An overview of determining the scattering amplitudes from the lattice eigen-energies is provided in Section 7. Our results for the J P C = 0 ++ and 2 ++ channels are detailed in Section 8 and the relation to states observed in experiment is discussed in Section 9. Finally, Section 10 presents our conclusions. More details are given in several Appendices. Generalities on scattering matrices, poles, hadron masses and widths The masses and widths of strongly-decaying resonances should be inferred from the study of scattering processes where these resonances appear. In this section, we briefly review relevant concepts regarding scattering matrices, complex energy planes, pole singularities, hadron masses, and their decay widths. The first part lists definitions and notations for the scattering amplitudes, the phase space factors, etc.. The second part discusses naming conventions for various Riemann sheets, pole singularities in the complex energy plane and their relation to the hadron properties. Scattering matrices for real energies The unitary scattering amplitude S for one-channel scattering (DD or D sDs ) of spin-less particles in partial wave l is generally parametrized in terms of the energy-dependent phase shift δ(E cm ), where ρ ≡ 2p/E cm , p denotes the momentum of the scattering particles in the center-ofmomentum frame and t is the scattering amplitude. The factors p −2l in front ofK −1 lead to smooth behavior close to the threshold. In the case of t exhibiting simple Breit-Wigner type behavior,K −1 /E cm falls linearly as a function of E 2 cm , (2. 2) The phase shift equals π/2 at E cm = m R , while the width Γ(E cm ) is parametrized in terms of the coupling g and the phase space. S, t,K and δ depend on E cm and partial wave l (the dependence on l is not written explicitly). For coupled-channel scattering of DD and D sDs in partial wave l, the scattering matrices S are energy-dependent 2 × 2 unitary matrices, 3) The momenta of D and D s in the center-of-momentum frame are denoted by p 1 and p 2 , respectively. t is the scattering matrix andK(E cm ) is a real symmetric matrix. We follow the definition of t by the Hadron Spectrum Collaboration (e.g. [15]) and the definition of K from Ref. [26] 1 . Continuation to complex E cm , Riemann sheets and poles In experiment and lattice QCD simulations the scattering matrices S(E cm ) are determined for real energies. The theoretical interpretation in terms of (virtual) bound states and resonances is conventionally made via the poles in the t-matrix, analytically continued to the complex s-plane. 
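To make the one-channel definitions above concrete, the following is an illustrative sketch of a Breit-Wigner phase shift and the corresponding amplitude t for D D-bar scattering in partial wave l, together with a numerical check of unitarity. The conventions (form of Gamma(E), normalization of rho and t) follow standard usage and only approximately those of Section 2; all numerical values (m_R, g, m_D) are placeholders, not fit results.

```python
# Illustrative sketch: Breit-Wigner phase shift and amplitude for one-channel
# scattering of two equal-mass mesons, with S = 1 + 2 i rho t = exp(2 i delta).
import numpy as np

m_D = 1.93   # GeV, illustrative D-meson mass
m_R = 3.95   # GeV, illustrative resonance mass
g   = 2.0    # illustrative coupling parametrizing the width
l   = 0      # partial wave

def p_cm(E):
    """Momentum of each meson in the centre-of-momentum frame."""
    return np.sqrt(E**2 / 4.0 - m_D**2 + 0j)

def width(E):
    """Energy-dependent width Gamma(E) ~ g^2 p^(2l+1) / E^2 (phase space)."""
    return g**2 * p_cm(E)**(2 * l + 1) / E**2

def phase_shift(E):
    """Breit-Wigner: tan(delta) = E * Gamma(E) / (m_R^2 - E^2)."""
    return np.arctan2((E * width(E)).real, m_R**2 - E**2)

def t_amplitude(E):
    """t = exp(i delta) sin(delta) / rho, with rho = 2 p / E."""
    rho = 2.0 * p_cm(E) / E
    d = phase_shift(E)
    return np.exp(1j * d) * np.sin(d) / rho

E = np.linspace(2 * m_D + 1e-3, 4.13, 200)
S = 1.0 + 2j * (2 * p_cm(E) / E) * t_amplitude(E)
assert np.allclose(np.abs(S), 1.0)   # unitarity above threshold
# The phase shift passes pi/2 at E = m_R, where rho*|t|^2 peaks.
```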
The feature that leads to interesting physics is the square-root branch cut related to ρ = 2p/E_cm = sqrt(1 − (2m)^2/E_cm^2), which starts at the threshold and connects the physical Riemann sheet (or sheet I), conventionally chosen to have Im(ρ) > 0, to the unphysical Riemann sheet (or sheet II), which has Im(ρ) < 0. [Figure 1: Sketch of the pole locations in the scattering matrix t that typically affect the experimental rates on the physical axes (cyan line) for one-channel scattering (left) and two coupled channels (right). The Roman numerals indicate the Riemann sheets on which the poles are located according to Eq. (2.4). Poles immediately below a threshold, indicated by crosses, can also have observable effects on the physical axes above the respective threshold.] For a two-channel system there are four Riemann sheets, labelled by the combination of signs of Im(ρ_1) and Im(ρ_2) as specified in Eq. (2.4). Bound states, virtual bound states and resonances are related to pole singularities of t in the complex s-plane. These poles affect the physical axes, indicated by the cyan line in Fig. 1, along which the experimental measurements are made. Fig. 1 presents a schematic picture of the various pole locations relevant for our study that can affect the scattering amplitudes/matrices along the physical axes for one-channel and two-channel scattering. The locations of the poles are related to the masses and widths via E^p_cm = m − (i/2)Γ for resonances and E^p_cm = m for (virtual) bound states. In the close vicinity of a pole, the scattering matrix has the energy dependence given in Eq. (2.5), and the residue c_i c_j can typically be factorized into the couplings c_i, whose relative size is related to the branching ratios of the resonance (associated with the pole) to the two channels i = 1, 2. Lattice setup and methodology This study employs two CLS ensembles [29], U101 with L/a = 24 and H105 with L/a = 32, with m_π ≈ 280 MeV and a single lattice spacing a ≈ 0.086 fm. Open boundary conditions in time are imposed [30] and the sources of the correlation functions are placed in the bulk, away from the boundary. We remark that these correlation functions do not show any effects related to the finite time extent in the time regions analyzed. For H105 we use the replicas r001 and r002, for which the issue of negative strange-quark determinants described in Ref. [31] is of no practical relevance. For our analysis we use 255 (492) configurations on two replicas for ensemble U101 (H105). The masses of the pion, kaon, D and D s mesons determined on the larger ensemble are shown in Table 1. Note that the chosen quark-mass trajectory leads to a larger-than-physical m u/d and a smaller-than-physical m s. This means that the splitting between the DD and D sDs thresholds is smaller than in experiment, emphasizing the need for a coupled-channel analysis. We employ the charm-quark hopping parameter κ c = 0.12315, corresponding to a charm-quark mass m c and spin-averaged 1S-charmonium mass M av that are slightly larger than their physical values. For estimates of the statistical uncertainty we use the bootstrap method with (asymmetric) error bars obtained from the central 68% of the samples; further details are collected in Appendix A. The correlation matrices are averaged over several source-time slices and momentum polarizations to increase the statistical precision. Note that all quoted uncertainties are statistical only, and that results quoted in MeV have been obtained using the central value of the lattice scale without propagating its statistical or systematic uncertainties into the results.
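As a companion to the sheet discussion above, the following sketch shows one common way to continue the phase-space factors ρ_i to complex energies and to select one of the four Riemann sheets of the two-channel problem by flipping the sign of Im(ρ_i). The sheet labels I-IV and the masses are illustrative standard conventions; the paper's own labelling is fixed by its Eq. (2.4), which may differ in detail.

```python
# Minimal sketch (illustrative conventions): analytic continuation of the
# two-channel phase-space factors and Riemann-sheet selection used when
# searching for poles of the t-matrix in the complex s-plane.
import numpy as np

m_D, m_Ds = 1.93, 2.01   # GeV, illustrative masses

def rho(s, m, sheet_flip=False):
    """rho = 2p/E continued to complex s. sheet_flip=False keeps
    Im(rho) >= 0 (physical sheet for this channel); True flips the sign."""
    r = np.sqrt(1.0 - 4.0 * m**2 / s + 0j)
    if r.imag < 0:
        r = -r
    return -r if sheet_flip else r

def sheet_rhos(s, sheet):
    """Return (rho_DD, rho_DsDs) on sheet I, II, III or IV, labelled here by
    the sign pattern of (Im rho_1, Im rho_2): I=(+,+), II=(-,+),
    III=(-,-), IV=(+,-)."""
    flips = {"I": (False, False), "II": (True, False),
             "III": (True, True), "IV": (False, True)}
    f1, f2 = flips[sheet]
    return rho(s, m_D, f1), rho(s, m_Ds, f2)

# Example: a bound state below the DD threshold sits on the real axis of
# sheet I, while a narrow resonance just below the DsDs threshold typically
# produces a pole on sheet II.
s_test = (3.95 - 0.01j) ** 2
print(sheet_rhos(s_test, "II"))
```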
For hadrons with charm quarks, non-negligible discretization effects are observed when computing the dispersion relation on lattices with a ≈ 0.086 fm. A comparison of the finite momentum lattice energies and the continuum dispersion relation for the D meson on the two ensembles utilised in this work is given in Table II of Ref. [25]. The deviations found are small but statistically significant. A similar picture is observed for the D s meson. Note that, these deviations may spoil the finite-volume analysis outlined in Section 7, which assumes the continuum dispersion relation. In particular, it is important to ensure that if the energy shifts observed with respect to nearby non-interacting two hadron levels are zero then the resulting phase shift arising from the finite-volume analysis is also zero. In order to achieve this and mitigate the affect of the discretisation effects we adopt the analysis strategy described in Sect. IV.B. of Ref. [25]. Below we reiterate the most important details of the method. First the energy shift of each interacting eigenstate with respect to a nearby noninteracting two-hadron level where p 1,2 = n 1,2 2π L , p 1 + p 2 = P and s denotes the bootstrap sample. Here, (E lat ) s is the energy of the interacting two-hadron system, while (E lat H i ( p i ) ) s is the energy of a single hadron (either D or D s meson in this paper) with momentum p i measured on the lattice. We then use (E calc ) s = (∆E lat ) s + E cont as input to the quantization condition (see Eq. (7.1)) for each bootstrap sample s. The energies (E cont H i ( p i ) ) s are computed from the continuum dispersion relation using the lattice momenta p 1,2 and the single-hadron (D and D s ) masses at rest. The resulting energies E calc are equal to E lat in the naive continuum limit a → 0 by construction. The non-interacting levels are chosen via an analysis of the overlap factors by identifying those levels that are dominated by the corresponding two hadron interpolators. 3 In the case where more than one suitable nearby level was identified, we found the results obtained for E calc were consistent. A comparison of E calc with E lat is presented in Appendix B. For further details of the lattice methodology, in particular of the setup for computing the quark propagators with the (stochastic) distillation method [32,33] we refer the reader to our previous papers [24,25]. Interpolators The main aim of this work is to investigate the coupled-channel DD-D sDs scattering amplitudes and cross-sections in the channel I(J P C ) = 0(0 ++ ) in the energy range encompassing the DD threshold up to 4.13 GeV. Following Lüscher's approach [12][13][14]34], this requires a reliable extraction of the finite-volume charmonium spectrum below 4.13 GeV on several different volumes and/or in different momentum frames. In this study, we consider the charmonium spectrum in four different lattice irreducible representations (irreps) Λ: The squared momenta | P | 2 in the lab frame are given in units of (2π/L) 2 . Charge conjugation C = + is a good quantum number in all frames and hence is suppressed for brevity. On the right of Eq. (4.1), we list all relevant states with quantum numbers J P [λ] contributing to the respective irreps. Here λ refers to the helicity of the state. The first three irreps are relevant for an investigation of the J P = 0 + channel. The irreps The single-and two-meson interpolators utilized in each lattice irrep Λ (P )C considered in this study. 
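The correction described above can be summarized in a few lines of code. The sketch below (not the authors' code; variable names are illustrative) implements the prescription E_calc = ΔE_lat + E_cont for one bootstrap sample: the measured energy shift with respect to a nearby non-interacting two-hadron level is added to the energy of that level computed from the continuum dispersion relation.

```python
# Sketch of the discretization-mitigation strategy of Section 3 / Ref. [25].
import numpy as np

def continuum_energy(m, n_vec, L):
    """Single-hadron energy from the continuum dispersion relation,
    with lattice momentum p = (2*pi/L) * n_vec (all in lattice units)."""
    p = (2.0 * np.pi / L) * np.asarray(n_vec, dtype=float)
    return np.sqrt(m**2 + p @ p)

def E_calc(E_lat, E1_lat, E2_lat, m1, m2, n1, n2, L):
    """E_lat    : interacting two-hadron energy (one bootstrap sample)
       E1/2_lat : lattice energies of the two single hadrons at momenta n1, n2
       m1, m2   : single-hadron (D or D_s) masses at rest
    Returns the corrected energy fed into the quantization condition."""
    dE_lat = E_lat - (E1_lat + E2_lat)      # shift w.r.t. lattice free level
    E_cont = continuum_energy(m1, n1, L) + continuum_energy(m2, n2, L)
    return dE_lat + E_cont

# Example with illustrative numbers in lattice units, for a D(1)Dbar(-1)
# level on the L/a = 32 ensemble:
# E = E_calc(E_lat=1.72, E1_lat=0.87, E2_lat=0.87, m1=0.83, m2=0.83,
#            n1=[0, 0, 1], n2=[0, 0, -1], L=32)
```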
We use the simplified notation M 1 ( p 2 1 )M 2 ( p 2 2 ) for the two-meson interpolators with the momentum p i of each meson (i = 1, 2) given in units of 2π/L. The full expressions are omitted for brevity. N ops indicates the number of operators of each type employed. in the moving lab frames also receive contributions from states with J P [λ] = 2 + [0] and 2 ± [2] within the energy range of interest 4 . The analysis of the spectrum in the B 1 irrep constrains the parameters for DD scattering with l = 2. This partial wave inevitably contributes to the finite-volume spectrum of irreps A 1 with P > 0. We utilize a large basis of single-meson as well as two-meson interpolators in the above irreps to reliably determine the relevant low energy spectrum. As in our previous publications [24,25], we construct the single-meson interpolators following the procedure in Refs. [35,36], using up to two gauge covariant derivatives. Table 2 lists the number of single-meson operators employed in each of the finite-volume irreps considered. The procedure discussed in Ref. [24] guides us in assigning the quantum numbers J P [λ] to the extracted energy levels and aids us in selecting the levels relevant for the amplitude analysis. The DD as well as D sDs interpolators are constructed following the same procedure as in Ref. [25]. The momentum combinations implemented in this study are given in Table 2. The two operators for D (s) (0)D (s) (0) differ in terms of the gamma matrices employed: γ 5 or γ t γ 5 for each meson. Similarly, for D * (0)D * (0) and J/ψ(0)ω(0), two operators are constructed by employing γ i or γ t γ i for the spin structure. Only one eigenstate related to J/ψ(0)ω(0) or D * (0)D * (0) is expected in the non-interacting limit. We also include two-meson operators involving spin 1 mesons, such as J/ψω and D * D * (see Table 2). For non-zero momenta, the construction of such operators needs additional care and we follow the induced representation method described in Appendix A2 of Ref. [37]. In the | P | 2 = 2 frame, for example, we implement three linearly independent J/ψ(2)ω(0) operators and observe three almost-degenerate eigenstates. These operators are not included when extracting the finite volume spectrum for the amplitude analysis, as discussed in Section 5. Assumptions and simplifications in the present study This study is performed using lattice gauge ensembles with two different physical volumes at a single lattice spacing and at unphysical quark masses (the resulting masses of key hadrons are given in Table 1). As a consequence, only a qualitative comparison of the results can be made with experiment. Unlike for light hadrons [38], scattering studies in the charmonium sector are still at an early stage. For the physical states we are interested in, a three-particle channel and multiple two-particle channels are open and all could, in principle, be relevant. One possible approach is to simulate at very heavy pion (and kaon) mass, such that the number of relevant decay modes is reduced to a few two-hadron modes, which can then be fully explored. This approach has the disadvantage that the quark masses are far removed from their physical values, making a comparison to experiment a challenge. We opt for a strategy where we simulate at a moderate pion mass of 280 MeV and take into account the scattering channels expected to be most relevant for the physics close to the opencharm threshold(s). 
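For orientation, the non-interacting two-meson energies against which the interacting spectrum is later compared (the solid lines in Fig. 2) can be enumerated as in the sketch below. The routine, masses and momentum cutoff are illustrative and not taken from the paper's analysis code.

```python
# Sketch (illustrative): enumeration of non-interacting two-meson energies
# E = E_1(p1) + E_2(p2) in a finite box for total momentum P, boosted back
# to the centre-of-momentum frame.
import itertools
import numpy as np

def free_levels(m1, m2, P_vec, L, n_max=2, E_cm_max=None):
    """Sorted non-interacting cm-frame energies for momenta p1 = (2pi/L) n1,
    p2 = (2pi/L) (P - n1), with integer components |n1_i| <= n_max."""
    two_pi_L = 2.0 * np.pi / L
    P = np.asarray(P_vec)
    levels = set()
    for n1 in itertools.product(range(-n_max, n_max + 1), repeat=3):
        n1 = np.asarray(n1)
        n2 = P - n1
        E1 = np.sqrt(m1**2 + two_pi_L**2 * (n1 @ n1))
        E2 = np.sqrt(m2**2 + two_pi_L**2 * (n2 @ n2))
        E_lab = E1 + E2
        E_cm = np.sqrt(E_lab**2 - two_pi_L**2 * (P @ P))   # boost to cm frame
        if E_cm_max is None or E_cm <= E_cm_max:
            levels.add(round(float(E_cm), 6))
    return sorted(levels)

# Example in lattice units: D Dbar levels in the |P|^2 = 1 frame on L/a = 24.
# print(free_levels(m1=0.83, m2=0.83, P_vec=[0, 0, 1], L=24, n_max=2))
```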
Some additional channels are neglected (as discussed below), however, our assumptions about which thresholds are relevant can be relaxed successively in future calculations. Neglecting certain scattering channels in our study is relevant in two different ways, which could be seen as two different assumptions. Some channels are already neglected in constructing the correlator matrices. This implicitly assumes that the neglected multihadron correlators would simply yield additional energy levels rather than significantly modifying the extracted spectrum. Additionally, we assume that the resulting energy levels can be analyzed with the (coupled-channel) formalism for the channels we deem to be dominant, which might fail if there is significant coupling to neglected channels. Beyond the scattering channels investigated explicitly, our current study includes J/ψω and (some) D * D * operators in the interpolator basis. Due to the poor signal obtained for light isoscalar mesons, the energy levels close to the non-interacting J/ψω levels are not very precisely determined and would not provide strong constraints on the scattering matrix. In particular, almost all energy levels dominated by J/ψω interpolators fall within one standard deviation of the non-interacting J/ψω energies, and -apart from the additional energy levels which appear -the other finite-volume energies do not shift significantly when including these interpolators. Section 6 will present the finite volume spectrum up to 4.13 GeV based on all the operator types in Table 2, apart from the J/ψω operators (see Fig. 2). Note that for our lattices the J/ψω threshold is located at approximately 3.95 GeV. We also neglect the η c η channel which has a threshold of around 3.54 GeV. We remark that this decay channel has not been observed for any of the experimental candidates mentioned in the introduction (and discussed in more detail in Sect. 9). Operators with more than two hadrons are also not implemented. The lowest three-hadron threshold is for the decay into χ c0 ππ at 4.02 GeV. This threshold is within the energy region we consider, close to the upper end. The analysis of DD scattering with l = 2 assumes that the coupling to the channel D sDs with l = 2 is negligible in the analyzed energy region and hence is omitted. We also neglect the coupling to DD * with l = 1, which contributes to irrep B 1 . The DD * threshold opens at 4.0 GeV, while the lowest non-interacting level D * (2)D(1) would appear at E cm 4.2 GeV and 4.1 GeV on the N L = 24 and 32 ensembles, respectively, which is at the upper limit of the analyzed region (see Fig. 2). We also assume negligible effect of the D * D * channel with threshold at 4.1 GeV. As in all studies of charmonium-like resonances to date, charm annihilation Wick contractions are omitted. All the remaining contraction diagrams arising from the singleand two-meson operators in our basis (shown in Fig. 1 of Ref. [23]) are computed following the procedures described in our previous publications [24,25]. We stress that we determine the finite-volume spectra at a single lattice-spacing and are therefore unable to quantify the uncertainty associated with the lattice discretization. In particular, the uncertainty arising from the heavy quark discretization may be nonnegligible. As discussed in the previous section, the dispersion relation deviates from the continuum relation in our study and spin-splittings are also likely to be affected [39,40]. 
In general, lattice spacing effects in heavy-light mesons and charmonium are different with the net result that even at physical light-quark masses the open-charm thresholds can be shifted with respect to the measured charmonium states at finite lattice spacing. Determination of the finite-volume spectrum This section presents the eigen-energies E n that will be used to determine the scattering matrices. The energies are obtained from the correlation matrices C ij (t) = O i (t)O † j (0) via the widely-used variational method. This involves solving the generalized eigenvalue problem C(t)u (n) (t) = λ (n) (t)C(t 0 )u (n) (t) for the eigenvalues λ (n) (t) and the eigenvectors u (n) (t) [41][42][43]. We use the reference time t 0 /a = 3 or 4. The eigen-energies are extracted from 1-exponential fits to the eigenvalues λ (n) (t) = A n e −Ent with the fit range, in most cases, starting between timeslices 10 and 12. The finite-volume spectrum of the charmonium system with isospin I = 0 and C = +1 is shown in Fig. 2. We present the spectrum in the center-of-momentum (cm) frame 1 , B 1 and total momenta | P | 2 = 0, 1, 2. These irreps give information on the charmonium(like) states and D (s)D(s) scattering in the channels with J P C = 0 ++ , 2 ±+ (see Eq. 4.1). The energies indicated by the black-circles are used to extract information on D (s)D(s) scattering. These energies are near or above the DD threshold and are precise enough to reliably resolve the energy-shifts with respect to the non-interacting energies of D (s)D(s) (indicated by the solid lines). The light-blue circles are the energy levels related to ground-state charmonia with J P = 2 ± . Figure 2: The eigen-energies in the center-of-momentum frame (E cm ) for the charmoniumlike system with I = 0 and C = +1. Results are presented for irreducible representations 1 , B 1 and total momenta | P | 2 = 0, 1, 2, which give information on the channels with J P C = 0 ++ , 2 ±+ . The data points correspond to the eigen-energies obtained from the lattice simulation: the black circles are used to extract the coupled-channel scattering matrices for DD − D sDs , while the blue circles are omitted from the scattering analysis. The solid and dashed red (green) lines correspond to discrete DD (D sDs ) eigen-energies in the non-interacting limit: solid lines correspond to the operators that are implemented, while dashed lines correspond to the lowest-lying energies from operators that are not implemented. Dotted lines represent thresholds. The data points indicated by the light blue circles correspond to ground-state charmonia with J P C = 2 ++ and 2 −+ , which appear at m 3.56 GeV and 3.83 GeV, respectively. Some points are shifted horizontally slightly for clarity. Determining scattering matrices from lattice finite-volume energies The bound states and resonances are inferred from the scattering matrices as briefly reviewed in Section 2. The infinite-volume scattering matrix S(E cm ) is related to the finitevolume two-hadron spectrum for real energies E cm above the threshold and somewhat below it through the well-known Lüscher relation [12][13][14]. The eigen-energies of the coupled channel DD − D sDs system given in the previous section provide information on the in the next section.K uniquely determines S, while both depend also on the partial wave l. We use the spectrum from the previous section to determineK(E cm ) using the publicly available package TwoHadronsInBox [46]. 
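The variational analysis just described can be sketched as follows: solve the generalized eigenvalue problem C(t) u = λ C(t0) u for each timeslice and fit each eigenvalue to a single exponential. This is a minimal illustration, not the production code; the reference time, fit ranges and starting values are placeholders.

```python
# Sketch of the GEVP extraction of eigen-energies described in Section 6.
import numpy as np
from scipy.linalg import eigh
from scipy.optimize import curve_fit

def gevp_eigenvalues(C, t0):
    """C: array of shape (T, N, N) with C[t]_ij = <O_i(t) O_j^dagger(0)>.
    Returns eigenvalues lambda^(n)(t), sorted in decreasing order."""
    T, N, _ = C.shape
    lams = np.full((T, N), np.nan)
    for t in range(t0 + 1, T):
        # generalized eigenvalue problem C(t) u = lambda C(t0) u
        w = eigh(C[t], C[t0], eigvals_only=True)
        lams[t] = np.sort(w)[::-1]
    return lams

def fit_energy(lams_n, t_min, t_max):
    """One-exponential fit lambda(t) = A * exp(-E t) on [t_min, t_max]."""
    ts = np.arange(t_min, t_max + 1)
    model = lambda t, A, E: A * np.exp(-E * t)
    popt, _ = curve_fit(model, ts, lams_n[t_min:t_max + 1], p0=(1.0, 0.5))
    return popt[1]   # E_n in lattice units

# Example: with t0/a = 3 and fit ranges starting around timeslice 10-12,
# as in the text:
# lams = gevp_eigenvalues(C, t0=3)
# E0 = fit_energy(lams[:, 0], t_min=10, t_max=20)
```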
The relation between discrete lattice eigen-energies E cm andK-matrix for coupledchannel scattering is referred to as the quantization condition [46] Both terms in the determinant are matrices in the space of partial waves l, l and channels i, j (DD, D sDs or both), and the determinant is evaluated over both indices.K l;ij δ ll is an unknown matrix in channel space that depends on the partial wave l; it is diagonal in l since the good quantum numbers in continuum scattering of spin-less particles (such as D and D s ) are J, S = 0 and l = J −S = J. The B P ,Λ ll ;i (E cm ) are known box-functions [46] that are in general non-diagonal in the partial wave index. In one-channel scattering and when only partial wave l contributes, relation (7.1) simplifies toK −1 (E cm ) = B P ,Λ (E cm ), since the argument of the determinant is a 1 × 1 matrix. The values of K −1 (E cm ) will be shown as points in figures for one-channel scattering. For two coupled channels, for the case when only partial wave l contributes, the determinant equation (7.1) provides one relation betweenK 11 (E cm ),K 22 (E cm ) andK 12 (E cm ) for each energy level, complicating the determination of those functions. Therefore, we follow the strategy proposed in Ref. [44], where theK ij (E cm ) are parametrized as functions of the energy. In this strategy, theK-matrix elements are determined by requiring that relation (7.1) is simultaneously satisfied for all relevant lattice energies E cm . We will focus on certain interesting and rather narrow energy regions, where a linear dependence on s is expected to be a good approximatioñ Such a parametrization is equivalent to a Breit-Wigner parametrization in the resonance region and is also similar to the well-known effective range expansion K −1 ij (s) = c ij + d ij p 2 near threshold, where p is the momentum of the scattering particles in the center-ofmomentum frame. We determine the parameters a ij and b ij following the strategy discussed above, using the determinant residual method proposed in [46], which is briefly described in Appendix C. A posteriori, we always verify that the resulting parametrization predicts via Eq. (7.1) the same number of eigen-energies observed in the actual simulation in the relevant energy range; this is shown in the Ω plots for some fits in Appendix D. This procedure will be followed for the extraction of the coupled-channel scattering matrix as well as for one-channel scattering. This study is based on the parametrization in Eqn. (7.2). Alternatively, one could parametrizeK ij (s) itself with common pole terms in both channels, such as those tabulated in Table IV of Ref. [47]. We have performed fits with different parametrizations ofK ij (s) (single pole, double pole, triple pole, poles with polynomial terms, etc.). We find that fits (for coupled DD − D sDs scattering) with a single-pole in the higher energy region are not consistent with our data. Including two or more poles/resonances leads to fits with six or more parameters. We observed that the data used in this work is insufficient to accommodate such a large number of parameters and hence such an analysis is beyond scope of this work. An investigation of the model-independence of the findings presented here requires extending the lattice calculation to include a larger set of ensembles with high statistics. The box-function B P ,Λ ll ;i (E cm ) can have off-diagonal elements for l = l due to the lack of rotational symmetry in a finite box. 
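The fitting strategy described above can be illustrated schematically. The sketch below parametrizes the coupled-channel K-matrix linearly in s, as in Eq. (7.2), and adjusts the parameters so that the quantization condition is satisfied as closely as possible at all measured energies. It is a simplification: the actual analysis uses the Ω residual function and the box matrices B of the TwoHadronsInBox package [46], which are not reproduced here; box_matrix is a placeholder callable.

```python
# Simplified sketch of the determinant-residual fit of the K-matrix.
import numpy as np
from scipy.optimize import minimize

def K_inv(E_cm, pars):
    """Coupled-channel K^{-1} with (K^{-1})_ij / sqrt(s) = a_ij + b_ij * s
    on the diagonal and a constant off-diagonal element a12."""
    a11, b11, a22, b22, a12 = pars
    s = E_cm**2
    return E_cm * np.array([[a11 + b11 * s, a12],
                            [a12,           a22 + b22 * s]])

def residual(pars, levels, box_matrix):
    """levels: list of (E_cm, frame_info); box_matrix(E_cm, frame_info) must
    return the finite-volume 2x2 matrix B for that irrep/frame/volume."""
    r = 0.0
    for E_cm, frame in levels:
        d = np.linalg.det(K_inv(E_cm, pars) - box_matrix(E_cm, frame))
        r += abs(d)**2     # simplified stand-in for the Omega function of [46]
    return r

# best = minimize(residual, x0=np.zeros(5), args=(levels, box_matrix),
#                 method="Nelder-Mead")
# Repeating the fit on every bootstrap sample of the energies propagates the
# statistical uncertainties to the parameters (a_ij, b_ij).
```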
This will result in contributions from multiple partial waves in the quantization condition Eq. (7.1) for a given lattice irrep Λ. We consider partial waves l = 0 and l = 2 and ignore contributions from l ≥ 3, which is a reasonable assumption in the energy region considered for the respective irreps. In this case the only non-diagonal elements B ll among the A 1 (P 2 = 0) irreps that are nonzero are B P =001,A 1 02 and B P =110,A 1 02 . These will be taken into account in the analysis of Section 8.4.2. Results for various channels and energy regions In this section we present our results for the scattering matrices, pole positions, masses, and widths of J P C = 0 ++ and 2 ++ charmonium(like) states in various energy regions and with varying assumptions. The energy range from slightly below 2m D up to 4.13 GeV is divided into smaller intervals, where the elements of the coupled DD − D sDs scattering matrix are separately parametrized according to Eq. (7.2) or as a constant. The details of the parametrizations and the results are presented in separate subsections below, while information on the energy levels considered in each case is given in Appendix D. A single description of the whole energy region requires a finite-volume analysis involving many more parameters, which results in more challenging and unstable fits. Such an analysis is beyond the scope of the current investigation. Our inferences and conclusions are based on the finite-volume analysis of separate energy regions. Similar parametrizations to those employed for the separate energy regions, are employed collectively to a wider energy range in Appendix E as an additional consistency check. 8.1 DD scattering with l = 0 near threshold The narrow energy region near the DD threshold is significantly below the D sDs threshold and can be treated in a one-channel approach. We employ the parametrization in Eq. (7.2) which is equivalent to the effective range expansion p cot δ 0 = 1/a 0 + r 0 p 2 /2 near threshold. Four lattice energy levels with E cm closest to 2m D (listed in Appendix D.1) are utilized to determine the parameters via the quantization condition (7.1). We find where cor is the correlation matrix defined in Appendix A. The fit is shown in Fig. 3a. This scattering matrix leads to a bound state at the energy E cm = m when the scattering matrix t (2.1) has a pole on the real axis below threshold on sheet I The lhs of the second equation is shown as the red line in the figure, while the rhs is indicated by the orange line. The bound state occurs at the value E cm = m, where the two curves intersect. The slope of p cot δ at the intersection, is smaller than the slope of −|p|, as required for an s-wave bound state (see Section VC of [25]). The location of the pole in the scattering matrix is shown in Fig. 3c. The bound state appears just below the DD threshold with the binding energy We denote this state by χ DD c0 , indicating it has J P C = 0 ++ and a strong connection to the DD threshold. This state comes in addition to the conventional χ c0 (1P ), which is found significantly below threshold. Experiments cannot explore DD scattering below threshold, however, a closeby bound state below threshold could be identified experimentally through a sharp increase of the rate just above threshold. Fig. 3b shows a dimensionless quantity ρ|t| 2 related to the number of events N DD ∝ pσ ∝ ρ|t| 2 expected in experiment. It features a peak above threshold, which increases much more rapidly than the phase space. 
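The near-threshold pole search of Section 8.1 amounts to solving a one-dimensional root-finding problem. The sketch below uses illustrative effective-range parameters (not the fitted values) and a common sign convention for the scattering length; it locates the binding momentum from the sub-threshold condition p cot δ0 = −|p| and converts it to a binding energy.

```python
# Sketch (illustrative parameters): locating an s-wave bound-state pole
# below the DD threshold from p cot(delta_0) = 1/a0 + r0 p^2 / 2.
import numpy as np
from scipy.optimize import brentq

m_D = 1.93             # GeV, illustrative
a0, r0 = -5.0, 1.0     # GeV^-1, illustrative ERE parameters (sign convention!)

def pole_condition(kappa):
    """kappa = |p| is the binding momentum; zero at the bound-state pole,
    since below threshold p^2 = -kappa^2 and p cot(delta) must equal -kappa."""
    p2 = -kappa**2
    return 1.0 / a0 + 0.5 * r0 * p2 + kappa

kappa_b = brentq(pole_condition, 1e-6, 0.5)           # GeV
E_bound = 2.0 * np.sqrt(m_D**2 - kappa_b**2)          # GeV
binding = 2.0 * m_D - E_bound
print(f"binding energy ~ {1e3 * binding:.1f} MeV below the DD threshold")
# A genuine bound state (rather than a virtual one) additionally requires the
# slope of p cot(delta) at the crossing to be smaller than that of -|p|,
# as checked in the text.
```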
Such a DD bound state was not claimed by experiments so far. A similar state was predicted in phenomenological models [10,48,49], and some indication for it was suggested in the experimental data [11,50] and in data from the lattice simulation of Ref. [23]. A more detailed discussion follows in the summary in Section 9. Details of the fit (8.2) and some variations thereof are provided in Appendix D.1. In these fits, the ensemble average of the data gives rise to a bound state, while a very small proportion of the bootstrap samples instead produce a virtual bound state. This indicates that our lattice results, at the employed quark masses, favour the existence of a bound state. However, with the present statistical accuracy, one cannot completely rule out the existence of a virtual bound state. The robust conclusion is that we observe a significant DD interaction near threshold, leading to one state just below threshold. Such a state leads to an increase of the DD rate above threshold irrespective of whether it is a bound or a virtual bound state. Note that it is not known whether this state would also feature in a simulation with physical quark masses. 8.2 D sDs scattering with l = 0 near threshold in the one-channel approximation The D sDs channel carries the same quantum numbers as DD necessitating the consideration of coupled-channel scattering. In this subsection we aim to get a rough estimate of D sDs scattering in the one-channel approximation, which will also provide initial guesses for the parameters when coupled channel scattering is considered in Section 8.4. The D sDs scattering near threshold is parametrized by We employ the quantization condition (7.1) together with four lattice energies close to this threshold that are dominated by D sDs interpolators (listed in Appendix D.2) and obtain The resulting fit is shown in Fig. 4a. The scattering matrix has a bound state pole at the energy E cm = m where condition (8.3) is satisfied, see Fig. 4c. Again, the slope of p cot δ is smaller than the slope of −|p| at the position of the pole, as required for an s-wave bound state (see Section VC of [25] that we denote χ DsDs c0 , indicating it has J P C = 0 ++ and a strong connection to the D sDs threshold. This state is responsible for the significant increase in the D sDs rate shown in Fig. 4b just above threshold. In order to search for the χ DsDs c0 in experiment an exploration of the D sDs invariant mass near threshold would be invaluable. In one-channel D sDs scattering, considered here, the state is decoupled from DD, while it will become a narrow resonance and acquire a small width when the coupling to DD is considered in Section 8.4. Two candidates χ c0 (3930) [3] and X(3915) [2] (which may correspond to the same state) have already been observed in experiment just below the threshold 2m exp Ds 3936 MeV; they have a small coupling to DD and a small width. If the D sDs bound state (8.8) corresponds to χ c0 (3930) and/or X(3915), it naturally explains both features as will be discussed in Section 9. This channel features charmonia with J P C = 2 ++ . It is not the main focus of our study, however, an estimate of its scattering amplitude is required to extract the l = 0 scattering amplitude using Eq. (7.1). We consider the energy region encompassing the 2 ++ resonance and neglect the coupling to D sDs scattering with l = 2, which we assume to be negligible in this region. The scattering amplitude is parametrized by the Breit-Wigner form (2.2) Fig. 5a. 
The mass m J2 corresponds to the energy where the phase-shift reaches π/2, which is close-to the 2 ++ resonance mass obtained from the pole position below, while the coupling g J2 is related to its width as shown in Eq. (2.2). The position of the pole E p cm of the scattering matrix (8.9) on sheet II provides a better way of determining the resonance mass m and width Γ. We obtain The pole is plotted in Fig. 5c. This leads to the lowest J P C = 2 ++ resonance above DD threshold with where g parametrizes the width Γ = g 2 p 5 /m 2 . This likely corresponds to the wellestablished resonance χ c2 (3930) = χ c2 (2P ) [2]; a detailed comparison with experiment is made in Section 9. The resonance mass and the coupling obtained from the pole and from Eq. (8.10) are consistent, which is expected for a narrow resonance. The next higher 2 ++ charmonium is estimated 6 to be near E cm 4.2 GeV, which is above our region of interest. We assume it to be narrow and to have a negligible effect on the analysis of the lower-lying 2 ++ resonance. Finally, we turn to the coupled DD − D sDs scattering. We focus on the energy region E cm 3.93 − 4.13 GeV near the D sDs threshold and we find an indication for several interesting hadrons. The scattering matrix for partial wave l = 0 is parametrized as with the off-diagonal element held constant in E cm . Of the two equivalent parametrizations shown above, we will utilize the one on the rhs. The 5 parameters in Eq. (8.13) are determined using all levels of irreps A (+) 1 within the energy region E cm = 3.93 − 4.13 GeV displayed in Fig. 2: there are 14 levels from three frames with P 2 = 0, 1, 2 and from two spatial volumes N L = 24, 32 (see the black circles in the figure). The quantization condition (7.1) for A (+) 1 irreps depends on the scattering amplitudes for l = 0, which we aim to determine. However, it also depends on the scattering amplitudes for l = 2 when P > 0. Below we present analyses both including and excluding the contribution from the l = 2 partial wave. Analysis omitting l = 2 In the first analysis we omit the contribution of the partial-wave l = 2. This is expected to be a fair approximation since l = 2 effects DD scattering only in the narrow 2 ++ resonance region that is at the upper end of the current energy range of interest. The coupling between channels a 12 is non-zero but small. We also performed a study where five parameters for l = 0 and two parameters for l = 2 are fitted simultaneously using 18 levels of irreps A 3) on the physical axes can be described by the phase shifts δ DD , δ DsDs and inelasticity η, which are shown in Fig. 8. We find that the DD and D sDs channels are not strongly coupled, which can be seen from the inelasticity η 1 in Fig. 8 ( 8 ) and from the smallness of the off-diagonal element a 12 in Eqs. (8.15) and (8.13). Our results suggest there are two 0 ++ resonances in this energy region: a narrow resonance dubbed χ DsDs c0 just below the D sDs threshold and a broader one denoted by χ c0 . The broader 0 ++ resonance χ c0 is related to the pole indicated in red on sheet III This pole affects the scattering amplitude on the physical axes above the D sDs threshold and is responsible for a peak around 3.98 GeV in the DD → DD rate shown in the left pane of Fig. 7. The presence of this pole is also reflected in the phase shift δ DD 0 , which increases gradually starting from 2m Ds as is evident in the left pane of Fig. 8. 
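The conversion from a pole position to the quoted resonance parameters is straightforward and is sketched below with illustrative numbers. The coupling g is defined through Γ = g² p^(2l+1)/m², with p the decay momentum evaluated at the resonance mass; as noted later in the text, g is the quantity compared with experiment since it is expected to depend less on the quark masses than the width itself.

```python
# Sketch (illustrative numbers): pole position -> mass, width and coupling.
import numpy as np

m_D = 1.93   # GeV, illustrative D mass

def pole_to_parameters(E_pole, l, m_decay=m_D):
    """E_pole = m - (i/2) Gamma on the unphysical sheet."""
    m = E_pole.real
    Gamma = -2.0 * E_pole.imag
    p = np.sqrt(m**2 / 4.0 - m_decay**2)        # cm momentum at the pole mass
    g = np.sqrt(Gamma * m**2 / p**(2 * l + 1))  # Gamma = g^2 p^(2l+1) / m^2
    return m, Gamma, g

# Example: an illustrative 2++ pole (l = 2) at 4.0 GeV with a 100 MeV width.
m, Gamma, g = pole_to_parameters(4.0 - 0.05j, l=2)
print(f"m = {m:.3f} GeV, Gamma = {1e3 * Gamma:.0f} MeV, g = {g:.2f} GeV^-1")
```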
The nearby pole on sheet II does not have a significant influence on the physical scattering above the second threshold. The pole residues indicate that this state decays predominantly to DD, while the decay to D sDs is suppressed, as evidenced by |c 1 | |c 2 |, see in the last two rows of Fig. 6 for the pole presented in red. The resonance parameters are (8.17) where M av = 1 4 (3m J/ψ +m ηc ), and the coupling g parametrizes the full width Γ = g 2 p D /m 2 . The possible relation of this state to the broad resonance χ c0 (3860) discovered by Belle in 2017 [1,2] is discussed in Section 9. The narrow 0 ++ resonance χ DsDs c0 near the D sDs threshold is related to the pole on sheet II, indicated by the top-filled orange symbols in the first row of Fig. 6. Its location relative to the threshold is given by This resonance is related to the bound state in the analysis of D sDs -scattering in the onechannel approximation of Section 8.2. The pole on sheet II and the nearby pole on sheet IV correspond to this resonance and are mutually exclusive across the bootstrap samples. Further details on this can be found in Appendix D.4. It is clear from Figure 7 that the resonance pole leads to a sharp rise in the D sDs → D sDs and DD → D sDs rates just above 2m Ds . The increased DD → D sDs rate is also responsible for a dip in the DD → DD rate at 2m Ds and all three features should be used as a signature for experimental searches of this state. Note that the magnitude of the D sDs → D sDs peak above 2m Ds is larger when the pole is closer to the threshold. χ DsDs c0 couples predominantly to D sDs and very weakly to DD (one can see that |c 2 | 2 |c 1 | 2 in Fig. 6). The mass difference of the state with respect to the threshold and its narrow total width Γ = g 2 p D /m 2 parametrized in terms of g are On the experimental side, the newly discovered χ c0 (3930) [3] and the X(3915) [2] lie near the D sDs threshold and have very small or zero decay rate to DD. The indication for a D sDs state in our study explains both properties, as detailed in Section 9. The parameters of the scattering matrix obtained from the analysis including or excluding l = 2 are similar, with the exception of a 22 and b 22 . These parametrize D sDs → D sDs and differ between the coupled-channel analysis and the one-channel approximation, however, both analyses lead to a state just below the D sDs threshold on the real axis (see Fig. 4c) or slightly away from it. The conclusion that there is a near-threshold pole is robust, while its exact location and the effect on physical scattering need to be investigated in a simulation with higher statistics and a better control of systematic uncertainties. 2m Ds 2m D sheet I sheet II sheet III sheet II J PC = 0 + + J PC = 2 + + J PC = 0 + + J PC = 2 + + Figure 9: Pole singularities of the scattering amplitude/matrix in the complex energy plane, which are associated with the hadrons predicted in this work. The pole related to the J P C = 2 ++ resonance appears on sheet II, as we have neglected the l = 2 partial wave contribution from D sDs scattering. the identification with experimental states is unambiguous, while the other states found are denoted χ c0 , χ DD c0 and χ DsDs c0 . The subscripts c0 and c2 indicate the assignment of J P C = 0 ++ and 2 ++ , respectively. The location of the poles in the complex energy plane related to these hadrons are given in Fig. 9, while the corresponding masses are compared to experiment in Fig. 
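The statement that the residue factorizes into channel couplings, t_ij ≈ c_i c_j/(s_p − s) near the pole, can be checked numerically as sketched below. Here t_matrix is assumed to return the analytically continued 2×2 t-matrix built from the fitted K-matrix; the finite-difference residue estimate and tolerances are illustrative.

```python
# Sketch (illustrative): residues and channel couplings of a coupled-channel
# pole, whose relative size |c_1| vs |c_2| reflects the DD vs DsDs branching.
import numpy as np

def couplings_from_pole(t_matrix, s_pole, sheet, eps=1e-4):
    """Estimate the residue matrix R_ij = lim (s_pole - s) t_ij(s) by a
    finite-difference evaluation slightly away from the pole, then factorize
    R_ij = c_i c_j (up to corrections of order eps)."""
    s = s_pole + eps
    R = (s_pole - s) * t_matrix(s, sheet)
    c1 = np.sqrt(R[0, 0] + 0j)
    c2 = R[0, 1] / c1                      # fixes the relative sign/phase
    # factorization check: R[1,1] should be close to c2**2
    check = abs(R[1, 1] - c2**2) / abs(R[1, 1])
    return c1, c2, check

# |c2| >> |c1| for the pole just below the DsDs threshold would indicate a
# dominant DsDs coupling, as found in the text.
```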
10 The resonance decay widths depend on the phase space p 2l+1 evaluated for the meson momenta (in the cm-frame) at the resonance energy, which in turn depends on the position of the threshold. The latter is different in the simulation and in experiment. Therefore it is customary to compare the coupling g that parametrizes the full width of a hadron Γ ≡ g 2 p 2l+1 D /m 2 with l = 0, 2 for J P C = 0 ++ , 2 ++ , (9.1) as g is expected to be less dependent on the quark masses than the width itself. Note that it is not known whether this bound state would also feature in a simulation with physical quark masses. Such a state has not been claimed by experiments. The existence of a shallow DD bound state dubbed X(3720) was already suggested by an effective phenomenological model in Ref. [10] 9 featuring also exchanges of vector mesons. Ref. [11] indicates that there may already be some evidence for such a state in the DD invariant mass distribution from Belle [50], which shows an enhancement just above threshold. The DD rate from Babar [51] also shows a hint of enhancement just above threshold (see Fig. 5 of [51]). In a molecular picture, a 0 ++ state is expected as a partner of X(3872) via heavy-quark symmetry arguments [48,49]. A similar virtual bound state with a binding energy of 20 MeV follows from the data of the only previous lattice simulation of DD scattering [23] 10 . 2 ++ resonance and its relation to χ c2 (3930) We find a resonance with J P C = 2 ++ in l = 2 DD scattering with the following properties This is most likely related to the conventional χ c2 (3930) resonance (also called χ c2 (2P )) discovered by Belle [57] exp χ c2 (3930) : m − M av = 854 ± 1 MeV , g = 2.65 ± 0.12 GeV −1 . (9.8) Here g parametrizes the width Γ = g 2 p 5 D /m 2 . The masses are reasonably close, while the coupling from lattice QCD is larger that in experiment. However, the couplings are also not inconsistent given the large statistical error from our simulation and the unquantified systematic uncertainties discussed in Section 5. Broad 0 ++ resonance and its possible relation to χ c0 (3860) This resonance couples mostly to DD and has a very small coupling to D sDs . Its resonance parameters are Table 4 of Ref. [10]. 10 The presence of this state was not mentioned in Ref. [23], as such virtual bound states were not searched for. 11 For the couplings calculated from the experimental values we vary both the mass and width by ±1σ and take the maximal positive and negative deviations as the uncertainties -25 -based on the following arguments: The mass and coupling are reasonably consistent with experiment, in particular, when considering the experimental errors and the systematic uncertainties in the lattice results. The mass is also close to the value obtained from the only previous lattice study of DD scattering [23] 12 , although the width and coupling are larger in the present work. Narrow 0 ++ resonance χ DsDs c0 and its possible relation to χ c0 (3930), X(3915) We find a narrow 0 ++ resonance near the D sDs threshold. It has a large coupling to D sDs and a very small coupling to DD. The latter is responsible for its small decay rate to DD and the small total width. This state corresponds to the bound state in one-channel D sDs scattering discussed in Section 8.2. We compare the resulting resonance parameters The χ c0 (3930) with J P C = 0 ++ was very recently discovered in DD decay by LHCb [3]. 
The X(3915) was observed by Belle [6] and BaBar [5,7,8] in J/ψω decay and has J P C = 0 ++ or 2 ++ , while its decay to DD was not observed [2]. They might represent the same state if X(3915) is a scalar. Both experimental states lie just below the D sDs threshold. One would naturally expect 0 ++ states with this mass to be broad, given the large phase space available to DD decay. Their narrow widths can only be explained if their decay to DD is suppressed by some mechanism. If the resonance found on the lattice is indeed related to X(3915)/χ c0 (3930), our results indicate that this state owes its existence to a large interaction in the D sDs channel near threshold, which naturally explains why its width is small and its decay to DD is suppressed. Note that a detailed quantitative comparison of lattice and experimental results in Eqs. (9.11) and (9.12) is not possible due to the unphysical masses of the quarks in the lattice study and due to the omission of decays to J/ψω and η c η, which may affect the determination of the width. The qualitative comparison, however, suggests the existence of a D sDs resonance with small coupling to DD. This could be further investigated experimentally by considering the D sDs invariant mass spectrum near threshold, where a peak (see Fig. 7) would be visible for a state just below threshold. The X(3915) was proposed to be a groundccss state within the diquark-antidiquark approach by Polosa and Lebed [9]. The identificationccss was considered also in phenomenological studies [58,59]. Conclusions We presented a lattice study of coupled-channel DD-D sDs scattering in the J P C = 0 ++ and 2 ++ quantum channels with isospin 0. Using the generalized Lüscher method and a piecewise parametrization of the energy region from slightly below 2m D to 4.13 GeV, the coupled-channel scattering matrix S along the real energy axis was determined. The resulting S was then analytically continued to search for pole singularities in the complex energy plane that can affect the scattering amplitudes/parameters along the physical axes. Our study utilized the spectrum in three different inertial frames determined on two CLS ensembles with u/d and s quarks, spatial extents ∼2.07 fm and ∼2.76 fm and a single lattice spacing ∼0.086 fm. In addition to χ c0 (1P ), the results suggest three charmonium-like states with J P C = 0 ++ below 4.13 GeV. One is a yet undiscovered DD bound state just below threshold. The second is a narrow resonance just below the D sDs threshold predominantly coupled to D sDs . This state is possibly related to the narrow resonance X(3915)/χ c0 (3930), which is also below the D sDs threshold in the experiments. The third feature is a DD resonance possibly related to the χ c0 (3860) observed by Belle, which is believed to be χ c0 (2P ). An overview of the resulting pole structure of the coupled-channel DD-D sDs scattering matrix in the complex energy plane is given in Fig. 9, and the possible implications of this singularity structure for experiments are illustrated in Figs. 7 and 8. The masses are compared to experiment in Fig. 10 and summarized in Section 9. Turning to states with J P C = 2 ++ , the mass of the ground state χ c2 (1P ) was determined directly from the lattice energy and is compared with the experimental value in Eq. (9.3). We have assumed the 2 ++ resonance to be coupled only with the DD scattering channel in the l = 2 partial wave and have parametrized this with a Breit-Wigner form. 
The resonance parameters are extracted and compared with the experimental values of the conventional χ c2 (3930) in Eqs. (9.7) and (9.8). These are then fixed for the finite-volume coupled-channel analysis discussed in Section 8.4.2. We find the estimates for positions and residues for the poles with J P C = 0 ++ to be robust with the exclusion/inclusion of the l = 2 partial wave contribution to the analysis. The resulting pole positions and the residues from either study are shown in Fig. 6. In this study we worked with several simplifying assumptions (detailed in Section 5) necessary for a first investigation of this coupled-channel system. The lattice QCD ensembles we used have heavier-than-physical light and charm quarks and a lighter-than-physical strange quark. This results in a smaller-than-physical splitting between the DD and the D sDs thresholds. In future studies, it will be important to systematically improve upon the current results by successively relaxing our assumptions, for example, by explicitly including η c η interpolating fields and adding this channel as well as J/ψω to the coupled-channel study. It is also essential to investigate additional parametrizations to test the model inde-pendence of our findings and this will require a larger set of ensembles with high statistics. With regard to the pole structure observed in this work, it would be particularly interesting to investigate how our observations, such as the shallow DD bound state and our X(3860) candidate evolve when simultaneously approaching the limit of physical quark masses and the continuum limit. funding No. P1-0035 and No. J1-8137). We are grateful to the Mainz Institute for Theoretical Physics (MITP) for its hospitality and its partial support during the course of this work. A Error treatment Central values for all quantitiesQ are obtained from the average of correlation matrices over the gauge ensemble, while the errors are based on N b = 999 bootstrap samples. The 1σ standard error formulae for a Gaussian-distributed quantity Q provides the range that captures the central 68% of the bootstrap samples, which is represented by the gray bands in various figures. 13 We present resonance masses, widths/couplings, pole positions, phase shifts and |t| with the asymmetric errorsQ Here cov is the modified correlation matrix defined as The central valueQ corresponds to the average over the gauge configurations and not to the average of the bootstrap samples, therefore it is possible thatQ is not within the range captured by the 68% of the bootstrap samples. The sum indicated by a prime runs over the bootstrap samples b in which Q b i and Q b j are not among the 16% of the values excluded on either end. cov ii in Eq. (A.1) is equal to the standard covariance 1 for Gaussian-distributed quantities. cov ij also coincides with the standard covariance for completely correlated or uncorrelated Gaussiandistributed quantities i and j. We have verified that the standard covariance and cov render almost identical errors on the energies and most of the parameters of the scattering matrix. The advantage of the modified correlation matrix is the exclusion of outliers for non-Gaussian distributions with long tails. Such distributions might occur for the bootstrap samples of scattering-matrix parametrizations due to the highly non-linear nature of the box-functions B(E cm ) in Eq. (7.1). B Eigen-energies The Figure 11 compares the original eigen-energies E lat cm and the eigen-energies E calc cm obtained via Eq. (3.2). 
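The error prescription of Appendix A can be illustrated with a few lines of code: the central value is the gauge-ensemble average, while the asymmetric errors span the central 68% of the bootstrap distribution. The synthetic distribution below is purely illustrative.

```python
# Sketch of the asymmetric-error prescription (central 68% of N_b = 999
# bootstrap samples around the ensemble-average central value).
import numpy as np

def asymmetric_errors(central, samples):
    """Return (err_minus, err_plus) such that
    [central - err_minus, central + err_plus] spans the central 68%
    of the bootstrap distribution."""
    lo, hi = np.percentile(samples, [16.0, 84.0])
    return central - lo, hi - central

# Example with a synthetic, slightly skewed bootstrap distribution:
rng = np.random.default_rng(0)
samples = rng.gamma(shape=4.0, scale=0.25, size=999)
central = samples.mean()        # stand-in for the gauge-ensemble average
em, ep = asymmetric_errors(central, samples)
print(f"Q = {central:.3f} (+{ep:.3f} / -{em:.3f})")
```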
The energies E calc are taken as inputs to the scattering matrix according to our approach towards disretization errors, as outlined in Section 3. C Fitting the parameters of the scattering matrix The parameters of the matrixK are determined from the energies presented in Fig. 2 via the quantization condition (7.1) following the determinant residual method proposed in Ref. [46]. In this method, one determines the parameters such that the zeros of the Ω(E cm ) function (which are identical to the zeros of the determinant in Eq. (7.1)) Figure 11: The lower figures present the original eigen-energies E lat cm in four irreducible representations considered. The upper figures present the eigen-energies E calc cm obtained via Eq. (3.2) and match those in Fig. 2. The E calc are inputs to our scattering analysis, while E lat are considered to be less reliable input according to the discussion in Section 3. irreps shown in Fig. 2: these are levels n = 2(3) from | P | = 0(1) on both volumes. The charmonium-like state obtained lies just below threshold, therefore the relative error on its binding energy given in Eq. (8.5) is large. We note that 6% of the bootstrap sam- ples do not render any poles on the real axes -this corresponds to the bootstraps for which p cot δ/E cm just fails to cross the orange line in Fig. 3a. An additional 6% of the bootstrap samples render a virtual bound state -this corresponds to the bootstraps for which p cot δ/E cm crosses ip/E cm = |p|/E cm rather than −|p|/E cm in Fig. 3a. Both of these scenarios happen in extreme cases and end up within the 32% of bootstrap samples that are excluded when computing the errors via (A.1). In Fig. 12, we present the pole positions along the (virtual) bound state constraints across the bootstrap samples showing the continuous distribution of the poles along the constraint curves and hence also across the Riemann sheets. The preferred fit with the scattering parameters (8.2) utilizes the 4 levels shown in violet in Fig. 3a. We also performed the fits using the 3 lowest levels, the 6 lowest levels and all 7 levels shown: the ensemble averages of the data lead to a bound state in all these fits and the binding energy is within the error given in Eq. (8.5). This analysis of DD scattering near threshold includes only the eigen-states with energies close to the threshold and omits the eigen-state related to χ c0 (1P ), which is significantly below threshold. We are unable to constrain the DD scattering below the lowest violet point in Fig. 3a. Hence, the pole at around E cm 3.80 GeV, which would arise at the crossing of the orange and red curves (and would violate the consistency check, see Section VC of [25]), is below the region in which our analysis can reasonably be applied and also outside of the energy range of interest. D.2 D sDs scattering with l = 0 near threshold in the one-channel approximation This analysis employs only those eigenstates whose overlaps are dominated by D sDs operators and that do not have significant overlap with DD operators. These are the four levels in Fig. 2 near the D sDs threshold in the A (+) 1 irreps: levels n = 3, 4 from | P | 2 = 0, 1 on the N L = 24 ensemble and levels n = 4, 7 from | P | 2 = 1, 2 on N L = 32. Here 97% of the bootstrap samples result in a bound-state pole, while 2.3% result in a virtual bound-state pole and 0.7% do not render any poles on the real axis -the latter two cases end up among the extremal 32% of bootstrap samples. 
D.3 DD̄ scattering with l = 2

DD̄ scattering in partial wave l = 2 is not the main focus of our study. It was considered in order to investigate and constrain its contribution to the A_1 irreps we have studied. Since this partial wave was initially not the goal of our study, we did not evaluate all irreps where it appears (for example E^+ and T_2^+ for P = 0); instead we implemented only the B_1 irrep with |P|^2 = 1. The extraction of the phase shift in Eqs. (8.10, 8.11) employs four lattice levels in the B_1 irrep with |P|^2 = 1: these are levels n = 3, 4 on both lattice volumes (levels n = 1, 2 correspond to the ground states with J^PC = 2^++ and 2^−+, respectively). A fit using only three lattice levels (omitting the higher level on the smaller volume) renders the resonance pole position E_p = (4.013 +0.013/−0.016) − (i/2)(0.098 +0.044/−0.057) GeV. This is compatible with our main result (8.11) and has a larger central value for the width.

D.4 Coupled DD̄, D_sD̄_s scattering with l = 0 for E_cm ≈ 3.9 − 4.13 GeV

Fig. 13 shows an example of Ω(E_cm) (C.1) for the parameters of the coupled-channel scattering matrix given in Eq. (8.14). The values of E_cm at which Ω crosses zero are indeed near the observed eigen-energies (indicated by the black circles). The number of crossings agrees with the number of observed levels in the relevant energy ranges. In Figure 14, we present the pole distribution across the bootstrap samples for various poles we extract in the complex p_Ds plane, where p_Ds is the momentum of the D_s meson in D_sD̄_s scattering in the CMF. The two islands of poles, one in sheet II and the other in sheet IV, lying close to Im(a p_Ds) = 0, are mutually exclusive and hence represent the same dynamics. The island in sheet II constitutes 70% of the samples, while a pole appears at a similar location on sheet IV in the remaining samples. Hence the results for the pole location related to χ

Figure 13: The function Ω(E_cm) (defined in Eq. (C.1)) for the resulting scattering matrix of the coupled channels DD̄ − D_sD̄_s (8.14) is given by the orange line. The observed eigen-energies are given by the circles: the black levels are employed to fit the parameters (8.14), while the blue circles are not.

Describing the scattering matrix in the complete energy region from 2m_D up to 4.13 GeV requires additional parameters. This is difficult as, with the statistics and the number of lattice QCD ensembles available to us, the fits become unstable. Instead, as a cross-check, we model the infinite-volume scattering matrix in the wider energy range using parametrizations similar to those presented in Section 8. One of the aims is to verify that the resulting scattering matrix predicts the same number of finite-volume energy levels as observed in the actual simulation. The t-matrix elements are modeled in the energy range E_cm ≈ 2m_D − 4.13 GeV as shown for K^{-1}/√s in Fig. 15. We require that they are continuous in energy and that they have continuous derivatives. In the high energy region they asymptote to the linear dependence on s (7.2, 8.13) and the parameters are fixed to the values (8.14) obtained from the coupled-channel analysis. Below we provide more details on each t-matrix element in turn.

t_11: The energy region considered is divided into three intervals, as shown by the red line in the figure. t_11 asymptotes to the coupled and single channel results (of the main text) in the high and middle energy intervals, respectively.
In order to ensure a smooth transition between the two regions, we employ a hyperbola-type shape for K^{-1}_11/√s that smoothly asymptotes to the linear dependences K^{-1}_11/√s = a_11 + b_11 s of Eq. (8.14) and K^{-1}_11/√s = a_11 + b_11 s of Eq. (8.2). The four parameters a, b are fixed to the values in the main text. The value of the smoothing parameter c_hyp is the only free parameter of K^{-1}_ij in this appendix. Its value c_hyp = 0.00021(2) is obtained from fitting the scattering matrix to all energy levels, and the resulting fit has χ²/dof = 1.8. In the region below the DD̄ threshold, we choose a shape of K^{-1}_11 which prevents the occurrence of a second bound state (this would correspond to the red and orange lines in the figure intersecting a second time). The exact form of this choice is not important as this is beyond the region of interest.

t_12: In the upper region K^{-1}_12/√s asymptotes to the constant value of the coupled-channel analysis (8.14). In the region near the DD̄ threshold, where the effects from the D_sD̄_s channel are expected to be negligible, it asymptotes to zero. The smooth transition between the two constant values is ensured by using a sigmoid function.

t_22: This element is parametrized as in Eq. (8.14) for the entire energy region, see the black line in the figure. Note that we ignore any crossing of the D_sD̄_s bound-state condition with the t_22 parametrization that occurs well below the DD̄ threshold.

We find that the number of poles with the above-designed scattering matrix is the same as that obtained from the analysis of the separate energy regions. The pole locations on the various complex Riemann sheets and their residues are also almost unchanged. Following Lüscher's finite-volume analysis, we extracted the finite-volume spectrum from this scattering matrix. In Fig. 16, we present the Ω(E_cm) function defined in Eq. (C.1). The zeros of Ω(E_cm) are the predictions for the finite-volume spectrum derived from the scattering matrix. The points indicate the energy levels observed in the actual simulation. It is clear from the figure that the predicted spectrum agrees qualitatively with the lattice energy levels within the energy range of interest and that the same number of levels is obtained.
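The text above fixes the asymptotic behaviour of the modeled t-matrix elements but not the explicit hyperbola-type and sigmoid forms, so the following Python sketch shows one plausible realization rather than the paper's actual implementation. The smooth-max hyperbola, the logistic switch, the phase-space convention ρ_ii = 2p_i/√s in t^{-1} = K^{-1} − iρ, and all parameter names are assumptions.

```python
import numpy as np

def hyperbola_match(s, a_lo, b_lo, a_hi, b_hi, c_hyp):
    """One realization of a hyperbola-type K^{-1}_11/sqrt(s): it approaches the
    line a_lo + b_lo*s well below the matching region and a_hi + b_hi*s well
    above it, with c_hyp controlling how smoothly the two branches are joined."""
    f_lo = a_lo + b_lo * s
    f_hi = a_hi + b_hi * s
    return 0.5 * (f_lo + f_hi) + 0.5 * np.sqrt((f_lo - f_hi) ** 2 + 4.0 * c_hyp)

def sigmoid_switch(s, s0, width, high_value):
    """Logistic switch for K^{-1}_12/sqrt(s): ~0 near the DDbar threshold and
    approaching the constant coupled-channel value at higher energies."""
    return high_value / (1.0 + np.exp(-(s - s0) / width))

def cm_momentum(s, m1, m2):
    """CMF momentum; below threshold it is continued to the upper imaginary axis."""
    p2 = (s - (m1 + m2) ** 2) * (s - (m1 - m2) ** 2) / (4.0 * s)
    return np.sqrt(p2 + 0j)

def t_matrix(Ecm, pars, mD, mDs):
    """Assemble the 2x2 t-matrix from the piecewise-smooth K^{-1}_ij/sqrt(s);
    pars["k11"], pars["k12"], pars["k22"] collect the (assumed) parameters of
    the three elements described in the text."""
    s = Ecm ** 2
    k11 = hyperbola_match(s, *pars["k11"])
    k12 = sigmoid_switch(s, *pars["k12"])
    k22 = pars["k22"][0] + pars["k22"][1] * s        # linear, as in Eq. (8.14)
    Kinv = np.sqrt(s) * np.array([[k11, k12], [k12, k22]], dtype=complex)
    rho = np.diag([2.0 * cm_momentum(s, mD, mD) / np.sqrt(s),
                   2.0 * cm_momentum(s, mDs, mDs) / np.sqrt(s)])
    return np.linalg.inv(Kinv - 1j * rho)
```

A cross-check in this spirit would scan E_cm from 2m_D to 4.13 GeV and verify that the zeros of the corresponding finite-volume Ω(E_cm) reproduce the observed number of levels, as done for Fig. 16.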
15,877.4
2020-11-04T00:00:00.000
[ "Physics" ]
The Traf2- and Nck-interacting kinase as a putative effector of Rap2 to regulate actin cytoskeleton. Rap2 belongs to the Ras family of small GTP-binding proteins, but its specific roles in cell signaling remain unknown. In the present study, we have affinity-purified from rat brain a Rap2-interacting protein of approximately 155 kDa, p155. By liquid chromatography tandem mass spectrometry, we have identified p155 as Traf2- and Nck-interacting kinase (TNIK). TNIK possesses an N-terminal kinase domain homologous to STE20, the Saccharomyces cerevisiae mitogen-activated protein kinase kinase kinase kinase, and a C-terminal regulatory domain termed the citron homology (CNH) domain. TNIK induces disruption of F-actin structure, thereby inhibiting cell spreading. In addition, TNIK specifically activates the c-Jun N-terminal kinase (JNK) pathway. Among our observations, TNIK interacted with Rap2 through its CNH domain but did not interact with Rap1 or Ras. TNIK interaction with Rap2 was dependent on the intact effector region and GTP-bound configuration of Rap2. When co-expressed in cultured cells, TNIK colocalized with Rap2, while a mutant TNIK lacking the CNH domain did not. Rap2 potently enhanced the inhibitory function of TNIK against cell spreading, but this was not observed for the mutant TNIK lacking the CNH domain. Rap2 did not significantly enhance TNIK-induced JNK activation, but promoted autophosphorylation and translocation of TNIK to the detergent-insoluble cytoskeletal fraction. These results suggest that TNIK is a specific effector of Rap2 to regulate actin cytoskeleton. Rap2 is a member of the Ras family of small GTP-binding proteins, which regulate a range of cellular processes including cell proliferation, differentiation, and cytoskeletal rearrangement (for a review, see Ref. 1). To regulate these processes, Ras family proteins cycle between GTP-bound active and GDPbound inactive forms. In the GTP-bound active form, Ras family proteins physically interact with downstream effectors and thereby regulate their subcellular localization and activity (1). For instance, GTP-bound Ras interacts with effectors including Raf-1, B-raf, Ral guanine nucleotide dissociation stimulator (RalGDS), 1 (1,2). During interaction with effectors, the effector regions of Ras family proteins (amino acids 32-40 in the case of Ras) serve as binding interfaces; thus, mutations within their effector regions impair interaction with effectors (Refs. 1-4, reviewed in Ref. 6). The effector regions are also critical for the differential recognition of effectors. For instance, the effector region of Rap1, a close relative of Rap2, is identical to that of Ras. Rap1 interacts with effectors of Ras, and sometimes counteracts Rasmediated signaling (1,2). For example, Rap1 regulates the extracellular signal-regulated kinase (ERK) pathway, the "classical" mitogen-activated protein kinase (MAPK) pathway, through Raf-1 and B-raf. Although Rap1 interacts with Raf-1 and B-raf, it only activates B-raf. In fibroblasts, Rap1 inhibits Ras-induced cellular transformation. Rap1 exerts this action presumably by trapping Raf-1 in an inactive complex, thereby inhibiting ERK activation (1,2,7). In PC12 cells, Rap1 mediates the sustained activation of ERK induced by nerve growth factor through the activation of B-raf (8). However, Rap1 does not have a specific effector that does not interact with Ras. On the other hand, the effector region of Ral, another Ras family protein, differs from that of Ras by three amino acids. 
Unlike Rap1, Ral has a specific effector, Ral-binding protein 1, which does not possess RBD or RAD and does not interact with Ras (1,2). Rap2 has been thought to be functionally analogous to Rap1. In fact, Rap2 interacts with effectors of Ras and sometimes counteracts Ras-mediated signaling, as does Rap1. Rap2 interacts with Raf-1 in HEK293T cells and inhibits the Ras-dependent activation of the transcription factor Elk1, a direct down-stream target of ERK (9). Rap2 also interacts with RalGDS in COS7 cells, but this does not lead to the activation of Ral, the downstream target of RalGDS (10). Further, Rap2 interacts with PI3K in the B cell lymphoma cell line A20, and inhibits the activation of Akt, the downstream target of PI3K (11). However, contrary to these observations, Rap2 fails to suppress Ras-induced cellular transformation (12), while it suppresses v-Src-induced transformation (9). Furthermore, Rap2 is activated by GDP/GTP exchange factors (GEFs) differently from Rap1 (9). For instance, RA-GEF-1 (13), also termed nRapGEP, PDZ-GEF, or CNrasGEF (14 -16), is the strongest activator of Rap2, whereas GFR/MR-GEF (17,18) is the strongest activator of Rap1. RA-GEF-2 (19) is more effective in activating Rap2 than Rap1, whereas C3G (20) and CalDAG-GEFI/RasGRF2 (21,22) are more effective in activating Rap1 than Rap2 (9). Furthermore, the GEF domain of PLC⑀ acts on Rap1 but not on Rap2 (23). These observations suggest that Rap2 is not functionally analogous to Rap1, and that Rap2 and Rap1 perform overlapping but distinct signaling functions. We hypothesized that Rap2 performs its specific signaling functions by regulating specific effectors. In support of this, the effector region of Rap2 differs from those of Ras or Rap1 by a single amino acid. Amino acid 39 in Rap2 is Phe, while in Ras and Rap1 it is Ser. This may confer on Rap2 the ability to interact with specific effectors and play signaling roles distinct from those of Ras and Rap1. In fact, Rap2 interacts with a specific effector candidate, Rap2-interacting protein 8, which does not interact with Ras or Rap1 (24). However, no signaling function for this protein is known. In the present study, we attempted to affinity-purify a specific effector(s) of Rap2 from rat brain. We searched for proteins that interact with a Rap2 affinity column but not with a Ras affinity column. We found one such protein of ϳ155 kDa. By liquid chromatography tandem mass spectrometry (LC-MS/MS), we identified this protein, designated p155, as tumor necrosis factor (TNF) receptorassociated factor 2 (Traf2)-and Nck-interacting kinase (TNIK) (25). TNIK was isolated by yeast two-hybrid screening for proteins that interact with Traf2 and Nck (25). Traf2 belongs to a family of adaptor proteins that shares a common structural domain, the Traf domain. Traf2 is implicated in the regulation of c-Jun N-terminal kinase (JNK), a "stress-activated" MAPK, and the transcription factor NF-B by TNF receptor or related receptors (for a review, see Ref. 26). Nck belongs to a family of adaptor proteins containing the Src homology (SH)2/SH3 domains. Nck is implicated in the regulation of the actin cytoskeleton by receptor or non-receptor tyrosine kinases (for a review, see Ref. 27). TNIK does not possess RBD or RAD. TNIK consists of an N-terminal kinase domain, a C-terminal regulatory domain termed the citron homology (CNH) domain, and an intervening region between these domains. 
The CNH domain was named after citron kinase (28), an effector of the Rho family small GTP-binding proteins Rho and Rac, where it was first described. The kinase domain of TNIK is structurally related to that of STE20, a Saccharomyces cerevisiae MAPK kinase kinase kinase (MAP4K), and thus TNIK belongs to the STE20 group of protein kinases, which share similar kinase domains (for a review, see Ref 29). The STE20 group includes the p21activated kinase (PAK) family, which possesses a C-terminal kinase domain (for a review, see Ref 30), and the germinal center kinase (GCK) family, which possesses an N-terminal kinase domain (for a review, see Ref. 31). Several PAK family members serve as effectors of the Rho family small GTP-binding proteins Cdc42 and Rac and regulate JNK and the actin cytoskeleton (30). TNIK belongs to the GCK family (29), and exhibits two mutually independent functions (25). Like several other GCK family members, TNIK regulates JNK. For this function, its CNH domain is necessary and sufficient. TNIK does not activate other MAPKs such as p38, or ERK. TNIK does not activate NF-B, either. Unlike other GCK family members, TNIK also regulates the actin cytoskeleton. TNIK induces actin fiber disassembly and consequently reverses pre-established cell spreading: with the expression of TNIK, well adherent and spread cells round up and finally lose attachment to culture dishes. However, these cells are viable and are not undergoing apoptosis (25). For this function, its kinase domain is necessary and sufficient. TNIK phosphorylates gelsolin, an F-actin fragmenting and capping enzyme, in vitro, although it is unknown whether phosphorylation takes place and influences gelsolin functions within cells (25). In the present study, we show that TNIK interacts with Rap2 but not with Rap1 or Ras. The interaction requires GTPbound configuration of Rap2 and the intact effector region of Rap2. TNIK co-localizes with Rap2 when co-expressed in cells. Rap2 potently enhances TNIK-induced loss of cell spreading. Furthermore, Rap2 promotes the autophosphorylation and translocation of TNIK to the detergent-insoluble cytoskeletal fraction. These observations suggest that TNIK serves as a specific effector of Rap2 to regulate actin cytoskeleton. EXPERIMENTAL PROCEDURES Affinity Purification and Mass Spectrometry-To generate affinity columns, glutathione S-transferase (GST)-fusion proteins of Rap2A and Ha-Ras were expressed in Sf9 insect cells using a baculovirus expression system, extracted, and immobilized on glutathione-Sepharose resin (Amersham Biosciences) as previously described (7,32). Rat brain (10 g) was homogenized in buffer A (20 mM Tris/HCl, pH 7.6, 100 mM NaCl, 1 mM EDTA, 1 mM dithiothreitol, and 5 mM MgCl 2 ) containing 0.1 mM phenylmethylsulfonyl fluoride and 10% sucrose. This and all subsequent steps were performed at 4°C. The homogenate was centrifuged at 100,000 ϫ g for 1 h, and the resultant supernatant was dialyzed against buffer A three times and applied to a 2.5-ml glutathione-Sepharose column. The flow-through fraction was applied to a 0.1-ml affinity column carrying 2 nmol of GST-Rap2A or GST-Ha-Ras fusion proteins preloaded with GTP␥S or GDP as previously described (7,32). After washing the columns with 2 ml of buffer A, bound proteins were eluted with buffer A containing 10 mM glutathione. The eluted proteins were resolved by SDS-polyacrylamide gel electrophoresis (PAGE) and visualized with Coomassie Brilliant Blue staining. 
A gel piece containing p155 was excised, and the proteins were in-gel digested with trypsin. The resultant peptides were analyzed using an LC-MS/MS system (LCQ DecaXP, ThermoQuest Inc., San Jose, CA), and the data were used to search against the NCBI database with MASCOT software (Matrix Science, Ltd., London, UK) for protein identification.

In Vitro Binding Assay with Recombinant TNIK-The cDNA clone KIAA0551 (GenBank accession number AB011123) containing the full-length coding sequence for the largest isoform of human TNIK (1360 amino acids) was kindly provided by Dr. Takahiro Nagase (Kazusa DNA Research Institute, Chiba, Japan). TNIK and its C-terminal deletion mutant TNIKΔCNH (amino acids 1-1041) lacking the CNH domain were expressed in HEK293T cells with the HA epitope tag. To this end, the HA coding sequence was inserted into the mammalian expression vector pCIneo (Promega) to yield pCIneo-HA. From the clone KIAA0551, the TNIK and TNIKΔCNH coding sequences were amplified by PCR and inserted into pCIneo-HA to yield pCIneo-HA-TNIK and -TNIKΔCNH, respectively. 293T cells were then transfected with pCIneo-HA-TNIK or pCIneo-HA-TNIKΔCNH using Polyfect reagent (Qiagen) and harvested at 24-h post-transfection. The cells were homogenized in buffer A containing 1% Nonidet P-40 and protease inhibitors (Roche Applied Science), incubated for 30 min at 4°C, and then centrifuged at 100,000 × g for 30 min at 4°C. The supernatant (Nonidet P-40-soluble fraction) was next incubated with glutathione-Sepharose resin carrying immobilized GST-Rap2A, GST-Rap1A, or GST-Ha-Ras preloaded with GTPγS or GDP as previously described (7,32). After incubation at 4°C for 1 h, the resin was washed four times with the same buffer, and bound proteins were eluted with 10 mM glutathione and subjected to SDS-PAGE followed by Western immunoblot detection with monoclonal anti-HA antibody (12CA5; Roche Applied Science) or Coomassie Brilliant Blue staining as previously described (32).

Analyses of Autophosphorylation and Cytoskeletal Translocation-pCIneo-HA-TNIK(K54R), used for the expression of a kinase-deficient mutant of TNIK (25), was constructed using an oligonucleotide-directed mutagenesis technique. 293T cells were transfected with pCIneo-HA-TNIK or pCIneo-HA-TNIK(K54R) alone or in combination with pCIneo-Myc-Rap2A. At 18-h post-transfection, the cells were harvested and a Nonidet P-40-soluble fraction was prepared by centrifugation as mentioned above. The Nonidet P-40-insoluble pellet was then solubilized in radioimmune precipitation assay (RIPA) buffer (buffer A containing 1% Nonidet P-40, 0.1% SDS, 0.1% sodium deoxycholate, and protease inhibitors) for 30 min and again centrifuged at 100,000 × g for 30 min. The resultant supernatant was used as the Nonidet P-40-insoluble/RIPA buffer-soluble fraction (cytoskeletal fraction). From these fractions, HA-TNIK or HA-TNIK(K54R) was immunoprecipitated as previously described (35). For alkaline phosphatase treatment, the beads carrying immunoprecipitated HA-TNIK or HA-TNIK(K54R) were washed twice with buffer B (50 mM Tris/HCl, pH 8.0, 100 mM NaCl, 10 mM MgCl2, and 1 mM dithiothreitol) and then incubated with 20 units of calf intestine alkaline phosphatase (CIAP) in the same buffer for 30 min at 37°C. The beads were washed twice with buffer B, and proteins were eluted from the beads by boiling in SDS sample buffer and subjected to Western immunoblot analysis with monoclonal anti-HA antibody.

FIG. 1, legend fragment: Of these, 3 peptides, corresponding to amino acids 766-778, 816-825, and 981-1008, each contained a single amino acid substitution compared with the human sequences, A777S, M823L, and E994D, respectively, which were specific for mouse TNIK (these positions are indicated by asterisks above the sequence). Other peptides were identical to their respective human sequences. The kinase domain and the CNH domain as determined by the InterPro Server (www.ebi.ac.uk/interpro/) are boxed with a dotted line.

Cell Rounding Assay-pEGFP-C1 (Clontech) expressing the enhanced green fluorescent protein (EGFP) was transfected with various pCIneo constructs into 293T cells cultured in 35-mm dishes. At 18-h post-transfection, live cells were observed using an inverted fluorescent microscope (Axiovert 135, Zeiss) with a ×10 objective under a GFP filter, and images were collected using a CCD camera (DP70, Olympus). EGFP-positive cells were examined for their round or spread morphology, and the percentage of cells that maintained a spread morphology was determined. The cells were then harvested to assess the expression of various HA- or Myc-tagged proteins by Western immunoblot analyses.

RESULTS

p155/TNIK Is a Novel Rap2-interacting Protein-To search for proteins that specifically interact with Rap2, rat brain extract was applied to GST-Rap2A and -Ha-Ras affinity columns. After extensive washing, proteins bound to the affinity columns were eluted together with GST-Rap2A by glutathione and subjected to SDS-PAGE. As shown in Fig. 1A, a protein with a molecular mass of ~155 kDa was detected in the eluate from the affinity column carrying GTPγS-bound GST-Rap2A and was thus designated p155 (Fig. 1A, lane 2). A smaller amount of p155 was also eluted from the column carrying GDP-bound GST-Rap2A (lane 3). However, it was not eluted from the column carrying nucleotide-free GST-Rap2A (lane 1). Further, p155 was not eluted from columns carrying GTPγS- or GDP-bound GST-Ha-Ras, either (lanes 4 and 5). To clarify the molecular identity of p155, peptides resulting from in-gel trypsin digestion of the 155 kDa gel slice were analyzed by LC-MS/MS. As shown in Fig. 1B, twelve peptides matched human TNIK, mouse TNIK, or both (human and mouse TNIK share 99.0% amino acid identity). TNIK possesses an N-terminal kinase domain, a C-terminal CNH domain, and an intervening region where the Traf2- and Nck-binding sites are located. TNIK consists of multiple isoforms resulting from alternative splicing within the intervening region. Human TNIK consists of 8 isoforms (TNIK1 to TNIK8) with different combinations of 3 alternatively spliced modules of 29, 55, and 8 amino acids (25), for which one peptide matched a part of the 55 amino acid module. TNIK1 is the largest isoform containing all of these modules, with a predicted molecular mass of 155,361 Da, which was close to the apparent molecular mass of p155 estimated by SDS-PAGE. We therefore concluded that p155 corresponds to rat TNIK, most likely the counterpart of TNIK1. Accordingly, TNIK1 was used in the following experiments.

The CNH Domain of TNIK Mediates Specific Interaction with Rap2-We next confirmed that recombinant TNIK interacts with Rap2 in a manner similar to that of p155. For this purpose, HA-tagged TNIK (HA-TNIK) was expressed in 293T cells and was examined for interaction with immobilized GST-Rap2A (Fig. 2A).
Similar to p155, HA-TNIK preferentially interacted with the GTP␥S-bound form of GST-Rap2A (lane 2) compared with the GDP-bound form of GST-Rap2A (lane 3), but did not interact with the nucleotide-free form of GST-Rap2A (lane 4). In another experiment (Fig. 2B), HA-TNIK did not interact with GTP␥S-bound GST-Rap1A or GST-Ha-Ras ( lanes 5 and 6), providing further support for the interaction specificity between TNIK and Rap2. Unlike HA-TNIK, HA-TNIK⌬CNH, a C-terminal deletion mutant that lacks the CNH domain, failed to interact with GST- Rap2A (lane 4), suggesting that the CNH domain is required for this interaction. To test whether the CNH domain mediates the specific in-teraction of TNIK with Rap2, we examined the interaction of the CNH domain with Rap2, its mutants, Rap1, and Ras using a yeast two-hybrid assay ( Table I). The CNH domain strongly interacted with Rap2 and its activated mutant Rap2(G12V), while it weakly interacted with the dominant negative mutant Rap2(S17N), and did not interact with Rap2(F39S), in which the effector region of Rap2 was mutated to resemble those of Ras and Rap1. As expected, the CNH domain did not interact with Rap1 or Ras, either. On the other hand, Raf-1, which possess an RBD, failed to interact only with the dominant negative mutant, Rap2(S17N). To determine whether the CNH domain mediates interaction with Rap2 in cells, the subcellular localizations of Rap2 and TNIK expressed in NIH3T3 cells were examined by immunofluorescence microscopy (Fig. 3). When expressed alone, Myc- tagged Rap2 (Myc-Rap2A) showed a vesicular staining pattern that was particularly intense in the perinuclear region (panel a). Unlike Myc-Rap2A, HA-TNIK was evenly distributed throughout the cytoplasm when expressed alone (panel b). Strikingly, when co-expressed with Myc-Rap2A (panels c-e), HA-TNIK was almost completely co-localized with Myc-Rap2A. It was also noted that half of the cells co-expressing Myc-Rap2A and HA-TNIK lost their flat, spread out morphology, and rounded up (panels f-h). Myc-Rap2A and HA-TNIK were colocalized in these rounded cells as well. In contrast to fulllength HA-TNIK, HA-TNIK⌬CNH failed to co-localize with Myc-Rap2A (panels i-k): Myc-Rap2A again showed a perinuclear staining pattern (panel i), while HA-TNIK⌬CNH showed a cytoplasmic staining pattern (panel g). This staining pattern for HA-TNIK⌬CNH in cells co-expressing Myc-Rap2A was the same as that in cells expressing HA-TNIK⌬CNH alone (data not shown). The absence of co-localization was also observed when HA-TNIK was co-expressed with Myc-Ha-Ras (panels l-n). Again, HA-TNIK showed a cytoplasmic staining pattern (panel m), while Myc-Ha-Ras exhibited a perinuclear and plasma membrane-associated staining pattern (panel l). This staining pattern for Myc-Ha-Ras in cells co-expressing HA-TNIK was the same as that in cells expressing Myc-Ha-Ras alone (data not shown). Rap2 Does Not Enhance TNIK-induced JNK Activation-Next, the effect of interaction with Rap2 on TNIK function was tested. TNIK activates JNK when expressed in Phoenix-A cells, derivatives of 293 cells (25). We therefore tested whether Rap2 enhances TNIK-induced JNK activation in 293T cells (Fig. 4A). 
TABLE I. Specific interaction of TNIK with Rap2 in a yeast two-hybrid assay. The C-terminal portion of TNIK encoded by pACT2-CNH and the N-terminal portion of Raf-1 encoded by pGAD-Raf-1 were examined for their interaction with wild-type Rap2A, its activated (G12V), dominant negative (S17N), and effector region (F39S) mutants, wild-type Rap1A, and wild-type Ha-Ras encoded by respective pBTM116 constructs in the L40 strain. a, Co-transformants were examined for HIS3 expression: ++, strong expression; ±, weak expression; −, no expression. b, Co-transformants were also examined for β-galactosidase (β-gal) expression by the o-nitrophenyl-β-D-galactopyranoside assay.

In the experiment of Fig. 4A (in which FLAG-JNK2 was co-expressed), HA-TNIK in cells without co-expressed Myc-Rap2A migrated as forms of higher and lower mobility, while that in cells co-expressing Myc-Rap2A (lane 4) consisted of a single lower mobility form. We speculated that in the latter cells, the higher mobility form was converted to the lower mobility form by Myc-Rap2A.

FIG. 3. Co-localization of TNIK with Rap2 in cells is mediated by the CNH domain.

To test whether this Rap2-induced conversion occurred in the absence of co-expressed FLAG-JNK2, cells that expressed HA-TNIK alone were compared with cells that co-expressed HA-TNIK and Myc-Rap2A (Fig. 4B). Again, HA-TNIK in the former cells (lane 1) migrated as a broad band consisting of lower and higher mobility forms, while HA-TNIK in the latter cells (lane 2) migrated as a narrow band consisting of the lower mobility form. To determine whether this Rap2-induced conversion was related to the kinase activity of TNIK itself, we expressed the kinase-deficient mutant HA-TNIK(K54R), which fails to undergo autophosphorylation in vitro (25) (lanes 3 and 4). Notably, HA-TNIK(K54R) migrated as a narrow band (lane 3) with a mobility similar to that of the higher mobility form of wild-type HA-TNIK. Co-expressed Myc-Rap2A reduced the mobility of HA-TNIK(K54R) only slightly (lane 4), suggesting that the kinase activity of TNIK is important for the Rap2-induced conversion. In addition, HA-TNIKΔCNH migrated as a narrow band, and Myc-Rap2A did not affect the mobility of HA-TNIKΔCNH detectably, thus suggesting that the effect of Rap2 on the mobility of TNIK requires the CNH domain (lanes 5 and 6). Contrary to the above observations, wild-type HA-TNIK migrated as a narrow band in our in vitro binding assay (Fig. 2A). In the in vitro binding assay, we used the Nonidet P-40-soluble fraction, while in the above experiment (Fig. 4, A and B) we used total cell extract. This difference led us to examine the Nonidet P-40 solubility of the lower and higher mobility forms of HA-TNIK (Fig. 4C). Collectively, these data suggest that the lower mobility form of HA-TNIK represents an autophosphorylated form of HA-TNIK, which is mainly associated with the cytoskeletal fraction, and that interaction of HA-TNIK with Myc-Rap2 promotes the autophosphorylation and translocation of HA-TNIK to the cytoskeletal fraction.

Rap2 Enhances TNIK-induced Cell Rounding-We then tested whether Rap2 enhanced TNIK-induced cytoskeletal rearrangement. TNIK induces actin fiber disassembly and consequently disrupts pre-established cell spreading in Phoenix-A, HeLa and NIH3T3 cells (25). As shown in Fig. 5, using EGFP as a co-marker, we compared the effects of HA-TNIK expression and HA-TNIKΔCNH expression on cell morphology in the absence or presence of co-expressed Myc-Rap2A. We also used the kinase-deficient mutant HA-TNIK(K54R), since the F-actin disrupting function of TNIK is dependent on its kinase activity (25). We also used Myc-Ha-Ras as another negative control.
293T cells that expressed HA-TNIK, HA-TNIK⌬CNH, or HA-TNIK(K54R) (Fig. 5A, panels b-d; Fig. 5B, columns 2-4) were well spread and looked similar to control cells co-transfected with empty vectors and pEGFP-C1 (a spread morphology was noted for more than 90% of the EGFP-positive cells) (panel a, column 1). Similarly, cells that expressed Myc-Rap2A alone did not look significantly different from the control cells (panel e, column 5). However, the majority of cells that expressed HA-TNIK in the presence of co-expressed Myc-Rap2A exhibited a distinctly rounded morphology (a spread morphology was observed for less than 40% of the EGFP-positive cells) (panel f, column 6). In contrast, cells that expressed HA-TNIK⌬CNH, which is incapable of interacting with Rap2, maintained a spread morphology even in the presence of co-expressed Myc-Rap2A (panel g, column 7). As expected, cells that expressed HA-TNIK(K54R) did not round up in the presence of co-expressed Myc-Rap2A (panel h, column 8). In addition, cells that expressed Myc-Ha-Ras (panel I, column 9) as well as cells that expressed HA-TNIK plus Myc-Ha-Ras (panel j, column 10) looked similar to the control cells. Nearly equal amounts of HA-TNIK were present in cells that expressed HA-TNIK, HA-TNIK, and Myc-Rap2A, or HA-TNIK and Myc-Ha-Ras as examined by Western immunoblotting (data not shown). Similar observations were made during the immunofluorescence microscopy analysis of NIH3T3 cells (Fig. 3A, panels f-h, and data not shown). 2 DISCUSSION There is limited information on the role of Rap2 in the regulation of the actin cytoskeleton. It inhibits spontaneous cell migration in mouse embryonic fibroblasts (MEFs) deficient in C3G (37), but promotes spontaneous and chemokine-induced cell migration in the B cell line 2PK3 (38). Cell migration involves complex regulation of the actin cytoskeleton (39). However, the effectors that mediate these Rap2 actions remain unidentified. We believe that TNIK serves as an effector of Rap2 to regulate actin cytoskeleton. TNIK reverses pre-established cell spreading by inducing the disassembly of F-actin through its kinase domain (25), and our data demonstrated that Rap2 potently enhances this TNIK function. How the kinase activity of TNIK is involved in actin fiber disassembly is unknown. One possibility is through autophosphorylation. TNIK undergoes autophosphorylation in vitro (25), and our results indicated that autophosphorylation also takes place in cells and is promoted by Rap2. The interaction of Rap2 with TNIK may promote autophosphorylation by recruiting and accumulating TNIK to a specific membrane domain. This accumulation may in turn allow juxtaposed TNIK molecules to transphosphorylate each other. This speculation is consistent with the observation that kinase-deficient TNIK(K54R) undergoes autophosphorylation to a small extent in cells that coexpress Rap2, where endogenous wild-type TNIK could phosphorylate TNIK(K54R). Autophosphorylation might then trigger a conformational change in TNIK necessary for interaction with or phosphorylation of the downstream molecule that mediates F-actin disassembly. Autophosphorylation might also permit the translocation of TNIK to cellular compartments where the downstream molecule resides. Consistent with this, autophosphorylated TNIK was found in the cytoskeletal fraction. TNIK is the first identified Rap2 effector that mediates regulation of the actin cytoskeleton by Rap2. 
It is also the first identified Rap2-interacting protein isolated by the affinity purification-mass spectrometry approach. In a previous report, we carried out yeast two-hybrid screening for Rap2-interacting proteins and isolated another GCK family kinase, the isoform 3 of human MAP4K4 (32). MAP4K4 consists of multiple isoforms resulting from alternative splicing (40), and other isoforms of MAP4K4 in humans and mice are known as hematopoietic progenitor kinase (HPK)/GCK-like kinase (HGK) (41) and Nck-interacting kinase (NIK) (42), respectively. Similar to TNIK, MAP4K4 possesses an N-terminal kinase domain, a C-terminal CNH domain, and an intervening region, and interacts with Rap2 through its CNH domain. MAP4K4 shares 90% amino acid identity with TNIK in its kinase and CNH domains. However, this homology drops to 50% in the intervening region, suggesting potentially different signaling roles for MAP4K4 and TNIK.

2 Unlike the report by Fu et al. (25), we did not observe significant cell rounding by HA-TNIK expression in the absence of co-expressed Myc-Rap2A in 293T or NIH3T3 cells. This discrepancy could be caused by differences in experimental conditions.

FIG. 5. Rap2 enhances TNIK-induced cell rounding. A, 293T cells (2 × 10^5 cells/35-mm dish) were triply cotransfected with pEGFP-C1 (0.14 μg), various pCIneo-HA constructs (0.23 μg), and various pCIneo-Myc constructs (0.23 μg) for co-expression of EGFP and the indicated proteins. At 18-h post-transfection, EGFP-positive cells were examined for their round or spread morphology under the fluorescent microscope. B, the percentage of EGFP-positive cells that maintained a spread morphology was determined. At least 100 EGFP-positive cells were examined in three experiments (more than 300 cells in total). Combinations of co-transfected plasmids for columns 1-10 correspond to those for panels a-j in A.

Expression of MAP4K4 weakly activated JNK in cultured cells and co-expression of Rap2 markedly enhanced this activation (32). Thus, we proposed that MAP4K4 serves as an effector of Rap2 to regulate the JNK pathway. On the other hand, in the present study, expression of TNIK alone activated JNK substantially, and co-expression of Rap2 did not markedly enhance this activation. Thus, TNIK does not appear to serve as an effector of Rap2 to regulate the JNK pathway. Rap2 may regulate the actin cytoskeleton and JNK separately through TNIK and MAP4K4, respectively. The differential effect of Rap2 on TNIK- and MAP4K4-induced JNK activation could be related to different requirements for kinase activity for JNK activation between TNIK and MAP4K4. Unlike several other GCK family members, TNIK does not require its kinase activity for JNK activation. During this process, kinase-deficient TNIK(K54R) and the CNH domain of TNIK are as effective as wild-type full-length TNIK, and the kinase domain of TNIK is ineffective (25). In contrast, MAP4K4 requires its kinase activity for JNK activation. For instance, neither the kinase-deficient mutant nor the CNH domain of mouse NIK can fully activate JNK (42). Similarly, a kinase-deficient mutant of human HGK cannot activate JNK (41). The question as to why TNIK does not require kinase activity for JNK activation and whether this is related to the differential effect of Rap2 on TNIK- and MAP4K4-induced JNK activation awaits further study. The role of kinase activity in JNK activation by GCK family members is not fully understood.
GCK family members are thought to activate downstream MAPK kinase kinases (MAP3Ks) largely by binding per se through their noncatalytic regions, rather than by phosphorylating MAP3Ks through their kinase domains (29). Kinase activity might be used to induce conformational changes to make their noncatalytic regions more accessible to MAP3Ks through autophosphorylation (29). Autophosphorylation might not be important for TNIK to bind to its downstream MAP3K, while it may be important for MAP4K4. Rap2 likely promotes the autophosphorylation of MAP4K4, since we observed a lowered-mobility form of HA-MAP4K4 in cells that co-expressed Myc-Rap2 in an experiment similar to that shown in Fig. 4A (32). It could be that Rap2 promotes the autophosphorylation of TNIK and MAP4K4 but this promotion enhances JNK activation only in the case of MAP4K4. The CNH domain is also present in GCK family kinases other than TNIK and MAP4K4. The GCK family consists of 8 subfamilies (29), and members of the GCK-I and -IV subfamilies possess CNH domains in their C termini. TNIK and MAP4K4 belong to the GCK-IV subfamily, while GCK and HPK belong to the GCK-I subfamily. The CNH domain of GCK-I subfamily members are only distantly related to those of GCK-IV subfamily members (less than 20% amino acid identity). On the other hand, CNH domains of GCK-IV subfamily kinases, except for one member, are conspicuously homologous to each other. The GCK-IV subfamily includes a single member in nematode Caenorhabditis elegans, MIG-15 (43); a single member in Drosophila melanogaster, Misshapen (43); and four members in mammals, TNIK, MAP4K4, Misshapen/NIKs-related kinase (MINK) (44), and NIK-related kinase (NRK)/NIKlike embryo-specific kinase (NESK) (45,46). The CNH domains of TNIK, MAP4K4, and MINK are highly homologous (about 90% amino acid identity). These CNH domains are also highly homologous to those of MIG-15 and Misshapen (about 70% identity). On the other hand, the CNH domain of NRK/NESK shares less homology with those of all other GCK-IV subfamily members (about 30 -40% identity). NRK/NESK is also divergent from other GCK-IV subfamily members with respect to its kinase domain (29). CNH domains of GCK-IV subfamily kinases, except that of NRK/NESK, may define an evolutionally conserved subclass of CNH domain that mediates specific interaction with Rap2. The CNH domains of TNIK and MAP4K4 exhibit the same properties during interaction with Rap2 (32). Moreover, MINK likely interacts with Rap2 in a GTP-dependent manner but not with Ras. In the affinity purification-mass spectrometry experiment, several peptides matching mouse or human MINK was contained in a minor band that co-eluted with p155. 3 Furthermore, our two-hybrid screening for proteins that interact with C. elegans Rap2 (C25D7.7 protein) isolated a clone that contained the CNH domain of MIG-15. 4 MIG-15 did not interact with C. elegans Rap1 (C27B7.8 protein) or Ras (LET-60). The C. elegans Rap2 possesses Phe-39, and C. elegans Rap1 and Ras instead possess Ser-39 within their effector regions. In addition, the CNH domain of MIG-15 interacted with human Rap2A but not with Rap1A or Ha-Ras in a two-hybrid assay. 5 The present study shows that Rap2 serves as a direct upstream regulator of TNIK. A well-accepted model for the activation of GCK-I and -IV subfamily kinases involves the recruitment of these kinases to a specific membrane region (29,31). 
In this model, stimulation of receptor tyrosine kinases (through SH2/SH3 adaptor proteins) or cytokine receptors (through the Traf family of adaptor proteins) results in the recruitment of GCK-I and -IV subfamily kinases to membrane-associated receptor complexes, thereby initiating the activation. We hypothesize that Rap2 also recruits members of the GCK-IV subfamily, except for NRK/NESK, to a specific membrane region. Rap2 can be activated by a variety of extracellular stimuli that regulate protein tyrosine phosphorylation or second messengers such as cAMP, Ca2+, and diacylglycerol (20-22, 47, 48). The potential roles of these diverse stimuli in the activation of GCK-IV subfamily kinases through Rap2 deserve further study.
7,397.6
2004-11-19T00:00:00.000
[ "Biology" ]
Strategies for Teaching Linguistic Preparedness for Physicians: Medical Spanish and Global Linguistic Competence in Undergraduate Medical Education Abstract In accordance with Liaison Committee on Medical Education (LCME) curriculum content standards, medical schools are expected to teach physician communication skills and cultural competence. Given the sustained U.S. Spanish-speaking population growth, importance of language in diagnosis, and benefits of patient–physician language concordance, addressing LCME standards equitably should involve linguistic preparedness education. The authors present strategies for implementation of linguistic preparedness education in medical schools by discussing (1) examples of institutional approaches to dedicated medical Spanish courses that meet best practice guidelines and (2) a partnership model with medical interpreters to implement integrated global linguistic competencies in undergraduate medical curricula. Introduction The Liaison Committee on Medical Education (LCME) sets curriculum content standards for undergraduate medical education to ensure standardization of educational content for United States (U.S.) medical school graduates. In accordance with standards pertaining to communication skills and cultural competence of graduating medical students, 1 medical schools are expected to provide education on communication skills pertinent to providing quality care for the U.S. patient population. Given the heterogeneous and dynamic linguistic profile of U.S. patients, communication skills education in medical schools should address these demographic realities to ensure health equity for all. Further, the Association of American Medical Colleges (AAMC) includes within its diversity and inclusion mission the objective to grow a diverse and culturally prepared health workforce by improving integration of public health concepts into medical education, supporting inclusion of social factors in health within medical education programs. 2 Language alone, through medical history taking, has long been demonstrated to be sufficient in making a diagnosis in a majority (75%) of cases, 3 and linguistic differences have been shown to lead to exacerbation of health disparities for the underserved Hispanic/Latino population. [4][5][6][7] Prior national regulations and efforts have primarily focused on interpreter services only 8,9 rather than on simultaneously enhancing languageconcordant, direct patient-physician communication through medical education and assessment. 10 Moreover, U.S. population trends demonstrate continued growth of Spanish use nationwide, 11 yet patient-physician communication in languages besides English and the non-English linguistic proficiency of medical students are not routinely included in medical education curricula or assessment processes. We propose that meeting LCME curricular content standards for communication and cultural competence in medical education within a public health context should include two primary elements of linguistic preparedness for physicians: global linguistic competence education and dedicated medical Spanish courses, outlined in Table 1. We define global linguistic competence for physicians as the skills needed to communicate with patients of any linguistic preferences or needs. 
Dedicated medical Spanish courses in medical school are defined as courses that teach and assess learner ability to competently use Spanish in the practice of medicine for direct communication with patients and to self-assess limitations. Multiple examples exist of curricula for medical Spanish education, and a majority of medical schools report medical Spanish courses according to the most recent survey; 12 although methods of implementation are variable, most reported courses lack assessment of student skills, posing patient safety risks, 13 and a standardized curriculum tested at multiple centers has never been attempted. We will illustrate several examples of institutional approaches to dedicated medical Spanish courses that can form the basis of a standardized curricular model and propose a partnership model with professional medical interpreters to implement the concept of integrated global linguistic competencies in undergraduate medical education.

Dedicated Medical Spanish Courses

In the absence of an existing standardized curricular structure or list of learner competencies or objectives, individual schools and/or instructors typically design their own medical Spanish programs. This presents a challenge for schools without existing programs, since creating a course from scratch is a significant academic endeavor, and the lack of standardization results in a lack of uniformity across institutions. We provide examples of curricular structure at three academic institutions that were designed according to the best practice guiding principles established by Reuland et al. 14 and represent variations in execution that can be replicated at other centers. A comparison of the programs is illustrated in Table 2.

Curricular Models for Dedicated Medical Spanish

At the University of Texas Medical Branch (UTMB), a multidisciplinary hybrid course was designed to provide the student with basic structures of the Spanish language and the specialized medical vocabulary needed to communicate effectively with Spanish-speaking patients in a variety of clinical scenarios. Students actively self-manage their learning; the course is online, so modules can be completed at flexible times at the students' convenience. The course is divided into four main sections: (1) a learning resource center, which provides a variety of learning experiences to help the student understand and apply course concepts, such as Spanish basics, grammar, anatomy, medical terminology, and culture. (2) Clinical modules, which contain the essentials of the Spanish language, culture, and the vocabulary needed to communicate effectively in a variety of health care settings. (3) The third section provides the student easy access to basic language tools existing online. (4) The fourth component of the course is interactive, where students have to produce four taped encounters and, finally, a complete medical history. Student assessment during the course includes self-reflection, peer and faculty feedback on videos, language assessment by faculty, and scoring on 15 inline components from the learning resource center and from the clinical modules. The Clinical Conversational Spanish for Healthcare Professionals UTMB course has been in existence for 5 years, a total of 129 students have enrolled, and 100% have achieved successful course completion. Students report that course content is useful and relevant to clinical practice (72%), and 52% believe that clinical Spanish should be a mandatory course.
15 Due to student popularity, the course is now accessible outside of UTMB through the Visiting Student Application Service (VSAS) of the AAMC. 16 At the University of Illinois College of Medicine (UICOM), students hold peer-led faculty-supervised workshops on medical Spanish vocabulary and conversational skills during lunch as a preparatory step in first and second years of medical school. This initial exposure helps some students self-identify a need to improve their basic Spanish skills before qualifying for the thirdand fourth-year Clinical Medical Spanish elective, for which an intermediate Spanish language requirement is implemented. The formal elective is designed to provide 2 weeks (80 h) of official credit spread out over a 10-week longitudinal elective, in which students have a 2-h evening lecture weekly, but have multiple self-study multimedia and grammar/vocabulary exercise components, 17 patient interview requirements, mid-and end-course objective structured clinical examinations with trained standardized patients (SPs), and cultural research spread out over the course. 18 The course is organized by organ-system (musculoskeletal, pulmonary, cardiovascular, gastrointestinal, endocrine, genitourinary, ophthalmologic and ear, nose, and throat, neurologic, psychiatric, and pediatric) with each week focusing on one subject and addressing class role-play, patient interviews, case write-ups, and development of culturally appropriate patient education materials. In the 10-week course, students are expected to overlap the medical Spanish elective with other clinical clerkships. In 4 years since course initiation, 158 students have enrolled and completed the Clinical Medical Spanish elective. Data analyzed for the first 2 years of implementation (51 respondents of 58 students, 88% response rate) showed that the self-rated comfort level with interviewing and examining Spanish-speaking patients significantly improved after the course, and the improvement in comfort level was sustained 1 year into residency training based on a follow-up survey completed by 64% of students. Eighty-nine percent of follow-up respondents reported that the elective was useful for their intern year, and 97.3% reported that they would recommend the course to other fourth-year medical students. 18 To accommodate student need, a new 2-week intensive course will be piloted in the near future at UICOM, in which the curriculum will be condensed into a shorter time period, but students will dedicate to medical Spanish material full-time for the 2 weeks of the course. At the Washington University School of Medicine in St. Louis, the medical Spanish curriculum is designed for advanced Spanish speakers, defined as native speakers or those with strong conversational skills. The goals of the program are for students to develop proficiency in conducting a clinical encounter in Spanish, emphasize the importance of both language and culture in patientphysician interactions, and obtain certification as a bilingual provider. The program is designed as a comprehensive and longitudinal experience to span all 4 years of medical school, with the classroom component primarily in year 1, certification in the summer after year 2, and clinical opportunities to serve Spanish-speaking patients in years 3 and 4. In the first semester of year 1, a 10-week selective is offered for clinical credit with weekly, 1-h, peer-led faculty-supervised classes to learn medical Spanish vocabulary and practice speaking during role-play. 
Two SP sessions in Spanish are included in this selective. During the second semester of year 1, a 10-week selective is offered as humanities credit with weekly 1-h classes to discuss cultural aspects of medical interaction, ethics, and professionalism. In year 2, students continue to practice their medical Spanish by required participation in a few of the following roles: teaching assistant for year 1 sessions, organizer for student group conversation sessions, on-call interpreter for a Spanish-speaking clinic, additional SP session participation, or grand rounds Spanish case presentations. At the end of year 2, students take the Clinical Cultural and Linguistic Assessment examination as part of their certification process. Years 3 and 4 have clerkships with incorporated bilingual components for eligible students. Global Linguistic Competence Education The elements of global linguistic competence, which should be included as part of a longitudinal communication skills curriculum for all students regardless of Spanish proficiency or desire to enroll in a formal Spanish elective, should be as follows: understanding of linguistic diversity and the impact of language on population health; self-awareness of language proficiency and limitations in medical settings; competency in the use of a professional medical interpreter; and cultural issues in health that may confound successful medical communication. Critical communication elements related to language include cultural health beliefs, variations in health literacy, and patient distrust or other barriers to full communication with physicians, as well as the possibility of errors in communication due to limited physician Spanish skills or misunderstandings. Medical school curricula pertaining to clinical communication skills may consider integration of global linguistic competence elements throughout existing programming without the need to add a separate course. If these components are only included in elective, dedicated medical Spanish courses, there would be a significant missed opportunity to expose students who do not qualify or desire to enroll in medical Spanish to these critical communication skills. For example, segments that can be considered throughout the curriculum include demonstration of correct and incorrect usage of professional interpreters, inclusion of readings or faculty lectures pertaining to linguistic minority patient populations relevant to a particular module (e.g., cardiovascular and pulmonary), discussion of cultural health beliefs or practices that may defy literal interpretation, and incorporation of SP cases that reflect underserved patients' social circumstances, including linguistic realities, and allow students to practice clinical skills and troubleshoot challenges. It may be helpful to identify existing sources of expertise such as certified medical interpreters who could assist with specific components of linguistic competence education using an interdisciplinary approach. Similar partnerships could be sought with public health institutions, university foreign languages departments, or community organizations with educational missions such as promotores de salud. 19 Certified Medical Interpreters as a Partnership Model Certified medical interpreters already serve as communication conduits in health care settings, assisting with verbal communication between providers and patients where language discordance is identified. Interpreters certified by accredited certification processes in the U.S. 
20,21 are skilled in medical vocabulary in source and target languages and adhere to a strict code of ethics founded on precision, boundaries, objectivity, confidentiality, and respect. 22 In health care, the dynamic interdisciplinary partnership between interpreters and providers can also be extended to an educational partnership. The certified medical interpreter's role is to maintain transparency, set boundaries, and recognize and intervene when a miscommunication may be taking place in a patient-provider encounter. Likewise, inter-preters can help providers assess their needs and skills through observational progress assessments for medical Spanish learners as well as constructive feedback for medical students using interpreter services to improve the students' skills in effectively using language assistance tools, including onsite interpreters and remote interpreter devices. Although such technological advances are helpful in increasing access to professional interpreting, they require training to minimize perceived barriers to utilization and maximize effective communication. 23 During dedicated medical Spanish courses, interpreters can potentially contribute during in-class role-play encounters, provide feedback on recorded or real-time simulated or SP interviews, or observe live clinical encounters in which the interpreter can serve as a safety net by intervening if miscommunication occurs. Medical interpreters can help students and providers determine their need for language assistance services in varying situations. For instance, providers with basic proficiency may be safely encouraged to engage in greetings and casual conversations as trust-and rapport-building opportunities; however, conversations about treatment plans, procedures, and informed consent require a much higher degree of medical Spanish skill and therefore a much lower threshold for utilization of a trained medical interpreter. 24 In the case of intermediate or advanced proficiency, determining if the encounters will be general in scope or specialty-driven is also a fundamental consideration since the latter vocabulary may be out of the scope of even an advanced or native Spanish speaker. In addition, interpreters can participate in longitudinal, integrated global linguistic competence training in partnership with medical school faculty by acting as the interpreter during select SP encounters, providing sample clinical scenarios of culturally or linguistically complex cases, identifying materials that can be used for problem-based learning or role-play challenges (e.g., consent forms) in multilingual encounters, or giving guest lectures on cultural topics that may influence communication. Conclusions Medical schools should consider strategies to incorporate the linguistic realities of U.S. patients as part of public health-conscious medical education. Addressing linguistic skills for medical students is supported by LCME core content standards and must adapt to demographic trends to ensure health equity for linguistic minorities and to address physician public health preparedness in an increasingly global U.S. population. Institutional endorsement of courses that address non-English language competencies is critical to the success and sustainability of educational programs and can be justified as part of existing medical education standards. 
We propose that schools focus on two complementary approaches to linguistic needs of physicians in training: (1) dedicated medical Spanish courses for eligible students who desire to achieve and assess competency in direct patient-physician communication with Spanish-speaking patients and (2) integrated global linguistic competence education for all medical students within communication and clinical skills courses. Replication of an accepted curricular methodology that has been previously established should be considered to reduce the need for recreating new curricula for dedicated medical Spanish courses. Furthermore, interdisciplinary partnerships, such as collaboration with professional medical interpreters, can contribute to the cultural and self-awareness components critical for multilingual and monolingual physicians alike and should form part of a comprehensive longitudinal approach to linguistic competence in medical education. Next steps should consider interinstitutional collaboration on curricular implementation and evaluation by means of data collection regarding student comfort, confidence, and skill demonstration. Varying aspects of implementation such as course duration, curricular placement, course faculty qualifications, and successful interdisciplinary partnership models should be studied to better understand best practices. In addition, continued engagement of institutional stakeholders such as administrators and leadership is important in assessing institutional priorities and considering integration of both dedicated medical Spanish and global linguistic competence curricula as part of health equity-conscious undergraduate medical education. Disclaimer The contents of this article are solely the responsibility of the authors and do not necessarily represent the official views of the Association of American Medical Colleges, the University of Illinois College of Medicine, the Washington University in St. Louis, the University of Texas Medical Branch, or the National Institutes of Health.
3,710
2019-07-01T00:00:00.000
[ "Linguistics", "Medicine", "Education" ]
Features and themes of poetry in KOPI (komunitas puisi Indonesia) facebook group Going along with the development of technology, literary disciplines also have been developing. Cyber literature becomes popular in this digitalization era which explores the existence of the internet as the medium. One of the large cyber literary communities on Facebook is KOPI (Komunitas Puisi Indonesia) with 64 thousand members. Due to the insufficient research analyzing poetry in the Facebook group, this research analyzed the features of poetry and the themes of poetry in KOPI (Komunitas Puisi Indonesia). Descriptive and qualitative approaches were used in this research. Observation and documentation were used as the techniques in data collecting. The observation was done by reviewing the group’s situation and the documentation is done by reviewing the poetry collections that had been uploaded in this research from the beginning until the middle of February 2021. The data analysis techniques used Miles and Huberman techniques which include: (1) data reduction, (2) data serving, and (3) conclusion/verification withdrawal. The results of this research showed that the dominant poetry features were image illustration and the themes that were used by KOPI group members in writing poetry included the physical themes, moral themes, social themes, egoic themes, divine themes. Physical theme with the most widely used type of love. It can be concluded that the physical theme with the type of “love” is the choice of the theme that the author most favors. Copyright@2021, Sugeng Santoso, Ni Made S I Wahyuni, I Wayan A This is an open access article under the CC–BY-3.0 license INTRODUCTION The rapid progress of the times and digitalization brought many changes to life. Now technology is inseparable from human life. Divers technology in information have been developing and creating the easier job in humans' life (Wahyudi & Sukmasari, 2014). Human activities that previously used conventional models have now been transformed into digital models. That era is the disruption era. Oey-Gardiner et al. (2017) define disruption as a very fundamental change that has taken place in various fields, such as correspondence, print media, and public transportation. This Oey-Gardiner et al. view is supported by the emergence of many online transportation service systems, online news media, and instant message applications with the most advanced features that are presently used by many people. Christensen (in Ohoitimur, 2018) described disruption as a profitable innovation opportunity. So, the existence of disruption makes human life easier. The era of industrial revolution 4.0 is marked by virtual commerce (e-commerce), artificial intelligence, big data, and the use of robots (Prasentiantono in Abdullah, 2019). The industrial revolution 4.0 is a technological advancement that combines the physical, digital, and biological worlds of humans (Hamdan, 2018). One of the tangible benefits of the disruption of industrial revolution 4.0 is the ease of communication of remote interactions with social media that the internet network memorizes. Social media is a communication tool that users use to perform social processes (Mulawarman & Nurfitri, 2017). Social media offers a lot of sophistication when it comes to communicating with no time limitation. It encourages the continued increase in the number of social media users over time that is presented by the number of people who use smartphones (Herawati, 2011). 
The latest data from the Indonesian Internet Service Providers Association (2020) state that internet users make up 73.7% of the Indonesian population, or 196.71 million people. In the same survey of 7,000 respondents, the Association found that Facebook was the most widely owned social media platform, used by 94.4% of respondents. The disruption of industrial revolution 4.0 has spread to all aspects of life (Saputra & Meilasari, 2020), including literary works. Literary works written on the internet are called cyber literature. Septriani (2016) states that "cyber-sastra or cyber literature is a literary activity that utilizes computers or the internet", and Endraswara (2006) likewise defined cyber literature as a literary activity that explores the internet and the computer as its mediums. Cyber literature continues to grow along with the advancement of technology and offers authors several conveniences, including (1) the absence of the strict selection applied in print publishing, (2) a wide reach, (3) publication of the original work without an editor's revisions, and (4) affordability, since readers do not need to purchase printed material. According to Sumiyadi (2020), many people now use digital media not only to read poetry collections, short story anthologies, or drama texts in the form of electronic books in PDF format, which are widely available on the internet. The development of digital literature can enrich literature itself and make literary life more practical. It also removes the need for readers and creators to meet in person, since social technology allows contact to be maintained at a distance. According to Noorfitriana (2017), in the late '90s mailing lists were the main medium for spreading cyber literature; since then, other social media have taken over this role, Facebook among them. Facebook has a very large user base: Riyanto (2020) reported that in 2020 there were 130 million Facebook users in Indonesia, which supports the development of cyber literature on the platform. The proliferation of cyber literature on Facebook is evidenced by the large number of online literature community groups that have formed there. Several well-known online literature communities with thousands of members operate on Facebook, including KOPI (Komunitas Puisi Indonesia), Komunitas Penulis Sastra Indonesia, and Karya Sastra; all three are engaged in poetry. Poetry is a type of literary work that carries many meanings and aesthetic value. According to Samuel (in Suryaman & Wiyatmi, 2013), poetry is beautiful words arranged in beautiful form. Furthermore, Rosita (2018) describes poetry as a short literary work that expresses the content, thoughts, and feelings of authors through creative and imaginative language. Poetry has two sets of building elements, namely physical elements and intrinsic elements. Rosita (2018) states that the elements that can be observed directly are (1) sound, (2) word, (3) array or line, (4) verse, and (5) writing system, whereas the elements that can only be grasped through sensitivity to layered meaning are the intrinsic elements.
These elements include (1) themes, (2) flavors, (3) tones, and (4) moral messages. From Table 1, it can be seen that some online poetry groups on Facebook are well known for having memberships in the tens of thousands. Their intensity of activity is also above 25 activities per day, which includes authors publishing works and members commenting on one another's posts. In addition, the group members come from a variety of professions, ages, and levels of literary competence. The popularity of poetry in cyber literature stems from the fact that poetry is a literary product that does not need any explanation or further elaboration; as Gunawan (2019) states, poetry is the literary product most often uploaded by social media users because it does not force the author to explain the work. Poetry published in Facebook groups offers interesting things to explore, for example the themes of the written poetry and the features of the poetry as published in the groups. Therefore, this research analyzes the themes and features of cyber poetry in order to enrich literary scholarship, more specifically regarding the existence of poetry in Facebook groups. Moreover, the features of cyber poetry have not been analyzed as much as other cyber poetry topics. Similar research has been conducted by Purwaningsih, Khusniyah, and Sartini et al. Purwaningsih's (2016) research, titled Puisi Facebook sebagai Salah Satu Bentuk Budaya Cyber, found that the quality of Facebook poetry is far below that of poetry published in print (conventional). The similarity between the present research and Purwaningsih's lies in the variable, namely Facebook poetry; the difference lies in the direction of the research, because Purwaningsih compares the quality of Facebook poetry with printed poetry. Next, Khusniyah's (2019) research, titled Perkembangan Puisi Cyber Sastra di Indonesia, found that (1) the internet media used to publish cyber literature are blogs, Tumblr, Facebook, and Twitter, and (2) Twitter poetry has the most prominent characteristics because it is limited to a maximum of 140 characters. The similarity with Khusniyah's research lies in the variables, as both analyze the genre of cyber poetry; Khusniyah, however, analyzes the media used in cyber literature and the characteristics of Twitter poetry. Further related research is Sartini et al.'s (2019) Fitur Puisi Remaja dalam Sosial Media Line, which found that poetry uploaded on Line always uses image features. The similarity with Sartini's research lies in the variables, namely analyzing features in cyber poetry, while the difference lies in the objects studied. Based on these previous studies, the analysis of the themes and features of cyber poetry is a novelty, because earlier research has not examined these variables in a Facebook poetry group. The Facebook group chosen is KOPI (Komunitas Puisi Indonesia) because its large membership provides variation in the data. The KOPI (Komunitas Puisi Indonesia) group was launched on June 24th, 2015. The center of this group is located in Kotagede, Yogyakarta.
Because KOPI (Komunitas Puisi Indonesia) is an online group, its center is the location where the group's founders formed it. The purpose of creating the KOPI (Komunitas Puisi Indonesia) group is to serve as a place (home) for poetry writers, and for people who simply enjoy poetry, to learn together and share knowledge about writing poetry; these objectives are stated in the group description. As of February 23rd, 2021, the KOPI group had 64 thousand member accounts and 6 managers, whose account names are Yulia Suganda, Rindu Violet, Achmad Masih II, Abil McWriter, Tri Raden Raden, and Ikhsan Madjid. The task of the group managers is to monitor the group's activities. The purpose of this research is to identify the features of poetry and the themes of poetry raised in KOPI (Komunitas Puisi Indonesia). This research is also expected to contribute information about the existence of cyber poetry on Facebook to literary activists and the general public. For the world of education, it can serve as a reference and a consideration for Indonesian Language Teachers regarding the use of cyber literature as a medium for teaching poetry to students. Since little cyber literature research has addressed the topic of Facebook poetry, this research is also expected to serve as a reference for similar future studies. METHOD This research used a descriptive qualitative research design. A qualitative approach produces data in the form of words, both written and oral (Siyoto & Sodik, 2016), while, according to Kuntjojo (2009), descriptive research is research conducted by describing the variables raised. The data source in this research is the KOPI (Komunitas Puisi Indonesia) Facebook group, with data in the form of poetry published from the beginning to the middle of February 2021, more precisely from the 1st to the 20th, from which 30 poems were randomly selected. This restriction was imposed because of the large number of poems published in the KOPI group. The data collection methods were digital observation and documentation: digital observation was carried out to see the state of the KOPI group, while documentation was carried out by reviewing the poems uploaded from the beginning to the middle of February 2021, more precisely the 1st to the 20th. The data analysis technique used in this research is the Miles and Huberman model, which has three elements, namely: (1) data reduction, (2) data presentation, and (3) conclusion drawing/verification. The theoretical framework used covers the features of poetry and the themes of poetry. Data reduction was done by selecting the main things that are the focus of the research and eliminating unnecessary data. Data presentation was an attempt to display the information that had been compiled, and was done by presenting a description of the features and themes of the poetry in the KOPI group. Conclusion drawing was the activity of formulating conclusions based on the data obtained, carried out after the data had been presented. Features of Poetry in KOPI (Komunitas Puisi Indonesia) As quoted from Sartini et al. (2019), a feature is one of the components that completes an uploaded poem. From that definition, in relation to Facebook poetry, a feature is a special component that completes a poem uploaded on Facebook.
Facebook has several features that allow its users to communicate in several forms, such as image media, voice, and video. These features can be used to support the visualization and the beauty of Facebook poetry. For the analysis of poetry features, a random sample of 30 poems posted from February 1st to 20th, 2021 was taken. From this sample, it was found that most of the poems published in the KOPI group used illustrations such as animated images, paintings, landscape photos, self-portraits, and photos of loved ones. Below is a description of the three poems most representative of the entire sample. (In the accompanying screenshots, the annotated elements include information about the responses other members gave to the poem and the feature for responding in the form of likes or comments.) The features of Perihal Poligami can be classified as being as basic as poetry in general, because it uses neither illustrations nor video. Nevertheless, the meaning the author wishes to convey is still well received because the poem is simple, and its use of rhyme makes it more aesthetic, as in the words pencuri and pensuci. In the poem shown in picture 3, the use of a painting serves to increase the aesthetic power and facilitate the delivery of the poem's meaning: the girl who fell asleep holding a flower represents the girl who planted the flower in the author's heart but has died, showing that she remains in the author's heart even though she no longer lives in the world. As with the poem in picture 3, the use of illustration in Terbangun also serves to increase aesthetic power and facilitate the delivery of meaning. This poem by D speaks of gratitude and the tahajud prayer, so the chosen illustration is very fitting. The difference between the poems in picture 4 and picture 3, however, lies in the way they are written: Terbangun combines the lines of the poem and the image in a single file, which makes it more practical and aesthetic because the lettering style can be changed. Based on the analysis of the three sampled poems above, the dominant feature used by KOPI group members is the illustrative picture. This is in line with Pramudya's (2017) statement that illustrations in poetry help show and express intentions, ideas, feelings, situations, or concepts so that they become real and effective and are easily understood by the reader; the atmosphere, process, and expression of the poetry are also conveyed through illustration. The features used by KOPI group members are not very different from the features of Line teen poetry described by Sartini et al. (2019). Both types of poetry generally use images as media, although Perihal Poligami by AGW uses text only. The advantage of Line teen poetry is its sharing feature for other Line users, a feature Facebook poetry does not have. In terms of the number of characters used, Facebook poetry has no limit, in contrast to Twitter poetry, which has a maximum of 140 characters, as noted by Khusniyah (2019). Clearly, the features of each cyber literary medium vary, depending on the policies of the social media platform used. Themes of Poetry in KOPI (Komunitas Puisi Indonesia) Theme is the idea that becomes the basis of a work.
This definition is very similar to that of Hartoko & Rahmanto (in Hermawan & Shandi, 2019), according to whom themes are the basic ideas that support a literary work. Furthermore, Shipley (in Kurniawan, 2014) defines the theme as a discourse, general topic, or primary issue poured into the story. From Shipley's definition it can be inferred that theme is a part that is always present in poetry. Shipley (in Hidayatullah, 2018) divides themes into five kinds, namely physical themes, moral themes, social themes, ego-based themes, and divine themes, and the theme analysis in this research refers to Shipley's theory. The analysis of poetry themes also used the random sample of 30 poems posted from February 1st to 20th, 2021. Based on the documentation carried out, the theme most commonly published by KOPI group members in that period was the physical theme, with a share of 43.3%, followed by the social theme with 20%. The third most common choices were the divine theme and the moral theme, each with 13.3%, and the least chosen was the ego-based theme with 3.3%. The dominance of romance themes in the literary world is a constant, because love is an element that is always present in human life and is what makes human life more beautiful. Ramdan (2018) likewise stated that the greatest and most fundamental experience in human life is love. Love itself takes many forms, and the experience of it is often immortalized in literary texts, which is why so many literary works raise the theme of love. This is similar to the research conducted by Kurniawan (2014), titled Tema pada Puisi-puisi Karya Siswa dalam Buletin Suara Puspa di SMA Negeri 5 Yogyakarta Juni 2013-Januari 2014, in which themes of love were the most commonly used in the poetry of the Suara Puspa bulletin. It differs, however, from Masruroh's (2017) findings in her research titled Tematik pada Puisi dalam Buku Teks Bahasa Indonesia untuk Sekolah Menengah Pertama (SMP) Kelas VIII. The themes found by Masruroh were dominated by the divine theme at 47%; the other themes found were the physical theme, of the love-for-the-motherland type, at 33.3%, and the divine theme of the religious type at 25%. These three themes dominated because the poems in the Indonesian textbooks were intended as teaching materials and as materials that shape the character of students. This is supported by Siswanto's (2017) statement that literary learning (including poetry) in the 2013 curriculum was designed to instill values through literary learning, including social values, love for the motherland, and religious values. Based on the above results, one poem from each theme is sampled for further analysis in order to provide a clear and detailed description. The discussion is as follows. Based on the results of the analysis, it can be seen that the theme of the first poem is love, which can be classified as a physical theme. According to Shipley (in Nourmalita, 2015), physical themes relate to the physical state of man, so this theme shows a great deal of human physical activity. Examples of this theme are feelings of love, shame, longing, and so on (Shipley in Kurniawan, 2014).
This is the most common theme in literature, because literature is always bound up with love. In this poem, the author carries the reader into the scene of a couple's tenderness that turns out to be only a delusion. The author's idea is expressed in three verses, conveying great love through praise and through direct expression. The expression of praise is found in the lines semanja menatap matamu yang memesona and dan kubelai poni yang menghiasi dahimu yang indah, while the expression of love is shown directly in the part dan kukecup mesra penuh cinta. Through these lines, the author hopes that the reader can capture the love for, and beauty of, the person called "dinda", so that the reader also feels what the author is experiencing. Based on the results of the analysis of the poem Diam, it can be seen that its theme is advice, which can be classified as a moral theme. According to Shipley (in Nurhakiki & Andreawan, 2018), the moral theme describes human mental activities, or matters related to sexuality and other activities that only humans can do; in other words, moral themes characterize human beings. Examples of this theme are advice, admonitions, opinions, and so on (Shipley, in Kurniawan, 2014). The poem Diam tells of an author who gives advice to someone nicknamed "Dik", a term usually used to address a wife or a younger sister. The advice given by the author is not to listen to what others say about oneself. The expression of advice in this poem is found in the lines biar semua berkata apa and dan tak perlu kita jengah. The advice is, in truth, addressed not only to the person called "Dik" but also to all readers. The poem Diam has a good purpose because it gives advice to the reader, in accordance with the ethical function of literary works; Didipu (2013) stated that the ethical function is the moral function literature performs through the advice (values) contained in it. Poetry 3. Calo'nisme by AT Aku belajar dari calo yang kurang ajar Aku belajar dari calo yang tak terpelajar Aku belajar dari calo yang tak bermoral Aku belajar dan sabar Ahhh, apa hanya di negeri ini Seorang tuna kerja terjajah oleh pribumi Pemerintah apakah tuli Suara Tuna kerja seolah senyap tertutupi Serang, 17-february 2021 Jeritan dari seorang tuna kerja Based on the results of the analysis, it can be seen that the theme of the poem Calo'nisme is criticism of the government, which can be classified as a social theme. According to Shipley (in Dambudjai, 2018), the social theme describes human life as social beings and their interactions. Examples of this theme are social life, interaction between humans and the natural environment, social conflicts, and so on (Shipley, in Kurniawan, 2014). The author conveys his criticism of and resentment toward the government well in this poem, putting the idea forward in two verses. The devices used to clarify the meaning and theme of the poem are innuendo and direct expression: the satire is found in the first verse, for example in the part aku belajar dari calo yang kurang ajar, while the direct expression is shown in the part pemerintah apakah tuli.
Through these expressions, the author hopes that his criticism of and resentment toward the government will be understood by the reader, and perhaps even heard by the government itself. The criticism in this poem is in harmony with the function of literary works as a tool of social reflection and as a mirror of the state of a society. Putra (2018) likewise argued that literature can be regarded as a tool for evaluating and making suggestions about social life, because literary works are a sociocultural record; literary works can thus be used to observe a phenomenon occurring in a society at a certain time. Read in this way, Calo'nisme suggests that employment in Indonesia is difficult to obtain and that the government ignores the problem. Poetry 4. Terbangun by D Terbangun dari tidur di atas dipan kayu mahoni Mimpi yang tak kunjung usai seakan tak berujung sepi Berselimutkan sehelai tikar yang terbuatkan dari semak belukar Tetesan air yang terjerembab di sela genteng yang using Kutatap jam yang berdetak lambat seolah enggan berputar Oh Tuhanku, terima kasih Kau bangunkan ruh dan raga ini Agar selalu ingat pada-Mu Based on the results of the analysis of the poem Terbangun, it can be seen that its theme is religiousness (the human relationship with God), which can be classified as a divine theme. According to Nurgiyantoro (in Anggraini, 2019), the divine theme concerns the human relation to God and other matters of a philosophical nature. The main problem in this theme is man's relationship with God, such as issues of religiosity, vision, outlook on life, and beliefs (Shipley in Kurniawan, 2014). The poem Terbangun expresses the gratitude of a servant of Allah SWT who has been awakened for the tahajud (night) prayer. The author depicts the early-morning atmosphere quite well, so that readers can immediately grasp the direction of the poem's meaning; in addition, the illustration of someone praying supports the poem. The idea of gratitude is put forward in the part Oh Tuhanku terimakasih Kau bangunkan ruh dan raga ini agar selalu ingat dengan dirimu. Besides expressing gratitude to Allah SWT, this poem can also serve as a medium for spreading goodness among people, especially fellow Muslims, since in reality not many Muslims perform the tahajud prayer. This is in line with Nurhayati et al.'s (2019) statement that preaching can also be done through literary works, by addressing issues related to godliness and religion or by inserting religious messages. Based on the results of the analysis of the poem by DP, it can be seen that the theme raised by the author is a principle of life, which can be classified as an ego-based theme. According to Sa'diyah (2014), the ego-based theme has the characteristic that the author is more concerned with the self than with others; it therefore relates to personal human reactions as individuals who always demand recognition of their individual rights. Examples of this theme are selfishness, self-esteem, dignity, or certain human natures and attitudes (Shipley in Kurniawan, 2014). The poem was deliberately left untitled by the author. The meaning DP wishes to convey in this poem is an affirmation that whether or not he smokes is his own right.
This poem is also accompanied by a picture, an image of the author smoking, to strengthen its meaning and add beauty. The part that most clearly shows the element of selfishness is Jangan sok tahu bilang saya hobi. Saya kecanduan! Al-Ma'ruf & Nugrahani (2017) stated that literature is a medium of expression through which authors reinforce meaning; in that light, the untitled poem written by DP is the author's expression of how he perceives his smoking habit. CONCLUSION The KOPI (Komunitas Puisi Indonesia) group is a cyber-literature group on Facebook devoted to poetry. The group has existed for almost six years and now has 64 thousand members. The dominant poetry feature used by KOPI group members is illustration, because illustrations in poetry help make intentions, ideas, feelings, situations, or concepts precisely and effectively visible, so that they are easy for the reader to understand. There are also several variations in how the poetry is written, such as members who combine the lines of a poem and an image in a single file, which makes the work more practical and aesthetic. The themes used by KOPI group members include physical themes, moral themes, social themes, ego-based themes, and divine themes, with the physical theme, of the love type, being the most widely used. In line with the purpose of this research, future researchers who take up the topic of cyber poetry on social media are expected to improve on this study by examining other variables or analyzing the same variables in greater depth. In addition, Indonesian language teachers are encouraged to use Facebook poetry (cyber poetry) as a learning medium in schools, given the many advantages and conveniences that cyber poetry offers.
6,896.4
2021-04-30T00:00:00.000
[ "Linguistics" ]
Information security in healthcare supply chains: an analysis of critical information protection practices : Because of their vital role and the need to protect patient information, interest in information security in Healthcare Supply Chains (HSCs) is growing. This study analyzes how decisions related to information security practices in HSCs contribute to protecting patient information. Eleven semi-structured interviews were performed. The interviewees were managers from Brazilian HSC organizations. Four dimensions and 14 variables identified in a literature review were used to perform categorical content analysis. The findings suggest organizations, while aware of their critical information and internal processes, lack the necessary metrics to measure the impacts of possible failures. It seems organizations tend to invest in standard security measures, while apparently ignoring the specificity and complexity of information in HSCs. Introduction Around the world, Information Technology (IT) is used in ever wider areas of life. Similarly, with business transactions, increasing numbers of individuals have access to information, without necessarily paying proper attention to its security. Such a lack of attention (Song et al., 2019) exposes organizations to information security breaches (Safa et al., 2016; Gordon et al., 2015). To defend themselves, organizations need to invest in information security, which is the protection of organizational resources including information, hardware, and software (Chamikara et al., 2020; Guttman & Roback, 1995). When organizations collaborate in supply chains, it is crucial to pay close attention to how their co-members deal with information security, because if one fails, a breach can affect any and all members of the chain (Gordon et al., 2015). Supply Chain Management (SCM) is considered vital for the success of organizations pursuing profit and cost effectiveness while engaging with different suppliers (Ketchen & Hult, 2007). There is an increased need to implement strong security measures to safeguard the information of organizations throughout the chain (Bojanc & Jerman-Blažič, 2008). However, the cost of an ideal information security protection system may affect their financial status; thus, striking a balance between the costs of security measures and the value of information is a great challenge (Gordon & Loeb, 2002). The situation is no different in the healthcare sector, even though it lags far behind other industrial sectors in terms of IT and SCM (Chen et al., 2013; Hedström et al., 2011). IT has an increasingly important role in healthcare assistance, due to the need to provide information in a timely manner for decision-making and to protect patient information. However, whatever the cost of ensuring information security, the cost of failing to protect patient information may be higher (Landolt et al., 2012; Samy et al., 2010). Healthcare professionals often require rapid access to patient information, and the delivery of that information may not always be in compliance with organizational and industry Information Security standards (Hedström et al., 2011; Huang et al., 2014). Patient care and safety controls need to be established and must abide by the financial standards established by each organization (Huang et al., 2014). The challenge is to measure investments in Information Security against the impact of the failure of such systems (Huang et al., 2014).
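The balance invoked above between the cost of security measures and the value of the information to be protected is formalized in Gordon and Loeb's (2002) economic model of information security investment. The sketch below is only a minimal illustration of that model, not part of this study: it uses the class-I breach-probability function discussed in the Gordon-Loeb paper, and every numeric value (vulnerability, potential loss, and the productivity parameters of security spending) is an assumption chosen purely for demonstration.

```python
# Illustrative sketch (not from this article) of the Gordon-Loeb (2002) model of
# information security investment, using their class-I breach-probability function
# S(z, v) = v / (alpha*z + 1)**beta. All numeric values below are assumptions.
import math

def breach_probability(z: float, v: float, alpha: float, beta: float) -> float:
    """Probability of a breach after investing z in security (class-I function)."""
    return v / (alpha * z + 1) ** beta

def expected_net_benefit(z: float, v: float, loss: float, alpha: float, beta: float) -> float:
    """Reduction in expected loss achieved by investing z, minus the cost z itself."""
    return (v - breach_probability(z, v, alpha, beta)) * loss - z

def optimal_investment(v: float, loss: float, alpha: float, beta: float) -> float:
    """Closed-form optimum for the class-I function (zero when protection is not worthwhile)."""
    z_star = ((v * loss * alpha * beta) ** (1 / (beta + 1)) - 1) / alpha
    return max(z_star, 0.0)

if __name__ == "__main__":
    v = 0.6            # assumed vulnerability of a patient-record system
    loss = 500_000     # assumed monetary loss if those records are breached
    alpha, beta = 0.001, 1.0   # assumed productivity of security spending

    z = optimal_investment(v, loss, alpha, beta)
    print(f"optimal investment:            {z:,.0f}")
    print(f"expected net benefit at z*:    {expected_net_benefit(z, v, loss, alpha, beta):,.0f}")
    print(f"1/e upper bound on z* (v*L/e): {v * loss / math.e:,.0f}")
```

For the breach-probability classes Gordon and Loeb analyze, the optimal spend never exceeds 1/e (roughly 37%) of the expected loss vL, which is the kind of benchmark the impact metrics discussed later in this article could feed.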
Information Security is a technical discipline that aims to ensure maximum security levels (Bojanc et al., 2012). From among the various lines of research into Information Security, the present study adopts a behavioral approach to the issue (Zafar & Clark, 2009). However, organizations need to consider the volume of security investments in order to assess whether the costs are feasible and whether they will provide the desired outcomes (Bojanc et al., 2012;Gordon et al., 2015). Furthermore, decision-makers must scrutinize how their supply chain members deal with Information Security (Huang et al., 2014). A further challenge in Information Security lies in the growing number of daily transactions with different HSC members. Organizations are seeking faster and better healthcare information, nonetheless they must also consider the prospects for achieving better financial results (Huang et al., 2014;Hedström et al., 2011). In this context, this article seeks to answer: how decision-making in HSCs contributes towards protecting patients and if the actions taken are financially balanced? 2 Theoretical background Information security Organizations of all kinds are increasingly dependent on their IT resources for their business activities (Gordon & Loeb, 2002). IT systems have evolved from being operational to becoming strategic, including in supply chains (Gupta et al., 2006). The speed at which transactions take place today necessitates greater security measures for critical information (Gordon et al., 2015). The primary goals of Information Security (IS) are the identification and the mitigation of possible security breaches in order to guarantee that, in the event of their occurrence, decision-makers will have enough information and knowledge to make the best decisions as fast as possible (Bojanc & Jerman-Blažič, 2008). While people are likely to fail in matters of security, it is the responsibility of IT to help users do the right thing, guiding them to make correct decisions from the security perspective (Kraemer & Carayon, 2007). Due to the difficulty in determining the IT investment levels necessary to prevent failures, some managers have started to analyze the indirect results of security incidents in their organizations (Huang et al., 2014). This kind of investment analysis takes into account the value of the information, seeking to balance the cost of protecting the information against the costs involved in the case of information being leaked or unduly manipulated due to some breach (Bojanc & Jerman-Blažič, 2008). The analysis is sophisticated because there are many variables to be considered, such as the probability of a breach or failure (Patel et al., 2008;Huang et al., 2014). By mapping all the critical business information for the organization, decisionmakers can elaborate plans and analyze the relationships in order to optimize investments in information security (Gordon & Loeb, 2002;Gordon et al., 2015). When the cost of protection becomes too high, the organization may take out insurance to retrieve any possible losses in the case of information security breaches (Bojanc & Jerman-Blažič, 2008). Supply chain management A Supply Chain (SC) is a network of organizations that exchange information and products as a part of their business processes in order to deliver goods and services to their customers (Christopher, 2007). 
Since it is complicated for a single organization to perform all the production steps in a SC, SCM seeks to coordinate the relationships between organizations, splitting the processes and attributing each organization responsibility for specific steps that need to be taken to achieve the business goal (Ballou, 2006). The primary objective of SCM is to ensure the efficiency of the SC to provide a competitive advantage to the member organizations (Christopher, 2007). To do so, the SCM has to plot strategies and processes that help the information and the products flow along the chain precisely in accordance with the needs of each organization (Ketchen & Hult, 2007). For any organization, SCM is a critical factor in obtaining competitive advantage over others, whether to monitor the organization or to increase its efficiency . In this context, the financial aspect may be one of the key indicators since it drives most of the decisions and is also used to measure chain performance Christopher, 2007). However, the existence of vulnerabilities that might jeopardize information security along the SC cannot be ignored. Research into HSCs has focused on improving their functioning and organizational results, preventing medical errors, ensuring better healthcare quality, and enhancing operational efficiency at hospitals (Lee et al., 2011;Kritchanchai et al., 2018). Healthcare systems are constantly under pressure to reduce costs, while at the same ensuring better quality, and maintaining consistent levels of patient care (Kazemzadeh et al., 2012). Delivering healthcare assistance becomes difficult for hospitals if along with medical errors and patient information security, the costs of providing the service are high (Lee et al., 2011). SCM can help hospitals control these costs and so improve their financial performance (Lee et al., 2011;Wieser, 2011). Information security on healthcare supply chains All the information that flows within an HSC is critical, for among other aspects, patient privacy. However, not all the employees affiliated with healthcare assistance organizations may fully understand that (Hedström et al., 2011;Magnagnagno, 2015). There is also a risk to organizational transactions, since details of agreements leaked on the market could damage the organizations involved, affecting their image or causing financial loss due to purchasing or delivery failures (Warren & Hutchinson, 2000;Gordon et al., 2015). Healthcare assistance organizations have only recently started measuring the impacts of IS issues (Huang et al., 2014). However, when they need to choose between better financial results OR patient healthcare, they opt for the latter (Hedström et al., 2011). Investments in security entail increased organizational costs which need to be subjected to cost/benefit analysis (Chen et al., 2013). Moreover, such investments do not produce revenue, their value lies in their effectiveness in preventing information breaches, but this is not always identified or measured (Huang et al., 2014). Typically, each organization seeks to apply its own information security method, which results in higher costs (Huang et al., 2014). However, given that the members of HSCs exchange information about patients, suppliers, materials, they should have better agreements regarding security since transferring such information poses greater risks than keeping it internally. 
Achieving a balance between ensuring sufficient security and the cost of providing that security is the crucial challenge (Warren & Hutchinson, 2000). Although there is considerable research into IS, vulnerabilities, and techniques, there is a lack of research into the financial aspects involved (Gordon & Loeb, 2002). Information should not just be seen as an item that leads to increased costs, because its integrity is essential for all aspects of an organization's business processes (Bojanc et al., 2012). When the critical information is well identified, investment decisions regarding its security can be made (Gordon & Loeb, 2002;Gordon et al., 2015). Conceptual model Organizations have focused on interorganizational processes (i.e., integrated with SC members) to achieve better organizational results and enhance efficiency in several vital areas (Min & Zhou, 2002;Ballou, 2006). This kind of integration means critical information has to flow between the interconnected organizations, and that information needs to be secure, not only within a single company, but throughout the chain (Gomes & Ribeiro, 2004;Gordon et al., 2015). Figure 1 represents a model of a typical Brazilian HSC, which is also the theoretical model adopted in the present study: Patient healthcare product and service providers have direct access to hospitals and healthcare clinics (Kazemzadeh et al., 2012). They provide all the products and medicines necessary for healthcare organizations to provide assistance to patients (Bhakoo & Chan, 2011). In Brazil, numerous organizations offer health insurance to cover their customers' healthcare costs. (Magnagnagno, 2015). Given the HSC structure presented in Figure 1, it is important to understand the Information Security initiatives applied both within the member organizations and throughout the HSC. Based on that, the conceptual model presented in Figure 2 was created. Assuming HSC are behind SCs in other industrial sectors (Chen et al., 2013;Hedström et al., 2011), this article uses the conceptual model to explore the Information-Security-related activities of SC organizations using HSC to validate propositions. SCs seek to manage processes among their member organizations in an integrated fashion order to yield better results in terms of their operations, products, services, and information (Ballou, 2006;Gunasekaran & Ngai, 2004). In turn, Ayers (2006) addresses the sharing of information, with the function of generating a flow of knowledge to satisfy the requirements of the end user. For this, it is necessary to take into account the possibility and the great need for a relationship between all companies from different sectors that will participate in the process (Cooper et al., 1997). On the other hand, organizations face a very big challenge to promote security standards, policies and procedures effectively (Boss et al., 2009). Due to the great complexity of security, to stay safe, it is necessary to pay attention to the configuration of all levels of users and also to the systems (Marciano, 2006). Given that, the first proposition is: (P1) Information Security among organizations in HSCs is integrated and collaborative. Effective decision-making is based on information, and systems are necessary to handle all the information needed. Therefore, it is crucial the information is accurate and secure (Bragança, 2010;Bojanc & Jerman-Blažič, 2008;Warren & Hutchinson 2000;Gordon et al., 2015). 
In HSCs, huge volumes of patient, treatment and supplier information flows along the chain (Ballou, 2006). Hence, the second proposition is: (P2) Organizations are aware of the need to protect their critical information and, that the HSC is a vital part of that. Organizations use performance metrics to help them achieve better organizational results, while, at the same time, they also have to protect their information (Gunasekaran & Ngai, 2004). Organizations need to consider the possibility of IS breaches and their impact on financial results (Bojanc & Jerman-Blažič, 2008;Gordon & Loeb, 2002). Hence, the third proposition is: (P3) Organizations have metrics to assess the impact of information security breaches. Besides strategic and financial information, HSCs deal with sensitive patient data that has to be both secure and constantly available to the professionals directly delivering medical assistance (Ballou, 2006;Bragança, 2010). Each organization's critical information needs to be identified to ensure suitable protective measures can be taken (Bojanc & Jerman-Blažič 2008;Huang et al., 2014). Therefore, the fourth proposition is: (P4) Specific investments are made to properly protect both the organization's and HSC's critical information. Methodology This qualitative study is intended to reveal the interviewees' views regarding ISrelated SCM initiatives. It does not offer well-defined pre-concepts, but rather makes four non-precise propositions (Gibbs, 2009;Sampieri et al., 2013) that hopefully will be answered in the course of the interviews and the subsequent interpretation by the researcher (Gibbs, 2009;Sampieri et al., 2013). The units of analysis are the professionals themselves and their particular view of the organization, considering the environment in which they work and the external relations with other organizations. Research instrument At the core of the research instrument are four dimensions and a set of variables that were created based on the conceptual model and the theoretical background. The instrument was validated in a process involving two respondents, one from the academic field, with a background both in Information Systems and healthcare, and the other was an IT manager from a hospital. Data collection This study sought to analyze the answers provided by professionals from the organizations in HSCs. The selected interviewees were from general management areas in their respective organizations, namely laboratories, hospitals, clinics, and healthcare insurance providers, operating within HSCs. The study focuses on the interviewees and their knowledge about Information Security and the HSCs to which their organizations belong with the aim of identifying the relevant IS practices and SC processes (Flick & Netz, 2004). All the included organizations were members of HSCs operating in two different cities in the Southern States of Brazil. Among the 11 specialists from 10 different organizations that agreed to be interviewed, 3 were from IT departments, and 8 from management. All the interviews were held and recorded between October and November 2015, and later transcribed and analyzed. Data analysis The content analysis was conducted according to Bardin, Bardin et al. (1979), using the following three steps: (i) pre-analysis; (ii) exploration of material; (iii) result processing, inference, and interpretation. The first step, (i), involved transcribing the interviews. 
The second step, (ii), involved the use of NVivo software to obtain a better view of the data and to group answers. The third step, (iii), required the researcher to interpret the data. This last stage consisted of categorical analysis, initially openly coding the data to identify prior categories based on the transcripts. After which, axial analysis was performed to group the comments according to similarity. Finally, careful analysis was carried out to identify the final categories, which also made it possible to establish their frequencies (Sampieri et al., 2013). The data from each interviewee were then compared with that obtained from the other interviewees as well as with the literature in order to validate the answers (Flick & Netz, 2004). Characterization of the respondents A total of 11 professionals from the HSCs were interviewed. Their characterizations presented in Table 2. Proposition 1 -Information security among organizations in HSCs is integrated and collaborative This proposition suggests that the organizations act in an integrated manner in seeking to achieve better organizational results. Nevertheless, the interviewees suggested this was not the usually case, for example I2 said: "We have many opportunities, but we live on different islands. We should be on the same land…that is what I stand for. The closer we get, the better the SC performance will be…" On the same subject, I11 suggested SC members cheat on each other, and on hospitals, and laboratories, performing more exams than necessary. So, he pays less for each service to compensate. He said, "I cannot pay more for their services, because they do more than necessary, and they do more because they think I am not paying enough. So, everyone loses". The proposition that all the organizations seek better organizational results is confirmed. However, they do it individually, with low levels of integration and collaboration, even among organizations in the same HSC. They maintain a relationship focusing exclusively on the supply of services and the materials they need to perform their activities. Proposition 2 -Organizations are aware of the need to protect their critical information and, that the HSC is a vital part of that The interviewees agreed that the security of patient information was critical throughout the HSC. The importance of the commercial contracts between the members was also noted. Interviewee I8 referred to the critical nature of patient information: "…Medical records, for sure, are the critical, and are even protected by law…" However, he also mentioned the commercial information in the following statement: "…Price is critical; if my competitors discover the amounts I pay, they can force my supplier to accept the same or change mine…" Interviewee I7 agreed upon this opinion, but not with I10, who claimed there is no confidentiality request between his hospital and its suppliers. Organizations need to be aware of their critical information for them to work effectively and to achieve any competitive advantage in the market (Bojanc & Jerman-Blažič, 2008). Once the critical information is identified, they can focus their efforts to protect it (Bojanc et al., 2012). Patient information is the most critical, not only because of the legal implications involved but also due to the possible indirect negative impact on the organization (Hedström et al., 2011;Huang et al., 2014). Proposition 2 was confirmed. 
The organizations indeed know which information is critical for them and for the HSC to which they belong. Proposition 3 -Organizations have metrics to assess the impact of information security breaches This proposition suggests the organizations always have metrics to measure the impacts they, as well as the HSC they belong to, would face when confronted by information security breaches. Interviewee I3 said that the whole chain faces many security issues. "…Today, banks have advanced security systems and even so, they suffer attacks…the healthcare area does not invest as much as them..." When asked why that is the case, he suggested, "…Maybe due to misinformation about what could happen. Maybe studies could show them how vital this information is." I3 elaborated on the answers found in most of the interviews, which showed they were aware of the possible impacts of an information breach. Nevertheless, neither I3 nor the other interviewees had any real data to support their assumptions and had little idea of the damage that would be caused to the organization or the HSC. That was the main reason most of them felt the most significant impact would be on the organizational image, but they were unable to translate that feeling into real numbers. However, the categorical analysis shows that "immeasurable damage" was the most frequent term used. The interviewees are aware of the impacts an information security breach would have on their organizations. However, they have no idea of the possible magnitude, thus, they cannot make suitable plans to mitigate such an event. The third proposition was not confirmed. Proposition 4 -Specific investments are made to properly protect both the organization's and HSC's critical information The last proposition was intended to cross-check the previous ones in terms of Information Security investments. It suggests the organizations invest in security actions according to the information they need to protect. Proposition 1 was partially confirmed due to the low level of integration between the organizations. Proposition 2 was confirmed; the organizations know what information is critical for them and for the HSC. Proposition 3 was not confirmed since the organizations had no metrics to measure the impact of any information security breach. Consequently, proposition 4 shows the organizations invest in information security, but they do so on an individual basis rather than in a coordinated manner with the whole HSC, nor do they consider the criticality of the information itself. Every organization should invest in IS according to the value of the information that they want to protect (Bojanc & Jerman-Blažič, 2008). In HSCs, there is critical information that is protected by law (according to I8). Nevertheless, there is no assessment of the costs that would be incurred in the case such information is compromised. Hospital managers need to be aware of the kind of information they are dealing with in order to make plans to prevent breaches of IS and mitigate their impacts in the case they occur, while ensuring the costs of any IS system designed to do so does not exceed the value of the information it is intended to protect (Huang et al., 2014). Interviewee I10 commented that the hospital always takes action reactively. After a breach occurs, they attempt to identify the best means of preventing it from reoccurring and mitigating its impact. 
Interviewee I9 says they do the same, but he adds that if they conducted prior risk analysis they might be able to prevent breaches and, thereby, avoid financial losses. Compared to SCs in other sectors, such as manufacturing, there is much to be studied regarding HSCs (Chen et al., 2013). However, that does not mean IS in HSCs also has to lag behind, because healthcare involves highly personal information. Depending on the seriousness of the breach, it could mean the end of a healthcare assistance organization (Hedström et al., 2011; Landolt et al., 2012; Huang et al., 2014). Analysis of the dimensions and variables All the variables identified in the literature review are analyzed in the transcript data. Table 3 below shows the results of the analysis of the dimensions and variables. Table 3. Analysis of the dimensions and variables. Dimension: SC processes for better information flow. • Internal information flow: the main categories identified refer to purchasing and product tracking. • Information flow between members: the results point to a low level of integration between members. • Role of the organization within the SC: with a low level of integration, the organization focuses on internal processes. • Members' definition: they know their co-members; the diagram can be seen in Figure 2. Dimension: Information to be protected. • Critical information to be secured: medical records and patient data are the most critical information in the HSC. • Information access: individual access is limited according to the user's role. • How information is exchanged among members: the main category was e-mail/website, but systems were also identified at the clinic and the hospital. Dimension: Threats and mitigation actions. • Recognizing the threats and their impacts: the main threats include information leaks, information unavailability, and internal threats; however, there is no analysis of the impacts. • Mitigation actions in information systems: the main categories identified concern the use of purchased systems, which are considered safer than in-house systems, and the use of backups. • Mitigation actions on employees: organizations have codes of conduct and contractual obligations, but they are not periodically reviewed. • Information monitoring: the interviewees said that they know it exists, but only a few were able to give specifics. Dimension: Information Security investments. • Assessing the impact of threats: organizations have no formal IS breach impact analysis procedures. • Specific Information Security budget: there is no specific IS investment, only those recommended by IT. • Impact of investment on the organization and the financial performance of the SC: there is no analysis to validate the amount spent on Information Security or its impact on organizational performance. Source: The authors. The variable 'members' definition' considers the organization's suppliers and clients within the HSC. Figure 3 presents the SC diagram according to the responses of the interviewees. Each arrow indicates the interviewee (e.g., E2/I2) whose comments support the connection. With one exception, all the interviewees agree and have similar views regarding the HSCs. They understand their own and their co-members' roles. The only disagreement was with respect to the physicians: some interviewees consider them clients, others service providers. According to I8, this may occur because they work inside organizations, which camouflages their real role within the HSC. So, the question arises: are they employees, service providers, or clients?
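The analysis above repeatedly notes the absence of breach-impact metrics (Proposition 3) and of risk analysis carried out before incidents. A standard way to express such an impact, mentioned here only as an illustration and not as part of the study, is the annualized loss expectancy (ALE): the single loss expectancy (asset value times exposure factor) multiplied by the annual rate of occurrence. The sketch below uses entirely hypothetical asset names and figures.

```python
# Minimal sketch of a breach-impact metric the interviewed organizations lack:
# annualized loss expectancy (ALE) = single loss expectancy (SLE) * annual rate of occurrence,
# with SLE = asset value * exposure factor. All names and figures are hypothetical.
from dataclasses import dataclass

@dataclass
class InformationAsset:
    name: str
    value: float            # estimated value of the asset (e.g., patient records)
    exposure_factor: float  # fraction of the value lost in a single incident (0..1)
    annual_rate: float      # expected number of incidents per year

    def single_loss_expectancy(self) -> float:
        return self.value * self.exposure_factor

    def annualized_loss_expectancy(self) -> float:
        return self.single_loss_expectancy() * self.annual_rate

# Hypothetical HSC assets, loosely following the critical information named in Table 3
assets = [
    InformationAsset("patient medical records", value=1_000_000, exposure_factor=0.4, annual_rate=0.2),
    InformationAsset("supplier contract prices", value=200_000, exposure_factor=0.5, annual_rate=0.5),
]

for asset in assets:
    print(f"{asset.name}: SLE = {asset.single_loss_expectancy():,.0f}, "
          f"ALE = {asset.annualized_loss_expectancy():,.0f}")
```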
Discussion
The main research question asked whether the organizations within HSCs identified their critical information and whether they had the necessary processes in place to adequately protect that information, while considering the value of the information and the cost of protecting it. Based on the interviews, it can be concluded the healthcare organizations are aware of their critical information, but there is neither any standard process in place nor any analysis being done in order to properly protect the information. Table 4 provides a summary of the results for each dimension as previously defined.

Table 4. Summary of the results by dimension (Source: The authors)
• SC processes for better information flow: The individual organizations have defined internal processes, but not for the whole HSC. The levels of relations, integration, and shared activities are even lower.
• Information to be protected: The information related to patients is the most critical in the HSC, and the organization performs mitigation actions to protect it.
• Threats and mitigation actions: The main threats identified include information leaks, information unavailability, and internal breaches. Organizations keep their systems up to date and have codes of conduct, but these are not regularly reviewed and emphasized to the employees.
• Information Security investments: There is no analysis regarding the impact of possible information breaches and no specific investment to safeguard against them. Organizations only carry out the standard processes suggested by their IT departments, without conducting risk or cost/benefit assessments considering the losses that would be incurred in the case of information security breaches.

Overall, the primary information in HSCs is patient-related. All the interviewees imagined the impact on their respective organizations in the case of a security breach would be significant. Nevertheless, no risk analysis had been conducted in terms of the organizations or the HSCs. Although integration is at a low level, some information is shared among member organizations; nevertheless, no analysis is made to measure it. Figure 4 presents the authors' perspective on the levels of IS between the connections in the HSC.
• High Level in terms of Information Security Practices (HL ISP): There are some security protocols, integration systems, and low levels of information shared by unsecured channels;
• Medium Level in terms of Information Security Practices (ML ISP): There is some integration through systems, but a high volume of information is shared through insecure channels, such as emails and printed documents;
• Low Level in terms of Information Security Practices (LL ISP): Information is shared mainly through unsecured channels.
According to the data collected, HSCs do not seem to be regarded as relevant to the results pursued by the organizations. They recognize the need for members, but they do not have close relationships and do not help each other achieve better performance for all the co-members. They have weak ties and little trust in each other, sometimes even hiding information, being afraid their co-members might take advantage of them rather than work together to achieve better results. The study revealed the interviewees' concerns about information privacy, mainly regarding the medical teams that need to access the patient information. Reports of the low level of concern for patient privacy by medical teams are not new. Related to information privacy, Luciano et al.
(2011) suggest the most worrying aspects involve the wide range of professionals/people that have access to patient information and what they can and cannot do with it. This concern is expressed by I9, who says the doctors are resistant to having exclusive access to patient data. They want the medical team members to have access to patient information, but not all such members need to have access to all the information. Regarding investments in information security, it was found the organizations do not consider the value of the information when deciding how much they will spend on protecting it, nor do they consider the impact on the organization in the case of information security breaches. The organizations purchase information security products available on the market, without focusing on the nature of their critical information, despite the possible consequences they would face in case of information security breaches.

Final remarks
Healthcare managers are increasingly concerned with the financial health of the medical institution in order to maintain their level of assistance and increase their capacity to attend to patients (Lee et al., 2011). This study has analyzed the critical information in HSCs together with the practices related to IS investments made to protect that information in the HSC and the perception of the impact of those investments on the organizational performance of the organizations and the HSCs. The findings help explain the four research propositions and, to some extent, are in line with those reported in previous studies. Among the findings is that the internal hospital supply chains are better defined and coordinated than the external supply chains. The hospitals have several security practices in place, such as individual access to systems, access profile levels, as well as contractual obligations and employee conduct guidelines. They also have backup procedures in place, and some even have redundancy systems. In addition, hospitals have transparent procurement processes for their materials and medicines. Rigorous tracking systems for all the products are necessary to increase the quality and the security of their use. This study's primary objective was to analyze IS practices in relation to protecting critical information within HSCs. The findings suggest the adopted information security practices are not based on the critical information in each organization and much less so on the relations with other organizations in the HSC. The organizations tend to make global IT investments that can, consequently, impact information security. Despite their importance, these investments are not based on risk and cost/benefit assessments regarding the value of the information or the impact in the case of information security breaches. That also implies unknown factors related to the impact of the investments on the organizational financial performance and, consequently, on the financial performance of the HSCs. Furthermore, the levels of integration and collaboration among the members are very low, thus limiting their capacity to achieve the kind of efficiency levels and results expected of a successful SC. They are concerned about information security but do not analyze the issues that may arise in the case of breaches. Consequently, they do not have the correct information to make better decisions to safeguard their information. How the members of HSCs and their co-members deal with information is mapped in Table 3 and summarized in Table 4.
This includes the SC processes and critical information mapping, which is consistent regarding patient data, threats and mitigation mapping, and their strategies for IS investments. This study also contributes by examining how investments in Information Security impact the organizational performance of the organizations and the HSCs. While some studies in this area have looked at healthcare organization security, their focus is on patient privacy, as in Bragança (2010) and Magnagnagno et al. (2015). They analyze the organization alone and do not consider the financial aspects of security investments. In Brazil, there is a considerable amount of research into the healthcare area. The findings of the present study suggest insufficient attention has been given to SCM, particularly in terms of risk and cost/benefit assessments regarding investments in information security and the impacts of information security breaches. Consequently, there is no security investment analysis regarding the critical information they handle every day. These findings should be taken into account by HSC and/or SC Information Security managers when analyzing the possible impacts related to information breaches and planning future investments designed to protect the organization and the HSC of which they are a part. This study has two main limitations, namely the number of interviewees and the low level of integration among the organizations in the HSCs. Some suggestions for future research based on the results are:
• Applying the research instrument in other regions;
• Conducting quantitative research in order to get a higher number of responses, so that the data can be analyzed statistically and the results generalized;
• Creating a governance-integrated model for Information Security in HSCs;
• Developing a base model for Information Security for organizations within HSCs, with possible impacts and losses, so organizations could prioritize their investments.
This study has analyzed the prevalent practices intended to protect critical information in HSCs. The lack of coordination of information security practices among the organizations in the HSCs is something that should be addressed.
7,764.4
2020-11-27T00:00:00.000
[ "Medicine", "Computer Science", "Business" ]
Spectral descriptors for bulk metallic glasses based on the thermodynamics of competing crystalline phases Metallic glasses attract considerable interest due to their unique combination of superb properties and processability. Predicting their formation from known alloy parameters remains the major hindrance to the discovery of new systems. Here, we propose a descriptor based on the heuristics that structural and energetic ‘confusion' obstructs crystalline growth, and demonstrate its validity by experiments on two well-known glass-forming alloy systems. We then develop a robust model for predicting glass formation ability based on the geometrical and energetic features of crystalline phases calculated ab initio in the AFLOW framework. Our findings indicate that the formation of metallic glass phases could be much more common than currently thought, with more than 17% of binary alloy systems potential glass formers. Our approach pinpoints favourable compositions and demonstrates that smart descriptors, based solely on alloy properties available in online repositories, offer the sought-after key for accelerated discovery of metallic glasses. 3) The confusion cannot just be due to "too many candidates with similar enthalpy". For example, if we have quite a number of enthalpy-degenerate candidates but one of them is particularly advantageous in nucleation not due to enthalpy but because it has a low interface energy or fast nucleation kinetics, or ..., then it breaks the confusion and kills GFA. The more candidates, the more likely that this could happen. For GFA, this is worse than the case of very few candidates that are known to nucleate very slowly. Perhaps a potential quasicrystal with low nucleation barrier due to low interface energy is such an example. 4) In general, the confusion can be due to kinetics, not necessarily how crowded near the ground state. For example, the compositional partitioning needed can be sluggish, even if you only have two phases (but with quite different compositions) to nucleate into. 5) The authors only did two binary systems, CuZr and NiZr. It is not clear that this method has merit for general glass-forming systems. It is limited to finding a peak within a system, relative to neighboring compositions. How do we compare different systems? Apparently two systems with similar "thermodynamic density" can have very different GFA. 6) Some groups emphasize the amorphous side (such as its structure, liquid viscosity-fragility,...), and this time this paper focuses on the crystalline state. I am however of the opinion that both should be in the picture, for GFA. In summary, I feel that this work is of preliminary nature and the conclusion is a bit too simplified. The predictive power mentioned in the title seems to be an overclaim for Nature Commun. The result is more like an incremental step in expanding what should be considered when dealing with glass-crystal competition. Reviewer #2 (Remarks to the Author) The formation of metallic glasses still remains mostly unclear. This lack of knowledge hinders the exploration for new systems, still performed with combinatorial trial and error. This article propose a heuristic descriptor quantifying such issue based on the \thermodynamic density of competing crystalline states, parameterized from high-throughput ab-initio calculations. The experimental results corroborate the capability of the heuristic descriptor in predicting glass forming ability through the compositional space. 
The results is expected to deepen the understanding of the underlying mechanisms and to accelerate the discovery of novel metallic glasses. It is a good try using the high-throughput ab-initio calculations to explain and understand the metallic glass formation. I recommend the paper for acceptance of publication in Nature Communications. Reviewer #3 (Remarks to the Author) Report of Manscript: "Predicting Bulk Metallic Glass Forming Ability with the Thermodynamic Density of Competing Crystalline States", by Eric Perim et al. This paper reports the entropic factor describing the glass forming ability, using two Cu-Zr and Ni-Zr systems to test its validity. This entropic factor is based on previous proposal "confusion" idea. This work demonstrates that it is very hard task to get the entropic factor for a particular alloy composition, which hinders its wide application to predict a new glass former alloy. Furthermore, only two alloys, and small composition ranges shown in Figs. 3 and 4 studied in this manuscript, are far enough to validate the agreement of glass forming ability with the entropy factor suggested in this work. All-in-all, I am of the opinion that the manuscript does not meet the standards of Nature Communications. Reviewer #1 (Remarks to the Author): The authors have addressed the concerns I raised in the first round of review. The data set has been expanded to a large number of systems and the descriptors have been further developed and explained. This manuscript is now acceptable for publication in Nat Commun. Reviewer #3 (Remarks to the Author): This revised manuscript addressed some questions which referees asked during the first run. However, the novelty of this manuscript is still missing although it provides some descriptors for bulk metallic glasses based on reported data. In fact, the authors predicted some possible good BMG systems using their descriptors. Thus, it will be nature for authors to fabricate such possible good BMG systems to confirm the novelty reported in this revised manuscript. Reviewer #1: This paper attempts to use an "entropic factor" to interpret the "confusion principle". The new factor basically counts the number of phases, types of lattice and space group that may be competing with the glass. A high value is believed to confuse the crystallization and favor GFA. The competitors were determined by their enthalpy-degeneracy, assessed using ab initio database. 1) This is a useful attempt. It adds a few metastable structures as potential competing phases, from the enthalpy standpoint. 2) But with regards to predicting GFA, it is a "modification", rather than a breakthrough. For example, even with just the equilibrium phases seen in the phase diagram, or the Trg (reduced glass transition temperature) that reflects how well the liquid would be competing with crystallizing phases, the prediction of GFA is already pretty good. See WL Johnson's Nat Commun paper this year. Also, in the current manuscript the experimental "# of phases" peak already matches the one from experiments and from "entropic factor". So it is not clear that the ab initio candidates actually added something essential and helped much in correlating with GFA. Authors The reviewer points out a weakness of the manuscript in its first version: the limited use of ab-initio data. 
The main point here is that our descriptor solely uses ab-initio calculation results on crystalline phases to predict GFA, without any reliance on the measurement of complex experimental parameters, such as the T rg . It is therefore uniquely suitable for materials discovery and design purposes. The fact that it closely matches those previous empirical quantities used to described GFA in known glass-forming systems is not a drawback but a necessary requirement to demonstrate that such a simple descriptor is capable of capturing the essential physical content of these complex empirical quantities, which are manifestly unsuitable for a priory identification of new metal-forming alloys. Moreover, the substantial expansion of our submission clearly presents the utility of our model for large scale screening of multiple potential glass-forming candidates, which is impractical by exclusive reliance on the current empirical GFA correlators. Now, the current version of the manuscript contains a second descriptor, the spectral evolution of the "entropy factor" trying to characterize the capability of the alloy in forming glasses. This second descriptor is based on structural and enthalpic mismatches and it is trained with the available list of known binary glasses. It is then used to predict novel glasses from a well established quantum-mechanical data repository. The highly efficient and effective use of ab-initio data and this descriptor is described on pages 5-8. Reviewer #1: 3) The confusion cannot just be due to "too many candidates with similar enthalpy". For example, if we have quite a number of enthalpy-degenerate candidates but one of them is particularly advantageous in nucleation not due to enthalpy but because it has a low interface energy or fast nucleation kinetics, or ..., then it breaks the confusion and kills GFA. The more candidates, the more likely that this could happen. For GFA, this is worse than the case of very few candidates that are known to nucleate very slowly. Perhaps a potential quasicrystal with low nucleation barrier due to low interface energy is such an example. 4) In general, the confusion can be due to kinetics, not necessarily how crowded near the ground state. For example, the compositional partitioning needed can be sluggish, even if you only have two phases (but with quite different compositions) to nucleate into. 6) Some groups emphasize the amorphous side (such as its structure, liquid viscosity-fragility,...), and this time this paper focuses on the crystalline state. I am however of the opinion that both should be in the picture, for GFA. Authors The reviewer correctly pinpoints possible other possible factors contributing to the GFA. We cannot rule them out, but they are out of the reach of a high-throughput quantum analysis leading to the identification of novel systems, by simple and quick ab-initio calculations. We believe that the predictive power, estimated in ~75%, warrants occasional false positives. The crucial point here is not to exhaustively describe the glass-formation process, with all its complexities, but to present a model that captures enough of the essential physics and is yet sufficiently simple to be practically employed for computationally guided design of new BMG's. Despite the limits in our methodology, we believe that we demonstrate convincingly that it is interesting enough to deserve publication. Reviewer #1: 5) The authors only did two binary systems, CuZr and NiZr. 
It is not clear that this method has merit for general glass-forming systems. It is limited to finding a peak within a system, relative to neighboring compositions. How do we compare different systems? Apparently two systems with similar "thermodynamic density" can have very different GFA. In summary, I feel that this work is of preliminary nature and the conclusion is a bit too simplified. The predictive power mentioned in the title seems to be an overclaim for Nature Commun. The result is more like an incremental step in expanding what should be considered when dealing with glass-crystal competition. Authors We agree with the referee's comment on the preliminary flavor of the original submission and the limited number of systems studied in it. The current version is drastically expanded. Now we are confident that the current analysis of all possible binary systems (1400+ instead of two), with a spectral descriptor based on energetic and structural considerations, trained with ~20 experimental reports (Table I) and with quite a few potential novel glasses (Table II), would not be considered incremental. To the best of our knowledge, nobody has ever challenged the problem in a similar way. Some potential novel BMG candidates are presented in Table II of the revised manuscript: Reviewer #2 The formation of metallic glasses still remains mostly unclear. This lack of knowledge hinders the exploration for new systems, still performed with combinatorial trial and error. This article propose a heuristic descriptor quantifying such issue based on the \thermodynamic density of competing crystalline states, parameterized from high-throughput ab-initio calculations. The experimental results corroborate the capability of the heuristic descriptor in predicting glass forming ability through the compositional space. The results is expected to deepen the understanding of the underlying mechanisms and to accelerate the discovery of novel metallic glasses. It is a good try using the high-throughput ab-initio calculations to explain and understand the metallic glass formation. I recommend the paper for acceptance of publication in Nature Communications Authors We thank Reviewer #2 for the supportive report. We invite him/her to read the second version of the paper, which has been drastically enhanced. Reviewer #3 This paper reports the entropic factor describing the glass forming ability, using two Cu-Zr and Ni-Zr systems to test its validity. This entropic factor is based on previous proposal "confusion" idea. This work demonstrates that it is very hard task to get the entropic factor for a particular alloy composition, which hinders its wide application to predict a new glass former alloy. Authors We agree that the task if very difficult and therefore it needs to be addressed within appropriate approximations. As mentioned in the answer to Reviewer #1 we have extended our approach to include more effective and extensive use of ab-initio data. Reviewer #3 Furthermore, only two alloys, and small composition ranges shown in Figs. 3 and 4 studied in this manuscript, are far enough to validate the agreement of glass forming ability with the entropy factor suggested in this work. All-in-all, I am of the opinion that the manuscript does not meet the standards of Nature Communications. Authors As mentioned in our response to referee #1 the revised manuscript is radically extended to address this point, and now includes an introduction of a second descriptor and an analysis of many more systems. 
The descriptor's spectral decomposition leads to a better determination of the concentration space. The method is completely ab-initio and does not require any input from experiments (except for the self-consistent determination of a threshold). In summary:
1) The spectral descriptor is trained with ~20 experimental reports of binary metallic glasses (see Fig. 5g, attached below).
2) The spectra compared with experimental concentrations show good agreement, Fig. 5(a-f).
3) The descriptor was applied to our ab-initio repository AFLOW, containing 330,000+ calculations of binary systems, thus allowing the analysis of important features, such as the frequency of metallic glasses versus solid solutions or intermetallics (Fig. 5h).
We therefore believe that the revised manuscript fits the requirements of Nature Communications in terms of originality, advancement, and general interest.
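For readers who want the counting idea in the exchange above made concrete, here is a minimal sketch: at a given composition, count how many structurally distinct crystalline candidates lie within a small enthalpy window above the lowest-enthalpy phase, and read a larger count as more structural and energetic "confusion". This is not the authors' actual AFLOW implementation or their spectral descriptor; the 30 meV/atom window, the distinctness criterion (space group plus lattice type), and the example phases and enthalpies are illustrative assumptions.

```python
# Minimal sketch of a "confusion"-style count of competing crystalline phases.
# NOT the authors' AFLOW implementation; window, distinctness criterion, and the
# candidate data below are illustrative assumptions.

from collections import namedtuple

Phase = namedtuple("Phase", "formula enthalpy_per_atom space_group lattice")

def confusion_count(phases, window_ev=0.030):
    """Number of structurally distinct phases within `window_ev` of the ground state."""
    if not phases:
        return 0
    e_min = min(p.enthalpy_per_atom for p in phases)
    near_degenerate = [p for p in phases if p.enthalpy_per_atom - e_min <= window_ev]
    distinct = {(p.space_group, p.lattice) for p in near_degenerate}
    return len(distinct)

# Hypothetical candidates near one composition (formation enthalpies in eV/atom, made up).
candidates = [
    Phase("Cu10Zr7", -0.210, 63, "orthorhombic"),
    Phase("CuZr2",   -0.205, 139, "tetragonal"),
    Phase("CuZr",    -0.195, 221, "cubic"),
    Phase("Cu8Zr3",  -0.188, 62, "orthorhombic"),
]

print("competing phases within the window:", confusion_count(candidates))
# A larger count is interpreted here, heuristically, as a hint of better glass-forming
# ability at that composition, because no single crystal can easily win the nucleation race.
```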
3,180.8
2016-06-03T00:00:00.000
[ "Materials Science" ]
Role of Impaired Glycolysis in Perturbations of Amino Acid Metabolism in Diabetes Mellitus
The most frequent alterations in plasma amino acid concentrations in type 1 and type 2 diabetes are decreased L-serine and increased branched-chain amino acid (BCAA; valine, leucine, and isoleucine) levels. The likely cause of L-serine deficiency is decreased synthesis of 3-phosphoglycerate, the main endogenous precursor of L-serine, due to impaired glycolysis. The BCAA levels increase due to decreased supply of pyruvate and oxaloacetate from glycolysis, enhanced supply of NADH + H+ from beta-oxidation, and a subsequent decrease in the flux through the citric acid cycle in muscles. These alterations decrease the supply of α-ketoglutarate for BCAA transamination and the activity of branched-chain keto acid dehydrogenase, the rate-limiting enzyme in BCAA catabolism. L-serine deficiency contributes to decreased synthesis of phospholipids and increased synthesis of deoxysphinganines, which play a role in diabetic neuropathy, impaired homocysteine disposal, and glycine deficiency. Enhanced BCAA levels contribute to increased levels of aromatic amino acids (phenylalanine, tyrosine, and tryptophan), insulin resistance, and accumulation of various metabolites, whose influence on diabetes progression is not clear. It is concluded that amino acid concentrations should be monitored in patients with diabetes, and systematic investigation is needed to examine the effects of L-serine and glycine supplementation on diabetes progression when these amino acids are decreased.

Introduction
Diabetes mellitus occurs in two basic forms: diabetes of the first type (T1DM, type 1 diabetes mellitus) and diabetes of the second type (T2DM, type 2 diabetes mellitus). The cause of T1DM, which usually manifests itself in young individuals (juvenile diabetes), is insufficient insulin production in the β-cells of the islets of Langerhans. In T2DM, the effects of insulin are counteracted by factors that induce a state of insulin resistance. In the early stage, the increased output of insulin from β-cells compensates for the insulin insensitivity. In later stages, a defect in insulin secretion develops, and therapy may require insulin administration. T2DM is becoming increasingly common in obese children [1]. Both types of diabetes develop marked disturbances in amino acid metabolism and amino acid concentrations in plasma and tissues. However, alterations are not consistent for most amino acids. The most consistent changes are increased levels of branched-chain amino acids (BCAA; valine, leucine, and isoleucine) and aromatic amino acids (AAA; phenylalanine, tyrosine, and tryptophan) and decreased levels of L-serine and glycine [2][3][4]. There are inconsistent data on changes in alanine, glutamate, aspartate, and glutamine, although these amino acids play a role in BCAA catabolism reactions [4][5][6][7]. Although it is supposed that disturbances in aminoacidemia play a role in the development of diabetes and its complications, their pathogenesis is not completely clear. Alterations in protein balance, food intake, amino acid transport through cell membranes, and increased gluconeogenesis in the liver and kidneys undoubtedly play important roles.
The aims of the present article are (1) to demonstrate that decreased glycolysis and preferential fatty acid oxidation, and the subsequent decrease in the flux through the citric acid cycle (CAC), are the main causes of decreased L-serine and increased BCAA levels in diabetes and (2) to examine the contribution of disturbances in L-serine and BCAA metabolism to the pathogenesis of altered concentrations of other amino acids and diabetes-associated complications.

Basic Data on Glycolysis and the CAC
Glycolysis is the main pathway of the breakdown of glucose to pyruvate that occurs in the cytosol and provides the substrates for energy production as well as for storage of energy in the form of lipids (Figure 1). Insulin increases glucose disappearance from the blood and glycolysis by enhanced translocation of glucose from extracellular fluid to cytosol through activation of some glucose transporters (GLUT), primarily GLUT4, and of some glycolytic enzymes, specifically hexokinase, phosphofructokinase, and pyruvate kinase. The effects of insulin are determined by the type of tissue. For example, insulin increases the translocation of GLUT4 and hexokinase activity in muscles and adipocytes but not in the liver. Pyruvate, the final product of glycolysis, can be converted in the cytosol to alanine or lactate or transported from the cytosol to the mitochondria by one of two types of mitochondrial pyruvate carrier proteins. In mitochondria, pyruvate can be converted by pyruvate dehydrogenase (PDH) to acetyl coenzyme A (acetyl-CoA), the initial substrate for the CAC, by pyruvate carboxylase to oxaloacetate, and by alanine aminotransferase (ALT) to alanine. The PDH activity is regulated by the phosphorylation/dephosphorylation of the enzyme. Its kinase is activated (i.e., the enzyme is inactivated) by increases in the acetyl-CoA to CoA, ATP to ADP, and NADH to NAD+ ratios. Insulin activates PDH by reducing its phosphorylation and acetyl-CoA production from fatty acid oxidation. Pyruvate carboxylase is activated by acetyl-CoA, glucagon, and adrenaline and inhibited by insulin. Therefore, in the liver, its activation would promote gluconeogenesis by allowing more oxaloacetate to be converted to phosphoenolpyruvate. In other tissues, primarily in the muscles, oxaloacetate is utilized in the CAC. The condensation reaction of oxaloacetate with acetyl-CoA to citric acid by citrate synthase is recognized as the rate-limiting step in the flux of acetyl-CoA through the cycle regardless of whether the source of acetyl-CoA is glucose, fatty acids, or amino acids. The CAC is the main source of reducing equivalents that enter the respiratory chain, where ATP is produced. The intermediates of the CAC play a role in the metabolism of several amino acids, such as glutamate, glutamine, aspartate, phenylalanine, tyrosine, tryptophan, threonine, and BCAA.

Glycolysis and Fatty Acid Oxidation in Diabetes
A common feature of both types of diabetes is impaired entry of glucose from extracellular space to the cell, decreased glycolysis, and mitochondrial dysfunction in most tissues [8][9][10][11][12][13].
In addition to the limited utilization of glucose, the utilization of fatty acids is of crucial importance [14]. The preferential fatty acid oxidation increases the mitochondrial ratios of acetyl-CoA to CoA and NADH to NAD+. The results are decreased acetyl-CoA synthesis from pyruvate and decreased flux through the CAC, mainly due to the inhibition of NADH-producing enzymes, specifically malate dehydrogenase, isocitrate dehydrogenase, and α-ketoglutarate dehydrogenase, and increased use of acetyl-CoA for the synthesis of ketone bodies (Figure 1). Hence, during diabetes, the flux through the CAC decreases [11,15]. It is very likely that these alterations have a fundamental role in the impaired mitochondrial respiration and energy balance observed in the muscles, hearts, and kidneys of subjects with diabetes [4,[16][17][18][19][20].

Basic Data on L-Serine Metabolism
It has been estimated that ~73% of the L-serine appearance rate in fasting humans is the result of serine synthesis from 3-phosphoglycerate (3-PG), the intermediate in the glycolysis pathway, and from glycine [21]. The first step of L-serine synthesis from 3-PG is the oxidation of 3-PG to 3-phosphohydroxypyruvate, which is converted by 3-phosphoserine aminotransferase to 3-phosphoserine. The final step is the irreversible hydrolysis of 3-phosphoserine to L-serine by phosphoserine phosphatase (Figure 2). It is generally accepted that the biosynthetic flux of L-serine from 3-PG is controlled by the last step through feedback inhibition [22,23]. From glycine, L-serine can be synthesized by the enzyme serine hydroxymethyltransferase, which catalyzes the reversible conversion of glycine and 5,10-methylenetetrahydrofolate (N5,N10-CH2-THF) to L-serine and tetrahydrofolate (THF). L-serine synthesis from 3-PG and glycine is high in many tissues, including the kidneys, brain (especially astrocytes), liver, and spleen [24,25]. L-serine synthesis in the liver is activated under conditions of increased glycolysis and decreased gluconeogenesis, such as consumption of a carbohydrate-rich diet [21,26,27]. L-serine is a substrate for the synthesis of proteins, phospholipids, particularly phosphatidylserine, and sphingolipids, such as ceramides, phosphosphingolipids, and glycosphingolipids, which are present in large amounts in the white matter of the brain and in the myelin sheaths of nerves. L-serine acts as an agonist of the glycine receptor and, therefore, is classified as an inhibitory neurotransmitter [28,29]. L-serine, in reaction with homocysteine catalyzed by cystathionine β-synthase, initiates the transsulfuration pathway. This makes L-serine important for homocysteine disposal and the synthesis of several sulfur-containing substances, such as cysteine, cystine, taurine, and glutathione. The connection of L-serine with the folate and methionine cycles enables its role in the synthesis of nucleotides and many methylation reactions. Neurological abnormalities observed in primary disorders of its synthesis indicate that the amounts of L-serine provided by food may not always be sufficient and that L-serine should be classified as a conditionally essential amino acid [30].
Why L-Serine Levels Decrease in Diabetes
L-serine concentrations in plasma and tissues decrease in both T1DM [4,5,31,32] and T2DM [5,6,[33][34][35][36][37]. The decrease in L-serine levels is probably due to two reasons. Firstly, due to decreased glycolysis and the subsequent decrease in the supply of 3-P-glycerate, L-serine synthesis decreases in most tissues. Secondly, L-serine may be deaminated by serine dehydratase to pyruvate or converted by serine-glyoxylate aminotransferase into hydroxypyruvate and, ultimately, glucose (Figure 2). Therefore, increased gluconeogenesis, which is one of the main metabolic features of diabetes, increases L-serine catabolism in the liver and the kidneys.

Consequences of L-Serine Deficiency in Diabetes
Due to the exceptional importance of L-serine in a broad range of metabolic reactions and cellular functions, the consequences of L-serine deficiency are numerous. Clinically important are disturbances in the synthesis of sphingolipids, glycine deficiency, and hyperhomocysteinemia.

Disturbances in Synthesis of Sphingolipids and Diabetic Neuropathy
A proven consequence of L-serine deficiency is impaired synthesis of sphingolipids, particularly ceramides and phospholipids [38][39][40][41]. Moreover, due to the possibility of substitution of L-serine by L-alanine during the first step of sphingolipid synthesis by serine palmitoyltransferase, neurotoxic deoxysphinganines, which lack the C1 hydroxyl group of L-serine and therefore cannot be used for the synthesis of complex sphingolipids, are formed [34,42]. These substances accumulate in tissues and exert detrimental effects on neurite formation [39]. Hence, it is very likely that L-serine deficiency participates in the pathogenesis of diabetic neuropathy, which may affect both the limbs (peripheral type) and internal organs (autonomic type). Since deoxysphinganines are toxic to the β-cells of the pancreas, their increased level may contribute to the pathogenesis of diabetes itself [41].
There are several studies reporting that L-serine supplementation reduces concentrations of deoxysphingolipids and the manifestations of symptoms of diabetic neuropathy [5,40,43,44].

Glycine Deficiency
An adaptive response to L-serine deficiency, caused by its impaired synthesis from 3-PG and its increased catabolism in gluconeogenesis, is increased L-serine synthesis from glycine by serine hydroxymethyltransferase. The reaction requires N5,N10-CH2-THF, which is formed during the degradation of glycine by the glycine cleavage system. Therefore, two molecules of glycine may be consumed during the synthesis of one molecule of L-serine:
Gly + NAD+ + THF → NH3 + CO2 + NADH + H+ + N5,N10-CH2-THF (glycine cleavage system)
Gly + N5,N10-CH2-THF → L-Ser + THF (serine hydroxymethyltransferase)
Sum: 2 Gly + NAD+ → L-Ser + NADH + H+ + NH3 + CO2
Glycine levels decrease along with the decrease in L-serine levels in both types of diabetes [2,4,[45][46][47]. However, it is not clear whether glycine deficiency in patients with diabetes affects some important physiological functions of glycine, such as neurotransmission, conjugation of bile acids, and synthesis of collagen, creatine, glutathione, heme, and purines. It is likely that an adaptive increase in L-serine synthesis from glycine plays a role in hyperhomocysteinemia and impaired synthesis of sulfur-containing substances (next item).

Hyperhomocysteinemia and Impaired Synthesis of Sulfur-Containing Substances
L-serine deficiency can lead to an increase in homocysteine levels in two ways. The first is a decreased supply of N5-CH3-THF for homocysteine methylation to methionine due to the adaptive increase in L-serine synthesis from glycine [48]. The second is impaired synthesis of cystathionine from L-serine and homocysteine by cystathionine β-synthase and a subsequent decrease in the drain of homocysteine from the methionine cycle to the transsulfuration pathway (Figure 2). The possibility is supported by the presence of hyperhomocysteinemia in humans and rodents with cystathionine β-synthase deficiency [49]. Hyperhomocysteinemia is routinely observed in patients with diabetes and seems to be involved in an increased risk of cardiovascular, cerebrovascular, and thromboembolic diseases [50,51]. Decreased cystathionine synthesis due to L-serine deficiency may also be involved in the impaired synthesis and alteration of several sulfur-containing substances, such as cysteine, cystine, taurine, and glutathione, reported in the serum of patients with diabetes [52]. Low levels of cysteine associated with increased homocysteine levels in diabetes have been reported by Rehman et al. [53]. Unfortunately, there are no studies on the effect of L-serine supplementation on the levels of sulfur-containing substances in patients with diabetes. It has only been shown that L-serine administration decreases plasma homocysteine levels in hyperhomocysteinemia induced by a high-methionine diet [54][55][56].

Basic Data on BCAA Metabolism
The BCAA are nutritionally essential amino acids that, together with their metabolites, the branched-chain keto acids (BCKA) and β-hydroxy-β-methylbutyric acid (HMB), are involved in the regulation of key protein-anabolic pathways and serve as an energy fuel during exercise and severe illness. Unlike most other amino acids, BCAA catabolism does not begin in the liver, but in extrahepatic tissues, especially in muscles.
The cause is the negligible hepatic activity of BCAA aminotransferase, the first enzyme in the cascade of BCAA catabolism reactions (Figure 3), whereas its activity is high in muscles. The BCAA aminotransferase enables the reversible transfer of the amino group between BCAA and α-KG to form BCKA and glutamate (BCAA + α-KG ↔ BCKA + Glu). Glutamate produced in muscles by BCAA aminotransferase is used by mitochondrial alanine aminotransferase (ALT) and aspartate aminotransferase (AST) as a source of nitrogen for the synthesis of alanine (Glu + pyruvate → α-KG + Ala) and aspartate (Glu + oxaloacetate → α-KG + Asp), respectively. Since the BCAA aminotransferase reaction responds rapidly to changes in the concentrations of its reactants, the removal of glutamate and the regeneration of α-KG by ALT and AST are essential for a continuous flux of the BCAA through the BCAA aminotransferase. Alanine is transported from the mitochondria to the cytosol by an unknown carrier and, together with alanine synthesized in the cytosol, is released from muscles and used preferentially for glucose synthesis in the liver. Aspartate transported from the mitochondria to the cytosol by the aspartate-glutamate carrier (AGC) is utilized in several reactions, such as the purine-nucleotide cycle and protein synthesis. Aspartate transamination back to oxaloacetate and its translocation back into the mitochondria via the malate-aspartate shuttle (specifically the malate-ketoglutarate carrier) can be important for the continuous flux of the BCAA through the BCAA aminotransferase. The second enzyme of BCAA catabolism is branched-chain α-keto acid dehydrogenase (BCKA dehydrogenase), which catalyzes the irreversible decarboxylation of the BCKA to the corresponding branched-chain acyl-CoA esters (BCA-CoA). At rest, the activity of BCKA dehydrogenase in the muscles of a healthy individual is low.
Therefore, most of the BCKA formed by BCAA aminotransferase is released from muscles and oxidized in tissues with high activity of BCKA dehydrogenase, such as the liver, heart, and kidneys, or aminated to the original BCAA. Increased concentrations of ATP, NADH, and acyl-CoA derivatives and decreased concentration of α-ketoisocaproate (KIC), the transamination product of leucine catabolism, inhibit the enzyme [57,58]. Beyond the BCKA dehydrogenase reaction, the metabolism of the BCAA diverges into separate pathways. The final products are acetoacetate, acetyl-CoA, and succinyl-CoA ( Figure 3). It is estimated that 5-10% of KIC released to the blood is metabolized in the liver and kidneys by cytosolic enzyme KIC dioxygenase to produce HMB with favorable effects on protein balance and mitochondrial biogenesis in muscles [59]. Why the BCAA Increase in Diabetes The possible causes of elevated BCAA levels in diabetes have been reviewed recently [60,61]. Supposed is impaired BCAA transamination and decarboxylation in muscles due to the changes associated with decreased glycolysis and preferential fatty acid oxidation (Figure 4). These are mainly: • Decreased flux through the CAC, resulting in impaired α-KG supply to BCAA aminotransferase. • Impaired conversion of glutamate to α-KG by AST and ALT in mitochondria due to decreased supply of oxaloacetate and pyruvate from glycolysis. The result is the drain of α-KG from the CAC (cataplerosis) and glutamate cumulation in mitochondria. A marked decrease in the rate of aspartate production from glutamate and oxaloacetate and a decrease in the Vmax of glutamate translocase was observed in heart mitochondria from the alloxan-diabetic rats compared to fed controls [62]. • Inhibition of BCKA dehydrogenase by increased levels of NADH and acyl-CoAs formed during β-oxidation. • Increased BCAA release from the liver due to the activation of protein catabolism. The BCAA is released from the liver more than other amino acids because the activity of BCAA aminotransferase is very low in the liver. • Increased transamination of BCKA to BCAA. It has been suggested that glutamine released from muscles can, under conditions of decreased activity of BCKA dehydrogenase, activate the synthesis of BCAA from BCKA or limit the transamination of BCAA to BCKA in visceral tissues [63,64]. The hypothesis of the role of impaired glycolysis in muscles in the pathogenesis of increased BCAA levels is supported by the blunted decline in plasma BCAA levels during the oral glucose tolerance test in subjects with insulin resistance or diabetes [65,66]. The fundamental importance of skeletal muscle is proven by high BCAA levels in muscles [4,[67][68][69][70][71][72]. Insulin Resistance There is a strong association of BCAA levels with insulin resistance, and the rise of BCAA in obesity is considered a prognostically significant factor in the development of T2DM [45,73,74]. The notion that elevations in BCAA levels contribute causally to insulin resistance is supported by the observation of impaired glucose disposal after BCAA infusion into circulation [75]. Several studies point to the role of the mTOR signaling pathway. It has been proposed that high levels of the BCAA increase via mTOR phosphorylation of insulin receptor substrate 1 (IRS-1), leading to the block of insulin signaling [76]. It should be emphasized that it is not quite sure that the effects of increased BCAA levels on mTOR signaling are detrimental in subjects with diabetes. 
The BCAA, particularly leucine, has potent anabolic effects and increases insulin release from pancreatic β-cells [77,78]. Therefore, under conditions of impaired insulin signaling, increased BCAA levels may promote anabolic reactions and prevent some negative consequences of insulin resistance or deficiency. Recent studies have shown that dietary supplementation with leucine attenuates insulin resistance, favors weight loss, and improves mitochondrial function [79][80][81]. Therefore, leucine supplementation is becoming a focus of attention in T2DM therapy.

Accumulation of the BCAA Metabolites
It has been suggested that high BCAA levels interfere with fatty acid oxidation, leading to the accumulation of acylcarnitines and acyl-CoAs with various lengths of carbon skeleton [82]. An increase in C3 and C5 acylcarnitines in animals fed a high-fat diet supplemented with BCAA suggests that some of these acylcarnitines are the direct products of BCAA catabolism [45]. In recent years, attention has been given to the increased level of 3-hydroxyisobutyric acid, one of the valine metabolites [75,[82][83][84]. The consequences of increased concentrations of the metabolites related to dysregulation of BCAA metabolism are not clear.

The Increase in AAA Levels
The BCAA belong, together with the aromatic amino acids (AAA; phenylalanine, tyrosine, and tryptophan), to the group of large neutral amino acids, which compete with each other for transport through plasma membranes by the same transporter, referred to as LAT1 (SLC7A5). Therefore, the rise of AAA is apparently caused by their reduced transport to the tissues due to the rise of the BCAA. It has been suggested that the elevation in the BCAA levels reduces the brain uptake of AAA, which are precursors of some neurotransmitters, notably dopamine and 5-hydroxytryptamine (serotonin), which may affect mood, cognitive functions, hormone secretion (prolactin, cortisol), and the onset of fatigue [85].
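The transporter competition described above is often summarized as the ratio of one amino acid to the summed concentrations of its large neutral competitors. The short sketch below computes such a ratio for tryptophan only as an illustration of the competition argument; the choice of competitors and the plasma concentrations are assumptions for the example, and the article itself does not report this calculation.

```python
# Illustrative only: a competition ratio for transport of one amino acid relative to its
# large neutral competitors. The competitor set and the plasma values (micromol/L) are
# hypothetical; this is not a calculation reported in the article.

def competition_ratio(target: str, plasma: dict, competitors: tuple) -> float:
    """Target concentration divided by the summed concentrations of its competitors."""
    return plasma[target] / sum(plasma[aa] for aa in competitors)

plasma = {"Trp": 60.0, "Phe": 55.0, "Tyr": 65.0, "Val": 230.0, "Leu": 130.0, "Ile": 65.0}
competitors_of_trp = ("Phe", "Tyr", "Val", "Leu", "Ile")

baseline = competition_ratio("Trp", plasma, competitors_of_trp)

# Raising the BCAA, as reported in diabetes, lowers the ratio, i.e. relatively less Trp is
# expected to be transported compared with its competitors.
plasma_high_bcaa = dict(plasma, Val=300.0, Leu=180.0, Ile=90.0)
elevated = competition_ratio("Trp", plasma_high_bcaa, competitors_of_trp)

print(f"Trp competition ratio, baseline: {baseline:.3f}, with elevated BCAA: {elevated:.3f}")
```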
Significant associations of the sum of the BCAA and AAA levels with insulin resistance and future diabetes have been reported [45,73,74,86] In the previous part of this article, it was shown that the BCAA metabolism is closely linked to the metabolism of glutamate, aspartate, alanine, and glutamine. However, the reports on changes in the levels of these amino acids are not consistent, and both increased and decreased plasma concentrations have been reported in subjects with diabetes [2,[4][5][6][7]. Several speculations make it possible to explain these inconsistent findings. The decrease in glutamate synthesis due to the block in the flux of the BCAA through BCAA aminotransferase may cause a decrease in the concentrations of aspartate, alanine, and glutamine. The likely mechanism leading to increased levels of alanine in other subjects with diabetes may be the impaired entry of pyruvate to the CAC and its subsequent shift from pyruvate dehydrogenase to alanine aminotransferase and lactate dehydrogenase reactions. The suggestion is consistent with elevated lactate levels in patients with diabetes [47]. The cause of decreased alanine levels in other patients might be due to its increased consumption for gluconeogenesis in the liver. Summary and Conclusions The issue of diabetes is very complex, and in addition to genetic factors and obesity, other influences such as stress, alterations in the immune system, drugs, nutritional habits, physical activity, and changes in gut microbiota are also involved in the etiopathogenesis of diabetes and may affect amino acid metabolism. The focus of this article is specifically the changes in amino acid metabolism due to impaired glycolysis. In the article is demonstrated that decreased L-serine and increased BCAA levels in subjects with diabetes are directly related to impaired glycolysis, preferential use of fatty acids as an energy substrate, and decreased flux through the CAC and that these alterations are implicated in the development of several complications. L-serine deficiency contributes to the altered synthesis of sphingolipids, which plays a role in the pathogenesis of diabetic neuropathy, hyperhomocysteinemia due to impaired homocysteine disposal via the methionine cycle and transsulfuration pathway, and glycine deficiency due to the adaptive increase in glycine utilization for L-serine synthesis. Enhanced BCAA levels contribute to increased levels of aromatic amino acids (phenylalanine, tyrosine, and tryptophan), insulin resistance, and accumulation of various metabolites whose influence on the progression of diabetes has not been clarified. Due to the positive effects of BCAA on protein balance, it is not clear whether their increased levels in diabetes should be recognized as beneficial or harmful. It is concluded that: (i) Plasma amino acid concentrations should be monitored in patients with diabetes, and systematic investigation is needed to examine the effects of L-serine and glycine supplementation on diabetes progression in the case of a decrease in the level of these amino acids in the blood. (ii) The ratio between BCAA and L-serine levels could be a better prognostic indicator of insulin deficiency or resistance than BCAA alone. (iii) A better understanding of the consequences of perturbations in BCAA metabolism is essential for making decisions regarding dietary recommendations in patients with diabetes. Funding: Charles University, the Cooperation Program, research area METD. Institutional Review Board Statement: Not applicable. 
Informed Consent Statement: Not applicable. Data Availability Statement: Not applicable.
6,358.8
2023-01-01T00:00:00.000
[ "Biology" ]
Impact of the Telephone Assistive Device (TAD) on stuttering severity while speaking on the telephone
There is extensive experimental evidence that altered auditory feedback (AAF) can have a clinically significant effect on the severity of speech symptoms in people who stutter. However, there is less evidence regarding whether these experimental effects can be observed in naturalistic everyday settings, particularly when using the telephone. This study aimed to investigate the effectiveness of the Telephone Assistive Device® (TAD), which is designed to provide AAF on the telephone to people who stutter, on reducing stuttering severity. Nine adults participated in a quasi-experimental study. Stuttering severity was measured first without and then with the device in participants' naturalistic settings while making and receiving telephone calls (immediate benefit). Participants were then allowed a week of repeated use of the device, following which all measurements were repeated (delayed benefit). Overall, results revealed significant immediate benefits from the TAD in all call conditions. Delayed benefits in received and total calls were also significant. There was substantial individual variability in response to the TAD, but none of the demographic or speech-related factors measured in the study were found to significantly impact the benefit (immediate or delayed) derived from the TAD. Results have implications for clinical decision making for adults who stutter.
The exact mechanism of how AAF reduces stuttering is not well understood. However, the general consensus is that AAF reproduces the choral effect, where adults who stutter tend to show immediate and dramatic reductions in stuttering when speaking chorally, or in unison with another speaker (Kiefte & Armson, 2008). Theories differ regarding the mechanisms underlying both the choral speech effect and, by extension, AAF. Early theories attributed the positive effects of choral speech and AAF to changes in the motor production of speech (Wingate, 1969; 1970), such as more active control of vocalization, reduced rate of speech, and changes in vocal intensity. Recent studies, however, have suggested that this motor hypothesis is over-simplistic and does not account for benefits derived under conditions of AAF even at rapid speech rates (Kalinowski et al., 1993, 1996; McLeod et al., 1995; Sparks et al., 2002). More recent theories have attributed the effects of AAF to reducing auditory perceptual anomalies in those who stutter, particularly related to anatomical anomalies of the auditory temporal cortex (i.e., atypical rightward asymmetry of the planum temporale (PT); Foundas et al., 2004). Based on their physiological findings, Foundas and colleagues have proposed two subgroups within the stuttering population: those with typical (leftward) PT asymmetry, which they consider adaptive and which therefore may respond to motor speech techniques alone, and those with atypical (rightward) PT asymmetry, which is a significant risk factor for developmental stuttering and which may be more responsive to treatment via AAF, as demonstrated in their study. All studies conducted to date with AAF have found significant individual variation in the benefits derived from AAF. However, it severe versus mild presentations of stuttering when reading at both normal and fast speech rates. In contrast, Armson et al.
(2006) found greater benefits from AAF in formulated speech (i.e., not reading) in adults with mild stuttering compared to those with more severe stuttering at baseline. Foundas et al. (2004) found that in their stuttering group, only those with atypical PT asymmetry demonstrated significant reductions in stuttering severity under conditions of DAF, and that these participants were also the ones with the most severe presentations of stuttering at baseline. This finding could suggest that the atypical PT asymmetry is directly responsible for both the initial severity of stuttering as well as the ability of DAF to induce fluency. The authors acknowledge that it is also possible that the difference between PT asymmetry groups is evidence of a ceiling effect in the ability of DAF to induce fluency (i.e., that DAF simply can improve severe stuttering more than mild stuttering). A second factor that is related to stuttering severity, and that could affect an individual's response to AAF, may be the type of stuttering symptoms a person presents with. For instance, it is possible that those whose stuttering is characterised predominantly by silent blocks may derive less benefit from a device that can only alter feedback from an audible source. In addition, related to the discussion above, silent blocks tend to be associated with more severe presentations of stuttering, and the length of blocks is directly factored into severity calculations of clinical measures of severity such as the Stuttering Severity Instrument-3 (SSI-3; Riley, 1994). No studies appear to have examined this potential variable. A third factor affecting one's response to AAF may be the degree of language proficiency or language familiarity in multilingual speakers. A clear language familiarity effect in fluent bi- and multilingual speakers under DAF has been demonstrated (Van Borsel, Sunaert & Engelen, 2005), with participants demonstrating significantly slower speech rate and more speech disruptions when reading their less familiar languages under DAF. Van Borsel and colleagues relate this finding to increased reliance on auditory feedback when reading in less familiar languages. Hence, if the auditory feedback system is disrupted under conditions of DAF, greater disfluencies result. The authors go on to hypothesise that the corollary may be true in adults who stutter, that is, that DAF may be most beneficial for adults who stutter in their less familiar language, where problems in auditory feedback may be most pronounced. It is possible that DAF might assist in normalising the auditory feedback system, thus reducing stuttering severity more in less familiar languages in those who stutter. It is important to note that language proficiency in bilingual adults who stutter is not independent from stuttering severity, with severity often reported to increase in less familiar languages (Jankelowits & Bortz, 1996; Watt, 2000). The potentially beneficial effect of AAF on speakers' less familiar language is intriguing to consider in South Africa, where many clients are treated in their second or third languages. There appears to be no literature, to this author's knowledge, regarding responses to AAF in multilingual speakers who stutter. In their seminal review of the literature regarding the impact of AAF on stuttering severity, Lincoln and colleagues (2006) stated that more research regarding the impact of AAF devices in naturalistic, as opposed to experimental, settings was essential for the field. In addition, Armson et al.
(2006) have suggested that it is important to examine the effects of AAF, not only in naturalistic settings, but also using commercially available devices, as specific devices may not have the capabilities of devices that are used in experimental studies. One naturalistic speaking context which many adults who stutter report to be extremely stress-provoking is the use of the telephone. James, Brumfitt and Cudd (1999) sampled the perceptions of 223 adults who stuttered regarding telephone use and found that the majority of their sample reported particular difficulty using the telephone. Their results suggested that an inability to use the telephone effectively constituted considerable restrictions in daily life activities for adults who stutter, restricting participation in both social and career-related activities. Interestingly, those participants with self-reported severe stuttering found telephone use to be more difficult than those with mild stuttering. It is important to consider reasons why the telephone presents such challenges to people who stutter. James et al. (1999) found that the most frequently cited reason for difficulty speaking on the telephone, as opposed to "face-to-face" conversations, was the total reliance on speech to convey information, leading to an actual or perceived pressure to speak fluently and keep the conversation going. The inability to use nonverbal communication, both to assist in conveying messages and in gauging listeners' responses, was also reported as being problematic and was reported to result in telephone partners being less understanding than face-to-face conversational partners. Finally, the fact that telephone conversations frequently required introductions and/or the exchange of specific information was also cited as a particular source of difficulty for respondents. There are only a few studies to date that have addressed the potential therapeutic effects of AAF when speaking on the telephone, and these have all been conducted using commercially available devices. The earliest documented study was conducted by Zimmerman, Kalinowski, Stuart and Rastatter (1997). Their study examined the effect of AAF during scripted, 300- A second study that included the use of an AAF device in a more naturalistic or therapeutic manner was conducted by Van Borsel, Reunes, and Van den Bergh (2003). In that study, nine adults were exposed to AAF in a variety of speaking situations, also using the portable Casa Futura School DAF device, for three months. Included in this exposure, the participants were required to use the telephone to make two telephone calls, one to another participant and one to a stranger in response to a newspaper advertisement, and to receive four telephone calls a month from the researchers, who enquired about compliance. Several of the participants reported that using the device had reduced their fear of speaking on the phone and this had led to its more frequent use. However, objective data relating to stuttering severity while talking on the telephone was not collected.
The authors noted that in other speaking conditions, specifically automatic speech, conversation, picture description and repeating, stuttering in the non-feedback conditions had markedly reduced following the three months of repeated exposure to AAF, suggesting carryover of benefit from exposure to AAF to speech without AAF. It is unknown if a similar carryover effect may occur with telephone calls. O'Donnell, Armson and Kiefte (2008) recently investigated the effectiveness of the in-the-ear SpeechEasy device on stuttering severity in situations of daily living, including the telephone. Seven participants took part in the study, which included assessments incorporating laboratory and naturalistic measures of stuttering severity, both before and after 9-16 weeks of repeated exposure to the SpeechEasy in everyday situations. For the telephone conversations, weekly calls were made to the participants by one of the authors, who became familiar to the participants over the course of the study, as well as by unfamiliar research assistants. Thus all calls recorded and analysed were received by the participants. Results indicated that all seven participants demonstrated reductions in percentage syllables stuttered (%SS) when speaking on the telephone with the device, with individual mean reductions ranging from 20% to 94.4% when speaking to the experimenter (group mean 64.5%; SD = 22.9), and 7.5% to 74.4% when speaking to the unfamiliar research assistants (group mean 55.1%; SD = 22.3). Finally, Bray and James (2009) recently published preliminary data on 5 participants who used the Telephone Assistive Device (TAD), the device investigated in the current study, in naturalistic telephone conversations. Descriptive data suggested a decreasing trend in %SS between naturalistic phone calls made without the device and phone calls made with the device, although the responses were highly variable among participants. In addition, more positive feelings related to using the telephone were reported by most participants, even when limited benefit in terms of reductions of stuttering frequency was noted. It is important to note that no attempt was made to assess the effect of repeated practice making telephone calls versus the effect of the device itself on stuttering severity. However, the results of this study provide preliminary data to warrant further research with the TAD. This study aimed to investigate the effects of the VA609 TAD developed by a South African company known as VoiceAmp®. The TAD incorporates the already existing technological architecture of the VA601i Fluency System, which is a portable unit that provides monaural or binaural AAF to people who stutter. This existing technology has been housed in a unique unit that connects to the telephone and delivers AAF monaurally through the telephone handset.
This new platform has a variety of additional capabilities compared to the VA601i portable system, such as a word prompting feature that is designed to assist those who present with severe blocks on initial sounds to initiate voicing when speaking on the telephone. The overall aim of the study reported here was to assess the effectiveness of the TAD in a sample of adults who stutter when talking on the telephone in their natural environments. This aim was operationalised into the following objectives, to determine: (1) the immediate and delayed benefit from the TAD following a week of repeated use (as well as the relationship between this delayed benefit and the amount of time spent using the device during that week); (2) the difference, if any, in the benefit derived from the TAD between making telephone calls versus receiving telephone calls; (3) the carryover of benefit, if any, from using the TAD to telephone calls without using the TAD; (4) the impact of initial speech variables (stuttering severity and type of symptoms This inclusive approach to participant recruitment resulted in a highly diverse participant pool, which is helpful when attempting to identify relationships between variables (such as language spoken on the telephone and benefit derived) but also reduces experimental control over extraneous variables. Three participants who took part in the study were receiving therapy. The majority of participants were not receiving therapy and were ultimately recruited via a newspaper advertisement with distribu- through to those with post-graduate degrees (P2). A range of occupations was reported, though it must be noted that occupations for some participants were severely constrained by their stuttering and did not reflect their educational ability or potential. Two participants (P3 and P7) both had the capacity and financial opportunity to attend university, but chose not to and were reportedly not in their preferred occupations. Two participants spoke English as a second language. It is clear from the data in Table 2 that the severity of stuttering, as measured by the SSI-3 (Riley, 1994), was bimodally distributed, with five participants presenting with mild to very mild stuttering in conversational speech and reading and four presenting with very severe stuttering. Only two participants were currently receiving therapy, and this was reported to be of brief duration, approximately 3-4 months prior to the start of the study. One of these (P5) had previous exposure to AAF via the VoiceAmp VA601i portable device. No others were receiving therapy at the time of the study, despite frequent severe presentations of stuttering. Most reported having received therapy as children, but few reported any positive gains from this therapy and they were unable to describe the nature of the intervention received.
Based on self-report, the participants could be categorised into those who demonstrated little to no restrictions in telephone use, defined as using the telephone in all aspects of their lives, including social and vocational spheres (n=5), and those who reported severe restrictions in telephone use that were long-standing (i.e., for at least the previous ten years; n=4). These participants reported only using the telephone to call friends, or in an even more restricted fashion, only family (P3). They also reported never answering the phone unless they knew the caller, and never making enquiry calls to strangers. The level of restriction in telephone use did not always correspond with stuttering severity, in that one adult with mild stuttering fell into the severe restriction category, while one adult with severe stuttering fell into the no restriction category. The significant restrictions participants placed on their own telephone use also had implications for the procedures of this study, as these participants would not have been able to make any 'cold' enquiry calls for the purpose of the study as in the Zimmerman et al. (1997) study. All calls made during the study needed to therefore be individualised to each participant's comfort level and typical pattern of usage. Apparatus and setting The apparatus used was the VA609 TAD, as this is the device most accessible to the South African population. There is also only one previous, preliminary study documenting its potential effectiveness with adults who stutter (Bray & James, 2009). The TAD is a unique device that connects directly to the telephone handset, where the handset microphone receives the user's voice signal. Once altered by the device, the signal is delivered monaurally through the earpiece of the handset to the user. The feedback is not heard by the telephone conversational partners. In addition, the voices of the telephone partners are not altered in any way. During the telephone tasks, the default settings of 56ms delay and 304Hz upward frequency shift characteristic of Programme 1 were used in order to assess the effectiveness of the device's standard settings. If participants expressed dissatisfaction with these settings, they were also exposed to Programme 2, characterised by a 90ms delay and 530Hz upward frequency shift. ii. Percentage syllables stuttered (%SS). Commensurate with previous studies of AAF devices, the primary dependent variable used to quantify stuttering during the telephone tasks and to calculate the benefit derived from the TAD was %SS. This measure has value as it provides a clear metric for comparison to previous studies. However, it must be noted that it is limited to quantifying the frequency of stutters and has no value in quantifying other aspects of severity, such as type of symptom and length of blocks, which are captured by measures such as the SSI. All speech samples from the telephone calls were transcribed for analysis. Stuttered syllables were defined as silent blocks, sound, syllable or word repetitions, prolongations or interjections (Armson et al., 2006). If more than one type of dysfluency occurred on a syllable (e.g., interjection + block + sound repetition at the beginning of a word), this was counted as one stuttered moment. The number of syllables in the intended message was calculated and %SS was calculated individually for made and received calls. A total %SS was calculated to reflect a combination of the made and received calls, which was calculated by adding the total number of stuttered syllables in each of the made and received calls and dividing by the sum of the syllables in each call, in order to control for differences in the length of samples.
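To make the %SS and benefit computations described above concrete, here is a minimal Python sketch. The per-call syllable counts, function names and data layout are hypothetical and illustrative; they are not taken from the study.

```python
def percent_ss(stuttered_syllables, total_syllables):
    """Percentage of syllables stuttered (%SS) for one set of calls."""
    return 100.0 * stuttered_syllables / total_syllables

def total_percent_ss(made, received):
    """Total %SS: pooled stuttered syllables divided by pooled syllables,
    which controls for differences in sample length between call types.
    `made` and `received` are (stuttered, total) tuples."""
    stuttered = made[0] + received[0]
    syllables = made[1] + received[1]
    return 100.0 * stuttered / syllables

def benefit(ss_without_tad, ss_with_tad):
    """Benefit as the percentage change in %SS relative to the No TAD value."""
    return 100.0 * (ss_without_tad - ss_with_tad) / ss_without_tad

# Hypothetical example for one participant: (stuttered, total syllables)
made_no_tad, made_tad = (24, 310), (14, 295)
recv_no_tad, recv_tad = (18, 260), (12, 270)
print(percent_ss(*made_no_tad), total_percent_ss(made_no_tad, recv_no_tad))
print(benefit(percent_ss(*made_no_tad), percent_ss(*made_tad)))
```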
iii. Participant log and questionnaire. A participant log sheet and questionnaire was developed specifically for this study. During the week of repeated use of the TAD, participants were asked to use the log to document the number of calls made and received using the TAD, the approximate number of minutes of each telephone call, and to rate their speech for each call on a 5-point scale where 3 = typical for that situation, 4 = a little better and 5 = much better for that situation, and 2 = a little worse and 1 = much worse for that situation. At the end of the week, participants were also asked to give an evaluation of the device in terms of the following parameters: ease of use, comfort of use, whether they would recommend the device to others, and whether they would support further development of the device. They were asked to give suggestions for further development and any other general comments in an open-ended written questionnaire. ii. Baseline telephone tasks without the TAD. For a baseline measure, participants were asked to make and receive a telephone call without the TAD. Because each participant varied so much in terms of whom they would normally speak to on the telephone, they were at liberty to choose who they wanted to speak to within the limitations of the venue of data collection and time of day. Table 3 lists to whom and from whom the calls were made and received during the telephone tasks both before and after the week of repeated use. Where data collection took place at work, some participants were limited to calling within their organisation to limit telephone costs to the company. When it came to receiving calls, the researcher typically called the participant from another room or extension, or made use of serendipitous phone calls that took place during the data collection period. The lack of uniformity in terms of listener familiarity in this procedure may have affected the internal validity of the study. However, many of the participants with severely restricted telephone use would not have had the willingness or capacity to call a stranger or even a non-family member, particularly at the start of the study. iii. Orientation to the TAD. Following the baseline telephone calls, participants were oriented to the TAD with a brief description of the principle behind the technology. The choral effect was explained and then demonstrated by reading or counting along with the participant and comparing this to their solo performance. Following this, each participant was given the opportunity to listen to the feedback provided by the TAD in a series of graded tasks including counting, automatic speech, reading, giving their name and address, and answering some general questions regarding their family while holding the handset of the phone to their ear. iv. Telephone tasks with the TAD. Following the orientation, participants were again required to make and receive at least one call using the TAD. Similar instructions to the baseline calls were given. v. Repeated use in naturalistic environment. The TAD device was left in the participants' home or office environments for a week of repeated use. Participants were instructed to make and receive calls using the TAD in the normal course of their days and to complete the log sheet. As with the other self-report measures, the accuracy of this measure was not verified independently.
vi. Telephone tasks following the week of repeated use. At the end of the week, all telephone tasks were repeated (i.e., participants were required to make and receive a telephone call without the device and again with the device). Inter-rater reliability All telephone samples were videotaped and transcribed for analysis. Twenty-five percent of the telephone samples were randomly selected and analysed by a second coder in order to determine inter-rater reliability. The reliability coder was a qualified speech therapist who was blind both to the condition of each telephone call coded (i.e., with or without the TAD) and to whether it took place before or after the week's repeated use. Cohen's kappa (Cohen, 1960) was used to quantify agreement. Cohen's kappa assesses the reliability of a categorical scale while correcting for chance agreement and has values ranging from 0 to 1. Values from .60 to .75 are regarded as good and values over .75 as excellent (Fleiss, 1981). A mean kappa of .80 (range .59 to .93) was obtained, indicating excellent agreement overall, despite a range of kappas between samples. Data Analysis Data were analysed according to the aims of the study. Due to the small sample size, non-parametric statistical procedures were used for all analyses (Siegel & Castellan, 1988). Aims 1 to 3 required tests for related pairs, and hence Wilcoxon signed rank tests for related pairs were used to determine whether changes or differences in %SS were significant for immediate benefit, delayed benefit, differences in made versus received calls, and carryover of delayed benefit with the TAD to calls without the TAD. In order to determine whether the total number of minutes of TAD use during the week of repeated use had any effect on the delayed benefit derived, Spearman's correlation coefficient between the total number of minutes and overall change in %SS was calculated. For aim 4, the sample was divided into the appropriate independent groups (mild vs severe stuttering, presence or absence of silent blocks, low vs high restriction in telephone use, and first vs second language used) and %SS was then compared across groups using a series of Mann-Whitney U tests for both immediate and delayed benefit. Participants' perceptions of the TAD (aim 5), collected via the questionnaire, were analysed through content analysis. In addition to the group statistics, individual trends were also examined in order to understand individual performances and differences more fully. Group Results Immediate benefit from the TAD. The individual and mean %SS and standard deviations for each condition (made or received, with or without the TAD), both before and after the week of repeated use, are presented in Table 4. Delayed benefit from the TAD. To determine the delayed benefit of the TAD following the week's repeated use, changes in %SS for each condition were examined from the initial assessment without the TAD to the final assessment with the TAD. Results are presented in Table 5. Wilcoxon signed ranks tests indicated that the delayed benefit was statistically significant for the received calls (Z=2.52; p<.012; n=8) and total call conditions (Z=2.55; p<.011; n=9), but not for the made calls (Z=1.75; p<.080; n=8).
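As a rough illustration of the reliability and non-parametric analyses described above, the sketch below computes Cohen's kappa for two coders and runs Wilcoxon, Mann-Whitney and Spearman tests with scipy. The arrays are hypothetical placeholders, not the study's data.

```python
import numpy as np
from scipy.stats import wilcoxon, mannwhitneyu, spearmanr

def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement between two coders on categorical labels."""
    a, b = np.asarray(coder_a), np.asarray(coder_b)
    labels = np.unique(np.concatenate([a, b]))
    p_observed = np.mean(a == b)
    # Chance agreement from the marginal label frequencies of each coder.
    p_chance = sum(np.mean(a == l) * np.mean(b == l) for l in labels)
    return (p_observed - p_chance) / (1.0 - p_chance)

# Hypothetical syllable-level codes (1 = stuttered, 0 = fluent) from two coders.
coder_a = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]
coder_b = [1, 0, 0, 1, 0, 1, 0, 1, 0, 0]
print("kappa:", cohens_kappa(coder_a, coder_b))

# Hypothetical %SS per participant without and with the TAD (related pairs).
ss_no_tad = np.array([12.1, 3.4, 25.0, 6.2, 18.9, 2.8, 30.5, 9.7, 15.3])
ss_tad    = np.array([ 7.9, 3.1, 14.2, 5.0, 12.4, 2.9, 19.8, 6.1, 10.0])
print(wilcoxon(ss_no_tad, ss_tad))                  # immediate benefit, related pairs
print(mannwhitneyu(ss_tad[:5], ss_tad[5:]))         # e.g. mild vs severe subgroups
print(spearmanr([30, 5, 12, 8, 20, 4, 60, 10, 15],  # minutes of TAD use
                ss_no_tad - ss_tad))                # vs change in %SS
```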
On average, there was a 39% decrease in %SS in delayed benefit for all calls. Significant individual variation is also evident in Table 5, and decreases in %SS ranged from -75% (stuttering frequency increased) to +88% for the made calls condition, 13% to 77% for the received condition, and -8% to 69% for the total condition. The mean number of minutes spent using the TAD during the week of repeated use was 31.61 (SD=63.31) for made calls, 4.67 (SD=2.60) for received calls, and 36.33 (SD=64.48) for total calls. The large standard deviations are a result of one participant, P4, who used the TAD for substantially longer than the other participants, 208 minutes in total (200 for made calls and 8 minutes for received calls during the week of repeated use). This participant was particularly interested in knowing whether the TAD would improve his fluency on the telephone and used his phone extensively for personal and vocational use prior to the study. Without his data, the mean time spent using the TAD for the group was fairly low, only 10.56 (SD=4.85) for made calls, 4.25 (SD=2.44) for received calls, and 14.88 (SD=3.91) for total calls across the whole week. Reasons for this were cited as illness, lack of consistent access to a landline in the evenings, general dissatisfaction toward the end of the week with the TAD, and the general restriction in telephone use evident in many participants' lives. No significant relationships were observed between the minutes spent making calls with the TAD and benefit for made calls (rho=.24; p<.57; n=8), minutes spent receiving calls and benefit for received calls (rho=.08; p<.85; n=8), or total minutes spent using the TAD and total benefit (rho=.36; p<.34; n=9). The corresponding comparison of calls made without the TAD before and after the week of use was not significant (Z=.63; p<.53; n=9). This suggests that there was no carryover of benefit from using the TAD during the week to calls made without the TAD at the end of that week. Impact of speech and demographic variables on benefit Participants' perceptions of the TAD. All the participants reported that the device was easy to use. On the 5-point Likert scale, participants tended to rate their speech with the TAD as the same or slightly better than usual, with a mean of 3.93 (SD=.38) and a range of 3.50 to 4.60. Despite these largely positive ratings of telephone calls made with the TAD, three participants also reported that they found the device uncomfortable to use, with the predominant complaint being that the feedback was very distracting to them. Three participants also reported that they would not recommend the device to another person who stutters. However, most reported that they would support further development of the device. The most common developments suggested were adaptations for the device to be compatible with cordless and cellular phones, and voice prompts to assist with silent blocks. Voice prompting is technologically possible with the TAD, but was not assessed in this study. This study found no carryover of the effects of the TAD to calls made without the TAD in the final evaluation. This finding contrasts with that of Van Borsel et al. (2003), who found significant decreases in stuttered words in non-feedback conditions following three months' exposure to AAF compared to initial measures. It is possible that the discrepancy in findings is due to the shorter period of exposure in this study. Nonetheless, the present findings lend some weight to the assertion by Antipova et al.
(2008) that AAF devices may be limited to use as prosthetics as opposed to a more permanent management strategy. A finding in this study that is common to previous studies of AAF was the presence of substantial individual variation in the participants' responses to AAF. This study found no explanation for this individual variation in initial stuttering severity, types of stuttering symptoms, initial level of restriction in telephone use, or use of first or second language on the telephone. It is likely that a combination of factors will ultimately predict who will benefit most from such a device. It is possible that only further neuroimaging studies will shed light on possible subtypes of stuttering that respond more or less optimally to AAF (Foundas et al., 2004) to explain this individual variation. Implications. These findings have important clinical implications. Due to the heterogeneity in responses to the TAD device reflected in this study, it seems important for clinicians not to exclude any potential client expressing an interest in such a device from a fitting and trial period based on any one factor measured in this study, such as stuttering severity or presence of silent blocks, alone. Similarly, however, it seems equally important to inform potential clients of the variability in responses to AAF and not to make undue promises. The benefits of the TAD may be enhanced if it is introduced along with other management strategies. These could include behavioural strategies for slowly increasing the length of time using the TAD or grading telephone tasks to increase the likelihood of success and systematically desensitising participants to using the telephone. In addition, speech motor strategies could be given to assist participants in situations where the TAD would not be useful, for example, with silent blocks at the beginning of utterances. It is certainly worthy of future research to investigate whether a more supported and systematic introduction of the TAD within a comprehensive therapy programme addressing telephone avoidances, anxiety, and speech motor techniques would yield more favourable results than those obtained in the present study. This call has been made by other researchers, who have found that AAF users tend to combine speech therapy techniques with their AAF devices (Lincoln & Walker, 2007). Future research is also warranted to investigate the effectiveness of the voice prompting feature of the TAD to assist clients presenting predominantly with silent blocks while talking on the telephone. is not clear what factors contribute to this variation. One important potential factor is that of severity of stuttering. It is not yet clear what factors underlie the ultimate severity of stuttering in individuals who stutter, but these factors are no doubt varied and consist of a complex interaction between biological, emotional, have employed various measures of severity, which may include more or fewer of these variables in quantifying severity. Percentage syllables stuttered (%SS) is frequently used as it simply quantifies the behavioural severity of stuttering. Behavioural studies investigating the relationship between AAF and stuttering severity have found conflicting findings. Sparks et al.
(2002) demonstrated greater improvements in fluency under DAF in people with ) and demographic variables (first versus second language use and self-reported restrictions in telephone use prior to the study) on the immediate and delayed benefit derived from the TAD and, (5) the participants' perceptions of the TAD following a week's repeated use of the device. METHOD Design A quasi-experimental design was used for this study. All participants were assessed both without and then with the TAD while making and receiving telephone calls, before and after one week's repeated use of the TAD in their natural environments. There was hence no control group and all participants took part in all parts of the study. Participants Nine participants were recruited from Pietermaritzburg (PMB) and surrounding areas of Kwa-Zulu Natal (KZN). Inclusion criteria included the presence of developmental stuttering as determined by clinical interview and speech evaluation, no hearing difficulties by self-report, and access to a land-line on a regular, if not daily, basis. There were no exclusionary criteria based on gender, age, severity of stuttering, number of languages spoken, history of speech therapy, or previous exposure to AAF devices. All participants chose to use Programme 1. Procedures took place in the participants' natural environments. Table 3 lists the venues for data collection for each participant. Four participants chose to collect data primarily at their places of work, five at home, and one at his university, where he was a residential student. Venues for data collection were chosen by the participants based on where their most frequent telephone use occurred, permission from employers, and where they had access to a landline, which was necessary for TAD connection. All calls were videoed for further analyses using the measures described below. Measures Three measures were used in this study: i. Stuttering Severity Instrument-3 (SSI-3; Riley, 1994). The SSI-3 is a clinical measure used to quantify and characterise the stuttering severity of each participant at the start of the study. The SSI-3 employs analysis of both a conversational speech sample of at least 300 syllables and a reading passage. Stuttering severity was calculated using the composite measures of percentage syllables stuttered (%SS), average duration of the three longest blocks, and physical concomitants, according to the standard scoring procedures of the SSI-3.
Procedures Ethical approval for this study was granted prior to the start of the study from the University of the Witwatersrand, Human Research Ethics Committee (HREC) Non-Medical (protocol number H080103). Devices were loaned to the researcher at no cost for use in this study by the manufacturing company VoiceAmp. The company did not dictate any conditions or expectations regarding the study in exchange for this loan and provided no financial support or compensation related to the study. All devices were returned to the company at the end of the study. The following procedures were implemented for each participant: i. Stuttering severity evaluation. Each participant received an initial stuttering severity evaluation. Included in this evaluation was an interview that tapped information regarding stuttering development and therapy history, and questions related to the frequency of typical telephone use. This information was therefore gained only by self-report. Table 3. Venue and telephone partners for each participant Benefit derived from the TAD was quantified as a mean percentage change value and was calculated as the difference in %SS without and with the TAD within each condition, as a percentage of the value in the No TAD condition. Aim 4 was concerned with the impact of speech variables (stuttering severity and presence of silent blocks at the initial evaluation) and demographic variables (use of first or second language on the telephone, and level of telephone use restriction reported by participants at the start of the study) on the immediate and delayed benefit derived from the TAD. A series of Mann-Whitney U tests were calculated to investigate this aim, and the results are summarised in Table 6 above. None of the speech or demographic variables was significantly related to either immediate or delayed impact derived from the TAD. However, similar to the present study, participants in the O'Donnell study received monaural feedback through the in-the-ear portable SpeechEasy device. Participants in the O'Donnell study were, however, also using their devices in more speaking situations than the telephone and hence were being exposed to AAF through this device for much longer periods than the participants in this study. With regard to participant characteristics, those in the Zimmerman et al.
(1997) study had apparently all attended or been associated with the Total Immersion Fluency Training Program, suggesting a more extensive history of therapy than the participants in the present study. Similarly, the majority of participants in the O'Donnell study had previous exposure to AAF devices and the remaining two were referred from speech therapists, again suggesting a greater involvement in intervention than the current study's participants. As a result, it is possible that the participants in both these previous studies had less active avoidance of the telephone and less overall restriction in daily use of the telephone. The one week's use of the TAD in this study possibly did little to address the accumulated years of avoidance and anxiety associated with the telephone in a manner that would allow for the full fluency-enhancing benefits of the device to be seen. Table 1. Demographic characteristics of participants local to PMB (n=6). One other participant was recruited by word of mouth. Ten participants volunteered to take part in the study; however, one was ultimately excluded on the basis of having additional communication problems associated with cerebral palsy. Table 2. Stuttering characteristics of participants Table 4. On average, %SS for the made calls during the initial evaluation decreased by 35%, received calls by 36%, and total calls by 32%. According to the Wilcoxon signed ranks test for related samples, these changes were all statistically significant (made calls: Z=2.21; p<.027; n=8; received calls: Z=2.02; p<.043; n=6; total calls: Z=2.31; p<.021; n=9). Following the week's use of the TAD, only the made calls condition showed a significant immediate decrease in %SS when the TAD was used compared to the no TAD condition (Z=2.21; p<.027; n=8), but not the received (Z=1.79; p<.074; n=7) or total calls (Z=1.90; p<.058; n=9). Table 4. Mean (SD) percentage syllables stuttered in each condition before and after week's use. * p < .05. a % Change is calculated as the difference in conditions as a percentage of the initial value without the TAD. b Dashes indicate missing data. Table 5. Delayed benefit from TAD: from initial assessment (no TAD) to final (TAD) assessment Table 6.
Impact of speech and demographic variables on immediate and delayed benefit One other individual participant (P8) is worth mentioning as he appeared to gain no benefit from the TAD in either the immediate or delayed conditions. P8 presented with mild stuttering on the SSI-3 at his initial evaluation. During this evaluation, he reported syllable and word repetitions (no silent blocks) and a low level of restriction in phone use. It is possible that an ideal response to the AAF provided by the TAD is to be found in a combination of variables rather than in any one variable in isolation. At the very least, the large disparity in the two participants who appeared to derive the greatest gains from the TAD would preclude clinicians from excluding any potential candidates from a trial period with the device, particularly based on any one of the demographic or speech variables considered in this study.
9,135
2009-12-31T00:00:00.000
[ "Linguistics" ]
LES Analysis of CO Emissions from a High Pressure Siemens Gas Turbine Prototype Combustor at Part Load : This work contributes to the understanding of mechanisms that lead to increased carbon monoxide (CO) concentrations in gas turbine combustion systems. Large-eddy simulations (LES) of a full scale high pressure prototype Siemens gas turbine combustor at three staged part load operating conditions are presented, demonstrating the ability to predict carbon monoxide pollutants from a complex technical system by investigating sources of incomplete CO oxidation. Analytically reduced chemistry is applied for the accurate pollutant prediction together with the dynamic thickened flame model. LES results show that carbon monoxide emissions at the probe location are predicted in good agreement with the available test data, indicating two operating points with moderate pollutant levels and one operating point with CO concentrations below 10 ppm. Large mixture inhomogeneities are identified in the combustion chamber for all operating points. The investigation of mixture formation indicates that fuel-rich mixtures mainly emerge from the pilot stage, resulting in high equivalence ratio streaks that lead to large CO levels at the combustor outlet. Flame quenching due to flame-wall interaction is found to be of no relevance for CO in the investigated combustion chamber. Post-processing with Lagrangian tracer particles shows that cold air, from effusion cooling or from stages that are not being supplied with fuel, leads to significant flame quenching, as mixtures are shifted to leaner equivalence ratios and the oxidation of CO is inhibited. Introduction The operating hours of gas turbines at part load are becoming increasingly important in the energy transition. Part load operation of gas turbines is limited by incomplete carbon monoxide (CO) burnout at the lowest power settings [1]. To overcome this problem, fuel staging is applied, distributing the fuel to different stages to flexibly control combustion, achieve low emissions, and avoid thermoacoustic oscillations [2]. Especially at part load, this can lead to the situation where not all the stages are supplied with fuel. The interaction of flames with cold air from an adjacent deactivated burner has a significant impact on the formation of CO. Additional effects such as short system residence times, unfavorable interaction with cooling air, or flame quenching due to wall interaction promote high CO levels at the system outlet [3]. The formation of carbon monoxide has been experimentally investigated in various works [4][5][6]. Howard et al. [7] found that the consumption of CO is kinetically limited by the availability of hydroxyl (OH), which provides the main oxidation path. The quenching of the flame inhibits the CO oxidation and consequently leads to an increase of CO emissions. Numerical Modeling Favre-filtered governing equations for mass, momentum, species mass fractions and absolute enthalpy are solved, using the commonly made unity Lewis number assumption. Viscosity is determined from Sutherland's law and the laminar and turbulent Prandtl and Schmidt numbers are set to 0.7. In order to accurately resolve the thin flame front on the numerical grid, the dynamic thickened flame (DTF) model [21][22][23] is used, introducing a thickening factor F into the species and enthalpy transport equations. The unresolved flame wrinkling is modeled by an efficiency function E using the modified [24] Charlette model [25], with β = 0.5.
In the DTF model, a flame sensor Ω is introduced to avoid unphysical diffusion away from the flame and to accurately predict pure mixing. The DTF model applied to the Favre-filtered transport equations for each species mass fraction Y_α and absolute enthalpy h_a results in the following Equations (1) and (2), presented in Einstein summation convention:

$$\frac{\partial \bar{\rho} \tilde{Y}_\alpha}{\partial t} + \frac{\partial \bar{\rho} \tilde{u}_j \tilde{Y}_\alpha}{\partial x_j} = \frac{\partial}{\partial x_j}\left[\left(E F \frac{\mu}{Sc} + (1-\Omega)\frac{\mu_{sgs}}{Sc_{sgs}}\right)\frac{\partial \tilde{Y}_\alpha}{\partial x_j}\right] + \frac{E}{F}\,\dot{\bar{\omega}}_\alpha \qquad (1)$$

$$\frac{\partial \bar{\rho} \tilde{h}_a}{\partial t} + \frac{\partial \bar{\rho} \tilde{u}_j \tilde{h}_a}{\partial x_j} + \frac{\partial \bar{\rho} \tilde{K}}{\partial t} + \frac{\partial \bar{\rho} \tilde{u}_j \tilde{K}}{\partial x_j} = \frac{\partial \bar{p}}{\partial t} + \frac{\partial}{\partial x_j}\left[\left(E F \frac{\mu}{Pr} + (1-\Omega)\frac{\mu_{sgs}}{Pr_{sgs}}\right)\frac{\partial \tilde{h}_a}{\partial x_j}\right] \qquad (2)$$

Equations (1) and (2) depend on the density ρ, velocity u_j, viscosity µ, turbulent viscosity µ_sgs, Prandtl number Pr, turbulent Prandtl number Pr_sgs, Schmidt number Sc, turbulent Schmidt number Sc_sgs, pressure p, the net formation rate ω̇_α of species α, the thickening factor F, the efficiency function E and the flame sensor function Ω; the absolute enthalpy h_a is the sum of the chemical and sensible enthalpy, and the kinetic energy is K = ½ u_i u_i. For the flame sensor function, a modified version of the formulation by Legier et al. [23] is used to spatially limit thickening to a smaller region of the flame, as shown in Equation (3). In this equation, Q̇ and Q̇_max,1D are the heat release rate and the maximum heat release rate from a 1D flame. The flame thickening factor F is dynamically calculated as F = 1 + Ω (F_max − 1). The maximum thickening factor in each cell is determined as F_max = max(n ∆_mesh / δ_l^0, 1.0) and depends on the number of cells n on which the flame thickness is to be resolved (here, n = 5), the mesh cell size ∆_mesh and the laminar flame thickness δ_l^0. Reaction kinetics are described by analytically reduced chemistry (ARC) by Lu and Law [26] (denoted as Lu19 in the following). This mechanism was derived from GRI-3.0 [27] and contains 19 transported species and 11 species treated with quasi steady state approximations (QSSA). Figure 1 shows the laminar flame speed s_L at different equivalence ratios Φ and the dry and corrected CO mole fraction X_CO after a reference system residence time as a function of flame temperature, comparing the Lu19, GRI-3.0, DRM19 [28] and state-of-the-art AramcoMech2.0 [29] mechanisms. For both laminar flame speed and CO mole fractions, very good agreement between Lu19 and GRI-3.0 is visible, while minor deviations are identified for s_L for Φ > 0.68. DRM19 predicts the highest flame speeds for Φ > 0.5 and AramcoMech2.0 agrees with Lu19 and GRI-3.0 for Φ > 0.6, but shows lower flame speeds approaching the lean blowout limit. A parallel shift of the CO turndown curve towards higher concentrations at lowered temperatures is indicated by AramcoMech2.0, while all other reaction mechanisms predict similar CO levels. The good agreement with the other mechanisms and the low computational cost justify the choice of the Lu19 reaction mechanism for this work. Test Configuration In the test rig of the technology prototype Siemens gas turbine combustor, pressurized air from the compressor enters the inlet passage of the plenum through a diffusor to reproduce the inlet flow conditions in the engine [2]. A small amount of the compressed air is used for chamber cooling, while the majority mixes with fuel before entering the combustor. This combustor aims to achieve low NO_x emissions by ensuring very short residence times and consists of 28 jets that are separated into four different fuel stages, namely main stages A, B and C and a pilot stage P. The jets of the different stages are arranged in a circumferential pattern. Stage B provides the largest number of jets, followed by stages A and P, and the lowest number of jets is featured in stage C.
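To illustrate the per-cell evaluation of the dynamic thickening described above, here is a small, self-contained Python sketch. It only reproduces the algebraic relations quoted in the text (F_max = max(n Δ_mesh/δ_l⁰, 1) and F = 1 + Ω(F_max − 1)); the sensor values and the simple power-law efficiency function are illustrative placeholders, not the paper's exact implementation of the modified Charlette model.

```python
import numpy as np

def thickening_factor(omega, delta_mesh, delta_l0, n_cells=5):
    """Dynamic flame thickening F = 1 + Omega*(F_max - 1),
    with F_max = max(n*delta_mesh/delta_l0, 1)."""
    f_max = np.maximum(n_cells * delta_mesh / delta_l0, 1.0)
    return 1.0 + omega * (f_max - 1.0)

def efficiency_placeholder(f, beta=0.5):
    """Illustrative wrinkling efficiency of power-law type, E = F**beta.
    The model used in the paper is the modified Charlette formulation,
    which additionally depends on sub-grid velocity fluctuations."""
    return f**beta

# Hypothetical cell data: sensor values, cell sizes [m], laminar flame thickness [m]
omega      = np.array([0.0, 0.2, 1.0, 0.7])   # flame sensor, 0 outside the flame
delta_mesh = np.array([2e-3, 2e-3, 1e-3, 1e-3])
delta_l0   = 0.4e-3

F = thickening_factor(omega, delta_mesh, delta_l0)
E = efficiency_placeholder(F)
print(F)        # thickening is only active where the sensor is non-zero
print(E / F)    # factor scaling the filtered reaction rate in Eq. (1)
```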
A sketch of the combustor side view is indicated in Figure 2. In each jet, fuel is injected into the cross flow of air. Additional air is added for effusion cooling of the combustion chamber. The burnt products leave the chamber through the transition piece into the exhaust gas passage, where the emission sampling probe is located downstream of the transition exit. Experimental data for validation is limited to emission probe data, as laser measurement techniques cannot be applied to optically closed gas turbine combustion chambers. Three different staging concepts under similar load levels were compared, which mainly differ in terms of fuel staging and small pressure variations. In this work, the level of stratification of a specific staged operating point is characterized by the ratio of the nominal minimum and maximum equivalence ratio Φ_min/Φ_max. Operating point 1 (7.42 bar) is characterized by a weak stratification (Φ_1,min/Φ_1,max = 0.65), operating point 2 (6.86 bar) by stages A and C not being supplied with fuel, resulting in strong stratification (Φ_2,min/Φ_2,max = 0), and moderate stratification (Φ_3,min/Φ_3,max = 0.39) for operating point 3 (7.23 bar). In this work, pure methane (CH_4) is used as fuel and mass flows have been corrected by the lower heating value with respect to the natural gas utilized in the tests. The operating conditions from the test for the different cases are summarized in Table 1. Simulation Setup A compressible version of OpenFOAM-6 is applied for the large-eddy simulations with unstructured meshes, which require robust schemes [30]. The temporal discretization blends explicit and implicit contributions, weighted at 0.41 and 0.59. Convection is discretized by a total variation diminishing (TVD) scheme to preserve stability. Eddy-diffusivity and eddy-viscosity approaches are applied; the sub-grid viscosity is computed with the transported sub-grid kinetic energy model by Yoshizawa et al. [31]. For the pressure-velocity coupling, a pressure-implicit with splitting of operators (PISO) algorithm is used. Dirichlet boundary conditions are applied at the inlet for the fuel and air mass flows and temperatures, while the pressure is specified at the system outlet. Temperatures are applied to the chamber walls to account for non-adiabatic effects on the flame due to heated wall material and have been approximated from one-dimensional heat transfer analyses. For a reasonable thermal boundary layer treatment at the walls, Spalding's [32] wall functions are applied. A coarser (C) and a finer mesh with 35 and 61 million hexahedral cells resolve 2.0 mm and 1.0 mm in the flame region, respectively. Simulations are parallelized using 3600 and 6096 CPUs, respectively. The sampling of the flow statistics is started after quasi-steady state is reached (∼100 ms) and continued for another 100 ms. High velocities in small cells located in the fuel injector nozzles determine the simulation time-step width, resulting in a corresponding convective CFL number of 0.03 in the flame region. The considerable computational cost for each simulated case is presented in Table 2. Flow Field and Flame Properties Contours of instantaneous normalized temperature T/T_ad, carbon monoxide mass fraction Y_CO and axial velocity U_x fields for the moderately stratified case 3 are presented in Figure 3, including iso-lines for different equivalence ratios.
Note that for reasons of confidentiality, contours are only shown for a small but representative section of the combustion chamber that is relevant for interpretation. The highest temperatures and Φ values are indicated for the pilot stage, whereas leaner mixtures at lower temperatures can be observed for the other stages. For richer mixtures, as shown in the region of the pilot stage, maximum CO mass fractions can be detected. Interestingly, large carbon monoxide concentrations are also found close to the chamber walls, which may be attributed to quenching effects at the cold wall. Leaner mixtures at the chamber center show large CO levels that decay towards the chamber outlet. The velocity field presents a highly turbulent flow with strong swirling motion at the pilot stage. Jet bulk velocities of 140 m/s result in a jet Reynolds number of 6.4 × 10⁶. Emission Probe Time-averaged carbon monoxide mole fractions from the simulations are compared against emission data from the probe location in Table 3. Operating points 1 (weak stratification) and 2 (strong stratification) result in measured carbon monoxide concentrations of around 30 ppm, while a good CO burnout is achieved for case 3 (moderate stratification), showing very low CO emissions of 2.51 ppm. Carbon monoxide emissions are predicted in very good agreement with the test data for case 3, while CO is underestimated for operating points 1 and 2. Similar predictions are achieved for cases 1C and 1, indicating negligible grid dependence for the prediction of CO. As carbon monoxide levels are in overall agreement with the test data, it can be assumed that the most relevant phenomena that have an impact on the formation and oxidation of CO are captured in the simulations. For this reason, further analyses focus on the effects that lead to increased CO levels for the weakly and strongly stratified cases 1 and 2 in comparison to the good burnout achieved in case 3 (moderate stratification). Impact of Mixture Formation The different staging strategies applied in the investigated cases lead to different levels of stratification and mixture inhomogeneities. To identify whether increased CO concentrations at the combustor outlet result from lean mixtures (large CO levels due to incomplete oxidation) or rich mixtures (large CO levels from mixtures at equilibrium), with respect to the nominal global equivalence ratio Φ_glob, the impact of the fuel-air mixture formation on the CO formation is investigated in the following. Figure 4 presents carbon monoxide mass fractions as a function of the equivalence ratio for different system residence times τ from 1D flames. Low CO levels can be achieved for lean mixtures at very large residence times. For shorter residence times, the lean CO rise shifts to richer mixtures. As richer mixtures (Φ > 0.5) are unaffected by the residence time (equilibrium concentrations are reached in a very short time at sufficiently high temperatures), mixture inhomogeneities may significantly increase carbon monoxide concentrations. Note that in 3D turbulent flows, however, the residence time of the mixtures is determined by complex turbulent flow structures and local recirculation zones. The presence of high CO concentrations at the combustor outlet at richer mixtures for the weakly stratified case 1 and at lean mixtures for the strongly stratified case 2 stresses the relevance of mixture inhomogeneities for the formation of carbon monoxide.
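A curve of the kind shown in Figure 4 can be approximated with Cantera by burning a 1D premixed flame and then letting the hot products evolve in a constant-pressure reactor for a given residence time. The sketch below uses the GRI-3.0 mechanism shipped with Cantera and assumed inlet conditions (7 bar, 650 K preheat); it is a simplified stand-in for the paper's procedure, which relies on the Lu19 mechanism and reference residence times that are not given here.

```python
import cantera as ct

def co_after_residence_time(phi, tau, p=7e5, t_in=650.0, mech="gri30.yaml"):
    """Burn a freely propagating CH4/air flame at (phi, p, t_in), then hold the
    burnt gas in a constant-pressure reactor for tau seconds and return the
    dry CO mole fraction."""
    gas = ct.Solution(mech)
    gas.set_equivalence_ratio(phi, "CH4", "O2:1, N2:3.76")
    gas.TP = t_in, p
    flame = ct.FreeFlame(gas, width=0.02)
    flame.solve(loglevel=0, auto=True)

    # Continue oxidation of the burnt gas for the residence time tau.
    gas.TPX = flame.T[-1], p, flame.X[:, -1]
    reactor = ct.IdealGasConstPressureReactor(gas)
    ct.ReactorNet([reactor]).advance(tau)

    x = reactor.thermo.mole_fraction_dict()
    return x.get("CO", 0.0) / (1.0 - x.get("H2O", 0.0))   # dry mole fraction

for phi in (0.4, 0.5, 0.6):
    for tau in (5e-3, 20e-3):
        print(phi, tau, f"{1e6 * co_after_residence_time(phi, tau):.1f} ppm (dry)")
```

Sweeping the equivalence ratio at fixed residence time reproduces the qualitative behaviour described above: the lean-side CO rise moves towards richer mixtures as the residence time is shortened, while rich mixtures stay close to their equilibrium CO level.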
The mixture distribution in the combustion chamber is presented in Figure 6, showing normalized temperature over equivalence ratio. A wide scatter of equivalence ratios is identified for all cases, ranging from pure air (Φ = 0) to rich mixtures (Φ = 1.5). Mixtures with temperatures below the adiabatic flame temperature are detected as a result of heat losses at the chamber walls and supplied cooling air. Furthermore, these mixtures can also be associated with slow chemistry resulting from the finite rate kinetics accounted for in this work. For case 1 (weak stratification), the scatter is most dense for mixtures leaner than the global equivalence ratio; significant numbers can also be seen for richer mixtures. The strongly stratified case 2 is characterized by a high number of mixtures on the very lean side, which correspond to stages A and C being operated with pure air that mixes with the hot jets. This is in agreement with the observation of large CO levels on the lean side (see Figure 5). For case 3 (moderate stratification), a sparse distribution is shown on the very lean side and the scatter is most dense at Φ values corresponding to the global equivalence ratio. To further analyze the origin of mixture inhomogeneities and to potentially assign the problem to the mixing behavior of a single stage, Favre-filtered transport equations for passive scalars have been introduced, representing the mixture fractions of the individual stages A, B, C and P (i.e., Z = Z_A + Z_B + Z_C + Z_P). The contribution of the single stages to the global mixture in the combustion chamber is analyzed by their joint probability density functions (JPDF) in terms of the equivalence ratio Φ, as shown in Figure 7a-c. The nominal fuel stage equivalence ratio (horizontal line) indicates the ideal case, where air is supplied in equal shares to the stages and mixes homogeneously with fuel. Deviations from the nominal global equivalence ratio (vertical line) illustrate the unmixedness caused by the single stages. For all cases, mixture inhomogeneities can be observed in all of the stages, and a high impact of the pilot stage on fuel-rich mixtures is evident, as a linear dependency between the equivalence ratio and Φ_P is shown for Φ > 0.8. For the weakly stratified case 1 (Figure 7a), stages A and C present a high probability on the lean side and a linear dependency for mixtures Φ > 0.2, also indicating a significant contribution to richer mixtures. Note that results for 1C are not shown here for brevity and can be found in Appendix A. Stage B predominantly contributes to the global equivalence ratio; large mixture inhomogeneities and a strong impact from cooling air can be seen, as a linear dependency for 0 < Φ < 0.7 is detected. In the strongly stratified case 2 (Figure 7b), stages A and C are not operated with fuel. The mixture depends on stages B and P, and a significant impact of air is identified for both stages, as shown by a high probability at lean mixtures. For case 3 (moderate stratification), stages A and C show the highest probability on the very lean side (Φ < 0.3) with a linear dependency up to Φ_glob,3, as shown in Figure 7c. Interestingly, stage B presents the highest density at the nominal operating point, marked by the intersection of the nominal global and stage equivalence ratios. In summary, the pilot stage contributes to the formation of fuel-rich mixtures in all cases.
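The stage-resolved mixing analysis above can be reproduced in post-processing from per-cell LES samples of the total and per-stage equivalence ratios. The following numpy/matplotlib sketch bins such samples into a joint probability density function (JPDF) of Φ versus one stage's contribution; the sampled arrays here are synthetic placeholders standing in for exported LES cell data, and the nominal global equivalence ratio marked in the plot is an assumed value.

```python
import numpy as np
import matplotlib.pyplot as plt

def jpdf(phi_total, phi_stage, bins=80, ranges=((0.0, 1.5), (0.0, 1.0))):
    """Joint PDF of the total equivalence ratio vs. one stage's contribution,
    normalized so that the histogram integrates to one."""
    hist, x_edges, y_edges = np.histogram2d(phi_total, phi_stage,
                                            bins=bins, range=ranges, density=True)
    return hist, x_edges, y_edges

# Synthetic placeholder samples (in practice: per-cell Phi and Phi_stage from LES).
rng = np.random.default_rng(0)
phi_total = np.clip(rng.normal(0.45, 0.2, 200_000), 0.0, 1.5)
phi_pilot = np.clip(phi_total * rng.beta(2, 5, phi_total.size), 0.0, 1.0)

hist, xe, ye = jpdf(phi_total, phi_pilot)
plt.pcolormesh(xe, ye, hist.T, cmap="viridis")
plt.axvline(0.45, ls="--", label="assumed nominal global equivalence ratio")
plt.xlabel(r"equivalence ratio $\Phi$")
plt.ylabel(r"pilot stage contribution $\Phi_P$")
plt.legend()
plt.savefig("jpdf_pilot.png", dpi=150)
```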
For case 1 with weak stratification, significant contributions to richer mixtures are also due to stages A and C, and lean mixtures mostly depend on stage B. For the strongly stratified case 2, the operating stages B and P contribute to the entire range of mixtures, with a larger impact of the pilot on fuel-rich mixtures. For the moderately stratified case 3, stages A and C are only relevant for the formation of lean mixtures and stage B for all mixtures. The contribution of each stage to CO at the combustor outlet is shown in Figure 8a-c, presenting carbon monoxide mass fractions over Φ colored by the contribution of the stage equivalence ratio Φ_stage to the total mixture. For the weakly stratified case 1 (Figure 8a), low carbon monoxide levels at equivalence ratios Φ < 0.6 can predominantly be attributed to stage B, with increasing contribution towards leaner mixtures. Larger CO concentrations at richer mixtures are controlled by stages A and P, with minor impact from stage C. High carbon monoxide concentrations on the lean side in the strongly stratified case 2 (Figure 8b) are due to both operating stages B and P, while CO at very lean mixtures (Φ < 0.2) depends on the pilot stage. Richer mixtures with lower CO values are due to stage B. Interestingly, a second, lower branch between 0.3 < Φ < 0.5 with very low carbon monoxide concentrations can be identified, indicating a good burnout. Stage B shows the highest relevance for CO over almost the complete range of mixtures in the moderately stratified case 3 (Figure 8c), followed by the pilot. At Φ = 0.4, stage C indicates the highest relevance for carbon monoxide levels; negligible contributions are observed for stage A. To summarize, large CO concentrations are located at richer mixtures for case 1 (weak stratification) and are governed by stages A and P. For the strongly stratified case 2, both operating stages, with greater weight from the pilot, contribute to high CO levels on the lean side. Richer mixtures at Φ > 0.6 or large carbon monoxide mass fractions at leaner conditions are not present for the moderately stratified case 3, which is mostly influenced by stage B. Large carbon monoxide levels at the combustor outlet that are related to richer mixtures, i.e., Φ > 0.5, can be attributed to poor mixing of fuel and air. However, high CO concentrations on the lean side result from incomplete oxidation, which may be attributed to local quenching effects at the cold wall or to mixing with air. Flame Quenching at Cold Wall Flame quenching at the cold wall leads to a significant increase of CO in this region. The oxidation of carbon monoxide is inhibited and CO species concentrations are convected with high axial velocities towards the system outlet. The significance of flame quenching at the cold wall for the incomplete carbon monoxide burnout in the investigated operating points is analyzed in Figure 9. The plot shows the radial distribution of the mean carbon monoxide concentration in terms of the cell distance towards the chamber wall d_wall at the combustor exit cross section. It can be seen that very low CO levels are present in the proximity of the cold wall (d_wall/d_wall,max = 0) in all of the cases and that concentrations increase towards the center of the cross section. Irrespective of whether flame quenching occurs at any upstream location in the combustion chamber, it has no impact on the outlet concentration.
Hence, it can be concluded that flame quenching at the cold wall is insignificant for the incomplete CO burnout of the investigated operating points in this configuration. Impact of Secondary Air To investigate the impact of air on the formation of CO, an additional Favre-filtered transport equation has been solved for the primary air ratio ξ. Air that has mixed with fuel before entering the combustion chamber is denoted as primary air (ξ = 1), whereas secondary air (ξ = 0) is defined as air supplied through a jet that is not being operated or by effusion cooling. Cooling air is supplied to the chamber in the wall region in proximity to the jets from stage B. To investigate flame quenching events caused by the interaction with secondary air, Lagrangian tracer particles are used (see the post-processing sketch below). As this analysis was not affordable during the simulation run-time with the Lu19 reaction mechanism and fine grids, the tracer particles are only added to the Reynolds-averaged LES results. This is arguably not ideal but still sheds light on the results. Potential effects of effusion cooling are investigated for all stages. One jet from each stage is selected as a representative and the massless tracer particles are injected at the outlet of the mixing passage. Particles evolve based on the local flow conditions and track the history of the mean carbon monoxide mass fraction Y_CO, the equivalence ratio Φ and the primary air ratio ξ. The CO particle history as a function of Φ and ξ for all investigated cases is presented in Figure 10a-c. The shown set of particle histories is representative of all injected particles. Carbon monoxide mass fraction histories may not start at τ/τ_exit = 0, as the Lagrangian particles have the same axial but not radial injection position and, hence, particles injected at a small radius, with respect to the jet diameter, cross the flame front at later times. Incomplete CO burnout is identified by a carbon monoxide trajectory that does not decrease after reaching its peak value within the flame front. For the weakly stratified case 1 (Figure 10a), a good burnout can be identified for stage C, with a rather uniform equivalence ratio of around Φ = 0.6 downstream of the flame front and negligible impact of secondary air. High carbon monoxide mass fractions at the combustor outlet at richer mixtures (i.e., Φ > 0.6) result from the pilot stage but also from stage A. Stage B and to some extent stage P show particles with high CO concentrations at very lean mixtures with increased contributions of secondary air. It can be assumed that these mixtures are affected by flame quenching, as their local equivalence ratio is shifted towards leaner values by the interaction with secondary air and, hence, the oxidation of CO is inhibited. Figure 10b shows CO particle histories for the strongly stratified case 2. Particles from stages A and C, operating with air only, indicate strong interaction with stages B and P, as the equivalence and primary air ratios increase towards τ_exit to Φ = 0.4 and ξ = 0.7, respectively, and CO levels of almost 1000 ppm can be seen. Consistent with Figure 8b, large CO concentrations on the fuel-lean side at the combustor outlet can be identified for particle trajectories from the pilot stage. It is likely that these increased carbon monoxide mass fractions result from quenching events, as the equivalence ratio is lowered from Φ = 1.0 to 0.3, while the primary air ratio at the same time indicates a reduction from ξ = 1.0 to 0.3.
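The tracer post-processing just described can be sketched in a few lines. The following Python fragment is only an illustration under stated assumptions: it advects massless particles through time-averaged fields on a structured sample grid with a simple explicit Euler step, and the grid, field names, time step and injection location are hypothetical rather than taken from the simulation code.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def make_interp(grid, field):
    """Interpolator for a scalar or vector field sampled on a structured grid."""
    return RegularGridInterpolator(grid, field, bounds_error=False, fill_value=None)

def track_particle(x0, u_itp, scalars, dt=1e-5, n_steps=20000):
    """Advect one massless tracer with the mean velocity field and record the
    history of the interpolated scalars (e.g. Y_CO, phi, xi) along its path."""
    x = np.array(x0, dtype=float)
    history = {name: [] for name in scalars}
    for _ in range(n_steps):
        u = u_itp(x)[0]                      # local mean velocity
        x = x + dt * u                       # explicit Euler step
        for name, itp in scalars.items():
            history[name].append(float(itp(x)[0]))
    return {name: np.array(vals) for name, vals in history.items()}

# Synthetic example standing in for the averaged combustor fields:
x = np.linspace(0.0, 1.0, 200); y = np.linspace(-0.1, 0.1, 50); z = np.linspace(-0.1, 0.1, 50)
grid = (x, y, z)
U = np.zeros((200, 50, 50, 3)); U[..., 0] = 20.0                 # uniform axial velocity
Y_CO = np.tile(np.exp(-5.0 * x)[:, None, None], (1, 50, 50))     # CO decaying along x
hist = track_particle([0.0, 0.0, 0.0], make_interp(grid, U),
                      {"Y_CO": make_interp(grid, Y_CO)}, dt=1e-4, n_steps=400)
```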
In comparison to the pilot stage, a lower impact of secondary air can be observed for stage B, which, however, shows increased CO levels for richer mixtures. The CO particle history as a function of Φ and ξ for case 3 (moderate stratification) is presented in Figure 10c. A negligible impact of secondary air and homogeneous mixtures are identified for stages A and C, indicating a good CO burnout with low concentrations at the combustor exit time. For stage B, particles are identified where secondary air leads to a shift of the equivalence ratio to leaner mixtures. Interestingly, these mixtures are not quenched but show a slower oxidation which appears to be complete at τ/τ_exit = 1. The cooling air shows the strongest impact on the pilot stage, where the burnout of CO for mixtures at around Φ = 0.5 is slowed down, leading to increased concentrations. The Lagrangian particle post-processing reveals that increased CO levels in case 1 (weak stratification) can be attributed to richer mixtures originating mostly from the pilot stage, and that flame quenching by the interaction with cold air affects stage B. For the strongly stratified case 2, the most significant impact of secondary air is observed, resulting in local flame quenching at lean mixtures in stages B and P. In the case of moderate stratification (case 3), a negligible influence of cooling air is found and increased CO concentrations can be attributed to richer mixtures in stage P. In the event of local flame quenching, the oxidation of CO is inhibited due to the absence of OH species [7]. Figure 11 shows the carbon monoxide mass fraction over equivalence ratio as a function of the primary air ratio for quenched mixtures, that is, mixtures with a vanishing CO source term, ω̇_CO = 0, and Y_OH = 0, in the combustion chamber. Quenched mixtures are identified for very lean mixtures (Φ < 0.2) with a high content of secondary air. High carbon monoxide mass fractions resulting from quenching are not of great importance for the cases with weak (1) and moderate (3) stratification, as only a sparse distribution is present. A high density of quenched pockets can be seen for the strongly stratified case 2, indicating a dense scatter with CO concentrations up to around 5000 ppm. This is in agreement with previous findings from the Lagrangian tracer particles (see Figure 10b) revealing a significant impact of secondary air, and it explains the origin of the high carbon monoxide levels on the lean side at the combustor outlet for case 2 (cf. Figure 5). Finally, the impact of secondary air on the mixture is assessed at the combustor outlet in Figure 12. A minor impact of ξ on mixtures in the weakly and moderately stratified cases 1 and 3 can be observed, as at least 80% primary air is present at the combustor outlet. The highest CO mass fractions can be found at ξ = 1 and lower concentrations are present at leaner mixtures with a higher contribution of secondary air. For the strongly stratified case 2, however, a wide range of ξ values can be identified, with maximum carbon monoxide levels of around 1000 ppm at very lean mixtures with at least 60% secondary air content. This is in agreement with the high CO concentrations at quenched mixtures from Figure 11 and illustrates the important role of secondary air on the mixture and the subsequent shift towards leaner equivalence ratios, where CO oxidation rates are significantly reduced or even inhibited.
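The quenched-mixture criterion used for Figure 11 can be expressed as a simple mask over the resolved fields. The snippet below is a hedged sketch: the array names and the numerical tolerances are assumptions, and only the stated condition (vanishing CO source term together with vanishing OH mass fraction) is taken from the text.

```python
import numpy as np

def quenched_mask(omega_co, y_oh, tol_rate=1e-8, tol_oh=1e-10):
    """Boolean mask of quenched cells: no CO conversion and no OH present.
    Tolerances are illustrative; in practice they depend on units and mechanism."""
    return (np.abs(omega_co) < tol_rate) & (y_oh < tol_oh)

def conditional_scatter(phi, y_co, xi, mask):
    """Return the (phi, Y_CO, xi) samples of quenched cells only, as plotted."""
    return phi[mask], y_co[mask], xi[mask]
```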
Conclusions Large-eddy simulations of a high-pressure, full-scale Siemens prototype combustor at three staged part-load conditions have been performed, demonstrating the ability to predict CO in a complex technical combustion system; the mechanisms leading to incomplete carbon monoxide burnout in each case were investigated. Combustion kinetics were modeled using reduced chemistry applied directly (the Lu19 mechanism with finite-rate kinetics). CO concentrations at the probe location were predicted in good agreement with the test data. It was found that large CO levels corresponding to operating point 1 (weak stratification) could be related to streaks of high equivalence ratio, whereas operating point 2 (strong stratification) revealed increased CO levels on the fuel-lean side. A good CO burnout was noted for operating point 3 (moderate stratification). Increased CO concentrations could not be related to flame quenching events at the cold wall. The contribution of each stage to the total mixture showed that increased CO concentrations in the weakly stratified case 1 could be attributed to mixture inhomogeneities resulting from poor mixing, especially of the pilot stage, while flame quenching events were identified to have minor influence on CO in this case. Lagrangian particles revealed that cold air from effusion cooling and from stages not operated with fuel had a significant impact on the oxidation of CO in the strongly stratified case 2, as a large number of mixtures could be identified in which the equivalence ratio was shifted to lean values so that quenching occurred and, hence, carbon monoxide burnout was inhibited. Low carbon monoxide emissions at operating point 3 (moderate stratification) were found to result from a uniform mixture distribution and the absence of significant flame quenching from the interaction with secondary air. Funding: The authors gratefully acknowledge the financial support through Siemens and BMWi through CEC3 (funding reference number 03ET7073D) and the Gauss Centre for Supercomputing e.V. (www.gauss-centre.eu) for funding this project by providing computing time on the GCS Supercomputer SUPERMUC-NG at Leibniz Supercomputing Centre (www.lrz.de). Acknowledgments: We would like to thank Isolde Siebelist, Maximilian Schäfer and Patrick Wollny for many helpful discussions and their technical support. Conflicts of Interest: The authors declare no conflict of interest.
6,881
2020-11-03T00:00:00.000
[ "Engineering", "Environmental Science" ]
Inverse Uniqueness in Interior Transmission Problem and Its Eigenvalue Tunneling in Simple Domain We study inverse uniqueness with a knowledge of spectral data of an interior transmission problem in a penetrable simple domain. We expand the solution in a series of one-dimensional problems in the far-fields. We define an ODE by restricting the PDE along a fixed scattered direction. Accordingly, we obtain a Sturm-Liouville problem for each scattered direction. There exists the correspondence between the ODE spectrum and the PDE spectrum. We deduce the inverse uniqueness on the index of refraction from the discussion on the uniqueness anglewise of the Strum-Liouville problem. Introduction In this paper, we study the inverse spectral problem in the following homogeneous interior transmission problem: where ] is the unit outer normal; is a simple domain in R 3 containing the origin with the Lipschitz boundary ; () ∈ C 2 (R 3 ); () > 0, for ∈ ; () = 1, for ∉ .Equation ( 1) is called the homogeneous interior transmission eigenvalue problem.We say ∈ C is an interior transmission eigenvalue of (1) if there is a nontrivial pair of solutions (, V) such that , V ∈ 2 (), − V ∈ 2 0 ().The last two conditions in (1) are the Sommerfeld radiations condition to ensure the uniqueness on the scattered waves.We assume that () ̸ = 1 near from its interior, which minimizes the support of . To ensure the uniqueness of the scattered solution, we impose the Sommerfeld radiation condition: Problem ( 1) occurs naturally when one considers the scattering of the plane waves by certain inhomogeneity defined by an index of refraction inside the domain .The inverse problem is to determine the index of refraction by the measurement of the scattered waves in the farfields.The inverse scattering problem plays a role in various disciplines of science and technology such as sonar and radar, geophysical sciences, medical imaging, remote sensing, and nondestructive testing in instrument manufacturing.For the origin of interior transmission eigenvalue problem, we refer to Kirsch [1] and Colton and Monk [2].For theoretical study and historic literature, we refer to [1,[3][4][5][6][7][8][9][10][11][12][13].To study the existence or location of the eigenvalues is a subject of high research interest [1,2,5,6,8,11,[14][15][16][17][18]].Weyl's type of asymptotics for the interior transmission eigenvalues is expected, even though problem (1) is defined in noncompact R 3 .In that case, the distribution of the eigenvalues is directly connected to certain invariant characteristics on the scatterer.In this regard, we apply the methods from entire function theory [19][20][21][22][23] to study the distributional laws of the eigenvalues.We also refer to [24] for the reconstruction of the interior transmission eigenvalues and [25] for a numerical description on the distribution of the eigenvalues.It is remarkable that an example on the nonuniqueness of the index of refraction is constructed in [4,Section 6] for the class of radially symmetric indices of refraction with a jump discontinuity of (||).Finding the optimal regularity assumption on the index of refraction to attain the uniqueness or the nonuniqueness remains an open problem.The breakthrough is made from the point of view of inverse Sturm-Liouville theory [18] that inverse L 2 -uniqueness on the radially symmetric index of refraction is obtained if certain extra local information [18, Theorem 1] is provided. 
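The displayed problem referred to as Eq. (1) has lost its symbols in extraction. For orientation, the following LaTeX block gives a hedged reconstruction of the standard homogeneous interior transmission eigenvalue problem that the surrounding text describes; the notation (u, v, n, D, k) is assumed, and this is the usual textbook formulation rather than a verbatim restoration of the paper's equation or numbering.

```latex
% Hedged reconstruction of the homogeneous interior transmission eigenvalue
% problem described around Eq. (1); notation is assumed.
\[
\begin{cases}
\Delta u + k^{2}\, n(x)\, u = 0 & \text{in } D,\\
\Delta v + k^{2}\, v = 0        & \text{in } D,\\
u = v,\quad \dfrac{\partial u}{\partial \nu} = \dfrac{\partial v}{\partial \nu} & \text{on } \partial D,
\end{cases}
\qquad
n \in C^{2}(\mathbb{R}^{3}),\; n(x) > 0 \text{ for } x \in D,\; n(x) \equiv 1 \text{ for } x \notin D.
\]
A value $k \in \mathbb{C}$ is an interior transmission eigenvalue if a nontrivial pair
$(u, v)$ exists with $u, v \in L^{2}(D)$ and $u - v \in H^{2}_{0}(D)$; the scattered field
outside $D$ is required to satisfy the Sommerfeld radiation condition
$\lim_{r \to \infty} r\left(\partial_{r} u^{s} - \mathrm{i} k\, u^{s}\right) = 0$.
```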
For the nonsymmetrically stratified medium, there are not too many known results [6,7,16,17].In this paper, we mainly follow the complex analysis methods [3,14,18,26,27] to study the nonsymmetrical scatterers as a series of onedimensional problems along the rays scattering from the origin.The analysis along each ray possibly has multiple intersection points with , so we expect certain tunneling effect in a penetrable domain.In this paper, the new perspective is the asymptotic analysis inside and outside the perturbation.We give a global uniqueness on the index of refraction in simple domain by stating the following result.Theorem 1.Let 1 , 2 be two unknown indices of refraction as assumed in (1).If they have the same set of eigenvalues, then 1 ≡ 2 . Preliminaries We apply Rellich's expansion in scattering theory.Firstly we expand the solution (V, ) of (1) in two series of spherical harmonics by Rellich's lemma [8, page 32] in the far-fields: where fl ||, 0 ≤ < ∞; x = (,) ∈ S in which the system is independent of and x.The forward problem describes the distribution of the zeros of (; 0 ), while the inverse problem specifies the index of refraction by the topology of the zero set.In [14,18,26,27], we have discussed the methods to find the zeros of (; 0 ). Let ∈ C be a possible eigenvalue of (7).Applying the analytic continuation of the Helmholtz equation and Rellich's lemma [8, page 32, 33, 222], the solutions parameterized by solve outside the simple domain . We note that representation (3) initially holds outside || ≥ 0 , and the core of many inverse problems is to extend the solution into the perturbation.For our case, we want to extend representation (3) into || ≤ 0 for some possible set of .Let x1 ∈ S 2 be a given scattered direction satisfying the following geometric condition: For x1 , we extend each Fourier coefficient (; ) with ∈ C determined by system (7) for all , toward the origin until it meets the boundary at ( 1 , x1 ).Along the given x1 , we apply the differential operator Δ + 2 with to { , ()}, which accordingly can solve problem (1) replaced with the manmade radially symmetric index of refraction () = (x) = (x 1 ) for all x ∈ S 2 .More importantly, the interior transmission condition implies the following ODE: If there is merely one intersection point for [0, 0 ]× x1 with , then we set the initial conditions of (; ) according to the following condition: The behavior of the Bessel function () near = 0 is found in [28, page 437].We refer initial condition (14) to [18].That is, We observe that the uniqueness of the ODE ( 7) is valid up to the boundary : In particular, (; 1 ) = (; 0 ) by the uniqueness of ODE (13) along the line segment ( 0 , x1 ) to ( 1 , x1 ). For ≥ 0, we can take , = , = 1 in (16).From the point of view of the Helmholtz equation, both satisfy the Sommerfeld radiation condition whenever (; ) solves ( 13) and (14).By the uniqueness implied by the Sommerfeld radiation condition, we can choose that Using a similar argument, we deduce that In general, the solution (; ) depends on the scattered direction x whenever entering the perturbation, so we denote the extended solution of (13) as ŷ (; ) and accordingly the functional determinant as D (; 1 ).Thus, ( 13) is relabeled as The eigenvalues of ( 20) are discussed in [14,26,27] by the singular Sturm-Liouville theory in [29][30][31]. 
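Along a fixed scattered direction the construction reduces to a one-dimensional problem of Sturm-Liouville type, and the interior transmission eigenvalues appear as zeros of a functional determinant built from the regular solution and the free Helmholtz solution. The Python sketch below illustrates this mechanism for the simplest radially symmetric, l = 0 situation; the profile n(r), the radius and the determinant form are standard textbook choices under stated assumptions and are not claimed to be the paper's exact construction.

```python
import numpy as np
from scipy.integrate import solve_ivp

B = 1.0                                   # assumed radius of the ball D

def n_of_r(r):
    """Illustrative index of refraction with n > 1 on [0, B]."""
    return 4.0 - 2.0 * r**2

def regular_solution(k, b=B):
    """Integrate y'' + k^2 n(r) y = 0 with y(0) = 0, y'(0) = 1 out to r = b."""
    def rhs(r, s):
        y, dy = s
        return [dy, -(k**2) * n_of_r(r) * y]
    sol = solve_ivp(rhs, (1e-8, b), [0.0, 1.0], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1], sol.y[1, -1]

def characteristic(k, b=B):
    """D(k) = y(b) v'(b) - y'(b) v(b), with v(r) = sin(kr)/k the free solution;
    zeros of D are (real) interior transmission eigenvalues of the l = 0 problem."""
    y, dy = regular_solution(k, b)
    v, dv = np.sin(k * b) / k, np.cos(k * b)
    return y * dv - dy * v

# Crude scan for sign changes of D(k); each bracketed zero is an eigenvalue.
ks = np.linspace(0.5, 12.0, 500)
vals = np.array([characteristic(k) for k in ks])
brackets = [(ks[i], ks[i + 1]) for i in range(len(ks) - 1) if vals[i] * vals[i + 1] < 0]
```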
However, we are working on a simple domain in this paper.Hence, we modify the solution extension into in previous discussion.Instead of ( 16), we now ask for any ∈ C that satisfies the following conditions: in which R is the intersection set along the scattered angle x defined by and we will discuss the well-posedness of ŷ (; ).For each fixed x ∈ S 2 , (21) provides an initial condition at r ∈ R. Hence, the solution ŷ () of is constructed piecewise from infinity to the origin, at which We put it as a lemma. By the assumption of (1), we deduce that R is a finite discrete set and r1 < r2 < ⋅ ⋅ ⋅ < rM for each fixed x ∈ S 2 , in the case that (, x), (, x), and (, x) are any three consecutive points along the incident direction x.Whenever (, x) is a tangent point at the boundary, we disregard it and consider the line segment from (, x) to (, x) as either completely inside or outside the perturbation.Without loss of generality, we assume that R contains no tangent point.See Figure 1. To sketch an idea of how we construct a discrete set of for Lemma 2, we start with the first segment into the perturbation and discuss the well-posedness of the initial value problem starting at rM : which has unique solution inward up to rM −1 given that (r) is a known function for ∈ C. The behavior of the solution is understood by the singular Sturm-Liouville theory provided in Section 2. Because rM is the first intersection, the uniqueness of (25) holds up to rM for ∈ C. We deduce from the unique analytic continuation that hold outside for each fixed .In particular, D (; rM −1 ) = 0 holds by the construction of ŷ (; ).More importantly, the functional determinant D (; rM −1 ) = 0 is an algebraic condition that filters out a discrete set of eigenvalues from C for the problem The spectral theory of (28) is simply singular Sturm-Liouville theory [29][30][31].We have taken a similar approach in [14,26,27,32].Let 1 be one of its eigenvalues.Leaving the perturbation at rM −1 toward the origin, 1 defines another ODE system: in which D ( 1 ; rM −2 ) = 0 holds due to the construction of ŷ (; ) and the analytic continuation of the Helmholtz equation.The same 1 appears at rM −2 and is ready to define yet another new ODE: By the analytic continuation of of the Helmholtz equation, the same 1 satisfies (28), (29), and ( 30) and appears at rM −3 and then consecutively into each intersection interval by its construction.That is, system (28) extends to R + .There are several ways to produce an ODE flow from the origin to the infinity that satisfies D (; r0 ) = 0, D (; r1 ) = 0, . . .D (; rM ) = 0. 
(32) More importantly, system (30) inside also produces its group of eigenvalues that appear at rM −3 and consecutively into all other intersection intervals by repeating the argument after (28).That is, system (30) extends to R + .In this way, we consider the piecewise construction of eigenfunctions for all possible discrete ∈ C that make sense of the following system: ( Each element of the zero set of D (; r ) defines an initial value condition for the ODE and an algebraic condition to filter out a discrete spectrum of (33).There is the uniqueness and the existence to the solution of ( 33) defined by the piecewise construction as shown above, and we call the extended solution ŷ (; ) for each possible the eigenvalue tunneling in the interior transmission problem.In this paper, we use the solution ŷ (; ) constructed as the procedure above.Such a construction can be set to initiate at r0 and tunnels to the infinity.We have already discussed the simple case in the radially symmetric and starlike domains [14,26,27]: For example, with the initial condition D (; r0 ) = 0, the function D (; r1 ) is an entire function of exponential type [3,4,14,26,27,32].Thus, the eigenvalues of (34) form a discrete set in C, accumulate into the eigenvalues of (33), and tunnel to the infinity.Conversely, once we find an eigenvalue of ( 33) for some along some x, it solves (7) by the uniqueness of ODE up to || = 0 and then (10) by the analytic continuation of the Helmholtz equation.Whenever we collect all such eigenvalues from each incident x ∈ S 2 , they constitute the interior transmission eigenvalues of (1).The geometric characteristics of the perturbation are connected by rays of ODE system to the far-fields. Asymptotic Expansions and Cartwright-Levinson Theory To study the functional determinants D (; ), we collect the following asymptotic behaviors of ŷ (; ) and ŷ (; ).For each fixed x, we apply the Liouville transformation [7,8,[29][30][31]33]: where Here we recall that is 1 outside : in which For simplicity of the notation, we drop all the superscripts about x whenever the context is clear. Definition 3. Let () be an integral function of order and let (, , , ) denote the number of the zeros of () inside the angle [, ] and || ≤ .One defines the density function as with some fixed 0 ∉ such that is at most a countable set [21][22][23]34]. Lemma 4. 
The functional determinant D (; ) is of order one and of type + B(), r0 ≤ ≤ r1 .In particular, one has the following density identity: Proof.We begin with (8): Advances in Mathematical Physics We have outside the zeros of (); similarly, outside the zeros of ŷ (; ).The term α() is bounded and bounded away from zero outside the zeros of () and ŷ (; ) on real axis.The behaviors of the ẑ (; ) and ẑ (; ) of ( 35) are well-known from the works [3,8,14,26,27,32,33,35].In particular, the following asymptotics hold.For > 0 and R ≥ 0, there is a constant such that Consequently, we can compute Lindelöf 's indicator function [14,22,23,26,27,34] for ŷ (; ) and then D (; ) for (41): In case that α () ≡ 0, instead of (48), we have [3,33], we have ≡ 1.The sufficient condition is obvious.This proves the lemma.Thus, Lemma 4 merely describes the eigenvalue density of problem (34).To describe the density for (33), we may apply the translation invariant properties of interior transmission eigenvalues.Alternatively, we may consider the problem from the point of view of uniqueness theorem of ODE as in Section 1.In [r 1 , r2 ], except the previous eigenvalues of (34), we consider the new eigenvalue density of the problem produced on the second interval; that is, The density is zero, because ŷ (; )/ and () satisfy the same differential equation and initial condition at r1 until r2 .Thus, there are only trivial eigenfunctions in [r Proof.This is only Lemma 4 and the discussion on (51).Now we refresh the idea of the eigenvalue tunneling.(55) Proof.Let the eigenvalue solve the system of (54) for some ≥ 0, in which the first two equations there give an entire function in and the third condition implies that the eigenvalues of (54) form a discrete set in C [14,26,27].With given by (54), we continue ODE system (54) with the mixed boundary condition: In general, we can take r as the reference point by proceeding with the previous argument.(61)
3,355.8
2016-01-12T00:00:00.000
[ "Mathematics", "Physics" ]
Glacial biodiversity of the southernmost glaciers of the European Alps (Clapier and Peirabroc, Italy) We applied a multi-taxa approach integrating the co-occurrence of plants, ground beetles, spiders and springtails with soil parameters (temperatures and chemical characteristics) in order to describe the primary succession along two glacier forelands in the Maritime Alps (Italy), a hotspot of Mediterranean biodiversity. We compared these successions to those from Central Alps: Maritime glacier forelands markedly differ for their higher values of species richness and species turnover. Contrary to our expectation, Maritime glacier forelands follow a ‘replacement change model’, like continental succession of Inner Alps and differently from other peripheral successions. We propose that the temperatures along these Mediterranean glacier forelands are warmer than those along other Alpine glacier forelands, which promote the faster species turnover. Furthermore, we found that early and mid successional stages of the investigated glaciers are richer in cold-adapted and endemic species than the later ones: we confirmed that the ‘replacement change’ model disadvantages pioneer, cold-adapted species. Given the overall correspondence among cold-adapted and endemic species, the most threatened in this climate phase, our results raise new concerns about the extinction risk of these species. We also describe supraglacial habitat of Maritime glaciers demonstrating that supraglacial debris represents an environment decoupled from the regional climate and may have an important role as refugium for coldadapted and hygrophilous plant and animal species, whose survival can be threatened by climate change and by a rapid ecological succession in the adjacent forelands. Introduction Alpine glaciers are retreating globally due to climate change (Paul et al. 2015;Roe et al. 2017), freeing bare grounds -the glacier forelands -that are colonized by several micro-and macro-organisms (e.g. bacteria, plants, arthropods) giving an excellent opportunity to study an ecological succession triggered by climate changes (Cauvy-Fraunié and Dangles 2019; Ficetola et al. 2021). The main driver of this succession is the time since deglaciation (Erschbamer and Caccianiga 2016;Hågvar et al. 2020), but its dynamics also depend on local climate (Matthews, 1992), biogeographic context (Tampucci et al. 2015) and by physical and chemical conditions at microscale (Castle et al. 2016;Hågvar et al. 2020). In addition, Rosero et al. (2021) recently demonstrated that the patterns of colonisation are taxa-dependent, i.e. different taxa can follow different models along to the same ecological succession. Two main colonisation models were described (Vater and Matthews 2015;Ficetola et al. 2021): the 'addition and persistence' and 'replacement-change'. The former consists of the persistence of pioneer species (i.e. the initial colonisers) from the recently deglaciated sites (early successional stages) to latesuccessional stages. Conversely, with the 'replacement-change' process, mainly observed in the Alps, a group of initial colonisers (the pioneer community) is progressively replaced by other species; in this case, there is a species turnover. The two models can be distinguished through the persistence of pioneer species throughout the succession, which can be assessed by different indices (see Matthews et al. 2018) although fixed threshold values cannot be established. A pilot study by Tampucci et al. 
(2015) performed in the Central Italian Alps highlighted how colonization dynamic is different in inner mountain chains with respect to peripheral ones, as a consequence of regional climate and altitude (see also Vater and Matthews 2013). In the peripheral chains of the Southern European Alps, the oceanic climate regime seems to allow the persistence of pioneer species along the glacier forelands and makes the succession slower than on glacier forelands at the same altitude under continental climatic regime, probably because of the harsher conditions during the growing season. This phenomenon is particularly evident for plants (Tampucci et al. 2015). This observation is consistent with the autosuccession concept tested by Matthews et al. (2018) along a climatic gradient in Norway, where a 'replacement change' model could be observed in the subalpine zone, progressively replaced by a pattern characterized by a longer persistence of pioneer species, ending with an autosuccession (overlap between pioneer and late successional stages) in the most-climatically-limited sites of the high-alpine zone. The long-lasting persistence of pioneer stages is particularly important as, in some areas, it allows the survival and extended distributional area of many endemic species (Tampucci et al. 2015). An additional effect of climate change observed is the increase of supraglacial stony debris due to the reduction of the pressure of the ice volume on the headwalls and the amplification of frost and heat weathering that increase their erosion (Paul et al. 2007). The supraglacial debris can hosts cold-adapted species currently threatened by global warming (Caccianiga et al., 2011;Gobbi et al. 2011Gobbi et al. , 2017Valle et al. 2020;Valle et al. 2022) and reduces the ablation rate (Nakawo and Rana 1999), thus potentially acting as refugium for these species during the current warm climatic stage. In the context of climate change, peripheral glacial areas deserve particular attention for at least three reasons: (1) they display one of the plausible future scenarios for the whole inner chain, given their overall low altitude and the occurrence of few, small and rapidly shrinking glaciers; (2) they are characterised by high richness of endemic species (Medail and Quezel 1999), since they were partially ice-free during glacial periods, acting as refugia (Schonswetter et al. 2005); (3) they could host threatened cold-adapted species in recentlydeglaciated areas and on supraglacial debris (Tampucci et al. 2015;Valle et al. 2020) Maritime Alps (maximum altitude: 3297 m a.s.l.) are the southernmost portion of the European Alps, and border the Mediterranean Sea. They host two small glaciers, Clapier and Peirabroc, the southernmost of the whole Alpine chain (Smiraglia and Diolaiuti 2015). A large amount of rainfall mainly concentrated in spring and autumn as snowfall allows Maritime glaciers to persist at low latitude and relatively low altitude (Hannss 1970). Maritime Alps represent the richest area in terms of biodiversity in the European Alps (Medail and Quezel 1999;Villemant et al. 2015) due to the peripheral position with respect to the ice sheet during the Ice Ages, the proximity to the sea, the high environmental variability due to the lithological variety and the high altitude of peaks that allow species of the Alpine altitudinal belt to persist within the Mediterranean region. Because of this peculiar climatic and biogeographic context, Maritime glaciers are unique within the European Alps. 
This paper aims to analyse the ecological succession of plant and arthropod (Aracnida: Araneae, Coleoptera: Carabidae and Hexapoda: Collembola) communities along the glacier foreland and on the supraglacial stony debris of the Clapier and Peirabroc glaciers. We hypothesize that: i) different taxa colonise the glacier foreland and the supraglacial habitat, in relation to soil parameters and temperature, with different colonization patterns from each other; ii) succession model in Maritime glaciers are similar to those of other peripheral glaciers as reported in Tampucci et al., 2015; iii) as a consequence of hypothesis ii, cold-adapted species are distributed throughout the whole succession, from pioneer to late successional stages; iv) supraglacial habitat hosts cryophilic (i.e. cold-adapted and hygrophilous) species; v) supraglacial habitat of peripheral glaciers is a peculiar environment hosting a more endemic taxa with respect to supraglacial habitat of inner Alps. Study area The Maritime Alps represent the southernmost part of the Alpine chain, and occur both in Italy and France. We studied Peirabroc (44°07'14" N, 7°24'53'' E) and Clapier (44°06'51'' N, 7°25'21'' E), the last remaining glaciers of Maritime Alps (Smiraglia and Diolaiuti 2015) (Fig. 1, Appendix 1). The bedrock is siliceous, consisting of gneiss and amphibolite (Piana et al. 2017) The studied glaciers showed an overall retreat following the end of the Little Ice Age (LIA, c. mid 19 th century); an advance phase was recorded during the 1930s and in 1951. The retreat pace increased after 2002 Pappalardo 1995, 2010). However, no glaciological data are available for the period 1967-1989, in correspondence to the last consistent advance of Alpine glaciers. Thus, a further possible advance phase was not recorded for these glaciers and only approximate dating of the glacial deposits is possible (Table 1). Smiraglia and Diolaiuti (2015) reported a surface reduction of 30% for Peirabroc (from 0.1 to 0.07 km 2 ) and of 77% for Clapier (from 0.3 to 0.09 km 2 ) for the period 1957-2010. Approximately 1/3 of the surface of both glaciers is covered by supraglacial stony debris, which is located in the proximal part of the ice tongue (debris cover estimated with Agea 2015 Orthophoto). The minimum altitude of the glaciers tongue recorded in 2019 was 2430 m asl for Peirabroc and 2650 m asl for Clapier; the tongue of Clapier is separated from the accumulation basin at 2750 m asl. Sampling design Five environmental units were selected, three occurring on Peirabroc, four on Clapier, and one common to both glaciers. The environmental units correspond to a specific deglaciation or moraine deposition age, from the glacier front to terrains icefreed since the Late Glacial Period (LG-c.10000 years BP) - (Table 1); the environmental unit corresponding to LG terrains (PEI5) is common to both glaciers, it ideally represents the late-successional stage of the succession. Terrain age was obtained from literature data reporting previous glaciological surveys (see previous paragraph). An environmental unit was selected also on the supraglacial debris of both glaciers (Fig. 1, Table 1, Appendixes 1 and 2). Two plots were placed in each environmental unit, each one consisting of three sampling points at least 10 meters apart from each other. For each sampling point: (1) We performed a vegetation survey in a quadrat of 5 x 5 m 2 . 
The cover of rock outcrop, debris, of the whole plant cover and of every single species was estimated with a resolution of 5%; a cover value of 3% or 1%, was assigned for rare (less than 5% cover) and sporadic (one individual) species (Table 2, Appendix 3). (2) We placed a pitfall trap, to catch and preserve arthropods, consisting of a plastic glass (diameter 7 cm) filled up with a non-toxic and frost-resistant solution made by 2:1 water and wine-vinegar, with salt and few drops of soap (Gobbi 2020); pitfall traps were collected and re-set during two sampling sessions (Harry et al. 2011;Lencioni and Gobbi 2021): 20/21 August 2019 -10/12 September 2019. Among the sampled taxa, ground beetles (Coleoptera, Carabidae), springtails (Hexapoda, Collembola) and spiders (Aracnida, Araneae) were chosen for the analyses, because they are ubiquitous and good ecological indicators, particularly in glacial environment (Hågvar et al. 2020). (3) We collected a soil sample of 200 g for analyse pH values, organic matter content (Walkley-Black method). In every plot (except plot CLA2, where it was not possible) a soil sample of approximately 2 kg was taken to estimate grain size distributions. The sampled arthropods were preserved in ethanol and stored at Natural Science Museum of Bergamo, Italy (spiders), and at MUSE -Science Museum of Trento, Italy (ground beetles, springtails and other taxa not identified at the species level). Two dataloggers (Tinytag plus 2) were placed, one in correspondence to supraglacial debris and one near the LIA Moraines of each glacier in order to analyse the patterns of mean daily ground surface temperature and humidity during the period 3 August 2019 -13 September 2020. The devices were placed between stones at a depth of c. 10 cm, in order to shield them from direct solar radiation, and to obtain micrometeorological data about the substrate in which plant roots and arthropods develop. The recording was set at 30 minute intervals. Datalogger on Peirabroc supraglacial debris was downloaded in September 2019; afterwards, it was lost during winter due to avalanches and rockfalls; thus only data from one month (4 August 2019 -11 September 2019) are available. Data analysis Vegetation data were expressed as cover values (%), while occurrence data of the considered grounddwelling arthropods were expressed as presence/ absence, since the second sampling session was not available for all the sampling points because many traps were damaged by snow and wild fauna. Site (altitude, slope, aspect) and soil data were standardized (y = (x -mean)/ standard deviation; Kreyszig 1979) and aspect was normalized with (-cos(X)). We defined as "cold-adapted" all the species strictly linked to the Alpine and Nival altitudinal belts (Table 2). In particular, concerning plants, we defined as "cold-adapted" the species with temperature index = 1 (alpine and nival) and temperature range of variation = I (temperature index variation at most ±1) in Landolt et al. (2010); concerning arthropods, we referenced to the available descriptive literature about the ecological requirement of each identified taxon (Thaler 1988(Thaler , 1999Gisin 1960;Isaia et al. 2007;Bisio 2008;Jordana 2012;Pantini and Isaia 2019;Monzini 2010, 2011;Potapov 2001). 
Hygrophilous species are those linked to high availability of water (but not aquatic): we consider hygrophilous plant species with Landolt's humidity index = 4 or 4.5; concerning arthropods, we referenced to the available descriptive literature about the ecological requirement of each identified taxon (as above). Species that are both cold-adapted and hygrophilous are defined cryophilic (Deharveng et al. 2008). All analyses were performed with PAST 4.05 software (Hammer et al. 2001). Environmental variables In order to calculate changes in mean annual temperature and snow persistence along the glacier foreland, micrometeorological data recorded on LIA moraines were used to estimate soil temperature on the whole glacier foreland, applying a standard adiabatic gradient of 0.6°C/100 m (Rolland 2003), as tested by Tampucci et al. (2015). Data recorded by datalogger placed on the supraglacial debris of Clapier were used to describe the supraglacial environment. Temperature data obtained by the dataloggers allowed us to outline the snow cover period, where temperature remain constant and close to 0°C. (Appendix 4) The shorter series of data available for the Peirabroc supraglacial environment was compared with Clapier's corresponding series in order to evaluate differences or homologies in trends between the two glaciers. We used descriptive statistics (mean value and standard deviation for each environmental unit) to describe the distribution of soil parameters (soil pH, organic matter content, grain size distribution, total plant cover) along the investigated glacier forelands. A non-parametric monotone correlation coefficient (Spearman's rho) was calculated to investigate the collinearity between the soil variables, then Principal Component Analysis (PCA) was used to evaluate the association among them in order to rule out some of the auto-correlated variables from the subsequent analyses (Hammer 1999(Hammer -2021. Plant and arthropod succession in relation to environmental gradients Patterns of plant and arthropod species distribution along the glacier foreland in relation to environmental variables were described through canonical correspondence analysis (CCA; Legendre and Legendre 1998). We selected this direct gradient analysis because the response of species to the environmental variables is supposed to be unimodal due to the presence of complex ecological filtering driving the response of species occurrence and/or abundance (see Ficetola et al. 2021); furthermore, this analysis is particularly suitable for heterogeneous datasets along long ecological gradients (Hammer et al. 2001;Zeleny 2022) . These analyses were carried out: (A) for plants, on a matrix of continuous data of plant species including 22 sampling points for 76 species on Peirabroc (20 species out of 96 were omitted since occurring in only one sampling point; 2 sampling points were omitted since no plant species was recorded in them) and 22 sampling points for 73 species on Clapier (19 species out of 92 were omitted since occurred in only one sampling point; 2 sampling points were omitted since no plant species was recorded in them); (B) for arthropods, on a binary matrix with 21 plots and 27 species on Peirabroc and 21 plots for 30 species on Clapier. Environmental variables included in all CCA analyses were slope, aspect, pH and soil organic matter; the three most correlated variables -gravel and sand, silt and clay and plant cover -were omitted, because of their ecological redundancy (Appendix 5). 
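The preprocessing steps stated above, temperature extrapolation with a 0.6 °C/100 m adiabatic gradient, standardization of site and soil variables, and aspect normalization with -cos(aspect), can be written down compactly. The Python sketch below is illustrative only; the function names and the example numbers are assumptions, not values from the study.

```python
import numpy as np

LAPSE_RATE = 0.6 / 100.0   # degrees C per metre, the adiabatic gradient used in the text

def extrapolate_temperature(t_logger, alt_logger, alt_target):
    """Estimate mean soil temperature at a target altitude from a datalogger value."""
    return t_logger - LAPSE_RATE * (alt_target - alt_logger)

def standardize(x):
    """y = (x - mean) / standard deviation, as applied to site and soil data."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std(ddof=0)

def normalize_aspect(aspect_deg):
    """Aspect normalized with -cos(aspect), aspect given in degrees."""
    return -np.cos(np.radians(aspect_deg))

# Hypothetical example: a logger reading of 4.0 degrees C at 2500 m extrapolated to 2600 m.
print(extrapolate_temperature(4.0, 2500, 2600))   # -> 3.4
```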
For identifying typical plant and arthropod species of each environmental unit, we used Indicator Species Analysis (indicator value: IndVal; Podani & Csányi 2010), carried out on the matrix used for CCA, merging plots of the same age into the same environmental unit according to Table 1. Comparative analysis of succession parameters In order to compare the succession trend of different regions of the Alpine chain, we compared the ecological succession of the two Maritime Alps glacier foreland with one glacier from the peripheral (southern) Alps: Trobio glacier (Orobian Alps, glacier foreland above the tree line, 2350-2550 m asl, (Tampucci et al. 2015) and with two glaciers form Rhaetian (inner) Alps: Rotmoos glacier (Rhaetian Alps, glacier foreland near the potential tree line, 2280-2400 m asl, Austrian Alps; Kaufmann 2001; Marcante et al. 2009), and Cedec glacier (Rhaetian Alps, glacier foreland above the tree line, 2694-2726 m asl, Italian Alps; Gobbi et al. 2010). All these glaciers are characterized by siliceous bedrock. The terrain age for each sampled site was taken from the original publications (literature cited above). Specifically, for each ecological succession of the glacier forelands we calculated two indices of turnover for plants, spiders, ground beetles and springtails (springtail data were available only for Peirabroc and Clapier): (1) Whittaker species turnover index (Whittaker 1972): βW = (γ -α)/α = γ/α -1 (where γ is the total species diversity and α is the mean species diversity at the habitat level); (2) Persistence index (Vater and Matthews 2015): PPn = 100c/a (where c is the number of common species of the two sites and a the number of species of the most pioneer site). To perform a homogeneous comparison, we merged the two pioneer stages of Clapier (CLA2 and CLA3), considering the following four deglaciation stages (Tampucci et al. 2015): 1 = pioneer (1-30 years since deglaciation); 2 = early (31-100 years since deglaciation); 3 = mid (101-170 years since deglaciation); 4 = late (c. 10.000 years old, ice-free since the LG) (Table 1). Plant and arthropod data were not available for stage 1 on Peirabroc and Clapier, because this environment was not included in our sampling design being not clearly identifiable on the field: for this reason, the total persistence index was calculated from 2 to 4 for all glacier forelands. Environmental gradients along the glacier foreland and on the supraglacial debris The mean annual temperature measured on the LIA moraine was 4.2°C for Clapier (at 2510 m asl) and 3.3°C for Peirabroc (at 2420 m asl). The values calculated for the uppermost areas of the glacier forelands are 4.0°C for Clapier (at 2630 m asl)) and 2.5°C for Peirabroc (at 2460 m asl). The mean annual temperature of the supraglacial debris, available only for Clapier, was -1°C. Snow lasted on the Peirabroc LIA moraines for about 214 days and for 183 days on Clapier; on the supraglacial debris of Clapier it lasted for 295 days; considering the similarity among the thermal trends on the two glaciers we can expect similar data of snow persistence on Peirabroc glacier. Soil parameters were not related to slope and aspect on both glaciers (Appendix 5). 
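The two succession indices defined in the comparative-analysis paragraph above, Whittaker's turnover βW = γ/α - 1 and the persistence index PPn = 100·c/a, are simple to compute from species lists. The sketch below is a minimal Python illustration; the species combinations in the example are hypothetical and are not the study's actual plot data.

```python
def whittaker_turnover(site_species_lists):
    """Whittaker turnover: betaW = gamma/alpha - 1, where gamma is the total number
    of species across sites and alpha the mean per-site richness."""
    gamma = len(set().union(*map(set, site_species_lists)))
    alpha = sum(len(set(s)) for s in site_species_lists) / len(site_species_lists)
    return gamma / alpha - 1.0

def persistence_index(pioneer_species, later_species):
    """PPn = 100 * c / a: percentage of pioneer-stage species (a) still found in a
    later stage (c = number of species shared by the two stages)."""
    pioneer, later = set(pioneer_species), set(later_species)
    return 100.0 * len(pioneer & later) / len(pioneer)

# Hypothetical example with two successional stages:
early = ["Oxyria digyna", "Arabis alpina", "Linaria alpina"]
mid   = ["Arabis alpina", "Poa alpina", "Trifolium thalii", "Linaria alpina"]
print(whittaker_turnover([early, mid]))   # species turnover between the two stages
print(persistence_index(early, mid))      # about 66.7 % of pioneer species persist
```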
Soil parameters along the glacier foreland showed a progressive decrease of pH (from 7.5 to 5.5), gravel and sand fraction (from 99% to 60%) and a corresponding increase of organic matter content (from 4 to 163 g/kg), silt and clay fraction (from 1% to 40%) with increasing terrain age (Appendix 6) PCA gave similar results for the two glaciers (Appendix 7): soil data were displaced along PCA axis 1, particularly for Peirabroc glacier, representing the main environmental gradient, while slope and aspect were related to axis 2. Plant community succession In both Peirabroc and Clapier sites plots are arranged following two main gradients outlined by the CCA (Fig. 2): the first corresponds to soil evolution, expressed by pH value and organic matter content, that arrange plots following their chronological succession and are highly correlated with CCA axis 1 (Pearson r index 0.91 and -0.87 for Peirabroc and -0.71 and 0.92 for Clapier, respectively). The second gradient is related to topographic data (aspect: Pearson r index with CCA axis 2 0.71 and 0.65 for Peirabroc and Clapier, respectively). The plant succession dynamic is similar along the two glacier forelands, with differences due to sporadic species occurrence. In the early successional stage of Peirabroc glacier forelands (environmental unit PEI2) we found 31 plant species (mean total plant cover for plot 32%); on Clapier this successional stage includes two different environmental units: on the young glacier foreland (CLA2) we found only six plant species (mean total plant cover for plot 2%). On the young moraine (CLA3) we found 24 species, with a mean total plant cover of 34%. According to Indval (Table 2, Appendix 8) the best indicator species of early successional stages are Oxyria digyna, Arabis alpina, Saxifraga aizoides, Hornungia alpina and Linaria alpina. In mid-successional stages on Peirabroc (PEI3, LIA moraines) eleven early colonizer species persisted, but 37 late colonizers appeared, thus reaching the highest species richness (48) with a mean total plant cover of 97%. This could be observed also in the mid-successional stage of Clapier (CLA4), with 53 species and a mean total plant cover of 115%. According to IndVal, the indicator species for this environmental unit (Table 2) are Myosotis alpestris, Euphrasia alpina, Trifolium thalii, Luzula spicata and Armeria alpina. Late successional stages (PEI5) showed slightly higher plant cover values (133%) with many exclusive late successional species such as Carex sempervirens, Scorzoneroides helvetica, Nardus stricta and Ranunculus montanus, which are the best indicator species according to IndVal (Table 2). In general, only few species are ubiquitous along all the glacier forelands: Leucanthemopsis alpina, Poa alpina, Luzula alpinopilosa, Saxifraga bryoides, Saxifraga exarata. Analysing the general trend of species richness (Fig. 3), plants show an increase in species richness on the mid-successional stages and then a decrease in the late successional stages. Among the early successional species, only Saxifraga pedemontana ssp. pedemontana, the hygrophilous Saxifraga aizoides, the cold-adapted Saxifraga retusa, Poa laxa and the cryophilic Adenostyles leucophylla have been found on supraglacial debris, in Peirabroc sites (PEI1); the percentage of endemic plant species is 60% (3 species among 5) and also the percentage of cold-adapted species is 60%; 2 among 3 species are both endemic and cold-adapted. On Clapier supraglacial debris (CLA1) no plant species was found. 
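The indicator species reported above and in the next subsection were selected by their IndVal scores. As a hedged illustration of how such scores are obtained, the following sketch implements the classical Dufrene and Legendre form, IndVal = specificity x fidelity x 100; the variant used in the paper (Podani and Csanyi 2010) may differ in detail, and the example matrix is hypothetical.

```python
import numpy as np

def indval(abundance, groups):
    """abundance: (n_sites, n_species) matrix; groups: length-n_sites labels.
    Returns a dict {group: array of IndVal scores, one per species}."""
    abundance = np.asarray(abundance, dtype=float)
    groups = np.asarray(groups)
    labels = np.unique(groups)
    mean_ab = np.array([abundance[groups == g].mean(axis=0) for g in labels])
    with np.errstate(invalid="ignore", divide="ignore"):
        specificity = mean_ab / mean_ab.sum(axis=0)            # A_ij
    specificity = np.nan_to_num(specificity)
    fidelity = np.array([(abundance[groups == g] > 0).mean(axis=0)
                         for g in labels])                      # B_ij
    scores = specificity * fidelity * 100.0
    return {g: scores[i] for i, g in enumerate(labels)}

# Hypothetical example: four plots in two environmental units, three species.
ab = [[30, 0, 5], [20, 0, 0], [0, 10, 5], [0, 15, 0]]
print(indval(ab, ["early", "early", "late", "late"]))
```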
Arthropod community succession Along the Peirabroc glacier foreland, arthropod communities are arranged primarily in relation to a soil evolution gradient (Fig. 4), with CCA axis 1 highly correlated to organic matter content (Pearson r index = 0.83) and soil pH (Pearson r index = -0.92). Axis 2 is related to aspect (Pearson r index = -0.42). Along the Clapier glacier foreland, arthropod distribution follows two main gradients: the first is that of soil organic matter (Pearson r index = 0.86), arranging plots and species along CCA axis 1, and the second is that of aspect and slope, correlated to CCA axis 2 (Pearson r index = -0.66 and -0.58, respectively). IndVal analysis significantly associated with the early successional stage of the young glacier foreland (CLA2) the cold-adapted and endemic springtail Orchesella cf. frontimaculata and the spider Entelecara sp. (Table 2, Appendix 8), with a total of two spider species and four ground-dwelling springtail species; only one ground beetle species was sampled here, the cryophilic and endemic Oreonebria angusticollis ssp. microcephala. In the early successional stages of the young moraines (PEI2 and CLA2-3), one ground beetle species (Nebria jockischii), six spider species (four cold-adapted and three endemic) and three ground-dwelling springtail species were found. The spider Coelotes pabulator and the springtails Lepidocyrtus gr. curvicollis and Orchesella alticola resulted as indicator species of this successional stage according to IndVal (Table 2). In the mid-successional stages (PEI3 and CLA4) we found five ground beetle species (three cold-adapted and endemic), seven spider species (four cold-adapted and three endemic; among these seven species, only Coelotes pabulator was found on Peirabroc) and seven ground-dwelling springtail species (one of them cold-adapted and endemic). The IndVal analysis showed the ground beetles Carabus pedemontanus and Pterostichus morio ssp. fenestrellanus, the spider Zelotes gallicus and the springtails Fasciosminthurus sauteri and Entomobrya lanuginosa to be indicator species for these successional stages (Table 2). The late successional stage (PEI5) hosts an arthropod community quite different from that of the previous successional stage, with four ground beetle, six spider and three ground-dwelling springtail species. According to IndVal, the indicator species linked to this successional stage are the ground beetle Harpalus rubripes, the wolf spider Pardosa blanda and the springtail Orchesella quinquefasciata (Table 2). The main difference between the two glaciers is the low number of spiders in the mid-successional stage of Peirabroc. Analysing the general trend of species richness, arthropods and plants follow different colonization patterns, with some differences between the two glaciers (Fig. 3b-d). Spiders show a strong decrease in total species richness in the mid-successional stages of Peirabroc but a peak on Clapier. Ground-dwelling springtails show the highest number of species in the early successional stages and then a monotonic decrease on Peirabroc. On Clapier the trend is similar but with lower initial values. Ground beetles show an increase in species number during the succession until the LIA moraines, and then the number stabilizes; only along the Clapier glacier foreland is there a positive peak in the mid-successional stages.
Fig. 4 Arthropod Canonical Correspondence Analysis (CCA) graphs for a) Peirabroc and b) Clapier. asp = aspect, slo = slope, org = organic matter, pH = soil reaction (pH).
The supraglacial debris also hosts arthropod species. Five ground-dwelling arthropod taxa were found: the cryophilic and endemic ground beetle Oreonebria angusticollis ssp. microcephala (for the indicator species for this environment; Table 2) the springtails Orchesella cf. frontimaculata (coldadapted and endemic species), Isotomidae sp., Deutherosminthurus pallipes, Lepidocyrtus gr curvicollis. No spiders were found on this environmental unit. The percentage of endemic species among arthropods in this habitat is 40%, considering ground beetles and springtails, and 100%, considering only spiders and ground beetles. The same percentages also represent the incidence of cold-adapted species, since all species are both coldadapted and endemic. Comparative analysis of succession patterns The pattern of total plant species richness along the Clapier and Peirabroc chronosequence was characterised by a mid-successional maximum in correspondence to LIA moraines. Differently, an early-successional maximum was recorded for Rotmoos (Fig. 3a). A trend similar to the plant trend, with a lower maximum, was observed for ground beetles (Fig. 3b); also for ground beetles, on Rotmoos the maximum of species richness was reached earlier than on the other glaciers. Unlike the other taxa, for spiders it is difficult to identify a general trend common among glaciers (Fig. 3c). The comparison between Peirabroc and Clapier springtail trends suggested a general decrease in species richness along the foreland (Fig. 3d). In general, Peirabroc and Clapier are characterized by higher values of the Whittaker species turnover βW than the other glaciers previously studied in inner and peripheral Alps. For plant succession, the highest values on Peirabroc and Clapier occurred in the transition from early to midsuccessional stages (Fig. 5a), differently from other glaciers where usually the turnover lies between mid and late-successional stages. A similar trend, with different absolute values, was observed for ground beetles (Fig. 5b). For spiders, the trend is similar among glaciers, with a late peak in turnover (Fig. 5c). The comparison among Whittaker indexes in springtail succession along Peirabroc and Clapier glacier forelands suggested very different trends among the two Maritime glaciers (Fig. 5d). The persistence index (Fig. 6a-d) showed lower values for Clapier and Peirabroc than for the previously studied glaciers, confirming an overall higher turnover. In particular, all pioneer taxa persisted to the mid-successional stage on Trobio glacier, while on Cedec and Rotmoos species persisted longer through the succession. Plants (with the exception of Rotmoos) and spiders were the most persistent taxa from pioneer to mid successional stages; instead, ground beetles are persistent from mid to late successional stages. On Peirabroc and Clapier, springtail persistence index is very low in every successional stage. Thermal data Our work is among the first to describe plant and arthropod communities colonizing the southernmost glacier forelands of the European Alps: Peirabroc and Clapier glacier forelands stand out for having remarkably higher average annual temperature (respectively 2.5° and 4.0°C) with respect to other studies in the inner (-1.8°C/-1.3°C ; Kaufmann et al. 2002a,b), and peripheral (1.7°C in Gobbi et al. 2017; from 0.5°C to 1.3°C in Tampucci et al. 2015) Central Alps, at comparable altitude. 
The average annual temperature recorded by our dataloggers is comparable to values reported by Federici and Pappalardo (2010) and Rapetti and Vittorini (1992) for the same area, confirming reliability of our data. The average annual temperature recorded on supraglacial debris (-1°C) is similar to those recorded on other debris-covered glaciers of the European Alps (Gobbi et al. 2017;Valle et al. 2020). Biodiversity of the southernmost Alpine glacier forelands The investigated glacial environments host a remarkable biodiversity, with some noteworthy peculiarities of endemic species most of which are also cold-adapted and considered to be alpine species, but there are species of the area straddling both the Southern Alps and the Northern Apennines: the ground beetle Oreonebria macrodera, from Maritime Alps to Northern Apennines, and the spider Coelotes osellai, from Maritime Alps to Apuan Alps. The most relevant findings include the plant Saxifraga pedemontana ssp. pedemontana, the ground beetles Carabus pedemontanus, Oreonebria angusticollis ssp. microcephala, Amara carduii sbsp. psyllocephala, Pterostichus morio sbsp fenestrellanus, the spider Vesubia jugorum, and the springtail Orchesella cf. frontimaculata. These species have a very restricted distribution range and are strictly linked to cold environments; in particular, Vesubia jugorum is the only spider present in IUCN's Red List of threatened species (Mammola et al. 2016). Vesubia jugorum is classified as endangered because the current observed extent of occurrence (EEO 4,412 km 2 ) and the area of occupancy (AOO 835 km 2 ) are declining due to climate change (Isaia and Mammola 2018). Fig. 6 Persistence index for plants a), ground beetles b), spiders c) and springtails d) along the compared glacier forelands. 1-2 = from pioneer to early successional stages, 2-3 = from early to mid successional stages, 3-4 = from mid to late successional stages. Data are not available for the passage from stage 1 to 2 on Peirabroc and Clapier: for this reason, the total persistence index was calculated from 2 to 4 for all glacier forelands. Supraglacial biodiversity is represented only by few species extremely specialised to cold and wet high-altitude environments. These include the springtail Orchesella cf. frontimaculata and the ground beetle Oreonebria angusticollis; both were found only in supraglacial habitat and in early successional stages, confirming for Oreonebria angusticollis its exclusivity for cold and wet habitats observed by Gobbi et al. (2011) and Bisio and Taglianti (2021). Among springtail species, Fasciosminthurus sauteri is new for the Italian fauna; this species is an Alpine species described for Switzerland (Nayrolles and Lienhard 1990), where it was found in a scree slope vegetation and in a Seslerio-Caricetum grassland above 1800 m asl. Our data confirmed its presence in an open environment at high altitude; since we collected it quite far from the locus typicus, we can suppose that its distribution is underestimated and it may include a larger part of the Alps. Homologies and differences among plant and arthropod successions along Peirabroc and Clapier glacier forelands Plant and arthropod succession along Peirabroc and Clapier glacier forelands is arranged mainly in relation to soil evolution gradient driven by the time since deglaciation, as already observed in other Alpine glacier forelands (Matthews 1992;Burga 1999;Caccianiga et al. 2001;Khedim et al. 2021). 
However, we observed that aspect and slope also play an important role (the latter especially for the arthropod communities of Clapier), suggesting that microenvironmental variability could influence the successional pathway. LG terrains usually host a lower number of plant species with respect to LIA moraines (Caccianiga et al. 2001; Tampucci et al. 2015); on the other hand, we cannot exclude an additional negative impact of grazing by ungulates (e.g. chamois and alpine ibex, pers. obs.) on plant species richness. The higher values of the turnover index and the lower persistence values with respect to other glacier forelands indicate that, despite some differences among taxa, the plant and arthropod successions of Peirabroc and Clapier follow the 'replacement-change' model, confirming the observations by Rosero et al. (2021) and confuting our first hypothesis. In addition, Peirabroc and Clapier show some differences in their successional patterns despite their proximity: this finding supports the hypothesis that each succession, even on a very small scale, has its own characteristics, perhaps in relation to the limited extension of these environments and the great environmental heterogeneity (Kaufmann et al. 2002b; Mori et al. 2008). Such differences between the two proglacial successions are particularly evident for spiders and springtails. We hypothesize that the high variability of spider and springtail successions could be due to their microhabitat sensitivity and mobility (Rusek 2001; Widenfalk et al. 2016), which make pitfall traps a not fully exhaustive sampling method for these taxa. In addition, this may have been enhanced by the short duration of the sampling, for the reasons explained in the Materials and Methods section. Successional patterns on different glacier forelands: peculiarities of peripheral glaciers with Mediterranean climate Peirabroc and Clapier markedly differ from all the other successions we have considered in their higher species turnover and, for this reason, our results disprove hypothesis (ii): contrary to our expectation, Peirabroc and Clapier follow a typical 'replacement-change' model, with high turnover rates, as observed along glacier forelands crossing the tree line in the inner Alps (Gobbi et al. 2006, 2007; Tampucci et al. 2015) as well as in Norway (Matthews et al. 2018), instead of the 'addition and persistence' model observed in the other peripheral glaciers (see Tampucci et al. 2015). The 'replacement-change' model of colonisation has been associated with less severe environmental conditions, such as higher mean summer temperature and lower disturbance, and with a greater species pool (Holten 2003; Walker et al. 2004; D'Amico et al. 2015; Matthews et al. 2018). Tampucci et al. (2015) associated such conditions with the continental climate of the inner Alpine chain, with higher tree line position and generally warmer conditions during the growing season, whereas the oceanic climate of the peripheral chains results in more severe environmental conditions during the favourable season and ultimately in a longer persistence of pioneer species. We propose that, despite their peripheral position, the peculiar climatic traits of the Maritime Alps provide mild temperatures that could promote the rapid species turnover observed along the succession, as pointed out by Ficetola et al. (2021). Trends in species richness of plants and ground beetles seem to reflect the altitudinal distribution of the glacier with respect to the tree line.
Peirabroc and Clapier glacier forelands are similar to Trobio and Cedec, all lying above the tree line, while Rotmoos differs from the others. With respect to all the other analysed successions, Peirabroc and Clapier show the highest absolute values of species richness along the plant succession, reflecting the biogeographic role of the Maritime Alps as a hotspot of biodiversity in the Mediterranean basin (Medail and Quezel 1999). Early and mid-successional stages of the investigated glaciers are richer in species and host the highest percentage and number of cold-adapted and/or endemic species, thus disproving our hypothesis (iii) that cold-adapted species are equally distributed along the succession, in contrast to the results reported by Tampucci et al. (2015) in the Southern Alps. We propose that this is a direct consequence of the considerably higher average temperatures of the proglacial habitat of Peirabroc and Clapier, which promote the 'replacement-change' model that disadvantages pioneer, cold-adapted species. Given the overall correspondence between cold-adapted and endemic species, the most threatened in this climate scenario (Tampucci et al. 2015; Cauvy-Fraunié and Dangles 2019), our results raise new concerns about the extinction risk of these species. Supraglacial habitat, a threatened refugium for cryophilic and endemic species The supraglacial habitat hosts well-defined plant and arthropod communities with cold-adapted and/or hygrophilous species. In particular, arthropod species with the combination of these characteristics, like Oreonebria angusticollis ssp. microcephala, are able to persist only on the glacier surface or on terrains very close to the ice tongue. Other arthropods, cold-adapted but not hygrophilous, seem to prefer either supraglacial or early-successional habitats. This is the case of Orchesella cf. frontimaculata; for this species, competition for food resources may be a factor limiting its presence in other environments, where other Orchesella species, such as O. alticola and O. quinquefasciata, occur. Other cold-adapted species, like the spiders Coelotes pickardi pastor, Pardosa nigra and Dysdera cribrata, or the ground beetles Amara carduii ssp. psyllocephala, Carabus pedemontanus, Pterostichus morio ssp. fenestrellanus and Oreonebria macrodera, also occur on mid- and late-successional stages. Thus, hypothesis (iv), that cryophilic species are more closely linked to the ice, is confirmed, and this is particularly evident for arthropods. The incidence of endemism in this supraglacial habitat is high, as predicted in hypothesis (v), especially if we compare it to the inner Alps, where no endemic species were observed (see data from Gobbi et al. 2006). The low average annual temperatures recorded on the supraglacial debris emphasise the specific features of such habitats in comparison to the nearby glacier forelands with their relatively warmer thermal profiles; the climatic profile of supraglacial debris depends on microhabitat features like ice presence, debris thickness and conductivity (Mihalcea et al. 2008; Schauwecker et al. 2015; Gibson et al. 2017). Thus, supraglacial debris represents an environment decoupled from the regional climate and may have an important role as a refugium for cold-adapted and hygrophilous plant and animal species, whose survival can be threatened by climate change and by the fast ecological succession in the adjacent forelands.
At present, the situation in the Maritime Alps is alarming, considering the uniqueness of these glaciers and of their biodiversity, in relation to the reduced surface of Peirabroc glacier and, especially, to the observed fragmentation of the Clapier glacier tongue. Conclusions Every primary succession is mainly driven by soil evolution, a proxy for time since deglaciation. However, differences at the regional and even at the local scale (i.e. between two nearby glacier forelands) can be observed, suggesting that every succession responds to regional climate, local biodiversity, microhabitat heterogeneity and extension, but also to stochastic events (Matthews 1992; Erschbamer and Caccianiga 2016; Ficetola et al. 2021); this emphasizes the important role of the scale of observation, particularly when dealing with different taxa. The Maritime Alps represent a peripheral chain with a unique combination of specific climatic features and taxonomic richness, with particular reference to endemic species. Due to their great variability and to the importance of glacial habitats as refugia for cold-adapted and endemic threatened species (Valle et al. 2021, 2022), it is important to expand the number of case studies in order to obtain a more complete picture of the phenomenon. Mediterranean glacial habitats, already especially threatened with disappearance because of their geographical position, are further threatened by the fast species turnover, which implies that many cold-adapted and endemic species, more closely linked to these environments, face a severe extinction risk. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/. Funding note: Open access funding provided by Università degli Studi di Milano within the CRUI-CARE Agreement.
GADTI: Graph Autoencoder Approach for DTI Prediction From Heterogeneous Network Identifying drug–target interactions (DTIs) is the basis for drug development. However, discovering drug–target interactions through biochemical experiments has low coverage and high costs. Many computational methods have been developed to predict potential drug–target interactions based on known drug–target interactions, but the accuracy of these methods still needs to be improved. In this article, a graph autoencoder approach for DTI prediction (GADTI) was proposed to discover potential interactions between drugs and targets using a heterogeneous network, which integrates diverse drug-related and target-related datasets. Its encoder consists of two components: a graph convolutional network (GCN) and a random walk with restart (RWR). The decoder is DistMult, a matrix factorization model, which uses the embedding vectors from the encoder to discover potential DTIs. The combination of GCN and RWR can provide nodes with more information through a larger neighborhood, and it can also avoid the over-smoothing and computational complexity caused by multi-layer message passing. Based on 10-fold cross-validation, we conduct three experiments in different scenarios. The results show that GADTI is superior to the baseline methods in both the area under the receiver operator characteristic curve and the area under the precision–recall curve. In addition, based on the latest DrugBank dataset (V5.1.8), the case study shows that 54.8% of newly approved DTIs are predicted by GADTI. INTRODUCTION The drug acts on the target protein, thereby affecting the expression of the target protein to achieve a therapeutic effect on the disease. Therefore, finding drug–target interactions is the basis of drug development. The research and development of innovative drugs often requires billions of dollars and more than a decade of work, and usually ends in failure. Hence, it is an important choice for pharmaceutical companies to discover potential drug–target interactions (DTIs) by using the known DTIs. The properties of existing drugs are already familiar, and their safety has been established. However, there are limits in both the coverage and the throughput of biochemical experiments for identifying new DTIs. Consequently, the prediction of DTIs using computational methods has attracted extensive attention. Early computational methods were mainly based on drug–drug similarity and target–target similarity, or on the features of drugs and targets. Some of the similarity-based methods first calculate the similarity between drug pairs (e.g., chemical structure similarity) and the similarity between target pairs (e.g., protein sequence similarity), and then use the known DTIs to score the unknown DTIs (Cheng et al., 2012; Mei et al., 2013; Wang et al., 2014). Other similarity-based methods perform a random walk on a network composed of multiple data sources, such as drug–drug interactions, target–target interactions, and DTIs, to obtain the similarity between nodes and predict new DTIs (Chen et al., 2012; Seal et al., 2015). In the feature-based methods, both the drugs and the targets are represented as fixed-length feature vectors, and the known drug–target pairs are divided into positive and negative categories. The DTI prediction is then transformed into a binary classification problem.
In recent years, network embedding methods (Perozzi et al., 2014) have shown excellent performance in network data analysis (Cai et al., 2017) and have been introduced into DTI prediction (Su et al., 2018; Bagherian et al., 2020; Liu et al., 2020). Network embedding is also known as graph embedding. In network embedding, nodes such as drugs and targets can all be converted into low-dimensional vectors that represent their features and can be directly used for DTI prediction. The main methods of network embedding include matrix factorization, random walk, and deep learning. A multiple similarities collaborative matrix factorization model was proposed to predict DTIs. It incorporates anatomical therapeutic chemical similarity and chemical structure similarity of drugs, as well as genomic sequence similarity, gene ontology (GO) similarity, and protein–protein interaction (PPI) network similarity of targets. A combination of these similarity matrices was used to approximate the drug feature matrix D and the target feature matrix T, and the inner product between D and T was then used to approximate the DTI matrix. TriModel (Mohamed et al., 2019) uses a drug-related knowledge graph to find potential DTIs. It learns the feature vectors of nodes in the knowledge graph through tensor decomposition, and these vectors are used to determine whether a drug and a target interact. Meanwhile, DTINet (Luo et al., 2017) first uses a random walk to obtain the low-dimensional feature vector of each drug and protein, projects the drug vectors and protein vectors into the same space, and then discovers new interactions through matrix completion. Encouraged by the DeepWalk (Perozzi et al., 2014) model, some researchers have combined the random walk with shallow neural networks (Zong et al., 2017, 2019; Zhu et al., 2018). These methods first construct a heterogeneous network based on multiple data sources, and then apply DeepWalk, node2vec (Grover and Leskovec, 2016), and other algorithms to the network to obtain the embedding vectors of drug nodes and target nodes. NeoDTI (Wan et al., 2019) uses a deep learning method based on neighborhood information aggregation to discover new DTIs. It aggregates neighbor information based on edge types in heterogeneous networks, and the feature vectors of the nodes are then used to reconstruct the original network. There are also several studies based on drug structure and protein sequence (Wen et al., 2017; Karimi et al., 2019; Öztürk et al., 2019). Starting from the chemical structure of compounds and the protein sequence, deep learning methods are employed to predict drug–target binding affinity. Matrix factorization methods can capture the global structure of the network, but their space complexity increases rapidly as the network scale increases. Random walk methods are more efficient because they usually gather only local features. Deep learning methods are outstanding in DTI prediction because they can discover hidden features and associations from multi-source heterogeneous networks and can easily integrate externally associated data of drugs and targets (e.g., GO) to improve performance. However, deep learning is computationally expensive and time-consuming. Among the deep learning methods, graph convolutional network (GCN)-based message passing (also known as neighborhood information aggregation) algorithms have recently attracted special attention due to their flexibility and good performance (Kearnes et al., 2016; Ying et al., 2018; Wan et al., 2019).
The GCN algorithms usually only consider a neighborhood with a short distance (e.g., the first-order neighborhood) because large distances lead to over-smoothing, which degrades performance and increases computational complexity. However, a short distance easily leads to insufficient information about the neighborhood of the node (Xu et al., 2018). In this article, we propose a graph autoencoder approach for DTI prediction using a heterogeneous network (GADTI), which combines a graph convolutional network, matrix factorization, and random walk. GADTI first constructs a heterogeneous network that integrates eight data sources related to drugs and targets. Then, it runs a graph autoencoder model on the network to discover new DTIs. The encoder of the graph autoencoder includes two components: a GCN and a random walk with restart (RWR). The GCN component aggregates the first-order neighborhood information of each node and uses it to update the feature vector of the node. The RWR component propagates the influence of nodes over the heterogeneous network. Through this, we obtain the embedding vectors of nodes, which are sent to the decoder. We use the matrix factorization model DistMult (Yang et al., 2015) to reconstruct the original heterogeneous network from the embedding vectors of nodes. Through the combination of GCN and RWR, GADTI can provide nodes with more information through a larger neighborhood while avoiding the over-smoothing and computational complexity caused by multi-layer message passing. The experimental results demonstrate that our approach is effective and efficient in predicting potential DTIs. Dataset We adopted a dataset used in previous studies (Luo et al., 2017; Wan et al., 2019). It consists of eight networks, including four types of nodes (drugs, targets, diseases, and side effects) and eight types of edges (drug–drug interaction, DTI, drug–disease association, drug–side effect association, protein–protein interaction, protein–disease association, drug chemical structure similarity, and protein sequence similarity). These data come from public databases such as DrugBank, HPRD, and SIDER. The weights of edges in all networks are non-negative. Furthermore, only the drug chemical structure similarity and the protein sequence similarity are real-valued, representing drug–drug chemical structure similarity scores and protein–protein sequence similarity scores. The others are binary values indicating whether there is an interaction or association between nodes. Table 1 lists the sources and statistics of these data. Spatial-Based Graph Convolutional Network Most recent network embedding methods are based on the GCN, especially the spatial-based GCN. These methods define convolution on a graph as neighborhood information aggregation. They generate embeddings for nodes by aggregating the local neighborhood of the nodes instead of the entire network, which is regarded as a message passing mechanism. A typical spatial-based GCN method includes two phases. In the initialization phase, it generates an initial vector based on the features of each node. If the nodes in the network have no features, a one-hot vector is assigned to each node and a neural network is used to generate the initial vector. In the second phase, the vectors of the nodes are updated by a combination of the aggregated neighborhood vectors and the previous vectors of the nodes. These updates can be done through neural networks or linear transformations.
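To illustrate the two-phase scheme just described, the short Python sketch below performs one round of mean aggregation followed by a linear update on a toy graph. The adjacency matrix, vector dimensions, and the specific aggregator and update function are illustrative assumptions, not the operators of any particular published GCN.

```python
import numpy as np

# One round of spatial graph convolution (message passing) on a toy graph:
# aggregate neighbor vectors, then update each node from (old vector, aggregate).
rng = np.random.default_rng(0)
num_nodes, dim = 5, 8
H = rng.normal(size=(num_nodes, dim))             # current node embeddings h_v
A = np.array([[0, 1, 1, 0, 0],                    # symmetric adjacency of a small graph
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
W = rng.normal(size=(2 * dim, dim))               # update weights (random stand-in for learned ones)

deg = np.maximum(A.sum(axis=1, keepdims=True), 1)
aggregated = (A @ H) / deg                        # AGGREGATE: mean over first-order neighbors
H_new = np.tanh(np.concatenate([H, aggregated], axis=1) @ W)  # UPDATE: concat, linear map, nonlinearity
print(H_new.shape)                                # (5, 8): each vector now carries 1-hop information
```

Repeating this round K times would let each embedding capture K-order neighborhood information, at the cost of the over-smoothing discussed above.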
The embedding vector of a node is a function of its neighborhood (including the node itself). This process is similar to the receptive field of the convolution kernel in image processing, so it is called a GCN. After one aggregation, the embedding vector of a node contains the feature information of its first-order neighbors. If we repeat this aggregation process K times, the embedding vector of the node can capture the feature information of its K-order neighbors. In the spatial-based GCN, the information of a node is first passed to its first-order neighbors, and then propagated to higher-order neighbors through edges on the network. Therefore, these methods are also called message passing methods. The process of the graph convolution operation is summarized as follows: a_v^(n) = AGGREGATE({h_u^(n−1) : u ∈ N(v)}), h_v^(n) = UPDATE(h_v^(n−1), a_v^(n)) (1), where AGGREGATE() and UPDATE() are functions to aggregate neighborhood information and update node vectors, respectively; u, v are nodes; a_v^(n) is the aggregated feature information of v at the n-th iteration; N(v) indicates the neighborhood of v; and h_v^(n) is the embedding vector of v at the n-th iteration. After the iterations, we obtain h_v^(K), which represents the features of v and can be directly used for node-level tasks such as node similarity calculation, node classification, and link prediction. Graph Autoencoder The graph autoencoder takes the network and the feature vectors of the nodes as input to generate a low-dimensional embedding vector of each node or of the entire network. Unlike traditional autoencoders, the encoder of a graph autoencoder is usually a GCN or one of its variants, and the decoder can be an inner product (Kipf and Welling, 2016; Pan et al., 2018) or matrix factorization (Zitnik et al., 2018; Lan et al., 2020). Generative adversarial networks (GANs) (Goodfellow et al., 2014) and attention mechanisms have also been applied to graph autoencoders (Ma et al., 2018; Pan et al., 2018; Jin et al., 2019). For heterogeneous graphs containing multiple edge types, the encoder aggregates neighbor features one by one according to the edge type, and then merges them to obtain the embedding vectors of the nodes (Gligorijevic et al., 2018; Ma et al., 2018; Zitnik et al., 2018). GADTI The data related to drugs and targets are represented in the form of a network, and the DTI prediction is then transformed into a link prediction problem on the network. Definition 1: Network G = (V, R), where v ∈ V and r ∈ R are nodes and edges, respectively. Given a network G, v_d and v_t are a drug node and a target node, respectively. Our goal is to determine whether the unknown edge r_dt = (v_d, v_t) exists, or how likely it is to exist. To this end, we developed GADTI, an end-to-end framework based on the graph autoencoder, to discover new DTIs. This approach combines a graph convolutional network, matrix factorization, and random walk. GADTI first integrates multiple data sources to build a heterogeneous network, and then conducts prediction through a graph autoencoder model. As shown in Figure 1, GADTI has two main components: • An encoder: a GCN followed by an RWR, which produces embeddings for the nodes in G; • A decoder: a matrix factorization model using these embeddings to predict DTIs. Encoder The encoder consists of a GCN and an RWR. The GCN is used to aggregate first-order neighbor information to update the node representations. Then, an RWR on the entire heterogeneous network allows the influence of nodes to spread far away so that we can obtain the final embedding vectors.
This approach can provide more information to nodes through a larger neighborhood while avoiding the over-smoothing and computational complexity caused by multi-layer convolutional networks. Aggregation by GCN In this stage, only the first-order neighborhood of each node is considered. For each node, we first group its first-order neighbors according to the type of edge. Then, for each neighbor group, a neighborhood aggregation operation is performed to aggregate information. Finally, the neighbor information of the different groups is accumulated and concatenated with the previous embedding vector of the node, and then sent to a neural network to generate a new embedding vector. The process of aggregating and updating is defined as follows: a_v^r = σ((1/c_v^r) Σ_{u ∈ N_r(v)} W_r^0 h_u^0 + b_r), h_v^* = MEAN({h_v^0} ∪ {a_v^r : r ∈ R}) (2), where a_v^r refers to the aggregated neighborhood information of v related to edge type r, h_v^0 ∈ R^d refers to the initial embedding vector of v, d denotes the dimension of the vector, R indicates the set of edge types, N_r(v) are the neighbors of v related to edge type r, σ is a non-linear activation function, and W_r^0 ∈ R^(d×d) and b_r ∈ R^d are the edge-type-specific parameter matrix and bias term used to aggregate neighborhood information, respectively. c_v^r is a normalization constant that we choose to be c_v^r = |N_r(v)|. MEAN() is an element-wise mean operator, and h_v^* is the updated embedding vector of v. Figure 2 shows a small example of the network. Drug node D1 is associated with two diseases and one side effect, targets two proteins, and interacts with three other drugs. The bold dotted line indicates the similarity between drugs. The process of the encoder is shown in Figure 3. Multiple single-layer neural networks (SLNs) are used in the encoder according to the edge types. Take the drug node D1 in Figure 2 as an example. Since there are five types of edges connected to D1, there are five SLNs to aggregate the neighbor information of the corresponding edge types. The mean operator is chosen as the aggregation function to perform an element-wise mean of the vectors in {h_v^0} ∪ {a_v^r : r ∈ R}; it results in the new node embedding vector h^*. ReLU(x) = max(0, x) is selected as the element-wise activation function. A projection with learnable parameters is employed to initialize h^0. Propagation by RWR The multi-hop neighborhood information aggregation implemented by multi-layer convolution often leads to over-smoothing. The aforementioned GCN only considers the one-hop graph structure, which causes the multi-hop information of the node to be underutilized. In order to solve this problem, we introduce an RWR, which spreads the influence of nodes to other nodes that are not directly adjacent through a walk on the heterogeneous network. The introduction of multi-hop information extends the range of information aggregation from the first-order neighborhood to the higher-order neighborhood, which is equivalent to increasing the receptive field of the convolution, thereby realizing long-range message passing. Assuming that the transition matrix of the heterogeneous network is A and the restart probability is α, the RWR is defined as follows (Tong et al., 2008): A_ppr = α(I − (1 − α)A)^(−1) (3), where I is the identity matrix, and A_ppr(u, v) indicates the influence of node u on node v. According to Equation (3), we can spread node information over long distances to obtain the final node embedding vectors: H = A_ppr H^* (4), where H^* is the node embedding vector matrix obtained by the aforementioned convolution operation.
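The following minimal NumPy sketch illustrates the two encoder stages described above: per-edge-type aggregation merged by an element-wise mean, followed by random-walk-with-restart propagation computed iteratively rather than through the matrix inverse of Equation (3). The toy adjacency matrices, dimensions, and parameter values are illustrative assumptions, not the trained GADTI model.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 6, 4
H0 = rng.normal(size=(n, d))                                  # initial node embeddings h^0_v

# Toy heterogeneous network: one binary adjacency matrix per edge type r.
adjacency_by_type = {
    "drug-drug": (rng.random((n, n)) < 0.3).astype(float),
    "drug-target": (rng.random((n, n)) < 0.3).astype(float),
}
W_r = {r: rng.normal(size=(d, d)) for r in adjacency_by_type}  # edge-type weights W^0_r (random stand-ins)
b_r = {r: np.zeros(d) for r in adjacency_by_type}              # edge-type bias terms

# --- Stage 1: first-order aggregation per edge type, merged by element-wise mean ---
messages = [H0]
for r, A_r in adjacency_by_type.items():
    deg = np.maximum(A_r.sum(axis=1, keepdims=True), 1)        # normalization c^r_v = |N_r(v)|
    a_r = np.maximum((A_r @ (H0 @ W_r[r] + b_r[r])) / deg, 0)  # ReLU of normalized neighbor sum
    messages.append(a_r)
H_star = np.mean(messages, axis=0)                             # h*_v = MEAN({h^0_v} U {a^r_v})

# --- Stage 2: random walk with restart, iterated instead of inverting the matrix ---
A = sum(adjacency_by_type.values())
A = A / np.maximum(A.sum(axis=1, keepdims=True), 1)            # row-normalized transition matrix
alpha = 0.15                                                   # assumed restart probability
H = H_star.copy()
for _ in range(50):
    H = (1 - alpha) * A @ H + alpha * H_star                   # spread node influence over the network
print(H.shape)                                                 # final embeddings passed to the decoder
```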
Since the time complexity of Equation (4) is O(n²), it may be expensive when the network scale is large. Therefore, we introduce the iterative form of Equation (4): H^(k+1) = (1 − α)AH^(k) + αH^*, with H^(0) = H^* (5). It is easy to prove that H^(k) converges to the closed-form solution of Equation (4) as k → ∞. Because all drug node pairs have edges of chemical structure similarity, there may be two edges between some drug node pairs. The same is true for target node pairs, which brings inconvenience to the random walk. To simplify the problem, we delete the edges representing drug structure similarity and protein sequence similarity from the heterogeneous network. That is, the graph convolution operates on the complete heterogeneous network, whereas the random walk is performed only on a subnetwork of the complete network. Decoder While the encoder maps each node in the heterogeneous network to a real-valued embedding vector, the decoder reconstructs the original network from the embedding vectors. The decoder is essentially a scoring function s(u, r, v) : R^d × R × R^d → R, used to score the triplets (u, r, v) so that we can evaluate the probability of edge r existing between u and v, where u and v are nodes, and r is a certain type of edge. In our experiments, we use DistMult (Yang et al., 2015) as the decoder, which is known to perform well on standard link prediction benchmarks. The scoring function is: s(u, r, v) = e_u^T M_r e_v (6), where e_u and e_v are the embedding vectors of u and v, respectively, e_u^T is the transpose of e_u, and M_r ∈ R^(d×d) is an edge-type-specific diagonal matrix. In terms of Equation (6), we can reconstruct the original networks. Taking the reconstruction of the DTI network as an example: Network^DTI_reconstruction = V_drug M_DTI V_protein^T (7), where V_drug and V_protein are the matrices of drug embedding vectors and target embedding vectors, respectively, and M_DTI is the diagonal matrix used to reconstruct the DTI network. Training The loss of network reconstruction is as follows: Loss = Σ_{r ∈ R} ||Q^r||², with Q^r = P ∘ (Network^r_original − Network^r_reconstruction) (8), where Network^r_original and Network^r_reconstruction are the original network with edge type r and the corresponding reconstructed network, respectively. P is a mask matrix where P_ij = 1 indicates that the element in the i-th row and j-th column of Network^r_original appears in the training set, and P_ij = 0 otherwise. Q^r is a matrix that stores the difference between the predicted values and the ground truth in the training set. We further add a regularization term on the weight coefficients to obtain the objective function: Objective = Loss + λ||w||² (9). Our optimization goal is to minimize Equation (9), where ||w||² is the sum of the squares of all the weights, and λ is an adjustment coefficient. In GADTI, there are three groups of trainable parameters: (1) four matrices for initializing node vectors, i.e., W_drug, W_disease, W_protein and W_sideeffect; (2) 12 edge-type-specific neural network weight matrices W_r^0 for aggregating neighborhood information; and (3) 8 edge-type-specific diagonal matrices M_r used to reconstruct the networks. We adopted the same sampling strategy and dataset division strategy as Wan et al. (2019). For the DTI network, a sample pair with an edge connection is regarded as a positive sample, and a sample pair without a connection is a negative sample. We randomly collect 10 negative samples for each positive sample to form the DTI dataset used by the model. Ten-fold cross-validation was used for performance evaluation. In each fold, the DTI dataset is randomly divided into three independent parts: training set, validation set and test set, with ratios of 0.855, 0.045, and 0.1, respectively. The training set of GADTI is composed of the training set of DTIs and the other seven datasets.
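As a rough PyTorch sketch of the decoder and training objective described above, the snippet below scores all drug–target pairs with DistMult and computes a masked squared reconstruction error with L2 regularization. The tensors are random placeholders and the simplified masking and regularization do not reproduce the exact training code.

```python
import torch

n_drug, n_prot, d = 4, 5, 8
V_drug = torch.randn(n_drug, d, requires_grad=True)    # drug embeddings from the encoder (placeholder)
V_prot = torch.randn(n_prot, d, requires_grad=True)    # target embeddings from the encoder (placeholder)
m_dti = torch.randn(d, requires_grad=True)             # diagonal of the edge-type matrix M_DTI

# DistMult score for every drug-target pair: s(u, DTI, v) = e_u^T M_DTI e_v.
scores = (V_drug * m_dti) @ V_prot.T                   # shape (n_drug, n_prot): reconstructed DTI network

dti_true = (torch.rand(n_drug, n_prot) < 0.2).float()  # toy ground-truth DTI network
mask = (torch.rand(n_drug, n_prot) < 0.5).float()      # P: 1 where the pair belongs to the training set

# Masked squared reconstruction error plus an L2 term (Equations 8-9, simplified to one edge type).
lam = 1e-4
loss = ((mask * (dti_true - scores)) ** 2).sum() + lam * (m_dti ** 2).sum()
loss.backward()                                        # gradients flow back to embeddings and M_DTI
print(float(loss))
```

In the full model the same masked loss would be summed over all eight edge types, with one diagonal matrix M_r per type.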
In each iteration, we update the model parameters on the training set, and then evaluate the model on the validation set. If the new model parameters show better performance on the validation set than before, the test set is used to test the generalization ability of the model. In addition to L2 regularization, early stopping is introduced to alleviate over-fitting. If the performance of the model on the validation set does not increase for n iterations, it is considered that over-fitting has occurred, so the training is stopped early. The adaptive moment estimation algorithm (Adam) (Kingma and Ba, 2015) is selected to minimize the objective function. The dimension of the embedding vector and the learning rate are set to 1,000 and 0.001, respectively, according to independent experiments. Our code runs on PyTorch V1.7 and DGL V0.5. Performance Evaluation We used 10-fold cross-validation to test the performance of our algorithm, and stratified sampling to ensure that the proportion of samples in each category in the training set and test set was the same as in the original dataset. The area under the receiver operator characteristic curve (AUROC) (Le, 2019) and the area under the precision–recall curve (AUPRC) were chosen to evaluate the performance of our approach and the baseline methods. The receiver operator characteristic (ROC) curve is suitable for evaluating the overall performance of a classifier because it takes both positive and negative samples into consideration (Le et al., 2020). However, class imbalance often occurs in actual datasets. For example, in a DTI network, the number of negative samples is much larger than that of positive samples. In this case, the ROC curve presents an overly optimistic estimate of the effect. Conversely, both indicators of the precision–recall (PR) curve focus on positive samples. In class imbalance cases, people are mainly concerned with positive samples, and thus the PR curve is widely considered to be better than the ROC curve. We use both AUROC and AUPRC; the larger the values of AUROC and AUPRC, the better the performance of the method. Comparison With Baseline Methods To evaluate the performance of GADTI, we compared it with four popular computational methods: MSCMF, TL_HGBI (Wang et al., 2014), DTINet (Luo et al., 2017), and NeoDTI (Wan et al., 2019). These methods all predict DTIs from a heterogeneous network composed of multiple datasets. MSCMF uses matrix factorization methods and linear combinations of matrices to achieve prediction. TL_HGBI first establishes a three-layer heterogeneous network consisting of disease, drug, and protein data, and then uses an iterative strategy for drug repositioning. Meanwhile, DTINet focuses on learning low-dimensional vector representations of features that accurately interpret the topological characteristics of each node in a heterogeneous network, and then makes predictions based on these representations through a vector space projection scheme. NeoDTI is close to a non-random-walk version of GADTI: it first aggregates neighborhood information, and then reconstructs the network through two bilinear transformations. We ran all five methods on the same dataset and implemented three rounds of 10-fold cross-validation to compare their performance. The hyperparameters used in the baseline methods are the same as those in NeoDTI. When the ratio of positive samples to negative samples is 1:10, the results of GADTI and the baseline methods are shown in Figures 4, 5.
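As an aside on the two metrics defined in the Performance Evaluation subsection above, the scikit-learn calls below compute AUROC and AUPRC on a made-up, imbalanced set of scores; average precision is used here as the usual practical estimate of the area under the PR curve, and the numbers are not from the actual experiments.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

# Toy imbalanced test set: 1 positive DTI for every 10 negatives, with made-up scores.
rng = np.random.default_rng(0)
y_true = np.array([1] * 20 + [0] * 200)
y_score = np.concatenate([rng.uniform(0.4, 1.0, 20),    # positives tend to score higher
                          rng.uniform(0.0, 0.7, 200)])  # negatives tend to score lower

auroc = roc_auc_score(y_true, y_score)                  # area under the ROC curve
auprc = average_precision_score(y_true, y_score)        # estimate of the area under the PR curve
print(f"AUROC={auroc:.3f}  AUPRC={auprc:.3f}")          # AUPRC is far more sensitive to class imbalance
```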
We observe that GADTI has an AUROC value of 0.9582, which is higher than those of NeoDTI (0.9509), DTINet (0.9208), TL_HGBI (0.8914), and MSCMF (0.8355). Meanwhile, in terms of AUPRC, which is more suitable for the current class imbalance case, GADTI is also better than all the baseline methods. Our approach slightly outperforms the second-best method (by 0.73% in terms of AUROC and 0.79% in terms of AUPRC). Some DTI prediction methods based on machine learning include all unknown DTIs (treated as negative examples) in the training. For a better comparison, we performed an additional test in this scenario. The experiment shows that GADTI still achieves the best performance, with an AUROC of 0.9369 and an AUPRC of 0.6205, and it stays ahead by a bigger margin. We notice that the AUROC values of all methods range from 0.8504 to 0.9369, but the AUPRC values range from 0.0312 to 0.6205, which is a large gap. Figure 6 shows the experimental results for the dataset including all unknown DTIs. The dataset described in section Dataset contains homologous proteins and structurally similar drugs, which reduces the difficulty of predicting their interactions. In other words, the good performance of a DTI prediction method may come from a simple algorithm rather than a well-designed algorithm. Therefore, we carried out an additional experiment to address this issue. FIGURE 4 | Comparison between MSCMF, TL_HGBI, DTINet, NeoDTI, and GADTI in terms of AUROC and AUPRC based on 10-fold cross-validation (#positive:#negative = 1:10). Figure 7 shows the experimental results, where the ratio of positive samples to negative samples is 1:10. GADTI greatly outperforms the second-best method (by 2.55% in terms of AUROC and 4.74% in terms of AUPRC). Case Study To evaluate the prediction performance, we downloaded the latest approved DTI dataset (V5.1.8) from DrugBank. DISCUSSION Finding novel DTI pairs is of great significance for drug development. However, biochemical experiments are very costly and time-consuming. Therefore, computational methods have attracted much attention recently because they can quickly and cheaply evaluate potential DTIs. Early DTI prediction studies are mainly divided into two categories: (a) inference based on drug similarity and target similarity (Chen et al., 2012; Cheng et al., 2012; Mei et al., 2013; Wang et al., 2014; Seal et al., 2015); and (b) binary prediction based on drug features and target features (Nagamine et al., 2009; Lan et al., 2016; Olayan et al., 2018; Chen et al., 2019; Shi et al., 2019). The GADTI approach proposed in this paper also utilizes similarity data and the features of drugs and targets, which are represented as vectors. However, unlike previous studies, the network embedding method and the graph autoencoder framework are introduced to learn the embedding feature vectors of drugs and targets from a multi-source heterogeneous network for predicting unknown DTIs. We use AUROC and AUPRC to evaluate the performance of GADTI and the baseline methods. The results show that GADTI greatly outperforms the other methods in three different scenarios. Only NeoDTI achieves comparable results in the situation where the ratio of positive samples to negative samples is 1:10 (Figure 4). This may be because NeoDTI also adopts a GCN for aggregating and updating. In the case study, GADTI accurately predicted 54.8% of the new DTIs (Table 2). We observe that the hit numbers of configuration B are lower than those of configuration A for m = 20, 30, and 40. However, the gap decreases as m decreases.
We can see that for m = 10 the result is reversed: the hit number of configuration B is much greater than that of configuration A. A reasonable inference is that configuration B, in which all unknown pairs are treated as negative examples, can make the ranking of potential DTIs more accurate. From our experiments we conclude that, compared with the baseline methods, GADTI is more reliable and effective in discovering potential DTIs. Hence, it can be used to identify new targets for existing drugs. The reason why GADTI performs well is that it aggregates multi-hop neighborhood information while avoiding over-smoothing. First, GADTI uses a GCN to aggregate first-order neighbor information from the heterogeneous network to update the node representations. Then, an RWR is carried out on the whole network to spread the influence of nodes. The combination of the GCN and RWR introduces multi-hop information for node feature updating. It extends the scope of information aggregation from the first-order neighborhood to the higher-order neighborhood, which is equivalent to increasing the receptive field of the convolution, thereby realizing long-range message passing. Although GADTI has achieved outstanding results in DTI prediction, it still has room for improvement. For new drug or target nodes that did not appear during training, GADTI cannot directly predict their interactions with known nodes; that is, it needs to be retrained to make such predictions. In addition, GADTI cannot predict isolated new nodes that are not associated with known drug or target nodes. In future research, we will introduce node features and improve the model structure to try to solve these two problems. DATA AVAILABILITY STATEMENT Publicly available datasets were analyzed in this study. This data can be found at: https://github.com/shulijiuba/GADTI. AUTHOR CONTRIBUTIONS ZL and QC conceived the project and developed the prediction approach. ZL and WL designed and implemented the experiments. ZL, HP, XH, and SP analyzed the results. ZL wrote the paper. All authors read and approved the final manuscript.
Prefrontal Cortex and Somatosensory Cortex in Tactile Crossmodal Association: An Independent Component Analysis of ERP Recordings Our previous studies on scalp-recorded event-related potentials (ERPs) showed that somatosensory N140 evoked by a tactile vibration in working memory tasks was enhanced when human subjects expected a coming visual stimulus that had been paired with the tactile stimulus. The results suggested that such enhancement represented the cortical activities involved in tactile-visual crossmodal association. In the present study, we further hypothesized that the enhancement represented the neural activities in somatosensory and frontal cortices in the crossmodal association. By applying independent component analysis (ICA) to the ERP data, we found independent components (ICs) located in the medial prefrontal cortex (around the anterior cingulate cortex, ACC) and the primary somatosensory cortex (SI). The activity represented by the IC in SI cortex showed enhancement in expectation of the visual stimulus. Such differential activity thus suggested the participation of SI cortex in the task-related crossmodal association. Further, the coherence analysis and the Granger causality spectral analysis of the ICs showed that SI cortex appeared to cooperate with ACC in attention and perception of the tactile stimulus in crossmodal association. The results of our study support with new evidence an important idea in cortical neurophysiology: higher cognitive operations develop from the modality-specific sensory cortices (in the present study, SI cortex) that are involved in sensation and perception of various stimuli.
INTRODUCTION Recent monkey studies have shown evidence that cells in primary somatosensory cortex (SI) and secondary somatosensory cortex (SII) change their firing in correlation with tactile unimodal working memory [1][2][3][4]. In a recent human study [5], it was shown that SI cortex retained a memory trace of the tactile stimulus for a short period. Further, cells in the somatosensory cortex of monkeys were shown to respond to task-related stimuli of more than one sensory modality in working memory tasks [6][7][8]. Crossmodal effects have also been observed in studies on neural mechanisms of attention in monkeys, in which firing changes in cells of somatosensory cortex were found in the crossmodal attention switch [9,10], and in attention studies of humans, in which changes in early modality-specific sensory (visual, auditory, and somatosensory) ERP (event-related potential) components were detected [11,12]. The above observations suggest that crossmodal links affect sensory-perceptual processes within modality-specific cortical regions [11]. In behavioral studies, it has been shown that viewing the stimulated body part can improve tactile discrimination at the stimulated site [13][14][15]. The visual-tactile improvement may be linked to modulations of neural activities in SI [15,16] through the higher-level multimodal associative cortex [16][17][18][19], suggesting the involvement of both sensory and associative cortical areas in visual-tactile crossmodal associations. In our previous study [20], we found that the amplitude of the ERP component N140 evoked by the tactile stimulus was increased when the subject expected a coming visual stimulus that had been paired with the tactile stimulus, in comparison to this component evoked by the same tactile stimulus without crossmodal expectation. It has been suggested that the somatosensory N140 is generated by sources in multiple cortical areas, including frontal cortex and SII cortex [21][22][23]. By applying independent component analysis (ICA) in the present study to the EEG (electroencephalogram) data recorded in the unimodal and crossmodal tasks, we explored independent components (ICs) that represented neural activities in cortical areas. We found that the crossmodal modulation of the N140 represented the neural activities in somatosensory (SI, and possibly SII as well) and frontal cortical areas that cooperated with each other in crossmodal association in the tasks. The ERP data of 8 out of 10 participants were from that study. Figure 1 (lower) shows the ERP components P45, P100, and N140 at 15 electrodes. A three-way repeated measures multivariate analysis of variance (MANOVA) was performed for comparisons of amplitude and latency of the components. The within-subject factors were LR (left-right electrode locations), AP (anterior-posterior electrode locations), and Modality (crossmodal and unimodal). Amplitude of N140 recorded at those 15 electrodes was significantly affected by Modality (F = 12.8, df (Effect) = 1, df (Error) = 9, p < 0.01) and AP (F = 16.9, df (Effect) = 4, df (Error) = 6, p < 0.01), but not by LR. Latency of N140 was significantly affected by AP (F = 5.3, df (Effect) = 4, df (Error) = 6, p < 0.001), but not by Modality or LR. Amplitude of P100 was significantly modulated by LR (F = 10.7, df (Effect) = 2, df (Error) = 8, p < 0.01) but not by Modality or AP. Latency of P100 was significantly affected by LR (F = 8.3, df (Effect) = 2, df (Error) = 8, p < 0.05), but not by Modality or AP.
Independent components (ICs) Thirty different independent components were found through ICA for each task of each subject. In comparisons among them, 2 independent components across the 2 tasks and 10 subjects showed consistent temporal activities and topographies of their coefficients of spatial projection to the scalp electrodes. We defined those 2 ICs as IC-F (F: frontal) and IC-RS (RS: right somatosensory). Topographies The IC-F appeared to be active in prefrontal areas, and the IC-RS appeared to be active in right somatosensory areas. The topography of the IC-F is shown in Figure 2, and that of the IC-RS is shown in Figure 3. Individual topographic maps were normalized by root mean square and adjusted to the same polarity [24]. Topographies of both IC-F and IC-RS were apparently consistent across the 10 subjects and the two tasks. Topographies of IC-F and IC-RS were averaged respectively across subjects and tasks (Figure 4), and the grand mean of the topographies of each IC was then submitted to BESA2000 to obtain the location of the IC-related dipole in the brain (Figure 4). The IC-F dipole location was found to be around the medial prefrontal areas, in the anterior part of the midline of the brain (Talairach coordinates [25]: 0.5, 19.5, 43.4). This location was estimated to be in anterior cingulate cortex (ACC, area 32). The IC-RS dipole was estimated to be around the right primary somatosensory area (Talairach coordinates: 33.0, −22.5, 41.6; area 3). Temporal activities Temporal activities of the two independent components, IC-F and IC-RS, were analyzed. Back-projections of the IC-F showed waveforms with peaks similar to the original ERP components P100 and N140 (Figure 5). A four-way repeated measures MANOVA (see Materials and Methods) showed no significant difference in latency of those two components between the IC-F back-projections and the original ERPs (P100: F = 3.3, df (Effect) = 1, df (Error) = 9, p = 0.103; N140: F = 0.2, df (Effect) = 1, df (Error) = 9, p = 0.691). Results of the amplitude analysis showed that substantial proportions of the original P100 and N140 were contributed by the IC-F. No significant difference was observed between modalities in the IC-F contributions to the N140, although a significant difference between modalities in this component was shown in the original ERPs. Back-projections of the IC-RS showed a component similar to the original ERP component P45 observed at three electrodes on the right side, contralateral to the tactile stimulus (Figure 6). No significant difference was observed between the IC-RS component and the ERP P45 in latency among the 9 electrodes. This IC-RS component was significantly affected among these 9 electrodes by AP (F = 12.1, df (Effect) = 2, df (Error) = 8, p < 0.005) and LR (F = 33.6, df (Effect) = 2, df (Error) = 8, p < 0.0005), but not by Modality (F = .3, df (Effect) = 1, df (Error) = 9, p = 0.07), although a post hoc test (Tukey HSD) showed that this component was significantly higher in the crossmodal task at several electrodes (Figure 7).
Back-projections of the IC-RS in the ranges of 70–100 ms and 100–160 ms were analyzed respectively. The peak amplitude in the duration of 70–100 ms was significantly affected by AP (F = 17.0, df (Effect) = 2, df (Error) = 8, p < 0.005) and LR (F = 19.1, df (Effect) = 2, df (Error) = 8, p < 0.001), but not by Modality, and in the duration of 100–160 ms the peak amplitude was also significantly affected by AP (F = 9.2, df (Effect) = 2, df (Error) = 8, p < 0.01) and LR (F = 16.0, df (Effect) = 2, df (Error) = 8, p < 0.005), but not by Modality. For both durations, interactions between AP and LR were significant (F = 6.9, df (Effect) = 4, df (Error) = 6, p < 0.02). The results of the post hoc test (Tukey HSD) for differences in the amplitude between the two tasks are shown in Figure 7. Time-frequency representation (TFR), coherence, and Granger causality spectra Power spectra for IC-F and the original ERPs at FCz, and for IC-RS and the original ERPs at C4, were analyzed across all subjects (Figure 8). At these two electrode sites, back-projections from IC-F and IC-RS showed the highest amplitude, respectively. Results indicated that in the time range of 100–300 ms, the independent components and ERPs showed activities mainly with frequency in the theta band (3–7 Hz). Coherence between IC-F and IC-RS is indicated in Figure 9 (left side). A three-way repeated measures MANOVA (see Materials and Methods) showed no significant effects of any main factor (Modality, Duration, Frequency-Band). A post hoc test (Tukey HSD), however, showed in the crossmodal task significantly stronger coherence in the theta band during 100–200 ms after the onset of S-1 compared with the baseline (−100 to 0 ms). Granger causality spectra were obtained (see Materials and Methods) to test the direction of the connectivity between IC-F and IC-RS in the crossmodal task, since significant coherence was observed between these ICs in the task. Results showed trends that the connectivity after the onset of the stimulus (0–300 ms) was stronger than before the stimulus (−100 to 0 ms) in the task (Fig. 10). The causality index (CI) was significantly affected by Frequency-Band (F = 10.5, df (Effect) = 3, df (Error) = 7, p < 0.01), but not by the other two factors (Modulation, Duration). The interaction between Modulation and Frequency-Band was marginal (F = 4.2, df (Effect) = 3, df (Error) = 7, p = 0.05). Post hoc analysis showed that, in general, the pre-stimulus CI was the smallest. For crossmodal bottom-up modulation (Fig. 10), CI in the theta band in the duration from 100 ms to 200 ms after the onset of S-1 was significantly (p < 0.001) larger than that before S-1 (−100 to 0 ms).
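The coherence values reported above quantify frequency-specific synchronization between the two IC time courses. The sketch below shows how magnitude-squared coherence could be computed for two such signals with SciPy; the synthetic signals, sampling rate, and window settings are assumptions and do not reproduce the study's actual pipeline (which, for directionality, additionally used Granger causality spectra).

```python
import numpy as np
from scipy.signal import coherence

# Two synthetic IC time courses sharing a theta-band (about 5 Hz) component.
fs = 250                                        # assumed sampling rate in Hz
t = np.arange(0, 2.0, 1 / fs)
rng = np.random.default_rng(0)
shared = np.sin(2 * np.pi * 5 * t)              # common 5 Hz oscillation
ic_f = shared + 0.5 * rng.normal(size=t.size)   # noisy copy standing in for the frontal IC
ic_rs = np.roll(shared, 5) + 0.5 * rng.normal(size=t.size)  # delayed noisy copy for the somatosensory IC

f, Cxy = coherence(ic_f, ic_rs, fs=fs, nperseg=128)
theta = (f >= 3) & (f <= 7)
print("mean theta-band coherence:", Cxy[theta].mean())  # high value indicates synchronized theta activity
```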
DISCUSSION ICA is a technique that has been successfully applied to human EEG studies in the last decade [24,[26][27][28][29][30][31]. ICA completely decomposes single-trial (or continuous) EEG data, separating the data into distinct information sources. By using this technique in data analysis, the multi-channel EEG can be decomposed into spatially fixed, temporally maximally independent components, and the scalp maps associated with some of these ICs resemble the scalp projections of synchronous activity inside the brain in a cognitive task [27]. Thus, when subjects perform behavioral tasks the ICs likely represent neural activities in those brain areas where they are located [24,26,27,31,32]. In the present study, the use of the ICA technique enabled us to find from the original EEG data two ICs (IC-F and IC-RS) that represented neural activities correlated with the tactile working memory tasks, unimodal or crossmodal. This finding strongly suggests that the cortical locations of those two ICs, medial prefrontal cortex and SI, are involved in perception of the tactile stimulus and in crossmodal associations in the task, and it may therefore provide us with a better understanding of the neural mechanism underlying crossmodal working memory. The results of our study show the benefit of applying the ICA technique, which led us to valuable findings that would not have been possible with the traditional ERP analysis. Studies have shown that P100 and N140 are enhanced when attention is directed to the somatosensory stimuli, and are modulated by endogenous spatial attention as well [22,[33][34][35][36][37][38][39][40]. In the present study, the subject's attention was directed to the tactile stimulus (S-1) to detect the frequency of vibration. The level of attention was essentially the same in both tasks. Back-projections of IC-F showed its substantial non-differential (similar in both tasks) contributions to P100 and N140, indicating that medial prefrontal cortex, most likely ACC, was one of the major sources of these two ERP components, which represented neural activities of ACC in attention to the same tactile stimulus (S-1) in the tasks. This finding showed that the ACC was involved in attention to the tactile stimulus as early as 100 ms after the onset of the stimulus. The present finding is consistent with the findings of other studies showing that the ACC plays an important role in attention to various stimuli [23,[41][42][43][44][45][46][47][48].
The back-projection analysis showed that the other independent component found in this study, IC-RS, was the main generator of the ERP component P45, which typically represents the neural activity in SI cortex evoked by the contralateral somatosensory stimulus [21,35,36,49]. This suggested that changes in the back-projection from the IC-RS represented neural activities of SI cortex in the task. The location of the IC-RS also supported the notion that the dynamic changes in IC-RS activity represented the changes in SI activity. Significant differences in IC-RS back-projection between the unimodal task and the crossmodal task were observed after the onset of the tactile stimulus, apparently because of the enhancement of IC-RS activity in the crossmodal task. The enhancement and the location of IC-RS strongly suggested that the crossmodal association between tactile and visual stimuli involved activities in the SI cortex as early as 100 ms after the onset of the tactile stimulus, or even earlier, since the P45 of the back-projection from IC-RS also showed trends toward differential reaction between the tasks. This new finding in the present study agrees with the findings in other studies that show participation of SI cortex in crossmodal association in monkeys [6][7][8] and in humans [15,16]. In our previous study [20], we argued that the enhancement of N140 in the crossmodal task was unlikely to be due to attention, movement, or load of the task, but rather was related to crossmodal transfer of information between the tactile and the visual modalities in the task. The IC-F found in our present study had a sizeable contribution to the N140, but its contribution did not show any significant difference between the tasks. Although we were not able to locate an IC that was consistent across subjects and tasks in the SII area because of the limitations of the ICA technique, it is a reasonable assumption that the significant difference in N140 between the tasks likely resulted from the activity in SII, since SII has been shown to be another major source, in addition to the prefrontal source, generating the somatosensory N140 [22,[50][51][52][53]. Nonetheless, the possibility that other prefrontal areas contributed to the difference in N140 cannot be completely eliminated. It has also been suggested that P100 generators are located in the SII cortex [51,[54][55][56][57]. Therefore, the crossmodal modulation of both P100 and N140 generated by the subject's expectation of visual stimuli in the task may involve changes in neural activities in the SII cortex. The results of our study suggest that the crossmodal association may occur not only in association cortical regions, such as frontal cortex and posterior parietal cortex, but also in tactile modality-specific cortical regions, such as SI and SII cortex.
Studies have shown that the ACC is involved in attentional modulation of sensory processing in primary visual and primary auditory cortices (e.g., [47]). The theta oscillation in ACC may play an important role in this attentional modulation [58-60]. Our coherence analysis indicated that in the crossmodal task, compared with the baseline period, the coherence in the theta range during the period of 100-200 ms after the onset of the tactile stimulus was significantly increased between IC-F and IC-RS, showing that the activity in SI cortex may be synchronized with the activity in ACC in crossmodal association. This coherence between the two areas suggested that ACC cooperated with SI cortex in attention and perception of the tactile stimulus under the influence of the crossmodal association. The Granger causality analysis of the coherence indicates that activities of ACC may be affected by SI cortex (bottom-up) as early as 100-200 ms after the onset of the tactile stimulus. In conclusion, modulation of somatosensory cortical activities was observed in the present study in the crossmodal task. Although how tactile crossmodal association is processed in the somatosensory system is still not well understood, our study clearly shows that SI cortex (presumably SII cortex as well) participates in the task-related crossmodal association that has been suggested by previous monkey and human studies (e.g., [7,16]). In the process of crossmodal association, somatosensory cortex appeared also to cooperate with the higher-level association cortex, the medial prefrontal cortex, in attention and perception of the tactile stimulus. Taken together, the results of our study support with new evidence the idea that higher cognitive operations develop from the modality-specific sensory cortices that are involved in sensation and perception of various stimuli [1, 61-68], and fit the concept of the perception-action cycle [69,70] that describes the cortical neural dynamics of sensory-motor behaviors. MATERIALS AND METHODS Details of the experimental procedures for behavioral tasks and EEG recording have been described previously [71]. Participants Twelve paid normal adult volunteers were recruited for the present study (10 men, 2 women, aged 19-47 years). Two participants were excluded because of excessive blinking or excessive muscle artifacts. Thus data from 10 subjects were collected and analyzed in the study. The data of 8 out of those 10 subjects were also used previously [20]. All participants signed informed consent. The protocols of the experiments were approved by the IRB of the Johns Hopkins School of Medicine.
Stimuli and EEG recording Experiments were carried out in a quiet, dimly lit room. Participants sat in a comfortable chair, facing a light-emitting diode (LED). Behavioral tasks The scalp-ERPs were recorded when participants performed a tactile-tactile delayed matching-to-sample task (unimodal task) or a tactile-visual delayed matching-to-sample task (crossmodal task). The subjects were instructed to focus on the LED throughout a recording session to avoid eye movement and eye blinking within any trial of the task. Trials that showed eye-blinks, excessive eye movements, or muscle artifacts were excluded from data analysis. In the unimodal task, a complete trial contained a sequence of events (Figure 1 upper). A trial started with stimulus-1 (S-1), a 100-ms tactile vibration of either high (150 Hz) or low (80 Hz) frequency. After a delay of 1,500 ms, stimulus-2 (S-2), again a 100-ms tactile vibration (150 Hz or 80 Hz), was presented. During the delay, the subject was instructed to memorize the vibration frequency of S-1, and to expect an S-2 that would match the frequency of S-1. The subject indicated at the end of the trial whether S-2 matched S-1 by pressing one of two buttons (e.g., left for match, right for nonmatch). The frequency of S-1 or S-2 was presented randomly from trial to trial to prevent the subject from getting any clue for performance. The intertrial interval between trials was chosen randomly in a range of 4-5 seconds. The subject's response time to S-2 was recorded. In the crossmodal task (Figure 1 upper), the task sequence was identical to the unimodal task except that in this task S-2 was a visual cue (100 ms), a green or red LED associated with the tactile vibration. Associations between the tactile stimuli (S-1) and the visual stimuli (S-2) were assigned before the subject started performing the task (e.g., green associated with high frequency; red with low frequency) and counterbalanced across subjects. At the end of each trial, the subject indicated by pressing a button whether S-2 (LED) was associated with S-1. EEG data analysis Original EEG data from which trials with eye-blinks, excessive eye movements, or muscle artifacts had been excluded were filtered with a digital zero-phase filter (Finite Impulse Response filter, pass band 2 to 30 Hz). The amplitude of an ERP component was calculated as the difference between its peak and the baseline (200 ms preceding the onset of S-1) mean value. Its latency was measured from S-1 onset to the peak. A three-way repeated measures MANOVA was performed for comparisons of amplitude and latency of the original ERP components. The within-subject factors of the analysis were left-right electrode locations (LR) (left, center, right, corresponding to electrode locations 3, z, and 4), anterior-posterior electrode locations (AP) (frontal, frontocentral, central, centroparietal, and parietal levels, corresponding to electrode locations F, FC, C, CP, and P), and Modality (crossmodal and unimodal).
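As a rough illustration of the filtering and peak-measurement steps described above, the sketch below applies a zero-phase FIR band-pass (2-30 Hz) to a continuous multi-channel recording and extracts a baseline-corrected peak amplitude and latency from an averaged epoch. The sampling rate, filter length, and peak search window are illustrative assumptions not stated in the text, and the original analysis was done in Matlab/EEGLAB, so this Python version is only a minimal sketch.

```python
import numpy as np
from scipy.signal import firwin, filtfilt

fs = 500.0        # sampling rate in Hz (assumed; not given in the text)
numtaps = 301     # FIR filter length (assumed)

# Zero-phase band-pass 2-30 Hz: FIR design + forward-backward filtering.
b = firwin(numtaps, [2.0, 30.0], pass_zero=False, fs=fs)

def bandpass_zero_phase(eeg):
    """eeg: array (n_channels, n_samples) of the continuous recording; returns a filtered copy."""
    return filtfilt(b, [1.0], eeg, axis=-1)

def erp_peak(epoch, t, baseline=(-0.2, 0.0), window=(0.02, 0.2)):
    """Baseline-corrected peak amplitude and latency of one averaged-epoch channel.
    `window` is a hypothetical search range for the component of interest."""
    base = epoch[(t >= baseline[0]) & (t < baseline[1])].mean()
    mask = (t >= window[0]) & (t <= window[1])
    seg, seg_t = epoch[mask], t[mask]
    i = np.argmax(np.abs(seg - base))
    return seg[i] - base, seg_t[i]   # amplitude relative to baseline, latency in seconds
```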
Independent component analysis Analysis of EEG data recorded from 30 electrodes was performed by using Matlab 7.0 (MathWorks, Natick, MA) and EEGLAB 4.51 (Swartz Center for Computational Neuroscience, La Jolla, CA; http://www.sccn.ucsd.edu/eeglab), a freely available open source software toolbox. BESA2000 (MEGIS, Graefeling, Germany) was also used to localize dipoles of independent components (ICs). The filtered EEG data (2-30 Hz), which preserved theta, alpha, and beta band information [31], were used for the ICA study. The onset of S-1 in each trial was used as the task-event marker to separate a trial into a period before the onset and a period after the onset. In each trial, filtered EEG of 1,500 ms (500 ms before the onset of S-1 and 1,000 ms after it) was extracted from the continuous EEG to form a data epoch. The mean value of EEG amplitude in the first 100 ms of the epoch was calculated from all trials of each task in each individual subject. To obtain the EEG data epoch for further processing, this mean value was then subtracted from each corresponding data epoch to reduce the influence of EEG variance across trials. All data epochs were put together and submitted to infomax ICA [72,73], which belongs to the family of ICA algorithms performing blind source separation. A 30×30 unmixing square matrix was found by using infomax ICA. When this matrix was multiplied by the EEG data epochs, maximally temporally independent activities were obtained. In this calculation of the independent activities, a weight change of 10e-6, with a maximum of 800 iterations, was set as the stop criterion [24]. Let X denote the EEG data and M denote the unmixing square matrix. Then the independent activities (S) are: S = MX. We can rewrite this formula as X = M⁻¹S. In this formula, one row of the matrix S represents the temporal activity of one IC, and the corresponding column of the matrix M⁻¹ represents this IC's spatial pattern at the scalp electrodes. The back-projection of an IC at one electrode is obtained by multiplying the temporal activity of this IC with its coefficient of the corresponding spatial pattern at this electrode. The EEG at one electrode can be considered as the sum of the back-projections of all ICs at this electrode. The temporal independent activity and its corresponding spatial pattern together characterize an IC that may correlate with the activity of a neuronal clique. In the present study, we screened activities of ICs to determine potentially common temporal patterns of those ICs across all trials of each task and subjects, and we also visually screened topographies of ICs to assess their potentially common spatial patterns across tasks and subjects. ICs showing event (onset of S-1)-related activities consistently across trials of each task and subjects, and spatial topographies consistently across tasks and subjects, were selected in the screening. The spatial topographies may reflect dipole activity, presumably caused by partially synchronous activities within certain cortical source patches that produce far-field potentials through volume conduction. The above process of selection resulted in identification of ICs for each subject and each task. The grand average topography across subjects and tasks of a selected IC was submitted to BESA2000, which uses a standard four-shell spherical head model (i.e., brain, cerebrospinal fluid, bone, and scalp) to find the location of the IC-related dipole (source model) in the brain. The dipole was derived in BESA2000 by fitting it iteratively to the averaged IC topography parameters until minimal residual variance was reached. In the present study, values of residual variance lower than 10% were used as the threshold [74].
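The unmixing/back-projection relationship described above (S = MX, X = M⁻¹S) can be sketched in a few lines. This is only a minimal illustration: it uses scikit-learn's FastICA on randomly generated 30-channel toy data as a stand-in for the extended infomax algorithm in EEGLAB (with MNE-Python, an infomax ICA would be closer), and the stop-criterion values mirror those quoted above.

```python
import numpy as np
from sklearn.decomposition import FastICA  # stand-in for EEGLAB's infomax ICA

rng = np.random.default_rng(0)
X = rng.standard_normal((30, 10_000))        # toy data: 30 electrodes x time points

ica = FastICA(n_components=30, random_state=0, max_iter=800, tol=1e-6)
S = ica.fit_transform(X.T).T                 # IC activations S, shape (30 ICs, n_samples)
A = ica.mixing_                              # mixing matrix (M^-1), shape (30 electrodes, 30 ICs)

# Back-projection of one IC (here IC 0) onto all scalp electrodes:
ic = 0
backproj = np.outer(A[:, ic], S[ic, :])      # this IC's contribution to every channel over time

# Summing the back-projections of all ICs (plus the removed mean) reconstructs the data:
X_reconstructed = A @ S + ica.mean_[:, None]
```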
Time-frequency representations (TFRs) and coherence TFRs of ICs and ERPs from the electrodes that showed the largest IC back-projections were computed on single trials in the frequency range of 2-30 Hz by using a Hanning-windowed short-time Fourier transformation. The window had a fixed length of 250 ms, moving across every time stamp. The mean value of the windowed period was removed to avoid the variation of direct current. Zeros were then added after each windowed period to make the TFRs smoother across the frequency axis. The ratio of the zero-pad to the windowed period was 32. The TFRs were then normalized for each frequency by subtracting the baseline (200 ms before the onset of S-1) mean value and dividing by the baseline standard deviation [75]. The coherence spectrum of the two independent components (ICs) was calculated as C12(f) = |S12(f)|^2 / (S1(f) S2(f)), where S12(f) is the cross-spectrum of the two ICs and S1(f) and S2(f) are their power spectra. The window length used in the coherence calculation was the same as that used in the calculation of the power spectrum. The ratio of the zero-pad to the windowed period was 8. A three-way repeated-measures MANOVA was applied to compare mean coherence values among tasks, time durations, and frequency bands with Modality (crossmodal or unimodal), Duration (-100-0 ms, 0-100 ms, 100-200 ms, 200-300 ms, with the onset of S-1 as time 0), and Frequency-Band (Theta Band: 2-8 Hz; Alpha Band: 8-14 Hz; Early Beta Band: 14-20 Hz; Late Beta Band: 20-30 Hz) as the within-subject factors. TFRs of ICs and ERPs, and coherence between ICs, were also calculated with a window length of 500 ms. Results were similar to those obtained with the window length of 250 ms (see supplementary material, Figure S1 and Figure S2). Granger Causality Spectral Analysis In order to examine the directional relationship between the two ICs, Granger causality spectra were computed to estimate the relative strength of influence. For each subject, the mean was subtracted from each trial to obtain the zero-mean stochastic process that is required for application of the autoregressive modeling. The multivariate autoregressive (MVAR) model was estimated with the 100-ms window for all trials in the time range from 100 ms before, to 300 ms after, the onset of S-1. The MVAR model of order m describes the data as the sum over k = 0, ..., m of A_k X_{t-k} = E_t, where E_t is a temporally uncorrelated residual error with covariance matrix D, and the A_k are 2×2 (2 ICs) coefficient matrices. Once the model coefficients A_k and D are estimated, the spectral matrix can be written as S(f) = H(f) D H*(f), where H(f) is the transfer function of the system. In the present study, the optimal order for the MVAR model was determined by the Akaike Information Criterion (AIC) [78]. The order of 5 was selected because the AIC dropped monotonically with increasing model order up to 5. The Granger causality spectra were then calculated. The power at a specific frequency could be decomposed into an intrinsic part and a part predicted by other signals. The Granger causality at each frequency was thus defined by the ratio of predicted power to total power [77], and a causality index was calculated from these spectra.
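A minimal sketch of the time-frequency and coherence computations described above, using SciPy's STFT and magnitude-squared coherence. The sampling rate is an illustrative assumption (it is not given in the text), and `baseline_idx` is a hypothetical argument holding the time-bin indices of the 200-ms pre-stimulus baseline; the 250-ms Hanning window, one-sample step, zero-padding ratios (32 for the TFRs, 8 for coherence), and baseline z-scoring follow the description above.

```python
import numpy as np
from scipy.signal import stft, coherence

fs = 500.0                   # sampling rate in Hz (assumed)
nperseg = int(0.25 * fs)     # 250-ms Hanning window
nfft_tfr = 32 * nperseg      # zero-padding ratio of 32 for the TFRs

def tfr(x, baseline_idx):
    """Baseline-normalized time-frequency representation of a single trial x."""
    f, t, Z = stft(x, fs=fs, window="hann", nperseg=nperseg,
                   noverlap=nperseg - 1, nfft=nfft_tfr,
                   detrend="constant", boundary=None)
    power = np.abs(Z) ** 2
    mu = power[:, baseline_idx].mean(axis=1, keepdims=True)
    sd = power[:, baseline_idx].std(axis=1, keepdims=True)
    return f, t, (power - mu) / sd           # z-score against the pre-stimulus baseline

def ic_coherence(s1, s2):
    """Magnitude-squared coherence C12(f) = |S12(f)|^2 / (S1(f) S2(f)) between two IC time courses."""
    f, C = coherence(s1, s2, fs=fs, window="hann",
                     nperseg=nperseg, nfft=8 * nperseg)
    return f, C
```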
System‐level analyses of keystone genes required for mammalian tooth development Abstract When a null mutation of a gene causes a complete developmental arrest, the gene is typically considered essential for life. Yet, in most cases, null mutations have more subtle effects on the phenotype. Here we used the phenotypic severity of mutations as a tool to examine system‐level dynamics of gene expression. We classify genes required for the normal development of the mouse molar into different categories that range from essential to subtle modification of the phenotype. Collectively, we call these the developmental keystone genes. Transcriptome profiling using microarray and RNAseq analyses of patterning stage mouse molars show highly elevated expression levels for genes essential for the progression of tooth development, a result reminiscent of essential genes in single‐cell organisms. Elevated expression levels of progression genes were also detected in developing rat molars, suggesting evolutionary conservation of this system‐level dynamics. Single‐cell RNAseq analyses of developing mouse molars reveal that even though the size of the expression domain, measured in the number of cells, is the main driver of organ‐level expression, progression genes show high cell‐level transcript abundances. Progression genes are also upregulated within their pathways, which themselves are highly expressed. In contrast, a high proportion of the genes required for normal tooth patterning are secreted ligands that are expressed in fewer cells than their receptors and intracellular components. Overall, even though expression patterns of individual genes can be highly different, conserved system‐level principles of gene expression can be detected using phenotypically defined gene categories. | INTRODUCTION Much of the functional evidence for the roles of developmental genes comes from natural mutants or experiments in which the activity of a gene is altered. Most often these experiments involve deactivation, or a null mutation where the production of a specific gene product is prevented altogether. In the cases where development of an organism is arrested, the specific gene is considered to be absolutely required or essential for development (Amsterdam et al., 2004;Dickinson et al., 2016). Through a large number of experiments in different organisms, an increasingly nuanced view of developmental regulation has emerged showing that some genes appear to be absolutely required, whereas others may cause milder effects on the phenotype (Bogue et al., 2018;Brown et al., 2018). Yet, there are a large number of genes that, despite being dynamically regulated during individual organ development, have no detectable phenotypic effect when null mutated. Within the framework of distinct phenotypic outcomes of gene deactivation it can be argued that there is a gradation from developmentally "more essential" to "less essential" genes. Collectively, these can be considered to be analogous to the keystone species concept used in ecology (Paine, 1969;Terborgh, 1986). These genes, which can be called "developmental keystone genes," are not necessarily essential for development. Rather, compared to all the genes, developmental keystone genes exert a disproportional effect on the phenotype. 
As large-scale analyses of transcriptomes produce expression profiles for individual organs at the organ and single-cell level, it is now possible to address whether there might be any system-level differences between the regulation of essential and other keystone genes during organogenesis. Here we address such differences using the mammalian tooth. Especially the development of the mouse molar is well characterized, with over 70 genes that are known to be individually required for normal tooth development (Bei, 2009; Harjunmaa et al., 2012; Nieminen, 2009). The dynamic expression patterns and detailed effects of null mutations of these genes are also exceedingly well characterized, ranging from a complete developmental arrest to relatively mild modifications of morphology, or defects in the mineralized hard tissues (Harjunmaa et al., 2012; Nieminen et al., 1998; Nieminen, 2009). Our principal focus is on a critical step in tooth development, namely the formation of the cap stage tooth germ (Figure 1). At this stage the patterning of the tooth crown begins, and the effects of experimental modifications in several signaling pathways first manifest themselves around this time of development (Jernvall & Thesleff, 2012). To develop a classification approach for the studied genes, we divide them into different categories based on our analysis of published in vivo experiments. Specifically, our classification criteria are based on the phenotypes of the mice where each gene is knocked out. Operationally, our classification applies only to our organ of interest even though classification following the same logic could be done for any organ. This single organ focus also means that genes that have no effect in one organ may be critical for the development of another organ. The first gene category is the progression category containing essential genes that cause a developmental arrest of the tooth when null mutated (Figure 1; genes with references in Appendix S1). The second set of genes belongs to the shape category and they alter the morphology of the tooth when null mutated. Unlike the null mutations of progression genes, many shape gene mutations cause subtle modifications of teeth that remain functional (Morita et al., 2020, in press), hence these genes are not strictly essential for tooth development. The third category is the tissue category, and null mutations in these genes cause defects in the tooth hard tissues, enamel and dentine.

FIGURE 1 Keystone gene categories of tooth development. Mouse molar development progresses from initiation and patterning to formation of the hard tissues and eruption. These steps are mediated by reciprocal signaling between epithelium (pink) and mesenchyme (magenta). A central step in the patterning is the formation of the epithelial signaling center, the primary enamel knot (blue oval inside the cap stage tooth). Several genes are known to be required for the developmental progression and regulation of the shape around the time of cap stage, and here we focused mainly on transcriptomes in the bud and cap stage molars. Expression of progression and shape category genes was compared to tissue and dispensable genes, as well as to other developmental-process genes. Fewer initiation and eruption category genes are known, and they were excluded from the analyses. For listing of the genes, see Appendix S1 and Table S1.
Both the progression and shape categories include genes that are required for normal cap-stage formation. In contrast, the tissue category is principally related to the formation of extracellular matrix, and these genes are known to be needed much later in development (Nieminen et al., 1998). Because there is more than a 5-day delay from the cap stage to matrix secretion in the mouse molar, here we considered the tissue category as a control for the first two categories. Additionally, we compiled a second control set of developmental genes that, while expressed during tooth development, are reported to lack phenotypic effects when null mutated (Table S1). This dispensable category is defined purely within our operational framework of identifiable phenotypic effects and we do not imply that these genes are necessarily unimportant even within the context of tooth development. Many genes function in concert and the effects of their deletion only manifest when mutated in combinations (also known as synthetic mutations). Some dispensable genes may function in combinations even though no such evidence exists as yet. We identified five such redundant pairs of paralogous genes and a single gene whose null phenotype surfaces in the heterozygous background of its paralogue. Altogether these 11 genes were tabulated separately as a double category. In the progression, shape, tissue, and dispensable categories we tabulated 15, 28, 27, and 100 genes, respectively (Figure 1, Appendix S1, and Table S1). While still limited, these genes should represent a robust classification of validated experimental effects. We note that these groupings do not exclude the possibility that a progression gene, for example, can also be required for normal hard tissue formation. Therefore, the keystone gene categories can be considered to reflect the temporal order in which they are first required during odontogenesis. Moreover, many of the 100 dispensable category genes could belong to the double category, but finding them would require testing close to 5000 double-mutant combinations. In addition to the categories studied here, there are genes required for the initiation of tooth development, of which many are also potentially involved in tooth renewal. Because the phenotypic effect of these initiation genes on tooth development precedes the visible morphogenesis, and the phenotype might include a complete lack of cells of the odontogenic lineage, we excluded these genes from our analyses. Similarly, we excluded genes preventing tooth eruption with no specific effect on the tooth itself (Appendix S1). To examine our gene categories in the context of whole transcriptomes, we compared the expression levels with all the developmental-process genes (GO:0032502; Ashburner et al., 2000), as well as with all the other protein-coding genes. To obtain a robust readout of system-level expression patterns, we performed microarray, bulk RNAseq, and single-cell RNAseq analyses of developing mouse molars. We examined the keystone gene categories at the levels of the whole organ, cells, and signaling pathways. For a comparative test, we also examined bulk RNAseq of the keystone genes in developing rat molars. The analyses revealed systematic differences in the expression of the different keystone gene categories, suggesting distinct, high-level properties of developmental regulation.
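As an aside on the figure quoted above for exhaustive double-mutant testing of the 100 dispensable genes, the "close to 5000" value follows directly from the number of unordered gene pairs:

$$\binom{100}{2} = \frac{100 \times 99}{2} = 4950 \approx 5000.$$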
| Gene classification details Because many developmental genes function in multiple organs and stages during development, full mutants of several genes are lethal before tooth development even begins. Therefore, when available, we also used information on the tooth phenotypes of conditional mutant mice. The effect of conditional mutants can be milder, and in our data we have four shape genes that could potentially be in the progression category (Appendix S1). For the analyses of pathways, we created a manually curated list of genes in the six key pathways (Wnt, Tgfβ, Fgf, Hh, Eda, Notch) and allocated the genes into these pathways where appropriate. Genes were also classified as "ligand" (signal), "receptor," "intracellular molecule," "transcription factor," or "other." Because these kinds of classifications are not always trivial, as some biological molecules have multiple functions in the cell, we used the inferred primary role in teeth. The developmental-process genes with GO term "GO:0032502" were obtained from the R package "org.Mm.eg.db" (Carlson, 2019). Only curated RefSeq genes were used in the study. The classification of mouse genes was transferred to one-to-one orthologs in the rat genome. The ortholog data were downloaded from the Ensembl server using the R package "biomaRt" (Durinck et al., 2005). We note that whereas the keystone gene terminology has also been considered within the context of their effects on ecosystems (Skovmand et al., 2018), here we limit the explanatory level to a specific organ system. | Dissection of teeth Wild type tooth germs were dissected from mouse embryonic stages corresponding to E13, E14, and E16 molars. For bulk and single-cell RNAseq we used C57BL/6JOlaHsd mice, and for microarray we used NMRI mice. The wild type rat tooth germs were dissected from DA/HanRj rat embryonic stages E15 and E17, which correspond morphologically to E13 and E14 mouse molars (mouse and rat molars are relatively similar in shape). A minimal amount of surrounding tissue was left around the tooth germ, at the same time making sure that the tooth was not damaged in the process. The tissue was immediately stored in RNAlater (Qiagen GmbH) at −80°C for RNAseq or in TRI Reagent (Merck) at −80°C for microarray. For microarray, a few tooth germs were pooled for each sample and five biological replicates were made. For RNAseq, each tooth was handled individually. Seven biological replicates were made for mouse and five biological replicates for rat. Numbers of left and right teeth were balanced. The tooth germ was homogenized in TRI Reagent (Merck) using a Precellys 24 homogenizer (Bertin Instruments). The RNA was extracted by the guanidinium thiocyanate-phenol-chloroform method and then further purified with the RNeasy Plus micro kit (Qiagen GmbH) according to the manufacturer's instructions. The RNA quality was assessed for some samples with a 2100 Bioanalyzer (Agilent) and all the RNA integrity number values were above 9. The purity of RNA was analyzed by a Nanodrop microvolume spectrophotometer (Thermo Fisher Scientific). RNA concentration was measured by | Bulk RNA expression analysis Gene expression levels were measured both in microarray (Affymetrix Mouse Exon Array 1.0, GPL6096) and RNAseq (platforms GPL19057, Illumina NextSeq 500). The mouse microarray gene signals were normalized with the aroma.affymetrix (Bengtsson et al., 2008) package using Brainarray custom CDF (Version 23, released on August 12, 2019; Dai et al., 2005).
The RNAseq reads (84 bp) of mouse and rat were evaluated and low-quality reads were filtered out using FastQC (Andrews et al., 2012), AfterQC (Chen et al., 2017), and Trimmomatic (Bolger et al., 2014). This resulted in an average of 63 million reads per mouse sample and 45 million reads per rat sample. Good reads for mouse and rat were aligned with STAR (Dobin et al., 2013) to GRCm38 (mm10/Ensembl release 90) and Rnor_6.0 (Ensembl release 101), respectively. Counts for each gene were obtained with the HTSeq tool (Anders et al., 2015). Results are shown without normalization of gene expression based on gene length, as it does not change the pattern of results. On average 85% of reads were uniquely mapped to the genome. | Single-cell RNA sequencing Single cell RNA sequencing was performed on mouse E14 cap stage tooth cells. The teeth were dissected as described above. Each tooth was processed individually in the single-cell dissociation. In total four teeth were analyzed. Each tooth germ was treated with 0.1 mg/ml liberase (Roche) in Dulbecco's solution for 15 min at 28°C with shaking at 300 rpm, followed by gentle pipetting to detach the mesenchymal cells. Then the tissue preparation was treated with TrypLE Select (Life Technologies) for 15 min at 28°C with shaking at 300 rpm, followed by gentle pipetting to detach the epithelial cells. The cells were washed once in phosphate-buffered saline (PBS) with 0.04% bovine serum albumin (BSA). The cells were resuspended in 50 µl PBS with 0.04% BSA. We used the Chromium single cell 3' library & gel bead Kit v3 (10x Genomics). In short, all samples and reagents were prepared and loaded into the chip. Then, the Chromium controller was used for droplet generation. Reverse transcription was conducted in the droplets. cDNA was recovered through emulsification and bead purification. Preamplified cDNA was further subjected to library preparation. Libraries were sequenced on an Illumina Novaseq 6000 (Illumina). All the sequencing data are available in GEO under the accession number GSE142201. | Data analysis For scRNAseq, 10x Genomics Cell Ranger v3.0.1 pipelines were used for data processing and analysis. The "cellranger mkfastq" command was used to produce fastq files and "cellranger count" to perform alignment, filtering, and UMI counting. Alignment was done against the mouse genome GRCm38/mm10. The resultant individual count data were finally aggregated with "cellranger aggr." Further, the filtered aggregated feature-barcode matrix was checked for quality and normalization using the R package Seurat (Stuart et al., 2019). Only cells with ≥20 genes and genes expressed in at least three cells were considered for all the downstream analysis. For a robust set of cells for the expression level calculations, we limited the analyses to 30,930 cells that had transcripts from 3000 to 9000 genes (7000-180,000 unique molecular identifiers) with <10% of the transcripts being mitochondrial. For comparison with bulk RNAseq data, single-cell data were normalized with DESeq2 (Love et al., 2014) together with the corresponding bulk RNAseq samples, and median expression levels were plotted. The average cell-level expression of a gene X was calculated as the sum of NX_k over all cells divided by the number of cells with nonzero reads, where NX_k is the normalized expression of gene X in cell k and the denominator is the count of cells with nonzero reads. All statistical tests were performed using the R package "rcompanion" (Mangiafico, 2019) and custom R scripts.
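The average cell-level expression defined above (total normalized expression of a gene divided by the number of cells with nonzero reads) can be computed in a few lines. The sketch below uses a toy genes-by-cells matrix; in the actual workflow the input would be the DESeq2-normalized counts exported from the Seurat object, which is not reproduced here.

```python
import numpy as np

# Toy normalized expression matrix: 5 genes x 1000 cells (stand-in for the real data).
rng = np.random.default_rng(1)
norm_counts = rng.poisson(0.3, size=(5, 1000)).astype(float)

def average_cell_level_expression(norm_counts):
    """Per-gene mean expression computed only over cells with nonzero reads."""
    nonzero = norm_counts > 0
    n_expressing = nonzero.sum(axis=1)          # number of cells expressing each gene
    total = norm_counts.sum(axis=1)             # summed normalized expression per gene
    with np.errstate(invalid="ignore", divide="ignore"):
        avg = np.where(n_expressing > 0, total / n_expressing, 0.0)
    return avg, n_expressing

avg_expr, n_cells = average_cell_level_expression(norm_counts)
```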
Even though expression levels of the shape category genes (genes required for normal shape development) are lower than those of the progression category (Figure 2), at least the E14 microarray data suggest elevated expression levels relative to all the other control categories (p values range from .0001 to .0901; Table S3). The moderately elevated levels of expression by the shape category genes could indicate that they are required slightly later in development, or that the most robust upregulation happens only for genes that are essential for the progression of the development. The latter option seems to be supported by an RNAseq analysis of the E16 molar, showing only slight upregulation of shape category genes in the bell stage molars (Table S3). | Transcriptomes of developing rat molars show elevated expression of the progression genes Because our gene categories were based on experimental evidence from the mouse, we also tested whether comparable expression levels can be detected for the same genes in the rat. Evolutionary divergence of Mus-Rattus dates back to the Middle Miocene (Kimura et al., 2015), allowing a modest approximation of conservation in the expression levels. Examination of bud (E15) and cap (E17) stage RNAseq of rat molars shows comparable upregulation of progression and shape category genes as in the mouse (Figure 3, Tables S2 and S3). Considering also that many of the null mutations in keystone genes in the mouse are known to have comparable phenotypic effects in humans (Nieminen, 2009), our keystone gene categories and analyses are likely to apply to mammalian teeth in general. Figures 2 and 4b). As in the previous analyses (Table S3), the progression category shows the highest expression levels compared to the control gene sets (p values range from .0071 to .0310; Table S3). Although the mean expression of the shape category is intermediate progression of tooth development, a pattern that seems to be shared with essential genes of single-cell organisms (Dong et al., 2018). We note that although the dispensable category has several genes showing comparable expression levels with those of the progression category genes at the tissue level (Figure 2), their cell-level transcript abundances are predominantly low (Figure 5b). Next we examined more closely the differences between progression and shape category genes, and to what extent the upregulation of the keystone genes reflects the overall expression of the corresponding pathways. | Keystone gene upregulation in the context of their pathways In our data the developmental-process genes appear to have slightly elevated expression levels compared to the other protein-coding genes (Figures 2, 3, and 4b), suggesting an expected and general recruitment of the pathways required for organogenesis. To place the progression and shape category genes into the specific context of their corresponding pathways, we investigated in E14 mouse bulk RNAseq whether the pathways implicated in tooth development show elevated expression levels. Six pathways, Fgf, Wnt, Tgfβ, Hedgehog (Hh), Notch, and Ectodysplasin (Eda), contain the majority of progression and shape genes (Section 2). First we used the RNAseq of E14 stage molars to test whether these pathways show elevated expression levels. We manually identified 272 genes belonging to the six pathways (Section 2 and Table S4). Comparison of the median expression levels of the six-pathway genes with the developmental-process genes shows that the pathway genes are a highly upregulated set of genes (Figure 6a; p < .0001, random resampling).
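The random-resampling p-values quoted here can be obtained with a simple permutation-style scheme: compare the observed median expression of a gene set (e.g., the six-pathway genes) against the medians of equally sized random draws from a background set (e.g., all developmental-process genes). The paper does not spell out its exact resampling procedure, so the sketch below is one plausible implementation, not the authors' code.

```python
import numpy as np

def resampling_pvalue(geneset_expr, background_expr, n_iter=10_000, seed=0):
    """One-sided p-value that the median expression of the gene set exceeds that of
    random, equally sized draws from the background set."""
    rng = np.random.default_rng(seed)
    observed = np.median(geneset_expr)
    k = len(geneset_expr)
    null_medians = np.array([
        np.median(rng.choice(background_expr, size=k, replace=False))
        for _ in range(n_iter)
    ])
    # add-one correction keeps the estimated p-value strictly positive
    return (np.sum(null_medians >= observed) + 1) / (n_iter + 1)
```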
This difference suggests that the experimentally identified progression and shape genes might be highly expressed partly because they belong to the developmentally upregulated pathways. To specifically test this possibility, we contrasted the expression levels of the progression and shape genes to the genes of their corresponding signaling families. The 15 progression category genes belong to four signaling families (Wnt, Tgfβ, Fgf, Hh) with 221 genes in our tabulations. Even though these pathways are generally upregulated in the E14 tooth, the progression category genes are upregulated even within their own pathways, whereas the shape category genes are not (Figure 6c; p = .5919). Whereas this contrasting pattern between progression and shape genes within their pathways may explain the subtle upregulation of the shape category (Figure 2), the difference warrants a closer look. Examination of the two gene categories reveals that, compared to the progression category genes, a relatively large proportion of the shape category genes are ligands (36% of shape genes compared to 20% of progression genes; Appendix S1). In our E14 scRNAseq data, ligands show generally smaller expression domains than other genes (roughly by half, Figure 6d,e), and the low expression of the shape category genes seems to be at least in part driven by the ligands (Figure 6c and Table S5). Overall, the upregulation of the keystone genes within their pathways appears to be influenced by the kind of proteins they encode. In this context it is noteworthy that patterning of tooth shape requires spatial regulation of secondary enamel knots and cusps, providing a plausible explanation for the high proportion of genes encoding diffusing ligands in the shape category. | DISCUSSION Identification and mechanistic characterization of developmentally essential or important genes have motivated a considerable research effort (e.g., Amsterdam et al., 2004; Bogue et al., 2018; Brown et al., 2018; Dickinson et al., 2016). One general realization has been that despite the large number of genes being dynamically expressed during organogenesis, only a subset appears to have discernable effects on the phenotype. This parallels the keystone species concept used in ecological research (Paine, 1969; Terborgh, 1986). Keystone species, which may include relatively few species in a community, are thought to have a disproportionately large influence on their environment. Similarly, keystone genes of development, while not necessarily essential for life, have disproportionately large effects on the phenotypic outcome of their system. Here we took advantage of this kind of in-depth knowledge of the details of the phenotypic effects of various developmental genes (Appendix S1). This allowed us to classify genes into different categories ranging from essential to "less essential" and all the way to dispensable. Obviously, as in ecological data, our category groupings can be considered a work in progress as new genes and reclassifications are bound to refine the categories. Most notably, genes that are essential for the progression of mouse molar development were highly expressed (Figures 2-4). These genes were highly expressed even within their pathways (Figure 6a,b) and had markedly high cell-level transcript abundances (Figure 5b).
This pattern conforms to analyses of single-cell organisms (Dong et al., 2018), thereby supporting expression level as one general criterion for essential genes. The high expression level of progression category genes may well signify their absolute requirement during the cap stage of tooth development. Indeed, it is typically by this stage that a developmental arrest happens when many of the progression genes are null mutated. Interestingly, mice heterozygous for the null-mutated progression genes appear to have normal teeth (Appendix S1). A hypothesis to explore is whether the high cell-level transcript abundance of the progression category is a form of haplosufficiency in which the developmental system is buffered against mutations affecting one allele. Another possibility has some experimental support (Benazet et al., 2009). Because all the studied progression and shape category genes are involved in the development of multiple organ systems, our results may evolutionarily point to cis-regulatory differences that specifically promote the expression of these genes in an organ-specific manner. Consequently, species that are less reliant on teeth (e.g., some seals) or have rudimentary teeth (e.g., baleen whales) can be predicted to have lowered expression levels of the progression genes. Nevertheless, at an organism level, our gene categories should not be considered as indicative of having simple effects on individual fitness. For example, producing offspring that have defective enamel could be more costly to the mammalian parent than offspring with a null mutation causing an arrest of tooth development with comparable defects in other organ systems, and thus early lethality. That our results may apply to other species than the mouse is supported by the similarity of the organ-level expression patterns in the mouse and the rat (Figure 3), as well as by the comparable phenotypic effects of mutations in the mouse and in the human (Nieminen, 2009). Therefore, we suggest that even though gene expression profiles may differ in details among species, the overall, high-level patterns of essential gene expression dynamics should be evolutionarily conserved. More generally, empirical and modeling studies in ants show that the essential genes involved in homologous morphological characters are not always the same in different taxa (Abouheif & Wray, 2002; Nahmad et al., 2008). This kind of developmental system drift (True & Haag, 2001) could also underlie homologous teeth in different mammalian groups, perhaps through up- or downregulation of different keystone genes in different lineages. In general though, developmental system drift without changes in the phenotype can be expected to be most frequent in the regulation of the dispensable category genes. Towards the general attempt to understand the numerous genes expressed in a developing organ system, our results point to the potential to use cell-level expression levels to identify genes critical for organogenesis. Here the single-cell transcriptomes provided a more nuanced view into the spatial patterns of the different gene categories than the tissue-level transcriptomes alone, which mostly reflect the size of the expression domain (Figure 5a). In our tabulation, over a third of the shape category genes were ligands. Tooth shape patterning involves spatial placement of signaling centers that in turn direct the growth and folding of the tissue (Jernvall & Thesleff, 2012).
The involvement of several secreted ligands in this patterning process, and consequently in the shape category, is likely to reflect the requirement of the developmental machinery to produce functional cusp patterns. These cusp patterns are also a major target of natural selection because evolutionary diversity of mammalian teeth largely consists of different cusp configurations. At the same time, partly due to ligands having generally more restricted expression domains compared to receptors and intracellular proteins, the shape category expression levels were found to be generally lower than those of the progression category. That ligands tend to have smaller expression domains whereas receptors have broader expression domains for tissue competence has been recognized in many unrelated studies (e.g., Bachler & Neubüser, 2001; Wessells et al., 1999; and partly in Salvador-Martínez & Salazar-Ciudad, 2015), but our analyses suggest that this is a general principle detectable in system-level transcriptome data. This pattern is also compatible with the classic concepts of tissue competence and evocators or signals produced by organizers (Waddington, 1940). Nevertheless, it remains to be explored spatially how the low signal-competence ratio emerges from highly heterogeneous expression domains of various genes, and within the complex three-dimensional context of a developing tooth (e.g., Harjunmaa et al., 2014; Krivanek et al., 2020; Pantalacci et al., 2017). In our data (for accession number, see Section 2), at least the ligand Eda is expressed in a larger number of cells than its receptor Edar, suggesting that there are individual exceptions to the general pattern. Another potentially interesting observation is that Sostdc1 and Fst, both secreted sequesters or inhibitors of signaling, were among the most broadly expressed of the ligands. Thus, at least some of the exceptions to the low signal-competence ratio may be modulators of tissue competence. In conclusion, the over 20,000 genes of mammalian genomes, and even higher numbers in many plant genomes, call for systems to categorize them. High-throughput experiments in particular have accentuated the need for comprehension of the bigger picture in genome-wide analysis. However, there is no single way to classify genes. Biological complexity offers a multitude of ways to categorize, ranging from structural to functional characteristics, and from evolutionary relationships to location of expression. Here our aim was to create a categorization that would provide insight into a systems-level understanding of organogenesis and still include organ-level details. By combining the experimental evidence on the effects of gene null-mutations with single-cell level transcriptome data, we uncovered potential generalities affecting expression levels of genes in a developing system. With advances in the analyses of transcriptomes and gene regulation, it will be possible to explore experimental data from other organs and species to test and identify system-level principles of organogenesis.
The GGDEF-EAL protein CdgB from Azospirillum baldaniorum Sp245 is a dual-function enzyme with potential polar localization Azospirillum baldaniorum Sp245, a plant growth-promoting rhizobacterium, can form biofilms through a process controlled by the second messenger cyclic diguanylate monophosphate (c-di-GMP). A. baldaniorum has a variety of proteins potentially involved in controlling the turnover of c-di-GMP, many of which are coupled to sensory domains that could be involved in establishing a mutualistic relationship with the host. Here, we present in silico analysis and experimental characterization of the function of CdgB (AZOBR_p410089), a predicted MHYT-PAS-GGDEF-EAL multidomain protein from A. baldaniorum Sp245. When overproduced, CdgB behaves predominantly as a c-di-GMP phosphodiesterase (PDE) in A. baldaniorum Sp245. It inhibits biofilm formation and extracellular polymeric substance production and promotes swimming motility. However, a CdgB variant with a degenerate PDE domain behaves as a diguanylate cyclase (DGC). This strongly suggests that CdgB is capable of dual activity. Variants with alterations in the DGC domain and the MHYT domain negatively affect extracellular polymeric substance production and the induction of swimming motility. Surprisingly, we observed that overproduction of CdgB results in increased c-di-GMP accumulation in the heterologous host Escherichia coli, suggesting that, under certain conditions, the WT CdgB variant can behave predominantly as a DGC. Furthermore, we also demonstrated that CdgB is anchored to the cell membrane and localizes potentially to the cell poles. This localization is dependent on the presence of the MHYT domain. In summary, our results suggest that CdgB can provide versatility to signaling modules that control motile and sessile lifestyles in response to key environmental signals in A. baldaniorum. Introduction Sequence alignments were performed using Clustal Omega [23] and STRAP [24]. The structural model of the PAS-GGDEF-EAL region of CdgB used for schematic representation was built with the I-TASSER and SwissModel servers [25,26], taking as reference the structure of RbdA (Protein Data Bank [PDB]: 5XGB). The structures of the GGDEF and EAL domains for molecular coupling analyses were built by the SwissModel server taking as reference the structures of RbdA (Protein Data Bank [PDB] code 5XGD) [27] for the GGDEF domain and MucR ([PDB] code 5M1T) [28] for the EAL domain, both from P. aeruginosa. The quality of the models was analyzed on the Molprobity server [29] and a subsequent structural minimization was carried out in UCSF Chimera 1.10 software [30]. The GTP and c-di-GMP substrates were taken from the PubChem database [31], to which hydrogens were added and structural minimization was performed for each in Avogadro 2.0 [32]. PDBQT files were generated in the AutodockTools tool, where Mg2+ ion charges were added with the SwissModel server. Molecular coupling analyses were carried out on the AutoDockTools4 platform [33]. Finally, the visualization and preparation of figures were carried out in the UCSF Chimera 1.10 program. Construction of plasmids and variant strains Isolation of genomic and plasmid DNA used for DNA restriction enzyme digestion, agarose gel electrophoresis, and transformation assays was carried out according to standard protocols [38]. The ΔcdgB mutant strain was constructed by replacing the coding region of cdgB with a kanamycin resistance cassette as previously described [17].
The DNA fragments flanking cdgB (GenBank accession number WP_014199675) were amplified by PCR using the primers Fkpnaz88 and Rxhoaz88 to generate the upstream A fragment of 992 bp, and the primers Fspeaz90 and Rsacaz90 to generate the downstream B fragment of 993 bp, each of which was subsequently cloned into the pGEM-T Easy vector through TA cloning (Promega, Madison, WI, USA) and transformed into E. coli DH5α to obtain the corresponding pGEM-A and pGEM-B constructs. The A fragment was excised with KpnI and XhoI, then ligated into the pJMS-Km suicide vector previously digested with the same restriction enzymes to yield the plasmid pJMS-Km-FA, which was subsequently transformed into E. coli DH5α. Both pGEM-B and pJMS-Km-FA were digested with the SpeI and SacI restriction enzymes and ligated to generate the pMMS construct, which contains the ΔcdgB::km-r fragment. The pMMS construct was mobilized into A. baldaniorum Sp245 by biparental mating using E. coli S17.1 as a donor strain. Transconjugants were screened on K-lactate minimal medium with kanamycin 50 μg/mL. The mutation of interest in single colonies that were resistant to Km was further confirmed by PCR and DNA sequencing analysis. The cdgB overexpression constructs were generated using the pMP2444 broad host-range plasmid [36]. The full-length ORF of the cdgB gene was amplified using the Forf89 and Rorf89 primers and subsequently cloned into pGEM-TEasy to obtain the construct pGEM-cdgB. The pGEM-cdgB and pMP2444 plasmids were digested with the EcoRI restriction enzyme. The EcoRI-digested fragments corresponding to cdgB and linearized pMP2444 were ligated to generate pMP-cdgB. The desired 5'-3' orientation of the insert (cdgB expressed under the lac promoter) was confirmed by restriction analysis with HindIII. Competent E. coli S17.1 cells were transformed with the pMP-cdgB plasmid. Transformed cells were used as donors in biparental matings to transfer pMP-cdgB to A. baldaniorum Sp245. The point mutation in the GGDEF motif (D456K) was introduced by inverted PCR using the Q5 Site-Directed Mutagenesis kit (New England BioLabs) following the manufacturer's instructions (using the primer pair GGDEF-F and GGDEF-R). The resulting pMP-cdgB SGDEF-SGKEF plasmid was introduced into E. coli S17.1 and subsequently transferred to A. baldaniorum Sp245 by biparental conjugation. The same strategy was used for introducing the point mutation in the EAL motif (E580A) (using the primer pair EAL-F and EAL-R). The resulting pMP-cdgB EAL-AAL plasmid was introduced into E. coli S17.1 and subsequently transferred to A. baldaniorum Sp245 through conjugation. All plasmid constructs were sequenced to confirm the correct sequence of cdgB and its desired orientation (downstream of and under the control of the lac promoter). These constructs do not have the gene that produces the LacI repressor, hence the Plac promoter is constitutively active. To generate a cdgB::egfp translational fusion we used a PCR fusion approach designed by Yang et al. [39]. Briefly, the cdgB gene, without a stop codon, was amplified with primers cdgB-F-24 and cdgB-R-24, and the egfp reporter gene was amplified using gfp-F-24 and gfp-R-24. The primers cdgB-R-24 and gfp-F-24 have compatible overhangs that allow fusing the amplicons through PCR amplification using the primer pair cdgB-F-24 and gfp-R-24. The fused PCR product and plasmid pMP2444 were digested with the BamHI and XbaI restriction enzymes and ligated to generate the plasmid pMP-cdgB::egfp. E. coli
S17.1 competent cells were transformed with pMP-cdgB::egfp and a transformant was used to transfer the plasmid to A. baldaniorum Sp245 by conjugation as previously described [16]. The MHYT domain was deleted by digesting the pMP-cdgB::egfp construct with SalI. The digested fragment of 7113 bp that lacks the MHYT domain was ligated to produce the pMP-cdgB ΔMHYT plasmid, which was introduced into E. coli S17.1 by chemical transformation. The plasmid was transferred by conjugation to A. baldaniorum Sp245. The cdgB ΔMHYT deletion allele was sequenced to verify that the open reading frame was not shifted. Analysis of growth curves To determine possible effects of the plasmid pMP-cdgB or the ΔcdgB deletion on growth rates, growth curves were performed and compared with strains carrying the empty vector pMP2444 or the WT strain. Overnight cultures were diluted to an optical density at 600 nm (OD600) of 0.01 in 100 ml Erlenmeyer flasks (3 replicates per strain) containing 25 ml of NFB* medium supplemented with Gm when necessary for plasmid selection. Cultures were incubated in a rotary shaker (150 rpm) at 30°C and the OD600 was measured every 2 hours using an EON microplate spectrophotometer (BioTek, Winooski, VT, USA) at 595 nm. Determination of extracellular polymeric substance production To analyze EPS production, A. baldaniorum and derivative strains were grown in LB* and diluted to obtain bacterial suspensions with an OD600 of 1.2-1.4 in NFB* medium supplemented with KNO3, and grown for five days at 30°C under static conditions. Next, cultures were centrifuged at 10,000 g, the pellets were suspended in 1 ml NFB* medium, as described above, and a 0.005% (w/v) Congo Red (CR) colorant solution (Sigma-Aldrich) was added to achieve a 40 μg/ml concentration. The cells were incubated with agitation (200 rpm) for two hours. Afterward, CR bound to cells was quantified as previously described [17,19]. CR measurements were normalized by total protein concentration measured with the Bradford method. The results are from three independent assays with three biological determinations. Motility assay The swim motility assay was performed as previously described [19,42]. Briefly, bacteria were grown in LB* medium at 30°C until reaching 5×10^6-5×10^7 CFU/mL; afterwards, 5 μL of the culture was spotted onto semisolid minimal K medium supplemented with malate, succinate, or proline (10 mM) as a carbon source and containing 0.25% (w/v) agar. The size of the bacterial motility ring was measured in cm after incubation at 30°C for 48 h. Relative quantification of c-di-GMP accumulation using a genetic biosensor C-di-GMP levels were analyzed using the riboswitch-based dual fluorescence reporter system as described previously [18,19,43]. This biosensor expresses AmCyan and TurboRFP, both fluorescent proteins (green and red, respectively), from the same constitutive promoter. TurboRFP production is inhibited at low levels of c-di-GMP due to the presence of a c-di-GMP-binding riboswitch. For this purpose, the constructs pGEX-CdgB, pGEX-cdgB SGDEF-SGKEF, pGEX-cdgB EAL-AAL, pGEX-CdgA (positive control) [17], and the empty vector pGEX-4T-1 (negative control) were used to transform a competent E. coli S17.1 strain containing the pDZ-119 vector [37]. All bacterial strains were grown in LB medium at 30°C to an OD600 of approximately 0.6. Afterwards, 0.1 mM IPTG was added to the bacterial cultures, and these were further incubated for 24 h for induction of protein expression. Cultures were concentrated 10-fold and resuspended in water. c-di-GMP production was analyzed macroscopically by relating color intensity to the production of the second messenger. Microscopic assessment of c-di-GMP was performed using a Nikon Eclipse TE2000U fluorescence microscope. A drop of the induced culture was deposited on a coverslip and covered with a 1% agarose pad. The excitation and emission of the calibrator AmCyan fluorophore were recorded at 457 and 520 nm, respectively. The reporter TurboRFP fluorophore was excited at 553 nm, while its emission was measured at 574 nm. Images obtained were edited with the Nikon NIS-Elements software. Merged images represent the overlay of the AmCyan (green) and TurboRFP (red) fluorescence images, with overlapping signal shown in yellow. The relative fluorescence intensity (RFI) was calculated as the ratio between the TurboRFP and AmCyan fluorescence intensities and is directly proportional to c-di-GMP levels, as analyzed using ImageJ software. The RFI values represent the standard deviations of three biological replicates, and significant differences are indicated at *P < 0.05 according to Student's t-test in SigmaPlot, as previously described [19,44].
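As a minimal illustration of the RFI calculation described above (TurboRFP intensity divided by AmCyan intensity, compared between strains by Student's t-test), the sketch below uses made-up per-replicate intensity values; it is not the ImageJ/SigmaPlot workflow used in the study.

```python
import numpy as np
from scipy.stats import ttest_ind

def relative_fluorescence_intensity(turbo_rfp, am_cyan):
    """RFI = TurboRFP / AmCyan intensity; proportional to c-di-GMP levels."""
    return np.asarray(turbo_rfp, float) / np.asarray(am_cyan, float)

# Illustrative per-replicate mean intensities (hypothetical numbers, not measured data)
rfi_test    = relative_fluorescence_intensity([520, 540, 510], [300, 310, 295])
rfi_control = relative_fluorescence_intensity([210, 220, 205], [305, 300, 298])

t_stat, p_value = ttest_ind(rfi_test, rfi_control)   # significance threshold used: p < 0.05
```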
Microscopy studies To visualize bacteria by fluorescence and confocal laser scanning microscopy (CLSM), A. baldaniorum Sp245 and its derivative strains were grown in 5 mL of NFB* medium at 30°C with agitation (150 rpm) for 18 h. Then, 1 mL of each culture was centrifuged at 8,000 rpm for 2 min, and each pellet was suspended in 100 μL of FM4-64FX dye (10 μg/mL) (Thermo Fisher Scientific) in phosphate-buffered saline (PBS) and maintained for 10 min at 4°C to stain the lipids of the membrane. FM4-64FX fluorescence was observed with a microscope (TE 2000U; Nikon, Tokyo, Japan) with a 100x objective lens (Plan Oil immersion) and photographed with a DS-QilMc camera (Nikon). Subsequently, 10 μL of each suspension was mounted onto a glass coverslip and sealed with a 1% (w/v) agarose plug. When necessary, we also used DAPI to stain nucleoids. For video recordings, the samples were viewed with an Eclipse Ti-E C2+ confocal laser scanning microscope (Nikon) with a 60X objective lens (Plan Apo VC, water immersion). eGFP was excited at 488 nm and its fluorescent emission was captured at 510 nm; FM4-64FX was excited at 565 nm and its fluorescent emission was captured at 734 nm; DAPI was excited at 358 nm and its fluorescent emission was captured at 461 nm. Image slices were visualized and processed using the NIS Elements software (Nikon). The images were edited with ImageJ software (NIH, Bethesda, MD, USA) as previously described [16]. To visualize biofilms formed by the strains under study, the bacteria were grown in NFB*+KNO3 medium supplemented with 85 μM of Calcofluor-White colorant (CWC) (Sigma-Aldrich, United States) and inoculated onto FluoroDish glass-bottom dishes (Fisher Scientific) at 30°C under static conditions as previously described [41]. Biofilms forming on the surfaces of dishes were recorded after 5 days and observed by CLSM. CWC was excited at 440 nm with a UV laser and its emission was captured at 500-520 nm. The samples were scanned at an x/y scanning resolution of 1,024 × 1,024 pixels. The step size in the z-direction was 0.1 μm. The image stacks were visualized and processed using NIS Elements and edited using ImageJ as previously described [41]. This analysis allowed generating a three-dimensional view of the biofilm through the measurement of signal intensity. The biofilm structure can be observed as an intensity surface plot, where the intensity of the signal represents the density of EPS produced.
The biofilm structure can be observed as an intensity surface plot, where the intensity of the signal represents the density of the EPS produced.

Western blotting analysis

For analysis of the expression of GFP-fusion proteins in A. baldaniorum Sp245, the derivative strains were cultured in NFB* + KNO3 medium at 30˚C to an OD600 of 1.0. Bacterial cells were collected by centrifugation at 10,000 rpm for 5 min. The cells were resuspended in PBS supplemented with Roche Complete Mini EDTA-free protease inhibitor and ultrasonicated for 2 min. Cellular debris was sedimented by centrifugation (30 min at 10,000 × g). The supernatant (soluble fraction) was collected, and the pellet (insoluble fraction) was resuspended in PBS supplemented with a detergent mix (0.5% Triton X-100, 0.1% SDS, and 0.5% 7-deoxycholic acid sodium salt) and centrifuged at 25,000 × g for 2 h to obtain the total membrane protein extract. The total protein concentration of the respective fractions was determined using the Bradford method. The samples were resuspended in 5X loading buffer, boiled, separated on 10% SDS-PAGE, and subsequently transferred onto polyvinylidene fluoride membranes (Merck Millipore, Darmstadt, Germany) for immunoblotting with an anti-GFP antibody (GFP D5.1 Rabbit mAb, HRP Conjugate; Cell Signaling Technology, Danvers, Massachusetts, USA). An HRP-DAB substrate kit (Thermo Scientific Pierce) was used to detect the target proteins on the membrane according to the manufacturer's guidelines.

Statistical analysis

Means were compared by Student's t-test to determine statistically significant differences. Differences were considered significant at P-values less than 0.05.

The domain architecture of CdgB contains multiple sensory domains and conserved GGDEF and EAL domains

CdgB (AZOBR_p410089) from A. baldaniorum Sp245 [45] is 794 amino acid residues long and is predicted to have seven transmembrane regions, a PAS domain, a GGDEF domain and an EAL domain (Fig 1A and 1B). The transmembrane regions form a conserved MHYT domain, which has been proposed to be involved in gas sensing [46,47] (Fig 1C). Three MHYT motifs, characterized by conserved methionine, histidine, tyrosine, and threonine residues, are located at positions 64-67, 125-128, and 187-190 (Figs 1B and S1). Each MHYT motif spans two transmembrane helices and is projected toward the outer face of the cytoplasmic membrane [46]. The PAS domain faces the cytoplasm and comprises 107 amino acid residues, from position 256 to 362. The C-terminus of CdgB resides in the cytosol and includes both the GGDEF (363-534 aa) and EAL (544-786 aa) domains (Fig 1B). The primary sequence of the GGDEF domain of CdgB was aligned to the GGDEF domains of two characterized DGCs, PleD and RbdA [27,48]. CdgB has an SGDEF motif in place of the canonical GGD[E]EF motif. This motif has been found in active DGCs in both bacteria and eukaryotes [49-51]. PleD and RbdA have an allosteric autoinhibitory site with an RXXD motif; CdgB has a PXXD motif instead, and hence is most likely not autoinhibited by its catalytic product (Fig 2A and 2B). The amino acid sequence of the EAL domain of CdgB was aligned to the sequences of the EAL domains of two well characterized PDEs, RocR and MucR [28,52]. The alignment revealed that the EAL domain of CdgB conserves the residues involved in binding to c-di-GMP (Y565, Q566, P567, R584, D698, D720), in binding Mg2+ (E580, N365, E667, D697, K718, E757, Q774), and in the formation and stability of loop 6 (E670).
This loop is an essential structure for dimerization [53] (Fig 3A and 3B). To extend our in silico analysis of CdgB, we conducted tertiary structure predictions using available crystal structures of validated GGDEF and EAL domains from the Protein Data Bank repository. The GGDEF domain of CdgB was modeled using as template the crystallographic coordinates of the GGDEF domain of RbdA (PDB ID: 5XGD; identity 31.84%) from P. aeruginosa [27]. The EAL domain of CdgB was modeled using as template the coordinates of the three-dimensional structure of the EAL domain of MucR (PDB ID: 5M1T; identity 45.82%) from P. aeruginosa [28] (Figs 2 and 3). The three-dimensional model of the GGDEF domain of CdgB was used to analyze the potential interaction with its substrate using molecular coupling analyses (molecular docking) (Fig 2C). Our analysis predicted an interaction of GTP with the GGDEF domain of CdgB with ΔG = -8.2 kcal/mol (Fig 2D). The interaction interface includes residues N420, D429, and D455. These amino acids, conserved in GGDEF domains, have been shown to be essential for the activity of DGCs [27]. The first two residues are important for the binding of GTP, while D455 (located at the active site) coordinates the Mg2+ ion required to perform the nucleophilic attack on the alpha phosphate of GTP, eliminating a pyrophosphate and resulting in the formation of c-di-GMP from another GTP molecule [48,54]. Molecular docking analysis also predicted that the EAL domain of CdgB can bind c-di-GMP with a ΔG = -8.58 kcal/mol (Fig 3C and 3D). The N635 residue is responsible for coordinating Mg2+ ions and is essential for the EAL domain, as its mutation was reported to significantly affect PDE activity [28,53]. The amino acid residues R584 and Q566 could be involved in substrate binding by interacting with the anionic phosphate oxygen and guanine moieties of c-di-GMP [52,54,55]. These analyses suggest that CdgB could potentially act as a dual DGC/PDE protein. We found cdgB orthologues in different species within the Azospirillum genus, as well as in other alpha-proteobacteria (S2 Fig). This could suggest that this domain architecture and potential bifunctional activity may give these bacteria a selective advantage or the ability to adapt to changes in environmental signals.

Analyses of phenotypical consequences of alterations in cdgB in A. baldaniorum Sp245

The role of GGDEF-EAL proteins with dual activity has been poorly explored, hence we decided to investigate whether cdgB influences the biofilm formation, EPS production, and swimming motility of A. baldaniorum Sp245. To do so, we generated an A. baldaniorum Sp245 strain with a deletion of cdgB (A. baldaniorum ΔcdgB) and used different overexpression constructs to analyze the effect of overproducing the WT allele (pMP-cdgB) and mutated alleles carrying point mutations that result in altered GGDEF or EAL motifs (pMP-cdgB SGDEF-SGKEF and pMP-cdgB EAL-AAL, respectively), or a deletion of the MHYT domain (pMP-cdgB ΔMHYT). We also included the A. baldaniorum pMP-cdgB::egfp strain to analyze the stability of the CdgB::eGFP fusion. As negative controls, we analyzed strains containing the broad host-range plasmid pMP2444 (Table 1). We first evaluated the growth rate of the strains of interest in NFB* medium supplemented with KNO3 as a nitrogen source [18,19].
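The growth comparison that follows rests on OD600 readings taken every 2 h. As a rough illustration of how such readings can be condensed into a doubling time per strain, the sketch below fits log2(OD600) against time over the exponential window; the OD values and strain labels are hypothetical and are not the measured data.

```python
# Hedged sketch: estimate doubling times from OD600 time courses.
# The readings below are invented for illustration; real curves would come
# from the 2 h interval measurements described in the methods.
import numpy as np

time_h = np.array([0, 2, 4, 6, 8, 10], dtype=float)
od600 = {
    "WT/pMP2444":  np.array([0.010, 0.021, 0.044, 0.090, 0.180, 0.330]),
    "WT/pMP-cdgB": np.array([0.010, 0.020, 0.041, 0.085, 0.170, 0.310]),
}

for strain, od in od600.items():
    # Fit log2(OD) vs. time; the slope is the number of doublings per hour
    # during exponential growth.
    slope, _intercept = np.polyfit(time_h, np.log2(od), 1)
    print(f"{strain}: doubling time of about {1.0 / slope:.2f} h")
```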
Our results showed that all the strains exhibited similar growth characteristics (S3 Fig). Next, we analyzed the results from biofilm formation assays. The ΔcdgB mutant strain made 25% less biofilm compared to the parental strain (Fig 4). This could suggest that in the WT background CdgB is required for biofilm formation. Since c-di-GMP is required for biofilm formation, this result would suggest that at baseline levels of expression CdgB may act as a DGC. The presence of the complementation construct pMP-cdgB in the ΔcdgB mutant strain partially restored biofilm formation compared to the WT strain carrying the overexpression plasmid pMP2444 (Fig 4). Interestingly, the overproduction of CdgB, CdgB-ΔMHYT and CdgB-SGDEF-SGKEF in the WT background of A. baldaniorum Sp245 resulted in decreased biofilm formation compared to both the parental WT strain and the ΔcdgB mutant strain. This suggests that, in the presence of an intact endogenous copy of cdgB, overproduction of CdgB from a plasmid has a negative effect on biofilm formation. This would indicate that at levels beyond an as-yet-undefined threshold, CdgB may act as a PDE. To further evaluate the potential PDE activity of CdgB, we overproduced a CdgB variant with point mutations in the EAL domain that alter amino acids required for its catalytic activity. [Fig 3 legend fragment: (A) residues that interact with the cofactor are marked in green, the residue that structurally stabilizes loop 6 is shown in yellow, the catalytic residue is in red, the EAL motif is marked with a red box, and loop 6 is inside a box with a black outline; (B) structural model of the CdgB EAL monomer generated by homology modeling, with the EAL motif highlighted in red and loop 6 in orange; (C) structural model of the CdgB EAL monomer in complex with c-di-GMP, with the EAL domain presented in surface mode; (D) model of the EAL domain in complex with c-di-GMP generated by molecular coupling (ΔG = -8.58 kcal/mol), overlapped with the 5M1T crystal structure of the EAL domain from P. aeruginosa (light purple; ligand RMSD 0.757); Mg2+ ions are shown in green.] The behavior of this strain suggests that, when overproduced, CdgB can act as a DGC when the amino acids required for PDE activity are altered. Together, these results led us to propose that CdgB is a dual-function enzyme capable of synthesizing and degrading c-di-GMP. To further support our conclusions, we next estimated the ability of these strains to produce the extracellular polymeric substances (EPS) that are likely required for biofilm formation. The strains were grown in NFB* + KNO3 and tested for their capacity to bind the colorant Congo Red (Fig 5), a dye that binds extracellular DNA, several exopolysaccharides, and amyloid proteins and has been used as an indirect measurement of c-di-GMP production [56,57]. In general, we observed that the biofilm phenotype of the different strains positively correlates with EPS production (Figs 4 and 5), except for the strains that overproduce CdgB-ΔMHYT and CdgB-SGDEF-SGKEF. We did not observe significant differences in EPS production in strains overproducing these CdgB variants compared to the control strain (no CdgB overproduction). These variants may have reduced PDE activity, although more evidence is required to make that conclusion. We further analyzed exopolysaccharide production on mature biofilms formed under static conditions in NFB* medium supplemented with KNO3 by staining with the dye calcofluor white (CWC) (Fig 6). This dye shows affinity for polysaccharides with β-1,3 and β-1,4 linkages, such as those present in the capsular polysaccharides and exopolysaccharides produced by Azospirillum [58].
The ΔcdgB mutant strain and the CdgB-overexpressing strain showed low CWC binding compared to the WT and control strains in mature biofilms. In contrast, the strain overproducing CdgB EAL-AAL produced more exopolysaccharides than the controls. The strain overproducing CdgB SGDEF-SGKEF behaved as the WT strain and the negative control, while the strain overproducing CdgB ΔMHYT behaved comparably to the strain overproducing CdgB (Fig 6). These results further support our observations on the role of CdgB in controlling biofilm formation and EPS production, as well as the proposed importance of EPS production for biofilm maturation in A. baldaniorum Sp245 [59]. Since motility is regulated by c-di-GMP in an opposite manner compared to biofilm formation and EPS production, we next evaluated the effect of overexpressing CdgB and the mutated variants on swimming motility in the presence of a variety of chemosensory signals (malate, succinate and proline). The overproduction of CdgB promoted swimming motility in the presence of any of the tested chemoattractants, while overproduction of the CdgB variant with the AAL motif had the opposite effect. Strains overproducing the CdgB variants with either the SGKEF motif or the deletion of the MHYT domain, as well as the ΔcdgB mutant strain, showed no or only modest effects on motility (Fig 7). These results further suggest that CdgB can affect phenotypes associated with the levels of c-di-GMP, and hence that it is most likely a dual-function DGC-PDE. The switch in catalytic activity could depend on its abundance in A. baldaniorum cells or on the presence of signals that inhibit its PDE activity.

CdgB has DGC activity in Escherichia coli

To analyze the effect of CdgB on c-di-GMP accumulation, we examined the cellular levels of c-di-GMP with the use of a c-di-GMP biosensor in the heterologous host E. coli S17.1 [18]. When the concentration of the second messenger increases, TurboRFP increases proportionally [37,43]. To this end, we analyzed native CdgB and CdgB variants with point mutations in the DGC domain (CdgB SGDEF-SGKEF) and the EAL domain (CdgB EAL-AAL), respectively. The expression of CdgB (pGEX-CdgB/pDZ-119 strain) resulted in the accumulation of red fluorescent individual cells and the formation of red fluorescent colonies (Fig 8). These traits were also observed when the previously characterized DGC CdgA was overproduced, but not when the empty plasmid pGEX-4T1 was used instead (Fig 8) [17]. The bacterial cells containing the pGEX-CdgB SGDEF-SGKEF expression vector showed the same color as the negative pGEX-4T1 control, whereas the pGEX-CdgB EAL-AAL strain increased the accumulation of red fluorescent bacterial cells and colonies (Fig 8). These results strongly suggest that CdgB has both DGC and PDE activities in E. coli.

CdgB is potentially polarly localized

Studies have shown that some proteins involved in c-di-GMP metabolism need a specific localization to regulate cell behavior [60-62]. Bioinformatic analyses predicted that CdgB contains an MHYT domain with seven transmembrane helices (Fig 1). These observations prompted us to evaluate whether CdgB is anchored to the cell membrane through its MHYT domain. To do so, we analyzed the localization of a CdgB-eGFP fusion protein by fluorescence microscopy. To better delineate the cell membrane, we stained the cells with a membrane lipid-specific dye (FM4-64FX). We observed CdgB anchored to the cell membrane and eventually localizing to the cell poles (Fig 9) (S1 Video).
The membrane anchoring of CdgB depended on the presence of the MHYT domain (Fig 9). We also confirmed through immunoblot analysis that the strain of A. baldaniorum expressing CdgB ΔMHYT produced a truncated CdgB-eGFP protein of 85 kDa in the cytoplasmic fraction, while the A. baldaniorum strain expressing CdgB-eGFP produced a protein of 112 kDa, mainly detected in protein extracts obtained by detergent solubilization (S4 Fig). Our results revealed that CdgB is possibly polarly localized. We found, in both strains tested (A. baldaniorum + cdgB::egfp and A. baldaniorum ΔcdgB + cdgB::egfp), that CdgB-eGFP displayed unipolar, bipolar and multisite distributions (S5 Fig). This compartmentalization could play a role in controlling the activity of this bifunctional enzyme, perhaps through a signal sensed by the MHYT domain in the periplasm.

Discussion

A. baldaniorum Sp245 is an environmental bacterium capable of establishing mutualistic relationships with a variety of plants. Both as a soil-dwelling bacterium and as a symbiont, A. baldaniorum Sp245 is exposed to a plethora of signals that need to be integrated by sensory modules to adapt its behavior and increase fitness. Signaling modules that incorporate the second messenger c-di-GMP are crucial for this bacterium to engage in sessile or motile lifestyles [2,17-19,63]. Here we reported our findings regarding the characterization of a GGDEF-EAL protein with PAS and MHYT sensory domains, which we named CdgB. Proteins with GGDEF-EAL tandem domains can be grouped based on the conservation of the GGDEF and EAL motifs. Some of these proteins have only one of the two motifs conserved [47], both domains degenerated [64,65], or both motifs conserved [10,47,66]. Typically, even when both motifs are conserved, these proteins have a predominant catalytic activity. Evidence has been accumulating on the role of bifunctional GGDEF-EAL proteins with dual enzymatic activity (DGC and PDE). The predominant activity of these proteins is controlled by self-made or external signals [67-69]. Our results strongly suggest that CdgB is a bifunctional enzyme. In the genetic background of A. baldaniorum Sp245, under the conditions tested, overproduction of CdgB results in phenotypic changes typically associated with the activity of PDEs. Interestingly, in a heterologous host the predominant activity observed was that of a DGC. These differences could stem from species-specific signals or from the growth conditions used for these experiments. We speculate that the different behavior of CdgB in E. coli could be due to differences in nitrogen metabolism. Recently, Park et al. [70] demonstrated that a nitrite transporter stimulates biofilm formation in E. coli and P. aeruginosa by controlling NO production through suppression of the appropriate nitrite reductase, with subsequent diguanylate cyclase activation and c-di-GMP production. Additionally, E. coli possesses a transcription factor named NorR with a non-heme iron center to sense NO [71], indicating that different regulatory mechanisms are at play. More work needs to be done to elucidate what could be favoring the DGC activity of CdgB in E. coli. The MHYT sensory domain could potentially play an important role in controlling the enzymatic activity of CdgB. The first characterized hybrid DGC-PDE protein with an MHYT domain was YkoW from Bacillus subtilis, although the involvement of this domain in signal perception and enzymatic function could not be ascertained.
Nevertheless, it was proposed that the MHYT domain could potentially sense O2, CO, or NO through the coordination of one or two copper atoms [46]. Other proteins characterized to date with an MHYT domain are NbdA and MucR from P. aeruginosa, both implicated in NO-mediated biofilm dispersion. The deletion of nbdA and mucR impaired NO-induced biofilm dispersion, although only NbdA appeared to be specific to the NO-induced dispersion response. Although MucR was shown to play a role in biofilm dispersion in response to NO and glutamate, exposure to NO in the presence of MucR but the absence of NbdA did not result in increased PDE activity. MucR is one of the few proteins harboring both EAL and GGDEF domains that possess both DGC and PDE activity [47,66]. Furthermore, MucR is proposed to interact with the alginate biosynthesis protein Alg44, which contains a c-di-GMP-binding PilZ domain essential for alginate biosynthesis [5,66]. Recently, another MucR homologous protein, also implicated in alginate biosynthesis, was described in Azotobacter vinelandii [72]. Oxygen and nitric oxide are key signals during the establishment of A. baldaniorum Sp245 as a symbiont [2,40]. Interestingly, there are at least two chemotaxis receptors, Tlp1 and Aer, involved in O2 sensing that incorporate c-di-GMP detection through a conserved PilZ domain [2,4]. Aer also integrates other chemical cues produced by the roots [4]. These chemosensory receptors could be potential signaling partners of CdgB, which could perhaps sense similar chemotaxis signals. The PAS domain of CdgB could add another layer to its sensory repertoire. This domain is present in a variety of bacterial signaling proteins and is able to bind several molecules, such as oxygen and flavin adenine dinucleotide (FAD) [9,73]. NO has been shown to promote root growth [74] and biofilm formation in A. baldaniorum Sp245, and the absence of a periplasmic nitrate reductase (Nap) significantly affects biofilm formation through a mechanism yet to be explored [40]. Since the MHYT domain has been proposed to sense NO, it is tempting to speculate that CdgB could participate in a signaling module that controls biofilm formation in response to nitric oxide. We speculate that NO may be sensed by the MHYT domain of CdgB. Reception of this signal may influence CdgB activity by modulating its DGC activity and possibly the activity of its PDE domain. Nevertheless, the molecular details of this mechanism remain largely unknown and require further analyses. The polar localization of CdgB is intriguing. It remains to be shown whether this recruitment occurs at the flagellated cell pole and what the interacting partners of CdgB are. Cellular compartmentalization of signaling modules opens the possibility of localized sensing and short-range signal transduction to a closely localized effector. Future work will be aimed at identifying potential members of the CdgB signaling module.

Supporting information

S5 Fig. (A) A. baldaniorum cdgB::egfp and (B) A. baldaniorum ΔcdgB+cdgB::egfp were grown in NFB* medium. The GFP fusion protein in the WT and ΔcdgB strains was detected using a fluorescence microscope (TE 2000U; Nikon). Different subcellular locations of the CdgB-eGFP protein, including polar, bipolar and multisite distributions, were visualized. Membrane lipids stained with FM4-64FX are shown in red, the CdgB-eGFP fusion in green, and the bacterial nucleoid stained with DAPI in blue. The images are representative of three biological repeats. The scale bar corresponds to 10 μm. (TIF)

S1 Video. Visualization of A. baldaniorum cdgB::egfp. Time-lapse.
The samples were scanned at an x/y scanning resolution of 1,024 × 1,024 pixels. The step size in the z direction was 0.05 μm. The Plan Apo VC 60X WI (water immersion) objective lens was used. eGFP was excited at 488 nm and FM4-64FX was excited at 561 nm. (AVI)
Personalized Medicine in ANCA-Associated Vasculitis: ANCA Specificity as the Guide?

Anti-neutrophil cytoplasmic antibody (ANCA)-associated vasculitis (AAV) is a small- to medium-vessel necrotizing vasculitis responsible for excess morbidity and mortality (1). The AAVs, which include granulomatosis with polyangiitis (GPA), microscopic polyangiitis (MPA), and eosinophilic granulomatosis with polyangiitis (EGPA), are among the most difficult types of vasculitis to treat. Although clinicopathologic disease definitions have been used traditionally to categorize patients into one of these three diagnoses, more recently ANCA specificity for either proteinase 3 (PR3) or myeloperoxidase (MPO) has been advocated for the purpose of disease classification (2). This is because differences in genetics, pathogenesis, risk factors, treatment responses, and outcomes align more closely with PR3- or MPO-ANCA type than with the clinicopathologic diagnosis. Moreover, classifying patients as GPA or MPA can be challenging because biopsies are not obtained routinely in most cases and existing classification systems can provide discrepant classifications for the same patient (3). In this review, we address the recent literature supporting the use of ANCA specificity to study and personalize the care of AAV patients (Table 1). We focus particularly on patients with GPA or MPA.

Non-MHC variants such as those in the SERPINA1 and PRTN3 genes have been associated with PR3-ANCA+ but not MPO-ANCA+ disease, but variants in PTPN22 are observed in both MPO- and PR3-ANCA+ disease (4,5). Functional studies have expanded upon previous GWAS studies and confirmed the potential pathogenic link between genetic variants and AAV (6). Given the associations between genetic variants and ANCA specificity, genetic testing may play a future role in identifying patients at risk for AAV. In fact, the presence of several of these variants (e.g., MHC and non-MHC) in the same individual increases the odds that the individual will develop AAV (4). However, additional studies are necessary to understand how genetic testing might be used in the clinical setting. Moreover, our knowledge of genetic associations in AAV stems from studies of patients of European descent and may be difficult to extrapolate to patients with other ancestries. One previous case-control study found that genetic variants at DRB1 might predispose African American patients to PR3-ANCA+ AAV (7), but additional studies in patients of non-European descent are needed.

PATHOGENESIS OF PR3- AND MPO-ANCA+ AAV

The pathogenesis of AAV is complex and the precise cause or causes remain unknown, but MPO- and PR3-ANCA are generally considered to have substantial roles in the pathophysiology of most patients' disease (8). Direct proof of a relationship between the presence of these antibodies and the initiation of disease in humans, however, remains lacking, despite the fact that compelling animal models for AAV exist. This is particularly true for MPO-ANCA, as discussed below (9). MPO- and PR3-ANCA+ AAV appear to share many features of pathogenesis, yet certain differences have also been observed. Myeloperoxidase and proteinase 3, the targets of MPO- and PR3-ANCA, respectively, are both found in neutrophil granules and monocyte lysosomes. PR3 is normally expressed on the neutrophil cell surface, more so in PR3-ANCA+ patients than in healthy controls. In contrast, MPO is not spontaneously expressed on neutrophil cell surfaces, but surface MPO expression is detectable after neutrophil activation (10).
In AAV, the binding of MPO- or PR3-ANCA to neutrophils induces activation and degranulation as well as adhesion and transmigration of neutrophils across the vascular endothelium, culminating in endothelial cell damage. The role of monocytes in AAV is less well understood. The pathogenic importance of MPO-ANCA is supported by the ability of these antibodies to induce a vasculitis syndrome resembling AAV when MPO-ANCA are transferred into experimental mouse models (9). The development of a similar animal model for PR3-ANCA+ AAV has been elusive to date, in part due to differences in PR3 expression in mice and humans. Several additional observations support the importance of PR3- and MPO-ANCA in the pathogenesis of AAV. These include: (1) the great majority of patients with AAV are MPO- or PR3-ANCA+ (11); (2) there are consistent differences in clinical features of AAV according to ANCA type (see below); (3) B-cell targeted therapies and/or plasma exchange are efficacious in both PR3- and MPO-ANCA+ AAV (12,13); (4) there is some correlation between ANCA titer and disease activity (see below); (5) transplacental transfer of MPO-ANCA is reported to have caused AAV in a newborn (14); (6) PR3-ANCA antibodies are known to appear in patients' blood years before clinical presentation (15); and (7) genetic variants in proteinase 3, the antigenic target of PR3-ANCA, are associated with PR3-ANCA+ AAV (see above). However, the presence of MPO- or PR3-ANCA positivity does not always correlate with disease activity, suggesting that multiple factors are necessary to induce the vasculitic and granulomatous features of AAV. Such factors include genes, infections, medications, environmental exposures, the epitope specificity of ANCA, and almost certainly others (8). Neutrophil extracellular traps (NETs) are increasingly recognized as important for the pathogenesis of autoimmune conditions, including both MPO- and PR3-ANCA+ AAV (16,17). In normal individuals, NETs are immunogenic and have a role in trapping and killing invading extracellular microbes. Notably, NETs can activate certain immune cells, including autoreactive B cells (16,17), and cause end-organ damage. Spontaneous NET formation is observed more often in AAV patients than in healthy controls, likely because of stimulation of neutrophils by ANCA (16), and correlates with disease activity (17). Upon stimulation, NETs containing PR3 and MPO (16) are released both into the circulation and into damaged tissues. Complement has traditionally not been thought to play a role in the pathogenesis of these "pauci-immune" vasculitides. Neither immunoglobulins nor complement components are observed prominently in biopsy specimens from patients with AAV. The lack of immunoglobulin and complement in the renal lesions of AAV, for example, contrasts strikingly with the glomerular lesions observed in systemic lupus erythematosus. However, mounting evidence suggests that activation of the alternative pathway is important to the pathogenesis of MPO-ANCA+ and, according to more recent evidence, PR3-ANCA+ AAV (18,19). A recent study by Wu et al. suggested that the classical or lectin complement pathways are activated in PR3-ANCA+ but not MPO-ANCA+ AAV (18). Moreover, avacopan, a C5a receptor inhibitor, was found in early-phase trials to have efficacy in AAV and to have a potential role as a glucocorticoid-sparing drug in remission induction (20).
The results of an ongoing phase 3 randomized controlled trial evaluating its efficacy for remission induction will be an important proof-of-concept advance in our understanding of the role of complement activation in AAV (21). Cytokine profiles may highlight potential differences in pathogenesis between MPO- and PR3-ANCA+ patients. Berti et al. recently compared differences in serum cytokine profiles associated with inflammation, proliferation, vascular injury, and tissue damage and repair among AAV patients grouped according to ANCA type or clinical diagnosis (22). Differences according to phenotype (e.g., PR3- vs. MPO-ANCA+ and GPA vs. MPA) were observed regardless of whether ANCA type or clinicopathologic condition was used to group patients, but the differences were more striking when PR3- and MPO-ANCA+ patients were compared to one another. In the study by Berti et al., nine biomarkers were higher among the PR3-ANCA+ subset (22). These included interleukin (IL)-6, granulocyte-macrophage colony-stimulating factor, IL-15, IL-18, CXCL8/IL-8, CCL17/thymus and activation-regulated chemokine, IL-18 binding protein, soluble IL-2 receptor α, and nerve growth factor β. Four cytokines were higher in the MPO-ANCA+ subset, including soluble IL-6 receptor, soluble tumor necrosis factor receptor type II, neutrophil gelatinase-associated lipocalin, and soluble intercellular adhesion molecule. In multivariate-adjusted analyses, no cytokine levels remained significantly associated with either GPA or MPA, but several associations between cytokines and ANCA type persisted. Additional studies are necessary to further validate these observations, particularly in larger MPO-ANCA+ cohorts. In conclusion, the current pathogenic model of AAV suggests that MPO- and PR3-ANCA+ vasculitis share many pathogenic features. However, recent studies suggest that there may also be differences in complement activation and cytokine profiles according to ANCA type. Additional studies are necessary to clarify how pathogenesis may differ according to ANCA type. Differences in pathogenesis between PR3- and MPO-ANCA+ patients may identify novel treatments guided by ANCA specificity.

AAV RISK FACTORS

Several potential risk factors have been associated with the development of AAV, including environmental, drug, and infectious exposures.

Silica

Silica exposure, typically related to occupational history, has been associated with AAV in several studies. Indeed, a recent meta-analysis found that silica exposure was associated with 2.6-fold higher odds (OR 2.6, 95% CI: 1.5-4.4) of AAV (23). This observation was true for MPA and GPA patients, suggesting that a similar risk exists for both MPO- and PR3-ANCA+ subjects. In another study, MPO-ANCA+ disease was more common than PR3-ANCA+ disease (24) among cases with high silica exposure, but additional studies of this question would be useful.

Staphylococcus aureus

There is long-standing interest in understanding potential associations between microbes, particularly chronic nasal carriage of Staphylococcus aureus, and the risk of AAV and flare. These suspected associations date back to early observations of infectious symptoms and secondary sinonasal infections in GPA patients with sinonasal disease (25). Subsequently, a small clinical trial in GPA patients, the majority of whom were presumably PR3-ANCA+, found that trimethoprim/sulfamethoxazole was associated with a 70% (HR 0.3, 95% CI: 0.1-0.8) reduction in the risk of flare compared to placebo.
These findings have been interpreted as support for the hypothesized role of S. aureus or other microbes as risk factors for AAV relapse (26). However, it has been noted that the effects of trimethoprim/sulfamethoxazole on disease activity might be mediated through mechanisms other than reducing S. aureus carriage, given that changes in S. aureus carriage on antibiotics did not necessarily relate to subsequent flare. More recently, in a sub-study of two randomized clinical trials, GPA patients with chronic nasal S. aureus carriage were observed to have a higher risk of relapse than GPA patients without chronic S. aureus carriage (27). Again, these findings suggest an association between chronic S. aureus carriage and relapse risk, but the authors propose that an underlying genetic confounder might be responsible for this observation. In GPA, and therefore likely PR3-ANCA+ AAV, we can only surmise that chronic nasal carriage of S. aureus may be associated with the risk of flare, but further studies are needed to account for potential confounders of this observed association. There is no strong evidence base to suggest, however, that S. aureus or other infections are risk factors for GPA or AAV generally.

Medication-Induced AAV

A number of drug exposures, including prescribed medications and illicit substances, have been associated with AAV, though well-designed studies assessing the association between these exposures and the risk of AAV are lacking. Case series and anecdotal experience strongly suggest potential associations between AAV and drug exposures, particularly hydralazine (28), propylthiouracil (28,29), and levamisole (typically as an adulterant in cocaine) (30). The link between these medications and AAV appears to be far stronger for MPO-ANCA+ AAV than for PR3-ANCA+ AAV. Extremely high titers of MPO-ANCA are often reported in these cases. In one single-center study, 13 of 30 (43%) patients with the highest MPO-ANCA titers in a large hospital's ANCA lab had been exposed to hydralazine or propylthiouracil (28). Levamisole-contaminated cocaine has also been associated with AAV. This drug-induced syndrome is often manifested by large-joint arthralgias, cutaneous lesions including purpuric earlobe lesions, and frequently MPO-ANCA positivity, though often with dual positivity (50% were PR3- and MPO-ANCA+ in one study) (30). The presence of both MPO- and PR3-ANCA positivity is not seen in all cases of drug-induced AAV, but dual positivity should raise suspicion for a drug culprit. It is important to note that ANCA positivity in the setting of drug exposure can occur without clinical features of vasculitis and is not diagnostic of AAV. The MPO-ANCA seen with propylthiouracil therapy, for instance, may have features that distinguish it from the pathogenic MPO-ANCA seen in classic AAV (29). In summary, several risk factors for AAV have been proposed and these may differ according to ANCA type (e.g., S. aureus in PR3-ANCA+, drugs in MPO-ANCA+ disease). However, environmental exposures, particularly to silica, appear to be a common risk factor in both PR3- and MPO-ANCA+ AAV. Additional well-designed studies are needed to better characterize environmental, infectious, and other exposure-related risk factors in AAV, particularly according to ANCA type.

ANCA TESTING FOR THE DIAGNOSIS AND MONITORING OF AAV

The initial discovery of ANCA among patients with clinical syndromes that would now be characterized as GPA or MPA was a major milestone in the diagnosis and management of these conditions (31).
Following the discovery of ANCA and the spreading availability of testing, the diagnosis of GPA or MPA was increasingly made with confidence in the proper clinical setting, often without a biopsy. The classic approach to ANCA testing is a two-step process (32). First, indirect immunofluorescence (IIF) is performed to detect a cytoplasmic or peri-nuclear ANCA pattern. Second, immunoassays of samples positive for ANCA by IIF are performed to confirm the IIF results and to determine ANCA specificity (e.g., PR3-ANCA or MPO-ANCA). However, accumulating evidence suggests that the test performance (e.g., receiver operating characteristic curves) of contemporary immunoassays is quite strong and less susceptible to inter-reader variability and other potential sources of imprecision than IIF (33). For instance, in a study by Damoiseaux et al., the area under the curve (AUC) of immunoassays for PR3- or MPO-ANCA was between 94 and 96%, whereas the AUC for IIF was between 84 and 92% (33). A two-step process for ANCA testing has not been found to improve test performance (33,34). Therefore, a one-step process using only immunoassay testing for PR3- or MPO-ANCA without IIF is sufficient for diagnosing AAV. In addition to test performance, it is also important to interpret the test results appropriately. Though PR3- and MPO-ANCA test results are often interpreted as positive or negative, the test performance may vary according to titer, such that increasing titers may more accurately classify patients according to the correct diagnosis (34). The role of serial ANCA testing in the management, as opposed to the diagnosis, of AAV patients remains poorly defined and controversial. In a post-hoc analysis of the Wegener Granulomatosis Etanercept Trial (WGET), in which patients with GPA were randomized to conventional therapy (cyclophosphamide or methotrexate) or conventional therapy plus etanercept (35), PR3-ANCA titers correlated with disease activity and both PR3- and MPO-ANCA titers decreased during remission induction (36). Notably, the vast majority (∼73%) of patients in WGET were PR3-ANCA+ (35). A meta-analysis that includes post-hoc analyses of WGET as well as other studies found that a rise in ANCA levels in patients in remission was associated with a positive likelihood ratio of 2.8 (95% CI: 1.7-4.9) for a future relapse; the absence of a rise in ANCA was associated with a negative likelihood ratio of 0.5 (95% CI: 0.3-0.9) for having a future relapse (37). Becoming ANCA negative, and even staying ANCA negative during follow-up, has not been observed to be a reliable indicator that a patient will achieve or maintain remission (36,37). The utility of repeat testing may differ according to ANCA type, especially with contemporary treatment strategies. Findings from the Rituximab in ANCA-Associated Vasculitis (RAVE) trial provided additional insights into the potential value of serial ANCA testing. In the RAVE trial, MPO- and PR3-ANCA+ patients were randomized to remission induction with either rituximab (RTX) or cyclophosphamide followed by azathioprine (CYC/AZA) (12). Approximately 67% of patients in RAVE were PR3-ANCA+. Similar to observations from WGET, RAVE patients who became ANCA negative were not more likely to achieve clinical remission at 6 months (12). However, differences in the likelihood of becoming ANCA negative were observed according to ANCA type and treatment. In particular, PR3-ANCA+ patients treated with RTX were more likely than those treated with CYC/AZA to become ANCA negative.
There was no difference in the rate of becoming ANCA negative among MPO-ANCA+ patients treated with RTX or CYC/AZA (12). Among PR3-ANCA+ patients treated with RTX in RAVE, a post-hoc analysis found that a rise (defined as a doubling) in the PR3-ANCA titer was associated with a higher risk of severe relapse within 1 year, especially in those with a history of renal involvement or alveolar hemorrhage (38). This was not observed among PR3-ANCA+ patients treated with CYC/AZA in RAVE and was not observed in a post-hoc analysis of WGET, where most patients were PR3-ANCA+ and received CYC for severe disease (36). Thus, the potential utility of serial PR3-ANCA testing may be specific to patients treated with rituximab, as opposed to other therapies. In summary, an isolated increase in an ANCA titer without an associated change in symptoms or findings otherwise suggestive of a disease flare is of unclear significance. Certainly not all patients who experience an increase in their ANCA titers will go on to have a disease flare and, if they do, the timing of a flare could be many months to even more than a year following the ANCA titer rise. Therefore, one must weigh the risks and benefits of treatment decisions guided only by ANCA titers (36). The ANCA type and treatment exposure may influence the predictive ability of changes in titers, so the utility of serial ANCA measurements may evolve over time as our treatment regimens change. It is important to note that most studies to date evaluating the predictive value of changes in ANCA titers have been limited by the frequency of titer measurements, variations in outcome definitions, and the inclusion of mostly PR3-ANCA+ patients.

CLINICAL FEATURES

Demographics

MPO-ANCA+ patients are more likely to be female and, on average, 10 years older than PR3-ANCA+ patients at presentation (39). There are also differences in the distribution of ANCA type according to race and geography, such that Japanese, Chinese, and Southern European AAV patients are more likely to be MPO- rather than PR3-ANCA+ when compared with non-Japanese, non-Chinese, and Northern European AAV patients (40). In a population-based study comparing AAV incidence and features in defined geographic regions of the UK and Japan, more than 80% of cases in Japan were MPO-ANCA+. In contrast, more than 66% of cases in the UK were PR3-ANCA+ (40).

Clinical Phenotype

With regard to clinical phenotype, those who are PR3-ANCA+ more often have a presentation consistent with GPA, whereas those who are MPO-ANCA+ tend to have features of MPA. However, ∼10% of patients with GPA are MPO-ANCA+; PR3-ANCA+ MPA seems to be a rarer phenomenon (41,42). In contrast to MPO-ANCA+ patients, those who are PR3-ANCA+ are more likely to have involvement of the ears, nose, sinuses, and throat (3,39,43). Whereas both MPO- and PR3-ANCA+ patients can have lung involvement, those who are MPO-ANCA+ more often present with features of interstitial lung disease (e.g., fibrosing lung disease) rather than the cavitary lesions and/or nodules characteristic of PR3-ANCA+ disease (44,45). Evolving literature suggests that MPO-ANCA+ patients are at higher risk for bronchiectasis, which is often present prior to AAV presentation. In two recent cohort studies, MPO-ANCA+ subjects were found to have bronchiectasis more often than PR3-ANCA+ subjects (44,46). In one, only MPO-ANCA+ subjects had bronchiectasis (46).
In the other, MPO-ANCA+ subjects were twice as likely to have bronchiectasis (31% vs. 15%), and the bronchiectasis was more severe among the MPO-ANCA+ subjects (44). The high proportion of MPO-ANCA+ patients with bronchiectasis raises the question of whether it might predispose to MPO-ANCA+ AAV, be more likely to complicate MPO-ANCA+ AAV, or go undetected for some time before AAV comes to medical attention. In addition to differences in respiratory tract involvement, MPO-ANCA+ patients more often have renal involvement than PR3-ANCA+ patients. Moreover, among MPO- and PR3-ANCA+ patients with renal involvement, those who are MPO-ANCA+ often present with more severe renal disease, characterized by a lower glomerular filtration rate, a greater need for renal replacement therapy (31% vs. 20%), and more chronic-appearing lesions on renal biopsy (47). However, ANCA type does not consistently predict the risk of end-stage renal disease (3).

Features Among Patients With Discordant ANCA Types and Clinical Phenotypes

Though ANCA type is increasingly recognized as a clinically meaningful and standardized approach to characterizing AAV patients, the combination of ANCA type with clinical phenotype (e.g., GPA or MPA) may identify additional subtypes with unique features (Table 2). Several studies have suggested that there may be differences between MPO-ANCA+ GPA patients and those who are PR3-ANCA+ or those who are MPO-ANCA+ and have presentations consistent with MPA (45). In one single-center cohort study by Schirmer et al., MPO-ANCA+ GPA patients were found to have limited disease more often, to have higher rates of subglottic stenosis, and to have lower rates of renal involvement compared with PR3-ANCA+ GPA patients (45). In a nephrology clinic-based cohort study by Chang et al., Chinese patients with MPO-ANCA+ GPA were found to have less severe renal disease than PR3-ANCA+ GPA patients and a lower risk of progressive renal failure (42). In contrast, disease manifestations did not differ between MPO-ANCA+ and PR3-ANCA+ GPA patients who had been enrolled in two large clinical trials (41) and studied in a post-hoc analysis by Miloslavsky et al. These conflicting results with regard to disease manifestations may be related to differences in study design (clinical trial vs. single-center cohort study) (48), classification of GPA and MPA, and enrollment criteria. They may also reflect the limitations of attempting to address these questions in studies with small sample sizes. Discordant associations between ANCA type and clinical phenotype may also have implications for relapse rates. In the study by Miloslavsky et al., MPO-ANCA+ GPA patients flared more often than MPO-ANCA+ MPA patients (41). Due to statistical limitations, this question could not be addressed in the Schirmer et al. study (45). In the study by Chang et al., MPO-ANCA+ GPA patients had a lower flare rate than PR3-ANCA+ GPA patients (42). In summary, reliable interpretation of these small studies, which often provide disparate results, is difficult. Nevertheless, it is important to note that MPO-ANCA+ GPA patients may have a unique natural history, especially when compared with PR3-ANCA+ GPA patients.

RESPONSE TO TREATMENT ACCORDING TO ANCA TYPE

The Rituximab in ANCA-Associated Vasculitis (RAVE) trial randomized patients with severe PR3- or MPO-ANCA+ AAV to either rituximab (RTX) or cyclophosphamide/azathioprine (CYC/AZA) for induction therapy. RTX was found to be non-inferior to CYC/AZA for remission induction.
In a post-hoc analysis of the RAVE trial, however, PR3-ANCA+ patients treated with RTX had 2-fold higher odds (OR 2.1, 95% CI: 1.0-4.3) of achieving remission at 6 months than those treated with CYC/AZA (39). This was also true among those PR3-ANCA+ patients who were randomized in the setting of relapsing disease. There was no difference in efficacy between RTX and CYC/AZA among MPO-ANCA+ patients with regard to achieving remission. There may also be a difference in the efficacy of mycophenolate mofetil for remission induction in MPO-ANCA+ AAV compared with PR3-ANCA+ AAV patients without life-threatening disease (49). In the recent open-label, non-inferiority MYCYC trial, patients were randomized to mycophenolate mofetil or cyclophosphamide for remission induction. Both arms received azathioprine for maintenance therapy after remission induction. Remission rates at 6 months were similar in the mycophenolate mofetil and cyclophosphamide groups (67% vs. 61%), such that the two were found to be non-inferior to one another. Following remission, more patients in the mycophenolate mofetil group relapsed compared with those in the cyclophosphamide group (33% vs. 19%). This difference, however, was strongly driven by relapses in PR3-ANCA+ patients, 48% of whom relapsed following mycophenolate mofetil compared with 24% following cyclophosphamide. Therefore, it may be that mycophenolate mofetil is a reasonable option for remission induction in patients who are MPO-ANCA+ but may not be ideal for patients who are PR3-ANCA+. PR3-ANCA+ patients have been found in multiple studies to relapse more often than MPO-ANCA+ patients following remission induction (3,45,50). For instance, in one large United States community-based cohort, PR3-ANCA+ patients have been consistently found to have a nearly 2-fold higher risk of relapse than MPO-ANCA+ patients (3,51). Though this cohort is largely composed of patients with renal involvement, similar observations regarding differences in the risk of relapse between PR3-ANCA+ and MPO-ANCA+ patients have been made in the RAVE trial (12), in a cohort composed of patients from several large European clinical trials (52), as well as in a recently described large multi-center Spanish cohort (53). All of those studies included patients with both renal and non-renal manifestations. Patients with PR3-ANCA+ disease may also be more likely to have treatment-refractory disease. The term "treatment-refractory" is often challenging to define, and differing definitions have been used across studies. In the RAVE trial, however, the term "early treatment failure" was used to describe patients whose disease was not responding to therapy at the 1-month time point. Eleven of the 12 early treatment failures in the RAVE trial were PR3-ANCA+ (54). Patients with PR3-ANCA+ disease in the RAVE trial also had a 29% chance of failing the primary outcome at 6 months because of the recurrence of active disease (54). These observations suggest that different treatment approaches may be indicated for patients depending on ANCA type. PR3-ANCA+ patients, in contrast to MPO-ANCA+ patients, may benefit from rituximab rather than cyclophosphamide for remission induction and may also benefit from continued immunosuppression following remission given their increased risk of relapse. It may be reasonable, for example, to consider an extra one-gram infusion of rituximab at 4 months of treatment in the interest of inducing a solid disease remission.
Flare rates, however, vary significantly depending on the regimen used to maintain remission. In the recent MAINRITSAN trials comparing different contemporary maintenance strategies, those using rituximab at fixed doses had relatively low flare rates (3% at 22 months and 10% at 28 months) (55,56) compared with the approximately 32% rate of relapse at 18 months without maintenance therapy (50) and the 29% relapse rate at 28 months with azathioprine as maintenance (55). The vast majority of patients enrolled in these trials were PR3-ANCA+, so it is difficult to assess how flare rates may vary between PR3- and MPO-ANCA+ patients using contemporary maintenance strategies. One single-center experience using continuous B cell depletion with rituximab in MPO- and PR3-ANCA+ AAV patients reported a relapse rate of 20%, but the duration of follow-up in this study is not reported (43), nor is the relapse rate according to ANCA type. Additional studies are necessary to determine flare rates according to ANCA type using contemporary maintenance strategies and to understand the optimal long-term management of AAV according to ANCA type.

LONG-TERM OUTCOMES ACCORDING TO ANCA TYPE

As short-term AAV outcomes are optimized, increasing attention has shifted toward improving long-term outcomes. Particular attention has been paid to reducing the incidence of end-stage renal disease (ESRD) and death in AAV. Over the last two decades, renal survival in AAV has improved, such that fewer patients are developing ESRD (57). As mentioned, MPO-ANCA+ patients with biopsy-proven disease typically have more chronic, as opposed to active, renal lesions at the time of diagnosis when compared to PR3-ANCA+ patients (47,58-60). However, in a large cohort study by Rhee et al., there was no difference in renal survival when MPO- and PR3-ANCA+ patients were compared in both unadjusted and adjusted analyses (aHR 0.92, 95% CI: 0.6-1.5) (57). In that study, the most important predictor of long-term renal survival was renal function at presentation. Similar observations have been made in other studies of ESRD outcomes associated with AAV (61). Overall, mortality among patients with AAV is approximately 3-fold higher than that of the general population (62), but the gap in survival has narrowed over the last two decades (63,64). Both PR3- and MPO-ANCA+ AAV patients are at similarly increased risk of death compared to the general population (47,65). In other words, PR3- and MPO-ANCA+ AAV patients have a similar risk of death after accounting for differences in age and sex distributions between the subgroups (3). However, a recent study suggested that there may be differences in cause-specific death according to ANCA type. While more studies are needed, MPO-ANCA+ patients may be at higher risk for death due to cardiovascular disease even after accounting for differences in renal involvement, age, and sex (65). This observation is also consistent with the results of a prior study which found that MPO-ANCA+ patients may be at higher risk of non-fatal CVD events (66). Collectively, these findings suggest that, to further improve long-term survival in AAV, PR3- and MPO-ANCA+ patients may benefit from different targeted interventions. Additional studies are necessary to determine whether the management of CVD risk should differ according to ANCA type.
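Several of the comparative findings summarized above are reported as odds ratios or similar effect estimates with 95% confidence intervals. As a rough illustration of where such interval estimates come from, the sketch below computes an odds ratio with a Woolf (log-based) 95% CI from a 2x2 table; the counts are invented for illustration and are not taken from RAVE, MYCYC, or any other cited study.

```python
# Hedged sketch: odds ratio with a Woolf 95% confidence interval from a 2x2 table.
# Counts are hypothetical and for illustration only.
import math

# rows: treatment A vs. treatment B; columns: outcome (remission yes / no)
a, b = 42, 18   # treatment A: remission / no remission
c, d = 30, 28   # treatment B: remission / no remission

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
z = 1.96  # two-sided 95%
ci_low = math.exp(math.log(odds_ratio) - z * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + z * se_log_or)

print(f"OR = {odds_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```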
ANCA-NEGATIVE AAV

While ANCA type is increasingly used to classify patients with AAV, it is important to note that a portion of patients with AAV are ANCA negative, because the diagnosis of AAV remains based on clinicopathologic features rather than a positive ANCA test. This is especially true in patients with limited AAV and/or non-renal AAV (67). Rates of ANCA negativity in AAV are difficult to estimate because ANCA positivity is often used in AAV diagnostic algorithms. However, ∼20% of patients with AAV are thought to be ANCA negative; rates may be as high as 40% in those with limited AAV in historic studies (33,67,68). It is important to note that there are an increasing number of methods that can be used to detect PR3- or MPO-ANCA positivity and that the diagnostic test performance characteristics of these methods can vary (33). Therefore, in the setting of high diagnostic suspicion but negative ANCA testing, it may be useful to test for ANCA positivity using an alternative method of ANCA detection (68). There are limited data comparing patients with ANCA-negative AAV vs. PR3-ANCA+ AAV vs. MPO-ANCA+ AAV (41). Moreover, many contemporary AAV trials exclude patients who have no history of ANCA positivity. Studies of ANCA-negative AAV are an important avenue of future investigation.

CONCLUSIONS

ANCA testing is useful to establish a diagnosis of AAV in the appropriate clinical setting. ANCA testing also provides important insights into differences in genetic risk, pathogenesis, and response to treatment between PR3- and MPO-ANCA positive patients. A growing body of evidence supports the hypothesis that PR3- and MPO-ANCA+ AAV might represent distinct diseases rather than a single spectrum of disease. A number of research questions can be addressed to further advance our understanding of the potential use of ANCA type for guiding AAV care (Table 3). The available evidence suggests that AAV treatment might be optimized using a personalized approach guided by a patient's ANCA type.

AUTHOR CONTRIBUTIONS

ZW and JS contributed to the conception of the review, the literature search, and manuscript drafting and revision, read and approved the review for publication, and agree to be accountable for all aspects of the review.
A comparative investigation on structure evolution of ZrN and CrN coatings against ion irradiation

Binary ZrN and CrN nanostructured coatings deposited by magnetron sputtering were irradiated with 600 keV Kr3+ at room temperature. The ion irradiation fluences varied from 0 to 1×10^17 Kr3+/cm2. The results indicate that the microstructure of the CrN exhibits higher stability during the Kr3+ ion irradiation compared to that of the ZrN. The ion irradiation produces surface etching of the CrN coating, whereas on the ZrN coating surface the etching transforms into recrystallization and grain coarsening as the Kr3+ fluence increases.

Introduction

Nuclear reactors fuelled with (U, Pu)O2 are widely used in Europe [1,2,3]. Consequently, large amounts of plutonium and highly radioactive waste are produced every year. In order to reduce the toxicity of the nuclear waste, inert matrix fuels (IMFs) have been developed to optimize the burn-up of the nuclear fuels [4,5,6]. IMFs such as nitrides and carbides have been proposed as suitable materials for fast neutronic systems owing to their relatively high melting temperature, low neutron absorption cross-section, high thermal conductivity, superior hardness and high corrosion resistance [7,8,9,10]. Nitrides such as ZrN can form a solid solution with the fuels (for example (U, Zr)N and (Pu, Zr)N) and act as inert matrices to reduce the high fission density. Unfortunately, the sintering temperatures for bulk nitrides or carbides (such as ZrN, ZrC and TiC) are extremely high [11,12,13]. In addition, the grain sizes of the bulk ceramics are on the order of tens of micrometers. It is well known that nanocrystallization is one of the most important methods for strengthening the mechanical properties of materials [14,15]. Therefore, there is a need to fabricate nanostructured nitrides at a relatively low temperature and to investigate their behavior under ion irradiation. In this study, both ZrN and CrN nanograined coatings deposited at 300 °C were irradiated. A comparative investigation of the irradiation behaviors of the nanograined ZrN and CrN under ion irradiation was conducted.

Materials and methods

ZrN and CrN coatings were deposited on polished silicon (111) wafers and sapphire substrates by magnetron co-sputtering in an N2-Ar mixed atmosphere (99.999% purity for each gas). Zr and Cr metallic targets (Ø 76 mm, purity 99.9%) were used for the sputtering. The targets were pre-sputtered for 10 min to remove surface contaminants after reaching the base pressure of 5.0 × 10^-4 Pa. During the ZrN and CrN depositions, the working pressure was kept at 140 mPa with an N2/(Ar + N2) flow ratio of 40%. The deposition temperature, time and bias were maintained at 300 °C, 90 min and -120 V, respectively. The thicknesses of the ZrN and the CrN were controlled at ~3 μm. The nitride samples (deposited on sapphire substrates) were irradiated with 600 keV Kr3+ ions at room temperature in the Ion Beam Materials Laboratory at Los Alamos National Lab, using a 200 kV Danfysik High Current Research Ion Implanter. The 600 keV Kr3+ ions were implanted at normal incidence with an average flux of ~1.0×10^12 Kr3+/cm2/s. The total irradiation fluences were set at 0, 5.0×10^15, 5.0×10^16 and 1.0×10^17 Kr3+/cm2, respectively. Both normal and grazing incidence X-ray diffraction (GIXRD, Rigaku Ultima IV, Japan) tests were conducted to obtain the phase structure of the samples. An incidence angle of 3° was used during the GIXRD tests.
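For orientation, the irradiation time needed to reach each target fluence follows directly from the quoted average flux, since time = fluence / flux; the short sketch below makes that arithmetic explicit (illustrative only, not part of the experimental procedure).

```python
# Sketch: irradiation time required for each target fluence at the stated flux.
flux = 1.0e12  # Kr3+ / cm^2 / s (average flux quoted in the text)
fluences = [5.0e15, 5.0e16, 1.0e17]  # Kr3+ / cm^2

for phi in fluences:
    t = phi / flux  # seconds
    print(f"{phi:.1e} Kr3+/cm^2 -> {t:.0f} s (~{t / 3600:.1f} h)")
```

At this flux, the highest fluence of 1×10^17 Kr3+/cm2 corresponds to roughly 28 hours of irradiation.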
The microstructure of the pristine ZrN and CrN was investigated by TEM (FEI Tecnai G2 F20, U.S.). A scanning electron microscope (SEM, FEI Nano 430, U.S.) was used to obtain both surface and cross-sectional morphologies of the coatings. The hardness and elastic modulus of the coatings were determined with a nanoindentation tester (Anton-Paar TriTec TTX-NHT2, Austria). The maximum penetration depth was set at ~10% of the coating thickness (i.e. ~300 nm) to minimize the influence of the indentation size effect. Both the loading and unloading times of the indenter were fixed at 30 s. In order to allow material creep to relax, a 30 s pause was applied after loading. Results & discussion TEM observations were carried out to acquire the microstructure of the coatings. (Figure: surface morphologies of pristine ZrN and CrN and of the coatings irradiated to 5×10^15, 5×10^16 and 1×10^17 Kr3+/cm^2.) After irradiation at the lowest fluence, surface etching of the ZrN can be noticed. However, the etching transitions to recrystallization and grain coarsening on the ZrN coating surface as the ion fluence increases to 5×10^16 Kr3+/cm^2. A larger ZrN grain size is obtained by further increasing the ion fluence to 1×10^17 Kr3+/cm^2. In addition, microcracks (marked by arrows) can be observed on the surface grains of the ZrN. For the CrN, the pristine coating shows a pyramid-like surface microstructure. After Kr3+ ion irradiation to a fluence of 5.0×10^16/cm^2, surface etching of the CrN can be noticed as well. Additionally, the roughness of the coating surface decreases. With a further increase of the ion fluence to 1×10^17 Kr3+/cm^2, only surface etching can be observed. This is substantially different from the irradiation behavior of the ZrN coating, which shows a transformation from etching to recrystallization and grain coarsening. Nanoindentation tests were performed to follow the evolution of the hardness and elastic modulus of the coatings; the obtained values are shown in Table 1. Both the hardness and the elastic modulus of the ZrN coating decrease upon Kr3+ ion irradiation, owing to the increase of the grain size. In contrast, the hardness and elastic modulus of the CrN coating change only slightly after the Kr3+ ion irradiation. These results reveal that the microstructure of the nanostructured CrN exhibits higher stability during the Kr3+ ion irradiation than that of the nanograined ZrN. Fig. 4 shows the cross-sectional SEM images of the ZrN and CrN coatings after 600 keV Kr3+ ion irradiation to a fluence of 1×10^17 Kr3+/cm^2. No bubbles or cracks can be observed in the cross-sections of these coatings, indicating a higher irradiation tolerance of the nanostructured nitrides compared to bulk ceramics [9,16,17]. Columnar grains can be observed in the bottom layer of the ZrN, whereas no continuous columnar grains can be seen in the irradiated ZrN layer. This demonstrates that the continuous columnar grains were interrupted at the ZrN coating surface. In addition, protrusions (marked by circles) on the coating surface can be noticed as well. When recrystallization and grain coarsening of the ZrN occur, the ZrN columnar grains are broken up. The cross-section of the irradiated CrN, however, still shows a continuous columnar structure, and the average width of the CrN columnar grains changes only slightly after the ion irradiation. In addition, in Fig.
4(d), the cross-section of the irradiated CrN appears much smoother than that of the pristine CrN. This reveals that the ion irradiation produced bombardment and subsequent densification of the irradiated CrN layer, resulting in an increase of the internal compressive stress (Fig. 2(b)). These results further confirm that the microstructure of the CrN exhibits higher stability during the Kr3+ ion irradiation than that of the ZrN. An analytical TEM investigation of the irradiated layers of the ZrN and CrN coatings will be conducted in future work. Conclusions In summary, both nanograined ZrN and CrN coatings were irradiated with 600 keV Kr3+ at room temperature. The CrN shows higher structural stability against the irradiation than the ZrN.
1,793.4
2019-03-01T00:00:00.000
[ "Materials Science" ]
Evaluation of a quality control phantom for digital chest radiography Rationale and Objectives: To examine the effectiveness and suitability of a quality control (QC) phantom for a routine QC program in digital radiography. Materials and Methods: The chest phantom consists of copper and aluminum cutouts arranged to resemble the appearance of a chest. Performance of the digital radiography (DR) system is evaluated using high and low contrast resolution objects placed in the "heart," "lung," and "subdiaphragm" areas of the phantom. In addition, the signal levels from these areas were compared to similar areas from clinical chest radiographs. Results: The test objects included within the phantom were effective in assessing image quality except within the subdiaphragm area, where most of the low contrast disks were visible. Spatial resolution for the DR systems evaluated with the phantom ranged from 2.6 lp/mm to 4 lp/mm, falling within the middle of the line pair range provided. The signal levels of the heart and diaphragm regions relative to the lung region of the phantom were significantly higher than in clinical chest radiographs (0.67 versus 0.21 and 0.28 versus 0.10 for the heart and diaphragm regions, respectively). The heart-to-diaphragm signal level ratio, however, was comparable to those in clinical radiographs. Conclusion: The findings suggest that the attenuation characteristics of the phantom are somewhat different from actual chests, but this did not appear to affect the post-processing used by the imaging systems or the phantom's usefulness for QC of these systems. The qualitative and quantitative measurements on the phantom for different systems were similar, suggesting that a single phantom can be used to evaluate system performance in a routine QC program for a wide range of digital radiography systems. This makes the implementation of a uniform QC program easier for institutions with a mixture of different digital radiography systems. PACS number(s): 87.57.-s, 87.62.+n I. INTRODUCTION Utilization of digital radiography (DR) in radiology departments is becoming increasingly widespread. Benefits of digital radiography include reduced costs associated with film developing and handling, increased dynamic range of the acquired image, and a reduced repeat rate. Digital storage of the acquired images also provides the ability to perform image manipulation and long-term image archiving. Images can be made widely available to remote locations for display or diagnosis over computer networks. Realizing and maintaining these benefits requires the implementation of an effective quality control (QC) program. Quality control for digital radiography should be considered as essential as a quality control program for film processors. A QC program should include routine testing and inspection of the digital radiography components [e.g., imaging detector(s), cassettes, plate readers, etc.] performed daily, weekly, and annually. 1 Control limits on various imaging parameters related to image quality (e.g., exposure indicator, signal-to-noise ratio, and spatial resolution) also need to be established. 2 The results of the QC tests should be documented and evaluated for any trends occurring over time. In order to be practical, the tests in a QC program should be relatively easy to perform and not require detailed or complicated setup procedures. To accomplish this, a chest phantom has been developed 3 with embedded test objects to evaluate the resolution and contrast detectability of digital radiography systems.
This study involves the evaluation of the phantom and its characteristics under different imaging conditions with six different digital radiography systems in order to assess the usability of the phantom as part of a routine QC program. A. Phantom construction A phantom for digital chest radiography 3 (Nuclear Associates Model 07-646, Nuclear Associates, Carle Place, NY) was assessed for its performance and ability to support routine evaluation of image quality on digital imaging systems. The chest phantom is pseudoanthropomorphic in that it was designed to resemble the appearance of chest radiographs while including various objects for assessing image quality. This allows the image processing software to treat the resulting image as a chest image, which facilitates producing reproducible results and mimics the clinical utilization of the digital system (Fig. 1). The phantom was constructed from layers of 0.5-mm thick copper and 6-mm thick aluminum sheets cut into shapes resembling the heart, diaphragm, spine, and ribs, and a copper sheet with cutouts in the shape of the lung fields. The components were arranged to produce an image that resembled a chest radiograph. A wire mesh covering the phantom served to broaden any peaks in the image histogram resulting from large areas of uniform exposure. The entire phantom was sandwiched between 2.5-cm thick acrylic sheets to provide additional attenuation. Within the phantom were three test objects for performing a subset of the tests recommended by AAPM Task Group #10. 1 The line pair test object was located in the lung region and consisted of nine line pair groups ranging from 2.3 to 5 lp/mm oriented at 45° [Fig. 2(a)]. The purpose of this test object is to evaluate the effect of changes to the laser, optics, or scanning subsystems on spatial resolution. For a properly operating and calibrated digital radiography system, the range of line pairs provided by the test object should be capable of measuring the Nyquist frequency of most currently available systems. Although the line pair pattern is limited to measuring the limiting resolution of the DR system (ideally this is the Nyquist frequency), degradation in the operation of the laser, optics, or scanning subsystems significant enough to visibly affect image quality should be reflected by changes in the number of line pairs visible. A comprehensive evaluation of the resolution properties of a digital radiography system would require a measurement of the system MTF, which is beyond the scope of a routine QC program. Objects to evaluate contrast detail sensitivity and signal level [Figs. 2(a) and 2(b)] were located in the lung, heart, and subdiaphragm regions of the phantom. The contrast sensitivity objects were composed of copper disks of varying thickness and size. Copper disk thickness ranged from 0.006 to 0.076 mm in the lung, 0.013 to 0.127 mm in the heart, and 0.051 to 0.406 mm in the subdiaphragm area. Disk diameters ranged from 0.5 to 6 mm. This provided a range of contrast detail combinations specific to each region for assessing the contrast detail sensitivity of the digital imaging system. A loop of wire in each test object provided a reference region of interest (ROI) area for obtaining optical density or mean pixel value measurements. Readers are referred to Chotas et al. 3 for additional details on the design, composition, and construction of the phantom and embedded test objects. B.
Data acquisition In order to assess the applicability of the phantom, digital radiographs were obtained at seven different techniques on five computed radiography systems and a direct radiography system (Table I). Images of the phantom were obtained at two different kVp settings, 81 and 117 kVp. Using one of the systems, the Fuji FCR-9501-HQ, images were obtained using phototimed techniques at each kVp. Four additional images were then acquired at each kVp using approximately 1/5 and 5 times the phototimed mAs. For the other imaging systems, the radiographic techniques were adjusted to produce entrance skin exposures (ESE) similar to those measured for the Fuji FCR-9501-HQ system. Additional images were also obtained for each system using the varying techniques currently utilized at our institutions. All images were obtained at 180 cm source-to-image distance (SID) using a conventional wall Bucky, except for the Philips Thoravision and the Fuji 9501, which are both integrated systems. The entrance exposure to the phantom was measured for each exposure using a Radcal 1515 exposure meter (Radcal Corp., Monrovia, CA) and a 6 cm^3 ion chamber (Radcal 10×5-6) positioned in the midmediastinal region in front of the phantom. Because of this setup, backscattered radiation from the phantom was included in the exposure measurements. A linear gray-scale transformation was applied to each image, and any other post-processing procedures applied by the systems were turned off. Films were produced of each image using the laser printer associated with the particular imaging system (Table I). Hard-copy films were used to evaluate the spatial resolution and low-contrast test objects because of the concern about wide variations in the display quality of imaging workstations in soft-copy presentations. The phantom manufacturer's protocol was used to evaluate the images, which included measurements of the average optical density, mean pixel value, and low-contrast resolution in the heart, lung, and subdiaphragm regions, and of the spatial resolution in the lung region. For the average pixel value, the ROI function of the review workstations was used to obtain means and standard deviations within the reference regions. The ROIs were placed within the center of the specified loop regions in the heart, lung, and subdiaphragm areas. The spatial resolution and the number of visible low-contrast objects in each region were recorded by two independent observers. Spatial resolution was evaluated visually from the hard-copy film using a 25× magnifier and a viewbox. The resolution test object was viewed under magnification and the smallest line pair group that could be resolved was recorded. In addition to the phantom images, eight chest radiographs of actual patients, acquired with the Fuji 9501-HQ system, were used to compare the phantom to actual patient images. All patient images were acquired at 115 kVp using phototimed techniques. For each patient image the mean pixel values and standard deviations were obtained from ROIs placed in locations similar to those in the phantom. Care was taken to use representative but relatively "clear" areas in the regions of the clinical images to minimize the dependence of the results on background anatomical variations. C.
Relative signal evaluation For each ROI in the lung, heart, and subdiaphragm regions of the phantom and patient images, signal levels were normalized relative to the lung region by calculating the ratio of the plate exposure (derived from the average pixel values in the ROI) in each region (E_i) to that in the lung region (E_L). The pixel value to plate exposure relationship was provided by each manufacturer and in general had the form Q = a×log(b×E) + c, where E is the plate exposure, Q is the pixel value, and a, b, c are system-specific constants. The normalized or relative signal levels were therefore calculated as E_i/E_L = 10^((Q_i - Q_L)/M), where Q_i is the mean pixel value from region i, Q_L is the mean pixel value from the lung region, and M is a system-dependent proportionality factor (a short numerical sketch of this conversion is given after this passage). For the Fuji systems, M was set equal to 1024/L, where L is the latitude of the image reported by the system. For the Agfa systems, M = 1157.51, determined empirically. For the Kodak 4 and Lumisys 6 systems, M was set to 1000. Since digital images were not available for the Fuji 9501 and Philips Thoravision, a relative signal could not be computed, so these units were omitted from the relative signal evaluation. D. Histogram analysis To compare the phantom images to the patient images, area-normalized signal (pixel value) frequency histograms showing the relative exposures at the plate were generated for images acquired with the Fuji system. Pixel values in each image were converted to relative exposure values using the inverse of the pixel value to exposure relationship, where L is the latitude of the image, Q is the image pixel value, S is the Fuji exposure indicator or sensitivity value, and c is a constant. Histograms for the patient images were filtered with a low-pass filter to remove high-frequency sampling noise present in the data. The sampling noise is introduced by the Fuji CR reader during a two-stage down-sampling processing step in which the image data are converted from 12-bit to 11-bit and subsequently to 10-bit pixel representation. The shapes of the patient histograms were compared to the phantom histogram and to the idealized chest histogram used by the Fuji CR processing software. 5 III. RESULTS AND DISCUSSION A. Resolution and contrast detail sensitivity Table II lists the measured spatial resolution and entrance skin exposures for each imaging system. The spatial resolution measured for all digital radiography systems was lower than but related to the Nyquist frequency of the systems. The pixel size in most current digital radiography systems is between 100 and 200 μm, 1 depending on cassette size. For a 35×43 cm cassette, the pixel size for the Kodak, Agfa, and Lumisys readers was 0.17 mm/pixel, and 0.2 mm/pixel for the Fuji and Philips readers. For these pixel sizes, the frequency range and increments of the line pair patterns of the phantom are sufficient to reveal degradation in the resolution response of the system. Spatial resolution was highest for the Fuji 9501-HQ system at 4 lp/mm. For the Agfa and Kodak units, spatial resolution was generally around 2.8-3.0 lp/mm depending on technique. At 81 kVp, 1.1 μC/kg (4.3 mR) ESE, the image from the Kodak system was extremely noisy and showed large areas of pixel dropout due to insufficient exposure, particularly in the subdiaphragm area. The Philips Thoravision showed spatial resolution ranging from 2.6-2.7 lp/mm. At the low exposure technique obtained at 81 kVp, the image was nonuniform and mottled due to insufficient exposure, and the minimum line pair in the test pattern was unresolved.
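To make the relative-signal conversion described above concrete, the Python sketch below turns ROI mean pixel values into lung-normalized exposure ratios using E_i/E_L = 10^((Q_i - Q_L)/M), with M = 1024/L for a Fuji image as stated above. The pixel values and the latitude L used here are invented for illustration only.

# Lung-normalized relative signal from ROI mean pixel values, assuming the
# log-linear relation Q = a*log(b*E) + c so that E_i/E_L = 10**((Q_i - Q_L)/M).
def relative_signal(q_region, q_lung, m):
    return 10.0 ** ((q_region - q_lung) / m)

latitude = 1.6                 # illustrative Fuji latitude L
m_fuji = 1024.0 / latitude     # Fuji proportionality factor described above

roi_means = {"heart": 700.0, "subdiaphragm": 560.0}   # hypothetical ROI pixel values
q_lung = 820.0                                         # hypothetical lung ROI pixel value

for region, q in roi_means.items():
    print(region, round(relative_signal(q, q_lung, m_fuji), 2))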
Artifacts introduced by inappropriate image processing at very low exposures obscured the visibility of the test objects for the Thoravision and KESPR-400 systems. In general, images acquired at the low technique showed slightly lower resolution due to quantum mottle for all systems. Figure 3 illustrates the total number of disks visible in the lung, heart, and subdiaphragm regions for the phototimed techniques only. There were 25 disks present in each of the three phantom test objects (75 disks in total). The phantom shows broadly similar low-contrast performance across the systems, with the Kodak, Fuji 9501, and Philips systems resolving slightly more disks. The variation may be a result of window/level settings and differences in the output of the laser printers used, which was not controlled. The phantom appears to be useful for tracking low contrast visibility for a variety of systems, as it does not exhibit overwhelmingly better or worse performance on any particular system, and it targets an appropriate range of visibility for radiographic applications. Results from the low-contrast evaluation are listed in Table II. The number listed for each technique gives the number of disks visible in each of the lung, heart, and subdiaphragm test objects. For the lung and heart fields, typically 3-4 disks were seen in the first three rows and 0-2 disks were visible in the two smallest rows (1.0 and 0.5 mm diameter). This suggests that the contrast range of the disks is suitable for spotting changes in contrast detail sensitivity. For the subdiaphragmatic area, there were typically 4-5 disks visible in each row except for the smallest row (0.5 mm diameter). Therefore, the contrast range for the diaphragm region may not be sensitive enough to detect changes in contrast detail sensitivity for under-exposed regions. Contrast detail sensitivity improved with higher exposures, and the reverse was true for lower exposures (low mAs techniques). B. Relative signal evaluation and histogram analysis The relative signal values for the phantom images are tabulated in Table III. The subdiaphragm region of the 81 kVp low exposure technique obtained on the Kodak system was left out of the analysis because the test pattern was unresolved due to the extremely low exposure in this region. The ratio of signal levels in the heart and subdiaphragm regions to the lung tended to be slightly higher at 117 kVp relative to those at 81 kVp. This was expected based on decreased subject contrast at higher beam energies. As the x-ray beam energy increased, the difference in signal level decreased and the histogram became compressed due to decreasing subject contrast (see Fig. 4). This resulted in an increase in the relative signal levels at higher beam energies. The general similarity of the signal level ratios across imaging systems indicates that the phantom can produce consistent results for different systems. This is a positive attribute for implementing QC programs at institutions with a heterogeneous mix of digital radiographic systems. Table IV shows the relative signal values for the patient images. Figure 5 shows bar graphs of the average relative signal level for the patient chest radiographs [5(a)] and the imaging systems [5(b)]. In phantom images, the ratio of the heart to subdiaphragm relative signal level was 2.43±0.14, very similar to that in patient images (2.07±0.44).
Thus the relative attenuation between the heart and subdiaphragm regions of the phantom is a good match to the attenuation differences found in actual patients. However, the absolute value of the average lung-relative signal level for the heart region was 3.22±0.31 times higher in phantom images (0.67±0.03) than in patient images (0.21±0.06). For the subdiaphragm region, the average relative signal in the phantom images was 2.75±0.34 times higher than that in patient images (0.28±0.04 versus 0.10±0.03, respectively). The results suggest that these regions of the phantom are not attenuating enough with respect to the lung to generate histograms more similar to those of real chest radiographs. (Table III: relative signal for the chest phantom ±1 SD, averaged over the low, middle, and high exposure images; signal levels were normalized to that in the lung. The average relative signal for the Kodak KESPR-400 unit at 81 kVp excludes the low exposure image due to its extremely low signal level.) Histograms of two phantom images using phototimed techniques at 81 and 117 kVp acquired with the Fuji AC3 system are shown in Fig. 4. Area-normalized histograms of the relative log-exposure to the plate for the phantom images show four distinct peaks corresponding to the different regions of the phantom (lung, heart, subdiaphragm, and unattenuated region). The generalized chest histogram used by Fuji 5 (Fig. 6) consists of two regions, a broad peak of the main image data and a sharp narrow peak representing the directly exposed areas of the image. A valley representing the skin and other low attenuation regions separates the two peaks. No valley that would correspond to skin or soft tissue is present in the histogram of the phantom images. This deficiency, however, did not appear to affect the quality of the post-processed images. The effect of kVp on the phantom histograms is apparent. Shifting to higher kVp decreases the dynamic range of the image, making the histogram narrower. The histogram is also shifted to the right, towards higher exposure values. Examples of area-normalized histograms of the patient images are illustrated in Fig. 7. These histograms showed wide variations in shape and range, depending on the size and physical condition of the patient and the quality of the final image. Broad peaks representing the different fields are seen in some of the patient histograms. Also visible is the valley corresponding to the skin and other low attenuating tissues. The breadth of the histograms of the phantom and the patient images suggests that they have significantly different dynamic ranges. The scalar value of the dynamic range or latitude of Fuji CR images is reported by the system in the L parameter associated with the image, where L is the log10 of the dynamic range. For the patient images at 115 kVp, the average L value was 2.38±0.20, while for the phantom images at almost the same kVp (i.e., 117 kVp), the average L value was 1.6±0.0. Although the phantom was designed to resemble a chest radiographically, the phantom does not look exactly like a chest. The ribs are composed of linear structures with sharp, well-defined edges, and the mesh pattern is visible across the phantom. However, the presence of these linear structures has been shown to be of some benefit. During the evaluation period, a subtle blurring artifact appearing as a slightly darkened band was observed on clinical images coming from an Agfa CR reader.
The appearance of the artifact was often masked by normal anatomical variations in the image, but it could still be observed with window/level changes. Using the phantom to evaluate the system, it was discovered that the artifact was due to a pixel shift, clearly visible in the sharp mesh lines of the phantom images. (Fig. 5: (a) average relative signal levels for the patient radiographs; (b) average relative signal levels for the phantom radiographs. Fig. 6: generalized Fuji histogram illustrating the histogram segments corresponding to different image areas; 5 the graph is meant to show the parts of the histogram which correspond to various anatomical regions and does not necessarily reflect the actual histograms used by the Fuji image processing software.) This incident demonstrated the usefulness of the phantom in identifying and characterizing subtle artifacts that might otherwise be masked by variations in patient anatomy. IV. CONCLUSION Just as film processors require a routine quality control program to monitor film processing quality, digital radiography systems also require a quality control program to monitor image quality (e.g., changing exposure conditions and hardware deterioration). In this study, a QC phantom designed for this purpose was evaluated. The phantom was evaluated using six digital radiography systems from five different manufacturers. Phantom characteristics were similar across the imaging systems with respect to relative signal levels in different areas of the phantom images. The line pair test object provided a sufficient range of line pairs to evaluate and monitor the Nyquist frequency of the DR systems used in the study. The line pair test object is not expected to be able to detect small or subtle changes in the optical system of CR systems. Such changes would be more appropriately detected by measuring the MTF of the CR system, which is beyond the scope of a routine QC program. With two of the DR systems, the image processing appeared to be unable to deal with extremely low exposures and high quantum mottle in the images, producing artifacts that obscured the visibility of the test objects. There was variation in the low-contrast performance of the phantom across the DR systems used, but the low contrast characteristics of the phantom were felt to be reasonably similar for all systems. Use of standardized window/level settings and exposure techniques may help to reduce variability in the results. While the low contrast objects were able to detect contrast changes over a wide range of exposures, the sensitivity of the low contrast objects to exposure was not investigated. Both the spatial resolution object and the background mesh should help identify problems with the laser and components involved in the scanning process. Attenuation properties of the phantom were found to be somewhat different from those of actual chest radiographs. The phantom produces images with a much narrower dynamic range than is found with clinical chest images. However, these differences did not appear to be problematic in the phantom's intended use. Further work is required to track the long-term capabilities of the phantom and the sensitivity of the phantom to detect changes in the DR system. Overall, the results suggest that the phantom can be an effective tool in a routine QC program for diagnosing image artifacts and monitoring the performance and image quality of digital radiography systems.
5,345.2
2001-03-01T00:00:00.000
[ "Medicine", "Physics" ]
Improved prediction of clinical pregnancy using artificial intelligence with enhanced inner cell mass and trophectoderm images This study aimed to assess the performance of an artificial intelligence (AI) model for predicting clinical pregnancy using enhanced inner cell mass (ICM) and trophectoderm (TE) images. In this retrospective study, we included static images of 2555 day-5 blastocysts from seven in vitro fertilization centers in South Korea. The main outcome of the study was the predictive capability of the model to detect clinical pregnancies (gestational sac). Compared with the original embryo images, the use of enhanced ICM and TE images improved the average area under the receiver operating characteristic curve for the AI model from 0.716 to 0.741. Additionally, a gradient-weighted class activation mapping analysis demonstrated that the AI model trained on the enhanced images was able to extract features from crucial areas of the embryo in 99% (506/512) of the cases; in particular, it could extract the ICM and TE. In contrast, the AI model trained on the original images focused on the main areas in only 86% (438/512) of the cases. Our results highlight the potential efficacy of using ICM- and TE-enhanced embryo images when training AI models to predict clinical pregnancy. Introduction Typically, static embryo images are used, along with relevant clinical data, to train an AI model. Such a model provides a predictive value for clinical pregnancy to help embryologists select embryos for transfer. Although further clinical validation is required, recent studies have reported performance levels higher than or comparable to those of human experts in predicting clinical pregnancies 10 . Additionally, age has been highlighted as an important factor in many machine learning studies [11][12][13], as natural conception rates decline as women age 14,15 . Including age in the AI model is therefore expected to improve performance in predicting the likelihood of pregnancy. However, the variable quality and focus of images are common pitfalls of the AI models that have been presented thus far. Most pregnancy prediction models use blastocyst images, and previous studies have established a correlation between the morphology of the ICM and TE and clinical pregnancy [16][17][18][19]. In a previous study, reasonable and stable interpretations were achieved by paying adequate attention to the ICM and TE regions 13 . In other areas requiring medical image analysis, such as lung cancer or lung disease, segmenting the lung region from the original CT image and then predicting the presence of cancer or asbestosis has yielded better performance than using the entire original CT image 20,21 . Based on this, we hypothesized that the performance of clinical pregnancy models may be enhanced by segmentation guidance on the ICM/TE regions. In this study, we propose a novel algorithm that could reduce the incidence of erroneous predictions by generating enhanced images from segmented ICM and TE images, which were then used to train an AI model along with female age. This is the first report to prove that the performance of the AI model can be enhanced by focusing on the ICM and TE regions. Our proposed method was validated using gradient-weighted class activation mapping (Grad-CAM), emphasizing that the ICM and TE regions are critical for predicting pregnancy. Notably, we defined a positive pregnancy indication as the presence of a gestational sac (G-sac).
Study design and data preparation In this retrospective study conducted between June 2011 and May 2022, 8646 blastocyst images were collected on day 5 from seven IVF clinics. Images were captured using an inverted microscope or stereomicroscope before embryo transfer. Blastocysts from fresh and frozen transfers were matched to clinical pregnancy outcomes as determined by imaging of the gestational sac at 4-6 weeks. The presence of a G-sac in the ultrasound scan was used to indicate clinical pregnancy. This method can predict an intrauterine pregnancy with a specificity of 97.6% 22 . Additionally, using G-sac presence as an indicator can minimize the chance that non-embryonic factors are mistaken for pregnancy outcomes. Blastocysts with matched pregnancy outcomes were included in the analysis, whereas blastocysts from multiple embryo transfers in which the number of G-sacs differed from the number of transferred embryos were excluded. Images from clinics without relevant pregnancy information were excluded, and those from the remaining clinics were divided into four groups (A, B, C, and D), with at least 200 images provided by each group. The remaining images from the other three clinics were combined into one group (E) due to insufficient data for statistical analysis. Out of a total of 8646 images, 2555 (30%) were finally used for analysis according to the inclusion and exclusion criteria. An algorithm was developed using image pre-processing and model learning techniques; the overall process is shown in Fig. 1. Informed consent was waived by the IRB of the aforementioned institutions since this study was retrospective, and the personal information in the data was blinded. The present study was designed and conducted in accordance with the relevant guidelines and regulations of the ethical principles for medical research involving human subjects, as stated in the World Medical Association Declaration of Helsinki. Generation of enhanced ICM and TE images The day-5 blastocyst embryo images collected from the IVF clinics were manually annotated by trained personnel to identify the ICM and TE regions. The ICM was defined as the tightly packed mass of cells within the blastocoel cavity, and the TE as the spherical layer of outer cells surrounding the blastocoel and ICM 23 . The accuracy of the annotation was examined manually, first by embryologists and then by a group of laboratory directors with over 20 years of experience. The coordinates of the ICM and TE regions were stored as JSON files, and the Python library OpenCV (version 3.3.1) was then employed to generate segmented images of the ICM and TE regions from the original images. The original embryo image was represented by merging the red, green, and blue channels to display the natural colors, whereas a grayscale image is composed of only a single channel. Enhanced ICM and TE images were created by combining three grayscale images: the original embryo image, the ICM image, and the TE image. Instead of using the original red, green, and blue channels found in a regular color image, each of these images was converted to grayscale and treated as a single channel. The three grayscale images were then merged into the three channels of the final enhanced ICM and TE image. Conventional convolutional neural networks are designed to process images with three channels, and the grayscale images were merged into three channels to align them with the original image format (Fig. 2).
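The channel-merging step just described can be sketched in a few lines of Python with OpenCV. The file names, the JSON layout, and the helper that rasterizes the annotated polygons are hypothetical; only the core idea of stacking the grayscale original, the ICM-only image, and the TE-only image into the three channels of one input image follows the description above.

import json
import cv2
import numpy as np

def region_image(gray, polygon):
    # Keep only the annotated region (polygon vertex list) of a grayscale image.
    mask = np.zeros_like(gray)
    cv2.fillPoly(mask, [np.asarray(polygon, dtype=np.int32)], 255)
    return cv2.bitwise_and(gray, gray, mask=mask)

# Hypothetical file names; the JSON is assumed to hold ICM/TE polygon coordinates.
gray = cv2.imread("embryo_day5.png", cv2.IMREAD_GRAYSCALE)
with open("embryo_day5.json") as f:
    ann = json.load(f)

icm = region_image(gray, ann["icm"])
te = region_image(gray, ann["te"])

# Merge the three single-channel images into one 3-channel "enhanced" image.
enhanced = cv2.merge([gray, icm, te])
cv2.imwrite("embryo_day5_enhanced.png", enhanced)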
Training data split and image preprocessing Two separate experiments were conducted to evaluate the performance of the proposed method. In the first experiment, pregnancy was predicted using only the original embryo microscopy images. In the second experiment, we predicted pregnancy after generating enhanced images using the ICM and TE segmentation images. In total, 2555 images were divided into training data (2043 images, 80%) and model performance test data (512 images, 20%). We then used threefold cross-validation to divide the 2043 training images into three folds. Each fold was utilized to train and validate the model, and performance was evaluated using the fixed model performance test data. Supplementary Table S1 illustrates the overall data segmentation and distribution. Before applying the enhanced ICM and TE images to the deep-learning model, image pre-processing was performed to make them suitable for training. During this step, all pixel values of the image were normalized, and the image was resized to 224 × 224 pixels. Since the sample size of the image dataset was not sufficiently large, we attempted to improve the efficiency of image classification by applying image augmentation 24,25 . This made the model more robust by augmenting the images with transformations, allowing it to learn from more images during training. To perform image augmentation, TensorFlow 2.10.0, a deep-learning neural network API in Python 26 , was used with the following options:
- Random brightness
- Random saturation
- Random contrast
- Flip images vertically
- Flip images horizontally
In addition, since pre-trained models were used, the images were resized to fit the input size learned by each pre-trained model. All data split and image preprocessing procedures described above were applied in both the first and second experiments. Statistical analysis To determine whether there was a significant difference in age between the negative and positive pregnancy groups, a t-test was performed on the entire dataset. Furthermore, binary classifiers estimate the probability of an instance belonging to the positive class using prediction scores; as these scores are often poorly calibrated, they may not accurately reflect the true probabilities 27 . The Hosmer-Lemeshow test was used to compare the expected and observed frequencies of the positive class to assess calibration 28 . Model development and evaluation For the convolutional neural network model, we utilized the original and the enhanced ICM and TE images and compared their respective performances. Due to the limited sample size, we fine-tuned pre-trained models and used 224 × 224 images, the same size as used for ImageNet. Moreover, after concatenating the patient's age to the last fully connected layer of the architecture, the model was configured to return the predicted pregnancy value through a sigmoid layer. The complete process from image input to returned prediction is illustrated in Fig. 3. Model training was conducted on a machine equipped with an Intel Xeon CPU @ 2.10 GHz and an RTX3090 (24 GB) GPU, using the Python programming language (version 3.8.0).
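The augmentation options and the age-aware architecture described above can be sketched roughly as follows with TensorFlow/Keras. The augmentation parameter ranges and the 64-unit head are illustrative assumptions, not the authors' exact settings; only the overall structure (ResNet50 backbone, age concatenated before a sigmoid output) follows the description above.

import tensorflow as tf
from tensorflow.keras import layers, Model

def augment(image):
    # Augmentations listed above; parameter ranges are illustrative choices.
    image = tf.image.random_brightness(image, max_delta=0.1)
    image = tf.image.random_saturation(image, lower=0.9, upper=1.1)
    image = tf.image.random_contrast(image, lower=0.9, upper=1.1)
    image = tf.image.random_flip_up_down(image)
    image = tf.image.random_flip_left_right(image)
    return image

# ResNet50 backbone on 224x224x3 enhanced ICM/TE images, with the patient's
# age concatenated before the final sigmoid output.
image_in = layers.Input(shape=(224, 224, 3), name="enhanced_image")
age_in = layers.Input(shape=(1,), name="age")

backbone = tf.keras.applications.ResNet50(include_top=False, weights="imagenet", pooling="avg")
features = backbone(image_in)

x = layers.Concatenate()([features, age_in])
x = layers.Dense(64, activation="relu")(x)   # illustrative head size
out = layers.Dense(1, activation="sigmoid", name="pregnancy_probability")(x)

model = Model(inputs=[image_in, age_in], outputs=out)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])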
To evaluate the performance of our algorithm, we used four key metrics: sensitivity, specificity, accuracy, and the AUROC. With TP the number of true positive samples, TN the number of true negative samples, FP the number of false positive samples, and FN the number of false negative samples, the metrics are defined as sensitivity = TP/(TP + FN), specificity = TN/(TN + FP), and accuracy = (TP + TN)/(TP + TN + FP + FN). The areas under the receiver operating characteristic curves (AUROCs) of the AI models were compared using the original embryo images and the enhanced ICM and TE images. We utilized three representative pre-trained models for the convolutional neural network architecture, DenseNet121 29 , VGG16 30 , and ResNet50 31 . These models had been trained on the large-scale ImageNet dataset 24 and were fine-tuned to assess our embryo images. Grad-CAM is a technique used in the field of computer vision to understand and visualize the image regions that contribute to a deep neural network's prediction. It generates a heat map that highlights the input image regions that are most important for a particular class prediction. By generating these heat maps, Grad-CAM provides insights into the internal workings of deep neural networks, allowing researchers and practitioners to interpret model predictions. Results The t-test results, as shown in Table 1, indicated that the average age of the negative group (36.2 years) was significantly higher than that of the positive group (34.0 years). This age difference was also observed for each of the five clinics, with the negative group consistently having a higher average age than the positive group; the difference was statistically significant for all five clinics. Our results revealed that all three models performed better when trained using enhanced ICM and TE images with age information. The best performance was obtained when utilizing the enhanced ICM and TE images in the ResNet50 architecture, with a mean (standard deviation) AUROC of 0.741 (0.014) (Table 2). The AUROC performance for each of the three folds is shown in Supplementary Fig. S1. The original images also exhibited their best performance in the ResNet50 architecture, with a mean (standard deviation) AUROC of 0.716 (0.019). The boxplots in Fig. 4 show that the enhanced ICM and TE images had better AUROC values than the original images across all three models. The lower quartile (Q1) of the enhanced ICM and TE images was higher than the upper quartile (Q3) of the original images, indicating better performance based on the AUROC metric. In our study, we analyzed the regions that the CNN model learned from, using Grad-CAM, when predicting pregnancy with the proposed reconstructed images and with the original images. Our findings indicated that the ICM and TE regions were learned more intensively when using the reconstructed images instead of the original images. The Grad-CAM results in Fig. 5A,B illustrate a case where the original image led to an incorrectly predicted outcome, while the respective reconstructed image led to a correct prediction. In the original image, the features from the ICM and TE regions could not be extracted, leading to an incorrect prediction. Conversely, the reconstructed image model was trained to focus on the ICM and TE regions, which produced an accurate prediction. However, in the case of Fig.
5C,D, the features in the ICM and TE regions within the embryo could be adequately identified in both the original and reconstructed images; however, the predicted outcome was still incorrect. We examined the Grad-CAM results for four categories of cases among the 512 test-set images: cases in which both the original and the enhanced ICM and TE images were correctly predicted (n = 314), cases in which both were incorrectly predicted (n = 106), cases in which only the original image was correctly predicted (n = 35), and cases in which only the enhanced ICM and TE images were correctly predicted (n = 57). The number of images for which learning focused on the ICM and TE regions increased from 85.5% (438/512) to 98.8% (506/512) when the images were enhanced using the ICM and TE (Table 3). In Table 3, "Other" refers to cases where the model focused on areas not including any ICM or TE region. The Hosmer-Lemeshow test was conducted on our best-performing model to assess calibration and indicated poor calibration, with a p-value < 0.001 and a Brier score of 0.241. To address this issue, we applied isotonic regression and achieved a well-calibrated model with a p-value of 0.265 and a Brier score of 0.178. A calibration plot is shown in Supplementary Fig. S2. Discussion This study aimed to determine the effect of segmentation guidance on AI performance in predicting clinical pregnancy using embryo images. AI predictive performance improved when the model was guided by ICM and TE segmentation. Although overlooking appropriate areas in embryo images has previously been pointed out as an issue for AI, there have been no reported attempts to solve it. To the best of our knowledge, this is the first study to verify improvements in AI predictive performance using ICM and TE segmentation. Segmentation technology is commonly employed in various medical imaging domains. The utilization of segmentation technology in the computer-aided diagnosis of medical images is increasingly recognized as a valuable and critical aspect of medicine. This approach allows for the effective utilization of large volumes of medical data while mitigating the risk of misdiagnosis resulting from subjective visual observations [32][33][34]. Research in other medical imaging areas has indicated that AI trained on datasets simplified through segmentation has a higher diagnostic performance when focusing on the regions of interest 21,35,36 . Although the operating mechanism of deep learning algorithms remains elusive, it is widely accepted that the less complex the data, the higher the learning efficiency that AI can achieve under the same conditions 37 . The ICM and TE morphologies have long been used worldwide to evaluate and screen embryos. Since these structures play important roles in determining the fate of cells during embryonic differentiation, many studies have investigated the correlation between their morphology and pregnancy rates 16,18 . Several studies have shown that the ICM and TE are also strongly related to the live birth rate, and the ICM is known as a major factor that can predict euploid embryos 19,38 . Despite expecting the AI model to learn and accurately represent the morphology of the ICM and TE by training it with segmented images, the results were not as good as anticipated. In particular, when explaining the inferences of the deep learning model using techniques such as Grad-CAM, we noticed that the model often failed to focus on the ICM or TE. However, the enhanced ICM and TE images proposed in this study ensured that the morphology of these embryonic structures was
emphasized for the deep learning model. This approach provided a strong basis for creating an AI system that can predict clinical pregnancy by incorporating relevant clinical domain knowledge. One of the major strengths of this study was the high-quality dataset. The key to developing an effective AI model is to train it with a sufficient volume of good data. This study used nationally curated data that were well refined and labeled, from embryo images collected from multiple institutions and endorsed by a third-party inspector. Since the dataset was created as part of a government-funded project, quality control was adequately performed, and the original dataset is scheduled to be made public in the future, ensuring external reliability. An additional investigation of the incorrectly predicted cases was undertaken in this study, particularly for the four categories shown in Table 3. Two laboratory directors closely examined the original and the enhanced ICM and TE images. For clear images, both the original model and the enhanced ICM and TE model correctly predicted the actual outcome. For less clear images, the segmentation model outperformed the original model. For inherently messy images, both models failed to predict pregnancies. Of the 75 messy images, 25 showed irregular contours of the zona pellucida, sperm embedded in the zona, or cytoplasmic darkness, and the AI may have misinterpreted these characteristics as fragmentation. The use of well-focused and clear images ensures a fair performance of the AI model. Furthermore, image reconstruction using ICM and TE segmentation was validated and may be helpful for less clear images to a certain extent. However, for cases where the original model outperformed the segmentation model, no specific pattern was found in Grad-CAM; potential explanations involve variables such as clinical or genetic information that may help overcome the limitations of morphological assessment. Moreover, we provided the AI model with the correct ICM and TE information according to the labeling of the laboratory directors, but Grad-CAM confirmed that the model was sometimes biased towards learning from regions other than the ICM and TE. In those cases it answered correctly twice and incorrectly four times (Table 3). The incorrect answers can be interpreted as a failure to extract features from the ICM and TE, and the two correct answers can be interpreted as features found elsewhere that, by chance, inferred probabilities similar to the embryo's true state. In conditions where prediction is challenging, such as pregnancy, the accuracy of the prediction is typically around 70%. In the 44 cases within the "Other" group, the original-image model focused on a region other than the ICM or TE; however, the correct answer was given. This is because the model can learn from the wrong region of an image and, as a result, its predicted value can sometimes coincidentally match the correct answer.
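For readers who want to reproduce the kind of Grad-CAM inspection discussed above, the following is a minimal TensorFlow sketch. It assumes a single-output sigmoid model and that the chosen convolutional layer can be retrieved by name from the model (with a nested pre-trained backbone, get_layer would have to be called on the backbone sub-model instead); it is a generic Grad-CAM recipe, not the authors' exact implementation.

import tensorflow as tf

def grad_cam(model, inputs, conv_layer_name):
    # Map the model inputs to the chosen conv layer's feature maps and the prediction.
    conv_layer = model.get_layer(conv_layer_name)
    grad_model = tf.keras.Model(model.inputs, [conv_layer.output, model.output])

    with tf.GradientTape() as tape:
        conv_out, prediction = grad_model(inputs)
        score = prediction[:, 0]                   # sigmoid pregnancy probability

    grads = tape.gradient(score, conv_out)          # d(score)/d(feature maps)
    weights = tf.reduce_mean(grads, axis=(1, 2))    # global-average-pooled gradients
    cam = tf.reduce_sum(conv_out * weights[:, None, None, :], axis=-1)
    cam = tf.nn.relu(cam)[0].numpy()
    return cam / (cam.max() + 1e-8)                 # normalized heat map at conv-layer resolution

The returned map can then be resized to 224 x 224 and overlaid on the embryo image to check whether the highlighted region covers the ICM/TE, as was done for Table 3.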
In our study, we applied isotonic regression to address the calibration problem (a minimal sketch of this step is given at the end of this discussion). However, even after applying isotonic regression, the true positive rate decreased in the range of high predicted probabilities (0.7-0.9). This was confirmed to be due to incorrectly predicted cases: the AI model, successfully trained on the ICM and TE images, predicted good-quality embryos, yet pregnancy did not occur due to various complex factors. It is important to note that calibration can be more difficult for small sample sizes because there may be insufficient information to accurately estimate the mapping between the predicted scores and probabilities. In such cases, it may be necessary to collect additional data or employ alternative methods to enhance calibration. Therefore, we aim to collect a global dataset that includes both domestic and foreign data and to develop a more accurate and robust model in future research. While morphology evaluation is the most widely used methodology for embryo selection, chromosomal analysis, also known as preimplantation genetic testing (PGT), provides additional information. However, PGT is invasive and expensive, and there is no established way to interpret and resolve the problem of potential mosaicism 39 . To overcome this challenge, AI technologies have been proposed as a non-invasive alternative [41,42]. However, the accuracy reported so far is insufficient compared to invasive PGT, and further research is required to improve it. This should include the application of the technology proposed in this study and the combination of morphological evaluation by AI with metabolomic analysis and non-invasive PGT using culture media or blastocysts. This study had limitations. To generalize our results, an international study is required, as our study was conducted on a population with the same racial background. In addition, because our dataset consisted of images from multiple institutions, the color and sharpness of the images varied slightly. Although this allows for better validation performance compared to training on data from a single institution, it is still difficult to guarantee the same performance when unfamiliar, heterogeneous images are processed. Furthermore, to automate our method, we must develop an algorithm that automatically segments the ICM and TE areas. In future research, we plan to develop an ICM and TE segmentation model that automatically segments these areas from the original embryo image, creates an enhanced ICM and TE image, and feeds it as input to the AI model. Moreover, the blastocyst stage and the ICM and TE grades of the embryo are known to be major factors affecting pregnancy. Therefore, evaluating these factors in addition to image interpretation would improve prediction accuracy. However, the grading of the ICM and TE is subjective and requires expert input, so we were unable to reflect this information in our study. In future studies, researchers should create prediction models that distinguish blastocyst stages and conduct ongoing research on the impact of such factors on actual pregnancy.
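Returning to the calibration step mentioned at the start of this discussion, a minimal scikit-learn sketch of isotonic recalibration and the Brier score is shown below. The labels and raw probabilities here are synthetic placeholders; in practice the calibrator would be fitted on a held-out validation set and applied to the test predictions.

import numpy as np
from sklearn.isotonic import IsotonicRegression
from sklearn.metrics import brier_score_loss

# Synthetic, deliberately miscalibrated probabilities standing in for model outputs.
rng = np.random.default_rng(0)
p_val = rng.uniform(0, 1, 500)
y_val = (rng.uniform(0, 1, 500) < p_val ** 2).astype(int)

print("Brier before:", round(brier_score_loss(y_val, p_val), 3))

iso = IsotonicRegression(out_of_bounds="clip")
iso.fit(p_val, y_val)          # fit the monotone mapping from scores to probabilities
p_cal = iso.predict(p_val)     # apply to new predictions in a real workflow

print("Brier after: ", round(brier_score_loss(y_val, p_cal), 3))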
In conclusion, this study demonstrated that the predictive performance of AI improved when using enhanced ICM and TE images. AI can provide objective information to support the assessments of embryologists; however, it often analyzes irrelevant parts of images, leading to incorrect results, particularly when the image is out of focus. Our research findings may help generalize the AI model for application to embryo images with various focuses. Further research along these lines can be a first step towards transparent AI models for embryo assessment.
Figure 1. Overall process of algorithm development. AUROC, area under the receiver operating characteristic curve; ICM, inner cell mass; TE, trophectoderm.
Figure 2. Differences in the RGB channels between the original and the enhanced ICM and TE images. RGB, red-green-blue; ICM, inner cell mass; TE, trophectoderm.
Figure 3. Overall process of the prediction algorithm using images of the gestational sac. ICM, inner cell mass; TE, trophectoderm; CNN, convolutional neural network.
Figure 4. AUROC box plot comparison of the AI models. AUROC, area under the receiver operating characteristic curve; AI, artificial intelligence; ICM, inner cell mass; TE, trophectoderm; Q, quartile.
Figure 5. Grad-CAM results. (A) For a negative pregnancy, the original image (a) led to an incorrect prediction, while the enhanced ICM and TE image (b) produced a correct prediction. (B) For a positive pregnancy, the original image (a) was incorrectly predicted, while the enhanced ICM and TE image (b) was correctly predicted. (C) Both the original and the enhanced ICM and TE images were incorrectly predicted as pregnancy-negative. (D) Both the original and the enhanced ICM and TE images were incorrectly predicted as pregnancy-positive. ICM, inner cell mass; TE, trophectoderm; Grad-CAM, gradient-weighted class activation mapping.
Table 1. Baseline distribution of embryos and female patient ages in each IVF laboratory. IVF, in vitro fertilization; N, sample size; SD, standard deviation.
Table 2. Performance of deep learning models for G-sac prediction. G-sac, gestational sac; CNN, convolutional neural network; AUROC, area under the receiver operating characteristic curve.
5,324.2
2024-02-08T00:00:00.000
[ "Medicine", "Computer Science" ]
Algorithmic differentiation improves the computational efficiency of OpenSim-based trajectory optimization of human movement Algorithmic differentiation (AD) is an alternative to finite differences (FD) for evaluating function derivatives. The primary aim of this study was to demonstrate the computational benefits of using AD instead of FD in OpenSim-based trajectory optimization of human movement. The secondary aim was to evaluate computational choices including different AD tools, different linear solvers, and the use of first- or second-order derivatives. First, we enabled the use of AD in OpenSim through a custom source code transformation tool and through the operator overloading tool ADOL-C. Second, we developed an interface between OpenSim and CasADi to solve trajectory optimization problems. Third, we evaluated computational choices through simulations of perturbed balance, two-dimensional predictive simulations of walking, and three-dimensional tracking simulations of walking. We performed all simulations using direct collocation and implicit differential equations. Using AD through our custom tool was between 1.8 ± 0.1 and 17.8 ± 4.9 times faster than using FD, and between 3.6 ± 0.3 and 12.3 ± 1.3 times faster than using AD through ADOL-C. The linear solver efficiency was problem-dependent and no solver was consistently more efficient. Using second-order derivatives was more efficient for balance simulations but less efficient for walking simulations. The walking simulations were physiologically realistic. These results highlight how the use of AD drastically decreases computational time of trajectory optimization problems as compared to more common FD. Overall, combining AD with direct collocation and implicit differential equations decreases the computational burden of trajectory optimization of human movement, which will facilitate their use for biomechanical applications requiring the use of detailed models of the musculoskeletal system. Introduction Combining musculoskeletal modeling and dynamic simulation is a powerful approach to study the mechanisms underlying human movement. In the last decades, researchers have primarily used inverse dynamic simulations to identify biomechanical variables (e.g., muscle forces and joint loads) underlying observed movements. Yet dynamic simulations can also be applied to generate novel movements. Such predictive simulations have the potential to reveal cause-effect relationships that cannot be explored based on inverse dynamic simulations that require movement kinematics as input. Novel movements can be generated by solving trajectory optimization problems. Generally, trajectory optimization consists of identifying a trajectory that optimizes an objective function subject to a set of dynamic and path constraints [1]. In the biomechanical field, researchers have used trajectory optimization for solving two main types of problems. In tracking problems, the objective function is the difference between a variable's measured and simulated value [2][3][4], whereas in predictive problems, the objective function represents a movement related performance criterion (e.g., minimizing muscle fatigue) [5][6][7][8]. However, the nonlinearity and stiffness of the dynamic equations characterizing the musculoskeletal system cause the underlying optimal control problems to be challenging to solve and computationally expensive [5,7,8]. 
For example, small changes in controls can cause large changes in kinematics and hence a foot to penetrate into the ground, drastically increasing ground reaction forces. These challenges have caused the biomechanics community to primarily perform studies based on inverse dynamic analyses of observed movements rather than trajectory optimization of novel movements. Over the last decade, the increase in computer performance and the use of efficient numerical methods have equipped researchers with more efficient tools for solving trajectory optimization problems. In particular, direct collocation methods [4,6,[8][9][10][11] and implicit formulations of the musculoskeletal dynamics [10,12] have become popular. Direct collocation reduces the sensitivity of the objective function to the optimization variables, compared to other methods such as direct shooting [5], by reducing the time horizon of the integration. Direct collocation converts optimal control problems into large sparse nonlinear programming problems (NLPs) that readily available NLP solvers (e.g., IPOPT [13]) can solve efficiently. Implicit formulations of the musculoskeletal dynamics improve the numerical conditioning of the NLP over explicit formulations by, for example, removing the need to divide by small muscle activations [10] or invert a mass matrix that is near-singular due to body segments with a large range of masses and moments of inertia [12]. In implicit formulations, additional controls are typically introduced for the time derivative of the states, which allows imposing the nonlinear dynamic equations as algebraic constraints in their implicit rather than explicit form (i.e., _ y ¼ u; 0 = f i (y,u) instead of _ y ¼ f e ðyÞ). Algorithmic differentiation (AD) is another numerical tool that can improve the efficiency of trajectory optimization [14,15]. AD is a technique for evaluating derivatives of functions represented by computer programs. It is, therefore, an alternative to finite differences (FD) for evaluating the derivative matrices required by the NLP solver, namely the objective function gradient, the constraint Jacobian, and the Hessian of the Lagrangian (henceforth referred to as simply Hessian). These evaluations are obtained free of truncation errors, in contrast with FD, and for a computational cost of the same order of magnitude as the cost of evaluating the original function. AD relies on the observation that any function can be broken down into a sequence of elementary operations, forming an expression graph (example in Fig 1). AD then relies on the chain rule of calculus that describes how to calculate the derivative of a composition of functions [15]. By traversing a function's expression graph while applying the chain rule, AD allows computing the function derivatives. Note that, like FD, AD can exploit the sparsity of the aforementioned derivative matrices resulting, for example, from applying direct collocation [16]. AD allows traversing the expression graph in two directions or modes: from the inputs to the outputs in its forward mode and from the outputs to the inputs in its reverse mode. This permits the evaluation of two types of directional derivatives: Jacobian-times-vector product and Jacobian-transposed-times-vector product in the forward and reverse mode, respectively. The computational efficiency of the AD mode depends on the problem dimensions. Consider the function G : R n ! R m : y ¼ GðxÞ describing the m NLP constraints y as a function of the n optimization variables x. 
The constraint Jacobian J = @y/@x is a matrix with size m x n. In the forward mode, J relates forward seeds _ x to forward sensitivities _ y : _ y ¼ J _ x (example in Fig 1). In the reverse mode, J T relates reverse seeds � y to reverse sensitivities � x : � x ¼ J T � y (example in Fig 1). In the forward mode, the cost of evaluating J is proportional to n times the cost of evaluating G. In the reverse mode, the cost of evaluating J T is proportional to m times the cost of evaluating G. If there are many more inputs n than outputs m, the reverse mode may drastically decrease the number of function evaluations required to evaluate J and highly reduce the computational time (CPU time) as compared to the forward mode [15,17]. Two main approaches exist for adding AD to existing software, namely operator overloading and source code transformation. Source code transformation is inherently faster than operator overloading but may not be readily available for all features of a programming language. In the operator overloading approach, AD's algorithms are applied after the evaluation of the original function using concrete numerical inputs. This is typically performed by introducing a new numerical type that stores information about partial derivatives as calculations proceed (e.g., through operator overloading in C++) [15,17]. Examples of AD tools using operator overloading in C++ are ADOL-C [18] and CppAD [19]. In the source code transformation approach, the AD tool analyzes a given function's source code and outputs a new function A function y = f(x 1 ,x 2 ) = cos x 2 −x 2 x 1 is broken down into a sequence of elementary operations, forming an expression graph. In the forward mode, the forward seeds _ x 1 and _ x 2 are propagated from the inputs to the output, and the Jacobian J = @f/@x relates _ x 1 and _ x 2 to the forward sensitivity _ y. In the reverse mode, the reverse seed � y is propagated from the output to the inputs, and the transposed Jacobian J T relates � y to the reverse sensitivities � x 1 and � x 2 . that computes the forward or reverse mode of that function. Examples of AD tools using source code transformation are ADiGator for MATLAB [20] and CasADi that is available for C++, Python, and MATLAB [21]. CasADi is a modern actively developed tool for nonlinear optimization and AD that has many additional features (e.g., code generation) and interfaces with NLP solvers designed to handle large and sparse NLPs (e.g., IPOPT). CasADi provides a high-level, symbolic, way to construct an expression graph, on which source code transformation is applied. The resultant expression graph can be code-generated to achieve the computational efficiency of pure source code transformation. AD has a long history [14] but has rarely been applied in biomechanics, likely because AD is relatively unknown in the field and is not integrated as part of widely used biomechanical software packages. In previous work, we solved muscle redundancy problems while exploiting AD [10,22]. For this purpose, we used GPOPS-II [23], a MATLAB software for solving optimal control problems with direct collocation, in combination with ADiGator. However, these problems were limited to models implemented in MATLAB, enabling the use of ADiGator. Generating simulations of human movement requires expanding these problems to account for the multi-body dynamics. OpenSim [24,25] and its dynamics engine Simbody [26] are widely used open-source software packages for musculoskeletal modeling and biomechanical dynamic simulation. 
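To make the forward- and reverse-mode directional derivatives described above concrete, the following is a minimal sketch using CasADi's Python interface. The constraint function G, its dimensions, and the seed vectors are toy placeholders, not the OpenSim-derived constraints used in the paper.

```python
# Minimal sketch (CasADi, Python): forward and reverse directional derivatives
# of a toy constraint function G: R^n -> R^m. All names and sizes are illustrative.
import casadi as ca

n, m = 5, 2
x = ca.SX.sym("x", n)
y = ca.vertcat(ca.dot(x, x), ca.sin(x[0]) * x[1])     # toy G(x)

xdot = ca.DM([1, 0, 0, 0, 0])      # forward seed
ybar = ca.DM([0, 1])               # reverse seed

# Forward mode: Jacobian-times-vector product, ydot = J @ xdot (J is never formed)
ydot = ca.jtimes(y, x, xdot)
f_fwd = ca.Function("f_fwd", [x], [ydot])

# Reverse mode: Jacobian-transposed-times-vector product, xbar = J^T @ ybar
xbar = ca.jtimes(y, x, ybar, True)           # last argument selects the reverse mode
f_rev = ca.Function("f_rev", [x], [xbar])

x0 = ca.DM([0.1, 0.2, 0.3, 0.4, 0.5])
print(f_fwd(x0))    # one column of J from a single forward sweep
print(f_rev(x0))    # one row of J (transposed) from a single reverse sweep
```

With a unit forward seed, one sweep returns one column of J; with a unit reverse seed, one sweep returns one row. This is why the reverse mode is attractive when m is much smaller than n, the extreme case being the objective gradient, where m = 1.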
These packages provide multi-body dynamics models and have been used for trajectory optimization of human gait [3,4,8,11]. Yet they currently do not leverage tools for AD. Moreover, they are written in C++, which would prevent the use of ADiGator. AD is increasingly used for trajectory optimization in related fields such as rigid body dynamics for robotic applications and several software packages leverage AD tools [27]. Rob-CoGen is a modeling tool for rigid body dynamics that supports AD through source code transformation. Giftthaler et al. showed that trajectory optimization of gait for a quadrupedal robot modeled with RobCoGen was five times faster with AD than with FD [27]. Other packages for robotic applications with modules supporting AD include Drake [28], Robotran [29], MBSlib [30], and Pinocchio [31]. Drake is a collection of tools that relies on Eigen [32] for linear algebra. Eigen has a module supporting AD's forward mode using operator overloading. Robotran is a symbolic software to model multibody systems that can be interfaced with CasADi to solve optimal control problems. MBSlib is a multibody system library supporting AD through ADOL-C. Finally, Pinocchio is a software platform implementing algorithms for rigid body dynamics that can be interfaced with ADOL-C, CppAD, and CasADi. Note that AD is not exclusively used for trajectory optimization and is also applied in other related fields including deep learning with libraries such as TensorFlow [33] and Theano [34], and applications for robotic gait optimization (e.g., [35]). The contribution of this study is threefold. First, we enabled the use of AD in OpenSim and Simbody (henceforth referred to as OpenSim). We compared two approaches: we incorporated the operator overloading AD tool ADOL-C and we developed our own AD tool Recorder that uses operator overloading to construct an expression graph on which source code transformation is applied using CasADi. Second, we interfaced OpenSim with CasADi, enabling trajectory optimization using OpenSim's multi-body dynamics models while benefitting from CasADi's efficient interface with NLP solvers. Third, we evaluated the efficiency of different computational choices based on trajectory optimization problems of varying complexity solved with IPOPT. We compared three different derivative scenarios: AD with ADOL-C, AD with Recorder, and FD. In addition, we compared different linear solvers and different Hessian calculation schemes within IPOPT, to aid users in choosing the most efficient solver settings. Primal-dual interior point methods such as IPOPT rely on linear solvers to solve the primaldual system, which involves the Hessian, when computing the Newton step direction during the optimization [36]. The Hessian can be exact (i.e., based on second-order derivative information) or approximated with a limited-memory quasi-Newton method (L-BFGS) that only requires first-order derivative information. We found that using AD through Recorder was more efficient than using FD or AD through ADOL-C, whereas the efficiency of the linear solver and Hessian calculation scheme was problem-dependent. Tools to enable the use of AD in OpenSim We first incorporated the operator overloading AD tool ADOL-C in OpenSim. ADOL-C relies on the concept of active variables, which are variables that may be considered as differentiable quantities at some time during the execution of a computer program [18]. 
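The operator-overloading principle behind ADOL-C (and behind the Recorder tool introduced below) can be illustrated with a small, self-contained sketch. Python is used here purely for brevity, since the actual tools are C++ scalar types; the class name Rec and the tape format are invented for illustration.

```python
# Illustrative sketch of operator overloading for AD: a scalar type that records
# every elementary operation applied to it while the function is evaluated
# numerically at a nominal point. The example function is the one from Fig 1.
import math

class Rec:
    """Scalar wrapper that records the operations applied to it."""
    tape = []                                  # shared list of recorded operations

    def __init__(self, value, name=None):
        self.value = value
        self.name = name or f"t{len(Rec.tape)}"

    def _record(self, op, other, value):
        out = Rec(value)
        Rec.tape.append((out.name, op, self.name,
                         other.name if isinstance(other, Rec) else other))
        return out

    def __mul__(self, other):
        v = other.value if isinstance(other, Rec) else other
        return self._record("mul", other, self.value * v)

    def __sub__(self, other):
        v = other.value if isinstance(other, Rec) else other
        return self._record("sub", other, self.value - v)

def cos(x):
    out = Rec(math.cos(x.value))
    Rec.tape.append((out.name, "cos", x.name, None))
    return out

# Evaluate y = cos(x2) - x2 * x1 at a nominal point; the expression graph is
# recorded as a side effect and could afterwards be exported for source code
# transformation (which is essentially what Recorder does for CasADi).
x1, x2 = Rec(1.5, "x1"), Rec(0.3, "x2")
y = cos(x2) - x2 * x1
print(y.value)
for node in Rec.tape:
    print(node)
```

A source-code-transformation tool then takes such a recorded graph and emits new code that evaluates the derivatives directly, instead of re-interpreting the tape at every call.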
To distinguish these variables and store information about their partial derivatives, ADOL-C introduced the augmented scalar type adouble whose real part is of standard type double. All active variables should be of type adouble. To differentiate OpenSim functions using ADOL-C, we modified OpenSim's source code by replacing the type of potential active variables to adouble (example for SimTK::square() in Fig 2). We maintained a layer of indirection so that OpenSim could be compiled to use either double or adouble as the scalar type. We excluded parts of the code, such as numerical optimizers, that were not relevant to this study. The limited computational benefits of using AD through ADOL-C led us to seek alternative AD strategies (see discussion for more detail). We developed our own tool, Recorder, which combines the versatility of operator overloading and the speed of source code transformation. Recorder is a C++ scalar type for which all operators are overloaded to generate an expression graph. When evaluating an OpenSim function numerically at a nominal point, Recorder generates the function's expression graph as MATLAB source code in a format that CasADi's AD algorithms can transform into C-code (see S1 Appendix for source code from the example of Fig 1). Note that this workflow is currently only practical when the branches (if-tests) encountered at the nominal point remain valid for all evaluations encountered during the optimization. To use Recorder with OpenSim, we relied on the code we had modified for incorporating ADOL-C but replaced adouble with the Recorder scalar type (example for SimTK::square() in Fig 2). This change required minimal effort but enabled Recorder to identify all differentiable variables when constructing the expression graphs. Interface between OpenSim and CasADi We enabled the use of OpenSim functions within the CasADi environment by compiling the functions and their derivatives as Dynamic-link Libraries that are then imported as external functions for use by CasADi (Fig 2). The function derivatives can be computed through ADOL-C (AD-ADOLC in Fig 2) or through Recorder (AD-Recorder in Fig 2). Trajectory optimization problems to evaluate computational choices We designed three example trajectory optimization problems to evaluate different computational choices (see Tables 1-3 for detailed formulations). The general formulation of the optimal control problems consists of computing the controls u(t), states x(t), and timeindependent parameters p minimizing an objective functional: where t i and t f are initial and final times, and t is time [37]. This objective functional is subject T=s T ¼ũ T qð0Þ ¼qð1Þ ¼ṽð0Þ ¼ṽð1Þ ¼ 0 Controls: we introduced accelerations (time derivative of velocities) as controls (implicit formulations) in addition to joint torques. Bounds: lb and ub are for lower and upper bounds, respectively. Scaling: we used time scaling for the joint states and controls. Objective function: to avoid singular arcs, situations for which controls are not uniquely defined by the optimality conditions [37], we appended a penalty function L p with the remaining controls to the objective function L. Dynamic constraints are scaled using the same scale factors as used for the states [37]. We used implicit formulations. Path constraints: f s (�) computes net joint torques T according to the skeleton dynamics. and to algebraic path constraints: which are equality constraints if g min = g max . 
The optimization variables are typically bounded as follows: In the first example, we perturbed the balance of nine inverted pendulums, with between two and 10 degrees of freedom, by applying a backward translation to their base of support. The optimal control problem identified the joint torques necessary to restore the pendulums' Controls are introduced for the time derivative of the states (implicit formulations) in addition to trunk excitations. Bounds are manually (man) set for the joint states and controls; lb and ub are for lower and upper bounds, respectively. Scaling: joint states and controls, and tendon forces are scaled such that the lower and upper bounds are between -1 and 1. Objective function L is normalized by distance traveled d. To avoid singular arcs [37], a penalty function L p (with low weight) with the remaining controls is appended to L. Dynamic constraints are scaled using the scale factors used for the states [37]. Path constraints: f s (�) computes net joint torques T according to the skeleton dynamics, f c (�) describes the Hill-type muscle contraction dynamics [10], MA m is moment arm of muscle m; � xð�Þ contains all states except the pelvis forward position q pelvis,for (symmetry), and 1.33 m s -1 is the prescribed gait speed. In the second example, we performed predictive simulations of walking with a two-dimensional (2D) musculoskeletal model (10 degrees of freedom, 18 muscles actuating the lower limbs, one ideal torque actuator at the trunk, and two contact spheres per foot [24]). We identified muscle excitations and half walking cycle duration that minimized a weighted sum of muscle fatigue (i.e., muscle activations at the third power [6]) and joint accelerations subject to constraints describing the musculoskeletal dynamics, imposing left-right symmetry, and prescribing gait speed (i.e., distance travelled by the pelvis divided by gait cycle duration). Imposing left-right symmetry allowed us to only optimize for half a gait cycle. In the third example, we performed tracking simulations of walking with a three-dimensional (3D) musculoskeletal model (29 degrees of freedom, 92 muscles actuating the lower limbs and trunk, eight ideal torque actuators at the arms, and six contact spheres per foot [4,24,38]) while calibrating the foot-ground contact model. We identified muscle excitations and contact sphere parameters (locations and radii) that minimized a weighted sum of muscle effort (i.e., squared muscle activations) and the difference between measured and simulated variables (joint angles and torques, and ground reaction forces and torques) while satisfying the musculoskeletal dynamics. Data collection was approved by the Ethics Committee at UZ / KU Leuven (Belgium). In these examples, we modeled pendulum/skeletal movement with Newtonian rigid body dynamics and, for the walking simulations, compliant Hunt-Crossley foot-ground contact [24,26]. We created a continuous approximation of a contact model from Simbody to provide twice continuously differentiable contact forces, which are required when using second-order gradient-based optimization algorithms [39]. We performed the approximations of conditional if-tests using hyperbolic tangent functions. For the muscle-driven walking simulations, we described muscle activation and contraction dynamics using Raasch's model [9,40] and a Hill-type muscle model [10,41], respectively. 
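The smoothing of conditional if-tests with hyperbolic tangents mentioned above can be sketched as follows; the linear spring law and the steepness value are stand-ins for the actual Hunt-Crossley contact model and its smoothing parameters.

```python
# Minimal sketch (NumPy) of replacing an if-test with a tanh "switch" so the
# contact force and its first two derivatives remain continuous across zero
# penetration, as required by second-order gradient-based optimization.
import numpy as np

def contact_force_if(penetration, k=1e4):
    # Non-smooth version: force only when the sphere penetrates the ground
    return np.where(penetration > 0, k * penetration, 0.0)

def contact_force_tanh(penetration, k=1e4, steepness=500.0):
    # Smooth version: the tanh switch is ~0 for negative penetration and ~1
    # for positive penetration, with a twice continuously differentiable blend
    switch = 0.5 + 0.5 * np.tanh(steepness * penetration)
    return switch * k * penetration

d = np.linspace(-0.01, 0.01, 5)
print(contact_force_if(d))
print(contact_force_tanh(d))
```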
We defined muscle-tendon lengths, velocities, and moment arms as a function of joint positions and velocities using polynomial functions [42]. We optimized the polynomial coefficients to fit muscle-tendon lengths and moment arms (maximal root mean square deviation: 3 mm; maximal order: ninth) obtained from OpenSim for a wide range of joint positions. We transcribed each optimal control problem into a NLP using a third order Radau quadrature collocation scheme. We formulated each problem in MATLAB using CasADi and IPOPT. We imposed an NLP relative error tolerance of 1 x 10 −6 and used an adaptive barrier parameter update strategy. We selected a number of mesh intervals for each problem such that the results were qualitatively similar when using a mesh twice as fine. We used 10 and three initial guesses for the pendulum and walking simulations, respectively. We ran all simulations on a single core of a standard laptop computer with a 2.9 GHz Intel Core i7 processor. Results analysis We compared CPU time and number of iterations required to solve the problems using the different computational choices. First, we compared AD, using the Recorder approach, with FD. Second, we compared the AD approaches, namely AD-Recorder and AD-ADOLC. We performed these two comparisons using the linear solver mumps [43], which CasADi provides, and an approximated Hessian. Third, we compared different linear solvers, namely mumps with the collection of solvers from HSL (ma27, ma57, ma77, ma86, and ma97) [44], The comparisons are expressed as ratios (mean ± one standard deviation; results obtained with solver from the HSL collection over results obtained with mumps � indicates ma27, ma57, ma77, ma86, or ma97). The ratios are averaged over results from different initial guesses. Ratios larger than one indicate faster convergence, fewer iterations, or less time per iteration with mumps. The use of the solvers ma57 and ma97 led to memory issues for the 3D tracking simulations and these cases were therefore excluded from the analysis. The simulations were run using AD-Recorder and an approximated Hessian. https://doi.org/10.1371/journal.pone.0217730.t005 while using AD-Recorder and an approximated Hessian. Finally, we compared the use of approximated and exact Hessians. For this last comparison, we used AD-Recorder and tested all linear solvers. In all cases, we ran simulations from different initial guesses and compared results from simulations that started from the same initial guess and converged to similar optimal solutions. Table 4 distinguishes the numerical tools used in our analyses. Results Using AD-Recorder was computationally more efficient than using FD or AD-ADOLC ( Fig 3). The CPU time decreased when using AD-Recorder as compared to FD (between 1.8 ± 0.1 and 17.8 ± 4.9 times faster with AD-Recorder) and AD-ADOLC (between 3.6 ± 0.3 and 12.3 ± 1.3 times faster with AD-Recorder). CPU time spent in evaluating the objective function gradient accounted for 95 ± 10% (average ± standard deviation) of the difference in CPU time between AD-Recorder and FD. The difference in CPU time spent in evaluating the constraint Jacobian accounted for 89 ± 6% of the difference in CPU time between AD-Recorder and AD-ADOLC. The number of iterations was similar when using AD-Recorder, FD, and AD-A-DOLC. For the 2D predictive and 3D tracking simulations, one and two cases, respectively, out of nine (three derivative scenarios and three initial guesses) were excluded from the comparison as they converged to different solutions. 
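A compact sketch of the transcription and solver settings described above, written with CasADi's Opti interface, is given below. For brevity it uses a simple Euler scheme on a toy one-degree-of-freedom pendulum rather than the third-order Radau collocation and OpenSim dynamics of the study, but it shows the implicit dynamics constraints, the IPOPT tolerance, the adaptive barrier strategy, and the Hessian and linear-solver options that were compared.

```python
# Minimal sketch (CasADi, Python): direct transcription with implicit dynamics
# and the IPOPT settings discussed in the text. The dynamics, bounds, and
# weights are toy placeholders.
import casadi as ca

N, h = 50, 0.02                      # mesh intervals and step size (1 s horizon)
opti = ca.Opti()
q = opti.variable(N + 1)             # angle states
v = opti.variable(N + 1)             # angular velocity states
a = opti.variable(N)                 # accelerations as extra controls (implicit form)
T = opti.variable(N)                 # joint torques

for k in range(N):
    # Implicit dynamics: 0 = f_i(y, u) instead of dv/dt = f_e(y)
    opti.subject_to(a[k] - ca.sin(q[k]) - T[k] == 0)   # toy pendulum equation of motion
    # Euler integration constraints linking mesh points
    opti.subject_to(q[k + 1] == q[k] + h * v[k])
    opti.subject_to(v[k + 1] == v[k] + h * a[k])

opti.subject_to(q[0] == 0.2)         # perturbed initial state
opti.subject_to(v[0] == 0.0)
opti.subject_to(q[N] == 0.0)         # restore upright configuration
opti.subject_to(v[N] == 0.0)

opti.minimize(ca.sumsqr(T) + 1e-3 * ca.sumsqr(a))      # effort + small acceleration penalty

opti.solver("ipopt", {}, {
    "tol": 1e-6,                                 # NLP relative tolerance used in the paper
    "mu_strategy": "adaptive",                   # adaptive barrier parameter update
    "linear_solver": "mumps",                    # "ma27", "ma86", ... require an HSL build
    "hessian_approximation": "limited-memory",   # or "exact" for AD second derivatives
})
sol = opti.solve()
print(sol.value(T)[:5])
```

Swapping "limited-memory" for "exact" and changing linear_solver is all that is needed to reproduce the kind of solver comparison reported in the results.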
The solvers from the HSL collection were on average more efficient (faster with a similar number of iterations) than mumps for the pendulum simulations, but the efficiency varied for the 2D predictive and 3D tracking simulations ( Table 5). The solver ma27 was on average faster than mumps in all cases although ma27 required more iterations for the 2D predictive simulations. The other solvers from the HSL collection were on average slower than mumps for the 2D predictive simulations. For the 3D tracking simulations, the solvers ma77 and ma86 were faster and slower, respectively, than mumps. The solvers ma57 and ma97 failed to solve muscle activations: bic is biceps, fem is femoris, sh is short head, max is maximus, gastroc is gastrocnemius, ant is anterior; ground reaction forces: BW is body weight). Experimental data are shown as mean ± two standard deviations. (Bottom) Results from 3D tracking simulations of walking (joint angles: R is right, L is left, add is adduction, rot is rotation; muscle activations: med is medialis, long is longus, lat is lateralis). The vertical lines indicate right heel strike (solid) and left toe-off (dashed); only part of the gait cycle, when experimental ground reaction forces are available, is tracked. The experimental electromyography data is normalized to peak muscle activations. The foot diagrams depict a down-up view of the configuration of the contact spheres of the right foot pre-calibration (left: generic) and postcalibration (right: optimized). The coefficient of determination R 2 is given for the tracked variables. https://doi.org/10.1371/journal.pone.0217730.g005 Algorithmic differentiation speeds up trajectory optimization of human movement the 3D tracking simulations due to memory issues. For all simulations, the solvers from the HSL collection except ma86 (and ma77 for the 2D predictive simulations) required less CPU time per iteration than mumps. For the 2D predictive and 3D tracking simulations, one case out of 18 (six solvers and three initial guesses) and four cases out of 12 (four solvers and three initial guesses), respectively, were excluded from the comparison as they converged to different solutions. Using an exact Hessian was more efficient than using an approximated Hessian for the pendulum simulations but not for the 2D predictive simulations (Fig 4). The exact Hessian required less CPU time and fewer iterations than the approximated Hessian for the pendulum simulations (average 2.4 ± 1.2 times faster and 2.5 ± 0.9 times fewer iterations). By contrast, the exact Hessian required more CPU time and iterations than the approximated Hessian for the 2D predictive simulations (average 6.0 ± 0.8 times slower and 2.1 ± 0.2 times more iterations). For the pendulum simulations, 27 cases out of 540 (nine pendulums, six solvers, and 10 initial guesses) were excluded from the comparison as they converged to different solutions with the two Hessian settings. One case was also excluded as it had not converged after 3000 iterations with the exact Hessian but converged in 209 iterations with the approximated Hessian. For the 2D predictive simulations, only results obtained with the solvers ma86 and ma97 were included, since the use of the other solvers led to memory issues. Further, four cases out of six (two solvers and three initial guesses) were excluded from the comparison as they converged to different solutions with the two Hessian settings. 
Finally, the 3D tracking simulations were not included for this comparison as the large problem size induced memory issues with the exact Hessian. In the different analyses, we examined the cases that we excluded from the comparison because of convergence to different solutions but we did not find that one derivative scenario, solver, or initial guess consistency led to a local optimum with a lower cost. The pendulum simulations required at most 21 s and 366 iterations to converge (results obtained with AD-Recorder, mumps, and an approximated Hessian); CPU time and number of iterations depended on the number of degrees of freedom (S1 Movie). The 2D predictive simulations reproduced salient features of human gait but deviated from experimental data in three noticeable ways (Fig 5; S2 Movie). First, the predicted knee flexion during mid-stance was limited, resulting in small knee torques. Second, the simulations produced less ankle plantarflexion at push-off. Third, the vertical ground reaction forces exhibited a large peak at impact. The simulations converged in less than one CPU minute (average over solutions starting from three initial guesses: 36 ± 17 s and 247 ± 143 iterations; results obtained with AD-Recorder, mumps, and an approximated Hessian). The 3D tracking simulations accurately tracked the experimental walking data (average coefficient of determination R 2 : 0.95 ± 0.17; Fig 5; S3 Movie). Simulated muscle activations also qualitatively resembled experimental electromyography data, even though electromyography was not tracked (Fig 5). The configuration of the contact spheres differed from the generic model after the calibration. The simulations converged in less than 20 CPU minutes (average over simulations starting from two initial guesses: 19 ± 7 minutes and 493 ± 151 iterations; results obtained with AD-Recorder, mumps, and an approximated Hessian). Discussion We showed that the use of AD over FD improved the computational efficiency of OpenSimbased trajectory optimization of human movement. Specifically, AD drastically decreased the CPU time spent in evaluating the objective function gradient. This time decrease results from AD's ability to evaluate a Jacobian-transposed-times-vector product through its reverse mode. The objective function gradient has many inputs (all optimization variables) but only one output. It can thus be evaluated in only one reverse sensitivity sweep; the computational cost is hence proportional to the cost of evaluating the objective function. By contrast, with FD, the computational cost is proportional to the number of optimization variables times the cost of evaluating the objective function. The efficiency benefit of AD also increased with the complexity of the problems. This is expected, since the number of optimization variables increases with problem size; FD thus requires more objective function evaluations, whereas AD still requires only one reverse sweep. In our problems, AD did not outperform FD when evaluating the constraint Jacobian. Yet we expect that AD will be more efficient than FD for trajectory optimization problems in which the number of optimization variables largely exceeds the number of constraints, thereby resulting in faster constraint Jacobian evaluations with AD's reverse mode. The choice of the objective function influences CPU time. As an illustration, we added a term representing the metabolic energy rate [45] to the objective function of the 2D predictive simulations. 
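The point made above, that the objective gradient has many inputs but one output and can therefore be evaluated in a single reverse sweep, can be illustrated with a toy objective. The dimensions, the objective itself, and the finite-difference step below are arbitrary choices; only the count of function evaluations (one gradient call versus n + 1 objective calls) is the message.

```python
# Minimal sketch (CasADi, Python): reverse-mode gradient versus finite
# differences for a scalar objective with many inputs. Toy objective only;
# no claim is made about the paper's actual timings.
import casadi as ca
import numpy as np

n = 200
x = ca.MX.sym("x", n)
obj = ca.sumsqr(ca.sin(x)) + ca.dot(x, x)      # toy objective: n inputs, 1 output

f = ca.Function("f", [x], [obj])
grad_f = ca.Function("grad_f", [x], [ca.gradient(obj, x)])   # one reverse sweep

x0 = np.random.rand(n)
g_ad = np.array(grad_f(x0)).ravel()

# Finite differences: n + 1 evaluations of the objective
eps = 1e-6
f0 = float(f(x0))
g_fd = np.array([(float(f(x0 + eps * np.eye(n)[:, i])) - f0) / eps for i in range(n)])

print("max |AD - FD| =", np.max(np.abs(g_ad - g_fd)))
```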
Minimizing metabolic energy rate is common in predictive studies of walking [5,7,39]. Solving the resulting optimal control problem was about 60 times faster with AD-Recorder than with FD (although FD required fewer iterations), whereas AD-Recorder was only about 10 times faster than FD without incorporating the metabolic energy rate in the objective function. This increased time difference can be explained by our use of computationally expensive hyperbolic tangent functions to make the metabolic energy rate model twice continuously differentiable, as required when using second-order gradient-based optimization algorithms [39]. Overall, AD reduces the number of function evaluations, which has an even larger effect if these functions are expensive to compute. The implementation of AD was computationally more efficient through Recorder than through ADOL-C. Specifically, Recorder decreased the CPU time by a factor 4-12 compared to ADOL-C. ADOL-C records all calculations involving differential variables on a sequential data set called a tape [18], which is then evaluated by ADOL-C's virtual machine. By contrast, Recorder generates plain C-code. The factor 4-12 is the difference between a virtual machine interpreting a list of instructions (ADOL-C) and machine code performing these instructions directly (Recorder). The effort required to enable the use of AD through Recorder was minimal once OpenSim's source code had been modified for use with the ADOL-C libraries. Indeed, Recorder relies on operator overloading for constructing the expression graphs, which is similar to ADOL-C. The only required change was to replace the adouble scalar type (ADOL-C) by the Recorder scalar type. Recorder also facilitates the interface with CasADi, since it generates expression graphs in a format from which CasADi can directly generate C-code. This code can then be compiled as a Dynamic-link Library and imported in the CasADi environment without any scripting input required from the user (Fig 2). Using ADOL-C's AD algorithms with CasADi necessitates manually writing C++ code to provide forward and reverse directional derivatives using ADOL-C's drivers in a format recognized by CasADi, which might be prone to errors (Fig 2). Note that the manual effort required for using Recorder or ADOL-C is independent of problem complexity. Overall, using Recorder is more efficient but also simpler than using ADOL-C when solving trajectory optimization problems with CasADi. The process of converting OpenSim's source code to code that compiles with the AD tools (ADOL-C and Recorder) was a considerable but one-time effort. OpenSim-based trajectory optimization problems can now be solved through the proposed framework while benefiting from AD and without any additional developments. We made our OpenSim-based AD framework available so that others can build upon our work. Importantly, using AD does not increase the complexity for the end user as compared to using FD. Indeed, the simulation framework relies on CasADi that provides evaluations of function derivatives to the NLP solver. Hence, the user does not need to re-implement AD's forward and reverse algorithms. It is also worth mentioning that, in this study, we used Recorder to enable the use of AD with OpenSim. However, Recorder is a general C++ class that could be applied to any other C+ + code for use with CasADi. 
Compiling existing source code with Recorder would require replacing the scalar type of active variables (i.e., differentiable quantities) with the Recorder scalar type. Our study suggests that this programming effort might be particularly valuable when the goal is to solve complex trajectory optimization problems. Specifically, our results showed that the difference between AD and FD increased with problem size. Users might thus consider the programming effort only when the aim is to solve multiple complex problems and when they are not satisfied with the computational performance obtained with FD. It is difficult to provide guidelines for the linear solver selection based on our results, as their efficiency was problem-dependent. In contrast with mumps, the solvers from the HSL collection do not freely come with CasADi and are only free to academics. Hence, our study does not support the extra effort to obtain them since they did not consistently outperform mumps in our applications. Yet an in-depth analysis of the solvers' options and underlying mathematical details should be considered in future work. The use of an exact Hessian, rather than an approximated Hessian, improved the computational efficiency for the pendulum simulations but not for the walking simulations. For the 2D walking simulations, using an exact Hessian required more CPU time but also more iterations. This might seem surprising, since an exact Hessian is expected to provide more accurate information and, therefore, lead to convergence in fewer iterations. However, IPOPT requires the Hessian to be positive definite when calculating a Newton step to guarantee that the step is in the descent direction. When this is not the case, the Hessian is approximated with a positive definite Hessian by adding the identity matrix multiplied by a regularization term to the Hessian [36]. We observed that for the 2D predictive simulations, the magnitude of the regularization term was much greater than for the pendulum simulations. Yet excessive regularization might degrade the performance of the algorithm, as regularization alters the second-order derivative information and causes IPOPT to behave more like a steepest-descent algorithm [46]. The approximated Hessian requires no regularization, which likely explains the difference in number of iterations. Overall, convexification of the currently non-convex optimal control problems is expected to further improve the computational efficiency [9]. Our comparison of derivative scenarios (AD-ADOLC, AD-Recorder, and FD), linear solvers (mumps and the HSL collection), and Hessian calculation schemes was based on several specific choices. First, we solved all problems using the NLP solver IPOPT, whereas other solvers compatible with CasADi, such as SNOPT [47] and KNITRO [48] (see [21] for detailed list), might behave differently. We selected IPOPT since it is open-source (SNOPT and KNITRO are commercial products), widely used, and well suited for large and very sparse NLPs [21]. Second, we transcribed the optimal control problems into NLPs using a third order Radau quadrature collocation scheme, whereas different orders, schemes (e.g., Legendre), and transcription methods (e.g., trapezoidal and Hermite-Simpson) might lead to different results. We selected quadrature collocation methods as they achieve exponential convergence if the underlying function is sufficiently smooth [1,49]. 
Third, we used specific models of muscle activation dynamics, contraction dynamics, and compliant contacts, whereas other models might behave differently. We selected models that were continuously differentiable for use with gradientbased optimization algorithms. Finally, our focus was on solving trajectory optimization problems for biomechanical applications with OpenSim. We chose OpenSim as it is an open-source and widely used software package in biomechanics. The difference in computational performance between AD and FD might thus vary with other software packages and applications. Investigating all these other modeling and computational choices was out of the scope of this study but might be useful for helping users select the best settings for their applications. Overall, our study underlined the computational benefit of using AD over FD for trajectory optimization in biomechanics, which is in agreement with previous research in robotics (e.g., [27]). The 2D predictive and 3D tracking simulations produced realistic movements although deviations remain between simulated and measured data. Modeling choices rather than local optima likely explain these deviations. These choices have a greater influence on the predictive simulations, since deviations from measured data are minimized in tracking simulations, whereas only the motor task goal is specified in the objective function of predictive simulations. Several modeling choices might explain the main deviations for the predictive simulations. First, we did not model stability requirements, which might explain the limited knee flexion during mid-stance [6,39]. Instead, we included muscle activity in the cost function, which might explain why reducing knee torques and, therefore, knee extensor activity was optimal. Second, the model did not include a metatarsophalangeal joint, which might explain the limited ankle plantarflexion at push-off; similar ankle kinematics have indeed been observed experimentally when limiting the range of motion of the metatarsophalangeal joint [50]. Third, the lack of knee flexion combined with the simple trunk model (i.e., one degree of freedom controlled by one ideal torque actuator) might explain the high vertical ground reaction forces at impact [6]. Finally, the goal of the motor task (i.e., minimizing muscle fatigue) likely does not fully explain the control strategies governing human walking. In this study, the focus was on evaluating different computational choices but future work should exploit the improved computational efficiency to explore how modeling choices affect the correspondence between simulated and measured quantities. Our results indicate that AD is particularly beneficial with increasingly complex models. Hence, our OpenSim-based AD framework might allow researchers to rely on complex models, such as three-dimensional muscle-driven neuro-musculoskeletal models, in their studies. This model complexity might be highly desirable when studying, for instance, the impact of treatment on gait performance in patients with neuro-musculoskeletal disorders. Indeed, in such cases, the model should be complex enough to describe the musculoskeletal structures and motor control processes underlying gait that may be affected by treatment. Previous studies based on predictive models reported high computational times and were therefore limited to few predictions when relying on complex musculoskeletal models [5,8,51]. 
Using AD has the potential to drastically decrease the computational time of such predictive simulations, thereby extending their application. Conclusions In this study, we enabled the use of AD when performing OpenSim-based trajectory optimization of human movement. We showed that using AD drastically improved the computational efficiency of such simulations. This improved efficiency is highly desirable for researchers using complex models or aiming to implement such models in clinical practice where time constraints are typically more stringent than in research context. Overall, the combination of AD with other efficient numerical tools such as direct collocation and implicit differential equations allows overcoming the computational roadblocks that have long limited the use of trajectory optimization for biomechanical applications. In the future, we aim to exploit this computational efficiency to design optimal treatments for neuro-musculoskeletal disorders, such as cerebral palsy. Supporting information S1 Appendix. Example source code. Recorder provides the expression graph of the function to differentiate as MATLAB source code in a format that CasADi's AD algorithms can then transform into C-code. This file provides MATLAB and C source code resulting from applying these two steps on the example function from
9,058
2019-10-17T00:00:00.000
[ "Computer Science", "Engineering" ]
Optimal Homotopy Asymptotic Method-Least Square for Solving Nonlinear Fractional-Order Gradient-Based Dynamic System from an Optimization Problem In this paper, we consider an approximate analytical method of optimal homotopy asymptotic method-least square (OHAM-LS) to obtain a solution of nonlinear fractional-order gradient-based dynamic system (FOGBDS) generated from nonlinear programming (NLP) optimization problems. The problem is formulated in a class of nonlinear fractional di ff erential equations, (FDEs) and the solutions of the equations, modelled with a conformable fractional derivative (CFD) of the steepest descent approach, are considered to fi nd the minimizing point of the problem. The formulation extends the integer solution of optimization problems to an arbitrary-order solution. We exhibit that OHAM-LS enables us to determine the convergence domain of the series solution obtained by initiating convergence-control parameter C j ′ s . Three illustrative examples were included to show the e ff ectiveness and importance of the proposed techniques. , Introduction Consider a nonlinear programming-constrained optimization problems (NLPCOPs) of the form min x∈R n f x ð Þ subject to g k x ð Þ ≤ 0 and h k x ð Þ = 0∀k ∈ I = 1, 2:: where f : R n ⟶ R, h k : R n ⟶ R, and g k : R n ⟶ R, k, are C 2 functions. Let X 0 = fx ∈ R n | h k = 0, g k ≤ 0, i ∈ Ig be the feasible set of Equation (1), and we assume that X 0 is not empty. The general idea of obtaining an approximate analytical solution to Equation (1) is to transform to an unconstrained nonlinear programming problem by any suitable technique such as augmented Lagrange method, barrier method, and penalty method [1,2]; it can then be solved by any unconstrained optimization numerical method like the steepest descent method, conjugate gradient method, and Newton method. In optimization, the penalty method is the most efficient method to transform a constrained optimization problem into an unconstrained optimization problem [3][4][5]. An efficient penalty function for equality and inequality problem Equation (1) is given below where σ = 2. It can be seen that under some conditions, the solutions to Equation (1) are solutions of the unconstrained below [6], where μ > 0 is an auxiliary penalty variable. The corollary connecting the minimizer of the constraint problem in Equation (1) and unconstrained problem in Equation (4) is seen in [7]. The gradient descent method as a standard optimization algorithm has been widely applied in many engineering applications, such as optimization machine learning and image [8][9][10]. Through diverse research and studies, it is established that the gradient method is one of the most reliable and efficient ways to find the optimal solution of optimization problems [11]. Nowadays, one of the critical points of the gradient method is how to improve the performance further. As an important area of mathematics, fractional calculus is believed to be an excellent tool to enhance the old gradient descent method, mainly because of its special long memory characteristics and nonlocality [12][13][14]. In the past decade, several methods have been considered to solve unconstrained nonlinear optimization in the form of ordinary differential equation (ODE) dynamic system of which the gradient-based method is one of the approaches. The technique transforms the nonlinear optimization problem to an ODE dynamic system with some optimality conditions, to obtain optimal solutions to the optimization problem. 
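A minimal sketch of the quadratic penalty transformation referred to above (sigma = 2), applied to a toy constrained problem and minimized with an off-the-shelf gradient-based routine. The objective, constraints, and penalty weight are illustrative; the paper instead follows the gradient flow of the penalized function as a fractional-order dynamic system.

```python
# Minimal sketch (NumPy/SciPy): constrained NLP -> unconstrained problem via a
# quadratic penalty, then a gradient-based minimization. Toy problem only.
import numpy as np
from scipy.optimize import minimize

def f(x):                       # objective
    return (x[0] - 2) ** 2 + (x[1] - 1) ** 2

def h(x):                       # equality constraint h(x) = 0
    return x[0] - 2 * x[1]

def g(x):                       # inequality constraint g(x) <= 0
    return x[0] ** 2 / 4 + x[1] ** 2 - 1

def F(x, mu=200.0):             # penalized unconstrained objective, sigma = 2
    return f(x) + mu * (h(x) ** 2 + max(0.0, g(x)) ** 2)

res = minimize(F, x0=np.array([0.0, 0.0]), method="BFGS")
print(res.x)                    # approaches the constrained minimizer as mu grows
```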
The gradient-based method was first proposed by [15], was developed by [16,17], and was later extended to solve differential nonlinear programming problems [18]. However, the studies of nonlinear fractional-order gradient-based dynamic systems are still in the infant stage and are considered further in this paper. Arbitrary-order ODEs, which are the generalizations of integer-order ODEs, are mostly used to model problems in applied sciences. Several numerical methods had been used to solve linear and nonlinear problems of FDEs, such as the Adomian decomposition method (ADM) [19], variational iteration method (VIM) [20], homotopy perturbation method for solving fractional Zakharov-Kuznetsov equation [21], a numerical method for FDEs [22], and multivariate padé approximation (MPA) [23]. The usefulness of an arbitrary-order started receiving tremendous attention of researchers in the field of applied science and engineering in the last two decades where some authors in the area of optimization focused on developing approximate analytical methods for different types of nonlinear constrained optimization problems in the form of IVPs of nonlinear FDE systems including multistage ADM for NLP [24], a fractional dynamics trajectory approach [25], the convergence of HAM and application [26], fractional steepest descent approach [27], studied optimal solution of fractional gradient [28], gradient descent direction with Caputo derivative sense for BP neural networks [29], fractional-order gradient methods [30], and conformable fractional gradient-based system [31]. In 2008, Marinca and Herisanu [32] introduced a numerical method called OHAM to solve a nonlinear problem, later extended by Azimi et al. [33] for strong nonlinear differential equations (NLDEs). This powerful tool called OHAM has not been applied in the area of FOGBDS, which motivates this work. So, in this paper, we showed that the steady-state solutions xðtÞ of the proposed system can be approximated analytically to the expected exact optimal solution x * of the nonlinear programming constrained optimization problem by OHAM-LS as t ⟶ ∞. The significant contribution is summarized as follows: (1) The reason why OHAM-LS is preferable to be the method used [25,31] to solve FOGBDS (2) The reason why some existing approximate analytical method cannot guarantee the convergence of the series solution is discussed (3) From the previous approximation analytical method of solving FOGBDS, accurate optimal values control-convergence parameter had been a little bit difficult to achieve which is easily address with least square optimization techniques (4) OHAM-LS with guaranteed convergence ability is proposed with conformable fractional derivative sense to solve FOGBDS. The fastest convergence ability of the proposed compared with fourth-order Runge-Kutta is also shown We arrange the paper as follows: a brief introduction to the fractional calculus and OHAM-LS derivation is given in Section 2. Section 3 is devoted to problem formulation of OHAM-LS with FOGBDS and the key contributions. In Section 4, we solved some NLP constrained optimization problems to show the effectiveness of the proposed method. The results obtained from OHAM-LS are plotted in several figures with numerical method comparisons to confirm the validity and ability of the method to solve the problem. In the last section are the conclusions. Preliminaries 2.1. Fractional Calculus. 
The most common arbitrary-order in literature is the Riemann-Liouville's and the Caputo fractional derivative. The arbitrary-order definitions are generally used for mathematical modelling within many areas, especially when the classical-order derivative operator fails or additional memory effect is required. However, the limitation of these two definitions is that they do not provide some of the features that the classical derivative provides, such as chain rule, quotient rule, product rule, and derivative of Advances in Mathematical Physics constant. Recently, Khalil et al. [34] have characterized a new fractional derivative operator, which is an extension of the usual conformable fractional derivative, to overcome these deficiencies. Besides these advantages, the conformable fractional derivative does not show the memory effect, which is inherent for the other classical fractional derivatives. Definition 1. Let f : ½0, ∞Þ ⟶ R be a given function. The α th order CFD of f given by ∀x > 0 and α ∈ ð0, 1 This new definition preserves many properties of the classical derivatives refer to [34,35]. Some features that we will adopt are as follows: where the integral is the regular Riemann improper integral, and α ∈ ð0, 1. We start from the fundamental principle of OHAM as described in [36][37][38]. Consider the IVPs with initial conditions where L i is a linear operator, N i is a nonlinear operator, t is an independent variable, z i ðtÞ is an unknown function, φ is the problem domain, and g i ðtÞ is a known function. According to OHAM, one can construct an homotopy map H i ðϕ i ðt, pÞ: φ × ½0, 1 ⟶ φ which satisfies where p ∈ ½0, 1 is an embedding parameter, H i ðpÞ is a nonzero auxiliary function for p ≠ 0, Hð0Þ = 0, and ϕ i ðt, pÞ is an unknown function. Obviously, when p = 0 and p = 1, it holds that ϕ i ðt, 0Þ = z i,0 ðtÞ and ϕ i ðt, 1Þ = z i ðtÞ, respectively. Thus, as p varies from 0 to 1, the solution ϕ i ðt, pÞ approaches from z i,0 ðtÞ to z i ðtÞ where z i,0 ðtÞ is the initial guess that satisfies the linear operator which is obtained from Equation (8) for p = 0 as H i ðpÞ is chosen in the form where C j would be determined in the last part of this work. We consider Equation (8) in the form Now substituting Equation (11) in Equation (8) and equating the coefficient of like power of p, we obtain the governing equation of z i,0 ðtÞ in a linear form, given in Equation (9). The firstand second-order problems are given by and the general governing equations for z i,k ðtÞ are given by where N i,m ðz 0 ðtÞ, z i,1 ðtÞ ⋯ , z i,m ðtÞÞ is the coefficient of p m , obtained by expanding N i ðϕ i ðt, p, C j ÞÞ in series with respect to the embedding parameter p where ϕ i ðt, p, C j Þ is obtained from Equation (11). It should noted that z i,k for k ≥ 0 is governed by the linear Equations (9), (12), and (14) with linear initial conditions that come from the original problem, which can be easily solved. It has been shown that the convergence of the series Equation (16) depends upon the C j . If it is convergent at p = 1, we have The result of the mth-order approximation is given as 3 Advances in Mathematical Physics Substituting Equation (18) in Equation (6), we get the following expression for the residual If R i ðt, C j Þ = 0, thenz i ðt, C j Þ is the exact solution. Usually, such a case does not arise for nonlinear problems. 
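Returning to Definition 1 above, the conformable fractional derivative can be checked numerically against its closed form for a power function, T_alpha(t^p) = p t^(p-alpha). The order, exponent, and evaluation point below are arbitrary choices for illustration.

```python
# Minimal sketch (Python): numerical check of the conformable fractional
# derivative of Khalil et al.,
#   T_alpha f(t) = lim_{eps->0} [f(t + eps * t**(1 - alpha)) - f(t)] / eps.
def conformable_derivative(f, t, alpha, eps=1e-8):
    return (f(t + eps * t ** (1 - alpha)) - f(t)) / eps

alpha, p, t = 0.7, 3.0, 2.0
numeric = conformable_derivative(lambda s: s ** p, t, alpha)
closed_form = p * t ** (p - alpha)
print(numeric, closed_form)      # the two values should agree closely
```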
Several methods [39,40] can be used to find the optimal values of convergence-control parameters C j ′s like the method of the least square method, collocation method, Ritz method, and Galerkin's method. By applying the least square method, we have minimized the functional where the value a and b depends on the given problem. With these known C j ′ s, the approximate solution (of mth-order) is well determined. The correctness of the method by (1) Error Norm L 2 . The OHAM-LS is based on hybridization of OHAM with the least square method of optimization technique. The OHAM enable us to determine the convergence domain of the series solution, and the least square method allows us to obtain the optimal values of the C s k . Remark 7. The existing approximate analytical for FOGBDS cannot guarantee convergence mainly because they possess no criteria for the establishment for convergence of the series solution Equations (20) and (21). Construction of OHAM-LS with FOGBDS Generated by NLPCOPs We begin by considering a NLP constrained in the form where f : R n ⟶ R is the objective function, h k ðxÞ: R n ⟶ R are equality constraint functions, g k ðxÞ: R n ⟶ R are inequality constraint functions, and C 2 are continuous differentiable functions. One of the main ideas of solving unconstrained NLP is by searching for the next point by choosing proper search direction d k and the stepsize α k as in the Newton direction [45], trust-region algorithm for unconstrained optimization [46]; the descent method [47], conjugate gradient method [48], three-term conjugate gradient method [49], and subspace method for nonlinear optimization [50]; the hybrid method for convex NLP [51]; CCM for optimization problem and application [52]; and descent direction stochastic approximation for optimization problem [53]. But there are studies for other approaches. In this paper, we obtain the minimizing point of the problem by solving a certain initial-value system of FDEs. This kind of FOGBDS was first proposed by Evirgen and Özdemir [24]. Using the penalty function Equation (2) and (3) for Equation (24) with ρ = 2, the conformable FOGBDS model can be constructed as subject to the initial conditions where ∇ x Fðx, μÞ is the gradient vector of Equation (25) with respect to x k ∈ R n and T α is the CFD of 0 < α ≤ 1. Note that a point x e is called an equilibrium point of Equation (25) if it satisfies the RHS of Equation (25). We reformulate fractional dynamic system Equation (25) as We used OHAM-LS to obtain the solution of system Equation (27) by constructing the following homotopy where k = 1, 2 ⋯ , n and p ∈ ½0, 1. If p = 0, Equation (28) becomes and when p = 1, the homotopy Equation (28) becomes Advances in Mathematical Physics subject to the initial conditions, The correction functional for the system of conformable fractional nonlinear differential equation Equation (30), according to OHAM-LS, can be constructed as Thus as p varies from 0 to 1, the solution φ k ðt, pÞ approaches from x k,0 ðtÞ to x k ðtÞ where x k,0 ðtÞ is the initial guess that satisfies the linear operator which is obtained from Equation (32) for p = 0 as where C j can be determined later. We get an approximate solution by expanding φ k ðt, p, C j Þ in Taylor's series with respect to p; we have Now using Equation (35) in Equation (32) and equating the coefficient of like power of p, we obtain the governing equation of x i,0 ðtÞ in a linear form, given in Equation (33). 
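The least-square determination of the convergence-control parameters described above, minimizing the integrated squared residual over [a, b], can be illustrated on a toy first-order problem whose optimal parameter is known. The trial solution u(t; C) = exp(C t) and the domain [0, 1] are chosen purely for illustration.

```python
# Minimal sketch (NumPy/SciPy): choose a convergence-control parameter C by
# minimizing the integrated squared residual of a trial solution. For
# u'(t) + u(t) = 0, u(0) = 1 and trial solution exp(C t), the optimum C = -1
# recovers the exact solution.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.integrate import quad

def residual(t, C):
    u = np.exp(C * t)
    du = C * np.exp(C * t)
    return du + u                       # R(t, C) = u' + u

def J(C, a=0.0, b=1.0):                 # integrated squared residual over [a, b]
    val, _ = quad(lambda t: residual(t, C) ** 2, a, b)
    return val

opt = minimize_scalar(J, bounds=(-5.0, 5.0), method="bounded")
print(opt.x)                            # close to -1
```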
The 1st-and 2nd-order problems are given by and the general governing equations for x k,i ðtÞ are given by where N k,m ðx 0 ðtÞ, x k,1 ðtÞ ⋯ , x k,m ðtÞÞ is the coefficient of p m , obtained by expanding N k ðφ k ðt, p, C j ÞÞ in series with respect to p. It has been shown that the convergence of the series Equation (38) depends upon the C j . If it is convergent at p = 1, one has The solution of Equation (30) is determined approximately in the form, Substituting Equation (40) in Equation (30), we get the following expression for the residual error If R k ðt, C j Þ = 0, thenx k ðt, C j Þ is the exact solution. Usually, such a case does not arise for nonlinear problems. Using the least square method as below minimizes the functional where the value of a and b depends on the given problem. With these known C k , the analytical approximate solution (of mth-order) is well determined. The steps for optimal homotopy asymptotic methodleast square (OHAM-LS) are as follows: Step 1. We transform the nonlinear constrained optimization problem to the unconstrained optimization problem by a penalty method. Step 2. We find the gradient of the unconstrained optimization problem, with given initial conditions. Step 3. We choose the linear and nonlinear operators for OHAM-LS. Step 4. We construct homotopy for the conformable fractional nonlinear differential equation which includes embedding parameter, auxiliary function, and the unknown function. Step 5. We substitute the series solution results into the governing equation and equate to zero for an exact solution. Usually, such case a does not arise in nonlinear problems. Advances in Mathematical Physics Step 6. We find the optimal values for C j ′ s by using the optimization method called least square method, for good analytical approximate solution. Numerical Examples and Results In Minimize f x ð Þ = 100 whose exact solution is not known, but expected optimal solution is x * 1 = 1:9993, x * 2 = 3:9998. First, we transform the constraint problem to an unconstrained problem by quadratic penalty function for σ = 2; then, we have where μ ∈ R + , and so that the nonlinear FOGBDS can be given as where 0 < α ≤ 1. By using OHAM-LS with auxiliary penalty variable μ = 200, the terms of the OHAM-LS solutions for fractional order are acquired by using the concept of homotopy. According to Equation (6)), we choose the linear and nonlinear operators in the following forms: Advances in Mathematical Physics We can construct the following homotopy where Substituting Equations (56)-(58) into Equations (54) and (55) and equating the coefficient of the same powers of p result to the following set of linear FDEs. Example 2. Consider the NLPCOPs test problem from Schittkowski [54] [No 320]. Minimize This is a practical problem, and the exact solution is not known, but the expected optimal solution is x * 1 = 9:395, x * 2 = −0:6846. First, the quadratic penalty function is used to get the unconstrained optimization problem as follows: where μ ∈ R + and so that the nonlinear FOGBDS be given as By using OHAM-LS with μ = 10 6 , the terms of the OHAM-LS solutions for fractional order are acquired by using the concept of homotopy. According to Equation (6), we choose the linear and nonlinear operators in the following forms: Table 2: Comparisons and absolute error between OHAM-LS and RK4, α = 1. 
Minimize f x ð Þ = x 2 1 + x 2 2 + 2x 2 This is a practical problem, and the exact solution is not known, but the expected optimal solution is x * 1 = 0, x * 2 = 1, x * 3 = 2, and x * 4 = −1. From the above procedure, the secondorder approximate solution obtained by OHAM-LS at α = 0:9, for p = 1, is and for x 2 2 ðtÞ, Substituting these optimal values into Equations (106)-(109), we havẽ Tables 5 and 6 show the C k at different values of α for Example 3. Tables 7 and 8 show the comparisons and the absolute error between OHAM-LS and RK4 at α = 1. Also, Figure 3 shows the comparisons of OHAM-LS at α = 1, 0:9, 0:8, and 0:7 with RK4 at α = 1, which verifies the performance of the present method as an excellent tool for NLPCOPs. For α = 1, it can be seen that the approximate analytical solution agrees with the ideal solution. Thus, as α approaches 1, the classical solution for the system is recovered. Variable Conclusions In this paper, we implemented OHAM-LS for solving nonlinear FOGBDS from the optimization problem. The fractional derivative is considered in a new conformable fractional derivative sense. The optimization minimization approach of the least square method helps to obtain optimal values of the C s j for accurate approximate analytical solutions. The comparisons between the fourth-order Runge-Kutta (α = 1) and OHAM-LS show that our present method performs rapid convergence to the expected optimal solutions of the optimization problem. The results obtained are in close agreement with the exact solution, and those from the RK4 and OHAM-LS are reliable, dependable, and efficient for finding an approximate analytical solution for nonlinear FOGBDS optimization problem.
4,503.8
2020-07-26T00:00:00.000
[ "Mathematics" ]
Feasibility of Using Improved Convolutional Neural Network to Classify BI-RADS 4 Breast Lesions: Compare Deep Learning Features of the Lesion Itself and the Minimum Bounding Cube of Lesion To determine the feasibility of using a deep learning (DL) approach to identify benign and malignant BI-RADS 4 lesions with preoperative breast DCE-MRI images and compare two 3D segmentation methods. The patients admitted from January 2014 to October 2020 were retrospectively analyzed. Breast MRI examination was performed before surgical resection or biopsy, and the masses were classified as BI-RADS 4. The first postcontrast images of DCE-MRI T1WI sequence were selected. There were two 3D segmentation methods for the lesions, one was manual segmentation along the edge of the lesion slice by slice, and the other was the minimum bounding cube of the lesion. Then, DL feature extraction was carried out; the pixel values of the image data are normalized to 0-1 range. The model was established based on the blueprint of the classic residual network ResNet50, retaining its residual module and improved 2D convolution module to 3D. At the same time, an attention mechanism was added to transform the attention mechanism module, which only fit the 2D image convolution module, into a 3D-Convolutional Block Attention Module (CBAM) to adapt to 3D-MRI. After the last CBAM, the algorithm stretches the output high-dimensional features into a one-dimensional vector and connects 2 fully connected slices, before finally setting two output results (P1, P2), which, respectively, represent the probability of benign and malignant lesions. Accuracy, sensitivity, specificity, negative predictive value, positive predictive value, the recall rate and area under the ROC curve (AUC) were used as evaluation indicators. A total of 203 patients were enrolled, with 207 mass lesions including 101 benign lesions and 106 malignant lesions. The data set was divided into the training set ( n = 145 ), the validation set ( n = 22 ), and the test set ( n = 40 ) at the ratio of 7 : 1 : 2; fivefold cross-validation was performed. The mean AUC based on the minimum bounding cube of lesion and the 3D-ROI of lesion itself were 0.827 and 0.799, the accuracy was 78.54% and 74.63%, the sensitivity was 78.85% and 83.65%, the specificity was 78.22% and 65.35%, the NPV was 78.85% and 71.31%, the PPV was 78.22% and 79.52%, the recall rate was 78.85% and 83.65%, respectively. There was no statistical difference in AUC based on the lesion itself model and the minimum bounding cube model ( Z = 0.771 , p = 0.4408 ). The minimum bounding cube based on the edge of the lesion showed higher accuracy, specificity, and lower recall rate in identifying benign and malignant lesions. Based on the lesion 3D-ROI segmentation using a minimum bounding cube can more effectively reflect the information of the lesion itself and the surrounding tissues. Its DL model performs better than the lesion itself. Using the DL approach with a 3D attention mechanism based on ResNet50 to identify benign and malignant BI-RADS 4 lesions was feasible. Introduction Breast cancer is a serious threat to women's health and has become the world's most common cancer [1]. Early detection, early diagnosis, and early treatment can improve both survival and prognosis of breast cancer patients [2][3][4]. Greenwood et al. [5] have reported that breast MRI plays an important role in screening and assessing the extent of ductal carcinoma in situ (DCIS) and predicting the potential invasiveness. 
The degree of early enhancement reflects the vascular richness and blood perfusion of the lesion and can reflect the characteristics of the lesion. According to the guideline of the American College of Radiology (ACR), the possibility range of the BI-RADS 4 of malignancy is 2%-95% as defined by the breast imaging report and data system (BI-RADS) [6]. Lesions with BI-RADS 4 classification are difficult to define clearly. The signs of the lesions are overlapping and intricate. These lesions, benign or malignant, are all classified as BI-RADS 4, along with recommended invasive procedures such as needle biopsy to obtain pathological evidence [7][8][9]. Therefore, comprehensive understanding and improved evaluation methods of benign and malignant breast lesions are urgently needed to reduce invasive operations and the burden on patients. In recent years, with the rapid development of artificial intelligence-assisted diagnosis systems, deep learning has emerged as a subfield of machine learning [10][11][12][13]. Its application in medical imaging has attracted much attention, along with its wide use in image recognition, segmentation, and analysis [14]. Several studies [15,16] have attempted to increase the number of layers of CNNs from the original 5 layers of the AlexNet network [17] to the 19 layers of the VGG network. Theoretically, a deeper network leads to better effect, but the increase in network depth will also bring additional problems that in turn cause reduced performance. The main reason for the performance reduction was gradient dispersion (vanishing gradients in backpropagation lead to weakened error signal) and gradient explosion (accumulation of large error gradients results in infinity in loss function) that were caused by the increase in the number of network layers. The residual module was proposed by Khalili and Wong [15], which could effectively solve the aforementioned problems above and has become the standard configuration of CNNs. The CNNs learned a large number of features. Some features were not important for the final result, while some others played a key role in predicting results thus deserve more attention. Based on this theory, Woo et al. [18] proposed the Convolutional Block Attention Module (CBAM). The so-called greater attention was to give higher weight to those key features. In this study, the efficiency of feature extraction and classification of BI-RADS 4 breast lesions with two segmentation methods was compared by the DL model with a 3D attention mechanism, so as to verify the feasibility of using an improved convolutional neural network. Materials and Methods 2.1. Study Cohort and Imaging Protocol. The patients who underwent breast MRI examinations at Nantong First Peo-ple's Hospital were retrospectively collected from January 2014 to October 2020. A total of 296 patients with breast lesions were enrolled in the study. Inclusion criteria: (1) the diameter of the lesion was greater than 1 cm, or lesions were visible to naked eyes at least two consecutive slices; (2) the image quality was high without obvious artifacts or distortion; (3) the lesions were all mass and showed irregular margins, or inhomogeneous enhancement, or ring enhancement in MRI and classified as BI-RADS 4 by the radiologist. Exclusion criteria: (1) the breast mass showed no enhancement; (2) radiotherapy/chemotherapy or invasive operations such as biopsy before breast MRI; (3) the characteristics of the lesion and the pathological diagnosis were not clear. 
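To make the residual-module discussion above concrete, the following is a minimal 3D residual block in PyTorch; it is a generic sketch of the skip-connection idea, not the exact block of the paper's modified ResNet50, and the channel sizes are placeholders.

```python
# Minimal 3D residual block: the shortcut connection lets gradients bypass the
# convolutional path, which mitigates the vanishing-gradient problem described
# above. Generic sketch only, not the paper's architecture.
import torch
import torch.nn as nn

class ResidualBlock3D(nn.Module):
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.conv1 = nn.Conv3d(in_ch, out_ch, kernel_size=3, stride=stride,
                               padding=1, bias=False)
        self.bn1 = nn.BatchNorm3d(out_ch)
        self.conv2 = nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1,
                               bias=False)
        self.bn2 = nn.BatchNorm3d(out_ch)
        self.relu = nn.ReLU(inplace=True)
        # 1x1x1 convolution on the shortcut when the shape changes
        self.shortcut = nn.Sequential()
        if stride != 1 or in_ch != out_ch:
            self.shortcut = nn.Sequential(
                nn.Conv3d(in_ch, out_ch, kernel_size=1, stride=stride,
                          bias=False),
                nn.BatchNorm3d(out_ch))

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + self.shortcut(x))  # identity skip connection

block = ResidualBlock3D(16, 32, stride=2)
y = block(torch.randn(1, 16, 24, 24, 24))  # e.g. a 24^3 feature volume
```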
All MRIs in this study were acquired using a Siemens 3.0 T magnetic resonance scanner (Verio; Siemens, Erlangen, Germany) with a 16-channel phased-array breast-specific coil. The patients were placed in the prone position with head-first entry; the breasts hung naturally in the breast coil, and the nipple remained at the center of the coil. The scan sequence parameters were as follows: DCE T1-weighted axial fat-suppressed 3D spoiled gradient echo: TR 4.67 ms, TE 1.66 ms, flip angle 10°, FOV 340 mm × 340 mm, slice thickness 1.2 mm, scanning of 6 phases without interval, scan time 6 min 25 s, high-pressure syringe injection of 15-20 mL of the contrast agent Gd-DTPA based on body weight (0.2 mL/kg) at a flow rate of 2 mL/s, followed by injection of the same amount of normal saline to flush the tube. After the 25 s injection, scanning was triggered, and each phase was collected for 1 min. The first phase was nonenhanced, and phases 2-6 were enhanced. Our study focused on the phase 2 images, referred to as the DCE-MRI T1WI first postcontrast sequence. 2.2. 3D-ROI Lesion Segmentation. All DCE-MRI T1WI first postcontrast images of breast masses that met the inclusion criteria were imported into the image processing software ITK-SNAP 3.8.0 in DICOM format, and the lesions were manually segmented by an attending physician with 8 years of experience in breast MRI diagnosis and reviewed by a chief physician with more than 10 years of experience in breast MRI diagnosis: (1) based on the ROI of the lesion itself (Figures 1 and 2), the 3D-ROI segmentation method was used to manually delineate the boundary of the lesion slice by slice along the edge of the lesion, including cystic degeneration, necrosis, and calcification within the lesion; (2) based on the minimum bounding cube, the maximum diameter of the lesion was projected onto the 3 coordinate axes of the image to determine its coverage along the x, y, and z axes, and the bounding box of the lesion was finally obtained (Figures 3 and 4). 2.3. Lesion Feature Extraction. There are two methods of feature extraction. One is to take the minimum bounding cube of the lesion (including the lesion and part of the peritumoral area), and the other is to take only the lesion itself and set the pixel values of the nonlesion area to 0. The minimum bounding cube is the smallest circumscribed cube containing the lesion. In addition, before input to the CNN, the pixel values of the image data are normalized to the 0-1 range. The formula is x = (X − X_min)/(X_max − X_min), where x represents the normalized image pixel value, X represents the original image pixel value, and X_max and X_min represent the maximum and minimum pixel values over the minimum bounding cubes of all lesions, respectively. In this study, a total of 207 masses were obtained, of which 106 were malignant and 101 were benign. The data set was divided into training, validation, and test sets at a ratio of 7 : 1 : 2. 2.4. Model Establishment. The model was established based on the blueprint of the classic residual network ResNet50 [19], retaining its residual module but changing the convolution module to a 3D convolution module. At the same time, an attention mechanism was added by transforming the attention mechanism module, which only fits the 2D image convolution module, into a 3D Convolutional Block Attention Module (CBAM) to adapt to 3D MRI, as shown in Figure 5.
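A minimal sketch of the min-max normalization just described, assuming each lesion bounding cube is held as a NumPy array and that X_max and X_min are taken over all lesion cubes, as stated in the text.

```python
# Min-max normalization of lesion volumes to the 0-1 range, using the global
# maximum and minimum pixel values over all lesion bounding cubes.
import numpy as np

def normalize_volumes(volumes):
    """volumes: list of 3D NumPy arrays (one bounding cube per lesion)."""
    x_max = max(v.max() for v in volumes)
    x_min = min(v.min() for v in volumes)
    return [(v - x_min) / (x_max - x_min) for v in volumes]

# Example with dummy data standing in for the DCE-MRI bounding cubes.
cubes = [np.random.randint(0, 4096, size=(32, 32, 32)).astype(np.float32)
         for _ in range(3)]
normalized = normalize_volumes(cubes)
```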
CBAM includes a channel attention module and a spatial attention module, which together can solve the question of which channel and which position characteristics play decisive roles in final prediction [18]. Input module, residual module, channel attention module, downsampling module, and fully connected module constitute the main modules of the network. Among them, the residual module was mainly used to extract features, the CBAM module was mainly used to give higher weight to key features, and the downsampling module was used to reduce the size of the feature map and to increase the number of channels in the feature map. Blocks are used ( Figure 5) to reflect the size change of the feature map. After the last CBAM, the algorithm stretches the output high-dimensional features into a one-dimensional vector and connects 2 fully connected slices. Lesion classification network parameters are shown in Table 1. The network uses cross-entropy cost function as the loss function and stochastic gradient descent (SGD) whose weight decay is 0.0001 and momentum is 0.9 as the optimizer. The batch size is 16. Dynamic learning rate strategy is taken during the train process. The initial learning rate is 0.1, which is considered as a big number, halved every 25 epochs of iterations. Before finally setting two output results (P1, P2), which, respectively, represent the probability of benign and malignant lesions. The lesion is classified as benign if P1 > P2. Otherwise, the lesion is classified as malignant. "res_conv" is a residual convolution block which contains shortcut connection, and "res conv * N" means the block has N convolution blocks that share the same parameters. 3D_CBAM uses 1 × 1 × 1 convolutions to adjust the channel numbers of the current feature map. Wireless Communications and Mobile Computing lesions in the breast, 14 patients with incomplete examination or perfusion scan breast MRI, and 11 patients with breast lesions combined with nonmass enhancement lesions. Eventually, 203 patients were enrolled for analyses ( Table 1). The patients were 17-86 years old with an average age of 48:5 ± 13:1 years old. Among them, there was only one male patient, aged 54 years. There were 105 patients with malignant lesions with an average age of 55:5 ± 11:3 years and 98 patients with benign lesions with an average age of 41:0 ± 10:6 years old. A total of 207 masses were included in the study (Table 2). Table 3. In comparison, the model 1 analysis achieved mean AUC of 0.799, accuracy of 74.63%, sensitivity of 83.65%, specificity of 65.35%, NPV of 71.31%, PPV of 79.52%, and recall rate of 83.65% and the model 2 analysis achieved an average AUC of 0.827, accuracy of 78.54%, sensitivity of 78.85%, specificity and PPV of 78.22%, NPV and recall rate of 78.85%. There was no statistical difference in AUC based on the lesion itself model and the minimum bounding cube model (Z = 0:771, p = 0:4408). The minimum bounding cube based on the edge of the lesion showed higher accuracy, specificity, and lower recall rate in identifying benign and malignant lesions. Discussion Deep learning in convolutional neural networks (CNNs) is usually based on manually or semiautomatically segmented tags to learn to recognize image features. Because breast MRI is different from MRI for abdomen and lung lesions, its position is fixed in a special breast coil and is less affected by breathing movement, leading to relatively higher reproducibility of the segmentation method for breast lesions. 
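Returning to the model-establishment details above, the CBAM composition (channel attention followed by spatial attention applied to 3D feature maps) can be sketched as follows; this is an illustrative 3D adaptation of the module of Woo et al., not the authors' code, and the channel counts and reduction ratio are placeholders.

```python
# Sketch of a 3D CBAM block: channel attention followed by spatial attention,
# each producing weights that upweight the key features discussed above.
import torch
import torch.nn as nn

class ChannelAttention3D(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels))

    def forward(self, x):
        b, c = x.shape[:2]
        flat = x.flatten(2)                         # (b, c, D*H*W)
        avg = self.mlp(flat.mean(dim=2))            # global average pooling
        mx = self.mlp(flat.max(dim=2).values)       # global max pooling
        w = torch.sigmoid(avg + mx).view(b, c, 1, 1, 1)
        return x * w                                # reweight channels

class SpatialAttention3D(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv3d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)
        mx, _ = x.max(dim=1, keepdim=True)
        w = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * w                                # reweight positions

class CBAM3D(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.ca = ChannelAttention3D(channels)
        self.sa = SpatialAttention3D()

    def forward(self, x):
        return self.sa(self.ca(x))

feat = torch.randn(2, 64, 16, 16, 16)
out = CBAM3D(64)(feat)   # same shape, with key channels/positions upweighted
```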
However, the segmentation methods are quite different. Previous studies have mostly extracted the two-dimensional features of the lesion (2D-ROI) [20], selected the largest slice of the lesion or the most obvious slice of lesion enhancement [21], and segmented along the edge of the lesion. 2D-ROI can only represent the information covered by the current area and cannot reflect all the information of the lesion. Therefore, this will definitely affect the reliability of DL models. The use of 3D-ROI is helpful to observe the lesion's overall morphology, leading to more accurate and comprehensive reflection of the characteristics of the lesion [22]. And more weight is given to the hemodynamic characteristics of the relevant lesion in the model based on the usual imaging physicians' reading habits and the advantages of early enhanced MRI. The Efficacy of a Deep Learning Model Based on the Minimum Bounding Cube of the Lesion in Breast Lesion Classification. This study used two different segmentation methods for 3D-ROI of the lesion: one was based on the lesion itself, and the other one was based on the minimum bounding cube of the lesion edge. These two different segmentation methods were compared for their impact on the accuracy of the DL model. Our results revealed that the DL model based on the minimum bounding cube of the lesion edge is more accurate, with a mean AUC value of about Wireless Communications and Mobile Computing 0.827. The reason may be that the minimum bounding cube based on the lesion edge not only contains the internal information of the lesion but also includes some tissues surrounding the lesion. Zhou et al. [23] applied 5 different input boxes (tumor alone, the smallest bounding box, and 1.2, 1.5, and 2.0 time box) in deep learning and showed that the performance of diagnosis gradually decreases as the bounding box increases. The per-lesion diagnostic accuracy was the highest when using the smallest bounding box (89%), but the tumor ROI on all slices were automatically segmented on contrastenhanced maps by using the fuzzy-C-means (FCM) clustering algorithm with 3D connected-component labeling, This study used manually segmented images as a standard for comparison, which may be more accurate. And the minimum bounding cube based on the tumor edge did not expand the box size but instead used 3D-CBAM to increase the weight of key information, in order to prevent the box containing too much information from normal tissue that dilutes the effective information in the overall box or reduces the resolution of the effective information of the image imported into the neural network. The DL model that is based on the minimum bounding cube of postcontrast images of DCE-MRI T1WI sequence showed superiority in the test set, a mean specificity of 78.22%, which are better than those of the DL model that is based on just the lesion itself. The reason may be that the microenvironment around the tumor plays a critical role in tumor growth and aggressive tissue behavior [24,25]. 3D-CBAM was to give higher weight to those key features. The area around the tumor contains much valuable and hidden information about the disease, including survival predictors for vascular activity and lymphangiogenesis and the infiltration of lymphatic and blood vessels around the tumor, and immune response signals around the tumor for interstitial response and lymphocyte infiltration around the tumor [26]. 
As we have shown in a previous study [27], the peritumoral edema on T2WI images is better and appears as T2WI hyperintensity around the tumor. This sign is combined with the T2WI signal, leading to significantly increased sensitivity and specificity for the differential diagnosis of benign and malignant breast tumors, and there is a positive correlation between peritumoral edema and Ki-67 expression. These results demonstrate the importance of the tissue surrounding the tumor. However, related studies are still limited at present; thus, the information about surrounding tissues has not been captured by the artificial intelligence learning technology. Braman et al. [26] collected a total of 117 patients and extracted omics features after marking the breast tumors and surrounding areas (2.5-5 mm area around the tumor) using breast images from DCE-MRI-T1WI. Their results showed that the omics features of surrounding tissues helped to predict pCR and that combined use of tumor internal characteristics and peritumoral characteristics led to better prediction accuracy, which as a whole may help guide the personalized treatment of locally advanced breast cancer. This indicates that extracting the information contained in the tissue around the tumor has a high clinical application value. The Diagnostic Efficacy of the Deep Learning Model Based on First Postcontrast Images of DCE-MRI T1WI Sequence in Benign and Malignant Breast Lesions. The deep learning model that is based on the minimum bounding cube of dynamic contrast postcontrast images has high specificity in the classification of benign and malignant breast lesions. We speculate that this may be related to the early hemodynamic information of the lesion, as shown in a previous study of ours that DCE-MRI can not only reveal tumor's morphological changes but also reflect its microvascular perfusion, angiogenesis, grades, and malignancy for evaluating the effect of tumor treatment and prognosis. The degree of early enhancement reflects the abundance of blood vessels and blood perfusion of the disease [28]. Malignant lesions grow fast, have multiple large blood vessels, are immature, and have a large number of arteriovenous anastomoses. In addition to the high accuracy in diagnostic performance of the minimum bounding cube based on the edge of the lesion, we also found that the method is relatively simple and easy to use, as it only needs to find the largest level of the three dimensions of the image through image processing software. At this level, the minimum rectangle that can cover the outermost edge of the lesion is used, and finally, the minimum bounding cube containing the lesion is generated by the computer traversal method. However, the 3D-ROI based on the lesion itself needs to be delineated slice by slice and along the edge. For nonenhancement sequence images, sometimes, the edge of the lesion is unclear, leading to the lack of local edge information of the lesion. Conclusion In summary, based on the segmentation method of the minimum bounding cube at the edge of the lesion, postcontrast images of DCE-MRI T1WI sequence were extracted, and a DL model was established. This model can combine the information inside the lesion and that of containing peritumoral area to improve the diagnostic efficacy for both benign and malignant breast lesions. Using the DL approach with a 3D attention mechanism based on ResNet50 to identify benign and malignant BI-RADS 4 lesions was feasible. 
Limitations of This Study. This was a small-sample, single-center study, and the results need to be confirmed by future large-sample, multicenter investigations. Only mass lesions were included; thus, whether the segmentation method is equally applicable to nonmass lesions remains to be tested. The inclusion/exclusion criteria are quite stringent and exclude many of the lesions that a radiologist reading breast MRI will routinely encounter. The study used only first postcontrast images of the DCE-MRI T1WI sequence for segmentation by the minimum bounding cube of the lesion, and whether this approach fits other sequences remains to be examined. Another limitation is that this study compared only two lesion segmentation methods; thus, future investigation is needed to test whether other ROIs containing the peritumoral area may perform better. Data Availability. All data generated or analyzed during this study are available from the corresponding author Wei Xing upon reasonable request. Ethical Approval. The retrospective study was approved by the Ethical Review Board of Nantong First People's Hospital (No. 2020KY236) and was conducted according to the Declaration of Helsinki principles. Consent. All patients signed informed consent. Conflicts of Interest. The authors declare that they have no conflict of interest.
4,971.4
2021-09-08T00:00:00.000
[ "Medicine", "Computer Science" ]
Modeling and analysis of functional method comparison data Abstract We consider modeling and analysis of functional data arising in method comparison studies. The observed data consist of repeated measurements of a continuous variable obtained using multiple methods of measurement on a sample of subjects. The data are treated as multivariate functional data that are observed with noise at a common set of discrete time points which may vary from subject to subject. The proposed methodology uses functional principal components analysis within the framework of a mixed-effects model to represent the observations in terms of a small number of method-specific principal components. Two approaches for estimating the unknowns in the model, both adaptations of general techniques developed for multivariate functional principal components analysis, are presented. Bootstrapping is employed to get estimates of bias and covariance matrix of model parameter estimates. These in turn are used to compute confidence intervals for parameters and functions thereof, such as the measures of similarity and agreement between the measurement methods, that are necessary for data analysis. The estimation approaches are evaluated using simulation. The methodology is illustrated by analyzing two datasets. Introduction Multivariate functional data arise when repeated measurements of J ð! 2Þ variables are taken over time on every subject (Ramsay and Silverman 2005;Berrendero, Justel, and Svarc 2011;Chiou, Chen, and Yang 2014;Jacques and Preda 2014;Happ and Greven 2018). The measurements of each variable on a subject are assumed to be values of an underlying smooth random function that is observed with noise at discrete time points. For each subject, all the J variables are recorded at every observation time. Thus, these data consist of J curves per subject, observed at a common set of discrete observation times. This set of times, however, may vary from subject to subject. There is dependence in the J curves as they come from the same subject. We are specifically interested in the special case of multivariate functional data arising in method comparison studies (Choudhary and Nagaraja 2017). They involve measuring a continuous variable on every subject using multiple methods of measurement in a common unit. All methods measure the variable with error. The primary goal in these studies is to evaluate whether the methods agree sufficiently well to be used interchangeably. It is evident from more than 25,000 citations of Bland and Altman (1986), which proposed the popular limits of agreement approach for agreement evaluation with scalar observations, that such studies are common in biomedical sciences. The measurements from the multiple methods are the dependent functional variables here. For example, consider two method comparison datasets, both with J ¼ 2, that motivated this work-body fat data Chinchilli et al. (1996) and body temperature data Li and Chow (2005). In the first, we have measurements of percentage body fat made using skinfold calipers and dual energy x-ray absorptiometry (DEXA) in a cohort of adolescent girls over a period of about 4 years. These longitudinal data are an example of sparse bivariate functional data. In the second, we have core body temperature-the temperature of tissues deep within the body, measured every minute over a period of 90 minutes at two locations in the body-esophagus and rectum. These data are an example of dense bivariate functional data. 
Our interest is in evaluating agreement between measurements from caliper and DEXA methods in the first case and between measurements taken at the two body locations in the second case. There is a growing body of literature on the analysis of method comparison data. See Barnhart, Haber, and Lin (2007) and Choudhary and Nagaraja (2017) for an introduction. Nevertheless, almost all the literature assumes that the observations are scalar. For scalar data, evaluation of agreement between two methods involves quantifying how far the methods are from having perfect agreement, in which case the joint distribution of the methods is concentrated on the line of equality. In other words, two methods in perfect agreement have equal means, equal variances, and a correlation of one; or equivalently, their differences are zero with probability one. In the statistical literature, agreement is commonly evaluated by performing inference on measures of agreement such as concordance correlation coefficient (CCC) of Lin (1989) and total deviation index (TDI) of . However, in the biomedical literature, the limits of agreement approach of Bland and Altman (1986) is the most popular. These measures are defined in Sec. 4. A reader interested in their comparison may consult Barnhart, Haber, and Lin (2007). In addition to evaluation of agreement, a secondary goal of a method comparison study is to evaluate similarity of methods by comparing their marginal characteristics such as means and precisions. This is typically done by performing inference on measures of similarity such as mean difference and precision ratio (Dunn 2007). Evaluation of similarity is a necessary supplement to evaluation of agreement as it provides information about the sources of disagreement between the methods (Choudhary and Nagaraja 2017, Chapter 1). In the method comparison literature, we are only aware of Li and Chow (2005) that deals with functional observations. It extends the ideas of Lin (1989) to develop a CCC for functional data from J ¼ 2 methods. But this approach has drawbacks that limit its usefulness. First, it produces a single overall index of agreement over the entire time interval. However, given the functional nature of the data, an index that changes smoothly over time may be preferable over the overall scalar index because the former allows insight into how the extent of agreement changes over time. Second, the approach is specifically designed for CCC-a function of first and second order moments of the measurements. It is unclear how the approach can be adapted for other measures of agreement such as TDI, which is a percentile (see Sec. 4). This is an issue because CCC is often criticized for being unduly influenced by the between-subject variation in the data as it may lead to misleading conclusions (see, e.g., Barnhart, Haber, and Lin 2007). Third, the approach assumes that all curves are observed at the same time points. This assumption is unnecessarily restrictive. For example, it does not hold for the body fat data although it holds for the body temperature data. Fourth, the approach in its present form cannot deal with J > 2 methods. These drawbacks may be overcome by a model-based approach for analyzing functional method comparison data. The model parameters can be used to obtain functional analogs of any measure of similarity and agreement for scalar observations. The model would allow the observation times to differ between the subjects. It can also accommodate more than two methods. 
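To make the scalar agreement measures discussed above concrete, the snippet below computes Lin's CCC and the Bland-Altman 95% limits of agreement for simulated paired measurements; the data are placeholders, not the body fat or body temperature data.

```python
# Minimal illustration of Lin's concordance correlation coefficient and the
# Bland-Altman 95% limits of agreement for scalar paired measurements.
import numpy as np

rng = np.random.default_rng(0)
truth = rng.normal(25, 3, size=200)
y1 = truth + rng.normal(0, 1.0, size=200)          # method 1
y2 = truth + 0.5 + rng.normal(0, 1.5, size=200)    # method 2 (biased, noisier)

m1, m2 = y1.mean(), y2.mean()
v1, v2 = y1.var(ddof=1), y2.var(ddof=1)
cov12 = np.cov(y1, y2, ddof=1)[0, 1]

ccc = 2 * cov12 / (v1 + v2 + (m1 - m2) ** 2)       # Lin (1989)

d = y1 - y2                                        # paired differences
loa = (d.mean() - 1.96 * d.std(ddof=1), d.mean() + 1.96 * d.std(ddof=1))

print(f"CCC = {ccc:.3f}, limits of agreement = ({loa[0]:.2f}, {loa[1]:.2f})")
```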
This is the approach we take in this article. Functional data analysis is currently an active area of research, see Ramsay and Silverman (2005) for an introduction. A common analytical approach involves performing a functional principal components analysis (FPCA) to obtain a parsimonious representation of the data (Ramsay and Silverman 2005, Chapter 8). The PACE (principal components analysis through conditional expectation) methodology of Yao, M€ uller, and Wang (2005) is a popular approach for FPCA of data that are observed with measurement error. It involves decomposing the functional observations via a Karhunen-Lo eve expansion and using the framework of mixed-effects model for estimating coefficients in the expansion as best linear unbiased predictors of random effects, and estimating error variance by smoothing the covariance function. This approach and its refinement due to Goldsmith, Greven, and Crainiceanu (2013) are implemented in the refund (Goldsmith et al. 2016) and MFPCA (Happ 2018) packages for the statistical software system R (R Core Team 2018). Methodologies for FPCA of multivariate functional data have also been developed, see, e.g., Ramsay and Silverman (2005, Chapter 8), Berrendero, Justel, and Svarc (2011), Jacques and Preda (2014), Chiou, Chen, and Yang (2014), and Happ and Greven (2018). Among these, the approaches of Chiou, Chen, and Yang (2014) and Happ and Greven (2018) are of specific interest in this article as they can be used for data observed with measurement error, which is the case for our method comparison data. Although Chiou, Chen, and Yang (2014) and Happ and Greven (2018) differ in their basic premise regarding univariate components of the multivariate observation-in particular, they may have different units in Chiou, Chen, and Yang (2014) and they may be observed on different (dimensional) domains in Happ and Greven (2018)-both first obtain a Karhunen-Lo eve expansion of the multivariate observations. Thereafter, Chiou, Chen, and Yang (2014) estimate the unknowns by a generalization of the PACE methodology. They also employ normalization to deal with the different units. On the other hand, Happ and Greven (2018) establish a relation between univariate and multivariate FPC decompositions and employ it to obtain estimates of the unknowns in the multivariate model using their estimates from the univariate models. The univariate estimates may be obtained, e.g., using the PACE approach of Yao, M€ uller, and Wang (2005). This methodology is implemented in an R package MFPCA (Happ 2018). This brings us to our approach for analysis of functional method comparison data. In Sec. 2, we begin by writing a subject's observed curve from a measurement method as a sum of an unobservable true smooth curve and a random measurement error. Each measurement method has its own mean and covariance functions and error variance. Next, the method-specific true curves are represented via a multivariate Karhunen-Lo eve expansion. In Sec. 3, we consider two approaches for estimating the unknowns in the model. The first approach-termed MPACE-directly adapts the PACE methodology to deal with multivariate data along the lines of Chiou, Chen, and Yang (2014). The second approach-termed UPACE-adapts the methodology of Happ and Greven (2018). Bootstrap is used to construct relevant confidence intervals and bands. In Sec. 4, we discuss evaluation of similarity and agreement under the assumed model. Sec. 
5 presents a simulation study to evaluate properties of the two estimation approaches. The body fat data are analyzed in Sec. 6. Sec. 7 concludes with a discussion. Appendix A contains some technical details. An analysis of the body temperature data and additional simulation results are presented in the online Supplemental Material, which can be accessed from the journal website. Modeling of data Let the random function X j denote the true unobservable curve measured using method j ¼ 1, :::, J ð! 2Þ for a randomly selected subject from the population of interest. The curves are defined on a common domain T ¼ ½a, b, a < b 2 R: Let the mean and covariance functions of the random functions be denoted by l j ðtÞ ¼ EðX j ðtÞÞ, G jl ðs, tÞ ¼ covðX j ðsÞ, X l ðtÞÞ, j, l ¼ 1, :::, J; s, t 2 T : Let X ¼ ðX 1 , :::, X J Þ T denote the J  1 vector of the curves and lðtÞ ¼ ðl 1 ðtÞ, :::, l J ðtÞÞ T be the J  1 vector of its mean. Model for population curves Under certain conditions (Chiou, Chen, and Yang 2014;Happ and Greven 2018), the multivariate Karhunen-Lo eve Theorem provides a stochastic representation of X as Here, / k ðtÞ ¼ ð/ k1 ðtÞ, :::, / kJ ðtÞÞ T are orthonormal eigenfunctions, satisfying the property that the inner product of / k and / l , given as P J j¼1 Ð T / kj ðtÞ/ lj ðtÞdt, equals zero if k 6 ¼ l and one if k ¼ l; and n k -called "scores"-are uncorrelated random variables with mean zero and variance k k . The variances k k are eigenvalues associated with the eigenfunctions / k and are non-increasing, i.e., k 1 ! k 2 ! ::: ! 0: We can write (1) as X j ðtÞ ¼ l j ðtÞ þ X 1 k¼1 n k / kj ðtÞ, j ¼ 1, :::, J; t 2 T : (2) Thus, the Karhunen-Lo eve representation provides a basis expansion of the curve X j in terms of the basis functions / 1j , / 2j , ::: that depend on method j, whereas the random coefficients n 1 , n 2 , ::: are common to all methods. It is these coefficients that induce dependence within and between the curves. In particular, under (2), the covariance functions can be written as G jl ðs, tÞ ¼ k k / kj ðsÞ/ kl ðtÞ, j, l ¼ 1, :::, J; s, t 2 T : The true curves X j ðtÞ are observed with error as Y j ðtÞ ¼ X j ðtÞ þ j ðtÞ, where the errors j ðtÞ are independent random variables with mean zero and variance s 2 j , j ¼ 1, :::, J, and are independent of the true values. Using (2), we can write this model as Y j ðtÞ ¼ l j ðtÞ þ X 1 k¼1 n k / kj ðtÞ þ j ðtÞ, j ¼ 1, :::, J; t 2 T : Thus, the mean and autocovariance functions of the observed curves are EðY j ðtÞÞ ¼ l j ðtÞ, covðY j ðsÞ, Y j ðtÞÞ ¼ G jj ðs, tÞ þ s 2 j Iðs ¼ tÞ, j ¼ 1, :::J, and their cross covariance function is covðY j ðsÞ, Y l ðtÞÞ ¼ G jl ðs, tÞ, j 6 ¼ l ¼ 1, :::, J: Here I is the indicator function. It follows that, for each t 2 T , the vector ðY 1 ðtÞ, :::, Y J ðtÞÞ has a J-variate distribution with mean ðl 1 ðtÞ, :::, l J ðtÞÞ, variance ðr 2 1 ðtÞ, :::, r 2 J ðtÞÞ, and correlation q jl ðtÞ, where r 2 j ðtÞ ¼ G jj ðt, tÞ þ s 2 j , q jl ðtÞ ¼ G jl ðt, tÞ r j ðtÞr l ðtÞ , j 6 ¼ l ¼ 1, :::, J: Further, for j 6 ¼ l, the difference D jl ðtÞ ¼ Y j ðtÞ À Y l ðtÞ has a distribution with mean d jl ðtÞ and variance g 2 jl ðtÞ, where d jl ðtÞ ¼ l j ðtÞ À l l ðtÞ, g 2 jl ðtÞ ¼ r 2 j ðtÞ þ r 2 l ðtÞ À 2G jl ðt, tÞ: These distributions are used in Sec. 4 to get functional analogs of measures of similarity and agreement. Model for observed data Suppose there are n subjects in the study, indexed as i ¼ 1, :::, n: The observed data consist of J curves per subject, one from each method, observed at discrete observation times. 
Specifically, let Y ij ðt im Þ denote the observation from method j on subject i taken at time t im , m ¼ 1, :::, N i , j ¼ 1, :::, J, i ¼ 1, :::, n: The J curves for a subject are linked in that they are observed at common observation times t im , m ¼ 1, :::, N i : Thus, subject i contributes JN i observations. The number of observations and the observation times need not be the same for each subject. The design is balanced if the observation times are common for all subjects and the linked observations are available at each observation time from every subject. Otherwise, the design is unbalanced. The functional data are usually said to be dense when the design is balanced and the common N i is large, and they are said to be sparse when the design is unbalanced and N i is small. To obtain a model for the observed data, let X ij ðtÞ, ij ðtÞ, Y ij ðtÞ, and n ik denote the respective counterparts of the population quantities X j ðtÞ, j ðtÞ, Y j ðtÞ, and n k , given by (2) and (4), for subject i. The quantities for subject i are assumed to be independent copies of the corresponding population quantities. Thus, the model for the data can be written as where the errors ij ðt im Þ are independent random variables with mean zero and variance s 2 j : The model postulates that a subject's true curve from a method is an infinite linear combination of method-specific basis functions that are common to all subjects but with subject-specific coefficients that are common to all methods. The eigenfunctions / kj ðtÞ serve as the basis functions and the scores n ik serve as the coefficients. To analyze these data, first we perform a dimension reduction by truncating the infinite sum in (8) to K terms, where K is the number of FPC to be selected. This leads to as the approximate model. It has the structure of a mixed-effects model. The true model (4) is used to define the parameters and their functions that are the target of inference. But they are estimated by fitting this approximate model to the data. The number of components K is treated as an unknown component in the model. The issue of estimation of unknowns is taken up in Sec. 3. To write (9) in the matrix notation, define the N i  1 vectors t i ¼ ðt i1 , :: Þ, :::, l j ðt iN i ÞÞ T , and ij ðt i Þ ¼ ð ij ðt i1 Þ, :::, ij ðt iN i ÞÞ T : These respectively represent the vectors of the observation times for subject i, the corresponding observations from method j, their means, and the associated random errors. Next, define the JN i  1 vectors and take the JN i  JN i diagonal matrix R i ¼ diagfs 2 1 , :::, s 2 1 , :::, s 2 J , :::, s 2 J g, where s 2 j is repeated N i times for each j, as the covariance matrix of i ðt i Þ: Further, define n i ¼ ðn i1 , :::, n iK Þ T as the K  1 vector of scores and K ¼ diagfk 1 , :::, k K g as its K  K diagonal covariance matrix; / kj ðt i Þ ¼ ð/ kj ðt i1 Þ, :::, / kj ðt iN i ÞÞ T as the N i  1 vector of values of kth eigenfunction / kj associated with method j; / k ðt i Þ ¼ ð/ T k1 ðt i Þ, :::, / T kJ ðt i ÞÞ T as the JN i  1 vector by stacking the values for all the methods; and Uðt i Þ ¼ ð/ 1 ðt i Þ, :::, / K ðt i ÞÞ as their JN i  K matrix. With this notation, the model (9) can be written as Here the n i follow independent distributions with mean 0 and covariance matrix K, i ðt i Þ follow independent distributions with mean 0 and covariance matrix R i , and the two vectors are mutually independent. 
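Because the displayed equations of the population-curve and observed-data models above did not survive extraction cleanly, the key relations are collected here in standard notation, reconstructed from the surrounding definitions; the error variance is written as τ_j² and the JN_i × K matrix of eigenfunction values as Φ(t_i), which may differ from the authors' original symbols.

```latex
% Model relations reconstructed from the definitions in the text.
\begin{align}
X_j(t) &= \mu_j(t) + \sum_{k=1}^{\infty} \xi_k\,\phi_{kj}(t), \qquad
Y_j(t) = X_j(t) + \epsilon_j(t), \quad \epsilon_j(t)\sim(0,\tau_j^2),\\
G_{jl}(s,t) &= \operatorname{cov}\{X_j(s),X_l(t)\}
  = \sum_{k=1}^{\infty}\lambda_k\,\phi_{kj}(s)\,\phi_{kl}(t),\\
\sigma_j^2(t) &= G_{jj}(t,t)+\tau_j^2,\qquad
\rho_{jl}(t)=\frac{G_{jl}(t,t)}{\sigma_j(t)\,\sigma_l(t)},\\
\delta_{jl}(t) &= \mu_j(t)-\mu_l(t),\qquad
\eta_{jl}^2(t)=\sigma_j^2(t)+\sigma_l^2(t)-2\,G_{jl}(t,t),\\
\mathbf{Y}_i(\mathbf{t}_i) &= \boldsymbol{\mu}(\mathbf{t}_i)
   + \boldsymbol{\Phi}(\mathbf{t}_i)\,\boldsymbol{\xi}_i
   + \boldsymbol{\epsilon}_i(\mathbf{t}_i),\qquad
\operatorname{var}\{\mathbf{Y}_i(\mathbf{t}_i)\}
   = \boldsymbol{\Phi}(\mathbf{t}_i)\,\boldsymbol{\Lambda}\,
     \boldsymbol{\Phi}(\mathbf{t}_i)^{T} + \mathbf{R}_i.
\end{align}
```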
It follows that The elements of the first term of varðY i ðt i ÞÞ consist of values of covariance functions given by (3) but with the infinite sums therein truncated to K terms. The unknowns in the model are h ¼ fl 1 , :::, l J , K, k 1 , :::, k K , / 11 , :::, / J1 , :::, / 1K , :::, / JK , s 2 1 , :::, s 2 J g: The mean functions and eigenfunctions in h depend on t 2 T as well but this dependency is suppressed for convenience. Next, we discuss estimation of h to get the plug-in estimatorĥ: Parameter estimation Let N 0 be the number of unique observation times in the data and t 0 ¼ ðt 01 , :::, t 0N 0 Þ T be the N 0  1 vector of these times in increasing order. The elements of t 0 form a grid in the domain T : By definition, there is at least one observation from all measurement methods at each time in t 0 : For estimation, we begin by pooling observations from each method on all subjects and smoothing them ignoring the within-subject dependence. Separate smoothing is performed for each method. This results in a smooth estimatel j ðtÞ of l j ðtÞ, j ¼ 1, :::, J: Then, each observation in the data is centered by subtracting off the corresponding estimated mean asỸ ij ðt im Þ ¼ Y ij ðt im Þ Àl j ðt im Þ: These centered observations are used to form JN 0  1 vectorsỸ i ðt 0 Þ in the same way as Y i ðt i Þ are formed in (10). If the subject i does not have an observation for some t 2 t 0 , that observation is set to be missing inỸ i ðt 0 Þ: In Appendix A, we describe the two approaches-MPACE and UPACE-for estimating the remaining unknown components of h and the multivariate scores n in the model (11). Both use the centered data as inputs and involve the PACE methodology of Yao, M€ uller, and Wang (2005) for univariate functional data. MPACE directly adapts the PACE methodology to deal with multivariate data along the lines of Chiou, Chen, and Yang (2014), whereas UPACE adapts the approach of Happ and Greven (2018). UPACE is computationally simpler of the two as it involves first applying the univariate PACE methodology separately to each component of the multivariate data and then processing the results. However, this may result in loss of efficiency in estimates, especially of the error variances s 2 j because they also come from univariate analyses rather than a multivariate analysis as in MPACE. Although the smoothing needed in these approaches and also for estimating the mean functions is performed here using gam function in R package mgcv (Wood 2017), any other smoothing technique-e.g., local linear regression as in Yao, M€ uller, and Wang (2005) and Chiou, Chen, and Yang (2014)-can also be used without affecting the general methodology. Upon model fitting, the fitted curves areŶ i ðt i Þ ¼l i ðt i Þ þÛðt i Þn i , i ¼ 1, :::, n: Confidence intervals and bands Suppose w wðhÞ is a function of model parameters of interest. Examples of w include the precision ratio s 2 1 =s 2 2 : Often, the parameter function depends on t, i.e., it has the form wðtÞ wðt, hÞ, t 2 T : Examples of wðtÞ include the mean difference d jl ðtÞ and the agreement measures defined in next section. Since w can be considered a special case of wðtÞ, we focus on constructing one-and two-sided confidence bands for wðtÞ: In effect, we construct pointwise and simultaneous intervals on a relatively fine grid t of L points in T , say, t 1 , :::, t L : This grid may be the same as the grid t 0 formed by the observed time points, used for estimation in Sec. 3. Or it may consist of a subset of these time points. 
In practice, L 2 ½25, 50 is often adequate. LetŵðtÞ wðt,ĥÞ be the plug-in estimator of wðtÞ: Also, letŵðtÞ and wðtÞ be L  1 vectors representing the values of the two functions evaluated at the elements of t. When n is large, the joint distribution ofŵðtÞ À wðtÞ can be approximated by a N L ðb, SÞ distribution, possibly after a applying normalizing transformation, where the L  1 vector b ¼ ðb 1 , :::, b L Þ T and the L  L matrix S ¼ ðs jk Þ j, k¼1, :::, L respectively represent the estimated bias vector and covariance matrix of the estimators. Once b and S are available, an approximate 100ð1 À aÞ% one-or two-sided pointwise confidence band for wðtÞ, t 2 T can be computed as where z a is the 100ath percentile of a N 1 ð0, 1Þ distribution. A simultaneous band can be constructed by replacing z a in (13) by an appropriate percentile (Choudhary and Nagaraja 2017, Chapter 3) that can be computed using the multcomp package of Hothorn, Bretz, and Westfall (2008) in R or via simulation as we do here. We now present a bootstrap methodology to compute b and S. It has the following steps: 1. Sample n indices with replacement from the integers 1, :::, n: Take the observed curves associated with the sampled subject indices as a resample of the original data. 2. Apply the estimation and FPCA approach described in Appendix A to estimate h from the resampled data to getĥ à : 3. Useĥ à to estimate wðtÞ asŵ à ðtÞ: Thisŵ à ðtÞ is a resample ofŵðtÞ: 4. Repeat the previous steps Q times to get the resamplesŵ à q ðtÞ, q ¼ 1, :::, Q: Compute the bias vector b as P Q q¼1ŵ à q ðtÞ=Q ÀŵðtÞ, and the covariance matrix S as the sample covariance matrix of the resamples. In practice, Q ¼ 500 is often enough to estimate b and S. If there is evidence that a bias correction is not needed, then the term b l in (13) can be dropped (see Sec. 5 for an example). Note that a separate FPCA is performed in each bootstrap repetition. Therefore, the resulting confidence intervals also account for the uncertainty due to FPC decomposition in addition to the usual uncertainty due to sampling (Goldsmith, Greven, and Crainiceanu 2013). The procedure of this subsection can be easily adapted to construct confidence interval for a parameter function w that does not depend on t. Evaluation of similarity and agreement We now focus on how to evaluate similarity and agreement of a pair of measurement methods j and l, j 6 ¼ l ¼ 1, :::, J: This evaluation can be repeated for all such pairs of interest. For similarity evaluation, inference is performed on two measures of similarity-difference in means of the methods and ratio of their precisions (Dunn 2007). Under the true model (4), d jl ðtÞ given by (7) is the mean difference and s 2 j =s 2 l is the precision ratio. For agreement evaluation, inference is performed on functional analogs of agreement measures originally developed for scalar data. These are obtained by using the definitions of the measures under the bivariate distribution of ðY j ðtÞ, Y l ðtÞÞ induced by the true model (4) for each t 2 T : We specifically consider two agreement measures. One is the concordance correlation coefficient (CCC) due to Lin (1989). It is defined in terms of first and second order moments of the paired observations. Using (5) and (6), the functional CCC can be expressed as See Lin (1989) for properties of a CCC. Here we just note that jCCC jl ðtÞj jq jl ðtÞj 1 and CCC jl ðtÞ ¼ q jl ðtÞ if l j ðtÞ ¼ l l ðtÞ and r 2 j ðtÞ ¼ r 2 l ðtÞ: A large positive value for CCC implies good agreement. 
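Stepping back to the bootstrap of Steps 1-4 above, a minimal sketch of how the bias vector b, covariance matrix S, and a two-sided pointwise band could be computed is given below; fit_and_evaluate is a hypothetical placeholder that refits the FPCA model to the resampled subjects and returns the estimate of ψ(t) on the grid.

```python
# Sketch of the subject-level bootstrap used to estimate the bias vector b and
# covariance matrix S of the estimator of psi(t) on a grid of L time points.
import numpy as np
from scipy.stats import norm

def bootstrap_band(data, fit_and_evaluate, grid, Q=500, alpha=0.05, seed=1):
    rng = np.random.default_rng(seed)
    n = len(data)                                   # number of subjects
    psi_hat = fit_and_evaluate(data, grid)          # estimate from full data
    resamples = np.empty((Q, len(grid)))
    for q in range(Q):
        idx = rng.integers(0, n, size=n)            # sample subjects with replacement
        resamples[q] = fit_and_evaluate([data[i] for i in idx], grid)
    b = resamples.mean(axis=0) - psi_hat            # estimated bias vector
    S = np.cov(resamples, rowvar=False)             # estimated covariance matrix
    se = np.sqrt(np.diag(S))
    z = norm.ppf(1 - alpha / 2)
    lower = psi_hat - b - z * se                    # two-sided pointwise band
    upper = psi_hat - b + z * se
    return lower, upper
```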
The methods j and l have perfect agreement when CCC jl ðtÞ ¼ 1 for all t. The other measure is the total deviation index (TDI) due to . For a given large probability p 0 , it is defined as the p 0 th percentile of absolute difference in the paired observations. For inference on TDI, we additionally assume that the scores and the errors in the models (4) and (9) follow normal distributions. Under this assumption, for each t 2 T , the difference D jl ðtÞ follows a normal distribution with mean d jl ðtÞ and variance g 2 jl ðtÞ, given by (7). This implies that the functional TDI can be expressed as where v 2 1, p 0 ðDÞ is the 100p 0 th percentile of a noncentral v 2 distribution with one degree of freedom and noncentrality parameter D. A TDI is non-negative, and its small value implies good agreement. Agreement between methods j and l is perfect when TDI jl ðtÞ ¼ 0 for all t. The measures of similarity and agreement are estimated by plug-in. Similarity of the methods is evaluated by examining a two-sided confidence band for d jl ðtÞ and a two-sided confidence interval for s 2 j =s 2 l : Agreement between the methods is evaluated by examining appropriate onesided confidence bands for agreement measures. Since a large value for CCC and a small value for TDI imply good agreement, an upper confidence band for CCC and a lower confidence band for TDI are appropriate. The construction of confidence intervals and bands was discussed in the previous subsection. To improve accuracy, the intervals for precision ratio and TDI are obtained by first applying a log transformation and those for CCC are obtained by first applying the Fisher's z-transformation. The results are then transformed back to the original scale. As mentioned in Sec. 1, the limits of agreement approach of Bland and Altman (1986) is quite popular in the biomedical literature for agreement evaluation. This involves, under the normality assumption for the differences, computing estimated mean ± 1.96 times the estimated standard deviation of the differences, and examining whether the limits contain any unacceptably large differences. Using (7), the functional limits of agreement ared jl ðtÞ61:96ĝ jl ðtÞ, t 2 T : Simulation study In this section, we use Monte Carlo simulation to evaluate performance of point and interval estimators of key parameters and parameter functions, including measures of similarity and agreement, provided by the MPACE and UPACE approaches. This investigation focuses on J ¼ 2 measurement methods and takes mean squared error (MSE) of a point estimator and coverage probability of a confidence interval as the measure of accuracy. The data are simulated from the true model (4) along the lines of our real data examples by taking the domain as T ¼ ½0, 1; assuming normality for scores and errors; taking the mean functions of the two methods as l 1 ðtÞ ¼ 24 þ t and l 2 ðtÞ ¼ 23 þ 2t; and setting the eigenvalues as k k ¼ 100  e ÀðkÀ1Þ=2 for k 6 and zero for k > 6. The eigenfunctions corresponding to the non-zero eigenvalues are taken as the eigenfunctions estimated from the body temperature data by restricting them to the selected domain T : The grid t grid ¼ fu : u ¼ 0, 1=49, :::, 1g of 50 equally-spaced points between 0 and 1 is used for simulating data as well as point and interval estimation. We consider a total of four dense and sparse designs. In the dense case, a balanced design with N i ¼ 50 is considered. The observation times in this case are all points on t grid , and all subjects have the same observation times. 
In the sparse case, three scenarios with increasing sparsity are considered. Two are balanced designs with N i ¼ 30 and N i ¼ 20, and the third is an unbalanced design with N i distributed as a Poisson random variable with mean 20. We refer to these four designs as (a), (b), (c), and (d), respectively. The observation times in the sparse cases are drawn from a uniform distribution on t grid separately for each subject. Consequently, in the sparse case, the subjects may not have the same observation times. In all the four designs, observations from both measurement methods are simulated at each observation time, ensuring paired data. The observations for different subjects are independent. Three combinations of values are chosen for the error variances of the methods, namely, ðs 2 1 , s 2 2 Þ ¼ (2, 2), (2, 4), and (4, 4), to allow a range of practical scenarios. Three values are chosen for the number of subjects, n 2 f50, 100, 200g: Further, as is common in practice, p 0 ¼ 0:90 is taken for TDI and 1 À a ¼ 0:95 is taken for the confidence intervals and bands. Thus, we consider a total of 4  3  3 ¼ 36 settings. Table 1. MSEs of estimators of quantities that are free of t and average MSEs of estimators of quantities that depend on t, computed using MPACE (marked as M) and UPACE (marked as U) approaches, and the ratio of the MSEs (marked as U/M) in case of four designs: (a) N i ¼ 50 (dense data), (b) N i ¼ 30 (sparse data), (c) N i ¼ 20 (sparse data), and (d) unbalanced design with mean N i ¼ 20 (sparse data), each with (s 2 1 , s 2 2 Þ ¼ ð2, 2Þ: For each setting, we simulate a dataset, perform parameter estimation as described in Sec. 3.1 and Appendix A, and construct 95% confidence intervals and bands as described in Secs. 3.2 and 4. The proportion of variation explained that is needed for FPCA is taken to be 0.99 for both MPACE and UPACE. For the smoothing involved in point estimation, gam function in mgcv package of Wood (2017) is used with default settings. For interval estimation, Q ¼ 250 bootstrap resamples are used. The entire process from data simulation to interval estimation is repeated 300 times. The results are used to compute estimated MSEs of point estimators of log ðs 2 1 Þ, log ðs 2 2 Þ, log ðs 2 2 =s 2 1 Þ, log fTDIðp 0 , tÞg and zfCCCðtÞg, with zðÁÞ denoting the Fisher's ztransformation, and estimated coverage probabilities for confidence intervals of these quantities. The coverage probabilities are also computed for l 1 ðtÞ, l 2 ðtÞ, and dðtÞ but these quantities are excluded from the MSE calculation as both MPACE and UPACE use the same point estimators for them. We additionally compute estimated MSE ofK and estimates of , which provide an overall measure of accuracy in prediction of scores and individual curves, respectively. For convenience, these measures are also referred to as MSE. The efficiency of MPACE relative to UPACE is measured by dividing the MSE in case of UPACE by its MPACE counterpart. From a practical viewpoint, if a relative efficiency falls between 0.9 and 1.1, we may consider the two approaches to be equally accurate for estimating that quantity. Now, a note about interval estimation of log fTDIðp 0 , tÞg is in order. Our initial simulation studies showed that its confidence band tended to be more accurate without the bias correction. Therefore, we drop the bias term from (13) when computing the confidence band for this measure in the remainder of this article. 
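Before turning to the simulation results, it may help to note how the agreement measures of Sec. 4 are evaluated from plug-in estimates at a single time point; the sketch below uses illustrative numeric inputs and computes the TDI through the noncentral chi-square percentile, following the formula given earlier.

```python
# Evaluate the functional CCC and TDI at one time point t from plug-in
# estimates of the model components; the numeric inputs are illustrative.
import numpy as np
from scipy.stats import ncx2

def ccc_tdi_at_t(mu_j, mu_l, G_jj, G_ll, G_jl, tau2_j, tau2_l, p0=0.90):
    var_j = G_jj + tau2_j                 # sigma_j^2(t)
    var_l = G_ll + tau2_l                 # sigma_l^2(t)
    ccc = 2.0 * G_jl / (var_j + var_l + (mu_j - mu_l) ** 2)
    delta = mu_j - mu_l                   # mean of the difference D_jl(t)
    eta2 = var_j + var_l - 2.0 * G_jl     # variance of the difference
    # |D|^2 / eta^2 is noncentral chi-square(1) with nc = delta^2 / eta^2,
    # so the p0-quantile of |D| is eta * sqrt of the chi-square quantile.
    q = ncx2.ppf(p0, df=1, nc=delta ** 2 / eta2)
    tdi = np.sqrt(eta2 * q)
    return ccc, tdi

print(ccc_tdi_at_t(mu_j=24.0, mu_l=22.5, G_jj=90.0, G_ll=85.0, G_jl=70.0,
                   tau2_j=2.0, tau2_l=2.0))
```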
Table 1 presents the MSEs for the two approaches and their relative efficiencies for ðs 2 1 , s 2 2 Þ ¼ ð2, 2Þ: We see that, with a few exceptions, the efficiency tends to decrease with the sparsity of design. Further, the efficiencies for the curves and scores in all cases are between 0.96 and 1.05, implying that the two approaches may be considered equally accurate for estimating them. Also, the efficiencies for K are between 0.26 and 0.83 in all cases but one. This suggests that UPACE is more accurate than MPACE for estimation of K. Additional investigation shows that MPACE tends to overestimate K. All the efficiencies for zðCCCÞ are greater than one, implying superiority of MPACE over UPACE. For the remaining quantities, the efficiencies depend on n and sparsity of design. In particular, for dense data (Design (a)), the efficiencies range between 0.96 and 1.20, indicating superiority of MPACE. However, as the level of sparsity increases, MPACE begins to lose its efficiency advantage to UPACE, especially when n ¼ 50. But then the advantage of UPACE also shrinks as n increases. For example, for Design (d), the efficiencies range between 0.84 and 0.96 when n ¼ 50, clearly indicating superiority of UPACE, but the range becomes 0.98 to 1.03 when n ¼ 200, indicating nearly the same efficiency of the two approaches. Qualitatively similar conclusions hold in case of ðs 2 1 , s 2 2 Þ ¼ ð4, 4Þ (see Table 2) and also (2, 4), the results for which are omitted. On the whole, these findings indicate that MPACE may be considered slightly more efficient than UPACE for dense data but the converse is true for sparse data with small n. In the other cases, the two may be considered more or less equally efficient. These conclusions remain unaffected by the error variances. Next, we examine estimated coverage probabilities of the confidence intervals. Table 3 presents the coverage probabilities for confidence intervals of error variances and their ratio, which are free of t. With a few exceptions, the entries are 1-2% higher than the nominal level of 95%, suggesting the intervals are slightly conservative. Both MPACE and UPACE appear equally accurate and there is little impact of n or the error variances. For parameter functions that depend on t, Table 4 presents averages of estimated pointwise coverage probabilities of the confidence bands. There is no difference in the entries for l 1 , l 2 , Table 3. Estimated coverage probabilities (in %) of 95% confidence intervals for error variances and their ratio in case of four designs: (a) N i ¼ 50 (dense data), (b) N i ¼ 30 (sparse data), (c) N i ¼ 20 (sparse data), and (d) unbalanced design with mean N i ¼ 20 (sparse data). (s 2 1 , s 2 2 Þ (2, 2) (2, 4) (4, 4) and d between MPACE and UPACE because both use the same estimates for them. In general, these entries are about 1% higher than 95%. For CCC, the entries are close to 95% for MPACE but about 96-98% for UPACE. For TDI, the entries are close to 95% for MPACE. This is also true for UPACE for n ! 100: These conclusions hold regardless of the values of the error variances and whether the design is dense or sparse. Table 5 presents estimated simultaneous coverage probabilities of the confidence bands. With the exception of TDI, in which case the entries are below 95%, the other entries may be considered close to 95%, especially when n ! 100: In case of TDI, the accuracy of MPACE improves with n and it may be considered acceptable for n ¼ 200. 
Although the accuracy of UPACE also improves with n, it remains quite liberal even with n = 200. Taken together, our key findings based on the settings considered and their practical implications may be summarized as follows. First, the sparsity of the design affects the relative performance of the two approaches in point estimation but not so much in interval estimation. However, the error variances do not seem to have much impact on the performance. Second, for both point and interval estimation, MPACE may be considered to have an edge over UPACE. Finally, we have also evaluated the two variants of the MPACE and UPACE algorithms mentioned in Appendix A. However, we did not find any noticeable difference in the results from those presented here. Therefore, these are omitted. The results of an additional simulation study to evaluate the impact of non-normality are presented in the online Supplemental Material. Analysis of body fat data These data from Chinchilli et al. (1996) consist of percentage body fat measurements taken over time on a cohort of 112 adolescent girls using skinfold calipers (method 1) and DEXA (method 2). Age at visit is the time variable t here. See Chinchilli et al. (1996), King, Chinchilli, and Carrasco (2007), Hiriote and Chinchilli (2011), and Rathnayake and Choudhary (2017) for more details about the dataset. Upon pre-processing the data, which includes retaining only the observation times for which paired observations are available from both methods, we get a total of 2 × 654 = 1308 observations from n = 91 girls. The observations range between 12.7 and 37.4. There are 56 distinct observation times on the domain T = [11.2, 16.8] years, and their numbers per subject range between 4 and 8 with an average of 7.2. Figure 1 presents the individual longitudinal profiles from the two methods, superimposed with their estimated mean functions (see below). The caliper mean ranges from 23.6 to 24.7, whereas the DEXA mean ranges from 21.4 to 24.3. They also behave differently over the domain. For example, the caliper mean remains essentially flat until age 14, then decreases slightly until about age 15.5, and begins to increase thereafter. However, the DEXA mean decreases in the beginning, bottoms out around age 13, and increases thereafter with some flattening near the end. Figure 2 shows the age-specific scatterplots for ages 12 through 16. (Note that to draw these plots, the ages have been rounded to the nearest integer. Otherwise, there would be relatively few points in each plot, making it hard to discern any pattern.)

Table 5. Estimated simultaneous coverage probabilities (in %) of 95% simultaneous confidence bands for four designs: (a) N_i = 50 (dense data), (b) N_i = 30 (sparse data), (c) N_i = 20 (sparse data), and (d) unbalanced design with mean N_i = 20 (sparse data), for (σ1², σ2²) = (2, 2), (2, 4), and (4, 4).

The methods appear moderately correlated at these ages, with associated sample correlations 0.80, 0.73, 0.66, 0.67, and 0.73, respectively. Also, the points do not tightly cluster around the 45° line for any age, implying that the methods do not agree well. Our next task is to perform an FPCA of these data by fitting the model (9) using both the MPACE and UPACE approaches. The smoothing is performed using the gam function in the mgcv package of R as described in the simulation section. The resulting mean functions are displayed in Figures 1 and 3.
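For the agreement evaluation that follows, the two measures can be computed pointwise under a normality assumption as in the sketch below. The formulas are the standard definitions of CCC and TDI for paired normal measurements; the numerical inputs are hypothetical placeholders, not the body fat estimates.

```python
import numpy as np
from scipy.stats import foldnorm

def ccc(mu1, mu2, v1, v2, c12):
    """Concordance correlation coefficient for a pair of measurements."""
    return 2.0 * c12 / (v1 + v2 + (mu1 - mu2) ** 2)

def tdi(p0, mu_d, var_d):
    """Total deviation index: p0-quantile of |D|, with D ~ N(mu_d, var_d)."""
    sd = np.sqrt(var_d)
    return foldnorm.ppf(p0, c=abs(mu_d) / sd, scale=sd)

# hypothetical values at one age t (placeholders only)
mu1, mu2 = 24.0, 22.5          # method means
v1, v2, c12 = 9.0, 8.0, 6.0    # variances and covariance of the two methods
mu_d, var_d = mu2 - mu1, v1 + v2 - 2 * c12   # moments of the difference
print("CCC(t):", round(ccc(mu1, mu2, v1, v2, c12), 3))
print("TDI(0.90, t):", round(tdi(0.90, mu_d, var_d), 3))
```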
The FPCA yields estimates of the number of FPC needed to explain at least 99% of variability, the eigenvalues, and the error variances. MPACE requires one fewer FPC than UPACE, but both yield similar estimates for the error variances. Figure 4 presents the estimated eigenfunctions. Ignoring a sign flip, as the FPC are unique only up to a sign change, we see that the first three eigenfunctions for caliper and the first two eigenfunctions for DEXA from the two approaches are quite similar. The resulting estimates of the standard deviation functions and the correlation function for caliper and DEXA are displayed in Figure 3. The standard deviation functions estimated by MPACE and UPACE are somewhat similar, with the function exhibiting a decreasing trend for caliper and an increasing trend for DEXA. However, the two correlation functions seem quite different. In particular, the UPACE estimate shows a decreasing trend throughout, whereas the MPACE estimate shows an initial decreasing trend with a minimum at age 14, followed by an increasing trend. The latter pattern is consistent with the trend of correlation associated with the scatterplots in Figure 2. Therefore, we use MPACE for the rest of the analysis here.

Figure 3. The estimated mean, standard deviation, correlation, and mean difference functions for the two methods for the percentage body fat data. The bottom right panel also shows a 95% simultaneous confidence band for the mean difference function.

We now proceed as described in Sec. 4 to compute interval estimates for measures of similarity and agreement using Q = 500 bootstrap resamples. The estimate and 95% simultaneous confidence band for the mean difference (caliper − DEXA) are displayed in Figure 3. The estimate increases from 1 around age 11 to about 3 around age 13, then starts to decrease and falls slightly below zero around age 15, and then increases to about 0.5 around age 17. The band lies above zero over the age interval from 11.5 to 14.5. The estimate and 95% confidence interval for the precision ratio (caliper over DEXA) are 1.14 and (0.60, 2.57), respectively. Taken together, these findings indicate that the methods have the same precision but their means are not the same. Hence the methods cannot be regarded as similar. For agreement evaluation, Figure 5 presents estimates and 95% one-sided simultaneous confidence bands for the CCC and TDI (with p0 = 0.90) functions. A lower band for CCC and an upper band for TDI are presented. The pattern of increase and decrease of the TDI estimate broadly resembles that of the mean difference function in Figure 3. This indicates that the agreement between the methods is best in the beginning. Then, it becomes progressively worse as age increases to about age 13.5, starts to get better until about age 14.5, and gets progressively worse thereafter. The same conclusion can be reached on the basis of CCC as well. The TDI upper bound ranges between 6.78 and 9.64 and the CCC lower bound ranges between 0.22 and 0.60. Based on these values, the methods cannot be considered to agree well. It is also clear from the similarity evaluation that this lack of agreement is primarily due to a difference in the means of the two methods. These conclusions are consistent with other analyses of these data reported in Chinchilli et al. (1996), King, Chinchilli, and Carrasco (2007), and Rathnayake and Choudhary (2017). Summary and discussion To summarize, this article discusses modeling and analysis of functional data arising in a method comparison study.
The methodology involves representing the data using a truncated Karhunen–Loève expansion. The unknowns in the model are estimated using two approaches, MPACE and UPACE, both adaptations of existing methods for FPCA of multivariate functional data observed with noise. Confidence intervals for measures of similarity and agreement, obtained by bootstrapping, are used to evaluate similarity and agreement of the measurement methods. A separate FPC decomposition is obtained for each bootstrap resample. Therefore, the variability due to the FPC decomposition is also accounted for in the confidence intervals. Although both MPACE and UPACE often have comparable performance, there is evidence in both the simulation studies and the real data analysis that sometimes MPACE works better than UPACE. Here we use splines for the smoothing involved in estimation. However, any other smoothing method can also be used without affecting the general methodology. No parametric assumption is required unless inference on TDI is needed, in which case normality is assumed for the scores and errors. This article takes a multivariate FPCA approach to model the data. Given that mixed-effects models are common for modeling univariate method comparison data (Choudhary and Nagaraja 2017, Chapter 3), an alternative would be to take a functional mixed-effects model approach. For example, Zhou, Huang, and Carroll (2008) use this to model dependence in paired functional variables. However, this methodology is difficult to implement, especially since no computer program is publicly available to fit their model. Although our methodology works for both dense and sparse functional data, it assumes that observations from all methods are available at each observation time. But this assumption is restrictive. For example, it does not hold for the body fat data. However, it may be possible to relax this assumption. Further research is needed to explore these directions. Software in the form of R code, together with illustration and documentation, is available at http://www.utdallas.edu/~pankaj/. Appendix A: Two approaches for parameter estimation This appendix describes two approaches based on FPCA for estimating the components of θ besides the mean functions. Estimation of the mean functions was discussed in Sec. 3.1. Both use the centered data Ỹ_i(t0), i = 1, ..., n, as input. A.1. Approach 1 (MPACE) This approach is an adaptation of the PACE methodology for univariate functional data (Yao, Müller, and Wang 2005; Goldsmith, Greven, and Crainiceanu 2013) to deal with multivariate functional data. A similar approach has been used by Chiou, Chen, and Yang (2014) for normalized functional data. It involves the following steps. 1. Compute the sum of products Σ_{i=1}^{n} Ỹ_i(t0) Ỹ_i^T(t0) using only the non-missing observations in Ỹ_i(t0). Divide each element of this JN0 × JN0 matrix by the corresponding number of non-missing terms contributing to the sum. This divisor is n for a balanced design. If at least two observations are available at each t ∈ t0, we may subtract 1 from the number of non-missing terms contributing to the sum and use that as the divisor. Denote the resulting matrix as V. It has a block structure, where each of the submatrices is an N0 × N0 matrix and V_jl = V_lj^T, j ≠ l. By construction, there is no missing entry in this matrix. The elements of V_jj provide a raw estimate of the autocovariance function G_jj(s, t) + σ_j² I(s = t) of Y_j(t) for s, t ∈ t0, see (5). Likewise, the elements of V_jl provide a raw estimate of the cross-covariance function G_jl(s, t), given by (3), for s, t ∈ t0.
2. Perform bivariate smoothing of the off-diagonal elements of V_jj, separately for each j, to obtain preliminary smooth estimates of the functions G_jj(s, t). Evaluate the estimated functions at s, t ∈ t0 to get N0 × N0 matrices Ṽ_jj, j = 1, ..., J. 3. Perform bivariate smoothing of the elements of V_jl, separately for each (j, l) pair with l > j, to obtain preliminary smooth estimates of the functions G_jl(s, t). Evaluate the estimated functions at s, t ∈ t0 to get N0 × N0 matrices Ṽ_jl, l > j = 1, ..., J. 4. Compute the JN0 × JN0 matrix Ṽ, an analog of V, by replacing V_jj on the diagonal, V_jl above the diagonal, and V_lj below the diagonal of V with Ṽ_jj, Ṽ_jl, and Ṽ_jl^T, respectively. 5. Use a quadrature rule (e.g., the trapezoidal rule) that approximates an integral ∫_T f(t) dt as Σ_{q=1}^{N0} w_q f(t_0q), where the quadrature points t_01, ..., t_0N0 are the elements of t0, to get the associated weights w_1, ..., w_N0. Form a JN0 × JN0 diagonal matrix W with the entire set of weights (w_1, ..., w_N0) repeated J times as the diagonal elements. Compute the JN0 × JN0 matrix U = W^{1/2} Ṽ W^{1/2}. 6. Perform a spectral decomposition of U to get the eigenvalues λ̂_k and the corresponding JN0 × 1 orthogonal eigenvectors u_k, k = 1, ..., JN0. Replace any negative eigenvalues, which may possibly be nearly zero, with zero. Choose K̂ as the smallest number of eigenvalues for which (Σ_{k=1}^{K̂} λ̂_k / Σ_{k=1}^{JN0} λ̂_k) ≥ p, where p is a specified lower bound on the proportion of total variability in the observed curves to be explained by the principal components. Compute the vectors φ̂_k(t0) = W^{−1/2} u_k for k = 1, ..., K̂. The eigenvalues λ̂_k provide the estimated score variances, and the corresponding vectors φ̂_k(t0) provide the values of the estimated eigenfunctions on t0. (A small numerical sketch of Steps 5 and 6 is given at the end of this subsection.) 8. For j = 1, ..., J, compute σ̂_j² by subtracting Ĝ_jj(t, t) given above from the diagonal elements of V_jj (given by (A.1), which estimate G_jj(t, t) + σ_j²) for t ∈ t0 and combining the differences to form a single number. One way to accomplish this is to proceed along the lines of the implementations of the PACE methodology in the R packages MFPCA and refund (Goldsmith et al. 2016; Happ 2018). For this, define an interval T* ⊂ T as T* = [min{t ∈ t0 : t ≥ t_01 + (t_0N0 − t_01)/4}, max{t ∈ t0 : t ≤ t_0N0 − (t_0N0 − t_01)/4}], and let |T*| be its length. Also let t*_01, ..., t*_0Q be those elements of t0 that also lie in T*. Corresponding to each t*_0q, there is a diagonal element of the matrix V_jj in (A.1), say, v_jj(t*_0q). Then, as in Step 5, let w*_1, ..., w*_Q be the weights associated with a quadrature rule that takes t*_01, ..., t*_0Q as the quadrature points. Finally, take σ̂_j² = |T*|^{−1} Σ_{q=1}^{Q} w*_q {v_jj(t*_0q) − Ĝ_jj(t*_0q, t*_0q)}, provided it is positive; otherwise take it to be zero. This is the estimate we use here. An alternative is to proceed as in Goldsmith, Greven, and Crainiceanu (2013) and use the average difference between v_jj(t) and Ĝ_jj(t, t) computed over the middle 60% of the grid t0. In either case, some observation times from the two ends of t0 are discarded to improve stability of the estimate. 9. Estimate the score vector ξ_i by its estimated best linear unbiased predictor under model (11); the estimated matrices appearing in this predictor are plug-in estimates of their population counterparts. The matrix Ṽ in Step 4 is not guaranteed to be positive definite. Therefore, we may replace it by its nearest positive definite approximation, computed using the nearPD function in the R package Matrix (Bates and Maechler 2017), which implements the algorithm of Higham (2002), before continuing with the rest of the steps. This variant of the algorithm is evaluated using simulation in Sec. 5.
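Here is a minimal numerical sketch of Steps 5 and 6 (quadrature weights and the weighted spectral decomposition). The covariance kernel used to exercise it is synthetic and purely illustrative.

```python
import numpy as np

def weighted_fpca(V_tilde, t0, J, p=0.99):
    """
    Steps 5-6 of the MPACE sketch: trapezoidal quadrature weights, spectral
    decomposition of U = W^{1/2} V_tilde W^{1/2}, truncation at proportion p.
    V_tilde is the smoothed (J*N0, J*N0) covariance matrix on the grid t0.
    """
    N0 = len(t0)
    w = np.zeros(N0)                          # trapezoidal weights on t0
    w[1:] += np.diff(t0) / 2.0
    w[:-1] += np.diff(t0) / 2.0
    W_half = np.sqrt(np.tile(w, J))           # diagonal of W^{1/2}
    U = W_half[:, None] * V_tilde * W_half[None, :]
    evals, evecs = np.linalg.eigh(U)
    evals, evecs = evals[::-1], evecs[:, ::-1]         # descending order
    evals = np.clip(evals, 0.0, None)                  # drop negative eigenvalues
    K = int(np.searchsorted(np.cumsum(evals) / evals.sum(), p) + 1)
    phi = evecs[:, :K] / W_half[:, None]               # phi_k(t0) = W^{-1/2} u_k
    return evals[:K], phi, K

# toy check with a synthetic smooth covariance on a coarse grid (J = 2 methods)
t0 = np.linspace(0, 1, 15)
base = np.exp(-np.abs(t0[:, None] - t0[None, :]))      # hypothetical covariance kernel
V_tilde = np.block([[base, 0.5 * base], [0.5 * base, base]])
lam, phi, K = weighted_fpca(V_tilde, t0, J=2)
print("K =", K, "leading eigenvalues:", np.round(lam[:3], 3))
```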
A.2. Approach 2 (UPACE) This approach is a special case of a general approach for multivariate FPCA proposed by Happ and Greven (2018). In our adaptation here, it begins with the centered data Ỹ_i(t0), i = 1, ..., n, and involves the following steps. 1. Use the PACE methodology (Yao, Müller, and Wang 2005) to perform a univariate FPCA of the data from each measurement method separately. This effectively amounts to considering data from one measurement method at a time, assuming a model for it similar to (9) that is based on a univariate Karhunen–Loève expansion, and fitting the model by applying the algorithm of the previous section, suitably modified. Suppose that for the data from measurement method j = 1, ..., J, this results in K̂^(j) as the smallest number of principal components explaining at least a specified proportion p^(j) of variability; φ̂_k^(j)(t0) as the N0 × 1 vector of values of the kth estimated eigenfunction for t ∈ t0, k = 1, ..., K̂^(j); σ̂²^(j) as the estimated error variance; and ξ̂_i^(j) as the K̂^(j) × 1 vector of estimated scores for the ith subject. Note that the corresponding true scores have expectation zero and that we have a total of K̂_+ = K̂^(1) + ... + K̂^(J) estimated univariate scores for each subject. The σ̂²^(j) resulting from this univariate FPCA also estimate the error variances in θ, i.e., σ̂_j² = σ̂²^(j), j = 1, ..., J. 2. Arrange the univariate scores as an n × K̂_+ matrix N̂ and form their K̂_+ × K̂_+ covariance matrix Ẑ = N̂^T N̂/(n − 1). 3. Perform a spectral decomposition of Ẑ to get the eigenvalues λ̂_k and the corresponding K̂_+ × 1 orthogonal eigenvectors ẑ_k, k = 1, ..., K̂_+. Replace any negative eigenvalues, which may possibly be nearly zero, with zero. Choose K̂ as the smallest number of eigenvalues for which (Σ_{k=1}^{K̂} λ̂_k / Σ_{k=1}^{K̂_+} λ̂_k) ≥ p, where p is a specified lower bound on the proportion of variability explained. By construction, K̂ ≤ K̂_+. These K̂ and λ̂_k estimate the corresponding components K and λ_k of θ. 4. For k = 1, ..., K̂, partition the K̂_+ × 1 eigenvector ẑ_k into blocks corresponding to the J methods and use these blocks to combine the univariate eigenfunctions into estimates of the multivariate eigenfunctions; the estimated multivariate scores under model (11) are obtained as the rows of the n × K̂ matrix N̂(ẑ_1, ..., ẑ_K̂). To conclude, both the MPACE and UPACE approaches provide estimates of all the components of θ except the mean functions, whose estimation was discussed in Sec. 3.1. They also provide estimates of the multivariate scores in the model (11). In applications, MPACE involves choosing only one proportion of variation explained, p, for the multivariate data. On the other hand, UPACE involves choosing J + 1 such proportions: p^(1), ..., p^(J) for the univariate data and p for the multivariate data. In practice, such proportions are taken to be large, e.g., between 0.95 and 0.99. The specific choice will depend on the application and can be guided by a scree plot (Ramsay and Silverman 2005, Chapter 8). We have chosen 0.99 in this work. A variant of the UPACE algorithm with p^(j) = 1 for j = 1, ..., J is evaluated using simulation in Sec. 5.
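For completeness, here is a minimal sketch of the UPACE combination step (Steps 2-4 above), with hypothetical univariate scores standing in for the output of Step 1.

```python
import numpy as np

def upace_combine(score_blocks, p=0.99):
    """
    UPACE combination step (sketch): score_blocks is a list of (n, K_j) arrays of
    estimated univariate scores, one per measurement method. Returns the retained
    eigenvalues, the eigenvectors of the pooled score covariance, and the
    multivariate scores (rows of N_hat @ [z_1, ..., z_K]).
    """
    N_hat = np.hstack(score_blocks)                   # n x K_plus matrix of scores
    n = N_hat.shape[0]
    Z = N_hat.T @ N_hat / (n - 1)                     # K_plus x K_plus covariance
    evals, evecs = np.linalg.eigh(Z)
    evals, evecs = np.clip(evals[::-1], 0, None), evecs[:, ::-1]
    K = int(np.searchsorted(np.cumsum(evals) / evals.sum(), p) + 1)
    scores = N_hat @ evecs[:, :K]                     # estimated multivariate scores
    return evals[:K], evecs[:, :K], scores

# toy check with hypothetical univariate scores for J = 2 methods
rng = np.random.default_rng(1)
xi1 = rng.normal(size=(100, 3))                                  # stand-in method-1 scores
xi2 = 0.8 * xi1[:, :2] + rng.normal(scale=0.3, size=(100, 2))    # correlated method-2 scores
lam, z, rho = upace_combine([xi1, xi2])
print("retained components:", len(lam))
```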
13,259.4
2020-10-14T00:00:00.000
[ "Mathematics" ]
α-decay half-lives of some nuclei from ground state to ground state using different nuclear potentials Theoretical α-decay half-lives of some nuclei, from ground state to ground state, are calculated using different nuclear potential models, including the Coulomb and proximity potential model (CPPM), the Royer proximity potential, and the Broglia and Winther 1991 potential. Comparing the calculated values with experimental data, it is observed that the CPPM model is in good agreement with the experimental data. Introduction George Gamow interpreted the theory of alpha decay in terms of quantum tunneling from the potential well of the nucleus [1]. There are many theoretical schemes used to describe cluster radioactivity and alpha-like models using various ideas such as the ground-state energy, nuclear spin and parity, nuclear deformation, and shell effects [2-14]. Frequently used models include the fission-like model [15], the generalized liquid drop model [16], the generalized density-dependent cluster model [17], the unified model for α decay and α capture [18], the Coulomb and proximity potential model [19], and the unified fission model [20]. These models, with their own merits and failures, have been in acceptable agreement with the experimental data [21,22]. Spontaneous fission and cluster radioactivity were studied in 1980 by Sandulescu, Poenaru, and Greiner [23] based on the quantum mechanical fragmentation theory. Rose and Jones experimentally observed the radioactive decay of 223Ra by emission of 14C in the mid-1980s [24,25]. Recently, the concept of heavy-particle radioactivity has been further explored by Poenaru et al. [26]. Hassanabadi et al. considered the alpha-decay half-lives of the even-even nuclei from 178Po to 238U and derived the decay constant [27]. Also, the half-lives for the emission of various clusters from even-even isotopes of barium in the ground and excited states were studied using the Coulomb and proximity potential model by Santhosh et al. [28]. In addition, there are many efficient and useful empirical formulas for calculating alpha-decay half-lives, which are given in references [29-32]. In this study we used three different nuclear potentials, namely the Coulomb and proximity potential model (CPPM), the Royer proximity potential (RPP), and the Broglia and Winther 1991 model (BW91). From these models we calculated the alpha-decay half-lives of 57 nuclei with Z = 67-91, from ground state to ground state, evaluated the root mean square (RMS) deviation, and compared the results with experimental data. Formalism of α-decay According to the one-dimensional WKB approximation, the barrier penetration probability P is given by [33], where a and b are the classical turning points of the integral, given by V(a) = V(b) = Q. The interaction potential for two spherical nuclei is given by [34], where the first term represents the Coulomb potential, with Z1 and Z2 the atomic numbers of the parent and daughter nuclei, the second term is the nuclear potential, and the final term is the centrifugal potential, which depends on the angular momentum ℓ and on the reduced mass m of the nuclei. The half-life of alpha decay can be calculated as in [35], in terms of the frequency of collisions with the barrier per second and the empirical vibration energy E, which is given as in [36], where Q is the energy released [37] and A2 is the mass number of the α particle. By substituting the values of E and P in equation (3), the half-lives are determined.
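As a rough illustration of the WKB ingredients described above, the sketch below evaluates the penetration probability and half-life for the simplest case of a pure Coulomb barrier with ℓ = 0 (a Gamow-type simplification, not the CPPM, RPP, or BW91 potentials of this paper). The touching radius, the assault-frequency convention, and the 212Po numbers are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import quad

HBARC = 197.327          # hbar*c in MeV fm
E2 = 1.43998             # e^2 in MeV fm
AMU = 931.494            # atomic mass unit in MeV
H_MEV_S = 4.136e-21      # Planck constant in MeV s

def gamow_half_life(Z_d, A_d, Q, R=None, E_vib=1.0):
    """
    Half-life from 1-D WKB penetration through a pure Coulomb barrier
    (alpha particle + daughter, l = 0). R is the inner turning point (fm) and
    E_vib the assault (vibration) energy in MeV; both are illustrative choices.
    """
    Z_a, A_a = 2, 4
    mu = AMU * A_a * A_d / (A_a + A_d)                 # reduced mass (MeV)
    if R is None:
        R = 1.2 * (A_a**(1/3) + A_d**(1/3))            # touching radius (fm), assumed
    b = Z_a * Z_d * E2 / Q                             # outer turning point (fm)
    integrand = lambda r: np.sqrt(2 * mu * (Z_a * Z_d * E2 / r - Q)) / HBARC
    action, _ = quad(integrand, R, b)
    P = np.exp(-2.0 * action)                          # WKB penetration probability
    nu = 2.0 * E_vib / H_MEV_S                         # assault frequency (1/s), assumed convention
    return np.log(2.0) / (nu * P)

# illustrative order-of-magnitude estimate: 212Po -> 208Pb + alpha, Q ~ 8.95 MeV
print(f"T1/2 ~ {gamow_half_life(Z_d=82, A_d=208, Q=8.95):.2e} s")
```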
In this section, we present the details of the three nuclear potential models used for the calculation of α-decay half-lives. When two surfaces approach each other within a distance of 2-3 fm, the additional force due to the proximity of the surfaces is called the proximity potential [38]. We discuss each model in detail below. Coulomb and proximity potential model (CPPM) The proximity potential is considered as in [39], where Z1 and Z2 are the atomic numbers of the parent and daughter nuclei, z is the distance between the near surfaces of the fragments, and the nuclear surface tension coefficient is expressed in terms of A, Z, and N, the mass, proton, and neutron numbers of the parent nucleus, respectively. Here r is the distance between the fragment centers, and C1, C2 are the Süssmann central radii of the fragments. The universal proximity function f is given by [40] f(ε) = −4.41 exp(−ε/0.7176) for ε > 1.947, where ε = z/b is the separation in units of b, and the width of the nuclear surface is b ≈ 1 fm. The semi-empirical formula for R_i in terms of the mass number is given in [41]. 2.2 Royer proximity potential model (RPPM) For α emission, where the proximity energy between the separated α particle and daughter nucleus plays the central role, a very accurate formula has been obtained as [42] V_p(r) = 4πγ exp[−1.38(r − R1 − R2)] × 0.6584 A^{2/3} − 0.172, where A is the mass number of the parent nucleus and r is the mass-center distance. Broglia and Winther 1991 model (BW91) Broglia and Winther derived a refined version of the BW91 potential by taking a Woods-Saxon potential constrained to be compatible with the value of the maximum nuclear force predicted by the proximity potential model. This model is described in [38,43], with a = 0.63 fm; the radius R_i and the surface energy coefficient γ have the forms given there, where A, Z, and N are the mass, proton, and neutron numbers of the parent and daughter nuclei, respectively, γ0 = 0.95 MeV/fm², and k_s = 1.8. Results and discussion The α-decay half-lives provided by the above nuclear potential models, namely CPPM, the Royer proximity potential, and BW91, are presented in Table 1. The angular momentum ℓ carried by the α particle in a ground-state-to-ground-state transition obeys the spin-parity selection rule [44], where Δj = |j_p − j_d|, and j_p, π_p and j_d, π_d are the spin and parity values of the parent and daughter nuclei, respectively. The relative performance of the present choices of potential can also be seen in Table 1, where our results are reported for the different potential models. The outcome of our study is presented in Figures 1-3. In Figure 1, to provide the best view of the results, we have plotted the logarithm of the α-decay half-lives from CPPM, RPP, BW91, and experiment versus the neutron number of the parent nuclei. The figure shows the trend of the logarithm of the half-life with the neutron number of the parent nuclei, and it also shows that the three models lie close to the experimental data, which indicates agreeable results. The ΔT parameter, defined as the difference between the experimental and theoretical half-lives, is reported in Figure 2; it indicates that ΔT for most isotopes is less than one, so the results are close to the experimental data. We find that nuclei with a higher neutron number have a larger half-life and are therefore more stable. Figure 3 describes the relation between the logarithm of the α-decay half-lives and the
Q-value: it is seen that the logarithm of the α-decay half-life decreases as the Q-value increases, in agreement with the expectation that a larger Q-value implies greater instability. We calculated the RMS deviation, defined as in [45], for the present models; the values are reported in Table 2 and indicate that the CPPM model is the best of the three for calculating α-decay half-lives, compared with the RPP and BW91 models. Conclusion Three different nuclear potentials are used to calculate the α-decay half-lives of some nuclei from ground state to ground state: CPPM, RPP, and BW91. The angular momenta are taken into account. The RMS deviations are calculated, and they show that the best nuclear potential is CPPM. The results are compared with experimental data; this comparison provides a reference for how to select a nuclear potential for calculating α-decay half-lives. Table 1. Comparative study of α-decay half-lives using the nuclear potential models CPPM, RPPM, and BW91.
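For reference, here is a minimal sketch of the RMS deviation computed on log10 half-lives, a common convention in α-decay systematics; the exact definition of [45] may differ, e.g., in the divisor, and the numbers below are toy values rather than those of Table 1.

```python
import numpy as np

def rms_deviation(t_exp, t_th):
    """RMS deviation between experimental and theoretical log10 half-lives."""
    d = np.log10(np.asarray(t_exp)) - np.log10(np.asarray(t_th))
    return np.sqrt(np.mean(d**2))

# toy half-lives in seconds (illustrative only)
t_exp = [3.0e-7, 1.6e2, 5.2e5]
t_cppm = [2.1e-7, 2.4e2, 3.9e5]
print("RMS (CPPM, toy data):", round(rms_deviation(t_exp, t_cppm), 3))
```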
1,771
2018-01-01T00:00:00.000
[ "Physics" ]
A VR-based approach in conducting MTM for manual workplaces Due to current trends in the manufacturing industry, such as mass customization, manual operations contribute drastically to the overall costs of a product. Methods-Time-Measurement (MTM) identifies the optimization potential of manual workplaces, which significantly influences a worker's productivity. However, traditional MTM requires great effort to observe and transcribe manual assembly processes. Yet, various digital approaches exist that facilitate MTM analyses. While most of these approaches require the existence of real workplaces or cardboard mock-ups, it would be beneficial to conduct a virtual MTM in earlier phases of production planning. However, the quality of virtual MTM analyses compared to traditional MTM conducted in reality has not been assessed yet. This paper addresses this by conducting a comparative user study with 21 participants completing the same task both at a real and at a virtual workplace, which they access via virtual reality technology. Our results show that participants' MTM-2 values achieved at the VR workplace are comparable to those at the real workplace. However, time study data reveals that participants moved considerably slower in VR and thus needed more time to accomplish the task. Consequently, for the measurement of manual work in VR, it is even necessary to utilize predetermined times, such as MTM-2, since time study data is insufficient. This paper also serves as a proof of concept for future studies, investigating automated transcription systems that would further decrease the effort of conducting MTM analyses.

Various investigation levels for manual work, adapted from [27].

One TMU is equivalent to 0.036 s, which allows for an in-depth analysis even for basic operations in the sub-second range. MTM has different standards, and each standard has its own set of basic operations and predetermined times (see http://mtm-international.org/technical-platform/). The most comprehensive MTM standard is MTM-1, which consists of the following basic operations: reach, grasp, release, move, position, apply pressure, disengage, turn, crank, visual inspection, and body, leg, and foot movements. MTM-1 is not only complex but also precise due to the high level of detail. This is particularly favored in cases of mass production that should be analyzed with the highest possible level of detail. If such a detailed work description is not required, e.g., for a smaller production pipeline, the MTM-2 standard is used. MTM-2 is based on MTM-1 and consists of the following basic operations: get, put, apply pressure, regrasp, crank, eye action, foot motion, step, and bend and arise. MTM-2 is less precise than MTM-1, but it also allows conducting faster MTM analyses. For even smaller production entities, e.g., for batch production, the so-called MTM-MEK is used. It consists of the following basic operations: get and place, place, handle tool, operate, motion cycles, body motion, and visual control. And finally, to make MTM more universally applicable, the MTM-UAS standard was introduced. It has the same basic operations as MTM-MEK but with even less detailed information for each basic motion. A brief overview of the abovementioned MTM standards is presented in Fig. 2. When analyzing manual operations with MTM, a video of a work process is recorded and manually transcribed. This process is time-consuming due to the need for the subsequent manual video transcription, which has to be conducted by an MTM expert.
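As a small illustration of how a transcription is turned into a normal time, consider the sketch below. Only the factor 1 TMU = 0.036 s and the 7-TMU eye action are taken from this paper; the remaining per-code TMU values are hypothetical placeholders, not official MTM-2 data.

```python
# Illustrative conversion of a transcribed MTM-2 sequence into seconds.
TMU_TO_SECONDS = 0.036

tmu_table = {        # hypothetical TMU values for a few MTM-2 basic operations
    "GET": 14,
    "PUT": 21,
    "STEP": 18,
    "BEND_AND_ARISE": 61,
    "EYE_ACTION": 7,     # value used later in this paper for the added eye action
}

def sequence_time(codes, table=tmu_table):
    """Total normal time (in seconds) of a transcribed sequence of MTM-2 codes."""
    total_tmu = sum(table[c] for c in codes)
    return total_tmu * TMU_TO_SECONDS

transcription = ["STEP", "STEP", "BEND_AND_ARISE", "GET", "STEP", "PUT", "EYE_ACTION"]
print(f"{sequence_time(transcription):.2f} s")
```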
Furthermore, the MTM analysis is frequently done when the workplace is already built and can hardly be altered anymore. To overcome this issue, cardboard mock-ups are used in the planning phase, which represent the workplace to be investigated in full size. Besides the time, cost, and generated waste, cardboard engineering often lacks details, which makes it difficult for workers to perform in the same way as they would at a real workplace. Such details include raw materials and intermediates to be processed or assembled by a worker, which are usually not available during planning. However, novel opportunities arise because assembly lines and production plants are increasingly being planned and designed virtually. The underlying virtual planning data, which is usually available in the form of three-dimensional geometry (e.g., computer-aided design data), could be harnessed to generate a virtual environment (VE) wherein workers perform the manual operations to be analyzed virtually. This can be achieved through virtual reality (VR) technology, which immerses users within a VE while allowing them to interact with virtual objects through the utilization of controllers. Few prior works investigated the potential of VR technology to support manual work measurement with a particular focus on MTM analysis. However, the setups in prior works are characterized by numerous limitations, such as not allowing for real walking, non-intuitive interaction, and the weightlessness of virtual objects [10,11]. Consequently, the real work procedure cannot be fully replicated, and the results of work measurement in VR might be only of limited benefit [5]. Within this paper, we address this by developing a VE that emulates a typical workplace for manual assemblies, in which a worker can naturally walk while completing a given task. The goal of the paper is to investigate work measurement in VR, including an MTM analysis and time study, compared to work measurement at the corresponding physical workplace. To achieve this, we conduct a comparative study wherein we analyze 21 study participants performing the same task in a VE and at a physical workplace using an MTM-2 approach and a time study. We organize the remainder of this work as follows: Section 2 introduces the related work, and Section 3 describes the task and the technical setup. The results are described in Section 4. Section 5 discusses the achieved results; finally, Section 6 concludes the paper, and an outlook is given in Section 7. Related work The measurement of manual work in the manufacturing industry is associated with various benefits, including productivity gains and quality improvements [6]. With growing mass customization in an automated or semi-automated work environment, manual assembly tasks also grow in importance. Depending on a product's configuration, manufacturing and assembly tasks can vary in time, and thus current research focuses on decentralized production. In such decentralized production environments, so-called "slow" products do not block "faster" ones anymore, and thus the overall output of production can be kept stable or even increased, although multiple product variants are produced. However, decentralized production needs not only careful planning but also a thorough investigation of manual assembly tasks, including walking and human-robot interaction.
Moreover, there is no such support for planning manual work; and thus, MTM is still based on traditional cardboard engineering. Since most of the 3D models for machines and tools already exist, it seems reasonable to employ VR technology to conduct MTM analyses. An early approach to digitally evaluate an ergonomic design was introduced by Schmidt and Wendt [25]. The software tool "COSIMAN" (Computer Simulation of Manual Assembly) was able to evaluate basic designs but was limited to more complex scenarios. Moreover, the handling of the software tool was too complex for industrial applications. Further applications of COSIMAN were also introduced by Kummelsteiner [16]. Another VR-supported MTM approach was introduced by Chan [7], who simulated a manual assembly line and retrieved MTM values by using virtual humans (avatars). However, the control of the avatars by a computer mouse, as well as missing functionalities for the presentation of products and process designs, imposed a substantial effort to achieve simulation results. In spite of its drawbacks regarding operability, there exists a wide range of digital human model programs, as stated by Mühlstedt et al. [21]; they all offer an exocentric view on the manufacturing task. The main focus of these virtual human simulation programs is to analyze human postures and to determine the workplace, as described by Yang et al. [30]. Furthermore, these programs should assess the visibility and accessibility of an operation as stated by Chedmail et al. [8] or evaluate postures as stated by Bubb et al. [4]. In these virtual human programs, conventional MTM also can be integrated, as well as posture analysis techniques [13]. Based on inverse kinematics, the physical strain on each joint can be calculated for any given operational and external load [29]. Since motion capture systems became available, motion tracking methods for tracking an operator's real movements to control the manikin became popular due to its realistic feeling and outstanding, lifelike realism. Already, in 2000, Chryssolouris et al. [9] proposed a "virtual assembly work cell," which allows natural interaction with the VE to perform an assembly task. However, the overall system complexity and the particular limitations of the tracking system only allowed for a limited range of applications. Instead of a consecutive definition of the avatar's movement, newer systems allow for a direct coupling of the avatar to the workers' movements. For doing so, the user wears components of an optical tracking system, a data glove, and a data helmet. In 2005, Jianping and Keyi [15] tracked the real operator's movement in real time to control the virtual human "JACK" within a virtual maintenance system. Furthermore, a virtual assembly design environment was developed by Jayaram et al. [14], who also uses a virtual human that can be directly controlled by the worker's movements. The motion data of the real operator was recorded and imported into JACK to perform an ergonomic analysis. Later, Wu et al. [28] used data gloves and a field-of-view tracker to capture the operator's hand for a virtual assembly path planning in a VE. In 2012, Osterlund and Lawrence [23] used full-body optical motion tracking to control avatars in an astronaut training. However, their setup was dependent on a well-defined physical setup, much like in the abovementioned cardboard installation. 
According to various sources [19,26], establishing a VE with complete physical attributes is still difficult and time-consuming, since such systems require a long setup time, in particular for optical, outside-in tracking systems. Following Seth et al. [26], also a simulation of realistic interaction using haptic devices is still difficult for the virtual prototyping community, while sensations, such as real walking to perceive distances, are completely missing. Hence, it is still inconvenient for a real operator to control a virtual human. Following Qiu et al. [24], current human motion capturing technology substantially limits the range of its applications. With the spread of information technology, different approaches for analyzing workers' performance and training new workers had appeared. Benter et al. [3] introduced an approach that used a 3D camera to capture data and analyzed working time with the MTM-1 method. The workplace consisted of three workstations, and the worker was assembling gearboxes for the duration of 20 min. Ma et al. [20] came up with a framework to evaluate manual work operations with the support of motion tracking. For these purposes, a marker-based optical tracking system with a total of 13 markers was used. The system consisted of a head-mounted display (HMD), a data glove, and eight cameras for body motion and hand-gesture motion recognition. To evaluate workers' performance, a so-called Maynard operation sequence technique (MOST) was planned for use. There are three motion groups in the MOST system: general move, controlled move, and tool use. To validate the technical feasibility of the proposed framework, only two work tasks were taken: lifting an object and pushing a button. No walking was included. Another approach to automatically monitor the execution of human-based assembly operations was proposed by Andrianakos et al. [1]. Their approach is based on machine learning techniques and utilizes a Single Shot Detector algorithm for object detection. This algorithm detects objects of multiple categories and sizes in real time. For data collection, a single vision sensor (i.e., a webcam) was used. To evaluate this approach, a simple three-step assembly task was proposed. An important restriction of this method is that a task must be completed in a defined specific order. Müller et al. [22] designed a Smart-Assembly-Workplace (SAW) which was used in a bicycle e-hub assembly. SAW was designed to share knowledge about the assembly sequence with less qualified workers in an intuitive way. SAW consists of a combined working and learning environment. To define the learning sequence and time, MTM is used. For data retrieval, the motionsensing device Microsoft Kinect is installed on top of the workplace, facing downwards, which allows the determination of the worker's hand's position without using markers. However, the reliability of Kinect depends on illumination conditions. It was reported that the hand tracking often fails to correctly locate the hands if illumination conditions vary. A tree-based approach to recognize MTM-UAS codes in VR was proposed by Bellarbi et al. [2]. It captures the tracking data of an HMD and controllers and divides this data into small sequences of movements. In this algorithm, all possible body motions belong to one of these three categories: eye movement, body movement, or hand movement. 
Each small sequence of movements from the captured data is compared to the data from the algorithm tree in order to obtain the MTM-UAS code corresponding to that sequence. First approaches by Kunz et al. [17] showed that real walking can also be integrated into such virtual settings. However, a comparison to the real-world counterpart is still missing, and the quality of MTM analyses and time studies conducted in VR, compared to the analysis of a corresponding real-world task, still needs to be investigated [5,11]. This paper will thus contribute to the topic of performing MTM analyses completely in VR by comparing the findings to a similar real-world workplace. Methodology In this section, we describe the methodology of this paper, including a user study in which each participant performed the same task in VR and in reality. Furthermore, we outline the data retrieval and analysis procedure to conduct the intended comparison of work measurement in reality and VR. Participants The user study consists of 21 participants, 6 female and 15 male, recruited from the university's student body. Since MTM is designed for workplace evaluation by experienced workers, we had to design a task for the user study that is suitable for participants without any prior training. As the task was consciously chosen to be very simple, any biasing effect by learning is unlikely. Thus, each participant performed the same task twice, once at the real workplace and once in VR. To avoid any bias from the task sequence, participants started either with the real task or with the virtual task, in alternating order. Technical setup The technical setup consists of two identical workplaces in reality (see Fig. 4 left) and VR (see Fig. 4 right). Participants access the VR workplace with the HTC Vive Pro system. It uses so-called lighthouses to track the user's head position and orientation, as well as their hands holding the HTC Vive controllers. In our virtual setup, the participants use only one controller to manipulate the objects. The controller has additional buttons and a touchpad to allow further interaction with the VE. Pulling the controller's trigger, for instance, performs a grasping action to grab the virtual object with the virtual hand. Users see their virtual hand and the complete VE through the HMD, which also visually disconnects them from the real world. To reach all relevant objects in the VE, users are able to freely walk within a 5 m × 5 m tracking space. In both the real and the virtual scenery, the users' movements are video-captured for a later manual transcription. Task description To compare the MTM analysis in reality and VR, a task that could be completed by inexperienced study participants was chosen. It is a simplified version of an industrial task, containing basic operations of the MTM-2 standard such as get, put, eye action, step, and bend and arise. In contrast to MTM-UAS and MTM-MEK, which evaluate walking time based on geometric distances only, MTM-2 allows counting the user's steps and assigns a predetermined TMU value to each step. This allows evaluating walking behavior based on human behavior and not only on geometrically obtained values. The top-down layout of the workplace is shown in Fig. 3. At the beginning of each study session, a pre-recorded video in VR with text instructions is shown to the participants. In this video, the task and the sequence of the process steps are introduced.
The virtual workplace is designed to fit into the tracking space of 5m × 5m that is supported by the HTC Vive system. The participants start in front of a palette in 1.8 m distance to the larger table. They grab the box from the floor and put it on the bigger table. Afterward, they grab the screwdrivers one by one from the smaller table and put them into the box. Then they walk to the box lid, grab it, and close the box with the lid. The last step is to take the closed box, walk to the palette, and put this box on the palette. There was no time limit for this task. Participants are asked to complete the task at a natural speed. The real and virtual environments are shown in Fig. 4. Data retrieval Each study participant was recorded while performing the task in reality and in VR. The recording of the real task was conducted through a video camera, while "screen recordings" (i.e., a recording of a participant's 1 st person view) were created for the task in VR. Subsequently, each participant's movements were manually transcribed according to the corresponding MTM-2 codes. The transcription was done by two people independently. Additionally, a time study was conducted for each user by measuring their task completion time in VR and reality. Results This section describes the findings of our comparative user study, including MTM-2 analysis and time study for the task conducted in reality and VR. MTM-2 and time study in reality Since MTM relies on statistically retrieved values (the "Measured Times"), this first data evaluation is to assess whether the participants of the study performed the given task at a normal working speed or not. For this, we compare the time study and MTM-2 transcription of the task conducted in reality by each participant. The results of this analysis are shown in Table 1 on the right and plotted in Fig. 5. When analyzing the results of the time study, we see a mean value of m time study, real = 27.667 s with a standard deviation of SD time study, real = 4.4 s. This, in fact, shows that there are considerable differences between each individual study participant. However, when analyzing MTM-2 values of the transcription instead, we achieve m MTM-2, real = 27.710 s and SD MTM-2, real = 2.4 s. This is an expected outcome, since the MTM-2 analysis decouples workers' movement from their individual speed in performing a task. MTM-2 and time study in VR Similar to the analysis in reality, we analyze both the MTM-2 and time study for each participant performing the task in VR, which is shown in Table 1 on the left and plotted in Fig. 6. When analyzing the time study from the VR task only, we see a mean value of m time study, vr =36.286 s with a standard deviation of SD time study, vr =7.9 s. This shows that the performance between participants substantially differs in VR. Seven users had an MTM-2 time that is larger than the time study, meaning that they performed the actions in VR faster than it is foreseen by the MTM-2 (see Table 1 on the left). In comparison to the time study, the transcribed MTM-2 values of exactly the same work process are considerably smaller. The MTM-2 mean value for VR is m MTM-2, vr = 28.991 s, while the standard deviation is SD MTM-2, vr = 2.7 s. Similarly to the analysis in reality, this is an expected result since MTM-2 decouples the speed of a worker's movement from the descriptive class of their movement. 
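A minimal sketch of how these summary statistics, and the per-user quotient examined in the comparison below, can be computed is given here; the arrays are randomly generated stand-ins mirroring the reported means and standard deviations, not the study's raw data.

```python
import numpy as np

# Hypothetical per-participant values (seconds); placeholders only.
rng = np.random.default_rng(7)
mtm2_real = rng.normal(27.7, 2.4, size=21)
mtm2_vr = rng.normal(29.0, 2.7, size=21)
time_real = rng.normal(27.7, 4.4, size=21)
time_vr = rng.normal(36.3, 7.9, size=21)

def summarize(name, x):
    print(f"{name}: mean = {np.mean(x):.3f} s, SD = {np.std(x, ddof=1):.1f} s")

summarize("time study, real", time_real)
summarize("MTM-2, real", mtm2_real)
summarize("time study, VR", time_vr)
summarize("MTM-2, VR", mtm2_vr)

# Per-user quotient of VR to real MTM-2 values (ideally 1), as used in the
# comparison that follows.
accuracy = mtm2_vr / mtm2_real
print(f"VR MTM-2 accuracy: mean = {accuracy.mean():.2f}, SD = {accuracy.std(ddof=1):.2f}")
```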
Comparison of results in reality and VR As one strength of MTM-2 is to make workplaces comparable, we chose the overall MTM-2 values for the task conducted in reality and VR for the comparison. It is hypothesized that equal MTM-2 values will show that a VR workplace is capable of adequately representing a real workplace. The overall information about the comparison of the virtual and real task completion times is shown in Table 2 (the accuracy of the virtual MTM is obtained by dividing the corresponding virtual TMU value by the real one).

Fig. 4. Comparison of real and virtual tasks during the user study. Left: the real workplace; right: the virtual counterpart.

Figure 7 shows that the MTM-2 values in reality and VR are very similar for each user, which confirms our previously stated hypothesis that the MTM-2 analysis for a task conducted in VR has comparable quality and can, in fact, replace a real one. However, it is also visible that the values are not absolutely coherent. To further clarify the resulting error, the quotient between the corresponding MTM values for an individual user is calculated, which is referred to as the VR MTM-2 accuracy and should ideally be equal to 1. The deviation of the VR MTM-2 accuracy from 1 gives a measure of the quality of the MTM-2 analysis in VR. Figure 8 shows the deviation from this ideal value. For the VR MTM-2 accuracy, we achieve a mean of m_VR MTM-2 accuracy = 1.05 and SD_VR MTM-2 accuracy = 0.08. Discussion While there is already a good correlation between the MTM-2 analysis in reality and VR, we made some observations during the course of the study which could explain the typically longer task completion time when being exposed to the VE. Missing haptic feedback for grasping objects When grasping an object, humans rely on vision for coarsely approaching the object to be grasped, while the decision whether an object is grasped relies on haptic cues only. Such haptic feedback was missing in the VE, and thus users had to observe the object for a yellow wireframe indicating that the virtual hand touches the virtual object. Although this is a common way in VR to indicate whether objects are touched or not, it imposes an additional cognitive effort, which was taken into account during the MTM-2 transcription as an "eye action" (E = 7 TMUs). However, we noticed that users significantly slow down their physical movement, in particular when trying to grasp the lid from the ground. Unfamiliar navigation means
However, the overall completion time that is measured by direct observation time study differs substantially. This leads us to the conclusion that it is even necessary to analyze manual work procedures in VR by means of predetermined times, such as MTM-2, since the overall completion time measured by direct observation is higher in VR. Outlook Future work should supplement user studies by standardized questionnaires such as the NASA TLX [12] for measuring additional task loads that might be evoked by using a VR system. These studies could address a more complex work scenario at a given industrial workplace, employing professional workers to participate. While the detection and transcription of basic motions was done manually for our study, the full potential of an MTM analysis in VR can be harnessed with an automated transcription. Four days were needed for the manual MTM-2 transcription of 21 users' recordings, both in VR and reality, with two trials each. This potential time saving through automated transcription can be considered as a rough estimate for the actual savings potential. Therefore, our study can also be seen as a proof of concept for a fully automated MTM analysis, which was proposed by Bellarbi et al. [2]. This leads us to the next research question about the number of trackers and their position on a user's body for capturing body motions. In our study, we track the head position together with one hand-held controller. However, it is likely that this will not be sufficient to identify all body motions and poses, e.g., bending. We agreed that this issue could be solved by attaching trackers to the user's pelvis or feet. Further studies could focus on the optimal number of trackers and their placement or the most suitable VR hardware setup to capture and further analyze body motions. Another issue for the automated transcription could be the separation of intended body motions that are required for completing the task from unintended body motions for handling the VR system itself. Unintended body motions could be caused by the uncomfortable attachment of the HMD or an interfering cable required by the VR system. In order to move towards an even more detailed analysis such as MTM-1, we envision to also technically capture precise hand gesture detection. There are different ways to recognize gestures, but the most common ones used in VR are optical and inertial tracking. For optical tracking, systems like the Oculus Quest II could be used. Inertial tracking for hand gesture recognition usually consists of equipment with some mounted inertial trackers, e.g., Sensoryx gloves. It may be possible that using inertial gloves or optical tracking instead of controllers will be less accepted by the user since haptic feedback cannot be addressed yet, and thus grasping a virtual object may feel unnatural. Acknowledgements We further want to thank Inspacion AG for providing the virtual factory surroundings. Author contribution V. Gorobets: Acquisition and analysis of the study data, paper writing. V. Holzwarth: Analysis of the study data, paper writing. C. Hirt: Paper writing. N. Jufer: Industrial advisor. A. Kunz: Project management, acquisition and analysis of the study data, paper writing. Funding Open Access funding provided by ETH Zurich. This work is fully funded by the Swiss Innovation Agency Innosuisse as part of the Eurostars project with the number "E!113504." Availability of data and material The data is completely available. 
Declarations Ethics approval According to the ethics committee, there was no ethical approval necessary for the here presented study. The recorded data was properly anonymized and participants cannot be identified using the here presented data. Consent to participate All of the participants were recruited on their own will and were informed that they could stop the study session for any reason at any given time. Consent for publication All authors give their consent for publication of the here presented work. Conflict of interest The authors declare no competing interests. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons. org/licenses/by/4.0/.
6,515.6
2021-06-29T00:00:00.000
[ "Engineering", "Computer Science" ]
REDUCIBLE FUNCTIONAL DIFFERENTIAL EQUATIONS This is the first part of a survey on analytic solutions of functional differential equations (FDE). Some classes of FDE that can be reduced to ordinary differential equations are considered, since they often provide an insight into the structure of analytic solutions to equations with more general argument deviations. Reducible FDE also find important applications in the study of stability of differential-difference equations and arise in a number of biological models. In studying equations with a deviating argument, not only the general properties are of interest, but also the selection and analysis of the individual classes of such equations which admit simple methods of investigation. In this section we consider a special type of functional differential equations that can be transformed into ordinary differential equations and thus provide an abundant source of relations with analytic solutions. Obviously, the key to the solution is the fact that the function f(t) = 1/t maps the interval (0, ∞) one-to-one onto itself and that the relation f(f(t)) = t, (2.3) or, equivalently, f(t) = f^{-1}(t), is satisfied for each t ∈ (0, ∞). A function f(t) ≢ t that maps a set G onto itself and satisfies on G condition (2.3) is called an involution. In other words, an involution is a mapping which coincides with its own inverse. Let f_1(t) = f(t), f_{n+1}(t) = f(f_n(t)), n = 1, 2, ..., denote the iterations of a function f: G → G. A function f: G → G is said to be an involution of order m if there exists an integer m ≥ 2 such that f_m(t) = t for each t ∈ G, and f_n(t) ≢ t for n = 1, ..., m − 1. It is easy to check that the following functions are involutions. EXAMPLE 2.1. f(t) = c − t on R = (−∞, ∞), where c is an arbitrary real. EXAMPLE 2.4. The function f(z) = εz, where ε = exp(2πi/m), is an involution of order m on the complex plane. We denote the set of all such functions by I. The graph of each f ∈ I is symmetric about the line x = t in the (t, x) plane. Conversely, if Γ is a set of points of the (t, x) plane, symmetric about the line x = t, which contains for each t a single point with abscissa t, then Γ is the graph of a function from I. One of the methods for obtaining strong involutions is the following [14]. Assume that a real function g(t, x) is defined on the set of all ordered pairs of real numbers and is such that if g(t, x) = 0, then g(x, t) = 0 (in particular, this is fulfilled if g is symmetric, i.e., g(t, x) = g(x, t)). If to each t there corresponds a single real x = f(t) such that g(t, x) = 0, then f ∈ I. A strong involution satisfies lim_{t→−∞} f(t) = +∞, lim_{t→+∞} f(t) = −∞. (2.4) THEOREM 2.1. A continuous strong involution f(t) has a unique fixed point. PROOF. The continuous function φ(t) = f(t) − t satisfies relations of the form (2.4) and, therefore, has a zero, which is unique by virtue of its strict monotonicity. We also consider hyperbolic involutory mappings f(t) = (αt + β)/(γt − α), α² + βγ > 0, (2.5) which leave two points fixed. We introduce the following definition. DEFINITION 2.2. A relation of the form F(t, x(t), x(f_1(t)), ..., x(f_k(t)), x'(t), ..., x^{(n)}(t)) = 0 is called a differential equation with involutions. THEOREM 2.2. Let the equation x'(t) = F(t, x(t), x(f(t))) (2.6) satisfy the following hypotheses. (i) The function f(t) is a continuously differentiable strong involution with a fixed point t_0. (ii) The function F is defined and is continuously differentiable in the whole space of its arguments. (iii) The given equation is uniquely solvable with respect to x(f(t)): x(f(t)) = G(t, x(t), x'(t)). (2.7) Then the solution of the ordinary differential equation x''(t) = ∂F/∂t + (∂F/∂x(t)) x'(t) + (∂F/∂x(f(t))) F(f(t), x(f(t)), x(t)) f'(t) (2.8) (where x(f(t)) is given by expression (2.7)) with the initial conditions x(t_0) = x_0, x'(t_0) = F(t_0, x_0, x_0) (2.8') is a solution of Eq. (2.6) with the initial condition x(t_0) = x_0. The second of the initial conditions (2.8') is a compatibility condition and is found from Eq. (2.6), with regard to (2.9) and f(t_0) = t_0.
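The involution property (2.3), and the hyperbolic form (2.5) as written above, can be checked symbolically. A minimal sympy sketch follows; the symbol names are illustrative.

```python
import sympy as sp

t, c, z = sp.symbols('t c z')
alpha, beta, gamma = sp.symbols('alpha beta gamma', positive=True)

# f(f(t)) should simplify back to t for an involution
for name, f in [("1/t on (0, oo)", 1 / t),
                ("c - t on R", c - t),
                ("hyperbolic (2.5)", (alpha * t + beta) / (gamma * t - alpha))]:
    print(name, "->", sp.simplify(f.subs(t, f)))

# order-m involution on the complex plane: f(z) = eps*z with eps = exp(2*pi*i/m), m = 5
m = 5
eps = sp.exp(2 * sp.pi * sp.I / m)
iterate = z
for _ in range(m):
    iterate = (eps * z).subs(z, iterate)
print("f_5(z) ->", sp.simplify(iterate))   # should reduce to z
```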
(2.6) with the initial condition x(t O) x O. The second of the initial conditions (2.8')is a compatibility condition and is found from Eq. (2.6), with regard to (2.9) and f(t 0) (_oo, o). Then the solution of the ordinary differential equation x"(t) with the initial conditions x(t O) Xo, x'(t o F(Xo) is a solution of Eq. (2.10) with the initial condition X(to) x O. COROLLARY.Theorems 2.2 and 2.3 remain valid if f(t) is an involution of the form (2.5), while the equations are considered on one of the intervals (_oo, a/y) or (I, ). REMARK.Let t O be the fixed point of an involution f(t).For t > to, (2.6) and (2.10) are retarded equations, whereas for t < t O they are of advanced (2.12) The fixed point of the involution f(t) a-t is t o a/2.The initial condition for (2.11) is x() x O; the corresponding conditions for (2.12) are 1 x() x o, x'() x0 Eq. (2.12) is integrable in quadratures: This is the solution of the original equation (2.11). The topic of the paper [16] is the equation (2.13) where x is an unknown function. THEOREM 2.4 ([16]).Let the following conditions be satisfied: (I) The function f maps the open set G into G, G being a subset of the set R of real numbers. (2) The function f has iterations such that fl(t) f(t) fk(t) f(fk_l(t)) f (t) t for each t e G, where m is the smallest natural number for which the last expression holds. (3) The function f has derivatives up to, and inclusive of, the order mn n for each t g G, f'(t) # 0 for each t G. (4) The function F(t, Ul, u2, Un+ I) is mn n times differentiable of its F arguments for each t g G and Ur R (r I, n + I) and u n+l (5) The unknown function x has derivatives up to, and inclusive of, the order mn on G. In this case there exists an ordinary differential equation of order mn such that each solution of Eq. (2.13) is simultaneously a solution of this differential equat ion. Let us consider the functional differential equation [17] (2) The functions x and f (r i n) have derivatives up to the order p, r where p max(k I kn) so that f'(t) # 0 for every t G and r I n. r (3) For the function F at least one relation F x (s) To investigate the equation x'(t) f(x(t), x(-t)), the author of [6] denotes y(t) x(-t) and obtains y'(t Hence, the solutions of the original equation correspond to the solutions of the system of ordinary differential equations d__x f(x y) dy _f(y x) dt dt with the condition x(O) y(O).From the qualitative analysis of the solutions of the associated system he derives qualitative information about the solutions of the equation with transformed argument.The linear case is discussed in some detail. Several examples of more general equations are also considered. Boundary-value problems for differential equations with reflection of the argu- ment are studied in [I0]. LINEAR EQUATIONS In this section we study equations of the form n Lx(t) with an involution f(t). THEOREM 3.1 ([i]).Suppose that the initial conditions are posed for Eq. (3.1) in which the coefficients ak(t) the function (t), and the strong (or hyperbolic (2.5)) involution f(t) with fixed point t o belong to the class cn(_m, oo) (or cn(/y, oo)).If n > I, then f'(t) # O.We introduce the operator Then the solution of the linear ordinary differential equation PROOF.By successively differentiating (3.1) n times, we obtain These relations are multiplied by a0(f(t)) al(f(t) a (f(t)) respectively n and the results are added together: k=O Thus, we obtain Eq. 
(3,4).In order that the solution of this equation satisfies problem (3.1)-(3.2),we need to pose the following initial conditions for (3.4): the values of the function x(t) and of its n are determined from the relations Mx(t) x (k)(f(t)) + Mk@(t), k 0 n i (k) by substituting the values t O and x k for t and x (t). is integrable in quadratures and has a fundamental system of solutions of the form ta(In t) sin(b In t), ta(In t) j cos(b In t), a and b are real and is a nonnegative integer. (3.7) PROOF.By an n-fold differentiation Eq. (3.6) is reduced to the Euler equation For n i this follows from (2.2).Let us assume that the assertion is true for n and prove its validity for n + i.In accordance with formula (3.3), we introduce for Eq. (3.6) the operator On the basis of (3.4) and (3.8) we have 2n Mnx (n) Consequently, the equation x (n+l) (t) x() is reduced by an (n+l) -fold differentiation to the Euler equation Mn+ix(n+l)(t) x(t). At the same time we established the recurrence relation It is well known [18] that the Euler equation has a fundamental system of solutions of the form (3.7), where a + bi is a root of the characteristic equation and j is a nonnegative integer smaller than its multiplicity.The theorem is proved.x'(t) x( + 9(t), 0 < t < 9(t) g (0, x x 0 reduces to the problem t2x"(t) + x(t) t29'(t) x(1) O, x'(1) x 0 + 9(1). If b d and a c, Eq. (3.10) is equivalent to the system of equations ax(t) + btx'(t) u(t), u() tr-s-lu(t). If b d and a c, (3.10) reduces to the functional equation x(tl_) t r-s-I x(t). In the case of b -d and a -c Eq. (3.10) reduces to the system ax(t) + btx'(t) u(t) u() -t r-s-I u(t). In the case of b -d and a # -c (3.10) reduces to the functional equation x() -t r-s-I x(t). The equation x'(t) x(f(t)) with an involution f(t) has been studied in [19]. Consider the equation [13] with respect to the unknown function x(t): x'(t) a(t)x(f(t)) + b(t), (3.13) (i) The function f maps an open set G onto G. (2) The function f can be iterated in the following way: where m is the least natural number for which the last relation holds. (3) The functions a(t), b(t) and f(t) are m 1 times differentiable on G, and x(t) is m times differentiable on the same set. THEOREM 3.5 ([i]).In the system x'(t) Ax(t) + Bx(c-t), x(c/2) x 0 (3.15) let A and B be constant commutative r xr matrices, x be an r-dimensional vector, and B be nonsingular. It follows from here that, by appropriate choice of c I, c2, c3, and c4, we can obtain both oscillating and nonoscillating solutions of the above equations.On the other hand, it is known that, for ordinary second-order equations, all solutions are either simultaneously oscillating or simultaneously nonoscillating. It has been also proved in [7] that the system x'(t) A(t)x(t) + f(t, x(tl-)) 1 <_ t < II f(t, x())II <--II x()ll q, where 6 > 0 and q _> 1 are constants, is stable with respect to the first approxima- t ion. For the equation n Z aktkx(k) k=0 (t) x(), 0 < t < (3.16) we prove the following result.s THEOREM 3.6.Eq.(3.16) is reducible by the substitution t e to a linear ordi- nary differential equation with constant coefficients and has a fundamental system of solutions of the form (3.7).s PROOF.Put e and x(e s) y(s), then tx'(t) y'(s).Assume that tkx(k)(t) Ly(s), where L is a linear differential operator with constant coefficients.From the relation we obtain tk+l x (k+l) L[y'(s) ky(s)], which proves the assertion. 
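The displayed formulas in this part of the survey are heavily damaged in the scan, so the block below is a hedged, self-contained illustration rather than a reproduction of the text's own equations. It takes the homogeneous counterpart of the problem quoted above, x'(t) = x(1/t) on (0, ∞) with the involution f(t) = 1/t, whose reduction by one differentiation gives the Euler equation t^2 x''(t) + x(t) = 0 with the compatibility condition x'(1) = x(1) at the fixed point t_0 = 1, and checks one resulting solution symbolically with SymPy.

```python
# Hedged illustration (not the survey's own computation): x'(t) = x(1/t) on (0, inf)
# with the involution f(t) = 1/t reduces to the Euler equation t^2 x'' + x = 0.
import sympy as sp

t = sp.symbols('t', positive=True)
b = sp.sqrt(3) / 2

# Solution of the Euler equation chosen so that the compatibility condition
# x'(1) = x(1) holds at the fixed point t0 = 1 of the involution:
x = sp.sqrt(t) * (sp.sqrt(3) * sp.cos(b * sp.log(t)) + sp.sin(b * sp.log(t)))

euler_residual = sp.simplify(t**2 * sp.diff(x, t, 2) + x)                    # -> 0
fde_residual = sp.simplify(sp.expand_log(sp.diff(x, t) - x.subs(t, 1 / t)))  # -> 0
print(euler_residual, fde_residual)
```

The fundamental system obtained this way, t^(1/2) cos((sqrt(3)/2) ln t) and t^(1/2) sin((sqrt(3)/2) ln t), is of exactly the form (3.7) quoted above, with a = 1/2, b = sqrt(3)/2, and j = 0.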
The functional differential equation Q'(t) AQ(t) + BQT(T t), < t < (3.17) where A, B are n x n constant matrices, T _> 0, Q(t) is a differentiable n n matrix and QT(t) is its transpose, has been studied in [20].Existence, uniqueness and an algebraic representation of its solutions are given.This equation, of considerable interest in its own right, arises naturally in the construction of Liapunov functio- nals for retarded differential equations of the form x'(t) Cx(t) + Dx(t-I), where C, D are constant n n matrices.The role played by the matrix Q(t) is analogous to the one played by a positive definite matrix in the construction of Liapunov functions for ordinary differential equations.It is shown that, unlike the infinite dimen- sionality of the vector space of solutions of functional differential equations, the linear vector space of solutions to (3.17) is of dimension n 2.Moreover, the authors 2 give a complete algebraic characterization of these n linearly independent solutions which parallels the one for ordinary differential equations, indicate computationally simple methods for obtaining the solutions, and allude to the variation of constants formula for the nonhomogeneous problem. The initial condition for (3.17 where K is an arbitrary n n matrix.Eq. (3.17) is intimately related to the system Q'(t) For any two n n matrices P, S, let the n x n matrix PS denote the Kronecker (or direct) product [21] and introduce the notation for the n x n matrix Sl* S (sij) (n,) Sn* where si, and s,j are, respectively, the i th row and the j th colun of S; further, let there correspond to the n> n matrix S the n2-vector s (Sl, Sn,)T.With this notation Eqs. (3.19) THEOREM 3.7 ([20]).Eq. (3.17) with the initial condition (3.18) has a unique solution Q(t) for < t < oo. Examination of the proof makes it clear that knowledge of the solution to (3.21) immediately yields the sol'ution of (3.17)-(3.18).But (3.21) is a standard initial- value problem in ordinary differential equations; the structure of the solutions of such problems is well known.Furthermore, since the 2n 2 2n 2 matrix C has a very special structure, it is possible to recover the structure of the solutions of Eq. (3.17).Let I' %p' p 2n2' be the distinct eigenvalues of the matrix C, that is, solutions of the determinantal equation each ., I, p, with algebraic multiplicity m. and geometric multiplicities nj,r Zr =s I n.mj, Zj m.=3 2n2" Then 2n 2 linearly independent solutions of (3.21) are given by T q-i j q (t) exp(% (t-T q (t -) )) Z (q-i)' r i=l where q i, n., and the 2n 2 linearly independent eigenvectors and generalized eigenvectors are given by A change of notation, and a return from the vector to the matrix form, shows that 2n linearly independent solutions of (3.19) are given by r T q (t-r exp(%j(t where the generalized eigenmatrix pair (L i Mj i)associated with the eigenvalue j ,r' ,r satisfies the equations (3.24) The structure of these equations is a most particular one; indeed, if they are multi- plied by -I, transposed, and written in reverse order, they yield BL. M. -%. will also be a solution; moreover, %. and -%.have the same geometric multiplici- 3 3 3 16 S.M. SHAH AND J. WIENER ties and the same algebraic multiplicity.Hence, the distinct eigenvalues always appear in pairs (%.%j), and if the generalized eigenmatrix pairs corresponding to 3' i i %. are (L., r, Mj,r), the generalized eigenmatrix pairs corresponding to -%j will be .T .T 1 (-i) i+l Lj I ).These remarks imply that if the solution (3.23) cor- responding to %. 
is added to the solution (3.23) corresponding to -%. multiplied by (-I) q+i the n 2 linearly independent solutions of (3 19) given by Zj q(t) But this is precisely condition (3.20)" it therefore follows that the expressions r (3"25) 2 are n linearly independent solutions of (3.17). Eq. (3.17) has been used in [22] for the construction of Liapunov functionals and also encountered in a somewhat different form in [23]. Some problems of mathematical physics lead to the study of initial and boundary value problems for equations in partial derivatives with deviating arguments.Since research in this direction is developed poorly, the investigation of equations with involutions is of certain interest.They can be reduced to equations without argu- ment deviations and, on the other hand, their study discovers essential differences that may appear between the behavior of solutions to functional differential equations and the corresponding equations without argument deviations. The solution of the mixed problem with homogeneous boundary conditions and ini- tial values at the fixed point t o of the involution f(t) for the equations u t(t, x) au (t x) + bu (f(t) x) Its investigation is carried out by means of Theorem 3.1, according to which the solution of the equation THEOREM 3.9.The solution of the problem ut(t x) au (t, x) + hu (c-t, x), PROOF.By separating the variables, we obtain In this case, Eq. (3.30) takes the form The completion of the proof is a result of simple computations.Depending on the relations between the coefficients a and b, the following possibilities may occur: (i) T (t) C (cos ), (lal < Ibl); 2 2 (2) T (t) (a+b)(t---)), (lal Ibl); (3) T (t) )exp(-c (t-))], (Ib < lt). IEOREM 3.10.The solution of the equation u ( satisfying the boundary and initial conditions if a 2 b 2. In the case a < expansion (3.28) diverges for all t # 0. Omitting the calculations, we formulate a qualitative result. THEOREM 3.11.An equation that contains, along with the unknown function x(t) and its deriva- tives, the value x(-t) and, possibly, the derivatives of x at the point -t, is called a differential equation with reflection.An equation in which as well as the unknown function x(t) and its derivatives, the values x(1t-aI) X(mt-am and the cor- are mth roots of uni- responding values of the derivatives appear, where gl' m ty and al' m are complex numbers, is called a differential equation with rota- tion. For m 2 this last definition includes the previous one.Linear first-order equations with constant coefficients and with reflection have been examined in detail in [5].There is also an indication (p.169) that "the problem is much more diffi- cult in the case of a differential equation with reflection of order greater than one".Meanwhile, general results for systems of any order with rotation appeared in [3], [4], [9], and [24]. Consider the scalar equation Xk, k 0 n-1 with complex constants ak, bk, e, then the method is extended to some systems with variable coefficients.Turning to (4.1) and assuming that is smooth enough, we where P and Q are linear differential operators of order n with constant coefficients Pk' qk and Pn an' qn hn. COROLLARY.Under assumptions (4.4) and m I, (4.5) is reducible to a linear ordinary differential equation with constant coefficients. The analysis of the matrix equation X'(t) AX(t) + exp(at) [BX(et) + CX'(et)], (4.6) x(0) E with constant (complex) coefficients was carried out in [3].The norm of a matrix is defined to be lcll max .leij!, (4.7) and E is the identity matrix. 
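Since the displayed equations in the passage above on equations with reflection are garbled in this scan, the following is a hedged illustration (not the survey's own example) of the standard first-order constant-coefficient case: differentiating x'(t) = a x(t) + b x(-t) once and eliminating x(-t) with the original equation gives x''(t) = (a^2 - b^2) x(t), so solutions oscillate when |a| < |b| and do not when |a| > |b|, in line with the oscillation remark quoted above. The values a = 1, b = 2 below are arbitrary choices for the demonstration.

```python
# Hedged illustration (not the survey's own example): the reflection equation
#   x'(t) = a*x(t) + b*x(-t)
# reduces, after one differentiation and elimination of x(-t), to
#   x''(t) = (a^2 - b^2) * x(t),
# with the compatibility condition x'(0) = (a + b)*x(0) at the fixed point t0 = 0.
import sympy as sp

t = sp.symbols('t', real=True)
a, b = 1, 2                    # arbitrary demo values with |a| < |b|  ->  oscillation
w = sp.sqrt(b**2 - a**2)       # angular frequency of the reduced ODE x'' = -(b^2 - a^2) x

# Solution of the reduced ODE chosen so that x'(0) = (a + b)*x(0):
x = sp.cos(w * t) + (a + b) / w * sp.sin(w * t)

ode_residual = sp.simplify(sp.diff(x, t, 2) - (a**2 - b**2) * x)   # -> 0
fde_residual = sp.simplify(sp.diff(x, t) - (a * x + b * x.subs(t, -t)))  # -> 0
print(ode_residual, fde_residual)
```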
THEOREM 4.3. ([3]).If e is a root of unity (e # I), Icll < 1, and the matrix A is commuting with B and C, then problem (4.6) is reducible to an ordinary linear system with constant coefficients. Thus, the use of the operator Lm_ I at the conclusive stage yields (4.11). THEOREM 4.6 ([9]).The system tAX'(t) + BX(t) X(Et) (4.12) with constant matrices A and B is integrable in the closed form if e m I, det A 0. 3 Hence, on the basis of the previous theorem, (4.12) is reducible to the ordinary system (tAdldt + B) m X(t) X(t).(4.13) This is Euler's equation with matrix coefficients.Since its order is higher than that of (4.12) we substitute the general solution of (4.13) in (4.12) and equate the coefficients of the like terms in the corresponding logarithmic sums to find the additional unknown constants. EXAMPLE 4.2.We connect with the equation [9] tx'(t) 2x(t) x(et), e3 1 with constant coefficients A and B, det A # 0 and e m 1 is Integrable in closed form and has a solution X(t) e(t)t A-IB (4.16) where the matrix P(t) is a finite linear combination of exponential functions. PROOF.The transition from (4.15) to an ordinary equation is realized by means of the operators L. Biological models often lead to systems of delay or functional differential equations (FDE) and to questions concerning the stability of equilbrium solutions of such equations.The monographs [28] and [29] discuss a number of examples of such models which describe phenomena from population dynamics, ecology, and physiology. The work [29] is mainly devoted to the analysis of models leading to reducible FDE. A necessary and sufficient condition for the reducibility of a FDE to a system of ordinary differential equations is given by the author of [30].His method is fre- quently used to study FDE arising in biological models.We omit these topics and refer to a recent paper [31].For the study of analytic solutions to FDE, which will be the main topic in the next part of our paper, we also mention survey [32]. 1. 16. 17. 18. a (t), x(fl(t))x (fl(t))X(fn(t))x (f (t)))=0 (2 14)where x is an unknown function and where the following conditions are fulfilled:(I) The functions fl'f form a finite group of order n with respect to n superposition of functions, fl(t) t, and map the open set G into G, G being the largest open set wherein all expressions appearing in this paper are defined. for equations(3.31)and (3.32) are posed at the fixed point of the involution f(t) c t. and apply A I to te given equation with operators A and B defined by (4.1) to Px (Qx)(et) + exp(-c,t/l THEOREM 4.4.([27]).Suppose we are given a differential equation with reflec- tion of order n with constant coefficients n [a-x(k) + bkX(-t)] y(t). of the linear ordinary differential equation m + (0) ,k=O PROOF.Applying the operator L I to (4.9) and taking into account that (LoX)(et) x(e2t)+ (et) t O It is especially clear to see the function f(t) is a continuously differentiable strong involution with a fixed point t o and the function F is defined, continuously differentiable, and strictly monotonic on type. The solutions of the equations AX'(t) (ekE + t-IB)x(t) are matrices Xk(t) exp(ktA-l)t A-IB, k I, mo
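The theorems on equations with rotation quoted above are badly garbled in this scan; as a hedged illustration of the reduction they describe (an m-fold differentiation turns an equation whose argument deviation is multiplication by an m-th root of unity into an ordinary differential equation), the derivation below works the simplest scalar case x'(t) = x(εt). It is an illustrative special case consistent with the statements above, not a reconstruction of the survey's own formulas.

```latex
% Illustrative special case (assumed, not taken from the survey):
% x'(t) = x(\varepsilon t) with \varepsilon = e^{2\pi i/m}, so \varepsilon^{m} = 1.
\begin{align*}
  x'(t)      &= x(\varepsilon t),\\
  x''(t)     &= \varepsilon\, x'(\varepsilon t) = \varepsilon\, x(\varepsilon^{2} t),\\
  x'''(t)    &= \varepsilon \cdot \varepsilon^{2}\, x'(\varepsilon^{2} t)
              = \varepsilon^{3}\, x(\varepsilon^{3} t),\\
             &\;\;\vdots\\
  x^{(m)}(t) &= \varepsilon^{m(m-1)/2}\, x(\varepsilon^{m} t)
              = (-1)^{m-1}\, x(t).
\end{align*}
```

For m = 3 this gives x'''(t) = x(t), an ordinary equation of order mn = 3 for a first-order equation (n = 1), which agrees with the order count mn in the reducibility theorem quoted earlier.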
5,065.2
1985-01-01T00:00:00.000
[ "Mathematics" ]
Streptococcus suis Serotype 2 Biofilms Inhibit the Formation of Neutrophil Extracellular Traps Invasive infections caused by Streptococcus suis serotype 2 (SS2) have emerged as a clinical problem in recent years. Neutrophil extracellular traps (NETs) are an important mechanism for the trapping and killing of pathogens that are resistant to phagocytosis. Biofilm formation can protect bacteria from being killed by phagocytes. Until now, there have only been a few studies that focused on the interactions between bacterial biofilms and NETs. SS2 in both the biofilm state and the planktonic state was incubated with phagocytes and NETs, and bacterial survival was assessed. DNase I and cytochalasin B were used to degrade NET DNA or suppress phagocytosis, respectively. Extracellular DNA was stained with a cell-impermeable fluorescent dye to quantify NET formation. Biofilm formation increased up to 6-fold in the presence of neutrophils, and biofilms were identified in murine tissue. Both planktonic and biofilm cells induced neutrophil chemotaxis to the infection site, with neutrophils increasing by 85.1 and 73.8%, respectively. The bacteria in biofilms were not phagocytized. The bactericidal efficacy of NETs on biofilms and planktonic cells was comparable; however, the biofilm extracellular matrix can inhibit NET release. Although biofilms inhibit NET release, NETs appear to be an important mechanism for eliminating SS2 biofilms. This knowledge advances the understanding of biofilms and may aid in the development of treatments for persistent infections with a biofilm component. INTRODUCTION Streptococcus suis (SS) is a major swine pathogen that causes a variety of diseases, such as septicemia, meningitis, and endocarditis, which lead to economic losses. S. suis serotype 2 (SS2) is considered the most pathogenic and prevalent capsular type (Wertheim et al., 2009; Kerdsin et al., 2016). People working with pigs or people who consume pork-derived products from infected animals are at risk. During the last decade, several human epidemic outbreaks were reported in Asia and all over the world (Gottschalk et al., 2010a,b; Goyette-Desjardins et al., 2014). In addition, streptococcal toxic shock-like syndrome (STSLS), a peracute infection characterized by shock and a high mortality rate, is reported to be caused by SS2, resulting in increased public health concerns worldwide (Tang et al., 2006; Gomez et al., 2014). Bacterial biofilms are bacterial communities and are an important mechanism for bacterial resistance to immune system pressures and antimicrobials (Bojarska et al., 2016). Most SS2 clinical isolates can form biofilms, which contribute to persistent infection, transmission, and difficulty in eradicating infection (Bojarska et al., 2016). However, little information is available on the interaction between the host immune system and SS2 biofilms (Thurlow et al., 2011; Yang et al., 2016). Neutrophil extracellular traps (NETs), which are composed of granule and nuclear constituents, are made by activated neutrophils (Brinkmann et al., 2004; Uhlmann et al., 2016). The nuclear constituents are DNA and histones; DNA is the backbone of NETs and traps pathogens by charge interactions (Wartha et al., 2007). In recent years, NETs have been identified as a significant antibacterial mechanism employed by neutrophils (Csomos et al., 2016).
Neutrophils are observed to generate NETs upon activation with interleukin-8 (IL-8), phorbol myristate acetate (PMA), lipopolysaccharide (LPS), and various microbes (Leshner et al., 2012). NETs can disarm and kill a variety of pathogens, including GAS, S. aureus, Shigella flexneri, and fungi, by capturing the microbes and providing a high local concentration of antimicrobial granules (Brinkmann et al., 2004;Buchanan et al., 2006;May et al., 2015). NETs have been found to be abundant at in vivo sites of infection and inflammation, including in cases of the autoimmune disease systemic lupus erythematosus and a murine model of pneumococcal pneumonia (Beiter et al., 2006;Hakkim et al., 2010). In previous studies examining the immune system response to various microorganisms, certain microbes have been shown to evade phagocytosis but become entrapped by NETs (Branzk et al., 2014). Candida albicans biofilms evade phagocytosis and impair NET formation (Johnson et al., 2016). However, the nature of bacteria and fungi is dramatically different, particularly in size and biofilm structure. Therefore, whether bacterial biofilms can stimulate NET formation is unknown and the influence of SS2 biofilms on bacterial survival in NETs is unclear and requires exploration. Bacterial biofilm formation allows bacteria to persist in the host, making the treatment of streptococcosis challenging (Walker et al., 2005). Cases of human infections worldwide stress the lack of knowledge on the virulence and interactions with host immune cells. Our study provides further knowledge on SS2 biofilm and immune response interactions, which can lead to novel approaches to streptococcosis clinical therapy. Further understanding of host-SS2 interactions may help to explain the complex evolution of the emerging human threat. Ethics Statement This study was carried out in an accordance to animal welfare standards and were approved by the Ethical Committee for Animal Experiments of Nanjing Agricultural University, China. All animal experiments accorded with the guidelines of the Animal Welfare Council of China. Bacterial Strains and Cells The wild-type SS2 strain ZY05719 is an isolate from Jiangsu Province and was grown in Todd-Hewitt broth (THB) medium (Difco, BD, Franklin, NJ, USA) at 37 • C on a gently rocking shaker. The bacteria were cultured to the mid-exponential phase and were collected in media for the experiment using planktonic cells. SS2 biofilms were identified with Congo Red Agar composed of 3% THB, 0.08% Congo Red (Sigma Aldrich, St. Louis, Mo, USA), 0.5% glucose (Biosharp, Anhui, China) and 1.5% agar powder. Neutrophils from the bones of mice were cultured in RPMI 1640 (Gibco-BRL, Thermo Fisher Scientific, Waltham, MA, USA) supplemented with 2% heatinactivated fetal bovine serum (FBS) at 37 • C in 5% CO 2 . RAW264.7 cells (ATCC R TIB-71 TM ) were purchased from the American Type Culture Collection (ATCC) and were cultured in Dulbecco's modified Eagle's medium (DMEM; Wisent, Canada) supplemented with 10% FBS at 37 • C in 5% CO 2 . Isolation of Neutrophils from Mouse Bone Marrow Neutrophils were isolated from 4-week-old ICR mice as previously described with modifications (Zhao et al., 2015). Briefly, the mice were euthanized and sprayed with 70% ethanol. The bone marrow from the tibias and femurs was flushed with sterile PBS with a 20-gauge needle into a 15 ml Falcon tube (BD Falcon), and the cells were washed by centrifugation at 400 × g for 10 min. The cell pellet was resuspended in 3 ml of PBS. 
A Percoll (Sigma Aldrich) density gradient was prepared in a 15 ml Falcon tube by the careful addition of 3 ml of 80% Percoll followed by 3 ml of 65% Percoll and 3 ml of 55% Percoll. The cell suspension was overlaid carefully and centrifuged for 30 min at 1000 × g at room temperature. The top layer and the 55% Percoll layer were carefully aspirated and discarded. The cells at the 80/65% gradient interface were collected and then washed and suspended in RPMI1640 medium. A greater than 90% neutrophil purity was confirmed by Trypan blue staining and flow cytometry. Neutrophil Detection by Flow Cytometry Flow cytometry was performed as previously described with modification (Barletta et al., 2012). The neutrophils were stained with 0.1 µg of FITC-mouse Ly6G antibody (eBioscience, San Diego, CA, USA) and 0.1 µg of PE-mouse CD11b antibody (eBioscience). All of the experiments were recorded using Accuri Cflow software (BD Bioscience, CA, USA) and were analyzed using FlowJo software (Three Star, Ashland, OR, USA). Biofilm Formation In vitro SS2 biofilm formation required the presence of fibrinogen in the culture medium (Freeman et al., 1989;Bojarska et al., 2016). For SS2 biofilm formation in vitro, 100 µl of THB with 2.5 mg/ml of human plasma fibrinogen (Sigma Aldrich) and 100 µl of the bacterial suspension at a concentration of 10 6 colony forming units (CFU)/ml were incubated in a 96-well plate at 37 • C for 24 h. Each well was washed carefully with PBS to remove planktonic bacteria. For biofilm experiments, the biofilms were resuspended in PBS by repeated pipetting, through which the biofilms were physically dispersed (Johnson et al., 2016). To detect the effect of neutrophils on SS2 biofilm formation, 100 µl of a 10 4 CFU/ml bacteria suspension in RPMI was mixed with 100 µl of purified neutrophils at 10 6 /ml with or without DNase I or cytochalasin B in a 96-well plate and incubated for 24 h at 37 • C in 5% CO 2 . The purified neutrophils without bacteria were included as a negative control. DNase I and cytochalasin B were added to bacterial suspension only in the presence of fibrinogen to evaluate the influence of these two inhibitors on biofilm formation. Biofilm formation in the above assay was detected in a 96well plate using a 0.1% crystal violet stain (Kosikowska et al., 2016). After incubation for 24 h, the plates were washed three times with PBS to remove nonadherent cells. To each well, 200 µl of methyl alcohol was added to fix the cells, and then the plates were placed in a 37 • C dryer oven to remove the methyl alcohol. After the plates were washed with PBS three times, the biofilm in each well was stained with 200 µl of 0.1% crystal violet for 20 min. Following staining, the plates were washed three times, and the crystal violet staining the cells was dissolved with 95% ethyl alcohol. The biofilm was detected with a multifunctional microplate reader (Tecan Infinite Pro, Austria) at an optical density (OD) of 595 nm. Biofilm Detection In vivo To determine whether SS2 forms biofilms in vivo, the bacteria were grown to an OD 600 of 0.8 and were then washed three times with PBS. Four-week-old ICR mice were challenged with SS2 ZY05719 at 10 8 CFU/ml by intraperitoneal injection. Three days post-injection, the mice were challenged again. At 12 h after the injection, the mice were euthanized, and the heart, liver, spleen, lungs, and kidneys were collected and homogenized. SS2 biofilm formation was determined using plate streaking in modified Congo Red THB plate agar. 
In order to exclude the color change of Congo Red THB plate is caused by planktonic bacteria or tissue homogenate, planktonic ZY05719 was added into organs homogenate of non-injected mice and the mixture was streaked on Congo Red THB plate directly. The bacteria isolated in vivo were detected with polymerase chain reaction (PCR) using the primer combinations GAPDHF/GAPDGR to detect SS2 GAPDH: forward, 5 ′ -CATGGACAGATAAAGATGG-3 ′ ; reverse, 5 ′ -GCAGCGTATTCTGTCAAACG-3 ′ and CPSF/CPSR to detect the SS2 serotype: forward, 5 ′ -GACGGCAACATTGTTGAGTC-3 ′ ; reverse, 5 ′ -CTCCTAACCACTGTTCAGTG-3 ′ . Phagocytosis Assay The phagocytosis assay was performed with RAW264.7 cells as previously described with some modifications (Mitterstiller et al., 2016). Briefly, RAW264.7 cells were incubated in 24well plates, and then the cell monolayers were washed three times with PBS. An aliquot (100 µl) of suspension containing 10 6 CFU/ml planktonic cells or biofilm cells were added to the cells. The 24-well plate was centrifuged at 800 × g for 10 min and was incubated for 2 h at 37 • C in 5% CO 2 . Next, the cells were washed with DMEM and were treated with 200 µg/ml of penicillin-streptomycin for 1 h to kill extracellular bacteria. The cells were washed with DMEM, and then 100 µl of trypsin and 900 µl of sterile deionized water were added per well to release the bacteria. The viable bacteria number was determined by plating serial dilutions. Bacteria incubated in DMEM for 2 h without RAW264.7 cells were served as control group to quantify the initial inoculum. The level of phagocytosis was calculated as (CFUs of viable bacteria in experimental group)/(CFUs of viable bacteria in the control group). Neutrophil and NET Bactericidal Assays The neutrophils bactericidal assay was performed according to a previous method with slight modifications (Uchiyama et al., 2015). The neutrophils were divided into 3 groups: an untreated group containing only purified neutrophils, and two groups of neutrophils were treated with either DNase I (Sigma Aldrich) to inhibit NET formation or with cytochalasin B (Sigma Aldrich) to suppress neutrophil phagocytosis. Bacteria at 3 × 10 7 CFU were added to the neutrophils at a multiplicity of infection (MOI) of 10. After incubation with planktonic SS2 or biofilm cells for 90 min, the neutrophils were permeabilized with 0.2% Triton X-100 (Sigma Aldrich) on ice to release the intracellular bacteria. The surviving bacteria were diluted and plated on THB agar, and the CFUs were counted. Bacteria without incubation were serially diluted and plated to quantify the initial inoculum. The neutrophils were stimulated by PMA (200 nM, Sigma Aldrich) for 4 h to form NETs as previously described (Ma et al., 2017). Thereafter, the mixtures were centrifuged at 800 × g for 10 min to remove the cells. Planktonic SS2 and biofilm cells at 2 × 10 7 CFU were added to the NET supernatant and were incubated for 1 h at 37 • C in 5% CO 2 . The bacteria without incubation in NETs supernatant were diluted and plated on THB agar as a control. Bacterial Survival in Mouse Blood In vivo Bacterial survival in the blood was determined as previously described (Derkaoui et al., 2016). An aliquot (200 µl) of planktonic or biofilm SS2 at an OD 600 of 0.5 was injected into mice via the tail vein route. To further evaluate the function of NETs, the bacteria were injected into the tail vein with DNase I (10 mg/kg of body weight), and at 12 h post-infection, DNase I was injected again. 
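The plating arithmetic used throughout these assays (back-calculating CFU/ml from serial dilutions, the phagocytosis ratio defined above, and survival relative to the initial inoculum) can be summarized in a few lines. The sketch below uses invented colony counts, dilution factors, and replicate values purely for illustration; the two-sample comparison at the end mirrors the Student's t-test named in the Statistical Analysis subsection rather than the GraphPad workflow itself.

```python
# Hypothetical worked example of the CFU bookkeeping used in the assays above.
# All counts, dilution factors, and volumes are invented for illustration only.
import numpy as np
from scipy import stats

def cfu_per_ml(colonies, dilution_factor, plated_volume_ml=0.1):
    """Back-calculate CFU/ml from a single plate count of a serial dilution."""
    return colonies * dilution_factor / plated_volume_ml

# Phagocytosis level = CFU recovered with RAW264.7 cells / CFU of the control
# incubated without cells (as defined in the Methods).
control_cfu = cfu_per_ml(colonies=152, dilution_factor=1e4)
recovered_cfu = cfu_per_ml(colonies=63, dilution_factor=1e4)
print(f"phagocytosis level ~ {recovered_cfu / control_cfu:.2f}")

# Survival rate in the neutrophil killing assay: viable CFU after co-incubation
# divided by the initial inoculum, per replicate (values invented).
inoculum = np.array([3.1e7, 2.9e7, 3.0e7])
survivors_planktonic = np.array([0.9e7, 1.1e7, 1.0e7])
survivors_biofilm = np.array([1.9e7, 2.1e7, 2.0e7])
surv_plank = survivors_planktonic / inoculum
surv_biofilm = survivors_biofilm / inoculum

# Two-group comparison in the spirit of the Student's t-test used in the paper.
t_stat, p_value = stats.ttest_ind(surv_plank, surv_biofilm)
print(f"survival (planktonic) = {surv_plank.mean():.2f} +/- {surv_plank.std(ddof=1):.2f}")
print(f"survival (biofilm)    = {surv_biofilm.mean():.2f} +/- {surv_biofilm.std(ddof=1):.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```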
Planktonic and biofilm SS2 cells without incubation were plated on THB agar as a control. At 2, 4, 8, and 24 h post-infection, blood was collected via heart puncture, and the blood was serially diluted and plated on THB agar plates. Visualization and Quantification of NETs In vitro NETs were observed in vitro as previously described (Yost et al., 2016). Briefly, neutrophils were pretreated with cytochalasin B for 15 min before incubation with PMA and bacteria. Planktonic ZY05719 at an OD 600 of 0.6-0.8 were washed twice with PBS, added to the neutrophils at an MOI of 10 on poly-L-lysine-coated cover slides, and then centrifuged at 800 × g for 10 min. After incubation at 37 • C for 3 h, the cover slides were fixed with 4% paraformaldehyde for 10 min, permeabilized with 0.1% Triton X-100 and were then blocked with donkey serum at 4 • C overnight. The samples were stained with the primary rabbit anti-neutrophil histone H4 antibody (citrulline 3, 1:1000 diluted, Merck Millipore, Billerica, MA, USA) for 1 h at RT, followed by incubation with goat anti-rabbit Alexa 568 antibody (1:100 dilution, Jackson ImmunoResearch, West Grove, PA, USA). The DNA was visualized by staining with 4' ,6-diamidino-2phenylindole (DAPI, Thermo Fisher). The images were recorded using a fluorescence microscope (Zeiss, Germany). Bacteria entrapped by NETs were stained with SYTO 9 green fluorescent nucleic stain (Thermo Fisher) and observed using 100 × oil objective. The NET quantification assay was performed as previously described (Riyapa et al., 2012). For determining the capacity of neutrophils to form NETs in the presence of planktonic and biofilm SS2, 200 µl of neutrophils were incubated with 20 µl of planktonic SS2 cells, biofilm cells or bacteria separated from biofilm matrix with or without PMA for 3 h. The biofilms were disrupted and the mixture was centrifuged at 3,000 × g for 10 min to separate the bacteria from biofilm matrix. The supernatant was biofilm extracellular matrix and was collected. The precipitate was bacteria that were separated from matrix and the bacteria were washed 3 times with PBS. As a positive control, neutrophils were stimulated with 200 nM PMA. The corresponding bacteria or biofilm matrix were incubated in media without neutrophils to eliminate the background fluorescence. The negative control was the purified neutrophils incubated in media. Extracellular DNA was used to evaluate the quantity of NETs, which was quantified using a Quant-iT Picogreen dsDNA assay kit (Invitrogen). Briefly, after incubation, the reaction mixture were centrifuged at 800 × g for 5 min to discard the cells. Subsequently, 100 µl of supernatant was added to 100 µl of a working solution, which was then mixed thoroughly. After incubation for 5 min, the fluorescence was read with a multifunctional microplate reader (Tecan Infinite Pro) at 480 nm (excitation)/520 nm (emission). Statistical Analysis All experiments were repeated at least 3 times. Student's t-test and the GraphPad Prism 5 Software package (GraphPad Software, La Jolla, CA, USA) were used to perform statistical analyses. Values of p < 0.05 were considered statistically significant. SS2 Biofilm Formation Improved in the Presence of Neutrophils Wild-type SS2 can form biofilms only in the presence of fibrinogen in vitro; however, we found that wild-type SS2 can form biofilms without fibrinogen in the presence of neutrophils. Either phagocytosis or NETs of neutrophil is inhibited, the formation of SS2 biofilm decreased (Figures 1A,B). 
DNase I and cytochalasin B were used to degrade NET DNA and inhibit phagocytosis and these two inhibitors had no influence on SS2 biofilm formation in the presence of fibrinogen ( Figure 1C). This results suggest that in the presence of neutrophil infiltration, SS2 is more liable to form biofilms, which may enhance bacterial survival. Identification of SS2 Biofilms in Mouse Organs Considering that SS2 biofilm formation in THB requires fibrinogen, it is necessary to determine whether SS2 forms biofilms in vivo. Biofilm formation can cause a color change of the Congo Red Agar from red to black. Importantly, biofilm SS2 was isolated from the liver, spleen, and kidney of healthy mice challenged with planktonic SS2 (Figure 2A). Control groups are designed to evaluate the influence of planktonic bacteria and tissue homogenate on Congo Red Agar, and the results showed that normal tissue homogenate, planktonic SS2 and even their simple mixtures cannot cause a color change of the Congo Red Agar (Figure 2B). The isolated biofilm cells were confirmed by PCR ( Figure 2C).These results indicated that SS2 could form bacterial biofilms during the process of infection. Chemotaxis of Neutrophils to the Site of Infection Site with Planktonic and Biofilm Cells The mice were infected with biofilms and planktonic SS2 at the same OD 600 using the murine peritoneal infection model; then, mouse immune cells were collected from peritoneal lavage fluids. Cells collected from mice without infection were served as blank control ( Figure 3A). The results indicated that both planktonic and biofilm cells caused a significant increase in neutrophils in the peritoneal cavity, from 2.4 to 87.5% and 76.2%, respectively (Figures 3B,C). The neutrophil infiltration provides an environment for the interaction between neutrophils and pathogens. Phagocytosis Efficiency of Biofilm Cells and Planktonic Cells The results of the phagocytosis assay indicated that approximately 40% of planktonic SS2 can be engulfed by RAW246.7 cells, a type of professional phagocyte. However, only a few biofilm cells could be plated on THB agar, which indicated that it was more difficult to engulf biofilm SS2 ( Figure 4A). Purified neutrophils had a significant bactericidal effect, and the survival capability of biofilm SS2 was nearly twice that of planktonic SS2. When DNase I was added with neutrophil, the survival rate of planktonic bacteria were nearly 2 times greater than that of the corresponding untreated group. The bacterial survival rate in the biofilm group with DNase I treatment was nearly 30% higher than that of the corresponding untreated control group. For planktonic SS2, when neutrophil phagocytosis was suppressed, the survival rate of planktonic SS2 was increased significantly. However, regardless of phagocytosis inhibition, there was little influence on the survival capability of the biofilm cells ( Figure 4B). Importantly, the results showed that the inhibition of phagocytosis was more beneficial for the survival of planktonic SS2 than the degradation of NETs, while NETs appear to play an important role in biofilm SS2 elimination. NETs Bactericidal Activity To evaluate the bactericidal capacity of NETs, the NETs bactericidal assay was performed. In the presence of NETs, nearly 25% of planktonic and 20% of biofilm SS2 cells were killed according to viable bacteria quantification on THB agar (Figure 5). 
The result indicated that NETs could kill both planktonic and biofilm SS2 and the bactericidal efficiency of NETs on planktonic SS2 and biofilm cells was comparable. Planktonic and biofilm SS2 cells were phagocytized by RAW264.7, and 40% of planktonic SS2 cells were phagocytized; however, few biofilm SS2 cells were phagocytized. (B) Planktonic and biofilm SS2 cells were killed by neutrophils, and the survival rate of biofilm cell was almost twice that of planktonic cells. When neutrophils were treated with DNase I and cytochalasin B, the survival rate of planktonic cells were increased by 2-fold and 3-fold, respectively. When neutrophils were treated with DNase I, the viable biofilm cells increased significantly; however, when neutrophils were pretreated with cytochalasin B, there was no significant difference between the untreated control and pretreatment groups. The results are depicted as the mean ± SD (n = 5). ***p < 0.001; ns, no difference between the groups. Inhibition of NET Formation by SS2 Biofilm Because NET formation is an important mechanism to kill SS2 biofilm, we next examined whether SS2 biofilms induced NET formation by neutrophils. SS2 ZY05719 could induce NETs release and could be captured by the NETs (Figures 6A,B). To further study the influence of biofilms on NET formation, the NETs quantification assay was developed. Only planktonic SS2 and bacteria separated from biofilm extracellular matrix could stimulate NETs release compared to biofilm cells. Importantly, the supernatant of the dispersed biofilm mixture, which is mainly composed of biofilm extracellular matrix, could not stimulate NET formation. In this case, bacteria were incubated with PMA stimulated neutrophils to determine whether biofilms failed to activate neutrophils or inhibit NET formation. The extracellular DNA of NETs induced by planktonic SS2 and bacteria separated from biofilm matrix was enhanced by PMA; however the biofilms and biofilm extracellular matrix inhibited PMA-induced NETs as well ( Figure 6C). These results indicated that bacteria both from the planktonic state and the dispersed biofilm state can stimulate NETs release; however, the extracellular biofilm matrix inhibited NET formation. FIGURE 5 | Bactericidal capability of NETs. The bactericidal rate was calculated by viable bacteria quantification on THB agar. The results are depicted as the mean ± SD (n = 5). ns, no difference between groups. Survival of Planktonic and Biofilm SS2 In vivo Following the observation that NETs kill both planktonic SS2 and biofilm SS2 in vitro, we aimed to identify the function of NETs in blood in vivo. In the first 8 h post-infection, both viable planktonic SS2 and viable biofilm SS2 cells decreased. This result may be ascribe to the host immune response. However, biofilm SS2 survived better than the planktonic cells in the blood stream. Notably, the survival of planktonic and biofilm SS2 was enhanced when NETs were degraded by DNase I. When NETs were destroyed with DNaseI, The viable bacteria in biofilm was much more than planktonic bacteria after 2 h post-infection (Figure 7). These results confirmed that biofilm SS2 demonstrated enhanced survival in the host, particularly when NET DNA was degraded. DISCUSSION SS2 is an important emerging zoonotic pathogen in humans (Smith et al., 2001;Goyette-Desjardins et al., 2014). Most of SS2 human invasive isolates formed biofilm according to previous studies (Bojarska et al., 2016). 
Bacterial biofilm formed on host surfaces, which is critical to the virulence of these organisms. Neutrophils initiate potent response to evasive pathogens and surveille the tissue in the circulation, which plays an essential role in innate immunity (Nicolas- Avila et al., 2017). These cells rapidly react to infection and clear pathogens. To date, little information is available on the interaction between neutrophil and SS2. It is critical to study the response of neutrophils to SS2. In this study, we examined the interaction between SS2 and neutrophils and SS2 biofilm formation was increased in the presence of neutrophils. SS2 biofilms can mediate neutrophil phagocytosis evasion; however, SS2 biofilms can be killed by NETs, and the bactericidal efficiency is comparable to the action of NETs on planktonic SS2. Importantly, SS2 biofilm cells can inhibit NET formation, mainly because of the biofilm extracellular matrix. Bacterial biofilms can exist in a range of host tissues in the process of bacterial infection, which enables the bacterial communities to persist in the host (Boles and Horswill, 2011). From the perspective of the bacteria and host immune system relationship, pathogens form biofilms to increase chances of survival and to cause persistent infection in the host (Watters et al., 2016). For example, biofilm formation provides pneumococci with a protected environment for bacterial cells and enables transmission from person to person during nasopharyngeal colonization (Marks et al., 2013). Human neutrophils can enhance the development of Pseudomonas aeruginosa biofilms (Walker et al., 2005). In addition, previous studies have reported that macrophages and monocytes increase C. albicans biofilm formation (Chandra et al., 2007;Watters et al., 2016). In this study, the results showed that neutrophils can promote SS2 biofilm formation and that biofilm SS2 was better able to survive than planktonic cells in macrophages and neutrophils in vitro and in blood in vivo. To survive in the host, resistance to phagocytes in the blood is a crucial event for the pathogenicity of SS2 (Zhu et al., 2016). These findings suggest that biofilm formation is a survival strategy utilized by SS2 to evade phagocytosis. In addition, our results demonstrate that SS2 can form biofilms in some tissues such as the liver, spleen and kidney in vivo. In addition to phagocytosis, neutrophils release NETs to trap and kill pathogens through extracellular DNA and antimicrobial proteins (Thammavongsa et al., 2013). Various pathogens can be killed by NETs including parasites, fungi, bacteria, and viruses (Saitoh et al., 2012;Uchiyama et al., 2015;Avila et al., 2016;Von Kockritz-Blickwede et al., 2016). It has previously been reported that a microbe size-sensing mechanism allows neutrophils to selectively respond to pathogens on the basis of microbe size. Small microbes are more likely to be taken up in a phagolysosome instead of stimulating NET formation (Branzk et al., 2014). One study showed that C. albicans with hyphae, which are too large to be phagocytized, are large enough to induce NET formation; however, C. albicans in yeast form failed to induce NETs release and C. albicans biofilms impaired NET formation (Branzk et al., 2014;Johnson et al., 2016). C. albicans biofilms consist of two main kinds of cells, small oval yeast-form cells and long tubular hyphal cells, and both yeast cells and hyphae are crucial for biofilm formation. SS2 and many pathogenic bacteria can induce NET formation. 
Therefore, virulence mechanisms may have a critical role in NET formation and microbe size may not the most important virulence mechanism that induce NETs in response to bacterial stimuli. SS2 biofilms are communities of bacteria with extracellular DNA, proteins and exopolysaccharides. Importantly, biofilm extracellular matrix can vary greatly depending on the microorganisms present. The different properties between bacterial pathogens and the fungal pathogen C. albicans may contribute to the different activities of biofilms in response to neutrophils. Importantly, phagocytosis is a much faster mechanism than NET formation and phagocytosis remains the major method for host immune cells to clear invasive SS2 cells (Fuchs et al., 2007;Nordenfelt and Tapper, 2011). A FIGURE 6 | NETs visualization and quantification. (A) Neutrophils were stimulated with planktonic SS2. The pictures from the left to right were labeled with the following dyes: DNA with DAPI (blue), histone H4 (citrulline 3) with Alexa 568 conjugated (red), and an overlay of the first two pictures using ZEN 2012 software (Zeiss). Scale bar, 10 µm. (B) At 100 × magnification with oil, DNA was stained with the SYTO 9 green fluorescent nucleic stain. Arrows indicate the NETs structure; the round shapes indicate free bacteria without entrapment, and the square shapes indicate bacteria entrapped by NET DNA. (C) Relative fluorescence units were used to evaluate the quantity of NETs. Planktonic SS2 and bacteria separated from biofilm matrix could induce NETs release. The NET formation level induced by planktonic SS2 with PMA was twice that induced by planktonic SS2. The NET formation induced by bacteria separated from biofilm matrix with PMA-treated neutrophils was twice that induced by bacteria only. NETs induced by biofilm SS2 and the biofilm matrix were similar to the negative control. The results are depicted as the mean ± SD (n = 5). **p < 0.01; ***p < 0.001; ns, no difference between groups. reasonable hypothesis is that NETs aid in the killing of SS2 biofilm cells that are difficult to phagocytose by immune cells and that NETs appear to be an important method of eliminating SS2 biofilm cells. Both planktonic SS2 and biofilm SS2 cells can cause neutrophil accumulation at infection sites, providing an ideal environment for NETs immunoreaction. Significantly, NETs appear to have equal bactericidal efficacy for biofilm and planktonic SS2 cells. Neutrophils were treated with DNase I and cytochalasin B to degrade NETs and to suppress phagocytosis, respectively. For planktonic SS2, when NETs and phagocytosis were suppressed separately, the bacterial survival in neutrophils was improved significantly. These results indicated that NET formation and phagocytosis are both important mechanisms for killing invasive planktonic SS2, which is consistent with previous reports (De Buhr et al., 2017). Planktonic bacteria were more likely to be cleared by phagocytosis. When NET DNA was degraded, the survival of SS2 biofilm cells increased; however, phagocytosis had no obvious bactericidal effect on biofilm bacteria. In addition, in blood survival assays in vivo, biofilm cells were better able to survive compared to planktonic cells. When NET DNA was degraded, biofilms protected the bacteria from being killed and biofilm cells had enhanced survival in vivo, indicating that NETs could be an important bactericidal mechanism to entrap and kill bacteria biofilms in the host blood stream. 
Both phagocytosis and NETs are important bactericidal mechanisms for planktonic cells, and planktonic SS2 can stimulate NETs release and can be entrapped by NETs. NET formation appeared to be an efficient bactericidal mechanism for biofilm cells in this study; however, bacterial biofilms and the biofilm extracellular matrix could inhibit NET formation even in the presence of PMA, indicating that biofilms inhibit NETs release mainly through the extracellular matrix. Notably, bacteria separated from biofilms matrix still have the ability to induce NET formation. Further work will address on the mechanism of NET inhibition through biofilm matrix. Secretion of nuclease has been the main strategy to degrade the NET DNA backbone for bacteria in previous studies (Uchiyama FIGURE 7 | Bacteria survival rate in mouse blood. A comparison of the survival rate of bacteria isolated from mouse blood post-infection between planktonic and biofilm SS2 cells is shown. At 2, 4, 8, and 24 h post-infection, there were significantly more viable biofilm SS2 cells than planktonic SS2 cells. When DNase I was added to degrade NET DNA, the bacteria both in planktonic state and biofilm state displayed enhanced survival in the blood, particularly for biofilm SS2 cells. The survival rate between the group of planktonic ZY with DNase I and the group of biofilm ZY with DNase I was compared in each time point. The results are depicted as the mean ± SD (n = 3). **p < 0.01; ***p < 0.001; ns, no difference between groups. et al., 2012). In this study we found that biofilm is another mechanism to inhibit NETs release. Importantly, SS2 biofilms inhibit NETs release through the biofilm extracellular matrix. Biofilms are significant protective shelters for bacteria and enable survival by allowing the pathogen to persist and resist the host immune system. Although biofilms can evade phagocytosis and inhibit NET formation, NETs derived from neutrophils stimulated by planktonic bacteria and host inflammatory factors might be a significant mechanism of eliminating bacterial biofilms. This study provides novel knowledge on the battles between NETs and bacterial biofilms and can potentially inform novel strategies for the clinical treatment of streptococcal disease.
7,380.8
2017-03-20T00:00:00.000
[ "Biology" ]
Prediction of Dynamic Stability Using Mapped Chebyshev Pseudospectral Method A mapped Chebyshev pseudospectral method is extended to solve three-dimensional unsteady flow problems. As the classical Chebyshev spectral approach can lead to numerical instabilities due to ill conditioning of the spectral matrix, the Chebyshev points are evenly redistributed over the domain by an inverse sine mapping function. The mapped Chebyshev pseudospectral method can be used as an alternative time-spectral approach that uses a Chebyshev collocation operator to approximate the time derivative terms in the unsteady flow governing equations, and the method is generally applicable to both nonperiodic and periodic problems. In this study, the mapped Chebyshev pseudospectral method is employed to solve a three-dimensional periodic problem, and its spectral accuracy and computational efficiency are verified against those of the Fourier pseudospectral method and the time-accurate method. The results show good agreement with both the Fourier pseudospectral method and the time-accurate method. The flow solutions also demonstrate good agreement with the experimental data. Similar to the Fourier pseudospectral method, the mapped Chebyshev pseudospectral method approximates the unsteady flow solutions accurately at a considerably lower computational cost than the conventional time-accurate method. Introduction The conventional time-marching approach and time-spectral methods have been widely used to solve various types of unsteady flow problems that are governed by partial differential equations (PDEs). These methods have been developed to approximate the solutions of PDEs by different approaches. To estimate the time derivative term in the unsteady flow governing equations, the time-marching method employs a finite difference method, while the time-spectral methods used in this study apply a discrete Fourier expansion and Chebyshev polynomials. In this study, two different basis functions are used for the spectral method in order to solve time-dependent problems. The first is based on a frequency-domain method that uses a discrete Fourier expansion. Hall et al. [1] first proposed an application of the frequency-domain-based approach to a time-dependent nonlinear flow problem. The method applies the Fourier spectral method to the temporal discretization and approximates the time derivative terms in the unsteady transport equations. The transformation from the time domain to the frequency domain turns the unsteady form of the governing equations into time-independent equations by treating the time derivative term as a source term, and this allows one to solve unsteady problems in a steady-state manner at multiple time instance points [2]. The frequency-domain-based time-spectral method can deliver very accurate approximations of periodic unsteady problems at an efficient computational cost as long as the Nyquist theorem is satisfied. However, the computational efficiency degrades as the number of harmonics increases when the frequency content is not known a priori [3].
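Since the paper's own formulation of the mapping is given later (a nonsymmetric two-parameter map controlled by α and β), the sketch below is only a minimal illustration of the underlying idea, assuming the symmetric one-parameter Kosloff-Tal-Ezer form t = arcsin(αx)/arcsin(α): build the standard Chebyshev-Gauss-Lobatto differentiation matrix, redistribute the nodes with the inverse-sine map so their spacing becomes nearly uniform, and differentiate through the chain rule. It is not the authors' solver, and the parameter values are arbitrary.

```python
# Minimal sketch (not the authors' solver): standard Chebyshev collocation
# plus a symmetric one-parameter Kosloff-Tal-Ezer inverse-sine mapping.
import numpy as np

def cheb(N):
    """Chebyshev-Gauss-Lobatto points and first-derivative matrix on [-1, 1]."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))        # negative-sum trick for the diagonal
    return D, x

N, alpha = 24, 0.98                    # arbitrary demo values
D, x = cheb(N)

# Inverse-sine (Kosloff-Tal-Ezer) map: t = arcsin(alpha*x)/arcsin(alpha).
t = np.arcsin(alpha * x) / np.arcsin(alpha)
dtdx = alpha / (np.arcsin(alpha) * np.sqrt(1.0 - (alpha * x) ** 2))
D_t = np.diag(1.0 / dtdx) @ D          # chain rule: d/dt = (dx/dt) d/dx

# The mapping makes the node spacing far more uniform ...
h_x, h_t = np.abs(np.diff(x)), np.abs(np.diff(t))
print("spacing ratio max/min, unmapped:", h_x.max() / h_x.min())
print("spacing ratio max/min, mapped  :", h_t.max() / h_t.min())

# ... while retaining spectral accuracy on a smooth test function.
u = np.exp(np.sin(np.pi * t))
du_exact = np.pi * np.cos(np.pi * t) * u
print("max derivative error:", np.abs(D_t @ u - du_exact).max())
```

The more uniform spacing relaxes the ill conditioning associated with the O(N^-2) clustering of the unmapped Chebyshev points near the interval ends, which is the motivation for the mapping stated in the abstract above.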
If this is the case and the problem is nonperiodic, the governing equation can be represented with a series of Chebyshev polynomials.The Chebyshev polynomials are one class of the orthogonal Lagrangian polynomials, and the main features of the use of the Chebyshev polynomials are its general application capability to either periodic or nonperiodic problems compared to the frequency-based timespectral method [3].The classical Chebyshev polynomials can provide remarkable solution resolution near the boundaries [3], but this feature of the Chebyshev polynomials is not favorable for current unsteady problems for its high potential outcome of the numerical instability.To mitigate such numerical instability, an application of conformal mappings [4][5][6][7][8][9] to the traditional Chebyshev pseudospectral method is considered.Im et al. [2] have applied conformal mapping techniques to unsteady flow problems, and an inverse sine mapping function is employed for even distribution of the time instance points over the range for current problem. The mapped Chebyshev pseudospectral method can provide the solution accuracy as fine as that of time-accurate method at an effective efficiency in computational cost.Im et al. [10] solved two-dimensional airfoil flows under forced oscillation by the Euler and Navier-Stokes solvers with the mapped Chebyshev pseudospectral method.The results obtained very similar characteristics compared to those of the Fourier pseudospectral method in terms of both solution accuracy and computational efficiency. In this paper, a mapped Chebyshev pseudospectral method is expended to three-dimensional case and employed to approximate solutions of the time-dependent flow past both two-and three-dimensional bodies under a forced harmonic oscillation.The results are used to calculate dynamic derivatives of the oscillating bodies.The standard NACA 0012 and a tailless aircraft, the Lockheed Martin Tactical Aircraft Systems-Innovative Control Effector (ICE) configuration, [11][12][13] are subjected to the unsteady flow simulations.The solutions are obtained by computation of Euler and Navier-Stokes equations using time-accurate method and time-spectral methods. The details of Chebyshev pseudospectral method with a mapping function and the calculation method of dynamic derivatives are explored in Section 2 and Section 3. The solutions from the present method are compared with those from the time-accurate method, the Fourier pseudospectral method, and experimental data [12] to validate the accuracy of the present method in Section 4. Finally, dynamic derivatives are obtained through investigations of the unsteady flow solutions from each of computational methods in Section 5. Time-Spectral Methods 2.1.Fourier Pseudospectral Method.Frequency domain approach can provide very accurate solutions for the timedependent transport equations of conservation of mass, momentum, and energy with a temporal periodicity at an efficient computational cost, compared to the traditional timeaccurate method.The frequency-based time-spectral method uses a discrete Fourier expansions to estimate the temporal derivative terms in the unsteady flow governing equations. where U is the vector of conservative variables and R is the residual of the spatial discretization of the fluxes. The time spectral representation of the governing equation can be obtained by the approximation of I, U, and R with discrete Fourier expansions truncated at N H th harmonic as follows. 
where T is the period and ω is the angular frequency. Comparing (3), (4), (5), and (6) with each other, the relationships between the Fourier coefficients can be found, and the resulting equations can be written as (7), (8), and (9). The vectors of Fourier coefficients in (7), (8), and (9) have a size of N_T = N_H + 1, and a general form of the system of equations can be represented as (10). However, a direct calculation of (10) is difficult because of its strong nonlinearity, which requires on the order of N^3 operations [1], and the computational expense increases rapidly with the number of harmonics. Hall et al. [1] suggested an alternative way to resolve this problem by applying the inverse Fourier transformation to (10), as shown in (13), where F is the Fourier transformation matrix and F^-1 is the inverse Fourier transformation matrix. A pseudotime derivative term is added to (13) so that the system can be solved by pseudotime stepping, and the final form of the governing equations is obtained. The temporal derivative term is treated as a source term, and the equations become time-independent.

Chebyshev Pseudospectral Method. In the absence of a frequency term, the Chebyshev pseudospectral method can be used for both periodic and nonperiodic problems. The Chebyshev pseudospectral method approximates the solution, an arbitrary function, with a series of Chebyshev polynomials, and is therefore more generally applicable to unsteady flow problems than the frequency pseudospectral method. However, the method inherently suffers from clustering of the collocation points near the boundaries. This uneven distribution of the collocation points can easily lead to numerical instabilities due to ill conditioning of the spectral derivative matrix [10]. Conformal mapping is therefore needed to mitigate these instabilities, and suitable mapping functions have been investigated by a number of researchers. Im et al. [10] considered several types of mapping functions in their study of the mapped Chebyshev pseudospectral method: the inverse sine mapping function [9, 14], a polynomial mapping function [4], the tangent function [5-7], and the hyperbolic sine function [8]. Their study showed that the inverse sine function gave the most favorable performance in terms of even point distribution and convergence characteristics. Details of the other mapping functions can be found in [9, 10, 14].

Inverse Sine Function. The standard Chebyshev pseudospectral method uses a cosine function to distribute the collocation points over the interval [-1, 1]. In this study, the inverse sine mapping function is used to distribute the collocation points evenly, as it showed the most favorable performance compared with the other mapping functions [10]. The inverse sine mapping function, proposed by Kosloff and Tal-Ezer [9, 14], can be written in either a symmetric or a nonsymmetric form over the interval [-1, 1]; the nonsymmetric transformation in (16) is used here because it offers more flexibility in the distribution of the collocation points over the domain,
where α and β are parameters that control the distribution of the collocation points near the boundaries [10, 14]. An arbitrary smooth continuous function u(t) can be represented by a series of orthogonal polynomials on the domain −1 ≤ t ≤ 1, as in (21), where a_n is the Chebyshev coefficient of the function u(t); the coefficients can be collected in vector form. The matrix D_T in (23) contains the Chebyshev polynomials of the function u, and the entries T_N of the matrix are the Chebyshev polynomials of degree N. The derivative of u can likewise be written in vector form as in (24), where the elements of b are the Chebyshev coefficients and D_T corresponds to the Chebyshev polynomials of u′(t). The Chebyshev coefficients of u′ can be expressed as in (26), where the element G_{i,j} of the matrix G is defined by the value of i + j: if i + j is odd, G_{ij} equals 2j/c_i, and it is 0 otherwise. If the interval is transformed from −1 ≤ t ≤ 1 to 0 ≤ t ≤ T_max, the derivative is obtained by dividing each component G_{ij} by T_max/2. Finally, the time derivative of an arbitrary function u(t) can be computed as in (27), where D_ch represents the Chebyshev collocation operator, a matrix of size (N + 1) × (N + 1).

Implementation. The Chebyshev pseudospectral method with the mapping function is applied to the three-dimensional unsteady Reynolds-averaged Navier-Stokes equations, where Q represents the conservative variables and F, G, and H denote the residuals of the viscous and inviscid flux vectors in the X, Y, and Z directions. These vectors of conservative and flux variables are approximated with (29), (30), (31), and (32), and the time derivative of Q on the interval 0 ≤ t ≤ T_max is replaced with the mapped Chebyshev collocation operator derived in (28), giving (33), where Q, F, G, and H are matrices of size (N + 1) × 1 and each component is defined at the points redistributed by the arcsine mapping function. Pseudotime stepping is then applied to (33).

The diagonalized alternating direction implicit (D-ADI) technique [15] is applied to the Chebyshev pseudospectral method for the time integration. In this study, explicit discretization of the Chebyshev pseudospectral operator and implicit discretization of the residual term are combined in the governing equation with pseudotime stepping, as shown in (36), so as to exploit the larger time step allowed by the implicit method while retaining the diagonal dominance of the Jacobian matrix provided by the explicit treatment. The explicit application of the Chebyshev operator removes its influence on the off-diagonal elements of the matrix, and the source term can be added directly to the flux term at the pseudotime level [10]. The implicit integration of the spatial fluxes uses the following linearization: the matrices U, V, and W in (37), (38), and (39) are the flux Jacobian matrices of size (N + 1) × (N + 1). By replacing the flux terms in (36) with (37), (38), and (39), (36) takes the form used here. The flux terms in (37), (38), and (39) are functions of the variables at specific time instances over the domain 0 ≤ t ≤ T_max, and only the diagonal elements of the matrices U, V, and W are considered, as below.
Each diagonal element corresponds to a different time instance, is denoted with subscripts zero through N, and depends only on the flow variables at that time instance. The key point is that the diagonal elements of the simplified diagonal Jacobian matrix are identical to those of the flux Jacobian matrix in the time-domain solution algorithm; therefore, no additional terms in the Jacobian matrices V and W need to be computed. Finally, employing a diagonally implicit method, (28) can be represented as (42) and (43), where the vector S contains the Chebyshev spectral source terms and the flux terms as shown in (42). The T_n in (45) can be computed by the method proposed by Pulliam and Chaussee [15]. The final form of the governing equations is therefore one in which the mapped Chebyshev collocation operator is treated as a source term, in the same way as the harmonic source term of the frequency-domain method [2, 16].

2.6. Computational Efficiency and Solution Accuracy. The main advantages of the time-spectral method are its spectral accuracy and convergence rate. The time-spectral method can achieve an order of accuracy equivalent to that of the time-accurate method at a considerably lower computational cost. Although the time-spectral solutions can be corrupted by aliasing errors if the number of time sampling points is less than twice the Nyquist frequency (or does not satisfy the 3/2 rule [17]), a number of sampling points for which the aliasing error does not significantly affect the time-spectral solution accuracy can be found through frequency spectrum analysis.

A precise quantification of the aliasing errors is important for finding an optimal number of sampling points. Ideally, the number of sampling points is chosen according to the Nyquist theorem or the 3/2 rule, since this delivers exactly the same order of accuracy as the time-accurate method. However, the computational cost depends strongly on the number of sampling points, and a larger number of sampling points slows the convergence of the time-spectral method; the total computational cost can then exceed that of the time-accurate method. Thus, the trade-off between solution accuracy and computational cost should be considered.

The number of time instance points is determined by the magnitude of the aliasing error. The aliasing error is quantified by calculating the norm of the discrepancy between the amplitudes of the frequency contents of the time-accurate and time-spectral methods, as in (46), where C_{i,TA} is the amplitude at each wave number in the frequency contents of the time-accurate method and C_{i,TS} is that of the time-spectral method. In this study, time-spectral solutions are obtained for different numbers of time instance points to find an appropriate number that maintains the spectral accuracy at an effective computational cost.
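To make the construction of Sections 2.2-2.5 concrete, the following NumPy sketch builds a standard Chebyshev collocation derivative matrix and applies a one-parameter Kosloff-Tal-Ezer arcsine mapping rescaled to [0, T_max]. The paper uses a nonsymmetric two-parameter (α, β) mapping and the coefficient matrix G, which are not reproduced here, so this is an illustrative approximation only.

```python
import numpy as np

def chebyshev_diff_matrix(n):
    """Chebyshev collocation derivative matrix on Gauss-Lobatto points
    x_j = cos(j*pi/n), j = 0..n (Trefethen-style construction)."""
    j = np.arange(n + 1)
    x = np.cos(np.pi * j / n)
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** j
    X = np.tile(x, (n + 1, 1)).T
    dX = X - X.T + np.eye(n + 1)
    D = np.outer(c, 1.0 / c) / dX
    D -= np.diag(D.sum(axis=1))        # negative-sum trick for the diagonal
    return x, D

def mapped_chebyshev_operator(n, t_max, alpha=0.99):
    """Chebyshev collocation operator with a symmetric Kosloff-Tal-Ezer
    arcsine mapping, rescaled to [0, t_max] (points run from t_max down to 0)."""
    x, D = chebyshev_diff_matrix(n)
    t = np.arcsin(alpha * x) / np.arcsin(alpha)                   # redistributed points
    dtdx = alpha / (np.arcsin(alpha) * np.sqrt(1.0 - (alpha * x) ** 2))
    D_mapped = np.diag(1.0 / dtdx) @ D                            # chain rule d/dt = (dx/dt) d/dx
    return 0.5 * (t + 1.0) * t_max, (2.0 / t_max) * D_mapped

x, _ = chebyshev_diff_matrix(32)
t, D_ch = mapped_chebyshev_operator(32, t_max=1.0)
ratio = lambda p: np.abs(np.diff(p)).max() / np.abs(np.diff(p)).min()
print(ratio(x), ratio(t))                  # mapping greatly evens out the point spacing
print(np.max(np.abs(D_ch @ np.ones_like(t))))   # ~1e-14: derivative of a constant vanishes
```

In a flow solver following Section 2.5, D_ch would multiply the vector of conservative variables at the (N + 1) time instances and be carried as an explicit source term inside the pseudotime iteration.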
Dynamic Derivative Analysis

In a previous study on the stability analysis of tailless aircraft [18], the stability and control behavior of a tailless aircraft was investigated for the Innovative Control Effectors (ICE) configuration developed by Lockheed Martin. The system matrix of the ICE configuration is developed as follows, where ∂C_A/∂u, ∂C_N/∂u, ∂C_m/∂u, ∂C_A/∂w, ∂C_N/∂w, and ∂C_m/∂w are the static derivatives and ∂C_A/∂q, ∂C_N/∂q, and ∂C_m/∂q are the steady dynamic derivatives. These derivatives can easily be found using steady CFD computations and a finite difference method with a rotating flow scheme [19]. However, the unsteady dynamic derivative terms, ∂C_A/∂α̇, ∂C_N/∂α̇, and ∂C_m/∂α̇, require time-marching unsteady CFD computations.

The dynamic derivatives can be approximated from the unsteady flow solutions, in which a forced sinusoidal oscillation is imposed about the center of gravity of the aircraft [20]. From one cycle of the harmonic oscillation, the mathematical model for the increment in aerodynamic forces and moments can be expressed as in (48), assuming a linear relationship between the aerodynamic properties and the flight states [21], with Δα(t) = α_A sin(ωt) in (49) and Δα̇(t) = q = ω α_A cos(ωt) in (50). Herein, α_A is the amplitude of the angle of attack and ω is the oscillation frequency. Equation (48) can be rewritten with (49) and (50), under the assumption α̇(t) = q, as shown in (51) [22]. Herein, C_kα and C_kα̇ + C_kq represent the stiffness (in-phase) and damping (out-of-phase) components of the dynamic derivatives, respectively [21]. The dynamic derivatives are determined by comparing the first-harmonic flow solutions from the time-accurate and time-spectral methods. Prior to the comparisons, it was verified that the unsteady flow solutions from each method are fully converged. The solutions from each method are transformed into the frequency domain through a discrete Fourier transform, where the vectors X and C represent the flow solutions and the Fourier coefficients of the corresponding flow solutions in the frequency domain, respectively. The first Fourier coefficients, C_1 and S_1, are then compared with the coefficients in (52), as illustrated in the short sketch below.

Validation of Chebyshev Pseudospectral Method and Dynamic Derivatives

The Chebyshev collocation operator with the mapping function is applied to the time derivative term, together with the D-ADI method for time integration and periodic boundary conditions. The spatial numerical flux in each direction is discretized using Roe's FDS with a third-order MUSCL scheme to achieve higher-order accuracy, and the turbulent eddy viscosity is computed from the two-equation k-ω Wilcox-Durbin+ turbulence model [23].
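Before turning to the results, here is a minimal sketch (a hypothetical helper, not the authors' code) of the dynamic-derivative extraction described above: the in-phase and out-of-phase components are recovered from one cycle of a coefficient history via its first Fourier sine and cosine coefficients. The exact nondimensionalization of Eqs. (48)-(52) may differ.

```python
import numpy as np

def dynamic_derivatives(t, coeff, alpha_amp, omega, k_red):
    """Estimate stiffness (in-phase) and damping (out-of-phase) components from
    one cycle of a force/moment coefficient history under alpha(t) = alpha_A*sin(omega*t).

    Assumed linear model: C(t) ~ C0 + C_alpha*alpha_A*sin(omega*t)
                                   + (C_q + C_alphadot)*k_red*alpha_A*cos(omega*t).
    """
    n = len(t)
    a1 = 2.0 / n * np.sum(coeff * np.sin(omega * t))   # first sine coefficient
    b1 = 2.0 / n * np.sum(coeff * np.cos(omega * t))   # first cosine coefficient
    c_alpha = a1 / alpha_amp                           # stiffness component
    c_damp = b1 / (k_red * alpha_amp)                  # damping component C_q + C_alphadot
    return c_alpha, c_damp

# synthetic check with known derivative values (5.0 and -3.0)
omega, k_red, alpha_amp = 1.0, 0.0814, np.deg2rad(2.51)
t = np.linspace(0.0, 2.0 * np.pi / omega, 64, endpoint=False)
coeff = 0.01 + 5.0 * alpha_amp * np.sin(omega * t) - 3.0 * k_red * alpha_amp * np.cos(omega * t)
print(dynamic_derivatives(t, coeff, alpha_amp, omega, k_red))   # ~ (5.0, -3.0)
```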
Unsteady flow solutions of the oscillating NACA 0012 are approximated by the time-accurate method (TA), the frequency pseudospectral method (FPM), and the Chebyshev pseudospectral method (CPM). In computations with the time-spectral methods, the spectral accuracy can be contaminated by aliasing errors when too few sampling points are used. In this study, the aliasing errors are represented by the norm of the discrepancy in the amplitude at every wave number between the time-accurate and time-spectral results. The results are plotted in Figure 2(a). The aliasing error is quantified for different numbers of time instance points, and Figure 2(b) shows that the aliasing errors decrease as the number of time instance points increases. This indicates that the aliasing errors become less influential on the solution reconstruction as the number of time instance points increases, and an optimum number, in terms of computational cost, can be found that provides spectral accuracy essentially equivalent to the order of accuracy of the original waveform. As shown in Figure 2(b), the aliasing error converges at 15 time instance points, at which the time-spectral methods give solution accuracy closely matching the time-accurate solutions. However, the computational time increases linearly with the number of collocation points over the interval, as shown in Figure 3, and the advantage in computational efficiency vanishes when the number of time instance points exceeds 50. In this study, 15 time instance points are chosen as the optimum for the Fourier and mapped Chebyshev pseudospectral methods, which gives roughly twice the computational efficiency of the time-accurate method.

Normal force and pitching moment coefficients are compared with those from the time-accurate method and the frequency pseudospectral method in Figure 4. The solid black line represents the time-accurate method, the dotted and dashed blue lines correspond to the present method, and the solid red line represents the frequency pseudospectral method. The experimental data are shown as black circles [11]. With a small number of collocation points, the results deviate considerably from those of the time-accurate method and from the frequency pseudospectral results obtained with 15 time instance points. However, the results of the present method agree well with the solutions from the other methods as the number of time instance points increases. With 7 or more time instance points, the Chebyshev pseudospectral results converge to the time-accurate results. The time-spectral results also show reasonably good agreement with the experimental data.

Dynamic derivatives are obtained for the oscillating NACA 0012 from the unsteady flow solutions of the time-accurate method, the frequency pseudospectral method, and the mapped Chebyshev pseudospectral method. The number of collocation points is varied from 5 to 15 for the Chebyshev pseudospectral method, and the frequency pseudospectral method is used with 15 time instance points. The results are summarized in Tables 1 and 2.
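The aliasing-error measure quoted above (Eq. (46)) can be sketched as follows, assuming the time-spectral solution has been reconstructed on the same sampling as the time-accurate reference; the exact normalization used in the paper is not recoverable from the text, so this is only an illustrative form.

```python
import numpy as np

def amplitude_spectrum(signal):
    """One-sided amplitude spectrum of a uniformly sampled periodic signal."""
    n = len(signal)
    c = np.fft.rfft(signal) / n
    amp = 2.0 * np.abs(c)
    amp[0] = np.abs(c[0])      # the mean value is not doubled
    return amp

def aliasing_error(ta_signal, ts_signal, n_modes):
    """L2 norm of the amplitude discrepancy over the first n_modes wave numbers,
    in the spirit of Eq. (46): C_i,TA from the time-accurate reference and
    C_i,TS from the reconstructed time-spectral solution."""
    c_ta = amplitude_spectrum(ta_signal)[:n_modes]
    c_ts = amplitude_spectrum(ts_signal)[:n_modes]
    return np.linalg.norm(c_ta - c_ts)
```

In practice the error would be evaluated for increasing numbers of time instance points and the smallest count below a chosen tolerance retained, which is how the value of 15 points quoted above would be selected.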
The dynamic derivatives obtained from the normal force coefficients are presented in Table 1. With a smaller number of collocation points, the values of the dynamic derivatives show some discrepancy with respect to the time-accurate results. However, they converge to the results of the time-accurate method and the frequency pseudospectral method as the number of collocation points increases.

ICE Configuration. The mapped Chebyshev pseudospectral method is applied to the investigation of an oscillating three-dimensional body. The ICE configuration is used to evaluate the accuracy of the present method. The unsteady flow solutions are obtained by computing the Euler equations with the mapped Chebyshev and Fourier collocation operators. A structured mesh with 5.2 million elements in 9 blocks is employed, as shown in Figure 5. A forced harmonic oscillation is imposed about the aircraft center of gravity with a reduced frequency of k = 0.1242, an initial angle of attack of α_0 = 0.0, and amplitudes of 4.16 and 8.33 degrees. The Mach number is 0.0266, the same as in the wind tunnel test conditions. The experiment was performed at a Reynolds number of 0.574 × 10^6 in the Air Force Research Laboratory (AFRL) Subsonic Vertical Wind Tunnel at Wright-Patterson AFB, Ohio [12]. The ICE configuration was tested under various pitching rate conditions during the longitudinal stability test, as presented in Table 3. The amplitude was varied between 4.16 and 8.33 degrees to match pitch rates qc/2V of 0.009 and 0.018 at 0 angle of attack [12].

The number of time instance points is varied to investigate the spectral accuracy of the time-spectral methods. The frequency contents of the unsteady solutions from the Fourier pseudospectral method and the mapped Chebyshev pseudospectral method are shown in Figure 6(a). Similar to the pitching NACA 0012 case in the previous section, the discrepancy between the time-accurate method and the mapped Chebyshev pseudospectral method decreases as the number of time instance points increases. As shown in Figure 6(b), at least five time instance points should be used to minimize the effect of the aliasing error on the solution accuracy. The reconstructed solutions from the present method are shown in Figure 7 as dotted and dashed blue lines. The Fourier pseudospectral solutions are shown as a red solid line, and the time-accurate solution as a black solid line. The reconstructed solutions from the Fourier pseudospectral method and the mapped Chebyshev pseudospectral method agree well with the time-accurate solution.

The computational cost of the time-accurate and time-spectral methods is also compared, as shown in Figure 8. The total computational time for the time-accurate method is about 333 hours. For the mapped Chebyshev pseudospectral method and the Fourier pseudospectral method, the cost increases linearly with the number of time instance points, and it reaches the same order of magnitude when the number of time instance points exceeds 25. From the frequency content analysis, the time-spectral methods with more than eleven time instance points can provide an order of solution accuracy equivalent to that of the time-accurate method. This indicates that the mapped Chebyshev pseudospectral method can compute the solution at half the cost of the time-accurate method for the current application to the ICE geometry.
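As a quick consistency check (not stated explicitly in the paper), with α(t) = α_A sin(ωt) the peak nondimensional pitch rate is qc/2V = k·α_A when α_A is expressed in radians, which reproduces the test-condition values quoted above.

```python
import numpy as np

k = 0.1242
for amp_deg in (4.16, 8.33):
    # peak qc/2V = reduced frequency * amplitude (rad); matches Table 3 values 0.009 and 0.018
    print(amp_deg, round(k * np.deg2rad(amp_deg), 4))
```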
To validate the time-spectral methods for three-dimensional flow, the unsteady flow solutions are compared with experimental data in Figure 9. The black circle symbols and the red solid line with diamond symbols indicate the experimental data and the Fourier pseudospectral results, respectively, while the blue solid line corresponds to the results of the present method. Figure 9 shows good agreement of the normal force coefficients with the experimental data. The pitching moment, however, shows a large deviation at the high pitching rate. The issue can be traced to an inaccurate prediction of the flow separation caused by the artificial dissipation term in the Euler solver. This can become a significant concern for the calculation of the complex vortex structures on the delta wing, which form at the nose and along the leading-edge region. As shown in Figure 10(b), the pressure contours change dynamically with the angle of attack, but there is little evidence of interaction between the vortex structure and the body surface, which would appear as flow separation and reattachment further downstream.

The calculation of dynamic derivatives is extended to the three-dimensional case. Unsteady flow solutions of the oscillating ICE configuration are obtained using the Fourier pseudospectral method with 11 time instance points and the mapped Chebyshev pseudospectral method with 5 to 11 time instance points. As shown in Tables 4 and 5, the discrepancy in the dynamic derivatives between the present method and the time-accurate method decreases as the number of time instance points increases. The out-of-phase and in-phase components of the dynamic derivatives agree well with the Fourier pseudospectral results with 11 time instance points and with the time-accurate method.
Conclusions

A mapped Chebyshev pseudospectral method is extended to three-dimensional unsteady flow problems. While spectral methods can find unsteady flow solutions with an accuracy equivalent to the time-accurate method, the frequency-based pseudospectral approach has been limited to periodic problems; the Chebyshev pseudospectral method offers an alternative time-spectral approach applicable to both periodic and nonperiodic problems. In light of this potential, the mapped Chebyshev pseudospectral method is employed to solve three-dimensional unsteady periodic flow in order to validate its spectral accuracy and computational efficiency against the Fourier pseudospectral method and the traditional time-accurate method. The unsteady flow solutions of the flow past the NACA 0012 and the ICE configuration under forced pitching oscillation are obtained with the time-accurate method, the Fourier pseudospectral method, and the mapped Chebyshev pseudospectral method, and the results of the present method are also compared with experimental data for validation. The mapped Chebyshev pseudospectral method produced unsteady flow solutions in good agreement with the other methods for both the two- and three-dimensional cases. Compared with the experimental data, the present method also gave good agreement in the normal force coefficient but showed some discrepancy in the pitching moment coefficient. Finally, dynamic derivatives are obtained from the unsteady flow solutions of the time-accurate method, the Fourier pseudospectral method, and the mapped Chebyshev pseudospectral method. With 11 time instance points, the dynamic derivatives based on the mapped Chebyshev pseudospectral results are similar to those of the time-accurate method and of the Fourier pseudospectral method at the same number of time instance points. The mapped Chebyshev pseudospectral method showed a computational cost equivalent to the Fourier pseudospectral method, and about 2.5 and 2 times faster computation than the time-accurate method for the NACA 0012 and ICE configuration cases, respectively. In general, the mapped Chebyshev pseudospectral method approximated the unsteady flow solutions for the periodically oscillating bodies with reasonable accuracy compared to the experimental data and agreed well with the traditional time-accurate method and the frequency pseudospectral method.

Nomenclature:
A: Amplitude of oscillation
a: Chebyshev coefficient
b: Chebyshev coefficient for the first differential
D: Matrix in harmonic balance equation
D_ch: Chebyshev collocation matrix

Data Availability

The data for the time-accurate and time-spectral methods for the NACA 0012 and ICE configuration are not available because they are owned by the university.

Disclosure

This paper is an extended work from a previous study [24]; comparisons between the time-accurate method and the time-spectral methods are included for solution accuracy and computational efficiency.
4.1. NACA 0012. A harmonic oscillation is imposed about the quarter chord of the NACA 0012 airfoil to validate the accuracy of the Chebyshev pseudospectral method. The computation of the Navier-Stokes equations is performed with a reduced frequency k = 0.0814, an initial angle of attack α_0 = 0.016 degrees, and a pitching amplitude α_A = 2.51 degrees, so the airfoil oscillates in the range −2.494° < α < 2.526°. The unsteady flow solutions are approximated at Mach number M = 0.755 using the time-accurate method and the time-spectral methods. A C-type structured mesh is employed for the NACA 0012 airfoil with 4096 points and 4260 elements, as shown in Figure 1. The Chebyshev collocation operator with the mapping function is applied to the time derivative term along with the D-ADI method.

Figure and table captions:
Figure 2: Frequency analysis for the time-accurate method and the time-spectral method results.
Figure 4: Normal force and pitching moment hysteresis of NACA 0012.
Figure 6: Frequency analysis for the time-accurate method and the time-spectral method results (L2 norms of the difference in amplitude of the frequency contents between the time-accurate method and the mapped Chebyshev pseudospectral method for the ICE configuration under pitching motion).
Figure 7: Normal force and pitching moment hysteresis of ICE configuration.
Figure 8: Computational time for ICE configuration under pitching motion.
Figure 9: Comparison of normal force and pitching moment coefficients with experimental data.
Figure 10: Surface pressure contours of ICE configuration at different collocation points.
Table 1: Dynamic derivatives for NACA 0012 airfoil from normal force.
Table 2: Dynamic derivatives for NACA 0012 airfoil from pitching moment.
Table 3: Wind tunnel test conditions for forced oscillation.
Table 4: Dynamic derivatives for ICE configuration from normal force.
Table 5: Dynamic derivatives for ICE configuration from pitching moment.

Nomenclature (continued):
D_T: Chebyshev coefficient matrix
D_T: Chebyshev coefficient matrix for the first differential
F: Fourier transform matrix
G, H: Fluxes
g: Mapping function
g′: Derivative of the mapping function
k_c: Reduced frequency
M: Matrix in frequency domain equation
M_ch: Number of reconstruction points by Chebyshev
M_∞: Freestream Mach number
N_hs: Number of harmonics
p, q: Constants of the polynomial mapping function
Q: Conservative variables
R: Residual
T: Chebyshev polynomial
T_max: Time interval
t: Time
V, W: Flux Jacobian matrices in mapped Chebyshev method
y: Variable of the mapping function
α, β: Constants of the inverse sine mapping function
α_AOA: Angle of attack
ϕ: Frequency
τ: Pseudotime
Design and Evaluation of Binary-Tree Based Scalable 2D and 3D Network-on-Chip Architecture

Abstract. Network-on-Chip (NoC) has been developed as the most prevailing innovation in the paradigm of communication-centric technology. It overcomes the limitations of bus-based systems, and with the incorporation of 3D IC technology it reduces packaging density and improves the performance of Multiprocessor System-on-Chip. Suitable NoC topologies are needed for these applications and for the desired performance. This paper proposes a scalable binary-tree-based topology for 2D and 3D NoCs. The average degree of the proposed network is around 40% lower than that of the torus, and the diameter is also reduced significantly compared with other topologies.

Introduction

Advancements in semiconductor technology have increased the integration of many heterogeneous components on a single chip as a Multiprocessor System-on-Chip (MPSoC). On the single chip, many features are demanded, such as high performance and high […] concepts, which reduces the cost of communication and time-to-market. 3D NoC takes the advantages of 3D integrated circuit (IC) technology and combines them with NoC. This gives the option of connecting many heterogeneous chips vertically with short vertical links. For vertical connections between chip layers, Through-Silicon Vias (TSVs) are mostly used, as they consume less power and have low delay and high bandwidth. The physical structure of a NoC is decided by its topology. Regular, fixed topologies such as mesh, torus, and ring are commonly used in NoCs. Mesh is used most often because of its simple and regular structure, but it has many limitations; for example, a larger network requires more links, which requires extra chip area and power. Reducing the number of links therefore benefits both the area and the power of a NoC. Several topologies have already been proposed; torus and mesh [1] are used most often. The torus topology overcomes the large diameter of the mesh topology. The 3D torus topology is shown in Figure 1; as can be seen from the figure, it has long wrap-around links and is complex. A quadrant-based routing scheme was designed for the 3D torus in [2]; it uses a few TSVs in place of all vertical connections. It suffers from long wrap-around links causing more delay, which was reduced by using a folded torus topology. The hierarchical binary tree [3] has a small bisection width through better management. The fat tree overcomes the small bisection bandwidth of the binary tree, but the problem of high-degree nodes arises [4]. The mesh of trees is a hybrid of the mesh and tree architectures [5]. A binary-search-tree-based ring topology was proposed that achieved a drastic reduction in the degree and diameter of the network [5]. The proposed topology is a combination of the binary search tree and ring topologies and drastically reduces the number of links used in [6]; thus, the proposed topology has the advantage of both area and power savings on chip, along with a reduction in degree. The remaining part of this paper is organized as follows: Section 2 presents the related works, Section 3 presents the proposed topology and its characteristics, Section 4 presents the discussion, comparison, and analysis, and the final section presents the conclusions.

Related Works

Most NoCs use a mesh-based topology for implementation because of its regular and simple structure, but the drawback of the mesh topology is that it requires more resources: larger networks require more links, which results in needless area and energy overhead [7]. A cross-by-pass-mesh (CBP-Mesh) topology has been proposed that reduces the diameter and the average hop count [8]. It is highly scalable and is based on the mesh architecture. The extra bypass links provide shorter routes from source to destination, which improves NoC performance, but at some additional complexity and cost in power and energy. With the emergence of 3D IC technology, 3D NoC is a compelling option for chip interconnection. A 3D Recursive Network Topology (3D RNT) was presented in [9]; it uses partial vertical links implemented with TSVs, and a routing algorithm for the topology was also proposed, but heat dissipation remains an issue. In [10], ant colony optimization (ACO) was applied to routing protocols for 3D networks with torus, mesh, and hypercube topologies (3D Ant Colony Routing, 3D-ACR); another optimization technique, used in [11], is based on a Hopfield Neural Network (HNN) for the 2D mesh topology. These approaches have shortcomings such as limited scalability of the algorithm and chip area overhead for implementation. An exhaustive survey of 3D NoCs is given in [12]. Transistor scalability issues can be addressed by 3D NoCs, which are well suited for heterogeneous multiprocessor SoCs.

From recent papers and articles, there is still a need to design 2D and 3D NoC architectures that give better scalability and performance, overcome the issues of area overhead and energy or power consumption, and reduce the number of links. The proposed topology is a combination of binary tree and ring topologies for 2D and 3D NoC architectures that drastically reduces the number of links. Fewer links mean fewer chip resources, which saves area and energy. The average degree of the proposed network is around 40% lower than that of the torus, which reduces router cost and complexity. The diameter is also reduced significantly compared with other topologies, which gives shorter routing paths, benefits routing latency, and improves the scalability of the network, since scaling is directly linked to the diameter of the topology.
Proposed Topology

A binary tree is a well-established structure in which each node has two child nodes, named the left child node and the right child node, originating from a core node. A binary-search-tree-based ring topology was proposed in [13]; its basic module consists of tree nodes that can communicate directly with each other, as shown in Figure 2. Although this topology drastically reduces the diameter and the node degree, it has a higher number of links. As the number of nodes increases, more levels of the BST-ring topology are needed, which gives rise to redundant links and adds unnecessary area and power consumption to the NoC architecture. In this work, a combination of the BST and BST-ring topologies is proposed, which gives a significant improvement in node degree with fewer communication links. The proposed topology is shown in Figure 2. For level L = 1, the total degree of the nodes is:

The degree of the proposed topology is compared with the torus topology and the ring-based tree topology in Figure 5, and it is found to be better than both; the degree of the proposed topology is around 40% lower than that of the torus topology. Possible distributions of the cores and nodes are shown in Figures 3 and 6, which show the layout of the proposed topology.
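The exact link rules and degree equations of the proposed topology are not recoverable from the extracted text, so the following networkx sketch only illustrates how degree and diameter figures of this kind can be computed; it uses a plain binary tree with one ring per level as a stand-in for the BST/BST-ring hybrid, with a mesh and torus of comparable size for reference.

```python
import networkx as nx

def tree_ring_topology(levels):
    """Illustrative stand-in only: a complete binary tree plus a ring linking the
    nodes of each level; this does not reproduce the proposed topology's link rules."""
    g = nx.balanced_tree(r=2, h=levels - 1)            # nodes numbered level by level
    for d in range(1, levels):
        level = list(range(2**d - 1, 2**(d + 1) - 1))
        g.add_edges_from(zip(level, level[1:] + level[:1]))
    return g

def summarize(g, name):
    n = g.number_of_nodes()
    degrees = [d for _, d in g.degree()]
    print(f"{name}: nodes={n}, avg degree={sum(degrees)/n:.2f}, "
          f"max degree={max(degrees)}, diameter={nx.diameter(g)}")

summarize(tree_ring_topology(5), "tree+ring (sketch)")
summarize(nx.grid_2d_graph(6, 6), "6x6 mesh")
summarize(nx.grid_2d_graph(6, 6, periodic=True), "6x6 torus")
```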
A generalized expression for level L = n can be written as follows, where k ≥ 0, with m = 1 and N = n − 1 when L = n is even, and m = 2 and N = n − 1 when L = n is odd. From Equations (2) and (4), the average degree (d_a) of the network can be found.

Discussion and Analysis

The performance and scaling properties of a network-on-chip can be analyzed by theoretical and mathematical modeling of the network. An analysis of the proposed network parameters is given in Table 1. For scaling, the performance depends strongly on the diameter of the network. The diameter (D_N) measures the maximum number of nodes traversed from a source node to a destination node. The graphical analysis in Figure 4 compares the diameter (D) of the proposed topology with existing topologies such as the mesh and WK-recursive [14]. Figure 4 shows that the proposed topology has the lowest diameter among these topologies. At level 5, with 93 nodes, the mesh has a diameter of 17.28 whereas the proposed topology has a diameter of 9.

Conclusion

Topology is the basic building block in designing a NoC; it decides the roadmap for packet traversal. Performance, complexity, and scalability depend mainly on the topology of the network. Various metrics of the proposed topology are explored and compared with the other most common existing topologies. The diameter of the proposed topology is found to be considerably smaller than that of other topologies such as the mesh and WK-recursive, and the degree of the proposed topology is significantly lower than that of the other existing topologies. This results in a lower number of required links, which not only saves area overhead but also reduces the complexity and cost of the routers. With the extension of the topology to 3D NoC, the number of required TSVs is reduced, which is important when designing 3D ICs. As an extension of the present work, it is planned to design a routing algorithm that can deal with faults and congestion and to calculate the power consumption.

Disclosure statement

No potential conflict of interest was reported by the authors.
The Function of the HGF/c-Met Axis in Hepatocellular Carcinoma Hepatocellular carcinoma (HCC) is one of the most common malignancies worldwide, leading to a large global cancer burden. Hepatocyte growth factor (HGF) and its high-affinity receptor, mesenchymal epithelial transition factor (c-Met), are closely related to the onset, progression, and metastasis of multiple tumors. The HGF/c-Met axis is involved in cell proliferation, movement, differentiation, invasion, angiogenesis, and apoptosis by activating multiple downstream signaling pathways. In this review, we focus on the function of the HGF/c-Met axis in HCC. The HGF/c-Met axis promotes the onset, proliferation, invasion, and metastasis of HCC. Moreover, it can serve as a biomarker for diagnosis and prognosis, as well as a therapeutic target for HCC. In addition, it is closely related to drug resistance during HCC treatment. INTRODUCTION Hepatocellular carcinoma (HCC) is the sixth most common cancer and the fourth leading cause of cancer-related death in the world (Ferlay et al., 2019), and its burden is expected to increase in the next few years. The main cause of HCC is chronic liver disease caused by hepatitis virus B and/or C. However, other factors are also related to the occurrence of HCC, including alcoholrelated liver disease, obesity, type 2 diabetes, and mellitus-related non-alcoholic fatty liver disease (Dyson et al., 2014;Williams et al., 2014). Only about 40% of patients with early-stage or localized HCC are suitable for potentially curative treatments (surgical resection, liver transplantation, and local radiofrequency ablation) and 20% are suitable for transcatheter arterial chemoembolization (TACE) if in an intermediate stage (Giordano and Columbano, 2014). Due to lack of early effective diagnosis, around 80% patients are diagnosed with advanced HCC. The prognosis of advanced HCC is poor, the median overall survival (OS) time is roughly 1-2 months (Ren et al., 2020) and only systemic therapy can improve survival time. THE HGF/c-MET AXIS Hepatocyte growth factor was originally discovered in in vitro experiments as a hepatocyte mitogen (Nakamura et al., 1984) for primary hepatocytes that promoted the cell motility of epithelial cells (Stoker et al., 1987). Subsequently, several studies also revealed other effects, such as intensifying cell motility, angiogenesis, immune response, cell differentiation, and antiapoptosis (Garcia-Vilas and Medina, 2018). Hepatocyte stromal cells or HCC tumor cells can express and release HGF into the tumor microenvironment (Matsumoto et al., 2008). HGF binds to its specific receptor, c-Met, which is located on the surface of hepatocytes, in a paracrine or autocrine manner. Moreover, the autocrine and paracrine activation of c-Met play an important role in the development and metastasis in HCC (Xie et al., 2001). Originally, in a chemically transformed human osteosarcoma cell line, researchers found the c-Met proto-oncogene and identified it as a fusion gene (Cooper et al., 1984). It encodes the receptor for the ligand HGF. Several kinds of cells express c-Met, such as epithelial cells, neurons, hepatocytes, and hematopoietic cells (Fasolo et al., 2013). C-Met is a receptor tyrosine kinase (RTK) that is composed of a disulfide-linked heterodimeric complex. The complex is a transmembrane monomer that has five catalytic tyrosines in a cytoplasmic tail with four distinct hotspots (Basilico et al., 2014). 
One of the five catalytic tyrosines regulates c-Met negatively (Y1003), while the others (Y1234, Y1235, Y1349, Y1356) regulate c-Met positively (Bradley et al., 2017). Y1003 regulates Cbl-mediated lysosomal degradation of Met. Activated Y1234 and Y1235 upregulate kinase activity and lead to phosphorylation of the docking-site residues Y1349 and Y1356, which recruit adaptor proteins and signaling molecules. Additionally, protein kinase C acts on S985 to downregulate c-Met. The hotspots are the domains of Met responsible for interaction with HGF. Of the four hotspots, the first is located on blades 2-3 of the semaphorin (SEMA) homology domain β-propeller, the known binding site of the HGF β chain. The second and third hotspots bind the HGF α chain and are localized on blade five of the SEMA domain and on immunoglobulin-plexin-transcription factor (IPT) homology domains 2-3, respectively. The fourth hotspot, which spans the plexin-semaphorin-integrin homology domain (PSI)-IPT 1 domains, had not previously been correlated with HGF binding. C-Met, activated by the canonical pathway or the non-canonical pathway, is involved in cell proliferation, motility, angiogenesis, invasion, and apoptosis.

Non-canonical Modes of c-Met Activation

C-Met can also be inappropriately activated by other pathways, and deregulated Met activation can induce several types of tumors in humans. (I) Des-γ-carboxy prothrombin (DCP) is secreted from HCC cells and activates c-Met because it contains two structural regions that are similar to HGF (Suzuki et al., 2005; Zhang Y.S. et al., 2014). Due to this similarity, DCP can bind to and activate c-Met. Moreover, DCP is used as a tumor screening and diagnostic biomarker owing to its sensitivity and specificity. (II) C-Met is modulated through crosstalk with different membrane receptors, including epidermal growth factor receptor (EGFR), human epidermal growth factor receptor (HER), integrins, β-catenin, cluster of differentiation-44 (CD44), intercellular adhesion molecule-1 (ICAM-1), Plexin B1, VEGF-A, insulin receptor (INSR), FAS, Mucin 1 (MUC1), neuropilin (Nrp)-1 and -2, and focal adhesion kinase (FAK) (Figure 2; Jo et al., 2015; Garcia-Vilas and Medina, 2018). Although this crosstalk is not necessary for cell survival, it allows better integration of the signals present in the extracellular environment. Even if the crosstalk is redundant under physiological conditions, these interacting receptors may collaborate in promoting tumorigenesis and/or metastasis and may even cause resistance to targeted drugs in pathological conditions. (III) C-Met overexpression drives receptor activation and is induced by several factors, including hypoxia (Pennacchietti et al., 2003; Ghiso and Giordano, 2013), inactivation of tumor suppressor genes, activation of upstream oncogenes, and loss of miRNAs (Corso and Giordano, 2013). (IV) C-Met mutations can activate the receptor by altering substrate specificity or catalytic activity; the identification of germline activating mutations in hereditary papillary renal carcinomas is unequivocal evidence linking Met to cancer (Schmidt et al., 1999). (V) C-Met can also be activated by amplification. Non-canonical pathways have been shown to be associated with tumor progression, metastasis (Garcia-Vilas and Medina, 2018), and drug resistance (Migliore and Giordano, 2008; Scagliotti et al., 2013) in in vivo experiments.
(VI) Autocrine Met-induced-activation is due to ectopic Met expression in cells yielding HGF, especially in acute myeloid leukemia (Kentsis et al., 2012). (VII) miRNAs directly degrade messenger RNA or repress translation to regulate gene expression (Friedman et al., 2009). Deregulated miRNA expression in HCC tissues has been detected. (VIII) Long noncoding RNAs (lncRNAs) could modulate c-Met expression by interacting with miRNAs . (IX) Slug can mediate activation of c-Met in a ligand-independent manner because of increased levels of fibronectin and induced integrin α V function (Chang et al., 2019). The HGF/c-Met axis has an important role in cellular behaviors, such as cell proliferation, migration, survival, morphogenesis and the epithelial-mesenchymal transition (EMT) (Taher et al., 2002;Bouattour et al., 2018). Moreover, it also is essential for liver formation, growth, regeneration, protection, and angiogenesis during embryonic development and in adulthood after injury (Borowiak et al., 2004). In HGF and c-Met knockout mice, mice are embryonic lethal due in part to impaired liver formation (Schmidt et al., 1995). After partial hepatectomy, HGF expression levels increased quickly in rodents (Nakamura et al., 1984), and mice that conditionally inactivate c-Met in mature hepatocytes show insufficient liver regeneration (Borowiak et al., 2004). However, the aberrant activation of HGF/c-Met signaling pathways, such as c-Met over-expression, amplification, binding to other ligands or abnormally high HGF levels, leads the initiation and progression of tumors, such as non-small cell lung cancer, HCC, colon cancer, renal caner, and breast cancer (Goyal et al., 2013;Li J. et al., 2019). Furthermore, both the canonical and non-canonical signaling pathways require the dimerization and autophosphorylation of c-Met. Therefore, c-Met is the key factor in the HGF/c-Met signaling pathways. C-Met and its downstream signal mediators are promising targets in treating patients with advanced HCC. THE HGF/c-MET AXIS IN HCC Numerous in vivo and in vitro studies have demonstrated that HGF/c-Met play a critical role in the development of various human cancers (renal, lung, liver, breast, colon, thyroid, ovarian, and pancreas). HGF/c-Met signaling pathways are uncontrolled in human cancer via overexpression of HGF or c-Met, gene amplification, mutational activation of c-Met, down-regulation of Met-targeted miRNA, binding to other ligands, autocrine signaling, or abnormally high HGF levels. Deregulated activation of c-Met contributes to a few aspects of tumor progression, such as inducing neoplastic cells to disaggregate from the tumor mass, eroding basement membranes, infiltrating stromal matrices, and finally colonizing new tissues to form metastases (Corso and Giordano, 2013). Here we mainly discuss the HGF/c-Met axis in HCC. Onset Chronic liver diseases such as cirrhosis and hepatitis B or C are triggers of HCC (Janevska et al., 2015). There is a complicated interplay between HCC, chronic liver diseases and c-Met. Liver diseases reduce hepatocytes and increase the need for hepatocyte proliferation, thereby promoting up-regulation of c-Met and/or HGF. The increasing c-Met levels induce hepatocyte proliferation, regeneration, and survival during liver repair and delay the development of liver diseases by repressing chronic inflammation and the progression of fibrosis. 
Although it is potentially beneficial for liver diseases, increased c-Met activity can initiate, drive, or promote the progression of HCC (Bouattour et al., 2018). Conversely, knockout of c-Met increased chemically mediated HCC initiation but did not affect phenobarbital-induced HCC promotion (Marx-Stoelting et al., 2009). Moreover, intact and normal HGF/c-Met signaling is fundamental for sustaining normal redox homeostasis and could suppress tumors in N-nitrosodiethylamine-induced HCC (Takami et al., 2007). Additionally, c-Met may induce VEGF-A expression, which can enhance tumor angiogenesis (Zhang et al., 2018b).

As mentioned above, c-Met is aberrantly activated in cancer by gene amplification, overexpression, mutation, binding to other ligands, autocrine signaling, or abnormally high HGF levels. However, according to the study by Takeo et al. (2001), Met amplification occurs at a very low frequency in HCC (one-twentieth); in the study by Kondo, the amplification frequency was 1/59th (Kondo et al., 2013). Concerning activating mutations of the Met kinase domain, Park, Di Renzo, Lee, and Aebersold (Park et al., 1999; Di Renzo et al., 2000; Lee et al., 2000; Aebersold et al., 2003) observed three missense mutations in childhood HCC (K1262R, M1268I, and T1191I, respectively) (Park et al., 1999). Mutations in the Casitas B-cell lymphoma (Cbl)-binding domain have been demonstrated to be oncogenic, because binding of Cbl to Y1003 causes Met ubiquitination, which is vital for maintaining physiological Met activation and preventing continued activation of Met (Abella et al., 2005; Peschard and Park, 2007). Under normal conditions, activated Met is rapidly removed from the cell surface by ubiquitination and then targeted to the lysosomal degradation compartment. Increasing evidence shows that ubiquitination of RTKs is key to their lysosomal degradation, and recruitment of the Cbl family of ubiquitin protein ligases is required for ligand-induced degradation of many RTKs. Moreover, phosphorylation of Y1003 provides a direct docking site for the SH2-like tyrosine kinase binding (TKB) domain of the Cbl ubiquitin ligase and is required for ligand-dependent ubiquitination and degradation of the Met receptor. Therefore, Y1003 mutations in the Cbl-binding region can lead to continued activation of Met and its downstream signaling pathways, and can even induce cancer. Furthermore, mutations in the juxtamembrane domain led to tumorigenesis in an in vitro trial (Graveel et al., 2013).

Levels of c-Met were higher (overexpressed) in 20-48% of HCC samples than in peritumoral liver tissue (Tavian et al., 2000). Overexpression of c-Met occurs more often than mutation or amplification. While not all HCC is related to HGF or c-Met overexpression (Zhang et al., 2005), HCC patients with c-Met overexpression have a poor prognosis. The expression of HGF is decreased in HCC but increased in peritumoral liver tissue (Garcia-Vilas and Medina, 2018). The increased secretion of HGF in the peritumoral liver tissue may be due to increased release of HGF from hepatic stellate cells, while the decreased HGF in HCC tissue may be because HGF from HCC cells binds directly to c-Met through an autocrine pathway. In addition to the above, cooperation of the HGF/c-Met pathway with MUC1 (Bozkaya et al., 2012) or β-catenin (Tao et al., 2016) can induce hepatocarcinogenesis. Qiao et al.
(2019) demonstrated that loss of axis inhibition protein (Axin1) cooperated with c-Met to cause HCC in mice. Similarly, loss of β-catenin also exacerbated hepatocarcinogenesis driven by Met and oncogenic β-catenin. A study carried out by Kaposi-Novak found that a Met-regulated expression signature correlated with the vascular invasion rate, decreased mean survival time, and microvessel density in a subset of human HCC and liver metastases (Kaposi-Novak et al., 2006). Wang et al. (2001) used human Met transgenic mice to understand how ligand-independent activation of RTKs affects tumorigenesis; the transgenic mice developed HCC, which regressed when the transgene was inhibited, showing that Met overexpression can induce tumorigenesis without HGF. The HGF/c-Met axis can also induce the onset of HCC by promoting angiogenesis (Giordano and Columbano, 2014). In summary, although there are many ways to activate the c-Met signaling pathway to induce the occurrence of HCC, c-Met expression and activation are indispensable. Therefore, c-Met is a therapeutic target worthy of research, and there are still many mechanisms by which HGF/c-Met signaling mediates tumorigenesis in HCC that need to be explored.

Proliferation

Besides onset, the HGF/c-Met axis is also involved in the proliferation of HCC. In 2005, to investigate the effects of c-Met expression on HCC cell growth, Zhang et al. (2005) used an adenovirus-delivered small interfering RNA (siRNA) method to examine the effect of c-Met knockdown on the tumorigenic growth of HCC in in vitro and in vivo trials. In the in vitro trial, compared with adenovirus alcohol dehydrogenase (AdH1)-null or mock-infected cells, the proliferation of MHCC97-L cells, which had high c-Met expression, was inhibited by adenovirus AdH1-siRNA, and c-Met expression also decreased; the MHCC97-L cells were arrested at the G1-G0 phase. In the in vivo study, the proliferative indices of adenovirus AdH1-siRNA/Met-injected mouse tumors were lower (23.4%) than those of the adenovirus AdH1-null-injected tumors (69.8%) and mock-injected tumors (72.8%), and c-Met expression was markedly reduced by adenovirus AdH1-siRNA/Met injection. In addition, some studies have found that lncRNAs can promote HCC cell proliferation, migration, and even invasion. According to the study by Zhang, lncRNA FLVCR1-AS1 sponges miR-513c and increases c-Met expression in HCC cells, which induces HCC progression (Zhang et al., 2018a). In another study, Zhang et al. (2019) demonstrated that lncRNA HULC promotes HCC progression by inhibiting miR-2052 expression and activating the c-Met signaling pathway. In addition, one study found that HGF plays a crucial role in HCC proliferation induced by cancer-associated fibroblasts from HCC (H-CAFs) in in vitro and in vivo trials (Jia et al., 2013): tumor volume growth was consistent with HGF production, and the effect of H-CAF conditioned medium on the proliferation of HCC cells was significantly reduced by anti-HGF. Therefore, according to these studies, HGF/c-Met can induce the proliferation of HCC cells.

Invasion and Metastasis

The high lethality of HCC results from primary tumors invading and migrating to other tissues. This process begins with tumors invading blood vessels and subsequently migrating into intrahepatic and extrahepatic tissues. Tumor metastasis is a complicated multistep process, and invasion, which includes damage of basement membranes and proteolysis of the extracellular matrix (ECM), is a major element of this process (Liotta and Kohn, 2001).
Several studies have reported that overexpression levels of HGF or c-Met in HCC correlate with incidence of invasion and metastasis and suggest that HGF/c-Met signaling had a crucial role in the invasion and metastasis of HCC cells (Ueki et al., 1997;Junbo et al., 1999;Wang et al., 2007). HGF/c-Met signaling pathways are involved in HCC cell invasion via HGF-induced c-Met phosphorylation, AKT phosphorylation, nuclear factor-κB (NF-κB) activation, and matrix metalloproteinase-9 (MMP-9) expression (Wang et al., 2007). To investigate the invasion and metastasis effect of HGF/c-Met signaling in HCC, Liu and colleagues conducted a study using two different HGF-treated HCC cell lines, Hep3B and HepG2. The Hep3B HCC cell line was p53 deficient and overexpressed c-Met after treatment with HGF. Loss of p53 expression reinforced HGF/c-Met signaling, which promoted invasion and metastasis by upregulating Snail expression (Liu et al., 2016). Xie et al. (2010) found that overexpression of c-Met induces cell invasion. Other studies have also reported that peritumoral stromal neutrophils and mesenchymal cells secrete high levels of HGF, which drove high rates of proliferation, invasion and metastasis in HCC by promoting the EMT (Ding et al., 2010;He et al., 2016). Moreover, phenotypic analysis validated that mixed-lineage leukemia (MLL), an epigenetic regulator, interacts with HGF/c-Met signaling to induce invasion and metastatic growth of HCC cell lines (Marquardt and Thorgeirsson, 2013;Takeda et al., 2013). The liver is an organ filled with blood vessels that rely on angiogenesis for cellular regeneration (Whittaker et al., 2010). Likewise, angiogenesis plays a critical role in tumor growth, invasion and metastasis (Semela and Dufour, 2004). The angiogenic balance between proangiogenic and antiangiogenic factors maintains normal angiogenesis (Semela and Dufour, 2004). However, the balance in HCC is disordered due to excessive angiogenic factors that are secreted by tumor cells, endothelial cells and pericytes. Many angiogenic factors, such as VEGF-A, HGF, transforming growth factor (TGF) and epidermal growth factor (EGF) (Folkman, 2003), demonstrated elevated expression levels in HCC tumors (Mas et al., 2007), Moreover, these factors induce angiogenesis through a number of mechanisms, one of them is via the HGF/c-Met signaling pathway. Several studies have reported that the HGF/c-Met axis induces angiogenesis and cell growth through interaction with the VEGF and VEGFR pathway and decreasing expression of thrombospondin-1 (Zhang et al., 2003;Abounader and Laterra, 2005). DIAGNOSIS AND PROGNOSIS Although new treatments have been used in HCC patients and provided possible cures, the long-term survival rate is still poor due to late diagnosis and high recurrence. Therefore, sensitive and specific diagnostic or prognostic biomarkers are urgently needed. Although the HGF/c-Met axis is an emerging study target, the possibility of its use in diagnosis and prognosis has been studied in addition to its mechanism in HCC. In the study by Yamagamim et al. (2002), HCC patients had significantly increased serum levels of HGF than patients with chronic viral hepatitis C and cirrhosis. Thus, the serum HGF concentration may be helpful as a tumor biomarker for HCC. Likewise, Karabulut et al. (2014) and Zhuang et al. (2017) also identified serum HGF level as a potential diagnostic. However, Unic et al. (2018) demonstrated that the individual diagnostic performance of HGF was inadequate. 
Although the concentration of HGF is obviously higher in patients with alcoholic liver cirrhosis than in healthy humans, there is no significant difference in HGF serum level between cirrhosis patients with HCC and cirrhosis patients without HCC. It may be due to the reduction of hepatocytes in cirrhosis patient, which promotes the secretion of HGF and then activates c-Met to increase hepatocytes proliferation. Moreover, the diagnosis sensitivity of HGF was very high (90.62%) but the specificity was very low (25.81%). In conclusion, although the independent use of HGF for diagnosis is controversial, HGF is useful when combined with other diagnostic markers. Some studies also suggested that the HGF/c-Met axis has prognostic value for patients with HCC (Zhuang et al., 2017;Garcia-Vilas and Medina, 2018). Overexpression of c-Met correlated with decreased 5-year survival in patients with HCC. In addition, the Met-driven expression signature defines a subset of HCC which has poor prognosis and an aggressive phenotype (Kaposi-Novak et al., 2006). Vejchapipat et al. (2004) has found that inoperable patients with HCC had higher levels of serum HGF than healthy humans due to the impaired clearance of HGF. Serum HGF concentration was negatively correlated with long-term survival time in HCC patients. Additionally, a serum HGF level of 1.0 ng/mL or more indicated a serious prognosis in patients with HCC. Nevertheless, another study (Ke et al., 2009) showed that c-Met was not an independent prognostic factor of HCC for OS and cumulative recurrence, but the combination of c-Met/CD151 was. Meanwhile, an other study (Gong et al., 2018) also suggested that the prognostic value of c-Met is contradictory. By univariate analysis, c-Met overexpression was significantly correlated with clinicopathological factors, but not with multivariate analysis. Furthermore, c-Met overexpression was not identified to be obviously correlated with OS rates in this study. The reason for the difference in these studies may be the small number of surveyed people or the different technique and scoring system. Thus, the role of HGF and c-Met as prognostic factors for HCC needs to be explored further in the future. Though, the combination of HGF and c-Met with other biomarkers may be useful in predicting the prognosis of HCC. TARGET THERAPIES As mentioned above, there are five therapies that can prolong the expected lifespan of patients with HCC including, surgical resection, liver transplantation, local ablation, TACE and sorafenib. Only 40% percent of patients with early stage HCC are eligible for potentially curative treatments (surgical resection, transplantation, local ablation), which prolong median survival times over 60 months (Llovet et al., 2015). For patients with intermediate-stage HCC, TACE can improve estimated median survival by 26 months (Kudo et al., 2014). However, a large proportion of HCC patients are diagnosed at advanced stage and only systemic treatment with sorafenib can extend OS from 6 to 11 months (Llovet et al., 2008). Thus, sorafenib is regarded as the first-line treatment in advanced HCC patients with a manageable adverse event (Llovet et al., 2008). Nevertheless, in the past few years, seven randomized phase III clinical trials, which tested other first-line and second-line treatments, in intermediate-stage or advanced-stage HCC patients have not found any obvious OS benefits (Llovet et al., 2015). Moreover, the intrinsic or acquired resistance of sorafenib is the major obstacle in treatment. 
Thus, based on the understanding of the role of the HGF/c-Met signaling pathway and the uniqueness of c-Met in HCC, more and more therapeutic strategies target c-Met and the interaction between c-Met and downstream signaling mediators instead of the interaction between HGF and c-Met, because of the varying activation of c-Met.

c-Met Inhibitors

So far, six c-Met inhibitors have been developed and tested in 10 HCC clinical trials (Bouattour et al., 2018). Small-molecule kinase inhibitors can block phosphorylation of the catalytic domain of the receptor by competitive or non-competitive antagonism of the ATP binding site, thereby preventing the recruitment of signal transducers and mediators and thus impeding the transmission of downstream signals (Munshi et al., 2010; Gao et al., 2012). Anti-c-Met agents can be categorized into three types: selective c-Met tyrosine kinase inhibitors (TKIs), multi-targeted TKIs with activity against c-Met, and monoclonal antibodies against HGF or c-Met (Goyal et al., 2013). Many preclinical studies have demonstrated the feasibility of HGF/c-Met as targets for the treatment of patients with HCC. For instance, AMG 337, a potent and highly selective small-molecule Met kinase inhibitor, significantly decreases tumor growth of Met-high-expression and Met-amplified HCC cell lines in in vitro and in vivo trials (Du et al., 2016). Additionally, Indo5, which selectively abrogates HGF-induced c-Met pathway activation and brain-derived neurotrophic factor (BDNF)/nerve growth factor (NGF)-induced Trks signaling activation, significantly inhibits HCC tumor growth in xenograft mice (Luo et al., 2019). Moreover, PHA665752 suppressed cell proliferation and induced apoptosis in MHCC97-L and MHCC97-H cells, which overexpress c-Met, by blocking phosphorylation of c-Met and the downstream PI3K/Akt and MAPK/Erk pathways (You et al., 2011). PHA665752 also repressed MHCC97-L and MHCC97-H tumor growth in xenograft models. Also, several clinical trials of c-Met inhibitors are being carried out in HCC patients (Table 1) (Garcia-Vilas and Medina, 2018), including cabozantinib, foretinib, cobazitinib, gefitinib, crizotinib, MSC2156119, AZD4547, MK2461, and INC280 (Schiffer et al., 2005; Goyal et al., 2013; Bladt et al., 2014; Sun et al., 2017; Garcia-Vilas and Medina, 2018). Among them, cabozantinib is undergoing randomized phase III clinical trials. In the most recent clinical trial, median OS and progression-free survival (PFS) for the phase 2 tepotinib 500 mg arm were 5.55 and 3.22 months, respectively (NCT02115373). In a randomized phase II study of axitinib, axitinib combined with best supportive care (BSC) did not improve OS versus placebo combined with BSC (Kang et al., 2015) (NCT01210495). Nevertheless, axitinib combined with BSC led to a significant prolongation of PFS and time to tumor progression (TTP) and an increase in the clinical benefit rate (CBR), and the toxicity in patients with advanced HCC was acceptable. However, there is currently no effective method for treating HCC based on traditional monotherapy with TKIs. In a previous phase II randomized controlled clinical study, tivantinib, a highly selective c-Met inhibitor, improved median time to progression and OS time in patients with Met-high advanced HCC compared with placebo (Santoro et al., 2013).
However, in a randomized phase III double-blind clinical trial, tivantinib did not improve OS time compared with placebo in patients with Met-high advanced HCC previously treated with sorafenib (Rimassa et al., 2018) (NCT01755767). Despite the failure of tivantinib in a phase III clinical trial to achieve its primary endpoint, we cannot deny the role of c-Met inhibitors in the treatment of liver cancer. The reason for this failure, in our opinion, may be that tivantinib is a non-selective c-Met inhibitor rather than a selective one. In other studies, the cytotoxicity of tivantinib against many HCC cell lines was unrelated to c-Met expression but instead related to inhibition of microtubule assembly (Aoyama et al., 2014) and of glycogen synthase kinase-3 alpha (GSK3a) and beta (GSK3b) (Remsing Rix et al., 2014). Inhibition of non-c-Met targets may enhance the antitumor activity of non-selective c-Met inhibitors, but it is also correlated with increased toxicity and limits the dose so that the inhibitors cannot effectively suppress c-Met (Bouattour et al., 2018). Furthermore, the increased toxicity and the limitation of the potential benefit may outweigh the antitumor activity gained from inhibition of multiple targets. Moreover, we cannot attribute the anti-tumor effect of non-selective c-Met inhibitors to the inhibition of c-Met. Thus, selective c-Met inhibitors may be a better choice for treatment in patients, and more studies are needed to identify the reasons for the phase III clinical trial failure and the feasibility of using c-Met-targeting therapies.

MicroRNAs

Fortunately, using potential miRNAs to suppress aberrant c-Met signaling is an emerging and promising therapeutic strategy that bypasses traditional approaches (Karagonlar et al., 2015) (Table 2). miRNAs are small non-coding RNAs that regulate gene expression by degrading mRNA or suppressing translation (Karagonlar et al., 2015). Previous studies have found that miRNA expression in cancer tissue differs from that in normal tissue (Murakami et al., 2006; Volinia et al., 2006; Ladeiro et al., 2008). miRNAs can not only inhibit tumor growth, proliferation, invasion, and metastasis but also induce many kinds of tumors. Herein, we mainly focus on the role of miRNAs in HCC. For example, miR-101 suppresses the proliferation and migration of HCC cells and tumors by targeting HGF/c-Met, Girdin, SOX9 and TGF-β in in vitro and in vivo trials (Cao et al., 2016; Yang et al., 2016; Yan et al., 2018; Liu et al., 2019). miRNA-206 targets c-Met and cyclin-dependent kinase 6 (Cdk6) to suppress development of HCC in mice. Also, miR-26a inhibits tumor growth, metastasis and angiogenesis of HCC via targeting HGF-induced c-Met signaling pathways and the FBXO11 and ST3GAL5 signaling pathways (Yang et al., 2014; Cai et al., 2017; Ma et al., 2018). miR-93 induces HCC cell proliferation and invasion by activating c-Met/PI3K/Akt signaling pathways and targeting PDCD4 and TIMP2 (Ohta et al., 2015; Ji et al., 2017; Xue et al., 2018). Thus, miRNA mimics can be designed to overexpress miRNAs that down-regulate c-Met signaling pathways, and miRNA antagonists can be designed to inhibit miRNAs that up-regulate c-Met signaling pathways (Karagonlar et al., 2015). Moreover, one of the major advantages of miRNA therapy is the simultaneous targeting of multiple effectors in several signaling pathways involved in tumorigenesis.

Side Effects of c-Met Inhibitors

Although c-Met inhibitors show a survival benefit for advanced HCC, some toxicity and adverse effects have also been demonstrated.
For instance, compound 8 was a potent and highly selective ATP-competitive c-Met inhibitor. Moreover, compound 8 showed good oral bioavailability, a good half-life, and moderate plasma clearance and volume of distribution. In addition, compound 8 also demonstrated effective tumor inhibition. However, it increased heart rate and cardiac output and induced myocardial degeneration in mice, and it was therefore terminated as a preclinical candidate (Cui et al., 2013). Also, adverse effects such as hypertension, decreased appetite, ascites and pyrexia were found in a phase I/II multicenter study of single-agent foretinib. Additionally, ascites, anemia, abdominal pain, and neutropenia were observed in a phase III study of tivantinib (Rimassa et al., 2018). The reasons for these aforementioned side effects may be correlated with the physiological functions of HGF/c-Met in many organs, including cytoprotection, regeneration, and reduction of apoptosis after injury (Birchmeier et al., 2003). Thus, c-Met inhibitors may block the physiological function of HGF/c-Met and induce side effects. Moreover, the adverse effects of c-Met inhibitors may be due to the inhibition of non-c-Met targets when non-selective c-Met inhibitors are used.

Inhibitors of c-Met Downstream Mediators

To address the side effects mentioned above, several therapeutic strategies have been suggested. Among them, specifically targeting the downstream mediators of c-Met involved in tumor progression is a promising method, including the Grb2 SH2 domain, Src, MAPK, STAT3, Shp2, and Fak (Jo et al., 2015). Nevertheless, there are still problems, especially because these signaling pathways are shared with other RTKs, which may cause other unpredictable reactions. Therefore, this requires the identification of more specific and suitable downstream targets. Recently, endosomal processing has been demonstrated to play a pivotal role in the progression of HCC. Receptor endocytosis, either clathrin-dependent or -independent, is crucial for signal transduction (Sorkin and von Zastrow, 2009; McMahon and Boucrot, 2011). HGF binding to c-Met induces the activation of downstream signal mediators, including ERK (You et al., 2011), c-Jun N-terminal kinase (JNK) (Rodrigues et al., 1997) and AKT (Zhang et al., 2018c). Both JNK and ERK mediate HCC cell migration by phosphorylating paxillin, an HGF-induced focal adhesion signaling molecule, at serine residues. Protein kinase C ε (PKCε) and Golgi-localized γ-ear-containing ARF binding protein 3 (GGA3) regulate HGF-induced c-Met endocytosis (Kermorgant et al., 2004) to direct fluctuating JNK and paxillin signaling pathways that are involved in HCC cell migration. Importantly, endocytosis blockers, such as dynasore, could prevent HGF-induced HCC cell migration and invasion by inhibiting critical endosomal components. Thus, critical endosomal components may be promising targets in HGF/c-Met signaling pathways for HCC treatment. In addition, Hu et al. (2017) suggested that hydrogen peroxide-inducible clone-5 (Hic-5) may be crucial for c-Met signaling pathways and HCC metastasis because it mediates HGF-induced reactive oxygen species (ROS)-JNK signaling pathways in HCC, and it may also be a specific and safe target for treating HCC patients.

Natural Compounds and Herbal Medicines

In recent years, more and more studies have found that natural compounds can inhibit the progression of liver cancer.
For example, deguelin can suppress tumor angiogenesis in vascular endothelial cells by decreasing autocrine VEGF and repressing HGF-induced c-Met signaling pathways, thereby inhibiting HCC progression. Also, Cinobufacini, a well-known traditional Chinese medicine extracted from toad skins and venom glands, has a therapeutic effect in HCC (Qi et al., 2018). A study found that Cinobufacini could suppress HepG2 cell invasion and metastasis through inhibition of the c-Met/ERK-induced EMT (Qi et al., 2018). Moreover, madecassoside (MAD), isolated from Centella asiatica, could repress the activation of the HGF-induced c-Met-PKC-ERK1/2-Cyclooxygenase-2 (COX-2)-Prostaglandin E2 (PGE2) cascade to inhibit HCC cell proliferation and invasion. In conclusion, natural compounds and herbal medicines may be potential therapeutic agents for HCC.

Resistance to c-Met Inhibitors and Sorafenib, and Combined Inhibition of HGF/c-Met and Other Pathways

c-Met inhibitor therapy has failed to result in satisfactory outcomes in phase III clinical trials for HCC. Therefore, it is urgent to understand the mechanisms and find new strategies such as effective combination therapies. Several studies have revealed resistance to c-Met inhibitors through various mechanisms. First, c-Met inhibitors only benefit HCC patients with high c-Met expression; thus, enrolling HCC patients with low c-Met expression could lead to apparent resistance to c-Met inhibitors. Second, as mentioned above, c-Met inhibitors that target the interaction between HGF and c-Met may lose efficacy owing to cell attachment (Wang et al., 2001), gene amplification of c-Met, DCP binding to c-Met (Suzuki et al., 2005), gene mutation in the c-Met activation loop (Okuma and Kondo, 2016) and crosstalk with other membrane receptors (Garcia-Vilas and Medina, 2018). Third, inhibition of c-Met signaling pathways triggers the EGFR pathway as a compensatory survival pathway (Steinway et al., 2015). Fourth, the phosphorylation status of FGFR determines the different sensitivities of HCC cells to c-Met inhibitors (Jo et al., 2015). Fifth, Li H. et al. (2019) suggested that c-Met inhibitors up-regulate the expression of PD-L1 in HCC cells by suppressing GSK3B-mediated PD-L1 degradation, inducing T-cell suppression and tumor evasion of the immune response. Finally, when the activation of HGF-induced c-Met is inhibited, HCC cells can sustain survival through Y1234/1235-dephosphorylated c-Met-induced autophagy. New therapeutic strategies have been developed against the mechanisms of c-Met inhibitor resistance described above. Among them, combining c-Met inhibitors with other pathway inhibitors is a promising treatment. For example, combined inhibition of both the c-Met and EGFR pathways could repress the tumor growth of HCC (Steinway et al., 2015). Additionally, targeting both the c-Met and FGFR pathways provides superior suppression of HCC progression (Jo et al., 2015). Also, the combination of a c-Met inhibitor and anti-PD1 treatment represses HCC growth and improves mouse survival. Moreover, targeting c-Met and autophagy could overcome resistance in HCC. Recently, a growing body of evidence suggests that aberrant activation of HGF/c-Met signaling is associated with resistance to targeted therapies (Corso and Giordano, 2013), including sorafenib. Sorafenib is a standard therapy for advanced HCC; thus, resistance to sorafenib is a major concern. Firtina Karagonlar et al.
(2016) found that in HCC patients on long-term sorafenib treatment, the upregulation of HGF induces autocrine activation of HGF/c-Met signaling pathways, increasing the invasion and migration abilities of HCC cells and leading to resistance to sorafenib. Moreover, a recent study demonstrated that tumor-associated M2 macrophages secrete HGF in a feed-forward manner, leading to resistance to sorafenib (Dong et al., 2019). Also, a new study found that HGF activates phosphorylated (P)-ERK/Snail/EMT, P-STAT3/Snail/EMT and AKT/ERK1/2-EGR1 (Han et al., 2017; Chen and Xia, 2019; Xiang et al., 2019) signaling pathways to induce resistance to sorafenib. According to the study by Chen, lncRNA NEAT1 induces sorafenib resistance in HCC patients by repressing miR-335 expression and activating the c-Met/Akt signaling pathway (Chen and Xia, 2019). Thus, the concentration of HGF in serum may be a potential predictive marker for sorafenib efficacy (Shao et al., 2015), and the combination of HGF and c-Met inhibitors with sorafenib could improve the efficacy of first-line systemic treatment (Goyal et al., 2013; Dong et al., 2019). For instance, regorafenib plays a crucial role in reversing HGF-induced sorafenib resistance through inhibition of the EMT. Angiopoietin-like protein 1 (ANGPTL1) not only inhibits sorafenib resistance but also inhibits cancer stemness and tumor growth of HCC cells via suppressing the EMT through the Met-AKT/ERK-EGR-1-Slug signaling cascade. In conclusion, although the tivantinib phase III trial failed and the reason is not clear, the role of c-Met inhibitors in treating HCC cannot be denied. c-Met inhibitors remain a mainstream research direction and deserve further study of the causes of failure as well as new clinical trials. miRNAs, natural compounds and herbal medicines are emerging treatments for HCC that can inhibit multiple pathways, including the c-Met signaling pathway; however, this also means that there may be other side effects. The advantage of inhibitors of c-Met downstream pathways is that they are highly targeted and have few side effects, but this requires us to further understand the key targets of HCC. Drug resistance may be due to the heterogeneity of liver cancer cells, and combination therapy may be a good solution.

CONCLUSION

The HGF/c-Met axis has an important role in cellular behaviors such as cell proliferation, migration, survival, morphogenesis, and the EMT. Moreover, it is also essential for liver formation, growth, regeneration, protection and angiogenesis during embryonic development and in adulthood after injury. Especially in chronic liver diseases, inflammation decreases hepatocytes and increases the need for c-Met activity to promote hepatocyte proliferation and regeneration and to suppress inflammation. Nevertheless, the aberrant activation of c-Met and downstream signaling pathways through overexpression of HGF or c-Met, gene amplification, mutational activation of c-Met, down-regulation of Met-targeted miRNAs, binding to other ligands, autocrine signaling or abnormally high HGF levels initiates and drives tumorigenesis and promotes tumor growth, invasion, metastasis, and angiogenesis in HCC. In the progression of liver cancer, c-Met is regulated by various factors such as miRNAs and SOCS1. Furthermore, c-Met cooperates with other signaling pathways such as MUC1 or β-catenin in promoting tumorigenesis. Therefore, the treatment of liver cancer with c-Met as a target is a potential and promising therapeutic strategy.
So far, six c-Met inhibitors have entered clinical trials and have been shown to inhibit tumor growth and invasion. Moreover, selective c-Met inhibitors are superior to non-selective c-Met inhibitors in the treatment of liver cancer due to their lower toxicity. However, the failure of the tivantinib phase III trial suggests that we need to further study the causes of failure and the feasibility of c-Met inhibitors in treating HCC. In addition, we should also consider the relationship between c-Met inhibitors and liver disease, although there is no clinical evidence that c-Met inhibitors worsen liver function. In chronic liver diseases, c-Met expression is increased to promote hepatocyte proliferation and inhibit inflammation; c-Met inhibitors suppress this positive regulation and may accelerate advanced liver disease. Meanwhile, liver diseases affect drug pharmacokinetics and pharmacodynamics, reduce enzyme activity, impair hepatic clearance of drugs and even change the interplay between drugs. These effects may alter the dose of a drug needed to reach the desired blood concentration and induce novel toxicities. Thus, we should consider whether patients with Child-Pugh B or C disease can tolerate the doses established in clinical trials, because in the past most clinical trials were conducted in patients with Child-Pugh A disease. Besides c-Met inhibitors, many new therapeutic strategies have been developed, such as the use of miRNAs to regulate HGF/c-Met signaling pathways to inhibit liver cancer progression, the targeting of endocytosis, further downstream molecules and Hic-5 to reduce the side effects of c-Met inhibitors, as well as herbal treatments. c-Met is also involved in the resistance mechanism of sorafenib, which may be addressed by combining a c-Met inhibitor with sorafenib. For resistance to c-Met inhibitors themselves, combinations of c-Met inhibitors with other inhibitors, such as FGFR and EGFR inhibitors, autophagy inhibitors and anti-PD1 treatment, can be used to solve this problem. Whether HGF/c-Met can be used as independent diagnostic and prognostic markers still requires further research, but HGF/c-Met in combination with other diagnostic and prognostic markers is valuable. In general, the mechanism of HGF/c-Met pathway involvement in liver cancer requires more research, and c-Met inhibitors are a potential and promising therapeutic strategy in patients with HCC.

AUTHOR CONTRIBUTIONS

ZR and ZY proposed the study and are the guarantors. HW, BR, and JL performed the research and wrote the first draft. All authors contributed to the interpretation of the study and to further drafts. The funding sources had no role in the design of this study, nor any role in its execution, analyses, data interpretation, or the decision to submit results.
Shear Bond Strength and Fracture Analysis of Human vs. Bovine Teeth Purpose To evaluate whether bovine enamel and dentin are appropriate substitutes for the respective human hard tooth tissues in tests of shear bond strength (SBS) and fracture analysis. Materials and Methods 80 sound and caries-free human erupted third molars and 80 freshly extracted bovine permanent central incisors (10 specimens for each group) were used to investigate enamel and dentin adhesion of one 2-step self-etch (SE) and one 3-step etch and rinse (E&R) product. To test SBS, the buccal or labial areas were ground plane to obtain appropriate enamel or dentin areas. SE and E&R were applied and SBS was measured prior to and after 500 thermocycles between +5 and +55°C. Fracture analysis was performed for all debonded areas. Results ANOVA revealed significant differences of enamel and dentin SBS prior to and after thermocycling for both of the adhesives. SBS of E&R-bonded human enamel increased after thermocycling but that of SE-bonded enamel did not. SE-bonded bovine enamel showed higher SBS after thermocycling, whereas E&R-bonded bovine enamel had lower SBS. No differences were found for SE- or E&R-bonded human dentin prior to or after thermocycling, but SBS of SE-bonded bovine dentin increased whereas that of E&R-bonded bovine dentin decreased. Considering the totalized and adhesive failures, fracture analysis did not show significant differences between the adhesives or the respective tooth tissues prior to or after thermocycling. Conclusion Although SBS was different on human and bovine teeth, no differences were found for fracture analysis. This indicates that SBS tests conducted solely on bovine substrate are not sufficient to judge the performance of adhesives; thus bovine teeth are questionable as a substrate for shear bond testing.

Introduction

To harvest sound human teeth for in vitro testing of adhesive systems is becoming more and more difficult, since indicated extractions are declining considerably. Furthermore, ethical aspects have attracted more interest when human tissue is involved. Therefore, many scientists use bovine teeth as substitutes for human teeth to test bond strength [1][2][3][4][5][6][7][8][9]. As a consequence, other authors have explored whether there are differences in bond strength [10][11][12][13], microleakage [14] and morphology [15,16] of human versus bovine teeth. Camargo et al. [16] found that, as regards the number of dentin tubules, the bovine specimens presented a significantly higher mean value than the human specimens, but no difference in the diameters of human and bovine dentin tubules was observed. Bovine enamel was reported to demineralize and erode faster than human enamel [17]. Saleh et al. [10] and Schilke et al. [18] discovered highly significant differences between shear and tensile bond strengths of human and bovine enamel; however, regression prediction equations supported the use of bovine teeth as a reliable substitute for human counterparts in bonding studies of orthodontic adhesion [10]. Söderholm [19] stated in his letter to the editor that bond strength values do not represent the true stress levels triggering failure of resin adhesion to hard tooth tissues. In his opinion, taking a fracture-mechanical approach might be more appropriate. It is widely accepted that a shear bond test which pulls out tooth substrate must mean that the adhesive strength is superior to the cohesive strength of the tooth substrate, and that the obtained value can then no longer be interpreted quantitatively [20].
Following this hypothesis, the present investigation evaluated bovine teeth as a substitute for human teeth not only by measuring shear bond strength but also by performing fracture analysis, which had not been done in the identified literature. The null hypothesis was that no differences between human and bovine teeth occur in (a) shear bond strength and (b) fracture analysis.

Materials and Methods

Two commercial adhesives, one 2-step self-etch and one 3-step etch & rinse product, were selected (Table 1). Shear bond strength on human and bovine enamel and dentin prior to and after thermocycling was measured, and fracture analysis was conducted after debonding. 80 sound and caries-free human erupted third permanent molars of 18- to 40-year-old patients extracted for surgical reasons and 80 freshly extracted bovine permanent central incisors were thoroughly washed in running water, and all blood and adherent tissues were mechanically removed. The bovine teeth were not older than 2 days after the animals had been slaughtered. Regarding the human teeth, all patients were informed that their molars would be used for scientific research. All patients gave their consent verbally. The samples were collected by two dentists in their private offices and transferred to us anonymously, so that identification of an individual tooth was impossible. The ethics committee of the medical faculty of the Heinrich-Heine-University of Düsseldorf gave formal approval (internal study number: 4094). Until preparation for shear bond strength measurement, the teeth were stored for no longer than a maximum period of 4 weeks according to ISO/TS 11405:2003, the first week in a 0.5% chloramine T trihydrate (Sigma Aldrich Chemie GmbH, Taufkirchen, Germany) bacteriostatic/bactericidal solution and thereafter in distilled water at 4±2°C. To obtain similar dentin quality, the teeth were X-rayed to determine the distance between the pulp chamber and the dentin surface to be bonded (X-ray device Philips Oralix U3-DC, Soredex Ltd., Helsinki, Finland, application data: 0.32 s, 10 mA, 60 kV, film Agfa Dentus M2, Class-D, Heraeus Kulzer GmbH, Hanau, Germany). Afterwards they were embedded in MMA/PMMA embedding resin (Technovit 4000, Heraeus Kulzer GmbH, Hanau, Germany) using a polyethylene mold (diameter: 25 mm, height: 30 mm) so that their buccal or labial areas, respectively, were close to the surface of the embedding resin. After removal from the mold, the teeth were ground under water cooling with 600 grit, 800 grit and finally 1000 grit grinding paper until a plane enamel or dentin area of at least 7 mm in diameter was exposed and a minimum dentin layer of 2.5 mm remained above the pulp chamber. Grinding was done by the same person by hand on a plane table. The prepared human and bovine enamel and dentin specimens were randomly arranged in four groups of twenty specimens each for each of the adhesives, and their surfaces were treated with the respective adhesive according to the manufacturer's instructions for use (Table 2). Thereafter, a black opaque Teflon split mold (diameter: 25±0.5 mm, thickness: 2±0.1 mm) with a 3±0.1 mm diameter hole in its center was fixed on the thus treated surfaces, and the hole was filled with a first increment of approximately 0.8 mm of Clearfil AP-X (shade A3, #01122B, Kuraray Co. Inc., Kurashiki, Japan). The final increment was covered with a 0.05 mm transparent polyester foil prior to curing to avoid an inhibition layer. Each increment was light cured for 40 s.
Light curing of all specimens was done with the tungsten halogen light Hilux Ultra Plus (Benlioglu Dental Inc., Ankara, Turkey) and the 11 mm diameter light guide in the constant polymerization mode (full light power from the start). Each time after a series of ten specimens had been cured, the output of the curing device was checked with the Curing Light Meter (Benlioglu Dental Inc.). Irradiances between 750 and 850 mW cm⁻² (mean 800±67 mW cm⁻²) were measured and no significant decrease of the output could be observed. After polymerization, all specimens were stored for 24 hours in water at 37°C. One half of the specimens was shear bond tested immediately and the other half was thermocycled 500 times in water between +5°C and +55°C. The specimens were left for 30 s at each temperature level. The transfer time was 15 s. The shear test was carried out according to ISO/TS 11405:2003, Annex A test methods for measurement of bond strength [21], with a shear test device as described by ISO 10477 Amendment 1 (Figure 1) [22] and a Universal Testing Machine (Test GmbH, Erkrath, Germany). The cylinders formed by the resin-based restorative material had diameters of 3±0.1 mm and were loaded with a constant crosshead speed of 0.75 mm min⁻¹. The load at break was recorded and the bond strength B was calculated in MPa using the formula B = F·S⁻¹, in which F is the load at break in N and S is the bonded area of the cylinder in mm². Fracture analysis was conducted after the shear test. Digital photographs were taken of the de-bonded surfaces (Canon EOS 20D, Canon Inc., Tokyo, Japan) and fracture analysis was performed by visual inspection with Scion Image 4.0.2 scientific photo software (Scion Corporation, Frederick, MD, USA).

Statistical Analysis

Means and standard deviations were calculated. Normal distribution was tested by the Kolmogorov-Smirnov test. One-way ANOVA and post hoc Scheffé's test were carried out for shear bond strength and surface tension (SPSS 15.0, SPSS, Chicago, IL, USA). This was performed separately for each of the different properties. Significant changes of shear bond strength prior to and after thermocycling were calculated with the least significant difference (LSD) ANOVA. Results of the fracture analysis were compared with the non-parametric Mann-Whitney U test. The Wilcoxon signed ranks test was used to calculate significances between the cohesive failures in the resin and the tooth.

Results

The results of the shear bond strength test are shown in Table 3 and the results of the fracture analysis are shown in Table 4. ANOVA revealed significant differences of enamel and dentin shear bond strength values prior to and after thermocycling for both of the tested adhesives. No differences were found for Clearfil SE Bond and Optibond FL human enamel specimens prior to and after thermocycling. The bovine enamel specimens of Clearfil SE Bond showed higher bond strength after thermocycling, but those of Optibond FL showed lower results. No differences were found for Clearfil SE Bond-treated or for Optibond FL-treated human dentin prior to and after thermocycling, yet the bond strength of Clearfil SE Bond-treated bovine dentin specimens increased whereas that of the Optibond FL-treated specimens decreased. Prior to and after thermocycling, significant differences in shear bond strength were found between human and bovine enamel as well as between human and bovine dentin for both of the tested adhesives.
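To make the bond-strength calculation and the group comparisons described above concrete, the following Python sketch is a minimal illustration only: the load values are invented (they are not the data of Table 3), the 3 mm cylinder diameter follows the specimen geometry described in the methods, and the scipy routines merely stand in for the SPSS procedures that were actually used.

```python
import numpy as np
from scipy import stats

def shear_bond_strength(load_n, diameter_mm=3.0):
    """B = F / S in MPa; S is the bonded area of the ~3 mm resin cylinder."""
    area_mm2 = np.pi * (diameter_mm / 2.0) ** 2   # N / mm^2 equals MPa
    return np.asarray(load_n, dtype=float) / area_mm2

# Hypothetical loads at break [N] for two groups (e.g., human vs. bovine enamel)
human_sbs = shear_bond_strength([180, 165, 172, 190, 158, 175, 169, 184, 177, 162])
bovine_sbs = shear_bond_strength([140, 133, 151, 128, 145, 137, 149, 131, 142, 136])

print("human:  %.1f +/- %.1f MPa" % (human_sbs.mean(), human_sbs.std(ddof=1)))
print("bovine: %.1f +/- %.1f MPa" % (bovine_sbs.mean(), bovine_sbs.std(ddof=1)))

# One-way ANOVA for the bond-strength groups and a non-parametric Mann-Whitney U
# test, mirroring in spirit the parametric and non-parametric comparisons above
print(stats.f_oneway(human_sbs, bovine_sbs))
print(stats.mannwhitneyu(human_sbs, bovine_sbs))
```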
Shear bond strengths of human and bovine enamel differed significantly for Clearfil SE Bond and Optibond FL prior to thermocycling, but after thermocycling the significance remained only for Optibond FL. The human and bovine dentin samples showed significant differences only for Optibond FL prior to but not after thermocycling; the Clearfil SE Bond samples behaved contrariwise. Fracture analysis data did not reveal significant differences either between the materials or between the respective hard tooth tissues when the adhesive and the totalized cohesive failures were considered. Significances were only detected in the cohesive failures in the resin or the tooth tissues, respectively. No correlation was found between shear bond strength and cohesive or adhesive failures, respectively.

Discussion

To test adhesion on human enamel and dentin in comparison with bovine enamel and dentin, a conventional 3-step etch and rinse system and a 2-step self-etch system were used to evaluate whether different results occurred for different bonding approaches. Shear bond strength measurement and fracture analysis are well-established methods to evaluate resin-enamel or resin-dentin adhesion [12,18,21,[23][24][25][26][27]. Although the shear bond test is critically discussed and strongly competes with micro-tensile and micro-shear bond tests, it is still considered to be a valid method and, therefore, is also used in the most recently published literature [25,[28][29][30][31][32][33][34]. However, micro-tensile and micro-shear bond tests also have to be discussed critically. Placido et al. [35] reviewed the different test methods and compared the shear bond and micro-shear bond tests using finite element stress analysis. They concluded that although a shear load was applied in both tests, there was always a predominance of tensile stresses. They further concluded that the relatively thicker adhesive layer in the micro-shear test concentrates stresses, strongly influencing the maximum load. Therefore, they judged the micro-tensile test to represent shear bond strength less well than the shear test. There are also specific critical aspects of the micro-tensile test [20,[36][37][38]. Different values are achieved for different bonding areas, meaning the smaller the area, the higher the bond strength [35,38], and finite element analysis proved a strong influence of specimen attachment and dimension on micro-tensile strength [36]. Therefore, there is no ideal bond test and there is still a need for the standardization of test procedures [20]. Thermocycling, an adequate procedure to simulate aging processes, is also required by ISO/TS 11405:2003 "Dental materials - Testing of adhesion to tooth structure" [21]. There are numerous publications reporting bovine teeth being used to evaluate dentin bond strength of adhesive resins, but there are only a few which directly compare the results obtained from bovine enamel or dentin with the respective human hard tooth tissues [11,13,18,39]. Shear bond strength of standardized orthodontic brackets on human and bovine enamel was tested with the result that bond strength on bovine enamel was approximately 40% lower than on human enamel [40].

Table 2. Application of the adhesives according to the manufacturers' instructions for use.
Clearfil SE Bond: Enamel or dentin was carefully dried with oil-free air. Primer was applied to the entire tooth with a brush, left in place for 20 s, and finally the volatile ingredients were evaporated for 10 to 15 s with a mild oil-free air stream. Bond was then also applied with a brush, dispersed with a very weak stream of air and polymerized for 10 s.
Optibond FL: Enamel or dentin was carefully dried with oil-free air. Etchant was applied on enamel for 30 s and on dentin for 15 s and then thoroughly rinsed off for 20 s with water. The tooth was gently dried with oil-free air to avoid desiccation. Prime (Bottle 1) was applied and rubbed in for 15 s and gently dried with air for approximately 5 s. Finally, adhesive (Bottle 2) was applied, spread with air to a thin layer and polymerized for 20 s.

Statistical analysis from other authors also revealed a highly significant difference between shear and tensile bond strengths of human and bovine enamel; however, regression prediction equations supported the use of bovine teeth as a reliable substitute for human counterparts in bonding studies of orthodontic adhesion [10]. Reis et al. [11] used a micro-tensile bond strength test to measure and compare bond strength of adhesive resins on human and bovine enamel and dentin. They found no statistically significant differences between these hard tooth tissues and concluded that bovine teeth proved to be possible substitutes for human teeth in either dentin or enamel bond testing, which is in accordance with other investigations [13]. The results of the present investigation (Tables 3 and 4) were in accordance with the literature. However, prior to thermocycling the 2-step self-etch system Clearfil SE Bond performed significantly better on human than on bovine enamel, but no significant difference was found for the 3-step etch and rinse adhesive Optibond FL. The opposite was observed after aging. Since bovine enamel and dentin develop more rapidly during tooth formation, bovine enamel has larger crystal grains and more lattice defects than human enamel [40]. There is a high probability that these facts influence bond strength, because different grain sizes and defective lattice structures will be attacked differently by chemicals. This might explain the different performance of self-etch and etch and rinse adhesives. The same inconsistency is apparent in the results of the dentin measurements. Here, Clearfil SE Bond (self-etch) showed no difference in shear bond strength between human and bovine dentin, but Optibond FL (etch and rinse) performed better on bovine dentin. After thermocycling, Optibond FL lost but Clearfil SE Bond gained bond strength. Again, the findings of the present study are in accordance with some authors [12,13] but not with other literature [11,39]. Retief et al. [12] found significantly lower shear bond strength on bovine dentin despite denser penetration of the adhesive system into bovine than into human dentin. Therefore, they concluded that the use of bovine teeth instead of human teeth is not indicated. It is quite certain that the different bond strength tests and materials used by the different authors caused the disagreements in the results for enamel as well as for dentin bond strength. However, some morphological differences between human and bovine teeth have also been reported [16,41] that are due to the more rapid development of bovine enamel and dentin [40]. Furthermore, it has to be considered that teeth are biological materials whose properties are influenced by various factors during formation and usage and therefore vary over a broad range. Adhesion was also investigated by microleakage studies. They showed that there are no statistically significant differences between the behavior of human and bovine substrates [14].
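As a small illustration of how the failure modes discussed in the following paragraphs can be quantified from the debonded-area measurements, the sketch below uses invented area fractions (they are not the data of Table 4), and the scipy call is only a stand-in for the non-parametric comparison named in the statistical analysis.

```python
import numpy as np
from scipy import stats

def failure_fractions(adhesive_mm2, cohesive_resin_mm2, cohesive_tooth_mm2):
    """Return percentages of adhesive, cohesive-in-resin, cohesive-in-tooth and
    totalized cohesive failure for one debonded specimen."""
    total = adhesive_mm2 + cohesive_resin_mm2 + cohesive_tooth_mm2
    adh = 100.0 * adhesive_mm2 / total
    coh_resin = 100.0 * cohesive_resin_mm2 / total
    coh_tooth = 100.0 * cohesive_tooth_mm2 / total
    return adh, coh_resin, coh_tooth, coh_resin + coh_tooth

# invented debonded-area measurements [mm^2] for two specimen groups
human = np.array([failure_fractions(*a) for a in [(1.2, 2.8, 3.1), (0.9, 3.0, 3.2), (1.5, 2.4, 3.2)]])
bovine = np.array([failure_fractions(*a) for a in [(1.4, 2.6, 3.1), (1.1, 2.9, 3.1), (1.6, 2.5, 3.0)]])

# compare the totalized cohesive fractions (last column) between the two groups
print(stats.mannwhitneyu(human[:, 3], bovine[:, 3]))
```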
Also, the fracture-mechanical approach reported no statistical differences between human and bovine teeth, but the authors admitted that there were a few exceptions [13]. The present study also performed fracture analysis and evaluated cohesive failures in the tooth and the material as well as adhesive failures (Table 4). In nearly all cases significantly more cohesive failures occurred in the tooth structures than in the materials, indicating good bond strength. It is the authors' opinion that only the totalized cohesive failures (cohesive in the tooth plus cohesive in the resin) are relevant to judge the adhesive quality, because the adherence cannot be stronger than the inherent strength of the bonded materials. Therefore, no real bond strength values can be measured when cohesive fractures occur, which also explains why no correlation between shear bond strength and fracture pattern was detected in the present study. Furthermore, the fracture analysis results (Table 4) showed no significant differences either between human and bovine enamel or between human and bovine dentin prior to or after thermocycling for the adhesive, cohesive or totalized cohesive failures. How can it be explained that sometimes shear bond strength was low but fracture analysis showed very high cohesive fracture rates? Shear bond strength is influenced by various factors, for instance the quality of the natural substrates, the conditioning method of the substrates' surfaces, the aging method and/or the type of adhesive. Considering that bovine tooth structure has more lattice defects than human [40], acid etching (Optibond FL) might weaken bovine teeth significantly more than human teeth, resulting in lower bond strength after thermocycling. Acid etching might also attack bovine enamel more, yielding the same bond strength on human enamel with the etch and rinse (Optibond FL) as with the self-etch (Clearfil SE Bond) adhesive (Table 3). The literature supports the assumption that bovine enamel demineralizes and erodes faster than human enamel [17]. The aforesaid differences disappeared after thermocycling because bond strength increased for Clearfil SE Bond but decreased for Optibond FL. The authors hypothesize that the stronger acid attack on bovine enamel and, therefore, the stronger destruction became noticeable after thermocycling as lower bond strength values. The same reason might be of relevance when dentin shear bond strength values are considered. The self-etch product performed better on bovine dentin because its etching attack is less destructive. None of these differences were reflected by the fracture analysis, because the adhesives did not differ in their failure rates but showed significantly more totalized cohesive than adhesive failures. This indicates a very good bond between all of the tooth tissues and the adhesives. One major limitation of the present investigation is that only shear bond strength and no other bonding tests (i.e., micro-tensile bonding) were considered. Whereas some literature has already called into question whether the shear bond strength test is appropriate for bovine dentin [12], other authors performing a micro-tensile bond strength test did not report significant differences [11]. Furthermore, the authors cannot state what proportion of totalized cohesive failures is required to judge an adhesive system as performing acceptably.

Conclusion

There are numerous factors influencing bond strength between adhesives and tooth structures, so that it is very difficult to interpret the results clearly.
To obtain meaningful information about the performance of adhesives, fracture analysis is a condition sine qua non. Although shear bond strength was different on human and bovine teeth, no differences were found in fracture analysis. Therefore the null hypothesis was rejected for part (a) but accepted for part (b). Shear bond strength test on bovine teeth gives different quantitative results compared to human substrate, but additional fracture analysis on bovine teeth can give similar qualitative information. Thus, bovine teeth can only partly be recommended as a substitute for human teeth.
Stratospheric temperature measurement with scanning Fabry-Perot interferometer for wind retrieval from mobile Rayleigh Doppler lidar Temperature detection remains challenging in the low stratosphere, where the Rayleigh integration lidar is perturbed by aerosol contamination and ozone absorption while the rotational Raman lidar suffers from its low scattering cross section. To correct the impacts of temperature on the Rayleigh Doppler lidar, a high spectral resolution lidar (HSRL) based on a cavity-scanning Fabry-Perot interferometer (FPI) is developed. By considering the effect of the laser spectral width, the Doppler broadening of the molecular backscatter, the divergence of the light beam and the mirror defects of the FPI, a well-behaved transmission function is derived to show the principle of the HSRL in detail. Analysis of the statistical error of the HSRL is carried out in the data processing. A temperature lidar using both the HSRL and the Rayleigh integration technique is incorporated into the Rayleigh Doppler wind lidar. Simultaneous wind and temperature detection is carried out with the combined system at Delingha (37.371°N, 97.374°E; 2850 m above sea level) in Qinghai province, China. Lower stratospheric temperature has been measured using the HSRL between 18 and 50 km with a temporal resolution of 2000 seconds. The statistical error of the derived temperatures is between 0.2 and 9.2 K. The temperature profile retrieved from the HSRL and the wind profile from the Rayleigh Doppler lidar show good agreement with the radiosonde data. Specifically, the maximum temperature deviation between the HSRL and the radiosonde is 4.7 K from 18 km to 36 km, and it is 2.7 K between the HSRL and the Rayleigh integration lidar from 27 km to 34 km. ©2014 Optical Society of America OCIS codes: (010.0010) Atmospheric and oceanic optics; (120.0280) Remote sensing and sensors; (280.3340) Laser Doppler velocimetry; (280.3640) Lidar.

References and links
1. J. W. Meriwether and A. J. Gerrard, "Mesosphere inversion layers and stratosphere temperature enhancements," Rev. Geophys. 42, RG3003 (2004).
2. A. Gettelman, P. Hoor, L. L. Pan, W. J. Randel, M. I. Hegglin, and T. Birner, "The extratropical upper troposphere and lower stratosphere," Rev. Geophys. 49(3), RG3003 (2011).
3. M. P. Baldwin, L. J. Gray, T. J. Dunkerton, K. Hamilton, P. H. Haynes, W. J. Randel, J. R. Holton, M. J. Alexander, I. Hirota, T. Horinouchi, D. B. A. Jones, J. S. Kinnersley, C. Marquardt, K. Sato, and M. Takahashi, "The quasi-biennial oscillation," Rev. Geophys. 39(2), 179–229 (2001).
4. A. J. Gerrard, Y. Bhattacharya, and J. P. Thayer, "Observations of in-situ generated gravity waves during a stratospheric temperature enhancement (STE) event," Atmos. Chem. Phys. 11(22), 11913–11917 (2011).
5. M. P. Baldwin and T. J. Dunkerton, "Stratospheric harbingers of anomalous weather regimes," Science 294(5542), 581–584 (2001).
6. M. P. Baldwin, D. W. J. Thompson, E. F. Shuckburgh, W. A. Norton, and N. P. Gillett, "Weather from the stratosphere?" Science 301(5631), 317–319 (2003).
7. V. Ramaswamy, M. L. Chanin, J. Angell, J. Barnett, D. Gaffen, M. Gelman, P. Keckhut, Y. Koshelkov, K. Labitzke, J. J. R. Lin, A. O'Neill, J. Nash, W. Randel, R. Rood, K. Shine, M. Shiotani, and R. Swinbank, "Stratospheric temperature trends: Observations and model simulations," Rev. Geophys. 39(1), 71–122 (2001).
8. A. Behrendt, "Temperature measurements with lidar," in Lidar: Range-Resolved Optical Remote Sensing of the Atmosphere, C. Weitkamp, ed. (Springer, 2005).
9. M. Alpers, R. Eixmann, C. Fricke-Begemann, M. Gerding, and J. Höffner, "Temperature lidar measurements from 1 to 105 km altitude using resonance, Rayleigh, and Rotational Raman scattering," Atmos. Chem. Phys. 4(3), 793–800 (2004).
10. X. Chu and G. C. Papen, "Resonance fluorescence lidar," in Laser Remote Sensing, T. Fujii and T. Fukuchi, eds. (CRC, 2005).
11. M. L. Chanin and A. Hauchecorne, "Lidar studies of temperature and density using Rayleigh scattering," in International Council of Scientific Unions Middle Atmosphere Handbook (National Aeronautics and Space Administration, 1984).
12. M. Gerding, J. Höffner, J. Lautenbach, M. Rauthe, and F.-J. Lübken, "Seasonal variation of nocturnal temperatures between 1 and 105 km altitude at 54° N observed by lidar," Atmos. Chem. Phys. 8(24), 7465–7482 (2008).
13. W. N. Chen, C. C. Tsao, and J. B. Nee, "Rayleigh lidar temperature measurements in the upper troposphere and lower stratosphere," J. Atmos. Sol.-Terr. Phys. 66(1), 39–49 (2004).
14. J. P. Vernier, L. W. Thomason, J. P. Pommereau, A. Bourassa, J. Pelon, A. Garnier, A. Hauchecorne, L. Blanot, C. Trepte, D. Degenstein, and F. Vargas, "Major influence of tropical volcanic eruptions on the stratospheric aerosol layer during the last decade," Geophys. Res. Lett. 38, L12807 (2011).
15. O. E. Bazhenov, V. D. Burlakov, S. I. Dolgii, and A. V. Nevzorov, "Lidar observations of aerosol disturbances of the stratosphere over Tomsk (56.5° N; 85.0° E) in volcanic activity period 2006-2011," Int. J. Opt. 2012, 1–10 (2012).
16. T. Shibata, M. Kobuchi, and M. Maeda, "Measurements of density and temperature profiles in the middle atmosphere with a XeF lidar," Appl. Opt. 25(5), 685–688 (1986).
17. J. P. Burrows, A. Richter, A. Dehn, B. Deters, S. Himmelmann, S. Voigt, and J. Orphal, "Atmospheric remote sensing reference data from GOME: Part 2. Temperature-dependent absorption cross-sections of O3 in the 231−794 nm range," J. Quant. Spectrosc. Radiat. Transfer 61(4), 509–517 (1999).
18. A. Behrendt and J. Reichardt, "Atmospheric temperature profiling in the presence of clouds with a pure rotational Raman lidar by use of an interference-filter-based polychromator," Appl. Opt. 39(9), 1372–1378 (2000).
19. A. Behrendt, T. Nakamura, M. Onishi, R. Baumgart, and T. Tsuda, "Combined Raman lidar for the measurement of atmospheric temperature, water vapor, particle extinction coefficient, and particle backscatter coefficient," Appl. Opt. 41(36), 7657–7666 (2002).
20. A. Behrendt, T. Nakamura, and T. Tsuda, "Combined temperature lidar for measurements in the troposphere, stratosphere, and mesosphere," Appl. Opt. 43(14), 2930–2939 (2004).
21. Y. Arshinov, S. Bobrovnikov, I. Serikov, A. Ansmann, U. Wandinger, D. Althausen, I. Mattis, and D. Müller, "Daytime operation of a pure rotational Raman lidar by use of a Fabry-Perot interferometer," Appl. Opt. 44(17), 3593–3603 (2005).
22. E. Eloranta, "High spectral resolution lidar," in Lidar: Range-Resolved Optical Remote Sensing of the Atmosphere, C. Weitkamp, ed. (Springer, 2005).
23. G. G. Fiocco, G. Beneditti-Michelangeli, K. Maischberger, and E. Madonna, "Measurement of temperature and aerosol to molecule ratio in the troposphere by optical radar," Nature 229, 78–79 (1971).
24. B. Witschas, C. Lemmerz, and O. Reitebuch, "Daytime measurements of atmospheric temperature profiles (2-15 km) by lidar utilizing Rayleigh-Brillouin scattering," Opt. Lett. 39(7), 1972–1975 (2014).
25. R. L. Schwiesow and L. Lading, "Temperature profiling by Rayleigh-scattering lidar," Appl. Opt. 20(11), 1972–1979 (1981).
26. H. Shimizu, S. A. Lee, and C. Y. She, "High spectral resolution lidar system with atomic blocking filters for measuring atmospheric parameters," Appl. Opt. 22(9), 1373–1381 (1983).
27. H. Shimizu, K. Noguchi, and C. Y. She, "Atmospheric temperature measurement by a high spectral resolution lidar," Appl. Opt. 25(9), 1460–1466 (1986).
28. C. Y. She, R. J. Alvarez II, L. M. Caldwell, and D. A. Krueger, "High-spectral-resolution Rayleigh-Mie lidar measurement of aerosol and atmospheric profiles," Opt. Lett. 17(7), 541–543 (1992).
29. P. Piironen and E. W. Eloranta, "Demonstration of a high-spectral-resolution lidar based on an iodine absorption filter," Opt. Lett. 19(3), 234–236 (1994).
30. Z. Liu, I. Matsui, and N. Sugimoto, "High-spectral-resolution lidar using an iodine absorption filter for atmospheric measurements," Opt. Eng. 38(10), 1661–1670 (1999).
31. J. W. Hair, L. M. Caldwell, D. A. Krueger, and C. Y. She, "High-spectral-resolution lidar with iodine-vapor filters: measurement of atmospheric-state and aerosol profiles," Appl. Opt. 40(30), 5280–5294 (2001).
32. D. Hua, M. Uchida, and T. Kobayashi, "Ultraviolet Rayleigh-Mie lidar with Mie-scattering correction by Fabry-Perot etalons for temperature profiling of the troposphere," Appl. Opt. 44(7), 1305–1314 (2005).
33. D. Hua, M. Uchida, and T. Kobayashi, "Ultraviolet Rayleigh-Mie lidar for daytime-temperature profiling of the troposphere," Appl. Opt. 44(7), 1315–1322 (2005).
34. W. Huang, X. Chu, J. Wiig, B. Tan, C. Yamashita, T. Yuan, J. Yue, S. D. Harrell, C. Y. She, B. P. Williams, J. S. Friedman, and R. M. Hardesty, "Field demonstration of simultaneous wind and temperature measurements from 5 to 50 km with a Na double-edge magneto-optic filter in a multi-frequency Doppler lidar," Opt. Lett. 34(10), 1552–1554 (2009).
35. Z. S. Liu, D. C. Bi, X. Q. Song, J. B. Xia, R. Z. Li, Z. J. Wang, and C. Y. She, "Iodine-filter-based high spectral resolution lidar for atmospheric temperature measurements," Opt. Lett. 34(18), 2712–2714 (2009).
36. Z. Cheng, D. Liu, Y. Yang, L. Yang, and H. Huang, "Interferometric filters for spectral discrimination in high-spectral-resolution lidar: performance comparisons between Fabry-Perot interferometer and field-widened Michelson interferometer," Appl. Opt. 52(32), 7838–7850 (2013).
37. D. Liu, Y. Yang, Z. Cheng, H. Huang, B. Zhang, T. Ling, and Y. Shen, "Retrieval and analysis of a polarized high-spectral-resolution lidar for profiling aerosol optical properties," Opt. Express 21(11), 13084–13093 (2013).
38. Z. Cheng, D. Liu, J. Luo, Y. Yang, L. Su, L. Yang, H. Huang, and Y.
Shen, "Effects of spectr

Introduction

The middle atmosphere is that portion of the Earth's atmosphere between two temperature minima, at about 12 km altitude (the tropopause) and at about 85 km (the mesopause), comprising the stratosphere and mesosphere. In spite of intensive research activities over the past decades, the underlying mechanisms of some phenomena in this region, for instance the stratospheric temperature enhancement and the mesospheric inversion layer, remain poorly understood [1–4]. The troposphere influences the stratosphere mainly through atmospheric waves propagating upward. Recent research shows that the stratosphere organizes the chaotic wave forcing from below to create long-lived changes in its circulation, and exerts an impact on tropospheric weather and climate. Thus, understanding the middle atmosphere is also essential for tropospheric weather prediction [5,6]. Rocketsonde data are available since the early 1960s. However, such results are sporadic because the means for exploring the middle atmosphere are expensive. Even decades of rocket launches, radiosonde observations, and satellite and aircraft measurements provide only pieces of the whole picture [7]. Today, as one of the most promising remote sensing techniques, lidar for atmospheric research has shown its inherent advantages, including high spatial and temporal resolution, the potential of covering the height region from the boundary layer to the mesosphere, and the detection of various atmospheric parameters, such as temperature, pressure, density and wind, as well as trace constituents. In particular, temperature lidar techniques are approaching maturity for routine observations [8]. Specifically, the resonance fluorescence technique, the Rayleigh integration technique and the rotational Raman technique are combined to cover the height region from the lower thermosphere to the ground [9]. The resonance fluorescence technique is restricted to the altitude range between 80 and 105 km, where layers of metallic species, such as Fe, Ca, Na, and K atoms or ions, exist [10]. The Rayleigh integration lidar has been proven to be the simplest tool for temperature detection in the mesosphere and stratosphere. Temperature is calculated from the molecular number density by assuming hydrostatic equilibrium. In addition, the top-to-bottom integration retrieval needs a reference point with known temperature at the beginning [11]. However, in the lower stratosphere, aerosol contamination and ozone absorption make the lidar backscatter no longer proportional to the molecular number density [12,13]. So the Rayleigh backscatter must be corrected carefully for temperature detection. Unfortunately, in the lower stratosphere, recent volcanic eruptions aggravate the aerosol disturbances, which cannot be treated as a low and stable background anymore [14,15]. The ozone layer, which absorbs ultraviolet energy from the sun, is located primarily in the stratosphere, at altitudes of 15 to 35 km. The impact of O3 above 30 km on the Rayleigh temperature lidar can be ignored at a working wavelength of about 350 nm, with an error smaller than 1% [16]. One should note that the absorption cross section of O3 varies with wavelength; it is about 30 times larger at 532 nm than at 355 nm [17].
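As an illustration of the Rayleigh integration retrieval mentioned above, the following Python sketch implements the standard hydrostatic top-to-bottom integration on a synthetic density profile; the numbers (gravity, mean molecular mass, scale height, seed temperature) are assumed for the example only and are not parameters of this lidar.

```python
import numpy as np

K_B = 1.380649e-23   # Boltzmann constant [J/K]
M = 4.81e-26         # assumed mean molecular mass of air [kg]
G = 9.5              # assumed (constant) gravitational acceleration [m/s^2]

def rayleigh_integration_temperature(z, n_rel, t_top):
    """Top-to-bottom integration assuming hydrostatic equilibrium and the ideal
    gas law: n(z)*T(z) = n(z_top)*T(z_top) + (M*G/K_B) * integral_z^{z_top} n dz'."""
    temp = np.zeros_like(z, dtype=float)
    temp[-1] = t_top                                  # reference point with known temperature
    for i in range(len(z) - 2, -1, -1):
        dz = z[i + 1] - z[i]
        layer = 0.5 * (n_rel[i] + n_rel[i + 1]) * dz  # trapezoidal hydrostatic increment
        temp[i] = (n_rel[i + 1] * temp[i + 1] + (M * G / K_B) * layer) / n_rel[i]
    return temp

# synthetic relative-density profile (what the corrected Rayleigh signal is proportional to)
z = np.arange(30e3, 65e3, 500.0)     # altitude grid [m]
n_rel = np.exp(-z / 7000.0)          # assumed 7 km scale height
print(rayleigh_integration_temperature(z, n_rel, t_top=240.0)[:3])
```

Because the relative weight of the seed value decreases with the density ratio, the influence of the reference temperature fades rapidly a few scale heights below the top, which is why it only needs to be approximately known.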
Generally, to extend the detection altitude downward below 30 km, the rotational Raman technique is used for direct temperature detection, where two portions of the pure rotational Raman signal having opposite dependences on temperature are extracted by using filters with narrow bandwidth. The quite low rotational Raman scattering cross section requires sophisticated filters to suppress the disturbances from Rayleigh scattering and solar radiation. Nowadays, state-of-the-art rotational Raman lidars can be used to retrieve temperature up to 25 km. However, above this altitude, the statistical error usually exceeds 10 K [18–21]. The overview above leads to the conclusion that temperature detection remains challenging in the low stratosphere, where the Rayleigh integration lidar is perturbed by aerosol contamination and ozone absorption while the rotational Raman lidar suffers from its low scattering cross section. Fortunately, we demonstrate in this work that this dilemma can be resolved by using the so-called high spectral resolution lidar (HSRL). According to the different implementations, HSRL techniques fall into two categories [22]. On the one hand, the entire Cabannes line (more precisely, the sum of the Landau-Placzek line and the Brillouin doublet) is obtained by scanning either the Fabry-Perot interferometer (FPI) or the laser. Then the temperature is calculated from the fitted linewidth of the Cabannes line [23,24]. On the other hand, the lidar signal is measured before and after passing through a static filter, for instance a fixed FPI, Michelson interferometers, or atomic or molecular absorption cells. This resolves the temperature-dependent transmission of the Cabannes line through the filters [25–35]. In comparison, the former method is less efficient due to the fact that the ultra-narrow optical filter rejects most of the energy of the Cabannes scattering at each step of the scanning procedure. However, it shows immunity against sunlight and Mie contamination [24]. Of course, HSRL is also a multi-function technique beyond temperature detection [36–38]. The atmospheric temperature profile is necessary as an input parameter to the Rayleigh Doppler lidar. For example, a 1 K error on the actual temperature inside the sensing volume leads to a relative error of 0.2% of the true LOS wind for ADM-Aeolus [39]. In this paper, an HSRL using a scanning FPI is incorporated into a mobile Rayleigh Doppler lidar for temperature detection from 18 to 35 km, and the Rayleigh integration lidar is used to retrieve temperature from 30 to 65 km. The combined system permits simultaneous atmospheric temperature and wind detection.

Principle

The key instrument inside the optical receiver of the HSRL used in this work is a cavity-tunable FPI. Three piezo-electric actuators are used to tune the cavity, while capacitance sensors fabricated onto the mirror surfaces are used to sense changes in parallelism and cavity length. The FPI is mounted in a sealed cell with high-efficiency anti-reflection coated windows and a heater assembly around it. This eliminates the impact of changes in environmental pressure, temperature and humidity on both the capacitance micrometers and the optical cavity length.
The transmission function of a perfect parallel-plane FPI is the Airy function

h(ν) = T_p / {1 + [4 R_e / (1 − R_e)²] sin²(π ν cos θ / Δν_FSR)},    (1)

where R_e is the effective surface reflectance, ν is the optical frequency relative to the center frequency of the laser, θ is the angle of incidence of the light beams on the surfaces from within the interspace, μ is the effective refractive index of the interspace, and Δν_FSR = c/(2μd) is the free spectral range for a plate separation d. T_p is the peak transmittance, given by

T_p = [1 − A/(1 − R_e)]²,    (2)

where A is the surface absorptance. For the air-gapped FPI used in this paper, where μ ≈ 1, the transmission function can be written as the real part of a geometric series,

h(ν) = T_p (1 − R_e)/(1 + R_e) · ℜ{1 + 2 Σ_{n=1}^{∞} R_e^n exp(i 2π n ν cos θ / Δν_FSR)},    (3)

where ℜ denotes the real part of a complex number. Taking the real part term by term, a Fourier series can be derived from Eq. (3) to describe the FPI transmission:

h(ν) = T_p (1 − R_e)/(1 + R_e) [1 + 2 Σ_{n=1}^{∞} R_e^n cos(2π n ν cos θ / Δν_FSR)].    (4)

This Fourier-series-type formulation has been found particularly useful for further evaluation and computation of experimental profiles, mainly because it permits simple convolution with other common functions, notably Gaussian and Lorentzian functions [40].

In our lidar system, a multimode fiber delivers the atmospheric backscatter from the telescope to the receiver. This configuration provides mechanical decoupling and remote placement of the lidar components. Furthermore, the fiber reduces the field of view of the telescope at the input end and defines the divergence of the collimated beam normal to the FPI at the output end. Assuming that the incident illumination is made uniform by a mode scrambler [41,42], the actual transmission function is the average over all rays from normal incidence to the half-maximum divergence θ_0:

h̄(ν) = [1/(1 − cos θ_0)] ∫_{cos θ_0}^{1} h(ν, cos θ) d(cos θ).    (5)

Substituting the standard power-reduction and sum-to-product formulas (Eqs. (6) and (7)) into Eq. (5) and carrying out the integration, each Fourier harmonic of Eq. (4) acquires an additional factor sinc(n φ_0), where φ_0 ≈ π ν_0 θ_0² / (2 Δν_FSR) is set by the center optical frequency ν_0 and the divergence half-angle θ_0 (Eq. (8)). During the scanning procedure of the FPI, when the frequency ν shifts over a range of about Δν_FSR, the cosine phase terms change by 2nπ rapidly, whereas the variation of the sinc terms over the same change of ν is small enough to be neglected. Therefore, Eq. (8) can be approximated as

h̄(ν) ≈ T_p (1 − R_e)/(1 + R_e) [1 + 2 Σ_{n=1}^{∞} R_e^n sinc(n φ_0) cos(2π n ν / Δν_FSR)].    (9)

To evaluate the effect of the beam divergence on the transmission function, let n = 30; sinc(30 φ_0) = 0.98 is obtained with the system parameters listed in Table 1.

The spectrum of the backscatter is broadened by the random thermal motions of the air molecules. The aerosol backscatter spectrum I_M(ν) essentially retains the width of the outgoing laser, since the Brownian motion of aerosol particles does not broaden the spectrum significantly [43]. At the low pressures of these altitudes, inelastic Brillouin scattering is negligible and the molecular motion is thermally dominated, so the molecular scattering lineshape takes the form of a thermally broadened Gaussian profile [39,44]. Here Δν_L and Δν_R denote the half-widths at the 1/e intensity level of the spectra of the outgoing laser and of the thermally broadened Rayleigh backscatter, respectively; the latter is

Δν_R = (2/λ) √(2 k T_a / m),    (11)

where k is the Boltzmann constant, T_a is the atmospheric temperature, m is the average mass of the atmospheric molecules, and λ is the laser wavelength. The transmission of the aerosol backscatter is the convolution of the transmission function of the FPI with the spectrum of the aerosol backscatter (Eq. (12)). Because the spectra are Gaussian and the filter is a sum of cosines, the convolution in Eq. (12) can be expressed analytically [45]: each Fourier harmonic of the filter is simply damped by the factor exp[−(π n Δν_L / Δν_FSR)²] (Eq. (13)).
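To make the preceding formulation concrete, the short sketch below evaluates the Airy transmission of Eq. (1) at normal incidence, its Fourier-series form of Eq. (4), and the Gaussian damping of each harmonic that results from convolving the filter with a Gaussian backscatter spectrum. The reflectance, peak transmittance, free spectral range and spectral widths used here are placeholder values, not the parameters of Table 1.

```python
import numpy as np

# Illustrative parameters (placeholders, not the values of Table 1)
R_e = 0.85            # effective surface reflectance
T_p = 0.7             # peak transmittance
fsr = 12.0e9          # free spectral range [Hz]
dnu_L = 0.10e9        # 1/e half-width of the laser spectrum [Hz]
N_TERMS = 60          # harmonics kept in the Fourier series

nu = np.linspace(-fsr / 2, fsr / 2, 2001)   # frequency relative to a transmission peak

# Eq. (1): Airy transmission at normal incidence (cos(theta) = 1)
airy = T_p / (1.0 + 4.0 * R_e / (1.0 - R_e) ** 2 * np.sin(np.pi * nu / fsr) ** 2)

# Eq. (4): equivalent Fourier-series form (truncated after N_TERMS harmonics)
n = np.arange(1, N_TERMS + 1)[:, None]
fourier = T_p * (1 - R_e) / (1 + R_e) * (
    1.0 + 2.0 * np.sum(R_e ** n * np.cos(2 * np.pi * n * nu / fsr), axis=0)
)
print("max |Airy - Fourier| =", np.max(np.abs(airy - fourier)))   # small truncation error

# Convolution with a Gaussian spectrum of 1/e half-width dnu:
# each harmonic is damped by exp(-(pi * n * dnu / fsr)**2), cf. Eq. (13)
def filter_transmission(nu, dnu):
    damp = np.exp(-(np.pi * n * dnu / fsr) ** 2)
    return T_p * (1 - R_e) / (1 + R_e) * (
        1.0 + 2.0 * np.sum(R_e ** n * damp * np.cos(2 * np.pi * n * nu / fsr), axis=0)
    )

t_aerosol = filter_transmission(nu, dnu_L)       # narrow, laser-width spectrum
t_molecule = filter_transmission(nu, 1.5e9)      # broad, thermally broadened spectrum
```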
Comparison of Eqs. (4), (9) and (13) shows that, even when the divergence of the incident light is taken into account, the convolution is obtained by concise multiplication of each successive Fourier term by a Gaussian damping factor; a factor of the same form, whose width is the global parameter describing all kinds of mirror defects, enters in the same way [40], since the convolution of two Gaussian functions again yields a Gaussian function. Finally, the transmission function of the aerosol backscatter can be written as Eq. (15), i.e., the series of Eq. (9) with each harmonic additionally damped by the Gaussian factors associated with the laser spectrum and the mirror defects. Similarly, the transmission of the Rayleigh backscatter, Eq. (16), is obtained by replacing the laser width Δν_L with the total width (Δν_L² + Δν_R²)^{1/2} of the thermally broadened molecular spectrum. If aerosols are present, the photon number corresponding to the mixed backscatter collected by the telescope is described by the lidar equation

N(ν, R) = (E_L λ / h c) η_o η_q (A_0 / R²) O(R) (c τ_d / 2) [β_M(R) T_M(ν) + β_R(R) T_R(ν)] exp[−2 ∫_0^R α(r) dr],

where E_L is the energy of the laser pulse, η_o accounts for the optical efficiency of the transmitted signal, η_q is the quantum efficiency of the detector, h is the Planck constant, A_0 is the area of the telescope, O(R) is the overlap factor at range R, τ_d is the detector's response time (so that c τ_d/2 is the length of a range bin), β_R and β_M are the Rayleigh and Mie volume backscatter coefficients, α is the atmospheric attenuation coefficient, and T_M(ν) and T_R(ν) are the transmissions of Eqs. (15) and (16).

The backscatter is detected using photomultiplier tubes (Hamamatsu model R7400P-03) and acquired with transient recorders (Licel model TR 20-160), which provide a dynamic range of 10^5 by combining A/D conversion and photon counting. It is a great challenge to control the frequency drift of the laser to the order of 1 MHz during the FPI scanning process on a mobile platform, since the scanning process may take a few minutes or even one hour, depending on the number of sampling steps and the dwell time at each frequency step. Therefore, a secondary solid FPI is used to monitor the frequency drift of the outgoing laser, so that the drift can be measured and compensated in the data processing.

To obtain the transmission of the atmospheric backscatter through the FPI, the cavity spacing is scanned linearly over 20 GHz (100 sampling steps). At each step, the time-gated backscatter is summed over 100 laser shots. After frequency drift compensation, the transmission curves at different altitudes are analyzed by applying a least squares fit procedure to Eq. (19), and the temperature profile is then calculated from the linewidths of the fitted curves. It is worth mentioning that the temperature values derived at different altitudes are independent of each other, since no response function needs to be established in the retrieval. As shown in the inset of Fig. 3, the optical receiver is linked using fused fiber couplers, which improve the compactness and stability of the system. The FPIs are sealed against the pressure changes introduced by the air conditioning inside the truck, and the temperature fluctuation of the optical receiver is controlled to within 0.01 K.

To validate the performance of the HSRL for temperature detection in the lower stratosphere, a comparison experiment was carried out at 6:54 am on Dec. 23, 2013. Temperature profiles derived from the HSRL, the Rayleigh integration (RI) lidar and a radiosonde are plotted in Fig. 7, together with the temperature differences between HSRL and RI lidar and between HSRL and radiosonde. All the results show good agreement in the altitude range from 26 km to 36 km, with a maximum deviation of 2.7 K. At lower altitudes, the temperature profile from the RI lidar deviates markedly from the HSRL and radiosonde results, with a maximum deviation of 22.8 K, which may be due to aerosol contamination (as shown in Fig. 7) and ozone absorption. By contrast, acceptable agreement between the HSRL and the radiosonde is achieved from 18 km to 36 km, with a maximum deviation of 4.7 K.
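The retrieval step described above — fit each measured transmission curve and convert the fitted linewidth to temperature through the thermal broadening relation — can be sketched as follows. The Gaussian fitting model, the noise level and the constants are illustrative assumptions; the actual analysis fits the full transmission model of Eq. (19) rather than a plain Gaussian.

```python
import numpy as np
from scipy.optimize import curve_fit

K_B = 1.380649e-23      # Boltzmann constant [J/K]
M_AIR = 4.81e-26        # average molecular mass of air [kg]
LAM = 354.7e-9          # laser wavelength [m] (assumed UV channel)

def gaussian(nu, amp, width, offset):
    """Simple Gaussian model; width is the 1/e half-width."""
    return amp * np.exp(-(nu / width) ** 2) + offset

def temperature_from_width(width_total, width_laser):
    """Invert dnu_R = (2/lambda)*sqrt(2 k T / m) after removing the laser width."""
    dnu_R_sq = width_total ** 2 - width_laser ** 2
    return M_AIR * LAM ** 2 * dnu_R_sq / (8.0 * K_B)

# --- illustrative synthetic measurement (placeholder values) -----------------
nu = np.linspace(-10e9, 10e9, 100)                   # 100 scan steps over 20 GHz
true_T = 220.0                                       # K
dnu_L = 0.15e9                                       # assumed laser 1/e half-width [Hz]
dnu_R = 2.0 / LAM * np.sqrt(2 * K_B * true_T / M_AIR)
width_true = np.sqrt(dnu_L ** 2 + dnu_R ** 2)
signal = gaussian(nu, 1.0, width_true, 0.02)
signal = signal + np.random.normal(0, 0.01, nu.size)  # measurement noise

popt, _ = curve_fit(gaussian, nu, signal, p0=(1.0, 1.5e9, 0.0))
print("retrieved temperature [K]:", temperature_from_width(abs(popt[1]), dnu_L))
```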
Assuming that the photon counts obey Poisson statistics at each step of the FPI scan and that the transmission curve in Fig. 6(a) can be approximated by a Gaussian shape, the standard deviation in estimating the transmission bandwidth from the best fit scales as Δν_F/√N, where Δν_F is the half-width at the 1/e intensity level of the transmission curve under estimation in Fig. 6(a) and N is the total photon count at a given altitude [47]. In the calibration, the transmission of the laser pulse through the FPI is measured dozens of times and averaged, allowing us to ignore the error in estimating the bandwidth of the curves in Fig. 4. Comparing Eq. (15) with Eq. (16), one can approximate the bandwidth of the measured transmission curve by that of the Rayleigh term, so the statistical error of the temperature profile obtained from each FPI follows by propagating the bandwidth uncertainty through the relation T_a ∝ Δν_R². The measurements on the two FPI channels are uncorrelated, reducing the final statistical uncertainty by a further factor of √2. The error bars of the temperature profile derived from the HSRL are shown in Fig. 7. As mentioned at the beginning, the HSRL built in this work is intended to correct the temperature and pressure effects on the wind retrieval of the Rayleigh Doppler lidar [39]. It is worth mentioning that the pressure profiles are taken from the standard atmosphere when radiosonde data are not available. As shown in Fig. 7, in the wind retrieval, temperature values below and above the altitude of 35 km are adopted from the HSRL and the RI lidar, respectively. Examples of wind detection in the altitude range from 15 km to 60 km are shown in Fig. 8, with simultaneous radiosonde results plotted for comparison. The wind speed and direction derived from the Rayleigh Doppler lidar and the radiosonde agree with each other in both cases. The radiosonde data are sporadic on Dec. 24, 2013, owing to the weak GPS signal tracked by the ground antenna.

Conclusion and future research The temperature profile plays an important role in atmospheric research and, in addition, serves as an input parameter to other remote sensing lidars. The high spectral resolution technique and the Rayleigh integration technique were integrated into one lidar to correct the temperature effect on the Rayleigh Doppler lidar. The combined system permits simultaneous detection of temperature and wind profiles from the stratosphere to the lower mesosphere. Despite the aerosol contamination and ozone absorption, the temperature derived from the HSRL showed good agreement with the radiosonde data in the lower stratosphere. We noticed that the pulse duration decreases from 6.8 ns to 5.0 ns as the pulse energy decays from 350 mJ to 150 mJ, so the spectral width of the outgoing laser pulse should be monitored. In future research, one channel of the FPI will be used to extend the temperature detection down to the troposphere, where the spectrum of the molecular backscatter can no longer be approximated by a simple Gaussian function and the Brillouin doublet must be considered. This work rests on the assumptions that the atmospheric temperature is stable during the scanning of the FPI and that the vertical component of the atmospheric wind is negligible. In the future, we need to shorten the scanning time without sacrificing the signal-to-noise ratio. Since a high power laser and a large telescope are adopted, we are also going to use adaptive optics to improve the fiber-coupling efficiency [48].
The normalized spectra of the aerosol backscatter I_M(ν) and of the molecular backscatter (Landau-Placzek line) I_R(ν) are approximated by Gaussian functions normalized to unit area; the surface defects of the parallel mirrors can likewise be approximated by a Gaussian distribution. A multimode fiber delivers the collected light from the telescope to the optical receiver. At the other end of the multimode fiber, the light is split and collimated into three beams: one for energy monitoring and the other two passing through a cavity-scanning FPI (ICOS model ET116FS-1068) for measuring the transmission curves. The FPI was originally designed for the Doppler lidar; it consists of three sub-channels with different cavity spacings. The left and right channels, used for the double-edge technique in the Doppler lidar, are both used for temperature detection here, and the third (locking) channel is used to monitor the frequency drift of the laser relative to the FPI.

Fig. 3. Schematic setup of the HSRL lidar with system-level optical frequency control, and interior view of the compact receiver (inset, lower right corner).

Fig. 6. (a) Measured transmission curves of the backscatter (from 18 km) through the two channels (circles) and their best-fit results (dashed lines); (b) residuals between the measured transmissions and the fits; (c) profiles of transmitted backscatter along altitude at the frequencies labeled in (a).

Fig. 8. Profiles of wind speed and direction derived from the Doppler lidar (solid line) and radiosonde (circles).
6,129.2
2014-09-08T00:00:00.000
[ "Environmental Science", "Physics" ]
Human TRIM5α mediated restriction of different HIV-1 subtypes and Lv2 sensitive and insensitive HIV-2 variants In order to characterize the antiviral activity of human TRIM5α in more detail human derived indicator cell lines over expressing wild type human TRIM5α were generated and challenged with HIV-1 and HIV-2 viruses pseudotyped with HIV envelope proteins in comparison to VSV-G pseudotyped particles. HIV envelope protein pseudotyped particles (HIV-1[NL4.3], HIV-1[BaL]) showed a similar restriction to infection (12 fold inhibition) compared to VSV-G pseudotyped viruses after challenging TZM-huTRIM5α cells. For HIV-2 a stronger restriction to infection was observed when the homologous envelope protein Env42S was pseudotyped onto these particles compared to VSV-G pseudotyped HIV-2 particles (8.6 fold inhibition versus 3.4 fold inhibition). It has been shown that HIV-2 is restricted by the restriction factor Lv2, acting on capsid like TRIM5α. A mutation of amino acid 73 (I73V) of HIV-2 capsid renders this virus Lv2-insensitive. Lv2-insensitive VSV-G pseudotyped HIV-2/I73V particles showed a similar restriction to infection as did HIV-2[VSV-G] particles (4 fold inhibition). HIV-2 envelope protein (Env42S)-pseudotyped HIV-2/I73V particles revealed a 9.3 fold increase in infection in TZM cells but remained restricted in TZM-huTRIM5α cells (80.6 fold inhibition) clearly indicating that at least two restriction factors, TRIM5α and Lv2, act on incoming HIV-2 particles. Further challenge experiments using primary isolates from different HIV-1 subtypes and from HIV-1 group O showed that wild type human TRIM5α restricted infection independent of coreceptor use of the infecting particle but to variable degrees (between 1.2 and 19.6 fold restriction). Findings TRIM5 proteins of different species inhibit infectivity of a range of different retroviruses in a species-specific fashion [1,2]. Whereas rhesus macaque TRIM5α (rhTRIM5α) efficiently restricts human immunodeficiency virus type 1 (HIV-1) replication (up to 100 fold reduction in viral titer), the human homologue shows limited but reproducible activity against HIV-1 (2 to 3 fold reduction in viral titer), but restricts N-tropic strains of the murine leukemia virus (N-MLV) very efficiently [3][4][5][6][7][8]. Different human cell lines (e.g. HeLa, 293T, C134 cells) over expressing a HA-tagged human TRIM5α have been used to determine the efficiency of HIV-1 specific restriction. Ylinen and colleagues showed that HIV-2 particles are weakly restricted by human TRIM5α expressed in TE671 cells and efficiently restricted by rhesus TRIM5α [9], thus showing a similar phenotype as HIV-1 particles. In addition to TRIM5α it was shown that a yet unidentified restriction factor expressed in human cells restricts early post entry steps of HIV-2 [10]. This factor, called Lv2, acts on incoming HIV-2 particles like TRIM5α but can be bypassed if VSV-G pseudotyped HIV-2 particles are used to challenge target cells [10][11][12]. The viral capsid of HIV-1 is the main target for the antiviral effect, since certain mutations in the capsid protein (for example exchange of glycine to valine or alanine at position 89, G89V and G89A respectively) have been shown to confer resistance to TRIM5α mediated restriction [5,[13][14][15]. For HIV-2 it has been shown that particles encoding the amino acid valine at position 73 are insensitive to Lv2-mediated restriction [11]. 
Most published studies to detect post entry restrictions have used viral particles pseudotyped with vesicular stomatitis virus glycoprotein (VSV-G). This allows the determination of species-specific restrictions independent from the expression of the appropriate receptors for infection [16][17][18][19] and indicates an independence from the route of viral entry (plasma membrane fusion vs endocytotic uptake) for the observed restriction of HIV-1, whereas Lv-2 mediated restriction of HIV-2 is entry route dependent [10][11][12]. In order to use authentic viral particles (primary isolates from different subtypes, including HIV-1 group O) for the characterization of human TRIM5α mediated restriction, the indicator cell line TZM-bl [20] was stably transduced with a retroviral vector (LNCX2, Clonetech, Germany) encoding wild-type, non-tagged human TRIM5α (obtained from PD Bieniasz, [21]) and G418 resistant cells were selected. TZM-bl cells are HeLa-cell derivatives that express high levels of CD4 and both co-receptors CXCR4 and CCR5, and are stably transduced carrying a LTR-driven firefly luciferase as well as a LTR-driven βgalactosidase cassette. Challenging these indicator cells with HIV-1 and HIV-2 isolates results in the induction of luciferase and β-galactosidase allowing easy detection of infection and titration. In the absence of an antibody to measure endogenous or low level TRIM5α expression, a quantitative light-cycler RT-PCR protocol specific for the SPRY-domain was established. Total RNA (2 μg) were used to generate cDNA (superscript II, Invitrogen) using an oligo-dT primer. An aliquot of this cDNA was used as target for the SPRY-specific PCR (primers SP(+): 5'-CCTT-TCATTGTGCCCCT-3'; SP(-): 5'-GCACAGAG TCAT-GGGAC-3') as well as for the β-actin-specific PCR (primers: actin(+): 5'-GGGTCAGAAGGATTCCTATG-3'; actin(-): 5'-GGTCTCAAACATGATCTGGG-3') in order to normalize the cDNA input. The detection limit for both PCR amplifications in the presence of SYBR-green was determined using serial dilutions of plasmids containing the target sequences and revealed a threshold of 10 3 molecules per reaction. Using this established qPCR protocol a 2 fold over expression of TRIM5α mRNA in the newly selected TZM-huTRIM5α cells (10384 ± 1032 mRNA molecules versus 5102 ± 531 mRNA molecules in TZM-LNCX2 cells, normalized for β-actin cDNA) was determined. Next, the new indicator cells were challenged with VSV-G pseudotyped B-MLV particles, known to be insensitive to TRIM5α-mediated restriction. Both cell lines were equally well infected using B-MLV particles (550 ng RT per infection as determined using an RT-ELISA, Innovagen, Sweden) transducing a GFP-reporter cassette (51.2% GFPpositive TZM-LNCX2 cells and 50.0% GFP-positive TZM-huTRIM5α, respectively) showing that both cell lines support efficient retroviral infection. The selected cells expressed similar levels of CD4, CXCR4 and CCR5 on the cell surface and maintained a functional tat-inducible firefly luciferase and β-galactosidase reporter cassette like the parental TZM-bl cell line (data not shown), thus are suitable indicator cells to study the influence of human TRIM5α over expression on HIV envelope mediated infection. First, infection experiments were performed using VSV-G pseudotyped, HIV-1 NL4.3 envelope and HIV-1 BaL envelope pseudotyped HIV-1 particles encoding for wild-type capsid using increasing infectious units. TZM-bl cells transduced with the empty vector LNCX2 and G418 selected were used as reference (TZM-LNCX2). 
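The quantitative RT-PCR readout described above rests on two standard steps: converting a threshold cycle (Ct) into an absolute copy number via a standard curve obtained from serial plasmid dilutions, and normalizing the TRIM5α signal to the β-actin signal of the same cDNA. The sketch below illustrates only this arithmetic; all Ct values and standard-curve coefficients are hypothetical and are not taken from the experiment.

```python
# Hypothetical standard curve from serial plasmid dilutions:
#   Ct = SLOPE * log10(copies) + INTERCEPT
SLOPE = -3.35          # close to 100% amplification efficiency (assumed)
INTERCEPT = 38.0       # hypothetical Ct for a single copy

def copies_from_ct(ct):
    """Invert the standard curve to obtain the copy number per reaction."""
    return 10 ** ((ct - INTERCEPT) / SLOPE)

def normalized_copies(ct_target, ct_actin, ct_actin_reference):
    """Scale the target copy number so that the beta-actin input is equal
    between samples (simple normalization to a reference actin level)."""
    correction = copies_from_ct(ct_actin_reference) / copies_from_ct(ct_actin)
    return copies_from_ct(ct_target) * correction

# Hypothetical Ct values for the two cell lines
trim5a_lncx2 = normalized_copies(ct_target=25.6, ct_actin=18.0, ct_actin_reference=18.0)
trim5a_hutrim = normalized_copies(ct_target=24.6, ct_actin=18.2, ct_actin_reference=18.0)

print(f"TZM-LNCX2: {trim5a_lncx2:.0f} copies, TZM-huTRIM5a: {trim5a_hutrim:.0f} copies")
print(f"fold over-expression: {trim5a_hutrim / trim5a_lncx2:.1f}")
```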
The induction of β-galactosidase due to infection of TZM-bl cells (5 × 10^3 cells per well) was determined with a luminometer at day 2 post challenge, after cell lysis and addition of specific substrates (Beta-Glo Assay, Promega, Germany). The maximal detectable β-galactosidase activity after challenge of TZM-LNCX2 cells was set to 100% for each of the different pseudotyped particles. Challenge of TZM-huTRIM5α cells revealed a strong restriction to infection, approximately 12 fold, for all three pseudotypes (Fig. 1A). This strong restriction was unexpected, since only a 2 fold over expression of TRIM5α mRNA was detected and previous studies reported only a 2-3 fold restriction of HIV-1 by human TRIM5α [3][4][5][6][7][8]. However, these studies used cells over expressing HA-tagged TRIM5α, which in the case of rhesus TRIM5α has been described to be less efficient in restricting SIV mac infection [7]. Whether the HA-tagged TRIM5α is less stable or less active than wild type TRIM5α, or whether other factors that differ between TZM-bl cells and HeLa cells influence the retroviral restriction efficiency, needs to be further elucidated. However, the results obtained clearly indicate that human TRIM5α is capable of restricting HIV-1 infection quite substantially, and that the restriction due to TRIM5α is entry route independent (VSV-G versus HIV-1 envelope) and HIV coreceptor independent (X4-tropic versus R5-tropic). Next, the restriction of HIV-2 infection due to human TRIM5α expression in TZM cells was analyzed. As for the pseudotyped HIV-1 particles, HIV-2 reporter viruses encoding Renilla luciferase (similar to the HIV-1 reporter viruses used before) were generated through transfection of 293T cells with the proviral ROD/A-ΔenvRen plasmid and the expression plasmid for either the VSV-G or the Env42S envelope protein (MP11-VSV-G and MP11-Env42S, respectively) [22]. MP11-Env42S encodes the envelope protein of the TCLA isolate HIV-2 CBL23. In addition, an Lv2-insensitive HIV-2 variant was constructed. The proviral ROD/A-ΔenvRen plasmid (encoding isoleucine at position 73 of the capsid protein, shown to cause an Lv2-sensitive phenotype in the context of the molecular clone HIV-2 MCR) was mutagenized to exchange isoleucine at position 73 for valine, resulting in an Lv2-insensitive HIV-2 ROD variant (HIV-2/I73V) similar to HIV-2 MCN [11]. The resulting proviral plasmid (ROD/A/I73V-ΔenvRen) was used to generate VSV-G and Env42S envelope pseudotyped particles. Using increasing infectious doses to challenge TZM-huTRIM5α cells, a 3.4 and 4.8 fold restriction of VSV-G pseudotyped HIV-2 and HIV-2/I73V particles, respectively, could be determined (Fig. 1B). This result is in agreement with earlier studies using CRFK cells expressing human TRIM5α after challenge with VSV-G pseudotyped HIV-2 ROD [9], but shows in addition that the Lv2-insensitive HIV-2/I73V remains restricted by human TRIM5α. The challenge experiments with HIV-2 envelope protein Env42S pseudotyped HIV-2 particles (HIV-2[Env42S] and HIV-2/I73V[Env42S]), however, confirmed again our previous observation that the Lv2-mediated restriction is entry route dependent [10,11,22].
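Throughout these experiments the readout is the reporter activity of challenged indicator cells, and restriction is expressed as the fold reduction of that activity in TRIM5α over-expressing cells relative to the empty-vector control. A minimal sketch of this normalization is given below; the relative light unit values are invented placeholders, not measured data.

```python
# Fold restriction from reporter (beta-galactosidase or luciferase) readings.
# All relative light unit (RLU) values below are invented placeholders.

def fold_restriction(rlu_control, rlu_trim5a, background=0.0):
    """Ratio of background-corrected reporter activity in control cells
    (TZM-LNCX2) to that in TRIM5alpha over-expressing cells (TZM-huTRIM5alpha)."""
    return (rlu_control - background) / (rlu_trim5a - background)

# one hypothetical dose of an envelope-pseudotyped reporter virus
control_rlu = 84_000.0        # TZM-LNCX2, set to 100% for this pseudotype
trim5a_rlu = 7_000.0          # TZM-huTRIM5alpha
bg = 400.0                    # uninfected-well background

fold = fold_restriction(control_rlu, trim5a_rlu, bg)
percent_of_control = 100.0 * (trim5a_rlu - bg) / (control_rlu - bg)
print(f"{fold:.1f}-fold restriction ({percent_of_control:.1f}% of control)")
```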
As figure 1C shows, the over expression of human TRIM5α in TZM cells results in a 2.5 times stronger restriction to infection for Env42S-pseudotyped HIV-2 particles (8.6 fold restriction) compared to VSV-G pseudotyped HIV-2 particles (3.4 fold restriction). In order to analyse the human TRIM5α mediated restriction of primary isolates and molecular clones of different HIV-1 subtypes (A to D, G, J, CRF_AG and HIV-1 group O) (obtained through the NIH AIDS Research and Reference Reagent Program or described in further detail in [23][24][25]), the new indicator cells TZM-huTRIM5α and the control cells TZM-LNCX2 were challenged with 2 × 10^3 infectious units, as titrated on parental TZM-bl cells (equal to an MOI of 0.2), and again the induction of β-galactosidase two days post infection was determined. As figure 2 shows, some HIV-1 isolates tested were only marginally restricted (1.2 to 1.4 fold for UG021, BD6 and ZA003), whereas the vast majority of isolates were restricted between 2.2 and 5.2 fold. Three exceptionally strongly restricted isolates could be identified, namely D117 (subtype B), ELI (subtype D) and MVP8167 (group O), which were restricted between 16.6 and 19.5 fold compared to the control cells TZM-LNCX2. These three primary isolates are CXCR4-tropic variants. However, the mean restriction to infection for the remaining 18 isolates tested was 3.0 ± 1.3 fold, indicating that there are no significant coreceptor-specific differences between the X4-tropic (mean 2.5 ± 1.5 fold restriction for 7 isolates) and R5-tropic (mean 3.2 ± 1.2 fold restriction for 11 isolates) variants studied. In comparison to the experiments performed with pseudotyped particles, a weaker restriction to infection with HIV-1 NL4.3 versus HIV-1[NL4.3] was observed. NL4.3 envelope pseudotyped particles derived from 293T transfections resulted in a higher ratio of infectious units per ng RT/ml than HIV-1 NL4.3 virus stocks obtained from PBMC cultures. Therefore, PBMC derived virus stocks might contain a larger proportion of virus-like particles able to abrogate TRIM5α mediated restriction, resulting in a weaker restriction to infection, which could explain the observed difference in restriction efficiency. However, the quantity of virus-like particles per virus preparation for the other virus stocks used is not known and difficult to address. As for the three outliers in this study, it is tempting to speculate that they might be restricted not only by TRIM5α but also by Lv2 or yet another unknown restriction factor, as we could show in this study that both the TRIM5α and Lv2 restriction factors can act on incoming HIV-2 capsids. However, further studies are needed, together with the identification of the restriction factor Lv2. Taken together, our results show that even a moderate over expression of wild-type human TRIM5α in human cells (2 fold as determined by quantitative RT-PCR) confers substantial restriction to infection for HIV-1 (12.7 fold restriction for pseudotyped HIV-1 particles) but only a weaker restriction to infection for HIV-2 (between 3.4 and 4.8 fold restriction for pseudotyped HIV-2 particles). This overall stronger restriction to infection described here compared to previous reports [3][4][5][6][7][8] could be explained by non-tagged human TRIM5α being more stable than the HA-tagged variant most often used in those studies.
There is also the possibility that the HA-tag on TRIM5α causes a reduction in its activity as a restriction factor, as has been described for the rhesus TRIM5α variant [7]. In addition, other unidentified factors that differ between HeLa cells and TZM-bl cells could account for the observed stronger restriction and need to be further characterized. The challenge experiments using Lv2-sensitive and Lv2-insensitive HIV-2 variants showed that Lv2 is a potent restriction factor. It has been described that certain HIV-1 variants are also restricted by Lv2 [12]. Whether the three HIV-1 isolates D117, ELI and MVP8167, identified as being more efficiently restricted in TZM-huTRIM5α cells, are in addition susceptible to Lv2-mediated restriction or restricted by yet another unidentified factor needs to be further elucidated. There is no obvious sequence similarity between the HIV-1 and HIV-2 capsids around amino acid position 73, to which Lv2 susceptibility has been mapped. However, differences in viral uptake or differences in the activation of target cells due to envelope binding, leading to more or less active restriction factors, could also explain the observed strong restriction efficiency for these three primary HIV-1 isolates and merit further investigation. Figure 1 (A): VSV-G envelope and HIV-1 envelope protein pseudotyped viruses are equally restricted by human TRIM5α.
2,993.2
2006-11-06T00:00:00.000
[ "Biology" ]
Synthesis and Photochemistry of 1-Iodocyclohexene Simultaneous application of UV light and ultrasonic irradiation to a reaction mixture containing 1-iodocyclohexene is reported. The irradiation of 1-iodocyclohexene in methanol was carried out with or without addition of zinc. The effect of ultrasound or mechanical stirring on this solid-liquid system was also compared. The irradiation of 1-iodocyclohexene in methanol in the presence of zinc increases the yield of the nucleophilic trapping product, compared with the yield after irradiation in the absence of zinc. The photodegradation of 1-iodocyclohexene was slightly accelerated after addition of zinc. A rapid formation of the radical product, accompanied by a substantial decrease of 1-iodocyclohexene, was observed after application of ultrasound and irradiation without zinc. The ultrasound significantly affects the photobehaviour of this reaction, predominantly its radical route. The joint application of ultrasound and zinc contributes positively to the production of radical and ionic products. The sonochemical stirring is more effective than mechanical stirring.

Introduction Alkyl halides exhibit competing radical and ionic photobehaviours [1]. Iodides give predominantly ionic products. The irradiation process involves the initial homolytic cleavage of the carbon-halogen bond, followed by electron transfer within the radical pair to generate an ion pair. Irradiation of alkyl halides is a convenient method for the generation of carbocations; the corresponding vinyl cations are generated by irradiation of vinyl iodides. The products of photochemical reactions depend on the light intensity, as well as on the effectiveness of stirring of the reaction mixture [2]. The application of ultrasound, especially in heterogeneous reactions, causes a mechanical effect responsible for mass transfer, that is, thorough stirring of the reaction mixture, as well as activation of the surface of the solid reagents present [3][4][5]. The second effect of ultrasound, the most pronounced one in homogeneous reactions, is caused by the high temperatures (up to 5000 K) and high pressures (up to 400 atm) reached in the collapsing bubbles (cavities) in the ultrasound field [3][4][5]. The main goal of this work was to examine simultaneous sonication and UV irradiation of 1-iodocyclohexene in methanol in the absence/presence of zinc as an iodine scavenger.
A rapid formation of 4, accompanied by a great decrease in the amount of 3, was observed after application of ultrasound (Table 1, entry B). The ultrasound significantly affects the photobehaviour of this reaction, predominantly the radical route. With mechanical stirring in the presence of zinc as an acid scavenger, the enol ether 5 was mainly obtained (Table 1, entry C). Ultrasound and irradiation in the presence of zinc caused rapid photodegradation of 3 and an increase in the ratio of the radical products 4D and 4C. The results obtained in our study showed that ultrasound affects the photobehaviour of 1-iodocyclohexene in methanol. Ultrasound markedly influenced the lifetime of the radical pair resulting from the initial homolytic cleavage of the carbon-halogen bond, and an increase of the radical product 4 was observed. The ultrasound caused rapid photodegradation of the starting iodide 3 (Table 1, entries A and B). The irradiation of 1-iodocyclohexene in methanol in the presence of zinc increases the yield of 5 compared with the yield after irradiation without zinc. The sonochemistry of zinc powder was investigated a few years ago [8]. Ultrasound creates acoustic cavitation in liquids. In the presence of fine powders, shockwaves and turbulent flow from cavitation result in interparticle collisions of the solids. Such collisions occur with enough force to cause changes in the morphology of the powders. After application of ultrasound to zinc powder, dramatic changes in particle morphology were observed: whereas mechanical agitation keeps the zinc particles smooth (Figure 2), ultrasound makes the particles rough and grainy (Figure 3). The particle fragmentation improves mass transport. Ultrasound and zinc positively contribute to the production of 4 and 5; rapid photodegradation of 3 was also observed. Mechanical stirring is more or less local, being most effective near the stirrer but much less effective close to the reactor walls. Sonochemical stirring is evenly distributed over the entire reaction volume, as cavities are formed and implode throughout the reaction mixture [4].

Conclusions From the results of this work it follows that ultrasound significantly affects the photobehaviour of 1-iodocyclohexene, predominantly its radical route. Enhancements due to ultrasound may be attributed to its chemical or mechanical effects, or to both simultaneously. The chemical effects of ultrasound are due to the implosion of microbubbles, generating free radicals with a great propensity for reaction. Mechanical effects are caused by shock waves formed during symmetric cavitation, or by microjets formed during asymmetric cavitation.

General Gas-chromatographic analyses were performed on an Agilent 6890N instrument using a HP-1 capillary column (50 m × 0.32 mm ID with 1.05 µm film thickness). Cyclohexene was used as an internal hydrocarbon standard. Proton NMR spectra were determined in chloroform-d on a Varian Mercury Plus 300 MHz spectrometer. Mass spectra were obtained using an MSD-Agilent 5973 Network spectrometer. Ultrasonic irradiations.
A photochemical reactor allowing simultaneous ultrasound irradiation was used. A sandwich piezoelectric transducer (50 W, 20 kHz, intensity ≈ 16 W·cm⁻²) was attached with an epoxy resin to the bottom of a normal photochemical reactor with internal water cooling (50 mm diameter, 40 mL volume). Irradiations of 3 in methanol (c = 1.63 × 10⁻³ mol/L) were carried out with a 4 W (254 nm) low pressure mercury lamp. During irradiation, the solution was flushed with argon. A – irradiation
1,190
2007-02-16T00:00:00.000
[ "Chemistry" ]
Highly Pathogenic Reassortant Avian Influenza A(H5N1) Virus Clade 2.3.2.1a in Poultry, Bhutan Highly pathogenic avian influenza A(H5N1), clade 2.3.2.1a, with an H9-like polymerase basic protein 1 gene, isolated in Bhutan in 2012, replicated faster in vitro than its H5N1 parental genotype and was transmitted more efficiently in a chicken model. These properties likely help limit/eradicate outbreaks, combined with strict control measures. In Bhutan, the poultry sector consists of free-range backyard chickens, a rising number of commercial chicken farms, and domestic waterfowl in the south (8,9). Live-bird markets do not exist, but live birds are imported from India (8,9). Bhutan's poultry sector was severely affected by outbreaks of HPAI A(H5N1) clade 2.3.2.1 virus infection during 2012-2013 (10). Veterinary authorities enforced strict control measures, including depopulation of poultry in affected regions and burning of related housing and equipment (11). Illegal movement of poultry was the major source of outbreaks (11). Although the introduction of HPAI A(H5N1) from neighboring H5N1-endemic countries is a constant threat, the subtype is not yet entrenched in poultry in Bhutan. The Study We isolated HPAI A(H5N1) viruses from samples from 36 chickens and 9 wild birds in Bhutan, all from affected backyard farms adjacent to the highway connecting India with the capital, Thimphu (Figure 1; online Technical Appendix 1 Figure 2). Antigenic analysis of selected H5N1 isolates from Bhutan (online Technical Appendix 1) showed homogeneity and a reactivity pattern similar to that of H5N1 reference viruses from Bangladesh (Table). Amino acid differences were observed between strains A/chicken/ Bhutan/346/2012 (Ck/Bh/346) (rH5N1) and A/chicken/ Bangladesh/22478/2014 (Ck/BD/22478), representing the parental H5N1 clade 2.3.2.1a genotype (pH5N1) (online Technical Appendix 1 Table 2). We placed Ck/Bh/346 (rH5N1) and Ck/BD/22478 (pH5N1) in direct competition by co-housing chickens inoculated with each virus with naive contacts (online Technical Appendix 1). All donors shed virus oropharyngeally and cloacally, starting at 1 day postinoculation (dpi). By day 3 after contact, real-time reverse transcription PCR to detect PB1 (online Technical Appendix 1) revealed that 7 of 8 naive contacts simultaneously exposed to both viruses were infected with Ck/Bh/346 (rH5N1) alone, none was infected with Ck/BD/22478 (pH5N1) alone, and 1 was co-infected with both viruses. Thus, despite the lower infectious dose used for 30 LD 50 , Ck/Bh/346 (rH5N1) killed inoculated chickens faster than did Ck/BD/22478 (pH5N1) and was transmitted faster and more efficiently to naive contacts. We assessed the risk for human infection with rH5N1 by investigating its pathogenicity and transmissibility in ferrets (online Technical Appendix 1). Donors shed 4.5 log 10 EID 50 /mL and 3.4 log 10 EID 50 /mL in nasal wash samples at 2 dpi and 4 dpi, respectively, but cleared the virus by 6 dpi. No direct or aerosol contacts shed virus, suggesting that Ck/Bh/346 (rH5N1) was not transmitted (data not shown). No Ck/Bh/346 (rH5N1)-inoculated ferrets lost >5% of their body weight or showed elevated body temperature (data not shown). Histopathologic analysis showed that 1 donor, who was lethargic at 3-10 dpi, had mild meningoencephalitis at 14 dpi (online Technical Appendix 2, http://wwwnc.cdc.gov/EID/ article/22/12/16-0611-Techapp2.pdf). A nucleocapsid protein-positive cell was detected in the brain, suggesting that Ck/Bh/346 (rH5N1) is neurotropic. 
The other ferrets showed no clinical signs of disease. Virus replication was detected in the lung at 4 dpi (log 10 2.75 EID 50 /g) (online Technical Appendix 2). Conclusions Our study revealed that the viruses that caused the 2012 outbreaks in Bhutan belonged to the rH5N1 genotype (2.3.2.1a/H9-like PB1 [7:1]), whereas neither H9N2 nor the pH5N1 genotype have been detected there. rH5N1 has been isolated sporadically at live-bird markets and from chickens on farms where outbreaks occurred in Bangladesh (5,6), India (12), and Nepal (7) in 2011-2013. The lack of data on the effect of the H9-like PB1 gene on the virulence of rH5N1 makes determining its pathogenicity and transmissibility a critical public-health goal for Bhutan and neighboring countries. Ck/Bh/346 (rH5N1) killed inoculated chickens faster than did Ck/BD/22478 (pH5N1), despite the lower infectious dose used for Ck/Bh/346. In CEFs, Ck/ Bh/346replicated with greater efficiency during the first 36 hpi than did Ck/BD/22478, which possibly explains why rH5N1 transmits more efficiently to naive chickens when directly competing with pH5N1. How faster replication contributes to the increased mortality rate of naive chickens might be crucial to eradicating the disease in Bhutan. In a mountainous region with widely separated villages, small-scale poultry farming, and no live-bird markets, the severity and rapid onset of the infection could lead to hostresource exhaustion and self-limitation. To determine whether the reassortant PB1 gene accounts for the observed phenotypic properties of rH5N1, reverse genetics experiments are required. Despite its enhanced transmissibility, rH5N1 did not supplant pH5N1 in India or Bangladesh after undergoing multiple reassortment events. Possible reasons for this include the involvement of other influenza subtypes, which would complicate the competition/transmission model, especially at live-bird markets, as well as the large duck population, which is prone to inapparent HPAI infection (indicating possible underreporting). Our ferret model results suggest that avian-to-human transmission of rH5N1 is possible. However, further adaptation to the host is necessary for rH5N1 to become transmissible among mammals. Similar results have been reported for H5N1 clade 2.3.2.1 (13), H5N1 clade 2.3.4 (14), and H5Nx clade 2.3.4.4 (15). rH5N1 is potentially neurotropic, manifesting clinically as mild meningoencephalitis with no obvious respiratory involvement. This finding has implications on early diagnosis and use of antiviral drugs during the first 48 hours after clinical diagnosis for optimal therapeutic effect. pH5N1 and H9N2 virus strains will likely continue to co-circulate on the Indian subcontinent, enabling further reassortment events. Our results highlight the need for active surveillance and full-genome sequencing of all H5N1 virus isolates. Dr. Marinova-Petkova was a postdoctoral research associate at St. Jude Children's Research Hospital, Memphis, Tennessee, USA, while this research was conducted. She is now affiliated with the Centers for Disease Control and Prevention, Atlanta, Georgia, USA, where her research interests include emerging influenza viruses at the animal-human interface, the evolution of influenza A viruses, and animal models for studying influenza pathogenesis and transmission.
1,388.8
2016-12-01T00:00:00.000
[ "Biology" ]
Ease and Difficulty in L2 Phonology: A Mini-Review A variety of phonological explanations have been proposed to account for why some sounds are harder to learn than others. In this mini-review, we review such theoretical constructs and models as markedness (including the markedness differential hypothesis) and frequency-based approaches (including Bayesian models). We also discuss experimental work designed to tease apart markedness versus frequency. Processing accounts are also given. In terms of phonological domains, we present examples of feature-based accounts of segmental phenomena which predict that the L1 features (not segments) will determine the ease and difficulty of acquisition. Models which look at the type of feature which needs to be acquired, and models which look at the functional load of a given feature are also presented. This leads to a presentation of the redeployment hypothesis which demonstrates how learners can take the building blocks available in the L1 and create new structures in the L2. A broader background is provided by discussing learnability approaches and the constructs of positive and negative evidence. This leads to the asymmetry hypothesis, and presentation of new work exploring the explanatory power of a contrastive hierarchy approach. The mini-review is designed to give readers a refresher course in phonological approaches to ease and difficulty in acquisition which will help to contextualize the papers presented in this collection. INTRODUCTION Why are some sounds harder to learn than others? A Japanese learner of English may have difficulty acquiring a novel L2 English [l]/[ɹ] contrast (Brown, 2000) but less difficulty acquiring a novel L2 Russian [l]/[r] contrast (Larson Hall, 2004 (Matthews, 2000). A Brazilian Portuguese learner of English may have difficulty acquiring consonant clusters such as [sl], [sn] or [st] which are absent from the L1 (Cardoso, 2007), while a Persian learner of English who also lacks L1 [sl], [sn] and [st] may find them quite easy to acquire (Archibald and Yousefi, 2018). A Spanish learner of English may find it easier to acquire the [i]/[ɪ] contrast (which is absent from the L1) when learning Scottish English than British English (Escudero, 2002). There are also examples of so-called directionality of difficulty effects (Eckman, 2004). For example, an English learner of German might find it easier to suppress a final voicing contrast than a German learner of English would find it to learn to make a new L2 final voicing contrast. These are the types of facts researchers need to explain (the explanandum). In this short paper, I will provide an overview of some of the proposed phonological accounts (the explanans) of such cases of ease or difficulty. We begin by asking what it means to have acquired a sound. To probe such a question from a phonological perspective means that we must tackle the question of contrast. Phonemes are used to represent lexical contrasts. Such contrasts must also be implemented phonetically in both production and perception. Given that L2 production and perception may well be nonnativelike, this raises the interesting question for the L2 phonologist of determining whether the individual is 1) producing an inaccurate representation accurately, or 2) producing an accurate representation inaccurately. A case of 1) would be where an L2 learner might have the same representation for both /l/ and /r/ (i.e., not making a phonemic liquid contrast) and who also merged the production of [l] and [r]. 
A case of 2) would be where a learner might have a representational contrast for /b/ and /p/ (i.e., making a phonemic VOT contrast) but not implementing the contrast in a nativelike fashion. Methodologically, this reveals that researchers (and teachers) cannot rely on inaccurate production as a diagnostic of non-nativelike representation. This leads us to a related question concerning production vs. perception. Much work in L2 speech proceeds on the assumption that accurate perception must (logically and developmentally) precede accurate production (Flege, 1995). Thus, much of the literature focusses on assessing whether the subjects can discriminate phonetic contrasts reliably, and represent phonological contrasts accurately. However, there are certain cases where learners may be accurate in either production (Goto, 1971) or lexical discrimination (Darcy et al., 2012) tasks and yet remain inaccurate on discrimination tasks. In both cases, it may be that metalinguistic knowledge plays an important role. Ever since the Contrastive Analysis Hypothesis (Lado, 1957), linguists have tried to predict which aspects of L2 speech would be easy or difficult to learn. Since the 50s, both the representational models of phonology and the learning theories have become more sophisticated, and this has led to a consideration of multiple factors in exploring the construct of difficulty. Such approaches stand in marked contrast to the models of cross-language speech production (Flege, 1995) and cross-language speech perception (Best and Tyler, 2007) which primarily invoke acoustic and articulatory factors to explain difficulty in acquisition. In the field of second language acquisition (SLA), there have been many factors explored to account for aspects of learner variation, including variation in nativelikeness of L2 speech. The following factors have been explored: • L1 transfer (Trofimovich and Baker, 2006) • amount of experience (Bohn and Flege, 1992) • amount of L2 use (Guion et al., 2000) • age of learning (Abrahamsson and Hyltenstam, 2009) • orthography (Escudero and Wanrooi, 2010;Bassetti et al. 2015) • frequency (Davidson, 2006) • attention (Guion and Pederson, 2007) • training (Wang et al., 2003) It goes without saying that all of these factors do come in to play in accounting for learner behavior. What I will focus on in this mini-review are key representational issues which have informed phonological approaches to the construct of ease and difficulty. REPRESENTATIONAL APPROACHES This mini-review is focusing on representational models of phonology. There is a rich literature on output-based approaches (Tessier et al., 2013;Jesney, 2014) which tend to emphasize the computational system which generates the output form rather than emphasizing the form of the underlying (or input) representation. Markedness Some have looked to the notion of markedness (Parker, 2012) as an explanation by suggesting that unmarked structures are easier to acquire than marked ones (Carlisle, 1998). For example, it could be argued that 3-consonant onsets (e.g., [str]) were more difficult to acquire than 2-consonant onsets (e.g., [tr]) because they were more marked. Even within 2-consonant sequences work such as Broselow and Finer (1991), Eckman and Iverson (1993) demonstrate that principles such as Sonority Sequencing instantiate markedness with greater sonority distance between the adjacent segments being less marked (i.e., [pj] would be less marked than [fl]). 
Such machinery is designed to account for the observation that not all structures which are absent from the L1 are equally difficult to acquire in the L2. The developmental path would be from unmarked to marked structures. Some have suggested that a markedness continuum was not enough but rather that markedness differential was the locus of explanation (Eckman, 1985). Under this approach, a structure which was absent from the L1 and more marked than the L1 structure would be difficult to acquire while one which was absent from the L1 but less marked than the L1 structure would be easier to acquire. Often, however, the unmarked forms are the most frequent (e.g. 3-consonant clusters are more marked than 2-consonant clusters, and 3-consonant clusters are also less frequent than 2consonant clusters) so it is difficult to tease these factors apart. If learners are more accurate on 2-consonant clusters is it because they are more frequent or less marked? Frequency-Based Approaches Usage-based (Wulff and Ellis, 2018) and Bayesian (Wilson and Davidson, 2009) approaches argue that targetlike production accuracy is correlated with input frequency. Thus, if there are two elements which are absent from the L1 and one is frequent in the L2 input while one is infrequent, then the frequent structure might be more easily acquired. Cardoso (2007) documents a scenario in which the most frequent structure is the most marked so we can tell which construct is most explanatory. In looking at the acquisition of L2 English consonant clusters by L1 speakers of Brazilian Portuguese, he focused on [st], [sn] and [sl]. Without getting into the details of the markedness facts here, [st] is both the most frequent and the most marked of the clusters. When it came to learner production, the learners were least accurate on the most marked cluster ([st]) even though it was most frequent in their input. For production (though not perception), markedness seemed to be more explanatory than frequency. Frequency Versus Markedness The construct of markedness itself has its critics (Haspelmath, 2006;Zerbian, 2015). If the notion is ill-defined measure of complexity-difficulty or abnormality?-then how can it be a valid explanans? Responding to Archibald (1998) who suggested that positing markedness as an explanation (rather than a description) only bumped the explanation problem back a generation (because what explains markedness?), Eckman (2008;105) counter-argues that, "to reject a hypothesis because it pushes the problem of explanation back one step misses the point that all hypotheses push the problem of explanation back one step-indeed, such 'pushing back' is necessary if one is to proceed to higher level explanations." Processing Accounts While more work has been done on the role of the processor in morpho-syntax in SLA (O'Grady, 1996;O'Grady, 2006;Truscott and Sharwood Smith, 2004), Carroll (2001) explores the role of the phonological parser in mapping the acoustic signal onto phonological representations. Carroll (2013) addresses these questions in initial-state L2 learners empirically. There has also been some work done on L2 phonological parsing at the level of the syllable (Archibald, 2003;Archibald, 2004;Archibald, 2017) which suggests that structures which can be parsed are easier to acquire than structures which the parser cannot yet handle. Such models intersect with the perception literature insofar as the L2 acoustic input is filtered by the L1 phonological system (Pallier et al., 2001). 
In turn, such perceptual shoe-horning can lead to activation of phantom lexical competitors (Broersma and Cutler, 2007) which may slow lexical activation. The notion that only some input can be processed at any given time, thus leading to the intake to the processor being a subset of the environment input, is well-studied in applied linguistics (Corder, 1967;Schmidt, 1990). What has proved more elusive is explaining when input becomes intake (and when it does not). Certainly one of the challenges is avoiding circularity of the following sort: Q: why is x produced/perceived accurately before y? A: Because it became intake Q: How do you know it became intake? A: Because it was produced/perceived accurately. Processing accounts are not necessarily independent of abstract phonological studies as they have also been important in documenting the viability of abstract phonological features (Lahiri and Reetz, 2010;Schluter et al., 2017). Features can be explanatory when we note classes of sounds behaving in a similar fashion, for example, only nasals being allowed in syllable codas in a given L1. Thus difficulty may arise when these learners attempt to parse L2 stops into a coda. Note that the difficulty would affect, say, [p t k] as a class of voiceless stops. Representational Accounts Theories of phonological representation help us to model both synchronic and diachronic aspects of L2 phonological grammars. Özçelik (2016) addresses the general question of developmental path in L2 grammars (a fundamental concern of the field as we try to develop a transition theory). He proposes a cue-based model which clarifies which structural properties (i.e., parameters) are logical precursors to the acquisition of subsequent parameters. Özçelik and Sprouse (2016) demonstrate that interlanguage grammars are constrained by phonological universals (such as the behavior of feature spreading). Feature-based models (Brown, 2000) can be contrasted with segment-based models (Flege, 1995). A segment-based model might say that a new segment will be difficult to acquire based on a comparison of the L1 and L2 phonetic categories. A featurelevel account would argue that new L2 contrasts which were based on distinctive features that were absent from the L1 would be difficult while new contrasts based on L1 features would be easy. Brown (2000) showed that Korean learners of English could acquire new contrasts if the contrasts were based on an existing L1 feature (e.g., [continuant]) while L2 contrasts which were not based on L1 features (e.g., [distributed]) were more difficult to acquire. LaCharité and Prévost (1999) suggest that this was too strong an approach and that some features which were absent (i.e., terminal nodes) would be acquirable while others (i.e., articulator nodes) would not, as shown in (1). The features in boldface are the ones which are absent from the L1 French inventory. They predict that the acquisition of L2 English [h] will be more difficult than the acquisition of [θ] because [h] requires the learner to trigger a new articulator node. On a discrimination task, the learners were significantly less accurate identifying [h] than identifying [θ], however, on a word identification task (involving lexical access) there was no significant difference between the performance on [h] vs. [θ]. Özçelik and Sprouse (2016), however, show that L2 learners are able to acquire the features of secondary articulations (e.g., palatalized consonants). 
Hancin-Bhatt (1994) proposed that the functional load of a particular feature in implementing a contrast in a language would determine its weighting (with features with high functional load predicted to have greater cross-linguistic influence than those with low functional load). Archibald (2005) proposed the Redeployment Hypothesis in which it would be easier to acquire new L2 structures which could be built from existing L1 building blocks (e.g., features, or moras) than to acquire new building blocks. In some ways, this approach presages Lardiere's (2009) Feature Reassembly Hypothesis which looks to account for the difficulty that L2 learners have acquiring L2 morphology. One example of redeployment is evidenced in the L2 acquisition of Japanese geminate consonants by L1 English speakers. Japanese geminate consonants have the moraic structure shown in (2). English does not have geminate consonants, but does have a weight-sensitive stress system, shown in (3) where coda consonants project moras which attracts stress to heavy syllables. Thus, the English quantity-sensitive system can be redeployed to acquire L2 geminates. The corollary to this would be that L2 structures which could not be built from L1 components would be more difficult to acquire. Cabrelli et al. (2019), looking at Brazilian Portuguese learners of English coda consonants, also demonstrate that L2 learners can restructure their phonological grammars insofar as the L2 learners are licensing coda consonants which are not found in the L1. Carlson (2018) found similar effects in L1 Spanish. Garcia (2020) describes an interesting case where a property of the L2 (stress placement) which could be acquired on the basis of transferring an L1 property of weight-sensitivity is, in fact, difficult to acquire because another property of the L1 is able to account for the L2 data, and this property (positional bias) is more robust in the L2 input. Darcy et al. (2012) present data which show, contra Flege (1995), that some learners who were able to lexically represent a contrast were unable to accurately discriminate it. The model is known as DMAP which stands for direct mapping of acoustics to phonology. The basic empirical finding which they report on is a profile where L2 learners of French (with L1 English which lacks/y/) can distinguish lexical items which rely on a /y/ -/u/ distinction while simultaneously being unreliable in discriminating [y] from [u] in an ABX task. Detection of acoustic properties can lead to phonological restructuring (according to general economy principles of phonological inventories) which will result in a lexical contrast but the phonetic categories may not yet be targetlike. The learners rely on their current interlanguage feature hierarchy to set up contrastive lexical representations even as phonetic category formation proceeds. This is reminiscent of the Goto (1971) study where Japanese learners were able to produce an /l/-/r/ liquid contrast even while not being able to discriminate between them in a decontextualized task. It could be that the tactile feedback received in the production of these two sounds, and the orthographic distinction between "l" and "r" were able to cue the learners' production systems. This sort of metalinguistic knowledge can affect production. Davidson and Wilson (2016) extend a body of research which documents L2 learners' sensitivity to non-contrastive phonetic properties (which might account for occurrences of prothesis vs. 
epenthesis in cluster repair) to look at learner behavior in the classroom. While subjects listening in a classroom (compared to a sound booth) showed some differences (e.g., less prothesis repair), by and large the performance was very similar. This suggests that laboratory research may well have quite direct implications for classroom learners. Learnability and L2 Phonology Learnability approaches (Wexler and Culicover, 1980; Pinker, 1989; White, 1991) argued that learning would be faster when there was positive evidence that the L1 grammar had to change, while change that was cued only by negative evidence would be acquired more slowly. Positive evidence is evidence in the linguistic environment of well-formed structures. Negative evidence is evidence given to the learner that a particular string is ungrammatical. It would be easier to move from an L1 which was a subset of the L2 (because there is positive evidence to indicate that the current grammar is incorrect) than it would be to move from an L1 which was a superset of the L2 (as this would require negative evidence). Consider the example of L1 English and L2 Hungarian as shown in Figure 1. Hungarian secondary stress (Kerek, 1971) is quantity-sensitive to the Nucleus (meaning that only branching nuclei (i.e., long vowels (CVV)) are treated as Heavy, but not branching Rhymes (i.e., closed syllables (CVC))). English stress is quantity-sensitive to both branching nuclei and branching rhymes. If your L1 treated long (i.e., bimoraic) vowels (CVV) and closed syllables (CVC) as heavy (as English does) but the L2 only treated long vowels as heavy, then it might take a while for the learner to hypothesize "wait, I've never heard a secondary stress on a closed syllable!". But L1 Hungarian to L2 English would have clear positive evidence when the learner hears stress placed on a closed syllable (as in agénda). An English learner of Hungarian would have to notice that Hungarian never stresses closed syllables. Dresher and Kaye (1991) argued that when the data reveal that closed syllables and branching nuclei behave the same with respect to stress assignment, this is the universal cue for the system to be quantity-sensitive to the rhyme. See Archibald (1991) for further discussion and empirical investigation. Young-Scholten's (1994) Asymmetry Hypothesis predicts that if an L2 phonological rule applies in a prosodic domain that is a superset of the L1 phonological domain, then the positive evidence will make it easier to acquire. However, when the target domain is smaller than the L1 domain, then the lack of positive evidence will make acquisition more difficult. In English, the rule of flapping applies within a phonological utterance (e.g., Don't sit on the mat [ɾ], it's dirty.). German has a rule of final devoicing which applies within a phonological word (e.g., Ich ha[b]e ∼ Ich ha[p]). So, English learners of German are predicted to have difficulty acquiring phonological patterns which are licensed only in smaller phonological domains. In addition to positive evidence or direct negative (i.e., correction) evidence, however, Schwartz and Goad (2017) have demonstrated that indirect positive evidence can play a role in second language learning where the L2 is a subset of the L1. In this case, L2-accented English was shown to be a source of evidence for some subjects as to the phonotactics of Brazilian Portuguese.
There is one area which is just starting to be explored in L2 phonology, and that is Dresher's (2009) contrastive hierarchy as an explanatory tool for ease and difficulty. Dresher's model suggests that L2 features which are active (i.e., involved in many phonological processes in the language) will be easier to learn than L2 features which are inactive, due to the type of evidence they present to the learner. Active features provide robust cues to the learner that a given feature must be highly ranked in a contrastive hierarchy, and this, therefore, provides evidence to restructure the L1 hierarchy. Archibald (2020) has explored this model in an analysis of L3 phonological systems. Such a mechanism is reminiscent of Hancin-Bhatt's (1994) notion of how functional load defines featural prominence. CONCLUSION What I have attempted to show in this mini-review is that there is a rich history in addressing the question of ease vs. difficulty in L2 phonology. I hope that this overview will provide useful background to the readers of this collection. Unsurprisingly, there is no easy answer to the difficult question of ease vs. difficulty. AUTHOR CONTRIBUTION I am the sole author of this piece. ACKNOWLEDGMENTS I would like to thank the reviewers for this piece. Their keen eye for clarity and accuracy has greatly improved this mini-review. But I have to say that the friendly, supportive scholarly exchange was as enjoyable as it was rare.
4,856.6
2021-03-03T00:00:00.000
[ "Linguistics" ]
The Effect of γ-irradiation on the Structural and Physical Properties of CdSe Thin Films A thin film of CdSe was deposited onto a clean glass substrate by the CBD technique at room temperature. The samples were irradiated by γ-rays with various doses (25, 50, 100, 150) rad. These films were characterized by XRD, which indicated that the as-deposited CdSe layers and the films irradiated at 25 & 50 rad of γ-rays grew in the cubic phase with a preferred orientation along the (111) plane in the c-direction. Further, the films irradiated at 100 & 150 rad of γ-rays are polycrystalline in nature, with a mixture of cubic and hexagonal structures. Optical absorption spectra of these thin films were recorded using a spectrophotometer, and the energy band gap was determined from these spectra. It is found that the energy band gap of the CdSe film is 2.09 eV and that it increases with increasing γ-ray irradiation dose. The electrical conductivity measurements showed a decrease in conductivity with increasing γ-irradiation dose. INTRODUCTION The II-VI semiconducting compounds, especially the cadmium chalcogenides, have been extensively studied due to their potential applications in semiconductor devices and solar cell fabrication [1][2][3]. Cadmium selenide with some additives is nowadays attracting a great deal of attention owing to its potential, fundamental, experimental and applied interest in a variety of thin-film devices such as laser screen materials, projection colour TVs, nuclear radiation detectors, light emitting diodes, etc. [4][5][6][7][8][9]. Many studies have focused on cadmium selenide because of its high luminescence quantum yield, suitable band gap and a variety of optoelectronic conversion properties [10]. Several physical and chemical techniques are available for the growth of CdSe thin films. CdSe thin films have been deposited using different techniques such as electrodeposition [11][12], molecular beam epitaxy [13], spray pyrolysis [14], the successive ionic layer adsorption and reaction method [15], vacuum deposition and chemical bath deposition [16]. Among these methods, chemical bath deposition has several advantages over the other techniques, such as uniform film deposition, control of thickness, precise maintenance of deposition temperature and low cost [17][18]. In this paper, CdSe thin films were deposited on glass substrates by chemical bath deposition, and we then demonstrate the effect of γ-irradiation on the optical, electrical and structural properties of the CdSe films. The synthesized films were characterized and analyzed with a scanning electron microscope (SEM), X-ray diffraction (XRD) patterns and an ultraviolet-visible (UV-vis) spectrophotometer. EXPERIMENTAL DETAILS Solutions were prepared by dissolving an appropriate amount of analytical-grade selenium metal and sodium sulphite in 10 ml of distilled water. Sodium selenosulphite (Na2SeSO3) can be synthesized by refluxing selenium powder in a sodium sulphite solution. In the experiments, (0.5 M) cadmium chloride was dissolved in 10 ml of distilled water and ammonia was used as the complexing agent. The temperature of the solution was allowed to rise slowly up to 35 ˚C. The substrates were removed from the beaker after about 20 h. After the deposition, the substrates were taken out of the bath, rinsed with distilled water, dried in air and kept in a desiccator. In order to accelerate the irradiation process, the strongest of the available γ-ray sources was used, namely a 137Cs source with an activity of 0.132 Ci, emitting 662 keV γ rays.
The average dose rate during the irradiation was 6 Gy/h, and the irradiation times were (2.5, 5, 10, 15) min for the doses (25, 50, 100, 150) rad, respectively. The samples were placed at a distance of 8 mm from the radiation source. The samples were characterized by X-ray diffraction patterns recorded in the 2θ range from 20˚ to 80˚, using CuKα radiation (λ = 1.5418 Å) at 40 kV and 20 mA. The morphology and particle sizes were determined by Scanning Electron Microscopy (SEM). UV-vis spectroscopy was carried out at room temperature using a spectrophotometer in the range 400-1100 nm. The conductivity of these films (as-deposited and irradiated) was determined by I-V measurements using an electrometer. RESULTS AND DISCUSSION The crystal structure of the CdSe films was characterized with XRD patterns and the results are shown in Fig. 1. XRD studies revealed that these samples are polycrystalline in nature, exhibiting the hexagonal (wurtzite) and cubic (zinc blende) structures. Fig. 1 shows the XRD patterns of as-deposited and irradiated CdSe thin films at the different doses (25, 50, 100, 150) rad of γ-radiation. The as-deposited CdSe thin films had a cubic structure and showed only one intense reflection peak at 2θ ≈ 29˚, corresponding to the cubic (111) plane, which coincides well with the JCPDS data. The thin films irradiated at 25 rad of γ-rays were cubic with a slight improvement in crystallinity, whereas the films irradiated at 50 rad of γ-rays become polycrystalline with a cubic structure. Further, the CdSe thin films irradiated at 100 rad of γ-rays were polycrystalline with a mixture of cubic and hexagonal structures, with the most intense reflection peak at 2θ ≈ 29˚ corresponding to the cubic (111) plane and weaker reflection peaks corresponding to cubic and hexagonal structures. The film irradiated at 150 rad of γ-rays was also polycrystalline with a mixture of cubic and hexagonal structures, with the most intense reflection peaks at 2θ ≈ 29˚ & 2θ ≈ 56.5˚ corresponding to the cubic (111) & (311) planes, respectively. Scanning electron microscopy (SEM) is a convenient technique to study the microstructure of thin films. Fig. 2 shows the SEM micrographs of the as-deposited and γ-irradiated CdSe thin films. It is observed that the as-deposited CdSe films are non-homogeneous; the grains are densely packed, well defined and quasi-spherical, with different sizes. The film irradiated at 25 rad of γ-rays was nearly similar to the as-deposited CdSe films, with a slight distortion visible in the image. Further, the films irradiated at 50 rad of γ-rays clearly show the effect of γ-irradiation on CdSe, the grains becoming more crystal-like. In the SEM images it was also observed that the crystals become distorted and non-homogeneous. The SEM image of the CdSe thin film irradiated at 150 rad of γ-rays clearly shows microcrystals of larger size, more crystalline behavior and good coverage of the glass substrate. The transmittance spectrum of the CdSe film was recorded using a UV-vis spectrophotometer at room temperature in the wavelength range 400-1100 nm. The optical transmittance of the films is shown in Fig. 3. From the optical transmittance spectra, it is observed that the transmittance of the CdSe thin films increases with increasing irradiation dose (25, 50, 100, 150) rad. The optical band gap energy Eg can be determined from the experimental values of the absorption coefficient α as a function of photon energy hν, using the following relation [19].
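As a cross-check of the structural assignment, the interplanar spacing and the cubic lattice constant implied by the quoted numbers (λ = 1.5418 Å, (111) reflection at 2θ ≈ 29˚) follow directly from Bragg's law. The short script below is an illustrative calculation only and is not part of the original analysis.

```python
import math

# Values quoted in the text: Cu K-alpha wavelength and the cubic (111) reflection position.
wavelength_A = 1.5418          # X-ray wavelength in angstroms
two_theta_deg = 29.0           # approximate 2-theta of the (111) peak
h, k, l = 1, 1, 1              # Miller indices of the reflection

theta = math.radians(two_theta_deg / 2.0)
d_hkl = wavelength_A / (2.0 * math.sin(theta))           # Bragg's law, first order: lambda = 2 d sin(theta)
a_cubic = d_hkl * math.sqrt(h ** 2 + k ** 2 + l ** 2)    # cubic lattice: d_hkl = a / sqrt(h^2 + k^2 + l^2)

print(f"d(111) = {d_hkl:.3f} A, cubic lattice constant a = {a_cubic:.3f} A")
```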
where hν is the photon energy, Eg is the band gap and A is a constant; n takes the value 1/2 for a direct allowed transition. The value of the absorption coefficient is found to be of the order of 10⁴ cm⁻¹. The plot of (αhν)² versus hν is shown in Fig. 4a; it is linear at the absorption edge, indicating a direct allowed transition. The electrical conductivity of the CdSe thin films was determined using four-point probe measurements at room temperature, as shown in Table 2. It is found that the conductivity decreases with increasing irradiation dose. This is explained in terms of structural changes and defect creation occurring in the irradiated films. In as-deposited CdSe films, there are some lattice defects and geometrical and physical imperfections randomly distributed on the surface and within the volume of the film [20]. The roughness of the surface, grain boundaries and inclusions in the volume are the main components of the geometrical imperfection. The important factor responsible for the physical properties of a thin film is its structure. It is therefore expected that the decrease in the conductivity is due to an increase in the mean grain size [21] and a decrease in the grain boundary area, as shown in the SEM pictures, in addition to the increase of defects such as vacancies and interstitials. The possible formation of CdS may be an additional factor in the decrease in conductivity. CONCLUSIONS In summary, the influence of γ-irradiation on the optical, structural and electrical properties of the chemically deposited CdSe thin films was investigated. The optical energy band gap increased from 2.09 to 2.35 eV with increasing irradiation dose. The structure of the films transformed slightly from cubic to a mixture of cubic and hexagonal structures at 100 & 150 rad of γ-rays. The films show typical semiconductor characteristics, with a conductivity of the order of 1.3 × 10⁻² (Ω cm)⁻¹ at room temperature, and the electrical conductivity decreases down to 7 × 10⁻⁴ (Ω cm)⁻¹ at 150 rad of γ-rays.
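The band-gap determination described above amounts to a linear fit of the Tauc plot, (αhν)² versus hν, near the absorption edge, followed by an extrapolation to zero. The sketch below shows one way to carry this out; the spectrum, the film thickness and the fitting window are synthetic placeholders, since the measured data are not reproduced here.

```python
import numpy as np

def tauc_band_gap(wavelength_nm, absorbance, thickness_cm, fit_window_eV):
    """Estimate a direct band gap from a Tauc plot, (alpha*h*nu)^2 versus h*nu.

    wavelength_nm, absorbance : measured UV-vis spectrum (placeholder inputs)
    thickness_cm              : film thickness, needed to convert absorbance to alpha
    fit_window_eV             : (E_min, E_max) range over which the absorption edge is linear
    """
    h_nu = 1239.84 / np.asarray(wavelength_nm)                # photon energy in eV
    alpha = 2.303 * np.asarray(absorbance) / thickness_cm     # absorption coefficient in cm^-1
    y = (alpha * h_nu) ** 2                                   # direct allowed transition, n = 1/2
    mask = (h_nu >= fit_window_eV[0]) & (h_nu <= fit_window_eV[1])
    slope, intercept = np.polyfit(h_nu[mask], y[mask], 1)     # linear fit of the edge
    return -intercept / slope                                 # extrapolate (alpha*h*nu)^2 -> 0 to get Eg

# Synthetic spectrum built around a 2.09 eV edge (the measured data are not reproduced here).
lam = np.linspace(400.0, 1100.0, 400)
E = 1239.84 / lam
thickness = 5e-5                                              # assumed film thickness in cm
alpha_syn = np.sqrt(np.clip(E - 2.09, 0.0, None) * 1e9) / E   # chosen so (alpha*E)^2 is linear in E
A = alpha_syn * thickness / 2.303
print(f"Estimated Eg = {tauc_band_gap(lam, A, thickness, (2.2, 2.8)):.2f} eV")
```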
1,988
2013-07-23T00:00:00.000
[ "Materials Science", "Physics" ]
A physics-informed deep learning approach for solving strongly degenerate parabolic problems In recent years, Scientific Machine Learning (SciML) methods for solving partial differential equations (PDEs) have gained increasing popularity. Within such a paradigm, Physics-Informed Neural Networks (PINNs) are novel deep learning frameworks for solving initial-boundary value problems involving nonlinear PDEs. Recently, PINNs have shown promising results in several application fields. Motivated by applications to gas filtration problems, here we present and evaluate a PINN-based approach to predict solutions to strongly degenerate parabolic problems with asymptotic structure of Laplacian type. To the best of our knowledge, this is one of the first papers demonstrating the efficacy of the PINN framework for solving such kind of problems. In particular, we estimate an appropriate approximation error for some test problems whose analytical solutions are fortunately known. The numerical experiments discussed include two and three-dimensional spatial domains, emphasizing the effectiveness of this approach in predicting accurate solutions. Introduction In this paper, we aim to exploit a novel Artificial Intelligence (AI) methodology, known as Physics-Informed Neural Networks (PINNs), to predict solutions to Cauchy-Dirichlet problems of the type where Ω is a bounded connected open subset of R n (2 ≤ n ≤ 3) with Lipschitz boundary, f and w are given real-valued functions defined over Ω × [0, T ] and ∂ par Ω T respectively, ∇u denotes the spatial gradient of an unknown solution u : Ω × [0, T ) → R, while ( • ) + stands for the positive part. A motivation for studying problem (1.1) can be found in gas filtration problems (see [1] and [3]).In order to make the paper self-contained, we provide a brief explanation in Section 1.1 below. As for the parabolic equation (1.1) 1 , the regularity properties of its weak solutions have been recently studied in [2,3] and [8].The main novelty of this PDE is that it exhibits a strong degeneracy, coming from the fact that its modulus of ellipticity vanishes in the region {|∇u| ≤ 1}, and hence its principal part behaves like a Laplace operator only at infinity. The regularity of solutions to parabolic problems with asymptotic structure of Laplacian type had already been investigated in [11], where a BMO 1 regularity was proved for solutions to asymptotically parabolic systems in the case f = 0 (see also [13], where the local Lipschitz continuity of weak solutions with respect to the spatial variable is established).In addition, we want to mention the results contained in [4], where nonhomogeneous parabolic problems with an asymptotic regularity in divergence form of p-Laplacian type are considered.There, Byun, Oh and Wang establish a global Calderón-Zygmund estimate by converting a given asymptotically regular problem to a suitable regular problem. 
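For orientation, a model equation consistent with the degeneracy just described (a modulus of ellipticity vanishing on the set {|∇u| ≤ 1} and a principal part behaving like the Laplacian only at infinity) can be written as below. This display is an indicative reconstruction of the form of (1.1)_1 and of the boundary data, not a verbatim reproduction of the original problem statement.

```latex
\partial_t u \;-\; \operatorname{div}\!\Big( \big(|\nabla u| - 1\big)_{+}\, \frac{\nabla u}{|\nabla u|} \Big) \;=\; f
\quad \text{in } \Omega_T := \Omega \times (0,T),
\qquad
u = w \quad \text{on } \partial_{\mathrm{par}}\Omega_T .
```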
Concerning the approach used here, the PINNs are a Scientific Machine Learning (SciML) technique based on Artificial Neural Networks (ANNs) with the feature of adding constraints to make the predicted results more in line with the physical laws of the addressed problem.The concept of PINNs was introduced in [12,15,16] and [17] to solve PDE-based problems.The PINNs predict the solution to a PDE under prescribed initial-boundary conditions by training a neural network to minimize a cost function, called loss function, which penalizes some suitable terms on a set of admissible functions u (for more information, we refer the interested reader to [6]). The kind of approach we want to propose here can offer effective solutions to real problems such as (1.1) and can be applied in many other different fields: for example, in production and advanced engineering [19], for transportation problems [7], and for virtual thermal sensors using real-time simulations [10].Additionally, it is employed to solve groundwater flow equations [5] and address petroleum and gas contamination [18]. As far as we know, this is one of the first papers demonstrating the effectiveness of the PINN framework for solving strongly degenerate parabolic problems of the type (1.1). Motivation Before describing the structure of this paper, we wish to motivate our study by pointing out that, in the physical cases n = 2 and n = 3, degenerate equations of the form (1.1) 1 may arise in gas filtration problems taking into account the initial pressure gradient. The existence of remarkable deviations from the linear Darcy filtration law has been observed in several systems consisting of a fluid and a porous medium (e.g., the filtration of a gas in argillous rocks).One of the manifestations of this nonlinearity is the existence of a limiting pressure gradient, i.e. the minimum value of the pressure gradient for which fluid motion takes place.In general, fluid motion still occurs for subcritical values of the pressure gradient, but very slowly; when achieving the limiting value of the pressure gradient, there is a marked acceleration of the filtration.Therefore, the limiting-gradient concept provides a good approximation for velocities that are not too low. In accordance with some experimental results (see [1]), under certain physical conditions one can take the gas filtration law in the very simple form where v = v(x, t) is the filtration velocity, k is the rock permeability, µ is the gas viscosity, p = p(x, t) is the pressure and β is a positive constant.Under this assumption we obtain a particularly simple expression for the gas mass velocity (flux) j, which contains only the gradient of the pressure squared, exactly as in the usual gas filtration problems: where ϱ is the gas density and C is a positive constant.Plugging expression (1.2) into the gas mass-conservation equation, we obtain the basic equation for the pressure: where m is a positive constant.Equation (1.3) implies, first of all, that the steady gas motion is described by the same relations as in the steady motion of an incompressible fluid if we replace the pressure of the incompressible fluid with the square of the gas pressure.In addition, if the gas pressure differs very little from some constant pressure p 0 , or if the gas pressure differs considerably from a constant value only in regions where the gas motion is nearly steady, then the equation for the gas filtration in the region of motion can be "linearized" following L. S. 
Leibenson, and thus obtaining (see [1] again) Setting u = p 2 and performing a suitable scaling, equation (1.4) turns into which is nothing but equation (1.1) 1 with f ≡ 0. This is why (1.1) 1 is sometimes called the Leibenson equation in the literature. The paper is organized as follows.Section 2 is devoted to the preliminaries: after a list of some classic notations, we provide details on the strongly degenerate parabolic problem (1.1).In Section 3, we describe the PINN methodology that was employed.Section 4 presents the results that were obtained.Finally, Section 5 provides the conclusions. Notation and preliminaries In what follows, the norm we use on R n will be the standard Euclidean one and it will be denoted by | • |.In particular, for the vectors ξ, η ∈ R n , we write ⟨ξ, η⟩ for the usual inner product and |ξ| := ⟨ξ, ξ⟩ for the backward parabolic cylinder with vertex (x 0 , t 0 ) and width ρ.Finally, for a general cylinder Q = A × (t 1 , t 2 ), where A ⊂ R n and t 1 < t 2 , we denote by the usual parabolic boundary of Q. To give the definition of a weak solution to problem (1.1), we now introduce the function H : R n → R n defined by ) is a weak solution of equation (1.1) 1 if and only if for any test function φ ∈ C ∞ 0 (Ω T ) the following integral identity holds: (2.1) We identify a function as a weak solution of the Cauchy-Dirichlet problem (1.1) if and only if (2.1) holds and, moreover, Therefore, the initial condition u = w on Ω × {0} has to be understood in the usual L 2 -sense (2.2), while the condition u = w on the lateral boundary ∂Ω × (0, T ) has to be meant in the sense of traces, i.e. (u − w) (•, t) ∈ W 1,2 0 (Ω) for almost every t ∈ (0, T ). Taking p = 2 and ν = 1 in [3, Theorem 1.1], we immediately obtain the following spatial Sobolev regularity result: is a weak solution of equation (1.1) 1 .Then the solution satisfies Furthermore, the following estimate ⋐ Ω T and a positive constant c depending on n, q and R 0 . From the above result one can easily deduce that u admits a weak time derivative u t , which belongs to the local Lebesgue space L min {2, q} loc (Ω T ).The idea is roughly as follows.Consider equation (1.1) 1 ; since the previous theorem tells us that in a certain pointwise sense the second spatial derivatives of u exist, we may develop the expression under the divergence symbol; this will give us an expression that equals u t , from which we get the desired summability of the time derivative.Such an argument has been made rigorous in [3,Theorem 1.2], from which we can derive the next result. Theorem 2.4.Under the assumptions of Theorem 2.3, the time derivative of the solution exists in the weak sense and satisfies Furthermore, the following estimate holds true for any parabolic cylinder ⋐ Ω T and a positive constant c depending on n, q and R 0 . Now, let the assumptions of Theorem 2.3 be in force.For ε ∈ [0, 1] and a couple of standard, non-negative, radially symmetric mollifiers where f is meant to be extended by zero outside Ω T .Observe that f 0 = f and f ε ∈ C ∞ (Ω T ) for every ε ∈ (0, 1]. Next, we consider a domain in space-time denoted by Ω ′ 1,2 := Ω ′ × (t 1 , t 2 ), where Ω ′ ⊆ Ω is a bounded domain with smooth boundary and (t 1 , t 2 ) ⊆ (0, T ).In the following, we will need the definitions below. 
) the following integral identity holds: as a weak solution of the Cauchy-Dirichlet problem if and only if (2.4) holds and, moreover, in the usual L 2 -sense and the condition u ε = u on the lateral boundary Due to the strong degeneracy of equation (1.1) 1 , in order to prove Theorems 2.3 and 2.4 above, the authors of [3] resort to the family of approximating parabolic problems (2.5).These problems exhibit a milder degeneracy than (1.1) and the advantage of considering them stems from the fact that the existence of a unique energy solution u ε satisfying the requirements of Definition 2.6 can be ensured by the classic existence theory for parabolic equations (see [14, Chapter 2, Theorem 1.2 and Remark 1.2]). Physics-informed methodology PINNs are a type of SciML approach used in neural networks to solve PDEs.Unlike traditional neural networks, PINNs incorporate physics constraints into the model, resulting in predicted outcomes that adhere more closely to the natural laws governing the specific problem being addressed.The general form of the problem involves a PDE along with initial and/or boundary conditions. In particular, we consider a (well-posed) problem of the type where Ω is a bounded domain in R n , F denotes a nonlinear differential operator, γ is a parameter associated with the physics of the problem, B is an operator defining arbitrary initial-boundary conditions, the functions f and w represent the problem data, while u(x, t) denotes the unknown solution.The objective of PINNs is to predict the solution to (3.1) by training the neural network to minimize a cost function.The neural network's architecture used for PINNs is typically a FeedForward fully-connected Neural Network (FF-DNN), also known as Multi-Layer Perceptron (MLP).In an FF-DNN, information flows only forward direction, in the sense that the neural network does not form a loop.Furthermore, all neurons are interconnected.Once the number N of hidden layers has been chosen, for any i ∈ {1, . . ., N } and set z = (x, t) we define where W i is the weights matrix of the links between the layers i − 1 and i, while b i corresponds to the biases vector.Then, a generic layer of the neural network is defined by for some nonlinear activation function φ i .The output of the FF-DNN, denoted by ûθ (z), can be expressed as a composition of these layers by where θ represents the set of hyperparameters of the neural network and the activation function φ is assumed to be the same for all layers.To solve the differential problem (3.1) using PINNs, the PDE is approximated by finding an optimal set θ * of neural network hyperparameters that minimizes a loss function L. This function consists of two components: the former, denoted by L F , is related to the differential equation, while the latter, here denoted by L B , is connected to the initial-boundary conditions (see Fig. 3.1).In particular, the loss function can be defined as follows where ω F and ω B represent the weights that are usually applied to balance the importance of each component.Hence, we can write The aim of this approach is to approximate the solution of the PDE satisfying the initialboundary conditions.This is known in the literature as the direct problem, which is the only one we will address here. 
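As an illustration of how the loss described above can be assembled in practice, the sketch below implements a minimal PINN in PyTorch for a problem of the type (1.1). It is not the code used in this work: the degenerate flux (|∇u| − 1)_+ ∇u/|∇u| is the form assumed above for equation (1.1)_1, the collocation points, the data f, the boundary values w and the weights ω_F, ω_B are placeholders, and the 4 × 20 tanh architecture anticipates the choice reported in Section 4.

```python
import torch
import torch.nn as nn

class PINN(nn.Module):
    """FF-DNN (MLP): input z = (x, ..., t), scalar output u_hat(z); tanh hidden activations."""
    def __init__(self, n_in=3, n_hidden=20, n_layers=4):
        super().__init__()
        layers, width = [], n_in
        for _ in range(n_layers):
            layers += [nn.Linear(width, n_hidden), nn.Tanh()]
            width = n_hidden
        layers += [nn.Linear(width, 1)]                  # linear output layer
        self.net = nn.Sequential(*layers)

    def forward(self, z):
        return self.net(z)

def pde_residual(model, z, f):
    """Residual of u_t - div((|grad u| - 1)_+ grad u / |grad u|) - f (assumed form of (1.1)_1).

    z is a leaf tensor of collocation points with the time coordinate in the last column.
    """
    z.requires_grad_(True)
    u = model(z)
    grads = torch.autograd.grad(u, z, torch.ones_like(u), create_graph=True)[0]
    u_x, u_t = grads[:, :-1], grads[:, -1:]              # spatial gradient and time derivative
    norm = u_x.norm(dim=1, keepdim=True).clamp_min(1e-8)
    flux = torch.clamp(norm - 1.0, min=0.0) * u_x / norm
    div = torch.zeros_like(u)
    for i in range(u_x.shape[1]):                        # divergence of the flux, component by component
        d_i = torch.autograd.grad(flux[:, i:i + 1], z, torch.ones_like(u), create_graph=True)[0]
        div = div + d_i[:, i:i + 1]
    return u_t - div - f

def loss_fn(model, z_int, f_int, z_bc, w_bc, w_F=1.0, w_B=1.0):
    L_F = (pde_residual(model, z_int, f_int) ** 2).mean()    # physics loss on interior collocation points
    L_B = ((model(z_bc) - w_bc) ** 2).mean()                 # initial/boundary loss against the data w
    return w_F * L_F + w_B * L_B
```

A training loop would then minimise this loss with the ADAM optimizer with respect to the set of hyperparameters θ, as done in Section 4.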
Numerical results In this section we evaluate the accuracy and effectiveness of our predictive method, by testing it with five problems of the type (1.1) whose exact solutions are known.For each problem, we will denote the exact solution by u, and the predicted (or approximate) solution by û.Sometimes, by abuse of language, for a given time t ≥ 0 we will refer to the partial maps u(•, t) and û(•, t) as the exact solution and the predicted (or approximate) solution respectively.The meaning will be clear from the context every time.We will deal with each test problem separately, so that no confusion can arise.In the first three problems, Ω will be a bounded domain of R 2 , while, in the last two problems, Ω will denote the open unit sphere of R 3 centered at the origin. In addition, for each of the test problems, we employed the same neural network architecture.This consists of four layers, each with 20 neurons.We utilized the hyperbolic tangent function as the activation function for both the input layer and the hidden layers, while a linear function served as the activation function for the output layer.Lastly, to train the neural network, we conducted 80000 epochs with a learning rate (lr) of 3×10 −3 and employed the Adaptive Moment Estimation (ADAM) optimizer.The decision to set the lr to the constant value 3 × 10 −3 was based on the observation that this specific hyperparameter led to the optimal convergence of our method.Experimentation with lr set to 1 × 10 −1 highlighted the network's inability to achieve convergence, while using an lr of 1 × 10 −5 allowed the method to converge, albeit requiring a significantly higher number of epochs.The latter scenario, while ensuring convergence, proved to be less computationally efficient.The experiments were performed on a NVIDIA GeForce RTX 3080 GPU with AMD Ryzen 9 5950X 16-Core Processor and 128 GB of RAM. First test problem The first test problem that we consider is where The exact solution of this problem is given by Therefore, for any fixed time t ≥ 0 the graph of the function u(•, t) is an elliptic paraboloid. As time goes on, this paraboloid slides along an oriented vertical axis at a constant velocity, What has been verified is that the plot of the predicted solution û(•, t) has precisely the same shape and geometric properties as the graph of the exact solution u(•, t), for both short and long times t.Moreover, the time evolution of the approximate solution û exactly mirrors the behavior described for the known solution u.A further interesting aspect that can be noticed is that the level curves of the approximate solution û(•, t) overlap almost perfectly those of u(•, t), provided that t is not very large (see Fig. We have also noted that, at time t = 0, the approximate solution is basically equal to zero in a very tiny region around the origin (0, 0) of the xy-plane.This means that the said region is composed of "numerical zeros" of the solution predicted at time t = 0, while we know that u(x, y, 0) = 0 if and only if (x, y) = (0, 0).However, this discrepancy is actually negligible, since the order of magnitude of u(x, y, 0) is not greater than 10 −6 within the above region. To assess the accuracy of our predictive method and the numerical convergence of the solution û toward u in a more quantitative way, we now look at the time behavior of the L 2 -error ∥û(•, t) − u(•, t)∥ L 2 (Ω) by considering the natural quantities and Passing from Cartesian to polar coordinates, one can easily find that and therefore . 
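Evaluating the error quantities introduced above requires, at each sampled time t, an estimate of the spatial L² norm of û(·, t) − u(·, t) over Ω. One simple way to obtain it is Monte Carlo sampling of the domain, as sketched below; the disk-shaped domain, the paraboloid-type "exact" solution and the artificially perturbed "prediction" are illustrative assumptions only, and the actual definitions of E(T) and E_rel(T) remain those of (4.1) and (4.2).

```python
import numpy as np

def l2_error_on_disk(u_exact, u_pred, t, n_samples=20000, radius=1.0, seed=0):
    """Monte Carlo estimate of ||u_pred(., t) - u_exact(., t)||_{L2(Omega)} on a disk Omega."""
    rng = np.random.default_rng(seed)
    r = radius * np.sqrt(rng.random(n_samples))          # uniform sampling of the disk
    phi = 2.0 * np.pi * rng.random(n_samples)
    x, y = r * np.cos(phi), r * np.sin(phi)
    diff2 = (u_pred(x, y, t) - u_exact(x, y, t)) ** 2
    area = np.pi * radius ** 2
    return np.sqrt(area * diff2.mean())                  # sqrt of the integral of the squared error

def running_error(u_exact, u_pred, times):
    """One plausible reading of E(T): the largest spatial L2 error over the sampled times up to T."""
    errs = [l2_error_on_disk(u_exact, u_pred, t) for t in times]
    return np.maximum.accumulate(np.asarray(errs))

# Illustration with a (P1)-like paraboloid sliding at unit speed and a slightly biased "prediction".
u = lambda x, y, t: t + (x ** 2 + y ** 2)
u_hat = lambda x, y, t: t + 1.001 * (x ** 2 + y ** 2)
print(running_error(u, u_hat, np.linspace(0.0, 1.0, 21))[-1])
```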
The results that we have obtained are shown in Table 1.The estimates of E(1) and E(10) are equal to 2.24 × 10 −5 and 4.30 × 10 −4 respectively, which is very satisfactory, especially considering that the order of magnitude of E rel (1) and E rel (10) is equal to 10 −6 . In order to get more accurate estimates for larger values of T , for every fixed T ≥ 100 we have used 2.1 × T equispaced points (instead of the initial 21) to discretize the time interval [0, T ].By doing so, we have observed that the variation of the estimate of E(T ) displays a monotonically increasing behavior, in accordance with the definition (4.1).However, even for 100 ≤ T ≤ 500, the order of magnitude of E rel (T ) remains not greater than 10 −6 .Therefore, for this first test problem, we can conclude that our predictive method is indeed very accurate and efficient, on both a short and long time scale. which is nothing but the approximating problem (2.5) associated with (P1).In what follows, we will denote the exact solution of (4.3) by u ε , while the predicted solution will be denoted by ûε .Throughout our tests, for 10 −9 ≤ ε ≤ 10 −3 , for Ω ′ = Ω and for (t 1 , t 2 ) = (0, T ), we have observed that the plots of the predicted solution ûε (•, t) and the exact solution u(•, t) share the same configurations and geometric peculiarities, on both a short and long time scale (see, e.g. Figure 4.3).Furthermore, we have seen that the evolution over time of ûε reflects the behavior depicted for the solution u quite faithfully.In addition, the contour lines of ûε (•, t) perfectly overlap those of u(•, t), at least for not very long times t (see Fig. Let us now assume that where z 0 = (0, 2) = (0, 0, 2).Then, the limit in (2.6) suggests that ûε should numerically converge to u as ε ↘ 0. To obtain a numerical evidence of such convergence, we have chosen Ω ′ = B 1/2 (0) and (t 1 , t 2 ) = ( 7 4 , 2) into (4.3) and examined the time behavior of the L 2 -error ∥û ε (•, t) − u(•, t)∥ L 2 (Ω ′ ) , by evaluating the quantities and . Table 2 shows the results obtained and reveals that the predicted solution ûε converges to u as ε tends to zero, although not very quickly.In fact, the estimates of E ε and E rel (ε) approach zero with a convergence rate much lower than that of ε.Furthermore, they seem to start decreasing monotonically, i.e. without oscillations, for ε ≤ 10 Second test problem Let α > 0. As a second test problem we consider where The exact solution of problem (P2) is given by u(x, y, t) ≡ u α (x, y, t) := t (x 2 + y 2 ) α . At any fixed time t > 0, the shape and geometric properties of the graph of u(•, t) strongly depend on the value of the parameter α. 
If α = 1 2 , then the graph of u(•, t) is a cone whose vertex coincides with the origin (0, 0, 0) at any given positive time t.As time goes on, the cone in question gets narrower and narrower around the vertical axis.In this case, the plot of the approximate solution û(•, t) has the same form as the graph of the exact solution u(•, t) for both short and long times t > 0, except near the origin, where the tip of the cone appears to have been smoothed out (see Figure 4.5, center).However, this is not a surprise at all, since we already know that for t > 0 the function is not differentiable at the center (0, 0) of Ω.When 0 < α < 1 2 , the graph of u(•, t) is cusp-shaped for any fixed time t > 0, the origin now being a cusp for all positive times.In this case, a loss on convexity occurs, which is also observed in the plot of the predicted solution û(•, t) for all times t > 0 (see, e.g. Figure 4.5, left). Lastly, when α > 1 2 the graph of u(•, t) is no longer cusp-shaped and becomes increasingly narrow around the vertical axis as t increases.Furthermore, for any fixed t > 0 the exact solution u(•, t) is convex again and its graph gets flatter and flatter near the origin when α >> 1 (see Figure 4.6). In all three of the above cases, we have noticed that the plot of û(•, t) is basically identical in its shape and geometry to the graph of the exact solution u(•, t), for both short and long periods t.Moreover, also for problem (P2) we have verified that the time evolution of the predicted solution faithfully reflects the trend described for the exact solution in all three previous cases.Therefore, we may conclude that α = 1 2 represents a critical value for the global behavior of both the exact and the predicted solution.Later, we have examined the contour lines of ûα (•, t) for α ∈ {0.3, 0.5, 1.3, 5} and for not very large times t > 0. For every fixed α ∈ {0.3, 0.5, 1.3}, the level curves of ûα (•, t) overlap quite well those of u α (•, t), with some small differences between one case and the other.More precisely, for each α ∈ {0.5, 1.3} the contour lines corresponding to the same level are almost indistinguishable, at least for not very long times t (see, for example, Figure 4.7, where t = 0.5).For α = 5 and t > 0, we have also noted that the approximate solution is essentially equal to zero in a fairly large region Σ t around the origin (0, 0) of the xy-plane (see Fig. 4.8).As already said for problem (P1), this means that such region consists of numerical zeros of û5 (•, t), while for t > 0 we know that u 5 (x, y, t) = 0 if and only if (x, y) = (0, 0).However, this discrepancy is reasonably small for short times, since the order of magnitude of u 5 (x, y, t) does not exceed 10 −2 within Σ t for 0 < t ≤ 10.To evaluate in a more quantitative manner the accuracy of our method in solving problem (P2) and the numerical convergence of the solution û toward u, we may now consider again the quantities (4.1) and (4.2).Passing from Cartesian to polar coordinates, we find so that we now have During the experimental phase, we have estimated E(T ) and E rel (T ) for α ∈ {0.3, 0.5, 1.3, 5} and T ∈ {1, 10, 20, 40, 100}.The results that we have obtained are reported in Tables 3−6 and show that, for any fixed value of α, the estimate of E(T ) follows an increasing trend, as prescribed by (4.1).Furthermore, by analyzing the orders of magnitude of E(T ) and E rel (T ), we may affirm that our approach provides very accurate predictions, on both a short and long-term scale. 
Third test problem We shall now consider the problem where Ω = (−1, 1) × (−1, 1), The exact solution of this problem is given by Therefore, for any fixed time t ≥ 0, the graph of the function u(•, t) is given by the union of the horizontal region and the sliding plane Let us denote by G t := H t ∪ I t the graph of u(•, t).Then, as time goes by, the set G t slides along a vertical axis with a constant velocity and no deformation, since ∂ t u ≡ 1 over Ω T (see Fig. 4.9, above).The plot of the approximate solution û(•, t) roughly resembles that of u(•, t) for both short and long times t ≥ 0, except near the joining line H t ∩ I t , where the graph of the solution appears to have been slightly smoothed (see Figure 4.9, below).However, this is not surprising at all, since we already know that, for any fixed t ≥ 0, the function u(•, t) : Ω → R defined by (4.4) is not differentiable at any point of the open segment S 0 := {(x, y) ∈ Ω : x = 0}.This fact also has repercussions in the comparison between the level curves of u(•, t) and û(•, t), whose superposition is far from being perfect on approaching the segment S 0 from the right, i.e. for x > 0 (see Fig. 4.10). Furthermore, we have also observed that the evolution of û over time accurately reflects the evolution of the set G t described above.In order to assess in a more quantitative way the accuracy of our method in solving (P3) and the distance between the solutions u and û, we resort again to the quantities defined in (4.1) and (4.2).Through an easy calculation, we get so that we now have Proceeding as for the previous problems, we have estimated E(T ) and E rel (T ) for T ∈ {1, 10, 100, 200, 300}. Table 7 contains the results obtained and reveals that the estimate of E(T ) exhibits again an increasing behavior, as expected from (4.1).Furthermore, from this table, it seems that the asymptotic trend of the estimate of E rel (T ) may encounter a sort of plateau at T = 100, after which convergence sensibly slows down.We do not know whether this is a typical behavior, since we cannot draw information from (4.5) in this sense.In fact, from the definition of E rel (T ) it is not possible to predict what the combined effect of E(T ) and sup is the ratio of two functions which are both increasing with respect to T and we cannot determine a priori the growth rate of E(T ).Nevertheless, by carefully examining the orders of magnitude of both E(T ) and E rel (T ), we can conclude that our method produces accurate results also in this case, in both short and long-term predictions. Fourth test problem We now consider the problem where Ω = {(x, y, z) ∈ R 3 : x 2 + y 2 + z 2 < 1}.This problem is the 3-dimensional version of problem (P1) and its exact solution is given by To evaluate the accuracy of our method in solving problem (P4) and the distance between the predicted solution û and the exact solution u, we confined ourselves to considering the quantities (4.1) and (4.2).Passing from Cartesian to spherical coordinates, one can easily find that and therefore Proceeding as for problem (P1), we have estimated both E(T ) and E rel (T ) for T ∈ {1, 10, 20, 30, 40, 50, 100}. 
The data that we have obtained are reported in Table 8 and show that the estimate of E(T) is monotonically increasing, in agreement with the definition (4.1). From Table 8 it also emerges that the trend of the estimate of E_rel(T) has a sort of plateau between T = 30 and T = 40, after which there is a slight rise. In this regard, the same considerations made for Table 7 apply. However, for every T ≤ 100 the order of magnitude of E_rel(T) is not greater than 10⁻⁵. Therefore, we may affirm that our predictive method is very accurate and efficient in this case, on both a short and long time scale. Fifth test problem Let α > 0 and ω = 2α(2α + 1). The last problem that we consider is where This problem is nothing but the 3-dimensional version of (P2) and its exact solution is given by u(x, y, z, t) ≡ u_α(x, y, z, t) := t (x² + y² + z²)^α. In order to assess the accuracy of our method in solving (P5) and the distance between the approximate solution û and the exact solution u, we again limited ourselves to estimating the quantities defined in (4.1) and (4.2). Switching from Cartesian to spherical coordinates, we can easily obtain This yields Proceeding as for problem (P2), we have estimated E(T) and E_rel(T) for α ∈ {0. Conclusions In this paper, we have explored the ability of PINNs to accurately predict the solutions of some strongly degenerate parabolic problems arising in gas filtration through porous media. Since there are no general methods for finding analytical solutions to such problems, it is essential to use efficient and accurate numerical methods. One of the most prevalent methods for addressing these problems is the Finite Difference Method (FDM), wherein PDEs are discretized into a system of algebraic equations to be solved numerically. However, the FDM necessitates the discretization of the domain into a grid of cells or nodes, which can become computationally expensive for large and intricate systems. Although the primary objective of this article is not to prove the effectiveness of a PINN compared to a classical numerical method, we engaged in a comparison with the FDM. As established in the literature, for problems characterized by a less complex domain, the FDM typically exhibits a higher level of accuracy compared to PINNs. Nevertheless, in our study, the advantage of using a PINN lies in the ability to test the model on the various problems presented (varying the initial/boundary functions and the α parameter), once it has been trained. Additionally, the FDM can be utilized as a benchmark in cases where the solution to the problem is unknown, ensuring a fair comparison under equivalent accuracy conditions. For the test problems discussed here, whose exact solutions are fortunately known, we have compared the plots of the predicted solutions with those of the analytical solutions. Moreover, to evaluate the accuracy of our predictive method in a purely quantitative way, we have also analyzed the error trends over time. The proposed approach provides accurate results in line with expectations, at least in short-term predictions. However, some issues remain open, such as how to obtain fully reliable plots for the predicted solution when the exact (unknown) one is not differentiable somewhere, and how to reduce or eliminate some slight discrepancies between the contour lines of the predicted solution and those of the analytical solution in the case n = 2.
To the best of our knowledge, this is one of the first papers demonstrating the effectiveness of the PINN framework for solving strongly degenerate parabolic problems with asymptotic structure of Laplacian type.

Figure 3.1: Overall structure of the proposed methodology. An FF-DNN serves as the neural network's architecture. Automatic differentiation is employed to calculate the loss terms associated with the model's dynamics. The loss function comprises two components: the physics loss, represented by L_F, and the boundary loss, denoted by L_B. During the optimization phase, the objective is to minimize the loss function with respect to the set of hyperparameters θ.

Figure 4.1 (caption, fragment): ... without deformation, since ∂_t u ≡ 1 over Ω_T (see Fig. 4.1, above). To train the neural network, in each experiment we have initially used 441 points to suitably discretize the domain Ω and its boundary ∂Ω, and 21 equispaced points in the time interval [0, T]. Once the network has been trained, we have made a prediction of the solution to problem (P1) at different times t (Fig. 4.1, below).

Table 10: Estimates of E(T) and E_rel(T). Columns: final time T, estimate of E(T), estimate of E_rel(T).
7,663.6
2023-09-29T00:00:00.000
[ "Computer Science" ]
Precise predictions for the associated production of a $W$ boson with a top-antitop quark pair at the LHC The production of a top-antitop quark pair in association with a $W$ boson ($t\bar tW$) is one of the heaviest signatures currently probed at the Large Hadron Collider (LHC). Since the first observation reported in 2015 the corresponding rates have been found to be consistently higher than the Standard Model predictions, which are based on next-to-leading order~(NLO) calculations in the QCD and electroweak (EW) interactions. We present the first next-to-next-to-leading order (NNLO) QCD computation of $t\bar tW$ production at hadron colliders. The calculation is exact, except for the finite part of the two-loop virtual corrections, which is estimated using two different approaches that lead to consistent results within their uncertainties. We combine the newly computed NNLO QCD corrections with the complete NLO QCD+EW results, thus obtaining the most advanced perturbative prediction available to date for the \ttW inclusive cross section. The tension with the latest ATLAS and CMS results remains at the $1\sigma-2\sigma$ level. Introduction.The final state of a W ± boson produced in association with a top-antitop quark pair (t tW ) represents one of the most massive Standard Model (SM) signatures accessible at the Large Hadron Collider (LHC).Since the top quarks rapidly decay into a W boson and a b quark, the t tW process leads to two b jets and three decaying W bosons.This in turn gives rise to multi-lepton signatures that are relevant to a number of searches for physics beyond the Standard Model (BSM).In particular, t tW production is one of the few SM processes that provides an irreducible source of same-sign dilepton pairs.Additionally, the t tW signature is a relevant background for the measurement of Higgs boson production in association with a top-antitop quark pair (t tH) and for four-top (t tt t) production. Measurements of t tW production carried out by the ATLAS and CMS collaborations at centre-of-mass energies of √ s = 8 TeV [1,2] and √ s = 13 TeV [3][4][5] lead to rates consistently higher than the SM predictions.A similar situation holds for t tW measurements in the context of t tH [6,7] and t tt t [8,9] analyses.The most recent measurements [10,11], based on an integrated luminosity of about 140 fb −1 , confirm this picture, with a slight excess at the 1σ − 2σ level. In this context, it is clear that the availability of precise theoretical predictions for the t tW SM cross section is of the utmost importance.The next-to-leading order (NLO) QCD corrections to t tW production have been computed in Refs.[12][13][14], and EW corrections in Refs.[15,16].Soft-gluon effects were included in Refs.[17][18][19][20].NLO QCD effects to the complete off-shell t tW process have been considered in Refs.[21][22][23], while the complete off-shell NLO QCD+EW computation was reported in Ref. [24].Very recently, even NLO QCD corrections to off-shell t tW production in association with a light jet were computed [25].A detailed investigation of theoretical uncertainties for multi-lepton t tW signatures has been presented in Ref. [26] (see also Ref. [27]).Current experimental measurements are compared with NLO QCD+EW predictions supplemented with multijet merging [28,29], which are still affected by relatively large uncertainties.To improve upon the current situation, next-to-next-to-leading order (NNLO) QCD corrections are necessary. 
In this Letter we present the first computation of t tW production at NNLO in QCD.While the required treelevel and one-loop scattering amplitudes can be evaluated with automated tools, the two-loop amplitude for t tW production is yet unknown.In this work, we estimate it by using two different approaches.The first parallels the approach successfully applied in Ref. [30] to t tH production, and is based on a soft-W approximation, which allows us to extract the t tW amplitude from the two-loop amplitudes for top-pair production [31] (see also Ref. [32]).The second is based on the NNLO calculation of Ref. [33], where an approximate form of the two-loop amplitude for the production of a heavy-quark pair and a W boson is obtained from the leading-colour two-loop amplitudes for a W boson and four massless partons [34,35] through a massification procedure [36][37][38].We demonstrate that the two approximations, despite their distinct conceptual foundations and the fact that they are used in a regime where their validity is not granted, yield consistent results within their respective uncertainties.Finally, we combine the computed NNLO QCD corrections with the complete NLO QCD+EW result, thus obtaining the most accurate theoretical prediction for this process available to date. In addition to the inherent challenges involved in obtaining the relevant scattering amplitudes, the implementation of a complete NNLO calculation is a difficult task because of the presence of infrared (IR) divergences at intermediate stages of the calculation.In this work NNLO IR singularities are handled and cancelled by using the q T subtraction formalism [39], extended to heavy-quark production in Refs.[40][41][42].According to the q T subtraction formalism, the differential cross section dσ can be evaluated as The first term on the right-hand side of Eq. ( 1) corresponds to the q T = 0 contribution.It is obtained through a convolution, with respect to the longitudinalmomentum fractions z 1 and z 2 of the colliding partons, of the perturbatively computable function H with the LO cross section dσ LO .The real contribution dσ R is obtained by evaluating the cross section to produce the t tW system accompanied by additional QCD radiation that provides a recoil with finite transverse momentum q T .When dσ is evaluated at NNLO, dσ R is obtained through an NLO calculation by using the dipole subtraction formalism [43][44][45].The role of the counterterm dσ CT is to cancel the singular behaviour of dσ R in the limit q T → 0, rendering the square bracket term in Eq. ( 1) finite.The explicit form of dσ CT is completely known up to NNLO: it is obtained by perturbatively expanding the resummation formula of the logarithmically enhanced contributions to the q T distribution of the t tW system [46][47][48][49][50]. Our computation is implemented within the Matrix framework [51], suitably extended to t tW production, along the lines of what was done for heavy-quark production [41,42,52].The method was recently applied also to the NNLO calculation of t tH [30] and b bW [33] production, for which the contributions from soft-parton emissions at low transverse momentum [53] had to be properly extended to more general kinematics [54].The required tree-level and one-loop amplitudes are obtained with OpenLoops [55][56][57] and Recola [58][59][60].In order to numerically evaluate the contribution in the square bracket of Eq. 
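Schematically, and following the description just given, the q_T-subtraction master formula of Eq. (1) has the structure shown below; the notation (in particular the convolution over the momentum fractions z1 and z2, here left implicit in the symbol ⊗, and the process labels) is indicative rather than the exact expression used in the Letter.

```latex
d\sigma^{t\bar t W} \;=\; \mathcal{H}^{t\bar t W} \otimes d\sigma_{\mathrm{LO}}^{t\bar t W}
\;+\; \Big[\, d\sigma_{\mathrm{R}}^{t\bar t W+\mathrm{jet}} \;-\; d\sigma_{\mathrm{CT}}^{t\bar t W} \,\Big].
```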
( 1), a technical cut-off r cut is introduced on the dimensionless variable q T /Q, where Q is the invariant mass of the t tW system.The final result, which corresponds to the limit r cut → 0, is extracted by computing the cross section at fixed values of r cut and performing the r cut → 0 extrapolation.More details on the procedure and its uncertainties can be found in Refs.[49,51]. The purely virtual contributions enter the first term on the right-hand side of Eq. ( 1), and more precisely the hard function H (related to H through H = Hδ(1 − z 1 )δ(1 − z 2 ) + δH) whose coefficients, in an expansion in powers of the QCD coupling α S (µ R ), are defined as Here, µ R is the renormalisation scale, and are the perturbative coefficients of the finite part of the renormalised virtual amplitude for the process u d(dū) → t tW +(−) , after the subtraction of IR singularities at the scale µ IR , according to the conventions of Ref. [61].In order to obtain an approximation of the NNLO coefficient H (2) , we use two independent approaches, applied to both the numerator and the denominator of Eq. ( 2).The first relies on a soft-W approximation.In the high-energy limit, in which the colliding quark and antiquark of momenta p 1 and p 2 radiate a soft W boson with momentum k and polarisation ε(k), the multi-loop QCD amplitude in d = 4 − 2ϵ dimensions behaves as where g is the EW coupling and M L ({p i }) the q L qR → t t virtual amplitude.In the second approach the two-loop coefficient H (2) is approximated in the ultra-relativistic limit m t ≪ Q by using a massification procedure [36][37][38].We start from the massless W + 4-parton amplitudes M mt=0 evaluated in the leading-colour approximation [35,62] to obtain where Z are perturbative functions whose explicit expression up to NNLO can be found in Ref. [37].This procedure1 was successfully applied to evaluate NNLO corrections to b bW production in Ref. [33]. In order to use Eq. ( 3) to approximate the t tW amplitudes, we need to introduce a prescription that, from an event containing a t t pair and a W boson, defines a corresponding event in which the W boson is removed.This is accomplished by absorbing the W momentum into the top quarks, thus preserving the invariant mass of the event.On the other hand, for the application of Eq. ( 4) we map the momenta of the massive top quarks into massless momenta by preserving the four-momentum of the t t pair.In both cases we reweight the respective twoloop coefficients with the exact Born matrix elements.This approach effectively captures additional kinematic effects, which we expect to extend the region of validity of the approximations well beyond where it may be assumed in the first place. 
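The r_cut → 0 extrapolation mentioned above can be illustrated with a simple weighted polynomial fit of the cross section computed at several fixed r_cut values, as sketched below; the fit model and all numbers are fictitious placeholders, not the procedure or the results of the Letter.

```python
import numpy as np

def extrapolate_rcut(r_values, sigma_values, sigma_errors, degree=2):
    """Weighted least-squares fit sigma(r_cut) ~ c0 + c1*r + c2*r^2; return the r_cut -> 0 limit c0.

    Purely illustrative: the functional form and the error treatment actually used for the
    extrapolation in Matrix are more refined than this simple polynomial model.
    """
    w = 1.0 / np.asarray(sigma_errors)
    coeffs = np.polyfit(np.asarray(r_values), np.asarray(sigma_values), deg=degree, w=w)
    return coeffs[-1]                                     # constant term of the polynomial

# Fictitious placeholder numbers (not from the Letter): cross sections at several r_cut values.
r = np.array([0.0015, 0.003, 0.005, 0.007, 0.010])
sigma = np.array([300.5, 301.1, 301.9, 302.8, 304.1])     # fb
err = np.full_like(sigma, 0.3)
print(f"sigma(r_cut -> 0) ~ {extrapolate_rcut(r, sigma, err):.1f} fb")
```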
For our numerical studies, we consider the on-shell production of a W boson in association with a t t pair in proton collisions, at a centre-of-mass energy of √ s = 13 TeV.We set the pole mass of the top quark to m t = 173.2GeV, while for the W mass we use m W = 80.385 GeV.We work in the G µ -scheme for the EW parameters, with G µ = 1.16639 × 10 −5 GeV −2 and m Z = 91.1876GeV.We consider a diagonal CKM matrix.We use the NNPDF31_nnlo_as_0118_luxqed set for parton distribution functions (PDF) [63] and strong coupling, which is based on the LUXqed methodology [64] to determine the photon density.We adopt the LHAPDF interface [65] and use PineAPPL [66] grids through the new Matrix+PineAPPL interface [67] to estimate PDF and α S uncertainties.For our central predictions we set the renormalization (µ R ) and factorization (µ F ) scales to the value µ 0 = m t + m W /2 ≡ M/2, and evaluate the scale uncertainties by performing a 7-point variation, varying them independently by a factor of two with the constraint 1/2 ≤ µ R /µ F ≤ 2. In order to test the quality of our approximations, we apply them to evaluate the contribution of the coefficient H (1) to the NLO correction, ∆σ NLO,H .In Fig. 1 (upper panel) the two approximations are compared to the exact result, as functions of the cut on the transverse momenta of the top quarks, p T,t/ t.We observe that both approximations get closer to the exact result if a harder cut is imposed, since the large-p T,t/ t region corresponds to a kinematical configuration where both of them are expected to reproduce the full amplitude.In particular, we observe that the soft approximation tends to undershoot the exact result, while the massification approach overshoots it.Remarkably, both approaches provide a good approximation also at the inclusive level. We now move on to the contribution of the coefficient H (2) to the NNLO correction, ∆σ NNLO,H .In Fig. 1 (lower panel) the two approximations are compared, normalised to their average.The uncertainties of the soft and massification results are also depicted.These are evaluated starting from the assumption that the uncertainty of each approximation of ∆σ NNLO,H is not smaller than the relative difference between ∆σ approx NLO,H and the exact NLO result.We obtain a first estimate of the uncertainty on ∆σ NNLO,H by conservatively multiplying ∆σ approx NLO,H by a factor of two.As an additional estimate, we consider variations of the subtraction scale µ IR , at which our approximations are applied, by a factor of two around the central scale Q (adding the exact evolution from µ IR to Q).For each of the two approximations, the uncertainty is defined as the maximum between these two estimates.From Fig. 1 we see that the two approximations are consistent within their respective uncertainties.We therefore conclude that our approach can provide a good estimate of the true NNLO hard-virtual contribution.Our best prediction for ∆σ NNLO,H is finally obtained by taking the average of the two approximations and linearly combining their uncertainties.We note that with such procedure the central values of the two approximations are enclosed within the uncertainty band of the average result.The final uncertainty on ∆σ NNLO,H turns out to be at the O(25%) level. 2 As we will observe in what follows, this leads to an uncertainty of the NNLO prediction which is significantly smaller than the residual perturbative uncertainties. 
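The 7-point prescription just described amounts to varying µR and µF by factors of {1/2, 1, 2} around µ0, discarding the two combinations with µR/µF = 4 or 1/4, and taking the envelope of the resulting predictions. The sketch below illustrates this bookkeeping with a fictitious cross-section function; only the central scale value follows from the inputs quoted above.

```python
import math
from itertools import product

def seven_point_envelope(cross_section, mu0):
    """Scale uncertainty from the conventional 7-point variation around the central scale mu0.

    cross_section(mu_R, mu_F) stands in for the (expensive) fixed-order prediction.
    """
    factors = (0.5, 1.0, 2.0)
    points = [(fr, ff) for fr, ff in product(factors, factors)
              if 0.5 <= fr / ff <= 2.0]                   # drops (1/2, 2) and (2, 1/2): 7 points remain
    values = [cross_section(fr * mu0, ff * mu0) for fr, ff in points]
    central = cross_section(mu0, mu0)
    return central, max(values) - central, central - min(values)

# Toy scale dependence, for illustration only; mu0 = m_t + m_W/2, about 213.4 GeV with the inputs above.
mu0 = 213.4
toy = lambda mu_R, mu_F: 300.0 * (1.0 - 0.08 * math.log(mu_R / mu0) - 0.03 * math.log(mu_F / mu0))
central, up, down = seven_point_envelope(toy, mu0)
print(f"sigma = {central:.1f} +{up:.1f} -{down:.1f} fb")
```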
Results. We now focus on our numerical predictions for the LHC. Our results for the total ttW+ and ttW− cross sections are presented in Table I. In the first three rows we consider pure QCD predictions, which are labelled N^nLO_QCD with n = 0, 1, 2. The results in the fourth row, dubbed NNLO_QCD+NLO_EW, represent our best prediction. They additionally include EW corrections and all subleading (in α_S) terms up to NLO, originally computed in Refs. [16,69]. We recompute them here within the Matrix framework, after validation against a recent implementation in Whizard [70]. Predictions for the sum and the ratio of the ttW+ and ttW− cross sections are also provided, and their scale uncertainties are evaluated by performing 7-point scale variations for each of them, keeping µ_R correlated, while the values of µ_F for the ttW+ and ttW− cross sections are allowed to differ by at most a factor of two. Finally, the most recent results by the ATLAS [11] and CMS [10] collaborations are quoted.

We start by discussing the pattern of QCD corrections. The NLO cross section for both ttW+ and ttW− production is about 50% larger than the corresponding LO result. The NNLO corrections are moderate and increase the NLO result by about 15%, showing first signs of perturbative convergence. The ratio between the two cross sections shows a very stable perturbative behaviour. The size of the scale uncertainties is substantially reduced at NNLO, in line with the observed smaller corrections to the central prediction. The impact of the two-loop contribution is relatively large, about 6%-7% of the NNLO cross section. Nonetheless, we find that the ensuing uncertainty on our prediction is O(±2%), i.e. significantly smaller than the remaining perturbative uncertainties.

In addition to the value µ_0 = M/2 used in Table I, we have also considered alternative choices for the central scale, specifically µ_0 = M/4, H_T/2 and H_T/4, where H_T is the sum of the transverse masses of the top quarks and the W boson. Results for the different perturbative orders in the QCD expansion are presented in Fig. 2. At each order, the four predictions are fully consistent within their uncertainties, and in particular the µ_0 = M/2 and µ_0 = H_T/4 bands cover the central values of the other scale choices that have been considered. We note that symmetrising the band of the µ_0 = M/2 prediction at NNLO leads to an upper bound which is almost identical to that of the µ_0 = M/4 and µ_0 = H_T/4 scale variations. Therefore, to be conservative, the perturbative uncertainties affecting our final NNLO_QCD+NLO_EW results are estimated by symmetrising the scale variation error. More precisely, we take the maximum of the upward and downward variations, assign it symmetrically, and leave the nominal prediction unchanged.

The EW corrections increase our NNLO_QCD cross sections by about 5%. While smaller than the NNLO_QCD corrections, their inclusion is crucial for an accurate description of this process, as their magnitude is comparable to the NNLO_QCD scale uncertainties. The PDF (α_S) uncertainties, not shown in Table I, on the ttW+ and ttW− cross sections amount to ±1.8% (±1.8%) and ±1.7% (±1.9%), respectively. The PDF uncertainty on their ratio, derived by recalculating the ratio for each replica, is ±1.7%; its α_S uncertainty is negligible. The current theory reference to which experimental data are compared is the FxFx prediction of Ref.
[29], which reads σ^FxFx_ttW = 722.4 +9.7% −10.8% fb. Our NNLO_QCD+NLO_EW prediction for the ttW cross section in Table I is fully consistent with this value, with considerably smaller uncertainties. We now compare our theoretical predictions to the measurements performed by the ATLAS and CMS collaborations in Refs. [10,11], which represent the most precise experimental determination of the ttW± cross sections to date. From Table I we observe that the individual measurements for the ttW+ and ttW− cross sections are systematically above the theoretical predictions, but all within two standard deviations of our central results, except for the ttW− measurement by the CMS collaboration. The measurement of the ratio σ_ttW+/σ_ttW− by the ATLAS collaboration is in excellent agreement with our prediction, whereas the CMS result exhibits some tension. Finally, we present in Fig. 3 our result in the σ_ttW+ – σ_ttW− plane, together with the 68% and 95% confidence level regions obtained by the two collaborations. The subdominant uncertainties due to the approximation of the two-loop corrections are also shown. When comparing to the data, we observe an overlap between the NNLO_QCD+NLO_EW uncertainty bands and the 1σ and 2σ contours of the ATLAS and CMS measurements, respectively.

Table I. Inclusive cross sections for ttW+ and ttW− production at different perturbative orders, together with their sum and ratio. The uncertainties are computed through scale variations and, for our best prediction, NNLO_QCD+NLO_EW, are symmetrised as discussed in the text. Where NNLO_QCD corrections are included, the error from the approximation of the two-loop amplitudes is also shown. The numerical uncertainties on our predictions are at the per mille level or below. The corresponding experimental results from the ATLAS [11] and CMS [10] collaborations are also quoted, with their statistical and systematic uncertainties.

Summary. In this Letter we have presented the first calculation of the second-order QCD corrections to the hadroproduction of a W boson in association with a top-antitop quark pair. Our results are exact, except for the finite part of the two-loop virtual corrections, which is computed by using two independent approximations. While these approximations are completely different in their conception, they lead to consistent results, thereby providing a strong check of our approach. We have combined our results with the NLO EW corrections, obtaining the most precise theoretical determination of the inclusive ttW± cross section available to date. Our results significantly reduce the size of the perturbative uncertainties, allowing for a more meaningful comparison to the results obtained by the ATLAS and CMS collaborations. The high level of precision attained by our theoretical predictions will enable even more rigorous tests of the SM as more precise experimental measurements become available.

Figure 2. Inclusive ttW cross sections at different orders in the QCD expansion, for different choices of the central renormalization and factorization scales.
Figure 3. Comparison of our NNLO_QCD+NLO_EW result to the measurements performed by the CMS (red) and ATLAS (blue) collaborations in Refs. [10,11], at 68% (solid) and 95% (dashed) confidence level. We indicate in black and orange the scale and the approximation uncertainties, respectively, of the NNLO_QCD+NLO_EW result.
4,624.2
2023-06-28T00:00:00.000
[ "Physics" ]
High Quality Monolayer Graphene Synthesized by Resistive Heating Cold Wall Chemical Vapor Deposition The growth of graphene using resistive‐heating cold‐wall chemical vapor deposition (CVD) is demonstrated. This technique is 100 times faster and 99% lower cost than standard CVD. A study of Raman spectroscopy, atomic force microscopy, scanning electron microscopy, and electrical magneto‐transport measurements shows that cold‐wall CVD graphene is of comparable quality to natural graphene. Finally, the first transparent flexible graphene capacitive touch‐sensor is demonstrated. : Schematic diagram of the cold-wall CVD system used for graphene growth. The arrows indicate the direction of gas flow. The reaction chamber houses a resistively heated substrate stage equipped with an embedded thermocouple which can achieve stable temperatures of up to 1100 o C. The heater assembly slides out of the chamber for substrate loading (see Figure S2) and is then pushed back in the chamber. The hardware is controlled by a programmable logic controller electronics coupled to a touchscreen interface and all operation of the system is carried out through the touch screen. In this system the Cu foil is placed on the resistively heated stage as shown in Figure S3. The temperature at the surface of the Cu foil is measured by using a thermocouple mounted on the heater stage, thus in direct contact with the substrate. Figure S11a shows the heater stability for different temperatures as well as the chamber temperature which remains around 100 o C during the heater operation. As the Cu foil is in direct contact with the heater/thermocouple, the temperature of the substrate can be reliably controlled as the introduction of gas does not modify the foils surface temperature (see Figure S3b). a) b) Figure S3: a) The stability of the heater temperature (red) in vacuum (P=0.05 Torr) for different temperature set-points ranging from 900 o C to 1100 o C. The blue curves show the corresponding chamber temperature which is around 100 o C. b) The stability of the heater when gas with a pressure of 5Torr is introduced in the system. The pressure inside the reaction chamber can be reliably controlled using the pressure control valve. Figure S4 shows the pressure stability for different set-points which are achieved in this case by controlling the flow of Ar gas. Figure S4: Pressure stability for different set-points (a) and the gas flow required to achieve the desired pressure (b). Growth procedure for the graphene films and islands 25 μm thick copper foils (Alfa Aesar 99.999%) were annealed for 10 minutes at 1035 o C in a H 2 atmosphere to increase the Cu grain size. To understand the initial stages of graphene formation, the growth was carried out at temperatures ranging from 950 o C to 1035 o C and the growth time was varied from 10 seconds to 600 seconds. A constant flow rate of 0.4sccm of H 2 and 1.4sccm of CH 4 was used for all growths. A typical processing for the growth of continuous graphene films involves the following steps: (1) heating up the CVD system from room temperature to the growth temperature, (2) Cu foil annealing, (3) graphene nucleation and growth, (4) cooling down the system to room temperature (see Figure S5). During the heating up stage H 2 gas was flown at a rate of 0.4sccm with a chamber pressure of 0.01 Torr. The annealing step was performed for 10 minutes at 1035 o C in a H 2 atmosphere, keeping the H 2 gas flow rate at 0.4sccm and the chamber pressure of 0.01 Torr. 
The temperature was then lowered at 1000 o C for the growth of continuous graphene films. A constant flow rate of 0.4sccm of H 2 was kept throughout the nucleation and growth. For the nucleation stage, 1.4sccm of CH 4 was introduced for 40 seconds. This was followed by the growth stage where the CH 4 flow rate was increased to 7sccm for a 300 seconds. Finally, the system was cooled down at room temperature keeping the H 2 gas flow rate at 0.4sccm. Transfer Procedure of graphene films from the Cu foils onto SiO 2 /Si Grown graphene samples were spun with 200nm of 950K PMMA.The PMMA coated foils were vacuum cured for 30 minutes and then etched in 1M FeCl3 solution. After the copper was fully etched the films were transferred several times to deionized water and then transferred onto SiO2/Si substrates. Device fabrication Graphene devices were produced using standard electron beam lithography and reactive ion etching techniques to define Hall bar geometry (225 μm × 25 μm) shown in the false colour inset of Figure 3a with electrical contacts of Au/Cr (50 nm/ 5 nm). Electrical transport measurements The longitudinal and Hall voltages were measured in a four terminal geometry applying an AC current using a lock-in amplifier. The excitation voltage was selected to be within the linear transport regime. SEM analysis SEM Measurements: SEM micro-graphs were collected with a Phillips SEM. An acceleration voltage of 30kV, magnification of x5000 and beam current of 0.63nA was used. SEM micrographs where taken for graphene islands transferred to SiO 2 to determine the average area and separation of domains. Figure S6a shows a micrograph taken at 5000x magnification where graphene islands appear dark and the SiO 2 substrate is lighter. The image was then processed by inverting the colors and applying a threshold to create a two colour bitmap shown in Figure S6b. Using the matlab image processing toolbox, each island was identified and the area was measured [1]. Figure S6c demonstrates a single identified island on a false colour map. To reduce the effects of residues resulting from the transfer process the results were filtered to remove any island with an area smaller than 1 m 2 . The resulting islands were given random false colour to check that no islands are connected as demonstrated in Figure S6d. All calculations were based on 10 micrographs for each growth time, where the average area was estimated by summing the area of all islands (A islands ) and dividing by the total number of islands (N islands ). The average separation was estimated from the density of islands (S mean ) where density (d) was taken as the total number of islands (N islands ) divided by the total area of the micrographs (A total ). Figure S6. a) An SEM micrograph showing Graphene islands (Black) on an SiO 2 substrate, b) Processed SEM micrograph with inverted intensities and applied black and white threshold, c) A single identified island extracted from SEM micrograph shown in false colour, d) All identified islands from SEM micrograph after applying noise filter AFM analysis To study the evolution from a carbon film to graphene islands, semi-contact AFM topography images were collected with a NTMDT Ntegra AFM. Film thickness was extracted by fitting the statistical distribution of the film and substrate heights. For the contious graphene films, the images were colected in contact mode with a Bruker Innova AFM. 
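The SEM island analysis described above was carried out with the MATLAB image processing toolbox; the following Python/scikit-image sketch mirrors the same workflow (invert the micrograph, threshold, label connected islands, discard regions below 1 µm², then compute the mean island area and a density-based separation). The file name, the pixel calibration, the use of Otsu's threshold and the 1/√density estimate for the mean separation are assumptions for illustration, not details taken from the paper.

```python
import numpy as np
from skimage import io, filters, measure

PIXEL_SIZE_UM = 0.05      # assumed calibration: micrometres per pixel
MIN_AREA_UM2 = 1.0        # discard transfer residues below 1 um^2, as in the text

image = io.imread("sem_micrograph.tif", as_gray=True)

# Islands appear dark on the brighter SiO2 background: invert and threshold
# to obtain a binary island mask.
inverted = 1.0 - image / image.max()
mask = inverted > filters.threshold_otsu(inverted)

# Label connected regions and keep only islands above the area cut-off.
labels = measure.label(mask)
areas_um2 = [r.area * PIXEL_SIZE_UM ** 2 for r in measure.regionprops(labels)]
areas_um2 = [a for a in areas_um2 if a >= MIN_AREA_UM2]

total_area_um2 = image.shape[0] * image.shape[1] * PIXEL_SIZE_UM ** 2
n_islands = len(areas_um2)
mean_area = np.mean(areas_um2)
density = n_islands / total_area_um2
mean_separation = 1.0 / np.sqrt(density)   # assumed density-based estimate

print(f"{n_islands} islands, mean area {mean_area:.2f} um^2, "
      f"mean separation ~ {mean_separation:.2f} um")
```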
The thickness of each growth time was determined using tapping mode AFM where a surface topography was measured, shown in Fig Raman spectra for films grown at 1000 o C and 1035 o C Raman spectra were collected in a Renishaw spectrometer with an excitation laser wavelength of 532 nm, focused to a spot size of 5 μm diameter and x50 objective lens. For films grown at higher temperatures (1000 o C, 1035 o C) we observe the same transition from nanocrystalline graphite to graphene islands as for growths at 950 o C shown in Figure 1, but at a faster rate. Figure S8a shows several spectra at 1000 o C for different times. Fig. S10 shows an optical microscope image (Fig. S10a) and the Raman spectra (Fig. S10b) for three regions of continuous graphene grown using the two stage growth method and transferred on SiO 2 /Si. The D-, G-and 2D-bands were fitted and used for the continuous growth data points which appear in Figure 1 in the main text. These continuous films were used to fabricate the Hall bar devices shown in Fig S10c (top). Fig. S10c shows the mapping of the Full width at half maximum (FWHM) of the 2D band (middle) and the intensity ratio of the D to G peak, I D /I G (bottom). The FWHM of the 2D band ranges from 30 to 35 cm -1 which is typical for CVD grown monolayer graphene. The Raman maps have been taken with 1µm step size. Characterization of continuous graphene films On the continuous films we still observe a small D peak, which indicates defects. However, this peak is usually observed on CVD grown polycrystalline graphene films and it is believed that defects arise from the misalignment of the islands as they come together and coalesce into a continuous film. Indeed, when we grow graphene islands which are larger than the area probed by our Raman measurement (i.e. spot size of 5 μm diameter) we do not observe the presence of the D band as shown in figure S10d. Therefore the D band that appears in the Raman spectra of the films is due to the defects arising from the grain boundaries. The intensity ratio of the D to G peak. d) Raman spectra of a large graphene island taken in the middle of the island. No defect-related D band is observed in this case. Touch sensor fabrication and characterization The touch sensor device was fabricated using a novel technique where all lithography is performed on the surface of a CVD graphene covered copper foil. Fig. S11 shows the outline of the fabrication process, while Fig. S12 shows images of key processing steps. CVD graphene on copper foils where coated in PMMA and contacts were defined using electron beam lithography, Fig. S11 a and b. The PMMA was developed shown in Fig. S12 a and metallized with 50 nm of gold, Fig. S11c and Fig. S12b. Strips of graphene were made between the contacts by coating the CVD graphene on copper foil with PMMA and defining a mask using electron beam lithography, Fig. S11d. The PMMA was developed and the exposed graphene was etched using Ar 2 /O 2 reactive ion etching leaving conductive graphene channels between the gold contacts, Fig. S11e and Fig. S12d To characterize the contact and sheet resistance of the graphene films processed in this way we deposited gold contacts without etching graphene strips and transferred the films to a PEN substrate, set out in Fig. S111a-c, f-h. The two terminal resistance of the graphene strips was measured in air using a probe station and a Keithley source-meter. The capacitance between graphene strips was measured using a Hameg 8118 LCR bridge with 1V AC excitation at 1KHz. 
The two terminal resistance was measured as a function the number of squares (distance between probes divided by the sample width) shown in Fig. S12c. The fitted linear gradient is representative of the film resistivity which we estimate to be 1.3K/ , whereas the y intercept of the linear fit is the sum of the contact resistance for the two contacts, estimated to be 68  for each contact. Figure S11. The process for fabricating the touch sensor devices. a) Graphene is grown on a copper substrate, b) The foil is coated with PMMA and contacts are exposed using electron beam lithography, c) Exposed regions are developed and metalized with 50nm of gold, d) The foil is coated with PMMA and an etch mask is defined between the gold contacts with electron beam lithography, e) Exposed graphene is etched using an Argon plasma, f)The foil is coated with PMMA and the copper is etched using 1 molar FeCl 3 , g) The film is washed in ultra-pure water and h) the film is transferred to a PEN substrate. Figure S12. a) Shows a window in PMMA after electron beam exposure and development on copper foil coated in CVD graphene, b) Shows a gold square after the metallization gold on top of a copper foil coated with CVD graphene, c) shows the resistance for different separations of gold contacts on graphene transferred to a PEN substrate where the y intercept gives contact resistance and the fitted gradient gives the resistivity, d) Shows gold contacts connected by graphene strips on the surface of the copper foil, e) shows a gold contacts supported by a PMMA film in FeCl 3 etchant, f) shows the transferred structure onto a PEN substrate. Costing of graphene growth The estimation of the cost of graphene growth was performed making several assumptions. There are three main factors affecting the price of producing graphene, the cost of growth gases; the energy cost for achieving the temperatures for growth and the cost of the copper used for growths. These calculations do not consider the cost of growth equipment such as furnaces, flow controllers and quartz tube. The costs were only estimated for published papers that contain enough information to estimate growth cost and the quality area of the graphene. Cost of Gases. The cost of growth gases was estimated by collating the total volume of each gas used from growth times and gas flow rates. The cost per unit volume was then estimated assuming the same price for a set volume of gas [4] allowing for the total cost to be estimated shown in Table S1. Table S1. The estimation of cost of each different growth gas in £/m 3 From each research article, gas flow rates and times were collated shown in Table S2. A typical growth consists of following stages: heating to growth temperature, anneal of the copper foils, graphene island nucleation and the growth stage. Summing the volume of gas used for each stage allowed for the estimation of the total volume of each gas used and the cost of each gas. Table S2. The collation of the gas consumption for several graphene growth studies, for the estimation of the total cost of graphene growth gases. The cost for methane has been substituted for that of argon, as argon diluted methane was used. Cost of Energy. To estimate the total energy consumption and cost of each growth process we collated for each different stage of the process, the total growth time, power draw and the cost of electricity in Table S3. The energy consumed during the growth process in a hot wall furnace was estimated assuming an MTI 1200X -5L tube furnace [5]. 
The power consumption is assumed to be at maximum during the ramping to the growth temperature (6KW) and that the power consumption scales linearly as a function of temperature to a maximum of 6KW at 1200 o C. The energy consumed by plasma based cold walled furnace is estimated at 0.7 KW. The energy consumed by a resistively heated cold walled furnace [this work] is 0.3KW for the ramping to the growth temperature and assumed to scale linearly as a function of temperature to 0.3KW at 1200 o C. The cost of electricity is estimated at £0.1352 per KWH [13]. Table S3. The collation energy consumption of each growth procedure, broken down into the heating of the foils to the growth temperature and the growth process. Cost of Copper. We assumed 1cm 2 of 25m thick copper was used in a growth. The cost of copper (99.999 %) is £88.20 for 250cm 2 giving a cost for 1cm 2 of £0.3528. Estimation of total price. By summing the cost of the growth gases, energy used and the cost of the copper foils we can estimate the total cost of graphene production, shown in Table S4. It is clear from Table S4 that the lowest cost of production is this work and that the limiting factor is the cost of the copper foils used in the growth. Table S4. The estimation of cost of each price component and the total cost of the growth for each article in £. Electronic Quality Factor estimation To determine the electronic quality factor for each article, the reported field effect mobility was used and the area of the device was estimated from dimensions given or images appearing in the articles. The data was collated into Table S5 and are shown in Figure S11. Table S5. The collation of information required to make the estimation of Electronic Quality Factor for each article. The mobility is the field effect mobility (cm 2 /Vs), the area is the device area (m 2 ) over which the mobility was estimated and the Electronic Quality Factor (m 2 xcm 2 /(Vs)). Plotting the price versus the mobility shown in Fig. S11a demonstrates the general trend of the cost of production with respect to the mobility. The cost of producing graphene using a cold wall furnace reduces the price significantly when compared to graphene produced in a hot wall furnace while not impacting on the quality of the graphene produced as the general trend would imply. The quality of cold-wall CVD graphene as compared to that grown with other methods is better assessed using the electronic quality factor (Q) that. As shown in the Figure S11b, graphene grown by resistive heating cold-wall CVD has Q ranging from 4x10 6 to 7.2x10 6 , whereas most reports of monolayer graphene grown by hot-wall CVD have Q ranging from 10 3 to 7 x 10 6 . This demonstrates the enhanced electronic quality range of graphene grown by 22 resistive heating cold-wall CVD over the reported values of monolayer graphene grown by hot-wall CVD. Thus as shown in Figure S11b the cold-wall CVD provides a method to produce high quality graphene at a much lower cost than hot-wall CVD. Employing this method in industry will reduce also the retail price of graphene which currently is as high as 21£/cm 2 as shown in Figure S11c. Figure S11. a) A plot of the price of graphene production per cm 2 against the measured mobility. The general trend is a linear fit of the data omitting the data point from this work. b) Estimated cost for different CVD growth processes for monolayer-graphene on Cu plotted against the electronic quality factor, Q. 
c) Retail cost of monolayer graphene as of April 2015, taken from the websites of different suppliers of monolayer graphene grown by CVD on Cu.
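The per-growth cost and electronic quality factor bookkeeping described above reduces to a few lines of arithmetic, sketched below. The electricity price, copper cost per cm² and the 0.3 kW / 6 kW heater powers are the figures quoted in the text; the gas costs, run durations and the resulting example totals are placeholders and do not reproduce the entries of Tables S4 or S5.

```python
ELECTRICITY_PRICE_GBP_PER_KWH = 0.1352   # quoted in the text
COPPER_COST_GBP_PER_CM2 = 0.3528         # 25 um thick, 99.999% Cu foil, per cm^2

def energy_cost(power_kw, hours):
    return power_kw * hours * ELECTRICITY_PRICE_GBP_PER_KWH

def growth_cost(gas_cost_gbp, power_kw, hours):
    # Total per-growth cost: gases + electrical energy + copper foil.
    return gas_cost_gbp + energy_cost(power_kw, hours) + COPPER_COST_GBP_PER_CM2

# Example comparison (gas costs and durations are placeholders): a resistively
# heated cold-wall run at 0.3 kW versus a hot-wall furnace run at up to 6 kW.
cold_wall = growth_cost(gas_cost_gbp=0.01, power_kw=0.3, hours=0.5)
hot_wall = growth_cost(gas_cost_gbp=0.05, power_kw=6.0, hours=3.0)
print(f"cold wall ~ GBP {cold_wall:.2f} per cm^2, hot wall ~ GBP {hot_wall:.2f} per cm^2")

def quality_factor(mobility_cm2_per_vs, device_area_m2):
    # Electronic quality factor Q = mobility x device area, in m^2 x cm^2/(V s).
    return mobility_cm2_per_vs * device_area_m2
```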
4,260.2
2015-06-05T00:00:00.000
[ "Physics" ]
An Improved Deep Fusion CNN for Image Recognition With the development of Deep Convolutional Neural Networks (DCNNs), the extracted features for image recognition tasks have shifted from low-level features to the high-level semantic features of DCNNs. Previous studies have shown that the deeper the network is, the more abstract the features are. However, the recognition ability of deep features would be limited by insufficient training samples. To address this problem, this paper derives an improved Deep Fusion Convolutional Neural Network (DF-Net) which can make full use of the differences and complementarities during network learning and enhance feature expression under the condition of limited datasets. Specifically, DF-Net organizes two identical subnets to extract features from the input image in parallel, and then a well-designed fusion module is introduced to the deep layer of DF-Net to fuse the subnet’s features in multi-scale. Thus, the more complex mappings are created and the more abundant and accurate fusion features can be extracted to improve recognition accuracy. Furthermore, a corresponding training strategy is also proposed to speed up the convergence and reduce the computation overhead of network training. Finally, DF-Nets based on the well-known ResNet, DenseNet and MobileNetV2 are evaluated on CIFAR100, Stanford Dogs, and UECFOOD-100. Theoretical analysis and experimental results strongly demonstrate that DF-Net enhances the performance of DCNNs and increases the accuracy of image recognition. Introduction DCNNs [Ma, Li, Xia et al. (2020)] have made breakthrough progress in computer vision and have become the standard method of many visual object recognition algorithms in recent years. DCNN is a progressive structure, whose shallow neurons can sense semantic content such as structure, texture and location. On this foundation, the deep convolutional layers continue to learn more advanced and distinguishable features for classification. The ability to automatically learn image features of DCNN has brought significant changes in the field of image recognition. The basic image recognition network LeNet-5 [LeCun, Boser, Denker et al. (1989)] was composed of three operations: convolution, pooling and non-linear mapping. This kind of combination structure is also the foundation of mainstream DCNNs. With the update of neural network algorithms and the advance of hardware, the width and depth of DCNNs have been continuously improved. In terms of network depth, He et al. [He, Zhang, Ren et al. (2016)] proposed a residual block structure that aimed at the training of networks. This research indicated that the residual networks were easier optimized and could obtain high accuracy by increasing network depth. Furthermore, expanding the network width has proved feasible. Inception module introduced by GoogLeNet [Szegedy, Liu, Jia et al. (2015)] implemented convolution and pooling operations simultaneously in a parallel manner to extract more potential information. The extension of different kernels means the fusion of different scale features, which effectively improves the expression of network. It is important to design and develop accurate algorithms and systems for more accuracy of image recognition. However, creating an innovative network is a difficult task which requires abundant knowledge of DCNN as well as the stronger support from hardware. 
In practical application, the network structure, dataset scale and hardware computing power should be fully considered so that a designed DCNN architecture meets the needs of network performance and computing overhead simultaneously. Therefore, this paper proposes a novel deep fusion by using the latest development of deep learning. First, the fusion module is used to integrate two identical subnets in the deep layer to sufficiently improve network performance. Secondly, the training strategy of only fine-tuning the deep convolution layer is formulated to reduce the computational overhead. Consequently, a more in-depth and broader network DF-Net is constructed to extract more abundant image features and optimize the recognition effect. Relative works Image recognition is a fundamental problem in computer vision. The key factor to recognition accuracy is the performance of the extracted image features. Prior research is largely based on hand-crafted features, such as Histogram of Oriented Gradient (HOG) [Sugiarto, Prakasa, Wardoyo et al. (2017)], Harris [Qin, Li, Xiang et al. (2019)], YCbCr ] and Scale-Invariant Feature Transform (SIFT) [Bharathidevi, Chennamsetty and Prasad (2017)]. However, these hand-crafted features have a strong reliance on expertise and task specificity. Despite of the feature selection and feature fusion [Qin, Sun, Xiang et al. (2009)] brought optimization for hand-crafted features, these may result in the cumbersome of design and the unsatisfactory of performance in practical applications. Recently, DCNN has performed well in computer vision. DCNN learns simple features from big data, and then gradually learns the more abstract deep features for image recognition. For instance, in the field of the classification of food ingredients, Pan et al. [Pan, Pouyanfar, Chen et al. (2017)] extracted rich and productive features from food ingredient images using DCNN, which improved the average accuracy of image classification. Qin et al. [Qin, Pan, Xiang et al. (2020)] took advantage of DCNNs to classify the biological images effectively. In the field of target recognition, Nasrabadi [Nasrabadi (2019)] designed a high-performance system based on DCNN to detect the targets in forward looking infrared (FLIR). The advantage of DCNN is that it can learn the optimal feature representation from the target dataset and better express the information of the original image. Luo et al. [Luo, Qin, Xiang et al. (2020)] and Liu et al. ] both used DCNNs to extract the high-level semantic features; Zhang et al. [Zhang, Wu, Feng et al. (2019)] used the attentional features extracted from the DCNN to localize the target accurately. These studies illustrated the effectiveness of DCNNs in the field of visual object recognition tasks. As early as 1989, the classic DCNN LeNet-5 was proposed by LeCun et al. [LeCun, Boser, Denker et al. (1989)] for recognizing handwritten digits and machine-printed characters. LeNet-5 used a three-layer sequence combination: convolution, pooling and non-linear mapping, which formed the basis of current DCNNs. With the updating of algorithm and the improvement of deep structure, the accuracy of image recognition based on DCNNs has been continuously rising. AlexNet was the first DCNN applied to large-scale image classification by Krizhevsky et al. [Krizhevsky, Sutskever and Hinton which directly connected each layer with the same feature-map size in a feed-forward fashion. 
For each layer, the input was composed of the feature maps of all previous layers, and the feature maps of the current layer also became the input of all subsequent layers in the meanwhile. This operation alleviated the issue of vanishing-gradient, strengthened feature propagation and greatly reduced the network number of parameters. It is quite formidable and complicated to design an innovative and excellent network. First, it is necessary to possess the relevant theoretical knowledge of DCNNs. In addition, it requires a lot of experimental accumulation and strong inspiration during the process of improving the basic model. Furthermore, creating a new model and training it on the large-scale datasets would consume a lot of time and computing resources, namely the powerful hardware support. Therefore, on the basis of the existing research, appropriately adjusting or improving the classical network is an effective measure for deep learning. Wang et al. [Wang, Qin, Xiang et al. (2019)] constructed a multi-classifier network based on DenseNet for CAPTCHA recognition. Amin-Naji et al. [Amin-Naji, Aghagolzadeh and Ezoji (2019)] constructed a new network with the support of the ensemble learning to decrease the overfitting on limited datasets. To learn more precise features, Hou et al. [Hou, Liu and Wang (2017)] provided a general framework DualNet to address image recognition by coordinating two parallel DCNNs. Zhang et al. [Zhang, Wang and Lu (2019)] proposed two novel lightweight networks that could obtain higher recognition precision of traffic sign images in a resource-limited setting. Xiang et al. [Xiang, Guo, Yu et al. (2020)] build a two-level cascaded DCNNs, which could automatically learn the steganographic features and improve the detection performance greatly. Pan et al. [Pan, Li, Pouyanfar et al. (2020)] proposed an up-to-date CBNet (Combinational Convolutional Neural Network) which combined two different DCNNs to extract complementary features for image classification. Different from previous studies, this study innovatively designs a fusion module inspired by the inception module. The fusion module is created to implement the combination of deep features from two parallel subnets. It can give full play to the value of the basic network and enhance the expression of features. The latest constructed DF-Net brings double expression space and more complex mappings, which makes learning and representation easier. Compared with the DualNet framework which adopted the addition and combination, this paper presents the fusion concept that can effectively improve the performance of image recognition by extracted high-level semantic features with the network. At the same time, a training strategy of only fine-tuning the deep convolutional layers of the network is formulated to ensure the computational efficiency and the speedy constringency of network. Experimental results show that the proposed DF-Net framework achieves higher recognition accuracy on CIFAR100 [Krizhevsky and Hinton (2009)], Stanford Dogs [Khosla, Jayadevaprakash, Yao et al. (2011)] and UECFOOD-101 [Matsuda, Hoashi and Yanai (2012)] datasets. Algorithm implementation 3.1 The deep fusion convolutional neural network (DF-Net) So far DCNNs have made great progress in the field of image recognition, and the deep feature is the most competitive visual features. The key factor in the success of DCNNs is the training samples which consist of numerous tagged data. 
Through the training on the samples of known correct answers, DCNNs can learn deep semantic information from a machine perspective. However, DCNNs will easily occur over-fitting when the training samples are limited. In other words, a model with excellent extensive ability is challenging to achieve directly from the limited training samples. Although the model can perfectly perform on training set during training, the performance of this model will be relatively weak for unknown data. One effective approach for this problem is to prepare more data, but the cost of collecting enough training samples in a reasonable amount of time is enormous. To solve the problem, this paper designs a novel DF-Net architecture (see Fig. 1), which fuses two parallel models in the deep layer. Our goal is to enable the new architecture to learn more complementary features under constrained training samples. As shown in Fig Subnets component Through the multiple convolution and pooling, DCNNs can map the raw data to a multilevel representation and abstraction. However, the complexity of a single network is limited, which would restrict the ability of learning complex function mapping, particularly for small-scale datasets. It is acknowledged that if the features extracted from subnets are the same, the concatenated features are linear, similar to a single network. Therefore, it is impossible to enhance the complexity of the network by concatenating two groups of identical features. Overall, instead of extending dimension on a single network, this work deploys two independent networks to catch more potential information from input images simultaneously in SC. Fusion component In our architecture, the subnets' features are extracted before the final global pooling layer of network. It means that features fusion implements in 3D feature space, not 1D feature space. In Fig. 1, after features extracted from SC, they will be further analyzed and processed in FC. Here double-stream features are bound to be different, which exactly is the premise of information complementarity. In order to make full use of the complementarity, based on the most advanced achievements of DCNNs, an innovative fusion strategy consisting of 1×1 Convolution, Channel Shuffle [Zhang, Zhou, Lin et al. (2018)] and fusion module is proposed. By doing so, the network complexity is effectively improved, and thus the ability of learning and simulating more complex types of data is strengthened. Furthermore, the corresponding training strategy is proposed to speed up the convergence and reduce the computation overhead of network training. The fusion strategy and training strategy will be outlined in Sections 3.2 and 3.3. Fusion strategy The fusion strategy is an important strategy in which the proposed DF-Net architecture is utilized to improve the non-linear mapping relations of DCNNs and heighten the network expression. The subnets of DF-Nets can be employed for most of the well-known DCNNs, such as ResNet, GoogLeNet, DenseNet, etc. These DCNNs have one thing in common: They all obtain the final 1D features by using global average pooling instead of the fully connected layer. Therefore,a well-designed fusion module is constructed to fuse 3D features of two subnets. Although the 3D features learned more abundant information from subnets than 1D features, the learning process will produce excessive parameters, which is unfavorable for fusion module to calculate and realize the transformation of spatial information. 
In view of this, one essential method is that the 3D features are preprocessed to reduce the network parameters to make feature abstraction effective in fusion module. Feature pre-processing Once a network is adopted as the subnet of DF-Net, its corresponding pre-trained network will be applied and fine-tuned on the target dataset. During various training, two of the top fine-tuned models are deployed in SC to obtain the output of the final feature maps (3D feature space). The information gathered by double-stream features has a high complexity, which may lead to dimensionality disaster. In some networks (e.g., AlexNet, VGGNet), researchers employed the fully connected layer to reduce feature dimension. However, it is not suitable for our architecture because the fully connected layer is used for the 1D features, not the 3D features. For other networks (e.g., ResNet, GoogLeNet), the 1×1 convolution kernel provides help for dimensionality reduction. Inspired by the scheme, the 1×1 convolution layer is employed to subnets' features to alleviate the dimensionality disaster. Specifically, the 1×1 convolution layer with a compression ratio of 50% is appended behind the subnets to avoid the dimensional explosion. As shown in FC, the feature maps of subnets are compressed from c-channel to (c/2)-channel by the 1x1 convolution layer. Then the two compression features are concatenated so that the channels of the concatenated features are consistent with a single network. It is worth noting that RELU activation is not used after 1×1 convolution as suggested by Chollet [Chollet (2017)]. Since the rudimentary way of combination makes the data distribution highly dispersed, it may cause one of the double-stream features to be eccentrically selected during training, which is not beneficial to the mutual learning and cooperation between subnets. Therefore, the channel shuffle technology is adopted to achieve an even data distribution. The channel shuffle is similar to ShuffleNet [Zhang, Zhou, Lin et al. (2018)]. In our module, the number of groups is set as 2 (corresponding to two subnets). As shown in FC of Fig. 1, the final feature maps are gained with a staggered concatenation of two subnets. In summary, the feature pre-processing includes a 1×1 convolution layer for the feature reduction and a channel shuffle operation for information flow from two subnets. Feature pre-processing can substantially decrease the computation overhead of the fusion module and enhance the information exchange and complementarity between subnets. Fusion module In the inference of DF-Net, two subnets with the same architecture capture the visual information from the input images respectively, but their weights of neurons are not shared. In other words, these two subnets are independent of each other before channel concatenation, so the acquired features are complementary for the two subnets. In order to make full use of this complementarity, the approach is absorbed from Inception-ResNet-v2 [Szegedy, Ioffe, Vanhoucke et al. (2017)], which offers effective insights to improve the network learning by non-linear mapping, the fusion of different scale features, skip connection, etc. Inception module is an excellent local topology that performs multiple convolutions and pooling operations for the input data in parallel, and concatenates all the output results together into a multi-channel feature map. 
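Before continuing with the fusion module, the feature pre-processing step summarized above (50% channel compression with a 1×1 convolution and no ReLU, concatenation of the two subnet feature maps, then a channel shuffle with two groups) can be sketched in tf.keras, the framework used in the experiments. This is an illustrative sketch rather than the authors' implementation: the layer names, the use_bias setting and the assumption of static feature-map shapes are choices made here.

```python
import tensorflow as tf
from tensorflow.keras import layers

def channel_shuffle(x, groups=2):
    """ShuffleNet-style shuffle interleaving the channels of the two subnets."""
    h, w, c = x.shape[1], x.shape[2], x.shape[3]
    x = tf.reshape(x, [-1, h, w, groups, c // groups])
    x = tf.transpose(x, [0, 1, 2, 4, 3])
    return tf.reshape(x, [-1, h, w, c])

def preprocess_and_concat(feat_a, feat_b):
    """Compress each subnet's 3D feature map to half its channels with a 1x1
    convolution (no ReLU, as suggested in the text), concatenate the two
    compressed maps, and shuffle the channels with groups=2."""
    c = feat_a.shape[-1]
    a = layers.Conv2D(c // 2, 1, use_bias=False, name="compress_a")(feat_a)
    b = layers.Conv2D(c // 2, 1, use_bias=False, name="compress_b")(feat_b)
    x = layers.Concatenate(axis=-1)([a, b])
    return layers.Lambda(channel_shuffle)(x)
```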
As for Inception-ResNet-v2, the residual connection is introduced into the inception module to improve recognition performance as much as possible. Enlightened by the inception module, a novel inception module is created as the fusion module that the particular design is favorable to analyze the concatenated features. In theory, the fusion module can extract more informative features from subnets for image recognition. The structure of the fusion module is described in Fig. 2. Fig. 2, an additional branch (dotted box) is added to the original Inception-ResNet-C module. After feature pre-processing, the concatenated features are used as input of the fusion module. Through the calculation of three branches on the right, the different scale features are generated and concatenated to form a 704-channel feature map. To keep the residual addition operating normal, the 704-channel feature map is followed by a 1×1 convolution layer, which is used to match the channels of the initial input by dimensionality expansion. In the residual branch, the residuals are added to the final output and a scaling factor is set as 0.3. As for the case that the fusion module is applied to network fusion, the advantages include as followed: (1) The usage of small-scale kernel size can learn and perceive highlevel semantic features with more details; (2) The multi-branch design realizes the fusion of different scale features, which can enhance the adaptability to different scales and improve the expression ability of the network; (3) The additional branch composed of a 3×3 average pooling and a 1×1 convolution is conducive to increasing the feature's diversity. More importantly, the features extracted by average pooling operations inherit the ability of classification from subnets; (4) The residual connection maps the module input to the output, which may guide the convolution and pooling operations from other branches to ensure the classification performance. In conclusion, the fusion module provides powerful abstraction abilities for reintegrating the concatenated features, as well as the fusion features are more generalizable compared with other features. Training strategy Logically, the DF-Net is a quite large network, which comprises 4 modules, including the two subnets (remove the global average pooling and classifier), the feature pre-processing module, the fusion module and the newly appended classifier. Considering the training time and GPU memory, this paper drafts a staged training scheme to update global parameters. Figure 3: The training process of DF-Net The process of training is usually called fine-tuning that the pre-trained parameters are updated on the target dataset and adjusted to the target dataset. In this work, once a network is adopted as the subnet of DF-Net, this pre-trained network by ImageNet will be utilized and firstly fine-tuned on the target dataset. During various tests, two of the top fine-tuned models are deployed to the DF-Net as subnets, and the fine-tuned weights are reused to initialize DF-Net. Accordingly, just fine-tuning the fusion module and the newly appended classifier can achieve a prediction model. According to the training strategy, the computation overhead of network training can be reduced substantially. It also makes the DF-Net convergence faster during training. Moreover, since the finetuned weights of the subnets are preserved, the classification performance of DF-Net is ensured effectively. Fig. 
3 shows the training process of DF-Net, which is orderly divided into two stages, i.e., Base-Net training and DF-Net training. Training with a large amount of tagged data is a key factor of deep learning. In practice, it is hard to achieve high performance on limited training samples even if how excellent the network is. Nevertheless, the fine-tuning technology can solve this problem well. At the first stage shown in Fig. 3(a), the Base-Net (such as ResNet50) is fine-tuned on the target dataset with the pre-trained network on ImageNet. The main purpose of this stage is to adapt the parameters to the target dataset and improve the recognition performance of Base-Nets as much as possible. The next step is employing the fine-tuned Base-Nets as subnets of the DF-Net. Then, at the second stage ( Fig. 3(b)), after freezing all the parameters of subnets, the fusion module and the newly appended classifier are only fine-tuned to make the parameters adapt to the subnets and the target dataset. Correspondingly, the number of total parameters and trainable parameters is contrasted between Base-Nets and DF-Nets in Tab. 1. As shown in Tab. 1, although the number of total parameters of DF-ResNet50 and DF-DenseNet121 is about 2-3 times more than the corresponding Base-Nets, the number of total trainable parameters is only equal to 30-40% of the Base-Net. That means the training of these DF-Nets needn't additional hardware requirements. Besides, the number of trainable parameters of DF-Net constructed by the lightweight network MobileNetV2 is equivalent to 165% of the Base-Net, even higher than the total parameters of the Base-MobileNetV2. Essentially, the substantial growth of parameters implies that the DF-Net gains the larger capacity of expression space and the more complex mappings than Base-Net. Subsequent experiments show that DF-Nets achieve a markable accuracy improvement for the lightweight network. Experimental analysis In this section, the proposed DF-Nets which are built based on ResNet50 (DF-ResNet50), DenseNet121 (DF-DenseNet121), MobileNetV2 (DF-MobileNetV2) are tested on multiple widely-used datasets, including CIFAR-100, Stanford Dogs and UEC FOOD-100. Firstly, the model (e.g., ResNet50) trained on ImageNet is loaded and fine-tuned on the target dataset several times. Secondly, two models with the best performance are selected as the subnets of DF-Net (e.g., DF-ResNet50). Finally, the fusion module and classifier of the DF-Net are fine-tuned on target dataset so that the network parameters adapt to subnets and datasets. Most importantly, to measure the effectiveness and stability of novel DF-Nets architecture, this work uses the average accuracy of the DF-Net compared with the highest accuracy of the corresponding subnets. Although these classic DCNNs are perfect, the fine-tuning process is very complicated and time-consuming. In order to train all the networks well, a serial of methods is utilized for the optimizer, learning rate, parameters, etc. Here the Stochastic Gradient Descent (SGD) is used for optimization. The mini-batch size is set to 16 to balance the memory utilization and capacity. All the networks are fine-tuned 30 epochs with an initial learning rate of 0.003. In addition, to achieve a better convergence, the learning rate is decayed with a rate of 0.94 per epoch. Keras is used and it is a popular deep learning tool which provides many advanced DCNNs with pre-trained weights and supports fast network design and experimentation. 
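The second training stage described above can be expressed as a short tf.keras sketch: freeze both subnets, then fine-tune only the fusion module and the new classifier with SGD, a mini-batch size of 16, 30 epochs, and a learning rate of 0.003 decayed by a factor of 0.94 per epoch, as quoted in the text. The model and data arguments, the subnet name prefixes and the loss choice are placeholders, not details from the paper; array inputs are assumed.

```python
from tensorflow.keras import callbacks, optimizers

def finetune_fusion_stage(df_net, x_train, y_train, x_val, y_val,
                          frozen_prefixes=("subnet_a", "subnet_b")):
    """Stage 2: freeze the fine-tuned subnets, train fusion module + classifier."""
    for layer in df_net.layers:
        if layer.name.startswith(frozen_prefixes):
            layer.trainable = False

    def lr_schedule(epoch, lr):
        # Initial learning rate 0.003, decayed by a factor of 0.94 each epoch.
        return 0.003 * (0.94 ** epoch)

    df_net.compile(optimizer=optimizers.SGD(learning_rate=0.003),
                   loss="categorical_crossentropy", metrics=["accuracy"])
    return df_net.fit(x_train, y_train, validation_data=(x_val, y_val),
                      epochs=30, batch_size=16,
                      callbacks=[callbacks.LearningRateScheduler(lr_schedule)])
```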
The hardware is the NVIDIA RTX2080Ti GPU with 11 GB of memory, 4352 CUDA cores and 544 Tensor cores. CIFAR-100 In order to evaluate the effectiveness of the DF-Net framework, first experiment is operated on the publicly available and challenging CIFAR-100 dataset, which contains 60,000 32×32 color images for 100 categories. Commonly, 50,000 images are used for training and 10,000 images for test. During the training, images are resized to 224×224 as the network input. Tab. 2 shows the comparison of accuracy on CIFAR-100, where the (∆) represents the improvement of the DF-Nets compared with the corresponding Base-Nets. Tab. 2 compares the accuracy between DF-Nets and the corresponding Base-Nets on the CIFAR-100 dataset. Obviously, as can be seen from Tab. 2, DF-Nets have better accuracy than Base-Nets. It can be concluded that the DF-Net can effectively improve the complexity of network and learn a model with fine generalization. Specifically, the DF-Net based on MobileNetV2 (DF-MobileNetV2) achieves the highest promotion, and the accuracy reaches 82.27%, which is 2.36% higher than its Base-Net. Referring to Tab. 1, the results show that the greater the parameters increase by the deep fusion, the more remarkable the performance improvement will be. As shown in Tab. 3, the DF-ResNet50 is compared to other state-of-the-art methods with ResNet architecture. From Tab. 3, the DualNet based on ResNet56 (DNR56) [Hou, Liu and Wang (2017)] obtains the accuracy of 75.57%, which is 2.76% higher than its basic network. With the improved residual network, RoR-3-WRN58-4 [Zhang, Sun, Han et al. (2017)] and WRN [Zagoruyko and Komodakis (2016)] get the accuracy of 80.27% and 81.15% respectively. In this paper, the DF-ResNet50 achieves 84.57% accuracy, which improves the performance of Base-ResNet50 by more than 1.26%, and much higher accuracy (almost 3.42%) than WRN. The growth rate of the DF-Net is slightly lower than Dual-Net's. The reasons are that both the subnets are optimized perfectly, and the highest accuracy of subnets is set as baseline in our experiments. For the DF-Net, only the fusion module and the final classifier need be trained during the fine-tuning so that the DF-Net is approximately equal to subnet in time cost. The experimental results demonstrate the effectiveness of the proposed DF-Net framework which improves network performance and has a very high image recognition accuracy compared to other existing methods. Stanford Dogs In this section, the DF-Net is further analyzed on a Fine-Grained image Visual Classification (FGVC) Stanford Dogs dataset, which consists of 20580 images and 120 categories of dogs. The experiment applies 100 images for each category for training and the rest of the dataset for test. The Stanford Dogs dataset has an extremely high similarity that can be used for the FGVC task. Besides, compared to the image size of CIFAR-100 (32×32), the Stanford Dogs is proper to DCNNs' training. In practice, the complexity of a single network is limited, which would affect the ability of learning complex function mapping. However, the proposed DF-Net can extract more abundant and accurate fusion features from two parallel subnets for image recognition even with small-scale datasets. Tab. 4 shows the accuracy comparison between Base-Nets and DF-Nets on the Stanford Dogs dataset. All the DF-Nets beat the corresponding Base-Nets and gain better accuracy. 
The Base-DenseNet121 obtains the highest accuracy of 78.21% among all the Base-Nets, but DF-DenseNet121 gets an average accuracy of 79.34%, which achieves an improvement of 1.13%. Additionally, the DF-Nets increase the accuracy of 2.08% and 1.51% comparing to Base-MobileNetV2 and Base-ResNet50, respectively. In Tab. 5, the DF-ResNet50 is compared with other techniques which uses ResNet50 as basic network. The PC-ResNet50 [Dubey, Gupta, Guo et al. (2018)] obtains an accuracy of 73.35%, which is better accuracy of 3.43% than ResNet50. In this paper, the DF-ResNet50 achieves 77.63% accuracy, which improves the almost 1.51% accuracy of Base-ResNet50, and its accuracy is 4.28% higher than PC-ResNet50. In summary, FGVC problems benefit from the DF-Net framework, and the DF-Net shows more prominent advantages for the lightweight network. UECFOOD-100 In this section, another dataset is used that is FGVC dataset (with bounding box), i.e., UECFOOD-100, which includes 100 food categories with 8643 images. Each image is annotated with a label and a bounding box that indicates the food location. In the experiments, the raw images are cropped from the given bounding boxes. The dataset is divided into 5 folds. The 3 folds are used for training and the rest for test. Unlike other datasets, this dataset is pre-processed before training. The input images are cropped with the provided object bounding boxes so that the dataset has more significant interclass similarity and intra-class variation than the original data. Even so, the proposed DF-Net still makes remarkable advancement. Here, the average accuracy of DF-Nets is above 1.1% than their subnets. Among all the DF-Nets, DF-DenseNet121 is best and beats other networks, and its classification accuracy reaches 85.35%. The experimental results strongly demonstrate that DF-Net achieves excellent performance for highgranularity classification tasks. The performance comparison between DF-Nets and the existing techniques on UECFOOD-100 is shown in Tab. 7. All the evaluated methods use the same dividing and bounding box during experiments. Liu et al. [Liu, Cao, Luo et al. (2017)] proposed a practical deep learning-based food recognition system and reported the accuracy of 77.5% on UECFOOD-100. Yanai et al. [Yanai and Kawano (2015)] fine-tuned DCNN which was pre-trained with a large food-related dataset and achieved the classification accuracy of 78.77%. Using DF-Net with DenseNet121 as the Base-Net, our framework obtains best accuracy on this food dataset, improving the accuracy over the published methods by 6.58% in Tab. 7. For computing time, DF-DenseNet121 takes 0.03 seconds per image for training using our proposed architecture. When the model is applied to image recognition, it only takes 0.01 seconds per image on average. As a comparison, the training time for Liu's method is usually around 2~3 seconds. The experimental results indicate that the DF-Net architecture is a significant improvement both computing time and recognition accuracy comparing to other existing methods. In summary, the above experimental results illustrate the superiority of the novel DF-Net architecture by exhibiting extensive improvement for image recognition compared to other published technologies. Most important, the DF-Net model improves the accuracy for image recognition without additional hardware overhead. 
Conclusion In this paper, a new automatic classification architecture, the Deep Fusion Convolutional Neural Network (DF-Net), is proposed; the model generalizes well for image recognition on limited datasets and requires no additional hardware overhead. Specifically, DF-Net first organizes two identical subnets to capture more of the potential information in the input images in parallel. The extracted subnet features are then pre-processed with a 1×1 convolution kernel to reduce feature redundancy and with a channel shuffle operation to promote information flow between the subnets. Next, the fusion module is introduced at the end of the subnets to reintegrate the subnet features and generate richer and more accurate fusion features for image recognition. Furthermore, the corresponding training strategy is proposed to speed up convergence and reduce the computational overhead of network training. Finally, DF-Nets constructed on the well-known ResNet50, DenseNet121 and MobileNetV2 are evaluated on the public datasets CIFAR-100, Stanford Dogs and UECFOOD-100 using accuracy as the measure. Theoretical analysis and experimental results strongly demonstrate that DF-Nets achieve better recognition performance than existing published results. Additionally, the DF-Net framework is beneficial for fine-grained visual classification tasks on small and medium datasets. Future studies will
6,829.2
2020-01-01T00:00:00.000
[ "Computer Science" ]
Exploring Compound Promiscuity Patterns and Multi-Target Activity Spaces Compound promiscuity is rationalized as the specific interaction of a small molecule with multiple biological targets (as opposed to non-specific binding events) and represents the molecular basis of polypharmacology, an emerging theme in drug discovery and chemical biology. This concise review focuses on recent studies that have provided a detailed picture of the degree of promiscuity among different categories of small molecules. In addition, an exemplary computational approach is discussed that is designed to navigate multi-target activity spaces populated with various compounds. Introduction Over the past decade it has been increasingly recognized that many pharmaceutically relevant compounds are promiscuous in nature [1-3] and that many drugs elicit their therapeutic effects (and undesired side effects) through polypharmacology [4,5]. For a number of drugs that were originally considered to be target-selective or specific, high degrees of promiscuity and ensuing polypharmacology have been shown to be responsible for their efficacy, with protein kinase inhibitors applied in oncology being a prime example [6]. In addition, polypharmacology also provides the basis for drug repurposing [7-9], another current topic of high interest in pharmaceutical research. Given that compound promiscuity represents the molecular basis of polypharmacological effects, a detailed assessment of the degree of promiscuity among compounds at different stages of the drug development pathway is of considerable interest. The unprecedented recent growth of compound activity data in the public domain has made it possible to approach this question through data mining. This is illustrated in Figure 1, which shows a drug-target network generated on the basis of known target annotations of approved drugs, reflecting a generally high degree of drug promiscuity. In promiscuity analysis, most efforts have thus far concentrated on elucidating the promiscuous nature of drugs, often by database analyses combined with computational predictions. Recent estimates have been that a drug might on average interact with ~3-6 targets and that 50% of all drugs might exhibit activity against more than five targets [5,10]. Results of data mining efforts are generally affected by data incompleteness [10], i.e., not all compounds have been tested against all targets (and probably never will be). However, given the increasingly large amounts of compound activity data that are becoming available at present (much more than one could have imagined just a few years ago), reliable trends can already be detected and some meaningful conclusions drawn from them [11]. Herein, we review recent insights into the promiscuity of screening hits, bioactive compounds, and drugs obtained through systematic mining of compound activity data. All currently investigated aspects of promiscuity are discussed. In addition, we introduce a computational and graphical framework for the analysis of multi-target activity spaces and compound promiscuity patterns. The interested reader is also referred to other recent reviews of compound promiscuity [11,12]. Activity data of compounds from different sources In order to comprehensively assess compound promiscuity, various types of compounds at different pharmaceutical development stages should be considered. A large number of relevant compounds and associated activity data can currently be collected from several public repositories. 
The PubChem BioAssay database [13] contains bioactivity information from confirmatory high-throughput screens, including confirmed active and inactive compounds. To ensure high data confidence, a pre-requisite for meaningful data mining efforts [11], a total of 1085 confirmatory assays with reported activity against a single protein target and dose-response data were extracted from PubChem in January 2013 [14]. These assays involved 437,288 compounds and 439 targets. A subset of 140,112 compounds was confirmed to be active in one or more assays, representing screening hits at the early stages of drug discovery. More than 77% of these hits were tested in more than 50 assays, hence providing a sound basis for promiscuity analysis [14], as discussed below. The rapidly growing ChEMBL database [15] has become a major public repository of compound activity data obtained from medicinal chemistry sources. Currently, ChEMBL release 17 contains 1,324,941 distinct compounds with 2,077,491 activity annotations. It should be noted that the original investigations reviewed herein were carried out over time on different versions of ChEMBL (the versions were specified in each case). To obtain high-confidence activity data from ChEMBL, only compounds with direct interaction against human targets at the highest confidence level were extracted. Two types of potency measurements were separately considered, equilibrium constants (Ki) and assay-dependent IC50 values. Compounds with approximate potency annotations (i.e., ">", "<", "~") were excluded. From ChEMBL release 14, 36,542 compounds active against 579 targets were collected that yielded 62,913 explicit Ki values, comprising the Ki subset. In the IC50 subset, there were 80,522 compounds active against 29 targets with 4,092 IC50 measurements [16]. These bioactive molecules, especially those from the Ki subset, were predominantly taken from medicinal chemistry literature and patent sources and hence mostly represented compounds at the hit-to-lead and lead optimization stages. The DrugBank database [17] is a public resource that contains drug entries, including approved small molecule drugs, approved biologicals, nutraceuticals, and experimental drugs (including compounds in clinical trials), with associated drug target information. For promiscuity analysis, 1274 approved small molecule drugs and 493 experimental drugs with available structures were assembled from DrugBank 3.0. These approved drugs and drug candidates represented compounds at the late drug development stages. Compound promiscuity rates From these different data repositories, promiscuous compounds were extracted and promiscuity rates calculated as the average number of targets compounds were active against. In all cases reported herein, promiscuity rates were determined for compounds active against multiple targets, i.e., excluding compounds with reported single-target activity. Taking compounds with single-target activity into account would have reduced average promiscuity rates. From the 140,112 PubChem screening hits, 71,303 compounds (~50.9%) were identified to be active against two or more targets [14]. In addition, for the Ki and IC50 subsets of ChEMBL version 14, 13,842 (~37.9%) and 19,898 compounds (~24.7%) were identified to be promiscuous, respectively [16]. These compounds were active against a total of 459 and 867 human targets in the Ki and IC50 subsets, respectively. Furthermore, compound overlap between these two subsets was established on the basis of database IDs. 
There were 1025 promiscuous compounds conserved in both subsets. The remaining 12,817 and 18,873 promiscuous compounds were exclusively found in the Ki and IC50 subsets, respectively. In general, the IC50 subset contained > 6000 more promiscuous compounds than the Ki subset. Furthermore, 1072 approved (~84.1%) and 3 experimental (~23.6%) drugs from DrugBank had multiple target annotations. For compounds from different sources, promiscuity rates are reported in Figure 2a. On average, promiscuous compounds from PubChem confirmatory assays were active against 3.7 targets. Bioactive compounds from the Ki and IC50 subsets of ChEMBL interacted with 2.9 and 2.7 targets, respectively. [Figure 1 caption: Shown is an approved drug-target bipartite network. Red nodes represent approved drugs from DrugBank 3.0 and blue nodes drug targets. Edges between red and blue nodes indicate known drug-target interactions. In total, there are 3776 drug-target interactions between 1226 approved drugs and 881 targets. Similar yet distinct drug-based target networks have earlier been introduced by Yildirim et al. [29]. The insert reports the distribution of the degree of approved drug nodes, indicating the number of targets they were active against.] Approved and experimental drugs displayed the highest degree of promiscuity, i.e., they had 6.9 and 4.7 targets, respectively [12]. Furthermore, from the distribution of promiscuity rates, the probability of compounds to be active against at least two, or more than five, targets was calculated [12]. The results are reported in Figure 2b. For screening hits, the probability to act against two or more targets was ~50%. However, the probability of activity against more than five targets was reduced to 7.6%. For compounds from the Ki and IC50 subsets of ChEMBL 14, the probability to interact with two or more targets was ~38% and ~25%, respectively. However, the probability of activity against more than five targets was reduced to only ~1% for both subsets. For approved and experimental drugs, the probability of activity against two or more targets was ~84% and ~24%, and the corresponding probability of activity against more than five targets ~37% and ~3%, respectively [12]. Taken together, the results indicated that the degree of promiscuity of bioactive compounds from screening or medicinal chemistry sources was considerably lower than for drugs. Thus, along the drug development pathway, a notable increase in promiscuity was observed from screening hits and optimized compounds over drug candidates to approved drugs, as illustrated in Figure 2c. These findings raise questions for further analysis. For example, do these observed differences mean that promiscuous drug candidates are preferentially selected during clinical trials? Or are target activities of drugs or drug candidates much more thoroughly assessed than those of other bioactive compounds? These alternative possibilities cannot be distinguished at present. It is evident, however, that bioactive compounds from various sources including high-throughput screens have a much lower degree of promiscuity than drugs on the basis of currently available data. 
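As a minimal sketch of how such promiscuity rates and probabilities can be derived from an activity table, the following pandas snippet assumes a long-format file with one row per compound-target annotation; the file name and column names are illustrative only, not those of the original studies.

```python
# Sketch: promiscuity rates from a long-format compound-target activity table.
# Assumed columns: "compound_id", "target_id"; input file name is hypothetical.
import pandas as pd

activity = pd.read_csv("activity_annotations.csv")

# Number of distinct targets annotated per compound.
targets_per_cpd = activity.groupby("compound_id")["target_id"].nunique()

# Promiscuity rate: mean target count over multi-target compounds only,
# i.e. excluding compounds with reported single-target activity.
promiscuous = targets_per_cpd[targets_per_cpd >= 2]
promiscuity_rate = promiscuous.mean()

# Probabilities of activity against at least two, or more than five, targets.
p_at_least_two = (targets_per_cpd >= 2).mean()
p_more_than_five = (targets_per_cpd > 5).mean()

print(f"promiscuity rate: {promiscuity_rate:.1f} targets per promiscuous compound")
print(f"P(>=2 targets) = {p_at_least_two:.1%}, P(>5 targets) = {p_more_than_five:.1%}")
```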
Promiscuity across different target families Compounds active against prominent therapeutic target families such as G-protein coupled receptors (GPCRs) or protein kinases have previously been reported to frequently exhibit high levels of promiscuity [11,18]. Recently, compounds active against targets belonging to five different families were assembled from ChEMBL 14, including ligands of class A GPCRs, protein kinases, ion channels, proteases, and nuclear hormone receptors [12]. Compounds active against individual target families were further separated into Ki and IC50 value-based subsets. Average promiscuity rates of compounds active against multiple targets within a family were determined, as reported in Figure 3. [Figure 3 caption: Average promiscuity rates are reported for all compounds active against multiple targets within a given family for the Ki and IC50 subsets from ChEMBL 14, respectively. Dashed lines indicate global promiscuity rates determined for the Ki (i.e., on average 2.9 targets per compound) or IC50 subset (i.e., 2.7). For each target family, the number of targets and available active compounds is reported.] For the Ki-based subset, only compounds active against multiple ion channels displayed above-average promiscuity, with activity against 3.9 different channels (Figure 3a). By contrast, degrees of promiscuity for compounds active against the other four families were comparable to the global promiscuity rate determined for the entire Ki subset of ChEMBL 14, as discussed above. For the IC50-based subset, a different distribution of promiscuity rates was observed across these five target families. Compounds active against the class A GPCR family and proteases showed a slightly higher than average degree of promiscuity (Figure 3b). However, the promiscuity rate of ion channel ligands was in this case lower than the global rate. Taken together, the results revealed no significant and consistent increase in promiscuity for compounds active against prominent target families relative to average promiscuity rates for bioactive compounds [12]. Promiscuity vs. molecular weight Molecular complexity and size have frequently been implicated in promiscuity [19,20]. Small compounds were found to display a general tendency to be more promiscuous than larger, chemically more complex molecules. A possible explanation for these findings is that small compounds and molecular fragments are easier to accommodate in differently shaped binding sites than larger ones. The relationship between compound promiscuity and molecular weight (MW) has also been systematically investigated through data mining [12]. Seven subsets of bioactive compounds with increasing MW were collected from ChEMBL 14. These compound subsets were also separated into Ki and IC50 value-based subsets. Figure 4 reports the compound composition of each MW range-based subset and the average promiscuity rates. For compounds with Ki values (Figure 4a), the subset of smallest compounds with MW of at most 200 Da displayed the highest degree of promiscuity, with on average 4.1 targets per compound. Compounds with MW in the range of 200 to 300 Da had only slightly above-average promiscuity. For compounds with MW of more than 300 Da, the degree of promiscuity was comparable to the global promiscuity rate for bioactive compounds. 
For compounds from the IC50 subset, there was even less variation over different MW ranges and all rates were close to the average promiscuity for IC50 data (Figure 4b). Therefore, with the exception of the smallest compounds with available Ki data, the degree of promiscuity did not notably depend on molecular size [12]. Activity measurement dependence On the basis of global promiscuity rates determined for compounds from the Ki and IC50 subsets of ChEMBL, there was no significant difference between the degrees of promiscuity when these two different types of activity measurements were considered. The promiscuity rate was only slightly higher for compounds in the Ki than the IC50 subset (Figure 2a). However, when the original release of the ChEMBL database was compared with subsequent releases of ChEMBL up to version 13, it was also observed that the number of promiscuous compounds significantly increased over time. This increase was largely due to compounds with assay-dependent IC50 measurements, rather than equilibrium constants (Ki) [21]. To further analyze this relative increase, compound-based target relationships were determined and visualized in network representations for two subsets of promiscuous compounds with available Ki (13,842 compounds) or IC50 measurements (19,898). The networks are shown in Figure 5. In each network, nodes represent targets that are connected by an edge if the two targets share at least five compounds. In the Ki subset, a total of 1254 target pairs were formed that involved 287 targets. 789 pairs (~63%) were formed by targets from the same family (intra-family pairs) and 465 pairs by targets from different families (inter-family pairs). The majority of the inter-family pairs formed a central network component (Figure 5a). The target network of the IC50 subset was clearly dominated by a single large component involving targets from many different families (Figure 5b). In this case, 24 target pairs were formed involving 559 targets and ~46% of the pairs were intra-family pairs. However, more than half of the pairs (~54%) were formed across different target families. Thus, IC50 data yielded a significant increase in compound promiscuity across different target families. [Figure 6 caption: Nodes represent compounds and edges indicate promiscuity cliffs. Nodes are colored according to the number of target activities using a continuous color spectrum from black (i.e., 0; inactive compounds) to white (i.e., 97; highest degree of promiscuity in the data set). Two representative promiscuity cliffs involving four compounds are shown (right). Structural differences are highlighted in red. For each compound, the number of targets it was active against under microarray conditions is reported.] Structure-promiscuity relationships Compound profiling data sets are obtained by screening compound libraries against arrays of targets. Currently, there are only a few profiling data sets available in the public domain (most profiling data are produced in the pharmaceutical industry and kept proprietary). For example, Clemons and colleagues generated a small molecule microarray data set [22] using a total of 15,252 compounds assembled from diverse chemical sources including compounds from medicinal chemistry vendors, natural products, and compounds from diversity-oriented synthesis. These compounds were systematically screened against 100 sequence-unrelated proteins, i.e., a diverse spectrum of targets [22]. 
The experimentally determined activity data were then reported as a complete binary (active/inactive) matrix. Such data sets provide an opportunity to systematically explore structure-promiscuity relationships and structural determinants of promiscuity. For compounds comprising the microarray data set, the distribution of target annotations is reported in Figure 6a. The majority of compounds (i.e., 11,819; ~77.5%) were inactive. The remaining compounds were active against 1-97 targets. However, only 236 compounds (~1.5%) had activity against more than 10 targets. Therefore, highly promiscuous compounds were also rarely observed in the microarray experiment. For analyzing structure-promiscuity relationships, the matched molecular pair (MMP) formalism was applied [23]. An MMP represents a pair of compounds that only differ at a single site by the exchange of two substructures, i.e., a chemical transformation. The application of transformation size restrictions typically limits substructure exchanges to chemically meaningful replacements [24]. From the entire microarray set, a total of 30,954 transformation size-restricted MMPs (i.e., ~0.03% of all possible compound pairs) were obtained. Only a small subset of 26 MMPs was formed by compounds with large differences in the number of target annotations (50 or more targets) [25]. These MMPs represented small structural modifications leading to large-magnitude changes in promiscuity under the experimental conditions of the microarray experiment. The compound pairs were thus termed "promiscuity cliffs" [25] and are organized in a network representation in Figure 6b. In the network, nodes represent compounds and edges indicate the formation of promiscuity cliffs. The topology of the network reveals a number of "promiscuity hubs", i.e., compounds involved in multiple promiscuity cliffs. Two representative promiscuity cliffs are also shown in Figure 6b. However, no chemical transformations or individual structural fragments were identified in the microarray data set that consistently introduced promiscuity cliffs or were exclusively present in highly promiscuous compounds. Large-magnitude changes in promiscuity might at least in part be triggered by the experimental conditions of the microarray analysis. Nevertheless, the identified promiscuity cliffs provide interesting opportunities for follow-up investigations to explore potential structural determinants of compound promiscuity. 
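A schematic sketch of the cliff detection step is given below. It assumes that MMPs have already been computed with an external fragmentation tool and that per-compound target counts are available; the data structures and the 50-target threshold follow the description above, while everything else is an assumption rather than the original implementation.

```python
# Sketch: flag "promiscuity cliffs" among precomputed matched molecular pairs (MMPs).
# Assumed inputs: mmp_pairs is a list of (compound_a, compound_b) tuples produced by an
# MMP calculation; target_counts maps each compound to its number of target annotations.
from collections import defaultdict

def find_promiscuity_cliffs(mmp_pairs, target_counts, min_delta=50):
    """Return MMPs whose members differ by at least min_delta target annotations."""
    cliffs = []
    for a, b in mmp_pairs:
        delta = abs(target_counts.get(a, 0) - target_counts.get(b, 0))
        if delta >= min_delta:
            cliffs.append((a, b, delta))
    return cliffs

def promiscuity_hubs(cliffs):
    """Compounds involved in multiple cliffs ("hubs" in the network view)."""
    degree = defaultdict(int)
    for a, b, _ in cliffs:
        degree[a] += 1
        degree[b] += 1
    return {cpd: d for cpd, d in degree.items() if d > 1}
```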
Graphical mining of multi-target activity spaces The analysis of multi-target spaces is a complex task but of high interest for compound design and development. For example, one would like to rationalize promiscuity patterns in compound sets, explore structure-promiscuity relationships, and identify key compounds for further chemical exploration. Deconvoluting multi-target activity spaces also helps to investigate relationships between selective and promiscuous compounds. In the following, we introduce a computational methodology designed for mining multi-target activity spaces and visualizing promiscuity patterns, with a special focus on closely related compound series (currently, there are no other comparable approaches available). A data structure termed Compound Series Matrix (CSM) [26] was designed on the basis of the MMP formalism [23] to organize compound series with closely related core structures in multi-target space and elucidate promiscuity patterns. The CSM represents a methodological extension of the SAR matrix data structure previously introduced by us to monitor potency distributions of analogs active against a single target [27]. An analog series consists of a set of compounds that share the same core structure and differ by defined chemical substitutions (R-groups). CSMs utilize the same structural organization scheme as SAR matrices but take multi-target activities into account. Figure 7 illustrates the generation of a CSM. At the top, three analog series A, B, and C are shown that result from the application of a two-step MMP generation procedure following the fragmentation and indexing method of Hussain and Rea [23]. In the first step, MMPs are generated from the original compounds. In the second step, MMPs are computed from the core fragments obtained in the first step. Thus, the second step produces MMPs with core structures that are only distinguished by a structural change at a single site. Therefore, the resulting analog series A, B, and C have structurally related cores and overlapping sets of substituents. The two-step fragmentation and MMP generation scheme is an essential feature of the methodology (further fragmentation steps cannot be applied to capture close and chemically meaningful structural relationships). The matrix is then filled with the core and substituent combinations, as illustrated at the bottom of Figure 7. Each related core structure represents a row and each substituent a column. Thus, compounds in a column share the same substituent and compounds in a row the same core structure. Each cell in the CSM represents a unique compound. Combinations of core structures and R-groups that are not present in the compound data set yield virtual matrix compounds from which candidates for synthesis can be selected. A color code is introduced to account for multi-target activities. If a compound is present in the data set, it is colored using a spectrum from light blue to dark blue depending on the number of targets the compound is active against. Thus, CSMs establish structural relationships between compounds in multi-target activity space, capture promiscuity patterns in structurally related series, and provide hypotheses for compound design. [Figure 7 caption: Compound series matrix. Three compound series (A, B and C) with related core structures resulting from MMP calculations are shown at the top. Each series contains three compounds that share a core structure (bottom left) and differ by small substituents. Structural differences between core structures are highlighted in red. The compound series matrix (CSM) is generated by combining structurally analogous series. Rows represent series and columns substituents. Each combination of a given core and substituent defines a real (filled cell) or virtual (empty cell) compound. Cells are colored according to the number of targets compounds are active against, hence reflecting the degree of compound promiscuity.] 
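A toy version of the CSM assembly step is sketched below. It assumes that the two-step MMP procedure has already produced, for each compound, a related core and a substituent, and that the number of target annotations per compound is known; the field names and the pandas-based layout are our own choices, not those of the original implementation.

```python
# Sketch: assemble a Compound Series Matrix (CSM) from analog series with related cores.
# Assumed inputs: analogs is a list of (core, substituent, compound_id) records;
# n_targets maps compound_id to its number of target annotations (0 if inactive).
import pandas as pd

def build_csm(analogs, n_targets):
    df = pd.DataFrame(analogs, columns=["core", "substituent", "compound_id"])
    df["targets"] = df["compound_id"].map(n_targets)
    # Rows = related core structures, columns = substituents; empty cells correspond
    # to virtual compounds (core/substituent combinations absent from the data set).
    csm = df.pivot_table(index="core", columns="substituent",
                         values="targets", aggfunc="max")
    return csm

# Candidate core/substituent combinations for design can be read off the empty cells:
# virtual_cells = build_csm(analogs, n_targets).isna()
```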
To evaluate the CSM methodology, compounds with reported Ki values of 10 μM or better (Ki ≤ 10 µM) for human targets were assembled from ChEMBL version 15. A total of 37,850 compounds were obtained that were active against 342 targets. The number of target annotations per compound ranged from 1 to 35. This pool of compounds was subjected to two-step MMP and CSM generation, yielding 2,337 different CSMs, 665 of which contained promiscuous compounds. 1064 of these multi-target CSMs exclusively covered compounds active against targets from the same family, whereas the remaining 59 matrices contained compounds with activity against targets from two or more different families [26]. In Figure 8, two exemplary multi-target CSMs are shown that reveal compound promiscuity patterns. In Figure 8a, 29 compounds are represented by six related core structures and seven substituents. These compounds were active against six targets belonging to three different families. The number of targets per compound ranged from two to five. In the CSM, compounds sharing the same cores (rows) or substitutions (columns) displayed different degrees of promiscuity. Additionally, compounds with related cores and corresponding substitutions also displayed varying promiscuity. In Figure 8b, the most promiscuous matrix subset of a large and sparsely populated CSM comprising 123 compounds (top) is shown in detail (bottom). This subset contains compounds represented by five related core structures and six substituents. The cores differ by aromatic ring substitutions highlighted in red. These compounds were active against a total of 19 different targets belonging to three different families. The compound in the top right cell was active against 12 targets of the monoamine GPCR family. As a compound design hypothesis, the virtual compounds in this column provide suggestions for other compounds that might have a similar promiscuity profile. Hence, CSMs monitor promiscuity profiles of structurally related compound series at high resolution and contain many virtual entities that can be considered as candidates for the design of compounds with desired target profiles. Conclusion Herein we have reviewed currently available insights into compound promiscuity obtained by systematic mining of activity data. In general, bioactive compounds from different sources including high-throughput screening and medicinal chemistry have a lower degree of promiscuity than indicated for drugs. In addition, there is relatively little variation in compound promiscuity for prominent drug target families when high-confidence activity measurements are considered. However, the degree of compound promiscuity across different target families is dependent on the types of activity measurements that are considered. This might result from the more frequent determination of IC50 values for active compounds and diverse targets than of equilibrium constants, which require larger experimental efforts. At the same time, it can also not be ruled out that assay promiscuity (rather than "true" target promiscuity) is at least partly responsible for rapidly increasing levels of cross-family promiscuity on the basis of IC50 data. Regardless, we emphasize that bioactive compounds display lower degrees of promiscuity on the basis of currently available data than often thought. Figure 8. Multi-target compound series matrices. (a) Shown is a multi-target CSM containing 29 compounds active against six targets from three families. Structural differences between cores are highlighted in red. (b) A large CSM is shown that consists of 123 compounds active against 20 targets from four families. A region enriched by highly promiscuous compounds is highlighted and enlarged. Core structures and substituents are displayed. Taken together, the 11 compounds in this region are active against 19 targets from three families.
5,628
2014-01-29T00:00:00.000
[ "Chemistry", "Computer Science", "Medicine" ]
Exhaustive capture of biological variation in RNA-seq data through k-mer decomposition Each individual cell produces its own set of transcripts, which is the combined result of genetic variation, transcription regulation and post-transcriptional processing. Due to this combinatorial nature, obtaining the exhaustive set of full-length transcripts for a given species is a never-ending endeavor. Yet, each RNA deep sequencing experiment produces a variety of transcripts that depart from the reference transcriptome and should be properly identified. To address this challenge, we introduce a k-mer-based software protocol for capturing local RNA variation from a set of standard RNA-seq libraries, independently of a reference genome or transcriptome. Our software, called DE-kupl, analyzes k-mer contents and detects k-mers with differential abundance directly from the raw data files, prior to assembly or mapping. This makes it possible to retrieve the virtually complete set of unannotated variation lying in an RNA-seq dataset. This variation is subsequently assigned to biological events such as differential lincRNAs, antisense RNAs, splice and polyadenylation variants, introns, expressed repeats, and SNV-harboring or exogenous RNA. We applied DE-kupl to public RNA-seq datasets, including an Epithelial-Mesenchymal Transition model and different human tissues. DE-kupl identified abundant novel events and showed excellent reproducibility when applied to independent deep sequencing experiments. DE-kupl is a new paradigm for analyzing differential RNA-seq data with no preconception of target events, and it can also provide fresh insights into existing RNA-seq repositories. Background Successive generations of RNA sequencing technologies have established since the 1990s that organisms produce a highly diverse and adaptable set of RNA molecules. Modern transcript catalogs such as Gencode [1] now reach hundreds of thousands of transcripts, reflecting widespread pervasive transcription and alternative RNA processing. However, in spite of years of high-throughput sequencing efforts and bioinformatics analysis, we contend that large areas of transcriptomic information remain essentially disregarded. To illustrate this point, let us consider the biological events that drive transcript diversity. Firstly, transcripts result from transcription initiation events either at promoters of protein-coding and non-coding genes, or at multiple antisense or inter/intra genic loci. Secondly, transcripts are processed by a large variety of mechanisms, including splicing and polyadenylation, editing [2], circularization [3] and cleavage/degradation by various nucleases [4,5]. Thirdly, an essential, yet often overlooked, source of transcript diversity is genomic variation. Polymorphism and structural variations within transcribed regions produce RNAs with single nucleotide variations (SNVs), tandem duplications or deletions, transposon integrations, unstable microsatellites or fusion events. These events are major sources of transcript variation that can strongly impact RNA processing, transport and coding potential. Current bioinformatics protocols for RNA-seq analysis do not properly account for this vast diversity of transcripts. 
Prevalent computing strategies can be roughly classified into two categories: reference-based tools [6,7,8,9] rely on the alignment (or pseudo-alignment) of RNA-seq reads to a reference genome or transcriptome, while de novo assembly tools [10] reconstruct full-length transcripts based on the analysis of RNA-seq reads. These protocols fail to account for true transcriptional diversity in several respects: (i) they ignore small-scale variations such as SNPs or indels, (ii) they rely on full-length transcripts that cannot represent the full range of variation observed in organisms, and (iii) they misrepresent transcripts containing repeats due to ambiguity in alignment or assembly. We propose a new approach to RNA-seq analysis that facilitates the discovery of any type of event occurring in an RNA-seq library independently of alignment or transcript assembly. Our approach relies on k-mer indexing of sequence files, a technique that recently gained momentum in NGS data analysis [8,9,11,12,13]. To identify biologically meaningful transcript variation, our method selects k-mers with differential expression (DE) between two experimental conditions, hence its name: DE-kupl. Using public human RNA-seq datasets, we show that a large amount of RNA variation can be captured that is not represented in existing transcript catalogs. As proofs of concept, we applied DE-kupl to RNA-seq data from an Epithelial-Mesenchymal Transition (EMT) model and from different human tissues. DE-kupl identified abundant novel events and showed excellent reproducibility when applied to independent deep sequencing experiments. Results Reference datasets are an incomplete representation of actual transcriptomes First, we analyzed k-mer diversity in different human references and high-throughput experimental sequences. To this end, we extracted all 31-nt k-mers from sequence files using the Jellyfish program [14]. Figure 1A-B compares k-mers from Gencode transcripts, the human genome reference and RNA-seq libraries from 18 different individuals [15] corresponding to three primary tissues (6 libraries/tissue). To minimize the risk of including k-mers containing sequencing errors, we retained for each tissue only the set of k-mers that appear in 6 or more individuals. Measures of k-mer abundance show that k-mers are overwhelmingly associated with Gencode transcripts (Fig 1B1). However, when considering k-mer diversity, a large fraction of k-mers are tissue-specific and not found in the Gencode reference (Fig 1A). These tissue-specific k-mers may result from sequencing errors, genetic variation in individuals, or novel, non-reference transcripts. The majority of RNA-seq k-mers that do not occur in Gencode are found in the human genome reference (Fig 1B, 1B2), suggesting that polymorphisms and errors represent a minor fraction of tissue-specific k-mers and that many k-mers result from expressed genome regions that are not represented in Gencode. Further scrutiny of tissue-specific k-mers shows that a significant fraction can be mapped to the transcriptome with one substitution. However, for each tissue there is an average of 1 million k-mers that cannot be mapped to either reference (Fig 1B3). Non-reference k-mers classify samples as accurately as reference transcripts. We performed a Principal Component Analysis (PCA) of the above human tissue samples using conventional transcript counts and k-mer counts. 
PCA based on 20,000 randomly selected unmapped k-mers was able to differentiate tissues as well as PCA based on estimated gene expression or transcript expression (Fig 2). This illustrates how a "shadow", non-reference transcriptome that is not incorporated in standard analyses comprises biologically relevant expression data. When comparing RNA-seq and whole genome sequence (WGS) data from the same individual [16], library-specific k-mers represent a much larger fraction of RNA-seq than of WGS k-mers (Fig 3). This shows that non-reference sequence diversity is larger in RNA-seq than in WGS. Altogether these results point towards the existence of a significant amount of untapped biological information in RNA-seq data. Non-reference k-mers may result from three classes of biological events. First, they may stem from genetic polymorphism in the studied sample. Second, they may result from RNA processing, notably, but not limited to, splicing and polyadenylation. A predominant source of k-mers in this category is intron retention, whose products are not usually incorporated into reference databases and are mostly by-products of regular gene expression. A third, major source of k-mer "innovation" is intergenic expression (e.g., lincRNA, antisense RNA, expressed repeats or endogenous viral sequences). Altogether, the combination of these genetic, transcriptional and post-transcriptional events may have a profound impact on transcript function. A new k-mer based protocol for deriving transcriptome variation from RNA-seq data We designed the DE-kupl computational protocol with the aim of capturing all k-mer variation in an input set of RNA-seq libraries. This protocol is composed of four main components (Figure 4): (1) Indexing: index and count all k-mers (k=31) in the input libraries; (2) Filtering: delete k-mers representing potential sequencing errors or perfectly matching known transcripts; (3) Differential Expression (DE): select k-mers with significantly different abundance across conditions; (4) Assembly and annotation: build contigs of assembled k-mers and annotate contigs based on sequence alignment. DE-kupl departs radically from existing RNA-seq analysis procedures in that it neither "maps first" (a la the Tuxedo suite [17]) nor "assembles first" (a la Trinity [18]) but instead directly analyzes contents of the raw FASTQ files, displacing assembly and mapping to the final stage of the procedure. In this way, DE-kupl guarantees that no variation in the input sequence (even at the level of a single nucleotide) is lost at the initial stage of the analysis. Even unmappable k-mers such as sequences from repeats, low complexity regions or exogenous organisms are retained up to the final stage and can be analyzed. The DE-kupl protocol is detailed in Methods. We highlight here some of its key features. First, DE-kupl must deal with the large size of the k-mer index. A single human RNA-seq library contains on the order of 10^8 distinct k-mers and an index for 50 individual samples can reach billions of k-mers. We selected the Jellyfish tool for counting k-mers [14] as it presents very fast computing times and allows the full index to be stored on disk for further queries. The central process in DE-kupl is k-mer filtering. Filtering out unique or rare k-mers is relatively straightforward and considerably reduces k-mer diversity and the amount of sequence errors. Another stringent filter is the removal of k-mers matching reference Gencode transcripts. 
The rationale for this is that the bulk of k-mers in RNA-seq data comes from expressed exons, and we are not interested in this canonical exon expression, as it can be captured efficiently by conventional, reference-based protocols [8,9]. Discarding these k-mers enables us to ignore the overwhelming signal caused by known transcripts and focus on expressed regions harboring differences from the reference transcriptome. Two modes are available to perform differential analysis of k-mers (Figure S10 and Methods): the t-test filter mode is fast and has lower sensitivity, i.e. it retrieves only the most significantly differentially expressed k-mers. The DESeq2-based mode [19] is slower, more sensitive and is recommended for small samples (fewer than 6 vs. 6 samples). Whenever possible, key steps of the procedure (k-mer table merging, t-test, k-mer assembly) were written in C, enabling the whole procedure to run on a relatively standard computer in a reasonable amount of time. Discovery and assembly of differential RNA contigs with DE-kupl To assess the capacity of DE-kupl to discover novel differential events, we applied the procedure to 12 RNA-seq samples from an EMT cell-line model [20], in which NSCLC cells were induced by ZEB1 expression over a 7-day time course. We compared 6 RNA-seq libraries from the "Epithelial" stage of the time course (uninduced and Day 1) with 6 libraries from the "Mesenchymal" stage (Day 6 and 7). The full DE-kupl procedure was completed in about 4 hours in the t-test mode (single-threaded), and 6.5 hours in the DESeq2 mode (multi-threaded), using 8 computing cores, 54 GB RAM and 7 to 42 GB of hard disk space (Table 1). Recurrence filters efficiently reduced k-mer counts from 707M to 92.5M and the Gencode filter further reduced counts to 40.3M. Differential analysis using the t-test mode eventually retained 3.8M k-mers that were assembled into 133,690 contigs (Table 2). The resulting contigs ranged in size from 31 bp (corresponding to an "orphan" k-mer) to 3.6 kbp, with a major peak of short 31-40 bp contigs and a minor peak around 61 bp (Fig 5A). Almost all (99.2%) of the 133k DE contigs mapped to the human genome. Mapping revealed that most 61 bp contigs result from assembly of 31 overlapping k-mers harboring a single nucleotide variation (SNV) at every position of the k-mer. This phenomenon also causes a higher mismatch ratio for contigs around 61 bp (Fig 5B). Contigs that do not map to the human genome are generally shorter than mapped contigs (Fig 5A), indicating a lower signal-to-noise ratio in unmapped contigs. Expectedly, shorter mapped contigs tend to map at multiple loci more often than longer ones (Fig 5C); however, 80% of all contigs are uniquely mapped (not shown). Analysis of contig locations reveals distinct contig classes. Most contigs are located in annotated introns and exons (Fig 6); however, intronic contigs are predominantly exact matches while exonic contigs are predominantly mismatched. This effect is due to Gencode filtering: contigs with exact matches to introns are usually not filtered, as they do not pertain to a Gencode transcript, while contigs that match exons are filtered out unless they differ from the reference. This difference might be in the form of SNVs, or through exons extending into flanking intergenic or intronic regions. 
Under the same rationale, contigs mapping to intergenic and antisense regions are depleted in SNVs (Fig 6), consistent with their location in unannotated lncRNAs and antisense RNAs, while contigs overlapping exon-exon junctions behave like exonic contigs (high rate of SNVs). However, a significant fraction of exon junction contigs are exact matches, indicating they may correspond to novel junctions. Assigning contigs to biological events We assigned DE contigs generated from the EMT dataset to 11 classes of potential biological events, using the rule set described in Table 3. Since intragenic DE contigs may result from a mere over/under-expression of their host gene and do not necessarily reflect a differential usage (DU) of transcript isoforms, we implemented a simple strategy to distinguish the two situations based on the differential status of the host gene (Methods). We made this distinction for splicing, polyadenylation, SNVs and intron retention (Table 3). From the total set of 133k DE contigs (supplemental data), we extracted about 6900 contigs matching our rule set for one of the event classes (Table 3). We noted that a single event often generates multiple contigs. We thus further grouped contigs into "loci", defined as independent annotated genes or intergenic regions harboring one or more contigs (Table 3). We describe below the main classes of events identified. Differential splicing. Analysis of split-mapped contigs found evidence of potentially novel differential splice variants in 1040 contigs (Table 3, Fig 7A,B,C). Note that this class excludes SNV-containing contigs, so as to avoid known splice variants associated with genetic polymorphism. Furthermore, 171 of these contigs were classified as DU, suggesting differential splicing at these sites may not be a consequence of DE of the whole gene. Remarkably, these novel events include a number of subtle variations at 5' and 3' splice sites with 3-15 bp difference from the annotated reference, which escaped prior annotation (see e.g. Fig S1). Differential polyadenylation. We extracted all contigs aligned with 5 or more clipped (i.e., non-reference) bases at their 3' end, and containing 5 or more trailing As. Out of 140 such poly-A terminated contigs, 105 (75%) contained an AATAAA or variant polyadenylation signal (Table S1), indicating they result from actual polyadenylated transcripts (Table 3, line "PolyA"). Note these are not necessarily novel polyadenylation sites, since polyadenylated transcripts always create k-mers that differ from the reference transcriptome and are hence retained by DE-kupl. Indeed, only 6 of the 105 poly-A contigs mapped to intergenic regions. Furthermore, only 17 poly-A contigs mapped to genes with no differential expression ("polyA DU" in Table 3), and these had relatively poor fold change values (Table S11), raising doubts about their DU status. Altogether this analysis demonstrates that DE-kupl is able to capture bona fide polyadenylated transcripts present in the sequencing reads; however, we did not observe any clear case of differentially polyadenylated genes in the experiment studied. LincRNA. We identified a subset of 809 DE contigs (282 loci) corresponding to potential long intergenic non-coding RNAs (Table 3, line "lincRNA"). Criteria for lincRNAs were contigs of size > 200 nt mapped to an intergenic locus. Visual inspection revealed clear lincRNA-like patterns, with contigs clustered into well-defined transcription units with abundant read coverage and evidence of splicing (Fig 7C, Fig S2). 
DE-kupl is thus an effective tool for the identification of novel differentially expressed lincRNAs. Antisense RNAs. When DE-kupl is applied to stranded RNA-seq libraries (as with the EMT libraries used in this study), the resulting contigs are strand-specific and can thus be used for identifying antisense RNAs (asRNAs) and disambiguating loci with intricate expression on both strands. We identified 356 contigs from 156 loci mapping to the reverse strand of an annotated gene (Table 3, line "asRNA"). These antisense RNAs include very strong cases of differential expression (Fig 7D), sometimes combined with apparent repression of the sense gene (Fig S3). Allele-specific expression. As DE-kupl quantifies every SNV-containing k-mer, we set out to exploit this capacity to identify potential allele-specific expression events. We extracted all contigs including an SNV (either base substitution or indel) and mapping to an exon whose host gene was not measured as differentially expressed (Table 3, line "SNV DU"). This was a less than perfect procedure, as we did not explicitly test for a switch in allelic balance between the two conditions. Yet, among the 717 contigs identified, some displayed strong apparent changes in allelic balance between the E and M conditions (e.g. Fig S4). The ability of DE-kupl to capture differential SNVs between datasets may be particularly interesting when looking for recurrent mutations in subpopulations. Intron retention and other intronic events. As highly expressed transcripts often carry intronic byproducts, we expected DE-kupl to turn up many "parasitic" intronic contigs. Indeed, 1909 contigs mapped to intronic loci (Table 3, line "intron"). We thus focused on intronic k-mers from genes that were not DE (line "intron DU"). This filter identified 559 intronic contigs from 185 different genes. Inspection of read mapping at these loci revealed clear instances of novel skipped or extended exons (Fig S5), as well as cases where a specific short intronic region was differentially expressed, reminiscent of the pattern observed at intronic processed miRNAs and snoRNAs [21] (Fig S6). Therefore DE-kupl can be used for screening a wide variety of exon/intron processing events in addition to alternative splicing. Expressed repeats. Assessing the expression of human repeats by conventional RNA-seq analysis protocols is difficult, as ambiguous alignments render repeat regions "unmappable" [22]. Since DE-kupl first measures expression independently of mapping, we were able to collect and analyze differential contigs with multiple genome hits. 4968 contigs of size 50 nt or larger have multiple hits (not shown), and 1141 are repeated more than 5 times (Table 3, line "repeat"). RepeatMasker [23] found 664 out of these 1141 sequences to match known repeats, mostly LINEs, LTRs and SINEs (Fig S8). Further inspection showed that most of the remaining multiple-hit contigs correspond to unannotated repeats or low complexity regions. One of the most striking differential repeats is an unannotated 22x66 bp tandem repeat, located about 2 Mbp from the chromosome 8 telomere. This repeat is found about 50-fold overexpressed in the Mesenchymal condition (Fig 7B, S7). These results indicate DE-kupl can serve as a screen for differential expression or activation of endogenous viral sequences and other repeat-containing transcripts. 
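The rule set of Table 3 can be approximated by a simple decision function over the contig summary table, as sketched below; the field names used here are illustrative stand-ins for DE-kupl's actual annotation columns, and the rules are a simplified reading of Table 3 rather than the exact published criteria.

```python
# Sketch: rule-based assignment of DE contigs to event classes, in the spirit of Table 3.
# The contig record fields (mapped, n_hits, clipped_3prime, polyA_tail, is_split, has_snv,
# location, length, gene_is_DE) are assumptions, not DE-kupl's real column names.
def classify_contig(c):
    if not c["mapped"]:
        return "unmapped"
    if c["n_hits"] > 5:
        return "repeat"
    if c["clipped_3prime"] >= 5 and c["polyA_tail"]:
        return "polyA" if c["gene_is_DE"] else "polyA_DU"
    if c["is_split"] and not c["has_snv"]:
        return "splice" if c["gene_is_DE"] else "splice_DU"
    if c["location"] == "intergenic" and c["length"] > 200:
        return "lincRNA"
    if c["location"] == "antisense":
        return "asRNA"
    if c["location"] == "intron":
        return "intron" if c["gene_is_DE"] else "intron_DU"
    if c["has_snv"] and c["location"] == "exon" and not c["gene_is_DE"]:
        return "SNV_DU"
    return "other"
```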
Unmapped contigs. Finally, we analyzed DE contigs that did not map to the human genome. Unmapped contigs may result from transcripts produced by highly rearranged genes or by exogenous viral genomes and could thus be highly relevant biologically. In principle, DE-kupl is able to detect such events when levels of foreign RNA vary across samples. In this test set, where all samples come from an in vitro cell line, we did not expect to observe such a phenomenon. Indeed, out of 114 unmapped contigs of size > 50 bp (Table 3, line "Unmapped"), the vast majority (76%) correspond to vector sequences overexpressed in the "M" condition (not shown), indicating these contigs come from the expression vector used for EMT induction. The remaining unmapped contigs were low complexity sequences or aligned to non-human primate sequences, indicating possible contamination and/or misassemblies. DE-kupl event detection is reproducible across independent datasets We cross-validated DE-kupl findings using two independent human RNA-seq datasets extracted from the Genotype-Tissue Expression (GTEx) [24] and Human Protein Atlas (HPA) [15] projects. DE contigs were first obtained by running DE-kupl on 8 colon vs 8 skin libraries from GTEx. Events were classified as above into intron retentions, lincRNAs, polyadenylation sites, repeats, splice sites and unmapped. The 100 top events from each class (50 for class 'unmapped') were extracted and their k-mer labels saved as a sequence file. We then counted the occurrence of each k-mer in colon and skin libraries from the HPA project and applied DESeq2 [19] to evaluate the significance of the expression change between colon and skin (see Methods). 79% of the 550 DE k-mers identified in GTEx were also significantly DE in the HPA data (Figure 8). Each event class showed clear reproducibility, with particularly strong effects in lincRNAs and splice variants. This demonstrates that novel events identified by DE-kupl are reproducible across independent datasets in spite of independent RNA extraction, library preparation and sequencing protocols. Discussion K-mer decomposition followed by filtering and differential expression analysis is a novel way of analysing RNA-seq data that is capable of detecting a wider spectrum of transcript variation than previous protocols. DE-kupl explores all k-mers in the input RNA-seq files (vs. only k-mers from annotated transcripts in recent software [8,9]), which potentially entails heavy computational time and memory requirements. Using the Jellyfish k-mer indexing software and C code for key table manipulations, we achieved time/memory requirements on par with popular mapping-based software for similarly sized datasets. Another key aspect of our protocol that rendered a "full k-mer" analysis tractable was applying successive filters for rare k-mers, Gencode transcripts and differential expression, which altogether resulted in a 200-fold reduction in k-mer counts. These filters are not only useful for technical reasons (they reduce runtimes and get rid of most sequence errors), but they also allow us to focus on k-mers which (i) vary significantly between the conditions under study, and (ii) would not be captured by conventional reference-based protocols. Contrary to popular RNA-seq analysis software, DE-kupl does not attempt full-length transcript assignment or assembly but focuses instead on local transcript variations. 
Indeed, we do not consider full-length transcript analysis to be realistic when screening for unspecified RNA variation, since the combinatorial nature of genomic, transcriptomic and post-transcriptional events would require an indefinitely expanding transcript catalog. In some way DE-kupl is closer in spirit to methods analyzing local RNA-seq coverage such as RNAprof [25] and DERfinder [26], with the notable exception that DE-kupl does not involve mapping and thus avoids mapping-related pitfalls while considerably widening the range of detectable events. Another important benefit of the k-mer strategy is that k-mers representing events of interest can be used to efficiently assess the occurrence of similar events in the huge public compendium of RNA-seq data. We showed that DE-kupl is able to detect a wide range of differential transcription and RNA processing events. Although specialized software may perform better at assessing specific event classes such as differential splicing, to our knowledge no software provides such an extensive screen. As differential RNA-seq analysis is often conducted with an exploratory spirit, we argue that it is preferable to cast a wide net with no preconception of target events, using DE-kupl along with a conventional gene-by-gene differential expression analysis. Note that DE-kupl might also be an interesting option for exploring other types of NGS data such as small-RNA-seq, ChIP-seq or CLIP-seq, with simple adjustments of k-mer size and event annotation rules. In this proof-of-concept study, we focused on RNA-seq libraries from a cell line, where no genetic polymorphism was expected among samples. The next step will be application to libraries from multiple individual organisms. Although k-mer diversity will be higher in such datasets, preliminary tests with RNA-seq data from 60 human tumors were completed successfully on a single computer server (data not shown). Analysis of patient samples opens exciting perspectives. For instance, the ability of DE-kupl to simultaneously detect genetic variation and RNA expression/processing events may serve as a basis for studying genotype-phenotype relations. Analysis of patient RNA-seq data may also reveal event classes not explored in this work, such as fusion transcripts and circular RNAs. Methods First, we counted k-mers in each RNA-Seq and reference sequence set using Jellyfish (2.2.0) count, with options k = 32 and -C (canonical k-mers). The k-mer list for each tissue (Fig 1A and B) was produced by merging counts for all 6 samples and conserving only those found in all replicates. For mapping statistics (Fig 1B3), we extracted k-mers specific to each tissue and mapped them to the Ensembl 86 transcript reference using Bowtie (version 1.1.2). Unmapped k-mers were mapped a second time with Bowtie to the GRCh38 genome reference. Reads with 3 or more mismatches are not mapped by Bowtie and are therefore considered unmapped. The intersection of k-mers between RNA-Seq and WGS data (Fig 1C) is based on the transcriptome and genome of lymphoblastoid cell lines [16]. K-mers were counted in these libraries with the same procedure as above. In order to reduce noise from sequencing errors, k-mers with only one occurrence were filtered out. 
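For illustration, the counting step can be emulated in pure Python on toy data as sketched below; real libraries require Jellyfish's disk-backed index, and the k-mer size and abundance cutoff shown here simply mirror the defaults mentioned in the text.

```python
# Toy sketch of canonical k-mer counting (the real pipeline uses Jellyfish for scale).
from collections import Counter

COMP = str.maketrans("ACGT", "TGCA")

def canonical(kmer):
    # A k-mer and its reverse complement are counted as the same canonical k-mer.
    rc = kmer.translate(COMP)[::-1]
    return min(kmer, rc)

def count_kmers(reads, k=31, min_count=2):
    counts = Counter()
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            if "N" not in kmer:
                counts[canonical(kmer)] += 1
    # Keep only k-mers seen at least min_count times (drops most sequencing errors).
    return {km: c for km, c in counts.items() if c >= min_count}
```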
DE-kupl Implementation The DE-kupl pipeline (Fig S12) is implemented using the Snakemake [27] workflow manager. A configuration file is filled in by the user with the location of the FASTQ files and the condition of each sample, as well as global parameters such as k-mer length, CPU number, maximum memory and other parameters for each step of the pipeline, as described hereinafter. K-mer counting Raw sequences (FASTQ files) are first processed with the jellyfish count command of the Jellyfish software, which produces one index (a disk representation of the Jellyfish hash table) for each sequence library. For stranded RNA-seq libraries, reads in reverse direction relative to the transcript are reverse-complemented, ensuring proper orientation of k-mers. At this point, for each library, only k-mers having at least 2 occurrences are recorded (user-defined parameter). Once a Jellyfish index is built, we use the jellyfish dump command to output the raw counts in a two-column text file, which contains on each line a k-mer and its number of occurrences. Raw counts are then sorted alphabetically by k-mer sequence with the Unix sort command. K-mer filtering All sample counts are joined together using the dekupl-joinCounts binary to produce a single matrix with all k-mers and their abundances in all samples. Given an integer a ≥ 0, we define the recurrence of a k-mer x as the number of samples where x appears more than a times, i.e. recurrence(x, a) = Σ_{i=1}^{n} 1{x_i > a}, where n is the total number of samples and x_i is the number of times the k-mer x appears in sample i. The k-mer filtering step involves two user-defined parameters: an integer min_recurrence_abundance and an integer min_recurrence, such that a k-mer x is filtered out if recurrence(x, min_recurrence_abundance) < min_recurrence, i.e. if the k-mer x appears more than min_recurrence_abundance times in fewer than min_recurrence of the samples. Usually min_recurrence is set to the number of replicates in each condition, and min_recurrence_abundance is set to 5. In order to remove known transcript sequences from our set of experimental k-mers, we also use our Jellyfish-based procedure to create the set of k-mers appearing in the reference transcriptome and we subtract this set from the experimental k-mers. Differential k-mer expression Prior to differential analysis, we compute normalization factors (NFs) using the "median ratio method" [28] on the table of k-mers after the recurrence filter: for each sample, the NF is the median of the ratios between sample counts and counts of a pseudo-reference obtained by taking the geometric mean of each k-mer across all samples. To avoid dealing with the complete table of k-mers, we extracted a random subset of 30% of the k-mers and computed NFs on this subset. Computing NFs on the complete table of k-mers, on the table of k-mers after recurrence and Gencode filters, or on the table of transcript abundances produced by Kallisto [8] led to similar values (Fig S9). To perform differential analysis, two options are implemented (Fig. S10). The first option is to apply a t-test for each k-mer on the log-transformed counts, normalized with the previously computed NFs. Transformation of raw counts in conjunction with linear model analysis has been successfully used for differential analysis of counts [29]. We perform the t-test independently on each k-mer and avoid complex variance modeling strategies to reduce the execution time of the analysis. The t-test option has been implemented in C in the dekupl-TtestFilter binary. Note that this t-test option is not appropriate for small samples [30]. 
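The recurrence filter, the median-ratio normalization factors and the t-test mode described above can be summarized in the following NumPy/SciPy sketch; the count-matrix layout (k-mers in rows, samples in columns) and the handling of zero counts are our own simplifications, not the exact behavior of the C implementation.

```python
# Sketch of the recurrence filter, median-ratio normalization and per-k-mer t-test.
# counts is an assumed (n_kmers x n_samples) NumPy array of raw k-mer counts.
import numpy as np
from scipy import stats

def recurrence_filter(counts, min_rec_abundance=5, min_recurrence=6):
    # recurrence(x, a) = number of samples where the k-mer count exceeds a.
    recurrence = (counts > min_rec_abundance).sum(axis=1)
    return counts[recurrence >= min_recurrence]

def median_ratio_nf(counts):
    # Median of ratios to a geometric-mean pseudo-reference (rows with zeros dropped).
    nonzero = counts[(counts > 0).all(axis=1)]
    pseudo_ref = np.exp(np.log(nonzero).mean(axis=1))
    return np.median(nonzero / pseudo_ref[:, None], axis=0)

def ttest_per_kmer(counts, cond_a, cond_b, nf):
    # t-test on log-transformed, normalized counts for each k-mer;
    # cond_a and cond_b are lists of column indices for the two conditions.
    logc = np.log2(counts / nf + 1.0)
    return stats.ttest_ind(logc[:, cond_a], logc[:, cond_b], axis=1).pvalue
```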
To increase the power of the analysis, in particular for small samples (typically fewer than 6 vs 6 libraries), we strongly advise using the second option, based on a generalized linear model and implemented in the R package DESeq2 [19]. On top of modeling raw counts (normalization or prior log-transformation of the counts is not required), this approach performs information sharing across k-mers to improve variance estimation and differential analysis results. However, given the large number of k-mers, we do not apply this approach to the complete matrix of k-mer counts. We divide the matrix of k-mer counts into random chunks of approximately equal size (around one million k-mers) and apply the DESeq2 approach to each chunk independently. For each chunk, the previously computed NFs are used as input to the method and are not recomputed for each chunk. Raw p-values, not adjusted for multiple testing, are collected for each chunk and merged into a single vector containing the raw p-values for all tested k-mers. Subsequently, raw p-values obtained from either the t-test or the DESeq2 test are adjusted for multiple comparisons using the Benjamini-Hochberg procedure [31], and k-mers with adjusted p-values above a user-set cutoff are filtered out. K-mer assembly DE k-mers are assembled de novo in order to group k-mers that potentially overlap the same event (i.e. all k-mers overlapping the same splice junction or SNV). To this aim, we developed our own procedure called mergeTags, which works as follows: we first identify all exact k−1 prefix-suffix overlaps between k-mers. We consider only k-mers that overlap with exactly one other k-mer, and merge all pairs of k-mers involved in such overlaps. For example, given the set of k-mers {ATG, TGA, TGC, CAT}, the following contigs are produced: {CATG, TGA, TGC}. We repeatedly merge contigs that overlap exactly over k − 1 bp with exactly one other contig. We then repeat this assembly process with k − 2 exact prefix-suffix overlaps, using as input the contigs produced at the previous step, and so forth for increasing values of i such that k − i > 15 bp (default value). Finally, a set of DE contigs is produced and each contig is labelled by its constituent k-mer of lowest p-value. This assembly procedure is implemented in C in the dekupl-mergeTags binary. Contig Annotation Finally, DE contigs are annotated in order to facilitate biological event identification. First, contigs are aligned with BLAST [32] against Illumina adapters. Contigs matching adapters are discarded. Retained contigs are further mapped to the hg38 human reference genome using the GSNAP short read aligner [33], which showed the best speed/sensitivity ratio for aligning both short and long contigs in internal tests (not shown). GSNAP is used with option -N 1 to enable identification of new splice junctions. Contigs not mapped by GSNAP are collected and re-aligned using BLAST. Alignment characteristics are extracted from the GSNAP and BLAST outputs. Alignment coordinates are compared with Ensembl (v86) annotations (in GFF3 format) using BEDTools [34] and a set of locus-related features is extracted. The final set of annotated features (Table S3) is reported in a contig summary table. The annotation procedure generates two additional files: a "per locus" summary of contigs (one line per genic or intergenic locus), and a BED file of contig locations that can be used as a display track in genome browsers. 
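To make the first assembly round concrete, here is a minimal re-implementation of the merging rule described above (exact prefix-suffix overlaps of a given length, merging only sequences with an unambiguous partner), reproducing the toy example; this is an illustrative sketch, not the dekupl-mergeTags code.

```python
from collections import defaultdict

def merge_unique_overlaps(seqs, overlap):
    """One round of greedy merging: join sequences whose exact prefix/suffix
    overlap of the given length is unambiguous (one partner on each side)."""
    by_prefix = defaultdict(list)
    for s in seqs:
        by_prefix[s[:overlap]].append(s)

    successors = {}
    for s in seqs:
        hits = [t for t in by_prefix.get(s[-overlap:], []) if t != s]
        if len(hits) == 1:                     # unique right-hand partner
            successors[s] = hits[0]

    indegree = defaultdict(int)
    for t in successors.values():
        indegree[t] += 1
    links = {s: t for s, t in successors.items() if indegree[t] == 1}

    contigs = []
    starts = [s for s in seqs if s not in set(links.values())]
    for s in starts:
        contig, cur = s, s
        while cur in links:
            nxt = links[cur]
            contig += nxt[overlap:]
            cur = nxt
        contigs.append(contig)
    return contigs

# Toy example from the text: {ATG, TGA, TGC, CAT} with k-1 = 2 overlaps
print(merge_unique_overlaps(["ATG", "TGA", "TGC", "CAT"], overlap=2))
# -> ['TGA', 'TGC', 'CATG']
```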
In the "per locus" table, a locus is defined as either an annotated gene, the genomic region located on the opposite strand of an annotated gene, or the genomic region separating two annotated genes. The table records the number of contigs overlapping each locus as well as the contig with lowest FDR for this genomic interval. In parallel to k-mer counting and filtering, we analyze the RNA-Seq data libraries a conventional differential expression protocol. Reads are processed with Kallisto [8] to estimate transcript abundances. Transcript-level counts are then collapsed to the gene-level and processed with DESeq2 [19] to produce a set of differentially expressed genes. This information is stored in the contig summary table and used later on for defining events with differential usage ("DU" in Table 3). DE-kupl run on EMT data DE-kupl was run using RNA-seq libraries from [20] retrieved on the GEO web site under accession GSE75492. For stage "E" we used libraries GSM1956974, GSM1956975, GSM1956976, GSM1956977, GSM1956978, GSM1956979, and for stage "M" GSM1956992, GSM1956993, GSM1956994, GSM1956995, GSM1956996, GSM1956997. DE-kupl parameters were kmer_length 31, min_recurrence 6, min_recurrence_abundance 5, pvalue_threshold 0.05, lib_type stranded, diff_method Ttest. Output files are provided as supplementary material. The DE-kupl contig summary table was analyzed interactively using R commands to extract lists of contigs based on the filtering rules described in Table 3. Visualization of selected contigs was performed with IGV [35], using the bed file produced by DE-kupl and read mapping files produced by STAR [36]. Cross-validation DEkupl was applied to 8 skin and 8 colon libraries from GTEx [24]: skin library IDs: SRR1308800, SRR1309051, SRR1309767, SRR1310075, SRR1311040, SRR1351501, SRR1400467, SRR1479595; colon library IDs: SRR1316343, SRR1396146, SRR1397292, SRR1477732, SRR1488307, SRR807751, SRR812697, SRR819486. DE-kupl parameters were kmer_length 31, min_recurrence 6, min_recurrence_abundance 5, pvalue_threshold 0.05, lib_type unstranded, diff_method Ttest. DE-kupl contigs were interactively classified using R commands, applying the same rules as in Table 3. Classes asRNA and SNV-DU were not included since asRNA identification is not possible using the unstranded GTEx and HPA libraries, and we had no reason to expect common SNVs with differential usage in this dataset. DE contigs were sorted by fold-change and k-mer labels of the top 100 DE contigs in each class were extracted (50 for class 'unmapped' due to lower event number). DEkupl output files and selected k-mers are provided in supplementary material. For cross-validation, we used the same 6 skin and 6 colon RNA-Seq data as in Figure 1 (10.1126/science.1260419, E-MTAB-2836). K-mers were counted in each library using Jellyfish with options k = 31 and -C (canonical k-mers) as GTEx data were unstranded. All k-mers selected from the GTEx analysis were queried against the jellyfish databases using jellyfish query command. Finally the extracted k-mers counts were processed with DESeq2 [19] and the resulting adjusted p-values were plotted for each event class (Figure 8). Competing interests The authors declare that they have no competing interests. Intersection of k-mers present in Gencode transcripts and RNA-Seq data from three tissues: bone marrow, skin and colon. The set of k-mers for each tissue was defined as the common k-mers shared by all six individuals. B. 
Intersection of k-mers present in Gencode transcripts, the reference human genome (GRCh38) and RNA-Seq data (same as in A). B1. Distribution of k-mer abundances for each tissue represented in A and B. K-mers shared with Gencode are labelled "GENCODE"; among the other k-mers, those shared with the human genome are labelled "GRCh38" and the remaining k-mers are labelled "tissue-specific". The same procedure was applied in B2 and B3. B2. Distribution of k-mer diversity for each tissue. B3. Mapping statistics of k-mers labeled as "tissue-specific" in B2. These k-mers were first mapped to Gencode transcripts, and unmapped k-mers were then mapped to the GRCh38 reference using Bowtie1, allowing up to 2 mismatches in a 31-mer. Figure 2 Principal Component Analysis on non-reference k-mers discriminates tissues. Samples are labeled according to their tissues (bone marrow, colon, skin). PCAs were produced with normalized, log-transformed counts. For genes and transcripts, counts were generated with Kallisto based on Gencode V25. Genomic k-mers correspond to 20k random k-mers from the RNA-seq libraries that did not map to Gencode transcripts but successfully mapped to GRCh38. Figure 3 The diversity of non-reference k-mers is greater for RNA-Seq than for whole genome sequencing (WGS). Intersection of k-mers between Gencode transcripts, the human genome (GRCh38), RNA-Seq and WGS data. RNA-Seq and WGS data originate from the same lymphoblastoid cell line (HCC1395). Figure 6 Genomic location of differentially expressed contigs. Contigs are separated by genomic location, according to their overlap with exons, exon-exon junctions, introns, antisense regions of annotated genes or intergenic regions. The right panel shows the total number of contigs in each class; the left panel shows the contig distribution according to alignment status: contigs with a single mapping location are labeled "perfect match", "one mismatch" or "multi mismatches", and contigs with multiple mapping locations are labeled "multi-map". Figure 7 Examples of DE contigs. Sashimi plots generated from IGV using read alignments produced with STAR [36]. Sample SRR2966453 from condition D0 is labeled "E" (epithelial). Sample SRR2966474 from condition D7 is labeled "M" (mesenchymal). Annotations from Gencode and DE-kupl DE contigs are shown at the bottom of each frame. A. New splicing variant involving an unannotated exon, overexpressed in condition "E". B. Tandem repeat at chr8:143,204,870-143,206,916 (red region) that is overexpressed in condition "M" vs. "E". Note that the overexpressed tandem repeat is part of a larger overexpressed unannotated locus. C. A novel lincRNA overexpressed in condition "E". D. A novel antisense RNA. RNA-seq reads are aligned in the forward orientation while the gene at this locus is in the reverse orientation. The annotated gene is not expressed. (Residual rows of the event-class filtering table, covering the Intron, Intron DU, Repeats and Unmapped classes; the column layout was lost in extraction.)
8,666
2017-03-31T00:00:00.000
[ "Biology" ]
A method for complete characterization of complex germline rearrangements from long DNA reads 1. Department of Human Genetics, Yokohama City University Graduate School of Medicine 2. Research Institute for Microbial Diseases, Osaka University, Suita, Japan 3. Artificial Intelligence Research Center, National Institute of Advanced Industrial Science and Technology (AIST) 4. Graduate School of Frontier Sciences, University of Tokyo 5. Computational Bio Big-Data Open Innovation Laboratory (CBBD-OIL), AIST 6. Contributed equally Introduction Various germline DNA sequence changes are known to cause rare genetic disorders. Many small nucleotide-level changes (one to a few bases) in 4,143 genes have been reported in OMIM (https://www.omim.org/) (as of Aug 24, 2019), which are known as single gene disorders. In addition to these small changes, large structural variations of the chromosomes can also cause diseases. Previous studies on pathogenic structural changes in patients with genetic/genomic disorders found chromosomal abnormalities by microscopy, by detecting copy number variations (CNVs) using microarrays 1 , or by detecting both CNVs and breakpoints using high-throughput short read sequencing 2 . However, there are difficulties in precisely identifying sequence-level changes especially in highly similar repetitive sequences (e.g. simple repeats, recentlyintegrated transposable elements), or in finding how these rearrangements are ordered 3 . Long read sequencing (PacBio or nanopore) is advantageous for characterizing rearrangements in such cases, and is recently beginning to be used for patient genome analysis to identify pathogenic variations [4][5][6] . In addition, if rearrangements are complex (e.g. chromothripsis), long read sequencing (reads are more than 10 kb in length) has a further advantage because one read may encompass all or much of a complex rearrangement 7 . Chromothripsis is a chaotic complex rearrangement, where many fragments of the genome are rearranged into derivative chromosomes. Current approaches to analyze chromothripsis usually need manual inspection to reconstruct whole rearrangements. Detection and reconstruction methods for complex rearrangements are needed to characterize pathogenic variations from whole genome sequencing data. In order to understand rearrangements between two sequences (e.g. a read and a genome), we must determine equivalent positions, i.e. bases descended from the same base in the most recent common ancestor of the sequences. This is not necessarily easy, due to sequences that are similar but not equivalent (e.g. alpha-1 and alpha-2 globin). If we compare two sequences that have both undergone deletions, duplications, and rearrangements since their common ancestor, it seems hard to reliably determine equivalent bases. To make the problem tractable, we impose an assumption: that we are comparing a derived sequence (a DNA read) to an ancestral sequence (the genome) 8 . This means that every part of the read is descended from (equivalent to) a unique part of the genome. (The exception is "spontaneously generated" sequence not descended from an ancestor: this is rare, and we allow for it by allowing parts of the read to not align anywhere.) Thus, we need to accurately: divide the read into (one or more) parts and align each part to the genome. To do this, we first learn the rates of small insertions, deletions, and each kind of substitution in the reads 9 , then find the most-likely division and alignment based on these rates 8,10 . 
We can also calculate the probability that each base is wrongly aligned, which is high when part of a read aligns almost equally well to several genome loci. This approach was previously used to characterize rearrangements that are "localized", i.e. encompassed by one DNA read 8 . Here we extend this approach to: find arbitrary (non-localized) rearrangements, subtract rearrangements found in control individuals, then order and orient rearranged DNA reads to fully reconstruct complex rearrangements in derivative chromosomes. Nanopore sequencing of 4 patients with chromosomal translocations We sequenced genomic DNA from 4 patients with reciprocal chromosomal translocations using a nanopore long read sequencer (Table 1). Clinical information on these patients is described in the Supplementary methods and elsewhere [11][12][13][14] . Patient 1 11,12 has a de novo reciprocal translocation between chr2 and chrX, 46,X,t(X;2)(q22;p13) (Fig 2a). The breakpoints were not detected by short read sequencing 15 though they were detected by more-painstaking breakpoint PCR 11 , so we tested whether we could find this rearrangement with long reads. We performed PromethION DNA sequencing (112 Gb), and found 2,773 groups of rearranged reads compared to the human reference genome hg38. After subtracting rearrangements present in 33 controls, we found 80 patient-only groups, of which two involve both chr2 and chrX (Fig 2b). These are exactly the reciprocal chr2-X translocation (Fig 2c, Supplementary Fig 2). The breakpoints agreed with the reported breakpoints determined by Sanger sequencing 11 (Supplementary Fig 3). These types of retrotransposon are known to be active or polymorphic in humans [16][17][18] . One case appears to be an orphan 3'-transduction from an L1HS in chr20: the L1HS was transcribed with readthrough into 3' flanking sequence, then the 3'-end of this transcript (without any L1HS sequence) was reverse-transcribed and integrated into chr10 (Fig 2e). Such orphan transductions can cause disease 19 . We also found an insertion of mitochondrial DNA (NUMT) into chr2 (Fig 2e). Some of these rearrangements have been previously found in other humans, e.g. the ERV-K LTR inserted in chr12 20 . Thus our subtraction of rearrangements found in other humans was not thorough, especially because Patient 1 is Caucasian whereas most of our controls (32/33) are Japanese. Patient 2 Patient 2 (described as Case 1 by Nishimura et al.) 11 has a reciprocal chromosomal translocation between chr4 and chrX, 46,X,t(X;4)(q21.3;p15.2), and a 4 kb deletion of chrX and a 7 kb deletion of chr4 (Fig 3a): these were 
found previously by Southern blot combined with inverse PCR sequencing 11 but not by short read sequencing 15 . We performed PromethION DNA sequencing (117 Gb), and found 3,336 groups of rearranged reads relative to the reference genome, which reduced to 33 groups after control subtraction (Fig 3b). Only 2 out of 33 groups involve both chr4 and chrX: they show a reciprocal unbalanced chromosomal translocation exactly as described previously 11,15 (Fig 3c, Supplementary Fig 4). Another of the 33 groups shows a 43 kb deletion near the translocation site at chrX:107943791-107986323 (Fig 3c, Supplementary Fig 4), which eliminates the TEX13B gene (Supplementary Fig 4) and was not previously described 15 , including a 10 kb deletion that removes most of the TRIM48 gene. Patient 3: complex rearrangements at the chr7-chr15 translocation We next analyzed Patient 3, whose precise structure of chromosomal translocations was only partly solved before 13,15 . Patient 3 was reported to have two reciprocal chromosomal translocations, between chr7 and chr15 as well as between chr9 and chr14, t(7;15)(q21;q15) and t(9;14)(q21;q11.2) (Fig 4a), and has 4.6 Mb and ~1 Mb deletions on chr15 and chr7, respectively, which were predicted by microarray, although the precise locations of the breakpoints were not determined. We performed whole genome nanopore sequencing (95 Gb) on this patient and found 3,351 groups of rearranged reads relative to the reference genome, which reduced to 43 groups after control subtraction (Fig 4b). Fifteen out of 43 groups are involved in the two translocations: dnarrange-link found a unique way to order and orient them without changing the number of chromosomes (Fig 4c, Supplementary Fig 6). At first, there seem to be two groups involving both chr9 and chr14, which accurately indicate the balanced chr9-chr14 translocation described previously 15 . However, dnarrange-link additionally identified a complex rearrangement for t(9;14)(q21;q11.2). A part of chr4 was unexpectedly inserted into derivative chr9 (Fig 4d). This rearrangement was not investigated in the previous analyses, as chr7q21 was the primary locus for split-foot. In addition to this, the full reconstruction shows the 4.6 Mb and ~1 Mb deletions on chr15 and chr7, respectively, which were detected by microarray (Fig 4e). Note that these deletions are not present in any single part of the rearrangement, but only in the fully-reconstructed rearrangement: they are holistic properties of the complex rearrangement. One candidate gene for split-foot, SEM1, was not disrupted, nor had altered expression in lymphoblastoid cells (Supplementary Fig 7a, b, Supplementary Results). A striking feature of these rearrangements is that the rearranged fragments come from near-exactly adjacent parts of the ancestral genome (Fig 4d, e). 
This suggests that the rearrangements occurred by shattering of the ancestral genome into multiple fragments, which rejoined in a different order and orientation with loss of some fragments. Such shattering naturally explains why the fragments come from adjacent parts of the ancestor 8 . We performed Sanger sequence confirmation for all 18 breakpoints (Supplementary Table 6). There were only minor differences (usually 0 or 1 bases) between the Sanger sequence-confirmed breakpoints and the dnarrange-predicted breakpoints from lamassemble consensus sequences (Supplementary Fig 9). The other rearrangements are mostly local tandem-duplications or insertions (Supplementary Table 4, Supplementary Fig 11). Dotplot pictures of reads that cross the chr1 breakpoint suggest that there is a reciprocal translocation, but the other half of the read aligns (with low confidence) to satellite or simple repeat sequences at centromeric regions on multiple different chromosomes (Fig 5d, two example reads are shown). This limitation might be overcome by obtaining reads long enough to extend beyond the centromeric repeats, or perhaps by obtaining a reference genome that is more accurate in centromeric regions. Discussion We analyzed a variety of chromosomal translocations in 4 patients, who were selected because previous studies had difficulty in determining precise breakpoints by conventional approaches including microarrays and short read sequencing. In particular, the complex rearrangements in Patient 3 were not solved even by intensive analysis 13,15 . Our method could not only precisely detect breakpoints but also characterize how the shattered fragments were ordered and oriented. To the best of our knowledge, there has been no method to filter patient-only rearrangements and connect them to reconstruct rearranged chromosomes by an automatic algorithm. Recently, long read sequencing has become available for individual genome analysis due to decreasing cost and increasing output data size. Accordingly, there have been a few approaches using long read sequencing to detect structural variations 7,8,21 , including tandem-repeat changes in rare genetic diseases 6 , providing evidence that long read sequencing has a clear advantage in precisely detecting rearrangements. We observed that multiple breakpoints were jointly detected in a single read in Patient 3 (Supplementary Fig 8d, e), because long enough reads can cover several breakpoints, which is helpful to phase and order rearrangements. There are continuous efforts to obtain longer nanopore reads; however, in the case of complex rearrangements (e.g. chromothripsis), it is not easy to cover whole rearrangements, as seen in Table 5. In summary, our approach using dnarrange and long read sequencing is superior to conventional approaches (e.g. microarray) because it can: 1) connect multiple rearrangements, 2) subtract shared rearrangements, and 3) detect balanced chromosomal rearrangements (e.g. inversions). 
Our approach in this study narrowed down patient-only rearrangements using 33 controls. The number of rearrangements decreased exponentially with the first few control samples, down to a few hundred. This may be due to the presence of common rearrangements in the population. We suspect large numbers of controls will not be needed if there is a target rearrangement locus (e.g. 4p15.2), because the number of candidates is small. In all 4 patients, patient-only rearrangements (not present in at least 66 autosomal alleles of 33 controls) were fewer than 100. If we were to further narrow down to ultra-rare variations that may cause rare congenital disorders, a larger number of controls may be considered. Patient 1 has more patient-only groups of rearranged reads (80) than the other patients (33, 43 and 14). This is because the patient is Caucasian and most of the control data used were Japanese (32/33 datasets). Applying ethnicity-matched controls, or parents or other relatives, will be useful to further remove benign rearrangements. We noticed that large fractions of these rearrangements are insertions or tandem multiplications (Supplementary Tables 4 and 7). Interestingly, most of the inserted sequences were aligned to TEs. TE insertions may be a common type of rare variation seen in individuals. In addition to TE insertions, we detected rare processed pseudogene insertions in 3 patients. Two of these insertions were previously described, with allele frequency 1-10% in Japanese (MFF) and 1-10% in non-Japanese (MATR3) 26 . We also observed non-tandem duplications that do not seem to be retrotranspositions: interestingly, about half of these are localized, i.e. a copy of a DNA segment is inserted near (e.g., within a few kb of) the original segment 8 (see blue highlighted loci in Supplementary Table 5). Our analysis proves useful despite its dubious assumption that the reference genome is ancestral to the DNA reads. This may be partly because we focus on disease-causing rearrangements, which are likely to be derived. Also, incorrect rearrangements due to a non-ancestral reference may be found in both patients and controls, and thus filtered out. It would be useful to construct a reference human genome that is ancestral (and complete), as far as possible, because this simplifies the relationship between the reference and extant human DNA sequences 8 . Our method, in combination with subtraction of rearrangements shared with control datasets, has a great strength in precisely detecting chromosomal rearrangements, including inversions, translocations, TE insertions, NUMT and processed pseudogene insertions. There has been no method that can effectively subtract rearrangements shared in the population, thus we believe our method is useful for analyzing complex rearrangements in a clinical setting (i.e. rare genetic disease or perhaps cancer genomes). 
We also showed a limitation of our method: detecting rearrangements in large repetitive regions beyond the length of long reads, as seen in Patient 4. These regions are still elusive and highly variable between individuals. To date there is no good method to detect rearrangements in large repetitive regions (e.g. centromeric or telomeric repeats) genome-wide. We hope our understanding of these still-intractable regions will expand as sequencing technologies advance. In conclusion, we developed an effective method to find chromosomal aberrations, with precise breakpoint identification, only from long read sequencing. Our method also provides an automatic algorithm for reconstruction of complex rearrangements. Long read sequencing may be considered when chromosomal abnormalities are suspected. Samples and ethical issues All genomic DNA from patients and controls was examined after obtaining informed consent. Experimental protocols were approved by the institutional review board of Yokohama City University under number A19080001. dnarrange dnarrange finds DNA reads that have rearrangements relative to a reference genome, and discards "case" reads that share rearrangements with "control" reads (Supplementary Methods). It takes one or more files of read-to-genome alignments, where each file is a "case" or a "control". It assumes the alignments have this property, which is guaranteed by last-split: each read base is aligned to at most one genome base. dnarrange first performs these steps, for cases and controls: 1. In order to recognize large "deletions" as rearrangements, if an alignment has deletions >= g (a threshold; default 10 kb), split it into separate alignments on either side of these deletions. 2. Get rearranged reads. We classify rearrangements into four types: inter-chromosome, inter-strand (if a read's alignment jumps between the two strands of a chromosome), non-colinear (if a read's alignment jumps backwards on the chromosome), and "big gap" (if a read's alignment jumps forwards on the chromosome by >= g). 3. Discard any "case" read that shares a rearrangement with any "control" read. (Two reads are deemed to share a rearrangement if they have similar rearrangements that overlap in the genome: the precise criteria are in the Supplementary methods, Supplementary Fig 13, 14.) It then performs these steps, for cases only: 4. Discard any read with any rearrangement not shared by any other read. Repeat this step until no further reads are discarded (so that dnarrange has the useful property of idempotence). 5. Group reads that share rearrangements. First, a link is made between any pair of reads that share a rearrangement. Then, groups are connected components, i.e. sets of reads linked directly or indirectly. 6. Discard groups with fewer than 3 reads. 
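A minimal sketch of steps 2 and 5-6 above, assuming each read's alignments are available as genome segments (chromosome, strand, start, end) listed in read order; the data layout and function names are ours and greatly simplified relative to the actual dnarrange implementation.

```python
from collections import defaultdict

G = 10_000  # "big gap" threshold g, default 10 kb

def rearrangement_types(segments, g=G):
    """Classify the junctions between consecutive alignment segments of one read.
    Each segment is (chrom, strand, start, end), in read order."""
    types = []
    for (c1, s1, b1, e1), (c2, s2, b2, e2) in zip(segments, segments[1:]):
        if c1 != c2:
            types.append("inter-chromosome")
        elif s1 != s2:
            types.append("inter-strand")
        elif b2 < e1:                 # jumps backwards on the chromosome
            types.append("non-colinear")
        elif b2 - e1 >= g:            # jumps forwards by >= g
            types.append("big gap")
    return types

def group_reads(shared_pairs, min_group=3):
    """Steps 5-6: connected components over reads linked by shared rearrangements."""
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    def union(a, b):
        parent[find(a)] = find(b)
    for a, b in shared_pairs:
        union(a, b)
    groups = defaultdict(set)
    for read in parent:
        groups[find(read)].add(read)
    return [g for g in groups.values() if len(g) >= min_group]

# Toy example: a read jumping from chr2 to chrX, as in a reciprocal translocation
read = [("chr2", "+", 100_000, 110_000), ("chrX", "+", 500_000, 509_000)]
print(rearrangement_types(read))                      # ['inter-chromosome']
print(group_reads([("r1", "r2"), ("r2", "r3"), ("r4", "r5")]))
```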
* The left end is downstream of (has a higher reference coordinate than) the right end. In order to infer the actual links, we require some further information or assumption. We make this assumption: there are as many links as possible, or equivalently, the derived genome has as few chromosomes as possible. For example, in Fig 6a, B1 may be linked to C2, but in that case it becomes impossible to link C1 to anything, and D1 to anything. Based on our assumption, we instead link B1 to C1 and D1 to C2. In this example, dnarrange-link infers two derivative chromosomes: one is reconstructed from two reads by linking A2 to E1, the other is reconstructed from three reads by linking D1 to C2 and C1 to B1 (Fig 6b). The two types of end, with this linkability relationship, define a bipartite graph. To infer the links based on our assumption, we find a "maximum matching" in this graph. If there is more than one maximum matching, one is chosen arbitrarily, and a warning message is printed. 1. Calculate the rates of insertion, deletion, and substitutions between two reads by "doubling" the rates from last-train, because errors occur in both reads. 2. Use these rates to find pairwise alignments between the reads with LAST. LAST also calculates the probability that each pair of bases is wrongly aligned (which is high when there are alternative alignments with near-equal likelihood). Some results using a prototype of lamassemble were published previously 6 . Sanger-sequence confirmation of breakpoints PCR primers for the breakpoints estimated from the rearrangements were designed using Primer3Plus software (Supplementary Table 6). PCR amplification was done using Ex Taq (Takara) and LA Taq, and the amplified products were Sanger sequenced using the BigDye Terminator v3.1 Cycle Sequencing kit on a 3130xl Genetic Analyzer (Applied Biosystems, CA, USA). Long DNA reads are aligned to a reference genome using LAST (blue box), then dnarrange finds rearranged reads and groups reads that overlap the same rearrangement (pink box). lamassemble merges/assembles each group of reads into a consensus sequence (yellow box). When there is a "complex" rearrangement (more than one group of rearranged reads is needed to understand the full structure of the rearrangement), dnarrange-link was used to infer the order and orientation of the groups, and thereby reconstruct derivative chromosomes (green box). b. Derivative chr R was reconstructed by linking A2 to E1 (left). Derivative chr S was reconstructed by linking B1 to C1, and D1 to C2 (right). 
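The link inference itself is a maximum bipartite matching; below is a minimal sketch using Kuhn's augmenting-path algorithm, with the linkable end pairs taken from the Fig 6a example described above (A2-E1, B1-{C1, C2}, D1-C2). The function name and data layout are ours, not dnarrange-link's.

```python
def max_bipartite_matching(linkable):
    """Kuhn's augmenting-path algorithm.
    linkable: dict mapping each 'left' end to the list of 'right' ends it may join."""
    match_right = {}          # right end -> matched left end

    def try_augment(left, visited):
        for right in linkable.get(left, []):
            if right in visited:
                continue
            visited.add(right)
            # right is free, or its current partner can be re-matched elsewhere
            if right not in match_right or try_augment(match_right[right], visited):
                match_right[right] = left
                return True
        return False

    for left in linkable:
        try_augment(left, set())
    return {left: right for right, left in match_right.items()}

# Linkable ends as described for Fig 6a
links = max_bipartite_matching({"A2": ["E1"], "B1": ["C1", "C2"], "D1": ["C2"]})
print(links)   # {'A2': 'E1', 'B1': 'C1', 'D1': 'C2'} -- three links, the maximum
```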
B1 can also be linked to C2, but in that case it is impossible to link C1 to anything, and D1 to anything, thus this possibility was suppressed. Web resources
5,348
2019-10-05T00:00:00.000
[ "Biology", "Computer Science" ]
First Integrals, Integrating Factors, and Invariant Solutions of the Path Equation Based on Noether and λ-Symmetries Definition 6. If X is a partial Noether operator corresponding to a partial Lagrangian L, then the gauge function B(x, y) exists. Hence, the first integral is given by I = ξL + (η − y'ξ) L_{y'} − B. (15) 3. Noether Symmetries of the Path Equation The differential equation describing the path of the minimum drag work is given in the form y'' − f'(y)/f(y) − y'^2 f'(y)/f(y) = 0, (16) where y = y(x) is the altitude function. In this section we use the partial Lagrangian approach to analyze Noether symmetries. Firstly, we can determine the Euler-Lagrange operator (3) for the path equation (16) as δ/δy = ∂/∂y − D_x ∂/∂y_x + D_x^2 ∂/∂y_xx, (17) and the partial Lagrangian L for the path equation (16) is L = (1/2) y'^2 + ln f(y). (18) Then the application of (18) to (14), separation with respect to powers of y' and rearranging yield the set of determining equations, the over-determined system of partial differential equations (1/2) ξ_y + ξ f'(y)/f(y) = 0, (19) η_y − (1/2) ξ_x + η f'(y)/f(y) = 0, (20) η_x + ξ_y ln f(y) − B_y = 0, (21) ξ_x ln f(y) − B_x + η f'(y)/f(y) = 0. (22) To find the infinitesimals ξ and η, (19)–(22) should be solved together. First, (19) is integrated as Introduction In a fluid medium, drag forces are the major sources of energy loss for moving objects. Fuel consumption may be reduced by minimizing the drag work. This can be achieved by the selection of an optimum path. The drag force depends on the density of the fluid, the drag coefficient, the cross-sectional area, and the velocity. These parameters are combinations of altitude-dependent quantities which can be expressed as a single arbitrary function. If all parameters are assumed to be constant, then the minimum drag work path would be a linear path. But these parameters change during the motion, and all of them can be defined as functions of altitude [1,2]. The main purpose of this work is to study Noether and λ-symmetry classifications of the path equation for different forms of the arbitrary function of the governing equation [3][4][5][6][7]. Based on Noether's theorem, if the Noether symmetries of an ordinary differential equation are known, then the conservation laws of this equation can be obtained directly by using the Euler-Lagrange equations [8]. However, in order to apply this theorem, a differential equation should have a standard Lagrangian. Thus, an important problem in such studies is to determine the standard Lagrangian of the differential equation. In fact, for many problems in the literature, it may not be possible to determine the Lagrangian function of the equation. To overcome this problem, the partial Lagrangian method can be used alternatively and the Noether symmetries and first integrals can be obtained in spite of the fact that the differential equation does not have a standard Lagrangian [9]. Here, we examine the partial Lagrangian of the path equation and classify the Noether symmetries and first integrals corresponding to special forms of the arbitrary function in the governing equation. 
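As a quick symbolic check of the statements above, the following sketch (our own, using SymPy; the variable names are not from the paper) verifies that the Euler-Lagrange expression of the partial Lagrangian L = y'^2/2 + ln f(y) is f'(y)/f(y) − y'', and that on solutions of the path equation (16) it reduces to the non-vanishing term −y'^2 f'(y)/f(y), which is precisely why L is only a partial, not a standard, Lagrangian.

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')(x)
f = sp.Function('f')

# Partial Lagrangian of the path equation: L = y'^2 / 2 + ln f(y)
L = sp.Rational(1, 2) * y.diff(x)**2 + sp.log(f(y))

# Euler-Lagrange expression dL/dy - D_x(dL/dy') for this L
EL = sp.euler_equations(L, y, x)[0].lhs

# Path equation (16) solved for y'': y'' = (f'(y)/f(y)) * (1 + y'^2)
ypp = f(y).diff(y) / f(y) * (1 + y.diff(x)**2)

# On solutions of (16), the Euler-Lagrange expression does not vanish but
# equals -y'^2 f'(y)/f(y): L is a *partial* Lagrangian.
residual = EL.subs(sp.Derivative(y, (x, 2)), ypp)
print(sp.simplify(residual + y.diff(x)**2 * f(y).diff(y) / f(y)))  # prints 0
```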
The second type of classification that is called symmetries is carried out by using the relation with Lie point symmetries as a direct method.For second-order ordinary differential equation, the method of finding -symmetries has been investigated extensively by Muriel and Romero [10,11].They have demonstrated that integrating factors and the integrals from -symmetries for a second-order ordinary differential equation can be determined algorithmically [12].In their studies, for the sake of simplicity, the -symmetry is assumed to be a linear form as (, ) = 1 (, ) + 2 (, ).However, it is possible to show that the -symmetry cannot be chosen generally in this linear form.Therefore, we propose in this study to use the relation between Lie point symmetries and -symmetries for the classification. For convenience the generalized operator (4) can be rewritten by using characteristic function such as and the Noether operator associated with a generalized operator can be defined Definition 3. Let us consider an th-order ordinary differential equation system then the first integral of this system is a differential function (9) ∈ A, the universal space and the vector space of all differential functions of all finite orders, which is given by the following formula: and this equality is valid for every solution of (9).The first integral is also referred to as the local conservation law. On the other hand the Euler-Lagrange equations can be defined as following form and similarly the form of partial Euler-Lagrange equations is Definition 5. Let ∈ A be a vector that satisfies ̸ = + , where is a constant.Then () represents th prolongation of the generalized operator (7), and partial Noether operator corresponding to a partial Lagrangian is formulated as in which = ( Noether Symmetries of Path Equation The differential equation describing the path of the minimum drag work is given in the form where = () is the altitude function.In this section we use partial Lagrangian approach to analyze Noether symmetries.Firstly, we can determine the Euler-Lagrange operator (3) for the path equation ( 16) such as and the partial Lagrangian for the path equation ( 16) is Then the application of ( 18) to ( 14) and separation with respect to powers of and arranging yield the set of determining equations, the over-system of partial differential equations To find the infinitesimals and , (19)-( 22) should be solved together.First, ( 19) is integrated as and then substituting (23) into (20) and solving for yield Here several cases should be examined separately for different forms of (). 3.1.() = = .For this case the solution of (27) gives to the following infinitesimals: where are constants = 1, . . ., 5. Integrating (21) with respect to gives The associated infinitesimal generators turn out to be Thus, the first integrals by Definition 6 are given as follows: (31) 3.2.() = .For the linear case of (), we obtain where 1 is a constant.The partial Noether operator is and the first integral is (34) 3.3.() = .The solution of determining equations for the form of () = gives the following infinitesimals where are constants = 1, . . ., 5, and the gauge function is The associated five-parameter symmetry generators take the form and the corresponding first integrals are 3.4.() = 1/( + ).For this case, the infinitesimal functions read where are constants = 1, . . 
., 5, and the gauge function is The corresponding Noether symmetry generators are Abstract and Applied Analysis 5 And the conservation laws are 3.5.() = .For this choice of (), we find the infinitesimals where 1 is constant, and we have the first integral For convenience all Noether symmetries and first integrals are presented in Table 1. Invariant Solutions. Invariant solutions that satisfy the original path equation can be obtained by first integrals according to the relation = 0. We here determine some special cases and investigate the corresponding invariant solutions. Case 1.(a) For the case of () = , the conservation law is by using the relation = 0, then the invariant solution of path equation ( 16) is where 1 , are constants. (b) For the same () function, the conservation law is and the invariant solution similar to previous one is where 1 , are constants. Case 2. Let us consider () = 1/( + ), then the first integral yields and the solution of this equation gives where 1 , are constants, in which it is obvious that the invariant solution (50) satisfies the original path equation. 𝜆-Symmetries of Path Equation The relationship between -symmetries, integration factors and first integrals of second-order ordinary differential equation is very important from the mathematical point of view [10][11][12].Let us consider first the second-order differential equation of the form and let vector field of (51) be in the form of In terms of , a first integral of ( 51) is any function in the form of (, , ) providing equality of () = 0.An integrating factor of (51) is any function satisfying the following equation: where is total derivative operator in the form of Thus -symmetries of second-order differential equation ( 51) can be obtained directly by using Lie symmetries of this same equation.Secondly, let be a Lie point symmetry of (51), and then the characteristic of is and for the path equation ( 16) the total derivative operator can be written as (57) thus the vector field is called -symmetry of ( 16) if the following equality is satisfied. The following four steps can be defined for finding symmetries and first integrals. (1) Find a first integral (, , ) of [, (1)] , that is, a particular solution of the equation where [, (1)] is the first-order -prolongation of the vector field .(2) The solution of (59) will be in terms of first order derivative of .To write equation of (51) in terms of the reduced equation of , we can obtain the firstorder derivative the solution of (59) and we can write (51) equation in terms of .(3) Let be an arbitrary constant of the solution of the reduced equation written in terms of .Therefore, is an integrating factor of (51).(4) The solution of (, , ) is the first integral of [, (1)] . 𝜆-Symmetries Using Lie Symmetries of Path Equation. Let us consider an th-order ODE as follows: Thus the invariance criterion of (61) is pr ( () − (, , , , . . ., (−1) ) () = = 0. 
(62) The expansion of relation (62) gives the determining equation related to path equation, which is the system of partial differential equations.In this system there are three unknowns, namely, , , and , which are difficult to solve because they are highly nonlinear.In the literature [10][11][12], for the convenience the function are chosen generally in the form In addition, for solving the remaining determining equations, the infinitesimal functions and are chosen specifically as = 0 and = 1 [10][11][12].Therefore, the number of unknowns in the equation is reduced to find 1 (, ) and 2 (, ) functions, and finally, -symmetries can be determined explicitly.However, for the path equation ( 16), it is possible to check that -symmetries of this equation cannot be determined by taking the form of in (63).Thus, we study -symmetries of path equation by using the relation with the Lie point symmetries of the same equation [2,19].Here Lie point symmetries of path equation are examined by considering four different cases of function (). Arbitrary 𝑓(𝑦). For arbitrary () the one-parameter Lie group of transformations is and the generator is Applying this generator (56), we obtain the characteristic Using (58), the -symmetry is obtained in the following form: If we substitute -symmetry (67) in (59), then we have It is clear that a solution of (68) is To write (16) in terms of {, , }, we can express the following equality using (69): Taking derivative of (70) with respect to gives and by using and , ( 16) becomes It is easy to see that the general solution of this equation is According to (60),we find the integration factor to be of the form Then the conserved form satisfies the following equality: which gives the original path equation.Thus the reduced equation is where is a constant, and the solution of (76) is determined for two different cases of arbitrary () function. (i) For () = , where 1 is a constant, is the solution of original path equation ( 16).(ii) For () = , is the other solution of the same equation.(79) Thus, we can calculate -symmetry of path equation using, for example, 1 Lie symmetry generator.For this generator 1 the infinitesimals are Therefore, the characteristic is written as By using (58) we obtain the -symmetry A solution of (59) for this case is and we can write = /, then to obtain path equation in terms of {, , } one can have By using these equalities (84) we find the following equation: in which the general solution is To find the integration factor one can write above equation in terms of as and then the integration factor becomes If we substitute = / in (87), then the reduced equation in terms of is and the solution of (89) is where and 3 are constants.It is clear that this solution satisfies the original path equation ( 16).Also, one can write which is the first integral of equation that provides the path equation ( 16). 𝑓(𝑦) = 1/(𝑚𝑦 + 𝑛). 
For this case the eight-parameter symmetry generators are obtained as follows: Using these infinitesimals we find the characteristic and the -symmetry is Abstract and Applied Analysis 9 By using (95) the equation (59) becomes A solution of (96) is This equation can be written as By differentiation of (98) we have and if we substitute (98) and (99) into the path equation, we obtain and the solution of (100) is To define , one can write Therefore, by using the relation (60) we find the integration factor If we rewrite (102) in terms of and then we substitute this expression into integration factor, the reduced equation of path equation becomes where is a constant.By the solution of (104), we obtain the solution that satisfies the original path equation (16) as where 3 is a constant, and the corresponding conservation law is If we apply the operator (52) to this characteristic (109), we obtain (O ¸) = 0, and the -symmetry is equal to zero.For 2 symmetry generator we find also = 0 similar to previous one.Hence, we can use another symmetry generator, for example, 7 to obtain -symmetry.For this case, are infinitesimals, and the corresponding characteristic is We find the -symmetry from (58) as in the following form: By applying (112) to (59) we obtain the solution And we write this expression (113) in terms of {, , } as By differentiating (114) with respect to one can write and by substituting and to the original path equation we obtain where the solution of ( 116) is To define this equality in terms of variable then is defined as follows: so we obtain the integration factor using (60) Finally one can write the conservation law which gives the original path equation.And thus we can express the first integral, which is reduced form of the path equation where is a constant.Integrating (121) we obtain the solution that satisfies the original equation where 1 is a constant. By considering (58), the -symmetry becomes The solution of (59) is To write ( 16) in terms of {, , }, we can express the following equality: By taking derivative (128) with respect to , then we have ) ⋅ (129) If we substitute and into the path equation, then one can find and a solution of this equation ( 130) is By using (60) we find the integration factor of the form It is easy to see that the conserved form satisfies the following equality: and this equality gives the original path equation.Thus the reduced form of path equation is where is a constant.And all results are summarized in Table 2. 𝜆-Symmetries and Jacobi Last Multiplier Approach Definition of ∈ ∞ ( (1) )-Symmetry.Let V be a vector field on which is open subset, and has the property of ⊂ × .For ∈ N, () ⊂ × () denotes the corresponding jet space, and their elements are (, () ) = (, , 1 , . . ., ), where, for = 1, . . ., , denotes the derivative of order of with respect to .In addition let = (, ) + (, ) be a vector field defined on , and let ∈ ∞ ( (1) ) be an arbitrary function.Then the -prolongation of is pr = (, ) + (, ) + (1) (, , , , . . ., (−1) ) + (2) (, , , , . . 
., (−1) ) , with where is total derivative operator with respect to such that In this section we analyze -symmetries of path equation by using Jacobi last multiplier as another approach.First (61) can be written by using system of first-order equations, which is equivalent to the expression and by solving the following differential equation, the Jacobi last multiplier of (138) is found: where, namely, is The nonlocal approach [13,20] to -symmetries is analyzed to seek -symmetries such that With this idea always can be considered to be of the form such as = log (1/).But this relation cannot be considered if the divergence of (138) Div ≡ ∑ =1 ( / ) is equal to zero.So is chosen like this form because any Jacobi last multiplier is a first integral of (138).In this section we again consider different choices of () for -symmetry classification. 𝑓(𝑦) = 𝑘 = 𝐶𝑜𝑛𝑠𝑡𝑎𝑛𝑡. For this case the divergence of the path equation yields Substituting into (135) then from the solution of the determining equations ( 62) we obtain eight-parameter infinitesimals and the generators are which corresponds to the classical Lie point symmetries since is equal to zero. 5.2.() = .Another special form we consider here is () = .For this case we obtain the divergence of ( 16) in the form and by substituting into the prolongation formula, the infinitesimals can be found as follows: and the corresponding generator is which is a new -symmetry. Case 1 ( = 1/3).The divergence of path equation for this value of is the -infinitesimals can be written as and the -generator is Case 2 ( = 1/2).For another specific value of the divergence is the -infinitesimals are found as follows: and the -generator is In summary all new -symmetries are presented in Table 3. Invariant Solutions. In this section we present some invariant solutions based on Jacobi multiplier approach. Case 1.For the case () = we can investigate 1 to find the invariant solution of path equation.The first prolongation of 1 is Pr and the Lagrange equations are gives the first order invariants that replaced into path equation generate the first-order equation the solution of this equation yields and the first integral is this equality gives the original path equation (16).The reduced form of path equation is in which the solution of (169) is where 1 and are constants.It is clear that (170) is similar to the solutions (48) and ( 122).If we apply similar process for the 2 symmetry generator, we obtain first-order invariants for this case as and the first integral is another reduced form of path equation ( 16) is The solution of ( 173) is given by The solution of (178) is where 1 and c are constants, and it is clear that (182) is similar to the solution (105). 
Conclusion The aim of this study is to classify Noether and -symmetries of path equation describing the minimum drag work.The symmetry classification of the equation is analyzed with respect to different choices of altitude-dependent arbitrary function () of the governing equation, which represents a combination of the density, the drag coefficient, the cross sectional area, and the velocity.It is a fact that an ordinary differential equation should have a Lagrangian function to obtain Noether symmetries.In this study we consider the partial Lagrangian approach for obtaining Noether symmetries and constructing a classification in the problem.Thus, new first integrals (conserved forms) are obtained directly by using each Noether symmetry given by symmetry of the equation.With this point of view we find and classify the new forms of first integrals, and then the invariant solutions of path equation are constructed for specific forms of (). In the literature, as a different and a new concept, symmetries of the second order ordinary differential equations are analyzed by assuming -function in the linear form.However, in our study, we prove that it is not possible to obtain -symmetries of the drag equation by selecting function in a linear form.So we study another approach to obtain -symmetries based on using Lie point symmetries of the path equation.Thus, we have derived -symmetries, integrating factors, first integrals, and the reduced form of the original path equation.Based on using these new symmetries, we present some new different invariant solutions by calculating new reduced forms and first integrals. In our study, additionally, the Jacobi last multiplier concept is presented as a new and an alternative approach to construct -symmetries of the path equation algorithmically.In this method, first, -function is determined by taking divergence of the governing equation and then the infinitesimals functions and are determined from the determining equations, then we calculate new -symmetries.In this study we generate first-order equations by using these new symmetries, which provide invariant solutions of path equation.After all calculations we present that all methods discussed in this study have their own important properties to find first integrals and invariant solutions of ordinary differential equations, and the advantages of these approaches are given for specific cases.Furthermore, all symmetry classifications are presented in tables. Table 1 : Noether symmetry classification table of path equation. 4.1.5.() = .If () is assumed in the polynomial form and then Lie symmetry generators are Table 2 : Table of -symmetry classification with Lie symmetry of path equation.
5,014.8
2013-06-13T00:00:00.000
[ "Physics", "Mathematics" ]
Productive and Receptive Collocational Knowledge of Iranian EFL Learners at Different Proficiency Levels In the present study, an attempt was made to probe into the probable difference between Iranian intermediate and advanced EFL learners’ receptive and productive collocational knowledge. To this end, 60 EFL learners studying at Islamic Azad University, Isfahan Branch, including 30 advanced and 30 intermediate learners, were chosen through the Oxford Placement Test (OPT). The participants at each level of proficiency received two tests of collocations, namely receptive collocation test and productive test of collocations. Paired-samples t test showed no statistically significant difference between productive and receptive knowledge of collocations of the advanced EFL learners. However, the mean comparison between the receptive and productive collocation test scores of intermediate EFL learners revealed a significant difference. Pedagogical implications emanating from the obtained results are elaborated in the study. INTRODUCTION The origin of the term collocation was the Latin verb collocare, meaning to arrange.However, Firth (1957) who is known as the father of collocations first introduced this term to refer to "the company that words keep" (p.183).Firth believed that it is essential to know what words come with words.Benson (1986) states that collocations are a subbranch of formulaic language, and many language researchers have paid attention to the acquisition of collocations. According to Sadat Kiaee, Heravi Moghaddam, and Moheb Hosseini (2013), collocations are "words that 'fit together' intuitively with great expectation in the syntagmatic and paradigmatic levels.The syntagmatic relation of lexical words, which is horizontal, refers to the collocability of words" (p.2).In terms of paradigmatic, connections refers to sets of words in the same class.For instance, the word 'dog' is in syntagmatic relation with 'hairy' and in paradigmatic relation with 'cat'.Collocations are predictable patterns and phrases or groups of words that typically co-occur.They include what have traditionally been considered lexical items, as well as structural patterns which may seem closer to grammar and combinations of words that simply 'go together.' In addition, McCarthy (1990) believes that "in vocabulary teaching there is a high importance of collocation."He also suggests that "the relationship of collocation is fundamental in the study of vocabulary, and collocation is an important organizing principle in the vocabulary of any language" (p.12).Furthermore, learning collocations is essential for language learners because collocations can be used both in oral and written language (Lennon, 1996).Considering the importance of collocations, lexicographers take into consideration that collocations should be completely explained to L2 learners because a little knowledge of these vocabulary items can be dangerous to speech and writing production. 
There are three approaches to the phenomenon of collocation. The first, the lexical approach, holds that lexis is a separate issue, distinct from grammar. The second is the semantic approach, which, similar to the lexical view, overlooks grammar but emphasizes the semantic aspects of the words that control their meaning. The last is the structural approach, which emphasizes the importance of both lexis and grammar in the study of collocations. So far, only a small number of collocations have been studied by researchers working from the first two perspectives, and therefore only limited results have been obtained; by contrast, more patterns of collocations have been studied within the structural approach. The importance of collocations has been the focus of a number of studies in the field of language learning. According to Brown (1974), learning collocations improves learners' language skills and sub-skills.

Accordingly, the present paper aimed to compare Iranian EFL learners' receptive collocational knowledge and productive collocational knowledge at advanced and intermediate levels. In fact, the research questions of the present study were: (1) Is there a significant difference between the Persian-speaking advanced EFL learners' productive and receptive knowledge of collocations? and (2) Is there a significant difference between the Persian-speaking intermediate EFL learners' productive and receptive knowledge of collocations?

LITERATURE REVIEW As mentioned, three major approaches have studied collocations. The lexical approach, which is the oldest one and was developed by Firth (1951), holds that collocation is an "abstraction at the syntagmatic level" and is not directly linked to the "conceptual or idea approach to the meaning of words" (p. 196). This framework was adjusted by Halliday (1966) and Sinclair (1991). The second view toward collocation is the semantic approach, which focuses on the form of collocations. Other issues were also studied under this approach, such as "why words collocate with certain other words, and how the meaning of a word is reduced to its ultimate contrastive elements resulting in the atomization of meaning" (Bahns, 1993, p. 175).

The third approach to collocations is the structural approach. According to this approach, structural patterns govern collocations. There are some contradictions between the grammatical outlook and the two aforementioned approaches; the difference is that this approach mainly focuses on grammatical and lexical structure (Gitsaki, 1999). Lexis cannot be separated from grammar, because the two are distinct but related aspects of one phenomenon (Bahns, 1993). Kjellmer (1990) stated that articles, prepositions, and the base forms of verbs are collocational; in contrast, adjectives, singular proper nouns, and adverbs are not collocational in nature. Gitsaki (1996), through the analysis of collocations, identified 37 categories of collocation overall: 8 lexical and 29 grammatical.
Some empirical studies have so far been conducted in the field of collocations. Dechert and Lennon (1989) came to the conclusion that advanced English learners who had the experience of at least ten years of living with native speakers could not speak and write like native speakers. Furthermore, their production caused misunderstanding and interrupted comprehension. Dechert and Lennon maintain that the errors made by the subjects are not mainly grammatical, but lexical ones.

In another study, Bahns and Eldaw (1993) studied advanced EFL learners' productive knowledge of English verb+noun collocations. The participants were classified into two groups. One group took a cloze test containing 10 sentences, each of which had a verb+noun collocation in which the verb was missing. The other group received a translation test in which they were supposed to translate 15 sentences, each containing a verb+noun collocation. The results showed that around 50% of the learners' responses were acceptable English collocations. Finally, Bahns and Eldaw concluded that "collocation is problematic, even for advanced students" (1993, p. 102).

Similarly, Gitsaki (1999) intended to measure post-beginner, intermediate, and post-intermediate ESL learners' knowledge of collocations. Three tasks were employed, including essay writing, translation, and fill-in-the-blank. The results showed a positive correlation between proficiency and knowledge of collocation. It was also found that the frequency of collocations leads to better learning of collocations.

In another study, Nesselhauf (2005) investigated the use of verb/noun collocations among advanced German learners of English in free writing. It was found that the production of collocations is affected by the learners' L1. It was also shown that the most frequent error was the wrong choice of the verb.

In the Iranian EFL context, as far as learners' general proficiency is concerned, Koosha and Jafarpour (2006) studied the collocational proficiency of prepositions across various levels of EFL proficiency. In addition, they studied the influence of EFL learners' L1 on their collocational proficiency of prepositions. Two hundred EFL learners were chosen through an English language proficiency test, and two completion tests of collocations were utilized. The results showed that EFL learners' performance on the test of collocation had a positive correlation with their general language proficiency. Finally, it was shown that Iranian EFL learners transferred their L1 collocational patterns to their L2 production.

In the same vein as the above studies, Bagherzadeh, Hosseini, and Akbarian (2007) studied the relationship between collocational competence and general language proficiency among thirty Iranian EFL learners. The results showed that there was a relationship between the collocation test and the TOEFL and between the vocabulary section of the TOEFL and the collocation test.

In another study, Keshavarz and Salimi (2007) employed open-ended and multiple-choice cloze tests and the TOEFL to measure the collocational competence and language proficiency of one hundred Iranian students. A test of collocation was used. The results showed a significant relationship between the results of the cloze tests and collocational competence.
Similarly, Shokouhi and Mirsalari (2010) studied the relationship between the collocational proficiency and general linguistic proficiency of EFL learners. A 90-item multiple-choice test was administered to thirty-five subjects. The results revealed no significant correlation between the general linguistic proficiency and collocational proficiency of EFL learners. Lexical collocations were found to be easier than grammatical collocations.

Along with studies on the impact of language proficiency on collocation knowledge, collocations have been studied in other fields, too. For instance, contrary to the above research, Bazzaz and Samad (2011) investigated the effects of collocational proficiency on the use of verb-noun collocations in writing. The participants were twenty-seven Iranian PhD students at a Malaysian university. The number of collocations that the students used in their essays was calculated. The results showed a positive relationship between proficiency in collocations and the use of verb-noun collocations in the stories.

In addition, Bahardoust (2012) studied lexical collocations in L1 and L2. Midterm and final tests were used as sources of data. The results showed that the rates of verb-noun and adjective-noun collocations were higher than those of other collocation types, while the rate of noun-verb collocations was lower. It was shown that L1 collocations had a higher rate and frequency, and that the L1 had both positive and negative effects on collocations.

The above-mentioned studies in Iran investigated the collocational proficiency of EFL learners from different points of view; however, some contradictions are observed, because some studies indicated that collocational proficiency increases along with improvement in language proficiency, while other studies, such as Shokouhi and Mirsalari (2010), showed that language proficiency has no effect on collocational proficiency. Some other studies worked on language transfer from the L1 and came to the conclusion that negative transfer can cause problems (e.g., Koosha & Jafarpour, 2006).

In addition to the aforesaid studies, Nesselhauf (2005) investigated the relationship between language proficiency and collocational proficiency by studying the use of verb-noun collocations by advanced German learners of English in free writing, and showed that there was a correlation between language proficiency and collocational proficiency.

Similarly, Shehata (2008) studied the use of collocations by advanced Arabic-speaking learners. Two production tests and one reception test were used. The results proved the strong influence of the learners' L1 and the language learning environment on learning collocations. In addition, the results revealed that the students' productive proficiency of collocations was lower than their receptive proficiency of collocations.

Thus, the review of the literature shows that not much research has so far been conducted on the relationship between language proficiency and receptive versus productive knowledge of collocations across different levels of proficiency; therefore, to fill the existing gap, the present research set out to investigate the relationship between language proficiency and receptive versus productive knowledge of collocations among Iranian intermediate and advanced EFL learners.
Design of the Study The present study had an ex post facto design (alternatively called a causal-comparative design), since quantitative data were collected and analyzed from two groups of learners while no treatment or intervention whatsoever was carried out on them. In fact, the learners at both levels of proficiency took a receptive test of collocation as well as a productive collocation test, and the difference between the two sets of test scores was investigated for each proficiency level.

Participants The participants in this study were 60 learners majoring in English at the English Department of Islamic Azad University, Isfahan Branch. They were 30 available MA students who were considered advanced EFL learners and 30 BA students who were coded as intermediate EFL learners. The participants' mean age was 24.56 years. The participants were both male and female students, and their L1 background was Persian (Farsi). In order to ensure the homogeneity of the participants in terms of their general proficiency at each level, an Oxford Placement Test (OPT) was employed.

Instruments In order to establish the general language proficiency of the participants, the first instrument, i.e., the Oxford Placement Test (2004), was used. In order to measure the participants' receptive knowledge of collocation, a receptive test of English lexical collocations was employed. The receptive test was adapted from Haqiqi (2007) and comprised 50 items. The items in this test included different types of lexical collocations, such as noun + noun and verb + noun. The reliability index of the test was calculated using Cronbach's alpha (r = .92). The last instrument was a productive English collocation test (Haqiqi, 2007). The productive collocation test consisted of fill-in-the-blank items with the initial letters of the collocations as clues to the right answer. The test was highly reliable, producing a reliability estimate of .89 through Cronbach's alpha.

Procedures The study intended to assess intermediate and advanced Iranian EFL learners' productive and receptive knowledge of English collocations. At the outset of the study, an OPT was administered to 100 male and female EFL undergraduate and postgraduate students. Then, from among them, 30 advanced and 30 intermediate EFL learners were selected according to the scoring rubrics of the test. The participants at each level (i.e., intermediate and advanced) received two tests of collocations, namely the receptive and productive tests of collocations. Then, the obtained scores were analyzed through SPSS.

RESULTS The results obtained for each of the research questions are presented in what follows.

Research Question One The first research question of the study was: Is there a significant difference between the Persian-speaking advanced EFL learners' productive and receptive knowledge of collocations? In order to answer this research question, two tests of collocations, namely the productive and receptive tests, were given to the advanced learners and the results were compared. Table 1 presents the descriptive statistics. As shown in Table 1, the difference between the mean scores of the receptive and productive collocation tests is not large (the mean score of the receptive test being 36.03 and that of the productive test being 34.21). Figure 1 compares the mean scores of the two tests. The mean difference between the receptive and productive tests does not seem to be significant; however, in order to be more objective, a paired-samples t test was run. Table 2 presents the results.
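For readers who wish to reproduce this type of comparison outside SPSS, the following minimal Python sketch illustrates a paired-samples t test on two sets of collocation scores; the score arrays, group size, and seed are hypothetical placeholders rather than the study's data.

```python
# Minimal sketch of a paired-samples t test, assuming hypothetical scores
# for one group of 30 learners on a receptive and a productive collocation test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
receptive = rng.normal(loc=36, scale=5, size=30)   # hypothetical receptive scores
productive = rng.normal(loc=34, scale=5, size=30)  # hypothetical productive scores

t_stat, p_value = stats.ttest_rel(receptive, productive)
print(f"t({len(receptive) - 1}) = {t_stat:.2f}, p = {p_value:.3f}")
if p_value < 0.05:
    print("Reject the null hypothesis: the mean difference is significant.")
else:
    print("Fail to reject the null hypothesis: no significant mean difference.")
```

Run separately for the advanced and the intermediate group, this is the comparison reported in Tables 2 and 4.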
The results of the paired-samples t test indicated that there was no significant difference between the advanced EFL learners' productive and receptive knowledge of collocations, t(29) = .46, p > .05. Thus, the first null hypothesis could not be rejected.

Research Question Two The second research question of the study asked: Is there a significant difference between the Persian-speaking intermediate EFL learners' productive and receptive knowledge of collocations? In order to compare the intermediate EFL learners' mean scores on the productive and receptive lexical English collocation tests, the mean scores were compared. The results are presented in Tables 3 and 4. As shown in Table 3 and Figure 2, the mean score of the receptive knowledge (M = 23.95) of the intermediate learners was higher than that of their productive knowledge (M = 21.23). Figure 2 shows the results in pictorial form. The mean difference between the two tests is apparent; however, in order to be more objective, a paired-samples t test was run, the results of which are presented in Table 4. According to the results presented in Table 4, the mean difference between the receptive and productive collocation tests of the intermediate EFL learners was significant, t(29) = 2.92, p < .05; this led to the rejection of the second null hypothesis of the study.

DISCUSSION In this section, an attempt is made to present some reasons behind the findings and to compare the findings with those of other studies in the field.

Research Question One One of the objectives of the present research was to find out whether there was a significant difference between the productive and receptive collocational knowledge of Iranian intermediate EFL learners. The results of the study revealed that the intermediate learners' scores on the receptive collocation test were significantly higher than their scores on the productive collocation test.

The higher scores of the intermediate EFL learners on the receptive test can be attributed to the fact that, in receptive tests, intermediate EFL learners can take advantage of their passive knowledge, which is easier to access than their active knowledge. For advanced EFL learners, by contrast, access to passive knowledge does not seem to be out of reach, and their passive knowledge is as great as their active knowledge. Furthermore, in receptive tests, the test takers have the opportunity to guess the meaning of the collocations from context, and various contextual clues may be at work. When the production of collocations is required in productive tests, however, no contextual clues are at work, and the test takers have to rely on their own knowledge.

In the Iranian EFL context, the findings of the present research are in line with the study by Koosha and Jafarpour (2006), who intended to discover whether the collocational proficiency of prepositions could be examined at different levels of EFL learners' proficiency. It was revealed that learners' performance on the test of preposition collocations was positively related to their level of language proficiency.
On the contrary, the findings of the present research are in contrast with the study by Bazzaz and Samad (2011), who indicated that there was a large positive relationship between the general language proficiency of intermediate EFL learners and their productive collocation knowledge in writing tasks. In the present study, the receptive and productive collocation knowledge of intermediate EFL learners was investigated, while Bazzaz and Samad's (2011) study examined the relationship between EFL learners' general proficiency level and their productive knowledge of collocation.

Research Question Two The second research question intended to compare the productive and receptive collocational knowledge of Iranian advanced EFL learners. The results revealed no significant difference between the two. This finding may be attributed to the fact that advanced EFL learners can easily make a link between their productive and receptive knowledge of vocabulary, which improves their scores on productive tests of collocations.

The results of the present research are also in contradiction to the study by Al-Amor (2006), who evaluated the productive and receptive collocational knowledge of Saudi EFL learners. In his research, it was found that there was a significant relationship between the EFL learners' receptive and productive knowledge of collocations. In addition, the participants in his research obtained better results on the productive test than on the receptive test. The reason for these findings, according to Al-Amor, was the fact that the target collocations in his receptive test were of lower frequency than those in the productive test. As mentioned earlier, the present research revealed that receptive knowledge of collocations is stronger than productive knowledge of collocations, and the frequency of the collocations in the tests employed here was similar. The findings of the present study are, however, in the same vein as the study by Shehata (2008), who found that advanced EFL learners performed better on the receptive test.
CONCLUSION The contradictory results obtained in previous studies on collocations provided the motivation to conduct the present study. The results of the study indicated only a slight difference between the receptive and productive collocational knowledge of advanced EFL learners, while the intermediate learners' receptive collocation test scores were significantly higher than their productive collocation test scores. According to the findings of the present study, some implications for teachers and L2 learners can be drawn. The results can help language teachers attribute the problems that learners have in the development of their language proficiency partly to a lack of collocational knowledge; in fact, teaching collocations to EFL learners should be given more attention. Inspired by the findings of the present research, language teachers and learners should take into account that knowing a word or a collocational combination means not just identifying the meaning of the word or collocation in tests, but being able to use the collocation in language production. In addition, those in charge of language teaching in EFL contexts should do their best to bridge the existing gap between EFL learners' receptive and productive knowledge. In order to achieve this goal, exercises such as developing paragraphs can be suggested. Teaching collocations in language classes does not seem to be sufficient; therefore, language teachers should ask language learners to use the learned collocations productively.

Figure 1. Receptive and productive tests' mean comparisons of advanced learners. Figure 2. Receptive and productive tests' mean comparisons of intermediate learners. Table 1. Descriptive statistics of receptive and productive collocational knowledge of advanced learners. Table 2. Results of the paired-samples t test between advanced learners' productive and receptive tests. Table 3. Descriptive statistics of receptive and productive collocational knowledge of intermediate learners. Table 4. Results of the paired-samples t test between intermediate learners' productive and receptive tests.
4,726.4
2017-10-10T00:00:00.000
[ "Linguistics", "Education" ]
Optimal trade execution under small market impact and portfolio liquidation with semimartingale strategies We consider an optimal liquidation problem with instantaneous price impact and stochastic resilience for small instantaneous impact factors. Within our modelling framework, the optimal portfolio process converges to the solution of an optimal liquidation problem with general semimartingale controls when the instantaneous impact factor converges to zero. Our results provide a unified framework within which to embed the two most commonly used modelling frameworks in the liquidation literature and provide a microscopic foundation for the use of semimartingale liquidation strategies and the use of portfolio processes of unbounded variation. Our convergence results are based on novel convergence results for BSDEs with singular terminal conditions and novel representation results of BSDEs in terms of uniformly continuous functions of forward processes.

Introduction The impact of limited liquidity on optimal trade execution has been extensively analyzed in the mathematical finance and stochastic control literature in recent years. The majority of the optimal portfolio liquidation literature allows for one of two possible forms of price impact. The first approach, pioneered by Bertsimas and Lo [6] and Almgren and Chriss [3], divides the price impact into a purely temporary effect, which depends only on the present trading rate and does not influence future prices, and a permanent effect, which influences the price depending only on the total volume that has been traded in the past. The temporary impact is typically assumed to be linear in the trading rate, leading to a quadratic term in the cost functional. The original modelling framework has been extended in various directions, including general stochastic settings with and without model uncertainty and multi-player and mean-field-type models, by many authors including Ankirchner et al. [4], Cartea et al. [9], Fu et al. [14], Gatheral and Schied [17], Graewe et al. [19], Horst et al. [23], Kruse and Popier [25] and Neuman and Voß [30].

A second approach, initiated by Obizhaeva and Wang [31], assumes that price impact is not permanent but transient, with the impact of past trades on current prices decaying over time. When impact is transient, one often allows for both absolutely continuous and singular trading strategies. When singular controls are admissible, optimal liquidation strategies usually comprise large block trades at the initial and terminal time. The work of Obizhaeva and Wang has been extended by Alfonsi et al. [2], Chen et al. [11], Fruth et al. [13], Gatheral [16], Guéant [20], Horst and Naujokat [21], Lokka and Xu [28] and Predoiu et al. [32], among many others.
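To make the two impact mechanisms concrete, the following minimal Python sketch (all parameter values are hypothetical, chosen purely for illustration and not calibrated to the model of this paper) compares the execution cost of a constant-rate liquidation under a purely temporary, linear-in-rate impact with the cost under a purely transient impact that decays exponentially at a resilience rate ρ.

```python
# Schematic comparison of temporary vs. transient price-impact costs for a
# constant-rate (TWAP-style) liquidation of x0 shares over [0, T].
import numpy as np

x0, T, n = 1.0, 1.0, 1000          # initial position, horizon, time steps
eta, gamma, rho = 0.05, 1.0, 5.0   # temporary impact, transient impact, resilience
dt = T / n
xi = -x0 / T                        # constant selling rate, so the position hits 0 at T

# Temporary (Almgren-Chriss-type) cost: quadratic in the trading rate.
cost_temporary = eta * xi**2 * T

# Transient (Obizhaeva-Wang-type) impact Y with exponential resilience:
# the displacement decays between trades and each trade adds gamma * dX to it.
Y, cost_transient = 0.0, 0.0
for _ in range(n):
    dX = xi * dt
    Y *= np.exp(-rho * dt)                          # old impact decays
    cost_transient += (Y + 0.5 * gamma * dX) * dX   # pay the average displacement
    Y += gamma * dX                                  # trade adds new transient impact

print(f"temporary-impact cost: {cost_temporary:.4f}")
print(f"transient-impact cost: {cost_transient:.4f}")
```

In the model considered below, both effects are present simultaneously (together with stochastic resilience and risk aversion), and the question studied is how the optimizer behaves as the instantaneous impact factor becomes small.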
Single- and multi-asset liquidation problems with instantaneous and transient market impact and stochastic resilience, where trading is confined to absolutely continuous strategies, have been analyzed in Graewe and Horst [18] and Horst and Xia [22], respectively. This is consistent with the empirical work of Large [26] and Lo and Hall [27], which suggests that resilience does indeed vary stochastically. Although only absolutely continuous trading strategies were admissible in [18, 22], numerical simulations reported in [18] suggest that if all model parameters are deterministic constants, then the optimal portfolio process converges to the optimal solution in [31], with two block trades and a constant trading rate, as the instantaneous impact parameter converges to zero. Cartea and Jaimungal [10] provide empirical evidence that the instantaneous price impact is indeed (much) smaller than the permanent (or transient) price impact. The numerical simulations in [18] suggest that the model in [18] provides a common framework within which to embed the two most commonly used liquidation models [3, 31] as limiting cases.

This paper provides a rigorous convergence analysis within a Markovian factor model. It turns out that the stochastic setting is quite different from the deterministic one. Most importantly, we show that in the stochastic setting, the optimal portfolio processes obtained in [18] converge in the Skorohod M2 topology to a process of infinite variation with jumps as the instantaneous market impact parameter converges to zero. Our second main result is to prove that the limiting portfolio process is optimal in a liquidation model with semimartingale execution strategies and to explicitly compute the optimal trading cost in the semimartingale execution framework.

Showing that the limiting process solves a liquidation model with semimartingale execution strategies is more than a mere byproduct. Control problems with semimartingale strategies are usually difficult to solve because there are no canonical candidates for the value function and/or optimal strategies. We show that the optimal solution in the limiting model is fully determined by the unique bounded solution to a one-dimensional quadratic BSDE. Our limit result provides a novel approach to solving control problems with semimartingale strategies that complements the approaches in [1] and [15], which solved related models by passing to a continuous-time limit from a sequence of discrete-time models. Within a portfolio liquidation framework, inventory processes with infinite variation were, to the best of our knowledge, first considered by Lorenz and Schied [29]. Later, Becherer et al. [5] considered a trading framework with generalized price impact and proved that the cost functional depends continuously on the trading strategy, with respect to several topologies. Bouchard et al. [7] considered infinite-variation inventory processes in the context of hedging.

The paper closest to ours is the recent work by Ackermann et al. [1].
They considered a liquidation model with general RCLL semimartingale trading strategies. Their framework is more general than ours as they allow for more general filtrations and stochastic order book depth. At the same time, their analysis is confined to risk-neutral traders. In our setting, when the model parameters are deterministic and the instantaneous price impact goes to zero, the case of risk-neutral traders, which is then a special case of the model studied in [1], is explicitly solvable. Allowing for risk aversion renders the impact model significantly more complicated as it adds a quadratic dependence on the integrated trading rate to the HJB equation; cf. [18] for details.

Our work also complements the work of Gârleanu and Pedersen [15]. They consider an array of market impact models, including a model with purely transient costs. They write [p. 497] that "with purely [transient] price-impact costs, the optimal portfolio policy can have jumps and infinite quadratic variation." As in [1], they justify portfolio holdings with infinite quadratic variation by taking a limit of a sequence of discrete-time models with increasing trading frequency. They also prove that the optimal portfolio processes in the discrete-time models converge to the optimal portfolio process in the corresponding continuous-time model if either the instantaneous price impact converges to a positive constant or the instantaneous price impact factor multiplied by the (increasing) trading frequency converges to zero. However, they do not consider the general case of an instantaneous price impact factor converging to zero. Most importantly, they consider a portfolio choice problem on an infinite time horizon, which avoids the liquidation constraint at the terminal time.

Last but not least, our work complements the work of Carmona and Webster [8], who provide strong evidence that inventories of large traders often do indeed have a non-trivial quadratic variation component. For instance, for the Blackberry stock, they analyze the inventories of "the three most active Blackberry traders" on a particular day, namely CIBC World Markets Inc., Royal Bank of Canada Capital Markets, and TD Securities Inc. From their data, they "suspect that RBC (resp. TD Securities) were trading to acquire a long (resp. short) position in Blackberry" and found that the corresponding inventory processes had infinite variation. More generally, they find that systematic tests "on different days and other actively traded stocks give systematic rejections of this null hypothesis [quadratic variation of inventory being zero], with a p-value never greater than 10^-5." Our results suggest that inventories with nontrivial quadratic variation arise naturally when market depth is high and resilience and/or market risk fluctuates stochastically. This is very intuitive; in deep markets it is comparably cheap to frequently adjust portfolios to stochastically varying market environments.
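The following minimal Python sketch (purely illustrative, using simulated rather than empirical inventory paths and arbitrary parameter values) shows the kind of diagnostic behind such statements: the realized quadratic variation of an absolutely continuous inventory path shrinks as the sampling step decreases, while that of an inventory path with a Brownian component stabilizes at a strictly positive level.

```python
# Illustrative check of (non)trivial quadratic variation of inventory paths.
# Both paths are simulated; the parameters are arbitrary illustration values.
import numpy as np

rng = np.random.default_rng(1)
T = 1.0

def realized_qv(path):
    """Sum of squared increments along a sampled path."""
    return float(np.sum(np.diff(path) ** 2))

for n in (10_000, 100_000):
    t = np.linspace(0.0, T, n + 1)
    dt = T / n
    # Absolutely continuous inventory: constant-rate liquidation from 1 to 0.
    x_ac = 1.0 - t
    # Semimartingale inventory: same trend plus a small Brownian component.
    x_sm = x_ac + 0.1 * np.cumsum(np.r_[0.0, rng.normal(0.0, np.sqrt(dt), n)])
    print(f"n = {n}: QV(absolutely continuous) = {realized_qv(x_ac):.2e}, "
          f"QV(semimartingale) = {realized_qv(x_sm):.2e}")
```

In the present paper, the analogous property of the limiting optimal inventory is not imposed but arises endogenously from the model in the small-impact limit.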
The main technical challenge in establishing our convergence results is that the optimal solution to the limiting model cannot be obtained by taking the limit of the three-dimensional quadratic BSDE system that characterizes the optimal solution in the model with positive instantaneous impact. Instead, we prove that the limit is fully characterized by the solution to a one-dimensional quadratic BSDE. Remarkably, this BSDE is independent of the liquidation requirement. As a result, full liquidation takes place as the instantaneous impact parameter converges to zero, even if it is not strictly required. The reason is a loss in book value of the remaining shares that outweighs the liquidation cost for small instantaneous impact. Our convergence result is based on a novel representation result for solutions of BSDEs driven by Itô processes in terms of uniformly continuous functions of the forward process, and on a series of novel convergence results for sequences of singular stochastic integral equations and random ODEs, which we choose to report in an abstract setting in Appendices A and B, respectively.

The limiting portfolio process is optimal in a liquidation model with general semimartingale execution strategies. Within our modelling framework, where the cost coefficients are driven by continuous factor processes, block trades optimally occur only at the beginning and the end of the trading period. This is very intuitive, as one would expect large block trades to require some form of trigger, such as an external shock leading to a discontinuous change of cost coefficients. The proof of optimality proceeds in three steps. We first prove that the portfolio process with jumps can be approximated by absolutely continuous ones. This allows us to approximate the trading costs in the semimartingale model by trading costs in the pre-limit models, from which we finally deduce the optimality of the limiting process in the semimartingale model by using the optimality of the approximating inventory processes in the pre-limit models. As a byproduct of our approximation, we also obtain that the optimal costs are given in terms of the aforementioned one-dimensional quadratic BSDE.

The rest of the paper is organized as follows. In Section 2, we recall the modelling setup from [18, 22] and summarize our main results. The proofs are given in Sections 3 and 4. A series of fairly abstract convergence results for various stochastic equations with singularities, upon which our convergence results are based, is postponed to two appendices.

Notation. Throughout, randomness is described by an R^m-valued Brownian motion {W_t}_{t∈[0,T]} defined on a complete probability space (Ω, F, {F_t}_{t∈[0,T]}, P), where {F_t}_{t∈[0,T]} denotes the filtration generated by W, augmented by the P-null sets. Unless otherwise specified, all equations and inequalities hold in the P-a.s. sense. For a subset A ⊆ R^d, we denote by L^2_Prog(Ω × [0, T]; A) the set of all progressively measurable, square-integrable A-valued stochastic processes; the analogous spaces indexed by P denote the respective subsets of predictable processes. Whenever T− appears, we mean that there exists an ε > 0 such that a statement holds on (T − ε, T).
Problem formulation and main results In this section, we introduce two portfolio liquidation models with stochastic market impact.In the first model, analyzed in Section 2.1, the investor is confined to absolutely continuous trading strategies.For small instantaneous market impact, we prove that the optimal liquidation strategy converges to a semimartingale with jumps.In Section 2.2, we therefore analyze a liquidation model with semimartingale trading strategies.We prove that the limiting process obtained in Section 2.1 is optimal in a model where semimartingale strategies that satisfy a suitable regularity condition are admissible. Portfolio liquidation with absolutely continuous strategies We take the liquidation model analyzed in [18,22] as our starting point and consider an investor that needs to close within a given time interval [0, T ] a (single-asset) portfolio of x 0 > 0 shares using a trading strategy ξ = {ξ t } t∈[0,T ] .If ξ t < 0, the investor is selling the asset at a rate ξ t at time t ∈ [0, T ], else she is buying it.For a given strategy ξ, the corresponding portfolio process The set of admissible strategies is given by the set For a general inventory process X ∈ L 2 P (Ω × [0, T ]; R), the corresponding transient price impact is described by Y X = {Y X t } t∈[0,T ] , the unique stochastic process that satisfies the ODE for some constant γ > 0 and some essentially bounded, adapted, (0, ∞)-valued process ρ = {ρ t } t∈[0,T ] .The process Y X may be viewed as describing an additional shift in the unaffected benchmark price process generated by the large investor's trading activity.For ξ ∈ A, we write For any instantaneous impact factor η > 0 and any penalization factor where the cost functional is given by The first term in the above cost functional captures the instantaneous trading costs; the second captures the costs from transient price impact; the third captures market risk where the adapted and non-negative process λ = {λ t } t∈[0,T ] specifies the degree of risk.If full liquidation is required (N = ∞), the fourth term should formally be read as +∞1 {X ξ T =0} with the convention 0•∞ = 0.The case N = ∞ captures the case where full liquidation is required; this case is analyzed in [18].The case γ + 1 ≤ N < ∞ is analyzed in [22].The fifth term captures an additional loss in book value of the remaining shares.It drops out of the cost function if N = ∞; see [18,22] for further details on the impact costs and cost coefficients. It has been shown in [18,22] that the optimization problem has a solution ξη,N for any N ∈ N and any η > 0. The solution is given in terms of a backward SDE system with possibly singular terminal condition.We index the optimal trading strategies and state processes by η and N as we are interested in their behavior for small instantaneous impact factors for both finite and infinite N . i) The BSDE system that belongs to the space ii) The liquidation problem (2.2) has a solution ξη,N .The corresponding state process is given by the (unique) solution to the ODE system with initial conditions Xη,N 0 = x 0 and Ŷ η,N 0 = 0 where Let us now define the process Ẑη,N := γ Xη,N − Ŷ η,N . 
The benefit of defining this process is that the terms in the ODE (2.3) that are multiplied by η −1 drop out so that we expect that the process Ẑη,N remains stable for small values of η.Next, we state a result on the optimal state process and the previously introduced process Ẑη,N that will be important for our subsequent analysis.In particular, we show that the optimal portfolio process Xη,N never changes its sign.The proof is given in Section 3.1.1.Theorem 2.2.For all η ∈ (0, ∞), N ∈ N , the process Ẑη,N is non-increasing on [0, T ].Moreover, We are interested in the dynamics of the optimal portfolio processes for small instantaneous price impact.We address this problem within a factor model where the cost coefficients λ and ρ are driven by an Itô diffusion, which is given by the unique strong solution to the SDE dχ t = µ(t, χ t ) dt + σ(t, χ t ) dW t , χ 0 = χ 0 on [0, T ] with χ 0 ∈ R n .We assume throughout that the function is bounded, measurable and uniformly Lipschitz continuous in the space variable: Assumption 2.3.The processes ρ and λ are of the form Moreover, the function f ρ is bounded away from zero. For convenience, we define the stochastic process ϕ = {ϕ t } t∈[0,T ] by In what follows, we heuristically argue that the processes Xη,N converge to a limit process X0 (independent of N ) as η → 0 and identify the limit X0 .Since the ODE system (2.3) is not defined for η = 0, we cannot define the limiting process as the solution to this system.Instead, we first identify the limits of the coefficients of the ODE system and then derive candidate limits for the state processes in terms of the limiting coefficients. Convergence of the coefficient processes In this section, we state the convergence results for the coefficient processes D η,N and E η,N of the ODE system (2.3) as η → 0. In particular, we prove that their limits D 0 and E 0 exist and are driven by a common factor, which is given by the solution of a quadratic BSDE. Before proceeding to the limit result, we provide some heuristics for the convergence.Assuming for simplicity that all coefficients are deterministic, the dynamics of the coefficient processes satisfy Letting η → 0, we expect that that is, we expect that Moreover, by the choice of the coefficients D η,N and E η,N , we expect that The three equalities combined yield (2.6) Plugging (2.5) and (2.6) back into (2.4) yields Hence we expect D 0 and E 0 to be driven by B 0 and B 0 to satisfy the ODE . Our heuristic also suggests that the limit processes are independent of the liquidation requirement. Example 2.4.If λ = Cρ for some constant C ≥ 0, then the process B 0 can be computed explicitly The preceding heuristic suggests that the limiting coefficient processes are driven by a solution to the BSDE corresponding to the above ODE for B 0 .The following lemma is proven in Section 3.1.2. 
Lemma 2.5.There exists a unique solution (B 0 , Z 0,B ) in the space The process B 0 is bounded from above by 1 and bounded from below by Moreover, there exists a uniformly continuous function ("decoupling field") h : We prove below that the process B η,N converges to B 0 as η → 0 and that D η,N and E η,N converge to the processes respectively.In view of Lemma 2.5, these processes are well-defined and so the dynamics of the process B 0 can be rewritten as Likewise, the BSDE for the process B η,N can be rewritten as (2.10) This suggests that the process B η,N converges to B 0 on the entire interval [0, T ].By contrast, convergence of the processes D η,N and E η,N can only be expected to hold on compact subintervals of [0, T ) because the terminal conditions of the limiting and the approximating processes are different.Specifically, we have the following result.Its proof is given in Section 3.2.1. Convergence of the state process Having derived the limits of the coefficient processes, we can now heuristically derive the limits of the processes Xη,N , Ŷ η,N and Ẑη,N , which we denote by X0 , Ŷ 0 and Ẑ0 , respectively. Since Xη,∞ T = 0 for all η > 0, we expect that X0 T = 0. We prove in Lemma 3.11 that this convergence also holds if N is finite.The proof heavily relies on the optimality of X0 in the semimartingale portfolio liquidation model. Assuming that the optimal trading strategy remains stable if η → 0, the ODE (2.3) suggests that the term D η,N Xη,N + E η,N Ŷ η,N is small for small η and hence that We do not conjecture the above relation at the terminal time because the convergence of D η,N and E η,N only holds on [0, T ).Assuming that On the other hand, by definition, This motivates us to define the process Since we expect that X0 T = 0 and that Ẑ0 = γ X0 − Ŷ 0 , we now introduce the candidate limiting state processes (2.11) Figure 1: Optimal trading strategies Xη,∞ for the liquidation model for different instantaneous impact factors and their limit X0 for m Since ( X0 0− , Ŷ 0 0− ) = (x 0 , 0), we expect that the limiting state process jumps at the initial and the terminal time.In particular, we cannot expect uniform convergence on [0, T ]. We also expect the limiting state processes to be of unbounded variation; this can already be deduced from Figure 1.The figure also suggests that the portfolio process is more or less monotone for large η, while this property is lost for small η.When η → 0, adjustments to small changes in market environments are cheap.This is very different from round-trip strategies where own impact is used to drive market prices into a favorable direction. Figure 1 also suggests that the limiting portfolio process jumps only at times 0 and T .This is consistent with the definition of candidate processes (2.11) as well as the observation in [21], according to which jumps in the optimal strategy can only be triggered by exogenous shocks like jumps in the cost coefficients, which are absent in the present model.It remains to clarify in which sense the state processes converge.Contrary to the convergence result stated in Proposition 2.6, we can only expect convergence in probability because the state process follows a forward ODE while the coefficient processes follow backward SDEs; see also Appendix B.2.The following theorem establishes uniform convergence in probability on compact subintervals on (0, T ) along with some "upper/lower convergence" at the initial and terminal time.The proof is given in Section 3.2.2. 
Theorem 2.7.For all ε > 0 and δ > 0, there exists an η 0 > 0 such that, for all η ∈ (0, η 0 ] and all N ∈ N , The preceding theorem does not provide a convergence result on the whole time interval, due to the jumps of the limit processes at the initial and terminal time.However, along with our results from Section 2.2, it allows us to prove the convergence of the graphs of the state processes on the entire time interval.The completed graph of a RCLL function X : {0−} ∪ [0, T ] → R with finitely many jumps is defined by The Skorohod M 2 distance between X and Y is defined as the Hausdorff distance between their completed graphs, i.e. where If strict liquidation is required, then Theorem 2.7 is sufficient to prove convergence of the state processes in the Skorohod M 2 sense.Even if liquidation is not required, it turns out that the terminal position converges to zero as η → 0. Heuristically, this can be seen as follows. Let t 0 ∈ (0, T ).Disregarding market risk costs, which we expect to be of order O(T − t 0 ) and hence negligible if t 0 → T , and disregarding instantaneous impact costs for the moment, the cost functional for any given admissible strategy ξ is given by Hence, we expect the controllable costs to satisfy plus instantaneous impact costs.Since N > γ, this suggests to make X ξ T small, which is cheap if η is small.More precisely, we have the following result; its proof is given in Section 3.2.3.Proposition 2.8.For all ε > 0 and δ > 0, there exists an η 0 > 0 such that, for all η ∈ (0, η 0 ] and all N ∈ N , Optimal liquidation with semimartingale strategies In this section, we prove that the limit process X0 is the optimal portfolio process in a trade execution model with semimartingale trading strategies. In our semimartingale model, a trading strategy is given by a triple θ = (j + , j − , V ) where j + and j − are real-valued, non-decreasing pure jump processes and V is a real-valued continuous Brownian semimartingale starting in zero.The jump processes j + and j − describe the cumulative effects of buying, respectively selling large blocks of shares while the continuous The first term captures the transient price impact cost; the second term captures market risk.The third term emerges as an additional cost term when passing from discrete to continuous time, as shown in [1].Moreover, in the absence of this term, arbitrarily low costs can be achieved; see [1] for details. The cost function can be conveniently rewritten as This representation supports our intuition that the price impact before and the price impact after the jump equally influence the total cost.The first term in this expression captures the cost of the initial block trade at time t = 0. The cost functional is well defined under the following admissibility condition. Definition 2.9.A trading strategy θ = (j + , j − , V ) is called admissible if the liquidation constraint X θ T = 0 holds, if j ± is a RCLL, predictable, real-valued, non-decreasing and square integrable pure jump process, and V is a continuous semimartingale starting in zero with (2.12) The set of all admissible trading strategies is denoted A 0 . Our goal is now to solve the optimization problem min θ∈A 0 J 0 (θ). 
To this end, we verify directly that the limit process X0 obtained in the previous section is optimal.The results of Section 2.1 show that the process has the following representation: where the jump process ĵ− and the continuous part are given by, respectively In view of Assumption 2.3 and because B 0 is a continuous semimartingale and Ẑ0 is differentiable, the process V is a continuous semimartingale starting in zero.Hence the following holds. In order to prove that θ is optimal, we approximate the cost and the portfolio process associated with any strategy θ ∈ A 0 by the cost and portfolio processes corresponding to absolutely continuous trading strategies.To this end, we first approximate the continuous semimartingale part V by differentiable processes.The proof of the following Lemma is given in Section 4.1. Lemma 2.11.For all θ = (j + , j − , V ) ∈ A 0 and for all β, δ > 0, there exists a constant ν > 0 and an adapted and continuous (2.13) and Next, we approximate the portfolio process X θ by a portfolio process associated with an absolutely continuous strategy.To this end, for all θ ∈ A 0 and β, ν, ε > 0, we define the integrable process In view of square-integrability of j ± , we see that ξ θ,β,ν,ε belongs to A. The corresponding portfolio process is denoted X θ,β,ν,ε .The proof of the following Lemma is given in Section 4.2. Lemma 2.13.For all C > 0, there exists a constant D(C) > 0 such that the following holds: As a consequence of the previous results, the optimal instantaneous price impact term converges to zero as η → 0. The proof is given in Section 4.4. Lemma 2.14.We have The cost estimate in Lemma 2.13 allows us to establish the optimality of the trading strategy θ by using the optimality of ξη,∞ in the strict liquidation model with absolutely continuous strategies.It turns out that the minimal trading costs are fully determined by the initial value B 0 0 of the process B 0 along with the impact factor γ and the initial portfolio. It remains to compute J 0 ( θ).In view of (2.15), for all η ∈ (0, ∞), we have The difference of the first two terms converge to zero as η → 0, which is verified using Lemma 2.13 and Theorem 2.7 as in the proof of Theorem 2.15.The third term converges to zero by Lemma 2.14.Hence, using the representation of the value function given in [18], Remark 2.16.If λ ≡ 0, then our model is a special case of the model analyzed in [1], which also contains cases when there is no optimal trading strategy.However, since γ is constant in our model, the processes "µ" and "σ" introduced in [1] are equal to zero.This implies that the equation " β = Y " holds in their notation.As shown in Section 5 of [1], the process "M ⊥ " introduced therein is also equal to zero, which implies that "Y " and hence " β" is a semimartingale.Theorem 2.3 (ii) in [1] confirms that, under this property, an optimal trading strategy does indeed exist. 3 Proofs for Section 2.1 This section proves the results stated in Section 2.1.We start with a priori estimates and regularity properties for the coefficient processes that specify the optimal state processes. 
A priori estimates and regularity properties 3.1.1The case η > 0 The following estimates have been established in [18,22], except for the upper bound on E η,N for finite N , which is stronger than the corresponding one in [22].It can be established using the same arguments as in the proof of Proposition 3.2 in [18] noting that For all η ∈ (0, ∞), N ∈ N and s, t ∈ [0, T ) with s ≤ t, we have that The preceding estimates allow us to prove that neither the optimal portfolio process nor the corresponding spread process change sign. Proof of Theorem 2.2.Let us put As a result, In view of Lemma 3.1, this shows that V (t) < 0 on (0, T ).Hence, strict positivity of Xη,N Thus, the definition of Ẑη,N along with (2.3) yields Next, we prove that the process B η,N satisfies an L 1 uniform continuity property.We refer to Appendix A for a discussion of general regularity properties of stochastic processes.Proof.Let ε, ε 1 > 0, s > T − ε 1 and let V and τ be arbitrary according to the definition of Condition C.1.By Lemma 3.1, if ε 1 is small enough, If s ≤ T − ε 1 , the assertion follows from the integral representation (2.10) along with the estimates established in Lemma 3.1 using that the stochastic integral in (2.10) is a martingale on [0, T −ε 1 ].We emphasize that Z η,N,B is possibly defined only on [0, T −] and so the stochastic integral may be a martingale only away from the terminal time. The case η = 0 We are now going to establish a priori estimates on the candidate limiting coefficient processes.First, we show that Assumption 2.3 directly implies the following regularity result for the parameter processes: Lemma 3.3.The processes χ, ρ, λ, ϕ and ϕ −1 satisfy Condition C.2 introduced in Appendix A. Proof.χ satisfies Condition C.2 due to Lemma A.9, Lemma A.2 and Lemma A.5.The rest immediately follows by Lemma A.8. We are now ready to prove that the process B 0 is well-defined. Proof of Lemma 2.5.The existence result follows from a standard argument.In fact, it is well known that, for any b ∈ [1, ∞), the BSDE with Lipschitz continuous driver (we recall f λ , f ρ that are defined in Assumption 2.3) By definition, (B 0 , 0) is the unique solution to the BSDE with driver φ and terminal condition 1, where the lower bound B 0 on the process B 0 was defined in (2.8).Likewise, (1, 0) is the unique solution of the BSDE with driver φ and the same terminal condition.Since the standard comparison principle for BSDEs with Lipschitz continuous drivers yields This proves that (B 1 , Z 1 ) is the desired unique bounded solution to the BSDE (2.7). The second assertion follows Theorem A.11 applied to the BSDE (3.1) for b = 1. Having established the existence of the process B 0 , the processes D 0 and E 0 are well-defined. The following lemma establishes estimates and regularity properties for D 0 and E 0 . Lemma 3.4.The following a priori estimates hold: Moreover, the processes D 0 and E 0 satisfy Condition C.2. Proof.The a priori estimates can be obtained by plugging the bounds on B 0 (cf.Lemma 2.5) into the definitions of D 0 and E 0 given in (2.9).Moreover, if we denote by h the function derived from Lemma 2.5, then for all t ∈ [0, T ], In view of Assumption 2.3, the processes D 0 and E 0 can be represented as uniformly continuous functions of the factor process χ and hence the assertion follows from Lemma 3.3 and Lemma A.8. The next lemma can be viewed as the analogue to Theorem 2.2 in the case η = 0. 
Proof of the convergence results In this section, we prove our main convergence results.We start with the convergence of the coefficients of the ODE system (2.3).Subsequently, we prove that the convergence of the coefficients yields convergence of the state process. Proof of Proposition 2.6 The proof of Proposition 2.6 is split into a series of lemmas.In a first step, we establish the convergence as η → 0 of the auxiliary processes On [0, T ), the processes F η,N and G η,N satisfy the dynamics respectively, A general convergence result for integral equations of the above form is established in Appendix B.1.2.It allows us to prove the following two lemmas. Lemma 3.7.For all ε > 0, there exists an η 0 > 0 such that, for all η ∈ (0, η 0 ] and all N ∈ N , We are now going to prove the almost sure convergence to zero of the process b η,N := B η,N −B 0 . To this end, we first observe that Plugging this into (2.10)shows that on [0, T ).Performing an analogous computation for B 0 and subtracting the two equations yields on [0, T ).This BSDE is different from (3.2) and (3.3).We apply Lemma B.2 to prove the following result. Lemma 3.8.For all ε > 0, there exists an η 0 > 0 such that, for all η ∈ (0, η 0 ] and all N ∈ N , Proof.For every N ∈ N , we apply Lemma B.2 with Assumption B.1 i) follows from the a priori estimates on B η,N (Lemma 3.1), B 0 (Lemma 2.5) and D 0 (Lemma 3.4).Assumption B.1 ii) follows from the a priori estimates on B η,N and B 0 , where the mapping ε → δ is independent of N .Assumption B.1 iii) follows from the same estimates and Lemma 3.6 and Lemma 3.7, where the choice of η 1 is independent of N .Assumption B.1 iv) is satisfied because we can choose ε 1 > 0 small enough s.t. Proof of Theorem 2.7 First we need to prove an auxiliary result: converges to 0 in probability as ν → 0. Proof.For all ω ∈ Ω, Y (ω) is uniformly continuous, hence the modulus of continuity converges to 0 as ν → 0 P-a.s.This implies convergence in probability, in particular. 1 , there exists some Since ω belongs to M ν,η,N 0 and due to (3.6), this implies that Now we can choose s(ω) ∈ (T − ν, t) minimal with the property that Due to minimality of s(ω), we have ∂ t Xη,N s(ω) (ω) ≥ 0. We now show that this derivative must also be strictly negative.In fact, due to (3.4), Since Ẑη,N is non-increasing (Theorem 2.2), using (3.7) again, the right hand side of the above equation can be bounded from above by Since ω ∈ M ν,η,N 3 , we have Proof of Proposition 2.8 The proof of the convergence of the optimal portfolio processes in Skorohod M 2 sense follows from Theorem 2.7 if strict liquidation is required.If strict liquidation is not required, the results of Section 2.2 are required to establish the assertion.This is not a circular argument since the proofs of Section 2.2 only use the results of Section 2.1 concerning the liquidating case N = ∞. 
Since θ = (0, ĵ− , V ) is optimal in the model introduced in Section 2.2 (cf.Theorem 2.15), it is easy to show that the strategy θq := ( ĵ+,q , ĵ−,q , V q ) is admissible for every q ∈ R where The following lemma shows that we can express the cost term corresponding to the transient price impact without Itô integrals.The proof is an immediate consequence of Itô's formula for semimartingales (Theorem II.32 in [33]) and the fact that Lemma 3.10.For all θ = (j + , j − , V ) ∈ A 0 and all t ∈ [0, T ], it holds that Using the previous lemma, it is not difficult to check that the mapping q → J 0 ( θq ) is differentiable and that 0 = ∂J 0 ( θq ) ∂q This allows us to prove that full liquidation is optimal if η → 0 even if it is not formally required. Proof.We assume to the contrary that lim sup η→0 sup N ∈N E[ Xη,N T ] > 0 and prove that this contradicts the optimality of J η,N ( ξη,N ).To this end, we consider the admissible trading strategies ξη,N,q t := ξη,N t + q, compute the derivative of the function q → J η,N ( ξη,N,q ) and show that the derivative at q = 0 does not vanish for small η if lim sup Obviously, ξη,N,q ∈ A and Xη,N,q In view of Theorem 2.2, Theorem 2.7, Lemma 3.5 and using the sum of the three last expected values is small uniformly in N if η is small.Hence, if lim sup η→0 sup N ∈N E[ Xη,N T ] > 0, then the sum on the right hand side of the above inequality is strictly positive when first choosing η 0 > 0 small enough and then choosing (η, Ñ ) with η ≤ η 0 such that . We are now ready to prove the convergence of Xη,N to X0 in the Skorohod M 2 sense.To this end, we have to bound the distance of each point of any of the graphs to the other graph.In the inner interval [ε, T − ε], it is enough to consider Xη,N − X0 , which we have bounded by Theorem 2.7. Proof of Proposition 2.8.To prove that the probability of is large for small η > 0, we need to prove that the distance of any point (t, x) from either G Xη,N (ω) or G X0 (ω) to the respective other graph is small on a set of large probability.To this end, we fix a small enough ν ∈ (0, ε).If t ∈ [ν, T − ν], this follows directly from Theorem 2.7.For t ∈ [0, ν) ∪ (T − ν, T ], we use Theorem 2.7 along with the facts that (i) the completed graph of a discontinuous function contains the line segments joining the values of the function at the points of discontinuity; (ii) the increments of X0 are small in the sense of Lemma 3.9 and (iii) lim η→0 sup N ∈N E[ Xη,N T ] = 0, due to Lemma 3.11.For instance, let us consider an ω ∈ Ω with and assume that (t, x) ∈ G Xη,N (ω) with t < ν and x < X0 0 (ω) − ν.Since x = Xη,N t (ω), the mean value theorem yields an s ∈ [0, t] s.t.X0 s (ω) = x + ν, which proves that (s, x + ν) ∈ G X0 (ω) and 4 Proofs for Section 2.2 4.1 Proof of Lemma 2.11 In order to prove Lemma 2.11 we first define, for all β, ν ∈ (0, ∞) and x ∈ R, For any admissible strategy (j + , j − , V ) ∈ A 0 , there exists a unique pathwise differentiable, adapted stochastic process Now, (2.13) can easily be verified by the comparison principle: For all ε > 0 and all t ∈ [0, T ], we have and hence, for all ε > 0 and t ∈ [0, T ], In view of Lemma 3.9, it is enough to prove that for all ω ∈ M and t ∈ [0, T ].In order to see this, let us assume to the contrary that the statement is wrong.Then, by continuity, since V 0 − Ṽ β,ν 0 = 0, there exists some ω ∈ M and some t 1 ∈ [0, T ] such that We choose the smallest such is continuous and has no roots in [t 1 − ν, t 1 ], it does not change sign on this interval.We may hence w.l.o.g.assume that . 
By definition, for all those t, V β,ν t (ω) = β/ν.This, however, contradicts the minimality of t 1 as This finishes the proof of Lemma 2.11. Proof of Lemma 2.12 The proof of Lemma 2.12 requires the following result on the jump processes. We are now ready to prove the approximation of arbitrary portfolio processes by absolutely continuous ones. Proof of Lemma 2.12.By the triangle inequality, for all β, ν, ε > 0, We analyse the four terms separately.This first term can be bounded by Regarding the second term, for all ε < T /2, Using the monotonicity of the jump processes it follows from Lemma 4.1 that For the third term, we conclude from the Itô isometry and the definition of ξ θ,β,ν,ε that Now we can bound Moreover, in view of (4.2), It remains to consider the fourth term in (4.1).To this end, let Then, using (2.13) in the last step, By (2.12), {max t∈[0,T ] |V t | 2 } is uniformly integrable.Hence we can first choose β > 0 small enough, then, according to Lemma 2.11, choose ν > 0 small enough such that P[Ω\M β,ν ] is sufficiently small and finally choose ε > 0 small enough in order to obtain the desired result. Proof of Lemma 2.13 We start with a technical lemma. Proof.Due to the Hölder inequality and the triangle inequality, The following technical lemma provides useful estimates for the impact process. Lemma 4.3.Let X ∈ L 2 P (Ω × [0, T ]; R).Then the transient price impact process Y X given by (2.1) satisfies Y X ∈ L 2 P (Ω × [0, T ]; R) and If X 0 = 0, then additionally, we have for all s, t ∈ [0, T ] with s < t, Proof.Inequality (4.3) follows from the explicit formula and the triangle inequality.Moreover, Substituting the inequality into (4.7)yields (4.4).To prove (4.5), let X 0 = 0.Then, due to (4.6), for all u ∈ [0, T ] and so Using the subadditivity of the square root, we now obtain (4.5) from We are now ready to prove our approximation result for the cost functional. Proof of Lemma 2.13.For θ = (j + , j − , V ) ∈ A 0 and ξ ∈ A, Due to Lemma 4.2 the last term can be estimated as Moreover, using first Lemma 3.10, and then Lemma 4.3 and Lemma 4.2, we obtain Proof of Lemma 2.14 We assume the contrary, i.e. that there exists a constant c > 0 such that, for all H > 0, there exists some η ∈ (0, H) such that The optimality of ξη,∞ and (2.15) imply that, for all η, ν, β, ε > 0, We now prove that (4.8) contradicts (4.9).By Theorem 2.13 and since | X0 t | ≤ x 0 , we obtain (for convenience, let p(x) and Plugging the results into (4.9)yields that, for all η > 0 that satisfy (4.8), it holds Due to Lemma 2.12, we can first choose β, ν, ε > 0 sufficiently small such that 2) and in view of Theorem 2.7, we can then choose η > 0 sufficiently small satisfying (4.8) such that the right hand side of (4.10) is larger than zero, which is a contradiction.This finishes the proof of Lemma 2.14. A Regularity properties of Itô processes and BSDEs In this appendix, we introduce some regularity properties of stochastic processes, which we use to prove various convergence results for stochastic processes. We consider a continuous, adapted, R d -valued stochastic process Y = {Y t } t∈I on some interval I and introduce the following continuity conditions.Condition (C.1).For all ε ∈ (0, ∞), there exists δ ∈ (0, ∞) such that, for all s ∈ I, all F s -measurable and integrable V : Ω → R and all stopping times τ : Condition (C.2).For all ε ∈ (0, ∞), there exists δ ∈ (0, ∞) such that, for all s ∈ I, In what follows, we list some auxiliary results. Lemma A. 
Proof.The second part of the statement can be proven straightforward using the decomposition The first part of the statement follows from Next, we prove some properties of the concave envelope of the modulus of continuity of a uniformly continuous function.We use these results to show that a uniformly continuous function of an Itô process with bounded coefficients satisfies Condition C.2. Since ω f is continuous in 0 and since X satisfies Condition C.2, we can choose δ > 0 small enough such that this term is not greater than ε. Proof.Let σ > 0 be a component-wise bound on σ.By the Jensen inequality, In order to prove that the conditional expectation on the right hand side is bounded, we apply the classical Doob's maximal inequality concerning the conditional measures w.r.t.all sets A ∈ F s with positive probability and obtain Then by combining these inequalities and using the Itô isometry for conditional expectations, we finally obtain Finally, we prove that the strong Condition C.2 holds for a certain class of BSDEs driven by forward SDEs.Specifically, we prove that the solution to the BSDE can be expressed as a uniformly continuous function of the forward process and then we apply Lemma A.8.The representation of the solution in terms of a continuous function has been proven by El Karoui [12] already. For all (t, x) ∈ [0, T ] × R n , we consider the following SDE on [t, T ], dX t,x s = μ s, X By the previous results, we obtain that X t,x satisfies Condition C.2. B.1.2 An equation with scaling In this section, we establish an abstract convergence result for stochastic processes {P η t } t∈(0,T ) indexed by some parameter η > 0 that satisfy the integral equation d(ψP η ) t = η −1 a P η t , P 0 t + q η t dt + dL η t + Z η t dW t on (0, T ), (B.4) where a : R 2 → R is a measurable mapping, {ψ t } t∈(0,T ) , {P η t } t∈(0,T ) , {P 0 t } t∈(0,T ) , {q η t } t∈(0,T ) and {L η t } t∈(0,T ) are adapted, real-valued, continuous stochastic processes and {Z η t } t∈(0,T ) ∈ L 2 Prog Ω × (0, T −]; R m . Our goal is to prove that the processes {P η t } t∈(0,T ) converge to P 0 as η → 0 uniformly on compact subintervals of (0, T ) if the mapping a(•, •) is such that P η is driven away from P 0 and if the boundary condition lim t→T (P η t −P 0 t ) ≥ 0 holds.If the integral equation (B.4) holds on the whole interval (0, T ], then it is enough to assume that P η T ≥ P 0 T .When applying the abstract convergence result to the BSDEs (3.2) and (3.3), the former condition holds if N = +∞ while the latter holds if N is finite. ii) One of the following two "boundary conditions" holds: a) For all η > 0, there exists a T η < T such that v) The processes ψ and P 0 satisfy Condition C. • We first consider the term E 1 {p η s >ε} η −1 τ η s a P η u , P 0 u + q η u du . Together with our assumption on the process q η and our choice of η, this implies that Now, if we choose first ε 0 and then η 0 (ε) small enough, the coefficients that multiply the probabilities become positive.Hence both probabilities must be equal to zero and so we have P[p η s > ε] = 0 if η ≤ η 0 (ε).Analogously, we can prove that P[p η s < −ε] = 0 for all s ∈ (0, T ).The main difference is that τ η < T − δ does not hold on the set {p η s < −ε}.Instead, we only obtain τ η < T (if Assumption B.3 ii) b) holds) or τ η < T η (if Assumption B.3 ii) a) holds). Lemma 3 . 9 . Let I ⊂ R be a compact interval and Y = {Y t } t∈I a continuous, adapted, R d -valued stochastic process.Then the modulus of continuity sup s,t∈I,|s−t|≤ν Definition A. 1 . 
A family of stochastic processes is said to uniformly satisfy Condition C.1 or C.2 on I if all processes satisfy the respective property and δ can be chosen uniformly for all processes. • If X and Y both satisfy Condition C.2, then X • Y also satisfies Condition C.2.
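As a purely numerical aside (not part of the argument), the convergence of the modulus of continuity invoked in Lemma 3.9 can be illustrated with a short R simulation; the Brownian path, the grid size, and the lag-based approximation are all assumptions made only for this illustration.

```r
# Purely illustrative: empirical modulus of continuity of a simulated path.
# A Brownian path stands in for the abstract process Y of Lemma 3.9; the grid
# and the lag-based approximation are assumptions made only for this picture.
set.seed(1)
n  <- 10000
dt <- 1 / n
Y  <- c(0, cumsum(rnorm(n, sd = sqrt(dt))))

modulus <- function(Y, dt, nu) {
  k <- max(1, floor(nu / dt))                       # lags with |s - t| <= nu
  max(sapply(1:k, function(j) max(abs(diff(Y, lag = j)))))
}

for (nu in c(0.2, 0.05, 0.01, 0.002)) {
  cat(sprintf("nu = %.3f   empirical modulus = %.4f\n", nu, modulus(Y, dt, nu)))
}
# The printed values decrease as nu shrinks, matching the a.s. convergence
# (and hence convergence in probability) asserted in Lemma 3.9.
```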
11,341.4
2021-03-10T00:00:00.000
[ "Mathematics", "Business", "Economics" ]
Gene biomarker discovery at different stages of Alzheimer using gene co-expression network approach Alzheimer's disease (AD) is a chronic neurodegenerative disorder. It is the most common type of dementia that has remained as an incurable disease in the world, which destroys the brain cells irreversibly. In this study, a systems biology approach was adopted to discover novel micro-RNA and gene-based biomarkers of the diagnosis of Alzheimer's disease. The gene expression data from three AD stages (Normal, Mild Cognitive Impairment, and Alzheimer) were used to reconstruct co-expression networks. After preprocessing and normalization, Weighted Gene Co-Expression Network Analysis (WGCNA) was used on a total of 329 samples, including 145 samples of Alzheimer stage, 80 samples of Mild Cognitive Impairment (MCI) stage, and 104 samples of the Normal stage. Next, three gene-miRNA bipartite networks were reconstructed by comparing the changes in module groups. Then, the functional enrichment analyses of extracted genes of three bipartite networks and miRNAs were done, respectively. Finally, a detailed analysis of the authentic studies was performed to discuss the obtained biomarkers. The outcomes addressed proposed novel genes, including MBOAT1, ARMC7, RABL2B, HNRNPUL1, LAMTOR1, PLAGL2, CREBRF, LCOR, and MRI1and novel miRNAs comprising miR-615-3p, miR-4722-5p, miR-4768-3p, miR-1827, miR-940 and miR-30b-3p which were related to AD. These biomarkers were proposed to be related to AD for the first time and should be examined in future clinical studies. www.nature.com/scientificreports/ in this field can be classified into two main categories. The studies in the first category have adopted image processing approaches based on brain images (e.g., MRI) [9][10][11][12][13][14][15][16][17] . On the other hand, the studies in the second category have used gene expression data to predict the chance of developing Alzheimer's disease 1,6,[18][19][20][21][22][23][24][25][26][27][28] . An article that used Linear discriminant analysis as the best separation procedure is an example of the studies which fall into the first category. It used pathway analysis to distinguish between different stages of Alzheimer. More specifically, it classified different stages of Alzheimer's disease using pathway analysis 29 . The investigation of meta-analysis studies and the examination of the studies which fall into these two categories highlight a significant gap in the relevant literature on Alzheimer's disease and reveal that there is a need for further research on this disease. A brief description of weighted gene co-expression network analysis (WGCNA) and some of the related studies which have adopted this approach is necessary to clarify our method. The WGCNA describes patterns that are constructed as a result of the correlation between the genes in microarray data. It is one of the system biology methods and is used in this study. It is a very useful R package that can be used to construct gene coexpression networks or to discover modules and correlations between genes. Moreover, it can also be utilized to identify Eigen genes or intra-modular hub genes or to calculate measurement values for the module memberships and topological properties 30 . A study adopted this method and used gene and miRNA expression data to discover some diagnostic biomarkers for the early detection of Colorectal Cancer. 
First, it utilized clustering to extract low preserved modules by constructing the co-expression networks for the different stages of colorectal cancer. Second, it reported two novel miRNAs that were related to colorectal cancer as biomarkers for this type of cancer by validating gene-miRNA interactions and constructing bipartite networks. These miRNAs were not reported in the previous studies 31 . Furthermore, another study used a similar method for discovering diagnostic biomarkers of the stratification of Breast Cancer molecular subtypes. It reported two or three miRNAs for each subtype and their target genes, which were significant and were highlighted in basic mechanisms of this cancer 32 . Moreover, there is a study that applied the WGCNA to find the key genes in Alzheimer's disease and introduced them as potential targets in the therapy for this disease 33 . Another study in this field did not specifically deal with Alzheimer's disease and concentrated on the aging of the brain. WGCNA was used in this study to identify the significant modules and effective biomarkers of the aging human brain 34 . Many studies have been carried out in this field, and numerous proposed Alzheimer's biomarkers have been introduced. However, the disorder remains incurable. This issue stems from the fact that its critical biological pathways, along with the involved functional genes, have not been fully discovered. For example, a recent research study applied WGCNA by focusing on gene targets and their pathways to investigate Alzheimer's disease. However, it did not examine miRNAs 33 . Therefore, our study broadens the scope of the previous studies and explores the relations between genes and their target miRNAs by constructing bipartite networks. In this study, the exploration of the variations of genes expression between different steps by extracting related modules helps to find important interactions. Moreover, the examination of the related pathways helps us to develop a proper understanding of the development of Alzheimer's disease. In the "Results" section, dataset details and the outcomes of the relevant experiments are illustrated and explained using the appropriate chart and tables. The "Discussion" section supports the applicability of the introduced method based on medical and clinical evidence. The proposed methods of our study are introduced in the "Methods" section, which is the last part of this article. Results First, the details of the used database are presented, and the adopted methodological approaches are discussed step by step. Dataset and preprocessing. The dataset of this study was downloaded from the National Center for Biotechnology Information Gene expression Omnibus (GEO) using GSE63063 accession number. The platform of the chip analyzer was GPL6947. First, before the preprocessing of the dataset, the non-gene transcripts were eliminated from the original file. Second, the remaining data were statically tested and preprocessed using "Limma" R package from the Bioconductor project, which was conducted in the RStudio ver.1.1.423 programming environment. Third, Benjamini & Hochberg's false discovery rate method was applied to calculate the adjusted p-values. The genes were attributed to their related IDs, and the duplicated or ID-less genes were excluded. After sorting the genes according to their adjusted p-value, the significant genes with adjusted p-value < 0.01 were selected for further analyses, and the remaining ones were omitted. 
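A minimal R sketch of this preprocessing step may make it concrete. It assumes that the GSE63063 expression matrix and the stage labels have already been loaded; the object names and the example contrast are assumptions, not the authors' exact script.

```r
# Minimal sketch of the differential-expression filtering described above.
# `exprs_mat` (probes x samples, normalized) and `group` (factor with levels
# Normal / MCI / AD) are assumed to exist; names and the example contrast are
# illustrative, not the authors' exact script.
library(limma)

design <- model.matrix(~ 0 + group)
colnames(design) <- levels(group)

fit      <- lmFit(exprs_mat, design)
contrast <- makeContrasts(AD - Normal, levels = design)   # one example contrast
fit2     <- eBayes(contrasts.fit(fit, contrast))

# Benjamini-Hochberg adjusted p-values; keep genes with adjusted p < 0.01
tab       <- topTable(fit2, number = Inf, adjust.method = "BH")
sig_genes <- rownames(tab)[tab$adj.P.Val < 0.01]
length(sig_genes)   # in the paper, this kind of filtering retained 6,179 genes
```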
After preprocessing and removing outlier samples, we determined 6,179 genes which were utilized as the gene list. This list enabled us to construct the network and perform further analysis. Moreover, our dataset was narrowed down to 104 normal samples, 80 mild cognitive impairment samples, and 145 Alzheimer's disease samples. In total, the three stages of the disease comprised 329 samples, including 200 female and 129 male patients. The boxplots of Fig. 1 show the range and dispersion of samples according to age and gender at all three stages. Moreover, Table 1 shows the number of samples at each stage based on gender. Weighted gene co-expression network analysis (WGCNA). To construct the co-expression network, 6,179 genes from the 329 samples at three different stages were included. Figure 2 illustrates the scale-free fit index R² and the mean connectivity for different values of the soft threshold. Among the powers ranging from 1 to 20, the value of 4 was selected for β to gain the scale independence of the networks (Supplementary Figs. S1 and S2). Next, the adjacency matrix of the expression data was generated. Based on this matrix, we calculated the topological overlap matrix (TOM). The modules were selected using a tree-cut algorithm. After examining different parameters, the values 4 and 20 were used for the deepSplit and minimal module size parameters, respectively. The extracted modules were merged in the following step and were labeled with colors; a threshold of 0.14 was chosen to merge the modules. The merged modules of all stages are presented in the Supplementary file (Supplementary Figs. S3-S5). The preservation measure, expressed by the Z_summary index, was used to select the effective modules. Modules were considered strongly preserved when the Z_summary value was equal to or larger than 10, moderately preserved for values between 2 and 10, and not preserved for values equal to or smaller than 2. The modules whose Z_summary values were larger than 10 were strongly preserved and did not give us any information; therefore, we did not use them. However, according to the obtained values, there were no modules with Z_summary values smaller than 2. Consequently, we selected 3, 5, and 3 as thresholds for the Normal-MCI, MCI-AD, and Normal-AD module groups, respectively. After choosing these thresholds, the Z_summary values of the selected modules ranged from 3.45 to 4.56 for the Normal-MCI modules, from 5.25 to 6.91 for the MCI-AD modules, and from 3.41 to 5.33 for the Normal-AD modules. To obtain proper Z_summary values, we examined the deepSplit parameter and the network type in separate runs. Finally, the network type was set to signed hybrid, and the value of deepSplit was set to 4. Figure 3 illustrates the preservation median rank and the preservation Z_summary as a function of module size. Five modules of the normal stage were selected in comparison with the MCI expression data (Normal-MCI modules); the Z_summary values of these five modules were smaller than 4.5. Six modules of the MCI stage were chosen with Z_summary values smaller than 6.9 compared with the AD expression data (MCI-AD modules).
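As a side note before continuing with the selected module groups, the construction steps described above can be condensed into a short R sketch. It assumes a samples × genes expression matrix `datExpr` for one stage; the parameter values (β = 4, deepSplit = 4, minimum module size 20, merge height 0.14, signed hybrid network) are the ones reported in the text, and everything else is illustrative.

```r
# Sketch of the network-construction steps described above, using WGCNA.
# `datExpr` (samples x genes for one stage) is assumed; the parameter values
# are the ones reported in the text, everything else is illustrative.
library(WGCNA)

# Soft-threshold diagnostics: scale-free fit R^2 and mean connectivity (Fig. 2)
sft <- pickSoftThreshold(datExpr, powerVector = 1:20,
                         networkType = "signed hybrid")

beta    <- 4                                     # power chosen in the paper
adjMat  <- adjacency(datExpr, power = beta, type = "signed hybrid")
TOM     <- TOMsimilarity(adjMat)
dissTOM <- 1 - TOM

geneTree  <- hclust(as.dist(dissTOM), method = "average")
modules   <- cutreeDynamic(dendro = geneTree, distM = dissTOM,
                           deepSplit = 4, minClusterSize = 20)
modColors <- labels2colors(modules)

# Merge modules whose eigengenes are closer than the 0.14 threshold
merged       <- mergeCloseModules(datExpr, modColors, cutHeight = 0.14)
mergedColors <- merged$colors
```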
Furthermore, six modules of the normal stage were selected compared with the AD expression data (Normal-AD modules) that had Zsummary values smaller than 5.3. The selected modules are illustrated by their attributes in Table 2. Gene-miRNA bipartite network. This part aims to analyze the relations among the obtained genes and their related miRNAs by which they are regulated. However, after constructing three bipartite networks, hub miRNAs with the highest connectivity degree were selected to reduce the complexity and to focus on the important connections. In this section, 20 miRNAs and their connections were selected. Therefore, the genes were also filtered by the ones that were at the end of this connection. In the obtained subnetworks, which are shown in Fig. 4, there were 116 genes of the Normal-MCI subnetwork, 131 genes of the MCI-AD subnetwork, and 145 genes of Normal-AD subnetwork. Enrichment analysis Enrichment analysis of genes. To extract the important pathways using the DAVID database, the pathways with normal p-value < 0.05 were selected as significant on the studied gene list. The p-value of the most significant pathway in our experiment was equal to 0.0028 and involved six genes. This pathway is called Spliceosome and is found in Normal-MCI subnetwork. In MCI-AD subnetwork, the Herpes simplex infection was our substantial pathway. Its p-value was equal to 0.0063, and it included seven significant genes. Similar to the first group, in the third subnetwork, called Normal-AD, the important pathway was Spliceosome. Its p-value was equal to 0.005 and involved six genes. The tables which were extracted for biological pathways are available in the Supplementary File (Supplementary Tables S1-S3). Gene ontology analysis of these three subnetworks indicated that in the Normal-MCI subnetwork, the regulation of the gene metabolic process (p-value = 2.18e−04), which involved seven genes, was the most important process. Vesicle-mediated transport (p-value = 4.87e−04) with 22 genes, apoptotic process (p-value = 6.33e−04) with 24 genes, and regulation of RNA splicing (p-value = 6.89e−04) with 6 genes were the following important processes respectively. In the second subnetwork, called MCI-AD, posttranscriptional regulation of gene expression (p-value = 3.01e−05) with 14 genes, regulation of translation (p-value = 1.65e−04) with 11 genes, regulation of cellular amide metabolic process (p-value = 3.29e−04) with 11 genes, and regulation of cellular protein metabolic www.nature.com/scientificreports/ process (p-value = 7.70e−04) with 31 genes were the most important processes respectively. In the third subnetwork, called Normal-AD, regulation of gene metabolic process (p-value = 1.04e−06) with 10 genes, gene metabolic process (p-value = 7.60e−05) with 17 genes, viral process (p-value = 1.96e−04) with 20 genes, multi-organism cellular process (p-value = 2.15e−04) with 20 genes, vesicle-mediated transport (p-value = 2.28e−04) with 26 genes, interspecies interaction between organisms (p-value = 2.96e−04) with 20 genes, symbiosis, encompassing mutualism through parasitism (p-value = 2.96e−04) with 20 genes, and positive regulation of gene metabolic process (p-value = 6.85e−04) with 5 genes were the most significant processes respectively. All of the biological processes by related genes are available in Supplementary File (Supplementary Tables S4-S6). Enrichment analysis of miRNAs. At this stage, the hub miRNAs were evaluated using TAM tool 35 Detailed investigation over obtained biomarkers. 
First, the genes, which were regulated by the obtained miRNAs, were investigated. The first group involved nine genes that were obtained as a result of the intersection of three module groups. These genes were GOLGA1, HLA-B, MBOAT1, RABL2B, ARMC7, IL10RB, STX5, TFIP11, and VCP. The first one, GoLGA1, was introduced as one of the age-regulated genes 36 . The other gene, HLA-B, had high sensitivity and high specificity measurement values and was regarded to be a signal that showed the patients who suffered from hypersensitivity syndrome (HSS) 37 . One study compared AD and normal samples and revealed a significant difference in their HLA-B frequency 38 . The next gene, RABL2B, was introduced as a gene that is involved in a kind of neurological deficit called Phelan-McDermid Syndrome (PMS) 39,40 . IL10RB was obtained as one of the cell signaling molecules in the aging disease of the young population in 2014 41 . Another study used blood samples to determine early Alzheimer markers. It mentioned IL10RB as one of the best discriminators which distinguished between AD and normal samples 42 . A study in 2015 listed some previous studies that presented STX5 as the protein which plays a role in Alzheimer and Parkinson diseases 43 . Another study examined protein-protein interaction networks and their impact on gene-network analysis using AD gene expression data. It introduced TFIP11 as a significant hub gene 44 . The VCP gene is known to have a positive relationship with AD development risk based on the investigation of different types of dementia 45 . VCP mutations were investigated in another study that illustrated the vital role of these mutations in frontotemporal dementia 46 . Another study showed different genetic variants in genes like VCP which were associated with frontotemporal dementia and its related diseases 47 . According to another study, a mutation of VCP is related to Parkinson and Alzheimer diseases 48 . The next group involved the genes which belonged to the overlap between MCI-AD module groups and CTL-AD module groups. There were eight genes in this group, including ALDOC, APBA3, CHST14, DDX19A, HNRNPUL1, KCTD2, LAMTOR1, and RPA1. The first one, ALDOC, was found to be related to Alzheimer's disease in a study that investigated the proteomics in Alzheimer's brain 49 . Another study, which discussed this gene, examined AD pathogenesis and its related key regulators 50 . The second gene, APBA3, and its interaction with beta-amyloid highlight the importance of the examination of its genomic structure 51 . Similarly, another study investigated APBA3 as the gene which had a regulatory role in Alzheimer's disease 52 . It has been shown that the third gene in this group, CHST14, and its relationship with impaired cognitive function affect the learning and memory abilities 53 . The next one, DDX19A, is among the AD-associated genes. This issue has been proved by imaging-wide association study (IWAS) and transcriptome-wide association study (TWAS) 54 . The next one, KCTD2, was found to be related to AD based on the results of a relevant study 55 . Another study, which focused on the genetic similarity between AD and Ischaemic Stroke (IS), found that KCTD2 was associated with both of these diseases 56 . The last gene in this group, PRA1, was mentioned in a study that compared the expression of nucleonic excision repair (NER) in AD according to brain tissues and blood. In both of these cases, RPA1 showed lower expression in AD samples in comparison with the healthy ones 57 . 
Two genes of this group, including HNRNPUL1 and LAMTOR1, have not been found in clinical research studies. In this step, ten genes with larger degrees were selected from among the 107 genes which belonged to the overlap between CTL-MCI module groups and CTL-AD module groups to extend the exploration of the obtained results. These genes were PLAGL2, CREBRF, LCOR, ALDOA, LPP, KLF13, CANX, MRI1, STX16, and SLC38A1. One study showed that the variations of ALDOA were associated with Alzheimer's disease 58 . Another study, which indicated the biomarkers of AD pathology, revealed that ALDOA was one of the obtained ones 59 . A different study showed that LPP was suppressed considerably in MCI samples in comparison with the healthy ones 60 . Another study named some of the genes that had regulation changes between healthy and AD samples and argued that LPP was one of them 61 . KLF13 was introduced in a neurodegenerative disease study as one of the key regulators in Alzheimer's disease 62 . A similar study, which examined this disease between male and female samples, found that KLF13 existed in four male clinical traits in male patients 63 . The next one, CANX, was mentioned in a study as a target that had an important role in protein folding in AD cases 64 www.nature.com/scientificreports/ Moreover, it was mentioned in a comparative study on brain samples as an AD-related gene 65 . The other one, STX16, was among the genes which had expression changes in AD Frontal Cortex 66 and was indicated in another study as the gene that showed common expression changes in AD samples 67 . Pathological survey on the role of mammalian target of rapamycin complex showed that SLC38A1 is one of the significant genes in neurodegenerative disease 68 . Finally, a study examined the potential functions of Amyloid β peptide, which has an important role in Alzheimer's disease. It found that SLC38A1 was an effective gen 69 . Four of the genes in this group, including PLAGL2, CREBRF, LCOR, and MRI1, have not been found in clinical research studies. The first miRNA was mir-26b-5p. According to the studies published in 2013 and 2018, the identification of the key miRNAs, which are associated with AD and mir-26b-5p, is also reported as one of the down-regulated key miRNAs 70,71 . In another study, the authors reported the existence of a relationship between mir-26b-5p and Alzheimer's disease and argued that this miRNA was upregulated in Alzheimer's disease 72 . The miR-26b-5p was introduced as one of the signals which helped to distinguish sporadic behavioral variant of frontotemporal dementia from Alzheimer and healthy cases 73 . In two other studies, the researchers used miR-26b-5p as a previously known miRNA in brain diseases, especially Alzheimer's disease 74,75 . In another study, which was published in 2017, miR-26b-5p was one of the significant miRNAs because of the dysregulation that it caused between AD and normal samples 76 . In two other studies, AD samples and normal samples were compared. These studies found that miR-26b-5p was one of the significant regulators for the identified differentially expressed genes 77,78 . The second miRNA was miR-335-5p. This miRNA was reported as one of the miRNAs which were related to Alzheimer's disease and represented the classifiers of Parkinson's disease using dementia and Alzheimer' samples 79 . There is also a study that represented mir-335-5p as an upregulated biomarker of AD 80 . The same miRNA was introduced as an upregulated biomarker in a different study 81 . 
Another study, which utilized neuroimages and investigated the concordance of miRNA biomarkers related to AD, identified mir-335-5p as an upregulated miRNA 82 . There is another study that used Low-Frequency Pulsed Electromagnetic Field (LF-PEMF) and found that miR-335-5p was a target miRNA and had a role in biological pathways of the Alzheimer's disease 83 . Huynh, R.A., et al. investigated biomarkers of Alzheimer's diseases in the genome, blood and cerebrospinal fluid and compared the AD samples with the normal ones. They observed that the gene expression level of miR-335-5p increased in the normal samples 84 . The third miRNA, miR-92a-3p, was indicated as one of the regulator molecules which influenced transcriptional changes in Alzheimer's disease 85 . In another study, mir-92a-3p was identified as one of the miRNAs that showed significant upregulation 86 . Similarly, in another study, the researchers mentioned that mir-92a-3p was a downregulated miRNA in serum samples of Alzheimer's disease patients in comparison with the MCI patients' serum samples 87 . In a different study, which measured miRNAs in the cerebrospinal fluid (CSF) and the blood of AD and MCI patients, mir-92a-3p was detected as the most frequent miRNAs in dementia patients 88 . The fourth miRNA was miR-615-3p. In a study by Hoss, A.G, et al., this miRNA was detected as a significant signal of Huntington disease, which is a progressive brain disorder 89 . Likewise, a study by Hoss, A.G, et al. confirmed that mir-615-3p was a differentially expressed miRNA in Huntington disease 90 The fifth miRNA was miR-484. In a study, it was determined to be an important miRNA which differed considerably between healthy individuals and MS 92 . Similarly, one of the previously-mentioned studies indicated that miR-484 was an Alzheimer related miRNA 86 . The sixth miRNA was miR- 16-5p. It functioned as a deregulator in brain tissues based on a study that focused on Late-onset Alzheimer's disease (LOAD) 93 . Another study investigated the effect of curcumin on Alzheimer's disease and its neuroprotective role in this disease. The changes of the related miRNAs were assessed in this study. The study showed that miR-16-5p had a relationship with Alzheimer's disease 94 . A different study investigated Frontotemporal Dementia (FTD). It used some circulating miRNAs and showed that miR-16-5p underwent significant changes from healthy individuals to FTD patients 95 . This miRNA was also recognized in Young-onset Alzheimer Disease (YOAD), which is recognized by clinical diagnosis before the age of 65 96 . The next one, called miR-17-5p, has also been mentioned in many research studies that are related to Alzheimer's disease. Mir-17-5p was considered as an important miRNA in recognition of FTD 95 . Study on the overlapping molecules of cancer and neurodegeneration showed that miR-17-5p and miR-18d are two gene regulators in neurotransmission 97 . The miR-17-5p was also determined as an AD-related miRNA 98 . Moreover, miR-17-5p was found to play an effective role in the production of the amyloid precursor protein (APP) and neuronal apoptosis which are two Alzheimer-related proteins 99 . A different study investigated miR-17-5p and its intersectional role in aging diseases and cancer 100 . The next examined miRNA was miR-218-5p. As we examined clinical research studies, we found a study on the important miRNAs. The researchers of this study compared samples of MDD (Major Depressive Disorder), MCI, and AD patients. 
They argued that miR-218-5p was one of the ten top miRNAs whose expression differed conspicuously from MDD patients to MCI patients 101 . One study used plasma exosomal miRNAs to find the effective miRNAs of Alzheimer. The researchers of this study argued that miR-24-3p was one of these miRNAs 102 . Another study that highlighted the diagnostic role of miRNAs in AD showed that miR-24-3p was an important signal in samples of cerebrospinal fluid (CSF) assays 77 . A different study expressed that miR-24-3p showed a considerable negative correlation between the expression levels in serum and CSF of the normal samples 73 . Another study examined the effects of the human microRNAome on modulating cellular prion protein (PrP C ). The results of this study showed that miR- 124-3p Scientific RepoRtS | (2020) 10:12210 | https://doi.org/10.1038/s41598-020-69249-8 www.nature.com/scientificreports/ was an indirect regulator of PrP C72 . An attractive study concluded that the regulation of miR-124-3p prevented the abnormal hyperphosphorylation of Tau protein 103 . In another study, the researchers found that mir-124-3p was among the other effective miRNAs and locus coeruleus (LC) was the most affected region that must be considered in future studies for further investigation 104 . The mir-93-5p was the next miRNA which was found using serum data of AD and normal samples. A study showed that miR-93-5p had more changes in AD samples 105 . Another study revealed that miR-93-5p was one of the effective miRNAs in the case of MDD patients 106 . The miR-193b-3p was identified in a study by comparing the AD samples with the normal ones. This study examined a decrease in the value of miR-193b 107 . The miR-20a-5p has also been examined in several articles using the network-based method and has been regarded as a regulator in AD samples 85 . The second group had only one member which was miR-106b-5p. It belonged to the overlap between MCI-AD module groups and CTL-AD module groups. The examination of this miRNA shows that many studies have named it as a miRNA which is related to Alzheimer's disease. There is a study that showed the radiation-induced changes of miRNA-106b-5p in the blood was involved in the development of Alzheimer's diseas 108 . Another study proposed that miR-106b-5p was an upregulated miRNA in Alzheimer's disease. This claim was validated using the qRT-PCR analysis 81 . Another study named this miRNA as a blood-based miRNA which was related to AD in more than 34 studies 109 . The third group was miR-98-5p and was observed exclusively in CTL-MCI module group. This issue has been mentioned in some studies. For example, it was proposed as a novel therapeutic target for Alzheimer's disease because of its crucial role in the accumulation of Aβ 110 . It has been shown that the expression levels of this miRNA differ considerably between normal samples and Alzheimer's disease samples 111 . The fourth group included 6 miRNAs which were observed exclusively in MCI-AD module groups. They were investigated one by one similar to the previous studies. Five miRNAs in this group have not been found in clinical research studies. They include miR-4722-5p, miR-4768-3p, miR-1827, miR-940, and miR-30b-3p. miR-106a-5p was introduced in a study that considered it as an effective miRNA in AD and used it to examine the effect of Huperzine-A on β-Amyloid peptide accumulation to determine the relationship between brain damage and neuro-muscular system deficiency 112 . 
A similar study used this miRNA as an effective miRNA in Alzheimer's disease to express the Folic acid deficiency in amyloid-β accumulation 113 . Finally, a study showed that miR-106a-5p was an important biomarker of Alzheimer's disease and argued that it was a predictor variable in AD 114 . Likewise, the fifth group had 6 miRNAs which belonged to the intersection of CTL-MCI and CTL-AD module groups. The miRNA group members were miR-877-3p, miR-30a-5p, miR-30c-5p, miR-181a-5p, miR-142-3p and miR-15b-5p. The first one, miR-877-3p, was indicated as a miRNA which was effective in young-onset AD 96 . The second one, miR-30a-5p, was indicated as the miRNA which had a considerable and high expression in affected samples which were related to the early-onset familial Alzheimer's disease 115 . Another study showed the effectiveness of miR-30a-5p for the same disease in the opposite direction brain-derived neurotrophic factor 116 . The next miRNA, mir-30c-5p, was introduced as a differentially expressed miRNA between normal and Alzheimer's disease samples in two separate studies 81,117 . It was shown that the next miRNA, miR-181a-5p, was expressed at different levels in AD in comparison with the normal samples 118 . In another study, mir-181a-5p was shown to be an effective miRNA in Alzheimer's disease using SNAP-25 vesicular protein 119 . The fifth one was mir-142-3p. It was introduced as one of the miRNAs which had considerably different levels in AD samples and normal samples 120 . A different study showed that the expression level of two target genes caused a reduction in the risk of AD by reducing the expression level of mir-142-3p 121 . The last one was mir-15b-5p. It was examined using one of the Alzheimer's disease cell models named swAPP695-HEK293 and revealed the upregulation in the expression of mir-15b-5p 94 . Discussion In this article, co-expression network analyses were performed for three stages of Alzheimer's disease based on gene expressions. The experiments were performed on over 145 samples of Alzheimer stage, 80 samples of the MCI stage, and 104 samples which were at the healthy stage. There were 6,179 genes in the total integrated dataset, which were generated by the genes with adjusted p-values smaller than 0.01. After network reconstruction, the modules were specified and merged to gain an optimal structure. Next, the target miRNAs that were related to the selected genes were extracted, and the bipartite networks were constructed for each stage. Subnetworks in the previous steps were constructed by selecting the hub miRNAs, which had pivotal roles in the regulation of the genes to reach optimal results. Then, the lists of extracted genes and miRNAs for each of the subnetworks were used to draw the Venn diagram and to indicate the intersections of three subnetworks. The related diagrams of genes and miRNAs are presented in Fig. 5. The list of the genes and miRNAs of these modules are shown separately by their different intersections and are listed in the Supplementary file as Supplementary Tables S13 and S14. The listed genes and miRNAs in the above-mentioned tables were the proposed biomarkers of Alzheimer's disease and were obtained using our proposed method. Therefore, we investigated the previous research studies and medical experiments as well as clinical studies in the field of Alzheimer's disease. 
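Before that literature survey, the intersection step behind the Venn diagram mentioned above can be sketched in a few lines of R; the three character vectors of gene symbols (one per subnetwork) are assumed to exist already, and the same operations apply to the miRNA lists.

```r
# Sketch of the intersection step behind the Venn diagram mentioned above.
# The three character vectors of gene symbols (one per subnetwork) are assumed
# to exist already; the same operations apply to the miRNA lists.
gene_lists <- list(normal_mci = genes_normal_mci,
                   mci_ad     = genes_mci_ad,
                   normal_ad  = genes_normal_ad)

in_all_three <- Reduce(intersect, gene_lists)      # e.g. the nine-gene core set
mci_ad_and_normal_ad_only <- setdiff(
  intersect(gene_lists$mci_ad, gene_lists$normal_ad),
  gene_lists$normal_mci
)

sapply(gene_lists, length)
length(in_all_three)
```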
The examination of the results of the clinical and experimental studies and the investigation of the recently published authentic articles show that almost all of our discovered genes and miRNAs have been reported in different studies of Alzheimer's disease and neurological diseases. Most of them can be found in the latest articles. However, some of them have not been reported yet. Therefore, it can be claimed that the proposed biomarkers which were extracted using our methods can be real biomarkers that are related to Alzheimer's disease and should be examined by experimental studies. Some of our proposed biomarkers have been reported in aging-related diseases that are connected to Alzheimer's disease. The discovered genes of this study that have not been reported before, including MBOAT1, ARMC7, RABL2B, HNRNPUL1, LAMTOR1, PLAGL2, CREBRF, LCOR, and MRI1, are not associated with Alzheimer in the existing literature. Therefore, these genes are the landmark finding of our study, and we propose them as the biomarkers of Alzheimer's disease. Moreover, we introduced miR-615-3p as an Alzheimer-related miRNA which has previously been recognized as a biomarker of Huntington disease. Furthermore, we dealt with five miRNAs, including miR-4722-5p, miR-4768-3p, miR-1827, miR-940, and miR-30b-3p, which have not been reported in the previous studies. Consequently, they can be considered as new proposed biomarkers which have to be examined by clinical experiments. Finally, we speculated that our gathered biomarkers, which are listed in tables in the Supplementary file (Supplementary Tables S13 and S14), can be studied as potential biomarkers for the early detection of Alzheimer's disease. In summary, our proposed method aimed to conduct a prognostic study of Alzheimer's disease and used the gene co-expression network analysis method based on the GEO database. To this end, significant modules obtained from co-expression networks were utilized to construct bipartite (gene-miRNA) networks for the three stages of the disease. Therefore, we worked with three types of samples, each belonging to one of the normal, mild cognitive impairment, and Alzheimer's disease stages. This study identified the hub genes, which have the highest connectivity degrees and are regarded as potential prognostic biomarkers for Alzheimer's disease. The novel genes, including MBOAT1, ARMC7, RABL2B, HNRNPUL1, LAMTOR1, PLAGL2, CREBRF, LCOR, and MRI1, together with miRNAs comprising miR-615-3p, miR-4722-5p, miR-4768-3p, miR-1827, miR-940, and miR-30b-3p, are introduced as Alzheimer-related proposed biomarkers and should be examined in experimental as well as clinical studies. This study also points out that further research is needed to translate these findings into novel therapeutic approaches, including drug design and drug discovery, and into medical approaches to the treatment of Alzheimer's disease. Methods This part explains the methods which were used in this study step by step. Networks construction. To construct the networks, we detected the outlier samples using an optimal version of hierarchical clustering, which uses distance and averaging methods to cluster the study samples. The results of the clustering approach showed that there were only two outliers at the AD stage and no outliers at the other two stages. The WGCNA approach was utilized to analyze the gene expression data, and co-expression networks were constructed for all of the genes at each of the stages separately.
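A minimal R sketch of the outlier screening described here, using average-linkage hierarchical clustering of samples in the style of the WGCNA tutorials; the expression-matrix name and the cut height are assumptions.

```r
# Sketch of the outlier-sample screening described above, using average-linkage
# hierarchical clustering of samples. `datExpr` (samples x genes) is assumed;
# the cut height is dataset-dependent and chosen by inspecting the dendrogram.
library(WGCNA)

sampleTree <- hclust(dist(datExpr), method = "average")
plot(sampleTree, main = "Sample clustering to detect outliers", sub = "", xlab = "")

cutHeight <- 120                                   # assumed, read off the plot
clust <- cutreeStatic(sampleTree, cutHeight = cutHeight, minSize = 10)
keepSamples <- clust == 1                          # keep the main branch
datExpr <- datExpr[keepSamples, ]
# In the paper, only two AD-stage samples were removed in this way.
```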
In the next step, mean connectivity and scale dependency measures were calculated to choose the proper soft power and to reconstruct the network. Lastly, the soft threshold power was evaluated using network analysis functions to preserve more correlated genes based on scale-free topology. Module extraction. The dissimilarity matrix was obtained from the TOM matrix to apply the module analysis algorithm. This matrix was used to perform hierarchical clustering to recognize the potential modules. The modules were selected by using a tree-cut algorithm and experimenting with different values for the deepSplit and minimal module size parameters. The extracted modules were merged and labeled with colors. To merge the modules, we extracted the eigengenes of the modules; after calculating the dissimilarity of the eigengenes, the clustering method was applied to the eigengenes. At this point, module preservation analysis was performed to identify the meaningful modules at different stages of the disease. The module preservation function quantified the amount of change in the modules relative to the network of the next stage by calculating Z_summary. A large change was observed in the modules when the value of Z_summary was small. This was in line with the aim of this study, since we preferred to find the modules that underwent bigger changes. Bipartite gene-miRNA networks. In the next step, the genes of the Normal-MCI modules were merged, and the experimentally validated target miRNAs of these genes were extracted using the miRWalk2.0 database. The MCI-AD modules and Normal-AD modules underwent the same process. Three bipartite gene-miRNA networks were constructed from the genes and their target miRNAs using Cytoscape 3.7.0. The miRNAs with larger degrees had more connections with the selected genes and performed more regulatory roles in the network. Therefore, the 20 miRNAs with the highest degree values were chosen together with their connections and were used to reconstruct the network. Functional enrichment analysis. The Database for Annotation, Visualization and Integrated Discovery (DAVID) was used to study the biological mechanisms and gene ontology of the selected genes 122,123 . The biological processes of the selected genes were listed, and the nodes (p-value < 0.01) were reported as important processes. The Kyoto Encyclopedia of Genes and Genomes (KEGG) database 124 was used to perform the pathway enrichment analysis, and the significant genes (p-value < 0.05) were selected. Data availability The datasets which were processed in the present study can be provided by the corresponding author on reasonable request. The raw dataset is available from the NCBI Gene Expression Omnibus (GEO) under accession number GSE63063 (https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE63063).
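As a supplement to the bipartite-network step in the Methods, the degree-based hub-miRNA selection (performed in Cytoscape in the paper) can be sketched in R with igraph; the edge list and its column names are assumptions made for illustration.

```r
# Sketch of the degree-based hub-miRNA selection described in the Methods,
# done here with igraph instead of Cytoscape. `edges` is an assumed data frame
# with one row per validated gene-miRNA interaction and columns gene / miRNA.
library(igraph)

g <- graph_from_data_frame(edges[, c("miRNA", "gene")], directed = FALSE)

mirna_nodes <- unique(edges$miRNA)
deg <- degree(g, v = mirna_nodes)

# Keep the 20 miRNAs with the highest degree, plus the edges incident to them
hub_mirnas <- names(sort(deg, decreasing = TRUE))[1:20]
sub_edges  <- edges[edges$miRNA %in% hub_mirnas, ]
sub_g      <- graph_from_data_frame(sub_edges[, c("miRNA", "gene")],
                                    directed = FALSE)
```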
8,190
2020-07-22T00:00:00.000
[ "Biology", "Computer Science", "Medicine" ]
A High-Order CFS Algorithm for Clustering Big Data With the development of Internet of Everything such as Internet of Things, Internet of People, and Industrial Internet, big data is being generated. Clustering is a widely used technique for big data analytics and mining. However, most of current algorithms are not effective to cluster heterogeneous data which is prevalent in big data. In this paper, we propose a high-order CFS algorithm (HOCFS) to cluster heterogeneous data by combining the CFS clustering algorithm and the dropout deep learning model, whose functionality rests on three pillars: (i) an adaptive dropout deep learningmodel to learn features from each type of data, (ii) a feature tensormodel to capture the correlations of heterogeneous data, and (iii) a tensor distance-based high-order CFS algorithm to cluster heterogeneous data. Furthermore, we verify our proposed algorithm on different datasets, by comparison with other two clustering schemes, that is, HOPCM and CFS. Results confirm the effectiveness of the proposed algorithm in clustering heterogeneous data. Introduction With the rapid development of the Internet of Things, Internet of People, and Industrial Internet, big data analytics and mining have become a hot topic [1].One widely used technique of big data analytics and mining is clustering that aims to group data into several clusters according to similarities between the data objects [2].In 2014, Laio and Rodriguez proposed a novel clustering algorithm by fast search and finding of density peaks (CFS) published in Science Magazine [3].CFS is the most potential clustering technique because of its efficiency and high accuracy.However, CFS is limited in clustering big data because it cannot cluster heterogeneous data which is prevalent in big data. Heterogeneous data, different from the homogeneous data containing only one type of objects, involves multiple interrelated types of objects [4].Moreover, a heterogeneous data object is usually of complex correlations among different modalities.Therefore, heterogeneous data poses important challenges on clustering techniques.Recently, researchers have proposed some algorithms to cluster heterogeneous data [5].One of this type is based on the graph partition, for instance, the bipartite spectral algorithm, which clusters heterogeneous data by optimizing a unified objective function.However, this kind of methods is usually of low efficiency for clustering big datasets since they need to solve an eigendecomposition procedure.Another typical algorithm based on the nonnegative matrix factorization, such as SS-NMF, clusters heterogeneous data by revealing the relationships between different objects in a semantic space.In addition, Comrafs is developed for clustering heterogeneous data by constructing the Markov Rand Fields.Since this method is of high computational complexity, it is limited for large-scale heterogeneous data clustering.These algorithms could cluster heterogeneous data; however, they are hard to achieve desired clustering results since they do not model the high nonlinear correlations over multiple types of heterogeneous data objects effectively.Moreover, they are of high time complexity, leading to low efficiency in clustering heterogeneous data. 
In this paper, we propose a high-order CFS algorithm (HOCFS) for clustering heterogeneous data based on the dropout deep learning model.The dropout deep learning model was proposed by Hinton to prevent overfitting [6].It is especially useful in training large networks with small amount of samples.However, the dropout sets the same omitting probability with 0.5 in each hidden layer of the deep learning model, resulting in its ineffectiveness.Aiming at this problem, we propose an adaptive dropout deep learning model, which sets the omitting probability of each hidden layer according to the relationship between the omitting probability and the layer opposition.Then, we applied the proposed adaptive dropout deep learning model in feature learning for each type of data of every heterogeneous data object.Next, the algorithm uses the vector outer product to fuse the learned features to form a feature tensor for each heterogeneous data object.Finally, since the tensor distance can not only measure the distance between every two heterogeneous samples but also reveal the intrinsic correlations between different coordinates in the high-order tensor space, the tensor distance is applied to the CFS algorithm for clustering heterogeneous data represented by fused features. Finally, we compare our proposed algorithm with two representative data clustering techniques, namely, HOPCM and CFS, on two datasets, namely, NUS-WIDE and CUAVE in terms of * and Rand Index (RI). Therefore, the contributions of the paper are summarized as the following three aspects: (i) Current dropout deep learning models are of low effectiveness and efficiency in learning features for heterogeneous data.To tackle this problem, the paper proposes an adaptive dropout deep learning model to learn features for each type of data and then fuses the learned features to form a feature tensor for each heterogeneous data object. (ii) To measure the similarity between heterogeneous data objects in high-order tensor space, the paper applies the tensor distance in the clustering process. (iii) Conventional CFS algorithm cannot cluster heterogeneous data directly because it works in the vector space.The paper extends the CFS algorithm from the vector space to the tensor space for clustering heterogeneous data represented by the feature tensors. Preliminaries This section presents the technique preliminaries about our scheme, including the stacked autoencoder, dropout, and the CFS clustering algorithm.The stacked autoencoder is presented first, followed by the CFS clustering algorithm. Stacked Autoencoder (SAE) and Dropout.The stacked autoencoder (SAE) that is one important example of deep learning models has been widely employed in supervised feature learning for many applications [7].SAE is built to learn hierarchical features of data by stacking multiple basic autoencoders (BAEs) as shown in Figure 1. 
Given an input x, a BAE first encodes it into a hidden representation h = f(W^(1) x + b^(1)) by an encoding function f. Then, the BAE reconstructs the input from the hidden representation h to a reconstruction x̂ by a decoding function g: x̂ = g(W^(2) h + b^(2)), where θ = (W^(1), b^(1); W^(2), b^(2)) denotes the parameters of the autoencoder, and the functions f and g typically adopt the sigmoid function: f(z) = g(z) = 1 / (1 + e^(−z)). To train the parameters θ of the autoencoder, an objective function with a weight-decay term that is used to prevent overfitting is defined as follows: J(θ) = L(x, x̂) + λ ‖W‖², where L(x, x̂) is the reconstruction error, ‖W‖² is the squared norm of the weight matrices, and λ is a hyperparameter used to control the strength of the regularization. The stacked autoencoder is a fully connected model and it involves many redundant connections. Therefore, it usually produces overfitting in real applications. Aiming at this problem, Hinton proposed dropout to reduce overfitting by preventing co-adaptation of feature detectors in deep learning models. It randomly omits half of the feature detectors on each training sample to prevent a hidden unit from relying on other hidden units being present. Dropout was proved to be especially effective and efficient in training a large neural network with a small training set. Clustering by Fast Search and Finding of Density Peaks (CFS). CFS is the latest clustering algorithm proposed by Laio and Rodriguez in Science Magazine in 2014 [3]. It is highly robust and efficient. More importantly, it can find clusters of arbitrary shape and determine the number of clusters automatically. Several experiments have demonstrated its superiority in efficiency and effectiveness over previous algorithms for clustering large amounts of data. Therefore, it has become one of the most promising algorithms for clustering big data. The key of the CFS algorithm lies in the characterization of cluster centers. Particularly, the algorithm basically assumes that cluster centers should be surrounded by neighbor objects with lower local density and be farther away from other objects with a higher local density. Based on this assumption, CFS defines two quantities for every data object x_i, the local density ρ_i and the minimum distance δ_i from any other object with higher density: ρ_i = Σ_j χ(d_ij − d_c), with χ(x) = 1 if x < 0 and χ(x) = 0 otherwise, and δ_i = min_{j: ρ_j > ρ_i} (d_ij), where d_c represents a cutoff distance. According to Laio and Rodriguez, d_c can be chosen so that the average number of neighbors is around 1% to 2% of the total number of objects, which yields good clustering results. For the object with the highest density, its distance is taken as δ_i = max_j (d_ij). In the CFS algorithm, cluster centers are recognized as the objects with a large value of γ_i that is defined in (5): γ_i = ρ_i · δ_i. Problem Statement Consider a dataset with n heterogeneous data objects X = {x_1, x_2, . . . , x_n} and assume that each object can be represented by a feature tensor. The task of heterogeneous data clustering is to classify the dataset into groups according to their similarity such that the objects belonging to the same cluster share as much similarity as possible. Based on the analysis in the previous parts, heterogeneous data poses many challenges for clustering techniques. We discuss the key issues in the three following aspects: (1) Feature Learning of Heterogeneous Data. Feature learning is the fundamental step for heterogeneous data clustering. In fact, many feature learning algorithms, especially methods based on deep learning, have been well studied in recent years. However, most of them struggle to learn features for heterogeneous data. Although the deep computation model can learn features for heterogeneous data, it is of low accuracy and efficiency since it cannot avoid overfitting.
(2) Similarity Measurement for Heterogeneous Data.Similarity measurement is the key to one clustering technique.There are a lot of metrics for measuring the similarity between two objects.However, they can only measure the distance between homogeneous objects represented by feature vectors because they work in the vector space.A heterogeneous object is typically represented by a feature tensor, making most of current metrics hard to calculate the similarity for heterogeneous data objects. (3) Clustering Technique for Heterogeneous Data.Typically, a heterogeneous object is represented by a feature tensor.However, most of clustering techniques High-order CFS clustering Joint representation → feature tensor including the CFS algorithm work only in the vector space, resulting in failure to cluster heterogeneous data in the high-order tensor space. High-Order CFS Algorithm Based on Dropout Deep Computation Model In this section, we describe the details of the proposed highorder CFS algorithm based on the dropout deep learning model for clustering heterogeneous data.The proposed algorithm works in three stages: unsupervised feature learning, feature fusion, and high-order clustering, which is shown in Figure 2. In the first stage, each type of data in the heterogeneous dataset is separately learned by the proposed adaptive dropout deep learning model.In the second stage, the proposed algorithm uses the vector outer product to fuse the learned futures to form a feature tensor as the joint representation of each object.Finally, the proposed algorithm extends the conventional CFS technique from the vector space to the tensor space for clustering the heterogeneous dataset. Feature Learning Based on the Adaptive Dropout Deep Learning Model.In the dropout deep learning model, each hidden unit is randomly omitted from the network always with a constant probability of 0.5.This way will ignore the relationship between the omitting probability and the layer opposition, resulting in a low effectiveness of deep learning models in heterogeneous data feature learning.A large number of studies demonstrate that the fundamental layers of a deep architecture share many common characters, implying that the dropout in the lower layers has more generalization function than that in higher layers.Therefore, the omitting probability of the dropout should decay with the layers becoming higher. where ≤ 9 denotes the number of hidden layers in the deep learning model and represents the position of the layer.Function ( 6) has the following properties: (1) it is monotonically decreasing. (2) The omitting probability is 0.5 for the middle hidden layer. (1) By the assumption, function () is continuously differentiable and we may write which implies that ( 6) is a strictly decreasing function.Particularly, the omitting probability of the dropout should decay with the layers becoming higher. We can get the adaptive dropout deep learning model by applying the distribution function of the omitting probability to the deep learning model outlined in Algorithm 1. In the proposed high-order CFS algorithm, the adaptive dropout deep learning model is used to learn features of each type of data of the heterogeneous data. 
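Since the explicit form of the omitting-probability function (6) is not reproduced above, the following R sketch only encodes the two stated properties, namely monotone decay with the layer position and the value 0.5 at the middle hidden layer, using a simple linear schedule as a stand-in; the end-point probabilities are assumptions.

```r
# Illustrative stand-in for the layer-dependent omitting probability. The exact
# function (6) is not reproduced above, so this sketch only encodes the two
# stated properties: monotone decrease with the layer position and the value
# 0.5 at the middle hidden layer. The linear form and end points are assumptions.
omit_prob <- function(l, L, p_bottom = 0.7) {
  stopifnot(L >= 2)
  mid <- (L + 1) / 2                    # middle hidden layer
  p   <- 0.5 + (p_bottom - 0.5) * (mid - l) / (mid - 1)
  pmin(pmax(p, 1 - p_bottom), p_bottom) # clamp to [1 - p_bottom, p_bottom]
}

L <- 5
p <- omit_prob(1:L, L)
round(p, 2)                             # 0.70 0.60 0.50 0.40 0.30

# Applying the schedule: drop each unit of hidden layer l with probability p[l]
h    <- matrix(runif(4 * 10), nrow = 4)             # toy activations, layer l = 2
mask <- matrix(rbinom(length(h), 1, 1 - p[2]), nrow = nrow(h))
h_dropped <- h * mask
```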
Feature Fusion Using Vector Outer Product. The vector outer product, denoted by ⊗, is one of the most widely used operations in mathematics. If u is an m-dimensional vector and v is an n-dimensional vector, their outer product produces an m × n matrix A = u ⊗ v, in which each entry is defined as A_ij = u_i · v_j, where u_i and v_j are entries of u and v, respectively. One example of the vector outer product is shown in (6). More generally, the outer product of vectors a_1 ∈ R^{I_1}, a_2 ∈ R^{I_2}, . . ., a_N ∈ R^{I_N} produces an N-order tensor T ∈ R^{I_1 × I_2 × ⋅⋅⋅ × I_N}, T = a_1 ⊗ a_2 ⊗ ⋅⋅⋅ ⊗ a_N, in which each entry is the product of the corresponding vector entries, T_{i_1 i_2 ⋯ i_N} = a_1(i_1) a_2(i_2) ⋯ a_N(i_N).

After using the adaptive dropout deep learning model to learn features of heterogeneous data, each type of data can be represented by a feature vector. In particular, for a heterogeneous dataset in which each object consists of one image, one text, and one piece of video, three vectors, u, v, and w, are used to represent the feature vectors learned by the adaptive dropout deep learning model for the image, text, and video, respectively. In this subsection, such feature vectors are fused by the vector outer product to form one feature tensor as the joint representation of an object in the heterogeneous dataset according to the following rules: (1) for an object with only one image and one text, its feature tensor is represented by T = u ⊗ v; (2) for an object with only one image and one piece of video, its feature tensor is represented by T = u ⊗ w; (3) for an object with only one text and one piece of video, its feature tensor is represented by T = v ⊗ w; (4) for an object with one image, one text, and one piece of video, its feature tensor is represented by T = u ⊗ v ⊗ w.

The High-Order CFS Clustering. As discussed in Section 2, the conventional CFS algorithm cannot cluster heterogeneous data directly because it works in the vector space while each object in the heterogeneous dataset is represented by a feature tensor. To tackle this problem, we propose a high-order CFS algorithm for clustering heterogeneous data. To calculate the distance between two points in the high-order tensor space, represented by two tensors X, Y ∈ R^{I_1 × I_2 × ⋅⋅⋅ × I_N}, they need to be unfolded into the corresponding vectors; each entry of a tensor is mapped to a unique position of the unfolded vector according to its indices, so that the distance between two objects can be computed between their unfolded vectors. The proposed high-order CFS clustering algorithm (HOCFS) based on the feature tensor is outlined in Algorithm 2.

Performance Evaluation of Adaptive Dropout Model. In this part, we assess the adaptive dropout deep learning model on the STL-10 and CIFAR-10 datasets by comparison with the conventional dropout model.

Experiments on the STL-10 Dataset. We initially explored the effectiveness of adaptive dropout using STL-10, a widely used benchmark for machine learning algorithms. It contains 10 classes with 500 training images and 800 test images per class, together with 100,000 unlabeled images for unsupervised learning. We combine the adaptive dropout distribution with stacked autoencoders to train two deep learning models: one has 4 hidden layers and the other has 5 hidden layers, and both have one logistic regression layer on top. For the adaptive dropout deep learning model, we use the proposed algorithm to set the omitting rate of hidden units, while the conventional dropout deep learning model uses a fixed omitting rate of 0.5 for hidden units. The classification results are presented in Figures 3 and 4.
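As an aside before the experimental results, the fusion and center-selection steps described in the two preceding subsections can be sketched as follows. This is a minimal illustration, not the authors' implementation: the modality dimensions, the random data, and the number of centers are arbitrary, the cutoff is taken as a low quantile of the pairwise distances (loosely following the 2% heuristic mentioned earlier), and the local density uses the hard-cutoff definition from the CFS description above.

```python
import numpy as np

def fuse(*vectors):
    """Joint representation: outer product of the per-modality feature vectors."""
    tensor = vectors[0]
    for v in vectors[1:]:
        tensor = np.multiply.outer(tensor, v)
    return tensor

def hocfs_centers(tensors, num_centers, dc_quantile=0.02):
    """Pick cluster centers by the density-peak criterion on unfolded tensors."""
    X = np.stack([t.ravel() for t in tensors])            # unfold tensors to vectors
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    d_c = np.quantile(d[np.triu_indices(len(X), k=1)], dc_quantile)
    rho = (d < d_c).sum(axis=1) - 1                       # local density, excluding self
    delta = np.empty(len(X))
    for i in range(len(X)):
        higher = np.where(rho > rho[i])[0]
        delta[i] = d[i].max() if higher.size == 0 else d[i, higher].min()
    gamma = rho * delta                                   # centers have large rho and delta
    return np.argsort(gamma)[::-1][:num_centers]

rng = np.random.default_rng(1)
objects = [fuse(rng.random(8), rng.random(6), rng.random(4)) for _ in range(50)]
print(hocfs_centers(objects, num_centers=3))
```

In the full HOCFS procedure (Algorithm 2), each remaining object would then be assigned to the same cluster as its nearest neighbor of higher density, as in the original CFS algorithm.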
From Figures 3 and 4, the classification error decreases as the number of epochs increases. The adaptive dropout model with 4 hidden layers achieves the best classification error rate of 0.10, while the best classification error rate given by the conventional dropout model is 0.12, which indicates that our proposed model performs better than the conventional dropout model in classifying the STL-10 dataset.

Experiments on the CIFAR-10 Dataset. CIFAR-10 is a benchmark task for object recognition, consisting of 60,000 color images in 10 classes, with 6,000 images per class. These images were labeled by hand to produce 50,000 training images and 10,000 test images. We built a classification network with three convolutional layers, three pooling layers, and two fully connected layers to explore the effectiveness of the adaptive dropout model on the CIFAR-10 dataset. Each convolutional layer is followed by its own ReLU layer and a dropout layer. For the adaptive dropout deep learning model, we use the proposed algorithm to set the omitting rate of hidden units, while the conventional dropout deep learning model uses a fixed omitting rate of 0.5. The classification results are presented in Figure 5. From Figure 5, the error rate produced by the adaptive dropout model is lower than that produced by the conventional dropout model in most cases. More importantly, the best error rate given by the conventional dropout model is 0.156, which is reduced to 0.136 by the adaptive dropout model. This implies that the proposed model works much better than the conventional dropout model for CIFAR-10.

Performance Evaluation of the High-Order CFS Algorithm. In this part, we evaluate the high-order CFS clustering algorithm by comparison with the HOPCM algorithm and the conventional CFS algorithm on two representative heterogeneous datasets, namely NUS-WIDE and CUAVE, in terms of * and the Rand Index (RI). HOPCM was developed in 2015 for clustering heterogeneous data by combining the autoencoder model and the possibilistic c-means algorithm [9]. For the conventional CFS algorithm, we perform the same preprocessing step as for our proposed algorithm: we first use the adaptive dropout deep learning model to learn features of the texts, audios, and images of each object and then form a feature vector for the object by concatenating the learned features. Finally, the Euclidean distance is applied by the conventional CFS algorithm to cluster the heterogeneous dataset in which each object is represented by a learned feature vector. The evaluation criteria are described in Section 6.1, followed by the experimental results.

Experiments on the NUS-WIDE Dataset. The NUS-WIDE dataset is a large-scale annotated image set consisting of 269,648 images. To compare the proposed algorithm with the HOPCM algorithm and the conventional CFS algorithm fairly, we use the same image dataset collected from NUS-WIDE as in [9], which consists of 8 different subsets, each with 10,000 annotated images falling into 14 categories. First, we carried out the experiments on the overall image set five times. The clustering results are shown in Figures 6 and 7. Figure 6 shows the clustering result in terms of * on the overall dataset. We observe that the proposed algorithm obtained the lowest values of * in most cases, which implies that the proposed algorithm produced the most accurate clustering centers.
From Figure 7, HOCFS produced the highest values of RI in most cases, which indicates that HOCFS performs best in clustering the NUS-WIDE dataset. Moreover, the conventional CFS algorithm performs worst in terms of * and RI, demonstrating that the proposed algorithm can effectively capture the complex correlations over the heterogeneous data by applying the vector outer product to feature fusion and using the tensor distance to measure the similarity between two objects. Next, we carried out the experiment on the 8 subsets, 5 times each, to evaluate the robustness of the clustering algorithms. Tables 1 and 2 present the average clustering results over the 5 runs on every subset. From Tables 1 and 2, the average values of * obtained by HOCFS are the lowest for each subset, while the average values of RI obtained by HOCFS are significantly larger than those obtained by HOPCM and CFS. In other words, the proposed algorithm produced the best clustering results in terms of * and RI for the NUS-WIDE dataset.

Experiments on the CUAVE Dataset. CUAVE is a typical multimodal audio-visual dataset consisting of the digits 0 to 9 spoken by 36 individuals. To assess HOCFS for clustering heterogeneous data, we added annotations to each object, following [9]. We first carried out the experiment on the CUAVE dataset 5 times to assess HOCFS for clustering heterogeneous data in terms of RI. The result is presented in Figure 8. According to Figure 8, the value of RI obtained by HOCFS is the highest in each experiment, implying that the proposed algorithm produced the best clustering result for the CUAVE dataset in terms of RI. On the one hand, the proposed algorithm uses the hybrid stacked autoencoder model to learn features of each object in the CUAVE dataset, while HOPCM only uses the basic autoencoder model to learn features, leading to a more accurate clustering result for the proposed algorithm compared with HOPCM. On the other hand, HOCFS fuses the learned features of each modality to capture the nonlinear correlations over the multiple modalities of each object, while CFS forms the feature vector for each object by simply concatenating the learned features. Thus, the proposed algorithm performed best for clustering the CUAVE dataset. Next, we evaluate the robustness of the proposed algorithm by generating three different subsets, each with a distinct combination of two modalities. We carried out the experiment on these subsets 5 times. The results are shown in Figures 9-11. According to Figures 9-11, the proposed algorithm outperformed HOPCM and CFS, since HOCFS obtained higher values of RI than the other two algorithms in most cases, especially for clustering the text-audio subset. In other words, the proposed algorithm produced the best clustering results in terms of RI for the CUAVE subsets. Finally, we studied the relationship between the clustering result and the different combinations of modalities by analyzing the clustering results, as shown in Table 3. From Table 3, the best clustering result is always produced on the overall dataset, implying that the clustering of heterogeneous data relies on the joint features of the image-text-audio modalities. Moreover, the proposed algorithm produced the worst clustering result on the text-audio subset, which demonstrates that features learned from the text-audio modalities alone cannot effectively represent the objects in the CUAVE dataset.
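Since the Rand Index is the external criterion used in Figures 7-11 and Tables 2-3, a small reference implementation may help make the reported values concrete; it follows the standard definition (the fraction of object pairs on which two partitions agree) and is not taken from the authors' code.

```python
from itertools import combinations

def rand_index(labels_true, labels_pred):
    """Rand Index: fraction of object pairs on which two partitions agree."""
    agree = 0
    pairs = list(combinations(range(len(labels_true)), 2))
    for i, j in pairs:
        same_true = labels_true[i] == labels_true[j]
        same_pred = labels_pred[i] == labels_pred[j]
        agree += (same_true == same_pred)
    return agree / len(pairs)

print(rand_index([0, 0, 1, 1, 2], [0, 0, 1, 2, 2]))  # 0.8
```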
Conclusion. In this paper, we proposed a high-order CFS algorithm for clustering heterogeneous data. One contribution of the paper is to devise an adaptive dropout deep learning model and to apply it to learning features of each type of data. Furthermore, the vector outer product is used to fuse the learned features of the different modalities into a feature tensor, and the conventional CFS algorithm is extended from the vector space to the tensor space to cluster the heterogeneous data.

Figure 2: The architecture of the proposed scheme.
Figure 6: Clustering result on NUS-WIDE in terms of *.
Figure 7: Clustering result on NUS-WIDE in terms of RI.
Figure 8: Clustering result on CUAVE in terms of RI.
Table 1: Clustering result on NUS-WIDE in terms of *.
Table 2: Clustering result on NUS-WIDE in terms of RI.
5,100.2
2016-07-25T00:00:00.000
[ "Computer Science" ]
Nearly AdS2 holography in quantum CGHS model In light of recent developments in nearly AdS2 holography, we revisit the semiclassical version of two-dimensional dilaton gravity proposed by Callan, Giddings, Harvey, and Strominger (CGHS) [1] in the early 90’s. In distinction to the classical model, the quantum-corrected CGHS model has an AdS2 vacuum with a constant dilaton. By turning on a non-normalizable mode of the Liouville field, i.e. the conformal mode of the 2d gravity, the explicit breaking of the scale invariance renders the AdS2 vacuum nearly AdS2. As a consequence, there emerges an effective one-dimensional Schwarzian-type theory of pseudo Nambu-Goldstone mode-the boundary graviton-on the boundary of the nearly AdS2 space. We go beyond the linear order perturbation in non-normalizable fluctuations of the Liouville field and work up to the second order. As a main result of our analysis, we clarify the role of the boundary graviton in the holographic framework and show the Virasoro/Schwarzian correspondence, namely that the 2d bulk Virasoro constraints are equivalent to the graviton equation of motion of the 1d boundary theory, at least, on the SL(2, R) invariant vacuum. Introduction The AdS 2 space makes a universal appearance in the near-horizon limit of extremal black holes. The AdS/CFT correspondence [2][3][4] can be successfully applied to the counting of degeneracy of microstates for extremal black holes [5,6]. However, from the viewpoint of holography, since there cannot be finite energy excitations in asymptotically AdS 2 spaces due to large long-distance backreactions [7], there is no dynamics in AdS 2 /CFT 1 and what we can learn from it is only degeneracy of ground states. From the viewpoint of black hole physics, it is important to go beyond extremality in order to study black hole evaporations and the information paradox. To address these issues, nearly AdS 2 (NAdS 2 ) holography was pioneered by Almheiri and Polchinski [8]: the conformal invariance of AdS 2 was broken by an introduction of an explicit energy scale and the holographic study of nearly AdS 2 geometry was initiated for a class of 2d dilaton gravity models in which backreactions due to the symmetry breaking scale are under control and can be studied analytically. The functions U and V of the dilaton Φ specify the models of one's interest. The Jackiw-Teitelboim (JT) model [9,10], which has been most studied in recent developments, is given by the choice U = 0 and V (Φ) = Φ, whereas the (classical) CGHS JHEP01(2020)178 model [1], whose semi-classical version is of our interest, corresponds to U (Φ) = 1/Φ and V (Φ) = −2λ 2 Φ. As an indication of physics of near-extremal black holes, the JT model, for example, captures the first order correction κ −1 T to the entropy of near extremal black holes, S(T ) = S 0 + κ −1 T + O(T 2 ), where S 0 is the entropy of extremal black holes and κ is the energy scale of symmetry breaking [8,11,12]. More recent developments have been boosted by the connection between the NAdS 2 gravity and the SYK model [11][12][13][14][15][16][17][18][19][20]. 1 The latter is an exactly solvable quantum many-body system with an emergent near conformal invariance [28,29]. Both are related to black hole physics in higher dimensions. In fact, the SYK model saturates the quantum chaos bound which is believed to be a smoking gun for the existence of gravity duals [30]. 
Moreover, the boundary effective theory of the NAdS 2 gravity has turned out to be a Schwarzian theory which also emerges in the soft sector of the SYK model. In light of these developments in nearly AdS 2 holography, in this paper, we revisit a quantum-corrected version of the CGHS model [1] as an alternative to the JT model. The classical CGHS model receives a quantum correction due to conformal anomaly described by the well-known non-local Polyakov action [31]. For a large number N of massless scalars, the CGHS model, including the anomaly correction, can be studied semi-classically. For our convenience at the risk of being a misnomer, we refer to it as the quantum CGHS (qCGHS) model. It has been known that the qCGHS model has an exact AdS 2 vacuum with a constant dilaton [32]. This offers us an opportunity to study the NAdS 2 gravity in the qCGHS model. In the JT model, the scale of symmetry breaking was introduced by the dilaton deformation which grows near the boundary of the AdS 2 space and renders the AdS 2 vacuum nearly AdS 2 [8,11,12]. In the qCGHS model, in contrast, the dilaton is a constant and the scale is instead introduced by turning on a non-normalizable mode of the Liouville field, i.e. the conformal mode of the 2d gravity, in much the same way as in the Liouville theory studied in this context in [33]. As a consequence, there emerges a Schwarzian theory on the boundary as in the case of the JT model and the Liouville theory. The Schwarzian theory is an effective theory of pseudo Nambu-Goldstone boson -the boundary graviton -associated with the spontaneous breaking of the reparametrization symmetry down to the SL(2, R) subgroup in which the explicit symmetry breaking scale renders the effective action finite in a similar way to the QCD chiral Lagrangian with the pion decay constant. Owing to the solvability of the Liouville equation, we are able to study the nonnormalizable mode beyond the linear order. We can, in principle, go to arbitrary higher orders, but we content ourselves with working up to the second order in detail. As a main result of our analysis, we clarify the role of the boundary graviton in the holographic framework, which is a degree of freedom somewhat atypical in the standard holography. As we will show, the graviton equation of motion of the 1d boundary theory is equivalent to the 2d bulk Virasoro constraints, at least, on the SL(2, R) invariant vacuum. This paper is organized as follows: in section 2 we will give a brief review on the qCGHS model and its exact AdS 2 vacuum as well as more general solutions on which our JHEP01(2020)178 discussions that follow are based. We will then begin to study nearly AdS 2 holography in the qCGHS model in section 3. We will first discuss the non-normalizable mode of the Liouville field which renders the AdS 2 vacuum nearly AdS 2 . We will then construct the fully-backreacted NAdS 2 geometry and use it to find the 1d boundary effective action up to the second order in the non-normalizable Liouville fluctuation. In section 4, by using the boundary action derived in section 3, we will show the (conditional) equivalence between the 2d bulk Virasoro constraints and the graviton equation of motion of the 1d boundary theory. Many of the computational details will be relegated to appendices A and B. Finally, we will discuss our results and conclude with directions for the future work in section 5. 
The quantum CGHS model The CGHS model [1] is a model of 2d dilaton gravity which arises as the effective twodimensional theory of extremal dilatonic black holes in four and higher dimensions [34][35][36][37][38][39] and is defined by the action where g, φ and f i are the metric, dilaton and massless matter fields, respectively, and λ 2 is a cosmological constant. The matter fields f i originate from Ramond-Ramond fields in type II superstring theories. This model has been extensively studied in the early 90's as a model of evaporating black holes. Remarkably, the model is classically solvable and has a simple eternal black hole solution in an asymptotically flat and linear dilaton spacetime. Moreover, it can describe a formation and the subsequent evaporation of the black hole, and it was hoped that significant insights into information paradox might be gained by studying this model and its variants. See, for example, for the review [40,41]. Quantum mechanically, the classical action (2.1) is corrected by conformal anomaly described by the well-known non-local Polyakov action [31] where N − 24 = (N + 2) − 26: N is due to the massless matter fields, 2 from the dilaton and the conformal mode of the 2d metric, and −26 from the diffeomorphism bc ghosts. 2 Thus the quantum-corrected CGHS model is defined by the action To be precise, this is a semi-classical version of the CGHS model. Nevertheless, for our convenience, we shall refer to it as quantum CGHS model (qCGHS) in the rest of the paper. JHEP01(2020)178 In the conformal gauge the non-local Polyakov action becomes local and is given by the Liouville action, and the qCGHS action takes the form Here and hereafter we consider the large N limit in which N − 24 can be replaced by N . The equations of motion for the Liouville field ρ, dilaton φ and matter fields f i are given, respectively, by In addition, this system is subjected to the Virasoro constraints, i.e. the equations of motion for g ±± : The last quantities t ± reflect the non-locality of the Polyakov action and are determined by the choice of the vacuum. 3 The quantum CGHS model is no longer solvable and there is no simple analytic black hole solution even though there is a modified solvable variant of the qCGHS model known as the RST model proposed in [40,44] and extensively studied thereafter. For the purpose of holography, however, we are interested in asymptotically AdS 2 spacetimes. Indeed, there exists an AdS 2 vacuum with a constant dilaton in the quatum CGHS model [32]: where x ± = t ± z are the lightcone coordinates in the Poincaré patch of AdS 2 . Moreover, there exist a more general class of solutions obtained by the reparametrizations x + → A(x + ) and x − → B(x − ) [45]: 12) 3 An elegant and convenient way to see it explicitly is to introduce an auxiliary field ϕ obeying ϕ = R in terms of which the non-local Polyakov action can be rewritten as SP = N 96π d 2 x √ −g(−ϕ ϕ+2ϕR) [42,43]. JHEP01(2020)178 where we introduced the Schwarzian derivative defined by Note that the choice of t ± corresponds to ϕ + ( Nearly AdS 2 holography in qCGHS model The AdS 2 space appears universally in the near horizon limit of extremal black holes as a 2dimensional component of higher dimensional spacetimes. In contrast to higher dimensional counterparts, however, the AdS 2 boundary conditions are not consistent with finite energy excitations due to large long-distance backreactions [7]. 
From the black hole viewpoint, a mass gap is developed in the near horizon region and the AdS 2 /CFT 1 correspondence can only describe the ground state degeneracy. In order to have nontrivial dynamics, one must therefore introduce a new scale and enforces a deviation from the pure AdS 2 space which does not die off near the boundaries. This necessitates turning on a non-normalizable mode dual to an irrelevant operator in conformal mechanics. From the extremal black hole perspective, this deformation effectively undoes the near horizon decoupling and enables excursions into the region of spacetime corresponding to UV of the dual field theory. To realize this scenario, we first cut off the AdS 2 space near its boundary at a small finite z. More precisely, we consider the spacetime (2.11) with A = B. The resulting spacetime is a reparametrization of the Poincaré AdS 2 by which near the boundary becomes where the map t → B(t) is the time reparametrization on the cutoff boundary. It is, however, important to note that B(t) is not a mere time reparametrization but physical: a different reparametrization function B(t) results in a different t ± in (2.9) and (2.12). In other words, a change to B(t) results in a change to the vacuum or the boundary condition. This then implies that physical observables such as correlation functions do depend on B(t). Note, however, that there is a subset of B(t) for which t ± = 0: This is a Möbius transformation of t. It can be interpreted as meaning that the reparametrization symmetry is spontaneously broken to SL(2, R) and B(t) is the Nambu-Goldstone boson associated with the broken symmetry. In the meantime, the conformal factor of the metric (2.11) has the boundary expansion JHEP01(2020)178 The finite part in the expansion reminds us of the Brown-Henneaux asymptotics of the AdS 3 space [46] and it may thus provide another perspective: B(t) can be thought of as the boundary graviton living in the cutoff surface at a small z [11]. Non-normalizable mode and symmetry breaking scale In the case of Jackiw-Teitelboim (JT) gravity [9,10], the new scale φ r to deform the AdS 2 vacuum is introduced through the dilaton which grows as φ ∼ φ r /z near the boundary [11]. In contrast, as we will see, in the case of the qCGHS model, the dilaton plays only a minor role and the new scale, which renders the AdS 2 vacuum nearly AdS 2 , is provided by a nonnormalizable mode of the Liouville field ρ. This is very much similar to the mechanism advocated in [33]. (A related idea was discussed in an earlier literature [47].) Whether it is the dilaton φ ≡ φ 0 +φ or the Liouville field ρ ≡ ρ 0 +ρ, since what is essential for the nearly AdS 2 geometry is the non-normalizable mode, we first analyze the fluctuationsφ andρ of the dilaton and Liouville fields in the qCGHS model. For this purpose, we work in the conformal gauge (2.4) and then the quadratic fluctuation action for the dilaton-Liouville system is given by where as in (2.11) the background Liouville and dilaton fields are The fluctuation fields are thus classified into the "tachyonic" dilatonφ and the massive field ρ−φ besides N massless matter fields f i . 4 It needs to be mentioned that the dilaton fluctuation violates the Breitenlohner-Freedman bound [48,49]. 
Noting, however, that it behaves asφ ∼ √ z cos( √ 7/2 log z) for all real and imaginary frequencies near the boundary z = 0 of the Poincaré AdS 2 , the linear instability can be alleviated by imposing the Neumann boundary condition ∂ z φ(t, z) = 0 at the boundary that freezes the dilaton fluctuation. Having frozen the dilaton fluctuation by the Neumann boundary condition, we now focus on the massive Liouville fluctuationρ. 5 To illustrate the essential point, we first consider the Poincaré AdS 2 corresponding to B(x ± ) = t ± z. The equation of motion for the Liouville fluctuation is then Near the boundary the solution to this equation goes asρ ∼ α/z + βz 2 , which indicates that the Liouville fluctuationρ is dual to an irrelevant operator of conformal dimensions JHEP01(2020)178 To be more precise, the non-normalizable mode is given by [4] where a particular normalization was chosen for the consistency with the analysis that follows. We would like to emphasize that the source j ρ is an analogue of φ r in JT gravity and the advertised new length scale which renders the AdS 2 vacuum nearly AdS 2 . We thus anticipate that the finite action for the pseudo Nambu-Goldstone boson B(t) is schematically of the form [11,33] where the source j ρ is the explicit symmetry breaking scale and an analogue of the pion decay constant. We will make it more precise in section 3.4. For a generic B, the fluctuation equation (3.7) is generalized to and the non-normalizable mode is Note that under the reparametrization t → B(t), since the source j ρ transforms according Similarly, by solving (2.8), it is straightforward to find the non-normalizable matter fields where the transformed sourcej f i (B(t)) is related to the one in the Poincaré AdS 2 bỹ j f i (B(t)) = j f i (t). Note that as mentioned above, the sources for massless fields do not introduce a scale since it is dual to marginal operators. Nearly AdS 2 geometry in qCGHS model In the previous section we deformed the AdS 2 vacuum to the linear order in the nonnormalizable Liouville fluctuations. In fact, owing to the solvability of the Liouville equation, one can go beyond perturbation and resum the nearly AdS 2 deformation to all orders. To see it, recall the equations of motion (2.6) and (2.7). For a constant dilaton, the two equations reduce to a single equation JHEP01(2020)178 The general solution is the Liouville field ρ in (2.11). The Liouville fluctuation equation (3.10) is an expansion of this equation to the linear order: 14) The non-normalizable mode (3.11) thus resums to which can be inferred from the expansion of the resummed expression. This gives the fullybackreacted nearly AdS 2 geometry described by the metric ds 2 NAdS 2 = −e 2(ρ 0 +ρ) dx + dx − . In order to gain better ideas of this geometry, we consider the nearly Poincaré AdS 2 corresponding to B(x ± ) = t ± z. After performing a Wick-rotation, t → iτ and j ρ (t) → −i j ρ (τ ), the deformation near the boundary takes a simple form This amounts to the coordinate transformation Rather than viewing this as a mere coordinate transformation, we may interpret it as meaning that the non-normalizable deformation cuts out the near-boundary region below the symmetry breaking scale z ⋆ = πj ρ (τ ) even though the space is locally AdS 2 . It should be noted that we have not imposed the Virasoro constraints (2.9). As we will see in section 4, the Virasoro constraints impose a restriction on the functional form of the source j ρ (t). 
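The construction above leans on the fact that the Schwarzian derivative vanishes exactly for Möbius (SL(2, R)) reparametrizations, which is why t± = 0 on that subset of B(t). Assuming the standard definition {B, t} = B'''/B' − (3/2)(B''/B')² (the display equation is not legible in this extracted text), the following symbolic check is only a quick illustrative sanity test, not part of the original analysis.

```python
import sympy as sp

t, a, b, c, d = sp.symbols('t a b c d')

def schwarzian(B, t):
    """Standard Schwarzian derivative {B, t} = B'''/B' - (3/2) * (B''/B')**2."""
    Bp = sp.diff(B, t)
    return sp.diff(B, t, 3) / Bp - sp.Rational(3, 2) * (sp.diff(B, t, 2) / Bp) ** 2

# A Mobius transformation of t has vanishing Schwarzian ...
mobius = (a * t + b) / (c * t + d)
print(sp.simplify(schwarzian(mobius, t)))          # 0

# ... while a generic reparametrization does not, e.g. B(t) = tan(t/2):
print(sp.simplify(schwarzian(sp.tan(t / 2), t)))   # 1/2
```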
Second order perturbation Our next goal is to construct the 1d boundary effective theory of the pseudo Nambu-Goldstone boson B(t) as alluded in (3.9). We are going beyond the linear order in j ρ as typically done in the literature and work out to the second order in order to perform a nontrivial check of nearly AdS 2 holography in the qCGHS model in section 4. The resummation (3.15) of the non-normalizable mode allows us to systematically extract the Liouville fluctuations higher orders in j ρ . For the clarity of the argument, we expand the Liouville fluctuationρ asρ = ρ 1 + ρ 2 + · · · (3.19) JHEP01(2020)178 where the numbers in the subscript denote the order in the source j ρ . In this notation, the first order non-normalizable mode (3.11) is renamed to where the r.h.s. of (3.11) was rewritten in terms of the deformation (3.16). By expanding (3.15) for a small deformation, one can similarly find the second order Liouville fluctuation (3.21) For our purposes, we are interested in the expressions for ρ 1 and ρ 2 near the boundary at a small z. Our strategy is to first find the expressions in the Poincaré AdS 2 with B(x ± ) = t ± z and then covariantize the results so obtained to reinstate the dependence on B(t). We perform an appropriate Wick-rotation, t → iτ and j ρ (t) → −i j ρ (τ ), and work in the Euclidean space. The details of the computation are shown in appendix A. In the Poincaré coordinates, the first order fluctuation is calculated as As discussed in section 3.1, the divergent term is essential for the appearance of the finite Schwarzian action (3.9). In the meantime, since we work through to the second order in j ρ , we would also need the bilinear quantities of ρ 1 : To covariantize these expressions, we make the replacements z → zB ′ (τ ) , τ → B(τ ) , and j ρ (τ ) →j ρ (B(τ )) = j ρ (τ )B ′ (τ ) . (3.24) We thus obtain to the relevant order in z and These three quantities form a part of the building blocks for the construction of the 1d boundary Schwarzian-type theory. Turning to the second order fluctuation ρ 2 , it is similarly calculated as JHEP01(2020)178 The covariant form of the second order fluctuation to the relevant order in z is then found to be Apart from N matter fields f i , together with the above three quantities made of ρ 1 , this forms a complete set of the building blocks for the boundary action we discuss in the next section. The boundary Schwarzian-type action We are now in a position to discuss the 1d boundary effective theory of the pseudo Nambu-Goldstone boson B(t). We find it most convenient to work in the locally AdS 2 gauge adopted in [33], i.e. factorizing the metric into the background and fluctuation parts: In this gauge the Liouvilleρ-dependent part of the non-local Polyakov action becomes 6 where we usedR = −2 andK = −e −ρ 0 ∂ z ρ 0 . The last term is a Gibbons-Hawking-York term [50,51] for the Liouville theory. Now, recall the Liouville equation of motion (3.13). Its fluctuation part is given by 0 = 2∂ + ∂ −ρ + e 2ρ 0 ρ +ρ 2 + · · · . 
(3.31) From this equation, we can infer to the second order that With the latter on-shell equation and by integration by parts, the Polyakov action simplifies and is only left with the boundary contribution Meanwhile, the classical CGHS action, the first three terms of (2.5) in parenthesis, vanishes on-shell and there is only a boundary contribution from the Gibbons-Hawking-York term of the dilaton gravity: JHEP01(2020)178 where γ = −e ρ and K = −e −ρ ∂ z ρ and the metric without a hat is the full metric including both the background and fluctuations. It is worth noting that this is a contribution genuinely from the qCGHS model. Without this contribution, our analysis, apart from the second order corrections, would virtually have no difference from that of the Liouville theory [33]. Even though the dilaton has been playing only a minor role and this boundary contribution might look rather insignificant, as we will see, it makes important difference in the working precision of nearly AdS 2 holography. At this point we are finding that where the background ρ 0 -dependent term is There are 1/z 2 and 1/z divergences in the boundary action we have obtained so far since ρ 1 and ρ 2 are singular as 1/z as discussed in the previous sections. The 1/z 2 divergences can be removed by adding the boundary cosmological constant as a counter-term following the holographic renormalization procedure [52]: where the background boundary cosmological constant is However, there still remains a 1/z divergence in ∂ z ρ 0 +2e ρ 0 . As it turns out, this is cancelled by the background part of the non-local Polyakov action which we have omitted so far: We now put all the pieces together to obtain the finite boundary action Note that the second term is a finite contribution that comes from the counter-term S ct and corresponds to a double trace deformation considered in [53]. Using the expressions (3.25), (3.26) and (3.28) for the fluctuations together with the background values (3.36) and (3.38), after the Wick-rotation, the second order boundary action for the pseudo Nambu-Goldstone boson becomes JHEP01(2020)178 where we defined The first action S jρSch is the Schwarzian action found in [11,12,33] as expected. The second action S j 2 ρ comes from the quadratic terms in ρ 1 and, in the standard holography, corresponds to the two-point function of a dimension ∆ = 2 operator. Meanwhile, the third action S j 2 ρ Sch is the one from the second order fluctuation ρ 2 and is a reflection of the fact that the ∆ = 2 Schwarzian operator, dual to the Liouville fluctuationρ, is a quasi-primary rather than primary. To be complete, we shall add the massless matter action. By integration by parts and using the equation of motion (2.8), the matter action becomes Thus the boundary action for the non-normalizable mode (3.12) is found to be where the details of the computation are shown in appendix A. This is of the form of the two-point function of dimension ∆ = 1 operators as expected. As a final note in this section, in the case of nearly AdS 2 holography [8,11,12], it is rather remarkable that the 1d boundary theory is directly "derived" from the 2d bulk gravity in the sense that the boundary effective action (3.41) plus (3.46) is expected to be a collective field description of a 1d quantum mechanical theory such as the SYK model [28,29]. 
The Virasoro/Schwarzian correspondence In the standard holography, the 1d boundary effective action is interpreted as the generating functional of correlation functions of the operators dual to the sources j ρ and j f i [3,4]. However, this is not the end of the story for nearly AdS 2 holography: as remarked in the end of section 3.2, we have not imposed the Virasoro constraints (2.9) to this point. This, in particular, means that the sources j ρ (τ ) and j f i (τ ) are not arbitrary functions of τ but constrained by the Virasoro constraints. JHEP01(2020)178 From the viewpoint of the boundary action as the generating functional of correlation functions, the boundary graviton B(τ ) is "the new kid on the block". We would like to clarify what role exactly it plays in the holographic framework. As may have been anticipated, the answer is simple and we shall show that the B(τ ) equation of motion of the 1d boundary theory is equivalent to the 2d bulk Virasoro constraints, at least, on the SL(2, R) invariant vacuum: Since the Virasoro constraints are the equations of motion for g ±± and the boundary graviton B(τ ) is a remnant of 2d metric degrees of freedom, it is not a surprise that this correspondence holds. We first present the B(τ ) equation of motion of the 1d boundary effective action (4.1). The computational details are shown in appendix B. There are three parts in the pseudo Nambu-Goldstone boson action (3.41) and the variations of each part are given by wherej ρ (B(τ )) = j ρ (τ )B ′ (τ ) as appeared before. In the meantime, the variation of the matter action reads These then yield the equation of motion This is the l.h.s. of (4.2) and to be compared with the Virasoro constraints (2.9). Note that to the linear order the equation of motion is ∂ 3 Bj ρ = 0 whose solution is with constants α, β and γ in agreement with the dilaton φ r in the JT model [11] and the non-normalizable mode in Liouville theory [33]. JHEP01(2020)178 We now turn to the Virasoro constraints (2.9). We are only concerned with the fluctuation part of the Virasoro constraints with a constant dilaton. As shown in appendix B, the second-order Virasoro constraints at the boundary z → 0 take the form This is the r.h.s. of (4.2). Since we turned on the non-normalizable modes in the left-right symmetric way, the left and right energy-momentum tensors are identical at the boundary. We are now in a position to compare the B(τ ) equation of motion (4.6) and the Virasoro constraints (4.8). The two are identical except for the second line of (4.6) which are the terms involving the Schwarzian derivatives. Since the Schwarzian derivative {B(τ ), τ } = 0 on the SL(2, R) invariant vacuum, we see that as advertized, the Virasoro/Schwarzian correspondence (4.2) holds on this vacuum for which B(τ ) = τ modulo Möbius transformations (3.3). 7 This is the most conservative interpretation we offer. However, we would like to discuss a little more speculative interpretation. It was our expectation and is our sentiment that ultimately, the Schwarzian-dependent terms in the second line of (4.6) would disappear and the Virasoro/Schwarzian correspondence (4.2) works on all vacua or for all boundary conditions, i.e. for a generic B(τ ). If these terms were a discrepancy to be resolved, we suspect that they are related to t ± in (2.9). As remarked in footnote 2, they can be expressed as t ± = ∂ 2 ± ϕ ± − (∂ ± ϕ ± ) 2 in terms of the auxiliary field ϕ. 
They have the nonvanishing background values t ± = 1 2 {B(x ± ), x ± } with ϕ ± = 1 2 ln B ′ (x ± ) which vanish on the SL(2, R) invariant vacuum. In our analysis we have been agnostic about potential effects of the auxiliary field ϕ ± on the boundary action. However, it might be that there is a missed effect and when it is properly taken into account, it cancels the Schwarzian-dependent terms in (4.6). Discussion From the viewpoint of holography, it is rather remarkable to see that there is a straightforward connection between the bulk Einstein equations (for g ±± ) and the boundary equation of motion, which we dubbed the Virasoro/Schwarzian correspondence. The key to this correspondence is the presence of the dynamical boundary graviton B(t). In the standard holography, the boundary graviton does not make a regular appearance except for the AdS 3 case [46] and the AdS/CFT realization of Randall-Sundrum II [54] as suggested by Gubser [55]. Even in these examples, to our knowledge, the direct bulk-boundary connection of the type (4.2) has not been realized or formulated. A potential generalization to the AdS 3 case can be explored by studying the corresponding 2d effective action analogous to the 1d Schwarzian action [56]. It is, however, worth mentioning that there are attempts to derive the bulk Einstein equations from other perspectives such as the entanglement of boundary CFTs [57][58][59][60]. JHEP01(2020)178 As remarked in [17], 8 the Schwarzian theory can be considered as the path integral over the symplectic manifold -the coadjoint orbit diff(S 1 )/ SL(2, R). The dynamical boundary graviton B(t) in Schwarzian theory is related to the coadjoint group operation which generates the orbit. That said, as remarked in section 4, we could only show that the Virasoro/Schwarzian correspondence is so far exact on the SL(2, R)-invariant vacuum. Up to the SL(2, R) equivalence class, the vacuum corresponds to exactly the "first exceptional" coadjoint orbit. This may not be entirely satisfactory. However, as we discussed, it could be that the mismatched Schwarzian terms in the second line of (4.6) disappear upon the inclusion of a subtle effect from the background auxiliary field ϕ ± and the Virasoro/Schwarzian correspondence holds true on all vacua. We hope to reach a clear understanding of this point in the near future. A somewhat related note is that the two point function of the Schwarzian derivative obtained from the action (3.41) is structurally almost in the form of the OPE of the 2d energy-momentum tensor, T (z)T (w) ∼ c/2/(z − w) 4 + 2T (w)/(z − w) 2 + ∂T (w)/(z − w), except that the last term is missing. The absence of this last term might be related to the mismatched Schwarzian terms. In this paper, we have focused on the gravity side of nearly AdS 2 holography. Needless to say, it is very important to gain some understanding of the dual quantum mechanics. An obvious candidate is the SYK model [28,29] or its variant [18,61]. Even though we do not have much to offer on this point, it may be worth commenting on the following observation. The Schwarzian sector of the SYK model with N Majorana fermions takes the form, S = N α(q) J dt{B(t), t} with the dimension one coupling J and a constant α(q) which depends on the order q of the interaction. The inverse coupling 1/J corresponds to the symmetry breaking scale j ρ [14,33] and one may identify N with the number of massless scalars in the qCGHS model. 
Then the second order actions (3.43) and (3.44) would correspond to the 1/J 2 correction to the Schwarzian action. However, they do not seem to agree with the 1/J 2 correction in the SYK model [15,62,63], indicating that the dual quantum mechanics may not simply be the SYK model. Even though we have not discussed in this paper, the qCGHS model has a larger class of exact solutions with matter. For example, there are exact multi shock wave solutions [64]. These include an AdS 2 counterpart of the shock wave limit of traversable wormholes studied in [65]. In order to describe these shock waves, we need to generalize the non-normalizable modes (3.16) to the left-right asymmetric sources. In particular, it would be interesting to see if and how the boundary action for a traversable wormhole realizes the GJW construction of traversable wormholes via a double-trace deformation [66,67]. In contrast to the prior work [68], this would be an example of non-eternal traversable wormholes. Other works about how matter fields interact with AdS 2 background are studied in JT model by [19,20]. It's interesting to revisit the problem of studying the interaction between gravity and matter in the qCGHS model in the future, such as calculating the OTOC and other correlation functions in the bulk gravity. Finally, it is important to understand if and how the qCGHS model can be embedded in higher dimensional black holes. As mentioned earlier, the classical CGHS model arises JHEP01(2020)178 as the effective two-dimensional theory of extremal dilatonic black holes in four and higher dimensions [34][35][36][37][38][39]. It is not immediately clear whether the two-dimensional conformal anomaly has an interpretation in the higher dimensional parent theory. The current technology of black hole microstate counting is limited to supersymmetric extremal black holes. (See [69] for a recent review.) It is an open question to account for non-extremal black hole entropy from dual field theory. If the qCGHS model can be embedded in higher dimensional black holes, one can hope to gain a better understanding of non-extremal black holes along the line of recent developments [21,23]. JHEP01(2020)178 It then follows that Turning to the next order, the second order Liouville fluctuation (3.21) consists of three terms where in the Poincaré coordinates With the above prescription of the damping factor for the source j ρ (τ 0 ), we perform one τ 0 -integral for each term by using the contour integral along C as was done for the first order fluctuation ρ 1 . These integrals result in and For a small z we then find that Massless matter. In the Poincaré coordinates, the massless matter non-normalizable mode (3.12) takes the form where we performed a Wick-rotation t → iτ and j f i (t) → −i j f i (τ ). As in the case of the Liouville fluctuations, we adopt the prescription to add a damping factor e iǫτ 0 to the sources j f i (τ 0 ) and use the contour integral along C to calculate f i . We then find It then follows that Taking the derivative with respect to z, the covariantization of the expression yields the matter action in (3.46). B Variations of boundary action and Virasoro constraints Here we show the computational details of the B(τ ) equation of motion of the boundary theory and the Virasoro constraints to the second order in the Liouville fluctuation as discussed in section 4. The B(τ ) equation of motion. 
The variation of the Schwarzian action (3.42) with respect to B(τ ) is given by The variation of the first quadratic part (3.43) is calculated as where we used integration by parts and adopted the prescription for the B(τ 0 )-integral Performing integration by parts and usingj ρ (B(τ )) = j ρ (τ )B ′ (τ ), this can be rewritten as Adopting the prescription for the B(τ 0 )-integral (B.3), after a little manipulations, we finally obtain that Finally, using again the prescription (B.3), the variation of the matter action reads (B.5) Virasoro constraints. The linear fluctuation part of the Liouville energy-momentum tensor is found to be The second-order fluctuation part is calculated as (B.7) Finally, the matter energy-momentum tensor is found as Open Access. This article is distributed under the terms of the Creative Commons Attribution License (CC-BY 4.0), which permits any use, distribution and reproduction in any medium, provided the original author(s) and source are credited.
8,457.4
2019-10-28T00:00:00.000
[ "Physics" ]
Association of oxytocin receptor (OXTR) gene variants with multiple phenotype domains of autism spectrum disorder Autism spectrum disorder (ASD) is characterized by core deficits in social behavior, communication, and behavioral flexibility. Several lines of evidence indicate that oxytocin, signaling through its receptor (OXTR), is important in a wide range of social behaviors. In attempts to determine whether genetic variations in the oxytocin signaling system contribute to ASD susceptibility, seven recent reports indicated association of common genetic polymorphisms in the OXTR gene with ASD. Each involved relatively small sample sizes (57 to 436 families) and, where it was examined, failed to identify association of OXTR polymorphisms with measures of social behavior in individuals with ASD. We report genetic association analysis of 25 markers spanning the OXTR locus in 1,238 pedigrees including 2,333 individuals with ASD. Association of three markers previously implicated in ASD susceptibility, rs2268493 (P = 0.043), rs1042778 (P = 0.037), and rs7632287 (P = 0.016), was observed. Further, these genetic markers were associated with multiple core ASD phenotypes, including social domain dysfunction, measured by standardized instruments used to diagnose and describe ASD. The data suggest association of OXTR genetic polymorphisms with ASD, although the results should be interpreted with caution because none of the significant associations would survive appropriate correction for multiple comparisons. However, the current findings of association in a large independent cohort are consistent with previous results, and the biological plausibility of participation of the oxytocin signaling system in modulating social disruptions characteristic of ASD, suggest that functional polymorphisms of OXTR may contribute to ASD risk in a subset of families. Introduction Autism spectrum disorder (ASD) is characterized by abnormalities in three domains: social interaction deficits, language impairments, and repetitive behaviors with restricted interests. Despite a widely varying clinical presentation, ASD is highly heritable, with studies demonstrating a concordance rate of 80-95% in monozygotic twins and 0-31% in dizygotic twins (Folstein and Rutter 1977;Ritvo et al. 1985;Bailey et al. 1995;Le Couteur et al. 1996;Taniai et al. 2008;Rosenberg et al. 2009). However, genetic linkage studies have indicated signals for linkage on nearly every chromosome (International Molecular Genetic Study of Autism Consortium 1998; Philippe et al. 1999; Barrett et al. 1999;Risch et al. 1999;International Molecular Genetic Study of Autism Consortium 2001;Yonan et al. 2003;Ylisaukko-oja et al. 2006), suggesting that multiple genes contribute to ASD susceptibility. The gene encoding the oxytocin receptor, OXTR, is a strong functional ASD candidate gene based on its known role in modulating social behavior (Ebstein et al. 2009;Hammock and Levitt 2006). Pharmacological and genetic manipulations have demonstrated a causal role for oxytocin (OXT) and its receptor (OXTR) in the regulation of species-typical social behavior. OXT facilitates social recognition behavior (Ferguson et al. 2000) and modulates maternal behavior (Pedersen and Prange 1979;Kendrick et al. 1987). Moreover, OXT signaling facilitates social preferences between adult monogamous rodents (Williams et al. 1992;Williams et al. 1994). 
In humans, there have been a number of recent studies demonstrating enhanced functions relevant to social behavior following oxytocin application in healthy adults (Heinrichs et al. 2003;Kosfeld et al. 2005;Rimmele et al. 2009;Zak et al. 2007;Domes et al. 2007a;Domes et al. 2007b;Guastella et al. 2008a;Guastella et al. 2008b;Hurlemann et al. 2010). Studies probing the relationship between OXT and ASD have generated complex findings. Reported reductions in plasma OXT (Green et al. 2001;Modahl et al. 1998) and increases in unprocessed OXT peptides (Green et al. 2001) in young children with ASD are contrasted by measures of higher baseline levels of serum OXT in young adults with ASD compared to control subjects (Jansen et al. 2006). Intravenous or intranasal applications of OXT in youth (Guastella et al. 2010) and adult (Hollander et al. 2007;Hollander et al. 2003;Andari et al. 2010) cohorts with ASD generally improved various functions relevant to social behavior. Genetic analyses of OXTR contribution to ASD risk have been inconsistent. Most genome-wide linkage analyses did not find a peak near the OXTR gene on chromosome 3p25 (International Molecular Genetic Study of Autism Consortium 1998; Philippe et al. 1999; Barrett et al. 1999;Risch et al. 1999;International Molecular Genetic Study of Autism Consortium 2001). However, one of the largest linkage studies performed to date, including 314 families, highlighted a linkage peak directly over the OXTR gene, establishing OXTR as a positional candidate gene (Ylisaukko-oja et al. 2006). Seven genetic association studies of OXTR with ASD have been reported, but consistent association of alleles is lacking. A study of 195 Chinese families indicated association of two markers in the third intron of OXTR, rs2254298 A allele and rs53576 A allele (Wu et al. 2005). Three family-based studies failed to find association of rs2254298 (Liu et al. 2010;Lerer et al. 2008;Wermter et al. 2010) and another study did not find association of rs53576 (Jacob et al. 2007). However, evidence for association of the rs2254298 A allele was found in a Japanese case-control sample (Liu et al. 2010). In Caucasian families, the opposite allele, the rs2254298 G allele, was associated with ASD risk in one study (Jacob et al. 2007) and contributed to a risk haplotype in another study (Lerer et al. 2008). A third study of Caucasian families did not test rs2254298 directly, but found association of a nearby intron 3 marker, rs2268493 (Yrigollen et al. 2008). In addition to the evidence for association of OXTR intron 3 markers, markers near the 3′ untranslated region (UTR) have also been implicated in ASD risk (Lerer et al. 2008;Tansey et al. 2010). Lerer et al. (2008) described association of the 3′ UTR marker rs1042778 G allele and Tansey et al. (2010) found association of the intergenic marker rs7632287 with ASD risk. While different reports indicate positive association of OXTR markers in intron 3 and the 3′ UTR, the studies involved small family cohorts of less than 500 families each, increasing the risk for spurious associations (Sullivan 2007). Further, in those studies that included analysis of phenotype data, the OXTR markers associated with ASD failed to show association with measures of social behavior in individuals with ASD (Lerer et al. 2008;Yrigollen et al. 2008). 
Given the sound biological and mixed genetic evidence in favor of OXTR contribution to ASD, we hypothesized that analysis of a large sample would provide greater power to detect association of OXTR markers with ASD susceptibility. Here, we report association analysis of 25 genetic markers spanning OXTR in 1,238 families, including 2,333 individuals with ASD. In addition to analysis of association with ASD diagnosis, we examined association with quantitative scores derived from three instruments used to diagnose and describe autism phenotypes: the Autism Diagnostic Interview-Revised (ADI-R), the Autism Diagnostic Observation Schedule (ADOS), and the Social Responsiveness Scale (SRS). Methods and materials Sample The family-based sample consisted of 5,432 individuals from 1,238 pedigrees and included 2,333 individuals with ASD (Table 1). The majority of the sample (921 pedigrees) was collected by the Autism Genetics Resource Exchange (AGRE) Consortium. The remaining 317 "non-AGRE" pedigrees were collected at the University of Iowa, Stanford University, Tufts University, and Vanderbilt University. Approximately 91% of the families had more than one child with ASD (multiplex). The genotyped sample is 95% Caucasian by self-report. The sample analyzed here does not overlap any of the previous reports of genetic association studies for OXTR in ASD (Wu et al. 2005;Liu et al. 2010;Lerer et al. 2008;Wermter et al. 2010;Jacob et al. 2007;Yrigollen et al. 2008;Tansey et al. 2010). Scores on the ADI-R, ADOS, and SRS were available for 2,000 individuals from AGRE families ( Table 2). All research was approved by the Institutional Review Boards of Vanderbilt University and the University of Southern California. In-house SNP genotyping and quality control Genotyping was performed using TaqMan ™ SNP genotyping assays on the ABI Prism 7900HT and analyzed with SDS software as previously described (Campbell et al. 2006;Campbell et al. 2008). SNP Genotyping Assays-On-Demand were obtained from Applied Biosystems (Foster City, CA). Genotyping was performed in a 384-well plate format using 3 ng genomic DNA. Quality control measures included seeding of each 384-well plate with eight to ten blank negative control wells and 20-30 duplicated positive control samples. Automated allele calls were made with SDS Data Collection software and reviewed by an experienced operator according to protocol. The overall no-call rate was <5% for each of the assays. All analyzed markers were in Hardy-Weinberg equilibrium (HWE; P>0.05). GWAS genotyping GWAS of the AGRE Consortium sample was conducted by the Broad Institute on the Affymetrix 5.0 platform, which includes over 500,000 SNPs, and made available publicly. We downloaded the genotype information from the AGRE website (www.agre.org). For analysis purposes, we used Whole-genome Association Study Pipeline. We set the marker genotyping efficiency threshold to a minimum of 95%, the minor allele frequency threshold to a minimum of 0.05, the Hardy-Weinberg equilibrium threshold to a minimum of 0.01, and the marker Mendelian error rate to a maximum threshold of 15 errors. Any marker that did not meet any of the preceding specifications was removed from all further analyses. In addition, all monomorphic markers and all markers composed of only heterozygous genotypes were removed. Following this whole-genome quality control, we selected the markers in the OXTR gene plus 10 kb of flanking region on each side (a total of 39.2 kb). From the GWAS genotypes, 17 markers were analyzed. 
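The per-marker thresholds just described (genotyping efficiency, minor allele frequency, Hardy-Weinberg equilibrium, Mendelian errors, and removal of monomorphic or all-heterozygous markers) amount to a simple filter. The sketch below is illustrative only; the record fields are assumptions, and it is not the actual analysis pipeline.

```python
def passes_marker_qc(marker):
    """Apply the per-marker thresholds described above to one SNP record.

    `marker` is an illustrative dict holding the summary statistics a QC
    pipeline would normally compute; this is not the authors' code.
    """
    if marker["call_rate"] < 0.95:
        return False
    if marker["maf"] < 0.05:
        return False
    if marker["hwe_p"] < 0.01:
        return False
    if marker["mendel_errors"] > 15:
        return False
    if marker["monomorphic"] or marker["all_heterozygous"]:
        return False
    return True

example = {"call_rate": 0.98, "maf": 0.21, "hwe_p": 0.34,
           "mendel_errors": 2, "monomorphic": False, "all_heterozygous": False}
print(passes_marker_qc(example))  # True
```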
Only the 3′ UTR marker rs1042778 overlapped with the markers genotyped in-house. There was >98% concordance of the 1,980 samples genotyped for rs1042778 between the in-house TaqMan™ and GWAS genotyping platforms. For each marker, the location on chromosome 3 and the distance from the transcription start (TrxSt) site of OXTR are indicated. Also listed is whether the marker was genotyped by the Levitt lab in-house using TaqMan assays or by the Broad Institute using the Affymetrix 5.0 GWAS platform.

Definition of linkage disequilibrium (LD) blocks. Haploview (version 3.2) was used to assess HWE and to define LD blocks (Fig. 1). A single individual with ASD was randomly selected from each family using a random number generator to build trios suitable for Haploview analysis (Campbell et al. 2006; Campbell et al. 2008).

Analysis of association with ASD diagnosis. Family-based single marker and haplotype association analyses were performed using the family-based association test (FBAT; Horvath et al. 2001) and the haplotype-based association test (HBAT; Horvath et al. 2004; FBAT version 1.7.2). We analyzed only empirically determined genotypes. All FBAT and HBAT analyses were performed using the empirical variance ("-e" option) because linkage has been reported for the chromosomal region containing these genes and because the empirical variance provides a more conservative estimate of association. FBAT analysis was performed with both the additive model and the recessive model because significant association has been reported with each model (Wu et al. 2005; Yrigollen et al. 2008). HBAT was performed only on defined LD blocks, only with the additive model, and with the minimum haplotype frequency set to 0.01.

Analysis of association with narrow autism diagnosis. On February 23, 2010, the AGRE pedigree file was downloaded from the AGRE website (www.agre.org). We identified 975 individuals from 618 independent families with narrow autism, defined by clinical diagnosis confirmed by both ADI-R and ADOS classification. FBAT and HBAT analyses were repeated, again using the empirical variance ("-e") option, on this smaller sample of individuals with narrowly defined autism.

Concordance and correlation among phenotype scores. On February 23, 2010, six files were downloaded from the AGRE website: the phenotype scores on the SRS, the ADI-R, and each of the four ADOS modules. The total number of individuals with phenotype scores for each instrument and the number of individuals with scores on multiple instruments are presented in Table 2 (the ADI-R scores are provided as total and stratified by the number of verbal and non-verbal individuals). The phenotype scores downloaded from the AGRE website were converted directly to phenotype (.phe) files used for FBAT software analysis. Scores reported here are: (a) quantitative summation scores from individual items on the ADI-R and ADOS, (b) binary cutoff scores from the ADI-R and ADOS, (c) factor scores from a previously published principal components analysis (PCA) of the AGRE sample (Frazier et al. 2008), and (d) T scores derived from the total score and subscales of the SRS. For individuals with SRS scores from both parent and teacher scales, the SRS T scores were averaged from all available informants (Constantino et al. 2009). We performed PCA of the AGRE ADI-R scores (Campbell et al.
2010) and obtained factor structures that were indistinguishable from those previously reported (Frazier et al. 2008); therefore, we also report association analysis of two-factor scores on the ADI-R. Frazier et al. (2008) report a two-factor solution with high loadings of seven variables on the first factor (SOC1T_CS, SOC2T_CS, SOC3T_CS, SOC4T_CS, COM1T_CS, COM4T_CS, and COM2VTCS) and three variables on the second factor (COM3VTCS, BEH1T_CS, and BEH2T_CS). Four distinct modules of the ADOS are administered, depending upon age and verbal abilities of the subject, and it is not appropriate to collapse quantitative phenotype scores across the four ADOS modules. Further, there was not sufficient power in this sample to compare across ADOS modules as the number of subjects in each group ranged from 101 to 540 (mean± standard deviation=347±192; data not shown). Therefore, we did not analyze quantitative traits on each of the ADOS modules. However, each ADOS module contains a module-specific algorithm for determining whether an individual meets criteria for autism or ASD and reports a binary cutoff (yes or no) for meeting the criteria for diagnosis. The ADI-R and ADOS provide categorical variables that define thresholds for ASD diagnosis. The concordance among these binary categorical variables was determined by calculating the number of genotyped individuals with matching categorical scores compared to the total number of individuals with genotypes. The ADI-R and SRS provide continuous total scores for each individual that approximate normal distributions. Correlations among the continuous variables were calculated using StatView software. Analysis of association with phenotype scores FBAT analysis of the phenotype scores was performed using the empirical variance ("-e" option) with no offset. Phenotype analysis was restricted to three markers, and all analyses performed are reported in the "Results" section. FBAT analysis was used for ADI-R total scores, Frazier factor combined scores, and SRS total scores, all of which approach normal distributions in this sample. FBAT was also used to analyze cutoff scores on the ADI-R and ADOS, all of which are binary (yes/no). We did not attempt analysis of individual questions on the three phenotypic instruments as the scores on individual items often fail to be normally distributed. The reported two-sided P values represent only those with positive association of the indicated allele; the indicated allele is positively associated with ASD diagnosis or phenotype. Corrections for multiple comparisons We report in the text uncorrected P values. These results should therefore be interpreted with great caution. None of the reported associations would survive Bonferroni correction for the 25 SNPs analyzed in this study. We note, however, that this is an attempt to find associations consistent with previously reported associations of common OXTR markers with ASD risk. Results Linkage disequilibrium (LD) structure Analysis of LD structure indicated five LD blocks in the 39 kb that includes the OXTR gene and 10 kb of flanking sequence (Fig. 1). One LD block contains considerable overlap with the neighboring CAV3 gene, which maps 3.5 kb downstream of OXTR in a tail-to-tail arrangement. Interpretation of markers with LD overlap between the OXTR and CAV3 genes must include the possibility that genetic variants in CAV3 may contribute to ASD risk. 
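As a concrete illustration of the concordance and correlation calculations described in the Methods above (and of the Bonferroni correction for 25 SNPs noted in the multiple-comparisons discussion), the following minimal Python sketch shows how such summaries could be computed. The variable names are hypothetical; the original analyses were performed with StatView and FBAT, not with this code.

```python
# Minimal sketch, assuming pandas Series of phenotype scores aligned by individual ID.
import numpy as np
import pandas as pd

def concordance(a: pd.Series, b: pd.Series) -> float:
    """Fraction of individuals with matching categorical (yes/no) cutoff scores."""
    both = a.notna() & b.notna()
    return float((a[both] == b[both]).mean())

def correlation(x: pd.Series, y: pd.Series) -> float:
    """Pearson correlation between two continuous phenotype total scores."""
    both = x.notna() & y.notna()
    return float(np.corrcoef(x[both], y[both])[0, 1])

def bonferroni(p: float, n_tests: int = 25) -> float:
    """Bonferroni-style adjustment: multiply the nominal P value by the number of tests."""
    return min(1.0, p * n_tests)
```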
Association of OXTR markers with ASD diagnosis In the entire data set of 25 markers genotyped in 1,238 families including 2,333 individuals with ASD, two markers had significant association with ASD diagnosis (Fig. 2). First, the 3′ UTR marker rs1042778 G allele was associated with ASD risk using the FBAT recessive model (P=0.037). Association of the rs1042778 G allele is consistent with two previous reports of positive association (Lerer et al. 2008; Jacob et al. 2007) and further implicates a putative functional variant near the 3′ UTR of OXTR in ASD risk. Second, the rs7632287 G allele was associated with ASD using both the recessive (P=0.016) and additive (P=0.016) models, consistent with the recent report of Tansey et al. (2010). The rs7632287 marker is intergenic between OXTR and the neighboring CAV3 gene and shares LD with markers in both genes. Therefore, association of rs7632287 suggests the presence of a functional variant that affects OXTR and/or CAV3. HBAT did not reveal significant global association of any of the five LD blocks. Association of OXTR markers with narrow autism diagnosis Family-based analysis of the 975 AGRE individuals with narrow autism (clinical diagnosis confirmed by both ADI-R and ADOS) revealed association of three markers (Fig. 3). Both variants that were associated with ASD also were associated with narrow autism diagnosis. The 3′ UTR marker rs1042778 G allele was associated with narrow autism using the recessive model (P=0.041) and the intergenic rs7632287 G allele was associated with narrow autism using both the additive model (P=0.031) and the recessive model (P=0.004). In addition, the OXTR intron 3 marker rs2268493 T allele was associated with narrow autism diagnosis using the recessive model (P=0.043), consistent with a previous positive association (Yrigollen et al. 2008). HBAT analysis did not reveal significant global association with risk for narrowly defined autism. [Figure caption (narrow autism associations): The OXTR intron 3 marker rs2268493 (marker 12) T allele was associated with narrow autism diagnosis using the recessive model (P=0.043); the 3′ UTR marker rs1042778 G allele (marker 20) was associated with narrow autism using the recessive model (P=0.041); the intergenic marker rs7632287 (marker 23) G allele was associated with narrow autism diagnosis using both the recessive model (P=0.004) and the additive model (P=0.031).] Analysis of phenotype data The ADI-R, ADOS, and SRS provide two distinct types of variables. First, the ADI-R and ADOS provide categorical variables that define binary (yes/no) thresholds for diagnosis or a phenotype domain cutoff. In our sample, the concordance (rate of agreement on an individual eclipsing the threshold) among the 15 categorical variables was high, ranging from 0.732 to 1.000 (Table 4). High concordance rates among these categorical variables should be expected for a sample selected for an ASD diagnosis. Second, the ADI-R and SRS provide continuous total scores that give a more complete description of the phenotype domains. In our sample, the correlation among these continuous variables ranges from 0.012 to 0.983, with some correlations being negative (Table 5). The broad range of correlations among the continuous variables reflects the phenotypic heterogeneity of ASD. Analysis of phenotype data The number of individuals with available phenotype data for each of the three instruments for measurement of ASD traits is detailed in Table 2.
Phenotype analysis was restricted to the three OXTR markers that showed significant association with ASD and narrow autism risk: intron 3 marker rs2268493, 3′ UTR marker rs1042778, and intergenic marker rs7632287. Table 6 describes association of these OXTR markers with categorical variables measured on the ADI-R and ADOS. Table 7 describes association of the three OXTR markers with continuous variables on the ADI-R and SRS. Association of a genetic marker with two highly correlated phenotype measures may reflect an inability to dissect the phenotypes, rather than a contribution of the genetic variant to each phenotype independently. Association of OXTR markers with categorical cutoff scores on the ADI-R and ADOS The intron 3 marker rs2268493 T allele was associated with 14 of the 15 categorical cutoff scores on the ADI-R and ADOS using the FBAT recessive model (Table 6). Using the FBAT additive model, the intron 3 marker rs2268493 T allele was associated only with the non-verbal communication cutoff (Table 6). The 3′ UTR marker rs1042778 G allele was not associated with any cutoff on the ADI-R (Table 6), but was associated with autism diagnosis (P=0.013), ASD diagnosis (P=0.043), autism social cutoff (P=0.036), and the autism communication plus social cutoff (P=0.021) on the ADOS using the FBAT recessive model. The intergenic marker rs7632287 G allele was associated with 14 of 15 categorical cutoff scores on the ADI-R and ADOS (Table 6). In contrast to the intron 3 marker rs2268493 T allele, the association of the intergenic marker rs7632287 G allele was independent of FBAT model used ( Table 6). The only categorical variable with which the rs7632287 G allele was not significantly associated was the non-verbal communication cutoff (Table 6), the phenotype measure with the least power in our sample ( Table 2). Association of OXTR markers with continuous scores on the ADI-R and SRS The intron 3 marker rs2268493 T allele was associated with the social total score (P=0.011), the non-verbal communication total score (P= 0.026), the development total score (P=0.022), and Frazier Factor 1 (P=0.011) using the FBAT recessive model (Table 7). The 3′ UTR marker rs1042778 G allele was not associated with any continuous variable score (Table 7). The rs7632287 G allele was associated with 12 of 14 continuous phenotype scores ( Table 7). The rs7632287 G allele was associated with the social total (P=0.022), the verbal communication total (P=0.010), the behavior total (P=0.002), the development total (P=0.029), Frazier Factor 1 (P=0.018), and Frazier Factor 2 (P=0.001) on the ADI-R using the FBAT recessive model (Table 7). Consistent with the categorical cutoff scores for the ADI-R, the rs7632287 G allele was not associated with the non-verbal communication total or the communication total scores on the ADI-R (Table 7) but was associated with the other six variables independent of the FBAT model used (Table 7). The rs7632287 G allele was associated with the SRS total score (P=0.029) and each of the SRS sub-scale scores using the FBAT recessive model (Table 7). Discussion The present study represents the largest family-based analysis to date of association of common OXTR genetic variants with ASD risk. As in previous reports of genetic association for OXTR, we found evidence of association in two distinct regions of the gene: intron 3 and the 3′ UTR. Associations of markers in intron 3 implicate the OXTR gene specifically in ASD susceptibility. 
The genetic associations in the 3′ UTR region, however, could influence either OXTR or CAV3 because the LD block overlaps regions of both genes. Our results are consistent with previous reports of association of the intron 3 marker rs2268493, the 3′ UTR marker rs1042778, and the intergenic marker rs7632287. These data also indicate, for the first time, association of OXTR genetic polymorphisms with social aspects of ASD. None of the reported nominally significant associations would survive appropriate correction for multiple comparisons, so these data should be interpreted cautiously. However, the additional suggestive evidence in support of previous genetic association data from smaller, independent family cohorts, as well as the biological plausibility, suggest that polymorphisms of OXTR may contribute to ASD risk in subsets of families that may exhibit other unique features (Sullivan 2007; Lucht et al. 2009). The additional genetic evidence highlights two specific regions of OXTR, intron 3 and the 3′ UTR, that warrant further research. Three recent genome-wide association studies have each identified genome-wide significant association signals at single loci on chromosomes 5p14.1, 5p15 (Weiss et al. 2009), and 20p13 (Anney et al. 2010). Similarly, although genome-wide copy number variation analyses have not highlighted OXTR, a recent report described deletion of the region including OXTR, CAV3, and three neighboring genes in a single proband in one of 119 families (Gregory et al. 2009). The same report also described altered patterns of methylation in the OXTR promoter in individuals with ASD and a decreased expression of OXTR in postmortem brains of individuals with ASD (Gregory et al. 2009). Therefore, there may be multiple modes of disrupting OXTR that result in decreased expression of the oxytocin receptor and an increased risk for ASD. A bioinformatics analysis indicated that the G allele of rs7632287 may alter transcription factor binding. Consite (http://asp.ii.uib.no:8090/cgi-bin/CONSITE/consite/) predicts that COUP-TF is the only transcription factor that will bind the A allele but that the G allele will also be bound by the N-MYC, ARNT, and USF transcription factors. Experimental evidence will be necessary to determine if any of these transcription factors alter expression. A meta-analysis of OXTR association in all published data will provide a definitive answer to the association of common variants in OXTR with ASD. Our data set is publicly available at www.agre.org. Our data suggest that the rs7632287 variant contributes to multiple domains of ASD. It should be expected that a genetic marker associated with one phenotype score will also be associated with a second highly correlated phenotype score. However, the rs7632287 G allele was associated with both Frazier Factor 1 and Frazier Factor 2 scores, even though these two phenotypic variables are correlated at only 0.070. Similarly, the rs7632287 G allele was associated with both the ADI-R development total score and the SRS total score, despite the two variables being correlated at only 0.199. These data suggest that, rather than association due to highly correlated scores, the rs7632287 G allele is associated with each phenotypic domain. We were somewhat surprised by the data indicating that all domains of autism were correlated with genotype, rather than the expected statistical enrichment for a relationship between OXTR and specific social endophenotypes.
One possible explanation for this broad association may lie in the robust developmental expression of OXTR in many more brain areas than in the adult (Shapiro and Insel 1989;Snijdewint et al. 1989;Wang et al. 1997). Thus, there could be an early role for OXT in developing neural circuitry that underlies behaviors beyond those of social interactions. As has been reported for many genes, early perturbation can lead to later onset brain dysfunctions that would not be reflected in adult expression patterns (Thompson and Levitt 2010). This also suggests that the broader functions influenced through developmental mechanisms would not be captured by intranasal OXT in healthy adults, in which functions associated with social domains show improvement. Relevant to the present study, the more widespread developmental expression of OXTR may contribute to broad domains of social phenotypes, underlying the broad association of OXTR genotypes with behavioral traits in ASD. Our data also suggest that clinical trials with OXT in young children should examine functional domains beyond those related to social behavior.
6,120.8
2011-01-06T00:00:00.000
[ "Psychology", "Medicine", "Biology" ]
Computational Solutions for Human Falls Classification In the last two decades, studies about using technology for the automatic detection of human falls have increased considerably. The automatic detection of falls allows for quicker aid, which is key to increasing the chances of treatment and mitigating the consequences of falls. However, each type of fall has its specificities, and determining the correct type of fall can help treat the person who has fallen. Although it is essential to use computational methods to classify falls, there are few studies about that in the literature, especially compared to the studies that propose solutions for fall detection. In this sense, we executed a systematic literature review (SLR) using the Kitchenham (2009) [1] method to investigate the computational solutions used to classify the different types of falls. We performed a search on the Scopus, Web of Science, and PubMed scientific databases looking for papers that use computational methods for fall classification. We used the grounded theory methodology for a more detailed qualitative analysis of the papers. As a result of our search, we selected a total of 36 studies for our review and found two different computational methods for classifying falls. Related to the steps used in each method, we found fourteen different types of sensors, four different techniques for background and foreground extraction of videos, twenty-one techniques for feature extraction, and seven different fall classification strategies. Finally, we also identified fifty-one different types of falls. In conclusion, we believe that the methods and techniques analyzed in our study can help developers create new and better systems for the classification, detection, and prevention of falls and for building fall databases. Besides, we identified gaps that can be explored in future research related to the automatic classification of falls. I. INTRODUCTION Falls are the main cause of morbidity, disability, and increased utilization of health care among the older adult population [2]. According to the World Health Organization (WHO) [3], falls are the leading cause of serious injury in the elderly, affecting as much as 28-35% of people over the age of 65 and 32-42% of people over 70 years of age. A fall is defined as "an event in which a person inadvertently comes to rest on the ground, floor, or lower level" [4]. When a fall occurs, it is crucial to immediately detect the situation, because these accidents usually lead to more severe illness or even death. Early detection of falls is essential for rescuing injured people from danger and getting help as quickly as possible [5]. For Mubashir and Shao (2013) [6], the demand for surveillance systems, especially for fall detection, has increased in the health sector with the rapid growth of the older adult population in the world. It has therefore become relevant to develop intelligent surveillance systems that can automatically monitor and detect falls. Several fall detection devices and fall risk assessment and prevention systems have been developed to enable older adults or those with chronic diseases to live safely and independently at home. According to Abdelhedi et al. (2016) [7], a fall detection system is one or more systems that send an alert in response to a fall. A miniaturized fall detection device seeks to improve the accuracy of fall detection while having a minimal impact on the daily life of the user (e.g., the Apple Watch Series 4).
Moreover, a fall risk assessment system is one or more systems capable of identifying the risk of a person falling based on sensory data and well-defined measures [8] [9]. Falls may be due to intrinsic causes (such as pre-existing diseases) or extrinsic causes (such as slippery environments) and may have specific characteristics that impact the reliability of fall prevention and detection solutions [9]. Therefore, works that seek to provide these computational solutions usually classify or categorize types of falls according to the characteristics observed about them, for example, the direction of the fall, the place where the fall occurred, the speed of the fall, the final position, or even the post-fall movement. According to Mubashir and Shao (2013) [6], we should consider different scenarios when identifying different types of falls: walking or standing falls, falls with supports (e.g., stairs), falls during sleep or lying in bed, and falls when sitting in a chair. It is also interesting to note that some fall characteristics also exist in daily actions; for example, a squat also demonstrates a rapid downward movement. Moreover, each fall has specificities that may be related to the profile of the person [10] [11] and to the health status of the patient when the fall occurred; for example, some falls may correlate with specific diseases [12]. Besides, there are types of falls that are more dangerous and deserve more attention [13]. For example, falls to the sides may be more likely to cause fractures in frail older adults [14] [15]. Thus, it is important not only to develop solutions for fall prevention and detection but also to classify their types according to the characteristics observed for each fall. Using known computational methods to classify human falls may be advantageous for developing better fall detection applications, fall risk assessment systems, and fall prevention solutions capable of identifying specificities and even possible causes of falls, as in Makhlouf et al. (2018) [16]. These methods should have well-defined steps and techniques for each step to allow replicability. These methods can also aid in building fall databases to be used in experiments aimed at new automatic fall detection and prevention solutions and assist in the faster identification of better treatment for each specific type of fall. Therefore, we executed a Systematic Literature Review (SLR) and found studies from 2006 to 2021 with methods for the classification of human falls aided by computational technologies. Moreover, we analyzed how these methods work. As a result, we found thirty-six studies that use fall classification methods. Based on these studies, we identified two different types of methods with three or four activities. These methods have as main activities: Sensing, Background and Foreground Extraction (exclusively for methods based on Video Technologies), Feature Extraction, and Execution of the Fall Classification Strategy. Also, we found three types of technologies used by these studies and 51 different types of falls covered by the selected studies. Each type of fall is related to an observed characteristic of the fall. Finally, we identified open questions about fall classification not addressed by these studies, as well as challenges that require further research. II. RESEARCH METHODOLOGY We based our Systematic Literature Review (SLR) on the method proposed by Brereton et al. (2007) [17] and Kitchenham et al. (2009) [1].
This is the most used method for developing SLRs in the software engineering area and has three activities: Planning, Execution (or conducting), and Presentation (or documentation). Each activity has a series of specific tasks for the SLR development. Figure 1 illustrates the process adopted in this study. During the SLR planning, we define the research questions and the search strategy, and generate the protocol that guides the execution. This protocol is constructed and validated iteratively. In our case, we created several versions of this protocol and submitted it to the evaluation of specialists until obtaining the final version. This document contains the general objective of the review, the search strategy, the research questions, the papers' eligibility criteria, the quality assessment criteria of the selected literature, and the list of data that we want to extract from the selected literature. In the conducting phase, we execute the search strategy and apply the eligibility criteria for selecting the papers. After this, we verify the quality criteria of the selected studies and extract and synthesize the data. Finally, in the presentation phase, we generate the report and discuss the results. This paper presents our report, and it contains the results of the SLR and the discussion about them. This work follows the model of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) [19], which suggests discussing the results based on the research questions. A. PLANNING This section presents the research questions, the search strategy, the query string, and the eligibility criteria. First, we specified four research questions for this SLR, as follows: 1) What are the computational methods used to classify falls? 2) What are the techniques used in each activity of these methods? 3) What are the advantages of using fall classification methods? 4) Which types of falls are classified by these methods? We analyze and discuss the answers to these questions in Section IV. The search strategy of this SLR consists of two phases. In the first phase, we utilized a query string to search papers in public scientific study databases, and, in the second phase, we performed a manual procedure, known as snowballing, to analyze the citations (snowballing forward) and references (snowballing backward) of the articles previously selected in the first phase. Snowballing is used to complement the search procedure in the public databases, making the literature search coverage more complete. These two initial phases were executed from April to May 2018. We chose the databases SCOPUS and Web of Science for the first phase of the literature search. According to Archambault et al. (2009) [20] and Aghaei et al. (2013) [21], these are the most relevant search databases for Computer Science, aggregating works from several other databases relevant to Computing and related areas. In April 2021, we executed a new search phase. In this phase, we made a new search on the Scopus database, considering articles after 2018, and we added a new database, PubMed [22], a well-known database for research in the medical literature. In PubMed, we did not restrict the search date. For the generation of the query string, we used the PICO approach, which was created for systematic reviews in medical research areas but is also widely used in Software Engineering research [23] [19].
This method separates the question into four aspects: Population of interest (Population), Intervention, Comparison, and Outcome of interest. The Population represents the types of studies we want to address in the research. The Intervention corresponds to the characteristic we want to find in studies on our Population. The Comparison is related to the control group used in the experiments carried out in our population studies. Finally, the Outcome of interest corresponds to the information we want to find in our population studies. Table 1 shows the elements identified for each component of the PICO approach, according to the research questions presented previously. In general, systematic literature reviews in the Software Engineering area are exploratory studies designed to characterize a specific research line. In this case, these SLRs do not use a control group, and we did not use any term for Comparison. However, some authors consider that a review lacking this item of the PICO approach is a quasi-systematic review [24] [25]. We evaluated several query strings with the help of three experts until we obtained the final version presented in Textbox 1. These specialists also evaluated the protocol generated during the planning phase. Textbox 1. Query String 2021 ("Fall" OR "Falls" OR "Human Falling" OR "Falling Human" OR "Falls in*" OR "Accidental falls") AND ("Smart Health" OR "E-health" OR "Ambient Assisted Living" OR "AAL" OR "Tele-healthcare" OR "Telemedicine" OR "Healthcare") AND (classifi* OR detect* OR identifi* OR "recognition") AND ("Technique" OR "Approach" OR "Model" OR "Procedure" OR "Method" OR "Process" OR "Technology") The papers resulting from our search had their bibliographic references extracted from the databases in BibTeX format. The data were then organized and stored as PDF files using the Mendeley software, which was also used to manage the execution of the selection activity. For the selection of the most relevant studies, it is necessary to define exclusion and inclusion criteria (called eligibility criteria) that can be replicated by other researchers [1]. In this SLR, the exclusion criteria operate in sequential order, similar to an Access Control List (ACL) as in Sandhu and Samarati (1994) [26]. Thus, when we found a match on the list, we performed the exclusion action, and we did not check any other criterion. We defined the following exclusion criteria for this SLR: • Non-English papers (E1); • Non-articles, non-conference papers, non-book chapters (E2); • Papers with less than five pages (short papers) (E3); • Secondary studies (e.g., literature reviews) (E4); • Papers that do not present a falls classification (E5); and • Papers that do not use computational technology for classification, detection, or recognition of human falls (E6). We defined the following inclusion criteria for this SLR: • Studies with experiments that have more than one type of fall (I1). • Studies with computational methods for falls classification (I2). B. CONDUCTING In this phase, first, we executed a search with the query string from April to May 2018 in databases of academic papers, with the search filters referring to the exclusion criteria E1 and E2, which could be applied directly in the search engines of the databases. We found 1163 articles for analysis and, using the Mendeley tool, we identified and discarded 297 papers that were either duplicates or lacked a title, abstract, or author.
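As an illustration of the sequential, ACL-style application of the exclusion criteria described above, the following minimal Python sketch encodes the "first match excludes, no further criteria checked" logic; the Paper fields and example values are hypothetical, and this is not the tooling actually used in the review.

```python
# Minimal sketch of the sequential exclusion filter (E1-E6) described above.
from dataclasses import dataclass

@dataclass
class Paper:
    language: str
    kind: str                 # "article", "conference", "book chapter", ...
    pages: int
    is_secondary: bool        # e.g., a literature review
    classifies_falls: bool
    uses_computational_tech: bool

EXCLUSION_CRITERIA = [
    ("E1", lambda p: p.language != "en"),
    ("E2", lambda p: p.kind not in {"article", "conference", "book chapter"}),
    ("E3", lambda p: p.pages < 5),
    ("E4", lambda p: p.is_secondary),
    ("E5", lambda p: not p.classifies_falls),
    ("E6", lambda p: not p.uses_computational_tech),
]

def first_exclusion(paper: Paper):
    """Return the identifier of the first matching exclusion criterion, or None if the paper is kept."""
    for name, rule in EXCLUSION_CRITERIA:
        if rule(paper):
            return name  # stop at the first match, as in an Access Control List
    return None
```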
From the remaining 866 articles, we excluded 817 according to the exclusion criteria, based on a dynamic reading of the papers focusing on the title, abstract, and the most relevant parts of these papers. Then, from the 49 remaining papers, after evaluating the first inclusion criterion, we selected 45 papers. Following the Conducting phase steps, a detailed reading of the articles was needed to correctly apply the second inclusion criterion. However, to increase the research coverage, we opted to use the 45 articles remaining from the application of the exclusion criteria and the first inclusion criterion as the source of the snowballing process. Only after the snowballing process did we do the detailed reading of these papers and evaluate the second inclusion criterion. To apply the snowballing technique, we identified the citations of the articles using Google Scholar, as suggested in Wohlin et al. (2014) [18]. Altogether, we found 2819 papers from citations of the 45 previously selected studies and another 1249 papers from the references, totaling 4068 papers for analysis. Using the Mendeley tool, we excluded 23 duplicate articles. From the remaining 4045 studies, we excluded 4008 papers according to the exclusion criteria, based on a dynamic reading of the papers focusing on the title, abstract, and the most relevant parts of these papers, obtaining 37 studies. From these, we selected 36 papers after the first inclusion criterion assessment. Finally, we read the 82 selected studies and found 30 articles that fulfilled the second inclusion criterion. In April 2021, we executed a new search in the academic databases, including the PubMed database, and we found a new set of 1454 papers (552 from SCOPUS and 902 from PubMed). Using the Mendeley tool, we identified and discarded 967 papers that were either duplicates or lacked a title, abstract, or author. We identified that many studies found in PubMed had already been found in the search performed until 2018 in the SCOPUS and Web of Science databases. Since we did not use a time filter in PubMed, we found a large number of duplicate papers. From the remaining 487 articles, we excluded 474 according to the exclusion criteria, based on a dynamic reading of the papers focusing on the title, abstract, and the most relevant parts of these papers. Finally, from the 13 left, we selected 6 papers after evaluating the first and second inclusion criteria. To conclude the selection, we extracted data from the 36 selected articles (i.e., the 30 articles found in the literature search carried out in 2018 and the other 6 articles added after the complementary literature search carried out in 2021) and assessed the quality of the papers. The quality assessment was based on well-defined criteria, as suggested by Kitchenham et al. (2009) [1]. Our goal is to evaluate the potential of the selected studies to contribute to the answers to the research questions. Then, for this SLR, we chose two quality assessment criteria, which are: (A) the level of detail of the fall classification method in the study; and (B) the presence of different types of falls addressed in the study results. For our review, the data extraction and the quality assessment were performed by two researchers who used an online form generated in Google Forms. The form containing the information to be extracted from each paper can be seen at the link https://bityli.com/Y730w.
In Table 2, we show the scores for the answers to each quality criterion specified for this SLR: for the level of detail of the fall classification method, +3 if the paper presents a detailed method for falls classification, +2 if the paper has a method for falls classification but does not detail it, and +1 if the paper uses the fall classification of another study; for the presence of different types of falls addressed in the study results, +4 if the paper evaluates all fall types separately. The first criterion indicates whether the study presents a detailed fall classification method, which is a set of replicable and sequential activities that must be performed by the computational solution to classify falls. This criterion is directly correlated to the first and second research questions and has a higher weight in our evaluation. The second criterion assesses whether the evaluation procedure in each study considers the different types of falls. By "different types of falls addressed in the study results", we mean results of the studies (possibly from experiments) that indicate not only that a fall has occurred but also something that characterizes the fall, for example, the direction of the fall (front, back, left, or right), the place where the fall occurred (kitchen, bathroom, living room), whether the fall was due to a slide, or whether the fall was slow or fast. Figure 2 illustrates the distribution of the sum of the quality assessment criteria values multiplied by their weights for the 36 papers selected for this SLR. C. SYNTHESIS AND THE GROUNDED THEORY We arranged the extracted data in a Google sheet, and the data were synthesized based on quantitative and qualitative analyses to arrive at the results that we present in the next section. For the qualitative analysis, we used the grounded theory (GT) methodology [27]. According to Corbin and Strauss (2008) [27], GT is a specific methodology developed for building theory from data, but grounded theory can be used in a more generic sense to denote theoretical constructs derived from qualitative analysis of data. In general, GT has the following steps: planning, data collection, coding, and reporting [27]. In the planning step, we identify the area of interest and the research question. In our case, the area of interest is "Computational classification of human falls" and the research question is: "What are the computational methods used to classify falls? Furthermore, how do these methods work?". After the planning step, we did the data collection, which is necessary to answer the research question. For our analysis, we used the data obtained during the data extraction phase of the systematic review. The coding step is the main stage of GT. According to Corbin and Strauss (2008) [27], in this step, we extract concepts (codes) from the raw data and correlate them hierarchically until we obtain a central concept (or code). In this research, we aim to obtain and relate concepts that characterize the methods used to classify falls. The coding step involves three tasks: open, axial, and selective coding. As presented in Figure 3, the coding step has two unique characteristics: theoretical sampling and constant comparative analysis [28]. Theoretical sampling is the step of collecting data for comparative evaluation, which means that insights from initial data collection and analysis lead to subsequent data collection and analysis. Constant comparative analysis is an iterative activity of concurrent data collection and analysis.
The results of the coding phase are presented in Section III. D. THREATS TO VALIDITY This systematic literature review focused on identifying computational solutions for the classification of human falls. Therefore, it is possible that there are studies in the medical literature about fall classification not selected by this review, because they do not use computational technologies for classification. It would then be interesting for future work to identify how the medical literature treats the classification of falls and to use that to propose new computational methods. It is also possible that there are relevant studies related to this SLR that we could not find, because: (i) the study sources are not indexed by the databases used in this review, and (ii) the query string does not cover the studies that we needed. However, to mitigate these threats, we used relevant electronic databases [20] [21], similar to many systematic reviews in the field covered by this SLR. Besides, several attempts were made to construct the final version of the query string. Moreover, we used the snowballing strategy [18] to increase the coverage of articles and to address possible inconsistencies of the query string. III. RESULTS In this SLR, we selected 36 papers to answer the defined research questions. These studies were published between 2006 and 2021. Table 3 shows the list of selected studies by the type of hardware used in the studies. A. FALL CLASSIFICATION METHODS We used the coding process of the GT methodology to analyze the fall classification methods and their techniques. Firstly, in open coding, we check the data to understand the essence of what they express [27]. We inspected the data extracted from the papers using the extraction form, as done in Carvalho et al. (2018) [63]. Then, a conceptual name (code) is created to represent our understanding. Codes consist of an entire word, phrase, or paragraph. Table 4 presents some examples of codes. We used the QDA Miner Lite tool to aid open coding, as done in [64]. We created 61 codes, which were divided into five categories: Sensors, Hardware limitations, Background and Foreground Extraction (BFE) techniques, Feature extraction techniques, and Classification techniques. These categories were extracted from the articles themselves while we refined the codes. Table 5 presents the identified codes divided by categories. To facilitate the analysis, we identified the types of technology associated with each code. [Table 4 examples of coded quotations: "We also collect segmented data streams generated by falls with various falling directions to build the anchoring data streams for the later DTW distance calculations..." [30]; and, for the code "fall classification strategy based on thresholds", "The threshold was determined by considering accelerations in SVM (Signal Magnitude Vector) and in the x-, y-, and z-axes, whereas falls and stumbles were simulated..." [46].] The sensors category contains the kind of hardware used for the sensing of the raw data. The hardware limitations category presents the hardware limitations related to the device used to obtain the raw data. The BFE techniques category comprises image preprocessing techniques to remove background and foreground to determine the form to be tracked in the video, allowing feature extraction. These techniques are exclusively related to video technologies. The feature extraction techniques category contains the techniques used to extract features from the raw data.
Finally, the classification techniques category contains the techniques used for fall classification. Next, we correlated the open coding categories with the sequence of activities executed for falls classification in the selected papers (axial coding step). With this, we identified that the fall classification solutions follow the method of Figure 4a when using wearables or AAL sensors, and the method of Figure 4b when using video sensors. Figure 5 shows the representation of axial coding: it presents the relationships between the code categories from open coding and the activities of the fall classification methods. Lastly, according to Corbin and Strauss (2008) [27], when all categories can be related to a core category, the researcher is doing selective coding. Selective coding is the final step of Grounded Theory and consists of linking categories around a core category and refining the resulting theoretical construction. This core category is "Falls Classification Methods" in our research, as shown in the figure. B. ACTIVITIES AND TECHNIQUES OF FALL CLASSIFICATION METHODS This section describes the activities of the fall classification methods and the techniques used in the selected studies for each activity. The sensing activity involves obtaining and storing the raw data that will be processed to generate the features. Associated with the sensing activity are the categories of sensors and hardware limitations. Ambient assisted living (AAL) environment sensors [16], [29], [30], [58], [62] obtain continuous data from specific locations, and these data vary when there is movement within that space. The presence sensors are used in conjunction with sensors of other types of technology and fulfill the function of determining only the location of the individual in a specific room within the AAL environment, while the other AAL sensors obtain the data that will be used to determine the type of movement, for example, the type of fall. The video sensors [32]-[38], [40], [59]-[62] can, in general, be divided into four types of approaches, using 2D video, 3D video, infrared, or the variation of luminosity or colors. In all cases, the general idea is to identify a region of interest of the video that contains the human body; when this region varies, an occurrence of a fall is identified. Finally, all wearable approaches [4], [16], [41]-[62], [65]-[69] use an accelerometer, from whose raw data features are derived to identify and classify the fall. However, many of the works also used other sensors, such as a gyroscope, a magnetometer, a barometer used as an altimeter, an ECG, and even heart rate sensors, used to identify the heart rate at the time of a fall. We found some hardware limitations directly related to the sensing of the approaches that use video or wearables. The similarity between various human postures, the occlusion caused by objects in front of the individual, and the limited memory are the hardware limitations identified for video approaches. Finally, the limited battery of the devices, the low processing power, and the amount of storage of the equipment are the most common restrictions for the wearables. Besides, the location of the wearable on the body also influences the measurement. Most papers that treat this subject indicate that the results are best when the device is on the chest or the waist of the person. The BFE activity separates the region of interest from the rest of the video. This activity is part of the video preprocessing and later affects feature extraction.
The BFE techniques category is associated with this activity. Each BFE technique represents the video as points with values that vary among them. This variation may, for example, be obtained by checking the variation of the pixel sets that delimit specific regions of the image, as in the Gaussian mixture technique used in [35], [36], [38]. The feature extraction activity involves generating features from raw data or preprocessed data. These features will be used to detect and classify falls. The feature extraction techniques category is associated with the feature extraction activity. Each feature extraction technique combines raw or preprocessed values to generate more representative values (features). For example, a feature extraction technique for a solution using an accelerometer device can generate the Signal Magnitude Vector (SMV) feature [16], [43], [46], [47], [52]-[54], [69]. The SMV is generated by combining the values obtained for each axis during an accelerometer measurement and follows the formula SMV(t_i) = sqrt(A_x(t_i)^2 + A_y(t_i)^2 + A_z(t_i)^2), where t_i indicates the measurement at time i, and A_x, A_y, and A_z are the accelerometer values from the x, y, and z axes. The SMV feature can be used to generate other features, like the standard deviation, or can be used alone by the classification strategies. We found 67 different features, as presented in Table 6, separated by the type of hardware. Note that some features are associated with more than one kind of device. In the last activity, the classification strategies are executed, including the application of pattern recognition techniques. Note in Figure 4 that the types of falls are inputs to the activity, so they are predetermined. We identified seven types of fall classification techniques. The most common is the use of thresholds; in these cases, characteristic values, known as thresholds, are defined for certain phases of the movement of the fall. By exceeding these thresholds, the fall can be identified and, more specifically, the type of fall. These thresholds are drawn from previous studies or determined by applying a pattern recognition technique to a training group. This training group consists of data obtained from fall experiments explicitly performed for a study or collected from public fall databases. Another type of fall classification technique usually found in the papers is pattern recognition algorithms, in one or multiple phases [50], to classify falls based on a training set. With the algorithm trained, when a new fall occurs, this event is classified according to the class whose feature values it most closely resembles. Some approaches use both thresholds and pattern recognition algorithms to detect and classify falls, rather than pattern recognition algorithms used only to identify thresholds. Figure 6 presents the pattern recognition algorithms and how many of the selected studies use each algorithm. It is worth mentioning that some studies contain more than one of these algorithms. We can see that Artificial Neural Networks (ANN), k-Nearest Neighbors (KNN), and Support Vector Machines (SVM) are the most used algorithms. We believe this happens because they can sort data quickly and produce better results than other algorithms. However, the average training time of these algorithms is higher than that of others, like tree-based algorithms. It is worth noting that there was a similar prevalence of ANN, SVM, and KNN algorithms in studies of wearable-based and video-based systems.
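To make the SMV feature and the threshold-based classification strategy concrete, the following minimal Python sketch computes the SMV for an accelerometer window and applies a simple threshold rule. The threshold value, the direction heuristic, and the fall labels are hypothetical placeholders for illustration only; they are not taken from any of the reviewed studies.

```python
# Minimal sketch, assuming each sample is an (ax, ay, az) accelerometer reading in g.
import math

def smv(ax: float, ay: float, az: float) -> float:
    """Signal Magnitude Vector for one accelerometer sample."""
    return math.sqrt(ax * ax + ay * ay + az * az)

def classify_event(samples):
    """samples: list of (ax, ay, az) tuples covering a candidate fall window."""
    magnitudes = [smv(*s) for s in samples]
    peak = max(magnitudes)
    if peak < 2.5:  # hypothetical impact threshold (in g)
        return "no fall"
    # A very crude direction estimate from the axis with the largest mean acceleration;
    # real systems use richer feature sets and trained classifiers.
    mean_ax = sum(s[0] for s in samples) / len(samples)
    mean_ay = sum(s[1] for s in samples) / len(samples)
    if abs(mean_ax) >= abs(mean_ay):
        return "forward/backward fall"
    return "leftward/rightward fall"

# Example usage with synthetic samples:
window = [(0.1, 0.0, 1.0), (1.9, 0.3, 2.1), (2.8, 0.2, 0.4)]
print(classify_event(window))
```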
However, most of the other algorithms were used by studies of video-based systems. The studies [47], [40], and [62] use a set of fuzzy inference rules to detect and classify falls. They apply inference rules according to the values assumed by the features. This strategy is similar to the use of thresholds, but, in their case, some sets of values are related to the occurrence of the same type of fall, depending on the inference rules formulated. In short, we observed that the thresholds strategy is more common in systems that use wearable sensors and smartphones to obtain data. In contrast, there is a prevalence of strategies based on logical inferences and pattern recognition algorithms in video-based systems. Some studies also use specific strategies to detect falls. Li (2011) [58] proposes a specific grammar based on features. This approach detects a particular type of fall by combining the grammar elements in certain ways. In He and Li (2013) [54], classifiers are generated based on features extracted from wearable data, which, when combined in specific sequences, correspond to particular types of falls. C. TYPES OF FALLS In our systematic review, we identified a total of 51 different types of falls. According to Yu (2008) [70], falls are related to the movement performed and the position, and are divided into four major categories: falls from standing, falls from sitting, falls from lying, and falls from standing on a support (e.g., a ladder). However, we found other categories of types of falls in Makhlouf et al. (2018) [16], which classifies falls according to three different types of cardiac problems (bradycardia, tachycardia, and cardiac arrest) and according to where they occurred (e.g., bathroom, kitchen, room, living room). In addition, Saha et al. (2018) [57] and Gulati and Kaur (2021) [62] show falls related to cardiac and respiratory problems. Therefore, we decided to categorize the types of falls into four categories: falls related to health issues, location, the position of the person, and the kind of motion. Figure 7 shows the types of falls for the category Kind of Motion and Figure 8 presents the types of falls for the other three categories. The number next to each type of fall in the figure indicates the number of articles in which the type of fall was mentioned. The categories kind of motion and position include the same types of falls presented by Yu (2008) [70], but they have more examples of falls that use elements related to the movement performed (direction of fall, rotation, speed, severity) and the position before or after the fall. Finally, it is worth noting that the most used falls in the studied literature are related to the direction of movement (forward, backward, leftward, and rightward), as can be seen in Figure 7. D. PROFILE OF THE EXPERIMENT PARTICIPANTS In general, to evaluate the proposed approaches for the classification of falls, the studies use falls from databases or from experiments generated by each research group. Most of these papers present a profile of the experiment participants and, with this, it is possible to get more information about the approaches. We identified that 19 of the articles present the number and some profile information of the participants. The papers [49] and [46] use falls or daily activities from adults over 60 years old, the main risk group. The others use experiments with adults, men and women, between 19 and 57 years old, with most participants between 20 and 30 years old.
Some of these authors (e.g., [49] [38] [52]) admit that there could be variations when they use their proposals with older adults, but, according to Karantonis et al. (2006) [46], experiments without the presence of older adults do not make the proposal unfeasible. Moreover, several studies have also identified the participants' height, weight, or body mass index. According to these studies, these characteristics may influence the measurements of the sensors, but they do not show examples of how these characteristics affect the results. IV. DISCUSSION In this section, we discuss the SLR results and identify research gaps and challenges. This SLR aims to discover studies that present classifications of human falls supported by computational methods, and how and why these studies use them. In this way, we found 36 studies that have a method to classify falls. In general, to evaluate such fall classification methods, the authors used experiments with data from different types of falls performed. Table 7 presents a summary of the answers to each Research Question (RQ). As shown in Table 7, we identified two different types of computational methods used by the studies to classify falls, which differ mainly by the sensing technology used. We also identified the techniques used in each activity of these methods. However, most of these methods are used only to improve the accuracy and precision of fall detection systems or fall risk identification systems; they do not seek to identify the severity of these falls or to prioritize the falls considered most dangerous in the medical literature, such as lateral falls [14] [15]. Makhlouf et al. (2018) [16], Saha et al. (2018) [57], and Gulati and Kaur (2021) [62] are the exceptions that use fall types associated with diseases. There are still few studies that associate falls with specific health problems using computational technologies. In this sense, we believe that this type of relationship between falls and other health issues is a challenge that can be explored in future research. As we mentioned before, these studies classified the types of falls into two categories: based on the type of movement or based on the person's position before and after the fall. However, most of them do not clarify why these are the categories that should be considered. We believe that, to build relevant databases, it is important to understand the nature of the data and categorize it. Thus, another challenge that could be explored in future work is to understand what makes the categories of the types of falls used in the literature relevant and whether other relevant characteristics allow a better categorization of falls. In this sense, an exciting gap to be explored in future research is to identify, together with the literature of the health area and health professionals, whether the types of falls presented by the works selected in this SLR are relevant to determine the severity of the fall event. Moreover, the proposal of a classification method using sensor data obtained from fall events to identify new types of falls, for example, using grouping techniques such as clustering, could generate interesting future research. Some studies selected for this SLR utilize clustering techniques (e.g., the k-means algorithm), but these techniques were used to classify the falls according to the predefined types of falls. Finally, only Ponce and Martínez-Villaseñor (2020) [60] take into account how the falls database used is classified.
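As a sketch of the clustering direction suggested above, the following Python example groups synthetic fall feature vectors with k-means to look for data-driven groupings rather than predefined fall types. The feature vectors and data are invented for illustration only; this is not an approach used by the reviewed studies.

```python
# Illustrative sketch: clustering fall events by extracted features (synthetic data).
import numpy as np
from sklearn.cluster import KMeans

# Each row is one fall event: [peak SMV (g), mean SMV (g), duration (s)] -- hypothetical features.
features = np.array([
    [3.1, 1.2, 0.8],
    [2.7, 1.0, 0.7],
    [4.5, 1.8, 1.2],
    [4.2, 1.7, 1.1],
    [2.9, 1.1, 0.9],
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(features)
print(kmeans.labels_)           # cluster assignment of each fall event
print(kmeans.cluster_centers_)  # candidate data-driven "fall type" prototypes
```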
We believe that it is advantageous to use classification methods on existing fall datasets to classify them or to assist the creation of new fall databases. Research Question 1. What are the computational methods used to classify falls? We identified two different methods used for fall classification. We found a method with three steps for solutions using AAL sensors or wearables: sensing, feature extraction, and execution of the classification strategy. For video approaches, the same activities are executed, but before the feature extraction there is one more activity: Background and Foreground Extraction. Research Question 2. What are the techniques used in each activity of these methods? This SLR identified four different techniques for background and foreground extraction of videos, twenty-one techniques for feature extraction, and seven different fall classification strategies. Also, we identified fourteen different types of sensors used by the selected studies, and five hardware limitations. The list of techniques is presented in Table 6 and detailed in the results section of this study. Research Question 3. What are the advantages of using fall classification methods? The studies intend to improve the precision and accuracy of systems, applications, or approaches for automatic detection of falls and for fall risk recognition. The authors argue that different types of falls may behave considerably differently in the data, and classifying each type of fall or group of types of falls allows greater accuracy in detecting falls. Besides, Makhlouf et al. (2018) [16], Saha et al. (2018) [57], and Gulati and Kaur (2021) [62] explore the advantages that identifying the type of fall can have for the best treatment of the patient. Research Question 4. Which types of falls are classified by these methods? In total, we identified 51 types of falls. According to the literature, it is possible to categorize the types of falls as falls related to the type of movement and falls related to the person's position before and after the fall movement. However, in our research, we found some works presenting types of falls that do not fit into these categories. Therefore, we divided these types of falls into two other categories: fall location and falls related to health issues (see Figures 7 and 8). V. CONCLUSION Different types of falls can directly influence the quality and accuracy of fall detection and fall risk identification systems. Fall classification allows identifying particular problems and risks of specific types of falls. Furthermore, according to the medical literature, there is an inherent severity of each type of fall that is also important to consider. The detection and classification of falls can be done automatically using computer devices equipped with sensors capable of monitoring the movement of patients. The main reason to use a computational approach is the agility in identifying the fall and the risks inherent to the type of fall that the person suffered. Therefore, the systematic literature review presented in this paper aimed to find automatic methods of fall classification in the literature as well as gaps for future research. We utilized a two-step search strategy: a search using three academic article databases, and a snowball strategy on the selected papers after searching the databases. Then, we found several computational fall classification solutions that, as we concluded, follow two general methods; the differences between them are the sensors and activities employed.
The first method has three steps and is executed by wearable and AAL approaches, with the following activities: sensing, feature extraction, and execution of the fall classification strategy. The second method has four steps and is executed by video solutions, with the same activities as the previous method plus a BFE activity. Besides, in this SLR, we also organized the types of falls found in the selected studies. Finally, as one of the results of this study, we identified challenges and open questions in the papers selected by the SLR that can be addressed in future work, which are summarized as follows: (i) comparison of the techniques applied in each step of the methods and generation of a catalog to assist the development of new hardware and software solutions for fall detection and classification; (ii) a new approach for classifying falls that addresses the types of falls categorized in the medical literature and their inherent severity; and (iii) development of a solution, considering the methods and techniques identified in this study, to help classify and build new fall databases.
Relationship between Obesity and Coronary Artery Disease Defined by Coronary Computed Tomography Angiography In this context, changes in lifestyle have contributed to an increased incidence of cardiovascular risk factors and, ultimately, of coronary disease. Due to its increasing incidence on a global scale (39% of adults aged 18 years and older are obese), obesity has become one of the factors with the greatest impact on the risk of coronary artery disease (CAD).2 Obesity is recognized as one of the most important underlying risk factors for a wide variety of metabolic diseases, such as hypertension, dyslipidemia, and diabetes, which are strongly associated with the development of cardiovascular diseases.3 Nevertheless, whether obesity alone is a risk factor for CAD has not been well established.4-6 In this regard, the phenotype of metabolically healthy but obese (MHO) individuals, whose hormonal and insulin-resistance profile is not what would be expected from their increased adiposity, has become a matter of discussion. 7,8 Previous studies have investigated the incidence of cardiovascular disease in MHO individuals, with controversial results. 9,10 Also, although data derived from intermediate markers of disease (e.g. the carotid intima-media thickness) can be used to evaluate the association of these parameters with the presence of CAD in MHO individuals, there are few data available about the association between body mass index (BMI) and coronary artery calcium score as a determinant of subclinical atherosclerosis. The coronary artery calcium score was shown to be superior to other methods for the evaluation of subclinical atherosclerosis in cardiovascular event prediction. 11 Therefore, the aim of the present study was to evaluate whether obesity alone is correlated with the presence of CAD, assessed by coronary computed tomography angiography (CCTA). Patients and study design We reviewed the database and patients' medical records of a tertiary hospital in São Paulo (Brazil). The sample was composed of 1,814 patients consecutively referred for cardiac/coronary computed tomography angiography between August 2010 and July 2012. The study was approved by the ethics committee of the Pontifical Catholic University of Paraná (approval number 1524216) and was in accordance with the Helsinki Declaration. The study was registered in Plataforma Brasil (registration number 55363016.6.0000.0020) and informed consent to participate in this study was waived. All data were collected and registered in specific spreadsheets by trained investigators, and then manually transferred to a database of the CCTA division. Epidemiological and clinical data Data contained in the patient admission questionnaire were collected by direct interview and/or from medical records. Variables included demographic and anthropometric data, indication for CCTA, risk factors for CAD (hypertension, diabetes, dyslipidemia, smoking, family history of CAD), parameters of CCTA acquisition, and results of the test. Computed tomography angiography, a contrast-enhanced computed tomography technique, is clinically used for the evaluation of coronary stenosis/obstruction. The test also allows the calculation of the coronary artery calcium score, a non-invasive imaging measure used to identify atherosclerosis in asymptomatic individuals.
Definitions of obese and metabolically healthy but obese patients Patients with a BMI greater than 30 kg/m2 were considered obese, and MHO patients were identified based on the absence of all of the following criteria: 1) hypertriglyceridemia (triglycerides > 150 mg/dL) or pharmacological treatment for this condition; 2) low HDL-cholesterol (HDL < 40 mg/dL) or pharmacological treatment for this condition; 3) hypertension, defined as blood pressure ≥ 130/85 mmHg or pharmacological treatment for this condition; 4) altered fasting glucose (glucose ≥ 100 mg/dL), or diagnosis of diabetes, or pharmacological treatment for this condition. Coronary computed tomography angiography Acquisition parameters and protocol Two computed tomography scanners were used for the tests (Siemens Somatom Sensation 64 and Siemens Somatom Definition Flash; Siemens Healthcare, Forchheim, Germany), each following its respective protocol. Patients with blood pressure higher than 100 mmHg received 5 mg of sublingual nitrates prior to the test, and a beta-blocker (metoprolol 150 mg in patients with BMI ≥ 30 kg/m2, and 75 mg in those with BMI ≤ 30 kg/m2) was orally administered to patients with a heart rate higher than 80 bpm on the test day. In addition, if necessary, intravenous metoprolol (maximum 20 mg) was used during CCTA to achieve the target heart rate (≤ 65 bpm). Patients with no history of angioplasty or surgical revascularization underwent computed tomography scanning synchronized with electrocardiography before contrast injection for quantification of coronary artery calcium (Agatston units). Subsequently, contrast was injected at high flow rates (maximum of 6 mL/s; Henetix 350 mg/mL, Guerbet, Rio de Janeiro, Brazil), with concomitant acquisition of CCTA. The following parameters were obtained for analysis: 1) tube voltage of 100-140 kV; 2) adjusted tube current (estimated by the tomography device according to the chest attenuation of each patient); 3) collimation of 2 × 128 × 0.6 mm or 64 × 0.6 mm, according to the scanner specifications. The tests on both scanners were performed in helical acquisition mode or, on the dual-source (two X-ray sources) scanner, in prospective axial and high-pitch spiral mode. Image reconstruction For coronary artery calcium score calculation, images were reconstructed with a section thickness of 3 mm and a 3 mm interval. Coronary calcifications with attenuation ≥ 130 HU in an area ≥ 3 mm2 were quantified according to the algorithm proposed by Agatston et al. 12 CCTA images were reconstructed with a section thickness of 0.6 mm and an increment of 0.3 mm, in systole and/or in phases determined automatically or manually (in the case of spiral or prospective acquisition), to minimize cardiac motion artifacts. For better image quality, iterative reconstruction algorithms were used. Image interpretation All images (calcium score and CCTA) were analyzed on a dedicated workstation (Leonardo Definition, Siemens Healthcare, Erlangen, Germany). All CCTA images were analyzed by two observers; discrepancies were resolved by consensus. Coronary artery calcium was quantitatively determined by visual identification of coronary calcifications. Lesions in different coronary territories were automatically summed to determine the total calcium score. Per-segment analysis of CCTA images was performed following the Society of Cardiovascular Computed Tomography guidelines. 13 CAD was established at two levels: 1) calcium score > 0 (Agatston); 2) presence of atherosclerotic plaque (CCTA).
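The obesity and MHO definitions above amount to a simple rule set, sketched below for clarity. The sketch is illustrative only: the dictionary field names and the helper functions are hypothetical and are not part of the study's actual data pipeline.

```python
"""Illustrative encoding of the obesity / MHO definitions used above.
Field names (bmi, triglycerides, ...) are hypothetical placeholders."""

def is_obese(patient: dict) -> bool:
    return patient["bmi"] > 30.0  # kg/m2

def is_metabolically_healthy_obese(patient: dict) -> bool:
    if not is_obese(patient):
        return False
    abnormalities = [
        patient["triglycerides"] > 150 or patient["on_triglyceride_treatment"],
        patient["hdl"] < 40 or patient["on_hdl_treatment"],
        patient["systolic_bp"] >= 130 or patient["diastolic_bp"] >= 85
            or patient["on_antihypertensive_treatment"],
        patient["fasting_glucose"] >= 100 or patient["has_diabetes"]
            or patient["on_glucose_lowering_treatment"],
    ]
    # MHO requires the absence of all four metabolic abnormalities.
    return not any(abnormalities)
```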
Obstructive coronary disease was defined by the presence of any coronary stenosis ≥ 50%. Statistical analysis Binary data were described in absolute numbers and percentages. Continuous variables with normal distribution were presented as mean and standard deviation, whereas those without a normal distribution were presented as median and interquartile range. Data normality was tested by the Shapiro-Wilk test; the coronary artery calcium score was the only variable that was not normally distributed. Categorical variables were compared using the chi-square test. Continuous variables were compared using the unpaired Student's t-test. The calcium score was compared between obese and non-obese patients by the Mann-Whitney test. A multiple linear regression model was used to assess the relationship between cardiovascular risk factors and the presence of obstructive CAD. For continuous variables of the model, β coefficients were used to indicate changes in the dependent variable (presence of obstructive CAD) for a unit change in each independent variable after controlling for confounding variables. For categorical variables (e.g. sex, smoking), the β coefficient represents the difference in the dependent variable (presence of obstructive CAD) according to the status (e.g. male vs. female; smokers vs. non-smokers) after controlling for the confounding variables of the model. Statistical analysis was performed using the Stata software (version 11, StataCorp, College Station, Texas, USA). The level of significance was set at 5%. Results A total of 1,814 consecutive patients with a medical indication for cardiac/coronary computed tomography angiography were referred to a tertiary hospital in São Paulo between August 2010 and July 2012. We excluded from the analysis patients whose indication for the test was not screening for CAD (e.g. patients with congenital heart disease, patients referred for evaluation of valve disease or pulmonary veins). In addition, we also excluded patients with a history of CAD (myocardial infarction, angioplasty and/or surgical myocardial revascularization). A total of 1,383 patients were evaluated (Figure 1). Table 1 describes the main epidemiological characteristics of the patients. Mean age was 58.5 ± 11.5 years, and 66.3% (n = 917) of the patients were men. In general, the prevalence of cardiovascular risk factors was not different between obese and non-obese subjects (Table 1), and the same was observed for the prevalence of obstructive CAD. Obstructive CAD was present in a similar percentage (18.4% in both groups) in obese patients (n = 58) and in those with BMI < 30 kg/m2 (n = 197) (Figure 2). The presence of CAD, defined by the presence of coronary calcifications, was significantly different between the groups. Median calcium score was 1.4 and 14.7 Agatston units in the groups of non-obese and obese patients, respectively (Figure 2). In our sample, the mean calcium score percentile by age, sex, and ethnicity was 61. In order to establish the role of each risk factor in the development of obstructive CAD, we used a multiple linear regression model including all cardiovascular risk factors (Table 2). Variables significantly associated with obstructive CAD, defined by CCTA, included age, male sex, and diabetes; hypertension was of marginal significance (p = 0.08). Obesity was not correlated with obstructive CAD (p = 0.10) when the other variables were held constant.
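As a rough illustration of the kind of model described above, the sketch below fits a multiple regression of obstructive CAD on the cardiovascular risk factors using the statsmodels formula interface. The synthetic data, column names, and variable coding are hypothetical placeholders; this is not the authors' analysis code or data.

```python
"""Illustrative multiple-regression sketch, loosely mirroring the analysis
described above. All data below are synthetic placeholders."""
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "age": rng.normal(58.5, 11.5, n),
    "male": rng.integers(0, 2, n),
    "diabetes": rng.integers(0, 2, n),
    "hypertension": rng.integers(0, 2, n),
    "dyslipidemia": rng.integers(0, 2, n),
    "smoking": rng.integers(0, 2, n),
    "family_history": rng.integers(0, 2, n),
    "obese": rng.integers(0, 2, n),
})
# Synthetic 0/1 outcome (presence of obstructive CAD), for illustration only.
df["obstructive_cad"] = (rng.random(n) < 0.18).astype(int)

model = smf.ols(
    "obstructive_cad ~ age + male + diabetes + hypertension"
    " + dyslipidemia + smoking + family_history + obese",
    data=df,
).fit()
# Each beta estimates the change in the outcome per unit change in the
# predictor (for binary predictors: the difference between groups),
# holding the other covariates of the model constant.
print(model.summary())
```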
Discussion The present study showed that, although the prevalence of obstructive CAD was not different between obese and non-obese patients, coronary artery calcium scores were significantly lower in non-obese than in obese patients. Obesity is believed to have a direct effect on metabolic health, since proinflammatory cytokines released by the adipose tissue can lead to subclinical inflammation in the long term, even if counterbalanced by anti-inflammatory cytokines. This condition is characterized by a gradual increase in inflammatory markers, such as C-reactive protein, TNF-alpha and interleukin-6, which have a direct relationship with insulin resistance, hepatic steatosis and endothelial dysfunction, leading to atherosclerosis. 14 Despite the great potential of the method, the use of CCTA for the establishment of a correlation between CAD and obesity is still little explored. Compared with catheterization, computed tomography angiography is a highly accurate, non-invasive method, with acceptable levels of patient radiation and contrast, that can be useful in the identification of coronary arterial narrowing by atherosclerotic plaques. Although the association of obesity with CAD is well documented, 15,16 there is evidence that cardiovascular risk factors are not more common in MHO individuals than in non-obese subjects. [17][18][19] In other words, obesity alone would not be a determinant of an increased incidence of CAD. This is corroborated by our findings on the prevalence of obstructive coronary disease, which was not different between obese and non-obese subjects. On the other hand, the higher values of coronary artery calcium score among obese individuals suggest a correlation between this condition and the development of subclinical atherosclerosis. Chang et al. 20 demonstrated that MHO patients have higher calcium score values than non-obese patients. However, after adjusting for metabolic risk factors, this association was attenuated and no longer statistically significant. The authors concluded that obesity is an additional risk for coronary atherosclerosis, including the subclinical form, mediated by metabolic changes whose thresholds are lower than those considered abnormal. In this context, one important factor is the influence of BMI on tomography imaging analysis. Obese individuals show a reduced signal-to-noise ratio in chest images, due to increased adipose tissue compared with non-obese individuals. The higher chest wall thickness in obese subjects attenuates the X-rays emitted from the tubes, so that fewer photons reach the detector for image construction, resulting in a more "grained" (noisier) image. Such loss could be compensated for by modulating the tube voltage and the X-ray tube current, improving the signal-to-noise ratio of these tests. However, the acquisition methods used for coronary artery calcium scoring do not allow adjustments of the scanner tube voltage, which is fixed at 120 kilovolts. In practical terms, this implies that images with a lower signal-to-noise ratio are obtained from CCTA in obese patients.
In parallel with the potential effect of obesity on coronary calcification, we believe that this change in the signal-to-noise ratio of obese patients' images may have contributed to changes in the threshold for coronary artery calcium detection, artificially increasing calcium score levels in this population. In this regard, the mean calcium score percentile in our patients was 61 according to the Multi-Ethnic Study of Atherosclerosis (MESA), 21 indicating higher-than-average coronary calcification. However, these results are not comparable with those reported in the MESA study, which evaluated asymptomatic individuals with no history of CAD, due to the selection bias of our study population (patients referred for coronary tomography for investigation of CAD and hence more likely to have the disease). Limitations Our study has limitations inherent to its retrospective design. Since this was a cross-sectional study evaluating the association of obesity with CAD based on medical records, the results do not take into account some variables, such as the time of exposure to triggering factors of the disease. The definition of metabolically healthy obesity was based on the identification and exclusion of obesity-related metabolic abnormalities (hypertension, dyslipidemia, diabetes). Nevertheless, laboratory markers of insulin resistance, including the Homeostatic Model Assessment of Insulin Resistance (HOMA-IR), were not used in patient recruitment in our study. Our study population was selected based on BMI, which, although it is the most widely used anthropometric variable to characterize obesity, does not provide information regarding body composition. Therefore, assuming that the percentage of body fat has a direct effect on insulin resistance, BMI alone does not give us any insight into this condition. In addition, other anthropometric measures known to provide a more accurate estimation of visceral fat (e.g. waist circumference and waist-to-hip ratio measurements) were not registered in the medical records, and hence could not be used in the analysis. Finally, the definition of CAD by CCTA may be controversial; although CCTA is a very robust method to define non-obstructive atherosclerosis by coronary artery calcium scoring, the method considered the gold standard to detect obstructive coronary disease is invasive coronary angiography, combined or not with intracoronary ultrasound.
What is empathy for? The concept of empathy has received much attention from philosophers and also from both cognitive and social psychologists. It has, however, been given widely conflicting definitions, with some taking it primarily as an epistemological notion and others as a social one. Recently, empathy has been closely associated with the simulationist approach to social cognition and, as such, it might be thought that the concept's utility stands or falls with that of simulation itself. I suggest that this is a mistake. Approaching the question of what empathy is via the question of what it is for, I claim that empathy plays a distinctive epistemological role: it alone allows us to know how others feel. This is independent of the plausibility of simulationism more generally. With this in view I propose an inclusive definition of empathy, one likely consequence of which is that empathy is not a natural kind. It follows that, pace a number of empathy researchers, certain experimental paradigms tell us not about the nature of empathy but about certain ways in which empathy can be achieved. I end by briefly speculating that empathy, so conceived, may also play a distinctive social role, enabling what I term 'transparent fellow-feeling'. Introduction It is a commonplace to point out that while research on empathy is burgeoning, there is little agreement amongst empathy researchers about what it is (Batson 2009; Goldman 2011). Candidates, crudely described, include our automatic and often non-conscious imitation of others' facial expressions, vocal expressions and posture (Van Baaren et al. 2009); our 'catching' of, 'mirroring', or 'resonating' with, other people's affective states (emotional contagion) that is sometimes claimed to ensue from such imitation (Rapson et al. 1994; Hatfield et al. 2009); our knowledge of the source of such imitation or contagion in another subject (De Vignemont and Singer 2006); our imagining another subject's situation, either as ourselves or as them (imagine-self vs. imagine-other perspective-taking) (Batson et al. 1997; Goldie 2011); or our feeling as the other does as a result of such an imaginative project (Coplan 2011). In addition, there are a number of accounts that build in some element of concern for the other (Batson 2011). One may be forgiven for supposing that such debates about the nature of empathy are merely verbal, about the best way to use the term 'empathy'. The danger of this might seem especially acute given that the term 'empathy' was coined as recently as the early Twentieth Century (Coplan and Goldie 2011). Of course, one is free to define the term as one pleases. I hope, however, to offer an account that combines the merits of being reasonably close to common usage of the term, of making explicit a good deal of what various theorists have wanted to say about empathy and its role in our lives, and of resisting the temptation to suppose that the term just picks out a number of phenomena whose sole uniting principle is the fact that they have been dubbed 'empathy'. Such a broad position would deprive the notion of empathy of much of its value. On the other hand, overly narrow accounts run the risk of simply ignoring a significant part of our everyday ways of speaking of empathy. The account I propose avoids both of these vices.
More positively, the fact that the concept of empathy is a relative newcomer suggests that, if it is to be retained, it must pay its way. That is, 'empathy' ought to pick out some phenomenon not picked out by some other well-understood term. And this in turn suggests a method: an account of empathy will ideally be one that shows it to make a distinctive contribution to our lives. Empathy makes a distinctive contribution if there is something that it and only it allows us to do. As I shall argue, empathy does make such a distinctive contribution and seeing what this is teaches us something both about our emotional lives and about the future direction of empathy research. As the above suggests, I propose an account of what empathy is that is motivated by an answer to the question of what empathy is for. Questions about what some psychological phenomenon is for can be approached from at least two directions, an evolutionary perspective and what might be called a 'role' perspective. Here I follow de Vignemont and Singer, What is empathy for? Here, it is important to distinguish between two questions: (i) why has evolution selected empathy? and (ii) what is the role of empathy now that it has emerged? The former question refers to the adaptive function of empathy, and the answer lies in studies of empathy in other species. The latter question refers to its functional role in everyday life (De Vignemont and Singer 2006, p. 439). From an evolutionary perspective, one asks whether some phenomenon is an adaptation, (or an exaptation or a spandrel) and, if so, what it is an adaptation to. This evolutionary question will not be my focus since, with respect to empathy, it is far from clear that there is currently evidence sufficient to support one hypothesis over another (De Vignemont and Singer 2006) and, perhaps more importantly, as I will argue, there is reason to suppose that empathy is an epistemic rather than a psychological phenomenon and so not straightforwardly open to evolutionary explanation. From a role perspective, one may ask whether the phenomenon makes a distinctive contribution to our lives-a contribution that only it makes. If it does, one may ask whether that contribution is primarily cognitive or social (cf. Batson's (2009, pp. 3-4) two questions). I will approach the question in terms of the distinctive role that empathy currently plays in our lives, whatever its evolutionary status. That role, I will argue in Sect. 2, is primarily epistemological. In Sect. 3 I propose an account of empathy designed to serve this epistemological function. Views that identify empathy with one or other of the phenomena mentioned above are, in that respect, narrow. However, there are also broad conceptions of empathy that allow that it may take any number of these forms (Preston and De Waal 2002;Thompson 2007, Chap. 13). The view of empathy that I outline walks a line between narrow and broad definitions. On the one hand, on this account empathy is not a loosely associated group but a unitary phenomenon. On the other, many of the phenomena mentioned above may feed into empathy in a number of ways. This provides us with a helpful way of understanding the oftmentioned relation between empathy and simulation. Whilst empathy is not the same as simulation, simulation may ground empathy in some cases. Plausibly, a further consequence of the view I propose is that empathy is not a natural kind. This has implications for how we should interpret certain experimental paradigms. 
They show us not about the nature of empathy itself, but about the different ways in which empathy can be achieved. I end, in Sect. 4, with the suggestion that empathy's distinctive epistemological achievement may serve a broader social purpose, enabling what I term 'transparent fellow feeling'. Sharing The project of defining empathy in the light of an account of what it is for obviously requires us to begin with an intuitive grasp of the phenomenon. On any construal that seeks to preserve something of the contemporary common-sense notion, empathy has to do with, in some sense, sharing in or, in Deonna's (2007) words, 'feeling in tune with', another person's affective state. This much is strongly suggested by the list of candidates in the previous section (imitating, mirroring, imagining, etc.). Exactly what sense of sharing is relevant is to be determined, but just this very broad understanding is sufficient for now. The fact that empathy has to do with sharing suggests a close connection with the various phenomena that come under the heading of 'simulation'. Indeed, a number of those candidates for empathy, mentioned above, are also often treated as ways of simulating the mental life of another (cf. Goldman 2006). Whilst there is surely a close affinity here, I will argue below that it would be wrong to suppose that we can simply treat 'empathise' and 'simulate' interchangeably. There is a distinctive role for empathy to play, even if the various claims made by simulationists turn out to be false. Empathy is a broader notion. A number of philosophers, influenced by early Twentieth Century phenomenologists, would deny the association of empathy with sharing, at least on some interpretations of 'share' (Thompson 2007;Zahavi 2012). Whilst I am sympathetic with a great deal of what proponents of such a view claim, it nevertheless seems clear to me that there is a use of the term 'empathy', in fact its central use, according to which it involves sharing in another's mental state. I shall, consequently, simply assume, along with the vast majority of empathy researchers, that this is the case and will not directly discuss their view here. Social and epistemic roles My question, then, concerns the function of sharing in another's affective state. Answers to this question come, by and large, in two groups: social and epistemological. First, it might be maintained that the primary, or at least a central, function of empathy is to encourage 'pro-social', or altruistic, behaviour (cf. Batson 2011). Certainly there is evidence that a number of the phenomena mentioned above (imitation, contagion, etc.) are apt to do this (Van Baaren et al. 2009). But it seems unlikely that empathy (or imitation, contagion, etc.) is strictly necessary for this. Simply putting someone in a good mood has similar effects (George 1991). So, whilst encouraging pro-social behaviour is perhaps one role that empathy plays, it is not its distinctive contribution. If this were the fundamental function of empathy, the concept would add little if anything to those of imitation, contagion, etc. on the one hand and altruism, on the other. On the epistemological side, it might be suggested that the role of empathy is to provide us with knowledge of the environment or, alternatively, of what others are likely to do (Carter et al. 2009;De Vignemont and Singer 2006). If Anita resonates with Betty's fear, then Anita may come to expect environmental danger. 
If she resonates with Betty's fear and thereby knows that Betty is afraid, she may come to expect Betty to flee since, given her own fear, she is herself disposed to flee. But, once again, this is knowledge that can be gained in other ways. Whilst empathy may well play this role of distributing the cognitive burden, such a thing is in principle available otherwise, through perception, testimony or the sorts of processes to which theory theorists appeal, for example. Once more, if the provision of these varieties of knowledge were what empathy was for, the concept would not pay its way. It would remain unclear, from a role perspective, why the concept of empathy should be of much interest. Knowing how others feel There is an epistemic function for empathy that is, I suggest, distinctive of it. Empathy provides us with knowledge of how others feel (cf. Adams 2001; Green 2007, Chap. 7; Coplan 2011). Betty may tell Anita that she is afraid. Or Anita may infer that Betty is afraid from the look on Betty's face. Or Anita may see that Betty is afraid. Each of these can constitute, in my view, Anita's coming to know that Betty is afraid. However, unless Anita empathises with Betty, Anita will not know how Betty feels. For that, Anita must share in Betty's affective state. Or so I claim. What is involved in knowing how another person feels? One might suppose that, contrary to the distinction I have drawn above, Anita's knowing that Betty is afraid is sufficient for knowing how she feels. But this is wrong. Anita may possess a functional concept of fear, indeed she may have complete descriptive knowledge of fear, but if she has never experienced anything that shares an affective character, a way of feeling, with Betty's fear (Damasio 2000, pp. 62-67) then she won't know how Betty's fear feels (Jackson 1986; Ravenscroft 1998; Brewer 2002). Rather, and generalizing for any affective psychological state ψ (whether positive or negative), I suggest that A knows how B feels only if she knows that B is ψ and how it feels to be ψ. Further, A knows how it feels to be ψ if and only if A knows that ψ feels like this. Here 'this' is an 'inner demonstrative' picking out the affective character of ψ (Perry 2003, Chap. 3). In order to secure such 'inner demonstrative' reference, there needs to be a conscious state of hers that A can demonstrate. Thus, A must be in, or perhaps have been in, some affectively matching conscious state. That is, A must share in B's affective state. Never having experienced fear, or some affectively matching state, Anita has only a partial understanding of fear. Specifically, she doesn't know how fear feels. So, whilst Anita may know that Betty is afraid, say on testimonial grounds, there is a certain piece of knowledge that she lacks. This is a consequence of the fact that it is only through feeling fear that Anita can become acquainted with the feel of fear, with the affective character of that emotion. I suggest, then, that the distinctive epistemological contribution of empathy is that it provides knowledge of how others feel. If what I have said above is correct, this seems to be something that only empathy achieves. For sharing in another's affective state is a necessary condition of coming by this knowledge. Empathy, conscious awareness, and knowing how others feel What, then, is empathy? In light of the above, I propose the following: A empathises with B's being ψ if and only if (1) A is consciously aware that B is ψ, (2) A is consciously aware of what being ψ feels like, and (3) on the basis of (1) and (2), A is consciously aware of how B feels.
Two clarifications are in order. First, one is 'consciously aware' of some fact F only if one is in some veridical conscious state with the representational content F. The state in question may be judgement, perception, or some other cognitive state with the same 'direction of fit'. Whilst the veridicality condition does not require knowledge, it is sufficient grounds to think of this as an epistemic account of empathy, and I will occasionally use 'knows' where 'veridically represents' would be strictly appropriate. Second, the fact that one is consciously aware of some fact 'on the basis' of being consciously aware of some others, means that those others may be drawn upon if challenged. So, in the case of empathy, one can defend one's claim to know how another feels by pointing out that one knows both that they are ψ and how it feels to be ψ (ψ, remember, is a variable ranging over affective states, either positive or negative). I do not suppose, however, that any conscious inferential process or reasoning must take place. How might the first condition be satisfied? On the pluralistic view which, for reasons of space, I will simply assume here, it could be satisfied in any number of ways including testimony, inference, simulation and perception (cf. Goldie 1999). That is, one can come to know that B is ψ by being told that it is so; by inferring it from other things that one believes about B, perhaps alongside generalisations about the mind that one tacitly accepts; by simulating B and attributing to her the output; or by seeing that B is ψ. On this account, simulation is to be identified not with empathy per se but rather with one way in which the first condition might be satisfied. This is so even if one were to suppose that the empirical evidence shows that we only ever come by such knowledge by way of simulation (a claim which, in any case, is rather doubtful). For at least some of the other ways of coming to know that B is ψ are surely possible. On this view, then, it would be wrong to suppose that 'simulate' is another term for 'empathise'. Empathy is epistemic, simulation is not. Thus, whilst there is a clear affinity between empathy and simulation, they are importantly different. How might the second condition be satisfied? Here there is less room for variation, for the reason that in order to know what being ψ feels like A must be acquainted with the feel of ψ, for she must be able to think, 'ψ feels like this'. To be acquainted with the feel of ψ, A must have experienced some state that feels that way. The most obvious way to satisfy (2), then, is if A is currently ψ. Indeed, according to some ways of understanding simulationism-as involving an actual and current instantiation of a conscious state affectively matching that being simulated-the way in which A comes to possess the knowledge mentioned in the first condition will typically also provide her with that mentioned in the second (cf. Goldman 2006, pp. 149-151). But this simultaneous matching of states may not be strictly necessary. I will mention three possibilities, each of which helps to further distance the concept of empathy from that of simulation. First, it may be that A can empathise with B's being ψ at t although A is not, at t, in any state that affectively matches ψ. I allow this on the condition that it is possible for A to know that being ψ feels like this where the inner demonstrative picks out the affective character of ψ preserved in episodic memory (Perry 2003, pp. 58-59;Green 2007, p. 190). 
Allowing for this is in accordance with at least one of our ordinary ways of speaking. For it can be entirely natural for Anita to say that she empathises with Betty's sadness since she, Anita, was in a similar situation last year but happily feels much better now. This is, I suggest, what we have in mind when we say, 'Yep, I know that feeling'. It is important to point out, however, that in order for this possibility to meet condition (2), Anita's episodic memory of being ψ must be conscious, since it must be able to ground a conscious thought 'ψ feels like this'. Green, whose account of empathy is in a number of ways akin to mine, makes this point nicely, I could save a lot of time and effort simply by calling up into conscious awareness my memory of how I felt […] On the basis of that conscious awareness, I now know [or, at least, veridically represent] how you feel, not dispositionally but occurrently (Green 2007, p. 190). Second, A may empathise with B's being ψ even if A has never been ψ. All that is required is that A is, or has been, in some state that affectively matches ψ. There are a number of ways in which this condition might be met. Most obviously, the affectively matching state may be a state of the same kind, for example shame. But it may be of another kind. For example, it may be that in some instances of imaginatively representing oneself as ashamed, the affective character 'carries over'. It is not implausible, it seems to me, that imagining performing a shameful act can in some cases feel the way shame does, to some degree at least, even though one may not satisfy the conditions on being ashamed, which may include quite sophisticated appraisals. A final way in which A may empathise with B's being ψ without ever having been ψ has already been hinted at in the previous paragraph. It must be recognised that the affective character of a psychological state can be specified to different levels of determinacy. From a bird's eye view, a psychological state's affective character may just amount to pure valence: feels good versus feels bad (for a more sophisticated account see (Green 2007, Sect. 7.2)). From the snail's eye view, affective character will be a highly nuanced super-determinate (Funkhouser 2006). Now, it is decidedly unlikely that, through emotional contagion, perspective-taking, etc. A will come to be in a state with the same super-determinate affective character as B's state (cf. Goldie 2011). Nevertheless, she might well come to be in some state that is similar enough (just how similar is similar enough is, plausibly, not a matter to be settled independent of context). Thus, even if A is not, and has never been, in exactly the same psychological state as B, she may nevertheless be, or have been, in a state that affectively matches it at some level of determinacy. It is reasonable to suppose that such levels of determinacy will correspond to the extent to which A knows, or is aware of, how B feels. Again, this tallies with our linguistic practise, since we are happy to allow that one can empathise with another to a greater or lesser extent. This, I am suggesting, amounts to the extent to which one knows how they feel. The above considerations bring out the fact that in order to empathise with B s being ψ, A need not be, or have been, in an affective state with the same intentional object. A can empathise with B even though B is upset about one thing-that her child is being bullied at school-and A is, or was, upset about another-that hers is, or was. 
In order that A empathise with B in such a case, it need not be that she has ever been upset about B's child being bullied at school. This is not to say, however, that there need be no similarity of content. One might well suppose, for example, that they must be in some way analogous (in the above example, 'my child is bullied at school', 'my child is bullied too'). Furthermore, insofar as A is, or has been, in a state with matching affective character, there will also be matching at the level of evaluative content. At the minimum, both affective states in the above case will either represent, or be associated with a representation of, the relevant state of affairs as bad (Deonna and Teroni 2012). As above, the extent to which the intentional objects or evaluative contents in question must be similar is not likely to be a matter that can be decided in an entirely general way. Rather, features of the context will be crucial in determining the conditions that must be met before we will be willing to speak of empathy. Empathy and causation The account on offer does not explicitly employ any causal notions. So, for example, it appears to contravene de Vignemont and Jacob's (2012, p. 306) causal path condition, which would require that A be in some affective state that is caused by B's being ψ. I have already provided some reasons to think that A need not be in an affective state at all, let alone one with any particular causal history. Nevertheless, even if those considerations are not found compelling, de Vignemont and Jacob's formulation of the causal condition is implausible (although I am sympathetic to other elements of their view). Consider the following: A and B are in the waiting room of the pain-lab, ready to take part in an experiment that they know will involve them receiving a series of electric shocks. When B is led off into an occluded and sound-proofed room to be given the shocks, A may simulate B's situation and thereby come to be in an affectively matching state, attributing that state to B. It is very natural to say that, in such a case, A empathises with B despite the fact that there is no causal path leading from the affective state of B to that of A. By contrast, the account that I have been articulating has the consequence that, in the envisaged scenario, A empathises with B. I do not rule out the possibility that every case of empathising stands in some interesting causal relation to the state empathised with (B's being ψ), most plausibly some requirement that the two states have a common cause. However, if this is so, it is due to whatever causal conditions there are on the veridical representation of B's being ψ, rather than to any additional condition. Empathy and care It might be suggested, and has been to me on several occasions, that this account of empathy is missing a crucial ingredient. This is the condition that in order to empathise with B, A must care about her. On the account that I have been outlining, it is possible for A to empathise with B whilst caring nothing for her plight. It is common to cite, for example, a torturer who, on my account, 'empathises' with his victim the better to inflict pain. Or we might think of Zahavi and Overgaard's (2012) case of two angry men in a fight. Such cases do seem somewhat at odds with at least one use of 'empathy', according to which to empathise with someone's suffering is, at least in part, to be moved to aid them. This is an aspect of empathy that my account leaves out.
The standard response to this sort of concern is to distinguish between empathy, on the one hand, and sympathy on the other (see, for example, Darwall 1998;Snow 2000;Nussbaum 2001, Chap. 6;Eisenberg 2007). An initial gloss on the distinction would be that whilst empathy is feeling with another, sympathy is feeling for them. Understood in this way, care clearly falls on the side of sympathy. This response is, I think, the right one. As such, I agree with Darwall when he writes that, '[e]mpathy can be consistent with the indifference of pure observation or even the cruelty of sadism. It all depends on why one is interested in the other's perspective '(1998, p. 261). Furthermore, an explanation of the tendency to wrongly suppose that empathy involves care is reasonably easy to come by. First, it is notable that those philosophers, namely David Hume and Adam Smith, who were the first to articulate and put to work the concept of empathy, in fact employed the term 'sympathy' for this notion (Hume 1739;Smith 1759). In addition to this, there is evidence that with appropriate background conditions in place, empathy tends to lead to sympathy and, subsequently, altruistic motivation (Batson 1991). It is, then, not so surprising to find that empathy and sympathy are sometimes conflated in ordinary thought. One recent account of empathy poses a challenge to this, however. De Vignemont and Jacob's (2012;cf. Michael 2014) account of empathy includes the condition that the empathiser care for the empathisee. As they put it, 'X must care about Y's affective life ' (2012, p. 307). Far from being a mere conflation of empathy with sympathy, de Vignemont and Jacob claim that this care condition (CC) is empirically supported. On their view, CC is supported by the fact that 'empathy is not the default response to one's awareness of another's affective state in general. In particular, empathetic pain is not the default response to one's awareness of another's standard pain. ' (2012, p. 307). If correct this spells trouble for the account of empathy developed here. For adding CC to the account would quite plausibly mean that empathy, so defined, would have no distinctive role to play in our cognitive lives, since any such putative role could be played by empathy minus care. Fortunately for my account, however, the evidence does not support de Vignemont and Jacob's CC. The empirical work in question is Singer et. al.'s (2006) experiment that shows that activation of neural areas associated with 'empathic responses' (the anterior insular/fronto-insular cortex and the anterior singulate cortex) is modulated by high-level evaluations of the other person. For example, in male subjects the neural areas in question showed significantly less activation when the other person is perceived as previously having behaved unfairly (Singer et al. 2006, p. 467). The existence of this top-down effect is what de Vignemont and Jacob have in mind when they claim that empathy is not the 'default response'. This, they claim, supports CC. There are a number of reasons, however, for thinking that Singer et. al.'s work does not, in fact, support CC. First, not only perceived fairness, but other contextual features, including the perceived positive benefits of the empathisee's pain (in therapy), also modulate the mirroring responses in question (Hein and Singer 2008, p. 156). Mirroring is responsive to top-down effects and it is not obvious why care should be singled out. 
Second, in the study, perceived fairness does not seem to have the same effect for female subjects. As Singer et. al. note, 'formal analysis revealed no significant difference for women when comparing painful trials for fair versus unfair players in empathy-related pain regions' (Singer et al. 2006, p. 467). Given the small sample size of Singer et. al.'s study, the reliability of this data might be questioned. Nevertheless, it is far from clear that the study supports the gender neutral CC, rather than merely the possibility of top-down effects on mirroring. The falsity of the claim that mirroring is the default response is insufficient to motivate the CC. Perhaps this is part of the reason for the fact Hein and Singer themselves explicitly distinguish empathy from sympathy (2008, p. 154). But there is a deeper reason to think that psychological or neuroscientific evidence of this sort could not in principle support the CC, or any other condition on empathy. This is that, on the account I have been offering, empathy is not a psychological phenomenon at all. Empathy is epistemic not psychological According to the way of understanding empathy that I have been articulating, it is fundamentally an epistemological, not a psychological, phenomenon. That is, empathy cannot be identified with any one psychological state or process, but is more accurately described in epistemic terms, terms that themselves leave it as an open question which psychological states and processes might, in any given case, be recruited into empathy's service. None of the psychological phenomena such as imitation, emotional contagion and perspective-taking are to be identified with empathy. Nor are any of them strictly necessary for empathy. For example, simulation is not necessary for empathy. If A knows, via theoretical inference, that B is ψ, and also, via memory, what it is like to be ψ, then A may empathise with B; no simulation required. Nevertheless, as I have indicated, these various psychological phenomena can feed into empathy by grounding the veridical representations mentioned in conditions (1) and (2). On the account that I have sketched, empathy is not a process of any sort, rather it is a state in which one arrives having undergone those grounding processes, whatever they may have been. To borrow Goldman's apt phrase, there are different 'routes to empathy' (Goldman 2011). That empathy is not a psychological phenomenon gives us some reason to doubt that it is a natural kind. If A empathises, it follows that she satisfies conditions (1)-(3) but, as indicated above, conditions (1) and (2), and so (3), can be satisfied in a range of different ways. For example, one instance of empathy may be grounded in a process of simulative imagination, another in a combination of testimony and memory. Thus, even if we suppose such psychological states as imagination, memory, and, more controversially, testimonial knowledge, each to be natural kinds, the psychological states that an empathising subject may be in are so disparate that there is little to recommend the view that empathy itself is a kind. Accordingly, it would be surprising if empirically grounded judgements about empathy were projectable. If K is a natural kind, some empirically grounded judgements about a sample of K s-those making reference to the features that distinguish K from other kinds-may be reliably, although perhaps not exceptionlessly, projected to the whole category (cf. Goodman 1955, Chap. 4;Quine 1969). 
Whilst there are various conceptions of natural kinds (see, for example, Bird and Tobin 2012), all are associated with the idea that the surface features of members of a kind are explained by 'underlying' (microstructural, evolutionary, etc.) similarities, and that these underlying similarities may serve as the basis for empirically grounded inductive generalisations. A familiar example would be the discovery that the atomic number of gold is 79. This discovery, of necessity limited to a sample, may be projected to all samples of gold. But nothing similar can be said for empathy. There is little reason to suppose that empirical discoveries about some sample instances of empathy may be projected to all instances, since the wide variety of possible cases of empathy display little or no underlying (microstructural, evolutionary, etc.) similarity. They do not tell us about the nature of empathy but, more plausibly, about the nature of those psychological states that do the work in satisfying conditions (1) and (2). Consider, for example, the experiment mentioned above, Singer et. al.'s (2006) experiment showing that activation of neural areas associated with what they call 'empathic responses' is modulated by high-level evaluations of the other person. Of course, this only tells us something about empathy itself if we have reason to believe that the anterior insular/fronto-insular cortex and the anterior singulate cortex are associated with empathy per se. What, though, are Singer and colleagues referring to with the term 'empathic responses'? This is what they tell us, Empathy enables us to share the emotion, pain and sensation of others. The perception-action model of empathy states that the observation or imagination of another person in a particular emotional state automatically activates a representation of that state in the observer [...] our ability to empathize relies on neuronal systems that underpin our own bodily and emotional states. (Singer et al. 2006, p. 466) What Singer and colleagues have in mind, then, is what I have been calling 'mirroring' or 'resonance'. But this, I have been arguing, is not identical to empathy. Rather, mirroring provides one possible way in which empathy might be achieved. As a result, at best, what this experimental paradigm shows is that mirroring, a phenomenon which may underpin some instances of empathy, can be modulated by high-level evaluations. This cannot be projected onto the full spectrum of instances of empathy. It tells us nothing, for example, of cases in which A empathises with B's distress through being reliably informed of her plight and knowing how it must feel through her presently being in a sufficiently similar situation. If empathy is not a natural kind, there is no reason to suppose that the results of experiments such as that of Singer and colleagues are projectable and so they cannot, in principle, support conditions such as de Vignemont and Jacob's CC. Experimental paradigms such as that of Singer and colleagues tell us about mirroring, not about empathy per se. Analogous remarks apply to a number of other experimental paradigms, for example Avenanti and colleagues' experiments on pain mirroring, and their resulting claim that, 'empathy for pain may rely not only on affective-motivational but also on fine-grained somatic representations' (Avenanti et al. 2005, p. 958). Again, what the experiments show us is something about mirroring. 
There is no reason to suppose that such claim can be projected onto every instance of empathy with another's pain. To forestall possible misunderstanding, I am not suggesting that there is anything wrong with these experimental paradigms or others like them. Not at all. My point is simply that these paradigms do not show us something about the nature of empathy itself, but rather of the sorts of psychological processes that are plausibly thought to underlie some, but not all, instances of it. Nor does the complaint apply equally to all empirically grounded claims about empathy. For example, the problematic cases above stand in contrast to others, such as Wicker and colleagues' (2003) disgust paradigm and the thought, in Goldman's words, that it shows that, 'observing a disgust expressive face produces mental mimicry, or empathy' (2011, p. 35). Here there is no illicit projection, just the claim that empathy may be provoked in a certain way. This makes no claim about the nature of empathy, and is entirely consistent with the approach to empathy that I have been endorsing. A social role for empathy I have claimed that the function of empathy is to provide knowledge of how others feel. However, I would also like to suggest that, by way of this distinctive epistemological achievement, empathy may serve a broader social function. As Adam Smith recognised-in fact he based a significant part of his moral theory on it-we place a high value on empathy (Smith 1759). As he puts it, 'nothing pleases us more than to observe in other men a fellow-feeling with all the emotions of our own breast' (1759, I.i.2.2). His term 'fellow-feeling' is highly appropriate here. One of the things we value is feeling with others (Tomasello 2008, pp. 210-212). Whilst merely feeling as another does is of value, I suggest that we place a higher value on knowing that the other feels the same, and knowing that the other knows that one feels the same, etc. We have here something of the structure of common knowledge (Lewis 1969, Chap. 2;Schiffer 1972, pp. 30-42). We might call this complex state of affairs 'transparent fellow-feeling'. I have claimed that it is only through empathy that a person can know how another feels. Thus, since transparent fellow-feeling involves knowing how another feels, it follows that empathy is necessary for transparent fellow-feeling. One might be tempted to suppose that, contrary to this, transparent fellow-feeling could occur in the absence of knowing how the other feels, just getting along with an awareness of what they feel. But this would be a mistake. For, if A fellow-feels with B and A knows what B feels then A will thereby know how B feels. Transparent fellow-feeling entails knowing how. Empathy, then, is necessary for transparent fellow-feeling. It may be, however, that only some instances of transparent fellow-feeling possess the sort of value to which I have alluded. Cases of transparent fellow-feeling between sadists come to mind, for example. Perhaps more interesting would be cases of alienation, in which one party does not reflectively endorse their own emotional state-for example if I share a racist joke, but am at the same time appalled at my finding it funny. These sorts of cases aside, Adam Smith's idea contains an important truth-that we do indeed find value in transparent fellow-feeling. Of course it is no doubt the case that knowing how others feel is important for pro-social behaviour. But it might also be that we think of transparent fellow-feeling as an end in itself. 
If so, then empathy serves a social function that is itself independent of any further pro-social effects it may have. We value feeling with others. Conclusion The concept of empathy is contested, and in offering an account of it some stipulation is perhaps inevitable. I hope, nevertheless, to have presented a view that has the benefits of being reasonably true to both ordinary ways of speaking about empathy and also the various claims that have been made about it by a wide range of theorists in a variety of disciplines. I have suggested that, whilst related, 'empathy' should not be considered as another term for simulation. Empathy has its own role in our lives: it alone allows us to know how others feel. I have also suggested that, on the account I have offered, empathy is not a natural kind and so certain experimental paradigms tell us not about the nature of empathy but about certain ways in which empathy can be achieved. I add the speculation that, thought of in this way, empathy may also play a distinctive social role, enabling the valuable state of 'transparent fellow-feeling'.
Nanopapers for organic solvent nanofiltration † Would it not be nice to have an organic solvent nanofiltration membrane made from renewable resources that can be manufactured as simply as producing paper? Here the production of nanofiltration membranes made from nanocellulose by applying a papermaking process is demonstrated. Manufacture of the nanopapers was enabled by inducing flocculation of nanofibrils upon addition of trivalent ions. Organic solvent nanofiltration (OSN) has found both widespread scientific and industrial interest since its emergence at the beginning of this century. 1 OSN describes the process of separating molecules or particles with a molecular weight (M W ) of some hundreds to thousands of Da -i.e. particles or molecules with nanometer dimensions -from an organic solvent. 1,2 Applications such as product purification and concentration, solvent exchange and recycling as well as recovery of homogeneous catalysts have been reported and compared favorably to classical methods, such as distillation, due to the lower energy consumption and milder conditions that chemical compounds experience during separation. 2 However, the utilization of organic solvents in NF operations still provides a significant challenge for the membranes from the materials point of view, in particular due to the required solvent-stability, which many traditional polymer membranes lack. 3 Several different engineering and high performance polymers have been tested for OSN membranes. [3][4][5] Typically, polymer membranes do require a mechanical support, which is often made of polyamides, polysulfones or polyimides. 6 Besides polymer membranes, ceramics 7 or organic-inorganic hybrid materials 8 have been explored. Unfortunately, all these materials suffer from drawbacks; the production processes involve the use of large quantities of solvents and chemicals as well as extensive energy usage in the case of ceramics. 9 Thus, simple, clean and fast production processes would be desirable to manufacture solvent stable nanofiltration membranes. In general, both everyday life and laboratory operations depend on filtration processes that are performed using membranes or cellulose filters. However, there are certain limitations when it comes to the removal of small M W compounds using filter papers. In recent years, nanofibrillated cellulose (NFC) has gained significant attention due to its outstanding mechanical and chemical properties, 10 especially when used in composites. 11 NFC, when used in the paper form, also known as nanopaper, possesses outstanding mechanical properties, low thermal expansion coefficients, high optical transparency and good gas barrier properties. [12][13][14][15] These barrier properties have been exploited in food packaging films. 16 Nanopapers might offer potential for applications in separation processes due to their inherent pore dimensions in the nm range. 13 For example, the NFC paper was explored as a separator in Li-ion batteries. 17 Here we introduce solvent stable nanofiltration (NF) membranes entirely made from nanocellulose. These membranes are produced by a papermaking process that utilizes an aqueous suspension of nanocellulose thus avoiding vast amounts of organic solvents that are usually necessary for the production of conventional OSN polymer membranes. 5 Manufacture of these nanopapers is enabled by inducing flocculation of nanofibrils upon addition of multivalent ions. 
This type of nanocellulose membrane represents a step forward within this important domain and demonstrates the utilization of a well-known material for an advanced application. We discuss the use of nanopapers made entirely from (2,2,6,6-tetramethylpiperidin-1-yl)oxy (TEMPO) oxidized NFC (herein termed NFC-O) with fibre diameters ranging from 5 to 30 nm (UPM-Kymmene Oyj, Helsinki, Finland) for NF membranes. The production method of NFC-O is described in detail elsewhere. 18 It can be anticipated that these nanofibrils can be densely compacted to form a framework structure with pore-dimensions in the range of the diameter of the nanofibrils. This concept has been mathematically proven by Zhang. 19 To demonstrate the possibility of controlling the pore size, and thus the molecular weight cut-off (MWCO) and permeance of the nanocellulose membranes, we also used another NFC grade produced by mechanical grinding (MKZA10-15J Supermasscolloider, Masuko Sangyo Co., Kawaguchi, Japan) of never-dried bleached kraft birch pulp as described by Lee et al. 14 Herein, we call these fibrils NFC-K, which possess fibre diameters of 50 to 100 nm (more details about the NFC grades can be found in the ESI †). In general, for the production of paper, cellulose fibres are suspended in water. This suspension is then filtered; the resulting filter cake, i.e. the fibre mat, is pressed and water is removed until the desired quality is achieved. As for usual paper, the production of nanopapers started from an NFC in water suspension with a consistency of 0.3 wt%. This suspension was produced by blending (Breville VBL065-01, Oldham, UK) NFC feedstock for 2 min, which had an original consistency of 2.5 wt% and 1.8 wt%, respectively, for NFC-O and NFC-K. Nanopapers with the desired grammage were produced by vacuum-filtration of NFC suspensions containing a pre-determined amount of nano-cellulose onto cellulose filter papers (VWR 413, 5-13 µm pore size, Lutterworth, UK). However, we observed that NFC-O passed through both the filter paper and the supporting glass frit (Schott, porosity No. 1, Mainz, Germany) due to its extremely small size. This effect was not observed for the filtration of the larger diameter NFC-K fibrils, which was consistent with our previous observations. 14 In order to facilitate the filtration of NFC-O, flocculation of the fibrils by changing the surface charge was required. Thus, we measured the ζ-potential of NFC as a function of pH in a 1 mM KCl electrolyte using electrophoresis (Brookhaven ZetaPALS analyzer, Holtsville, USA). It can be inferred from the ζ = f(pH) curve that it is impossible to induce flocculation of NFC-O by changing the pH of the NFC-O suspension, since the isoelectric point (iep), where ζ = 0, at which significant flocculation would occur, is very low (Fig. 1, left). To reach the iep, a pH of 1.5 (extrapolated) would be required, which could possibly result in acid hydrolysis of NFC. 20 The ζ-potential as a measure of surface charge is dependent on the ionic strength, which is most effectively increased by addition of multivalent ions. Therefore, we measured ζ as a function of the salt (MgCl 2 and AlCl 3 ) concentration, from which the point of zero charge (pzc) was determined (Fig. 1, right). At the pzc, the NFC-O fibrils have zero net surface charge and, therefore, no electrostatic repulsion exists between NFC-O fibrils, which causes the whole NFC-O suspension to form a single gel.
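To make the flocculation step concrete, the sketch below shows one way the point of zero charge could be read off from a titration of ζ-potential against salt concentration: find where the interpolated ζ crosses zero. This is a minimal illustration, not the authors' analysis; the function name and all numerical values are hypothetical.

```python
import numpy as np

def point_of_zero_charge(conc_mM, zeta_mV):
    """Estimate the point of zero charge (pzc) by linear interpolation of
    zeta-potential measurements taken at increasing salt concentration.
    Returns the concentration (mM) at which the interpolated zeta crosses 0 mV,
    or None if the measured curve never crosses zero."""
    conc = np.asarray(conc_mM, dtype=float)
    zeta = np.asarray(zeta_mV, dtype=float)
    for i in range(len(zeta) - 1):
        if zeta[i] == 0.0:
            return conc[i]
        if zeta[i] * zeta[i + 1] < 0.0:          # sign change between two points
            frac = zeta[i] / (zeta[i] - zeta[i + 1])
            return conc[i] + frac * (conc[i + 1] - conc[i])
    return None

# Hypothetical AlCl3 titration of an NFC-O suspension (values are illustrative only)
c = [0.0, 0.25, 0.5, 0.75, 1.0, 1.5]           # mM
z = [-45.0, -28.0, -15.0, -6.0, 1.0, 8.0]      # mV
print(point_of_zero_charge(c, z))               # ~0.96 mM in this made-up example
```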
Multivalent cations specifically adsorb on negatively charged NFC-O surfaces causing the ζ-potential to decrease by effectively reducing the Debye length. Ultimately, the pzc was reached upon adjusting the electrolyte concentration to 800 mM for MgCl 2 and 1 mM for AlCl 3 , respectively (Fig. 1, right), because the ionic strength of the electrolyte increases strongly with increasing charge of the cations. To produce NFC-O filter cakes, AlCl 3 was added to achieve a concentration of 1 mM. Wet NFC-O and -K filter cakes of 125 mm in diameter were pressed between blotting papers (Whatman 3MM Chr, Kent, UK) for 5 min under a weight of 10 kg to increase the NFC solid content to 15 wt%. These filter cakes were then sandwiched between blotting papers and metal plates for further hot pressing at 120 °C for 1 h under a weight of 1 t to dry and consolidate the filter cakes. The hot pressing also prevents the shrinkage of nanopapers and increases the density of the sheets, resulting in better mechanical properties of the papers. 15 Nanopapers with grammages between 10 and 70 g m−2 (gsm) were produced from both types of nanocelluloses. The thickness of these nanopapers was found to increase linearly with the grammage (Fig. S1, ESI †). The nanopapers produced were used as membranes directly. Exemplarily, the permeance (P) of tetrahydrofuran (THF), n-hexane and water through the nanopapers was measured in a dead end cell (Sterlitech HP 4750, Kent, USA). The solvent was forced through the nanopapers at 20 °C by nitrogen at a head pressure of 0.2 MPa and 1 MPa for nanopapers with grammages <20 gsm and >20 gsm, respectively. The amount of solvent that passed through the nanopaper for a given time interval was measured gravimetrically and used to determine P [L m−2 h−1 MPa−1]. For these measurements, discs of 49 mm in diameter were cut from the nanopapers and placed in the dead end cell on a ceramic support. In the beginning of the measurement, P decreased significantly (Fig. S2, ESI †), caused by membrane compaction due to the applied pressure. 21 The permeance of different solvents is exemplarily shown for NFC-O nanopapers in Fig. 2(a). These measurements showed that the permeance of the tested solvents through the nanopapers increases in the following order: water < THF < n-hexane. Thus, irrespective of the hydrophilic nature of nanocellulose, P increases with increasing hydrophobicity of the solvent. It should be noted that the calculation of P does not take into account the viscosity of the solvent. In addition to this, we also observed that P is dependent on the grammage, and thus the thickness, of the nanopapers as well as the diameter of the fibrils (Fig. 2). Using nanofibrils with larger diameters (NFC-K) for membrane fabrication resulted in nanopapers with larger pore dimensions as compared to NFC-O, which, in conjunction with varying the grammage of the nanopapers, allows the permeance to be controlled over a wide range. Varying the aspect ratio of randomly packed high aspect ratio cylinders hardly affects the porosity of a mat. 22 Since the number of fibrils per unit mass within the same volume element is higher for smaller fibrils, this results in a larger number of pores, which are smaller in diameter due to the constant porosity (around 35%). The nanofiltration membrane performance is generally quantified by the MWCO, which was determined by passing standard polymer solutions of known concentrations through the nanopapers.
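As a concrete reading of the gravimetric permeance determination described above, the sketch below converts a collected permeate mass into P in L m−2 h−1 MPa−1. It is only an illustration: the function, the solvent density and all numbers are assumed for the example, and the full disc area is used in place of the cell's true active area.

```python
import math

def permeance(mass_g, density_g_per_mL, time_h, area_m2, pressure_MPa):
    """Permeance P in L m^-2 h^-1 MPa^-1 from a gravimetric dead-end measurement:
    permeate mass -> volume (L), normalised by membrane area, time and head pressure."""
    volume_L = mass_g / density_g_per_mL / 1000.0   # g -> mL -> L
    return volume_L / (area_m2 * time_h * pressure_MPa)

# Illustrative numbers only: 49 mm disc (area ~1.9e-3 m^2), 1 MPa head pressure,
# 12 g of THF (density ~0.889 g/mL) collected over 2 h.
area = math.pi * (0.049 / 2) ** 2
print(permeance(12.0, 0.889, 2.0, area, 1.0))       # ~3.6 L m^-2 h^-1 MPa^-1
```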
The amount of rejected polymer molecules was quantified using gel permeation chromatography (GPC, aqueous: Viscotek GPCmax VE2001, VE3580 RI detector, Malvern, UK; organic: Waters 515 HPLC pump, Waters 2410 RI detector, Milford, USA). The MWCO is defined as the molecular weight of a molecule which is rejected by 90%. 23 Poly(ethylene glycol) (PEG) dissolved in deionized water and polystyrene (PS) standards dissolved in THF with molecular weights ranging from 1 to 13 kDa were used to determine the MWCO for NFC-O nanopapers with a grammage of 65 gsm. The retention of PEG and PS standards as a function of the M W is shown in Fig. 3(a). For PS and PEG, the MWCO values were found to be 3.2 kDa and 6 kDa, corresponding to hydrodynamic radii of 1.6 nm 24 and 2.4 nm, 25 respectively, which represent the pore size. Thus, our nanopaper membranes have a MWCO at the upper end of the NF range. In the case of NFC-K papers ( Fig. 3(b)), the MWCO of PEG was 25 kDa, which corresponds to a hydrodynamic radius of 5 nm (ref. 25) and for PS it was 40 kDa, which is equivalent to a hydrodynamic radius of 5.5 nm. 24 This demonstrated that by using differently sized cellulose nanofibrils, around 50 nm for NFC-K and down to 5 nm for NFC-O, it is possible to adjust the pore dimensions of the resulting nanopapers, which is due to a reduced pore size in the random packing of cylinders with smaller diameters. To summarize, we produced nanocellulose based nanofiltration membranes by simply using a papermaking process. These nanopapers are suitable for NF of organic solvents and water. It was observed that the permeance of nanopapers was dependent on the hydrophilicity of the solvents and that P was governed by the grammage of the nanopapers and the dimensions of the nanofibrils. We also observed that the MWCO was determined by the diameter of the nanofibrils, which affects the pore dimensions of the nanopapers. It is thus possible to tailor the membrane performance over a wide range of applications by selecting nanofibrils with different diameters. In conclusion, we can prepare, as simply as making paper, solvent-stable OSN membranes from renewable resources. If it eventually becomes possible to produce NFC with fibrils of evenly distributed lengths, potentially even thinner active membrane layers with smaller MWCO could be created, which would drastically improve the performance of these types of NF membranes. The authors greatly acknowledge the funding provided by the EU FP7 project NanoSelect (Grant No. 280519) and the University of Vienna for funding KYL. We thank Maria Schachner (TU Vienna) and Dr Ivan Zadrazil (Imperial) for performing the GPC-measurements.
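The MWCO determination described above (the molecular weight rejected by 90 %) can be sketched as a simple interpolation on the retention curve. The retention values below are invented for illustration and are not the measured data; only the 90 % definition is taken from the text.

```python
import numpy as np

def mwco(mw_kda, rejection_pct, cutoff=90.0):
    """Molecular weight cut-off: the molecular weight at which the rejection
    curve reaches `cutoff` percent, found by log-linear interpolation."""
    log_mw = np.log10(np.asarray(mw_kda, dtype=float))
    rej = np.asarray(rejection_pct, dtype=float)
    order = np.argsort(rej)                        # np.interp needs increasing x values
    return 10 ** np.interp(cutoff, rej[order], log_mw[order])

# Hypothetical PEG retention data for a 65 gsm NFC-O nanopaper (illustrative only)
mw_standards = [1, 2, 4, 6, 8, 13]                 # kDa
retention    = [35, 62, 81, 90, 94, 97]            # %
print(mwco(mw_standards, retention))               # ~6 kDa for these made-up points
```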
2,914.8
2014-05-01T00:00:00.000
[ "Engineering", "Chemistry" ]
Learning the Car-following Behavior of Drivers Using Maximum Entropy Deep Inverse Reinforcement Learning. The present study proposes a framework for learning the car-following behavior of drivers based on maximum entropy deep inverse reinforcement learning. The proposed framework enables learning the reward function, which is represented by a fully connected neural network, from driving data, including the speed of the driver’s vehicle, the distance to the leading vehicle, and the relative speed. Data from two field tests with 42 drivers are used. After clustering the participants into aggressive and conservative groups, the car-following data were used to train the proposed model, a fully connected neural network model, and a recurrent neural network model. Adopting the fivefold cross-validation method, the proposed model was proved to have the lowest root mean squared percentage error and modified Hausdorff distance among the different models, exhibiting superior ability for reproducing drivers’ car-following behaviors. Moreover, the proposed model captured the characteristics of different driving styles during car-following scenarios. The learned rewards and strategies were consistent with the demonstrations of the two groups. Inverse reinforcement learning can serve as a new tool to explain and model driving behavior, providing references for the development of human-like autonomous driving models. Introduction Recent studies have suggested that the development of autonomous driving may benefit from imitating human drivers [1][2][3]. There are two reasons: First, the comfort of autonomous vehicles (AVs) may be improved if the driving styles match the preferences of the passengers. Second, the transition period during which AVs will share the road with human-driven cars is expected to last for decades. Road safety may be enhanced if AVs are designed to understand how human drivers will react in different situations. Car-following is one of the most common situations encountered by drivers. The modeling of car-following behavior has been a common research focus in the fields of traffic simulation [4], advanced driver-assistance system (ADAS) design [5], and connected driving and autonomous driving [6][7][8][9]. Various car-following models have been proposed since 1953 [10]. In general, there are two major approaches. The classical methods use several parameters to characterize the car-following behavior of drivers [11,12]. With the rapid development of data science, data-driven methods with a focus on learning the behavior of drivers based on field data [13,14] have emerged. Of the two approaches, data-driven car-following models were found to provide the highest accuracy and the best generalization ability for replicating drivers' trajectories. Among data-driven methods, supervised learning and expressive models, such as neural networks (NNs), have been commonly used to learn the relationships between states and drivers' controls [15][16][17]. These modeling techniques are often referred to as behavior cloning (BC). Even though BC approaches have been successfully applied, they are prone to cascading errors [18], which is a well-known problem in the sequential decision-making literature. The reason is that inaccuracies occur in model predictions when there are insufficient data for training the model. Small inaccuracies accumulate during the simulation, which eventually leads the model to states not included in the training data and brings about even poorer predictions.
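To make the cascading-error argument tangible, the toy rollout below iterates a one-step predictor whose only flaw is a small constant bias; the bias is negligible for a single prediction but compounds over a closed-loop simulation. The scenario and all numbers are invented for illustration.

```python
# Toy illustration of cascading error: a one-step model with a small bias is
# harmless for single-step prediction but drifts when rolled out in closed loop.
true_accel = 0.0          # the driver actually holds speed constant (m/s^2)
model_bias = 0.02         # small systematic error of the learned model (m/s^2)
dt, speed_true, speed_sim = 0.1, 15.0, 15.0

for step in range(600):   # 60 s simulation at 10 Hz
    speed_true += true_accel * dt
    speed_sim  += (true_accel + model_bias) * dt   # the bias accumulates every step

print(abs(speed_sim - speed_true))   # ~1.2 m/s drift after one minute
```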
Inverse reinforcement learning (IRL) was introduced to overcome these drawbacks. IRL, which was proposed by Ng and Russell [19], is the inverse problem of reinforcement learning (RL). Although RL has been applied with great success in recent years, such as in the well-known program AlphaGo [20], the use of RL in other domains remains limited because it is challenging to determine the reward, which is the core component in RL. Manual tweaking of the reward functions can be tedious, and inappropriate reward assignments may lead to unexpected behaviors [21]. IRL, however, provides a framework to learn the rewards automatically. The advantages of IRL are twofold: the learned rewards can be used to improve the interpretability of the models, and the goals of the tasks can be understood, which may prevent cascading errors [22]. Therefore, the present study proposes a car-following model based on IRL. In contrast to a recent work, which applied IRL to model car-following using a linear reward representation [23], in this study, a nonlinear function, that is, an NN, is used to approximate the reward function as the preferences of human drivers may be highly nonlinear. The proposed model is trained and tested using data under actual driving conditions, and the performance is compared with that of other car-following models. The rest of the paper is organized as follows: Section 2 briefly reviews the literature on car-following modeling, RL, and IRL. Section 3 presents the input feature vectors of the reward network in the IRL and the proposed algorithm. Section 4 describes the experiments and data used in this study. Section 5 elaborates on the training process of the proposed model and presents the investigated car-following models. Section 6 presents the comparison of the performance for different methods and the characteristics of the trained models using data from drivers with different driving styles. The final section presents the discussion and conclusion. Background The car-following process is essentially a sequential decision-making problem where drivers continually adjust the longitudinal control a based on the states s they encounter, which include the speed of the driver's car, the spacing between the driver's car and the leading car, and the relative speed between the two vehicles. Car-following models are designed to model the policy π(a|s) of drivers. Classical Car-following Models. The early General Motors models proposed by Chandler [24] modeled the drivers' longitudinal controls to minimize the relative speed because this is one of the primary objectives of car-following. These models exhibited poor performance in predicting the distance between cars. Later models addressed this problem by considering another objective of car-following, that is, maintaining the desired distance; these models included the Gipps model [25] and the intelligent driver model (IDM) [12]. Behavior Cloning Car-following Models. As access to high-fidelity driving data has become increasingly available, data-driven models such as NNs have been used to model car-following behavior. NNs have been demonstrated to exhibit excellent performance for estimating nonlinear and complex relationships. In 2003, Jia et al. [16] proposed an NN-based car-following model with two hidden layers and the inputs speed, relative speed, spacing, and desired speed. Chong et al. [15] simplified the architecture proposed by Jia to one hidden layer and obtained similar results.
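As a sketch of the behavior-cloning family discussed above, the snippet below defines a one-hidden-layer network mapping (speed, spacing, relative speed) to an acceleration and takes one gradient step on a dummy batch. It is illustrative only: the layer size, activation, optimizer settings and random data are assumptions, not the cited models.

```python
import torch
import torch.nn as nn

class BCFollowerNet(nn.Module):
    """One-hidden-layer behavior-cloning model: (speed, spacing, relative speed) -> acceleration."""
    def __init__(self, hidden=60):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(3, hidden), nn.Tanh(), nn.Linear(hidden, 1))

    def forward(self, state):          # state: (batch, 3)
        return self.net(state)         # predicted acceleration: (batch, 1)

model = BCFollowerNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()                 # behavior cloning: match the observed acceleration

# Dummy batch standing in for real samples: speed in [0, 33], spacing in [0, 120], dv in [-5, 5].
states = torch.rand(64, 3) * torch.tensor([33.0, 120.0, 10.0]) + torch.tensor([0.0, 0.0, -5.0])
observed_accel = torch.zeros(64, 1)

loss = loss_fn(model(states), observed_accel)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(float(loss))
```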
Instead of using as input only a single time step of relevant information, such as in the conventional NN-based models, Zhou et al. [17] proposed a recurrent neural network (RNN)-based model that used a sequence of past driving information as input. The RNN approach was better adapted to changes in traffic conditions than the NN approaches. The present study also uses the RNN-based model to compare its performance with that of the proposed method. Reinforcement Learning. In RL, a sequential decision-making problem is modeled as a Markov decision process (MDP), which is defined as a tuple M = ⟨S, A, T, r, γ⟩. S and A denote the state and action space, respectively, and T denotes the transition model, which is defined in equation (1), where v(t), Δv(t), and h(t) denote the speed of the ego vehicle, the relative speed from the lead vehicle, and the spacing between the ego and the leader at time step t, respectively; r and γ denote the reward function and the discount factor. Δt is the simulation time interval, which is 0.1 s in this study, and v_lead denotes the speed of the lead vehicle, which was obtained from the collected data. RL assumes that drivers follow a policy that maximizes long-term rewards. Once the rewards are known, the policy can be determined using algorithms such as Q-learning [26]. In recent years, RL has been applied by researchers to solve real-world problems such as the balance control of a robot and the energy management of hybrid electric vehicles [27][28][29]. Inverse Reinforcement Learning. In IRL, the reward of a state can be represented by a linear combination of the relevant features (equation (2)). The goal of IRL is to determine the weights θ from expert demonstrations. Abbeel and Ng [30] proposed a feature matching strategy to solve the problem (equation (3)). As long as the feature expectation of the simulated trajectories equals the features calculated from the expert data, the learned behavior has the same performance as the demonstrator. However, it was found that many different policies can be obtained when the feature matching conditions were satisfied. The ambiguity problem related to the correct reward and policy remains unsolved. π(a_t | s_t), T(s_{t+1} | s_t, a_t). (3) The maximum entropy IRL (Max-Ent IRL) proposed by Ziebart [31] addressed the ambiguity problem by incorporating the principle of maximum entropy into the IRL. In the Max-Ent IRL framework, the probability of a trajectory is proportional to the exponential of the sum of the rewards accumulated in the trajectory (equation (4)). This form of distribution guarantees no additional preferences other than the feature matching requirement. When the probability of a trajectory is known, the weights of the reward can be determined by maximizing the log-likelihood of the expert data using the following objective function (equation (5)): Maximum Entropy Deep Inverse Reinforcement Learning. Since the linear representation of the rewards might limit the accuracy of reward approximation, Wulfmeier [32] extended the method to nonlinear models using deep NNs. Deep architectures have been shown to capture the nonlinear reward structure in several benchmark tasks with high accuracy. The present study uses deep architectures to represent the rewards of drivers in car-following. The fully connected NNs used in this study map the input features in the car-following model to estimate the rewards, as shown in Figure 1.
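The transition in equation (1) is not reproduced in this extract, so the sketch below shows one plausible discretization of the car-following kinematics together with a small fully connected reward approximator of the kind Figure 1 describes. The update rule, layer sizes and feature count are assumptions for illustration, not the paper's exact definitions.

```python
import torch.nn as nn

def step(v, dv, h, a, v_lead_next, dt=0.1):
    """One plausible discretization of the car-following transition:
    v = ego speed, dv = leader speed minus ego speed, h = spacing, a = chosen acceleration.
    The next leader speed v_lead_next comes from the recorded data."""
    v_next = v + a * dt
    h_next = h + dv * dt                  # spacing changes with the relative speed
    dv_next = v_lead_next - v_next
    return v_next, dv_next, h_next

# Reward approximator r_theta(s): a fully connected network mapping the state
# feature vector to a scalar reward, as in Max-Ent deep IRL. Sizes are arbitrary.
NUM_FEATURES = 40
reward_net = nn.Sequential(nn.Linear(NUM_FEATURES, 64), nn.ReLU(), nn.Linear(64, 1))
```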
It can be derived that the gradient of the Max-Ent deep IRL (DIRL) is ∂L_D/∂θ = (μ_D − E[μ]) · ∂g(f, θ)/∂θ, where μ_D and E[μ] refer to the state visitation frequencies calculated from the expert demonstrations and the expected state visitation frequencies obtained from the learned policy, and g(f, θ) refers to the network architecture. Once the gradient is calculated, the parameters of the NN are updated using backpropagation [33]. The Proposed Car-following Model In this section, the details of the proposed model (DIRL) are explained, including the design of the input features for the reward network and the full algorithm. The DIRL model uses as input the driver data on car-following trajectories, consisting of the speed during car-following, the spacing to the leading car, and the relative speed. After training, the DIRL model outputs the policy and the rewards of drivers. A discrete state and action space were defined in the present study. According to the rules for determining car-following events that will be described in Section 4.2 and the distribution of the empirical data used in this study, the spacing h is limited to the range from 0 to 120 m with an interval of 0.5 m, the speed v is limited to the range from 0 to 33 m/s with an interval of 0.5 m/s, and the relative speed Δv is limited to the range from −5 to 5 m/s with an interval of 0.5 m/s. The action a is limited to the range from −3 to 2 m/s² with an interval of 0.2 m/s². Feature Selection for the Rewards in Car-following. As introduced in the last section, the input features of the network are determined first to create an NN and obtain the rewards in car-following. The rewards in RL encode the objectives or the purpose of the agent [26]. Therefore, the selected features should represent the objectives of drivers in the car-following task. In the study of Gao [23], speed and spacing were chosen as features for representing the rewards. In [34], the reward function represented the speed discrepancies between the simulated trajectories and the test data. In contrast to these studies, we base the reward function on the following features. Time-Headway. Time-headway (TH) has been widely used as an indicator for drivers to evaluate risk during car-following [35]; TH is defined as the time between two vehicles passing the same point on the road. It has been suggested that a driver's safety margin in car-following can be characterized by the TH, which plays a role in the driver's decision-making [36]. Drivers may have different desired safety margins for the TH. For example, aggressive drivers may prefer a shorter TH than conservative drivers because they like to track vehicles at a closer distance. It has been suggested that one of drivers' objectives in car-following is to control TH to their expectations [37]. Therefore, TH is selected as an input of the reward network in this study. Relative Speed. Research has shown that drivers' speed control in car-following is proportional to the relative speed [38]. As mentioned earlier, an objective in car-following is to keep the relative speed close to zero [37]. In this study, we relax this objective so that drivers will keep the relative speed within an appropriate range because people's driving behavior is imperfect and is not always optimal.
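A minimal sketch of the discrete state/action grid and the TH feature described above is given below; the bin edges follow the ranges quoted in the text, while the helper names and the small speed floor used to avoid division by zero are assumptions.

```python
import numpy as np

# Discrete state and action grids following the ranges quoted above.
H_GRID  = np.arange(0.0, 120.0 + 1e-9, 0.5)    # spacing: 0 .. 120 m, step 0.5 m
V_GRID  = np.arange(0.0, 33.0 + 1e-9, 0.5)     # ego speed: 0 .. 33 m/s, step 0.5 m/s
DV_GRID = np.arange(-5.0, 5.0 + 1e-9, 0.5)     # relative speed: -5 .. 5 m/s, step 0.5 m/s
A_GRID  = np.arange(-3.0, 2.0 + 1e-9, 0.2)     # actions: -3 .. 2 m/s^2, step 0.2 m/s^2

def to_indices(v, dv, h):
    """Map a continuous observation onto the nearest cell of the discrete state grid."""
    iv  = int(np.clip(round(v / 0.5), 0, len(V_GRID) - 1))
    idv = int(np.clip(round((dv + 5.0) / 0.5), 0, len(DV_GRID) - 1))
    ih  = int(np.clip(round(h / 0.5), 0, len(H_GRID) - 1))
    return iv, idv, ih

def time_headway(h, v, v_min=0.1):
    """TH = spacing / ego speed; the small floor on v avoids division by zero at standstill."""
    return h / max(v, v_min)

print(to_indices(v=15.3, dv=-1.2, h=32.7), time_headway(32.7, 15.3))
```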
Following the method presented in [23], these two features were mapped into a high-dimensional space using a Gaussian radial kernel of the form f_i(s) = exp(−‖s − s_i‖²/(2σ²)), where s_i = (TH_i, ΔV_i) denotes the kernel vectors, which represent the conjectural values of the preferred TH and relative speed, and σ is a parameter that controls the width of the kernel function. Specifically, TH_i has a range of 0.5 s to 3 s, with an interval of 0.5 s, and ΔV_i has a range of −4 m/s to 4 m/s, with an interval of 0.5 m/s in this study. Maximum Speed. The maximum desired speed is commonly used in many classical car-following models [12,16]. Drivers may have a preferred maximum speed, and they may not continue to follow the leader if their speed is already above this value. It is assumed that the objective of the driver is to keep the speed below the maximum speed, where v_max^i denotes the conjectural acceptable maximum speed; v_max^i is in the range of 90 km/h to 120 km/h, with an interval of 5 km/h. The reward function is represented by an NN that is parameterized by θ. 3.2. The Full Algorithm. The proposed DIRL algorithm consists of three parts, which are marked in bold in Algorithm 1. In the first part, the reward r_i(s) is determined by the parameters of the NN to calculate the policy π_i(a|s). Value iteration with a softmax function is used to solve the policy based on the reward. The result of the softmax version of value iteration is a stochastic policy in which the probabilities of every predefined action are listed in a tabular form. V(s) and Q(s, a) in this part denote the expected long-term return of states and state-action pairs. In the second part, the policy π_i(a|s) is applied to estimate the expected state visitation frequencies μ_i(s). The original version for estimating μ_i(s), as reported in [31], is not suitable in car-following tasks because the speed of the lead vehicle is always changing. Simply applying policy propagation [32] for every trajectory in the data can be time-consuming. Therefore, in this study, we perform sampling by running the policy in the simulation of drivers' car-following trajectories N_2 times to approximate μ_i(s). During the simulation, the action at every time step was randomly sampled from the policy based on the probability of every action. In the third part, the gradients are calculated by subtracting the estimated μ_i(s) from the state visitation frequencies μ_D obtained from the data. Subsequently, the parameters of the NN are updated by backpropagation. These steps are repeated several times until convergence. The training of the algorithm can be stopped when the rewards accumulated in the trajectories stop increasing. Data Description. Data from two field tests that were conducted in Huzhou city in Zhejiang province and Xi'an city in Shaanxi province were used in this study. Forty-two drivers participated in the test. Their driving experience ranged from 2 to 30 years, with the average being 15.2 years. During the test, the participants were only informed of the starting location and destination, and they were asked to follow their normal driving styles. The test data were collected by a Volkswagen Touran equipped with instruments and sensors, as illustrated in Figure 2. The test route consisted of diverse driving scenarios such as urban roads and highways, as shown in Figure 3. The other details of the field tests are described in [39,40]. Extraction of Car-following Events and Data Filtering.
We applied the rules described in [41] to extract the car-following events from the obtained data. (1) We ensured that the test vehicle was following the same lead car; (2) the distance to the lead car was less than 120 m, to eliminate free-flow traffic conditions; (3) we ensured that the follower and the leader were on the same lane; (4) the duration of car-following events was longer than 15 s. The extracted events were then manually reviewed by checking the videos recorded by the front camera on the equipped vehicle to guarantee good data quality. Eventually, nearly one thousand car-following events were extracted. A moving average filter (1 s) was applied to remove noise from the extracted car-following data. Driving Style Clustering. The participants displayed diverse driving styles, which were evident in the driving data. The k-means algorithm was used to cluster the drivers into different driving styles. Previous studies have adopted kinematic features such as spacing, speed, and relative speed or time-based features such as TH and TTC for driving style clustering [34,39]. In this study, multiple combinations of the mentioned features were tested as inputs for the k-means algorithm, and the quality of the clustering results was then evaluated by the silhouette coefficient, where a larger silhouette coefficient indicates a better result. Finally, the mean value of TH and the TH when braking were chosen because this combination achieved the highest value of the silhouette coefficient [42]. The number of clusters was also determined to be two based on the results of the silhouette coefficient. Figures 4 and 5 present the boxplots of the mean TH and the mean TH when braking for the conservative group, which consisted of 16 drivers, and the aggressive group, which consisted of 26 drivers, respectively. The aggressive group had significantly higher mean TH (t = 6.748, p < 0.001) and mean TH when braking (t = 7.655, p < 0.001) than the conservative group. The descriptive statistics (Table 1) of the two groups confirmed the clustering results. The aggressive drivers had shorter mean spacing and higher mean speed and mean acceleration than the conservative drivers. Evaluation Metrics. Two metrics, the root mean square percentage error (RMSPE) (equation (10)) and the modified Hausdorff distance (MHD), were used to evaluate the accuracy of the car-following models for reproducing drivers' car-following trajectories. As suggested by Punzo and Montanino [43], the cumulative sum of the errors is an appropriate option to evaluate the performance of car-following models, where RMSPE(speed) denotes the RMSPE of speed, RMSPE(spacing) denotes the RMSPE of spacing, v_n^obs(t) and h_n^obs(t) are the speed and spacing at time t in the observed nth trajectory, and v_n^simu(t) and h_n^simu(t) are the simulated speed and spacing at time t for the nth trajectory. The MHD is an extension of the Hausdorff distance, which represents the distance between two sets of points C = {c_1, c_2, ..., c_{N_c}} and B = {b_1, b_2, ..., b_{N_b}}, as defined in equation (11). The median of the MHD (MHD_50) has been used to evaluate the similarity of simulated and actual trajectories in modeling defensive driving strategies [44] and urban route planning [45]. Since the proposed DIRL model outputs a stochastic policy, the two metrics were calculated by averaging the results of 10 simulations for every trajectory in the data. Model Training. The k-fold cross-validation method was applied to evaluate the performance of the car-following models.
Specifically, the car-following datasets of the two groups of drivers were randomly divided into 5 groups with an equal number of trajectories. One group was taken as the test set and the remaining four groups were taken as the training set. The training and test experiments were repeated five times so that every divided group had been used as the test set. Finally, the performance of the car-following models was evaluated by the average value of the two metrics. The Adam optimizer [46] with learning rate decay was applied to train the DIRL model. The hyperparameters used for training are listed in Table 2. L2 regularization was used to prevent overfitting of the reward network. Figures 6 and 7 present the change of the RMSPE of spacing and the change of the cumulative normalized rewards per trajectory in one of the cross-validation experiments, respectively. After about 5 iterations, the RMSPE of spacing for the training set and the test set starts to converge. The rewards collected in the trajectory remain stable after about the same number of iterations. The Investigated Models. The accuracy and generalization ability of the proposed model were compared with those of two other data-driven car-following models, that is, the NN-based model and the RNN-based model. NN-Based Car-following Model. A fully connected neural network with one hidden layer was built following the study conducted by Chong et al. [15]. The hidden layer consisted of 60 neurons in this study. The NN-based model takes as inputs the speed, spacing, and relative speed and outputs the acceleration for the current time step. The objective of minimizing the error between the empirical acceleration and the model's predictions was adopted to train the model (equation (12)), where w, b denote the weights and biases in the NN-based model, a_n^simu(t) denotes the predicted acceleration at time step t for the nth trajectory, and a_n^obs(t) denotes the empirical acceleration at time step t for the nth trajectory. RNN-Based Car-following Model. The architecture of the RNN-based model built in this study is in line with the study conducted by Zhou et al. [17]. The number of hidden neurons in the RNN model was set to 60. The RNN model takes as input a sequence of historical information that lasts for 1 s and outputs the acceleration for the current time step. The speed and spacing for the next time step were then estimated based on the state transition described in equation (1). The training of the RNN model adopted the loss function shown in equation (13), which minimizes the RMSPE of speed and spacing, where w, b denote the weights and biases in the RNN model, v_n^obs(t) and h_n^obs(t) are the speed and spacing at time t in the observed nth trajectory, and v_n^simu(t) and h_n^simu(t) are the simulated speed and spacing at time t for the nth trajectory. [Algorithm 1, excerpt: randomly initialize the parameters of the neural network as θ_1; for i = 1 to N_1, determine the reward for every state by forward propagation in the neural network and use the softmax version of value iteration to obtain the policy.] Performance Comparison. The average performances of the three models in the fivefold cross-validation tests using the data from the aggressive and conservative groups are compared in this section. Tables 3 and 4 present the results on the training sets and the test sets, respectively. The DIRL had the lowest RMSPE of spacing and MHD_50 in both the training sets and the test sets.
Although the NN and the RNN model had lower RMSPE of speed in the test sets, the overall error of the DIRL in reproducing drivers' trajectories was lower than that of the other two models. For the two kinds of BC models, the RNN outperformed the NN model, as it achieved lower RMSPE and MHD_50 than the NN model. Figure 8 presents the simulation results of speed and spacing for two car-following periods randomly selected from the datasets. As can be seen, the DIRL model tracks the empirical speed and spacing more closely than the other two models. The simulation results of speed for the NN and RNN models are smoother than those of the DIRL model because the former models output a continuous action, while the latter model outputs a discrete action. 6.2. The Learned Characteristics of the Model. Since the proposed model was trained with data from two groups of drivers with different driving styles, we expected that the learned models would exhibit features of both groups. Therefore, the learned value of the two driving styles, which represents the expected long-term return, is compared in this section. As depicted in Figure 9, the states with a higher value represent the preferable states, which drivers try to achieve during car-following. For the same distance to the lead vehicle, the aggressive drivers preferred a higher speed than the conservative drivers. The high-value area (V ≥ 0.8, in red) for the aggressive drivers has a steeper slope, as indicated by the angle θ between the black-dashed line and the x-axis. Since the cotangent of the angle θ is proportional to the value of TH, a larger angle means a shorter TH. Hence, the comparison of the angle θ in the two figures shows that the aggressive drivers favor a shorter TH. Besides, the width of the high-value area for the aggressive drivers is wider than that for the conservative drivers; this indicates that the aggressive drivers' preferred TH has a larger variance than that of the conservative drivers. This result is in good agreement with the details shown in the boxplot of TH for the two groups of drivers in Figure 4. It is also found that the high-value region of the speed becomes wider with an increase in the spacing to the lead vehicle in the two figures. The interpretation is that when the spacing is small, drivers must control the speed more precisely to avoid a collision. As the distance increases, drivers have more flexibility for speed control. The learned policies of the two groups were compared by assuming that both groups were following the same leader. The initial states of this car-following event and the speed of the leader were taken from the collected data. The learned stochastic policy was run 20 times for both groups. As shown in Figure 10, the aggressive group (in blue) maintained a smaller distance than the conservative group (in red) during the simulation. Both the aggressive and conservative drivers accelerated to follow the leader. However, the aggressive drivers increased their speed more quickly in the first 4 s, resulting in less distance to the leader compared with the conservative drivers. Discussion and Conclusion In this study, we propose a car-following model based on Max-Ent DIRL. The proposed model learns the rewards of drivers during car-following, which are approximated by an NN. The policy of drivers was solved by an RL algorithm, the softmax version of value iteration.
Tested on actual driving data, the proposed model outperformed the BC models (NN and RNN) by providing the lowest RMSPE and MHD_50 in replicating drivers' car-following trajectories. The better performance of the proposed model can be explained by its more general objective compared with the BC models. The DIRL model reproduces drivers' policy by first learning drivers' decision-making mechanisms (i.e., the rewards), whereas the BC approaches only learn the state-action relationships. Since the policy was solved by the RL algorithm, which is based on the assumption of maximizing long-term rewards, the obtained policy has the ability of long-term planning. In contrast, the BC methods do not include long-term planning in their model training objectives. The simulation results for the two car-following trajectories confirmed the superior long-term planning ability of the DIRL model. The deviation between the simulated spacing and the empirical data for the BC models becomes larger as the simulation continues. On the contrary, the simulation error does not accumulate during the simulation for the DIRL model. Moreover, the better performance of the RNN model found in this study is in line with previous studies [17,34]. Compared with the NN model, which relies only on information from the current time step for prediction, the advantage of using historical information makes the RNN model more suitable for time series prediction. The present study also demonstrates that the proposed model could capture the characteristics of different driving styles of human drivers. The learned value and policy matched those of the drivers with distinct driving styles. The fully connected NN applied in this study was trained to capture the relevant features that represented the drivers' preferences or objectives in car-following scenarios. The IRL method used in this study provides a new perspective to explain driver behavior and to model different driving strategies. However, solving the IRL problem is computationally expensive, which makes it challenging to apply to high-dimensional systems. Recent studies that have applied adversarial learning to IRL have shown an ability to scale the method to solve complex problems [22,47]. Future studies should consider these new approaches. The present study had some important limitations. First, the participants in the present study are all male, so a broader sample is needed in future research. Second, the proposed model does not consider drivers' reaction delay and memory effect for speed control during car-following. Future studies should take these factors into account. Data Availability The data used to support the findings of this study are available from the corresponding author upon request. Conflicts of Interest The authors declare that they have no conflicts of interest.
6,967.8
2020-11-20T00:00:00.000
[ "Computer Science", "Engineering" ]
On the Historical Association between National IQ and GDP per capita A remarkable, unquestioned assumption in (1–3) and subsequent studies measuring the association between national average Intellectual Quotients (IQ) and Gross Domestic Products (GDP) per capita is that a supposedly immutable1 genetic2 factor (IQ) may be correlated with a markedly fluctuant one (the wealth of nations). This short paper questions this assumption and presents the following results: Introduction and summary of results A remarkable, unquestioned assumption in (1)(2)(3) and subsequent studies measuring the association between national average Intellectual Quotients (IQ) and Gross Domestic Products (GDP) per capita is that a supposedly immutable 1 genetic 2 factor (IQ) may be correlated with a markedly fluctuant one (the wealth of nations). This short paper questions this assumption and presents the following results: 1. Using historical GDP per capita data produced by the Maddison project (5,6), we find that, over history, the (Pearson product-moment) correlation coefficient (r) between average IQ and GDP per capita is highly variable and ranges from strong negative values to strong positive values. The correlation between national IQ and GDP per capita is a snapshot of the world order at some point in time, and historical data allow us to identify several other eras. 2. The reported positive correlation between national average IQ scores and GDP per capita thus only concerns "today's GDP". However, today's GDP was never difficult to explain and predict in the first place. We show that arbitrary ad-hoc scores based on a country's continental location present a more significant correlation with contemporary GDP per capita. As an economic variable, the predictive value of IQ is thus lesser than that of the common sense observation that North-America is, currently, richer than Europe which is in turn richer than Africa, etc. 3. We conclude this paper by questioning the purpose of IQ studies in Macroeconomics. If this purpose is explaining the wealth of nations then confounding variables such as literacy cannot be ignored, and the Pearson productmoment correlation cannot be considered as a sole criterion to draw causal conclusions. If, on the other hand, the purpose is predicting the wealth of nations then simply using the geographical location of countries, which is no less circular than the use of IQ due to the confounding role of literacy, would be a better predictor of GDP. Related work and data sources General knowledge regarding average national Intellectual Quotients (IQ) and their association with economic outcomes is largely based on two books by Richard Lynn and Tatu Vanhanen, "IQ and the Wealth of Nations" (1) and its followup "IQ and Global Inequality" (2), as well as a dataset (3) by the same authors. With these publications, IQ gained entry into macroeconomic research and started being considered a valid independent variable to explain and predict the Gross Domestic Product (GDP) of nations, because of the high reported correlation of .82 3 . Since then, the confounding role of literacy in the association between IQ and GDP has been thoroughly established. Indeed, Marks has shown that IQ variations across time and race are explained by literacy differences (7) and that literacy, not intelligence, is in fact the key predictive factor for economic development (8). A recent (June 26, 2020) retraction of a publication by Clark et al. 
in Psychological Science (9), based on data from (3), notes that the above data are "plagued by lack of representativeness of the samples, questionable support for some of the measures, an excess of researcher degrees of freedom, and concern about the vulnerability of the data to bias". In this work, we overlook these shortcomings, as well as inherent shortcomings of IQ tests as a measure of an individual's intelligence (10), and question the idea that a fairly static racial factor is associated with the historically fluctuating variable that is GDP per capita. The beginning of new cycles can be linked to important historical changes (the industrial revolution, postmodernism starting after the Second World War, decolonization). Modern GDP per capita was never difficult to predict We divide the world into 13 regions and assign an ad-hoc integer score from 1 to 10 reflecting the wealth of the region (1 for Sub-Saharan Africa, 10 for North America), according to Table 2. Each country is assigned the score of the region it belongs to. Figure 3 compares the coefficient of correlation of this lazy ad-hoc score with that of national IQ. Conclusion The purpose of IQ research in Macroeconomics is unclear. If it is an attempt at explaining the wealth of nations, e.g. to predict the value of investment in increasing intelligence, then this type of analysis cannot avoid controlling for literacy rates and other confounding variables (nourishment, health, etc.). If, on the other hand, it is an attempt at predicting the wealth of nations based on an independent variable (notwithstanding the poor test-retest correlation of IQ tests), e.g. to inform immigration policies, with correlation as the only criterion, then assigning lazy 1-to-10 scores to different continents based on their current wealth would be a better model than national IQ. The wealth of nations (and of anyone for that matter) is best studied as a time series. Any association with a static variable is bound to be uninformative.
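The comparison made above (an ad-hoc continental score versus national IQ as predictors of GDP per capita) reduces to computing two Pearson correlations. The sketch below shows the computation on placeholder values; none of the scores or GDP figures are the paper's data.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient between two samples."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return np.corrcoef(x, y)[0, 1]

# Placeholder data: each country gets the ad-hoc 1-10 score of its region and a
# GDP per capita value; none of these numbers are real or taken from the paper.
region_score   = [1, 1, 3, 5, 6, 8, 9, 10, 10, 4]
gdp_per_capita = [1500, 900, 4200, 9000, 12000, 30000, 38000, 45000, 52000, 6500]

print(pearson_r(region_score, gdp_per_capita))
```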
1,225
2021-03-05T00:00:00.000
[ "Economics" ]
Aerosols-cloud microphysics-thermodynamics-turbulence: Evaluating supersaturation in a marine stratocumulus cloud. This work presents a unique combination of aerosol, cloud microphysical, thermodynamic and turbulence variables to characterize supersaturation fluctuations in a turbulent marine stratocumulus (SC) layer. The analysis is based on observations with the helicopter-borne measurement platform ACTOS and a detailed cloud microphysical parcel model following three different approaches: (1) From the comparison of aerosol number size distributions inside and below the SC layer, the number of activated particles is calculated as 435 ± 87 cm −3 and compares well with the observed median droplet number concentration of N d = 464 cm −3. Furthermore, a 50 % activation diameter of D p50 ≈ 115 nm was derived, which was linked to a critical supersaturation S crit of 0.16 % via Köhler theory. From the shape of the fraction of activated particles, we estimated a standard deviation of supersaturation fluctuations of σ S′ = 0.09 %. (2) These estimates are compared to more direct thermodynamic observations at cloud base. Therefore, supersaturation fluctuations (S′) are calculated based on highly resolved thermodynamic data showing a standard deviation of S′ ranging within 0.1 % ≤ σ S′ ≤ 0.3 %. (3) The sensitivity of the supersaturation to observed vertical wind velocity fluctuations is investigated with the help of a detailed cloud microphysical model. These results show the highest fluctuations of S′ with σ S′ = 0.1 % at cloud base and a decreasing σ S′ with increasing liquid water content and droplet number concentration. All three approaches are independent of each other and vary only within a factor of about two. Introduction The atmosphere's radiation budget and aerosol particles are linked via (1) the radiative properties of the aerosol particles themselves (direct aerosol effect) and (2) their influence on cloud microphysics (indirect aerosol effect) and, therefore, cloud radiative properties. The first description of the indirect aerosol effect on climate was introduced by Warner and Twomey (1967). Assuming a constant liquid water content (LWC) but an increasing number of cloud condensation nuclei (CCN), the same amount of water is distributed to a larger number of smaller droplets (first indirect effect). Later, Albrecht (1989) focused on the effect of "polluted" droplet spectra on cloud lifetime and precipitation (second indirect effect, cloud lifetime effect). Since then, it has become clear that there is a range of subtle aerosol-cloud interactions (e.g., Stevens and Feingold, 2009), and quantifying them remains a challenge. Besides direct measurements of aerosol and cloud droplet populations, the supersaturation field is of great interest because it serves to link the two populations via activation. For example, activation theories have been highly refined to allow high resolution in critical supersaturation to be determined, based on aerosol chemical composition (Wex et al., 2007; Petters and Kreidenweis, 2007). Aerosols do not typically activate in a quiescent background of uniform supersaturation, however, but rather in a highly fluctuating, turbulent supersaturation field.
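The link between an activation diameter and a critical supersaturation mentioned above can be sketched with the κ-Köhler approximation of Petters and Kreidenweis (2007), cited in this paragraph. This is not the calculation used later in the paper (which assumes an ammonium sulfate particle with an insoluble core); the κ value below, ammonium sulfate scaled by a 0.7 soluble fraction, is an assumption for illustration only.

```python
import math

def critical_supersaturation(d_dry_m, kappa, T=283.0):
    """kappa-Koehler estimate of the critical supersaturation (in %) for a dry
    particle of diameter d_dry_m with hygroscopicity parameter kappa:
    s_c ~ sqrt(4 A^3 / (27 kappa d_dry^3)), with A = 4 sigma_w M_w / (R T rho_w)."""
    sigma_w, M_w, R, rho_w = 0.072, 0.018, 8.314, 1000.0   # SI units
    A = 4.0 * sigma_w * M_w / (R * T * rho_w)              # Kelvin-term length scale (m)
    return 100.0 * math.sqrt(4.0 * A**3 / (27.0 * kappa * d_dry_m**3))

# Assumed hygroscopicity: ammonium sulfate (kappa ~ 0.61) scaled by a soluble
# volume fraction of 0.7, i.e. kappa ~ 0.43 -- an assumption, not a measured value.
print(critical_supersaturation(115e-9, 0.43))   # ~0.16 % for D_p50 = 115 nm
```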
Therefore, some studies have been carried out to estimate the effects of saturation fluctuations on droplet growth (Cooper, 1989; Khvorostyanov and Curry, 1999). Kulmala et al. (1997) pointed out that some droplets are able to grow in, on average, undersaturated conditions. To make matters more complex, the fluctuations do not arise solely from turbulent mixing of temperature and water vapor concentration fields, but also from the mass exchange associated with the activation process itself. Thereby, growing droplets act as sinks for the local supersaturation, which, furthermore, can vary from droplet to droplet (Srivastava, 1989). Achieving internal consistency between aerosol distributions, cloud droplet distributions, turbulence, and thermodynamic fluctuations is still a significant challenge, and is the context of this work. In this study, we evaluate the magnitude of supersaturation fluctuations (S′) in a turbulent marine stratocumulus layer over the Baltic Sea. The measurements were obtained with the helicopter-borne platform ACTOS (Airborne Cloud Turbulence Observation System, Siebert et al., 2006); its true air speed of only 15 to 20 m s −1 allows us to compare highly resolved and spatially collocated thermodynamic and cloud microphysical properties with microphysical properties of the interstitial aerosol near the turbulent cloud layer. Unlike most prior airborne studies, we also measure the non-activated interstitial aerosol inside an SC to draw conclusions on the activation properties. Within the framework of this paper, we focus on three approaches for characterizing supersaturation fluctuations: (1) the aerosol number size distribution inside and outside the stratocumulus cloud and the resulting activation properties, as well as a comparison with the observed cloud droplet number concentration, (2) the water vapor supersaturation at the cloud base derived from highly resolved thermodynamic data, and (3) a sensitivity analysis of the influence of measured vertical velocity fluctuations on the supersaturation field determined with a cloud microphysical parcel model. Experimental This study draws on measurements of marine stratocumulus clouds over the Baltic Sea, obtained on 5 October 2007 during a flight originating from the Kiel-Holtenau airport in Germany (54°22′46″ N, 10°8′43″ E). ACTOS operated north of the city of Kiel over a rural area, the coastline and the Baltic Sea, in the measurement area shown in Fig. 1. Airborne Cloud Turbulence Observation System (ACTOS) The helicopter-borne measurement platform ACTOS is equipped with a variety of high resolution sensors for meteorological and turbulence parameters as well as cloud and aerosol microphysical properties. ACTOS is an autonomous platform with its own data acquisition system and power supply. A wireless network uplink to the helicopter ensures online monitoring of the most important parameters during flight. ACTOS is carried by means of a 140 m long rope beneath a helicopter and operates at a true air speed of 15 to 20 m s −1. The combination of low true air speed and high sampling frequency results in a spatial resolution on the centimetre scale for standard meteorological parameters. For a detailed description of ACTOS and its instrumentation see Siebert et al. (2006).
Aerosol and cloud microphysical instrumentation During this campaign, aerosol number size distributions (NSDs) in the size range of 6 nm < D p < 2.6 µm were recorded by a Scanning Mobility Particle Sizer (SMPS, IfT, Leipzig, Germany) and an Optical Particle Counter (model 1.129, Grimm Aerosol Technik GmbH, Ainring, Germany). Additionally, the total particle number concentration of the interstitial aerosol larger than D p = 6 nm was measured by a Condensational Particle Counter with an increased temperature difference between saturator and condensor (CPC 3762, TSI Incorporate, Shoreview, MN, USA). For a detailed description of the aerosol instrumentation, the reader is referred to Wehner et al. (2010). The aerosol inlet consists of a horizontally oriented tube, which is curved 90 • to the mean flow direction. Taking into account aspiration efficiency and losses at the 90 • bend (Baron and Willeke, 2001) particles and cloud droplets larger than 5 µm are not able to enter the aerosol measurement system. This ensures exclusive sampling of interstitial particles. Cloud droplet spectra were measured with the Phase-Doppler Interferometer for Cloud Turbulence (PICT, Chuang et al., 2008). The PICT instrument measures size and speed (in flight direction) of individual droplets between 3 µm < D d < 100 µm, with no dead time losses and with minimal coincidence sizing errors. Liquid water content (LWC) was measured with the Particle Volume Monitor (PVM, Gerber, 1991). Temperature and humidity measurements at cloud base were performed by an ultra-fast thermometer (UFT) and an infra-red absorption hygrometer, respectively (Siebert et al., 2006). Both sensors are located in the frontal outrigger of the measurement platform. Measurements On 5 October, ACTOS performed measurements near and inside a SC layer advected from the Baltic Sea to Northern Germany. After take-off, the flight started with a vertical profile up to approximately 1000 m above ground level (AGL, all following heights refer to ground level of airport) where ACTOS touched the cloud base of the SC layer. The vertical profile was then continued in cloud free area up to a height of approximately 1550 m. After descending to cloud top height ACTOS was dipped into the SC from above (cf. Fig. 2). Several horizontal flight legs with constant altitude were performed inside the SC layer. A second vertical profile was accomplished about 1 h after the first one during the descent on the way back to the airport. The complete measurement flight took approximately 1.5 h. Figure 3 shows vertical profiles of selected meteorological parameters measured during ascent and descent. The potential temperature (θ) features a slight increase in the lowermost 1000 m indicating a stably stratified atmosphere. This is followed by a strong temperature increase until about 1300 m, which belongs to an inversion above the observed stratocumulus. The absolute humidity (q) shows a decrease with height until the altitude of the inversion, differences between ascent and descent are likely due to horizontal inhomogeneities. The wind direction (dd) was north-east to east, its vertical distribution exhibits mainly a change of approximately 30 • during the lowermost 500 m. Within the same height interval, the wind velocity (U ) increases by around 5 m s −1 . Furthermore, abrupt changes are only found at the inversion. Vertical structure The absence of strong vertical gradients in dd and U below the inversion layer indicates that the SC and the subcloud layer are coupled. 
The total particle number concentration (N tot ) features a continuous decrease from about 3000 cm −3 at the ground to a few hundreds per cubic centimetre at the altitude of the inversion. This is followed by an increase to N tot ≈ 1000 cm −3 above the inversion.
The variability in the lowermost 200 m is possibly due to ground effects or local pollution. The observed SC layer was located between 1000 m and 1300 m. Note that for flight safety reasons, the helicopter is not allowed to fly into clouds. Vertical profiles are recorded in cloud-free areas during ascents and descents. Due to the forward velocity of the platform, vertical gradients are to some degree influenced by horizontal gradients, and the displayed vertical profiles may differ from those directly below the cloud layer. Measurements at cloud level Figure 4 shows a time series of selected parameters during an in-cloud flight leg. ACTOS penetrated the cloud layer from above and performed an approximately 5 km long leg at 1250 ± 15 m. At the beginning, the interstitial particle number concentration (N int ) increases sharply because ACTOS was dipped into the stratocumulus from above, where N int is significantly lower (cf. Fig. 3). Inside the cloud, N int varies mostly between 1000 and 1300 cm −3 , which is higher compared to the value observed for the vertical profile. The difference may be due to horizontal inhomogeneities because vertical profiles were recorded at some distance from the cloud, as mentioned above. The LWC ranges mainly between 0.6 and 1.0 g m −3 , while the vertical wind velocity (w) shows variations within ±1.5 m s −1 with a standard deviation of σ w = 0.6 m s −1 and a mean value close to zero. Strong downdrafts correlate well with sharp decreases in the LWC. In the lowest panel, the time series of the observed mean droplet diameter for 10 s long intervals is shown, indicating average diameters between 12 µm and 16 µm with a nearly constant standard deviation of around 2-3 µm (error bars). The mean droplet size distribution corresponding to the above-displayed flight leg is illustrated in Fig. 5. The spectrum shows a broad mono-modal distribution with a maximum concentration at about D d = 12 µm. The majority of droplets have sizes between 5 µm and 20 µm, while the median total droplet concentration for this cloud passage is approximately 470 cm −3 with an interquartile spread of 141 cm −3 . The median droplet number concentration of all cloud passages is about N d = 464 cm −3 with an interquartile spread of 184 cm −3 . Critical supersaturation In this section, three different estimates of the critical supersaturation are presented. The three estimates come from independent measurements, so agreement between them builds confidence in the individual methods and their theoretical foundations. 1. Aerosol and cloud microphysics: aerosol number size distributions inside and outside the cloud are used to compare to cloud droplet number densities as a check, and then to derive an activation diameter and a corresponding critical supersaturation. 2. Thermodynamics: humidity fluctuations are estimated from direct, high resolution measurements of absolute humidity and temperature around the cloud base. 3. Turbulence: a cloud parcel model is utilized to translate observed vertical wind velocity fluctuations into supersaturation fluctuations. Aerosol number size distribution Both the sub-cloud and in-cloud NSDs feature a similar Aitken mode (20 nm < D p < 70 nm), implying that sub-cloud and in-cloud aerosols originate from the same air mass. Comparing the in-cloud and sub-cloud NSDs, however, significant differences are obvious. For particles larger than D p = 70 nm, a spread between in-cloud and sub-cloud NSDs occurs that can be explained by the activation of aerosols to cloud droplets.
The shaded area in Fig. 6 illustrates the number of activated particles N act . We derive N act by integrating the difference between the sub-cloud and in-cloud NSDs for particles in the range 80 nm < D p < 2600 nm as follows: N act = ∫ [NSD subcloud (D p ) − NSD cloud (D p )] dD p , integrated from D p = 80 nm to 2600 nm. (1) In our case, the difference is about 435 ± 87 cm −3 , which agrees remarkably well with the above introduced median droplet concentration of N d = 464 cm −3 . This consistency between aerosol and cloud microphysical measurements provides encouragement to further investigate the activation process that links the two. In order to quantify an activation diameter, we calculate the fraction of activated particles η with the help of the mean in-cloud and sub-cloud NSDs (NSD cloud (D p ) and NSD subcloud (D p ), respectively) as follows: η(D p ) = 1 − NSD cloud (D p ) / NSD subcloud (D p ). (2) Figure 6 shows η (right ordinate), which features a steep increase for particles in the size range of 80 nm < D p < 150 nm. For larger particle sizes, η approaches unity. From η = 0.5, a 50 %-activation diameter of D p50 ≈ 115 nm can be derived (cf. Fig. 6). With the help of Köhler theory (Köhler, 1936) and an assumption about the chemical composition, D p50 can be related to a critical supersaturation. Since ACTOS is not equipped with instruments to analyze the aerosol chemical composition, we have to make an assumption for the hygroscopicity. Earlier studies investigated the chemical composition of Central European aerosol and found a dominating mass fraction of ammonium sulfate. To make a first guess, we assume a pure ammonium sulfate particle containing an insoluble core. Chemical analyses during LACE 98 support this; furthermore, the same authors found an overall mean water-soluble volume fraction of 0.6 and, additionally, a class of highly soluble particles with a soluble fraction of 0.85. Therefore, we use an ammonium sulfate particle with a dry diameter of D p = D p50 = 115 nm and a soluble fraction of 0.7 and calculate that a minimum supersaturation of S crit = 0.16 % is required to activate the particle as a cloud droplet. In the next step, we concentrate again on the fraction of activated particles (η). Taking into account that a small fraction of particles with diameters of D p = 80 nm and smaller is activated, a critical supersaturation of S crit = 0.28 % is necessary for those particles. If we had a single updraft velocity and a perfectly homogeneous aerosol composition and concentration, we should see a perfect step function for η. Instead, we see an error-function (erf) like behaviour. In order to relate this roll-off to the distribution of critical supersaturation, we convert η(D p ) to a function of critical supersaturation by calculating the critical supersaturation (with the same chemical parameters as above) for every D p via Köhler theory. Figure 7 illustrates the resulting activated fraction η′(S crit ) (black squares), which exhibits a similar error-function like behaviour as η(D p ) in Fig. 6. For clarity, the activated fraction as a function of critical supersaturation is defined here as η′(S crit ) = 1 − η(S crit ), so as to show an increase with critical supersaturation, although the shape parameters obtained from the curve are independent of this choice. This gives us the opportunity to estimate the mean critical supersaturation and its standard deviation by applying a least squares fit to η′ with the function y = [erf((x − µ)/(√2 σ)) + 1] / 2, using the mean and standard deviation (µ, σ) as free parameters.
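As an illustration of these two steps, the short Python sketch below first evaluates the Köhler critical supersaturation for a 115 nm ammonium sulfate particle with a soluble volume fraction of 0.7, and then fits the error-function model to an activated-fraction curve. The physical constants (surface tension, solute density and molar mass, effective van't Hoff factor) and the sample data points are assumptions made for this sketch and are not values reported in the text.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

# --- Koehler estimate of the critical supersaturation (assumed constants) ---
sigma_w = 0.072              # surface tension of water, N m^-1 (assumed)
M_w, rho_w = 0.018, 1000.0   # molar mass (kg mol^-1) and density (kg m^-3) of water
M_s, rho_s = 0.132, 1770.0   # ammonium sulfate molar mass and density (assumed)
nu = 2.5                     # effective van't Hoff factor (assumed)
R, T = 8.314, 279.0          # gas constant and cloud-base temperature (K)

def s_crit(D_p, eps):
    """Critical supersaturation (%) of a dry particle of diameter D_p [m]
    with water-soluble volume fraction eps."""
    A = 2.0 * sigma_w * M_w / (R * T * rho_w)                # Kelvin term coefficient [m]
    n_s = nu * eps * (np.pi / 6.0) * D_p**3 * rho_s / M_s    # moles of dissolved ions
    B = 3.0 * n_s * M_w / (4.0 * np.pi * rho_w)              # Raoult term coefficient [m^3]
    return 100.0 * np.sqrt(4.0 * A**3 / (27.0 * B))

print(s_crit(115e-9, 0.7))   # ~0.16 % for the parameters chosen above

# --- erf fit to the activated fraction eta'(S_crit) ---
def eta_model(s, mu, sig):
    return 0.5 * (erf((s - mu) / (np.sqrt(2.0) * sig)) + 1.0)

# hypothetical (S_crit, eta') pairs standing in for the black squares of Fig. 7
s_obs   = np.array([0.05, 0.10, 0.15, 0.20, 0.25, 0.30, 0.40])
eta_obs = np.array([0.05, 0.20, 0.45, 0.65, 0.82, 0.93, 0.99])

(mu_fit, sig_fit), _ = curve_fit(eta_model, s_obs, eta_obs, p0=(0.2, 0.1))
print(mu_fit, sig_fit)       # mean and spread of the critical supersaturation
```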
The fit function is plotted in Fig. 7 (red line); the resulting µ = 0.175 % agrees well with the above derived S crit (D p50 ), while the standard deviation is σ S = 0.09 %. Absolute humidity and temperature fluctuations We now compare these estimates with more direct thermodynamic observations leading to relative humidity (RH) at cloud base. During the first ascent, ACTOS touched the cloud base at about 1000 m above ground level. Performing accurate measurements of RH or supersaturation (S) in the presence of cloud droplets is still a difficult task, but we make an attempt here in order to compare with the other methods for evaluating σ S . We derived RH from collocated temperature and absolute humidity measurements with an ultrafast thermometer (UFT) and an infra-red absorption hygrometer, both with a temporal resolution of 100 Hz. The longitudinal separation between both sensors was considered before combining the two measurements. The absolute accuracy of our humidity estimates is on the order of a few percent. Observed fluctuations, however, are interpreted as real and are afterwards termed saturation fluctuations (S′). For quality assurance of the resulting time series of RH, we applied power spectral analysis. Figure 8 shows power spectra of three different subsets during the cloud penetration of ACTOS, which will be analysed concerning supersaturation later on. All spectra feature large scatter due to the poor sampling statistics of the short subsections. Regardless of the scatter, all spectra show roughly a mean slope of −5/3, which implies inertial subrange scaling. This implies that the fluctuations are a result of real turbulence and not of noise, which would result in a flat and horizontal spectrum. Strong deviations from the −5/3 slope could also result from scattering by single cloud droplets in the measuring volume of the infra-red absorption hygrometer; such deviations were not observed either. Figure 9 displays linearly detrended S′ and LWC as a function of altitude. The saturation fluctuations are divided into three subsections according to different mean LWC values. The first subsection (green line) refers to zero LWC, the blue line belongs to an LWC slightly above the noise level, and the last subsection (red line) refers to a mean LWC ∼ 0.02 g m −3 . For these different subrecords, the standard deviation of the supersaturation fluctuations (σ S ) is calculated. The green subsection, which is located a few metres below the cloud base, features a standard deviation of 0.2 %, whereas the peak-to-peak (p2p) values reach 1.2 %. In the blue subsection, only calm fluctuations occur, with σ S = 0.1 %, which range within 0.5 %. The strongest fluctuations appear in the red subsection with a mean LWC of about 0.02 g m −3 . The standard deviation reaches 0.3 % and the fluctuations range within 1.5 %. Altogether, these estimates of saturation fluctuations, with standard deviations of a few tenths of a percent, agree well with the above derived values of the critical supersaturation, although local values are expected to be higher.
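The inertial-subrange check applied to the RH series above can be sketched in a few lines; the sampling rate matches the 100 Hz instruments, but the record generated below is synthetic, so only the procedure (a Welch spectrum and a log-log slope estimate over an assumed frequency band) is meant to carry over.

```python
import numpy as np
from scipy.signal import welch

fs = 100.0                          # sampling rate in Hz (100 Hz humidity/temperature data)
rng = np.random.default_rng(0)
rh = np.cumsum(rng.normal(size=6000)) / 50.0   # synthetic stand-in for a detrended RH record

# Welch power spectral density of the (detrended) saturation fluctuations
f, psd = welch(rh - rh.mean(), fs=fs, nperseg=1024)

# Fit a power law S(f) ~ f**alpha over an assumed inertial-subrange band (here 0.5-10 Hz)
band = (f > 0.5) & (f < 10.0)
alpha = np.polyfit(np.log(f[band]), np.log(psd[band]), 1)[0]
print(f"spectral slope: {alpha:.2f}  (a value near -5/3 indicates inertial-subrange scaling)")
```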
Vertical velocity fluctuations Here, we investigate the influence of turbulent vertical wind fluctuations on the supersaturation field with the help of the detailed cloud microphysical parcel model of Simmel and Wurzler (2006). Essentially, we wish to evaluate the range of supersaturation fluctuations that can be achieved for realistic fluctuations in vertical velocity occurring during the activation and condensation growth of cloud droplets near cloud base, assuming a uniform aerosol number concentration. In this study, the model is used with a moving size-bin approach to avoid numerical diffusion along the mass axis. Furthermore, we focus on pure condensational droplet growth, ignoring collision/coalescence and entrainment. During the initialization of the cloud microphysical model, an air parcel is lifted by a constant updraft w init until a specified altitude or average liquid water content is reached. The air parcel starts to rise just beneath the cloud base observed by ACTOS. The initial conditions are characterised by a static pressure of p = 905 hPa, a temperature of T = 278.9 K and a relative humidity of RH = 96 %, resulting in a model cloud base at 1065 m. The initial aerosol number size distribution is represented by four log-normal modes, which were fitted to the observed mean aerosol number size distribution measured below the cloud base of the stratocumulus. After the initialisation, the model is driven by detrended vertical wind velocity fluctuations (w′) recorded during a flight leg inside the stratocumulus layer. Basically, with this approach we compare two different reference frames. We use Eulerian measurements to drive a detailed cloud microphysical parcel model in a Lagrangian reference frame. In principle, the conversion of measured time series of wind fluctuations into spatially resolved fluctuations is possible by using Taylor's frozen-flow hypothesis (Taylor, 1938). Furthermore, for homogeneous turbulence the probability density functions (pdf) of "one-point one-time" velocity fluctuations (Eulerian reference frame) can be taken as equivalent to Lagrangian velocity fluctuations (see e.g., Pope, 2000, p. 483). Instead of multiplying the measured time series w(t) with the true airspeed of the measurement platform to get w(x) (Taylor's frozen-flow hypothesis), we considered the ratio of two different time scales: (i) the time T ACTOS needs to pass an eddy of typical size L and (ii) the typical eddy turn-over time τ eddy , which describes the typical residence time of an air parcel in the same eddy. The first time scale can be estimated by integrating the autocorrelation function ρ w (τ). In practice, ρ w (τ) is assumed to exhibit an exponential shape and T is taken as the time where ρ w (T ) = 1/e. From the measurements, we estimate T ≈ 2.6 s. The second time scale can be estimated by τ eddy ∼ L/σ w , where L = U · T ≈ 20 m s −1 · 2.6 s ≈ 50 m is the integral length scale (U is the true airspeed of ACTOS). With a standard deviation σ w ≈ 0.6 m s −1 , we get τ eddy ≈ 80 s. That is, if ACTOS samples one eddy in the time T , an air parcel spends the time τ eddy in this eddy, and we have to stretch the simulation time by a factor of τ eddy /T ≈ U/σ w ∼ 30. Although the observed vertical wind velocity fluctuations are detrended, subrecords contain longer periods of up- and downdrafts leading to a vertical shift of the air parcel. Therefore, stretching the simulation time increases the vertical displacement. Note that this is an order of magnitude estimate; with a factor of 30 the vertical shift of an air parcel would be about ±300 m, exceeding the thickness of the observed SC with which we want to compare. For this reason, in our simulations, we used a factor of 10 to limit this shift to ±100 m. Furthermore, sensitivity estimates for different stretching factors do not show a strong influence on the resulting σ S .
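A minimal sketch of this time-scale estimate, assuming an exponential autocorrelation and using a synthetic stand-in for the measured w′ record, could look as follows.

```python
import numpy as np

fs, U = 100.0, 20.0              # sampling rate (Hz) and true airspeed of ACTOS (m s^-1)
rng = np.random.default_rng(1)
# synthetic detrended vertical-velocity record standing in for the measured w'(t)
w = rng.normal(scale=0.6, size=20000)
w = np.convolve(w, np.ones(200) / 200.0, mode="same")   # crude correlation in time
w = w - w.mean()

acf = np.correlate(w, w, mode="full")[w.size - 1:]
acf = acf / acf[0]                        # normalized autocorrelation rho_w(tau)

T = np.argmax(acf < 1.0 / np.e) / fs      # integral time scale: first crossing of 1/e
L = U * T                                 # integral length scale
sigma_w = w.std()
tau_eddy = L / sigma_w                    # eddy turn-over time
print(T, L, tau_eddy, tau_eddy / T)       # stretching factor tau_eddy / T ~ U / sigma_w
```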
The factor is realized by using single data points of the 100 Hz resolution time series with a model time step of 0.1 s. We have performed two types of model runs that allow us to initiate the vertical velocity fluctuations just after or during the activation process. This is achieved through two scenarios for initializing the model: (a) lifting the air parcel with a constant updraft velocity w init until a certain LWC is reached and (b) lifting the air parcel with a constant velocity to a certain altitude (h) close to the cloud base. In concept (a), the vertical wind fluctuations are switched on when the air parcel has already entered the cloud layer. Instead, concept (b) allows us to investigate the influence of the wind fluctuations on the supersaturation field directly at the condensation level. In this case, the subsequent wind fluctuations and the resulting vertical displacement of the air parcel lead to activation and deactivation of aerosols and cloud droplets, respectively. Figure 10 shows time series of the observed vertical wind velocity and the resulting modelled supersaturation after the initialization of the cloud microphysical model following concept (a) with an initial wind speed of w init = 0.1 m s −1 and LWC = 0.2 g m −3 . The figure displays the fluctuations in the supersaturation field arising from w′. In Table 1 the initial parameters and results of six model runs are presented. The model runs A1, A2 and A3 follow the initializing concept (a). Within A1 and A2 the air parcel is lifted until the prescribed LWC is reached, which corresponds to an altitude of about 1200 m. Afterwards, the model is driven by the observed vertical wind fluctuations, causing supersaturation fluctuations with standard deviations of 0.08 % and 0.07 %, respectively, and peak-to-peak values (p2p) of up to 0.56 %. In A3, the parcel was lifted up to an altitude of 1800 m and features calm supersaturation fluctuations with a standard deviation of 0.03 % within a range of only 0.19 %. B1 to B3 follow the initialization scheme (b). In this case, the air parcel is lifted into altitudes between 1035 and 1045 m, which is below the cloud base. Furthermore, the initial vertical wind velocity does not influence the activation process and, therefore, is of marginal importance. Due to the vertical displacement resulting from driving the model by w′, the air parcel is lifted up and down and enters the cloud several times. The results for B1, B2 and B3 correspond to subsections of the model runs longer than 500 model seconds where the air parcel was located between 1065 and 1100 m, in order to focus only on the supersaturation field at the cloud base. B1 and B2 exhibit the highest coupling of w′ and S, with standard deviations up to 0.1 % and maximum peak-to-peak values of 0.58 %. Summary and discussion This study presents a unique combination of cloud and aerosol microphysical, thermodynamic, and turbulence variables measured at high temporal and spatial resolution in a stratocumulus cloud. These measurements and results from a cloud microphysical parcel model have allowed three independent approaches for characterizing supersaturation magnitudes and fluctuations. First, from the measurements of the interstitial aerosol below cloud base and inside the cloud, we were able to calculate the number of activated particles, which agrees remarkably well with the observed median cloud droplet number concentration.
We then derived a 50 % activation diameter of 115 nm, which can be related to a critical supersaturation of 0.16 % via Köhler theory. The roll-off of the fraction of activated aerosols allows us to estimate the range of supersaturation fluctuations, assuming all aerosols have identical composition. With the help of fitting an error function to the fraction of activated particles we find σ S ≈ 0.09 %. The highly resolved turbulence measurements give insights into the fluctuation of the supersaturation at cloud base. Calculated supersaturation fluctuations vary with a standard deviation ranging from 0.1 % ≤ σ S ≤ 0.3 %, which agrees well with the above derived critical supersaturation. Peak-to-peak values indicate a fluctuation range within 1.5 %. With the help of the cloud parcel model we analysed the sensitivity of the supersaturation to observed vertical wind fluctuations inside the stratocumulus layer. We found the highest supersaturation fluctuations for model runs at cloud base (σ S = 0.1 % and peak-to-peak values of 0.58 %). This behaviour can be interpreted through the phase relaxation time, defined approximately as τ p = (2π · d · D d · N d ) −1 (e.g., Rogers and Yau, 1989; Khvorostyanov and Curry, 1999; Austin et al., 1985), where d is the water vapor diffusivity (≈ 2.2 × 10 −5 m 2 s −1 , e.g., Houghton, 1985), and D d and N d are the mean droplet diameter and droplet number concentration, respectively. The phase relaxation time is a measure of how fast the water vapor is redistributed between vapor and condensed phases after a rapid change in S. Alternately, it is the time scale for approaching a quasi-steady-state supersaturation for a given steady vertical velocity. Considering model run A3 (cf. Table 1), which features a very high droplet concentration of N d ∼ 640 cm −3 and a mean droplet diameter of D d = 14 µm, this leads to τ p ≈ 0.8 s. That is, for such a small phase relaxation time water vapor condenses comparably fast onto the existing droplets, resulting in a strong damping effect and hence very calm fluctuations. Instead, for A1 with D d = 7 µm and N d = 500 cm −3 the phase relaxation time is roughly doubled to τ p ≈ 2 s, which explains the increase in σ S compared to A3. Finally, in the model runs with initialization scheme (b), droplet diameter and concentration are highly variable. During activation and deactivation, N d changes from zero to a few hundred and vice versa, while the maximum D d is on the order of 5 µm. Nevertheless, assuming, e.g., D d ≤ 5 µm and N d ≤ 150 cm −3 , this yields a large τ p ≥ 10 s. Hence, the above mentioned damping effect is weak and rapid changes in w result in more intense fluctuations in S.
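As a quick numerical cross-check of the phase relaxation times quoted above, the expression τ p = (2π · d · D d · N d ) −1 can be evaluated directly for the droplet parameters given for the individual model runs.

```python
import math

def phase_relaxation_time(d_vapor, D_d, N_d):
    """tau_p = 1 / (2 * pi * d * D_d * N_d), with D_d in m and N_d in m^-3."""
    return 1.0 / (2.0 * math.pi * d_vapor * D_d * N_d)

d_vapor = 2.2e-5                                       # water vapor diffusivity, m^2 s^-1
print(phase_relaxation_time(d_vapor, 14e-6, 640e6))    # run A3: ~0.8 s
print(phase_relaxation_time(d_vapor, 7e-6, 500e6))     # run A1: ~2 s
print(phase_relaxation_time(d_vapor, 5e-6, 150e6))     # scheme (b) bound: ~10 s
```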
Within the framework of this work we did not consider radiative effects on the equilibrium supersaturation (S eq ) arising from radiative cooling or heating of the cloud droplets. Marquis and Harrington (2005) pointed out that radiative heating and cooling rates for cloud droplets can vary between 2 and −15 K h −1 , depending on radiative fluxes, droplet diameter and droplet location with reference to cloud top and cloud base. Since the reported measurement flight was conducted between 16:30 and 17:45 CET, strong shortwave heating can be neglected. Furthermore, longwave forcing can be neglected because the majority of our observed droplets have sizes between 5 and 20 µm in diameter. For droplets in this size range, the equilibrium supersaturation in the uppermost 50 m of a stratocumulus cloud can be changed by values on the order of ±0.01 % (Marquis and Harrington, 2005, Fig. 6), which is less than 10 % of the estimated variability from our measurements. Here, we have considered the range of supersaturation fluctuations in a stratocumulus cloud. The question of the maximum possible supersaturation fluctuations, or of the detailed distribution of supersaturation during cloud activation, remains to be fully answered. Ultimately, supersaturation is the result of combined fluctuations of absolute humidity (q) and temperature (T ), all of which are closely coupled to the condensed phase through the phase relaxation time. One possibility to estimate a maximum possible supersaturation is, therefore, the investigation of joint probability density functions of T and q (PDF(T, q)), which is the focus of future efforts with ACTOS. It should be evident even from this first effort, however, that variability of the supersaturation field is of great relevance when considering the activation of aerosols.
The aggregation of Fe3+ and their d–d radiative transitions in ZnSe:Fe3+ nanobelts by CVD growth Transition metal (TM) doped II–VI semiconductors have attracted great attention due to their luminescence and diluted magnetism. In this study, the Fe3+-doped ZnSe nanobelts (NBs) were grown by a facile CVD method. The surface morphology observed via SEM is smooth and clean and the elemental composition measured via EDS confirms that the Fe3+ ions were incorporated into ZnSe NBs successfully. The micro-Raman scattering spectra demonstrate that the as-prepared NBs have the zinc blende structure. Furthermore, the Raman spectra of the Fe3+-doped NBs were compared with those of pure and Fe2+-doped reference samples. The former with a higher signal-to-noise ratio, an enhanced 2LO mode, a stronger LO mode redshift and a larger intensity ratio of LO/TO mode as well as the lower acoustic phonon modes confirms the better crystallization and the stronger electron–phonon coupling on Fe3+-incorporation. The emission of single Fe3+ ion, assigned to the 4T1 → 6A1 transition, was observed at about 570 nm. Moreover, increasing the doping concentration of Fe3+ ions caused the formation of different Fe–Fe coupled pairs in the lattice, which emitted light at about 530–555 nm for an antiferromagnetic-coupled pair, possibly due to the stacking faults and at about 620–670 nm for a ferromagnetic-coupled pair. Introduction As a branch of diluted magnetic semiconductors, transition metal ion doped II-VI semiconductors have gained importance. The spin-spin coupling and the spin-carrier coupling of the host material and the active ion inuences their semiconductor properties. In 2001, by theoretical calculation, Sato et al. found that electron doping in Fe(II)-, Co(II)-or Ni(II)-doped ZnO could enhance the stabilization of the ferromagnetic state. 1 Moreover, several researchers have used the co-dopant of magnetic ions and anions or cations with a valence different from the host material to introduce the free carriers for manipulating the electron spin, 2 which is the basic concept for the design of spintronic devices. Simultaneously, the characteristic emission band of a transition metal ion was observed, which was derived from the splitting of the energy levels of free ions in the crystal eld of the host material. Feng et al. have realized the lasing of Cr 2+ -doped ZnSe nanowires successfully for the rst time. 3 The tunable redshi of the Mn ion related to the d-d transition in the CdS lattice was also observed, in which the Mn ion aggregated via ferromagnetic coupling. 4,5 It is also reported that the Mn ion antiferromagnetic pair emission near the stacking faults occurred in ZnSe:Mn nanoribbon with complicated electronic states and also, various properties were introduced. 6 Furthermore, Bhattacharjee proposed the coupling of magnetic polaron associated with an electron-hole pair, which is called "EMP". 7 The EMP emission located at 460 nm was observed in ZnSe:Mn DMS nanoribbon 6 and the EMP lasing has been observed in the Co(II)-doped CdS nanobelts. 8 Clearly, the interactions of the magnetic dopant and the host material strongly depend on the microstructures, the incorporation type, and the concentration. Therefore, they have impacts on the overall properties of the semiconductor. ZnSe is a direct broad bandgap compound with 2.67 eV bandgap and a zinc blende structure under atmospheric pressure and room temperature. 
9 It has been extensively studied for its potential applications in blue-green light emitting devices and the rst ZnSe based blue-green laser diodes were invented in 1992. 10 The strong broad emission with lower energy than its bandgap is common, particularly in the low-dimensional nanostructures. [11][12][13] The strong red emission at about 617 nm associated with the Zn-vacancy was observed in ZnSe nanowires. 13 In addition, Sn-catalyzed tetrapod-branched ZnSe nanorod showed the as-mentioned emission contributed by the Zn vacancy, the interstitial states, the stacking faults, and the nonstoichiometric defects. 11 Bukaluk et al. reported that the broadening of PL bands was due to the compositional and structural disorder. 12 On the whole, different preparation conditions cause the formation of various local or extended defects and stacking faults in the ZnSe lattice, which hinders its wide application. Hence, the detailed formation processes, the structures, the composition characteristics, and the corresponding properties need to be studied. Fe(III), with a similar electronic conguration as Mn(II), is seldom used for the DMS doping due to its larger p-d hybridization effect; also, its independent spin could not be easily maintained. 14 Moreover, there is no explanation for the fact that Fe(III) ion, unlike Mn(II) ion, seldom functions as a dopant in semiconductors in terms of the luminescence via d-d transition. The recent ndings on the iron compounds with superconductivity have indicated the clear carrier effect due to their strong p-d hybridization. 15 In the present study, the Fe 3+ -doped ZnSe NBs are primarily investigated and compared with the pure and Fe 2+ -doped ZnSe NBs as the reference samples. The morphology of the Fe 3+ -doped NBs was observed by SEM and the element composition was analyzed by EDS. The micro-Raman and photoluminescence (PL) spectra of the asdiscussed NBs were recorded to study their optical properties. Some novel properties have been identied in the Fe 3+ -doped ZnSe nanostructures. These ndings will promote their future applications in the nanophotonic devices. Experimental The Fe 3+ -doped ZnSe NBs were grown in a horizontal singletemperature zone furnace using the chemical vapor deposition (CVD) method, in which the mixture of ZnSe (Alfa Aesar, 99.99%, USA) and Fe 2 O 3 (Aladdin, 99.9%, China) powders, used without further purication, served as the precursors and Au was used as the catalyst. A quartz tube was inserted into the furnace, following which the mixture with a molar ratio of 20 : 1 in a ceramic boat and the cleaned mica sheets sputtered with a 10 nm Au layer on another ceramic boat were loaded into the centre and downstream of the quartz tube, respectively. Subsequently, the high-purity gas mixture of 10% hydrogen and 90% argon was circulated through the tube at the rate of 50 sccm for 1 h to remove the air. Then, the temperature of the furnace was raised to about 1150 C at the heating rate of 75 C min À1 and kept at this value under the same conditions for 1 h. Eventually, the furnace was cooled down to room temperature naturally and the sample was dispersed on a cleaned silicon substrate. The pure and Fe 2+ -doped reference samples were prepared under the same conditions; FeCl 2 was used as the precursor for the Fe 2+ -doped samples. 
The morphology and elemental composition of the samples were characterized using a scanning electron microscope (SEM, Zeiss SUPRA 55, Carl Zeiss, Jena, Germany) equipped with an energy dispersive spectrometer (EDS, Zeiss SUPRA 55, Carl Zeiss, Jena, Germany), respectively. The optical properties of the samples were analyzed by recording the micro-Raman scattering and photoluminescence spectra, for which the 405 nm and 532 nm continuous-wave laser excitation sources were used, respectively. In addition to the light source, a confocal microscope (Olympus BX51M) and a spectrometer (Princeton SP2500) were used to converge and split the light into a spectrum; CCD (Princeton SP2500) was used as the light detector. Liquid nitrogen was used to reduce the temperature during the temperature-dependence spectroscopy tests. In addition, the magnetic response was measured via vibrating sample magnetism (VSM, LAKESHORE, 730T, America) technique. Results and discussion The SEM images of the samples originally grown on the mica sheet and an individual nanobelt dispersed on the silicon wafer are shown in Fig. 1(a) and (b), respectively. The morphology of the as-grown nanomaterial is nanowires, nanoribbons, or nanobelts with a smooth surface, which strongly depends on the growth temperature, the carrier gas rate, and the growth time. At the edges of the NBs, there are no metal balls visible to the naked eye, which is very common in this growth process. This proves that the formation mechanism of NBs is V-S and not VLS, which indicates that a slightly higher temperature than that for the gradual growth of nanowire is required. In addition, the width of most of the as-grown NBs reaches hundreds of nanometres or up to micron level with a 1D-like structure. The inset of Fig. 1(c) displays the elemental composition of NBs, which shows that the samples conform to the stoichiometric ratio and the doping of Fe element is achieved. Moreover, iron is the form of Fe(III) instead of Fe(II) because the valence state of the precursor is trivalent. Simultaneously, the mole ratio of the precursors has almost no inuence on the morphology of the resultant nanostructure that is discussed in Chapter 2 of the ESI. † Fig. 1(d) is the energy dispersive spectra (EDS) mapping of Se, Zn, and Fe and the distribution Fe is far fewer than the other two. Fig. 2(b) represents the room temperature micro-Raman spectra of the as-synthesized Fe 3+ -doped ZnSe NBs at 0.032 W excitation power in air; the spectra t well with the Lorentz function curve. There are two known modes located at around 200 cm À1 and 245 cm À1 corresponding to the TO and LO phonons of ZnSe, respectively, which signies that the as-grown NBs have the zinc blende structure. In addition, the peak locations of the above modes shi to a lower frequency in comparison with those of the bulk ZnSe crystal reported earlier because of the quantum size effect. 16 The other four scattering peaks located at around 140 cm À1 , 180 cm À1 , 287 cm À1 , and 485 cm À1 are labeled as 2TA(L), 2TA(X), LO(L) + TA(L), and 2LO, respectively, and they all belong to the higher-order phonon modes. 17 This implies that there is strong anharmonicity in the lattice vibration. The formation of LO(L) + TA(L) occurs because the movement of some optical phonons is limited in the stacking faults related to the acoustic phonons. Fig. 
2(a), (d) and (e) exhibit the micro-Raman spectra of the as-prepared Fe 3+doped ZnSe NBs, pure ZnSe NBs, and Fe 2+ -doped ZnSe NBs with an increase in the excitation power, respectively; all of the abovementioned measurements were performed in air and the measurement parameters were the same. It is clear that the Fe 3+ -doped ZnSe NBs possess a better signal-to-noise ratio than that of all other samples, which is a signicant characteristic of good crystallinity for the zinc blende lattice. In addition, there are visible vibration modes located at around 310 cm À1 that are assigned to the structural defects, 17 which is oen modulated by the incorporation of dopants in the ZnSe lattice. It is still disputable whether the weak scattering peak near 380 cm À1 , labelled as A, is ascribed to the second order LO(X) + LA(X) mode 17 or the oxidation state vibration. [18][19][20] From Fig. 2(c), which exhibits the micro-Raman spectra of the as-prepared Fe 3+ -doped ZnSe single nanobelt detected at 80 K, 190 K, and 330 K in vacuum with 0.040 W excitation power, it is clear that there is no scattering peak observed at 380 cm À1 (noted by a red ellipse). This indicates that this scattering peak appears from the contact with air rather than the intrinsic quality of the Fe 3+doped ZnSe NBs. The similar vibration mode located near 380 cm À1 was once observed in ZnSeO x alloy 19 and ZnO, 20 which conrms the incorporation of oxygen with laser heating. Simultaneously, the vacuum Raman spectra imply that the oxygen element of the precursor Fe 2 O 3 has been exhausted in the growth and extracted out by the carrier gas. In addition, the Raman scattering peak 380 cm À1 only appears when the sample is excited with a relatively higher excitation power. As the power increases, the higher-order 2LO mode appears and becomes more distinct, while the same phenomenon cannot be observed in the vacuum Raman spectra. This indicates that the oxygen atom may involve in the formation of the 2LO mode, which has also been interpreted as D-centre caused by the O incorporation, which can cause the reduction of the bandgap 19 and the enhancement of the multi-phonon process. 21 However, when the Fe 3+ -doped ZnSe NBs are compared to the pure reference samples that were grown under the same condition ( Fig. 2(d)), the 380 cm À1 peak intensity of the doped NBs is much lower than that of the pure NBs at the same excitation power, which indicates that the oxygen incorporation is harder in the doped ZnSe lattice than that in the pure samples. As the oxygen induced the Raman mode, the pure ZnSe NBs should exhibit the 2LO mode, similar to that in the doped NBs. However, the experimental results about the 2LO mode were not in accordance with the above expectation. Combining the above two comparisons, it can be concluded that the Fe 3+ incorporation and the oxygen adsorption jointly promote the 2LO modes. However, the Fe 3+ incorporation produces a stronger 2LO mode than that via O adsorption. Moreover, the existence of the Fe 3+ ions suppress the O adsorption; thus, the 380 cm À1 mode in the Fe 3+ -doped nanobelt is much lower than that in the pure sample. The frequency redshi and the intensity enhancement of the TO and LO phonon vibration modes of the Fe 3+ -doped NBs with the increase in the excitation power are shown in Fig. 2(a) (noted by red dotted lines). It is clear that the tendency of peak-shi with an increase in power input is in accordance with the tendency of the temperature-dependence variation (Fig. 2(c)). 
This indicates that the temperature enhancement caused by powers is one reason for the redshi. Moreover, it is notable that the variation of the locations of the LO-and TO-mode peaks in the Fe 3+ -doped sample is much larger than that in the pure and Fe 2+ -doped NBs under the same power. In addition, the LOand TO-mode peak locations in the Fe 3+ -doped sample redshied by about 4 cm À1 , while those in the pure samples shied by only 1 cm À1 and those in the Fe 2+ -doped NBs shied by less than 1 cm À1 . According to the study reported by Brajesh et al., the peak position of the LO mode shows a downward shi, which is attributed to the electron-LO phonon coupling with an increase in the doping concentration. 22 It means that the larger LO peak position red shi in Fe(III) doped ZnSe NBs is related with the electron-LO phonon coupling. However, the TO-mode represents the intrinsic polar vibration of a local bond, whose peak-shi may arise from the stress effect or the vibration energy aer the incorporation of oxygen or the Fe 3+ ion. This is because the radius of the Fe 3+ ion (0.67 A), Zn 2+ ion (0.74 A), and Fe 2+ ion (0.78 A) is different; the radius of the Fe 3+ ion is less than that of Zn 2+ and one positive charge is le when Fe 3+ replaces Zn 2+ ion in the lattice. The Se-Zn bond may be relaxed when the Se-Fe bond is formed due to the structure or charge balance. This proves that the Fe incorporation has a much stronger inuence on the lattice strength and order as well as the electron-phonon coupling. Except for the above situation, it is noticeable that the intensity ratio of the LO/TO-modes of the Fe 3+ -doped ZnSe NBs is much larger than those of pure and Fe 2+ -doped NBs. The effect of Fröhlich electron-phonon coupling contributes to this phenomenon, 23 in which the carrier movement in the lattice causes the ratio difference. It is known that the Fe 3+ ion introduces positive charges that function as a possible carrier, while the Fe 2+ ion introduces no charge inside the lattice. It can be observed that there are almost no LO-and 2LO-modes appearing in the spectrum shown in Fig. 2(e). However, the 2LA(X) and the oxidation state modes are prominent in the Fe 2+ -doped NBs. The latter phenomenon indicates that the Fe 2+ ion at the Zn 2+ site would facilitate the oxygen adsorption. Simultaneously, the carrier density does not increase and exhibit the electron-LO phonon coupling in its lattice. It is clear that the Fe 3+ -doped ZnSe NBs exhibits the highest electron-phonon coupling. Moreover, the electron-phonon coupling displayed by the pure reference sample is intermediate and that of the Fe 2+ -doped sample has the lowest. The effect of oxygen adsorption is in the opposite order. These characteristics inuence their physical properties. The difference in the intensities of the 2TA and 2LA acoustic phonon modes between the Fe 3+ -doped NBs and the reference samples is also distinct. The intensity of the Fe 3+ -doped sample is the lowest, the Fe 2+ -doped nanobelt is the highest, and the pure sample is intermediate. The acoustic phonons represent the collective shi relative to the mass-centre, which is hardly observed in bulk crystals. However, they could be enhanced in a small-size system. One example is the ZnSe nanowires or nanobelts grown by CVD. Due to the small difference in the energy value of ZnSe between wurtzite and zinc blende structures at high temperature, it is very easy to form the stacking fault and dislocations. 
This is the reason for the presence of the 2TA mode at around 140 cm À1 and the 2LA mode at 180 cm À1 in all types of ZnSe NBs. Such existence of the regular defect structures and anharmonic overtones in ZnSe nanowire or belts indicates the massive correlated defects. The different intensities of the above NBs is an interesting nding and the order is the same as that of their electron-phonon coupling magnitudes. Through cyclotron resonance, Langerak et al. proved that the electrons coupled to the LO phonon instead of the TO phonon, 24 which supports our abovementioned arguments. In addition, the large energy mismatch between electron and acoustical phonons indicates the difficulty in coupling, 25 but the collective contribution of an acoustic mode may strongly modify the longitudinal transport. Hence, the more stacking faults and the related higher acoustic mode can reduce the carrier mobility and hint the electron-phonon coupling further in the nanostructures. The enormous increase of the electron-LO phonon scattering rate had been observed in GaN in comparison with GaAs, which is ascribed to its much larger iconicity. 26 The enhancement intensity of the LO-mode in these ZnSe nanostructures demonstrates the carrier propagation effect on the electron-phonon coupling. Fig. 3(a) shows the micro-photoluminescence (PL) spectra of the as-prepared Fe 3+ -doped ZnSe NBs with Gaussian tting curves and the corresponding optical images are shown in Fig. 3(b). Four emission peaks are detected; the peak with the highest energy is attributed to the near band edge emission. The origins of the other three peaks named GFC1, GFC2, and GFC3 are unclear since they have never been reported before. The FWHM of GFC1, GFC2, and GFC3 is 28 nm, 30 nm, and 66 nm and the peaks are located at around 538 nm, 577 nm, and 627 nm, respectively. This phenomenon is different from that observed in the pure and Fe 2+ -doped reference samples as discussed in Chapter 1 of the ESI. † In this case, the reference samples show a broad emission band at around 600 nm with a much larger FWHM. Some of these broad bands in the pure or doped ZnSe nanostructures are related to the point defects, the extended defects, and the stacking faults formed during their growth process. [11][12][13]27 The deep level initiated by the Au catalyst in ZnSe may be another cause. 28 However, in contrast to the deep level band with a variable energy primarily located at around 600-620 nm, 11,13,27 the abovementioned three bands have different energy ranges. In addition, the oxygen adsorption state, which is related to the laser illumination conditions, cannot lead to the abovementioned emission bands either at its energy level lying at the band edge or at higher energy levels. 29 Considering the entire situation, we think that these emission bands are derived from the d-d transitions of the Fe 3+ ions in the ZnSe lattice. The emission band range also veries that the doping ions are not divalent because the d-d transition of Fe 2+ in ZnSe related emission is in the infrared range. 30 As discussed in the above section, the micro-Raman spectra of the as-prepared Fe 3+ -doped ZnSe NBs with the zinc blende structure indicate that the Zn 2+ site has the tetrahedral (Td) symmetry and the replacement of the Zn 2+ ion by the Fe 3+ ion stabilizes the ZnSe lattice. 
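A multi-peak decomposition of the kind used for the PL spectra in Fig. 3(a) can be sketched as follows; the spectrum generated here is synthetic, the near-band-edge line is placed near 460 nm purely for illustration, and only the approximate peak positions and widths of GFC1-GFC3 are taken from the text, so the snippet demonstrates the fitting procedure rather than the actual data analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, cen, wid):
    return amp * np.exp(-0.5 * ((x - cen) / wid) ** 2)

def four_peaks(x, *p):
    # sum of four Gaussians: near-band-edge line plus GFC1, GFC2 and GFC3
    return sum(gaussian(x, *p[3 * i:3 * i + 3]) for i in range(4))

wavelength = np.linspace(430, 720, 600)   # nm
# synthetic PL spectrum standing in for the measured one
true = four_peaks(wavelength, 1.0, 460, 8, 0.6, 538, 12, 0.9, 577, 13, 0.5, 627, 28)
spectrum = true + np.random.default_rng(2).normal(scale=0.02, size=wavelength.size)

p0 = [1, 460, 10, 0.5, 538, 12, 0.8, 577, 13, 0.5, 627, 28]   # starting guesses
popt, _ = curve_fit(four_peaks, wavelength, spectrum, p0=p0)
for i in range(4):
    amp, cen, wid = popt[3 * i:3 * i + 3]
    print(f"peak {i}: centre {cen:.1f} nm, FWHM {2.355 * wid:.1f} nm")
```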
When the Fe 3+ ion with a 3d 5 conguration was incorporated into the Zn 2+ site of ZnSe NBs, the ground state 6 S and the rst excited state 4 G of free Fe 3+ ion split into 6 A 1 , and 4 T 1 , 4 T 2 , double degenerate 4 A 1 , and 4 E levels, respectively, under the impact of the Td crystal eld; the same splitting appeared in other host materials. [31][32][33] As a result, one conjecture that the three emission bands arising from the dd transitions, namely, 4 T 1 / 6 A 1 , 4 T 2 / 6 A 1 , and 4 E / 6 A 1 are notable. It is known that the 3d 5 conguration of the Fe 3+ ion is the same as that of Mn 2+ . The optical absorption spectra in Cd 0.6 Mn 0.4 S single crystals displayed a similar emission prole at 455 nm, 480 nm, and 510 nm due to the above mentioned dd transitions of Mn 2+ (ref. 34). However, the energy difference between 4 T 1 and 4 T 2 of Mn 2+ does not match well with our results. The possibility of such transitions is also small because two higher levels only radiate under the condition of ferromagnetic coupling in the lattice. 8, 35 Hou et al. prepared the Fe 3+doped ZnSe nanobelts successfully with two emission bands at 553 nm and 630 nm, respectively, which are attributed to the phenomenon of the transitions of 4 T 2 / 6 A 1 and 4 T 1 / 6 A 1 . 36 The different emission bands displayed by our as-prepared sample were similar to that reported by Begum et al., which is due to the site symmetry of the Fe 3+ ion in the host ZnSe material, which is identied as octahedral. 37 Hence, the abovementioned assignment is not plausible. As the content of the precursor (Fe 2 O 3 ), the growth temperature, and the growth time are reduced, the PL spectra (Fig. 3(c)) can be obtained with the distinct emission band usually located at 565-585 nm. In fact, the higher growth temperature, the longer growth time, and the higher Fe precursor content contribute to the higher dopant concentration to a certain degree, which has been discussed in Chapter 2 of the ESI. † The emission band is related to Fe 3+ and it agrees with the results of the Fe 3+ -doped CdS NBs, in which the orange light emission has been observed and the bands were located at around 573 nm. 38 We ascribe the photoluminescence to the 4 T 1 / 6 A 1 transition. There is no doubt that the concentration of Fe 3+ becomes lower under this preparation condition and the variation of the dopant concentration will affect the photoluminescence properties of the NBs as shown in Fig. 3(a) and (c). The competition between the near band edge and the Fe 3+ -related luminescence is apparent, in which the increasing Fe 3+ concentration lowers the intensity of the near band edge and the same situation has been observed in ZnSe:Mn QDs. 39 In addition, comparing to that observed for the low Fe 3+ concentration, the two peaks shown in Fig. 3(a) at the higher energy and lower energy side of 4 T 1 / 6 A 1 transition emission are remarkable. The d-d transition of TM ion related emission appears in numerous systems 30,40 with a premise that there are no same neighboring TM ions in the range of one wavelength around the dopant ion to avoid the resonant energy transfer between TM ions. Recently, we found that the ferromagnetic or antiferromagnetic (MnX) n cluster can exhibit an emission of d-d transition nature. 
5,6 The same electron conguration of Mn 2+ and Fe 3+ makes it possible to infer that these two peaks arise from the antiferromagnetic coupling pairs (AFM) and ferromagnetic coupling pair (FM) of the Fe 3+ ions, which is in agreement with that reported for ZnSe:Mn nanoribbons. 6 The simplied diagram of the formation of AFM and FM is shown in Fig. 4. Fe(III) related compounds can easily form antiferromagnetic states. 41,42 In addition, the ferromagnetic Fe-Fe coupling in the normal lattice is already veried by magnetic response measurement (Fig. 5). The TM ion-cluster in a typical host semiconductor crystal would be stabilized in the ferromagnetic state, while its own bulk crystal displays the antiferromagnetic state. The presence of two peaks indicates that their origin is related to the amount of Fe 3+ ions in the lattice. The increase in the Fe 3+ ion concentration easily causes the aggregation and the magnetic ion pair leads to the ferromagnetic coupling at high temperature, which is common in numerous DMSs. 8 And the ferromagnetic coupling pairs related emission ranges from 620 nm to 670 nm in as-prepared NBs. Simultaneously, the antiferromagnetic state appears in the vicinity of the stacking fault layer, of which the existence is conceived by the above mentioned Raman spectra. The microscopic optical techniques are used to study the origin of magnetism in the Fe 3+ -doped ZeSe nanobelts, which is an important way to nd a novel function of DMS. Fig. 3(d) shows the PL spectra of the Fe 3+ -doped ZnSe NBs at various temperatures (different sample from Fig. 3(a)). One visible feature is that the AFM related emission peak drops faster than the other two peaks with the increase in temperature as shown in Fig. 3(e). This phenomenon conforms to the formulation that the Fe-Se-Fe AFM pairs are in the vicinity of the stacking faults related to the acoustic phonon vibrational mode. The high temperature causes the higher electronacoustic phonon coupling, which leads to the abovementioned situation in the emission spectra. It can also explain the almost xed FM state related emission due to the fact that the coupled Fe ions are located on the normal lattice site and not on the defect sites. In addition, the single ion d-d transition emission becomes subtly higher with the rising temperature, which is possible because some high energy AFM states have relaxed to this state. The 4 T 1 / 6 A 1 transition is forbidden by the symmetry and spin selection rules, which is not in accordance with the result. This is because of the sp-d hybridization effect 43,44 of the dopant and host materials. Moreover, the variation in the location of the FM related emission peak conrms the inference. Eventually, the variable emission band in the observed range is also caused by the p-d hybridization combining with denite covalence. In fact, the d-d transition emission of the Fe 3+ ion in ZnSe strongly depends on not only the ion lattice location, the neighbor ion, the symmetry, and the aggregation, but also the carrier effect and the lattice relaxation due to the electron-phonon coupling. This CVD preparation method under the current experimental conditions can only realize trace doping since the concentration of Fe(III) cannot oen exceed the critical value due to the segregation. Furthermore, the excessive doping is discussed in Chapter 3 of the ESI. † The magnetic response of ZnSe:Fe 3+ nanobelts was measured at room temperature with the magnetic eld ranging from +1 to À1 T as shown in Fig. 5(a). 
The M-H curves represent the magnetic hysteresis loops, which are related to the ferromagnetic behaviour. In addition, the ferromagnetic response is derived from the high spin state of Fe(III) in the ZnSe lattice with the largest magnetic moment (g = 5/2) among the TM ions. The magnified area near the zero magnetic field indicates that the coercive field is about 100 Oe. The overall magnetism is not large, which may be related to the influence of the antiferromagnetic pairs. Moreover, this magnetic measurement matches the PL spectra results, confirming that the ferromagnetic-coupled pair exists and contributes to this magnetism. Conclusions Overall, the Fe 3+ -doped ZnSe NBs were grown by a simple CVD method and different test methods demonstrated that Fe 3+ was doped into the ZnSe NBs. The iron-ion doping in ZnSe introduces surplus free carriers, and the micro-Raman scattering spectra show the different features of the as-prepared NBs in comparison with the reference samples. The better signal-to-noise ratio, the lower acoustic phonon modes (2TA and 2LA), and the oxygen-related vibration modes at 380 cm −1 along with the appearance of the 2LO modes confirm that Fe 3+ promotes good crystallinity of the zinc blende lattice. Moreover, the LO mode exhibits the largest frequency red-shift and the highest intensity ratio of the LO/TO modes, thus indicating strong electron-phonon coupling. The PL spectra show a clear Fe 3+ -related internal d-d transition emission, which is assigned to the 4 T 1 → 6 A 1 transition of a single Fe 3+ ion. In addition, the emission related to the antiferromagnetic and ferromagnetic coupling occurs at the higher-energy and lower-energy sides of the single Fe 3+ ion emission, respectively, with an increase in the ion concentration. The temperature-dependent PL spectra indicate that the p-d hybridization and the electron-phonon coupling have a significant impact on the Fe 3+ ion related emission. This is the first report on the d-d transition emission of the Fe 3+ ion doped on the Zn 2+ site in ZnSe. However, numerous properties of the Fe 3+ -doped ZnSe NBs, as one of the TM doped II-VI semiconductors, still need to be explored. In addition, the ZnSe:Fe(III) QD is promising to have a strong emission, similar to that of ZnSe:Mn QDs. Conflicts of interest There are no conflicts to declare.
Mangiferin-Enriched Mn–Hydroxyapatite Coupled with β-TCP Scaffolds Simultaneously Exhibit Osteogenicity and Anti-Bacterial Efficacy Biphasic calcium phosphate (BCP) containing β-tricalcium phosphate and manganese (Mn)-substituted hydroxyapatite (HAP) was synthesized. Biomedical scaffolds were prepared using this synthesized powder on a sacrificial polyurethane sponge template after the incorporation of mangiferin (MAN). Mn was substituted at a concentration of 5% and 10% in HAP to examine the efficacy of Mn at various concentrations. The phase analysis of the as-formed BCP scaffold was carried out by X-ray diffraction analysis, while the qualitative observation of morphology and the osteoblast cell differentiation were carried out by scanning electron microscopy and confocal laser scanning microscopy techniques. Gene expressions of osteocalcin, collagen 1, and RUNX2 were carried out using qRT-PCR analyses. Significantly higher (p < 0.05) levels of ALP activity were observed with extended osteoblast induction on the mangiferin-incorporated BCP scaffolds. After characterization of the specimens, it was found that the scaffolds with 10% Mn-incorporated BCP with mangiferin showed better osteogenicity and simultaneously the same scaffolds exhibited higher anti-bacterial properties as observed from the bacterial viability test. This study was carried out to evaluate the efficacy of Mn and MAN in BCP for osteogenicity and antibacterial action. Introduction Bone makes up the majority of the connective tissue mass in the body. Bone matrix is physiologically mineralized, unlike most other connective tissue matrices, and it is constantly rebuilt throughout the life of a human being because of the formation of new bones. Bone is a heterogeneous composite material consisting of a mineralized phase called hydroxyapatite (HAP; Ca 10 (PO 4 ) 6 (OH) 2 ), an organic phase (90% type I collagen), 5% non-collagenous proteins (NCPs), and 2% lipids together with water. Fracture healing is a physiological process that leads to bone union. However, large bone defects, despite surgical stability, does not heal spontaneously and entails additional intervention such as natural bone grafting (with an autograft, allograft, or xenograft) [1][2][3]. However, natural bone grafting has several disadvantages. Hence, synthetic calcium phosphate (CaP) bone grafts have been used in many cases. Its exceptional biocompatibility with bone tissues arises due to its chemical composition, similar to the bone mineral phase. Furthermore, CaP ceramic has nontoxic properties and can be attached to bone directly [4,5]. Tissue engineering (TE) is a broad and transdisciplinary field that has shown considerable promise in developing living substitutes for harvested tissues and organs for transplant and reconstructive surgery. TE relies heavily on materials and fabrication technologies to create temporary, synthetic extracellular matrices that supports formation of 3D tissues. Growing cells in 3D scaffolds have become increasingly popular for engineering tissues of realistic size scale and specified forms. They aid in creating functional tissues and organs by guiding cell growth and synthesizing extracellular matrix and other biological substances [6,7]. There are some primary conditions for building polymer scaffolds that are commonly accepted. The first condition is that it must have sufficient porosity and pore size. Secondly, a large amount of surface area is required. 
Biodegradability is a primary property of the scaffolds with a breakdown rate corresponding to the pace of new tissue creation. For retaining the tissue structure, the scaffold must possess corresponding mechanical strength to that of natural bones. The scaffold should not exhibit any toxicity to the cells. Finally, there should be a positive interaction between scaffold and tissues, resulting in improved cell adhesion, differentiation, migration, and growth [8]. Bioactive ceramics are recognized as the most promising biomaterials for bone tissue engineering. Because of its potential for direct bone-to-implant interaction, ceramics like hydroxyapatite, bioactive glass, β-tri-calcium phosphate (β-TCP), and calcium silicate have been extensively studied for biomaterial applications [9]. For over three decades, biphasic calcium phosphate (BCP) ceramics, which is a blend of HAP and β-TCP, have been broadly utilized as substitute biomaterials for synthetic bone grafting for which it has gained a lot of interest. BCP is appropriate for artificial bone applications and is considered superior to individual phase HAP or β-TCP components due to its exceptional controlled dissolving properties, which enhance new bone development at the implantation site. The degradation rate of β-TCP is 20 times higher than that of HAP [10]. Due to its high brittleness and low fracture toughness, HAP can only be used in non-load bearing areas in clinical orthopaedic and dental applications. β-TCP has less mechanical strength than HAP [11]. As a result, a mixture of HAP and β-TCP would balance out each other's shortcomings. Thorough characterization of BCP is very important because it offers a combination of improved mechanical stability and bioactivity, which is challenging to accomplish in single-phase materials [12]. Adding trace metal elements (Ag + , K + , Na + , Sr 2+ , Zn 2+ , Cu 2+ , Mn 2+ , Mg 2+ , Al 3+ , Fe 3+ , Th 4+ ) significantly improves the physical and chemical properties of bioceramics. The HAP phase contains a large number of trace metal elements [13]. Manganese (Mn) may be added to BCP as a sintering additive to enhance the mechanical characteristics of the material. In the presence of Mn, ligand affinity rises, resulting in increased cell adhesion. Mn in the bones has been observed to reduce bone resorption, according to one study [14]. Mn was also found to act as a calcination and sintering additive in BCP powders without causing establishment of any other subordinate phases such as α-TCP and CaO. Manganese doping in BCP is expected to increase the physicochemical characteristics of the material, resulting in better biological function in antimicrobial efficacy [15]. Moreover, Mn also has a role in the production of mucopolysaccharides, which are necessary for cartilage development [16]. Apart from osteogenicity, Mn 2+ macrocyclic complexes have many biological properties. Various bacterial strains were utilized in the antibacterial test on Mn. It was observed that Mn-BCP showed outstanding antibacterial action against all of the bacteria examined. Gram-positive and Gram-negative pathogens were both strongly suppressed by MnBCP [17]. Mangiferin (MAN; 2-D-glucopyranosyl-1,3,6,7-tetrahydroxy-9H-xanthan-9-one), a naturally occurring polyphenol found in mango and papaya, is a natural immunomodulator. MAN is antiallergic, antidiabetic, antibacterial, antioxidant, immunomodulatory, and hypocholesterolemic and has some other health-promoting characteristics [18]. 
MAN also boosts the monocyte-macrophage system capacity and possesses an antibacterial effect against both Gram-positive and Gram-negative pathogens. MAN may be a viable alternative therapy for treating osteolytic bone disorders due to its anti-NF-κB characteristics. MAN has also been shown to suppress bone apoptosis and the production of osteoclasts. MAN boosted the growth of human bone formation cells considerably, and there was no evidence of cytotoxicity. Moreover, MAN could stimulate the production of alkaline phosphatase (ALP) inside human osteoblast cells [19,20]. In this investigation, 5% and 10% Mn-doped BCP-MAN scaffolds were synthesized, and the efficacy of mangiferin and manganese in terms of enhanced bone regeneration and antibacterial action is described.

Fabrication of Mn-BCP Porous Scaffolds The aqueous precipitation technique was adopted to fabricate the 5% and 10% Mn-doped BCP scaffolds. Measured quantities of Ca(NO3)2·4H2O and MnCl2·4H2O were added dropwise to a (NH4)2HPO4 solution at room temperature while maintaining a pH of 11. The white product was thoroughly washed with deionized water and aged for 24 h, and the solution was then filtered [21]. The resultant product was thermally processed for two hours at 1000 °C to yield Mn-BCP containing Mn-HAP and β-TCP. Next, the as-formed Mn-BCP powder was ground using a mortar and pestle and thereafter sieved to produce particles smaller than 75 µm. For preparation of the scaffolds, fully reticulated polyurethane (PU) sponge was employed as a sacrificial template. Surface treatment of the sponge was carried out using a NaOH solution for about half an hour to increase its hydrophilicity, and the PU sponge templates were carved into the proper dimensions. PVA was mixed with water at a concentration of 0.1 mol/L to make a slurry. The 5% and 10% Mn-doped BCP powders were then mixed with the slurry in a 30:70 ratio by weight. The PU sponges were cleaned and dried before being soaked in the Mn-doped BCP slurry; after uniform soaking, the sponges were lightly squeezed to remove excess slurry, and the leftover slurry was then blown with compressed air to achieve a uniform dispersion throughout the sponge. Before firing in an electric furnace (at 1200 °C for two hours), the Mn-BCP-coated sponges were dried at 37 °C for 48 h and, after firing, were cooled at a rate of 5 °C/min until the temperature reached 25 °C [22,23]. The 5% and 10% Mn-doped BCP scaffolds were soaked in 1 µg/mL mangiferin dissolved in dimethylsulfoxide (DMSO) and kept at 37 °C until completely dried.

X-ray Diffraction To carry out the phase characterization of the synthesized 5% and 10% Mn-doped BCP scaffolds, an X-ray diffractometer using Cu Kα radiation (λ = 0.154 nm) operated at 40 kV and 20 mA was employed for qualitative analysis. A scan speed of 2° per minute over the range 10° ≤ 2θ ≤ 80° was used to record the XRD patterns.

Contact Angle Measurement The wettability of the scaffolds was determined using Dulbecco's modified Eagle's medium (DMEM) cell culture media and simulated body fluid (SBF). Static contact angles of the immobilized liquid drops were assessed using contact angle equipment at pH 7.2.

Water Uptake Capability The water absorption capability of the prepared scaffolds was tested by immersing a measured amount of the scaffold in distilled water for 2 h. Thereafter, the scaffolds were removed from the water, the excess water was removed, and their wet weight was recorded [24]. The following formula was used to determine the extent of water absorption: Water uptake capability (%) = [(Wet weight − Dry weight)/Dry weight] × 100.
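The formula above is a simple ratio; the short sketch below illustrates it with hypothetical masses (the values are placeholders, not measurements from this study).

```python
# Minimal sketch of the water-uptake calculation above (hypothetical values).
def water_uptake_percent(wet_weight_g: float, dry_weight_g: float) -> float:
    """Return water uptake capability (%) = (wet - dry) / dry * 100."""
    if dry_weight_g <= 0:
        raise ValueError("dry weight must be positive")
    return (wet_weight_g - dry_weight_g) / dry_weight_g * 100.0

# Example with made-up masses: a 0.40 g dry scaffold weighing 1.55 g after soaking
print(f"{water_uptake_percent(1.55, 0.40):.0f}%")  # ~287%, of the same order as the reported swelling ratios
```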
Mechanical Property Measurement The mechanical properties of the scaffolds were measured at 3, 5, 7, 9, and 11 weeks after degradation, with week 0 used as the baseline for comparison. The degradation medium was replaced every week throughout the degradation period. With the help of an electromechanical universal testing machine (SANS CMT4503, SANS, Shenzhen, China), the compressive strength was evaluated by crushing a 10 × 10 × 10 mm³ scaffold between two flat platens at a ramp rate of 0.5 mm/min. The compressive strength and modulus at yield of the scaffolds were recorded for comparison [25].

Biodegradation in SBF By soaking the scaffolds in SBF at 37 °C, the biodegradability of the scaffolds was examined in vitro. At a solid/liquid ratio of 50 mg/mL, cylinder-shaped scaffolds were soaked in SBF for 1, 5, 10, 15, 20, and 25 days at 37 °C. All the samples were kept in sealed plastic flasks to prevent pH changes and microbial contamination. Throughout the experiment, the SBF solution was not refreshed. The immersed samples were then filtered, rinsed with deionized water, and dried at 40 °C for about four days before being weighed. The weight loss was computed as a percentage of the original weight. The weight loss and the difference in pH were measured for five scaffolds, and the findings were reported as mean ± SD [22].

Release of Mangiferin during In Vitro Degradation During degradation, the scaffolds were taken for characterization at 0, 1, 2, 4, 6, 8, 10, 12, and 14 weeks, and the quantification of MAN release from the scaffolds was performed in vitro. After adding equal volumes of DMSO, MAN was extracted from the scaffold and subsequently centrifuged. High-performance liquid chromatography (Beckman, Brea, CA, USA) was employed for detection of the MAN concentration [4].

Ion Release To determine the ion release characteristics of the 5% and 10% Mn-BCP-MAN scaffolds, 500 mg of the sample was immersed in 50 mL SBF. Particle-induced X-ray emission (PIXE) was utilized to identify the increase of Ca2+ as well as Mn2+ in the body fluid over time [26].

In Vitro Toxicity Testing Using MTT Assay In this investigation, the human osteoblast MG63 cell line (obtained from NCCS, Pune) was used. These cells were incubated at 37 °C in a humidified environment containing 5% CO2. DMEM (Invitrogen, Paisley, UK) supplemented with 10% foetal bovine serum (FBS, Invitrogen, Paisley, UK), 100 mg/mL streptomycin, and 100 U/mL penicillin was used to culture the cells. Every other day, the culture medium was changed. The MTT (3-[4,5-dimethylthiazol-2-yl]-2,5-diphenyltetrazolium bromide) assay, which quantifies cell proliferation by detecting mitochondrial succinate dehydrogenase activity, was used to investigate the cytotoxicity of the synthesized scaffolds. The scaffolds were fixed to the bottoms of 96-well cell culture plates and sterilized for 24 h at room temperature with ethylene oxide (ETO) steam; then, 1 mL of cell suspension was seeded uniformly on each sample. Every two days, the culture medium was replaced with fresh medium. Following seeding for 1, 7, and 14 days, 100 mL of MTT (5 mg/mL) solution was added to each well, and the detailed procedure was performed as per our previous work [27]. Measurements from four test runs were used to evaluate the mean value. The data were evaluated statistically to determine the mean and standard deviation (SD).
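The group comparisons reported throughout the Results were run in SPSS (one-way ANOVA followed by Tukey's test, p < 0.05, as described in the Statistical Analysis subsection below). The sketch below reproduces that analysis with open-source tools; it is not the authors' code, and all absorbance values are hypothetical placeholders.

```python
# Hedged sketch of a one-way ANOVA + Tukey HSD comparison (hypothetical readings).
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

readings = {
    "BCP_control":    [0.52, 0.55, 0.50, 0.53],
    "5%_Mn-BCP-MAN":  [0.61, 0.64, 0.60, 0.63],
    "10%_Mn-BCP-MAN": [0.74, 0.77, 0.72, 0.75],
}

f_stat, p_value = f_oneway(*readings.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

values = np.concatenate(list(readings.values()))
groups = np.repeat(list(readings.keys()), [len(v) for v in readings.values()])
print(pairwise_tukeyhsd(values, groups, alpha=0.05))  # pairwise group letters/decisions
```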
Microscopic Observation and Immunostaining Microscopic examination of the scaffolds was performed using scanning electron microscopy (SEM) for qualitative analysis of the osteoblast cells along with determination of the pore dimensions of the scaffolds. For assessment by CLSM, colonized cells present on the scaffolds were fixed using 3.7% paraformaldehyde for 20 min. Cell cytoskeletal filamentous actin (F-actin) was visualized by Alexa Fluor 488 phalloidin treatment of the cells (1:25 dilution in PBS, 1.5 h) and counter-staining with propidium iodide (1 µg/mL, 20 min) for labelling of the cell nuclei. The cultures were then mounted in Vectashield and examined using a Leica SP2 AOBS (Leica Microsystems, Wetzlar, Germany) microscope [28].

Osteogenic Gene Expression To measure mRNA gene expression, quantitative reverse transcription-polymerase chain reaction (qRT-PCR) was utilised to analyse the osteogenic differentiation of MG63 cells on the scaffold surfaces. Runt-related transcription factor 2 (RUNX2), osteocalcin (OCN), and type-1 collagen were quantified using a Bio-Rad MyiQ2 system. Cells were cultured at a density of 4 × 10⁴ per well for 1, 7, and 14 days before being lysed with TRIzol (Invitrogen, Waltham, MA, USA) to obtain RNA. To acquire enough RNA, cells from all scaffolds in each group were pooled. A total of 1 mg of RNA was reverse transcribed to complementary DNA (cDNA) using the SuperScript II first-strand cDNA synthesis kit [29].

Alkaline Phosphatase (ALP) Assay Osteoblast cell differentiation was estimated by ALP activity. Osteoblast cells were lysed in a buffer solution containing 0.05% Triton X-100, 1.0% Tris, and 6.0% NaCl (w/v in deionized water, pH 10.0). All the chemicals were procured from Sigma-Aldrich (St. Louis, MO, USA). A volume of 60 µL of scaffold specimen solution was added to 50 µL of 0.07% p-nitrophenylphosphate (w/v, Thermo Fisher, Waltham, MA, USA) in 2-amino-2-methyl-1-propanol (AMP, Acros Organics, Pittsburgh, PA, USA) buffer, and the resulting solution was incubated for 2 h at 37 °C. The absorbance was recorded at 400 nm. ALP activities were normalized to the total DNA content (ALP activity per µg DNA) [27].

Bacterial Viability Test The antibacterial study of the 5% and 10% Mn-doped BCP-MAN scaffolds was carried out using Staphylococcus aureus (S. aureus). Mueller-Hinton Broth (MHB) medium was used to culture the bacteria. The MTT assay was utilized to assess bacterial viability in vitro. The detailed procedures were followed as per our previous work [30].

Statistical Analysis The SPSS (V 22) statistical analysis software was used to perform statistical analyses on the collected data. All experimental outcomes except XRD, SEM, and CLSM were expressed as mean ± standard deviation. One-way analysis of variance (ANOVA) was used to compare the changes in the data. A p value < 0.05 was considered statistically significant in all the studies.

X-ray Diffraction The XRD patterns of 5% Mn-BCP and 10% Mn-BCP are shown in Figure 1. In the synthesized Mn-BCP scaffolds, Mn-HA and β-TCP peaks were present and closely match the stoichiometric HA (JCPDS no. 9-0432) and β-TCP (JCPDS no. 9-0169) diffraction peaks, respectively. According to the XRD pattern, the doped Mn-BCP nanoparticles did not show any additional impurity phases.
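The phase assignment behind Figure 1 amounts to matching measured 2θ peak positions against the reference reflections of the JCPDS cards. The sketch below illustrates that step on a synthetic pattern; the pattern, the reference line lists and the 0.2° matching tolerance are stand-ins, not digitized data from this study.

```python
# Illustrative sketch of JCPDS peak matching on a synthetic XRD pattern (hypothetical data).
import numpy as np
from scipy.signal import find_peaks

two_theta = np.linspace(10, 80, 3501)                               # ~0.02 degree step, as in the scan settings
intensity = np.random.default_rng(0).normal(50, 5, two_theta.size)  # stand-in noisy background
for centre in (25.9, 31.0, 31.8, 32.9, 34.4):                       # stand-in reflections
    intensity += 800 * np.exp(-((two_theta - centre) / 0.12) ** 2)

peaks, _ = find_peaks(intensity, prominence=200)
measured = two_theta[peaks]

reference = {"HA (JCPDS 9-0432)": [25.9, 31.8, 32.9, 34.0],
             "beta-TCP (JCPDS 9-0169)": [27.8, 31.0, 34.4]}
for phase, lines in reference.items():
    matched = [t for t in lines if np.min(np.abs(measured - t)) < 0.2]
    print(f"{phase}: matched {len(matched)}/{len(lines)} reference lines near {matched}")
```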
Swelling Ratio The water uptake capability of the soaked scaffolds after soaking periods of 0, 1, 4, and 7 days is depicted in Figure 3. A significant change in the swelling ratios of the scaffolds was observed. The swelling was higher in the 5% Mn-BCP-MAN scaffolds (287 ± 14%) but decreased to 253 ± 13% for the 10% Mn-BCP-MAN scaffolds.

Mechanical Property of the Scaffold Compressive strength (Figure 4) and modulus (Figure 5) of the scaffolds were estimated from week 0 to week 11 after degradation. The in vitro compressive strength and modulus at yield decreased considerably in the BCP scaffolds with time. However, the BCP-MAN scaffolds with different Mn concentrations did not exhibit any change in the above tests.

Degradation It can be seen that the rate of degradation of both scaffolds increased as the soaking period increased (Figure 6). The 10% Mn-BCP-MAN scaffold showed a lower deterioration rate in comparison to the 5% Mn-BCP-MAN scaffold. Within a time period of one week, the 10% Mn-doped BCP scaffolds lost less weight than the 5% Mn-doped BCP scaffolds. The biodegradation ratio reduced as the concentration of Mn increased from 5% to 10%.
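The weight-loss curves in Figure 6 follow directly from the biodegradation protocol (percentage of the original mass, averaged over five scaffolds). A minimal sketch is given below; all masses are hypothetical placeholders.

```python
# Minimal sketch of the weight-loss computation (mean +/- SD over five scaffolds, hypothetical masses).
import statistics

def weight_loss_percent(initial_mg: float, final_mg: float) -> float:
    return (initial_mg - final_mg) / initial_mg * 100.0

# five hypothetical scaffolds weighed before and after 25 days in SBF
initial = [502, 498, 510, 495, 505]
final   = [471, 469, 480, 463, 476]

losses = [weight_loss_percent(i, f) for i, f in zip(initial, final)]
print(f"weight loss after 25 days: {statistics.mean(losses):.1f} +/- {statistics.stdev(losses):.1f} %")
```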
In Vitro Release of Mangiferin from Scaffolds The release of MAN was closely linked with the concentration of Mn present in the 5% and 10% Mn-BCP-MAN scaffolds. The total release of MAN was estimated to be about 90% for each sample (Figure 7); a simple cumulative-release calculation of this kind is sketched after this block of results.

Ion Release The antibacterial property of the scaffold surface can be revitalized over time by the bioactive Mn ions released by the scaffolds. The ion release behaviour of the 5% and 10% Mn-BCP-MAN scaffolds was studied by immersing the scaffolds in SBF at 37 °C and measuring the ion content by PIXE at different time periods.

MTT Assay The MTT assay was used to examine MG63 cell viability on the 5% and 10% Mn-BCP-MAN scaffolds. The cell density on both scaffolds was evaluated after culturing for 1, 7, and 14 days, as shown in Figure 8. Pure BCP was used as the control specimen. On day 1, cell viability on the 10% Mn-BCP-MAN scaffold was modest, but on days 7 and 14 the cell proliferation rate was higher than on the 5% Mn-BCP-MAN scaffold. For all cultured days, the statistical analysis showed a significant difference (p < 0.05) in cell density between the 5% and 10% Mn-doped BCP-MAN scaffolds.

SEM and CLSM Observation Assessment of the 10% Mn-BCP-MAN scaffold by SEM (Figure 9a) and CLSM (Figure 9b) on the 14th day of culture showed elongated cells spread throughout the scaffold surface, establishing cell-to-cell contacts on 10% Mn-BCP-MAN. Adherent cells appeared well spread with elevated cytoplasmic volume and a higher number of fibrillar projections. Moreover, the cells showed a well-aligned F-actin cytoskeleton with intense staining at the cell boundaries and the appearance of prominent nuclei and cell division [31].
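Returning to the mangiferin-release data above (Figure 7): a cumulative-release curve of this kind is assembled by converting each HPLC concentration into a released mass and expressing it as a percentage of the loaded amount. The loading, extraction volume and concentrations below are all hypothetical assumptions used only to illustrate the arithmetic.

```python
# Hedged sketch of a cumulative-release calculation (all numbers hypothetical).
loaded_ug = 50.0   # assumed MAN loading per scaffold (hypothetical)
volume_ml = 10.0   # assumed extraction volume (hypothetical)

# week -> HPLC concentration of MAN in the medium (ug/mL), hypothetical values
hplc_ug_per_ml = {1: 0.9, 2: 1.6, 4: 2.6, 6: 3.3, 8: 3.9, 10: 4.2, 12: 4.4, 14: 4.5}

for week, conc in hplc_ug_per_ml.items():
    released_percent = conc * volume_ml / loaded_ug * 100.0
    print(f"week {week:2d}: cumulative release ~ {released_percent:.0f}%")
```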
Osteogenic Gene Expression Osteogenic gene expression was used to assess the differentiation of MG63 cells on both the 5% and 10% Mn-BCP-MAN scaffolds. Pure BCP was used as the control specimen. The expression levels of the osteogenic genes COL1A1 (Figure 10), RUNX2 (Figure 11), and OCN (Figure 12) increased from day 1 to day 14 for MG63 cells on both the 5% and 10% Mn-BCP-MAN scaffolds. In comparison to the 5% Mn-BCP-MAN scaffold, the 10% Mn-BCP-MAN scaffold demonstrated greater gene expression levels (p < 0.05). Table 1 shows the forward and reverse primers used for quantification of the expression of the relevant genes.
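Relative expression data such as those in Figures 10-12 are commonly reduced with the Livak 2^-ΔΔCt method. The paper does not state its exact quantification model, so the sketch below is only one plausible reconstruction; the reference gene and all Ct values are hypothetical placeholders.

```python
# Hedged sketch of the 2^-ddCt relative-expression calculation (hypothetical Ct values,
# assumed housekeeping reference gene such as GAPDH).
def relative_expression(ct_target, ct_reference, ct_target_ctrl, ct_reference_ctrl):
    """Return fold change of a target gene versus an untreated control (2^-ddCt)."""
    d_ct_sample = ct_target - ct_reference
    d_ct_control = ct_target_ctrl - ct_reference_ctrl
    return 2.0 ** -(d_ct_sample - d_ct_control)

# e.g. RUNX2 on the 10% Mn-BCP-MAN scaffold at day 14 vs. pure BCP control
fold = relative_expression(ct_target=22.1, ct_reference=16.0,
                           ct_target_ctrl=24.3, ct_reference_ctrl=16.1)
print(f"RUNX2 fold change vs. control: {fold:.1f}x")
```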
ALP Activity Measurement of ALP activity was carried out to assess the capability of the scaffolds to accelerate osteoblast cell differentiation (Figure 13). Pure BCP was used as the control specimen. After 7 days of culture, osteoblast cells in contact with the 5% Mn-BCP-MAN scaffold surface exhibited insignificant ALP activity compared to those on the 10% Mn-BCP-MAN scaffolds (p < 0.05). However, after 14 days of culture, ALP activity was significantly higher on the 10% Mn-BCP-MAN scaffolds than on the 5% Mn-BCP-MAN scaffolds.

Bacterial Viability Optical density measurements at 490 nm were used to examine the activity of S. aureus. The data were compiled at 10 h intervals up to 30 h. Bacterial growth was monitored on the pure BCP control scaffold, the 5% Mn-BCP-MAN scaffold, and the 10% Mn-BCP-MAN scaffold. During the initial 10 h, there was an insignificant decrease in bacterial cells, but the bacterial count decreased exponentially as the time period increased from 20 h to 30 h. On the 10% Mn-BCP-MAN scaffold, there was a significant decrease in the bacterial cell count, as illustrated in Figure 14.
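One simple way to summarise an OD-based viability readout like Figure 14 is as the percentage reduction relative to the pure BCP control at the same time point. The sketch below illustrates this; all OD values are hypothetical placeholders, not the study's raw data.

```python
# Hedged sketch: percent reduction in S. aureus OD490 versus the BCP control (hypothetical values).
od_readings = {                    # time (h) -> OD490
    "BCP control":    {10: 0.62, 20: 0.95, 30: 1.10},
    "5% Mn-BCP-MAN":  {10: 0.58, 20: 0.70, 30: 0.55},
    "10% Mn-BCP-MAN": {10: 0.55, 20: 0.48, 30: 0.22},
}

control = od_readings["BCP control"]
for scaffold, series in od_readings.items():
    if scaffold == "BCP control":
        continue
    for t, od in series.items():
        reduction = (control[t] - od) / control[t] * 100.0
        print(f"{scaffold} at {t} h: {reduction:.0f}% lower OD than control")
```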
Discussion Mn-BCP-MAN comprises balanced combinations of a non-resorbable phase (Mn-HAP) and a resorbable phase (β-TCP) that frequently demonstrate increased bioactivity and satisfactory antibacterial properties together with good mechanical strength, which cannot be achieved by a single-phase biomaterial. The diverse action of naturally occurring MAN at the cellular as well as the molecular level offers vital knowledge for its usage as a potential anti-osteoporotic agent. It has been established that MAN suppresses the formation of bone resorption cells by inhibiting RANKL-induced activation of NF-κB and ERK. Moreover, it enhances the development of bone formation cells by raising OCN, COL1A1, and RUNX2 expression levels [31]. The effectiveness of a sustained-release MAN scaffold in the treatment of diabetic alveolar bone defects was analysed in an earlier study. The resulting scaffolds exhibited porous architectures, possessing pores 111.35 to 169.45 µm in size. Average pore size decreased with increasing PLGA content. Increased drug content was produced by either a decrease in PLGA concentration or an increase in MAN concentration [32]. In in vitro models, the MAN-loaded scaffolds prevented the decline in cell viability due to diabetes. Additionally, healing of delayed alveolar bone defects was improved, with enhanced bone regeneration in diabetic mice. Another study was carried out to find out whether treatment of MC3T3-E1 cells with MAN could protect the cells against dexamethasone-induced toxicity.
The outcomes showed that incorporation of MAN greatly reduced the effects of dexamethasone on the cell viability of MC3T3-E1 cells and on levels of ALP activity. Increased OCN is a characteristic of osteogenic differentiation, and ALP activity is regarded as an early marker of this differentiation [33]. In summary, it can be concluded that MAN could be used to treat significant bone disorders. Figure 1 depicts the XRD pattern, from which it was evident that Mn-doped BCP did not contain any impurities because of the absence of additional diffraction peaks. The 10% Mn-BCP scaffold showed higher crystallinity as compared to its 5% Mn-BCP counterpart. Although 5% Mn-BCP showed a further increase in the intensity of the β-TCP peak while the intensity of the HAP peak decreased, the highest amount of β-TCP was detected in this specimen as compared to its 10% Mn-BCP counterpart. These phenomena could be explained by the Mn solubility limit in HAP. The 5% Mn-BCP likely resulted in the production of β-TCP as the second phase; however, a concentration of Mn > 5 mol% stabilizes the HAP phase, thereby preventing the formation of the subordinate phase. Furthermore, the decrease in the β-TCP peak leads to an enhancement of the HAP peak as the concentration of manganese in BCP increases [34]. The enhancement in the β-TCP peak intensity at a Mn concentration of 5 mol% can be attributed to the incorporation of Mn2+ ions at the Ca2+ ion site in the β-TCP phase. Calcination of Mn-doped BCP at 1000 °C stabilizes its phase structure, and this phenomenon explains the decomposition of the structural phase [16]. The β-TCP peak marginally shifted to a higher 2θ angle with an increase in the concentration of Mn, but no change was observed in the HAP peaks. This finding demonstrates that doping of Mn favours the TCP phase over the HAP phase [35]. On the other hand, 5% Mn-doped BCP-MAN exhibited a large contact angle due to its lower hydrophilicity when compared with the 10% Mn-BCP-MAN scaffolds. The wettability of the specimens influences cell proliferation and differentiation as well as cell adhesion on the biomaterial surface. Furthermore, an increase in the Mn concentration in the scaffold results in decreased contact angles [36]. The swelling ratio of the scaffold is used to estimate the impact on cell activities like cell proliferation, growth, and adhesion [37]. The swelling ratio was found to be >100% for all the synthesized samples, thereby stimulating cell development on the scaffold. Both micro- and macropores were present in the synthesized 5% and 10% Mn-BCP-MAN scaffolds. The number of macropores was abundant, showing that water absorption increases with an increase in pore size [38]. The newly generated bone is envisaged to substitute the 5% and 10% Mn-BCP-MAN scaffolds and show better mechanical strength because of the presence of a higher amount of Mn. An ideal scaffold should have a controlled biodegradation rate, which is related to the bone remodelling speed. Enhanced osseointegration should supply ample mechanical strength for the regeneration. During the resorption process, the mechanical strength of the scaffolds must be retained until the implantation area is totally replaced by the host tissues so that it can resume its structural role [39]. According to the literature [11], the dissolution rate of β-TCP is higher inside the body environment in comparison to HAP.
The degradation rate of all our synthesized scaffolds was slow, and both specimens demonstrated a consistent degradation rate, which differs from the literature. Ca and P, as the major mineral components of HAP, have critical functions in accelerating and retarding osteoblast and osteoclast activities. Both low (2-4 mmol) and medium (6-8 mmol) contents of Ca2+ ions are favourable for osteoblast proliferation, differentiation, and extracellular matrix remineralization. On the other hand, P seems to play a subordinate role in osteoblast proliferation as well as differentiation [12]. The Mn-BCP-MAN scaffold was soaked in SBF at 37 °C, and the ion content was measured using the PIXE technique at different time periods to explore its ion release properties. The release of Mn2+ ions aids the stimulation of osteoinductivity along with the antibacterial activities of the Mn-BCP-MAN scaffolds [40]. Furthermore, the amounts of β-TCP and Mn-doped HAP in the scaffolds regulate cell viability as well as functionality. In the initial phases of the experiment, the survival rate of MG63 cells in the MTT assay suggested some cytotoxicity. On the 14th day, however, the survival rate of MG63 cells on the 10% Mn-BCP-MAN scaffold was comparatively higher than on the 5% Mn-BCP-MAN scaffold. Thus, the presence of MAN affects the cell proliferation rate. CLSM observation showed organized cellular activities. This behaviour is appropriate as far as biological activity on the scaffolds is concerned: the F-actin cytoskeleton, which is highly concentrated below the plasma membrane, gives structural strength and elasticity to the cell as it adapts to the scaffold structure. Moreover, the F-actin cytoskeleton is a primary candidate in the mechano-transduction mechanism of cells that modulates the complex signalling pathways that are mandatory for the subsequent stages of osteoblast proliferation and differentiation [41]. The qRT-PCR technique was carried out in order to investigate the osteogenic gene expression of MG63 cells for COL1A1, RUNX2, and OCN. Throughout the proliferation and matrix maturation stages of osteoblastic cell development, COL1A1 is considered an early-stage marker. On the 7th and 14th days, the osteogenic gene expression of COL1A1 by osteoblast cells on the 10% Mn-BCP-MAN scaffold was higher than on the 5% Mn-BCP-MAN scaffold, indicating that the presence of MAN results in an increased proliferation as well as differentiation rate. OCN is regulated via RUNX2 and is a RUNX2 target gene [42]. OCN is termed a late-stage gene marker. The presence of MAN significantly boosts RUNX2 transcriptional activity, as discussed by Peng et al. [43]. However, the results revealed that, in the presence of antimicrobial Mn2+ ions, the level of gene expression decreased. Because of the higher antibacterial efficacy of Mn, fewer bacterial cells were viable on the 10% Mn-BCP-MAN scaffold compared to the 5% Mn-BCP-MAN scaffold [44]. Regarding the osteo-inductive property of the scaffolds, the ALP activity of osteoblast cells on both scaffolds on day 7 showed the least variation. However, ALP activity was significantly higher on the 10% Mn-BCP-MAN scaffolds than on the 5% Mn-BCP-MAN scaffolds on day 14. This change may be based on the progressive release of β-glycerophosphate during scaffold degradation, which is widely used to stimulate osteoblast cell-mediated mineralization.
Conclusions While 10% Mn-BCP-MAN showed higher hydrophilicity, its swelling ratio was lower than that of 5% Mn-BCP-MAN. The release of mangiferin from both scaffolds showed insignificant variation, which contributed to the osteogenicity of the scaffolds. The expression of COL1A1, OCN, and RUNX2 indicated higher osteogenicity for the 10% Mn-BCP-MAN scaffold, and the antibacterial efficacy of the 10% Mn-BCP-MAN scaffold increased with the increase in Mn content.
9,328.2
2023-03-01T00:00:00.000
[ "Materials Science", "Medicine", "Engineering" ]
A combined treatment of Proteinase K and biosynthesized ZnO-NPs for eradication of dairy biofilm of sporeformers Biofilms of sporeformers found in the dairy industry are the major contaminants during processing, as they withstand heat and chemical treatment that are used to control microbes. The present work is aimed to remove these resistant forms of bacterial community (biofilm) present in dairy production lines using ecofriendly agents based on proteinase K (Prot-K) coupled with Zinc oxide nanoparticles (ZnO-NPs). Some metal/metal oxide (Ag, CuO and ZnO) NPs were prepared microbially, and ZnO-NPs were characterized as the most effective ones among them. The produced ZnO-NPs were 15–25 nm in size with spherical shape, and FTIR analysis confirmed the presence of proteins and alkanes surrounding particles as capping agents. Application of Prot-K for eradication (removal) of a model biofilm of mixed sporeformers on food-grade stainless steel resulted in an 83% reduction in the absorbance of crystal violet-stained biofilm. When Prot-K was mixed with the biosynthesized NPs ZnO_G240, the reduction increased to 99.19%. This finding could contribute to an efficient cleaning approach combined with CIP to remove the recalcitrant biofilms in dairy production lines. Introduction Microbes generally tend to form biofilms on all surfaces with sufficient moisture and organic matter supply. Presence of biofilms in the dairy industry raises safety issues, especially when biofilms are located on milk-processing surfaces and pipelines that are unreachable by cleaning agents [1]. Bacteria in biofilms are protected against disinfectants due to the interspecific cooperation and the presence of extracellular polymeric substances (EPS), which enhance their survival and promote the subsequent contamination of dairy products. Indeed, dairy biofilms are composed of specific bacterial species adapted to survive the intrinsic and extrinsic factors (heat, nutrients, pH, salt, etc.) that are associated with milk processing [2]. It is well documented that the bacteria frequently found in the dairy environment, other than the starters, capable of forming biofilms are aerobic sporeformers belonging to genus Bacillus and allied genera [3][4][5][6]. These bacteria are major contaminants in the milk processing industries, as their spores, already existent in raw milk and able to grow at refrigeration temperature, survive pasteurization and subsequent processing, attach to surfaces, form biofilms and consequently become a part of the final product. Their biofilms, especially in the problematic regions (joints, pipe corners, gaskets, etc.), remain after Clean-In-Place (CIP) practices that result in more production of the spoilage enzymes which give rise to off-flavors and structural defects, as well as high numbers of bacteria in the end products, limiting the shelf life and thus leading to huge economic losses [7,8]. So, in order to remove the formed biofilms of these nonstarter bacteria completely to prevent the regeneration possibility in the subsequent batches, deep cleaning methods are required. Proteolytic treatment of biofilm is a preferred approach due to proteinaceous contents of the biofilm cells and EPS matrix. Previous studies have been published concerning enzyme degradation of mature biofilms using proteinase K (Prot-K) [9]. It is a very reactive serine protease and stable in a wide range of conditions, including temperature, pH, detergents and buffer salts [10]. 
As a result, Prot-K is an excellent choice for biofilm disassembly among proteases [11]. Additionally, nanoparticles (NPs) are considered a promising tool for removing bacterial biofilms. Many interactions have been determined between NPs and biofilm, such as electrostatic, hydrophobic and steric, that lead to disruption and prevention of the biofilm growth. These interactions are affected mainly by the size and surface charge of the particles and the structure and composition of EPS matrix [12,13]. Metal oxide nanoparticles, such as ZnO and CuO, are among the most promising NPs and widely investigated for the treatment of bacterial biofilms [13]. The chemical synthesis of promising NPs is relatively expensive and might result in low biocompatibility and risks to living organisms because of the dangerous compounds that are used. On the other hand, biosynthesis using microorganisms, enzymes or plants has been proposed as possible environmentally sustainable method [13,14]. The objective of the current study is to assess the susceptibility of dairy biofilms of sporeformers to Prot-K and some biosynthesized metal/metal oxide NPs (Ag, CuO and ZnO) and their combination effects. To achieve this, isolation of the nonstarter dairy biofilm-forming bacteria post-pasteurization (thermoduric and/or sporeformers) was done to form a biofilm model of dairy industries for the study's experiments. Isolation of biofilm-forming bacteria Five samples (two swab samples, one raw milk and two powdered milk) were collected. Table 1 describes the samples' characteristics and their step(s) of stock solution preparation. Isolation of the biofilm-forming bacteria was carried out by the pour-plate technique on Plate Count Agar (PCA) medium [15], supplemented with 0.2% soluble starch for enhancement of the bacterial spore germination [16]. First, samples of stock solutions were serially diluted 10-fold in sterile distilled water to 10 -5 . Afterward, 0.1 mL of volume from each dilution was poured under an aseptic condition in duplicate onto Petri dishes containing melted PCA medium and allowed to solidify at room temperature. The dishes after agar solidification were incubated for 16 h to 72 h at 37 °C and 55 °C. After incubation, the morphologically different colonies were picked and purified by streaking on new media. Pure colonies were tested for their ability to form a biofilm on food-grade 316 stainless steel (SS-316) by growing each isolate in 10 mL sterile Tryptic Soy Broth (TSB) containing a coupon (size 10-by-20mm; grade 316) placed in a Falcon tube (15 mL) and incubated under their optimum growth temperatures. After incubation, the coupons were washed twice by dipping and rinsing using sterile distilled water, transferred to a sterile Falcon tube (50 mL) containing 2 g of glass beads (diameter 5 mm) and 10 mL of 0.1% peptone water and then vortexed for 1 min. Next, the recovered bacterial cells from coupons were diluted and plated on Nutrient Agar (NA). Thereafter, a screening was done for the obtained cultures by Gram staining and microscopic examinations. 16S rDNA and biochemical identification The bacterial isolates were identified on the basis of 16S ribosomal RNA gene sequence using two sets of universal primers, 27F (5'-AGAGTTTGATCMTGGCTCAG-3') and 1492R (5'-TACGGYTACCTTGTTACGACTT-3'), that amplify about 1500 bp of the 16S rDNA region [17]. Genomic DNA was extracted using a GeneJET Genomic DNA Purification kit (Thermo Scientific, USA) according to the manufacturer's instructions. 
PCR was performed by adding 40 ng of the extracted DNA in 50 µL of PCR reaction solution (1 U of MyTaq™ DNA polymerase (Bioline, Meridian Bioscience Inc., USA), 1x MyTaq buffer contains dNTPs and MgCl2, and 10 pmol of each primer). PCR product was purified by QIAquick Gel Extraction Kit (Qiagen, Germany) and sequenced by capillary DNA sequencing systems, Applied Biosystems™ 3730XL (Applied Bio-systems, USA; service provided by GATC Biotech AG, Germany). Obtained sequences were aligned on the blastn program of NCBI (https://blast.ncbi.nlm.nih.gov/Blast.cgi) using the database of 16S ribosomal RNA sequences (Bacteria and Archaea) in rRNA/ITS databases and against the online tool SepsiTest BLAST (http://www.sepsitest-blast.de/en/index.html) for identification to the species level. Biochemical identification using the analytical profile index (API) method was performed by API 50CH, API 20E strips and CHB/E medium, (bioMerieux, Marcy-l'Etoile, France), following the manufacturer's instructions. Briefly, freshly grown bacterial colonies were taken from each isolate and suspended in saline solution. From that, ampoules of API CHB medium (10 mL) and API 20E (5 mL of 0.85% NaCl) were inoculated using manual pipette to a turbidity equivalent to 2 McFarland. Then, the suspension was applied to API strips and covered with mineral oil in accordance with instructions. The results of color changing after incubation for 48 h were analyzed with API WEB. Biosynthesis of metal/metal oxide NPs Green synthesis of metal/metal oxide NPs was carried out using cell free filtrate of 14 thermoalkali actinobacteria species (provided by Dr. Ahmad S. El-Hawary, Faculty of Science, Al-Azhar University) according to methods described by Darwesh & Elshahawy [18] and Darwesh et al. [19]. Silver nitrate (AgNO3, Sigma-Aldrich, USA) 0.1%, copper sulfate (CuSO4.5H2O, Merck, Germany) 1% and zinc sulfate (ZnSO4.7H2O, Sigma-Aldrich, USA) 1% w/v were used as precursors for the biosynthesis. Fresh cultures of the 14 actinobacteria species were prepared by inoculating two disks from actively growing agar culture in 50 mL nutrient broth (NB) with a final pH of 8.5 and incubation at 55 °C and 150 rpm overnight. Then, the cells were harvested and re-suspended in 5 mL PBS. The suspensions were used to inoculate 100 mL complex medium (Glucose, 1%; Yeast extract, 0.5%; Peptone, 0.25%; Casein, 0.25%; MgSO4, 0.03%; FeSO4, 0.002%; ZnSO4, 0.02%; CaCO3, 0.1%; KH2PO4, 0.1%; K2HPO4, 0.1%) and incubated for 96 h at 55 °C and 200 rpm [20]. After incubation, the grown cultures were filtered by Whatman paper grade 5. Next, equal volumes of AgNO3, CuSO4 and ZnSO4 solutions were mixed with the obtained supernatants for each culture separately. Afterwards, the reaction mixture was incubated in dark conditions at room temperature overnight. Detection and selection of the synthesized NPs were done by visual observation of the color changing and the precipitation at the flask bottom. Further, the NPs were collected by centrifugation at 10, 000 rpm for 15 min, and the pellets were washed 3 times with deionized water and absolute ethanol and dried in a hot air oven at 45 °C. Finally, the powders were re-suspended in deionized water, sonicated and subjected to biofilm eradication (removal) assay. Biofilm formation assay The SS-316 coupons were first washed with 0.1% (w/v) SDS, deionized water and 70% ethanol sequentially and then sterilized by autoclaving. Following that, biofilm strains were grown in NB medium at 37 °C to early log phase (3-6 h). 
The obtained bacterial cells were collected and resuspended in a sterile saline solution (0.85% NaCl). Then, the suspensions were used either individually or in combination (mixed-species) to inoculate 5 mL of 3% reconstituted skim milk (RSM) to a final concentration of 10⁵–10⁶ CFU/mL in a Falcon tube (15 mL) containing the sterilized coupons placed vertically, and incubated with shaking at 150 rpm and 37 °C for 72 h. In order to determine the background of the staining and the fouling layer of milk without bacteria, control A (coupon incubated in un-inoculated water) and control B (coupon incubated in un-inoculated skim milk), respectively, were treated the same as in the biofilm formation test.

Crystal Violet (CV) staining assay Evaluation of the mature biofilm and quantification of the remainder after treatment were accomplished by the CV assay [21]. Biofilm masses on the surfaces of the stainless steel coupons were first washed with PBS and fixed with methanol for 15 min. Then, the coupons were transferred to 12-well plates, air dried and stained for 20 min with 1% CV solution. Afterwards, the dye was discarded, and the coupons were rinsed 5 times with distilled water and air-dried again. Subsequently, the coupons were immersed in 5 mL glacial acetic acid (33%) for 10-15 min to de-stain the stained biofilm. Finally, the biofilm quantity was determined by transferring the de-staining solution to a disposable cuvette and measuring the absorbance at 590 nm using a spectrophotometer (SHIMADZU, UV-240, Japan).

Biofilm eradication assay Once incubation of the biofilm strains (mixed-species) with the SS-316 coupons was completed, the culture media were discarded, and the coupons were rinsed thrice with PBS (1X) to remove the non-biofilm cells, then transferred to Falcon tubes (15 mL) containing 5 mL of the eradication treatment solution (i.e., Prot-K, biosynthesized NPs or Prot-K + biosynthesized NPs) and incubated for 30 min. The Prot-K was commercial proteinase K from Bioline Co. (Bioline, Meridian Bioscience Inc., USA). The Prot-K treatment alone was performed in 50 mM Tris-Cl (pH 7.8) at 55 °C, whereas the NPs were suspended in water and incubated at room temperature. However, the combination (Prot-K + biosynthesized NPs) was conducted as a one-step procedure using the Prot-K buffer and temperature. Each treatment was replicated (3 replicates), and the controls (+ve and −ve) of removal activity were distilled water without inoculation. After incubation, the coupons were rinsed with PBS and subjected to the staining assay.

Characterization of the effective NPs (ZnO_G240) The effective NPs in the biofilm eradication assay (ZnO_G240) were subjected to identification by TEM, FTIR and XRD instruments. For examination of size and morphological shape, characterization was done by high-resolution transmission electron microscopy (HRTEM, JEOL 2100, Japan). The sample solution was drop-coated onto the carbon-coated copper TEM grid and loaded after drying into the specimen holder. Then, an HRTEM micrograph was taken, and the size and shape were recorded [22]. In the case of the FTIR instrument, the probable biomolecules involved in capping, reduction and efficient stabilization of the synthesized NPs were recorded using Fourier Transform Infrared Spectroscopy (FTIR, Agilent Cary 630 FTIR spectrometer) in diffuse reflection mode. Sample powder was placed in a micro cup with an inner diameter of 2 mm and loaded into the FTIR spectrometer set at 26 ± 1 °C.
Then, the sample was scanned with the infrared light in the range of 400 to 4000 cm -1 . The resulting spectral data was compared to the reference chart to identify the presented functional groups [23]. The crystal structure of the biosynthesized NPs was characterized using an X-Ray Diffractometer (XRD, Shimadzu XRD-6000). This analysis was done with the nickel-filter and Cu-Kα X-ray target, under the conditions of a 2θ scan range from 10 to 80°, step size of 0.02°, scan rate of 0.5 sec and copper anode source [24]. Toxicity assay of bio-synthesized ZnO_G240 According to Rajabi et al. [25] and Saleh et al. [26], a brine shrimp toxicity test was performed on the bio-synthesized ZnO_G240 nanoparticle concentrations. 3.3 g of instant ocean sea salt (Aquarium System, Ohio) was dissolved in 100 mL of distilled water, and 0.5 g of the dried cysts of Artemia salina (Linnaeus) nauplii was added to the salt solution and incubated at room temperature under continuous aeration and illumination. The larvae (nauplii) hatched within 48 h were distributed by glass capillary in a vial containing 5 mL of sea water. Then, different concentrations (0, 10, 30, 50, 70, 90 µg/mL) of the bio-synthesized ZnO_G240 were prepared in 5 ml of sea water as triplicates, and ten nauplii of A. salina were introduced to each concentration and incubated at room temperature for 24 h, followed by 24 h for confirmation. The survival percentage was obtained after counting and recording the number of alive and dead nauplii in each concentration. Statistical analysis The collected data from three replicates were statistically analyzed by MINITAB statistical software version 18.1 (Minitab, Inc., PA, USA). One-way analysis of variance (ANOVA) was used to determine the significances through the Tukey test with significance level (P value ˂ 0.05). Isolation and identification of the nonstarter dairy biofilm-forming bacteria post-pasteurization The samples were collected, prepared and cultured based on the study's purpose regarding the ability to form biofilms on stainless steel surfaces in dairy production lines post-pasteurization. Approximately 22 thermoduric or spore-derived colonies were picked randomly and purified for further characterization. Colony description, cell morphology and Gram staining were performed and resulted in obtaining 10 different isolates. According to Gram staining and microscopic observation, the obtained isolates from five samples were Gram positive bacilli, rod-shaped and purple-violet colored. Moreover, all the isolates were mesophilic thermoduric bacteria, and they could grow in a range between 30 and 55 °C and optimally at 37 ℃ [27]. The isolates were identified by 16S rRNA gene sequences and the biochemical tests of API 50CH and 20E (Table S1). Sequences of the ten isolates were analyzed and deposited at NCBI GeneBank under accession numbers OM857595-OM857604. The results of identification based on 16S rDNA and API biochemical analysis are summarized in Table 2. As expected, all of the isolates were aerobic, spore-forming bacteria and identified as members of Bacillus and related species. These findings are consistent with those of Yuan et al. [4], Sadiq et al. [5], Reginensi et al. [7], Zhao et al. [28] and Vanderkelen et al. [29]. Determination of biofilm formation capability Crystal violet (CV) staining assay was used for monitoring the biofilm formation ability of the isolated strains individually and in combination (mixed-species) on SS-316 surfaces in skimmed milk ( Figure 1, Table S2). 
Strains were considered as high, moderate and weak biofilm formers based on the fold value of CV assay in comparison to control B (fouling of milk without bacteria): High biofilm, ≥ 5 fold; Moderate biofilm, 2.5 to 5 fold; Weak biofilm, ≤ 2.5 fold. The results obtained from the biofilm forming assay showed that out of the ten isolates, only B. coagulans Sw72/5 exhibited high biofilm formation on the submerged SS surfaces with milk. Sadiq et al. [21] reported that the biofilmforming ability of two strains belonging to B. coagulans were higher on the polystyrene microtiterplate containing TSB than the stainless steel in RSM. However, the strains belonging to B. subtilis showed both moderate (strains Sw72/4, RM/6, and PMp/11) and weak (strain Sw80/1) ability to form biofilm on SS-316 coupon. This Bacillus species (B. subtilis) has previously been characterized as a dairy-associated bacteria reported in milk powders [4,6,7], whey WPC80 [15] and sheep milk [30]. Moreover, a moderate ability of biofilm formation was observed by strains B. licheniformis RM/7, B. sonorensis PMa/8, and B. paralicheniformis PMp/10. Previously, Zain et al. [15] and Sadiq et al. [21] concluded that B. licheniformis strains exhibit good biofilm forming ability on SS rather than polystyrene in presence of TSB at 37 °C. Indeed, B. licheniformis is the most common contaminant found in dairy-associated environments as well as the final products. Strains B. sonorensis and B. paralicheniformis are close relatives of B. licheniformis and have previously been reported in milk powder [5] and raw milk [30,31]. The rest of the isolates, Brevibacillus brevis Sw80/2 and Bacillus amyloliquefaciens Sw72/3, showed weak ability to form biofilm on SS surfaces. Presence of these sporeformer species in milk and dairy products was reported by Sadiq et al. [5], Reginensi et al. [7] and Vanderkelen et al. [29]. Finally, the mixed-species biofilm of the ten isolates was high: 2.4 fold that of the highest single-species alone (strain Sw72/5). The biofilm in natural environments, such as the food industry, consists of a bacterial population (mixed-species biofilms), and it has been found to be more resistant to disinfectants and sanitizers than mono-species biofilms [32]. Thus, according to the observed results, the removal treatments should be conducted on mixed-species biofilms to be closer to the application. Figure 1. Biofilm formation capabilities of the ten isolates on SS-316 submerged in skim milk. Data represent means ± SE of the obtained results of CV assay from three independent experiments. *Control A is a non-biofilm-containing coupon, incubated in un-inoculated water for background staining determination. **Control B is the fouling layer of milk without bacteria, incubated in un-inoculated skim milk. Biosynthesis of metal/metal oxide NPs The extracellular filtrate of 14 thermoalkali actinobacteria species obtained from their cultivation on the complex broth medium were used as a reducing system for biosynthesis of metal/metal oxide (Ag, CuO and ZnO) NPs from their respective salts. In this process, NPs were produced through a reduction process detected by visual observation as the change in color and precipitation. The primary confirmation of NP biosynthesis was done by the changes in color (Ag, from pale yellow of AgNO3 to brown; CuO, from light blue of CuSO4 to dark green; ZnO, from colorless ZnSO4 to yellowishwhite) after addition of the cell free filtrate in equal volumes [33][34][35]. 
The colour formation depends on surface plasmon resonance. Table 3 shows which reaction mixtures were positive nanoform producers. The produced NPs were 10 different metal/metal oxides, coded according to the metal ion and the reducing system (Ag_G310, Ag_G210, Ag_G240, CuO_G215, CuO_G210, CuO_G240, ZnO_G412, ZnO_G710, ZnO_G215 and ZnO_G240). Table 3. Green-synthesized metal/metal oxide NPs produced extracellularly.

Effects of Prot-K and the biosynthesized NPs on dairy biofilms of sporeformers The serine protease Prot-K, typically obtained from Tritirachium album, has frequently been used as an efficient removal agent against biofilms produced by E. coli, G. vaginalis, H. influenzae, L. monocytogenes, P. aeruginosa, Sal. Gallinarum, V. cholerae and many Staphylococcus spp. including MRSA [9,40,41]. Activity of metal oxide NPs has also been reported against a range of pathogen-formed biofilms [13]. To our knowledge, this is the first report on the use of Prot-K and biosynthesized NPs for eradication of dairy biofilms of sporeformers. So, in order to evaluate their effects, a model was established to emulate sporeformers' biofilms in dairy industries: the abovementioned mixed-species biofilm grown on food-grade stainless steel (SS-316). Nguyen and Burrows [42] have reported that established biofilms on stainless steel can be dispersed by Prot-K at concentrations between 50 and 200 μg/mL. Likewise, the NPs are effective against bacteria in a range of 15 μg/mL to 1400 μg/mL [13]. Accordingly, the treatments with both Prot-K and NPs were carried out at concentrations of 50 μg/mL. Figure 2 and Table S3 show the removal effects of Prot-K and the green-synthesized NPs on the established model of dairy sporeformer biofilms, based on the absorbance of the de-staining solution of the CV assay. The results showed a significant (P < 0.05) removal effect on the formed biofilm by Prot-K and by the combination of Prot-K with ZnO_G240, lettered "B" and "C", respectively, according to ANOVA with a Tukey test (Figure 2). However, the synthesized NPs alone did not have effects on the formed biofilms significantly different from the negative control, so they are lettered "A". Also, the combinations of Prot-K with the other NPs were not significantly different from Prot-K alone, so they are lettered "B". Regarding the NPs CuO_G240, they exhibited removal activity second only to ZnO_G240 when combined with Prot-K but shared the significance level of Prot-K, so they are lettered "BC". Importantly, the observed synergistic effect of Prot-K with ZnO_G240 was striking and presented the highest significance grouping, the same as the non-biofilm-containing coupon (control +ve of removal), as both are lettered "C". The removal percentages of Prot-K and of Prot-K plus the synthesized NPs ZnO_G240, as measured after the CV assay and compared to the un-treated biofilm (control −ve of removal) and non-biofilm-containing (control +ve of removal) coupons, were 83.76% and 99.19%, respectively (Table S3, Supplementary data); the calculation is sketched below.
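One plausible way to reproduce removal percentages of this kind is to scale the residual A590 of a treated coupon between the untreated biofilm (−ve control, 0% removal) and the biofilm-free coupon (+ve control, 100% removal). The absorbance values below are hypothetical placeholders, not the study's raw data.

```python
# Hedged sketch of the removal-percentage calculation (hypothetical A590 values).
def removal_percent(a_treated: float, a_neg_control: float, a_pos_control: float) -> float:
    return (a_neg_control - a_treated) / (a_neg_control - a_pos_control) * 100.0

a_neg = 1.25   # untreated mixed-species biofilm (hypothetical A590)
a_pos = 0.05   # non-biofilm-containing coupon (hypothetical A590)

print(f"Prot-K alone:      {removal_percent(0.25, a_neg, a_pos):.1f}% removal")
print(f"Prot-K + ZnO_G240: {removal_percent(0.06, a_neg, a_pos):.1f}% removal")
```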
Generation of reactive oxygen species (ROS) is the key mechanism of action of ZnO NPs against bacterial biofilms: bacterial contact with ZnO NPs inhibits respiratory enzymes and facilitates the generation of ROS. The ROS formed can irreversibly damage the bacterial cell membrane, DNA and other cell components [22,44]. The mechanism of synergism between Prot-K and the ZnO_G240 NPs can be inferred as follows: the proteinase K treatment significantly degrades the related proteins in the EPS matrix and loosens the biofilm cells, which enables the ZnO NPs to penetrate deep into the biofilm and eliminate the remaining cells. The observed synergistic effect between Prot-K and the NPs (ZnO_G240) is supported by the recent report of Sahli et al. [45], who described the synergistic effect of Prot-K combined with gold NPs against biofilms of P. fluorescens. Several studies in this line of research have also investigated the synergistic benefits of Prot-K combined with acylase I [46], antibiotics [11], plant extracts of R. sativus [47] and thyme oil [48].
Figure 2. Effects of Prot-K and the biosynthesized NPs on the established biofilms of dairy sporeformers. Values sharing the same letter are not significantly different at p < 0.05 according to the Tukey test. Data represent means ± SE of the CV assay results from three independent experiments. *Control +ve of removal is a non-biofilm-containing coupon, incubated in un-inoculated water instead of RSM, serving as a reference for maximum removal activity. **Control -ve of removal is a mixed-species biofilm without treatment.
It is important to simulate biofilms as they form in dairy plants; thus, SS-316 coupons (the stainless steel grade widely used in dairy production lines) were used and stained with CV for visualization. Figure 3 displays CV-staining images of SS-316 coupons captured for the control +ve and control -ve of removal and for the treatments with Prot-K alone and Prot-K with ZnO_G240. The violet biofilm spots on the SS surface of the control -ve coupon were markedly reduced by treatment with Prot-K alone, remaining as individual cells or small colonies, and almost disappeared with the combination of Prot-K and ZnO_G240. These results confirm the synergistic interaction between Prot-K and the synthesized NPs ZnO_G240, as well as the efficiency of the suggested treatment.
Figure 3. CV staining images for the removal of established biofilms by Prot-K and biosynthesized ZnO-NPs. A mixed-species biofilm of ten dairy sporeformers grown on food-grade stainless steel (SS-316) for 72 h was exposed to 50 μg/mL Prot-K alone and in combination with 50 μg/mL of the synthesized NPs ZnO_G240, and stained with CV. Control +ve of removal is a non-biofilm-containing coupon, incubated in un-inoculated water instead of RSM, serving as a reference for maximum removal activity. Control -ve of removal is a mixed-species biofilm without treatment.
Characterization of the effective NPs (ZnO_G240)
The biosynthesized NPs ZnO_G240, which exhibit synergism with Prot-K, were characterized using high-resolution transmission electron microscopy (HRTEM), Fourier transform infrared spectroscopy (FTIR) and X-ray diffraction (XRD). HRTEM was applied to analyse the NPs' morphology, size and shape. The HRTEM images showed that the sizes ranged between 15 and 29 nm and that the NPs had a spherical shape with marginal variation and little aggregation (Figure 4). Similar results have been reported by Vijayakumar et al. [49], Al-Shabib et al. [50], Ali et al. [51] and Ishwarya et al. [52] for green-synthesized ZnO-NPs with antibiofilm properties. FTIR measurements were performed to identify the biomolecules possibly responsible for reduction, capping and stabilization.
The FTIR spectrum of the ZnO_G240 NPs shows intense absorption peaks at 3419, 3280, 3001, 2934, 1636, 1558, 1406, 1042, 1012, 922, 804, 675, 645, 620, 512, 459 and 421 cm-1 (Figure 5). The broad absorptions at 3419 and 3280 cm-1 correspond to O-H stretching of the alcohols, flavonoids or phenols presumably present in the extract. The peaks at 3001 and 2934 cm-1 correspond to C-H stretching of alkenyl and alkyl groups, respectively, of proteins. The absorption bands at 1636 and 1558 cm-1 correspond to the amide I and II bands, respectively, which are characteristic of proteins and enzymes. The amide I band is mainly a C=O stretching mode, and the amide II band is a combination of largely N-H in-plane bending and C-N stretching. The high-intensity band at around 1406 cm-1 could be attributed to bending vibrations of C-C in aromatic groups of the proteins that act here as a protective agent. The important roles of these surrounding proteins, observed in the FTIR analysis, are capping and stabilizing the synthesized nanoparticles. The region between 500 and 900 cm-1 is associated with metal-oxygen bonds. A similar band pattern has been reported for ZnO-NPs synthesized by green methods for biofilm control [49][50][51][52]. In the XRD analysis of ZnO_G240, the diffractogram showed strong diffraction peaks at 8.6°, 31.6°, 34.2° and 35.9° 2θ (Figure 6). The XRD pattern indicates that the sample was crystalline with a few amorphous phases resulting from the proteins and alkanes surrounding the particles as capping agents. These results are in agreement with previous works [53].
Toxicity evaluation of ZnO_G240 NPs by brine shrimp bioassay
Materials applied in the dairy and food industries must be safe and food grade. Thus, in the current experiment, the toxicity and biosafety of the ZnO-NPs (noted above as the active material for biofilm eradication) were evaluated. Brine shrimp, aquatic organisms commonly used as biomonitoring tools in aquatic ecotoxicology, allowed the detection and evaluation of the potential toxicity of the actinobacterial ZnO_G240 nanoparticles. The results illustrated in Figure 7 and Table S4 show that the ZnO_G240 NPs are safe and show no toxicity at the concentration active for biofilm eradication. The reduction in survival percentage observed at high concentrations might be caused by reduced light transmission [54].
Conclusion
The results of this work demonstrate that Prot-K in combination with the biosynthesized NPs ZnO_G240 could be used as a potential cleaning agent for the eradication of dairy sporeformer biofilms. The current cleaning-in-place (CIP) methods in dairy industries are not always sufficient to eradicate the biofilms formed in production lines, and proteolytic degradation should be used in conjunction with chemical methods to prevent re-colonization by the released cells. Thus, the suggested combination could be applied within CIP regimes to address dairy biofilm problems.
6,676.6
2022-12-19T00:00:00.000
[ "Materials Science" ]
Invariant measures from locally bounded orbits
Motivated by recent investigations of Sophie Grivaux and Étienne Matheron on the existence of invariant measures in Linear Dynamics, we introduce the concept of locally bounded orbit for a continuous linear operator T : X −→ X acting on a Fréchet space X, and we use this new notion to construct (non-trivial) T-invariant probability Borel measures on (X, B(X)).
Introduction
This paper focuses on some aspects of the relationship between Topological and Measurable Dynamics in the particular context of Linear Dynamics, our main aim being to find some sufficient conditions for a linear dynamical system to admit (non-trivial) invariant probability Borel measures. A linear dynamical system is a pair (X, T) where X is a separable infinite-dimensional Fréchet space (that is, a locally convex and completely metrizable topological vector space), and where T : X −→ X is a continuous linear operator acting on X. We will briefly write T ∈ L(X), and given a vector x ∈ X we will denote its T-orbit by O_T(x) := {T^n x ; n ≥ 1}.
A linear dynamical system T ∈ L(X) can be examined from various perspectives. For instance, one may focus on Topological Dynamics and, denoting by $\overline{E}$ the topological closure of any subset E ⊂ X, study notions such as recurrence and hypercyclicity: a vector x ∈ X is said to be
- recurrent for T if x ∈ $\overline{O_T(x)}$, and the set of recurrent vectors for T will be denoted by Rec(T);
- hypercyclic for T if X = $\overline{O_T(x)}$, and the set of hypercyclic vectors for T will be denoted by HC(T).
Alternatively, one can adopt the Measurable Dynamics (also called Ergodic Theory) point of view and, considering a positive finite (often normalized and therefore probability) measure µ defined on the σ-algebra of Borel sets B(X) of X, investigate notions such as invariance and ergodicity:
- such a measure µ is called T-invariant (or simply invariant) if µ(A) = µ(T^{-1}(A)) for all A ∈ B(X);
- and µ is called T-ergodic (or just ergodic) if it is invariant and µ(A) ∈ {0, µ(X)} whenever A = T^{-1}(A).
It is by now well understood that the notion of ergodicity can be seen as the measure-theoretic counterpart of hypercyclicity, while invariance can be compared with recurrence. To state this analogy let N be the set of positive integers, denote the return set from any x ∈ X to any subset E ⊂ X by N_T(x, E) := {n ∈ N ; T^n x ∈ E}, and note that a vector x ∈ X is hypercyclic for T ∈ L(X) precisely when the return set N_T(x, U) is infinite for every non-empty open subset U ⊂ X, and that x ∈ X is recurrent for T when N_T(x, U) is infinite at least for every neighbourhood U of x. Using this notation we reach the announced analogy:
- when µ is a T-ergodic measure with full support (that is, µ(U) > 0 for every open set U ≠ ∅), it was shown by Bayart and Grivaux in 2006 that µ-a.e. vector x ∈ X is not only hypercyclic, but even frequently hypercyclic: for every non-empty open subset U ⊂ X the return set N_T(x, U) has positive lower density dens(N_T(x, U)) > 0, where the lower density of any set A ⊂ N is dens(A) := lim inf_{N→∞} #(A ∩ [1, N])/N; the vector x ∈ X is then called frequently hypercyclic for T and we will denote by FHC(T) the set of frequently hypercyclic vectors for T; see [3, Proposition 3.12] or [5, Corollary 5.5] for the details of this argument, which uses the Birkhoff pointwise ergodic theorem in a crucial way;
- and when µ is just T-invariant, it was checked in [27] that µ-a.e. vector x ∈ X is also not only recurrent, but even frequently recurrent: for every neighbourhood U of x the return set N_T(x, U) has positive lower density dens(N_T(x, U)) > 0; the vector x ∈ X is then called frequently recurrent for T and we will denote by FRec(T) the set of frequently recurrent vectors for T; see [27, Lemma 3.1] for the details of this argument, which uses again the Birkhoff pointwise ergodic theorem, this time combined with the ergodic decomposition theorem, and see [11] for more on frequent recurrence.
These results emphasize the importance of being able to ensure the existence of invariant measures possibly satisfying additional properties such as having full support or being ergodic, weakly and even strongly mixing. This kind of question goes back to the classical work of Oxtoby and Ulam [37], where the existence of invariant positive finite Borel measures, but for continuous automorphisms acting on completely metrizable spaces, was fully characterized. In our linear framework note that every operator T ∈ L(X) admits the atomic Dirac mass δ_0 as an invariant measure since the zero-vector is always a fixed point, so we will say that a probability (or positive finite) Borel measure µ on (X, B(X)) is non-trivial if it differs from δ_0 (or from every positive multiple of δ_0).
The existence of non-trivial invariant measures in Linear Dynamics has recently been explored in the works [29] and [27]. In fact, [29, Section 2] extends to the linear setting a constructive technique already known for compact dynamical systems, obtaining that, under some "natural topological assumptions" on the space X and the operator T, one can construct a T-invariant measure with full support from a frequently hypercyclic vector x ∈ FHC(T). This was slightly refined in [27, Section 2] by weakening the "frequent hypercyclicity" requirement into that of "reiterative recurrence": we say that x ∈ X is reiteratively recurrent for T if for every neighbourhood U of x the return set N_T(x, U) has positive upper Banach density Bd(N_T(x, U)) > 0, where the upper Banach density of any set A ⊂ N is Bd(A) := lim_{N→∞} max_{m≥0} #(A ∩ [m+1, m+N])/N.
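For concreteness, the two densities just used can be separated by a standard example (an illustration not taken from the paper): taking
\[
A \;=\; \bigcup_{k\ge 1}\,\{\,k!,\; k!+1,\; \dots,\; k!+k\,\},
\]
one has dens(A) = 0, since the blocks are far too sparse for the Cesàro averages #(A ∩ [1, N])/N to stay away from zero, while Bd(A) = 1, since for every length N the window [k!, k!+N-1] is entirely contained in A once k ≥ N. In particular, a return set N_T(x, U) can have positive upper Banach density and zero lower density, which is why reiterative recurrence is a formally weaker requirement than frequent recurrence.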
The aforementioned "topological assumptions" on T ∈ L(X) require the underlying space X to be a Banach space in both works [29] and [27], since some kind of "local boundedness" is needed along the construction of invariant measures developed.The main objective of this paper, and what we do in Section 2, is extending the constructive technique exposed in [29,27] to the context of operators acting on Fréchet spaces via the new concept of locally bounded orbit (see Definition 2.2).The rest of the paper is organized as follows: in Section 3 we apply the theory developed in Section 2 by adding general restrictions on X and T , we discuss why the invariant measures constructed are optimal in terms of Banach limits, and we adapt the main ideas from Section 2 to study almost-F-recurrence and some equivalences of Devaney chaos in the Fréchet setting.In Section 4 we elaborate further on the notion of "locally bounded orbit" by exhibiting some explicit examples and stability results. Invariant measures on Fréchet spaces In this section we recall the technique developed in [29] and [27] to construct invariant measures for operators acting on Banach spaces and we extend it to the Fréchet setting by introducing the concept of locally bounded orbit (see Definition 2.2 below).The basic results that we need from [29,27] were originally stated for Polish dynamical systems so that we start by presenting some notation. From the Banach to the Fréchet case We will say that the pair (X, T ) is a Polish dynamical system if T : X −→ X is a continuous map acting on a Polish space X, that is, a separable completely metrizable topological space.Note that the concept of "linear dynamical system" as defined at the Introduction of this paper is indeed a particular case of Polish system.Moreover, the topological and measurable notions already defined, such as "recurrent/hypercyclic vector" and "invariant/ergodic measure", make sense in this rather general context and by abuse of notation we will utilize them also for Polish systems.See [20] for recent investigations on the relation between both Polish and linear dynamical systems. Given a Polish space X we will denote by τ X the original (separable and completely metrizable) topology of the space, but we will often consider a second topology τ on X fulfilling some properties with respect to τ X .The σ-algebra of Borel sets induced by each of these topologies will be denoted by B(X, τ X ) and B(X, τ ) respectively, and if they coincide we will simply write B(X).All the measures considered in this paper will be non-negative finite Borel measures defined on Polish spaces, hence regular (see [16,Proposition 8.1.12]),and we will usually omit the words "Borel" and "regular".Moreover, for any non-negative measure µ on a Polish space (X, τ X ) we will denote its support by It is easy to check that a point x ∈ X belongs to the support supp(µ) if and only if µ(U ) > 0 for every measurable neighbourhood U of x.Let ℓ ∞ be the space of all bounded sequences of real numbers, we will write 1l ∈ ℓ ∞ for the sequence with all its terms equal to 1, and for each A ⊂ N the element 1l A ∈ ℓ ∞ will be the sequence in which the n-th coordinate is exactly 1 if n ∈ A and 0 otherwise.Recall also that a Banach limit is a positive and shift-invariant continuous linear functional m : ℓ ∞ −→ R, which preserves the value of the limit for every convergent sequence (see [17, page 82]). 
Using the previously introduced notation we can explore the very technical lemma, originally stated in [29, Remarks 2.6 and 2.12] and later refined in [27,Lemma 2.1], which allows to construct plenty of invariant (but possibly null) measures for every Polish dynamical system T : (X, τ X ) −→ (X, τ X ) admitting a second Hausdorff topology τ on X which fulfills some conditions with respect to τ X : -[27, Lemma 2.1]: Let (X, T ) be a Polish dynamical system, denote by τ X the original topology of X and suppose that there exists a Hausdorff topology τ on X fulfilling that then for each x 0 ∈ X and each Banach limit m : ℓ ∞ −→ R one can find a (non-negative) T -invariant finite Borel regular measure µ on (X, B(X)) for which µ(X) ≤ 1 and such that µ(K) ≥ m(1l N T (x 0 ,K) ) for every τ -compact set K ⊂ X.Moreover, we have the inclusion In [27,Theorem 2.3] it is shown that conditions slightly stronger than (α), (β), (γ) and (δ) allow to obtain non-null measures by applying [27,Lemma 2.1] to each reiteratively recurrent point.This result has the following automatic corollary (already observed in [27, Proof of Theorem 1.3]): Corollary 2.1.Let Y be a Banach space, assume that its dual Banach space X := Y ′ is separable, and let T ∈ L(X) be the adjoint of some S ∈ L(Y ).Given a (non-zero) vector x 0 ∈ RRec(T ) one can find a (non-trivial) T -invariant probability measure µ x 0 on (X, B(X)) such that Moreover, if the set RRec(T ) is dense in X, then there exists a T -invariant probability measure µ on (X, B(X)) with full support.In particular, the result is true for every operator T ∈ L(X) with respect to the weak topology σ(X, X ′ ) as soon as (X, • ) is a separable reflexive Banach space. Note that given any Banach space X we are denoting by X ′ its topological dual space, which is again a Banach space, and given a dual pair (Y, X) we are denoting by σ(X, Y ) the weak topology on the space X induced by Y .Using this "locally convex spaces"-notation let us briefly explain how Corollary 2.1 is implicitly proved in [27, Proof of Theorem 1.3 and Theorem 2.3]: when T ∈ L(X) is the adjoint operator of some where τ • is the norm topology of (X, • ); (γ * ) every vector of X has a basis of τ • -neighbourhoods consisting of σ(X, Y )-compact sets; and if (X, • ) is separable [29,Fact 2.1] shows that condition (γ * ) implies the σ(X, Y )-metrizability of every σ(X, Y )-compact set, but also that B(X, σ(X, Y )) = B(X, τ • ), which are conditions (γ) and (δ) needed to apply [27,Lemma 2.1].Then, each reiteratively recurrent vector x 0 ∈ RRec(T ) can be shown to return enough frequently to every of its σ(X, Y )-compact and τ • -neighbourhoods of the type K r := {x ∈ X ; x 0 − x ≤ r} for r > 0, to admit a Banach limit m r : In order to repeat this proof when (X, τ X ) is a Fréchet space obtained as the strong dual of some locally convex space (Y, τ Y ), we must prove that conditions (α), (β), (γ) and (δ) still hold between σ(X, Y ) and τ X (see Lemma 2.6 below), but we also need to solve the following problem: if (X, τ X ) is not a Banach space then the τ X -neighbourhoods of the vector selected x 0 ∈ RRec(T ) are no longer σ(X, Y )-compact and [27, Lemma 2.1] seems useless.The following definition will avoid this issue: Definition 2.2.Let T ∈ L(X) be an operator acting on a Fréchet space X.A vector x ∈ X has a locally bounded orbit for T if there exists a neighbourhood U of x such that the set U ∩ O T (x) is bounded in X.We will denote by ℓbo(T ) the set of vectors with locally bounded orbit for T . 
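Unfolding Definition 2.2 in terms of an increasing fundamental sequence of seminorms (p_k)_{k∈N} defining the topology of the Fréchet space X gives the following equivalent formulation (a routine rewriting, consistent with the explicit characterizations used later in Section 4, rather than a verbatim statement from the paper):
\[
x \in \ell\mathrm{bo}(T)
\;\iff\;
\exists\, k_0 \in \mathbb{N},\ \varepsilon > 0 \ \text{ such that }\
\sup\bigl\{\, p_j(T^n x) \ :\ n \in \mathbb{N},\ p_{k_0}(T^n x - x) < \varepsilon \,\bigr\} < \infty
\quad \text{for every } j \in \mathbb{N}.
\]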
Using this new concept we can state Theorem 2.3 below, which is the main result of this paper.Recall first that given any Hausdorff locally convex topological vector space (Y, τ Y ), then its topological dual space Y ′ can be endowed with a Hausdorff locally convex topology β(Y ′ , Y ) for which a basis of β(Y ′ , Y )-neighbourhoods of the 0 Y ′ -vector is formed by the following family of σ(Y ′ , Y )-closed sets Recall also that (Y, τ Y ) is called quasi-ℓ ∞ -barrelled if every bounded sequence in its strong dual space is equicontinuous (see [32,Section 12.1] or [38,Definition 8.2.13]).Here we have our main result: Theorem 2.3.Let (Y, τ Y ) be a quasi-ℓ ∞ -barrelled Hausdorff locally convex topological vector space, assume that its strong dual (X, τ X ) := (Y ′ , β(Y ′ , Y )) is a separable Fréchet space, and let T ∈ L(X) be the adjoint of some linear map S : Y −→ Y .Given a (non-zero) vector x 0 ∈ RRec(T ) ∩ ℓbo(T ) one can find a (non-trivial) T -invariant probability measure µ x 0 on (X, B(X)) such that Moreover, if the set RRec(T )∩ℓbo(T ) is dense in X, then there exits a T -invariant probability measure µ on (X, B(X)) with full support.In particular, the result is true for every operator T ∈ L(X) with respect to the weak topology σ(X, X ′ ) as soon as (X, τ X ) is a separable reflexive Fréchet space. The rest of this section is devoted to prove Theorem 2.3, but let us include some initial remarks: Remark 2.4.Let T ∈ L(X) be an operator acting on a Fréchet space X.Note that: (a) When X is Banach the equality ℓbo(T ) = X holds because the unit ball of X is a bounded set, so that Theorem 2.3 is just an extension of Corollary 2.1 to operators acting on Fréchet spaces. (b) When X is a Fréchet space which is not Banach: (b1) We have that X \ Rec(T ) ⊂ ℓbo(T ).Indeed, given any x ∈ X \ Rec(T ) there is some neighbourhood U of x such that U ∩ O T (x) is finite, so that X \ ℓbo(T ) ⊂ Rec(T ). (b2) If x ∈ Rec(T ) has a bounded orbit for T (that is, the set O T (x) is bounded in X) then x ∈ ℓbo(T ).In particular: if the vector x is T -periodic (that is, T p x = x for some p ∈ N) or if x is a unimodular T -eigenvector (that is, T x = λx with |λ| = 1) then x ∈ ℓbo(T ); and if the operator T is power-bounded then ℓbo(T ) = X (see Subsection 3.1 and Section 4). (b3) We have that HC(T ) ⊂ Rec(T ) \ ℓbo(T ).Indeed, given x ∈ HC(T ) and any neighbourhood U of x then U ∩ O T (x) is dense in U and not bounded since X is not Banach.Thus, if T is Devaney chaotic (that is, T has a hypercyclic vector and the T -periodic vectors are dense) then ℓbo(T ) is a dense but meager set in X (see Subsection 3.2 and Section 4). See Section 4 for more on this new concept of locally bounded orbit. Remark 2.5.The reader is referred to the textbooks [32,38] for details regarding the following facts: (a) The space (Y, τ Y ) in Theorem 2.3 has to be a separable quasi-barrelled (DF)-space.Indeed, since the strong dual space (Y ′ , β(Y ′ , Y )) is assumed to be separable we deduce that (Y, τ Y ) has to be separable, and hence quasi-barrelled by [38,Corollary 8.2.20], but we also know that (Y ′ , β(Y ′ , Y )) is a Fréchet space so that (Y, τ Y ) has a fundamental sequence of bounded sets (see [9,Corollary 5]). (b) Conversely to (a), and since the strong dual of any (DF)-space is always a Fréchet space (see for instance [32,Section 12.4]), we have that the hypothesis of Theorem 2.3 are satisfied as soon as the starting space (Y, τ Y ) is a (DF)-space with separable strong dual. 
(c) Note that, when (X, τ , the definition of strong topology implies that every vector x ∈ X admits a basis of τ X -neighbourhoods formed by σ(X, Y )-closed sets. (d) In the statement of Theorem 2.3 the sentence "T ∈ L(X) is the adjoint of S : Y −→ Y " means that "we have the dual-evaluation equality Su, x = u, T x for every pair (u, x) ∈ Y × X". We are now ready to prove Theorem 2.3.See Subsection 2.3 for some examples and extra remarks. Proof of Theorem 2.3 Let us start by showing that [27, Lemma 2.1] can be used in our Fréchet setting.Recall first that a topological space is called Lindelöf if every open cover of the space admits a countable subcover, and that a topological space is called hereditarily Lindelöf if every subspace of it is Lindelöf. Lemma 2.6.Let (Y, τ Y ) be a Hausdorff locally convex topological vector space, denote its strong dual space by (X, τ X ) := (Y ′ , β(Y ′ , Y )), and let T : X −→ X be a linear map.Then: Moreover, if the Hausdorff locally convex space (Y, τ Y ) is separable then: and if the Hausdorff locally convex space (X, τ X ) is hereditarily Lindelöf then: Proof.Property (α) is well-known (see [32,Section 8.6]) and (β) follows from the definition of σ(X, Y ) and τ X .Property (γ) is also known since for any τ Y -dense countable set can be checked to be a metric defining the weak topology on each σ(X, Y )-compact subset.For (δ) note first that B(X, σ(X, Y )) ⊂ B(X, τ X ) by (β).Conversely, let U ∈ τ X and for each ) is hereditarily Lindelöf we can obtain a countable sub-covering of U formed by σ(X, Y )-closed sets, which finally implies that U ∈ B(X, σ(X, Y )). We can now proceed with the proof of Theorem 2.3: let (Y, τ Y ) be a quasi-ℓ ∞ -barrelled Hausdorff locally convex topological vector space, assume that its strong dual space (X, τ X ) := (Y ′ , β(Y ′ , Y )) is a separable Fréchet space, and let T ∈ L(X) be the adjoint of some linear map S : Y −→ Y . Since the space (X, τ X ) is assumed to be separable and metrizable one can check that (Y, τ Y ) is also separable, and that (X, τ X ) is hereditarily Lindelöf.Hence Lemma 2.6 applies to our situation in its full generality and we have that: Claim 1.Given x 0 ∈ X and a σ(X, Y )-compact set K ⊂ X with Bd(N T (x 0 , K)) > 0, there exists a T -invariant probability measure µ on (X, B(X)) such that µ(K) > 0.Moreover, we have the inclusion Proof.In [27, Fact 2.3.1] it was shown that for any set A ⊂ N one can construct a Banach limit We repeat the explicit construction of such a Banach limit in Proposition 3.4 below, where we discuss the optimality of this construction (see Subsection 3.1).Hence there exists a Banach limit m K : Now we use the "locally bounded orbit"-assumption: Claim 2. Given a vector x 0 ∈ RRec(T ) ∩ ℓbo(T ) there exists a T -invariant probability measure µ x 0 on (X, B(X)) such that In particular, if x 0 = 0 then µ x 0 is a non-trivial T -invariant measure. Proof.Since the vector x 0 has a locally bounded orbit for T there exists a τ X -neighbourhood U of x 0 such that K := U ∩ O T (x 0 ) is a countable τ X -bounded set and hence an equicontinuous set in Y ′ by the quasi-ℓ ∞ -barrelled assumption on (Y, τ Y ).Moreover, we have that Hence we can apply Claim 1 to x 0 and each set for each n ∈ N. Then which also implies that µ x 0 is non-trivial when x 0 = 0 (see [27,Fact 2.3.2] for more details). To prove the part of Theorem 2.3 regarding the existence of a T -invariant measure with full support one can argue just as in [27, Theorem 2.3]: Claim 3. 
If the set RRec(T ) ∩ ℓbo(T ) is dense in X there exits a T -invariant probability measure µ on (X, B(X)) with full support. Proof.Given any countable dense subset {x k ; k ∈ N} ⊂ RRec(T ) ∩ ℓbo(T ) and applying Claim 2 to each vector x k we obtain a sequence (µ x k ) k∈N of T -invariant probability measures on (X, B(X)) such that x k ∈ supp(µ x k ) for each k ∈ N, and µ := k∈N 2 −k • µ x k fulfills the desired properties. Finally, and in order to complete the proof of Theorem 2.3, let us argue what happens for reflexive spaces.Recall first that a Fréchet space (X, τ X ) is called reflexive if the canonical inclusion from X into its strong bi-dual space X ′′ is an isomorphism, that is, the linear map where given x ∈ X the map J(x) : X ′ −→ K acts as [J(x)](u) = J(x), u := u, x for all u ∈ X ′ , and where β(X ′′ , X ′ ) is the strong topology induced on X ′′ by the locally convex space (X ′ , β(X ′ , X)).Since the strong dual of a Fréchet space is always a (DF)-space, in the previous situation we have that (Y, τ Y ) := (X ′ , β(X ′ , X)) is a (DF)-space; see [32,Section 12.4] and [36,Chapter 23] for more details. Claim 4. The conclusion of Theorem 2.3 holds for every operator T ∈ L(X) with respect to the weak topology σ(X, X ′ ) as soon as (X, τ X ) is a separable reflexive Fréchet space. (ii) The strong dual of (Y, τ X ) coincides with the separable Fréchet space (X, τ X ). (iii) The operator T ∈ L(X) can be seen as the adjoint of the linear map S : Y −→ Y defined as Su ∈ Y such that Su, x := u, T x for every x ∈ X. Indeed, the linear map S is the adjoint of T , and T coincides with the adjoint of S by reflexivity. By (i), (ii) and (iii) we have that the initial hypothesis of Theorem 2.3 are fulfilled by (Y, τ Y ), (X, τ X ) and T ∈ L(X), so that the conclusion holds.Moreover, note that the corresponding weak topology that appears in the statement σ(X ′′ , X ′ ) = σ(Y ′ , Y ) coincides with σ(X, X ′ ) by reflexivity. Remarks on Theorem 2.3 Let us include here some examples where Theorem 2.3 can be applied: Example 2.7 (Reflexive Fréchet spaces).The "reflexive" hypothesis is not too restrictive since plenty of interesting Fréchet spaces are reflexive.Moreover, the positive part of considering a reflexive space is that we have no restriction on the operators to which we can apply Theorem 2.3. Among these spaces we have the important class of Fréchet-Montel spaces like the space of all holomorphic functions H(Ω) on any open connected subset Ω ⊂ C equipped with the compact-open topology, also the space of smooth functions C ∞ (Ω) on any open subset Ω ⊂ R n equipped with the compact-open topology in all derivatives, or the space of all (real or complex) sequences ω = K N endowed with its usual coordinatewise convergence topology. We refer to [32,Chapter 11] for more on reflexivity. Example 2.8 (The case of (DF)-spaces with separable dual).Apart from the reflexive setting, and as pointed out in Remark 2.5, the conclusion of Theorem 2.3 also holds when we start considering a (DF)-space (Y, τ Y ) with separable strong dual and we pick an adjoint operator in that dual space.An important class of spaces fulfilling this property are the (LB)-spaces with separable dual. 
For instance one can consider the so-called Köthe sequence spaces (see [36,Chapter 27]).Indeed, for every Köthe matrix A = (a k,j ) k,j∈N , as defined in [36, Page 327] but also explicitly included in this notes (see Example 4.5 below), the respective Köthe space λ p (A) is the strong dual space of an inductive limit of countably many Banach spaces for every 1 ≤ p ≤ ∞.In particular, when 1 < p < ∞ then λ p (A) is always a reflexive Fréchet space (see [36,Proposition 27.3]) and, even though the spaces λ 1 (A) and λ ∞ (A) are not necessarily reflexive (see [36,Theorem 27.9: Dieudonné-Gomes]), we at least have that λ 1 (A) is always separable and the strong dual space of an inductive limit of c 0 -weighted spaces.Thus, Theorem 2.3 applies to every continuous weighted backward shift acting on λ 1 (A). In Section 4 we give explicit examples of locally bounded orbits for some classical operators acting on the already mentioned spaces H(C), ω and λ p (A), but let us end this part of the paper with a possible generalization of Theorem 2.3 for Polish dynamical systems: Remark 2.9.If one reads carefully the proof of Theorem 2.3 it is obvious that the construction holds because we assume the existence of a reiteratively recurrent point admitting a basis of neighbourhoods whose intersection with the respective orbit is compact for the weak topology.We could give a similar definition in the general Polish dynamical systems setting: Definition 2.10.Let T : X −→ X be a continuous map acting on a Polish space (X, τ X ) and let τ be any Hausdorff topology on X.We say that a point x ∈ X has a locally τ -compact orbit for T if any of the following equivalent conditions holds: Note that, when (Y, τ Y ) is a quasi-ℓ ∞ -barrelled Hausdorff locally convex topological vector space whose strong dual (X, τ X ) := (Y ′ , β(Y ′ , Y )) is a Fréchet space as in Theorem 2.3, then a vector x ∈ X has a locally σ(X, Y )-compact orbit for an operator T ∈ L(X) if and only if x ∈ ℓbo(T ).A completely similar proof to that of Theorem 2.3 shows now the following extension of [27, Theorem 2.3]: Theorem 2.11.Let (X, T ) be a Polish dynamical system.Assume that X is endowed with a Hausdorff topology τ which fulfills (α), (β), (γ) and (δ) with respect to the original Polish topology τ X on X.Given a point x 0 ∈ RRec(T ) with a locally τ -compact orbit for T one can find a T -invariant probability measure µ x 0 on (X, B(X)) such that Moreover, if the set of reiteratively recurrent points with locally τ -compact orbit for T is dense in X, then there exits a T -invariant probability measure µ on (X, B(X)) with full support. Applications, optimality, almost-F -recurrence and chaos In this section we apply Theorem 2.3 to certain linear dynamical systems T ∈ L(X) with many locally bounded orbits, obtaining for them a very strong equivalence between two generally distinguished recurrence notions.We also discuss the optimality of Theorem 2.3 in terms of Banach limits and Furstenberg families, which implies the optimality of the announced equivalence.We finally use again the concept of locally bounded orbit to extend, from the Banach to the Fréchet setting, two results from the recent works [12,14] regarding the notions of almost-F-recurrence and Devaney chaos. 
Applications and optimality of the measures constructed By Theorem 2.3 we can construct invariant measures from every single reiteratively recurrent vector and then we can show the existence of many frequently recurrent points, which have a much stronger recurrent behaviour than that of reiterative recurrence (see [27,Lemma 3.1]).These arguments were already exhibited in [27,Theorem 1.3] for adjoint operators acting on dual Banach spaces but now we can obtain an extension of such a result in our "dual Fréchet setting": Proposition 3.1.Let (Y, τ Y ) be a quasi-ℓ ∞ -barrelled Hausdorff locally convex topological vector space, assume that its strong dual (X, τ X ) := (Y ′ , β(Y ′ , Y )) is a separable Fréchet space, and let T ∈ L(X) be the adjoint of some linear map S : Y −→ Y .Then we have the inclusions In particular, this holds for every T ∈ L(X) as soon as (X, τ X ) is a separable reflexive Fréchet space. If we guarantee that every reiteratively recurrent vector has a locally bounded orbit, then we recover the main results from [27, Section 3] but in our more general "dual Fréchet setting": Corollary 3.2.Let (Y, τ Y ) be a quasi-ℓ ∞ -barrelled Hausdorff locally convex topological vector space, assume that its strong dual (X, τ X ) := (Y ′ , β(Y ′ , Y )) is a separable Fréchet space, and let T ∈ L(X) be the adjoint of some linear map S : Y −→ Y .If we suppose that RRec(T ) ⊂ ℓbo(T ) then and, moreover, the next statements are equivalent: (i) T admits an invariant probability measure with full support; (ii) T is frequently recurrent (that is, the set FRec(T ) is dense in X); (iii) T is reiteratively recurrent (that is, the set RRec(T ) is dense in X). In particular, the inclusion RRec(T ) ⊂ ℓbo(T ) holds if we assume any of the following conditions: -the space (X, τ X ) is Banach (that is, (X, τ X ) is a locally bounded Fréchet space); -or the operator T ∈ L(X) is power-bounded (that is, every T -orbit is bounded). Note that if (X, τ X ) is a Banach space, or if T ∈ L(X) is a power-bounded operator, then we clearly have the inclusions RRec(T ) ⊂ X ⊂ ℓbo(T ). We consider worth mentioning again that Corollary 3.2 contains [27, Theorem 1.3], which is the original Banach version of the result.Moreover, following the arguments employed in [27] one can prove extended versions of Proposition 3.1 and Corollary 3.2 for "product" and "inverse" linear dynamical systems.This was deeply studied in [27, Sections 5 and 6] and we will not develop it further here. Let us now focus on the optimality of the measures obtained in Section 2. First of all we have to mention that Theorem 2.3 does not hold, and hence Proposition 3.1 and Corollary 3.2 are no longer true, outside the "dual/reflexive setting" described in Section 2. Indeed, in [11, Section 5] there are explicit examples of linear dynamical systems, acting on non-dual spaces, that have plenty of reiteratively recurrent vectors (they have a co-meager and hence dense set of such vectors) but no non-zero frequently recurrent vector, so that the only invariant probability measure that these operators admit is the trivial Dirac delta δ 0 (see [11,Theorem 5.7 and Corollary 5.8]). 
A second question regarding the optimality of Theorem 2.3 is whether or not we can weaken the "reiterative recurrence" assumption. Indeed, if we could construct invariant measures from vectors presenting a weaker recurrent behaviour than that of reiterative recurrence, then Proposition 3.1 and also Corollary 3.2 would show the existence of frequently recurrent vectors starting from a condition weaker than reiterative recurrence. We are about to show that we cannot find such a weaker property, but in order to give a complete answer to this question let us recall the following definitions, already used and deeply studied in the works [7,8,10,11,12,13,14,27,28,35]:
Definition 3.3. If we denote by P(N) the power set of the set of positive integers N, then:
(a) a collection of sets F ⊂ P(N) is called a Furstenberg family (or just a family for short) if ∅ ∉ F and for every A ∈ F the inclusion A ⊂ B ⊂ N implies that B ∈ F;
(b) and given an operator T ∈ L(X) and a Furstenberg family F ⊂ P(N), a vector x ∈ X is called F-recurrent for T if for every neighbourhood U of x the return set N_T(x, U) = {n ∈ N ; T^n x ∈ U} belongs to F. We will denote by FRec(T) the set of F-recurrent vectors for T, and we will say that the operator T is F-recurrent whenever the set FRec(T) is dense in X.
Note that frequent and reiterative recurrence, as defined in the Introduction of this paper, are two particular cases of F-recurrence that appear precisely when F is chosen to be:
- the family of sets with positive lower density, which will be denoted, as in [8,14,27], by D := {A ⊂ N ; dens(A) > 0};
- or the family of sets with positive upper Banach density, which will be denoted, as in [8,14,27], by BD := {A ⊂ N ; Bd(A) > 0}.
Therefore, our search for a property weaker than "reiterative recurrence", yet still enabling us to derive the conclusion of Theorem 2.3, can be formulated (and was implicitly asked by A. Avilés to the author of this paper in the context of the original Banach space result [27, Theorem 1.3]) as follows:
- Is there any Furstenberg family F ⊂ P(N) fulfilling that BD ⊊ F and such that the conclusion of Theorem 2.3 still holds for every vector x_0 ∈ FRec(T) ∩ ℓbo(T)?
Recall that, in Claim 1 of Theorem 2.3, it is necessary that, given a set A ∈ BD, one can find a Banach limit m_A : ℓ∞ −→ R such that m_A(1l_A) = Bd(A) > 0, since this Banach limit is crucial to construct the strictly positive invariant measure required. Thus, the optimal form of Theorem 2.3 in terms of Furstenberg families would appear if we replace BD by the "apparently new" family
BL := {A ⊂ N ; there exists a Banach limit m_A : ℓ∞ −→ R with m_A(1l_A) > 0}.
Proposition 3.4. We have the following equality of Furstenberg families BD = BL.
Proof. The inclusion BD ⊂ BL was already discussed in [27, Fact 2.3.1] and follows since, given A ∈ BD, we can find a strictly increasing sequence of positive integers (N_k)_{k∈N} and a sequence of intervals of positive integers (I_k)_{k∈N} with #I_k = N_k and #(A ∩ I_k)/N_k → Bd(A), so that the functional m_A(φ) := lim_{k→U} (1/N_k) Σ_{n∈I_k} φ_n, where U ⊂ P(N) is a fixed non-principal ultrafilter on N, is a Banach limit for which m_A(1l_A) = Bd(A) > 0. Conversely, given a set A ∈ BL there is a Banach limit m_A : ℓ∞ −→ R such that m_A(1l_A) > 0.
Using now [40, Theorem 1], which asserts that the maximum value that a Banach limit can attain on a sequence φ = (φ_n)_{n∈N} ∈ ℓ∞ is precisely the value given by the functional M : ℓ∞ −→ R with M(φ) := lim_{n→∞} sup_{m≥0} (1/n) Σ_{i=1}^{n} φ_{m+i}, we clearly have that Bd(A) = M(1l_A) ≥ m_A(1l_A) > 0 and hence A ∈ BD.
Proposition 3.4 shows that the measures from Theorem 2.3 (but also those constructed in [27]) are optimal in terms of Banach limits and Furstenberg families. This observation slightly improves the classical result of Oxtoby and Ulam [37, Theorem 1] for Polish dynamical systems:
Proposition 3.5. Let (X, T) be a Polish dynamical system acting on the Polish space (X, τ_X). The following statements are equivalent and optimal in terms of Banach limits and Furstenberg families:
(i) there exists a positive finite T-invariant Borel measure µ on (X, B(X));
(ii) there exist a point x ∈ X and a τ_X-compact set K ⊂ X such that lim sup_{N→∞} #(N_T(x, K) ∩ [1, N])/N > 0;
(iii) there exist a Hausdorff topology τ on X fulfilling the properties (α), (β), (γ) and (δ) with respect to τ_X, a point x ∈ X and a τ-compact set K ⊂ X such that Bd(N_T(x, K)) > 0.
Proof.
Almost-F-recurrence and Devaney chaos on dual Fréchet spaces
We finish Section 3 by showing two more applications of locally bounded orbits, which are not related to the existence of invariant measures but which also use the "dual Fréchet setting" from Section 2. Both applications are based on recent investigations of Rodrigo Cardeccia and Santiago Muro, who have successfully used the "adjoint operators acting on dual Banach spaces" setting to obtain strong results regarding the notions of almost-F-recurrence and Devaney chaos (see [14] and [12]). Given a Furstenberg family F ⊂ P(N) we say that an operator T ∈ L(X) is almost-F-recurrent if for every non-empty open subset U of X there exists a vector x_U ∈ U such that the return set N_T(x_U, U) belongs to F. We remark that the notion of almost-F-recurrence is highly inspired by the so-called P_F property introduced in 2018 by Puig [39]: an operator T ∈ L(X) has the P_F property if for every non-empty open subset U of X there exists a vector x_U ∈ X such that the return set N_T(x_U, U) belongs to F. The only difference between these two concepts is the requirement "x_U ∈ U", so that almost-F-recurrence is slightly stronger than the P_F property. However, both concepts coincide whenever F ⊂ P(N) is left-invariant, that is, if for every A ∈ F and k ∈ N the set (A − k) ∩ N belongs to F, where A − k := {a − k ; a ∈ A}. Indeed, note that if F is left-invariant and there exists some x_0 ∈ X \ U fulfilling that N_T(x_0, U) ∈ F, and if n_0 := min(N_T(x_0, U)), then the vector x_U := T^{n_0} x_0 ∈ U clearly fulfills that (N_T(x_0, U) − n_0) ∩ N = N_T(x_U, U) and hence N_T(x_U, U) ∈ F. It is worth mentioning that the usual families considered in the literature (such as BD and D mentioned in Subsection 3.1) are left-invariant (see also [10], [14, Section 3] or [28, Example 4.2]), so it is natural to just focus on the similarities/differences between almost-F-recurrence and the standard notion of F-recurrence.
As observed in [14, Section 3], since the definition of F-recurrence (see Definition 3.3) requires the density of the set FRec(T ), and since the F-recurrent vectors return to each of their neighbourhoods with "frequency F", it follows from Definitions 3.3 and 3.6 that the concept of almost-F-recurrence is (at least formally) weaker than that of F-recurrence.Indeed, it is asked in [14, Section 5] and still open for the moment, if both properties coincide or not for continuous linear operators.This question encourages to search for results similar in spirit to [27,Theorem 1.3] and Corollary 3.2, where several notions of F-recurrence are shown to coincide for different Furstenberg families.In fact, one of the main lines of though in the recent work from Cardeccia and Muro [14] is to search for families F = G ⊂ P(N) fulfilling that almost-F-recurrence and almost-G-recurrence are equivalent properties.This led to the so-called block families (see [14,Definition 3.3] but also [24,31,33]): Definition 3.7.For a Furstenberg family F ⊂ P(N) we define bF, the associated block family, in the following way: a set B ⊂ N belongs to bF if there exists some A B ∈ F such that for each finite subset F ⊂ A B there is some Roughly speaking, the block family bF obtained from a given F is the collection of sets that contain every finite block from a fixed set of the original family, but possibly translated.Some general basic properties (such as the inclusion F ⊂ bF) and examples (such as the equality bD = BD) are exposed in [14,Section 3], and the authors prove in [14,Theorem 3.12] the equivalence between the notion of almost-F-recurrence and that of almost-bF-recurrence, for adjoint operators acting on dual Banach spaces and every left-invariant Furstenberg family F ⊂ P(N).Using locally bounded orbits and the "dual Fréchet setting" exposed at Section 2 we can obtain the following extension of such a result: Theorem 3.8.Let (Y, τ Y ) be a quasi-ℓ ∞ -barrelled Hausdorff locally convex topological vector space, assume that its strong dual (X, τ X ) := (Y ′ , β(Y ′ , Y )) is a separable Fréchet space, and let T ∈ L(X) be the adjoint of some linear map S : Y −→ Y .If the set [bF]Rec(T ) ∩ ℓbo(T ) is dense in (X, τ X ) for a left-invariant Furstenberg family F ⊂ P(N), then T is almost-F-recurrent. Proof.Let U be an arbitrary but fixed non-empty τ X -open subset of X.By assumption there is some x 0 ∈ U ∩ [bF]Rec(T ) ∩ ℓbo(T ).Since x 0 has a locally bounded orbit we can find a σ(X, Y )-closed τ X -neighbourhood V of x 0 fulfilling that V ∩ O T (x 0 ) is a countable τ X -bounded set and hence an equicontinuous set in Y ′ by the quasi-ℓ ∞ -barrelled assumption on (Y, τ Y ).Without lost of generality we can assume that V ⊂ U , and by the Alaoglu-Bourbaki theorem (see [32,Section 8.5]) we have that Note also that we have the inclusions which imply that N T (x 0 , K) ∈ bF by the hereditarily upward property of the Furstenberg family F. 
By definition of block family there exists a set A ∈ F such that for every finite subset F ⊂ A there is some Following now the proof of [14, Theorem 3.12], for each n ∈ N let A n := A ∩ [1, n] and pick some a n ∈ N ∪ {0} such that T an+r (x 0 ) ∈ K for every r ∈ A n .Since K is σ(X, Y )-compact and (Y, τ Y ) is separable, property (γ) from Lemma 2.6 shows that K is also σ(X, Y )-metrizable.Thus, there exist a subsequence (a n k ) k∈N and a vector x U ∈ K satisfying that (T an k +r 0 (x 0 )) k∈N is σ(X, Y )-convergent to x U ∈ K ⊂ U .We finally claim that (A − r 0 ) ∩ N ⊂ N T (x U , U ) and hence that N T (x U , U ) ∈ F by left-invariance, which finishes the proof since U was arbitrary.Indeed, for any r ∈ A with r > r 0 , T r−r 0 (x U ) = T r−r 0 σ(X, Y ) -lim k→∞ T an k +r 0 (x 0 ) = σ(X, Y ) -lim k→∞ T an k +r (x 0 ), and T an k +r (x 0 ) ∈ K provided that r ∈ A n k , that is, as soon as n k > r.The σ(X, Y )-compactness of the set K implies that T r−r 0 (x U ) ∈ K ⊂ U and hence r − r 0 ∈ N T (x U , U ) as we had to show. We refer the reader to [14, Section 3] for more on almost-F-recurrence and the possible consequences of Theorem 3.8.Let us now focus on the notion of chaos: a linear dynamical system T ∈ L(X) is called Devaney chaotic (or just chaotic for short) if T is hypercyclic and the set of T -periodic vectors is dense.In [12,Theorem 3.11] Cardeccia and Muro characterize the notion of chaos, for adjoint operators acting on dual Banach spaces, as a concrete case of F-hypercyclicity: they introduce the family AP b ⊂ P(N) of sets containing arbitrarily long arithmetic progressions with a fixed common bounded difference, and they show that an adjoint operator acting on a dual Banach space T ∈ L(X) is chaotic if and only if T is AP b -hypercyclic, but also if and only if the operator T is hypercyclic and has dense small periodic sets.We can again extend this result to our "dual Fréchet setting": Theorem 3.9.Let (Y, τ Y ) be a quasi-ℓ ∞ -barrelled Hausdorff locally convex topological vector space, assume that its strong dual (X, τ X ) := (Y ′ , β(Y ′ , Y )) is a separable Fréchet space, and let T ∈ L(X) be the adjoint of some linear map S : Y −→ Y .The following assertions are equivalent: (ii) T is hypercyclic and has dense small bounded periodic sets; (iii) T is (Devaney) chaotic. The interested reader can find the precise definition of the previous concepts in [12], but let us just mention that T ∈ L(X) is said to have dense small bounded periodic sets if every non-empty open subset U ⊂ X contains a bounded set Y ⊂ U such that T p (Y ) ⊂ Y for some p ∈ N. The proof is analogous to that of [12,Theorem 3.11], but using the arguments of Theorems 2.3 and 3.8 regarding locally bounded orbits, and we just include a sketch of the proof: Sketch to prove Theorem 3.9.For (iii) ⇒ (i) recall that every periodic vector is trivially AP b -recurrent; to prove (i) ⇒ (ii) one can adapt the arguments employed in both Theorems 2.3 and 3.8 regarding locally bounded orbits, but following the proof of [12, Proposition 3.9 and Lemma 3.10]; and the final implication (ii) ⇒ (iii) follows as in [12,Theorem 3.11] because every bounded set in (X, τ X ) is relatively σ(X, Y )-compact (recall that (Y, τ Y ) is quasi-barrelled by separability, see Remark 2.5). 
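To spell out the first implication in the sketch above (this merely expands the remark that periodic vectors are trivially AP_b-recurrent; it adds nothing beyond it): if x is T-periodic with period p and U is any neighbourhood of x, then
\[
T^{p}x = x \in U
\quad\Longrightarrow\quad
\{\,p,\,2p,\,3p,\,\dots\,\} \subset N_T(x,U),
\]
and a set containing the whole progression pN contains arbitrarily long arithmetic progressions with the fixed common difference p, so N_T(x, U) ∈ AP_b for every neighbourhood U of x.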
Note that Theorems 2.3, 3.8 and 3.9 are just modest extensions to the Fréchet-space setting of three results originally proved for operators acting on Banach spaces.However, and as we are about to show in the following section, the main classical examples of operators in Linear Dynamics present plenty of locally bounded orbits so that the results obtained here will usually be applicable. More on locally bounded orbits In this final section we elaborate further on locally bounded orbits (see Definition 2.2).In particular, we study the topological and dynamical structure of the set ℓbo(T ), we give some explicit examples of locally and non-locally bounded orbits for operators acting on (non-Banach) Fréchet spaces, and we argue which kind of strong recurrence is needed for a locally bounded orbit to be bounded.We also include a final subsection with some open problems regarding the main results of this paper. Stability results and explicit examples When studying a set of vectors with some dynamical property with respect to an operator T ∈ L(X), in our case the set ℓbo(T ), many properties may be observed.First of all we can look at the size of such a set: we have already argued in Remark 2.4 that ℓbo(T ) can be the whole space, it could also be a dense but meager set, or even the singleton set formed by the zero-vector ℓbo(T ) = {0} (see [26]).Let us show that, at least, the set ℓbo(T ) always contains the linear span of the T -eigenvectors: Proposition 4.1.For every T ∈ L(X) acting on (real or complex) Fréchet space X we have the inclusion span(Eig(T )) ⊂ ℓbo(T ), where Eig(T ) := {x ∈ X ; T x = λx for some λ ∈ K}. Note that the linear subspace span(Eig(T )) ⊂ X is a T -invariant set for every operator T ∈ L(X), so that a natural property to look at is the T -invariance of ℓbo(T ): Proposition 4.2.If a continuous linear operator T ∈ L(X) is invertible, then T (ℓbo(T )) ⊂ ℓbo(T ). Proof.Given any x 0 ∈ ℓbo(T ) set x 1 := T (x 0 ).By definition there exists a neighbourhood U 0 of x 0 such that U 0 ∩ O T (x 0 ) is a bounded set, and by the continuity of T −1 we can find a neighbourhood U 1 of x 1 such that T −1 (U 1 ) ⊂ U 0 .We claim that U 1 ∩ O T (x 1 ) ⊂ T (U 0 ∩ O T (x 0 )), which would finish the proof since the image of a bounded set by a continuous operator is again bounded.Indeed, given any vector x ∈ U 1 ∩ O T (x 1 ) there exists some n ∈ N such that x = T n x 1 = T n+1 x 0 ∈ U 1 , but then the vector y := T −1 (x) = T n−1 x 1 = T n x 0 fulfills that y ∈ U 0 ∩ O T (x 0 ) and x = T y. Let us apply Propositions 4.1 and 4.2 to the following well-known class of operators: In the space of entire functions we can also consider the standard differential operator, commonly known as the MacLane operator, D : The operator D is not invertible but chaotic (see again [30,Example 2.35]), so that Proposition 4.2 does not apply although the set ℓbo(D) is again a dense but meager set in H(C).A function f ∈ H(C) belongs to ℓbo(D) if and only if there exist k 0 ∈ N, ε > 0 and w = (w j ) j∈N ∈ ]0, +∞[ N such that ∀n ∈ N with max where we are using the notation f (n) (z) := [D n (f )](z). 
Birkhoff and MacLane operators are a particular case of the so-called differential operators, usually denoted by ϕ(D) : H(C) −→ H(C), where ϕ ∈ H(C) is an entire function of exponential type acting on the standard differential operator D : H(C) −→ H(C).These operators were originally studied by Godefroy and Shapiro (see [25,Section 5]) and it is well-known that for each a ∈ C \ {0} we have the equality T a = e aD but also D = p(D) for the polynomial p(z) = z (see [30,Section 4.2] for more details).It is clear that a general differential operator ϕ(D) is not necessarily invertible, so that Proposition 4.2 can not always be applied.However, Godefroy and Shapiro showed that ϕ(D) is a chaotic operator, and hence ℓbo(ϕ(D)) is a dense but meager set in H(C), as soon as ϕ is a non-constant function (see [25,Theorem 5.1]).Note that an entire function f ∈ H(C) belongs to ℓbo(ϕ(D)) if and only if there exist k 0 ∈ N, ε > 0 and w = (w j ) j∈N ∈ ]0, +∞[ N such that ∀n ∈ N with max Finally, given any differential operator ϕ(D) we can apply Proposition 4.1 obtaining that the linear span of the exponential functions A := span{e λz ; λ ∈ C} is contained in ℓbo(ϕ(D)).Indeed, it is well-known and not hard to check that ϕ(D)(e λz ) = ϕ(λ) • e λz for every λ ∈ C, so that e λz ∈ Eig(ϕ(D)).Moreover, A is a dense set (see the nice proof of [30,Lemma 2.34] originally from [1,Sublemma 7]), so that A is a dense ϕ(D)-invariant subalgebra of (H(C), +, •) with respect to the usual addition and pointwise product of entire functions, contained in ℓbo(ϕ(D)) for every differential operator ϕ(D).In summary, differential operators have plenty of locally bounded orbits. Let us show that the set of locally bounded orbits ℓbo(T ) is not necessarily T -invariant when the studied operator T ∈ L(X) is not invertible: Example 4.4 (The backward shift on the space of all sequences).Let ω = K N be the space of all (real or complex) sequences endowed with the standard Fréchet topology of convergence in all coordinates (see for instance [30,Example 2.2]).Consider the backward shift operator B : ω −→ ω, which acts as B((x j ) j∈N ) := (x j+1 ) j∈N for each x = (x j ) j∈N ∈ ω.It is well-known and easy to check that B is chaotic, so that ℓbo(B) is a dense but meager set in ω, and a sequence x = (x j ) j∈N ∈ ω belongs to ℓbo(B) if and only if there exist k 0 ∈ N, ε > 0 and w = (w j ) j∈N ∈ ]0, +∞[ N such that ∀n ∈ N with max Note that the set ℓ ∞ K := {(x j ) j∈N ∈ K N ; sup j∈N |x j | < +∞} of all (real or complex) bounded sequences is a dense linear subspace of ℓbo(B).Indeed, ℓ ∞ K is even a dense B-invariant subalgebra of the Fréchet algebra (ω, +, •) with respect to the coordinatewise addition and product of sequences, which shows that the backward shift on ω presents plenty of locally bounded orbits. Contrary to Birkhoff operators, the backward shift B : ω −→ ω is not invertible and we claim that it does not satisfy the conclusion of Proposition 4.2.Indeed, one way of proving that B(ℓbo(B)) is not included in ℓbo(B) is the following: we construct a vector y = (y j ) j∈N ∈ ω \ ℓbo(B) such that y j > 0 for every j ∈ N (so that y is non-hypercyclic for B), and we consider z = (z j ) j∈N ∈ ω with since then we will have z ∈ ℓbo(B) but Bz = y / ∈ ℓbo(B).We construct y = (y j ) j∈N recursively: -Step 1: We start by fixing any finite word of positive numbers (y 1 , y 2 , y 3 , ..., y N ) ∈ ]0, +∞[ N , which will be the first N ≥ 1 coordinates of the final vector y = (y j ) j∈N ∈ ω. - 6 . 
-Step 3: We repeat Step 2 infinitely many times, but each time on the "new" and strictly longer finite sequence obtained from the previous application of Step 2, and we let y = (y j ) j∈N ∈ ω be the final limit sequence obtained from this recursive process. Note that y = (y j ) j∈N / ∈ ℓbo(B) since for every k ∈ N we have that the finite word (y 1 , y 2 , y 3 , ..., y k , M ) appears along the sequence (y j ) j∈N for every positive integer M ∈ N. Example 4.5 (The backward shift on Köthe spaces).Following [36,Chapter 27] we will say that an infinite matrix A = (a k,j ) k,j∈N of non-negative numbers is a Köthe matrix if it satisfies: (KM1) The inequality a k,j ≤ a k+1,j holds for all k, j ∈ N. (KM2) For each j ∈ N there exists some k ∈ N such that a k,j > 0. Given such a matrix A = (a k,j ) k,j∈N and 1 ≤ p < ∞ the Köthe space λ p (A) is defined as , and for p = ∞ the Köthe space λ ∞ (A) is defined as where (q k ) k∈N and (r k ) k∈N are sequences of seminorms defining the topology of λ p (A) and λ ∞ (A). Assume now that A = (a k,j ) k,j∈N is a Köthe matrix and that p ∈ [1, ∞] is a fixed value such that the backward shift operator B : λ p (A) −→ λ p (A), acting in λ p (A) as in Example 4.4, is well-defined and hence continuous by the Closed Graph Theorem.Using the known characterization of bounded sets for Köthe spaces (see [36,Lemma 27.5]) a vector x = (x j ) j∈N ∈ λ p (A) belongs to ℓbo(B) if and only if there exist k 0 ∈ N, ε > 0 and w = (w when p = ∞.The dynamical behaviour of B, but also the space λ p (A), strongly depend on the matrix A, so there are not general locally bounded orbits for B : λ p (A) −→ λ p (A).Indeed, λ p (A) could be a Banach space and hence ℓbo(B) = λ p (A), but also the operator B : ω −→ ω from Example 4.4 is a particular case of B : λ ∞ (A) −→ λ ∞ (A), precisely when a k,j = 1 for j ≤ k and 0 otherwise. We finish this subsection by showing that if a locally bounded orbit has a too strong recurrent behaviour then the orbit has to be bounded.This will be the case for uniformly recurrent locally bounded orbits.A vector x ∈ X is called uniformly recurrent for T ∈ L(X) if for every neighbourhood U of x the return set N T (x, U ) = {n ∈ N ; T n x ∈ U } has bounded gaps, that is, if for the strictly increasing sequence of integers (n k ) k∈N forming the set N We will denote by URec(T ) the set of uniformly recurrent vectors for T , and we will say that the operator T is uniformly recurrent whenever the set URec(T ) is dense in X. Uniform recurrence is a very strong notion that has sometimes been called "almost periodicity" (see for instance [11]).It is not hard to check that the orbit of a uniformly recurrent vector for an operator T ∈ L(X) is bounded when X is a Banach space, but as shown in [11,Example 3.3], not necessarily bounded when X is just a (non-Banach) Fréchet space.Let us show that local boundedness is the key in this fact: Proposition 4.6.Given an operator T ∈ L(X) acting on a Fréchet space X, the orbit of a uniformly recurrent vector x ∈ URec(T ) is locally bounded for T if and only if its orbit O T (x) is bounded.Hence, if the set URec(T ) ∩ ℓbo(T ) is non-meager then T is power-bounded and X = URec(T ). 
Proof.If there is a neighbourhood U of x such that U ∩ O T (x) is a bounded set and N ∈ N is the maximum gap between two consecutive elements from N T (x, U ), then one can check that which are bounded sets by the continuity of T .Hence, if URec(T ) ∩ ℓbo(T ) is a non-meager set then T is power-bounded by Banach-Steinhaus, and by [11,Theorem 3.1] we get that X = URec(T ).This last result is an extension of [11,Corollary 3.2] that shows, in some way, what kind of (weak) boundedness is exhibited by a uniformly recurrent orbit in a Fréchet space (see [11,Section 3]).We would also like to point out that Proposition 4.6 does not hold for locally bounded orbits with a weaker recurrent behaviour than that of uniform recurrence: First of all, we may assume that min(A k ) > k for every k ∈ N by taking a subsequence of (A k ) k∈N if necessary.Let (m s ) s∈N ∈ N N be the increasing sequence of integers forming the set k∈N A k .We now construct the vector x = (x j ) j∈N ∈ ω recursively: -Step 1: We start by letting x 1 = 1, and x j = 0 for every 1 < j ≤ m 1 in case that 1 < m 1 .We have fixed the first "m 1 " coordinates of the final vector x = (x j ) j∈N ∈ ω. -Step 3: We repeat Step 2 infinitely many times by considering each time m s for s ≥ 2, and using that m s+1 − m s ≥ l as soon as m s ∈ A l by ( * ), but also that m s > l by the assumptions on (A k ) k∈N .We let x = (x j ) j∈N ∈ ω be the final limit sequence obtained from this recursive process. From the previous construction it is not hard to check that x = (x j ) j∈N fulfills the characterization of locally bounded orbit given in Example 4.4 for B : ω −→ ω with the parameters k 0 := 1, any positive value 0 < ε < 1, and the sequence w = (w j ) j∈N := (j) j∈N ∈ ]0, +∞[ N .However, the set O B (x) is not bounded since x = (x j ) j∈N / ∈ ℓ ∞ K = {(x j ) j∈N ∈ K N ; sup j∈N |x j | < +∞}.Moreover, the construction implies that for each positive integer k ∈ N we have the following equality max 1≤j≤2k+1 [B n (x)] j − x j = 0 for every n ∈ A 2k+1 ∈ F. We deduce that for each neighbourhood U of x in ω, with respect to the topology of convergence in all coordinates, the return set N B (x, U ) belongs to F and hence x ∈ FRec(B).We would like to mention that the families F ⊂ P(N) fulfilling ( * ) have been called hypercyclicity sets in [7].Assuming ( * ) it is also not hard to construct an F-hypercyclic vector for the backward shift B : ω −→ ω.An operator S ∈ L(Y ) is said to be quasi-conjugate (resp.conjugate) to a second operator T ∈ L(X) if there exists a continuous map J : X −→ Y for which S • J = J • T and such that J has dense range (resp.J is an homeomorphism); and a property P is said to be preserved under (quasi-)conjugacy when the following holds: if an operator T ∈ L(X) has property P then every operator S ∈ L(Y ) that is (quasi-)conjugate to T also has property P .In Example 4.4 we have shown that locally bounded orbits are not preserved under quasi-conjugacy since B : ω −→ ω is quasi-conjugate to itself by taking J = B = S and there exists z ∈ ℓbo(B) such that Bz / ∈ ℓbo(B).This motivates the following problem: Problem 4.9.Are locally bounded orbits preserved under conjugacy? Some problems Note that given T ∈ L(X), S ∈ L(Y ) and an homeomorphism J : X −→ Y with S • J = J • T , if J is linear then the same arguments from Proposition 4.2 show that J(ℓbo(T )) = ℓbo(S) and also that ℓbo(T ) = J −1 (ℓbo(S)), so that the problem here is knowing what happens if J is non-linear. 
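Two displays in the preceding passage appear to have been lost in extraction: the boundedness condition that completes the characterization of ℓbo(B) from Example 4.4 (the characterization the recursive construction above is checked against), and the covering of the orbit used in the proof of Proposition 4.6. The following are plausible reconstructions, inferred from the surrounding text and from the analogous Birkhoff-operator characterization quoted later in this section; they are a sketch, not the authors' original displays.

```latex
% Plausible completion of the characterization of lbo(B) from Example 4.4,
% modelled on the Birkhoff-operator characterization quoted later in the text.
\[
x \in \ell\mathrm{bo}(B) \iff \exists\, k_0 \in \mathbb{N},\ \varepsilon > 0,\
  w = (w_j)_{j \in \mathbb{N}} \in\ ]0, +\infty[^{\,\mathbb{N}}
  \ \text{such that, for all } n \in \mathbb{N},
\]
\[
\max_{1 \le j \le k_0} \bigl| x_j - [B^n x]_j \bigr| < \varepsilon
  \ \Longrightarrow\ \bigl| [B^n x]_j \bigr| \le w_j \quad \text{for every } j \in \mathbb{N}.
\]
% Plausible covering used in the proof of Proposition 4.6: with N the maximum gap
% between consecutive return times in N_T(x, U), every orbit point lies at most
% N steps after a point of the bounded set U \cap O_T(x), so that
\[
O_T(x) \ \subseteq\ \bigcup_{i=0}^{N} T^i\bigl( U \cap O_T(x) \bigr),
\]
% and each T^i(U \cap O_T(x)) is bounded by the continuity of T.
```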
The following questions seem to be non-trivial and all of them are based on removing the locally bounded orbit assumption in Theorems 2.3, 3.8 and 3.9.Using Remark 2.5, we state them directly for adjoint operators acting on the dual of a (DF)-space: (i) ⇒ (ii) was already shown in[37, Theorem 1] by using similar arguments to those employed in[27, Lemma 3.1].We have (ii) ⇒ (iii) since τ X fulfills the properties (α), (β), (γ) and (δ) with respect to itself.Finally, (iii) ⇒ (i) follows from[27, Lemma 2.1].The optimality in terms of Banach limits, Furstenberg families and densities follows from Proposition 3.4 and [27, Lemma 2.1]. respectively).Let us start by the notion of almost-F-recurrence, name recently coined by Cardeccia and Muro for general Furstenberg families in [14, Definition 3.2], although this concept was previously considered for particular families (under very different names) by several authors such as Costakis and Parissis [19], Badea and Grivaux [2, Proposition 4.6] and Grivaux and Matheron [29, Section 2.5]: Definition 3.6 ([14, Definition 3.2] Example 4 . 3 ( Birkhoff, MacLane and differential operators).For a complex number a ∈ C\{0} consider the translation operator, also called the Birkhoff operator, T a : H(C) −→ H(C) on the space of entire functions endowed with the usual compact-open topology, where [T a (f )](z) := f (z + a) for each f ∈ H(C).The operator T a is invertible and chaotic (see [30, Example 2.35]), so that the set ℓbo(T a ) is T a -invariant by Proposition 4.2, and a dense but meager set in H(C).A function f ∈ H(C) belongs to ℓbo(T a ) if and only if there exist k 0 ∈ N, ε > 0 and w = (w j ) j∈N ∈ ]0, +∞[ N such that ∀n ∈ N with max |z|≤k 0 |f (z) − f (z + na)| < ε, then max |z|≤j |f (z + na)| ≤ w j ∀j ∈ N. Example 4 . 7 . From the Furstenberg families F ⊂ P(N) studied in the literature in the context of Linear Dynamics, the next notion of F-recurrence slightly weaker than uniform recurrence is that of frequent recurrence as defined in the Introduction.As we have mentioned in Subsection 3.1, this notion coincides with F-recurrence for the family F = D of positive lower density sets.A standard separation result such as[30, Lemma 9.5] shows that the family F = D has the following property: sequence of pairwise disjoint sets (A k ) k∈N ∈ F N such that:for every n ∈ A k and every n ′ ∈ A k ′ with n = n ′ , then |n − n ′ | ≥ max{k, k ′ }.Let us show that, given a Furstenberg family F ⊂ P(N) fulfilling ( * ), then we can construct for B : ω −→ ω a vector x ∈ FRec(B) ∩ ℓbo(B) such that O B (x) is not bounded: In Example 4.4 we show that the set ℓbo(T ) is not necessarily T -invariant when the operator T ∈ L(X) is not invertible, but one can check that the inclusion B(Rec(B) ∩ ℓbo(B)) ⊂ Rec(B) ∩ ℓbo(B) holds for the backward shift operator B in both Examples 4.4 and 4.5.This motivates our first problem: Problem 4.8.Is the inclusion T (Rec(T ) ∩ ℓbo(T )) ⊂ Rec(T ) ∩ ℓbo(T ) true for every continuous linear operator T ∈ L(X)?What about the inclusion T (X \ ℓbo(T )) ⊂ X \ ℓbo(T )?Note that if T (X \ℓbo(T )) ⊂ X \ℓbo(T ) is true, then T (Rec(T )∩ ℓbo(T )) ⊂ Rec(T )∩ ℓbo(T ) follows trivially from the following reasoning: given x ∈ Rec(T ) ∩ ℓbo(T ) and an open neighbourhood U of x such that U ∩ O T (x) is bounded, then we have that O T (x) ∩ U ⊂ Rec(T ) ∩ ℓbo(T ). Problems 4 . 10 . 
Let (Y, τ Y ) be a (DF)-space whose strong dual Fréchet space (X, τ X ) := (Y ′ , β(Y ′ , Y )) is separable and let T ∈ L(X) be the adjoint of some linear map S : Y −→ Y . Then we ask: (a) Given a vector x 0 ∈ RRec(T ) \ ℓbo(T ), does it follow that there exists a T -invariant probability measure µ on (X, B(X)) fulfilling that x 0 ∈ supp(µ)? (b) Given a non-empty open set U ⊂ X and a Furstenberg family F ⊂ P(N), does the existence of a vector x ∈ U ∩ [bF]Rec(T ) \ ℓbo(T ) imply the existence of z ∈ U fulfilling that N T (z, U ) ∈ F? Here E ◦ := {y ′ ∈ Y ′ ; |y ′ (y)| ≤ 1 for all y ∈ E} denotes the (absolute) polar of each set E ⊂ Y with respect to the dual pair (Y, Y ′ ). The topology β(Y ′ , Y ), whose basis of neighbourhoods of the origin is given by the polars of the bounded subsets of (Y, τ Y ), is called the strong topology on the space Y ′ induced by (Y, τ Y ), and the Hausdorff locally convex topological vector space (Y ′ , β(Y ′ , Y )) is called the strong dual of (Y, τ Y ); see [32, Chapter 8] for more on duality for locally convex spaces.
16,293.2
2024-01-24T00:00:00.000
[ "Mathematics" ]
Anti-Aging and Lightening Effects of Au-Decorated Zeolite-Based Biocompatible Nanocomposites in Epidermal Delivery Systems The main challenges in developing zeolites as cosmetic drug delivery systems are their cytotoxicities and the formation of drug-loading pore structures. In this study, Au-decorated zeolite nanocomposites were synthesized as an epidermal delivery system. Thus, 50 nm-sized Au nanoparticles were successfully deposited on zeolite 13X (super cage (α) and sodalite (β) cage structures) using the Turkevich method. Various cosmetic drugs, such as niacinamide, sulforaphane, and adenosine, were loaded under in vitro and in vivo observations. The Au-decorated zeolite nanocomposites exhibited effective cosmetic drug-loading efficiencies of 3.5 to 22.5 wt% under various conditions. For in vitro cytotoxic observations, B16F10 cells were treated with various cosmetic drugs. Niacinamide, sulforaphane, and adenosine-loaded Au-decorated zeolite nanocomposites exhibited clear cell viability of over 80%. Wrinkle improvement and a reduction in melanin content on the skin surface were observed in vivo. The adenosine delivery system exhibited an enhanced wrinkle improvement of 203% compared to 0.04 wt% of the pure adenosine system. The niacinamide- and sulforaphane-loaded Au-decorated zeolite nanocomposites decreased the skin surface melanin content by 123% and 222%, respectively, compared to 2 and 0.01 wt% of pure niacinamide and sulforaphane systems, respectively. As a result, Au-decorated zeolite nanocomposites show great potential as cosmetic drug epidermal delivery systems for both anti-aging and lightening effects. Introduction Recently, zeolites, which are biocompatible nanocomposites, have attracted attention due to their diverse structures, allowing for controlled and targeted drug delivery applications [1][2][3][4][5]. Zeolites, owing to their low cost, abundant natural availability, and mass production, have the advantage of commercial feasibility. Zeolites consist of various porous structures, including micropores, mesopores, and macropores. These pore structures allow for the delivery of different therapeutic agents to the targeted sites with controlled-release systems [6][7][8][9]. Silica-based mesoporous zeolites, particularly, have shown great potential for use in drug-release systems [10,11]. The ion-exchange ability and uniform structure of zeolites have direct benefits of absorbing and releasing organic or inorganic particles [9,12,13]. Due to the various sizes and hydrophilicity of drugs, the loading capacity and release rate are highly dependent on the pore sizes [14] and surface modification [15,16] of zeolites. However, there are several challenges in using zeolites as drug delivery systems. Because of the difference in hydrophilicity and pore sizes of the zeolite surface and drug molecules, the loading capacity could be limited, and the release rate could be too high. In addition, oxygenated functional groups of zeolites induce cytotoxicity and carcinogenic effects, resulting in the disruption of the cell structure as well as swelling in the mitochondria and squared cells [17][18][19]. An alternative to overcome these challenges is to introduce novel metal nanoparticles, such as Au, to zeolite surfaces. Au nanoparticles are ideal vehicles for targeted and selective drug delivery. Au nanoparticles have high biocompatibility, hydrophilicity, nonimmunogenicity, and low toxicity [20][21][22]. 
Furthermore, cytotoxicity of Au nanoparticles could be controlled by their shape, size, and densities [23][24][25]. Various deposition methods, such as vapor-phase deposition and grafting, sol-gel, and ion-exchange methods, have been developed for deposition, precipitation, co-precipitation, and impregnation [26]. Au nanoparticles are also deposited by the Turkevich method, developed by Brust et al., which induces a reaction with AuCl 4 − (using tetrachloroauric acid (HAuCl 4 )) and the reducing agent sodium borohydride in the presence of the desired ligand (thiol-terminated longchain alkane) [27,28]. Depending on the method used, Au nanoparticles of up to 20 nm in diameter can be deposited. The loading efficiency of Au nanoparticles strictly depends on the form of zeolite and surface modification via ion exchangeability [29]. For instance, the Au loading efficiency is usually higher for the ammonium form than that of the hydrogen form of zeolite [30]. In this study, a biocompatible nanocomposite, Au zeolite, was developed for the delivery of various cosmetic drugs. Zeolite 13X was first calcinated to free the super cage (α) and sodalite (β) cage structures (Scheme 1). The Au nanoparticles were subsequently decorated onto the zeolite nanocomposites by the Turkevich method to form Au zeolite. Finally, in vivo and in vitro observations were performed for three different molecules: niacinamide, sulforaphane, and adenosine. Niacinamide is a practical lightening substance recognized by the Ministry of Food and Drug Safety, which is known to cause skin-lightening by reducing the melanosome transfer from melanocytes to keratinocytes [31][32][33]. As well as hyaluronic-acid-based composites [34], sulforaphane is also an effective lightening agent that reduces melanin production and tyrosinase activity as an anti-inflammatory, antioxidant, and anti-cancer substance [35][36][37]. Adenosine is a wrinkle-improving agent supported by the Ministry of Food and Drug Safety and is known to enhance wound healing by binding to the A2A receptor and increasing collagen in the dermis [34,38]. Scheme 1. Synthesis of Au-decorated zeolite and its structure via reduction of Au. Materials Zeolite 13X powder and tetrachloroauric acid, used to decorate the zeolite 13X, were purchased from Sigma-Aldrich (MO, USA). Oleic acid (Sigma-Aldrich, MO, USA) was used to coat the zeolite after drug loading to reduce unnecessary release [39]. An alumina filter (pore size 0.2 µm, Cytiva, Seoul, Republic of Korea) was used for the filtering processes. Preparation of Calcinated Zeolite Zeolite 13X powder sized 3-5 µm, 10 Å pore size, and a bulk density greater than 0.61 g/mL was used. To increase the loading efficiency, the impurities in zeolite 13X were removed via calcination at a high temperature. The zeolite 13X powder was placed in a ceramic boat (85 mm × 50 mm × 20 mm), which was subsequently placed in an electric furnace (Hantech Co., Ltd. Gunpo-si, Gyeonggi-do, Republic of Korea). After loading into a vacuum environment, the temperature was increased (up to 450, 550, or 650 • C) at a rate of 2 • C/min and held for 6 h. After maintaining the process conditions for the required time interval, the temperature was decreased, and calcined zeolite was obtained. Synthesis of Au-Decorated Zeolite The pH of the calcined zeolite 13X was controlled using a solution of 1 N NaNO 3 , 1 N NaOH, and distilled water. Thus, 1 N NaNO 3 (1 L) was mixed with the zeolite, and the mixture was adjusted to pH 6 using 1 N NaOH. 
The pH-controlled zeolite was dispersed for 5 min using a sonicator (Ultrasonic Cleaner ABS, JAC-5020, Kodo, Hwaseong-si, Gyeonggido, Republic of Korea) and vacuum filtered using an alumina filter. Subsequently, the filtered powder was dried at 20 • C for 6 h. To decorate the zeolite with Au, 1.46 × 10 3 M gold chloride hydrate (0.620 g) was mixed with distilled water (250 mL). The pH was adjusted to 6 using 1 N NaOH, and dried zeolite (2 g) was dispersed for 5 min using a sonicator. To decorate the zeolite surface and pores with Au, the mixture was placed on a hot plate (PC-420D, Corning, New York, NY, USA). The electromagnetic force was maintained at 200 rpm for 24 h. After stirring, the residual tetrachloroauric acid was washed thrice using distilled water, and the dispersion was vacuum filtered using an aluminum filter. After filtering, the resulting zeolite was dried for 24 h at room temperature. Fabrication of Zeolite Loaded with Active Materials Each substance was loaded into zeolite pores. Before adding the materials, the entrapment efficiency of each material after filtering zeolite (1) was calculated using the following equation: where A represents the weight of the zeolite after filtering (g), 10 is the filtering loss (mg), and B is the weight of the zeolite materials before loading (g). Table 1 lists the weight ratios obtained when materials were included. Efficiencies from 1 to 10 mg/mL were measured and compared by weight ratio. Each active material was mixed with distilled water (100 mL) in appropriate ratios, and gold-decorated zeolite (2 g) was sonicated for 5 min and stirred at 200 rpm at 60 • C for 24 h. The dispersion was washed three times with distilled water to remove residual materials on the zeolite surface and vacuum filtered using an alumina filter. The retentate was dried at room temperature for 24 h. Coating of Zeolite with Organic Acid The zeolite was coated with oleic acid to reduce unnecessary release. After mixing distilled water with 2-10 wt% oleic acid, 2 g of zeolite was added and stirred at room temperature for 15 min. The remaining oleic acid was separated by centrifugation (Mega 17R, Hanil Sci-Med, Daejeon, Republic of Korea) at 10,000 rpm for 10 min. After vacuum filtering with an alumina filter, the coated zeolite was dried at room temperature and stored at a temperature below the melting point of oleic acid (15 • C). Microstructure and Loading Properties of Zeolite Material characterization of the zeolite was performed using various equipment. Calcination of the zeolite was performed using an electric furnace (Hantech Co., Ltd. Gunpo-si, Gyeonggi-do, Republic of Korea). After fabricating Au-decorated zeolite and loading efficient materials, scanning electron microscopy (SEM, Apreo, FEI, Hillsboro, OR, USA) was performed to study the structure of zeolite 13X and at different calcination temperatures. Energy-dispersive X-ray spectroscopy (EDX, Apreo, FEI, Hillsboro, OR, USA) was used to confirm the uniformity of the zeolite surface, and transmission electron microscopy (TEM, CM200, Philips, Amsterdam, Netherlands) was used to confirm the loading efficiency by assessing the scattering peaks. The scale bar was set to 100 nm and Equation (2) was used to measure the surface area. The Na/Si ratio and uniformity of gold in the zeolite were measured using XPS (K-Alpha plus, Thermo Fisher Scientific, USA). The surface area and particle size were measured via physisorption (Brunauer-Emmett-Teller (BET), ASAP 2020, Micromeritics, GA, USA). 
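The display for Equation (1) above (the entrapment efficiency expressed in terms of A, the zeolite weight after filtering, the 10 mg filtering loss, and B, the weight before loading) is not reproduced in the extracted text. Rather than guess its exact form, the sketch below illustrates a generic mass-balance estimate of drug loading, in which the drug remaining in the loading solution is read off a UV-vis standard curve (as in the Loading Efficiency Test subsection that follows) and subtracted from the amount offered. All numbers and the wt% definition are hypothetical placeholders, not the paper's Equation (1) or its data.

```python
import numpy as np

# --- UV-vis standard curve (hypothetical absorbance values, not study data) ---
conc_std = np.array([0.2, 0.4, 0.6, 0.8, 1.0])        # wt% standards
abs_std  = np.array([0.11, 0.23, 0.34, 0.46, 0.57])   # hypothetical peak absorbance
slope, intercept = np.polyfit(conc_std, abs_std, 1)   # linear Beer-Lambert calibration

def conc_from_abs(a):
    """Read a concentration (wt%) off the fitted calibration line."""
    return (a - intercept) / slope

# --- Generic mass-balance loading estimate (illustrative only) ---
solution_mass_g   = 100.0   # mass of loading solution
drug_offered_g    = 1.0     # drug initially dissolved in the solution
zeolite_mass_g    = 2.0     # Au-decorated zeolite added
abs_after_loading = 0.30    # hypothetical supernatant absorbance after loading

drug_left_g   = conc_from_abs(abs_after_loading) / 100.0 * solution_mass_g
drug_loaded_g = max(drug_offered_g - drug_left_g, 0.0)

loading_wt_pct = 100.0 * drug_loaded_g / (zeolite_mass_g + drug_loaded_g)
print(f"estimated drug loading: {loading_wt_pct:.1f} wt%")
```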
Loading Efficiency Test The absorbance of the materials in the zeolite was estimated using UV-vis-NIR spectrophotometry (Ultraviolet-Visible-Near-IR Spectroscopy, Lambda 750, Perkin Elmer, Waltham, MA, USA). The active materials were added to distilled water (100 mL) and stirred (comparison was performed after preparation by ratio). After the materials were dissolved, the absorbance of niacinamide, adenosine, and sulforaphane was measured at each concentration (0.2, 0.4, 0.6, 0.8, and 1.0 wt%). The peak was estimated for each material and a standard curve was generated. The loading efficiency of the coated zeolite onto the skin was measured using agarose gel electrophoresis. The coated zeolites were mixed into a permeated cream and immersed in agarose gel (3 g) for 5 min. The agarose gel was dissolved in deionized water (DIW) after removing the mixture from it, and the loading efficiency of the materials was measured. Cell Culture and Viability B16F10 melanoma cells (Korea Cell Line Bank) and HDF (Lonza, Switzerland) were cultured in DMEM containing 10% FBS and 1% penicillin/streptomycin. The cells were cultured in a 5% CO 2 incubator and maintained at 37 • C. MTT assay was used to determine the cytotoxicity of the samples. B16F10 melanoma cells were dispensed into a 96-well plate at a cell count of 1 × 10 4 cells per well and treated with different concentrations of niacinamide, sulforaphane, adenosine, niacinamide + Au zeolite, sulforaphane + Au zeolite, and adenosine + Au zeolite, and incubated in a CO 2 incubator at 37 • C for 24 h. Subsequently, a solution of 5 mg/mL MTT was added to each well, followed by incubation in an incubator (37 • C, 5% CO 2 ) for 4 h. A microplate reader (Biotek Synergy-HT, USA) was used to measure absorbance at 540 nm. The experiment was conducted three times under the same conditions. Melanin Content Assay B16F10 melanoma cells were dispensed into a 6-well plate at a cell count of 2 × 10 5 cells per well and incubated for 24 h in DMEM containing 10% FBS. After 24 h of incubation, the culture medium was replaced with phenol-red-free DMEM. Subsequently, 100 µm IBMX and samples at each concentration were added and incubated for 72 h at 37 • C. Following this, the supernatant from each well of the 6-well plate was transferred to a 96-well plate. The absorbance, measured at 490 nm using a microplate reader, was substituted into the standard melanin calibration curve to calculate the amount of extracellular melanin production. After the supernatants were separated and detached, they were resuspended in PBS by centrifugation at 13,000 rpm for 5 min. The supernatants were removed, and 1 N NaOH and 10% DMSO were added to the cell pellet. The samples were boiled at 60 • C for 1 h, dissolved in melanin, and transferred to a 96-well plate. A microplate reader was used to measure the absorbance at 490 nm. The process was repeated three times to obtain an average value for calculating the amount of melanin produced in each well. Melanin Content and Wrinkle Improvement Assay In Vivo The trials were conducted with more than 20 females over the age of 19 years, in a stable environment of constant temperature and humidity conditions (22 ± 2 • C and 50 ± 5%, respectively), in the absence of air current and direct sunlight. Each test subject stayed for 30 min under these conditions to ensure skin consistency. 
To measure the wrinkle depth, GC-A-AG was applied to the area under the left eye and GC-B was applied to the forehead; both creams were applied twice a day (morning and evening) during the final stage of the test subjects' skincare routine. Average wrinkle depth was measured using Antera 3D (Miravex Ltd., Dublin, Ireland) before testing and after four weeks of application. To measure melanin content, GC-B was used as a control. GC-N-NG and GC-S-SG were applied to the right cheek and GC-B was applied to the forehead; all three creams were applied twice a day (morning and evening) in the final stage of the test subjects' skincare routine. Skin color intensity and total melanin concentration (surface + inner) on the skin were calculated using Mark-Vu (PSI Plus Co., Ltd., Republic of Korea) before testing and after four weeks of application. Statistical Analysis All data were analyzed using one-way analysis of variance (ANOVA) for normally distributed values. Statistical significance was determined using one-way ANOVA followed by the Newman-Keuls multiple comparison test to analyze differences between the groups. Statistical analyses were performed using PRISM software (GraphPad Software, San Diego, CA, USA). Fabrication and Microstructure of Au-Decorated Zeolite Au-decorated nanocomposites were fabricated using various cosmetic drugs, such as niacinamide, adenosine, and sulforaphane. The zeolite was calcined, as the calcination process can reduce the surface and pore impurities. After calcination, the zeolite 13X structure formed super-cage and beta-cage structures with hollow pore surfaces, allowing for the loading of various cosmetic drugs. The powder was maintained at temperatures up to 650 • C, and the particles were separated from the zeolite ( Figure A1). To confirm the optimized temperature of the calcination process, the surface area of the zeolite at various temperatures (450, 550, and 650 • C) was observed. The surface areas were found to be 715.65 m 2 /g (450 • C), 738.80 m 2 /g (550 • C), and 661.01 m 2 /g (650 • C). Therefore, zeolite 13X was calcined at 550 • C due to the highest value of surface area at that temperature ( Figure A2). Schematic 1 shows the structure of zeolite 13X before and after Au decoration using the Turkevich method. Au nanoparticles, with an average size of 50 ± 10 nm, were successfully nucleated and grown on the surface of zeolite 13X (Figure 1a,b). Figure 1c,d show the diffraction patterns of the zeolite and Au-decorated zeolite. The length of the dots in the diffraction pattern was measured, and the distance (Å) was subsequently calculated, allowing for the confirmation of the characteristic distance of gold materials, as shown in Figure 1d Pore-Size Distribution and Drug-Loading Release Efficiency of Au-Decorated Zeolite To evaluate the efficiency of cosmetic drug loading and release of Au-decorated zeolite, the pore size and surface area were observed using BET. The BET surface area was measured to compare the zeolites before and after Au decoration. The surface area of zeolite was found to be 738.805 m 2 /g ( Figure 2a) and 515.339 m 2 /g (Figure 2b) before and after Au decoration, respectively. During Au decoration, the pores of the zeolite were partially coated and filled with Au nanoparticles, reducing the surface area by 30.4%. The average pore size of the zeolite also differed after Au decoration. 
Before Au decoration, the average pore size was 54.796 Å (Figure 2c); however, after Au decoration, the average pore size increased to 60.591 Å (Figure 2d). Three different cosmetic drugs were loaded onto the Au-decorated zeolite. Niacinamide, adenosine, and sulforaphane were loaded with oleic acid coating for entrapment. Among other fatty acids, such as capric acid, myristic acid, palmitic acid, and stearic acid, a small amount of oleic acid could reduce unnecessary release [40]. The loading efficiencies of niacinamide, adenosine, and sulforaphane were estimated using Equation 1 (Figure 2e). The loading efficiencies of niacinamide, adenosine, and sulforaphane were 10%, 4%, and 7%, respectively. Adenosine showed the highest loading efficiency, whereas niacinamide showed the lowest, mainly due to the differences in diffusion and adsorption on the zeolite surface. The delivery efficiency of each cosmetic drug was estimated via UV-vis absorption in an in vitro agarose gel. The reference for each material was first observed at different concentrations ( Figure A3). Using a reference concentration, the delivery efficiency of each material on zeolite was estimated. Compared with the loaded cosmetic drugs, niacinamide, adenosine, and sulforaphane exhibited delivery efficiencies of 2.6%, 5.2%, and 2.7%, respectively. B16F10 melanoma cells were dispensed into a 6-well plate at a cell count of 2 × 10 5 cells per well and incubated for 24 h in DMEM containing 10% FBS. After 24 h of incubation, the culture medium was replaced with phenol-red-free DMEM. Cells treated with 100 µm IBMX(e) and 50 nm α-MSH(d) were used as a negative control to induce melanin formation at different sample concentrations. The test resulted in a negative control group consisting of 50 nm α-MSH at 100% relative melanin content, compared to 23.8% for the control group. Sulforaphane, treated at a concentration of 0.18 and 1.8 µg/mL, caused melanin levels to decrease to 78.8% and 69.2%, respectively, in a concentration-dependent manner, demonstrating excellent lightening efficacy (Figure 3d). With 100 µm IBMX, the relative melanin content of the Au zeolite containing sulforaphane was 100%, whereas that of the control group was 17.2%. Au zeolite containing sulforaphane, treated with concentrations of 0.18, 0.80, 1.80, 3.60, and 1.8 µg/mL, resulted in 105.9%, 104.1%, 101%, and 97.7% relative melanin content, respectively (Figure 3e). This indicates a smaller drop in melanin; however, the concentration-dependent manner of Au zeolite containing sulforaphane was confirmed. Effect of Melanin Content and Wrinkle Improvement In Vivo The effect of the biocompatible nanocomposite, Au zeolite, as a drug delivery system was observed in terms of wrinkle depth and melanin content on the skin surface. The average wrinkle depth improvement rates in the test subjects for GC-B, GC-A, and GC-A-AG (***) were 5.97%, 8.46%, and 17.19%, respectively (Figure 4a) (*** p < 0.001 vs. control group). Wrinkle depth improvement relative to GC-B was 142% and 288% for GC-A and GC-A-AG, respectively (Figure 4b). The average reduction in melanin on the skin surface of the test subjects for GC-B, GC-N, GC-N-NG, GC-S, and GC-S-SG was 0.92%, 1.36%, 1.67%, 0.81%, and 1.70%, respectively (Figure 4c). The means for GC-N and GC-N-NG, the reduction in melanin on the skin surface relative to GC-B was 148% and 181%, respectively (Figure 4d). The relative average melanin content on the skin surface for GC-S and GC-S-SG was 88% and 185%, respectively (Figure 4d). 
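The "relative to GC-B" percentages quoted in this section appear to be plain ratios of each treatment's average change to the GC-B control average; the short check below uses the averages stated above (the plain-ratio reading is our interpretation of the text, not a formula given in the paper).

```python
# Ratios of treatment averages to the GC-B control average, using the values
# quoted in this section; small differences from the percentages in the text
# (e.g. 181% vs the computed ~182%) are rounding.
control_wrinkle = 5.97                                   # GC-B wrinkle-depth improvement (%)
wrinkle = {"GC-A": 8.46, "GC-A-AG": 17.19}

control_melanin = 0.92                                   # GC-B melanin reduction (%)
melanin = {"GC-N": 1.36, "GC-N-NG": 1.67, "GC-S": 0.81, "GC-S-SG": 1.70}

for name, value in wrinkle.items():
    print(f"{name}: {value / control_wrinkle:.0%} of GC-B")   # ~142%, ~288%
for name, value in melanin.items():
    print(f"{name}: {value / control_melanin:.0%} of GC-B")   # ~148%, ~182%, ~88%, ~185%
```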
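The group comparisons behind these figures are described in the Statistical Analysis subsection as one-way ANOVA followed by a Newman-Keuls post-hoc test in PRISM. A minimal sketch of the ANOVA step is shown below with hypothetical per-subject values (not the study data); the post-hoc pairwise comparisons would be run separately.

```python
from scipy import stats

# Hypothetical per-subject wrinkle-depth improvements (%) for three creams;
# placeholders only, not measurements from the study.
gc_b    = [5.1, 6.3, 5.8, 6.6, 6.0]
gc_a    = [8.0, 8.9, 8.2, 8.7, 8.5]
gc_a_ag = [16.9, 17.5, 17.0, 17.6, 16.8]

f_stat, p_value = stats.f_oneway(gc_b, gc_a, gc_a_ag)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.3g}")
# A Newman-Keuls (or similar) multiple-comparison test would follow to locate
# which pairs of groups differ.
```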
Discussion The microstructure of Au-decorated zeolite was observed by TEM and XPS analysis ( Figure 1). According to TEM images and diffraction patterns, 50 nm Au nanoparticles were successfully decorated on the free super-cage (α) and sodalite (β) cage structures of zeolite 13X via calcination and the Turkevich method. XPS analysis indicates that approximately 10% Au nanoparticles were successfully decorated onto the surface of zeolite 13X. Furthermore, the pore-size distribution shown in Figure 2 indicates that after Au decoration, Au nanoparticles only reduce the number of micropores less than 20 Å. Therefore, the effective drug-loading pores were not affected by Au decoration. As a result, the Au-decorated zeolite nanocomposites exhibited effective cosmetic drug-loading efficiencies of 3.5% to 22.5% under various conditions. The Au-decorated zeolite nanocomposites exhibited safe cytotoxicity, owing to the novel metal decoration. Au-decorated zeolite 13X showed in vitro cell viabilities of 80% and higher, at all concentrations ( Figure 3). Each material indicated that the biocompatible nanocomposite, Au zeolite, is a safe material, and the lightening efficacy of sulforaphane and Au zeolite material containing sulforaphane was evaluated at concentrations without cytotoxicity. Finally, the effect of melanin content and wrinkle improvement was observed as in vivo observation (Figure 4). Adenosine and niacinamide delivery systems were observed to have a lightening effect in vivo, whereas niacinamide and sulforaphane delivery systems were observed to have anti-aging effects. Improvement in wrinkles and reductions in melanin content on the skin surface were observed in vivo. The adenosine delivery system exhibited an enhanced wrinkle depth improvement of 203% compared to 0.04 wt% in the pure adenosine system. The niacinamide-and sulforaphane-loaded Au-decorated zeolite nanocomposites decreased the skin surface melanin content by 123% and 222%, respectively, compared to 2 and 0.01 wt% by pure niacinamide and sulforaphane systems, respectively. The Au-decorated zeolite nanocomposites in this study suggest an effective route for a safe epidermis drug delivery system, which opens the cosmetic potential for anti-aging and lightening applications. Conclusions In summary, a biocompatible nanocomposite, Au-decorated zeolite, showed effective cosmetic drug epidermal delivery systems for both anti-aging and lightening effects. By decorating Au nanoparticles on zeolite, we reduced cytotoxicity of zeolite without reducing available drug-loading pores. As a result, both cytotoxicity and drug delivery efficiency were dramatically enhanced for both in vitro and in vivo observation. Over 80% of cell viability was achieved as in vivo observation. Furthermore, 203% improved wrinkle depth and 222% decreased melamine contents were achieved in in vivo observation. The Au-decorated zeolite nanocomposites in this study suggest an effective route for a safe epidermis drug delivery system, which opens the cosmetic potential for anti-aging and lightening applications.
5,139.4
2023-01-26T00:00:00.000
[ "Materials Science", "Medicine" ]
HtrA1 Is Specifically Up-Regulated in Active Keloid Lesions and Stimulates Keloid Development Keloids occur after failure of the wound healing process; inflammation persists, and various treatments are ineffective. Keloid pathogenesis is still unclear. We have previously analysed the gene expression profiles in keloid tissue and found that HtrA1 was markedly up-regulated in the keloid lesions. HtrA1 is a serine protease suggested to play a role in the pathogenesis of various diseases, including age-related macular degeneration and osteoarthritis, by modulating extracellular matrix or cell surface proteins. We analysed HtrA1 localization and its role in keloid pathogenesis. Thirty keloid patients and twelve unrelated patients were enrolled for in situ hybridization, immunohistochemical, western blot, and cell proliferation analyses. Fibroblast-like cells expressed more HtrA1 in active keloid lesions than in surrounding lesions. The proportion of HtrA1-positive cells in keloids was significantly higher than that in normal skin, and HtrA1 protein was up-regulated relative to normal skin. Silencing HtrA1 gene expression significantly suppressed cell proliferation. HtrA1 was highly expressed in keloid tissues, and the suppression of the HtrA1 gene inhibited the proliferation of keloid-derived fibroblasts. HtrA1 may promote keloid development by accelerating cell proliferation and remodelling keloid-specific extracellular matrix or cell surface molecules. HtrA1 is suggested to have an important role in keloid pathogenesis. Introduction Keloids are a dermal fibrotic disease characterized by abnormal accumulation of extracellular matrix (ECM) and fibroproliferation in the dermis [1,2]. They appear as raised, red, and inflexible scar tissue that develops during the wound-healing process, even from tiny wounds including vaccination and insect bites. Keloid lesions expand over the boundaries of the initial injury site, and the lesions continue to develop and become larger [3,4]. The many treatments for keloids include steroid injections, steroid tape, and surgery with postoperative irradiation. The cure rate following surgery and postoperative radiation varies widely from 28~89% [3,[5][6][7][8] and depends on the individual. Clarifying keloid pathogenesis could improve the treatment outcome. Previously, we studied the molecular mechanism of keloid pathogenesis using cDNA microarray and Northern blot analysis to compare gene expression patterns in keloid lesions and normal skin [9]. HtrA1, a member of the HtrA family of serine protease and a mammalian homolog of Escherichia coli HtrA (DegP), was markedly upregulated in the keloid lesions. As human HtrA1 has multiple domains, including protease, IGFBP, and PDZ domains, HtrA1 has been expected to be a multifunctional protein. Several cellular and molecular studies suggested that HtrA1 plays a key role in regulating various cellular processes via the cleavage and/or binding of pivotal factors that participate in cell proliferation, migration, and cell fate [10][11][12][13] HtrA1 has been suggested to be closely associated with the pathology of various diseases, including osteoarthritis, age-related macular degeneration (AMD), familial cerebral small vessel disease (CARASIL), and malignant tumours. HtrA1 was also suggested to stimulate progression of arthritis through degrading cartilage matrix in osteoarthritis [14]. 
Recently, the increased expression of human HtrA1 in the mouse retinal pigment epithelium (RPE) was shown to induce vasculogenesis and degeneration of the elastic lamina and tunica media of the vessels, similar to that observed in AMD patients [15,16]. These observations imply that HtrA1 plays a role in the pathogenesis of various diseases by modulating proteins in the ECM or cell surface. Although controversial, HtrA1 has been proposed as a key molecule in osteogenesis and chondrogenesis [14,17,18]. HtrA1 expression is induced during hypertrophic change in chondrocytes, with the up-regulation of the type X collagen marker in keloid lesions [9,18]. HtrA1 is closely concerned with normal osteogenesis and in pathogenesis of arthritis [14]. In arthritis, synovial fibroblasts identified as a major source of HtrA1 degrading cartilage matrix, such as fibronectin and aggrecan, which are abundant in keloid lesions [9,14,18]. Based on the foregoing data, in this study, we focused on HtrA1. We examined the expression and localization of HtrA1 in keloid tissues, using in situ hybridization and immunohistochemical studies. HtrA1 was strongly up-regulated at both the mRNA and protein levels in the hypercellular and active keloid lesions. Silencing HtrA1 gene expression in keloid fibroblasts significantly inhibited cell proliferation, and additional recombinant HtrA1 stimulated keloid fibroblast proliferation. We propose that HtrA1 may be a pivotal molecule in keloid pathogenesis, and our discussion centres on the possible roles of HtrA1 in the molecular mechanism of keloid development. In Situ Hybridization of HtrA1 mRNA in Keloid Lesions and Normal Skin To confirm the up-regulation of the mRNA level for HtrA1, we previously observed using microarray and Northern blot analyses, and to determine the localization of HtrA1 mRNA in keloid lesions, in situ hybridization was performed using skin samples from six keloid patients. In one specimen (No. 27 in Table 1), in situ hybridization was performed on several parts of lesions which differed in keloid activity. The expression of the HtrA1 gene was clearly detected in the fibroblasts in the hypercellular and actively growing area of keloid lesions (Figure 1a, Supplementary Figure S1a,c,e), but not in unaffected skin (Figure 1b). In the sections hybridized with sense probe, no signal was observed (Supplementary Figure S1b,d,f), demonstrating specific staining by the antisense probe. All keloid sections were hard and elevated in the keloid lesions. In these regions, the antisense probe provided strong signals (Figure 1a, Supplementary Figure S1a,c,e). Clinical findings and the results of in situ hybridization of sample 27, which was an abdominal keloid after laparoscopic surgery for removal of uterine myoma, as depicted in Figure 2. Keloid activity was in the order of a, b and c. Higher activity in the affected portion of the lesion was associated with greater cell proliferation and greater up-regulation of HtrA1 ( Figure 2). HtrA1 mRNA was strongly up-regulated, and expression of HtrA1 was more pronounced in keloid lesions. Table 1) were hybridised with a probe specific to HtrA1 mRNA. Positive signals are visualised in blue. Scale bar = 50 µm. Immunohistochemical Staining and Western Blot Analysis of HtrA1 To examine whether the up-regulation of HtrA1 at the mRNA level leads to increases at the protein level, we performed immunohistochemical analysis to detect HtrA1 (Figure 3a Figure S2b, d, f). 
Therefore, HtrA1 was strongly up-regulated at the protein level in active areas of the keloid lesions. To confirm the up-regulation of HtrA1 protein, western blot analysis was performed. In all keloid tissue samples from four patients, HtrA1 protein was up-regulated, relative to four normal skin samples ( Figure 4). Enumeration of HtrA1-positive cells after immunohistochemical staining indicated that the proportion of cells expressing detectable levels of HtrA1 in keloid tissue ranged from 12.4% to 48.4%, with an average of 31.9 ± 10.5% ( Figure 5). In contrast, the proportion of HtrA1-positive cells in normal skin ranged from 2.1% to 3.8%, with an average of 2.8 ± 0.6%. The proportion of HtrA1-positive cells was significantly higher in keloids than in normal skin (p < 0.001). The total number of fibroblasts was much less in normal skin relative to keloid tissue (Figure 3), as previously reported [9]. These results indicate that keloid tissue exhibits an increase in the number of fibroblasts producing HtrA1, as well as an increase in the total number of fibroblasts. Table 1, and (b) displays the results from patient No. normal skin-1 in Table 1. Positive signals are visualised in brown. Scale bar = 50 µm. The number of fibroblasts with positive signals was counted after immunohistochemical staining of HtrA1 using samples from 17 keloidand 4 unrelated patients. Ten high-power (×400) fields were selected at random from a section and numbers of total and stained fibroblasts were counted. Patient information is described with proportion of HtrA1-positive cells in Table 1. HtrA1 Knockdown Inhibits Keloid Cell Proliferation To investigate role of HtrA1 in keloid pathogenesis, we examined whether HtrA1 affects cell proliferation by silencing HtrA1 gene expression using specific small interfering RNA (siRNA). Keloid fibroblasts treated with HtrA1 siRNA exhibited a proliferation rate significantly slower relative to those treated with control siRNA (Figure 6, Supplementary Figure S3). This effect with silencing HtrA1 was also observed in normal fibroblasts, but the inhibition effect was not as pronounced. Table 1 (a), (n = 3) and normal fibroblasts from sample No. 8 (d) transfected with HtrA1 siRNA (knockdown) or control siRNA (control). The efficiency of HtrA1 knockdown in keloid fibroblasts was determined using western blot analysis (b) and quantitative PCR (c), (n = 3). The efficiency of HtrA1 knockdown in normal fibroblasts was similarly determined using quantitative PCR (e), n = 3. Cell proliferation was analysed using a colorimetric assay with a water-soluble tetrazolium salt as the substrate. Error bars represent standard deviations (n = 3). * p < 0.001. Additional HtrA1 in Culture Medium Stimulates Keloid Cell Proliferation To confirm the effect of HtrA1 on cell proliferation, we performed a proliferation assay on keloid fibroblasts with the addition of recombinant human HtrA1 in culture medium (Figure 7, Supplementary Figure S4). The addition of HtrA1 stimulated the proliferation of keloid fibroblasts, but not normal fibroblasts. These results suggest that HtrA1 plays an important role in keloid cell proliferation. Table 1, incubated with (rHtrA1) or without (control) recombinant HtrA1. n = 3, * p < 0.01. Discussion In the present study, the expression of HtrA1 was strongly up-regulated in active keloid legions as analysed by in situ hybridization and immunohistochemical staining. Previous studies suggested that HtrA1 stimulates arthritis by digesting the ECM [14]. 
In arthritis, synovial fibroblasts produce abundant HtrA1, and HtrA1 digests cartilage ECM, including fibronectin, collagens, and proteoglycans. ECM fragments produced by HtrA1 digestion reportedly activate synovial fibroblasts and induce the remodelling of cartilage ECM. We propose that HtrA1 functions as a matrix protease that stimulates keloid development because the keloid matrix consists mainly of collagens, fibronectin, and proteoglycans, which are substrates for HtrA1. HtrA1 may degrade keloid matrix and accelerate ECM remodelling in keloid lesions. Matrix protein fragments produced by HtrA1 may activate keloid cells, leading to further progression of the disease. Consistent with this notion, we found HtrA1-knockdown inhibited the proliferation of keloid fibroblasts, and that recombinant HtrA1 added to the culture medium stimulated the proliferation of keloid fibroblasts. Interestingly, the inhibition or stimulation of proliferation with silencing or additional HtrA1 was clearly demonstrated in keloid fibroblasts, but not in normal fibroblasts. These results suggest that HtrA1 is a key molecule of keloid pathogenesis. The more keloid fibroblasts proliferate, the more matrix produced by keloid fibroblasts accumulates in keloid lesions. HtrA1 has been reported to be a crucial molecule in AMD, a leading cause of irreversible blindness in the elderly [15,16]. AMD is accompanied with choroidal neovascularization and polypoidal choroidal vasculopathy. Analysis of HtrA1 transgenic mice indicated that increased HtrA1 is sufficient to cause hyper-vascularisation and degeneration of elastic laminae in choroidal vessels [15]. Zhang et al. demonstrated that HtrA1 promotes angiogenesis by regulating GDF6, a TGF-β family-protein, using HtrA1 knock-out mice [12]. As in AMD, abundant microvessels are observed in keloid lesions [9]. Thus, HtrA1 may play a role in keloid hypervascularity by modulating TGF-β family signalling. Taken together, these observations suggest that HtrA1 contributes to the development of keloid lesions as matrix protease by remodelling keloid-specific ECM or cell surface molecules. HtrA1 may be useful as a target of keloid treatment, although further study is required. Tissue Specimens Between September 2007 and September 2013, 30 keloid patients (aged 16-75 years) and 12 unrelated patients (aged 31-88 years) undergoing surgical treatments were enrolled in this study. With approval from the Institutional Reviewing Board in the Kyoto University Faculty of Medicine (G61, the 14 December 2006), which adheres to the ethical standards as formulated in the Helsinki Declaration, written informed consent was obtained from all the patients. Keloid diagnosis was based on the clinical findings and definitive diagnosis was based on histopathologic data from the operative specimens [3,4]. The skin tissue samples were obtained as the surplus skin at the plastic surgery. Sample information is shown in Table 1. Antibodies Monoclonal anti-human HtrA1 antibody (MAB2916, R&D Systems, Minneapolis, MN, USA) was used for western blotting. The antibody used in immunohistochemical staining was developed in rabbits using a synthetic peptide corresponding to the C-terminal region of human HtrA1 as the immunogen. In Situ Hybridization For in situ hybridization, keloid and surrounding unaffected skin tissue specimens were obtained from the keloid patients at the time of surgical treatment. 
The specimens were fixed in 4% paraformaldehyde at 4 • C, embedded in paraffin, and Sections 6 µm in thickness were prepared. Deparaffinised sections were fixed in 4% paraformaldehyde in phosphate-buffered saline (PBS) for 15 min and washed with PBS. Sections were treated with 3 µg/mL proteinase K in PBS for 30 min at 37 • C, washed with PBS, refixed with 4% paraformaldehyde in PBS, washed again with PBS, and placed in 0.2 N HCl for 10 min. After washing with PBS, sections were acetylated by incubation in 0.1 M tri-ethanolamine-HCl (pH 8.0)/0.25% acetic anhydride for 10 min. After washing with PBS, sections were dehydrated through a series of ethanol solutions. Hybridization was performed with 1558-2066 of human HtrA1 gene (Accession # NM_002775) at concentrations of 300 ng/mL in Probe Diluent-1 (Genostaff, Tokyo, Japan) at 60 • C for 16 h. After hybridization, sections were washed in 5× HybriWash (Genostaff) at 60 • C for 20 min, and in 50% formamide with 2× HybriWash at 60 • C for 20 min, followed by RNase treatment with 50 µg/mL RNase A in 10 mM Tris-HCl (pH 8.0)/1 M NaCl/1 mM EDTA for 30 min at 37 • C. Sections were then washed twice with 2× HybriWash at 60 • C for 20 min and twice with 0.2× HybriWash at 60 • C for 20 min. After treatment with 0.5% blocking reagent (Roche Diagnostics, Tokyo, Japan) in TBST (0.05 M Tris-HCl/0.15 M NaCl/0.05% Tween 20) for 30 min, sections were incubated for 2 h at room temperature with anti-DIG alkaline phosphatase conjugate (Roche Diagnostics) diluted 1:1000 with TBST. Sections were washed twice with TBST and then incubated in 100 mM NaCl/50 mM MgCl 2 /0.1% Tween20/100 mM Tris-HCl (pH 9.5). Colouring reactions were performed with NBT/BCIP solution (Sigma-Aldrich, Saint Louis, MO, USA) overnight, followed by washing with PBS. Sections were counterstained with Kernechtrot stain solution (Muto Pure Chemicals, Tokyo, Japan), dehydrated, and mounted with Malinol (Muto Pure Chemicals). Immunohistochemical Analysis All keloid and normal skin tissue specimens were obtained from the surgical treatment and fixed in 4% paraformaldehyde at 4 • C, and paraffin sections (3 µm) were prepared. Deparaffinised sections were incubated at 90 • C for 10 min in target retrieval solution (pH 9, 1:10, Dako, Glostrup, Denmark). After blocking endogenous peroxidase and non-specific protein binding activities, the sections were incubated with antibody against human HtrA1 (1:400) using LSAB TM 2kit/HRP (Dako). After incubation with a peroxidase-conjugated anti-rabbit IgG antibody, sections were stained using a LSAB/HRP kit (Dako) and counterstained with haematoxylin. Microscopic images of sections were obtained by a Biorevo BZ-9000 microscope (Keyence, Osaka, Japan) and counting of total and stained fibroblasts was performed using ten microscopic fields at high-power (×400). The number of cells in the ten fields was determined. Stained fibroblasts per total fibroblasts were assumed as the proportion of HtrA1-positive cells. Statistical Analysis Significance of difference was analysed by the Student's t-test. A p-value < 0.05 was taken as an indication of statistical significance. Knockdown of HtrA1 Gene Expression and Cell Proliferation Assay Keloid fibroblasts and normal fibroblasts were extracted by the explant method from surgical specimens. Briefly, tissues were cut into 1~2 mm 3 pieces, placed into plastic tissue culture dishes, and cultured in Dulbecco's modified Eagle's medium (DMEM; Sigma-Aldrich, St. 
Louis, MO, USA) supplemented with 10% fetal calf serum, 10,000 U/mL penicillin G, and 10 mg/mL streptomycin sulphate. Cells were propagated at 37 • C, and semiconfluent cultures of fibroblasts were passaged by trypsinization up to twice prior to analysis. One day before transfection, keloid and normal fibroblasts were plated at 40% confluence at the 3rd passage in DMEM without antibiotics on 10-cm dishes, followed by transfection with HtrA1 siRNA using Lipofectamine RNAiMAX Reagent, (Life Technologies, Carlsbad, CA, USA). After 48 h, the cell proliferation assay was performed using WST assay reagent (Nacalai Tesque, Kyoto, Japan). The expression levels of target gene and protein were analysed by real-time polymerase chain reaction (PCR) and western blot analysis, respectively. A proliferation assay of keloid and normal fibroblasts was also performed with or without the addition of recombinant human HtrA1 (R&D Systems, Minneapolis, MN, USA) to the culture medium. Real-Time PCR Analysis Total RNA was extracted from cells after the transfection using RNeasy Mini Kit (Qiagen, Venlo, The Netherlands). First-strand cDNA was synthesised using Prime Script RT Reagent Kit with gDNA Eraser (Takara Bio). RT-PCR was performed with cDNA using TaqMan Probe Assay (Applied Biosystems, Foster City, CA, USA). Glyceraldehyde-3-phosphate dehydrogenase was used as a housekeeping control gene. Relative expression was calculated by calibration curve method. Conclusions In summary, the expression of HtrA1 was revealed, especially in keloid active lesions, and the silencing of HtrA1 suppressed the proliferation of keloid fibroblasts. This effect of silencing HtrA1 was also observed in normal fibroblasts, but the inhibition effect was not so as pronounced. Moreover, the addition of recombinant HtrA1 in culture medium stimulated the proliferation of keloid fibroblasts but not normal fibroblasts. These results suggest that HtrA1 plays an important role in keloid cell proliferation and is a key molecule in keloid pathogenesis.
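As an illustration of the Student's t-test named in the Statistical Analysis subsection (used, for example, for the keloid-versus-normal comparison of HtrA1-positive cell proportions, 31.9 ± 10.5% vs 2.8 ± 0.6%, p < 0.001), a minimal sketch follows with hypothetical per-sample proportions standing in for the 17 keloid and 4 normal-skin samples; the values are placeholders, not the study data.

```python
from scipy import stats

# Hypothetical per-sample proportions of HtrA1-positive fibroblasts (%),
# standing in for 17 keloid and 4 normal-skin samples; not study data.
keloid = [12.4, 25.0, 28.3, 30.1, 31.5, 33.0, 48.4, 35.2, 29.9, 27.8,
          36.4, 24.1, 40.2, 31.0, 38.7, 26.5, 43.8]
normal = [2.1, 2.6, 3.1, 3.8]

t_stat, p_value = stats.ttest_ind(keloid, normal)   # two-sample Student's t-test
print(f"Student's t-test: t = {t_stat:.2f}, p = {p_value:.2e}")
```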
4,023.6
2018-04-24T00:00:00.000
[ "Biology", "Medicine" ]
SmallSats Rational Design Technique — The SmallSat design process comprises the choice of its trajectory, the determination of its components and the main parameters of its systems, the development of external and internal layouts, and the determination of the number of satellite-borne antennas and their main characteristics. This paper focuses on the concept of the design process, the physical relationships involved in it, and a version of the rational design algorithm. In view of the specialization of engineering work during SmallSat development, a concept of the design process was formulated and physical relationships were established in order to find optimal design solutions regarding the compatibility of basic parameters and characteristics. INTRODUCTION Small spacecraft (SmallSats) are spacecraft with a mass of less than 180 kilograms, about the size of a large kitchen fridge. The SmallSat design process comprises the choice of its trajectory, the determination of its components and the main parameters of its systems, the development of external and internal layouts, the determination of the number of satellite-borne antennas and their main characteristics, and the preparation of programs: a general one and programs for separate sessions [1]. Furthermore, since it is not possible to determine the basic parameters of the systems and the requirements for the control system, or to program the work, without understanding the behavior of the individual systems and their interaction, these problems must be solved in the design process. II. CONCEPT OF THE DESIGN PROCESS In terms of specialization of engineering works in the process of SmallSat development, design and calculation work, development of logical and electric diagrams, development of computation programs, and modelling and computer analyses shall be done. The calculation and modelling process includes, among others [2][3][4][5]: 1) design and strength checking calculations; 2) mass calculations, moments of inertia calculations, the centre of mass position and positions of the main inertia axes; 3) thermal calculations; 4) calculations of internal and external disturbing moments influencing the SmallSat; 5) gas environment calculations for hermetic compartments; 6) estimation of the probability of meteorite impact and erosion of external surfaces, determining whether special protection measures (additional screens, thicker shells, more resistant coatings, etc.) should be applied; 7) estimation of radiation exposure for devices, glass, coatings and structural non-metallic elements; 8) dynamic analysis intended to determine requirements for, or to check, the stiffness of the structure in order to eliminate mutual undesirable influence of mechanical and mechatronic devices and systems, and operation of the orientation system; 9) ballistic design; 10) power supply system calculations, orientation system and other system calculations. If we bind the design process to the development stages typical for any product [6,7], then this process should cover the development and agreement of the technical specification for the SmallSat concerned, the development of a draft proposal, and conceptual and technical design (Fig. 1). It is obvious that in the process of SmallSat design the basic parameters of separate systems, trajectory characteristics, the operation program and the spacecraft design should be brought into line [7].
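Item 2 of the list above (mass, centre-of-mass and moment-of-inertia calculations) is the kind of bookkeeping that is easy to sketch in code. The following is a minimal illustration with hypothetical component names, masses and body-frame positions (none of them from the paper), using a point-mass approximation for the inertia tensor.

```python
import numpy as np

# Hypothetical SmallSat component list: name -> (mass in kg, position in m).
components = {
    "structure":       (12.0, np.array([0.00,  0.00,  0.00])),
    "battery":         ( 6.5, np.array([0.05, -0.10,  0.02])),
    "payload camera":  ( 8.0, np.array([0.00,  0.15, -0.05])),
    "reaction wheels": ( 4.2, np.array([-0.08, 0.00,  0.06])),
}

masses = np.array([m for m, _ in components.values()])
positions = np.array([r for _, r in components.values()])

total_mass = masses.sum()
com = (masses[:, None] * positions).sum(axis=0) / total_mass   # centre of mass

# Point-mass inertia tensor about the centre of mass (components' own inertia ignored).
inertia = np.zeros((3, 3))
for m, r in zip(masses, positions - com):
    inertia += m * (np.dot(r, r) * np.eye(3) - np.outer(r, r))

print(f"total mass: {total_mass:.1f} kg")
print(f"centre of mass: {com}")
print("inertia tensor (kg*m^2):\n", inertia)
```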
The external configuration of the SmallSat and optical characteristics of external surfaces determine the characteristics of forces and moments of the light pressure; in some cases, moments of light pressure can be used as useful moments helping to adjust consumption of propellant or electric power for orientation of the SmallSat [9-13]. III. PHYSICAL RELATIONSHIPS IN THE DESIGN PROCESS The study of physical relationships in the design process is necessary, first, to find some optimal design solutions about compatibility of basic parameters and characteristics of SmallSat. The first task is to be certainly solved in the development of any project [14,15].Taking into consideration relatively low cost of modern SmallSats, and the current methods of test and control, it is difficult to imagine that the parameters of any systems could be incompatible in an orbiting SmallSat, or that its design could not provide for the operation of its devices.Such cases are extremely rare.Substantially the alignment of the basic system parameters with each other, and with the characteristics of the trajectory and the design, is the design itself in the usual sense of the word [6]. The second of the tasks set is the search for optimal combinations of parameters and characteristics.It is much more difficult than the first one and is not al-ways solved.This is mainly due to the complexity of studies of this sort [14][15][16]. This complexity is aggravated by the fact that the external and internal configurations significantly influence the system parameters, mass and other characteristics of SmallSat. The variety of SmallSats shapes due to minimum external shape limitations for most of them significantly complicates the formalization process enabling to find the best external configuration.And the technology CubeSat is the most effective form for SmallSats today [1]. To avoid a random choice, sometimes development of the components is assigned to different specialists with the following choice of the best option.But also, in this case the choice of the right option is often done based on intuition of the project manager, and therefore, personal preferences, a wish to simplify the analysis and the following works, and other considerations are sub-consciously involved in this choice, which does not always result in the best option or an option close to the best one. At the same time, inadequate choice of the external configuration can lead to higher values of moments of inertia for SmallSat, increased weight of the on-board cable network, deterioration of characteristics of airborne antennas, complication of technology, etc. To enable rational design, it is necessary to establish some criteria, which extreme values must be a goal in searching a combination of parameters and characteristics of SmallSat.These criteria are to be determined by the tasks set for a specific spacecraft, or a technical specification for the spacecraft, determining its purpose and operating conditions.Due to the wide variety of modern SmallSats, it is impossible to enumerate all the criteria that their developers may encounter (Fig. 2). 
For some SmallSats, the weight of the scientific equipment that can be installed on the spacecraft can serve as a criterion. In the simplest case the trajectory and the orbit injection launch vehicle are specified. They determine the overall weight M 0 of the spacecraft that can be injected into the specified trajectory. Thus, in the simplest case under consideration, when the trajectory (or rather a narrow range of trajectories) and the launcher are specified, the task of rational design is reduced, in mathematical terms, to minimization of the total weight of the service systems, the frame and the on-board cable network. In this case, the initial weight of the vehicle M 0 can be considered a given design value, and the weight of the scientific equipment is M Sc = M 0 − M SS , (1) where M SS is the total weight of the service systems, the frame and the on-board cable network necessary to ensure the operation of the spacecraft. Here we proceed from the assumption that the larger the weight of the scientific equipment, the higher the scientific value of the spacecraft [17]. This assumption seems to be true provided a careful and informed selection of scientific tasks has been done. The approach to rational design described above does not involve the weight of the scientific equipment when the minimum of the total weight of the service systems is sought [14,15]. This approach has very limited application, as in most cases the weight of the temperature control devices, the electronics, the power supply system and the orientation system depends on the weight of the scientific equipment, its purpose and its operation program (Fig. 3). For the cases when the value M SS in expression (1) cannot be considered independent of the value M Sc , it is sometimes possible to write [17] M SS = M SS 0 + f SS (M Sc ), (2) where M SS 0 is the weight of the service systems and the frame independent of the weight of the scientific equipment, and f SS (M Sc ) is the weight of the service systems and the frame necessary for operation of the scientific equipment, which depends on its weight, composition and operation program. Substituting (2) into (1) gives M Sc + f SS (M Sc ) = M 0 − M SS 0 , (3) and in this case we can search for the maximum value of M Sc directly; various methods of solving the rational design problem formulated in this way are possible. The above method of problem solving may not be strict enough in some cases. The matter is that the function f SS (M Sc ), characterizing the increase in the weight of the service systems and the frame necessary for the successful functioning of the scientific equipment, depends, as a rule, on the parameters of the temperature control system, the orientation system and the power supply system (the type of power generator and the battery type), on the frequency range of the radio telemetric system and on the configuration of the spacecraft. If it is impossible to specify these parameters and the configuration prior to the beginning of the computational analysis, it is impossible to use formula (3) in the rational design, because to obtain this formula it is necessary to know the exact form of the function f SS (M Sc ). Here each version of the service systems and configuration essentially requires complete or almost complete development of the project and final adjustment of the basic parameters of all on-board systems and characteristics of the SmallSat (Fig. 4). As a rule, rational design in this case should be done by successive approximations (a short numerical sketch of such an iteration is given below). At the same time, an expression of the type (3) can always be used to solve some specific problems. For example, for a power supply system consisting of a solar panel and a chemical battery of a given type, the function f SS (M Sc ) can easily be specified if it is possible to determine the dependence of the average electricity demand of the scientific equipment on its weight [5]. The reliability can also serve as a criterion; it is expressed through the probability of implementation of the basic task, by which we understand here the operation of the scientific equipment according to the specified program within the given time. This time is sometimes called the vehicle operation time, or active existence time [17].
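Returning to the weight budget (1)-(3): because the service-system weight grows with the payload weight, the payload weight is naturally found by the successive approximations mentioned above. The sketch below illustrates this; the form and coefficients of f_SS are purely hypothetical placeholders, not values from the paper.

# Minimal sketch of the successive-approximation weight budget, assuming a
# hypothetical service-system weight model f_SS(M_Sc); all numbers are placeholders.

def f_ss(m_sc: float) -> float:
    """Hypothetical payload-dependent part of the service-system + frame weight."""
    k_support = 0.35          # kg of extra service weight per kg of payload (assumed)
    return k_support * m_sc

def payload_weight(m_0: float, m_ss_fixed: float,
                   tol: float = 1e-6, max_iter: int = 100) -> float:
    """Solve M_Sc = M_0 - M_SS^0 - f_SS(M_Sc) by successive approximations."""
    m_sc = 0.5 * m_0                          # initial guess
    for _ in range(max_iter):
        m_sc_new = m_0 - m_ss_fixed - f_ss(m_sc)
        if abs(m_sc_new - m_sc) < tol:
            return m_sc_new
        m_sc = m_sc_new
    return m_sc

if __name__ == "__main__":
    m_0 = 180.0        # kg, upper bound of the SmallSat class discussed in the paper
    m_ss_fixed = 40.0  # kg, payload-independent service weight (assumed)
    print(f"Scientific equipment weight: {payload_weight(m_0, m_ss_fixed):.1f} kg")

The iteration converges whenever the marginal service weight per kilogram of payload is below unity, which is the practically relevant regime.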
For numerical estimation, the reliability shall be regarded as the probability of flawless operation within a specified time, a failure being such a condition of the on-board systems and devices that makes further functioning of the scientific equipment impossible. To calculate this probability, the apparatus of reliability theory can be used [3]. If we denote by B the probability of flawless operation of the spacecraft within a specified time t 0 , we can write B = B(C m (n) , T i , P j , t 0 ), where C m (n) is the set of basic parameters of the systems (m is the system number and n is the parameter number), T i is the set of parameters determining the trajectory of the spacecraft, and P j is the set of parameters determining the operation program. It should be noted that the absolute value of the probability of flawless operation is not significant by itself; it is used only as a criterion for comparing different design solutions. If the technical specification sets a required reliability B 0 , the process of rational design shall consider the condition B ≥ B 0 . Among the many parameters of the systems there can be those which are uniquely determined by the composition and characteristics of the scientific equipment and its operation program. The remaining parameters are free, and their choice is the result of rational design. A similar remark can be made concerning the parameters T i and P j . For example, if we design an artificial Earth satellite with a specified height of a circular orbit and a specified inclination, the orbit injection time is a free parameter. This parameter determines the orbit position relative to the Sun and stars and can be selected to ensure maximum reliability of the orientation control system at the beginning of the orbital motion, particularly when searching for and capturing reference landmarks. Restraints and requirements for the newly developed SmallSat, and first of all its purpose, are expressed not only in the form of constants in the equations of physical relations [15], but also in the presence or absence of the equations and in the form of the equations themselves. This is natural, since the composition and technical meaning of the basic parameters of the systems, the trajectory parameters and the parameters of the operation program depend on the schemes of the on-board systems, the structure of the operation program and the flight scheme [6]. These parameters essentially depend on the purpose of the SmallSat and on many restrictions and requirements. Hence, the equations of physical relations showing the interdependence of all the above parameters [20] also depend on the purpose of the satellite and on the limitations and requirements imposed on it (Fig. 5). Besides, the availability of partial restraints and requirements narrows the range of basic system parameters, trajectory parameters, operation programs and even configuration diagrams considered in the design process. In this case, some equations of physical relationships will have no solution if they contain constants determining partial restraints and requirements. It follows from the above that in the process of rational design we must consider the equations and inequalities determining the physical relationships characteristic for a spacecraft of the given purpose or type, together with the limitations and requirements imposed on it. These expressions include some constants and can be written in the general form Φ r (C m (n) , T i , P j ) = 0 or Φ r (C m (n) , T i , P j ) ≤ 0, r = 1, …, R. (5) The search for the extremum of a criterion K = K(C m (n) , T i , P j ) under these conditions is a task of linear or nonlinear programming, depending on the type of the functions K and Φ [18]. The purpose of rational design is to create a project of a vehicle for which the value of the selected criterion is close to its maximum or minimum value. In this case, different configuration diagrams, different orientation schemes and different methods of creating control and corrective forces should be considered [19]. Depending on the version of the design solution, the functions K and Φ will change. Consequently, rational design is confined to the investigation of the criterion K under the constraint equations (5) for different versions of the newly designed SmallSat.
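In this formulation, each SmallSat version poses a standard constrained-extremum problem: extremize K subject to relations of type (5). A minimal sketch of that setup is given below; the criterion, the constraint and all numbers are illustrative placeholders, not relations from the paper.

# Sketch of the constrained-extremum formulation (5) for one design version;
# the criterion and the constraint are hypothetical stand-ins for K and Phi_r.
import numpy as np
from scipy.optimize import minimize

def criterion(x: np.ndarray) -> float:
    # Hypothetical criterion K, e.g. total service-system weight (kg).
    return 40.0 + 0.35 * x[0] + 2.0 * x[1] ** 2

def constraint(x: np.ndarray) -> float:
    # Hypothetical physical relation Phi(x) = 0, e.g. a power balance.
    return x[0] + 5.0 * x[1] - 120.0

result = minimize(
    criterion,
    x0=np.array([50.0, 10.0]),
    constraints=[{"type": "eq", "fun": constraint}],
    bounds=[(0.0, 180.0), (0.0, 30.0)],   # parameter variation intervals
)
print("optimal parameters:", result.x, "criterion value:", round(result.fun, 2))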
Finding the optimal parameter values for one particular form of the functions K and Φ shall be referred to as a specific task of rational design; this is essentially the task of optimizing one specific version of the SmallSat. In some cases, the analysis of the physical relations characteristic for certain versions of the SmallSat allows one to find an optimal combination of some of the parameters, which simplifies the solution of the rational design problem. Mathematically, this means that it is possible to extract from the system (5) a subsystem including only some of the parameters and to find a specific criterion that depends on these parameters and does not contradict the general criterion K. Tasks of this type can be called specific optimal tasks of SmallSat design. IV. RATIONAL DESIGN ALGORITHM VERSION Before presenting a variant of the rational design algorithm, let us make some assumptions concerning the equations and inequalities (5) [18]. Basically, these assumptions specify the class of spacecraft for which they hold and for which we are going to offer the algorithm. The relations (5) include only equations; inequalities are absent. This is because, for many spacecraft, inequalities arising from the requirements to the spacecraft and from the limitations can be replaced by equations. If it is required that the initial weight of the spacecraft does not exceed the specified value determined by the trajectory and the launch vehicle, then, when analyzing the different versions of the SmallSat and identifying its optimal parameters, it is possible to accept that the initial mass of the spacecraft M 0 is equal to the maximum permissible value M 0 max minus some allowance. The relative weight allowance may be accepted within the limits recommended in [17], depending on the complexity and novelty of the developed SmallSat and its systems. Similar reasoning applies to the case when a minimum permissible reliability of the SmallSat is specified: in the relations (5) we accept that the reliability of the spacecraft is equal to this value with some allowance, which can disappear at the stage of detailed design. If some of the parameters in the equations (5) are time-dependent, for example the weight of the spacecraft or its moments of inertia due to fuel consumption during the correction phase or the orientation process, the time may enter through constants obtained during the ballistic design phase. For example, the equations (5) may include terms n t 1 and n t 2 , where n is the average fuel consumption per second needed for orientation of the spacecraft (one of the varied parameters of the orientation control system, depending on the moments of inertia, the arms of the control engines, the disturbing moments, etc.), while t 1 and t 2 are constants determining the times of characteristic points of the flight trajectory.
The varied parameters do not include any parameters or characteristics of the SmallSat trajectory. Consequently, it is assumed that the choice of the flight scheme and the basic parameters of the trajectory, as well as the determination of the requirements to the spacecraft in terms of implementing the necessary trajectory, has been done in advance, before determining the parameters of the systems, the configuration and the operation program. Such a stage of work, which is called ballistic design, can often be started immediately after receiving the technical specification for the SmallSat. In cases when the trajectory parameters depend on the parameters of some systems, and the latter cannot be determined in advance of the complex investigation of the spacecraft parameters, it is necessary to use the method of successive approximations. Ballistic design is an independent area of spacecraft design [18]. The ballistic design results in the determined trajectory characteristics, the initial weight of the spacecraft which can be injected into the specified trajectory by the chosen launcher, the characteristic speeds, the times for corrections and maneuvers, the requirements to the control actions for thrust vector positioning during corrections and maneuvers together with the necessary accuracies, and, in addition, all the data necessary for the development of the orientation control system and the operation program, such as, for example, the angles between possible optical guides and the times when the spacecraft is in the visual range of ground facilities (Fig. 6). It should be noted that at the stage of ballistic design it may be necessary to solve complex variational problems, multipoint boundary value problems, etc. Some of these tasks are studied in [19]. The number of constraint equations (5) is less than the number of varied parameters. If this assumption is not fulfilled, the task of selecting the optimal parameters cannot be solved, as there are no free parameters left to minimize the criterion K. Most likely this means that some free parameters have not been revealed, and it is necessary to review the parameters and the form of the functions K and Φ. Let us assume that at some stage of the design process we have found a satellite and program version which meets all the requirements and restrictions. This version of the satellite shall be called the reference one. Suppose that it is characterized by the parameters C m (n) and P j , which we call the source parameters. These parameters satisfy the equations (5), which account for the physical relations and constraints characteristic of the found SmallSat variant. Obviously, the combination and technical significance of the SmallSat parameters, and therefore the structure of the expressions (5) and the constants included therein, will not change when the parameters vary within some intervals near the values C m (n) and P j . We shall introduce symbols for these specified variation intervals of each parameter (6). For convenience, further on, by the reference version we will understand the version characterized by the parameter variation intervals (6). Optimization of the reference version is confined to the search for parameter values C m (n) and P j within the intervals (6) that deliver the maximum or minimum value of the criterion while the equations (5), written for the reference version, are satisfied.
It is very important that the experience of creating and operating similar spacecraft be used in the development of the versions. The qualification of the developers of the reference versions is of paramount importance. However, it should be borne in mind that the newly created SmallSat may have no prototypes; in such cases, a sufficiently wide review of the possible reference versions is required. The described method of rational design, of course, does not exclude the process of intuitive creative thinking: this process reveals itself in the assumptions made and in the development of the reference versions, as well as in the ballistic design. Using the introduced symbols, let us denote all the basic parameters to be varied by x 1 , …, x N and write the criterion as K = K(x 1 , …, x N ). In general, the expressions (5) and (6) may include time. If all the expressions (5) are equations, seeking the optimal parameter values is confined to finding a constrained extremum of a function of many variables; the relations of type (5), rewritten in these variables as Φ r (x 1 , …, x N ) = 0, r = 1, …, R, (8) are simultaneously the constraint equations. If some relations (5) are inequalities, the task becomes one of nonlinear programming. In terms of the amount of calculation, the most complex stage is finding the optimum parameters and the extreme values of the criterion for all reference versions. For each reference variant the problem is to study the extrema of the function K(x 1 , …, x N ) with the parameters confined to the variation intervals whose boundaries are given by (6). There are different methods of solving this problem. First, we can try to exclude some of the parameters x k using the equations (8) and then investigate the unconditional extrema of the function K of the remaining N − R variables. If this is difficult, one can instead investigate the extrema of the Lagrange function L = K + Σ r λ r Φ r ; in this case the required optimal parameters and the multipliers λ r are found from the necessary conditions for an internal maximum or minimum of the function L together with the equations (8). One can also try to linearize the functions K and Φ r by expanding them, for example, into a Taylor series near the reference point. If we accept a maximum permissible error of the criterion K, we obtain a system of linear equations in the parameter increments. The linearized criterion can take its largest or smallest value only at the boundaries of the parameter variation intervals, so finding an optimal combination of the parameters x l is confined to calculating the criterion for all possible combinations of the interval boundaries and selecting the combination x l that yields the maximum or minimum value of K. In the expressions (10) and (11) all the partial derivatives are calculated at the reference point. Please note that the described method is a special case of the linear programming problem. Fig. 2. Satellite Comparison (Deep Space Industries). Fig. 3. Basic Components of SmallSat. Fig. 4. Satellite ASNARO-1. Fig. 5. SmallSat Design Solution.
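Returning to the boundary-combination search described above: once the criterion is linearized, the search reduces to enumerating the corner points of the variation intervals (6). The short sketch below illustrates that enumeration; the linearized coefficients are made-up placeholders standing in for the expressions (10)-(11), not values derived in the paper.

# Sketch of the boundary-combination search for a linearized criterion; all
# coefficients and intervals below are hypothetical placeholders.
from itertools import product

# Variation intervals (6) for three varied parameters (hypothetical numbers).
intervals = [(0.9, 1.1), (40.0, 60.0), (0.1, 0.3)]

def linearized_criterion(x):
    # K ~ K0 + sum_l (dK/dx_l) * (x_l - x_l0); reference point and gradients assumed.
    k0, x0 = 100.0, (1.0, 50.0, 0.2)
    grads = (12.0, -0.8, 35.0)
    return k0 + sum(g * (xl - xl0) for g, xl, xl0 in zip(grads, x, x0))

best = min(product(*intervals), key=linearized_criterion)
print("boundary combination minimizing K:", best,
      "K =", round(linearized_criterion(best), 2))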
5,210.8
2018-06-14T00:00:00.000
[ "Engineering" ]
Unconventional excited-state dynamics in the concerted benzyl (C7H7) radical self-reaction to anthracene (C14H10) Polycyclic aromatic hydrocarbons (PAHs) are prevalent in deep space and on Earth as products of combustion processes bearing direct relevance to energy efficiency and environmental remediation. Reactions between hydrocarbon radicals in particular have been invoked as critical molecular mass growth processes toward cyclization leading to these PAHs. However, the mechanisms of PAH formation through radical-radical reactions are largely elusive. Here, we report on a combined computational and experimental study of the benzyl (C7H7) radical self-reaction to phenanthrene and anthracene (C14H10) through unconventional, isomer-selective excited state dynamics. Whereas phenanthrene formation is initiated via a barrierless recombination of two benzyl radicals on the singlet ground state surface, formation of anthracene commences through an exotic transition state on the excited state triplet surface through cycloaddition. Our findings challenge conventional wisdom that PAH formation via radical-radical reactions solely operates on electronic ground state surfaces and open up a previously overlooked avenue for a more "rapid" synthesis of aromatic, multi-ringed structures via excited state dynamics in the gas phase. Earlier work computationally explored the benzyl radical self-reaction on the ground state singlet surface via initial recombination of the radicals at their radical centers 16 . Two key pathways involve the 1,2-diphenylethane intermediate (C 6 H 5 CH 2 CH 2 C 6 H 5 , 1), two hydrogen atom losses via trans-stilbene (C 14 H 12 , 3) and 9,10-dihydrophenanthrene (C 14 H 12 , 8), and eventually the formation of phenanthrene (C 14 H 10 , p1). At temperatures above 1200 K, the authors of that study predicted that phenanthrene (C 14 H 10 , p1) represents the nearly exclusive product of the benzyl-radical self-reaction at levels of at least 99%. Rijs et al. exploited infrared (IR)/ultraviolet (UV) ion dip spectroscopy coupled with a high-temperature pyrolysis reactor to experimentally probe the products of the benzyl radical self-reaction at 1373 K in 1.4 bar argon buffer gas 17 , but only phenanthrene (C 14 H 10 , p1) was detected. Based on modeling studies of the pyrolysis of toluene, Matsugi and Miyoshi suggested that only the phenanthrene isomer might be formed via a stilbene intermediate 18 . These studies reveal that there is still limited understanding of the fundamental reaction pathways by which the benzyl radical self-reaction can lead to phenanthrene (C 14 H 10 , p1) and anthracene (C 14 H 10 , p2). Identification of both isomers would provide an experimental benchmark for the conversion of two singly ringed benzyl radicals to the 14π-aromatic systems phenanthrene (C 14 H 10 , p1) and anthracene (C 14 H 10 , p2) in high-temperature combustion and circumstellar environments. Here, we report on a combined computational and experimental study of the benzyl (C 7 H 7 • ) radical self-reaction leading eventually to the formation of phenanthrene (C 14 H 10 , p1) and anthracene (C 14 H 10 , p2) as prototype 14π aromatic systems carrying three fused benzene rings.
The outcome of the isomer-selective synthesis is shown to be driven by discrete, spin-dictated mechanisms, with phenanthrene (C 14 H 10 , p1) initiated through a classical barrierless radical-radical recombination of two benzyl radicals with radical centers at the exocyclic methylene (CH 2 ) moiety on the singlet ground state surface. Formation of anthracene (C 14 H 10 , p2) commences unconventionally on the excited state triplet surface (a 3 A) through [3 + 3] cycloaddition involving a transition state with a cyclic arrangement of the atoms in a six-membered ring along with a reorganization of σ and π bonds via excited-state dynamics initiated by a single collision. The excited-state dynamics leading eventually to anthracene (C 14 H 10 , p2) defy conventional wisdom that PAH formation via radical-radical reactions solely takes place on electronic ground state (singlet) surfaces via initial recombination of the doublet reactants at their radical centers. The facile formation of anthracene (C 14 H 10 , p2) via excited-state dynamics on the triplet surface through cycloaddition as showcased here presents a fundamental shift in currently "accepted" views and opens up the door for a more "rapid" synthesis of aromatic, multi-ringed structures such as three-ring PAHs from mono-ringed radical precursors (benzyl) at high-temperature conditions relevant to combustion and deep space. It further delivers a strategy to explore chemical reactions of aromatic radicals and resonantly stabilized free radicals (RSFR) under high-temperature environments of relevance to synthetic and materials chemistry, leading eventually to carbonaceous nanostructures like fullerenes, nanocages, and nanotubes [19][20][21][22] . Results & discussion The formation of phenanthrene (C 14 H 10 , p1) and anthracene (C 14 H 10 , p2) is initiated through the reaction of the benzyl radical (C 7 H 7 • , X 2 B 2 ) generated via pyrolysis of helium-seeded benzylbromide (C 7 H 7 Br) at fractions of 0.15% in a chemical micro reactor at a temperature of 1473 ± 10 K and a reactor inlet pressure of 400 mbar 23 ; the pyrolysis proceeds via carbon-bromine bond cleavage yielding atomic bromine plus a benzyl radical 24 . Although the formation of the phenanthrene (C 14 H 10 , p1) isomer has been predicted theoretically 16 and demonstrated experimentally 17 , there is no experiment to date that has followed the mechanism of the benzyl radical self-reaction leading to the simultaneous gas-phase detection of phenanthrene (C 14 H 10 , p1) and anthracene (C 14 H 10 , p2) under controlled experimental conditions with tunable vacuum ultraviolet (VUV) light to interrogate the reaction products in a molecular beam. It should be noticed that when the initial reactants are dense and undiluted, consecutive reactions of tricyclic PAH radicals (anthracenyl/phenanthrenyl radicals generated from H-abstraction of anthracene/phenanthrene) might lead to the formation of tetracyclic PAHs such as tetracene, chrysene, and pyrene 25,26 . Thus, the dilute reactant conditions at elevated temperatures used in our method are crucial for detecting isomers and arresting reactions before larger PAHs are formed in subsequent reactions. A representative mass spectrum collected at a photon energy of 9.50 eV for the benzyl radical self-reaction is presented in Fig. 3 for a reactor temperature of 1473 K. Ion counts are observable up to a mass-to-charge ratio (m/z) of 182 (C 14 H 14 + ). Formally, this m/z is twice the mass-to-charge ratio of the ionized benzyl radical (C 7 H 7 + ) of m/z = 91.
Photoionization efficiency (PIE) curves, which report the intensity of a well-defined ion of a specific m/z ratio as a function of photon energy, are used to identify the structural isomer(s) formed in the benzyl self-recombination (Fig. 4). Three independent experimental measurements between 7.3 and 8.0 eV, measured in steps of 0.1 eV with 300 Torr helium backing pressure, are reported in Supplementary Data 1-3, while three independent experimental measurements between 8.0 and 10.0 eV, measured in steps of 0.1 eV, are reported in Supplementary Data 4-6. These PIE curves can then be fitted with a (linear combination of) reference curve(s) of distinct structural isomer(s) 27 . A close look at the PIE curve of m/z = 182 (Fig. 4) reveals that these data can be nicely replicated with the PIE reference curve of 1,2-diphenylethane (bibenzyl, C 6 H 5 CH 2 CH 2 C 6 H 5 , 1) as the initial recombination product of two benzyl radicals. The adiabatic ionization energy (IE) of (1) of 8.7 ± 0.1 eV 28 agrees well with the onset of ion counts of 8.60 ± 0.05 eV in the PIE curve. The ion signal at m/z = 180 (Fig. 3) is twice as strong as the ion counts for m/z = 182. The PIE curve at m/z = 180 (C 14 H 12 + , Fig. 4) cannot be replicated with a single contributor, but contributions from trans- and cis-stilbene (C 6 H 5 CH = CHC 6 H 5 , 3/3') at ion fractions of 22.5 ± 2.3% and 77.5 ± 7.8% at 10.00 eV are required. It is important to highlight that the PIE curves of 9,10-dihydrophenanthrene (8) or 9,10-dihydroanthracene (14) could not fit the experimental PIE curve at m/z = 180. However, minor contributions of up to 4.7 ± 0.5% (8) and 2.5 ± 0.3% (14) can be accounted for without changing the overall fit of the PIE curve at m/z = 180. Further, the ion counts at m/z = 178 are linked to a hydrocarbon with the molecular formula C 14 H 10 (Fig. 3). A close look at the PIE curve of m/z = 178 (Fig. 4) reveals that a linear combination of PIE reference curves for phenanthrene (C 14 H 10 , p1) and anthracene (C 14 H 10 , p2) is critical to replicate the experimental data at m/z = 178. In detail, the experimental PIE curve at m/z = 178 shows an onset of ion counts at 7.45 ± 0.05 eV, which correlates exceptionally well with the NIST-evaluated adiabatic IE of anthracene of 7.439 ± 0.006 eV. A sole contribution of anthracene, however, cannot replicate the ion counts above 7.9 eV. To reproduce the overall shape of the PIE curve, a second contribution of the PIE curve of phenanthrene (IE = 7.891 ± 0.001 eV) is required. Accounting for the photoionization cross sections and quoted errors of 20%, described in Supplementary Fig. 3, fractions of phenanthrene and anthracene of 87 ± 17% and 13 ± 3% are extracted, i.e., the phenanthrene isomer dominates at m/z = 178. Fig. 2: Reaction pathways for the benzyl radical self-reaction. The pathways leading to phenanthrene (p1) were calculated at the CBS-QB3 level of theory as extracted from Ref. 14 ; novel reaction pathways to anthracene (p2), computed at the G3(MP2,CC)//B3LYP/6-311G(d,p) level of theory in the present work on the singlet and triplet surfaces, are color coded in blue and red, respectively. All energies are presented in units of kJ mol −1 . Carbon and hydrogen atoms are color coded in gray and white, respectively. Cartesian coordinates and vibrational frequencies are provided in Supplementary Table 1.
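The fit of a measured PIE curve by a linear combination of isomer reference curves, as described above, is essentially a non-negative least-squares problem. A minimal sketch of such a fit follows; the reference and "measured" curves are synthetic placeholders, not the data of this study.

# Sketch of fitting a measured PIE curve by a non-negative linear combination of
# isomer reference PIE curves; all curves here are synthetic placeholders.
import numpy as np
from scipy.optimize import nnls

energies = np.arange(7.3, 10.0, 0.1)          # photon energy grid (eV)

def step_like(ie, slope=2.0):
    """Crude stand-in for a reference PIE curve with ionization onset at `ie` (eV)."""
    return np.clip(slope * (energies - ie), 0.0, None)

ref_phenanthrene = step_like(7.891)
ref_anthracene = step_like(7.439)
references = np.column_stack([ref_phenanthrene, ref_anthracene])

# A fake "measured" curve: 87% phenanthrene + 13% anthracene plus noise.
measured = 0.87 * ref_phenanthrene + 0.13 * ref_anthracene
measured += np.random.default_rng(1).normal(0.0, 0.01, measured.size)

coeffs, residual = nnls(references, measured)
fractions = coeffs / coeffs.sum()
print(f"phenanthrene: {fractions[0]:.2f}, anthracene: {fractions[1]:.2f}")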
Signal at m/z = 179 and 181 is much weaker than at m/z = 178 and 180, respectively, and can be linked to the 13 C analogues of phenanthrene (C 14 H 10 , p1) and anthracene (C 14 H 10 , p2) (m/z = 178) and of trans- and cis-stilbene (C 6 H 5 CH = CHC 6 H 5 , 3/3') (m/z = 180), present at levels of 15.4% accounting for the naturally occurring 13 C abundance (1.1%) and the fourteen carbon atoms in p1, p2, and (3/3'). Besides the ion counts at m/z = 178-182, the mass spectrum also shows ion counts at m/z = 170 and 172. These are linked to non-pyrolyzed C 7 H 7 79 Br and C 7 H 7 81 Br precursors, respectively (Supplementary Figs. 1 and 2). Further, ion counts at m/z = 91, 92, and 93 can be associated with the benzyl radical (C 7 H 7 + , m/z = 91), the 13 C-substituted benzyl radical ( 13 CC 6 H 7 + ) and toluene (C 6 H 5 CH 3 ) (m/z = 92), and 13 C-substituted toluene ( 13 CC 6 H 8 ) as well as the doubly 13 C-substituted benzyl radical ( 13 C 2 C 5 H 7 + ) (m/z = 93). Control experiments of helium-seeded benzylbromide conducted under identical experimental conditions, but keeping the silicon carbide tube at 293 K, verify that the aforementioned products are not contaminations from the reactants, but clearly reaction products of the benzyl radical self-reaction. Overall, our experimental study provides compelling evidence for the identification of both phenanthrene (C 14 H 10 , p1) and anthracene (C 14 H 10 , p2) (178 amu) with fractions of 87 ± 17% and 13 ± 3%, along with the cis/trans-stilbene intermediates (C 14 H 12 , 3/3', 180 amu) and the 1,2-diphenylethane (C 14 H 14 , 1, 182 amu) adduct, along with possibly minor fractions of 9,10-dihydrophenanthrene and 9,10-dihydroanthracene (C 14 H 12 , 8/14, 180 amu). The quantification of anthracene (C 14 H 10 , p2) contradicts previous electronic structure and flame modeling studies 16 , which predicted that the benzyl radical self-reaction should lead, under our experimental conditions, to nearly exclusive production of the phenanthrene (C 14 H 10 , p1) molecule with upper limits of anthracene (C 14 H 10 , p2) of 1% at most. These deviations by at least one order of magnitude suggest that an understanding of the critical routes to anthracene (C 14 H 10 , p2) is lacking. The aforementioned discrepancies call for a computational investigation of the benzyl radical self-reaction beyond the traditional singlet ground state surface leading to phenanthrene (C 14 H 10 , p1) 16 . Figure 2 compiles the theoretically predicted key pathways dominating the formation of phenanthrene (C 14 H 10 , p1) on the electronic ground state singlet surface, commencing with the radical-radical recombination through the radical centers located at the methylene (CH 2 ) moiety of the benzyl radical 16 ; novel pathways leading to anthracene (C 14 H 10 , p2) on the singlet and triplet surfaces are presented via color codes in blue and red, respectively. The traditional viewpoint of the benzyl-radical self-reaction suggests a recombination of the benzyl radicals on the ground state singlet surface with their radical centers at the CH 2 moieties leading to 1,2-diphenylethane (C 6 H 5 CH 2 CH 2 C 6 H 5 , 1). The latter can emit a hydrogen atom from one of the CH 2 groups leading to 1,2-diphenylethyl (C 6 H 5 CH • CH 2 C 6 H 5 , 2) or might be stabilized in the reactor by a three-body reaction with the helium buffer gas.
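As an aside on the isotopologue signals at m/z = 179 and 181 discussed above, the quoted 15.4% level is simply the first-order estimate obtained by multiplying the natural 13C abundance by the number of carbon atoms; the two-line check below reproduces that arithmetic.

# First-order estimate of the 13C isotopologue (M+1) level for a C14 hydrocarbon,
# reproducing the 15.4% figure quoted in the text.
natural_13c_abundance = 0.011   # 1.1%
n_carbon = 14                   # carbon atoms in C14H10 / C14H12
print(f"expected 13C isotopologue level: {n_carbon * natural_13c_abundance:.1%}")  # -> 15.4%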
Two reaction pathways from (2) eventually lead via trans-stilbene (C 14 H 12 , 3) or 9,10-dihydrophennthrene (C 14 H 12 , 8) through three hydrogen atom losses along with cyclization to phenanthrene (C 14 H 10 , p1). The overall reaction endoergicity of 195 kJ mol −1 can be compensated by the elevated temperature of the reactor of 1,473 ± 10 K; the reaction progress is facilitated by hydrogen atom abstraction by hydrogen atoms present in the system, in particular for the (3)-(4), (8)- (9), and (14)-(15) steps. The energetics of the transition states suggests that formation of phenanthrene (C 14 H 10 , p1) via the trans-stilbene (C 14 H 12 , 3) is preferred; the highest energy transition state (3) + H → (4) + H 2 of 312 kJmol −1 is still below the barrier of 361 kJmol −1 for the (2) → (6) isomerization required for the 9,10-dihydrophenenthrene (C 14 H 12 , 8) route. These findings are in line with the experimental observations of the 1,2-diphenylethane (C 6 H 5 CH 2 CH 2 C 6 H 5 , 1) and trans-stilbene (C 14 H 12 , 3) intermediates supporting the trans-stilbene (C 14 H 12 , 3) route; however, the 1,2-diphenylethane (C 6 H 5 CH 2 CH 2 C 6 H 5 , 1) pathway cannot be completely discounted (Fig. 4). Note that the pathway to phenanthrene (C 14 H 10 , p1) only proceeds via trans-stilbene (C 14 H 12 , 3), but in the reactor, trans-stilbene (C 14 H 12 , 3) can undergo hydrogen atom assisted isomerization to cis-stilbene (C 14 H 12 , 3') 29 . Having commented on the route to phenanthrene (C 14 H 10 , p1), we discuss the newly revealed reaction pathways leading to anthracene (C 14 H 10 , p2). These are highlighted in Fig. 2 and colorcoded in red and blue for the triplet and singlet surfaces, respectively. First, the benzyl radical can also add with its radical center at the CH 2 moiety to one of the ortho-positions of the second benzyl radical on the singlet surface yielding the C 14 H 14 intermediate (11); careful investigations at the B3LYP/6-311 G(d,p) level of theory by scanning the minimal potential energy reaction profile reveal that this pathway is barrierless. A subsequent hydrogen atom loss from the ortho position leads to the C 14 H 13 intermediate (12), which can isomerize via ring closure to (13). This intermediate carries the carbon backbone of the anthracene molecule and undergoes a subsequent hydrogen atom loss yielding 9,10-dihydroanthracene (C 14 H 12 , 14). Two additional hydrogen atom losses, where the first one, (14) + H → (15) + H 2 , actually occurs by hydrogen abstraction from (14) by atomic hydrogen eventually yield anthracene (C 14 H 10 , p2) via intermediate (15). A one-step molecular hydrogen loss is not competitive in the presence of a sufficient concentration of hydrogen atoms considering the inherent transition state located 367 kJmol −1 above the energy of the separated reactants compared to the hydrogen atom abstraction and loss transition states placed only 158 and 218 kJmol −1 above the separated reactants. Second, both benzyl radicals can also recombine on the triplet surface (a 3 A) via [3 + 3] cycloaddition through a transition state in the entrance channel located 71 kJmol −1 above the energy of the separated reactants leading to triplet 4a,8a,9,10-tetrahydroanthracene (C 14 H 14 , 10). This intermediate contains three rings and is formed via a single collision between two benzyl radicals via cycloaddition. 
The extensive reorganization of the σ- and π-electron densities involving the frontier π and π* orbitals of the two reacting benzyl radicals accounts for the 'tight' nature of this transition state (Fig. 5). Interestingly, (10) does not exist in a singlet electronic state. When optimized starting with the triplet geometry and an open-shell singlet initial wavefunction, the structure undergoes spontaneous opening of the central ring and the optimization converges to the open singlet structure (11). On the triplet surface, after being produced, (10) emits a hydrogen atom to form intermediate (13), which eventually loses two hydrogen atoms yielding anthracene (C 14 H 10 , p2) as detected experimentally (Figs. 3 and 4). It is important to comment on the entrance channels leading to (10) and (11) on the triplet and singlet surfaces, respectively. Both pathways (10) → (13) and (11) → (12) + H → (13) + H eventually lead via (13) to 9,10-dihydroanthracene (C 14 H 12 , 14) and anthracene (C 14 H 10 , p2). Although the formation of (10) on the triplet surface has entrance and (10) → (13) + H barriers of 71 and 102 kJ mol −1 , respectively, and hence appears to be unfavorable compared to the barrierless path to (11) on the singlet surface, the isomerization of (12) to (13) is not efficient, with a barrier of 156 kJ mol −1 . Consequently, the pathway on the triplet surface (10) → (13) wins over the reaction sequence (11) → (12) + H → (13) + H on the singlet surface. This is because the barrier for the isomerization of (12) to (13) is twice the energy of the transition state leading to (10) on the triplet surface. The highest barrier along the route to anthracene (C 14 H 10 , p2), of 218 kJ mol −1 , connects to the atomic hydrogen loss from (15) forming the final anthracene product. This barrier is substantially lower than the barrier of 312 kJ mol −1 for the hydrogen abstraction by atomic hydrogen from (3) forming (4) in the most favorable pathway to phenanthrene (C 14 H 10 , p1) via (1) → (2) + H → (3) + 2 H → (4) + H + H 2 → (5) + H + H 2 → p1 + 2 H + H 2 . Therefore, anthracene (C 14 H 10 , p2) formation can compete with the synthesis of phenanthrene (C 14 H 10 , p1). The reactivity is also influenced by the cone of acceptance of the benzyl radical leading to the initial collision complexes (1) [forming phenanthrene] versus (10) [forming anthracene] on the singlet and triplet surfaces, respectively. While the formation of (10) is geometrically constrained due to the cycloaddition character of the transition state (Fig. 5) and dictated by low impact parameters, the preferred synthesis of phenanthrene (C 14 H 10 , p1) over anthracene (C 14 H 10 , p2), with branching ratios of 87 ± 17% and 13 ± 3%, respectively, suggests that (1) is accessible through a larger range of impact parameters. The mechanisms forming both phenanthrene and anthracene from C 7 H 7 + C 7 H 7 are very complex as they involve series of consecutive reactions: for anthracene, C 7 H 7 + C 7 H 7 → C 14 H 13 (13) + H (via the triplet adduct (10)), followed by (13) → C 14 H 12 (14) + H, (14) + H → C 14 H 11 (15) + H 2 , and (15) → C 14 H 10 (p2) + H; for phenanthrene, C 7 H 7 + C 7 H 7 → C 14 H 13 (2) + H, followed by (2) → C 14 H 12 (3) + H, (3) + H → C 14 H 11 (4) + H 2 , and (4) → C 14 H 10 (p1) + H; the reaction system for phenanthrene is even much more complicated 16,18 than the snapshot provided here. Therefore, the relative yields of phenanthrene and anthracene are controlled by multiple factors, including the rate constants of all reactions involved under particular conditions and the concentrations of atomic and molecular hydrogen, which can be generated or consumed by competing reactions occurring in the system.
Detailed modeling of the phenanthrene to anthracene branching ratio would be specific for particular flame conditions and is beyond of the scope of the present study. The existing models 16,18 can be extended by rate constants for the reactions along the anthracene pathway initiating on the triplet surface which are provided in the Supporting Information (Supplementary Table 1). In the entrance channel on the phenanthrene pathway, the recombination of two benzyl radicals proceeds by the well-skipping C 7 H 7 + C 7 H 7 → C 14 H 13 (2) + H mechanism because, if collisionally stabilized C 14 H 14 (1) is formed, it is unlikely to unimolecularly decompose to C 14 H 13 (2) + H as the dissociation pathway back to C 7 H 7 + C 7 H 7 is more favorable and the rate constant for the forward decomposition is very low 16,18 . Therefore, we compare the rate constant of the triplet reaction C 7 H 7 + C 7 H 7 → C 14 H 13 (13) + H with that for C 7 H 7 + C 7 H 7 → C 14 H 13 (2) + H 18 ( Supplementary Fig. 4(a)). At 1500 K the rate constant for the singlet channel is higher than that for the triplet channel. Among the consequent reaction steps, the hydrogen atom loss reactions from C 14 H 13 and C 14 H 11 are very fast at relevant temperatures ( Supplementary Fig. 4(b)); in fact, the present pressure-dependent calculations predict that the radicals are unstable at temperatures and pressures typical for the micro reactor and immediately equilibrate with their H loss products. However, the rate constants for hydrogen abstraction by atomic hydrogen are significantly higher for the anthracene pathway; for instance, at 1500 K the rate constant for (14) + H → (15) + H 2 is a factor of more than 30 higher than that for (3) + H → (4) + H 2 . Thus, the reactions following the formation of (13) in the anthracene channel are noticeably faster than those following the formation of (2) in the phenanthrene channels, which results in an increase of the relative yield of anthracene. However, it is not possible to predict the phenanthrene to anthracene branching ratio without detailed modeling taking into account the distribution of temperature and pressure in the reactor as well as the concentrations of atomic hydrogen and alternative abstractors like bromine atoms, which affect the rates of bimolecular reactions participating in the network. To sum up, our combined experimental and computational study identified phenanthrene (C 14 H 10 , p1) and anthracene (C 14 H 10 , p2) as two prototype 14π aromatic products of the benzyl radical self-reaction at elevated temperatures of 1473 ± 10 K representing high-temperature combustion systems and carbonrich circumstellar envelopes of, e.g., IRC + 10216 star. The isomerselective synthesis is driven by two highly diverse reaction mechanisms. A radical-radical recombination of two benzyl radicals with radical centers at the methylene (CH 2 ) moiety leads initially to 1,2-diphenylethane (C 6 H 5 CH 2 CH 2 C 6 H 5 , 1) followed by hydrogen losses and ring closure to eventually phenanthrene (C 14 H 10 , p1). The formation of anthracene (C 14 H 10 , p2) embarks preferentially on the excited state triplet surface (a 3 A) through [3 + 3] cycloaddition via a transition state with a cyclic arrangement of the atoms in a six-membered ring together with an extensive reorganization of σ and π bonds via excited state dynamics initiated by a single collision between two radicals. 
These excited-state dynamics, eventually producing anthracene (C 14 H 10 , p2), challenge 'established' paradigms that PAH formation via radical-radical reactions solely operates on electronic ground state (singlet) surfaces through recombination of the doublet reactants at their radical centers. It should be noted that excited state dynamics are of fundamental importance in polymer chemistry, too. Here, polymerization mechanisms involving excited state anions have been identified as elementary reaction pathways in an anionic isoprene polymerization, implicating electronic excitation of a polyisoprene-isoprene complex to a quasi-degenerate electronically excited state 30 . Recently, Rodembusch et al. 31 reported fluorescent monomers with emissions at long wavelengths originating from an excited keto tautomer; the latter arises from an enol-cis conformer in the electronically excited state. Thus, the facile formation of anthracene (C 14 H 10 , p2) via excited-state dynamics on the triplet surface through cycloaddition involving two doublet radicals represents a fundamental shift in currently "perceived" views toward the synthesis of multi-ringed structures in the gas phase, broadening our understanding of the origin and evolution of carbonaceous matter in the Universe. Methods Experimental. The reaction between two benzyl radicals (C 7 H 7 ) was examined under combustion-relevant conditions by utilizing a resistively heated high-temperature pyrolytic reactor 23 . Briefly, a continuous beam of benzyl radicals (C 7 H 7 ) was generated in situ through the pyrolysis of benzylbromide (C 7 H 7 Br) (Sigma Aldrich, 98%) at 1473 K via carbon-bromine bond cleavage at concentrations of 0.15% in helium carrier gas at total pressures of 400 mbar at the reactor inlet 24 . At 298 K, the vapor pressure of benzylbromide is 0.6 mbar. Upon exiting the heated silicon carbide tube (20 mm) and passing through a skimmer, the neutral molecules within the supersonic beam were photoionized by single-photon ionization utilizing quasi-continuous tunable VUV radiation. A mass spectrum was obtained at intervals of 0.05 eV between 7.30 and 10.00 eV. The Re-TOF spectrometer was operated with 2.5 µs repeller pulses. Photoionization efficiency curves (PIEs), which report the ion counts at a particular mass-to-charge (m/z) ratio as a function of photon energy, were obtained by integrating the signal at a well-defined m/z ratio selected for the species of interest over the energy range and normalizing to the total photon flux. Note that, whenever necessary, calibration PIE curves were recorded within the same experimental setup. Computational. Earlier computational investigations of the benzyl radical self-reaction have been limited to recombination at the radical center 16 . A potentially important pathway via the 'head-tail' recombination has not been explored to date. Here, ab initio G3(MP2,CC)//B3LYP/6-311G(d,p) calculations have been employed to explore the lowest singlet and triplet C 14 H 14 PESs accessed by the reaction of the benzyl radical with its CH 2 group attacking the ortho position in the ring of its counterpart. The Cartesian coordinates (in Å) and vibrational frequencies (in cm −1 ) for the reactants, intermediates, transition states, and products along the reaction pathways leading to phenanthrene and anthracene are reported in Supplementary Data 7.
Then, the C 14 H x (x = 13-10) species formed by consequent H losses from C 14 H 14 in their ground doublet and singlet electronic states were also explored at the same level of theory. The G3(MP2,CC)//B3LYP/6-311G(d,p) model chemistry scheme [32][33][34] , which is expected to provide accuracy of 4-8 kJ mol −1 for relative energies, involves geometry optimization and vibrational frequencies calculations at the hybrid density functional B3LYP/6-311G(d,p) level 35,36 followed by single-point energy calculations at the CCSD(T)/6-311 G(d,p), MP2/6-311 G(d,p), and MP2(G3Large) levels of theory aimed to evaluate the CCSD(T) energy with a large and flexible G3Large basis set. The overall energy in this scheme also includes zero-point vibrational energy corrections ZPE obtained at the B3LYP/6-311 G(d,p) level. Connections between transition states and local minima they link were verified by intrinsic reaction coordinate (IRC) calculations. The family of the G2-G4 model chemistry schemes has been shown [32][33][34]37 to consistently achieve similar accuracy not only for closed shell singlet molecules but also for radicals (doublets) and diradicals (triplets) when treating the lowest state for each particular spin with a predominantly single-reference character of the wavefunction. The absence of a significant multireference character in the wavefunctions of all species considered in the present study is indicated by low values of their T1 diagnostics 38,39 . Therefore, we anticipate that the accuracy of the calculated energies on the singlet, triplet, and doublets PESs are comparable here. The ab initio calculations were performed utilizing the Gaussian 09 40 and MOLPRO 2015 41 program packages. Temperature-and pressure-dependent phenomenological rate constants for the reactions on the anthracene pathway were computed using the one-dimensional Rice-Ramsperger-Kassel-Marcus-Master Equation (RRKM-ME) approach 42 employing the MESS software package 43 within the rigid rotor-harmonic oscillator approximation for partition function calculations, which utilized the G3(MP2,CC) relative energies and B3LYP/6-311G(d,p) molecular parameters. Data availability The data that support the plots within this paper and other finding of this study are available from the corresponding author upon reasonable request. The raw data (Timeof-Flight Mass Spectra as a function of photon energy) generated in this study are provided as Supplementary Data 1-6.
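As a bookkeeping illustration of the composite scheme described in the Computational section, the sketch below assembles a G3(MP2,CC)-type energy from its single-point components, assuming the standard basis-set additivity form E[CCSD(T)/G3Large] ~ E[CCSD(T)/6-311G(d,p)] + E[MP2/G3Large] - E[MP2/6-311G(d,p)] plus the B3LYP zero-point energy; the input numbers are placeholders, not energies from this work.

# Sketch of assembling a G3(MP2,CC)-type composite energy from its components,
# assuming the standard additivity approximation; all energies are placeholders (hartree).
def g3_mp2_cc_energy(e_ccsdt_small: float,
                     e_mp2_large: float,
                     e_mp2_small: float,
                     zpe_b3lyp: float) -> float:
    """Approximate the CCSD(T)/G3Large energy via basis-set additivity, plus ZPE."""
    basis_correction = e_mp2_large - e_mp2_small
    return e_ccsdt_small + basis_correction + zpe_b3lyp

# Hypothetical component energies for a single species (not values from the paper).
e_total = g3_mp2_cc_energy(
    e_ccsdt_small=-539.123456,   # CCSD(T)/6-311G(d,p)
    e_mp2_large=-539.456789,     # MP2/G3Large
    e_mp2_small=-539.234567,     # MP2/6-311G(d,p)
    zpe_b3lyp=0.198765,          # B3LYP/6-311G(d,p) zero-point energy
)
print(f"composite electronic + ZPE energy: {e_total:.6f} hartree")

Relative energies along a pathway follow by taking differences of such composite values for the stationary points involved.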
6,962.2
2022-02-10T00:00:00.000
[ "Chemistry", "Environmental Science" ]
Ghostbusters in f (R) supergravity f (R) supergravity is known to contain a ghost mode associated with higher-derivative terms if it contains Rn with n greater than two. We remove the ghost in f (R) supergravity by introducing an auxiliary gauge field to absorb the ghost. We dub this method the ghostbuster mechanism [1]. We show that the mechanism removes not only the ghost super-multiplet but also the terms including Rn with n ≥ 3, after integrating out auxiliary degrees of freedom. For the pure supergravity case, there appears an instability in the resultant scalar potential. We then show that the instability of the scalar potential can be cured by introducing matter couplings in such a way that the system has a stable potential. Introduction Higher-order derivative interactions naturally appear in effective field theories. In particular, in a system with gravity, we need to take such terms into account since various higher-order corrections can be relevant to the dynamics. However, higher-derivative interactions often lead to the so-called Ostrogradski instability [2,3]: higher-derivative interactions give additional degrees of freedom which make the Hamiltonian unbounded from below, and hence the system shows an instability. If such a ghost mode appears, one should regard the system as an effective theory which is valid only below the energy scale of the mass of the ghost mode, otherwise the system loses unitarity. In a class of ghost-free higher-derivative interactions, one does not come across such an instability problem. In the case of a system with a single scalar and a tensor, the Horndeski class [4,5] of interactions is free from ghosts. In this class of interactions, the equations of motion (E.O.M.) are at most second-order differential equations, and no additional degree of freedom shows up. In general, one may ask the following question: among many possible higher-order derivative terms, what kind of structure gives us ghost-free interactions? For example, in the so-called Galileon models [6], the Galileon scalar fields can be understood as the Goldstone modes of translation symmetry in extra dimensions, and the action is made out of ghost-free derivative terms. Therefore, one can say that the hidden translation symmetry controls the higher-derivative interactions so that no new degrees of freedom appear. The absence of ghosts in the supersymmetric Galileon model [7] can also be achieved by a spontaneously broken hidden SUSY [8]. Higher-derivative interactions are also studied in gravity theories. Despite the existence of fourth-order derivative interactions, the so-called Starobinsky model [9], which has a quadratic term of the Ricci scalar, does not have any ghost, just like the Horndeski class. This is because such a system is equivalent to a scalar-tensor system without higher derivatives. As a cosmological application, the Starobinsky model predicts a spectral tilt of the scalar curvature perturbation compatible with the latest CMB observation [10]. One can extend this model to a system with an arbitrary function of the Ricci scalar, called the f (R)-gravity model [11] (see also refs. [12,13] for a review), which is also dual to a scalar-tensor system and therefore free from the ghost instability. On the other hand, higher-derivative interactions of gravity multiplets were studied in 4D N = 1 SUGRA. In ref.
[56], Cecotti constructed the higher-order terms of the Ricci scalar in the old minimal supergravity formulation and showed that at least one ghost superfield appears if we have R n (n ≥ 3) terms in the system. It is possible to avoid the ghost by some modifications of the system. In [57], the so-called nilpotent constraint on the Ricci scalar multiplet, which removes a scalar field in the multiplet, is considered. Due to the absence of the scalar, the bosonic ghost is absent in the spectrum of the system. This mechanism has been applied to various higher-curvature models in SUGRA [58]. The nilpotent constraint R 2 = 0, however, is an effective description of a broken-SUSY system. If the linearly realized SUSY is restored in a higher energy regime, the ghost mode would show up. 1 As another approach, in [59] the authors considered a deformation of the ghost kinetic term by introducing an additional Kähler potential term. It is shown that the resultant ghost-free system is equivalent to the matter-coupled f (R) SUGRA. 1 The nilpotent condition on a chiral superfield Φ has two solutions. The nontrivial solution is φ = ψψ/(2F φ ), where φ, ψ and F φ are the scalar, Weyl spinor, and auxiliary scalar components of Φ. Obviously, this solution is well-defined only for F φ ≠ 0, that is, SUSY should be spontaneously broken. Meanwhile, in our previous work [1], we proposed a simple method to remove a ghost mode in 4D N = 1 SUSY chiral multiplets [16,17], which we dubbed the "ghostbuster mechanism." We gauge a U(1) symmetry by introducing a non-dynamical gauge superfield without a kinetic term into the higher-derivative system, assigning charges to the chiral superfields properly in order for the gauge field to absorb the ghost. Namely, due to the gauge degree of freedom, the ghost in the system is removed by the U(1) gauge fixing. In this class of models, a hidden local symmetry plays a key role in the ghostbuster mechanism. Actually, before this work, essentially the same technique was used for superconformal symmetry in the conformal SUGRA formalism: conformal SUGRA has one ghost-like degree of freedom called a compensator. Such a degree of freedom is removed by the superconformal gauge fixing, whereas in the ghostbuster mechanism the hidden local U(1) gauge fixing removes the ghost associated with higher derivatives. Therefore, in SUGRA models, one may understand the higher-derivative ghost as a second compensator for the system with the superconformal symmetry × hidden local U(1) symmetry. In this paper, we apply the ghostbuster mechanism to remove the ghost in the f (R) SUGRA system. Interestingly, the hidden U(1) symmetry required for the mechanism can be understood as a gauged R-symmetry, since the gravitational superfield should be gauged under the U(1) symmetry. The U(1) charge assignment is uniquely determined, and therefore one cannot naively expect a ghost mode cancelation a priori. As we will show, a would-be ghost superfield has a gauge charge and can be nicely removed by the gauge fixing of the U(1) symmetry. As a price of this achievement, however, the resultant system generically has an unstable scalar potential in the pure SUGRA case. Such an unstable scalar potential can be cured by various modifications. As an example we propose a model with a matter chiral superfield. We will find that such a deformation leads to a healthy model of SUGRA without either ghosts or instabilities of the scalar potential.
One will easily find how the ghost supermultiplet is eliminated from the dual matter-coupled SUGRA viewpoint. We also address the same question in the higher-curvature SUGRA system. We find that, after integrating out the auxiliary vector superfield for the mechanism, the scalar curvature terms including R n with n ≥ 3 disappear, and the resultant system has linear and quadratic terms in R. However, the R + R 2 SUGRA system has couplings completely different from those proposed in [56]. This observation means that, despite the disappearance of higher scalar curvatures in the final form, the higher-curvature deformation in the original action has physical consequences even after applying the ghostbuster mechanism. This paper is organized as follows. In section 2, we briefly review the higher-curvature SUGRA models and their dual description. In particular, one finds that once the SUSY version of the higher-order Ricci scalar term R n (n ≥ 3) is included in the old minimal SUGRA formulation, there appears at least one ghost chiral superfield. We apply the ghostbuster mechanism to the higher-curvature SUGRA in section 3. We will see that although the ghost superfield can be removed by the mechanism, the resultant system has a scalar potential with an instability in the direction of a scalar field. Then, in section 4, we discuss a simple modification of the model by introducing an extra matter chiral superfield. We show an example which is stable and free from ghosts as well. Finally, we conclude in section 6. Throughout this paper, we will use the notation of [60]. 2 Higher-curvature terms in supergravity In this section, we review the construction of higher-order terms of the Ricci scalar in 4D N = 1 SUGRA [56]. 2 In this paper, we use the conformal SUGRA formalism, in which there are conformal symmetry and its SUSY counterparts in addition to super-Poincaré symmetry [64][65][66][67]. In order to fix the extra gauge degrees of freedom, we need to introduce unphysical degrees of freedom, called the conformal compensator, which should form a superconformal multiplet. In this paper, we adopt a chiral superfield as the compensator superfield, which leads to the so-called old minimal SUGRA after superconformal gauge fixing. We show the components of the supermultiplets, the density formulas, and identities in appendix A. First, consider the pure conformal SUGRA action (2.1), where S 0 is the chiral compensator with the charges (w, n) = (1, 1) in conformal SUGRA (see appendix A for the definition of the charges), and [· · · ] D denotes the D-term density formula. Taking the pure SUGRA gauge, S 0 = S̄ 0 = 1, b µ = 0, we obtain an action whose bosonic part consists of the Einstein-Hilbert term together with terms quadratic in the auxiliary fields, where R is the Ricci scalar, F S 0 is the F-term of S 0 and A a is the gauge field of the chiral U(1) A symmetry, which is a part of the superconformal symmetry. The E.O.M. for the auxiliary fields F S 0 and A a can be solved by setting F S 0 = A a = 0, and then we find the pure SUGRA action. Using the identity given in (A.26), the action (2.1) can also be written as an F-term invariant, where [· · · ] F denotes the F-term density formula. The chiral superfield R is the so-called scalar curvature superfield, defined by R = Σ(S̄ 0 )/S 0 , where Σ is the chiral projection operator. Its components in the pure SUGRA gauge can be written explicitly (with the ellipses denoting fermionic parts), and from this expression we find that the F-component of R contains the Ricci scalar.
It has been known that there is no ghost in the system involving R 2 , which is realized as JHEP05(2018)102 where α is a real constant. The bosonic part of this action after the superconformal gauge fixing is where D a represents the covariant derivative, The Lagrangian has the quadratic Ricci scalar term α 36 R 2 and also the non-minimal couplings between F S 0 , A a and R. In this system, there exist four real massive modes ϕ i with the common mass m 2 = 3/α in the fluctuations around the vacuum g µν = η µν and F S 0 = A a = 0: We stress that, as is often the case with SUSY higher derivative models, the auxiliary fields have their kinetic terms and hence they are dynamical degrees of freedom in the presence of the higher-derivative term. Next, let us consider a SUGRA system with R n , n ≥ 3 along the line of refs. [56,57,68]. As we discussed in the previous section, R superfield has the Ricci scalar in its F-component. Using the chiral projection operator Σ, one can obtain the superfield Σ(R) which has R in the lowest component: where we have shown only the relevant part. With this superfield Σ(R), one can construct an action involving arbitrary functions of R, i.e. f (R) gravity models in SUGRA. Here we consider the action of the form where Ω is an arbitrary real function and F is an arbitrary holomorphic function. If we chose Ω = 0, F(S, X) = S(3 − αX)/2, then this action reduces to (2.6) since (2.11) The bosonic part of the action contains the following terms including higher-order terms of Ricci scalar R JHEP05(2018)102 where the subscripts on the functions denote the differentiations with respect to the scalar fields. Such SUSY higher-derivative terms have derivative interactions of auxiliary fields, and the interactions make the auxiliary fields dynamical as . (2.13) In this system, in addition to the scalar degree of freedom from the derivative terms of the Ricci-curvature, the higher-derivative terms of the "dynamical" auxiliary field F S 0 give rise to multiple scalar degrees of freedom, some of which are ghost-like. If we choose Ω(S,S, X,X) = SSΩ(X,X), F(S, X) = SF(X), and set F S 0 = 0 identically as is done by imposing the nilpotent condition R 2 = 0 in ref. [57], the above terms vanish and no ghost seems to appear. Without such a condition, however, the appearance of ghost is unavoidable as is clearly shown in the following. The present system is also equivalent to a standard SUGRA model coupled to matter superfields. As in the previous section, we use Lagrange multiplier suerfields, and rewrite the action (2.10) as where T and Y are Lagrange multiplier superfields with (w, n) = (0, 0). The E.O.Ms of T and Y give the constraints which reproduce the original action (2.10). Instead, using the identity (A.26), we can also obtain the dual action (2.15) This is a standard SUGRA system with the following Kähler and super-potentials, Let us show the existence of a ghost mode. The Kähler metric of the {S, Y } sector takes the form, where A = T +T + YS +Ȳ S + Ω(S,S, X,X). The determinant of this sub matrix has negative determinant, and this Kähler metric has one negative eigenvalue corresponding to a ghost. Thus, the f (R) SUGRA model has one ghost mode in general. JHEP05(2018)102 Note that X becomes an auxiliary superfield if Ω = Ω(S,S) is independent of X. Even in such a case, the system has higher-curvature terms in the F(S, X) term in (2.12). 
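Before moving on, the ghost argument above can be made explicit with a minimal sketch. Assume the dual description carries the standard no-scale Kähler potential K = -3 log A, with A as defined in the text, and evaluate the {S, Y} block at the illustrative point S = Y = 0, taking Ω_S(0) = 0 and Ω_{SS̄}(0) ≡ ω (assumptions made only for this illustration). Then
\[
K_{A\bar B}\Big|_{\{S,Y\}}
=-\frac{3}{A_0}
\begin{pmatrix}
\omega & 1\\[2pt]
1 & 0
\end{pmatrix},
\qquad
\det K_{A\bar B}\Big|_{\{S,Y\}}=-\frac{9}{A_0^{2}}<0,
\]
where A_0 = A|_{S=Y=0}. A Hermitian 2x2 block with negative determinant has exactly one negative eigenvalue, which is the ghost direction identified above.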
The reduced dual system is described by This reduction does not change the above discussion, and hence a ghost mode appears in this system as well. 3 Ghostbuster in f (R) supergravity In this section, we consider the elimination of the ghost superfield along the line of ref. [1]. To eliminate the ghost superfield, one needs to introduce a gauge redundancy, by which one of the degrees of freedom is removed. In the f (R) SUGRA discussed above, all the superfields R, Σ(R) are expressed in terms of S 0 with the SUSY derivative operators. Hence, once we introduce a vector superfield V R for a U(1) gauge symmetry and assign the charge to S 0 so that it transforms as the transformation law of R and Σ(R) are automatically determined as where the chiral projection Σ needs to be modified so that the operations is covariant under the gauge symmetry. In the rest of this section, we omit the suffix g attached to R g , Σ g . Interestingly, the U(1) gauge symmetry under which the compensator is charged becomes a gauged R-symmetry [69]. We call it a U(1) R symmetry in the following discussion. Here, however, we do not introduce a kinetic term for V R and thus the vector superfield V R is an auxiliary superfield, which should be written as a composite field consisting of curvature superfields R and Σ(R). Ghostbuster in pure f (R) supergravity model Let us introduce a U(1) R gauge symmetry under which S 0 has charge c S 0 = 1. Since the chiral superfield R = Σ(S 0 )/S 0 , the charge of R is determined as c R = −2. Analogously, we find that c Σ(R) = 2. Then the gauged extension of the system (2.10) with Ω = Ω(S,S) is described by the action 3) JHEP05(2018)102 where Ω should be gauge invariant and F should have gauge charge c F = −3 in total. Hence F should take the form To discuss the ghost elimination, it is useful to consider the dual system as in the non-gauged case (2.15). The dual system of the gauged model is described by where the gauge charges of T, S, Similarly to the non-gauged case, we can rewrite this action as For simplicity, in the following discussion, we choose the function Ω = γ − hSSe −3V R , where γ is a real constant. Note that one can perform the following procedure with a more general form of Ω in a similar way. Then we obtain We stress that the U(1) R charges of (S, Y ) are automatically determined to be non-zero. This is a nontrivial and important nature of the f (R) SUGRA model since the ghostbuster mechanism does not work if S and Y , either of which corresponds to the ghost mode, did not have the U(1) R charges. This equation can be algebraically solved in terms of V R as Substituting this solution to the action, one finds where we have rescaled S 0 as S 0 → 2 1/3 / √ 3 S 0 . Thus, starting from the modified highercurvature action (3.3), we find the dual matter-coupled system (3.10). After partial gauge fixings of superconformal symmetry, 4 this system becomes Poincaré SUGRA with the following Kähler and superpotentials, Therefore, if the lowest component of Y takes a non-zero value, we can fix the U(1) R gauge by setting Y = 1. Then, after a redefinition S → S + 1 h , we obtain (3.14) If S = 0, we can also fix the gauge by setting S = 1. Then we find Except for the two points S = 0 (Y = ∞), Y = 0 (S = ∞), the above two descriptions are equivalent and related by a coordinate transformation between S and Y . In both cases, all the eigenvalues of the Kähler metric are obviously positive. Therefore, we have shown that the ghost mode is eliminated by our ghostbuster mechanism. 
Note that X is an auxiliary field in this setup, and we need to solve the E.O.M for X to obtain the physical superpotential. We stress that the elimination of the ghost mode by the ghostbuster mechanism in this higher-curvature system is nontrivial since we do not have any choice of the charge assignment to the superfields. As we have seen above, the would-be ghost modes have charges under U(1) R , which enables us to remove the ghost mode by the gauge degree of freedom. Instability of scalar potential In this section, we analyze the scalar potential of the ghost-free system derived in the previous section. The F-term scalar potential in the Poincaré SUGRA is given by (3.17) If we choose the gauge fixing condition S = 1, Im T appears only in W due to the shift symmetry of Im T in the Kähler potential, and hence the mass of Im T is given by The Kähler potential in eq. (3.15) has the property called the no-scale relation Since W ∝ T + XY , the potential has the following linear term of Im T Im(KBKB To realize a stable vacuum at Im T = 0, this quantity must vanish identically. By using the Kähler potential in eq. (3.15), we find that the coefficient of the linear term is given by Note that the non-dynamical field X becomes a function of Y after solving its E.O.M. Im T has only a mass term Unfortunately, this "off-diagonal" contribution in the mass matrix leads to a tachyonic mode. 5 This instability cannot be cured by any higher-order terms since Im T appears only in the term (3.20). Therefore, ImX = 0 makes Im T unstable and even if there is the local minimum in ImX = Im T = 0, that point cannot be a local minimum, but must be a saddle point. We conclude that although the instability caused by ghost mode is absent thanks to the ghostbuster mechanism, the pure higher-curvature action has an unstable scalar potential, which does not have any stable SUSY minimum. In the next section, we consider an extension of our model to improve this point. Preliminary As we discussed in the previous section, the scalar potential of our minimal model has no stable SUSY minimum. One may improve such a situation by various types of modifications. Here we take a relatively simple way; we introduce an additional matter field Z so that the coupling between the gravitational sector and the additional sector stabilizes JHEP05(2018)102 the potential. 6 Let us assume that Z carries no U(1) R charge so that the superpotential W contains T Z term in the S = 1 gauge. Then it is possible to introduce Z in the superpotential in such a way that the constraint for S is modified as We can also change the definition of X as with an arbitrary function k(Z,Z). Note that if we chose k(Z,Z) = Z, then we obtain the same unstable model as in section 3 with the redefinition S → S ′ = SZ. Therefore, k(Z,Z) should have a constant term around the minimum of Z, i.e. k( Z , Z ) ≡ c = 0. Under this modification, the dual system is given by which can be rewritten as For simplicity, let us choose the function as After solving the E.O.M for V R , we find the following Kähler potential and superpotential in the S = 1 gauge. Example of matter coupled f (R) supergravity Let us discuss a simple example by setting the functions as The corresponding Kähler potential is given by where both ω 1 and ω 2 are required to be positive so that there exists a solution of the E.O.M. for V R and the condition e K > 0. 
The eigenvalues {λ i | i = 1, 2, 3} of the Kähler metric K AB are given by (4.12) Furthermore, by choosing the functionF so thatF(0) = 0,F ′ (0) = 0, we find a SUSY vacuum satisfying W A = W = 0 at X = Y = T = S = 0, which is guaranteed to be stable. Therefore, there exists the SUSY vacuum with a positive definite metric if and only if When these conditions are satisfied, there exist no ghost anywhere in the region M = {T, Y, Z | ω 1 > 0 , ω 2 > 0} and the boundary ∂M is geodesically infinitely far away from the SUSY vacuum. Ghostbuster mechanism from higher-curvature SUGRA viewpoint In this section, we discuss how the ghostbuster mechanism works in the higher-curvature frame. As we have seen in previous two sections, the ghost supermultiplet is eliminated in both pure and matter-coupled higher-curvature systems. Let us consider the original action for f (R) gravity before taking the dual transformation. For concreteness of the discussion, we take the simplest model with an additional matter superfield in eq. (4.8). The same conclusion follows even in the absence of an additional matter. The higher-curvature action can be obtained by solving E.O.M. for T and Y and imposing the constraints for S and X. Here we introduce S 1 ≡ cS 0 S + R g as an extra matter and solve the modified constraint (4.1) for Z. After introducing the quadratic term of X, the original action takes the form JHEP05(2018)102 withF(X) ≡ cXG(X) and where a and b are real (positive) parameters. Note that X now does not have the Ricci scalar in the lowest component but a higher-derivative superfield made out of S 1 . This means that the higher-derivative term of R g is now replaced by that of S 1 , and hence the higher-curvature term does not show up. By expanding the action explicitly, one can check that this action has Ricci scalar terms up to the quadratic order. We note that, however, this does not lead to the conclusion that the ghost is removed by the additional matter: since there still exist higher-derivative terms of S 1 , the ghost mode can arise from such terms. One may also confirm that the absence of the higher curvature terms R n (n ≥ 3) is not an artifact of field redefinition. We can show that in this specific matter coupled model, the higher-curvature terms exist only in the off-shell action before substituting the solution of the E.O.M for the auxiliary field in V R . We stress that this conclusion does not mean that the higher-curvature modification is removed by the ghostbuster mechanism. As we claimed above, the resultant system has scalar curvature terms only up to the quadratic order, as the simplest Cecotti model does [56]. However, the coupling of the resultant system is completely different from the Cecotti model. In our dual matter coupled system in section 4.2, Kähler potential (4.9) takes the form whereas, in the Cecotti model, it can be written as where T, Y and T c are chiral superfields. The difference of the Kähler potentials leads to a different moduli space geometry. Interestingly, all T, Y and T c have the hyperbolic geometry structure, which is applicable to the so-called inflationary α-attractors [71,72]. In the α-attractor inflation, we take the moduli space K = −3α log(Φ +Φ) for an inflaton superfield Φ, and the value of the parameter α has a relation to the tensor to scalar ratio r as r = 12α N 2 , where N is the number of e-foldings at the horizon exit. In our model, we have α = 1 3 and 2 3 , whereas the Cecotti model has α = 1. 
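As a quick numerical illustration of this difference (taking N = 60 e-foldings, a typical value that is not fixed by the text),
\[
r=\frac{12\alpha}{N^{2}}\bigg|_{N=60}\approx
\begin{cases}
1.1\times10^{-3}, & \alpha=\tfrac13,\\[2pt]
2.2\times10^{-3}, & \alpha=\tfrac23,\\[2pt]
3.3\times10^{-3}, & \alpha=1\ \text{(Cecotti model)}.
\end{cases}
\]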
If we apply our model to inflation, we would find a value of tensor to scalar ratio r different from that of the Cecotti model. Therefore, the higher-curvature modification has physical consequences even though the higher-order scalar curvature terms seem to disappear after the ghostbuster mechanism. Since the construction of the inflation model is beyond the scope of this paper, we leave it as future work. Conclusion We have applied the ghost buster method to a higher-curvature system of SUGRA. It has been known that once we introduce a higher scalar curvature multiplet Σ(R), a ghost mode JHEP05(2018)102 generically shows up in the system as we reviewed in section 2. The ghostbuster method requires a nontrivial U(1) gauge symmetry with a non-propagating gauge superfield. It turned out that the required U(1) symmetry should be the gauged R-symmetry in the case of the higher-curvature system, since the ghost arises from the gravitational superfield. Due to the uniqueness of the gauge charge assignment, it is nontrivial that if the ghostbuster method is applicable to remove the ghost. As we have shown in section 3, thanks to the nonzero U(1) charge of "would-be" ghost mode, we can eliminate the ghost mode and obtain a ghost-free action. However, the resultant ghost-free system turned out to be unstable because of the scalar potential instability. Such an instability is easily cured by introducing matter fields, which would be necessary for realistic models. Additional matter superfields can stabilize the scalar potential if we choose proper couplings between gravity and matter multiplets. We have also discussed how the ghostbuster mechanism can be seen in the highercurvature system in section 5. We have found that the higher-order scalar curvature terms R n with n ≥ 3 are eliminated in using the mechanism, and the resultant system has the scalar curvature up to the quadratic order. However, the higher-curvature modification is not completely eliminated by the mechanism. We find moduli space geometry different from the known R + R 2 supergravity [56]. Therefore, despite the absence of f (R) type interactions in the final form, the SUSY higher-order curvature corrections give physical differences. In particular, the difference of the moduli space structure might be useful for constructing inflationary models. In this work, we did not discuss the elimination of ghosts originated from higherderivative terms of matter superfields. It is a straightforward extension of our previous work [1] for global SUSY to SUGRA and is much easier than the higher-curvature model discussed in this paper, since the U(1) charge assignment is not unique for matter higherderivative models. Since the higher-derivatives of matter fields in SUGRA requires the compensator S 0 , it would be interesting to assign the U(1) charge to the compensator as well, i.e. we can use U(1) R-symmetry for the ghostbuster mechanism as with the highercurvature case, which is only possible for the SUGRA case. Let us mention the applicability of our mechanism to the other SUGRA formulations, where the auxiliary fields in the gravity multiplet are different. Our mechanism is not applicable for the so-called new minimal SUGRA formulation [73], since the compensator is a real linear superfield, which cannot have any U(1) charge. For the non-minimal SUGRA case, it would be possible to assign a nontrivial U(1) charge to complex linear compensator. 
In addition, it is known that the R 2 model of non-minimal SUGRA has a ghost mode in the spectrum, so it is interesting to see if the ghost can be removed by our mechanism. A Superconformal tensor calculus Here we give a brief summary of the superconformal formulation. We use the convention η ab = diag(−1, 1, 1, 1) for the Minkowski metric. In 4D N = 1 conformal SUGRA, we have the super-Poincaré generators {P a , M ab Q α }, and the additional superconformal generators, {D, A, S α , K a }. They correspond to the translation P a , the Lorentz rotation M ab , the SUSY Q α , the dilatation D, the chiral U(1) A, the S-SUSY S α and the conformal boost K a , respectively. Such additional gauge degrees of freedom are technically useful for the construction of the SUGRA action. In conformal SUGRA, a supermultiplet is characterized by the charges under D and A denoted by w and n, respectively. We introduce one particular supermultiplet called the compensator, whose components are auxiliary fields or removed by the superconformal gauge fixing. In this paper, we use a chiral superfield as the compensator, which gives the so-called old-minimal SUGRA after the superconformal gauge fixing. In the following, we summarize the component expressions of supermultiplets, the chiral projection operation, the invariant formulae and some identities. where ζ c and λ c are the charge conjugates of ζ and λ, respectively. Note thatC has the D and A charges (w, −n). Multiplication law. Here We show the multiplication rule of supermultiplets. Suppose C I ∈ G (w I ,n I ) and consider a function f (C I ) ∈ G (w,n) . The component of f (C I ) is given by f (C I ) = f (C I ) , f I ζ I , f I H I + · · · , f I K I + · · · , f I B I a + · · · , f I λ I + · · · , where ellipses denote terms containing fermions and f I , f IJ are derivatives defined as and the covariant derivative of C I is given by Note that since (w, n) are additive quantum numbers, the following relations are satisfied,
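The additivity statement that closes this appendix presumably refers to relations of the following kind (a sketch only, since the displayed relation itself was lost in extraction): for supermultiplets C_1 ∈ G_{(w_1,n_1)} and C_2 ∈ G_{(w_2,n_2)},
\[
C_1 C_2 \in \mathcal G_{(w_1+w_2,\;n_1+n_2)},\qquad
C^{\,k} \in \mathcal G_{(k w,\;k n)},\qquad
\bar C \in \mathcal G_{(w,\,-n)},
\]
so that a function f(C^I) can carry definite charges (w, n) only if it is suitably homogeneous in its arguments.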
7,270.2
2018-05-01T00:00:00.000
[ "Physics", "Geology" ]
Finite-time passivity for neutral-type neural networks with time-varying delays – via auxiliary function-based integral inequalities ∗ . In this paper, we investigated the problem of the finite-time boundedness and finite-time passivity for neural networks with time-varying delays. A triple, quadrable and five integral terms with the delay information are introduced in the new Lyapunov–Krasovskii functional (LKF). Based on the auxiliary integral inequality, Writinger integral inequality and Jensen’s inequality, several sufficient conditions are derived. Finally, numerical examples are provided to verify the effectiveness of the proposed criterion. There results are compared with the existing results. Introduction Recently, neural networks have received much attention of their extensive applications in signal processing, solving optimization problems, pattern recognition, pattern classification, image processing, model identification and other engineering fields. The stability problem of neural networks with time-varying delays has been deeply investigated in [8-10, 12, 25, 37]. Time-delay phenomena are inevitable in studying real systems. The existence of time delay makes the system dynamic performance worse or even leads to system instability. Therefore, the stability and control problem of time-delay system have attracted a lot of scholars attention, and some nice results have been obtained on linear and nonlinear time-delay neural networks during the past few decades. Moreover, the delay-dependent stability conditions are generally less conservative than delay-varying delays have been investigated by employing LMI technique in [22]. In [27], the authors have studied finite-time neutral delay uncertain neural networks. Passivity analysis for neural networks of neutral type has been studied in [32]. The passivity analysis for memristor-based stochastic BAM neural networks of neutral type was presented in [26]. To the best of the authors' knowledge up to now, the finite-time passivity of neural networks with neutral-type time-varying delays has not been completely studied in the literature, which motivates our research in this paper. With the above motivation, in this article, the issue of finite-time boundedness and finite-time passivity criteria of neutral-type neural networks with time-varying delay based on the auxiliary function-based integral inequality technique is explored. As result, in this note, there still exists some less conservatism for neural networks with interval timevarying delay to be further improved. To achieve this, at the end, several numerical examples are addressed to show the effectiveness of the developed stability criteria. The highlights and major contributions of this paper are reflected in the subsequent key points: (i) In this paper, we considered the system with time-varying delays, additionally the effect of neutral delay has also been taken into account to showing feasibility on a problem. (ii) Some simplest LMI-based criterion has been launched with the help of integral inequality technique together with the auxiliary function-based integral inequality combined with Writinger integral inequality, Jensen's inequality. (iii) Then we derived finite-time boundedness, finite-stability and finite time passivity conditions in the theorems. (iv) Several examples have been investigated to verify the correctness of the main theorem and the corollaries. The outline of the paper is structured as follows. 
In Section 2, the system models and some necessary mathematical preliminaries are declared. In Section 3, we present the main results for the neural network model in which neutral delay is taken into account. Simulation examples are given in Section 4, and conclusions follow in Section 5. Notations. R n denotes the n-dimensional Euclidean space, and R m×n is the set of all m × n real matrices. The superscript "T" denotes matrix transposition, and A B (respectively, A < B), where A and B are symmetric matrices (respectively, positive definite). · denotes the Euclidean norm in R n . If Q is a square matrix, λ max (Q) (respectively, λ min (Q)) means the largest (respectively, smallest) eigenvalue of Q. The asterisk " * " in a symmetric matrix is used to denote term, which is induced by symmetry; diag{·} stands for the diagonal matrix. Problem formulation and preliminaries Consider the neutral-type neural networks with time-varying delays as follows: http://www.journals.vu.lt/nonlinear-analysis where x(t) ∈ R n is the neural state vector, v(t) is the exogenous disturbance input vector belongs to L 2 [0, ∞), and y(t) is the output vector of the neural networks, f (x(t)) is the neuron activation function, A = diag{a 1 , a 2 , . . . , a n } > 0 is a diagonal matrix, B, C, D and E are connection weight matrices. φ(θ) denotes the continuous vector-valued initial function. h(t) denotes the time-varying delay, and d is neutral delay. We define the interval t k+1 − t k =h k + ∆h k h + ∆ h (t). Here |∆ h (t)| < ρ < h, where ρ is very small scalar. The intervals can be written as Assumption 1. For a given positive parameter δ, the external disturbance input w(t) is time varying and satisfies For presentation convenience, we denote (b) Under zero initial condition, the following relation hold for a given positive scalar γ > 0: Lemma 1. (See [20].) For a positive definite matrix M , a differentiable function x(u), u ∈ (α, β), and a polynomial auxiliary function p i (u) = (u−α) i , the following inequality holds for 0 n 3: Lemma 2. (See [21].) For any constant matrix M > 0, the following inequality holds for all continuously differentiable function x in [α, β] → R n : ∈ R n such that the following integration is well defined, then 3 Main results Finite-time boundedness In this section, we investigate finite-time boundedness for the following delayed neural networks (1)- (3): where φ(θ) is a continuous vector-valued initial function, and we define the following vectors: Theorem 1. For given scalars h, µ, d, δ, α, β, c 1 , c 2 and T , the neural networks (4)- (5) is finite-time bounded if there exist positive symmetric matrices P , Q i (i = 1, 2, . . . , 10), any diagonal matrices U , S and matrices N 1 , N 2 with appropriate dimensions such that the following LMIs holds:Θ where where Consider Proof. Consider the following Lyapunov-Krasovskii functional: Then we calculating the time derivative of V (x(t)): Using Lemma 2 in (11), we can geṫ and applying Lemma 1 inV 5 (x(t)), we geṫ By applying Lemma 3 we geṫ By Lemma 3 we obtaiṅ Also, by using the Lemma 3 we can geṫ By Lemma 3 we getV Furthermore, the following equality holds for any real matrices N 1 and N 2 with compatible dimensions: Based on Assumption 2, for i = 1, 2, . . . , n, we obtain which is equivalent to where m i denotes the unit column vector having 1 on its ith row and zeros elsewhere. Let U = diag{u 1 , u 2 , . . . , u n }, S = diag{s 1 , s 2 , . . . , s n }. Proof. The proof is similar to that of Theorem 1, so it is omitted here. Proof. 
By using the LKF and following lines similar to those in the proof of Theorem 1, we can obtain ξ^T(t) Φ ξ(t) < 0. Conclusion In this article, we investigated the finite-time passivity of neutral-type neural networks with time-varying delays. By applying the Jensen-type integral inequality technique, a delay-dependent criterion is developed to achieve finite-time boundedness and finite-time stability for the neutral-type neural networks. Based on the proposed multiple-integral forms of the Wirtinger-based integral inequality and the auxiliary function-based integral inequalities for the high-order case, a novel delay-dependent condition is established to achieve finite-time passivity of the neural networks. Numerical examples show the effectiveness of the theoretical results and their superiority to existing results. The proposed technique can be extended to finite-time stabilization and synchronization problems, such as finite/fixed-time pinning synchronization of complex networks with stochastic disturbances [17], discontinuous observer design for finite-time consensus of multiagent systems with external disturbances [16], and nonsmooth finite-time synchronization of switched coupled neural networks [15]; these extensions are left for future work.
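As a concrete illustration of the class of systems analysed in this paper, the following is a minimal forward-Euler simulation sketch of a neutral-type delayed network. One common form of the model is assumed; the exact arrangement of B, C, D, E in Eqs. (1)-(3) is not reproduced here, and every numerical value is an illustrative placeholder.

```python
import numpy as np

# Forward-Euler sketch of a neutral-type delayed neural network.  One common form,
#   dx/dt(t) - E dx/dt(t-d) = -A x(t) + B f(x(t)) + C f(x(t-h)) + D v(t),  f = tanh,
# is assumed here; matrices, delays and the disturbance are illustrative placeholders.
n, dt, T_end = 2, 1e-3, 10.0
h, d = 0.5, 0.3                          # constant delays for the sketch
A = np.diag([1.2, 1.0])
B = np.array([[0.2, -0.1], [0.1, 0.3]])
C = np.array([[-0.3, 0.1], [0.2, -0.2]])
D = 0.05 * np.eye(n)
E = 0.10 * np.eye(n)                     # neutral-delay coefficient

steps = int(T_end / dt)
nh, nd = int(h / dt), int(d / dt)
x = np.zeros((steps + 1, n))
xdot = np.zeros((steps + 1, n))          # zero derivative history is assumed
x[0] = [0.5, -0.4]                       # constant initial state history

for k in range(steps):
    xh = x[max(k - nh, 0)]               # x(t - h)
    xdd = xdot[max(k - nd, 0)]           # dx/dt(t - d)
    v = 0.01 * np.exp(-0.5 * k * dt) * np.ones(n)   # bounded disturbance (Assumption 1)
    xdot[k] = -A @ x[k] + B @ np.tanh(x[k]) + C @ np.tanh(xh) + D @ v + E @ xdd
    x[k + 1] = x[k] + dt * xdot[k]

print("state at t =", T_end, ":", x[-1])  # decays towards the origin for these values
```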
1,855
2020-03-02T00:00:00.000
[ "Mathematics", "Computer Science" ]
Research of an optical device based on an anisotropic epsilon-near-zero metamaterial In this work, a novel design of an electro-tunable narrow channel based on an anisotropic epsilon-near-zero metamaterial is presented. The ENZ condition can be flexibly tuned by an applied gate voltage. This permittivity-tunable channel is composed of periodic alternating layers of graphene and nanoglass with a thickness of 3 nm. Additionally, a dual output light modulator is utilized to expand its application. Numerical analysis results show that the maximum transmittance of the incident light can reach 96.7%, and the extinction ratio of the device is 14.8 dB when the gate voltage is added to 4.96 V at the near-infrared wavelength. This ultracompact optical device may open a new realm in highly integrated photonic circuits, especially on the nano-chips. Introduction In the past few decades, artificial electromagnetic (EM) metamaterials based on Epsilonnear-zero (ENZ) photonics have attracted widespread attention all over the world (Ioannidis et al. 2020;Askari and Hosseini 2020;. These materials that do not exist in nature have been proposed and prepared due to their almost arbitrary effective permeability and permittivity. Moreover, these metamaterials have unique optical properties of achieving abnormal regulation of light (Gric, et al. 2018;Tatjana, et al. 2017). The zero-index materials (ZIM), as a specific type of metamaterials, also have become a popular material in scientific research (Hui et al. 2012;Efazat et al. 2020). They are divided into epsilon-near-zero (ENZ) metamaterials, mu-near-zero (MNZ) metamaterials, and matched impedance zero-index metamaterials (MIZIM) (Niu et al. 2018). Because the refractive index of these metamaterials tends to zero, there will be a constant phase advance when the EM wave goes through them. At the same time, there will be other exciting phenomena such as the ability to squeeze and tunnel (Silveirinha and Engheta 2006), cloaking (Kundtz and Smith 2010;Papasimakis et al. 2013) and enhanced coupling (Ourir et al. 2013), etc. All the extraordinary optical phenomenon in the ZIMs may guide potential applications on ultra-energy-efficient, all-optical switching devices for future optical communication and computation. The concept of zero-index material was originated from the proposal of negative refractive index materials by Pendry (2000) and verified by experiments in 2008 (Edwards et al. 2008). By tailoring epsilon to have either a negative or a positive value, the negative refraction, and perfect lenses have been demonstrated (Smith et al. 2004). Up to now, there are many methods to obtain and regulate zero-index materials. For example, Maas et al. (2013) found that a zero-index material can be achieved by a multi-layers materteral with alternating positive and negative dielectric according to the effective medium theory in 2013. The common one is that silver can be regarded as a material with a negative dielectric constant in the visible spectral range, while silicon is a dielectric with a positive one. Once they were stacked, the two materials construct a zero refractive index material in the visible range. In addition, the regulation of light based on ZIMs has been presented and applied in many fields. For example, Xu et al. found that embedded defects in the zero-index material will affect the reflection and transmission of light (Xu 2011;Wu and Li 2013;Huang and Li 2015). 
Their results showed that the transmittance of the incident TM wave would be influenced by the sizes, quantities, and permittivity of defects once they are embedded in the zero-index material, which is difficult to change those properties again. Nanoglass, as a low permittivity material (Reynard et al. 2002), has excellent optical properties. It has a higher light response speed and a giant optical nonlinearity under stable light conditions (Danilov et al. 2016;Wu et al. 2020). Graphene is a two-dimensional material favored by many researchers in optoelectronic research fields over the years, which is composed of a single layer of hexagonally arranged carbon atoms (Falkovsky 2008). Various applications and devices based upon graphene have been widely used in many fields, including antennas (Nair et al. 2008), waveguides (Xu et al. 2018), and switches (Lu 2012) due to the unique electrically tunable optical property of graphene. Here, inspired by the effective medium theory and the optical property of graphene, an ultracompact epsilon-near-zero metamaterial channel composed of alternating layers of graphene and nanoglass is presented. Only if the permittivity of the proposed channel is tuned to zero value, the incident waves can pass through this waveguide structure in a lower power dissipation under a tunable gate voltage condition. Compared to the defects structure mentioned before, this waveguide structure is more available to realize light modulation. Besides, a dual output light modulator has been illustrated in our work whose characterize is that the output light of two output ports can be arbitrarily selected. Figure 1a depicts a straight bend waveguide structure with two parallel Si 3 N 4 waveguides and a graphene-nanoglass metamaterial channel. These two parallel waveguides are connected by the ultracompact channel. The whole structure is sealed by perfect electric conductors (PEC) to prevent the light energy from leaking out. Considering some practical applications based on Ref Yang et al. (2014a), among four sorts of the common metal including Au, Ag, Al, and Cu, we can choose Au instead of PEC to reduce the light propagation loss. The incident wave is input from the left port and output from the right one. The way of metamaterial composed of alternating layers of graphene and nanoglass has been shown in Fig. 1b. The period thickness of the structure is 3 nm with a 0.6 nm graphene sheet and a 2.4 nm nanoglass layer, and the width of the two Si 3 N 4 waveguides is set as a 1 = a 2 = 330 nm whose permittivity is 2.1 (Chen et al. 2009;Anh Pham et al. 2010). Since the width of the Si 3 N 4 waveguides (330 nm) is much smaller than the incident wavelength of 1550 nm, only the fundamental transverse electromagnetic mode can exist in the left waveguides (Emadi et al. 2018;Xiao and Rui 2013). Due to the specificity of our structure, only TM mode can be transmitted through in the z-direction. Therefore, the TM 0 mode is selected as the polarization mode to analyze the light distribution of the metamaterial structure in our simulation. Design consideration and theoretical model In addition, the transmission coefficient of the structure with two 90° bends is expressed as (Silveirinha and Engheta 2007): where k x = √ 0 yy 0 r is the x-direction component of the wave vector, yy is the vertical direction effective permittivity of the metamaterial, d is the length of the ENZ channel, . 1 a The sectional view of the waveguide structure together with the construction of the metamaterial. 
The incident wave is input from the left port. It is squeezed and tunneled through the metamaterial channel and output from the right port. b The detailed schematic diagram of a 12 nm thickness of the metamaterial consisted of graphene sheets and nanoglass layers. c Sketch of the Air/Si 3 N 4 /metamaterial structure and r = 1 is the relative permeability of that. According to formula (1), the transmission coefficient can be adjusted to 1 approximately if the corresponding size is designed appropriately due to the loss of the anisotropic ENZ material being neglected. Based on Ref Yang et al. (2014b), the length of the metamaterial d is better to set to be 15 nm in our structure, which can achieve a large extinction ratio (ER) and an acceptable insertion loss. In order to design a tunable optical device based on graphene-nanoglass metamaterial, we should first verify the optical properties of the proposed materials. The optical properties of graphene are mainly affected by the electrical conductivity and equivalent permittivity. Hence, tuning the optical property of graphene by changing the gate voltage has been widely used. According to Kubo formulas (Stauber et al. 2008;Efetov and Kim 2010), graphene's conductivity comprises two parts: the intraband and the interband. where e represents elementary charge (e = 1.6 × 10 -19 C), is the angular frequency of the incident wave, ℏ is the reduced Planck constant ( ℏ = h∕2 , h = 6.62 × 10 -34 J•s is the Planck constant), f d ( ) is the Fermi-Dirac distribution, and represents the relaxation time which is closely related to the carrier mobility and the chemical potential c. where k B is the Boltzmann constant, T = 300 K is the Kelvin temperature ( k B ⋅ T = 0.026 eV), is set to 1000 cm 2 /(V•s), and vF = 1 × 10 -6 m•s −1 is the Fermi velocity. Because the sheet graphene is treated as an extremely thin film, the relationship between the surface permittivity (along the x-and z-directions) of graphene can be expressed as: where 0 is the dielectric constant in vacuum, Δ represents the thickness of graphene, and we set the thickness of graphene to be 0.6 nm in all the simulations. Considering the normal electric field cannot excite any current in the y-direction, so the normal component of the graphene's permittivity should be 1. Based on formula (7), we can get the equivalent permittivity of graphene is related to its conductivity and angular frequency. Furthermore, the conductivity will be affected by c which can be controlled by an applied voltage. According to the effective medium theory, the permittivity tensor of this structure can be obtained from Ref. Ding et al. (2013), Zhu et al. (2013): where fg = 0.2 is the filling factor of graphene, d = 1.3 is the permittivity of nanoglass (Reynard et al. 2002), 0 = 5.65 × 10 16 m −2 V −1 , V 0 is the voltage offset caused by the natural doping (Ao et al. 2014), and V g stands for the applied voltage. According to formula (8), the metamaterial's permittivity tensor along the x and z-direction can be tuned by the chemical potential c or the incident wavelength. Based on the formula (9), we can get the vertical direction effective permittivity of the metamaterial is a constant as 1.226. For a given wavelength such as 1550 nm, there is an identical chemical potential c corresponding to the ENZ point. All the simulations were investigated by using the commercial software COMSOL Multiphysics based on the finite element method (FEM). 
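Before turning to the results, the permittivity model just described can be evaluated numerically. The sketch below uses standard forms of the intraband (Drude-like) and zero-temperature interband Kubo terms together with parallel/series effective-medium mixing, with the geometry stated above (Δ = 0.6 nm, f_g = 0.2, ε_d = 1.3). The exp(-iωt) sign convention, the value of the relaxation time τ and the use of the T → 0 interband approximation are assumptions, and the numbers are not intended to reproduce the paper's curves exactly.

```python
import numpy as np

# Illustrative evaluation of graphene's Kubo-type sheet conductivity and the
# effective-medium permittivity of the graphene/nanoglass stack (sketch only).
e, hbar, kB, eps0, c0 = 1.602e-19, 1.0546e-34, 1.381e-23, 8.854e-12, 2.998e8
T, tau = 300.0, 1.0e-13                        # temperature, assumed relaxation time
Delta, f_g, eps_d = 0.6e-9, 0.2, 1.3           # graphene thickness, fill factor, nanoglass

def sigma_graphene(omega, mu_c):
    """Intraband (Drude) term plus the common zero-temperature interband term."""
    intra = (2j * e**2 * kB * T) / (np.pi * hbar**2 * (omega + 1j / tau)) \
            * np.log(2.0 * np.cosh(mu_c / (2.0 * kB * T)))
    inter = (e**2 / (4.0 * hbar)) * (
        np.heaviside(hbar * omega - 2.0 * mu_c, 0.5)
        + (1j / np.pi) * np.log(abs((2.0 * mu_c - hbar * omega)
                                    / (2.0 * mu_c + hbar * omega))))
    return intra + inter

lam, mu_c = 1550e-9, 0.58 * e                  # wavelength and chemical potential
omega = 2.0 * np.pi * c0 / lam
eps_g = 1.0 + 1j * sigma_graphene(omega, mu_c) / (eps0 * omega * Delta)  # in-plane
eps_xx = f_g * eps_g + (1.0 - f_g) * eps_d     # parallel (in-plane) mixing
eps_yy = 1.0 / (f_g / 1.0 + (1.0 - f_g) / eps_d)  # series mixing, normal eps_graphene = 1
print("eps_xx =", eps_xx)
print("eps_yy =", eps_yy)                      # 1.226, the constant quoted in the text
```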
Results and analysis Based on the above analysis, we can get the relationship between the equivalent permittivity and the chemical potential of grapheme c . Figure 2a illustrates the horizontal permittivity xx of the metamaterial as a function of the chemical potential of grapheme c for a specific incident wavelength ( = 1550 nm).Real( xx1 ) represents the real permittivity of grapheme-nanoglass structure (G-N) in our work and real( xx2 ) shows the previous one of grapheme-silica structure (G-S) by contrast (Yang et al. 2014b). It can be found that the real part xx can be changed from a positive value to a negative one with the increasing chemical potential, which demonstrates the real part of permittivity can be tuned nearly equal to zero at a fixed wavelength. Owing to the chemical potential (the Fermi energy level) being higher than the half photon energy (≈0.4 eV), the electrons in graphene cannot make the interband transition resulting in the imaginary part of permittivity is always a small value (Klimchitskaya and Mostepanenko 2016). The results of the simulation are in line with the theoretical predictions. The anisotropic epsilon-near-zero metamaterial can be realized when c = 0.58 eV. Apparently, the ENZ point in our work is around c = 0.58 eV for = 1550 nm, which is lower than previous work for c = 0.689 eV (Yang et al. 2014b). That means our work only needs a lower applied voltage to arrive at ENZ condition. In modern electronic communications, the integration of electronic scales often requires nanoscale, and the reduction in device size also requires a further reduction in energy consumption (Kim et al. 2003;Denard 1974), so the reduced 0.109 eV in our structure is particularly important at the aspect of device design in the future. Figure 2b shows that the permittivity of metamaterial is a function of the wavelength of the incident light for a fixed chemical potential ( c = 0.58 eV). The horizontal permittivity of the metamaterial xx is -0.02517 + 0.0083i when the incident wavelength is 1550 nm, which is roughly equal to zero value achieving the squeezing and tunneling effect. The distributions of the transverse magnetic field along z-direction for ENZ point of c = 0.58 eV and non-ENZ point of c = 0.2 eV are shown in Fig. 2c, d. The light can pass through the structure at ENZ point and get blocked at non-ENZ point, which corresponds to the ON and OFF states, respectively. Since the chemical potential can be modified by tuning the external voltage, the working wavelength and the permittivity of the proposed light modulator can be controlled by tuning external voltage as well. Moreover, the transmittance of incident light is shown in Fig. 3. The red bar graph represents the result of graphene-nanoglass structure (G-N) in our work, and the blue one is the graphene-silica structure (G-N) in previous work (Yang et al. 2014b). Considering the Fig. 2 a Function of the real and imaginary parts of effective permittivity of the metamaterial in the horizontal direction and the chemical potential of graphene for a fixed wavelength λ = 1550 nm. The dashed line in the picture represents the previous work of graphene-silica structure. b The relationship between the horizontal permittivity of the metamaterial and various wavelengths of the incident wave for a fixed chemical potential c = 0.58 eV. c and d The magnetic field distribution inside the modulator in the z-direction Fig. 
3 The transmittance of the two structures against the different chemical potential c for the fixed incident wavelength λ = 1550 nm. a The metamaterial is composed of graphene-nanoglass in the structure. b The metamaterial is composed of graphene-silica in the structure loss of the ENZ metamaterial, the maximum transmission coefficients of the G-N metamaterial for c = 0.58 eV ( xx = -0.02547 + 0.083i) is 96.7%, and the one of the G-S metamaterial ( xx = -0.02949 + 0.007i) is 94.9% (Yang et al. 2014b) for c = 0.689 eV. Based on Eqs. (1) and (9), the transmission coefficient is related to the size of the structure and the vertical direction of effective permittivity of the metamaterial yy . Concerning the fact that the permittivity of the nanoglass used in the metamaterial is lower than that of silica, the smaller yy leads to a greater transmittance (improved about 1.8%), which is consistent with the results obtained in our work. Additionally, the expression of extinction ratio is shown in Eq. (11) from Ref. Soto and Soto (2005), Li et al. (2020): T max and T min represent the maximum and minimum values of the EM field energy, respectively. It is found that T min is 0.031 dB corresponding to the OFF state ( c = 0.2 eV) and T max is 0.935 dB is responded to the ON state ( c = 0.58 eV), so the extinction ratio (ER) is obtained as 14.8 dB in our structure. Furthermore, a dual output light modulator designed by this structure has been shown in Fig. 4a. The whole optical device is sealed by PEC as well. We assume that this structure is surrounded by air with permittivity of 1and the boundaries of the air layer are set as scattering boundary conditions. The size of this channel is the same as we proposed before (a 1 = a 2 = 330 nm), and a 3 is set to 165 nm after optimization in order to reduce the loss of incident light. The light is input from the bottom port and output from two ports at the top, Vc Vc Figure 4b, c demonstrate the magnetic field distribution inside the light modulator in the z-direction. The result shows that the incident light can pass through the narrow channel and arrives at the outports when the c = 0.58 eV meets the ENZ condition. Otherwise, the light will be blocked at the input port. Due to the structure designed with the advantage on the port selection aspect, this light modulator can freely make the wave pass the two output ports, respectively. By selecting different materials to construct the metamaterial, this modulator also has an important guiding significance in the selection of wavelengths such as the optical splitter. Considering the loss in the shape bends connecting the vertical and horizontal channels (about 20.6%), the energy of the left output is demonstrated in Fig. 5. The maximum energy of one output can be achieved at 31% when c is enlarged to 0.58 eV. With regard to the symmetry of the T-shaped structure, the maximum energy in our work can be reached about 62% overall. Conclusion In summary, the anisotropic epsilon-near-zero metamaterial channel whose permittivity can be tuned to near zero is designed in this paper. Our work focused on the role of nanoglass, as a low permittivity material, resulting in the device operating at a small applied voltage (reduced about 0.109 eV) and the transmittance of light is increased by 1.8%. And the extinction ratio of the device is obtained as 14.8 dB. The variation of the permittivity is influenced by different chemical potentials c and different wavelengths. 
For a fixed wavelength, the channel can be regulated by an applied voltage. In addition, the T-shaped light modulator designed here differs from previous work in that the output port can be selected freely. The maximum optical energy tunneled to the outputs reaches about 62% of the incident light. All these results are relevant to optical interconnects and highly integrated photonic circuits.
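As a quick cross-check of the extinction-ratio figure quoted above, the ON- and OFF-state values 0.935 and 0.031 are treated here as dimensionless transmittances, and Eq. (11) is assumed to have the usual form ER = 10 log10(T_max / T_min):

```python
import math

T_max, T_min = 0.935, 0.031        # ON (mu_c = 0.58 eV) and OFF (mu_c = 0.2 eV) states
ER = 10.0 * math.log10(T_max / T_min)
print(f"ER = {ER:.1f} dB")         # ~14.8 dB, consistent with the reported value
```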
4,121.6
2021-07-07T00:00:00.000
[ "Physics", "Engineering", "Materials Science" ]
An ABAQUS® plug-in for generating virtual data required for inverse analysis of unidirectional composites using artificial neural networks This paper presents a robust ABAQUS® plug-in called Virtual Data Generator (VDGen) for generating virtual data for identifying the uncertain material properties in unidirectional lamina through artificial neural networks (ANNs). The plug-in supports the 3D finite element models of unit cells with square and hexagonal fibre arrays, uses Latin-Hypercube sampling methods and robustly imposes periodic boundary conditions. Using the data generated from the plug-in, ANN is demonstrated to explicitly and accurately parameterise the relationship between fibre mechanical properties and fibre/matrix interphase parameters at microscale and the mechanical properties of a UD lamina at macroscale. The plug-in tool is applicable to general unidirectional lamina and enables easy establishment of high-fidelity micromechanical finite element models with identified material properties. Introduction Fibre-reinforced polymer (FRP) composite laminates have been widely used in aerospace, automotive, and wind energy industry due to their excellent material properties such as high stiffness-to-mass ratio, high strength, and light weight. Applications of FRP composite laminates to create engineering structure models fundamentally require mechanical properties as inputs. Experimental tests are ideal solutions to evaluate the mechanical properties of a composite lamina. However, it must be repeated whenever the constituents (fibre and matrix) and/or microstructure characteristics (fibre volume fraction) are altered. This procedure may, for instance, costs millions of dollars and lasts for years to generate the experimental data of mechanical properties for the design of aircraft structures [32]. To overcome the aforementioned drawbacks associated with experimental tests, various micromechanical approaches have been proposed to establish a closed-form relationship between elastic properties at the lamina scale and the elastic properties at the constituent scale. These methods fall generally into two categories, i.e., analytical and numerical methods. Analytical methods, such as the Rule of Mixture method [6], the Halpin-Tsai semi-empirical method [10], the Mori-Tanaka method [19], and the Chamis method [5], facilitate the calculation of elastic properties by a direct mathematical, empirical expression between constituent properties and elastic properties of the lamina. However, these methods have an inherent limitation in describing the stress and strain fields at microstructural scale mainly due to neglecting fibre interaction. With the development of computing capacities, numerical methods, in particular the finite element method (FEM), have become widely used tools for studying the behaviour of composites, including inverse analysis [8,15,26], elastic moduli [27], failure of composite lamina [29,33,34], and the effective coefficients of thermal expansion [13]. Inverse analysis has been used to identify fibre mechanical properties and fibre thermal expansion coefficient, and evaluate the factors of analytical methods, and so on. [26] determined the elastic and thermal properties of graphite fibre using inverse analysis. [3] predicted fibre properties using finite element analysis of hexagonal and random representative volume element (RVE) through inverse analysis. Similarly, [14] conducted an inverse analysis to predict fibre mechanical properties. 
However, they used quasi-analytical gradients derived from analytical models such as Chamis or Halpin-Tsai to reduce the computational cost. [15] utilised an inverse method to identify the mechanical properties of T300 carbon fibre as well as the interphase region parameters based on a computational homogenisation approach together with experimental results and Kriging metamodelling. [8] carried out an inverse analysis in the framework of FEM to estimate the reinforcement parameter ξ of the Halpin-Tsai models which is used to calculate transverse stiffness E 2 . A total number of 67 FE models of 2D square, 2D hexagonal, and 3D random fibre distributions were used to obtain a new value of ξ with a high level of confidence. Regardless of the inverse methods used or the purpose they are used for, a large number of FE analyses are required for converged solutions. However, constructing a micromechanical FE model is not a straightforward task and requires special treatments to impose boundary conditions, and generate microstructures including fibre distribution and fibre/ matrix interphase, and extract outputs, etc. These complexities impose a barrier of using inverse analysis by engineers and researchers. Recently, several ABAQUS plug-ins have been developed for the ease of creating micromechanical FE models. These plug-ins were developed either by the functions available in ABAQUS or by external software. An ABAQUS plug-in named MultiMech was developed to perform multi-scale finite-element analysis (FEA) with the capability of simulating nonlinear behaviours of composites [16]. Another ABAQUS plug-in for multilevel modelling of linear and nonlinear behaviour of composite structures [7,28]. The plug-in developed using Python scripts for analysing an RVE at microscopic level to obtain macroscopic parameters for structural analysis by user-defined FORTRAN subroutines in ABAQUS. Composite Micro-Mechanics (COMM) toolbox was developed in Matlab for micromechanical analysis of composites [17]. The toolbox creates an input file that can be read by ABAQUS which performs the FEA. Recently, EasyPBC plug-in was developed for ABAQUS to estimate effective elastic properties of a pre-prepared and meshed RVE [21]. While, the ABAQUS plug-in proposed by [24] is capable of generating an RVE with random fibre distribution using Random Sequential Adsorption (RSA) technique. The aforementioned plug-ins have shown outstanding benefits and capabilities to create and simulate complex RVEs of unidirectional (UD) FRP composite lamina. However, they are designed to generate and analyse a single model. Therefore, this paper aims to develop an open-source ABAQUS plug-in named Virtual Data Generator (VDGen) that automates the time-consuming manual task requires to create a large number of virtual data for inverse analysis. The plug-in uses Latin-Hypercube (LH) sampling methods and supports the unit cell of square and hexagonal fibre arrays. In addition, the plug-in incorporates Artificial Neural Networks (ANN) to explicitly parameterise the relationship between fibre mechanical properties and fibre/matrix interphase parameters and the mechanical properties of a UD lamina. The data required here were created in advance by the plugin and used to train the ANN model. Main plug-in GUI The concept of the plug-in arises from the need for a tool that helps to perform a large number of micromechanical FE simulations in a few simple steps. 
ABAQUS has different ways to increase its capabilities such as subroutines and/or adding new plug-ins. ABAQUS/CAE plug-in is one of the most powerful tools that can be used to perform pre-and post-processing via functions written in Python programming language in the kernel. The current plug-in operates through a series of user-friendly GUI commands send to the kernel to carry out tasks. The plug-in interface is shown in Fig. 1. It consists of six tab items that allow the user to navigate between them to edit input and output commands. For computational micromechanics modelling, the plug-in supports square and hexagonal unit cell fibre arrays. Despite fibres are usually randomly distributed in the matrix, it has been concluded by [31] that micromechanical modelling of the unit cell is accurate enough to predict the elastic properties of a UD lamina, while an RVE with randomly distributed fibres is essential to compute the local failure. Figure 2 shows a typical 3D unit cell of square and hexagonal fibre arrays of the fibre reinforced composite that the plug-in supports. The mechanical properties of each constituent, i.e., fibre, matrix and interphase, can be modified in the material section. There are two ways to input the value of constituent properties, either by a single value or using a domain of lower and upper bounds (lower-upper). A material property assigned with a single value remains unchanged throughout simulations. While for others, a random value in the range of (lower-upper) is generated at each training point using Latin-Hypercube sampling technique. The fibres and matrix are meshed using eight-node brick element with reduced integration (C3D8R). There were also a relatively small amount of six-node linear triangular prism elements (C3D6) due to the free meshing technique used. The interphase region is meshed with eight-node cohesive elements (COH3D8). To maintain matched meshes between the cohesive elements and the fibres and matrix elements, a suitable number of nodes are seeded to the interphase and its neighbours. The elastic behaviour of the cohesive elements is written in terms of a stiffness matrix that relates the nominal stresses to the nominal strains across the interphase. The nominal traction stress vector t consists of three components, t n , t s , t t , which represent the normal and two shear tractions, respectively. The corresponding separations are denoted by δ n , δ s , and δ t , and the original thickness of the cohesive element is denoted by T. Then, the nominal strains can be defined as Therefore, the elastic behaviour of the cohesive element can be written in Eq. (1). For simplicity of computation, uncoupled behaviour between the normal and shear components is desired, so the off-diagonal terms in the elasticity matrix are set to be zero and the stiffness in the two shear directions are assumed to be equal [1,33] The plug-in imposes Periodic Boundary Conditions (PBC) on the corresponding surfaces of the unit cell to ensure the compatibility of strain and stress at the macroscale level. These consist of a series of constraints in which the deformation of each pair of nodes on the opposite surfaces of the unit cell is subject to the same amount of displacements. The PBCs are expressed in terms of the displacement vectors �� ⃗ U 1 , �� ⃗ U 2 , and �� ⃗ U 3 that are related to the displacements between the opposite surfaces by where L 1 , L 2 , and L 3 are the lengths of the unit cell along with three orthogonal directions, respectively. 
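The two displayed relations referred to in this passage, the uncoupled traction-separation law and the periodic constraints, can be written out explicitly. The following is a sketch consistent with the description above rather than a verbatim reproduction of the paper's Eq. (1) and PBC equations:
\[
\begin{pmatrix} t_n \\ t_s \\ t_t \end{pmatrix}
=
\begin{pmatrix} K_{nn} & 0 & 0 \\ 0 & K_{ss} & 0 \\ 0 & 0 & K_{tt} \end{pmatrix}
\begin{pmatrix} \varepsilon_n \\ \varepsilon_s \\ \varepsilon_t \end{pmatrix},
\qquad
\varepsilon_n=\frac{\delta_n}{T},\quad \varepsilon_s=\frac{\delta_s}{T},\quad \varepsilon_t=\frac{\delta_t}{T},
\qquad K_{ss}=K_{tt},
\]
and, for each pair of nodes on opposite faces of the unit cell,
\[
\mathbf u\big|_{x_i=L_i}-\mathbf u\big|_{x_i=0}=\vec U_i,\qquad i=1,2,3,
\]
where the \(\vec U_i\) are the displacement vectors prescribed at the reference points.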
PBC requires matching nodes on opposite sides of the unit cell. Hence, elements of equal size are assigned to the edges of the unit cell to ensure periodic mesh required for PBC. The Output tab allows the user to select appropriate results that suit the work. The macroscopic normal and shear strain components are calculated by The macroscopic stress is calculated as where F i is the resultant force on the ith surface which represents the reaction force at a reference point where the displacement is applied, and A is the area of the surface. Therefore, Young's modulus, Poisson's ratio, and shear modulus are, respectively, calculated from Eqs. (7), (8), and (9) The flowchart of the pre-and post-processing procedure of the plug-in is described in Fig. 3. The user is to define the analysis of data and the required outputs, as given in Fig. 3. When the 'OK' or 'Apply' button is clicked, the plug-in creates three files to be called in the subsequent steps. ExperimentNAME.dat file contains all input commands which are given by the user in the GUI interface. These commands are vital to creating the FE model. Also, this file provides an opportunity to modify the input commands using exact plug-in keywords in the file. ~ Temp.txt is dedicated to storing the experiment name the user wants to run. Material properties values created by Latin-hypercube sampling are stored in a tabular format in Experi-mentNAME.csv file, which makes them easier to read and process by external software. Once these files are created, the Python script (Execute.py) can be submitted for analysis from ABAQUS command environment using either dos ('abaqus cae script = Execute.py') or dos ('abaqus cae noGUI = Execute.py') command. It is strongly recommended to execute it using the latter in which ABAQUS/ CAE runs commands in Execute.py without the added expense of running a GUI display. Whichever option adopted to perform the analysis, the plug-in continuously provides the user with useful information, e.g., number of jobs done, number of jobs remain and approximate time to complete. Upon running, the plug-in instantly creates two files to store outputs after completion of each job and to record errors that occurred during the analysis. Numerical example: prediction of effective elastic properties To validate the newly developed plug-in, the effective elastic properties of carbon fibre/epoxy (T300/PR-319) and glass fibre/epoxy (E-Glass/MY750) were determined and compared with EasyPBC plug-in developed by [21] as well as the experimental data [12]. The mechanical properties of the fibre, matrix, and interphase are given in Table1. Since T300 carbon fibre is classified as a transversely isotropic material, the elastic properties highlighted by an asterisk * symbol in the table are obtained by applying the following relations: The E-glass is considered as isotropic material. It is important to note that the input data for the interphase region are not accurately known as they are difficult to measure from simple laboratory experiments. However, an initial stiffness K i of 10 5 GPa/mm is used in [2,18,25,30] to simulate the elastic behaviour of the RVE model. In this paper, the elastic parameters from [15] are used to for the interphase as an approximation. Table 2 shows the comparison of the predicted effective elastic properties determined by VDGen and EasyPBC plug-ins. It is noted among the prediction results that VDGen provides reliable results that are identical to those from EasyPBC. 
However, an obvious discrepancy exists between the experimental results and those predicted by the two plug-ins. This is mainly due to the inaccurate parameters used for the interphase region. Figure 4a shows the stress contours of a unit cell under transverse loading, and Fig. 4b shows the stress contours for in-plane shear loading conditions. The periodic distribution of the stress contours can be seen, which is an additional verification of the PBC. Background In this section, a machine learning (ML) technique, the Artificial Neural Network (ANN), is used to construct a relationship between the fibre and interphase parameters and the effective elastic properties of the lamina. The ANN is inspired by the structure and function of the animal brain, which learns from previous examples. An ANN consists of three main layers: an input layer, one or more hidden layers, and an output layer. Each layer has several neurons, which are responsible for transmitting weights and biases (equivalent to the chemical and electric signals in the animal brain) between two layers. Figure 5 illustrates the typical structure of a single neuron, where each input (x_i) coming from the previous layer is multiplied by the individual weight of its connection (w_i) and then summed up with the bias [23]. This sum is then composed with the activation function (f), resulting in the output a = f(Σ_i w_i x_i + b). Another key step of an ANN is a defined objective function that is to be minimised during the training process. Mean Squared Error (MSE) and Sum Squared Error (SSE), among others, are examples of functions used to assess the network's behaviour by measuring the errors between the output and the target. The errors are reduced by tuning the values of the weights and biases through so-called back-propagation. Back-propagation is widely recognised as a powerful tool for training ANNs efficiently. Several algorithms have been proposed to address the slow convergence associated with back-propagation. However, it is quite difficult to decide which algorithm is more computationally efficient, as this depends on many factors. Readers may refer to the comparative studies carried out to evaluate the accuracy and convergence time of different algorithms [4,9]. ANN model to identify micro-parameters An ANN model is developed to identify the micro-parameters, e.g., the fibre and fibre/matrix interphase parameters. The relationship between micro- and macro-properties in a UD lamina is fairly complex and nonlinear. Moreover, the number of micro-parameters to be identified is usually larger than the number of macro-properties, which makes the identification with an ANN a complicated task. Therefore, to ease the training process, the micro-parameters are set to be the input layer of the neural network model and the macro-properties form the output layer. However, the calculation of the optimal micro-parameters becomes difficult when they are in the input layer, as it is not possible to obtain an analytical inverse solution with an ANN model that has multiple neurons in the hidden layer. This issue is overcome by using the trained ANN to enlarge the dataset. The details of the model building are explained in the following section. Model building The whole procedure of the fibre and interphase parameter identification using the ANN is illustrated in Fig. 6. Firstly, a total of n = 500 FE models were created by VDGen using the procedure outlined in Sect. 3. In each model, a random value of each parameter to be identified is created by LH sampling within the range given in Table 3.
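A minimal sketch of this sampling step is shown below, using SciPy's Latin-Hypercube generator to draw the seven micro-parameters for each FE model. The parameter names match the paper, but the bounds are placeholders since the actual ranges of Table 3 are not reproduced here.

```python
import numpy as np
from scipy.stats import qmc

# Placeholder bounds for [E_f2, nu_f12, nu_f23, G_f12, T_i, K_nn, K_ss]; Table 3 values not reproduced
lower = np.array([10.0, 0.20, 0.25, 10.0, 0.1, 1e4, 1e4])
upper = np.array([30.0, 0.35, 0.45, 30.0, 1.0, 1e6, 1e6])

n_models = 500                                   # one sample per FE model, as in Step 1
sampler = qmc.LatinHypercube(d=len(lower), seed=0)
unit_samples = sampler.random(n=n_models)        # values in [0, 1)
samples = qmc.scale(unit_samples, lower, upper)  # scaled to the (lower, upper) domain

# Each row feeds one RVE model; its homogenized outputs [E11, E22, nu12, nu23, G12]
# become the corresponding training targets for the ANN.
np.savetxt("ExperimentNAME.csv", samples, delimiter=",",
           header="E_f2,nu_f12,nu_f23,G_f12,T_i,K_nn,K_ss", comments="")
```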
The remaining fibre properties were obtained by applying the transversely isotropic material relationships of Eq. (12). E_f1 remained unchanged in all samples, and its value was 230 GPa. For all samples, the fibre volume fraction is 60% and the matrix properties are given in Table 1. By the end of this phase (Step 1), a dataset of 500 samples containing the inputs (x = [E_f2, ν_f12, ν_f23, G_f12, T_i, K_nn, K_ss]) and the targets (t = [E_11, E_22, ν_12, ν_23, G_12]) required to train the ANN is obtained (Fig. 7). In Step 2, the ANN was built and trained using the dataset created in the previous step. The training, testing and validation of the ANN model were conducted with MATLAB R2015a. By MATLAB default, 70%, 15% and 15% of the original dataset were used for training, validation and testing, respectively. The nonlinear tangent sigmoid and linear functions were employed as the activation functions in the hidden layers and the output layer, respectively, defined in Eq. (13) as f(x) = 2/(1 + e^(−2x)) − 1 (tansig, tangent sigmoid activation function) and f(x) = x (purelin, linear activation function). Selecting the best representative ANN structure plays an important role in output prediction. In this study, two hidden layers were used and the number of neurons in each hidden layer was changed until the best possible prediction was obtained. Initially, the number of neurons in the first hidden layer (nh_1) was set to 20 and then increased by one, whereas the total number of neurons in both hidden layers (nh) was kept at 100. Usually, the data are randomly divided into three subsets (training, validation and testing), and different initial weight and bias values are used each time the neural networks are trained. As a result, different neural networks trained for the same problem may give different outputs for the same inputs. In this study, therefore, 20 runs were performed on each ANN architecture to ensure the inclusion of different data in each subset. The MSE, which is the average squared difference between the output (y) vectors and the target (t) vectors, was used to compute the difference and was back-propagated through the networks to update the weights and biases. The Levenberg-Marquardt (LM) back-propagation algorithm was adopted in this study. This function uses a back-propagation scheme to update the weights and biases according to the Levenberg-Marquardt optimization algorithm, which can achieve accurate results with less data compared with its counterparts. To decide on the best ANN, taking into account that the input data are divided into three main subsets (training, validation and testing) during the training process, the sum of the correlation coefficients (R-values) between the target and the output of the entire dataset, evaluated for the regressions of E_11, E_22, ν_12, ν_23 and G_12, was used to attain the optimal ANN structure. The optimal ANN was then used to extend the dataset. Results and discussion An ANN that can predict the fibre and fibre/matrix interphase parameters from the micromechanical FE modelling dataset is designed. The total number of samples created by micromechanical FE modelling to train the ANN is 500; 350 of them are randomly assigned to the training set, 75 to the validation set, and the rest are used for testing. The ANN is built and validated as explained in Sect. 4.2.1. Table 4 presents some architecture samples used to verify the performance of the ANN in terms of the R-value over the 20 runs performed for each architecture. Figure 8 shows the regression graphs for the target and the output of the verification data set only.
This figure shows the closeness among the output data predicted by the ANN and the target data obtained from FEM. The dashed line in each subfigure represents the perfect results when the outputs equal targets. It can be seen that E 11 , E 22 , ν 12 and ν 23 are well predicted by ANN with an R-value between 0.91 and 0.99 and a regression slope (m) between 0.81 and 0.96. G 12 is slightly less well predicted comparing to other effective elastic properties with an R-value and a regression slope of 0.86 and 1.08, respectively. Hence, the selected ANN is capable of providing a good correlation between the target and the output. After training the ANN, the selected model with the highest R-value is used to generate N samples. It is found that 10,000 samples of the random input parameters generated by the LH sampling method are sufficient to produce a dense space. These new input data (E f2 , ν f12 , ν f23 , G f12 , T i , K nn and K ss ) are then processed by the ANN to obtain and the outputs (E 11 , E 22 , ν 12 , ν 23 , and G 12 ). The closest point of the new output to the experimental data is found through Eq. (16). The corresponding carbon fibre and interphase parameters of the closest point are given in Table 5. Finally, the identified fibre and interphase parameters (micro-parameters) obtained from the ANN are used as input for the FE model to predict the effective properties of the UD lamina, i.e., macroscale level properties. The effective elastic properties calculated by the FE model using the identified parameters are given in Table 6 (second column). The table also shows a comparison between these properties and those predicted by the ANN and the experimental data. It can be seen that the effective elastic properties agree well with the experimental data with a maximum error of about 6%. This error is mainly due to the using fixed value for E 11 in the training of ANN. Conclusions and future improvement An ABAQUS® plug-in (VDGen) has been developed for generating virtual data for identifying the uncertain materials' properties in unidirectional (UD) lamina. In combination with artificial neural networks (ANNs), the data generated from the plug-in enable the determination of the relationship between fibre mechanical properties and fibre/ matrix interphase parameters at microscale and the mechanical properties of a UD lamina at macroscale. Application of the plug-in to a T300/PR-319 UD lamina has shown very good agreement between the predictions and the experimental data when using the identified constituent properties. A few improvements of the plug-in should be considered in the future. The current plug-in is designed to support the square and hexagonal unit cell fibre arrays. Micromechanical FE modelling of randomly distributed fibres in the matrix is essential when studying the failure of the composite lamina. However, using random fibre distribution causes arbitrary meshing condition on opposite RVE edges [20,35]. Further development of the plug-in to support RVEs with randomly fibre distribution and capable of generating periodic mesh on the opposite edges is under investigation by the authors. At this stage, the plug-in is only designed to calculate the effective elastic properties of a lamina. We aim to develop it further, so that it will be capable of conducting failure analysis under uniaxial, biaxial, and multiaxial loading conditions. The effect of fibre shape has recently been subjected to intensive studies by means of computational micromechanics [11,22]. 
The current core Python scripts of the plug-in will be developed further, so that RVEs with different fibre shapes can be automatically generated.
Fig. 8 ANN predictions versus FEM results for (a) E_11, (b) E_22, (c) ν_12, (d) ν_23 and (e) G_12.
5,510.4
2021-10-31T00:00:00.000
[ "Computer Science", "Engineering", "Materials Science" ]
IL-10 Treatment Is Associated with Prohibitin Expression in the Crohn's Disease Intestinal Fibrosis Mouse Model Prohibitin, which can inhibit oxidative stress and mitochondrial dysfunction, has been shown to have significant anti-inflammatory activities. Here, we investigate the effects of altering prohibitin levels in affected tissues in the interleukin-10 knockout (IL-10KO) mouse model with intestinal fibrosis. The aim of this study is to investigate the effects of IL-10 on prohibitin and the role of prohibitin in intestinal fibrosis of murine colitis. After the mice were treated with IL-10, prohibitin expression and localization were evaluated in IL-10KO and wild-type (WT, 129/SvEv) mice. The colon tissue was then investigated and the potential pathogenic molecular mechanisms were further studied. Fluorescence-based quantitative polymerase chain reaction (FQ-PCR) and immunohistochemistry assays revealed a significant upregulation of prohibitin with IL-10 treatment. Furthermore, IL-10 decreases inflammatory cytokines and TGF-β1 in the IL-10KO model of Crohn's disease and demonstrates a promising trend in decreasing tissue fibrosis. In conclusion, we hypothesize that IL-10 treatment is associated with increased prohibitin and would decrease inflammation and fibrosis in an animal model of Crohn's disease. Interestingly, prohibitin may be a potential target for intestinal fibrosis associated with inflammatory bowel disease (IBD). Introduction Inflammatory bowel disease (IBD) is a chronic and multifactorial gastrointestinal inflammatory condition that is clinically categorized as ulcerative colitis (UC) or Crohn's disease (CD). IBD fibrosis can occur in both UC and CD, but it is much more prevalent in CD. The etiology and pathophysiology of IBD are still unknown and multifactorial. Typical CD presentations include discontinuous involvement of various portions of the gastrointestinal tract and the development of complications including strictures, abscesses, or fistulas [1,2]. Current anti-inflammatory therapies neither prevent fibrosis nor reverse established strictures, which may present years after remission of active inflammation. Prohibitin, which can inhibit oxidative stress and mitochondrial dysfunction, has been shown to have significant anti-inflammatory activities. Prohibitin is a ubiquitously expressed, multifunctional protein implicated in many cellular processes, including mitochondrial function and protein folding [3]. It has been implicated in the regulation of proliferation, cellular apoptosis, and gene transcription [4,5]. Moreover, prohibitin exhibits a remarkable degree of sequence conservation across species. The protein sequences of mouse and rat prohibitin are virtually identical, and these differ from the human protein sequence by a single amino acid [6]. A recent study showed that prohibitin levels are decreased in Crohn's disease colonic mucosal biopsies [7], and little is known about the regulation and role of prohibitin during intestinal inflammation. Prohibitin has been shown to exhibit an antifibrotic effect in animal models of cirrhosis [8] and renal fibrosis [9]. Current studies with prohibitin focus on the acute phase in inflammation rather than the expression and role in the fibrosis course of IBD. Of all the cytokines and growth factors involved, transforming growth factor (TGF)-1 is one of the most potent fibrogenic cytokines in not only CD [10] but also in several other fibrotic diseases such as systemic sclerosis and hepatic cirrhosis [11,12]. 
Therefore, regulating TGF-β1 expression has been investigated as a potential therapeutic approach for preventing and treating intestinal fibrosis. α-smooth muscle actin (α-SMA) is one of the six actin family members. In the adult, prominent α-SMA expression can be found in vascular smooth muscle cells and myoepithelial cells. Epithelial-mesenchymal transition (EMT), which contributes to tissue fibrosis, is also associated with cells that eventually express α-SMA as myofibroblasts [13]. Interleukin-10 (IL-10) is a pluripotent cytokine that plays a pivotal role in the regulation of immune and inflammatory responses. IL-10 has been shown to suppress the production of proinflammatory mediators and downregulate costimulatory molecules that are critical for the activation of T cells [14]. In this study, we used IL-10KO mice, which spontaneously develop symptoms similar to Crohn's disease, as an inflammatory bowel disease model to establish a murine colonic intestinal fibrosis model. The aim was to investigate whether the effect of IL-10 on prohibitin is antifibrotic and prevents the progression of intestinal fibrosis in the IL-10KO mouse model of CD, and to determine the role of prohibitin in intestinal fibrosis of murine colitis. Animals. Specific pathogen-free (SPF) female homozygous IL-10 knockout (IL-10KO) and wild-type (WT) 129/SvEv mice (Jackson Laboratory) of 5 weeks of age and weighing 19-23 g were used for this study. The animals were randomly allocated into three groups: group A (control group, 18 WT mice), group B (IL-10KO mice group, 18 IL-10KO mice), and group C (IL-10 treatment group, 12 IL-10KO mice). The animals were housed under SPF and temperature-controlled conditions, with light/dark cycles of 12/12 hours and free access to water and standard rodent chow, at the animal center of the Sixth People's Hospital affiliated to Shanghai Jiao Tong University. Ethics. This study was carried out in strict accordance with the recommendations in the guidelines for the care and use of laboratory animals. The protocol was approved by the Committee on the Ethics of Animal Experiments of the Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University (Permit Number: SYXK [Hu] 2011-0128). All surgery was performed under lidocaine anesthesia, the mice were then sacrificed by cervical dislocation, and all efforts were made to minimize suffering. IL-10KO Colitis Model. Using aseptic techniques, the mice of the treated group and the model group were given intraperitoneal injections of IL-10 (5 μg/kg body weight, three times per week) and 0.9% physiological saline, respectively, from week 12 onwards. The control group received no treatment. Mice of the control group and the IL-10KO group were sacrificed at weeks 12, 14, and 16, and the treated group was sacrificed at weeks 14 and 16. Mice were housed until the appropriate age and then sacrificed by cervical dislocation. In this model, IL-10KO mice developed spontaneous colitis between 6 and 8 weeks of age, and spontaneous chronic intestinal inflammation with prominent fibrosis had developed by week 12. Histopathology Staining. Entire colons were removed, fixed in 10% formaldehyde, and embedded in paraffin. Sections were stained with hematoxylin and eosin (H&E) reagent and Masson's trichrome stain. Collagen Assay. We harvested colonic tissues from IL-10KO and WT mice at 12, 14, and 16 weeks of age. From each tissue sample, one section was obtained using Masson's original trichrome stain.
This trichrome system stains collagen blue, nuclei purple-brown, and cytoplasm pink. Collagen area was defined as the distinct blue color region and was distinguished from muscle, blood, and inflammatory cells. The total length of each tissue section was measured. The collagen area was also measured by using a Sircol Collagen Assay kit according to the manufacturer's instructions. Assessment of the Histologic Severity. The histological severity of colitis was evaluated by H&E-stained and coded sections by modifying the validated scoring system described by Tamaki et al. [15]. All slides were scored by a gastrointestinal pathologist in a blinded manner for inflammation based on the scoring system for inflammation (macrophage, lymphocyte, and neutrophil infiltration in the lamina propria or submucosa) was scored for severity as follows: normal, 0; minimal, 1; mild, 2; moderate, 3; marked, 4; and severe, 5. In brief, the inflammation score for a given tissue section is the sum of the scores given to the four regions or features assessed for inflammation. Immunohistochemistry Staining. For immunohistochemical analysis of prohibitin and -SMA colon tissue fixed with 4% buffered paraformaldehyde was embedded in paraffin, and 4 m-thick sections were stained. After deparaffinization, antigen retrieval was performed by immersing the section in 10 mM citrate buffer (Ph 6.0) and heating twice in a microwave oven (95 ∘ C) for 5 min each time. Endogenous peroxidase activity was blocked by incubation with 1% hydrogen peroxide in distilled water for 10 min. All sections were then incubated with antiprohibitin and anti--SMA antibody. After incubation with second antibody immunoglobulin, the sections were stained with diaminobenzidine. The sections were counterstained with hematoxylin. Then, we used the image analysis software to measure the integral optical density (IOD) of prohibitin and -SMA. Statistical Analysis. Data analysis was performed using SPSS version 18.0 (SPSS, Chicago, IL, USA) statistical software. The results are shown as mean ± standard deviation. One-way analysis of variance (ANOVA) was used to analyze the differences between groups. Nonparametric data were analyzed by Kruskal-Wallis and Mann-Whitney tests and Pearson's correlation coefficient was used to determine the relationships between the indicators. < 0.05 was considered significant. Effects of IL-10 on Animal Weight. There were significant differences in the mean weight of the control group, IL-10KO mice group, and IL-10-treated group in all of the experiments. The IL-10KO group exhibited a marked decrease in their body weight as compared to the control group at 12, 14, and 16 weeks ( < 0.05). The mice were sacrificed on wk 12, 14, and 16 and various degrees of edema and adhesion were found over the distal colon in a length of 3-5 cm and 1-3 cm in the model group and IL-10KO treatment group, respectively. Severe strictures associated with the dilatation of the proximal segment were exhibited gradually over time. In contrast, mice in the control group had only minimal inflammatory change. Administration of IL-10 led to a significant increase in body weight at 14 and 16 weeks as compared to the IL-10KO mice group ( < 0.05). At the end of study, the surviving mice in the IL-10-treated group gradually regained their weight but still failed to reach the initial weight (Figure 1). Histopathological Abnormalities in the Colon. 
The H&E staining histologic findings revealed colonic epithelial hyperplasia, crypt abscess, glands arranged in disorder, and the forming of edema and ulceration of submucosa in the propria of the colonic tissue from IL-10KO mice at 12, 14, and 16 weeks of age (Figures 2(d)-2(f)). IL-10 treatment led to a significant amelioration of colonic epithelial hyperplasia and cellular infiltration with IL-10 treatment (Figures 2(b) and 2(c)). The histopathological colitis score in the IL-10KO mice group was significantly increased with age and was greater in IL-10KO mice than in the control group at each time point. In treatment group, the histopathological colitis score was decreased significantly than those in IL-10KO mice group at week 16 (Table 1, < 0.05). Masson's trichrome stain sections of the colon were analyzed for collagen content as described in Section 2 ( Figure 4). In WT mice, little collagen deposition was found in the mesenchymal layer. On the other hand, collagen deposition in IL-10KO mice was localized not only in the mesenchymal but also in the mucosa, submucosal, and muscularis propria areas. We then found that collagen deposition was increased with age, specifically at 12-, 14-, and 16-week time points (Figures 3(d)-3(f)). Furthermore, the amount of collagen deposited in the submucosal areas and muscularis propria was markedly reduced in the IL-10treated group compared to that in the IL-10KO mice (Figures 3(b) and 3(c), < 0.05). Prohibitin, Nrf2, Collagen I, -SMA, and TGF-1 Gene Expression in Colonic Tissues of Wild-Type and IL-10KO Mice. The gene expressions of TGF-1, collagen I, and -SMA in the IL-10KO mice group were upregulated in a time-dependent manner and were much more significant compared to those in the control group. In addition, gene prohibitin expression of Nrf2 in the colonic tissue of IL-10KO mice was down-regulated in a time-dependent manner. Moreover, prohibitin and Nrf2 mRNA expression were significantly up-regulated and TGF-1, -SMA mRNA expression was markedly reduced in the IL-10-treated group when compared with the IL-10KO mice group at 14, 16 wk (Table 2). Effects of IL-10 on Protein Expression of Prohibitin and -SMA. We further detected protein expression levels of prohibitin and -SMA in colonic tissues by measuring the integral optical density. The protein levels of prohibitin were lower in the IL-10KO mice group than in the control group ( Figures 5(d)-5(f)) and that of -SMA in the IL-10KO mice group was much more significant compared with that in the control group. Moreover, there was a significant difference in prohibitin and -SMA in the treatment group and the IL-10KO mice group. Prohibitin protein expression levels were significantly up-regulated (Figures 5(b) and 5(c)) and -SMA was markedly reduced in the IL-10-treated group when compared with the control group at 14 and 16 weeks (Table 3). Protein expression of prohibitin was negatively correlated with the histologic colitis score ( = −0.859, < 0.01) and -SMA protein expression ( = −0.798, < 0.05). Discussion In IBD, especially in the CD, long-term recurrent intestinal chronic inflammation and excessive damage repair could cause intestinal tract fibrosis and intestinal stricture. Research shows that about a third of the CD patients developed stenosis of bowel and needed surgery. However, after the surgery, about 70% of the postoperative patients developed stenosis of the bowel [16]. Immunomodulators and biologic therapies are the therapeutic mainstay for CD. 
These therapies are highly effective in treating inflammation in most patients. However, their effects have not specifically shown to decrease fibrosis. The findings in the present study demonstrated that prohibitin is critically involved in the development of organ fibrosis and indicated that regulation of prohibitin might prevent and alleviate intestinal fibrosis associated with human IBD. The Masson collagen trichrome staining in IL-10KO mice showed massive amounts of collagen deposition, mainly in the lamina propria, submucosal areas, and muscularis propria in the colonic tissues of IL-10KO mice with colonic inflammation, and the amounts increased with age. In addition, we found that the gene expressions of TGF-1, -SMA, and collagen I in IL-10KO mice were markedly increased when compared to those in WT mice, especially at 16 wk. However, the expression of prohibitin and Nrf2 in IL-10KO mice was markedly reduced over that of WT mice. Protein expression of prohibitin negatively correlated with the histologic colitis score and -SMA protein expression. Decreased prohibitin levels was associated with increased ECM (extracellular matrix) levels ( -SMA and collagen I). Theiss et al. [17] found that the therapeutic delivery of prohibitin to the colon reduces the severity of DSS-induced colitis in mice. These findings suggested that chronic intestinal inflammation of IL-10KO mice reduced prohibitin, resulting in intestinal fibrosis. In this investigation, we found that IL-10 treatment in mice was associated with increased expression of prohibitin and Nrf2, attenuation of the lesion in the intestinal fibrosis, and significant reduction in the expressions of TGF-l, -SMA, and collagen. We also found that in IL-10-treated mice at 16 wk in histological and immunological findings were significantly improved, and colonic mucosa edema was lowered and the degree of disorder in the intestinal mucosa gland arrangement was reduced. These results show that IL-10 can ameliorate intestinal fibrosis; this effect is probably related to the regulation of prohibitin by IL-10. The human prohibitin gene was first found by Sato et al. [18]. Prohibitin is a highly conserved, ubiquitously expressed, multifunctional protein whose expression is decreased during IBD [19]. Additionally, prohibitin is regarded as an apoptosis-regulating protein [20]. The best characterized function of the prohibitin is as a chaperone involved in the stabilization of mitochondrial proteins. It is also thought to play a role in maintaining normal mitochondrial function and morphology. Berger and Yaffe [21] found that loss of function of prohibitin leads to altered mitochondrial morphology, loss of normal reticular morphology, and disorganized mitochondrial distribution. Theiss et al. [17] reported that the elevation of prohibitin in the surface epithelial cells of the colon could reduce the severity of colitis in mice, suggesting that prohibitin may be a novel therapeutic target for inflammatory bowel disease. The results from the abovementioned studies suggest that prohibitin is associated with cell/tissue injury. Similarly, these data support our results. Correlation analysis showed that prohibitin protein expression was negatively correlated with the histologic colitis score and protein expression of -SMA. Therefore, prohibitin regulation may prove to be a major therapeutic target for treating intestinal fibrosis in human IBD. 
Of all the cytokines and growth factors involved, TGF-β1 is known as one of the most potent fibrogenic cytokines in several fibrotic diseases [11-13,22]. TGF-β1 plays a pivotal role in the processes of intestinal fibrosis. Therefore, we examined TGF-β1 expression in the colonic tissue of IL-10KO mice. As expected, TGF-β1 expression was upregulated in a time-dependent manner in the colonic mucosa. Previous work has shown the importance of regulating the local production of TGF-β1 as a therapeutic strategy for preventing and suppressing intestinal fibrosis [23]. However, TGF-β1 also has diverse important biologic functions, such as immunosuppression, enhancement of tissue regeneration, and wound healing [24,25]. In the progression of the fibrotic process, it can not only increase the secretion of extracellular matrix but can also prevent fiber activator generation. Therefore, TGF-β1 may not be an ideal target for preventing fibrosis in patients with inflammatory bowel disease. We therefore investigated another transcription factor in chronic intestinal inflammation, Nrf2, a transcriptional regulator of antioxidant responses. Nrf2 also plays a pivotal role in the endogenous defense against oxidative stress [26]. Nrf2 is very sensitive to the cellular redox status; therefore, any change in cellular redox can alter the transcriptional regulatory action of Nrf2 [27]. Lastres-Becker et al. [28,29] found that the transcriptional activity of Nrf2 increases with low or moderate doses of TNF-α and decreases with high doses of TNF-α. Another study [17] showed that prohibitin is a regulator of Nrf2 and can sustain Nrf2 activation, which can decrease oxidative stress and colitis. In this investigation, we found similar results, namely that prohibitin acts as a regulator of the antioxidant response and that regulation of prohibitin can sustain Nrf2 activation. Therefore, these findings indicate that regulation of prohibitin might guide the development of therapeutic agents for intestinal fibrosis in human IBD. IL-10 is a cytokine with anti-inflammatory and antifibrotic characteristics. García-Prieto et al. [30] found that inhibiting matrix metalloproteinase-8 (MMP-8) can enhance IL-10 expression and attenuate bleomycin-induced pulmonary fibrosis in animal models. In this experiment, we observed the effect of IL-10 on intestinal fibrosis in IL-10-treated mice. Theiss et al. [17] reported that increasing prohibitin expression in intestinal epithelial cells can effectively relieve the degree of colonic inflammatory reaction in mice. In this study, our results were similar to those of previous studies showing that IL-10 exhibits an inhibitory effect on the intestinal fibrosis process. Moreover, there is a strong correlation between IL-10 administration and prohibitin expression, which ultimately slows fibrosis formation. Our data revealed that administration of IL-10 remarkably decreased collagen deposition in the colonic tissues of IL-10KO mice. Therefore, we demonstrated that regulation of prohibitin plays an important role in preventing and ameliorating intestinal fibrosis related to intestinal inflammation. In conclusion, lowered expression of prohibitin is associated with intestinal fibrosis progression, and IL-10 treatment is associated with increased prohibitin in IL-10KO mice.
Based on these findings, regulation of prohibitin may be a promising option for the treatment of intestinal fibrosis related to IBD in the future. However, cell culture studies and further investigations should be conducted to elucidate the detailed mechanism.
4,377.8
2013-04-14T00:00:00.000
[ "Biology", "Environmental Science", "Medicine" ]
A two-step approach for damage detection in beam based on influence line and bird mating optimizer This paper presents a two-step approach for structural damage identification in beam structure using the influence line and bird mating optimizer (BMO). Local damage is simulated as the reduction of the elemental Young’s modulus and mass of beam element. The technique for damage localization based on influence line and its derivatives before and after damage for beam structure was outlined. An objective function comprised of dynamic acceleration is utilized for BMO algorithm. The dynamic response data under external force is calculated by Newmark integration method. Numerical examples of a simply supported beam was investigated. Effect of measurement noise is studied. Studies in the paper indicate that the proposed method is efficient and robust for identifying damages in beam structures. Introduction It is well-known that structures are suffering from all kinds of damage problems, thus a reliable structural damage identification method is needed. Vibration based damage identification basically comprised of methods using data from frequency domain and time domain.In the last few decades, many damage identification methods have been developed using the vibration data in frequency domain [1][2][3][4].Moreover, with the advantage of massive vibration data, methods in time domain [5][6][7][8] emerged serve as a more reliable but complicated aspect in damage identification. In mathematical aspect, damage identification problem can be regarded as an optimization problem that aims to find the optimal solution of an objective.And with the fast development of computation, more attention was paid in algorithm these years.Neural network was among the first to be adopted to identified damage [9,10], and therefore others heuristic algorithm such as genetic algorithm (GA) [11], particle swarm optimization (PSO) [12] and artificial bee colony (ABC) [13] are proved to the available in damage identification. A new heuristic algorithm named Bird Mating Optimizer (BMO) was proposed by Askarzadeh [14,15] recently, which imitates the mating behavior of birds.The algorithm was simulated and compared with GA, PSO and GSO [14] and had better accuracy. In this paper, a two-step method for damage identification is proposed, damage is quantified as both stiffness and mass reduction in different levels.The disadvantage of heuristic algorithm is the probability of local optimum, especially with large amount of identified parameters existed.Here the displacement influence line will be utilized to determine the damage location so as to reduce the number of parameters.Then, the damage is quantified in the second step using bird mating optimizer.An objective function is established by minimizing the discrepancies between the simulated 'measured' dynamic acceleration responses and calculated ones.A simulation of simply supported beam is studied to show the promising results of the method. 
Damage model construction The finite element method is utilized to calculate the responses of the damaged beams. Without loss of generality, we quantify the local damage as the reduction of both the stiffness and mass parameters in the damaged element(s). For a structure with n elements, we preset two n-dimensional vectors α and β, whose elements α_i and β_i (i = 1, 2, …, n) are between 0 and 1 and stand for the damage parameters of the stiffness and mass of the i-th element, respectively, as defined in Eq. (1), where K_i and M_i represent the elemental stiffness and mass matrices for element i, and K and M are the global stiffness and mass matrices. The dynamic responses of the damaged structure can be obtained from the equation of motion, Eq. (2), M ü(t) + C u̇(t) + K u(t) = F(t), using a direct integration method, where C is the damping matrix of the system; Rayleigh damping is adopted, C = a M + b K, with a and b two constants determined by two given damping ratios corresponding to the first two modal frequencies of the structure. Localization of the damage by influence line residue The localization of damage is based entirely on the static displacement of the beam when an external force is exerted on it. In a beam structure with length L loaded with a force P, the static deflection w(x_c) of point x_c can be obtained from the unit-load integral of Eq. (3), w(x_c) = ∫_0^L M(s) M̄(x_c, s)/(EI) ds, where M(s) is the bending moment of the beam when only the external force is exerted on the beam, M̄(x_c, s) is the bending moment when a unit force is exerted at the measuring point x_c, and w is the deflection function of the beam. If we assume that damage exists between x_a and x_b with stiffness damage index α, then the residue between the damaged and intact situations follows as Eq. (4). If we set the external force as a single moving load whose location is x_F, then the displacement influence line residue ILR(x_c, x_F) for point x_c is formed as Eq. (5). The value of ILR(x_c, x_F) within [x_a, x_b] will change as x_F moves from x_a to x_b. From a mechanics point of view, the change is produced directly by the change of the shear force. Taking the partial derivative of ILR(x_c, x_F) with respect to x_F in Eq. (5), we obtain Eq. (6), where Q(x_c, x_F) represents the shear force of the beam caused by the moving load. A sudden change of ΔILR(x_c, x_F) will occur within [x_a, x_b], which can be explained as follows: when a moving force passes through a damaged area, the shear force within the area changes from positive to negative, as illustrated in Fig. 1. For a structure with multiple damages, the values of ΔILR(x_c, x_F) will be the linear sum of the derivatives of the influence line residue (DILR) of every individual damage. For a discretized model with n elements and elemental length L/n, the discrete influence line residue is given by Eq. (7), where m is the number of damages. One can further sharpen the change of the DILR by taking its derivative with respect to the load position again (DDILR). In the discretized model, the second-order central difference quotient is used to calculate the DDILR, as shown in Eq. (8), Δ_i = (DILR_{i+1} − 2 DILR_i + DILR_{i−1})/(L/n)^2, and the damaged parts appear as non-zero DDILR values. Note that in the numerical study a damage is assumed to be an elemental damage associated with two nodes, whereas Δ_i is a nodal value, so a single damage will be presented as two consecutive changes of the corresponding nodes on the curve.
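As a sketch of this localization step, the snippet below forms the discrete influence line residue from a damaged and an intact deflection influence line and takes its second-order central difference (DDILR, Eq. (8) style). The influence-line arrays are placeholders standing in for measured or simulated deflections.

```python
import numpy as np

def ddilr(il_damaged, il_intact, dx):
    """Second-order central difference of the influence line residue.

    il_damaged, il_intact: deflection influence lines sampled at the nodes
    dx: elemental length (spacing of the moving-load positions)
    """
    residue = np.asarray(il_damaged) - np.asarray(il_intact)      # DILR per node
    second_diff = np.zeros_like(residue)
    second_diff[1:-1] = (residue[2:] - 2.0 * residue[1:-1] + residue[:-2]) / dx**2
    return second_diff

# Placeholder influence lines for a 12-element beam (13 nodes); assumed values
x = np.linspace(0.0, 10.0, 13)
il_intact = np.sin(np.pi * x / 10.0)
il_damaged = il_intact + 0.01 * np.maximum(0.0, 1.0 - np.abs(x - 2.5))  # extra flexibility near nodes 3-4

print(np.round(ddilr(il_damaged, il_intact, dx=10.0 / 12.0), 4))
# Non-zero entries cluster around the damaged region, which is how suspect elements are picked.
```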
Objective function for identification assessment To obtain results of higher precision, especially for structures with many elements, the acceleration responses are used to make up the objective function. For node j, the acceleration response vector with N_t sampled time points can be expressed as a_j = [a_j(t_1), a_j(t_2), …, a_j(t_{N_t})]. The deviation between the actual and calculated acceleration responses is measured in the form of the Modal Assurance Criterion (MAC), accMAC_j = |(a_j^c)^T a_j^m|^2 / [((a_j^c)^T a_j^c)((a_j^m)^T a_j^m)], where the superscripts c and m denote that the responses originate from calculation and measurement, respectively. When the calculation is in accordance with the measured data, the accMAC reaches 1, so the objective function can be written as the weighted sum of (1 − accMAC_j) over the measuring points, where w_j is the weighting factor of the acceleration data and N_p is the number of measuring points. Numerical simulation The identification follows the two-step procedure outlined above. A simply supported beam made of aluminum is the numerical example of the study, with length L = 10 m, cross-sectional width b = 0.6 m and height h = 0.4 m; the beam is discretized into 12 Euler-Bernoulli elements, the Young's modulus is E = 6.9 GPa and the mass density is ρ = 2700 kg/m³. There are three designed damages: elements 3, 8 and 9, with Young's modulus reductions of 10 %, 20 % and 15 % and mass reductions of 5 %, 10 % and 5 %, respectively. The damping model is Rayleigh damping and the two Rayleigh coefficients are both assumed to be 0.01. To maintain practicality, 3 % random noise, added as in Eq. (15), is applied to the displacement values during localization, where the noise level controls the amplitude of the perturbation; two sensors are set at nodes 4 and 8, and the localization is processed three times to reduce the effect of noise. The DDILR values are shown in Fig. 2: nodes 3, 4, 8, 9 and 10 show stable changes in the chart of the sensor at node 4, and nodes 8, 9 and 10 show a consistent trend in the chart of the other sensor at node 8. Thus, the corresponding elements 3, 8 and 9 are selected as suspected elements. For the acceleration responses, a Gaussian-distributed random noise with zero mean and unit standard deviation is added as in Eq. (16), where std(·) stands for the standard deviation of the acceleration response time history. The weighting factors of the accMAC are all set to one. The acceleration responses of the structure under a sinusoidal force acting at node 7 in the global direction, F(t) = 10 sin(10t), are calculated by the Newmark-beta method; the time increment is 0.01 s and the duration of the response calculation is 6.0 s, i.e., 600 time steps in total. Acceleration measurements are taken at two nodes, nodes 5 and 8. The identified results for noise levels of 5 % and 10 % are shown in Fig. 3, in comparison with the noise-free result. In the noise-free condition the identification yielded good results, with a maximum relative error of 0.9667 % in the elemental stiffness of the 3rd element; at the low noise level the result is still satisfactory, with a maximum deviation of less than 5 %; and at the high noise level the maximum relative error is 5.2526 %, which is not much worse than the 5 % noise result. In the chart of best logarithmic fitness values shown in Fig. 4, it can be observed that with a higher noise level the searching process reaches convergence faster, but on the other hand the precision drops.
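A minimal sketch of the acceleration-based objective function is given below; it assumes the MAC-style correlation form described above and treats the measured and calculated acceleration histories as plain NumPy arrays.

```python
import numpy as np

def acc_mac(a_calc, a_meas):
    """MAC-style correlation between calculated and measured acceleration histories (1 = perfect match)."""
    num = np.dot(a_calc, a_meas) ** 2
    den = np.dot(a_calc, a_calc) * np.dot(a_meas, a_meas)
    return num / den

def objective(acc_calc, acc_meas, weights=None):
    """Sum over measuring points of weighted (1 - accMAC); minimised by the BMO search."""
    n_points = len(acc_meas)
    if weights is None:
        weights = np.ones(n_points)          # the paper sets all weighting factors to one
    return sum(w * (1.0 - acc_mac(c, m))
               for w, c, m in zip(weights, acc_calc, acc_meas))

# Toy usage with two measuring points and 600 time steps of synthetic data
t = np.arange(600) * 0.01
meas = [np.sin(10 * t), np.cos(10 * t)]
calc = [np.sin(10 * t) + 0.02 * np.random.randn(600), np.cos(10 * t)]
print(objective(calc, meas))                 # close to zero when the responses agree
```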
Conclusions By making use of the influence line residue and acceleration data, a two-step damage detection method for beam structures is proposed in this study. A damage model combining stiffness and mass reduction is applied. The influence line data of displacement are used to reduce the dimension of the identified parameters, an objective function based on acceleration in the form of the MAC (Modal Assurance Criterion) is established, and then the BMO is used to minimize the discrepancies between the simulated measured data and the data from the damaged structure. A numerical simulation of a simply supported beam reflects the effectiveness of the proposed method, and the results under three different noise levels demonstrate that the proposed method is insensitive to measurement noise, which also manifests the robustness of the method.
2,308.6
2017-10-21T00:00:00.000
[ "Engineering", "Physics" ]
Deep Learning Modeling of Cardiac Arrhythmia Classification on Information Feature Fusion Image with Attention Mechanism The electrocardiogram (ECG) is a crucial tool for assessing cardiac health in humans. Aiming to enhance the accuracy of ECG signal classification, a novel approach is proposed based on relative position matrix and deep learning network information features for the classification task in this paper. The approach improves the feature extraction capability and classification accuracy via techniques of image conversion and attention mechanism. In terms of the recognition strategy, this paper presents an image conversion using relative position matrix information. This information is utilized to describe the relative spatial relationships between different waveforms, and the image identification is successfully applied to the Gam-Resnet18 deep learning network model with a transfer learning concept for classification. Ultimately, this model achieved a total accuracy of 99.30%, an average positive prediction rate of 98.76%, a sensitivity of 98.90%, and a specificity of 99.84% with the relative position matrix approach. To evaluate the effectiveness of the proposed method, different image conversion techniques are compared on the test set. The experimental results demonstrate that the relative position matrix information can better reflect the differences between various types of arrhythmias, thereby improving the accuracy and stability of classification. Introduction Cardiovascular disease poses a significant threat to human health.As electrocardiography is a key method for representing pathological information, it has been an essential tool for the identification and diagnosis of cardiovascular diseases.For arrhythmia, which is one of the manifestations of cardiovascular disease, various technical difficulties still exist in achieving an accurate diagnosis, and the assistance of modeling electrocardiography is required.The traditional diagnostic technique relies on the clinical experience and feature extraction skills of physicians.However, there are technical challenges in manual diagnosis due to substantial differences in electrocardiograms.In recent years, with the application of deep learning in the medical field, automatic extraction of essential features and the automatic diagnosis of cardiovascular diseases have become research topics. The extraction of features from ECG signals is a crucial step in the intelligent recognition of cardiovascular diseases.The effectiveness of diagnosis is mainly dependent on the quality of feature extraction.The primary features found in ECG signals commonly include morphological properties, time-frequency features, and statistical information.Qin [1] utilized a discrete wavelet transform to extract morphological features combined with principal component analysis (PCA).An optimized support vector machine (SVM) algorithm was employed to perform a six-classification recognition on the MIT-BIH arrhythmia database.A waveform coding rule was devised to extract multiple morphological features of signals [2], and the combination of a convolutional neural network (CNN) and long short term memory (LSTM) network was used for the multi-classification of ECG signals. 
Zhu [3] used PCA and dynamic time warping (DTW) to extract multiple features with an employment of improved SVM.Elhaj [4] obtained non-linear properties by utilizing the high-order statistical cumulant of signals.The SVM and neural network were used for the five-class classification of arrhythmias with ten-fold cross-validation.Based on the statistical feature extracted with wavelet packet decomposition, a backpropagation neural network was employed for classification with a genetic algorithm [5].Zarei [6] extracted non-linear features of signals by utilizing the wavelet transform coefficients with an entropy analysis on the fuzzy entropy, approximate entropy, and conditional entropy. However, due to the one-dimensional nature of ECG signals, the hidden features are not readily revealed.Thus, traditional deep neural networks have difficulty with automatically extracting effective features, leading to underperformance in classification.In contrast, deep learning techniques have demonstrated excellent performance in image segmentation and classification.As a result, researchers have sought to enhance neural networks by increasing the dimensionality of ECG signals.Consequently, two-dimensional imaging has emerged as a viable direction in automatic arrhythmia classification. Lu [7] proposed an advanced bidirectional recursive neural network based on residual structure, which was effectively classified for signals using a two-dimensional grayscale spectral image.A discrete cosine residual network algorithm was introduced for recognizing myocardial infarctions, optimizing time-frequency characteristics through a discrete cosine transformation method [8].A variable scale fusion network model was employed with residual blocks and an attention mechanism to convert signals into a spectrogram [9].Zhai [10] converted the signals into double-coupled matrices as two-dimensional feature inputs for a CNN model.However, in view of the time-frequency spectrogram, these methods may have difficulty in accurate modeling with information extraction from ECG signals with a low signal-to-noise ratio. In order to address this issue, the thought of transfer learning was utilized to develop a deep CNN model, which converted signals into a two-dimensional recurrence plot [11].This improves the ability of the recurrence plot to represent both temporal and spatial features of signals, improving the classification accuracy with an expression of multidimensional information.However, the dimensionality reduction parameter is a key determination for the recurrence plot.Thus, empirical judgment or multiple parameter attempts may be necessary to achieve relatively accurate results. 
An improved ResGC-Net network for automatic arrhythmia recognition was employed by converting signals into a two-dimensional Gramian Angular Field (GAF) for identification and classification [12].This method can better represent the interactions between different parts of signals, while possessing a superior stability and robustness.A semi-supervised CNN was proposed [13] for arrhythmia classification by using Markov Transition Field (MTF) to analyze signals at different time and frequency scales.The MTF exhibited a robustness with a reduction in the influence of noise through adjustment of the regularization parameter of the state transition matrix.However, calculations of GAF and MTF involve a high computational complexity.Therefore, it may result in an extended calculation cost or inaccurate results when dealing with large-scale time series data. In automatic arrhythmia classification, a primary challenge is to exploit a simple and efficient rise-dimensional algorithm for the image format to support subsequent automatic classification tasks.To address this need, a novel technology is proposed that utilizes relative position matrices for feature extraction in processing signals.This approach enables the swift conversion of a two-dimensional image while retaining sufficient information to facilitate subsequent automatic classification tasks.The proposed method is incorporating a deep learning algorithm via the Gam-Resnet18 network, which is built upon ResNet, by introducing the GAM global attention mechanism to enhance the capability of feature selection.The target of the proposed technology is to achieve rapid and effective detection and diagnosis of arrhythmias. The organization of this paper can be described as follows.The ECG signals for the abnormal heart rate types underwent segmentation firstly.Wavelet transform with a db6 wavelet and five-scale decomposition are applied to the segmented signals for noise reduction.In order to accurately represent the features of signals, the conversion of time-domain signals is conducted into relative position matrices in Section 3.Then, an in-depth explanation of the proposed network model "Gam-Resnet18" is provided.The design strategy using transfer learning technique is discussed, which is applied to facilitate automated classification.Section 5 concentrates on constructing a transfer convolutional neural network accounting for modeling of the features represented by two-dimensional images.It is demonstrated that this model achieves an overall accuracy of 99.30% in the context of relative position matrix-based classification.To validate the effectiveness of the proposed method, comparisons are also made with other algorithms using GAF, RP, and MTF. 
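The network itself is detailed later in the paper; purely as an illustration of the "ResNet-18 backbone + GAM attention + transfer learning" idea named above, a PyTorch sketch could look like the following. The GAM block here is a simplified stand-in, and its placement after the last residual stage, the use of ImageNet-pretrained weights, and the five-class head are assumptions for the sketch rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn
from torchvision import models

class GAM(nn.Module):
    """Simplified GAM-style block: channel attention (MLP) followed by spatial attention (convs)."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        hidden = channels // reduction
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, hidden), nn.ReLU(inplace=True), nn.Linear(hidden, channels))
        self.spatial = nn.Sequential(
            nn.Conv2d(channels, hidden, kernel_size=7, padding=3), nn.BatchNorm2d(hidden),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, channels, kernel_size=7, padding=3), nn.BatchNorm2d(channels))

    def forward(self, x):
        b, c, h, w = x.shape
        ca = self.channel_mlp(x.permute(0, 2, 3, 1).reshape(b, -1, c))      # per-position channel weights
        ca = torch.sigmoid(ca.reshape(b, h, w, c).permute(0, 3, 1, 2))
        x = x * ca
        return x * torch.sigmoid(self.spatial(x))                           # spatial gating

def gam_resnet18(num_classes=5):
    """ResNet-18 backbone with ImageNet weights (transfer learning) plus a GAM block before pooling."""
    backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    backbone.layer4.add_module("gam", GAM(512))       # attach attention after the last stage (assumed placement)
    backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)
    return backbone

model = gam_resnet18()
print(model(torch.randn(2, 3, 224, 224)).shape)       # torch.Size([2, 5])
```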
Introduction of the Dataset The present study uses ECG signal recordings from the MIT-BIH Arrhythmia Database [14]. The data are acquired from a total of 47 participants. These samples include 25 male individuals aged 32 to 89 years, along with 22 female individuals aged 23 to 89 years. In order to facilitate the analysis, we opted to select all 38 records that feature the MLII lead configuration. The selected heart rhythms from the samples include normal electrocardiograms (Normal) and four prevalent arrhythmia types: left bundle branch block (LBBB), right bundle branch block (RBBB), atrial premature contractions (APC), and premature ventricular contractions (PVC). In the subsequent discussion, the designations N, L, R, A, and V are chosen to represent these five electrocardiogram classes. ECG Signal Segmentation In the procedure of diagnosing signals, it is necessary to perform heartbeat segmentation for the extraction of essential features. Due to the atypical waveform of arrhythmic signals, localization of the ECG signals is required, with precise identification of the QRS peak positions. The MIT-BIH dataset provides manually labeled R-peak positions, thereby facilitating the subsequent process of heartbeat segmentation. Heartbeat segmentation is a key step in ECG signal processing. Commonly, heartbeat segmentation relies on fixed sampling points or time windows around the QRS peak positions (Figure 1). Specifically, each QRS peak location serves as the reference point, taking 99 signal points forwards and 200 signal points backwards; these points are then compiled into a complete heartbeat to guarantee consistency and precision. After processing and filtering of the original ECG signals from MIT-BIH, all patients with the MLII lead are selected for research. Table 1 displays the number of samples for each type of ECG obtained by applying the individual heartbeat segmentation; it is noted that these are the actual samples taken from the original database. To address the issue of data imbalance in the ECG signals, an approach is employed that converts the normal heartbeats into 5 s data segments instead of individual beats. By converting the normal heartbeats into segments of 5 s, the challenge of imbalanced data can be effectively tackled. Additionally, the exclusion of 6 or 7 s time intervals is taken into account to mitigate the impact of noise and interfering signals, thereby enhancing the accuracy and reliability of detecting specific cardiac events. To perform the 5 s segmentation, the signal is divided, starting from the first sampling point, into consecutive intervals of 5 s (1800 data points). In accordance with the established criteria, a segment is labeled N only if all the heartbeats within it correspond to the normal category. This approach is adopted to address the data imbalance caused by solely segmenting individual heartbeats, thereby enhancing the integrity and reliability of the dataset. The resulting outcome of the mixed heartbeat samples is given in Table 2.
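The beat- and segment-level slicing described above can be sketched as follows. The record array and annotation lists are placeholders, the 1800-sample segment length corresponds to 5 s at 360 Hz, and the 99/200-sample window is taken here as 99 samples before and 200 after each R peak, which is one reading of the description in the text.

```python
import numpy as np

FS = 360                      # MIT-BIH sampling rate (Hz)
BEFORE, AFTER = 99, 200       # samples kept around each labeled R peak (assumed orientation)
SEG_LEN = 5 * FS              # 1800 samples = 5 s segments for the normal class

def slice_beats(signal, r_peaks):
    """Return fixed-length heartbeats (99 + 1 + 200 = 300 samples) centred on each R peak."""
    beats = []
    for r in r_peaks:
        if r - BEFORE >= 0 and r + AFTER < len(signal):
            beats.append(signal[r - BEFORE : r + AFTER + 1])
    return np.array(beats)

def slice_normal_segments(signal, r_peaks, labels):
    """Return 5 s segments labeled N only when every beat inside them is normal."""
    segments = []
    for start in range(0, len(signal) - SEG_LEN + 1, SEG_LEN):
        in_seg = [lab for r, lab in zip(r_peaks, labels) if start <= r < start + SEG_LEN]
        if in_seg and all(lab == "N" for lab in in_seg):
            segments.append(signal[start : start + SEG_LEN])
    return np.array(segments)

# Toy usage with a synthetic record (placeholders for a real MLII trace and its annotations)
sig = np.random.randn(10 * FS)
peaks = list(range(180, len(sig), 300))
labs = ["N"] * len(peaks)
print(slice_beats(sig, peaks).shape, slice_normal_segments(sig, peaks, labs).shape)
```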
Denoising of Electrocardiogram Signals The ECG is a weak biological electrical signal that is subject to various sources of interference, so it is imperative to perform noise reduction in a targeted manner. In this paper, the db6 wavelet function is selected from the Daubechies wavelet family for suppressing noise. Within the support interval (−2, 2), it can be represented as a symmetric function centered at 0. The mathematical expression is defined through the two-scale relations between the wavelet and scaling functions, where ψ(t) represents the db6 wavelet function, ϕ(t) denotes the scaling function, and h_0, h_1, h_2, h_3, h_4, and h_5 are the coefficients of the db6 wavelet. The specific coefficients for the db6 wavelet are h_0 = 0.332671999, h_1 = 0.806891509, h_2 = 0.459877502, h_3 = −0.13501102, h_4 = −0.085441273, and h_5 = 0.035226293. These coefficients are derived to ensure that the db6 wavelet satisfies the conditions of compact support and orthogonality. The db6 function is chosen here due to its similarity to the QRS waveform found in ECG signals. Through the wavelet transform, signals can be decomposed into wavelet coefficients of different scales and frequencies, providing a more precise description of their characteristics. For ECG signals, the energy primarily converges within 4 or 5 scales. Accordingly, a decomposition level of 5 is chosen for the multiscale analysis in this study. A decomposition level of 5 achieves a balance between capturing high-frequency characteristics and suppressing noise, while fully retaining the low-frequency information of the signal [15]. In wavelet denoising, hard and soft thresholding methods are typically employed. The compromise of soft-hard thresholding [16] aims to balance the advantages and limitations of both methods. Usually, soft thresholding is applied initially to eliminate noise. Subsequently, hard thresholding is used to remove the remaining noise while maximizing the retention of signal details. The threshold function of the approach shrinks a wavelet coefficient w to sign(w)(|w| − αλ) when |w| ≥ λ and sets it to zero otherwise, where λ is the threshold and α is the compromise weight. When α is assigned a value of 0 or 1, the threshold function manifests as hard or soft thresholding, respectively; however, when 0 < α < 1, both soft and hard thresholding behaviours are combined in processing the ECG signal. In this study, an intermediate weight of α = 0.5 is chosen as a compromise. The signal denoising result is shown in Figure 2; a good processing effect is obtained in practice.
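A sketch of this denoising stage with PyWavelets is given below; the db6 wavelet, the five-level decomposition, and the compromise weight α = 0.5 follow the text, while the universal threshold estimate is an assumption since the paper does not state how λ is chosen.

```python
import numpy as np
import pywt

def compromise_threshold(w, lam, alpha=0.5):
    """Soft-hard compromise: alpha=0 -> hard, alpha=1 -> soft, 0 < alpha < 1 in between."""
    w = np.asarray(w)
    shrunk = np.sign(w) * (np.abs(w) - alpha * lam)
    return np.where(np.abs(w) >= lam, shrunk, 0.0)

def denoise_ecg(signal, wavelet="db6", level=5, alpha=0.5):
    coeffs = pywt.wavedec(signal, wavelet, level=level)        # 5-scale decomposition
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745             # noise estimate (assumed choice)
    lam = sigma * np.sqrt(2.0 * np.log(len(signal)))           # universal threshold (assumed choice)
    coeffs = [coeffs[0]] + [compromise_threshold(c, lam, alpha) for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

# Toy usage on a noisy synthetic trace
t = np.linspace(0.0, 5.0, 1800)
noisy = np.sin(2 * np.pi * 1.2 * t) + 0.2 * np.random.randn(t.size)
clean = denoise_ecg(noisy)
print(clean.shape)
```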
Information Feature Extraction Strategy Based on Relative Position Matrix The manual extraction of features from ECG signals is a complex and time-consuming task because of the large volume of data. Furthermore, ECG recordings of different lengths pose a challenge for a CNN, which expects inputs of fixed size. Converting time-domain ECG data into two-dimensional images offers several advantages: it facilitates feature extraction, leading to improved performance, and neural networks are well suited to processing two-dimensional, matrix-format data. The problem of varying lengths can be overcome by normalizing the data through piecewise aggregate approximation, enabling uniform conversion to images of consistent size. The Relative Position Matrix (RPM) [17] is a visualization method that captures the relative positions between different moments in a time series. It enhances data interpretability by reflecting the relative position of each moment within the entire series, provides a more comprehensive characterization of the correlations and trends among data points, and reflects the dependency relationships among the temporal moments of a series.

Therefore, RPM is proposed here to convert ECG signals into two-dimensional images, facilitating better feature extraction and accurate classification. The deep learning network used for recognition and classification is described in the subsequent sections; the overall modeling process is shown in Figure 3.

Relative Position Matrix Algorithm The electrocardiogram signal is represented as X = x1, x2, ..., xn, where xi is the value at sampling point (time step) i and n is the length of the signal. The RPM algorithm can be described as follows (an illustrative code sketch of the whole conversion is given at the end of this subsection). 1. Obtain a standard normal distribution Z for the ECG signal. The z-score normalization is performed as zi = (xi − µ)/σ, where µ is the mean of X and σ is its standard deviation. 2.
Calculate the relative position between each pair of time steps and transform the pre-processed ECG signal into a two-dimensional matrix M, in which the value at time step i serves as the reference point for row i. The transformation can be written as Mi,j = zj − zi, for i, j = 1, 2, ..., n. The resulting matrix M characterizes the relative position relationship between every pair of time steps in the ECG sequence. Each row and column of M is centred on a reference time step and thus carries information about the entire sequence: each row of M displays the time series with a different reference point, while each column shows the mirror image of the corresponding row, providing a reverse perspective on the series. 3. The final gray-level matrix F is obtained by applying the minimum-maximum normalization F = (M − min(M)) / (max(M) − min(M)). To reduce the dimensionality of the signals effectively, the Piecewise Aggregate Approximation (PAA) method is adopted: by replacing each segment with its average value, the dimensionality is reduced while the approximate trend of the original ECG signal is maintained. Ultimately, the smoothed ECG sequence is transformed into the matrix representation of a two-dimensional image, denoted F.

Conversion of Relative Position Matrix ECG Image The balanced segmented samples obtained above are converted into two-dimensional images using the relative position matrix. During the conversion, a pixel resolution can be set to control image accuracy and clarity, such as 224 × 224. The transformed images are generated from the RPM features obtained in preprocessing, as shown in Figure 4; the areas with high values in the figure correspond to locations with elevated amplitudes in the original ECG. To reduce the dimensionality of the normal-heartbeat ECG segments, the PAA method is employed, so their dimensionality is reduced to 300 × 300, as illustrated by the relative position map in Figure 4e. This method yields an intuitive display of the features and local structure of the ECG signal and provides important information for subsequent analysis and processing.
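The sketch below strings the three steps together with NumPy. It is illustrative rather than the authors' implementation: the exact definition of the relative position entry (taken here as the difference zj − zi) and the choice of 224 PAA segments are assumptions made for the example, and the series is assumed to be at least as long as the requested number of segments.

```python
import numpy as np

def paa(x, m):
    """Piecewise Aggregate Approximation: average the series into m segments."""
    idx = np.linspace(0, len(x), m + 1).astype(int)
    return np.array([x[idx[i]:idx[i + 1]].mean() for i in range(m)])

def relative_position_matrix(x, m=224):
    """Convert a 1-D ECG segment into an m x m grey-level RPM image."""
    z = (x - x.mean()) / (x.std() + 1e-12)            # step 1: z-score normalization
    z = paa(z, m)                                     # dimensionality reduction (PAA)
    M = z[None, :] - z[:, None]                       # step 2: relative positions M[i, j] = z_j - z_i
    F = (M - M.min()) / (M.max() - M.min() + 1e-12)   # step 3: min-max normalization
    return (F * 255).astype(np.uint8)                 # grey-level image, m x m pixels
```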
Design of Gam-Resnet18 Network Model for Relative Position Matrix Recognition ResNet18 [18] is a deep learning model used primarily in the field of image classification. It comprises convolution and pooling layers, residual blocks, and fully connected layers, which extract image features and address the problem of model degradation. In this study, the Gam-Resnet18 model is proposed as an improvement on ResNet18, customized specifically for the categorization of ECG signals. In the Gam-Resnet18 model, a GAM [19] module is added at the output of each residual block to enhance feature selection. This module performs global average pooling and uses a multilayer perceptron to calculate the weights associated with each channel; the weighted feature map is then added to the original feature map to obtain an output with enhanced information features. Compared with ResNet18, the Gam-Resnet18 model shows improved feature-selection ability owing to the added GAM modules. The structure of the proposed network is shown in Figure 5. In the modeling procedure, the 2D convolution and max pooling layers extract time-series features from the ECG signals, which have been transformed into image format, followed by the residual blocks and GAM modules.

Gam-Resnet18 Network Training The dataset is partitioned into separate training and test sets in an 8:2 ratio, with 20% of the training set reserved as a validation set. During model training, an image dimension of 224 × 224 is used with a batch size of 32 and a learning rate of 0.0001. The network is optimized with the Adam optimizer, which dynamically adjusts the update step for each parameter via adaptive learning rates; this accelerates the training process and helps avoid poor local optima. The performance of the model is evaluated by monitoring the accuracy and loss value on the validation set.

The loss value reflects the discrepancy between the predicted result and the true label, while accuracy is measured by the proportion of correctly predicted samples among all samples. The training processes are shown in Figure 6. The loss value on the validation set stabilizes after the sixth epoch, and the accuracy approaches that of the training set after the ninth epoch, indicating that the model has a strong generalization ability.
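A minimal PyTorch sketch of a model along these lines is given below. It is not the authors' implementation: the GAM module of ref. [19] also contains a spatial branch, so the block here only reproduces the channel-attention behaviour described in the text (global average pooling, an MLP producing per-channel weights, and the weighted map added back to the input); class names, the reduction ratio, and other details are our assumptions.

```python
import torch.nn as nn
from torchvision.models import resnet18

class ChannelAttention(nn.Module):
    """Global average pooling + MLP channel weights; weighted map added to the input."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        w = self.mlp(x.mean(dim=(2, 3)))       # (B, C) per-channel weights
        return x + x * w[:, :, None, None]     # enhanced feature map

class GamResNet18(nn.Module):
    """ResNet18 backbone with an attention block appended to each residual stage.
    Expects 3-channel 224 x 224 inputs (grayscale RPM images would be repeated to 3 channels)."""
    def __init__(self, num_classes=5):
        super().__init__()
        self.backbone = resnet18()             # pretrained ImageNet weights could be loaded for transfer learning
        for name, ch in [('layer1', 64), ('layer2', 128), ('layer3', 256), ('layer4', 512)]:
            stage = getattr(self.backbone, name)
            setattr(self.backbone, name, nn.Sequential(stage, ChannelAttention(ch)))
        self.backbone.fc = nn.Linear(512, num_classes)

    def forward(self, x):
        return self.backbone(x)
```

Training would then follow the stated setup, e.g. torch.optim.Adam(model.parameters(), lr=1e-4) with batches of 32 images of size 224 × 224.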
Metrics Evaluation The performance of the image classification model is evaluated with five key metrics: overall accuracy (ACC), positive predictive value (PPV), specificity (SP), sensitivity (SE), and the F1 score. Overall accuracy is the proportion of correctly classified heartbeat samples among all samples, while PPV is the ratio of correctly classified positive samples to all samples predicted as positive. Specificity measures the probability that true negative samples are correctly predicted as negative, and sensitivity the probability that true positive samples are correctly predicted as positive. The F1 score is the harmonic mean of precision and recall. Selecting appropriate metrics allows both the accuracy and the robustness of the classifiers to be assessed. The computations are carried out as ACC = (TP + TN)/(TP + TN + FP + FN), PPV = TP/(TP + FP), SP = TN/(TN + FP), SE = TP/(TP + FN), and F1 = 2 · PPV · SE/(PPV + SE). Here, TP denotes the arrhythmia samples of the current type correctly identified by the classifier, FP the samples of other types incorrectly identified as the current type, TN the samples of other types correctly identified as not belonging to the current type, and FN the samples of the current type incorrectly identified as other types.

To further assess the performance of the model, the Receiver Operating Characteristic (ROC) curve and the Area Under the Curve (AUC) are used as evaluation metrics. The ROC curve provides a visual representation of the relationship between the true positive rate (TPR) and the false positive rate (FPR) at various thresholds, with TPR = TP/(TP + FN) and FPR = FP/(FP + TN). The AUC, ranging from 0 to 1, quantifies the discriminative ability of the model, a higher value indicating better classification performance. The ROC curves of the model with the RPM are presented in Figure 7. The black dashed line marks the boundary between the classifier and random selection; a curve approaching the upper-left corner indicates better performance for the corresponding category. The micro-average curve evaluates overall performance by considering true positive and false positive rates across all categories, while the macro-average curve represents the average of the individual category curves.
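For illustration, the per-class metrics above can be computed from a confusion matrix as in the following sketch (ours, not the authors' code); scikit-learn is assumed to be available for building the confusion matrix.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

def per_class_metrics(y_true, y_pred, labels=('N', 'L', 'R', 'A', 'V')):
    """One-vs-rest ACC, PPV, SP, SE and F1 for each class, derived from the confusion matrix."""
    cm = confusion_matrix(y_true, y_pred, labels=list(labels))
    total = cm.sum()
    results = {}
    for k, lab in enumerate(labels):
        tp = cm[k, k]
        fn = cm[k, :].sum() - tp          # current-type beats predicted as another type
        fp = cm[:, k].sum() - tp          # other-type beats predicted as the current type
        tn = total - tp - fn - fp
        ppv = tp / (tp + fp) if tp + fp else 0.0
        se  = tp / (tp + fn) if tp + fn else 0.0
        sp  = tn / (tn + fp) if tn + fp else 0.0
        acc = (tp + tn) / total
        f1  = 2 * ppv * se / (ppv + se) if ppv + se else 0.0
        results[lab] = dict(ACC=acc, PPV=ppv, SP=sp, SE=se, F1=f1)
    return results
```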
The value of 1 for the micro-average AUC signifies an excellent classification performance over the entire dataset. Similarly, the macro-average AUC of 1 indicates a good average classification performance across the different categories, pointing to stable and accurate classification across multiple classes. In summary, these results provide evidence of the outstanding performance of the Gam-Resnet18 model in classifying electrocardiogram signals.

Identification Results By applying the trained model to the test data, the confusion matrix in Figure 8 is obtained, which allows the classifier's performance to be assessed. Based on the confusion matrix, the classification evaluation indicators are calculated and are shown in Table 3. The trained model demonstrates an excellent performance in classifying the five types of ECG signals, with a classification accuracy of 99.30%, indicating that the model can accurately classify ECG signals while distinguishing between the different types.

For comparison of the influence of the heartbeat segmentation, the classification performance was also evaluated using single heartbeats only. As indicated in Figure 9, the normal RPM image for the single beat is much neater than those of the arrhythmia types. The metrics in Table 3 reveal that a better performance is achieved when the mixed heartbeat segmentation is used for modeling. These findings highlight the enhanced validity of the hybrid heartbeat segmentation in the RPM + Gam-Resnet18 model, making it a reliable option for accurately classifying ECG signals.

Comparison of Images for Classification Modeling with Gam-Resnet18 Network
Transformation of ECG Signals To validate the efficiency of the proposed technique, the identical ECG signals are also transformed with the Gramian Angular Field (GAF) [20], Recurrence Plots (RP) [21], and the Markov Transition Field (MTF) [20]. Verifications are then performed by using the Gam-Resnet18 network model to recognize all of these image features. To ensure the consistency of the measurements, a pixel resolution of 224 × 224 is maintained for all samples and the same ECG signals are employed. After data preprocessing, the corresponding GAF, RP, and MTF images are generated, as illustrated in Figure 10; the three image formats are shown from left to right in each group of arrhythmias. In the GAF images there is a direct correlation between color intensity and value magnitude, which becomes more pronounced as the color intensity increases. In the RP features, the black scale signifies similarity between corresponding points, whereas the white scale indicates dissimilarity. For the MTF images, the color scale corresponds to the transition frequency between neighboring signal values.

Modeling Results During the model training process, identical hyperparameters are used, including a batch size of 32 and a learning rate of 0.0001, and the network is optimized with the Adam optimizer. Model performance is again evaluated by monitoring the precision rate and loss value on the validation set. By applying the trained models to the test set, the corresponding confusion matrices are acquired and the performance of each classifier is evaluated. Based on these matrices, the overall accuracy rates are 99.15%, 99.28%, and 98.57% for the RP, GAF, and MTF, respectively. In comparison, Tables 3 and 4 show that the relative position matrix algorithm achieves a higher overall accuracy of 99.30%. The RPM algorithm thus presents an excellent performance in accurately classifying ECG signals. These results also confirm the reliability and effectiveness of the proposed method, offering a significant reference for ECG signal classification.
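If one wanted to reproduce this comparison in Python, the pyts library offers ready-made implementations of the three encodings. The sketch below is our illustration under that assumption, not the authors' code; parameter choices are examples.

```python
from pyts.image import GramianAngularField, MarkovTransitionField, RecurrencePlot

def encode_images(beats, size=224):
    """beats: array of shape (n_samples, n_timestamps), e.g. the segmented heartbeats.
    Returns the GAF, MTF and RP encodings of each segment."""
    gaf = GramianAngularField(image_size=size, method='summation')
    mtf = MarkovTransitionField(image_size=size)
    rp = RecurrencePlot()   # keeps the original length; resize afterwards for a fixed 224 x 224 input
    return gaf.fit_transform(beats), mtf.fit_transform(beats), rp.fit_transform(beats)
```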
Comparison with the Original ECG Signal In this section, a detailed comparison is conducted between the RPM and the original ECG signal as model input, using the Gam-Resnet18 model. To ensure a fair comparison, the dataset is partitioned and trained with the same methodology. On the testing set, an overall accuracy of 99.27% is achieved for ECG signal classification with the original signal, whereas the RPM algorithm exhibits a higher overall accuracy of 99.30%, as shown in Tables 3 and 5. This again indicates the effectiveness of the RPM + Gam-Resnet18 model in classifying cardiac arrhythmias.

Comparison with Reported Results In this section, the RPM algorithm is compared with existing techniques, as summarized in Table 6. For instance, a transfer learning network based on AlexNet [22] was used to transform ECG signals into grayscale images, achieving a classification accuracy of 94.95%. Similarly, particle swarm optimization [23] was employed with a support vector machine model, reaching an accuracy of 98.57% for five-class classification. A one-dimensional CNN model with LSTM was designed [24] to classify similar types of heartbeats, resulting in an overall accuracy of 98.10%. Additionally, a feature extraction method based on ensemble empirical mode decomposition (EEMD) was proposed [25] for classification with a sequential minimal optimization support vector machine (SMO-SVM). Compared with these results, the proposed relative position matrix with the Gam-Resnet18 network produces a higher classification accuracy, confirming its superiority for ECG signal classification.

Discussions In future research, this method can be expanded and improved in several ways. First, the feature extraction capability for ECG signals can be enhanced by improving the relative position matrix. Second, sophisticated data augmentation methods, such as generative adversarial networks (GANs), could be integrated to expand the dataset and enhance the robustness of the model. Third, the classification accuracy can be further improved by combining adaptive learning and model fusion technologies. Finally, for noisy and anomalous data, more effective denoising and filtering methods may be employed to increase the model robustness and boost the classification performance.

Conclusions A novel approach is presented in this paper to classify ECG signals using deep learning networks. To address the recognition strategy, a relative position matrix is used to describe the relative spatial relationships between different waveforms, combined with Gam-Resnet18 image recognition technology for classification. The goal of this method is to enhance the feature extraction ability and classification accuracy for ECG signals by employing image transformation, attention mechanisms, and residual blocks. The main conclusions are summarized as follows.

In terms of data processing, ECG signals are segmented into mixed heartbeat samples comprising single heartbeats and 5 s fragments. The segmented signals are denoised with the db6 wavelet basis and a 5-level wavelet-transform decomposition. For image conversion, a feature extraction strategy based on the relative position matrix is adopted. This method yields an intuitive display of the features and local structure of the ECG signal, providing support for automatic classification tasks.
For the two-dimensional image recognition, a Gam-Resnet18 network model is proposed for the modeling and classification of ECG signals. The GAM global attention mechanism is introduced at each residual block to improve the recognition capability of the model, and transfer learning is employed to accelerate the model training process.

Using the RPM approach with the Gam-Resnet18 model, the proposed method is validated with a total accuracy of 99.30%. The results are also compared with the GAF, RP, and MTF methods and with other reported results to validate the effectiveness. It is demonstrated that the relative position matrix better reflects the differences between the various types of arrhythmias, thereby improving the accuracy and stability of classification.

Figure 3. Design of the modeling process. Figure 9. ECG normal image of relative position matrix for single heartbeat. Table 1. Classification results of five types of heart rate based on single beat. Table 2. Segregation results of the mixed heartbeat samples. Table 3. Performance metrics for 5-class classification with RPM. Table 4. Evaluation metric results for cardiac arrhythmia classification. Table 5. Performance metrics for 5-class classification with original ECG signal. Table 6. Comparison of results with other literature.
8,814.4
2023-08-26T00:00:00.000
[ "Medicine", "Computer Science" ]
Analysis of price determinants in the case of Airbnb listings Abstract Nowadays, the role of the sharing economy in tourism is increasing, with the number of people involved as guests or hosts rising day by day. This dynamic generates a viable alternative to traditional services, allowing tourists to customise their trips and enrich their experiences. This paper focuses on accommodation services, investigating the factors influencing the prices established by Airbnb hosts. Using structural equation modelling, the authors analyse the influence of different categories of factors (listing's characteristics, hosts' involvement, listing's reputation, listing's location, and rental policies) on the average daily rate. The results emphasise that hosts establish the listing's price based on the listing's characteristics and on their own involvement: owners managing only one listing and those charging a security deposit value their involvement more, while the other hosts focus more on the listing's characteristics. The location of the listing matters for experienced owners, while for opportunist owners it has no importance. The listing's reputation has a negative impact on the price, contrary to the conclusions reached in other studies, which supports the idea that price determinants differ across regions.

Introduction Currently, the new socio-economic system of the sharing economy represents a habitual element in the market, recognised and embraced by an increasing number of individuals. It has emerged thanks to ICT innovations in different areas such as transportation, accommodation, food, and skills (Tussyadiah & Pesonen, 2016). Its greater spread in the hospitality industry led to the development of platforms like Airbnb, 9Flats, Expedia, and Fairbnb. Among these, Airbnb, a pioneer in peer-to-peer accommodation, is the most important player at the global level, with a substantial influence on the market. According to a recent report, Airbnb guests can book from more than 6 million rooms, flats, and houses in around 81,000 cities all over the world, and more than 150 million users are enrolled in the system, which provided accommodation services for 260 million guests in 2018 (Sherwood, 2019). Airbnb has also accumulated competitive advantages and financial resources that allow it to move into the hotel business: for 2020, through a partnership with an accommodation developer to convert some commercial properties in New York into high-end apartment-style suites, Airbnb will enter the lodging market alongside traditional players. The development of Airbnb has changed travellers' consumption patterns, on the one hand, and the structure of the lodging sector, on the other (Wang et al., 2019). For example, due to the important presence of Airbnb supply in some areas, hotels had to respond by lowering their prices (Zervas et al., 2017), reducing their profitability. Consumers benefited from this strategy, even those who were not customers of the Airbnb platform, and even more from the growing diversity of goods and services provided through peer-to-peer platforms. After the economic recession, the impact was enhanced by the fact that people paid more attention to their spending and tried to use more opportunities for preserving their resources.
The boost in the popularity of the sharing economy system has attracted the attention of practitioners and academics, and the literature thus contains many studies (Hamari et al., 2016; Sigala, 2017) that try to explain people's motives for sharing and providing services (especially accommodation) through commercial sharing platforms, the way they establish prices, the influence on consumer behaviour, and the impact on the travel and tourism industry. Focussing on pricing strategies, several aspects still have to be explored: what are the differences in owners' strategies, what is the impact of their goals, what criteria are used to establish the price, and why the impact of factors differs from region to region. Using structural equation modelling, the purpose of the present study is to develop a formative model that enables the identification of the factors that influence Airbnb listing prices. This paper aims to enrich the literature by concentrating the analysis on hosts' price-setting behaviour. Starting from the characteristics of the different categories of owners, the paper explores the impact of these differences on price determinants, an approach that represents a new contribution to the literature. Moreover, the authors propose and test the role of a new variable, host involvement, which upgrades the host's attributes by including the behaviour the owners have in relation to their guests and the Airbnb platform. To achieve the paper's goals, a multigroup analysis is implemented based on the following four criteria: the owners' reasons to share their homes, the number of listings, the requirement of a security deposit, and the existence of superhost status. Official data for 881 Airbnb listings in the city of Cluj-Napoca (Romania), covering the time span April 2017 to March 2018, were used to apply the model and to implement the analyses. The remainder of this paper is organised as follows: section two presents the literature, section three describes the proposed model, section four presents the variables used and the methodology applied to collect and process the data, and section five discusses the results. The last section is dedicated to conclusions.

Theoretical framework The Airbnb platform differs from traditional accommodation units in terms of facilities, customer service, website design, and booking systems. The lodging model specific to Airbnb favours closer social interaction between residents and tourists (Tussyadiah & Pesonen, 2016), creating a sense of place as a competitive advantage compared with conventional lodging proprietors (Cheng, 2016). In addition, the literature suggests that this social appeal of peer-to-peer accommodation contributes to several outcomes such as a longer length of stay, a higher frequency of travel, and a bigger range of activities. On the other hand, Farmaki and Stergiou (2019) highlighted that, among the hosts' motives to share their homes, social interactions with guests are as important as economic reasons. Furthermore, different types of behaviour were illustrated according to the needed intensity of social interaction (Farmaki & Stergiou, 2019).
Many studies have focussed on Airbnb's advantages and threats (Fang et al., 2016; Lampinen & Cheshire, 2016; Meleo et al., 2016), legal issues (Edelman & Geradin, 2015), impacts on the hotel industry and on tourism industry employment (Fang et al., 2016; Neeser et al., 2015; Zervas et al., 2017) or revenues (Frenken & Schor, 2017; Gibbs, Guttentag, Gretzel, Yao, et al., 2018; Gunter & Onder, 2018; Zhang et al., 2018). For instance, Guttentag (2015) emphasised that the strengths of Airbnb are related to lower prices and to the socio-cultural benefits of staying in a local residence. On the other side, the lack of legislation regulating the activity and the safety issues may represent possible threats to Airbnb's future growth. A regulatory framework is mandatory to allow peer-to-peer platforms to operate legally and to allow all stakeholders to enjoy the advantages created by these platforms. Zervas et al. (2017) reported that a 1% increase in Airbnb listings in Texas resulted in a 0.05% decrease in quarterly hotel revenues, a result that supports the idea that Airbnb development may negatively affect the financial results of traditional accommodation units. Fang et al. (2016) examined the effect of Airbnb on the tourism industry in Idaho, USA; their results emphasise that the sharing economy can generate employment, especially in small to medium markets, the additional impact becoming less significant as the market grows (Fang et al., 2016). The exponential growth of the Airbnb platform has raised interest in studies trying to highlight the price determinants of Airbnb listings (Wang & Nicolau, 2017). Since the price influences clients' choice of accommodation and hosts' profits, identifying the factors determining the price can help hosts establish an equitable price, so that both owners and guests benefit from being part of the sharing economy. Until now, different studies have focussed on the price determinants of Airbnb listings, their results emphasising the complexity of the relationship between pricing and its determinants. Moreover, different authors suggest that the relationship between price and its determinants undoubtedly differs across cities, countries, and regions due to the variation in city types, city economics, and Airbnb development in the region (Lorde et al., 2019). For instance, racial discrimination creates an unfortunate impact on prices, Afro-American, Asian, and Hispanic owners being obliged to charge lower prices than white hosts for similar listings (Edelman & Luca, 2014; Kakar et al., 2016). Gutt and Herrmann (2015) identified that once the minimum number of reviews is obtained and the rating becomes visible, the price increases by €2.69; moreover, higher ratings determine hosts to charge higher prices. Prices are also correlated with reputation and trustworthiness. Ikkala and Lampinen (2014), Gutt and Herrmann (2015), Kakar et al. (2016), Brochado et al. (2017), and Mody et al. (2017) emphasise the impact of different components of reputation on prices, most of the elements having a positive effect. Ert et al. (2016) highlighted that visual-based trustworthiness, created through the users' photographs in the case of Airbnb listings in Stockholm, determines the listing prices. Li et al. (2016) and Zhang et al. (2017) reported that the distance to the nearest landmark and facility is correlated with Airbnb listing prices. Dudás et al. (2017) underline that, in the case of Budapest, the distance to the city centre is a weak price determinant.
Summarizing the findings from previous studies on Airbnb pricing, five groups of variables were identified.

Listing/home attributes include the following variables: accommodation type and room type, number of bedrooms, number of bathrooms, and facilities (car parking, swimming pool, and wireless Internet) have a positive effect on prices (Chen & Xie, 2017; Wang & Nicolau, 2017); the number of accommodation photos has a positive effect on prices (Ert et al., 2016); free breakfast has a negative impact on prices (Wang & Nicolau, 2017).

Listing location includes the following variables: neighbourhood average rental price, number of other Airbnb listings, price of similar Airbnb listings or price of hotels, and availability of sightseeing, eating or shopping areas, or other attractions have a positive effect on prices (Chen & Xie, 2017; Kakar et al., 2016; Li et al., 2016); the distance to the city centre has different effects on prices, ranging from weak (Dudás et al., 2017) to significantly negative (Li et al., 2016).

Listing reputation includes the following variables: ratings on cleanliness and location have a positive effect on prices (Chen & Xie, 2017; Kakar et al., 2016); the number and scores of reviews have a negative effect on prices (Kakar et al., 2016; Wang & Nicolau, 2017), since a higher number of reviews is the result of a higher number of bookings, which is usually a characteristic of cheaper listings; the average review score has a positive effect on prices (Gutt & Herrmann, 2015; Ikkala & Lampinen, 2014); ratings on accuracy and check-in do not have a significant effect on prices (Chen & Xie, 2017).

Host attributes include the following variables: hosts' listing count, host verification, host profile picture, and response time have a positive impact on the price (Chen & Xie, 2017; Wang & Nicolau, 2017); race has a negative effect on prices (Edelman & Geradin, 2015; Kakar et al., 2016); gender, marital status, and sexual orientation do not have a significant effect on price (Chen & Xie, 2017; Kakar et al., 2016); superhost status has mixed effects on prices (Chen & Xie, 2017; Kakar et al., 2016; Liang et al., 2017; Wang & Nicolau, 2017).

Rental policies describe the Airbnb policies the owner may implement: a strict cancellation policy and requiring guests' phone verification have a positive effect on prices (Chen & Xie, 2017; Wang & Nicolau, 2017); allowing smoking has a negative effect on prices (Wang & Nicolau, 2017).

Based on these conclusions, the following hypotheses will be tested. H1: Better listing characteristics will determine the hosts to charge a higher price. H2: A listing located farther from the city centre will determine the hosts to charge a lower price. H3: A better listing reputation will determine the hosts to charge a higher price. H4: Hosts willing to be more involved in managing Airbnb listings charge a higher price. H5: The stricter the rental policies, the higher the price charged.

The model The formative model developed to investigate the factors influencing the hosts' behaviour in setting rental prices has the average daily rate (measured in USD) as the dependent variable and the five latent variables described in the literature section as independent variables. When the owners establish the price, they take into consideration different aspects that influence the value of their listing for tourists: the listing's characteristics, their rental policies, their involvement, the listing's reputation, and the distance to the city centre.
The host involvement construct represents an upgrade of the host's characteristics, including the owner's attributes and behaviour in managing their listing on the platform. The effects of host attributes on prices are mixed (Kakar et al., 2016; Wang & Nicolau, 2017), while the effects of the owner's behaviour have not yet been analysed in the literature. Based on the previous findings, and considering reputation an important factor in this process, being the result of the interaction between the other categories of factors (host involvement, listing characteristics, rental policies, location), the authors have developed the model described in Figure 1.

Description of variables Browsing the literature focussed on pricing strategies in the sharing economy and the tourism industry, several categories of factors were identified. Starting from this information, research hypotheses were developed, taking into consideration the specific aspects of sharing economy accommodation services. One of the most important issues to address is the flexibility of supply: compared to traditional accommodation units, sharing economy owners are able at any moment to start (be active) or stop (be inactive) providing their services, an aspect which determines fluctuations in supply from one month to another. This situation is the result of having at least three different categories of owners: 1) the experienced (professional) host, who provides services throughout the entire year, possibly operating more than one listing; 2) the stable income seeker, who wants to obtain extra income by renting the space regularly, but not on a full-time basis; and 3) the opportunist, who rents the space occasionally just to obtain extra income in a specific period, usually when the achievable income is higher due to the seasonality of tourist flows. The listing's characteristics, location, reputation, and rental policies latent variables have similar content to the studies presented in the literature section, while the host's involvement latent variable represents a new approach proposed by the authors. Previous studies analysed the influence of the owner's characteristics/attributes on prices (Ert et al., 2016; Lorde et al., 2019), emphasising that the number of host photos and their facial expression represent sources of trustworthiness, which is associated with higher prices. Furthermore, the owner's behaviour and actions in managing the listing's presence on the platform and the communication with potential customers represent other important sources of trust. These elements describe, in the authors' view, the host's involvement/effort in making their listing(s) more attractive for potential customers, a latent variable whose influence on the price will be investigated. In this direction, the authors consider that an owner is more involved if he/she is willing to: (1) fulfil the criteria imposed by Airbnb to become a superhost; (2) respond fast, and to all inquiries received: each host has 24 hours to respond to any inquiry, but some answer very fast, others very late, and some do not answer at all; hosts who are more involved and who want better performance will answer fast, so faster answers mean higher involvement, and answering all inquiries is likewise a sign of involvement; (3) upload a bigger number of photos for each listing, to allow potential customers to form a better image of the location's characteristics.
Each owner decides how many photos to upload to the Airbnb platform; the bigger the number of photos available, the higher the chance of attracting bookings, so a host interested in being an option for potential customers will upload more photos. Therefore, a bigger number of listing photos is a sign of deeper involvement; (4) manage many listings on the Airbnb platform; (5) implement a flexible pricing strategy according to the changes in market supply and demand. If the host has a very low involvement in the pricing strategy, he/she will charge approximately the same price no matter the moment of the year, the variance of the average daily rate being small. On the other side, an owner who is very involved in the pricing process will try to adjust the price according to the changes in the market, the variance of the average daily rate being higher.

Data collection and processing To study the research hypotheses, yearly and monthly data on all Airbnb listings from Cluj-Napoca were obtained from AIRDNA (airdna.co). The main purpose of the data selection was to identify and use only the active listings in the analysis. The following criteria were used: the monthly reported average daily rate should be higher than 0; only listings with reviews were selected; shared-room listings were eliminated due to their small number (only 10 listings). During the timespan April 2017 to March 2018, a maximum of 952 listings were available for booking on the platform (see Figure 2); applying the previous criteria, 881 listings were used in the analysis. Due to the tourism specificity of the city, the number of listings available fluctuates significantly from month to month, which allows the accommodation supply to adjust easily to tourism demand. The events organized in the city, some of them well known at the national and international level, generate significant fluctuations both in the size of tourism flows and in the price of accommodation services. Most of the listings are small apartments (1-2 bedrooms), rented entirely to guests, without asking for a security deposit. Approximately 20% of the hosts were involved in the renting phenomenon for at least 10 months, 25% of them had superhost status, while approximately 50% of the hosts were managing only one listing. Structural equation modelling and multigroup analysis were conducted using SmartPLS 3. To implement the multigroup analysis, different categories of owners were created based on the following criteria: by the number of months the hosts were active (their location was listed) on the platform during the analysed period: experienced (their listing was available for at least 10 months), stable income seekers (available between 5 and 9 months), and opportunists (available for at most 4 months); by the number of listings managed on the platform: one listing versus many listings; by the requirement of a security deposit: hosts who ask for a security deposit versus those who do not; by superhost status: hosts who have superhost status versus those who do not.

The model validation To validate the model described in Section 3, the statistical significance of the indicators' effects on the latent variables was assessed through a bootstrapping procedure.
Since this is a formative model, a more liberal approach was adopted, the effect being considered statistically significant if the p-value is less than 0.1 (see Table 3). The second step performed to validate the model was the collinearity diagnostic. Especially in the case of a formative model, multicollinearity must be avoided, otherwise the regression estimates are unstable and have high standard errors. The variance inflation factors (VIFs) were assessed and all values were below 3 (see Table 4), which means there is no problem with multicollinearity in the proposed model. Kock (2015) emphasised that a VIF greater than 3.3 is a sign of pathological collinearity and an indicator that the model may be contaminated by common method bias. In the case of the proposed model, since all VIFs resulting from a full collinearity test are lower than 3.3, the model is not affected by common method bias. To identify whether there are significant differences between hosts concerning the factors that influence their price decision, a multigroup analysis was performed. The proposed model was applied to the categories of owners described in the data collection and processing section, to identify significant differences between the path coefficients.
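The study runs these diagnostics in SmartPLS 3, but the collinearity check can also be reproduced outside that tool. The sketch below is our illustration only, with hypothetical column names: it computes variance inflation factors for a set of indicators with statsmodels and flags values above the 3.3 threshold discussed above.

```python
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.tools.tools import add_constant

def vif_table(df: pd.DataFrame) -> pd.Series:
    """VIF per indicator; values above roughly 3.3 would signal pathological collinearity."""
    X = add_constant(df)   # intercept so VIFs are computed against a constant term
    vifs = {col: variance_inflation_factor(X.values, i)
            for i, col in enumerate(X.columns) if col != 'const'}
    return pd.Series(vifs).sort_values(ascending=False)

# Hypothetical indicator columns, e.g.:
# listings = pd.read_csv('listings.csv')
# print(vif_table(listings[['bedrooms', 'max_guests', 'photos', 'cleaning_fee']]))
```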
Results and discussions To evaluate the research hypotheses, the model presented in the methodology section was developed and applied to the case of Cluj-Napoca, Romania. The variables described previously were initially taken into consideration in the model creation, while in the final model only the variables and relationships with statistical significance were kept. According to Sarstedt et al. (2017), the strength of the relationship between latent variables is represented by the path coefficients. A path coefficient of 0.441 between the listing's characteristics latent variable and the average daily rate implies that owners who consider their listing as providing better amenities will charge higher prices: for each additional standard deviation unit, the price construct will increase by 0.441 standard deviation units, keeping all other independent constructs constant. Moreover, analysing the path coefficients, it can be observed that the listing's characteristics register the strongest impact on price, followed by the host involvement latent variable, these two aspects being taken into consideration most often by the hosts. Among the listing's characteristics, only the number of bedrooms, the listing type, and the maximum number of guests were kept. Of these variables, the most important for hosts is the maximum number of guests, a bigger accommodation capacity determining the host to charge a higher price; owners renting the entire location charge a higher price than owners renting private rooms. In conclusion, the H1 research hypothesis is validated, the hosts considering the listing's characteristics as one of the most important aspects for pricing, in line with the previous studies presented in the literature section. Regarding the H1 hypothesis, the multigroup analysis brings important contributions to the literature. The analysis of hosts managing one listing compared to hosts managing many listings showed that the latter grant higher importance to the listing's characteristics when they establish the price, a statistically significant difference of 0.192 (p-value = 0.998) being identified between the path coefficients.

Correlating this result with the path coefficients between the host's involvement and price (involvement is the most important aspect in the pricing process for owners managing one listing), we can suppose that hosts managing many listings cannot be very involved in managing all of them, so they prefer to focus more on the listing's characteristics when they fix the price. Another difference in pricing behaviour is identified for hosts asking for a security deposit, who value their involvement more than the listing's characteristics. Since only approximately 10% of owners do this, security deposits may hurt their revenues, as many hosts confirmed on Airbnb forums. Being aware of this risk, they grant bigger importance to their involvement in hosting when they fix the price, in order to generate added value for the customers. Having the guarantee of being able to cover unexpected damages, they try to compensate through their involvement the potential negative impact of their requirement; a statistically significant difference of 0.353 (p-value = 0.008) was identified between the path coefficients. According to Figure 3, the greater the distance to the city centre, the lower the price: the path coefficient of −0.078 between the DistanceCityCentre and price latent variables implies that owners who consider their listing as being located further away will charge lower prices, the price construct decreasing by 0.078 standard deviation units for each additional standard deviation unit of the distance variable, keeping all other independent constructs constant. Therefore, the H2 hypothesis is validated. The multigroup analysis confirms the mixed effect of the location described in the literature, providing valuable insights into the reasons for these effects. As hosts gain more experience on Airbnb, they give higher importance to the distance from the city centre: a statistically significant difference of 0.168 (p-value = 0.000) was identified between the path coefficients of experienced and opportunist hosts, and a statistically significant difference of 0.090 (p-value = 0.032) between the path coefficients of opportunists and stable income seekers. In their long-run strategy, experienced hosts are aware of the negative impact the distance may have on their listing's reputation and will try to compensate for the distance with a price decrease. On the other side, opportunists usually rent in periods when tourist flows are high, or when a specific opportunity appears, so their short-run strategy is not influenced much by the listing's location. Of all variables tested in the final model, the listing's reputation is evaluated through the monthly number of booked days and the number of reviews. Analysing the results in Figure 3, it can be concluded that the H3 hypothesis is not validated: the path coefficient of −0.187 between the listing's reputation and price implies that owners who consider their listing to have a good reputation will charge lower prices, the price construct decreasing by 0.187 standard deviation units for each additional standard deviation unit of the listing's reputation variable, keeping all other independent constructs constant. A better reputation does not determine the owner to increase the price, but to lower it.
This result contradicts the previous literature; one possible explanation is the host's strategy of increasing the occupancy rate by lowering the price, which also increases the number of reviews. This approach seems characteristic of all categories of owners in the analysed market, no statistically significant difference being identified between the path coefficients of any host categories. The strategy may be characteristic of new/incipient/immature markets, like Cluj-Napoca's (the first apartment was listed on Airbnb in 2015), where hosts, aware of the increased competition they will face in the future, are interested in building a good reputation to achieve a long-run competitive advantage. The hosts' involvement in managing their listing may be characterised through different items; in the final model, only the number of photos and the variance of the average daily rate were statistically relevant. As can be noticed in the model (Figure 3), a path coefficient of 0.369 between host involvement and the average daily rate implies that more involved owners will charge higher prices: for each additional standard deviation unit, the price construct will increase by 0.369 standard deviation units, keeping all other independent constructs constant, so the H4 hypothesis is validated. Besides confirming the role of host characteristics described in the literature, this result extends the literature by highlighting the significant role of the owner's behaviour. Rational hosts, focussed on maximising their listing value in the long run, will be willing to allocate more resources and to get more involved in developing a pricing policy adapted to changes in market demand. The rental policies with statistically significant influence in the model are the value of the cleaning fee, the extra people fee, the minimum stay, and instant booking enabled. Any additional fee the host charges will increase the final price, while the minimum stay and instant booking policies facilitate a decrease in price. Overall, the H5 hypothesis was validated: the path coefficient of 0.160 between rental policies and price implies that hosts who impose rental policies will tend to charge higher prices, the price construct increasing by 0.160 standard deviation units for each additional standard deviation unit of the rental policies variable, keeping all other independent constructs constant. The multigroup analysis showed that owners with superhost status consider the impact of rental policies on price less significant than other hosts, a statistically significant difference of 0.128 (p-value = 0.030) being identified between the path coefficients. One possible explanation for this difference is that superhosts prefer to have instant booking enabled and to establish more than one night for the minimum stay, aspects which compensate for the price increase due to other policies.

Conclusions The findings of this study highlighted different degrees of influence of a set of 5 categories of variables upon the host's behaviour on the Airbnb platform in the process of setting rental prices. A total of 24 variables were considered, classified in the following categories: listing's characteristics (7 variables), listing's reputation (4 variables), distance to the city centre (1 variable), host's involvement (6 variables), and rental policies (6 variables).
Theoretical implications
Following previous studies in the same area of research, the greatest impact on price decisions comes from the listing's characteristics, especially the maximum number of guests, the number of bedrooms, and the type of listing. A novel contribution in this regard is the finding that owners who manage many listings are influenced more significantly by listings' characteristics when establishing rental prices than those who manage only one listing. The study confirms that the location of the listing represents another important variable in setting the price. As in the case of hotels, a better location, defined as a shorter distance to the city centre or proximity to a tourist attraction, is used by hosts to increase their rental prices. Previous studies emphasised that the distance to the city centre may have both positive and negative effects on prices, and the results of this study bring a new perspective on this situation. Depending on their strategic thinking, owners perceive the importance of location in their pricing strategies differently. The opportunist hosts value the role of the location less, the main reason being their short-run strategy focussed on revenue maximisation, compared to experienced owners who are focussed on achieving long-run success. This study adds new knowledge to the existing literature by defining a new category of variables in the analysis of price determinants: the hosts' involvement in managing their listing(s). Previous studies focussed on hosts' characteristics/attributes and assessed mostly the impact of those attributes on pricing decisions, while in this paper the authors focus on the hosts' behaviour in the usage of revenue management principles and pricing policies to generate additional value for themselves and their customers. This behaviour represents strong evidence of the rise of a new, more entrepreneurial host segment, paying more attention to the market environment, with more knowledge of accommodation business activities and appropriate marketing and pricing strategies.
Practical implications
On a platform where the reviews count, both guests and owners adjust their behaviour to build confidence and to co-create value for both sides. As a result, most of the hosts with good reviews follow a pricing approach that allows them to create a good quality-cost experience for their guests. For example, the hosts who impose more complex rental policies tend to use higher prices in the rental process; at the same time, being aware that the rental policies bring them additional benefits, they are willing to get more involved in order to generate additional value for their guests as well. Thus, through increased interaction with the guest and the Airbnb platform, the owners improve the quality of the services they provide, enriching the guests' experience; better quality is associated with higher prices. Furthermore, the hosts asking for a security deposit are characterised by a higher level of involvement, this aspect being more important in their pricing strategy than the listings' characteristics. Superhosts represent another category of owners who provide high-quality services to their guests. In contrast to the previous approach, the higher quality they provide results from their higher involvement and flexibility (most of them allow instant booking) and from their willingness to provide better listing amenities, the rental policies being less significant.
Contrary to the conclusions of the studies described in the literature section, the listing's reputation has a negative impact on the price established by hosts. This result supports the idea that price determinants differ across regions, two aspects being relevant for this situation: the level of market development and the approach the host takes in building the listing's reputation. Usually, an incipient market registers high dynamics in the number of listings available from year to year, with some of the hosts adopting long-run strategies to build their reputation. Also, regardless of the level of market development, there will always be new hosts joining the platform, aiming to achieve recognition rapidly. In both cases, these hosts are willing to charge lower prices to receive more bookings, increase the occupancy rate, and receive more reviews. Moreover, by correlating these aspects with a high level of involvement, the owners will be able to achieve both very good review scores and a high number of reviews, proof of the high-quality experience they provide. Thus, the results obtained may represent appropriate guidelines for people willing to rent their apartments/homes through a sharing economy platform when choosing the best strategy. The "right price" must be established starting from the home's characteristics and the facilities provided to clients. Then, according to their objectives (short-run or long-run), they will have to adjust the price in relation to the distance to the city centre and their involvement.
Limitations of the study
The price determinants undoubtedly differ across regions, due to the regions' characteristics and economies and due to Airbnb's development. As a result, testing the proposed model on a developing market may represent a limitation of the study. Compared with previous studies, most of the price determinants have a similar influence on pricing decisions, but there are also other factors with a different impact, creating new opportunities for further studies. The influence of market development on the relationship between a listing's reputation and the price represents one of the aspects that will be studied in the future. In mature markets, owners who achieve reputation use this asset to obtain higher revenues, while in the case of incipient markets, the goal of achieving reputation may be associated with lower prices.
Disclosure statement
No potential conflict of interest was reported by the author(s).
7,870.8
2021-08-11T00:00:00.000
[ "Economics", "Business" ]
Search for heavy bottom-like quarks in 4.9 inverse femtobarns of pp collisions at sqrt(s) = 7 TeV
Results are presented from a search for heavy bottom-like quarks, pair-produced in pp collisions at sqrt(s) = 7 TeV, undertaken with the CMS experiment at the LHC. The b' quarks are assumed to decay exclusively to tW. The b' anti-b' to t W(+) anti-t W(-) process can be identified by its distinctive signatures of three leptons or two leptons of same charge, and at least one b-quark jet. Using a data sample corresponding to an integrated luminosity of 4.9 inverse femtobarns, observed events are compared to the standard model background predictions, and the existence of b' quarks having masses below 611 GeV is excluded at 95% confidence level.
Introduction
The total number of fermion generations is assumed to be three in the standard model (SM), though the model does not provide an explanation of why this should be the case. Thus the possible existence of a fourth generation remains an important subject for experimental investigation. Adding a fourth generation of massive fermions to the model may strongly affect the Higgs and flavour sectors [1-5]. A fourth generation of heavy quarks would enhance the production of Higgs bosons [6], while the indirect bound from electroweak precision data on the Higgs mass would be relaxed [7,8]. Additional massive quarks may provide a key to understanding the matter-antimatter asymmetry in the universe [9].
Various searches for fourth-generation fermions have already been reported. Experiments have shown that the number of light neutrino flavours is equal to three [10-13], but the possibility of additional heavier neutrinos has not been excluded. A search for pair-produced bottom-like quarks (b') by the ATLAS collaboration excludes a b'-quark mass of less than 480 GeV/c^2 [14]. Earlier studies setting mass limits on possible fourth-generation quarks, from experiments at the Tevatron and the Large Hadron Collider (LHC), can be found in Refs. [15-21].
Using the Compact Muon Solenoid (CMS) detector, we have searched for a heavy b' quark that is pair-produced in pp collisions at a centre-of-mass energy of 7 TeV at the LHC. We assume that the mass of the b' quark (M_b') is larger than the sum of the top quark and the W-boson masses. If the b' quark couples principally to the top quark, the decay chain b' anti-b' -> tW- anti-t W+ -> bW+W- anti-b W-W+ will dominate [22]. Given the 11% branching fraction for a W-boson to each lepton, distinctive signatures of b' anti-b' production are expected, specifically those of two isolated leptons with the same charge ("same-charge dileptons") or three isolated leptons ("trileptons"). Although occurring very rarely in the standard model, these two signatures may be present in 7.3% of the b' anti-b' events. An earlier search by CMS [17] in the same-charge dilepton and the trilepton channels, utilizing a data set corresponding to an integrated luminosity of 34 pb^-1, set a lower limit on the mass of the b' quark of 361 GeV/c^2 at the 95% confidence level (CL). Here we present an update of this search using a much larger data set, corresponding to an integrated luminosity of 4.9 fb^-1.
CMS detector and trigger
This analysis is based on the data recorded by the CMS experiment in 2011. The central feature of the CMS detector is a superconducting solenoid, 13 m in length and 6 m in diameter, which provides an axial magnetic field of 3.8 T.
Charged-particle trajectories are determined using silicon pixel and silicon strip tracker measurements. A crystal electromagnetic calorimeter, including lead-silicon preshower detectors in the forward directions, together with a surrounding brass/scintillator hadronic calorimeter, encloses the tracking volume and provides energy measurements of electrons and hadronic jets. Muons are identified and measured in the tracker and in gas-ionization detectors embedded in the steel return yoke outside the solenoid. The detector is nearly hermetic, providing measurements of any imbalance of momentum in the plane transverse to the beam direction. A more detailed description of the CMS detector can be found in Ref. [23].
A two-level trigger system [24] selects events for further analysis. The events analyzed in this search are collected with the requirement that the trigger system detects at least two lepton candidates. Efficiencies for these dilepton triggers are determined using events that pass a jet trigger, have two reconstructed electrons or muons, and that also pass the full selection criteria described in the next section. For these selected events, the dilepton trigger efficiencies are
Selection criteria
The use of the CMS particle-flow global event reconstruction procedure [25-28] has been extended beyond its application in Ref. [17]. In the present analysis, all physics objects (leptons, jets, and missing transverse energy) are reconstructed with this procedure. The reconstruction and selection criteria for each physics object used in this analysis are described below.
Candidate muons are reconstructed through a global fit to trajectories, using hit signals in the inner tracker and in the muon system. Muons are required to have transverse momenta pT > 20 GeV/c and |η| < 2.4, where the pseudorapidity η = -ln[tan(θ/2)] and θ is the polar angle relative to the anticlockwise beam direction. The muon candidate must be associated with hits in the silicon pixel and strip detectors, have segments in the muon chambers, and provide a high-quality global fit to the track segments. The efficiency for these muon selection criteria is >99%, as measured from Z decays [29]. In addition, the muon track is required to be consistent with originating from the principal primary interaction vertex, which is defined as the one associated with the tracks yielding the largest value for the sum of their pT^2.
Reconstruction of electron candidates starts from clusters of energy deposits in the ECAL, which are then matched to hits in the silicon tracker. Electron candidates are required to have pT > 20 GeV/c. Candidates are required to be reconstructed in the fiducial volume of the barrel (|η| < 1.44) or in the end-caps (1.57 < |η| < 2.4). The electron candidate track is required to be consistent with originating from the principal primary interaction vertex. Electrons are identified using variables which include the ratio between the energies deposited in the HCAL and the ECAL, the shower width in η, and the distance between the calorimeter shower and the particle trajectory in the tracker, measured in both η and azimuthal angle (φ). The selection criteria are optimized [30] to reject the background from hadronic jets while maintaining an efficiency of 80% for the electrons from W or Z decays.
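The kinematic and fiducial requirements above are simple threshold cuts, so a short sketch can make them concrete. The code below is an illustration only, not CMS software: it applies the quoted pT and pseudorapidity requirements to toy lepton candidates and omits the track-quality, vertex-compatibility, and identification criteria described in the text.

```python
# Illustrative sketch of the lepton kinematic/fiducial preselection quoted above.
from dataclasses import dataclass

@dataclass
class Lepton:
    flavour: str   # "mu" or "e"
    pt: float      # transverse momentum in GeV/c
    eta: float     # pseudorapidity

def passes_kinematics(lep: Lepton) -> bool:
    """pT > 20 GeV/c for both flavours; |eta| < 2.4 for muons; electrons must fall
    in the ECAL barrel (|eta| < 1.44) or endcap (1.57 < |eta| < 2.4) fiducial regions."""
    if lep.pt <= 20.0:
        return False
    a = abs(lep.eta)
    if lep.flavour == "mu":
        return a < 2.4
    return a < 1.44 or (1.57 < a < 2.4)

if __name__ == "__main__":
    cands = [Lepton("mu", 35.0, 1.9), Lepton("e", 25.0, 1.50), Lepton("e", 18.0, 0.3)]
    print([passes_kinematics(c) for c in cands])   # [True, False, False]
```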
Jets are reconstructed by an anti-kT jet-clustering algorithm with a distance parameter R = 0.5 [31]. Particle energies are calibrated [32] separately for each particle type, and the resulting jet energies therefore require only small corrections that account for thresholds and residual inefficiencies. All jet candidates must have pT > 25 GeV/c and be within |η| < 2.4. Neutrinos from W-boson decays escape the detector, and thereby give rise to a significant imbalance in the net transverse momentum measured for each event. This missing transverse momentum, expressed as the quantity ETmiss, is defined as the absolute value of the vector sum of the transverse momenta of all reconstructed particles [33].
In contrast to the earlier analysis of Ref. [17], b-tagging is now used to reject events from backgrounds that do not include a top-quark decay. The b-tagging algorithm applied in this analysis generates a list of tracks associated with each jet, and calculates the significance of each track's impact parameter (IP), as determined by the ratio of the IP to its uncertainty. For the jet to be tagged as a b-jet, the IP significance of at least three of its listed tracks must exceed a threshold value, chosen to give an identification efficiency of 50% for b-jets and a misidentification rate of 1% for other particle jets [34].
Electrons and muons from W -> lv (l = e, µ) decays are expected to be isolated from other particles in the detector. A cone of ΔR < 0.3, where ΔR = sqrt((Δη)^2 + (Δφ)^2), is constructed around each lepton candidate's direction, and if the scalar sum of the transverse momenta of the particles inside the cone, excluding contributions from the lepton candidate, exceeds 15% of the candidate pT, then the lepton candidate is rejected. Electron candidates are required to be separated from any selected muon candidates by ΔR > 0.1 to remove misidentified electrons due to muon bremsstrahlung. Electron candidates identified as originating from photon conversions are also rejected.
Events are required to have at least one well-reconstructed interaction vertex [35]. Events with two leptons of the same electric charge, or with three leptons (two of which must be oppositely charged), are selected. For the same-charge dilepton (trilepton) channel, events with fewer than four (two) jets are rejected. At least one jet must be identified as a b-jet. In addition, events that have any two muons or electrons whose invariant mass M is within 10 GeV/c^2 of the Z mass (|M - M_Z| < 10 GeV/c^2) are rejected, in order to suppress the background from Z -> l+l- decays. For each event, the scalar quantity S_T = sum|pT(jets)| + sum|pT(leptons)| + ETmiss is required to satisfy the condition S_T > 500 GeV. The selection criteria described above are not fully optimized in terms of discovery reach, but they are more robust because they result in a single background component in the background estimation with data.
Signal selection efficiencies are estimated using simulated event samples. Fourth-generation quark production is implemented as a straightforward extension to the standard model configuration of the MADGRAPH/MADEVENT generator version 5.131 [36]. Parton showering and hadronization are provided by PYTHIA 6.424 [37], using the matching prescription described in Ref. [38]. Finally, these generated signal events are passed through the CMS detector simulation based on GEANT4 [39].
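The isolation, S_T, and Z-veto requirements above translate directly into a few lines of arithmetic. The sketch below expresses them in a simplified way; it is illustrative only (leptons and tracks are plain dictionaries with hypothetical pt/eta/phi fields), not the CMS reconstruction code.

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Angular separation Delta R = sqrt((d_eta)^2 + (d_phi)^2), with phi wrapped."""
    dphi = abs(phi1 - phi2)
    if dphi > math.pi:
        dphi = 2.0 * math.pi - dphi
    return math.hypot(eta1 - eta2, dphi)

def is_isolated(lepton, particles, cone=0.3, rel_cut=0.15):
    """Relative isolation: the scalar pT sum of other particles inside a cone of
    Delta R < 0.3 around the lepton must stay below 15% of the lepton pT."""
    sum_pt = sum(p["pt"] for p in particles
                 if p is not lepton
                 and delta_r(lepton["eta"], lepton["phi"], p["eta"], p["phi"]) < cone)
    return sum_pt < rel_cut * lepton["pt"]

def passes_event_selection(jet_pts, lepton_pts, met, dilepton_masses, n_btags,
                           channel="dilepton", m_z=91.19):
    """Jet multiplicity, b-tag, Z-veto and S_T requirements quoted in the text."""
    min_jets = 4 if channel == "dilepton" else 2          # >=4 jets (dilepton), >=2 (trilepton)
    if len(jet_pts) < min_jets or n_btags < 1:
        return False
    if any(abs(m - m_z) < 10.0 for m in dilepton_masses):  # |M - M_Z| < 10 GeV/c^2 veto
        return False
    s_t = sum(jet_pts) + sum(lepton_pts) + met             # S_T as defined in the text
    return s_t > 500.0

if __name__ == "__main__":
    mu = {"pt": 40.0, "eta": 0.5, "phi": 1.0}
    tracks = [mu, {"pt": 3.0, "eta": 0.6, "phi": 1.1}, {"pt": 30.0, "eta": 2.0, "phi": -2.0}]
    print(is_isolated(mu, tracks))                                       # True
    print(passes_event_selection([120, 80, 60, 40], [40, 35], met=150.0,
                                 dilepton_masses=[150.0], n_btags=1))     # True
```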
Table 1 shows the expected efficiencies for a b' signal, for 450 <= M_b' <= 650 GeV/c^2. The efficiencies vary between 1.5% and 1.7% for the same-charge dilepton channel, and between 0.47% and 0.63% for the trilepton events, in the chosen range of M_b'. These efficiencies include the branching fractions for W decay and the b-tagging performance [34]. Jet multiplicities for the same-charge dilepton and the trilepton channels are shown in Fig. 1, and the S_T distributions are presented in Fig. 2. The expected distributions for a b' signal having M_b' = 500 GeV/c^2 are normalized to the production cross sections from Ref. [40], which include approximate next-to-next-to-leading-order perturbative QCD corrections, and standard QCD couplings are assumed.
Background estimation
Because of the b-tagging requirement, 98% of the expected background events in the same-charge dilepton channel have at least one top quark from tt, tt + W/Z, or single-top processes. These backgrounds are categorized into three sources: (i) true opposite-charge dilepton events with an electron of misidentified charge, (ii) single-lepton events with an extra misidentified or non-isolated lepton candidate, and (iii) events with two prompt leptons of the same charge. The contribution due to the charge misidentification of electrons is determined using a control sample that, while keeping the remaining signal selection criteria, has oppositely charged electron pairs or electrons and muons. The charge misidentification rate (0.03% and 0.31% for barrel and endcap candidates, respectively) is determined by counting the events containing two same-charge electron candidates, whose invariant mass is consistent with that of a Z boson, relative to the yield of Z -> e+e- events.
Background from source (ii) is estimated as follows. Leptons passing the selection criteria described in Section 3 for signal are denoted as "tight", while muon candidates passing relaxed isolation thresholds and track-fit quality requirements, or electron candidates passing relaxed identification and isolation requirements, are referred to as "loose". Tight lepton candidates are excluded from the selection of loose lepton candidates. The background from events containing a false or non-isolated lepton candidate is estimated using another data control sample containing one tight lepton candidate and one loose lepton candidate, with the remaining selection criteria kept identical to those used for the signal sample. By definition, this control sample excludes events in the signal sample. The contributions of these backgrounds to the selected events are calculated from the yields observed in the control sample multiplied by the ratio of the number of lepton candidates passing the tight selection criteria to those passing the loose criteria. This ratio, also determined in data, is calculated as the number of events containing one loose and one tight lepton candidate divided by the number of those containing two loose lepton candidates. Applying the above methods to data, a background yield of 7.8 ± 2.8 events is estimated to originate from sources (i) and (ii).
The estimated yield in the same-charge dilepton channel from processes that produce prompt same-charge dileptons, including tt + Z, tt + W, and diboson channels (WZ, ZZ, and same-charge W±W± + jets), is determined using simulations of these processes. The contribution in the signal region is estimated to be 3.6 ± 0.6 events.
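The tight/loose procedure above is a simple scaling of control-sample yields. The sketch below spells out that arithmetic with hypothetical event counts; it is not the analysis code. The same ratio, applied squared (dilepton) or cubed (trilepton), gives the multijet estimate described next in the text.

```python
# Illustrative sketch of the data-driven tight/loose scaling, with hypothetical yields.

def tight_to_loose_ratio(n_one_tight_one_loose: float, n_two_loose: float) -> float:
    """Ratio used to extrapolate from loose to tight selections:
    N(events with 1 tight + 1 loose lepton) / N(events with 2 loose leptons)."""
    return n_one_tight_one_loose / n_two_loose

def fake_lepton_background(n_control: float, ratio: float) -> float:
    """Source (ii): control-sample yield (one tight + one loose lepton, all other
    signal cuts applied) scaled by the tight/loose ratio."""
    return n_control * ratio

def multijet_background(n_loose_control: float, ratio: float, n_leptons: int = 2) -> float:
    """Multijet estimate: loose-only control yield scaled by ratio**2 (dilepton)
    or ratio**3 (trilepton)."""
    return n_loose_control * ratio ** n_leptons

if __name__ == "__main__":
    r = tight_to_loose_ratio(n_one_tight_one_loose=120.0, n_two_loose=600.0)  # hypothetical
    print(fake_lepton_background(n_control=35.0, ratio=r))                    # hypothetical
    print(multijet_background(n_loose_control=50.0, ratio=r, n_leptons=2))    # hypothetical
```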
For the trilepton channel, the background is an order of magnitude smaller than for the same-charge dilepton channel, and is dominated by processes that produce three prompt leptons, such as tt + W/Z. The yield in the signal region, which is only 0.78 ± 0.21 events, is estimated using simulated samples. Contributions from pp -> tt and W/Z processes are normalized to the cross sections measured by CMS [41,42]. The single-top contributions are normalized to the next-to-next-to-leading-logarithm cross sections [43,44]. Production rates for dibosons are estimated from the next-to-leading-order cross sections given by MCFM [45]. The tt + W/Z and same-charge W±W± + jets processes are normalized to the next-to-leading-order cross sections given in Ref. [46].
The multijet background contribution is estimated using a control sample of events containing two (three) loose lepton candidates for the same-charge dilepton (trilepton) channel, maintaining the other selection criteria. The yield of multijet events in the signal region is calculated by multiplying the yield observed in the control sample by the ratio squared (cubed) of the number of lepton candidates passing the tight selection to the number passing the loose selection. The contribution of multijet events to the signal region is estimated to be smaller than 0.12 (0.001) events for the same-charge dilepton (trilepton) channel, and thus is negligible compared to the contributions from the other background processes.
Systematic uncertainties
To validate the procedure for estimating the background, and to assign a proper systematic uncertainty, the study in the same-charge dilepton channel is repeated using a mixture of simulated samples representing the potential background sources. The full estimation procedure is then applied to the simulated samples, and the results are compared to the input values. The observed difference (2.7 ± 0.9 events) is included as a systematic uncertainty. The statistical uncertainties in the control samples are also included in the systematic uncertainties.
The following uncertainties are included in both the dilepton and trilepton channels. The b-tagging efficiency as measured in data has a precision of 10% per b-jet [34], resulting in a 6.7% uncertainty in the efficiency of the signal samples. The effect of this uncertainty on the background contributions determined using simulated samples is estimated to be 0.35 (0.08) events for the dilepton (trilepton) channel. Lepton selection efficiencies are measured using inclusive Z -> l+l- data, and the difference between efficiencies measured in data and simulation is taken as a systematic uncertainty. An additional systematic uncertainty of 50% of the difference in efficiency between simulated Z and b' samples is included, to cover the effects of different event topologies. This estimation yields uncertainties of 1.7% and 2.7% for electrons and muons, respectively. The uncertainty in the signal efficiency, calculated using appropriate weighting of the electron and muon contributions, is 3.3% (5.0%) for the dilepton (trilepton) channel.
The uncertainties in the background normalization are estimated to be 0.74 and 0.12 events for the dilepton and trilepton channels, respectively, and the uncertainties for each of the individual processes are included as follows: ±11% for tt [41], ±3% (±4%) for W (Z) [42], ±30% for single-top processes, ±26% for WW, ±30% for WZ, ±21% for ZZ, ±30% for ttW, ±30% for ttZ, ±49% for W±W± + jets, and ±100% for multijet. The uncertainties in the normalization of the diboson, ttW, ttZ, and W±W± + jets processes are taken from a comparison of next-to-leading-order and leading-order predictions. The uncertainty related to the presence of additional interactions (pile-up) in the same beam crossing interval as an event is examined by varying the number of such interactions included in the simulations. The systematic effects of the uncertainties in the jet energy scale, jet resolution, ETmiss resolution, pile-up events, and trigger efficiency are found to be small [32,33]. Uncertainty sets given by CTEQ6 [47] are used to determine the uncertainties from the choice of parton distribution functions. The relative uncertainty in the integrated luminosity measurement is estimated to be 2.2% [48], and is included in the calculation of limits. The details of the uncertainties in the signal selection efficiency and in the background estimation are presented in Table 2.
Results
There are 12 (1) events found in the signal region for the dilepton (trilepton) channel, to be compared with an estimated background of 11.4 ± 2.9 (0.78 ± 0.21) events (Table 3). Most of the background sources contain at least one top quark in the final state, with a b-quark produced in the top-quark decay. Therefore, modifying the required number of b-tagged jets, in a separate study, provides a good check of the analysis. The observed yields when requiring >= 0, >= 1, or >= 2 b-tagged jets are consistent with the estimated background, and in agreement with the expected dominance of background from top quarks.
For each b' mass hypothesis, cross sections, selection efficiencies, and associated uncertainties are estimated (Tables 1 and 2). From these values, the estimated background yield, and the number of observed events, upper limits on b' anti-b' pair production cross sections at 95% CL are derived, using a modified frequentist approach (CLs) [49]. These limits are plotted as the solid line in Fig. 3, while the dotted line represents the limits expected with the available integrated luminosity, assuming the presence of standard model processes alone. By comparing to the theoretical production cross section for pp -> b' anti-b', a lower limit of 611 GeV/c^2 is extracted for the mass of the b' quark, at 95% CL, while a limit of 619 GeV/c^2 is expected for a background-only hypothesis.
Summary
Results have been presented from a search for heavy bottom-like quarks pair-produced in proton-proton collisions at sqrt(s) = 7 TeV. The process pp -> b' anti-b' -> t anti-t W+W- has been studied in data corresponding to an integrated luminosity of 4.9 fb^-1, collected with the CMS detector. The estimated background contributions have been found to be small, since final states containing the signatures of trileptons or same-charge dileptons are produced rarely in standard model processes. Assuming a branching fraction of 100% for the decay b' -> tW, b' quarks with masses below 611 GeV/c^2 are excluded at 95% CL. This is the most stringent limit to date.
Figure 1: Jet multiplicity distributions for the same-charge dilepton channel (left), and the trilepton channel (right). The open histogram shows the contribution expected from a b' having M_b' = 500 GeV/c^2. The contributions from standard model processes are normalized to the total estimated background. All selection criteria are applied except the one corresponding to the plotted variable. The vertical dotted lines indicate the minimum number of jets required in events selected for each of the channels.
Figure 3: Exclusion limits at 95% CL on the pp -> b' anti-b' production cross section (σ). The solid line represents the observed limits, while the dotted line represents the limits expected for the available integrated luminosity, assuming the presence of standard model processes alone. A comparison with the production cross sections excludes b' masses M_b' < 611 GeV/c^2 at 95% CL for a 100% b' -> tW decay branching fraction.
Table 1: Summary of expected b' anti-b' cross sections [40], selection efficiencies, and yields for the two signal channels as a function of the b' mass.
Table 2: Summary of relative systematic uncertainties in the signal selection efficiencies and the absolute systematic uncertainties in the number of expected background events (ΔB). The ranges given represent the dependence on M_b', varying from 450 GeV/c^2 to 650 GeV/c^2.
Table 3: Summary of the estimated background contributions to the same-charge dilepton channel and the trilepton channel, and the observed event yield in data. The given uncertainties are systematic.
4,761.4
2012-04-04T00:00:00.000
[ "Physics" ]
Cocrystal Formulation: A Novel Approach to Enhance Solubility and Dissolution of Etodolac
Etodolac (ETD) is a non-steroidal anti-inflammatory drug (NSAID) given in rheumatoid arthritis treatment. As a BCS class II drug, it exhibits low water solubility, and its dissolution-rate-limited oral absorption results in a delayed onset of action. Crystal engineering, a novel approach in the field of solubility enhancement, was preferred to prepare pharmaceutical cocrystals of etodolac with GRAS (generally recognized as safe) molecules. Pharmaceutical cocrystals of etodolac were prepared with p-hydroxybenzoic acid and glutaric acid at drug:coformer ratios of 1:1 and 1:2. Cooling cocrystallization was used to prepare the etodolac cocrystals. The cocrystal formulations were characterized by a saturation solubility study, in-vitro dissolution studies, and a stability study. The cocrystals were also characterized by analytical techniques such as Fourier transform infrared spectroscopy (FTIR), powder X-ray diffraction (PXRD), and differential scanning calorimetry (DSC). The optimized cocrystal formulations dissolved more rapidly, and their equilibrium solubility is greater than that of the plain drug.
The problem of crystal engineering appeared in its modern manifestation in the late 1980s and early 1990s. The discipline of organic crystal engineering found its practical application a little later, in the area of pharmaceutical cocrystals and salts. Zaworotko and co-workers described these as "co-crystals that are formed between a molecular or ionic active pharmaceutical ingredient (API) and a co-crystal former that is a solid under ambient conditions". The great majority of co-crystals are built with strong hydrogen bonds, and there is the possibility that the proton involved in the hydrogen-bonding interaction is transferred from the donor (acid) to the acceptor (base) to form a salt. Pharmaceutical solid forms exist in different varieties, such as polymorphs of an API, hydrates or solvates, as well as salts, co-crystals, and amorphous materials, including amorphous dispersions. Co-crystals are investigated to improve the solubility, bioavailability, and/or other physical or chemical deficits of a given API [1].
A cocrystal is a homogeneous crystalline phase with well-defined stoichiometry, and cocrystallization has been found to be a novel technique for solubility enhancement due to its ability to modify the solubility properties of non-ionizable drugs that cannot otherwise form pharmaceutical salts. Compared to polymorphs, cocrystals have the ability to increase solubility by orders of magnitude above the drug solubility; in contrast to amorphous pharmaceutical forms, cocrystals can achieve thermodynamic stability in the solid state while providing a large solubility advantage over the drug [2]. The discovery of new drugs carried out in pharmaceutical companies is a time-consuming and costly process [3]. Among newly discovered drugs, more than 60% of new drug molecules exhibit poor aqueous solubility [4]. Various researchers have developed different approaches for solubility enhancement of poorly soluble/water-insoluble drugs, such as salt formation, solid dispersion, size reduction, and complexation [5-12]. Salt formation is one of the basic methods used to transform the physical characteristics of APIs, and almost half of the BCS Class II drugs are formulated as salts to improve solubility.
However, a requirement of the salt formation method is that the API must possess a suitable (basic or acidic) ionizable site. Co-crystals, by contrast, offer a different route, where any API, regardless of acidic, basic, or ionizable groups, may in principle be co-crystallized. Hence cocrystal formation is considered to be a convenient and novel crystal engineering technique in the area of solubility enhancement. Co-crystals are formulated by various techniques such as slow evaporation at room temperature [13,14], reaction cocrystallization [15,16], cooling co-crystallization [17,18], grinding [19,20], and supercritical fluid techniques [21,22]. Considering the pharmaceutical applications of crystal engineering, it can be concluded that co-crystals may be a useful and successful strategy for improving the apparent solubility (a combination of dissolution and solubility) of BCS Class II and Class IV drugs, which is the major problem in the development of optimized formulations of new chemical entities (NCEs) discovered in the pharmaceutical industry [23,24].
The current study deals with cocrystal formation of the BCS class II drug etodolac. The prescribed dose of etodolac is 300 mg orally two to three times a day, and the multiple administrations make patient compliance and convenience difficult. The limited dissolution and poor absorption of etodolac are associated with pharmacokinetic variation in rheumatoid arthritis patients. In the present work, etodolac cocrystals were prepared using suitable coformers, p-hydroxybenzoic acid and glutaric acid, at drug:coformer ratios of 1:1 and 1:2 by the cooling crystallization method. The physical nature of etodolac and the prepared cocrystals has been characterized by differential scanning calorimetry (DSC), powder X-ray diffraction (PXRD), and IR spectroscopy.
Materials
Etodolac was received as a gift sample from IPCA Lab, Mumbai. All other chemicals and solvents used in this research were of analytical grade and procured from Loba Chemie Pvt. Ltd., Mumbai.
Etodolac Solubility Analysis
Excess quantities of etodolac and etodolac-glutaric acid cocrystals were added to 10 ml of distilled water in volumetric flasks sealed with aluminium foil. The volumetric flasks were kept in a shaker at 37 ± 0.5 °C for 24 hours. The solutions were filtered through a 0.45 µm Millipore filter and the filtrate was analysed spectrophotometrically (Shimadzu Co., Japan) at 248 nm.
Synthesis of Etodolac Cocrystals
Trial batches of etodolac cocrystals were prepared using the coformers p-hydroxybenzoic acid and glutaric acid, which have the ability to form hydrogen bonds (good proton donors and acceptors). The cocrystal batches formulated with glutaric acid resulted in an increased aqueous solubility and dissolution rate of etodolac [22]; hence glutaric acid is considered to be the more appropriate coformer for the etodolac cocrystal formulation. The cooling crystallization method was preferred to prepare the etodolac cocrystals, with etodolac and glutaric acid taken in ratios of 1:1 and 1:2. Etodolac was dissolved in methanol and glutaric acid was dissolved in distilled water. The drug solution was added to the coformer solution. The resulting mixture was kept in the refrigerator overnight and filtered to obtain the cocrystals; the solution was then filtered to remove insoluble material.
Drug-Excipients Compatibility Study
Thermal evaluation of etodolac was performed using a differential scanning calorimeter (Mettler Lab Star, Switzerland).
The sample powder was placed in hermetically sealed aluminium pans and heated at a scanning rate of 10 °C/min from 20 °C to 300 °C under a constant purge of dry nitrogen (100 ml/min), with an empty pan used as a reference [25]. Infrared spectra were recorded using a Shimadzu FTIR spectrometer. The spectra were collected over the range of 4000-400 cm-1 in 45 scans, with a resolution of 5 cm-1 for each sample. Data was collected from the software.
Particle Morphology and Surface Roughness Analysis
The external morphology of the etodolac co-crystals was determined by SEM (JEOL® JSM-6390LV). The sample for evaluation was sprinkled over double-sided adhesive tape attached to an aluminium stub, and the stub containing the sample was then placed in the chamber. The sample was scanned at randomly selected locations at 10 kV acceleration voltage and photomicrographs were taken [25,26].
Characterization of Physical/Chemical State of Drug Present
Powder X-ray diffraction data were recorded by means of a powder X-ray diffractometer (Bruker AXS D8 Advance, Germany). Conditions used for measurement: Si(Li) PSD detector, Cu X-ray source (λ = 1.5406 Å), 3° to 135° angular range. The powder X-ray diffraction data obtained were compared with those of the bulk drug and the co-formers [25,27].
Intrinsic Dissolution Rate Measurement
Etodolac and the selected co-crystal (etodolac-glutaric acid cocrystal equivalent to the drug dose) were subjected to a dissolution study in phosphate buffer pH 6.8 at 37 ± 0.5 °C and 50 rpm using a Type II (paddle type) dissolution apparatus. Samples were withdrawn at 10-minute intervals for 60 minutes and replaced with fresh dissolution medium. The samples were immediately filtered through Whatman filter paper, diluted appropriately, and analysed using a UV spectrophotometer (Shimadzu Co., Japan).
Stability Study of Prepared Formulation
For the stability analysis, samples of the cocrystals were placed in USP type I glass vials and hermetically sealed with rubber plugs and aluminium caps. The vials were stored in a stability chamber and maintained at the storage conditions given in the ICH guidelines (40 ± 5 °C and 75% RH). The samples of cocrystals (n = 3) were taken out at intervals of 0, 1, 2, and 3 months, and the physical properties, drug content, and solubility were determined.
Solubility Analysis
The solubilities of etodolac and the etodolac-glutaric acid cocrystals in distilled water are given in the following table. The solubility of pure etodolac was 0.3178 mg/ml at 24 h with continuous stirring. Co-crystals of etodolac-glutaric acid with drug:coformer ratios of 1:1 and 1:2 showed an increase in aqueous solubility, to 2.150 mg/ml and 2.220 mg/ml respectively (Table 1). Etodolac-glutaric acid cocrystals with a 1:1 ratio show a 3.61-fold increase and those with a 1:2 ratio a 3.83-fold increase in solubility, which indicates that the cocrystal formulation would have significant potential in the field of solubility enhancement of poorly soluble drugs [27,28].
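As a minimal sketch of the dissolution calculation implied by the sampling-and-replacement scheme described above (aliquots withdrawn every 10 minutes and replaced with fresh medium), the code below converts UV absorbances into cumulative percent release. It is not the authors' calculation: the vessel volume, aliquot volume, dose, dilution factor, calibration slope, and absorbance readings are all hypothetical placeholders.

```python
# Illustrative cumulative-release calculation for a withdraw-and-replace
# dissolution test; every numeric constant below is a hypothetical placeholder.

VESSEL_ML   = 900.0      # assumed dissolution medium volume
SAMPLE_ML   = 5.0        # assumed aliquot withdrawn at each time point
DOSE_MG     = 300.0      # etodolac dose used for % release
CALIB_SLOPE = 0.032      # assumed absorbance per (ug/ml), from a calibration curve
DILUTION    = 10.0       # assumed dilution factor before UV measurement

def cumulative_release(absorbances):
    """Return % released at each time point, adding back the drug removed with
    every withdrawn-and-replaced aliquot."""
    released_pct, removed_mg = [], 0.0
    for a in absorbances:
        conc_ug_ml = (a / CALIB_SLOPE) * DILUTION          # linear (Beer-Lambert) calibration
        in_vessel_mg = conc_ug_ml * VESSEL_ML / 1000.0     # drug currently dissolved in the vessel
        total_mg = in_vessel_mg + removed_mg               # correct for earlier sampling losses
        released_pct.append(100.0 * total_mg / DOSE_MG)
        removed_mg += conc_ug_ml * SAMPLE_ML / 1000.0
    return released_pct

print(cumulative_release([0.45, 0.82, 1.10, 1.32, 1.48, 1.58]))  # hypothetical readings
```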
Drug-Excipients Compatibility Study
DSC data recorded for plain etodolac, glutaric acid, and the cocrystals are shown in Fig. 3. The DSC thermogram for plain etodolac showed a sharp endothermic peak at 149.07 °C (corresponding to its melting point), and glutaric acid showed only one sharp endothermic peak at 96.21 °C, corresponding to its melting point. The etodolac-glutaric acid cocrystals showed a sharp endothermic peak at 149.45 °C, which indicates a minor shift in the peak when compared with the endotherm of the pure drug sample. The disappearance of the endothermic peak corresponding to the melting of glutaric acid is due to its solubility at a lower concentration in the cocrystal formulation. The interaction between the drug and the coformer in the cocrystal was studied by FTIR spectroscopy. The FTIR spectra of the pure drug, glutaric acid, and the cocrystals are shown in Fig. 4. In addition to the characteristic peaks of etodolac, a few additional peaks were observed in the cocrystals of etodolac due to the formation of hydrogen bonds between the drug and the co-former in the cocrystals. In the etodolac-glutaric acid cocrystals, a peak was found in the range of 1475-1600 cm-1 due to the carboxylic acid moiety, which indicates the presence of C=O stretching. Another peak was also observed in the cocrystal spectrum. A microscopic photograph, shown in Figure 3, indicates the formation of needle-like crystals, and the SEM image, shown in Figure 4, indicates the surface properties and cocrystal formation.
Characterization of Physical/Chemical State of Drug Present
Powder X-ray diffraction data recorded for plain etodolac, glutaric acid, and the cocrystals are shown in Fig. 2. The XRD patterns of the cocrystals with etodolac:glutaric acid ratios of 1:1 and 1:2 show the generation of new additional peaks with increased intensities. The X-ray diffraction pattern of the pure drug shows its three strongest peaks at 13.580°, 14.452°, and 22.948° 2θ; the series of sharp and intense diffraction peaks emphasizes the crystalline state of pure etodolac. The X-ray diffraction pattern of the cocrystal shows sharp peaks at 14.440°, 18.540°, 18.760°, and 18.998° 2θ. When the X-ray diffraction patterns of the pure drug and the cocrystal are compared, additional peaks are observed at 18.540°, 18.760°, and 18.998°, which indicates the formation of cocrystals.
Intrinsic Dissolution Study
The in-vitro dissolution data of the plain drug and the etodolac cocrystal system are given in Figure 1. The dissolution of the cocrystals and the plain drug was carried out in pH 6.8 phosphate buffer, and the cocrystals exhibited a quicker dissolution rate than the plain drug. The dissolution of etodolac was 50.24% in 60 minutes, while the cocrystal with a drug:coformer ratio of 1:1 showed 79.99% and the cocrystal with a drug:coformer ratio of 1:2 showed 90.10% drug release in 60 minutes. At each time point, the quantity of drug released from the cocrystals was consistently greater than that from the plain drug. As per the results obtained, a good enhancement in the dissolution rate of the co-crystals was observed in comparison with the pure drug.
Stability Study
The stability study of the cocrystals was conducted at 40 ± 5 °C/75 ± 5% RH for a period of three months. During this period, the cocrystals remained stable without any significant changes, which was confirmed by physicochemical characterization. The results found for drug content and solubility throughout the stability experiment are given in Table 2 and Table 3.
CONCLUSION
The cocrystal formulation of etodolac prepared with glutaric acid by cooling cocrystallization showed a significant increase in solubility and dissolution rate as compared to the parent compound.
The optimized cocrystal formulations were characterized by numerous analytical methods (FTIR, DSC, and XRD), which confirmed the formation of the etodolac-glutaric acid cocrystal. From the results obtained from the solubility, dissolution, and stability studies (refer to Table 1, Figure 1, Table 3, and Table 4), it can be concluded that the cocrystal formulation is effective in increasing the solubility and dissolution rate of etodolac. Etodolac and glutaric acid in an equimolar ratio form a co-crystal by the cooling cocrystallization method. The etodolac-glutaric acid cocrystal shows a significant increase in solubility, with a dissolution rate 2-3 times faster than that of pure etodolac. The enhancement of the aqueous solubility and dissolution rate of etodolac through cocrystallization may be a potential way of solving the bioavailability problem of etodolac. Therefore, it is a novel approach for solubility enhancement of poorly water-soluble drug candidates, with minimal excipients, easy method development, and a minimal solvent requirement in the case of organic-solvent-based approaches.
3,138.8
2022-03-31T00:00:00.000
[ "Materials Science" ]
Increasing the Reliability of an Electrical Power System in a Big European Hospital through the Petri Nets and Fuzzy Inference System Mamdani Modelling: The reliability of a big hospital's electricity supply system is discussed in this article through Petri nets and a Fuzzy Inference System (FIS). To simulate and analyse the electric power system, the FIS Mamdani in MATLAB is implemented. The advantage of FIS is that it uses human experience to provide a faster solution than conventional techniques. The elements involved are the Main Electrical Power, the Generator sets, the Automatic Transfer Switches (ATS), and the Uninterrupted Power Supply (UPS), which are analysed to characterize the system behaviour. To evaluate the system and identify the lower-reliability modules, a new, more reliable design model is proposed through the Petri nets and Fuzzy Inference System approach. The resulting approach contributes to increasing the reliability of complex electrical systems, aiming to reduce their faults and increase their availability.
Introduction
The electric power system plays a strategic function in a big European hospital. Therefore, the managers have an extreme interest in keeping the electricity system working correctly. If a failure happens, it will cause dangerous problems for the hospital's activities and the people in its operational context. Thus, the power source system must be designed to be very reliable in order to keep the system working with maximum availability. Because of the specificity of this type of asset, its maintenance and reliability are strategic. This paper aims to improve this system's reliability by using the fuzzy inference system and Petri nets to simulate and improve the existing system with a new and more reliable design, using MATLAB as the simulation tool. The structure of the paper is the following: Section 1 presents the introduction; Section 2 presents the state of the art: the maintenance concepts, the maintenance activity in a hospital, the reliability and availability of maintenance systems, the Petri nets system, and the fuzzy Petri nets and fuzzy logic system; Section 3 describes the electrical power system of a big European hospital: the characterization of the hospital, the hospital electrical system modelling using block diagrams, the group of generators, the automatic transfer switch (ATS), and the uninterrupted power supply (UPS); Section 4 presents the modelling of the hospital's electrical system using Petri nets: the description of the HiPS software simulator, the modelling of the hospital's electrical system using Petri nets, the explanation of the hospital electrical system, and the modelling and analysis using fuzzy logic; Section 5 presents the conclusions, including proposals for future developments.
The Maintenance Concept
Maintenance is an essential factor for the sustainability of an asset's operating functions and, by consequence, its availability and reliability. Maintenance is also a way to mitigate the damage that will occur in assets; therefore, the people in charge must be competent in their professional fields. This paper is based on existing norms and relevant research papers relating to maintenance, aiming to support new ideas that may be relevant for further improvement, namely based on the following quotations. The American Hospital Association (1980) mentions that "proper maintenance of the power system is essential to its safety and reliability.
The designer may incorporate certain features into the system to make maintenance safer and more comfortable and to make it possible to perform routine maintenance and inspection without dropping essential hospital load" [1]. Anderson and Neri (1990) reported that "support deals with the specific procedures, tasks, instructions, personnel qualification, equipment needed to satisfy the system maintainability requirement within an actual environment use" [2]. According to the Department of the Army, maintenance is defined as "those operations and actions that directly retain the appropriate activity of an item or renewing that operation when it is disturbed by failure or some other anomaly-within the context of RCM, the necessary process of an object means that it can perform its intended function" [3]. Farinha (2011) also referred to the norm EN 13306:2010 that defines maintenance as the "combination of all technical, administrative and managerial actions during the life cycle of an item intended to retain it in or restore it to, a state in which it can perform the required function" [4]. Gulati (2009) stated that "Maintenance is concerned with keeping an asset in right working conditions, so that the asset may be used to its full productive capacity. The maintenance function includes both upkeep and repairs" [5]. Moubray (1997) stated that "the role of Maintenance is to ensure that physical assets continue to do what their users want to do" [6]. Wang (2012) said that "Maintenance is a function that operates in parallel to production and can have a significant impact on the capacity of the production and quality of the products produced, and therefore, it deserves continuous improvement" [7]. It can be considered that maintenance is a management tool to prevent failures in the physical assets, using both planned and non-planned interventions to maintain their useful lives, in charge of the maintenance engineers. The Electrical Maintenance Activity in Hospitals This paper discusses the maintenance and modelling of the electricity system that supplies electricity to a big European hospital, as shown in Figures 7 and 8. To analyse the maintenance of the electricity system, the norms and papers of other researchers relating to hospitals were used to support new ideas that are relevant for further improvement based on the following quotations. AHA (1980) states that "the engineering and maintenance department charged with the responsibility for ensuring the safe, cost-effective operation and maintenance of hospital facilities and expensive equipment" [1]. Farinha (2001) mentioned that "another way of analysing the useful life was proposed by (AHA, 1996), based on the knowledge of type parameters of most hospital equipment, which allows establishing the maximum limit of maintenance expenses from the ones it is more economical to replace the equipment than to repair it" [4]. Mwanza and Mbohwa (2015) concluded that "the maintenance practices in three hospitals are not effective. The conclusion based on the lack of work order system to capture all work to manage labour, no skill training programs and poor spare inventory and purchasing system" [8]. The IEEE C2: National Electrical Safety Code (2007) mentions that "the purpose of this standard covers basic provisions for safeguarding of people from hazards arising from the installation, action, or maintenance of (1) conductors and equipment in electric supply stations, and (2) overhead and underground electric supply and communication lines" [9]. 
Christiansen (2015) mentioned that "this paper presents a model approach based on over 33,500 h of measurements within a modern University Medical Centre of Hamburg/Germany to assess the time-dependent course as well as the weekly sum of the demand for electrical energy due to medical laboratory plug loads" [10]. According to AHA (1980), "safety requires adequate provision for the protection of life, property, and continuity of hospital services. The protection of human life is paramount" [1]. BenSaleh et al. (2010) mentioned that "as there are more and more automated hospitals, the greater protection against the lack of energy. Hospital systems are increasingly dependent on technology, well-designed emergency energy systems, and the ability to adapt to the changing environments" [11]. Jamshidi (2014) mentioned that "Risk-Based Maintenance (RBM) is composed of two main components: (1) A comprehensive framework for prioritization of the critical medical devices; (2) A method for selecting the best maintenance strategy for each device. Risk-based prioritization of medical devices is valuable to health organizations in the sequencing of maintenance activities and budget allocation for maintenance activities" [12]. The World Health Organization (WHO) and Pan American Health Organization (2015) mention that "promoting 'the aims of 'hospitals safe from disasters' by ensuring that all new hospitals aware about the safety that will provide them to function in disaster situations and implement mitigation measures to reinforce existing health facilities, particularly those providing primary health care" [13]. Abdul et al. (2015) presented a "study on equipment inspection and shutdown at optimized, risk-based maintenance intervals for a processing facility unit, considering the human errors that introduced during these activities" [14]. Maintenance, Reliability, and Availability Maintenance, reliability, and availability are essential tools to prevent failures, damages, and delays in the production processes and services in terms of time, costs, and systems' performance. The quality management effort for internal and external customer's satisfaction, guided by the international norms and world conventions, takes advantage of the research relating to hospital physical assets to support new ideas relevant to further improvement like the following authors described. Ali et al. (2019) stated that "to develop a safety and profitable process, uncertainty quantification is necessary for a reliability, availability, and maintainability (RAM) analysis. The uncertainties of 3% in each key decision variable are propagated, bringing the system into an unreliable/risk region. This approach reduces about 90% of the total computational time when compared with the conventional simulation approaches required for a complex first principle-based model" [15]. Arias et al. (2019) stated that the "reliability model is based on the information available in the maintenance system-driven framework using both classical and Bayesian methodologies. It illustrates the ageing process and the necessary data for the creation of the model. This model can be demonstrated and analysed with an important factor; it represents the flexibility to build the reliability expected during the maintenance strategy-making and the knowledge of the equipment" [16]. Calixto (2016) stated that "RAM analysis is the basis for complex system performance analysis. 
To demonstrate such a methodology, the RAM analysis steps, such as scope definition, lifetime data analysis, modelling, simulation, critical analysis, sensitivity analysis, and conclusions, will be discussed" [17]. Çekyay and Özekici (2015) stated that "system reliability, mean time to failure, and steady-state availability, are functions of the component failure rates. The primary objective is providing explicit expressions for these performance measures and obtaining various characterizations on their mathematical structures" [18]. Corvaro et al. (2017) stated that "the complex of RAM factors constitutes a strategic approach for integrating reliability, availability, and maintainability, by using methods, tools and engineering techniques (Mean Time to Failure, Equipment Down Time and System Availability values) to identify and quantify equipment and system failures that prevent the achievement of productive objectives" [19]. Ebeling (2010) suggested that "Reliability is defined to be the probability that a component or system will perform a required function for a given period when used under state operating conditions" and that "Maintainability is defined to be a probability that a failed component or system will be restored or repaired to a specified condition within a period when maintenance is performed following prescribed procedures and Availability is defined as the probability that a component or system is performing its required function at a given point in time when used under state operation condition" [20]. Feng et al. (2011) stated that "many problems have existed in synthesis of design of Reliability, Maintainability, Supportability (RMS) and performance; such as RMS design activities are numerous and optional, variable feedback branches can satisfy same RMS requirement, some iteration among RMS and Performance activities is necessary, and many uncertainties exist in the design process" [21]. Hameed et al. (2011) stated that "the need, method, benefits, and possible areas of application for the proposed RAM database have been identified. Both the technical and managerial challenges were outlined, which could be encountered during this database's realisation. The structure for the database is suggested keeping in view the implementation of RAM concepts quickly and efficiently" [22]. Sikos and Klemeš (2010) stated that "the proposed methodology focuses on HEN maintenance through the influence of availability and reliability rather than the optimization of cleaning schedules only. It has been shown that the failure analysis is capable of predicting heat exchanger bundle replacement times, leading to significant savings" [23]. Song and Wang (2013) presented "a comprehensive review of reliability assessment and improvement of power electronic systems from three levels: (1) metrics and methodologies of reliability assessment of existing system; (2) reliability improvement of an existing system using algorithmic solutions without change of the hardware; and (3) reliability-oriented design solutions that are based on the fault-tolerant operation of the overall systems" [24]. Sutton (2015) stated that "Reliability, Availability, and Maintainability (RAM) programs are an integral part of any risk management system. RAM techniques possess many similarities to those that are used for safety" [25]. Wang et al. (2013) stated that "failure of a component in Building Cooling, Heating and Power (BCHP) system may fail a sub-system or the whole system. 
The reliability and availability analysis of the BCHP system is helpful to the designer to decide the redundancy in case of equipment failure" [26]. Zio et al. (2019) considered "reliability engineering in the modern civil aviation industry, and the related engineering activities and methods. They consider reliability in a broad sense, referring to other system characteristics that are related to it, like availability, maintainability, safety and durability" [27]. Shen et al. (2019) mentioned that "to describe the system performance, system availabilities including instantaneous availability and limiting average availability, and some time distributions of interest are important indexes. Then, the problem of optimal maintenance policy is formulated by considering constraints of availability and operating times" [28]. Do et al. (2015) proposed and showed "how to optimize a dynamic maintenance decision rule on a rolling horizon? The heuristic optimization scheme for the maintenance decision is developed by implementing two optimization algorithms (genetic algorithm and MULTI FIT) to find optimal maintenance planning under both availability and limited repairmen constraints" [29]. Given the dynamics of the various opinions regarding reliability, availability, and maintenance, it is essential to pay close attention to these variables to ensure that production and services satisfy customers.
Petri Nets Systems
This paper corresponds to the evolution of the authors' research. Because of this, some of the next sections are strongly supported by Reference [30]. A Petri net may be defined as a 5-tuple N = (P, T, I, O, M0), where (1) P = {P1, P2, ..., Pm} is a finite set of places; (2) T = {t1, t2, ..., tn} is a finite set of transitions, with P ∪ T ≠ ∅ and P ∩ T = ∅; (3) I: P × T → N is the input function that defines the arcs directed from places to transitions, where N is the set of non-negative integers; (4) O: T × P → N is the output function that defines the arcs directed from transitions to places; and (5) M0: P → N is the initial marking. A marking is the assignment of tokens to the places of the Petri net. The number and position of tokens may change during the execution of the Petri net. According to Wang (1998), "Petri nets were named after Carl A. Petri, who defined a general-purpose mathematical tool for describing relations existing between conditions and events. This work was done in the years 1960-1962. Since then, Petri nets have resulted in considerable research because they can be used to model properties such as process synchronization, asynchronous events, sequential operations, concurrent operations, and conflicts, or resource sharing. These properties characterize Discrete Event Dynamic Systems (DEDS). This, and other factors, makes Petri nets a promising tool and technology for applying to various types of DEDS. Petri nets provide a powerful communication medium between the user, typically a requirements engineer, and the customer as a graphical tool. Instead of using ambiguous textual descriptions or mathematical notation difficult to understand by the customer, complex requirements specifications can be represented graphically using Petri nets. This, combined with computer tools, allows interactive graphical simulation of Petri nets and gives the development engineers a powerful tool to assist in the development process of complex engineering systems. The graphical representation also makes Petri nets intuitively very appealing.
They are straightforward to understand and grasp-even for people who are not very familiar with Petri nets' details. This is because Petri net diagrams resemble many of the drawings that designers and engineers make while constructing and analysing a system" [31]. Volovoi (2003) dealt with "the dynamic modelling of degrading and repairable complex systems as modularity allows a focus on the needs of a system reliability modelling and tailoring of the modelling formalism accordingly" [32]. Chew et al. (2007) mentioned that "Petri Nets provide a logical, easily understood, and compelling way of predicting the reliability of a system or platform" [33]. Garg (2012) mentions that "Petri Nets tool is applied to represent the asynchronous and concurrent processing of the order instead of the fault tree analysis" [34]. Leigh and Dunnett (2016) mentioned that "the study has aimed to develop a model using Petri Nets to determine the feasibility of adopting this technique to model the maintenance processes efficiently" [35]. Ren et al. (2014) mentioned that "if a Petri Nets are required to model processes that have a random (or pseudorandom) nature to them, and this randomness follows a specific pattern such as a statistical distribution, the transitions can sample their switching times from this distribution" [36]. Sadou et al. (2009) mentioned that "this new representation of the Petri net with formulae of linear logic allows us to define the notion of scenario formally. To obtain a minimal situation, we have considered three elements: (i) the order of events governed by a useful relation of cause and effect in the system, (ii) the list of activities of the scenario must be minimal (i.e., without loop events), and (iii) the final marking corresponding to the feared state must be minimal" [37]. Eisenberger and Fink (2017) stated that "Petri nets are such a mathematical tool that has been applied for maintenance modelling and simulations of different applications. Several types of Petri nets with different properties have been introduced" [38]. Pinto et al. (2021) stated that "the importance of Petri Nets as a powerful tool in maintenance management, providing analysis and simulation of the systems to increase the reliability and availability of the individual assets and their operations" [30]. Farinha (2018) showed the example using Petri nets on the electrical circuit through "an Emergency Generator that, as is known, starts operating when the external mains voltage from below a certain value about the nominal voltage. In the example, the value assumed for starting the Emergency Generator is 350 V. When the value of the voltage of the external electrical network from below this value, the Generator starts, turning off when the electrical network's value is above that. For this purpose, the following situations are assumed for the Emergency Generator: The Generator can be in two possible operational states: in standby and operation (generating electricity); two situations give rise to those states: mains voltage above 350 V (> 350 V) and below this value (<350 V); Other possible states, such as malfunction, are not considered. Figure 2 illustrates the state diagram and the Petri Net for the preceding situations, respectively" [39]. Fuzzy Inference System (FIS) and Fuzzy Petri Nets Fuzzy Petri Nets is a combination of two different sciences-the set of fuzzy logic and Petri nets theory-which are held to provide answers to vague or unclear problems in a system that is about to be examined. 
Therefore, we use fuzzy Petri nets to see and provide solutions to problems that are not clear, such as an asset or system that does not have historical data but wants to get a definite answer regarding the reliability and reliability of maintenance to improve the performance of these assets. Also, several previous researchers put forward their ideas in articles they wrote as follows. Cannarile et al. (2017) "propose a method based on the Fuzzy Expectation-Maximization (FEM) algorithm, which integrates the evidence of the field inspection outcomes with information taken from the maintenance operators about the transition times from one state to another. Possibility distributions are used to describe the imprecision in the expert statements" [40]. Ladj et al. (2017) proposed "a new interpretation of PHM outputs to define machine degradations that are corresponding to each job. Moreover, to consider several sources of uncertainty in the prognosis process, the authors choose to model PHM outputs using fuzzy logic. Motivated by the computational complexity of the problem, Variable Neighbourhood Search (VNS) methods are developed, including well-designed local search procedures" [41]. Touat et al. (2017) mentioned that "to solve the problem, we developed two fuzzy genetic algorithms that are based on respectively the sequential and total scheduling strategies. The one respecting the sequential approach consists of two phases. In the first phase, the integrated production and maintenance schedules are generated. In the second one, the human resources are assigned to maintenance activities. The second algorithm respecting a total strategy consists of developing the integrated production and maintenance schedules that explicitly satisfy the human resource constraints" [42]. Jabari et al. (2019) mentioned that "Based on the results obtained in the case study, it can conclude that the fuzzy set for calculation is more rigorous than the qualitative results. The calculated unified qualitative and fuzzy risk number shows that the plant was classified as semi-critical. It obtained the highest fuzzy risk number of 99.1452 for both blowers (BW 20 21 and BW 20 23 A) assets failure" [43]. Ratnayake and Antosz (2017) mentioned that, "also, a fuzzy logic-based risk rank calculation approach has been presented. The suggested RBM approach, together with the fuzzy inferencing process, enables us to minimize suboptimal calculations when the input values are at the boundaries of the particular ranges. Fuzzy membership functioned together with the rule base. It enabled to insert numbers with the least uncertainty" [44]. Seiti et al. (2017) mentioned that, "for this purpose, a model based on Fuzzy Axiomatic Design (FAD) is presented, wherein each evaluation has both optimistic and pessimistic fuzzy scores, as the fuzzy evaluations themselves have risks. To improve the accuracy of the presented method, a new concept called "acceptable risk" has been suggested" [45]. Babashamsi et al. (2016) stated that "to determine the weights of the indices, the fuzzy AHP is used. Subsequently, the alternatives' priorities are ranked according to the indices weighted with the VIKOR model" [46]. According to Cordón (2011), "The current contribution constitutes a review on the most representative genetic fuzzy systems relying on Mamdani-type fuzzy rule-based systems to obtain interpretable linguistic fuzzy models with a good accuracy" [47]. 
Zahabi and Kaber (2019) mentioned that "use the Mamdani max-min inference method to calculate a 'risk reliability (R-R) score based on a fuzzy definition of frequency of hazard occurrence, the severity of hazard outcomes, and system reliability. The application of the proposed model is presented in the context of a complex-human-in-the-loop system using the MATLAB fuzzy logic toolbox" [48]. According to Akgun et al. (2012), "For this purpose, an easy-to-use program, 'MamLand,' was developed for the construction of a Mamdani fuzzy inference system and employed in MATLAB. Using this newly developed program, it is possible to construct a landslide susceptibility map based on expert opinion" [49]. According to Kacimi et al. (2020), "The Mamdani fuzzy system is known as a linguistic model where the semantic meaning of the fuzzy rules is an intrinsic characteristic that must be retained during the learning process while seeking for high accuracy" [50]. Lu and Sy (2009) mentioned that "A fuzzy logic approach is adopted to handle the uncertainty conditions. To meet the requirement of real-time decision-making, the fuzzy project programs were coded and compiled into DLL files" [51]. Dhimish et al. (2018) stated that "Mamdani fuzzy logic system interface and Sugeno type fuzzy system. Both examined fuzzy logic systems show approximately the same output during the experiments. However, there are slight differences in developing each type of the fuzzy system such as the output membership functions and the rules applied for detecting the type of the fault occurring in the PV plant" [52]. Kraidi et al. (2020) stated that "A Computer-Based Risk Analysis Model (CBRAM) was designed to analyse the risk influencing factors using a fuzzy logic theory to consider any uncertainty that is associated with stakeholders' judgments and data scarcity. The CBRAM has confirmed the most critical risk influencing factors, in which this study has explained the effective methods to manage them" [53]. Khosravanian et al. (2016) stated that "The Mamdani-type FIS requires defuzzification, whereas the Sugeno-type FIS applies a constant weighted-average technique avoiding defuzzification. The results for the two field cases evaluated convincingly demonstrate that the Sugeno-type FIS is superior to the Mamdani-type FIS for WOB prediction using the same input data and membership functions" [54]. About this type of approach, the research developed by Teo et al. [55][56][57] must be considered as very relevant, regardless of to be focused on mainly in energy management, namely for a grid-connected microgrid with renewable energy sources and energy storage system, including the design of fuzzy logic-based controllers to be embedded in a gridconnected microgrid with renewable and energy storage capability. From the many approaches done by many researchers and the authors' research, it can consider that fuzzy Petri nets have a very high potential to help solve complex reliability problems inside the systems. The HiPS Software Simulator Description According to the HiPS (Hierarchical Petri net Simulator), the "tool was developed by the Department of Computer Science and Engineering, Shinshu University, being a tool for Petri nets design and analysis; it was developed using Microsoft Visual C # and C++. HiPS tool has a very intuitive GUI, which enables hierarchical and/or timed-net design. 
HiPS tool has also functioned of static/dynamic analysis: T-invariant detection, Reachability path analysis, deadlock state detection, and k-boundedness analysis. Also, it is possible to perform a random walk simulation with each firing step" [58]. The definition of the Petri net model using the HiPS software can be seen in Figure 3. Characterization of the Hospital The big European hospital is a medical care building that has a total construction area of 90,000 m 2 . This paper focuses on the emergency power supply system (EPSS) of the hospital, which has the following equipment: three units of 1000 KVA generators; two units of UPS (uninterrupted power supply) with 300 KVA; one unit of UPS of 8 KVA; 20 units of UPS of 20 KVA; one unit of ATS (automatic transfer switch); three transformer units; two PT (power transfer); three LVDB (low-voltage distribution board) input units; six LVDB central output units and other peripheral instruments (correction battery, LV distribution network, indoor lighting (normal/emergency), output and obstruction signalling, normal/emergency outlets); and ground network. The paper uses Petri net time methods and fuzzy logic to analyse and diagnose the power system's operation and reliability and propose a new design to improve its availability [30]. Modelling of the Hospital's Electrical System Using Block Diagrams The main contribution of the Petri nets system is based on their ability to simulate the process and to analyse complex structures. Figure 4 shows the process block flow diagram of the electrical power system of the hospital under study. The Group of Generators, Automatic Transfer Switch, and UPS In case of power failure of the external electrical energy supplier, the hospital is equipped with three generators, two of 1000 kVA and one of 500 kVA, powered by diesel engines. The command and transfer board of the most potent power groups have also an installed synchronization system between the two groups that can operate in parallel after synchronization between both groups ( Figure 5) [30]. Modelling the Hospital's Electrical System by Petri Nets In the present case, the physical assets under study have maintenance procedures to guarantee their adequate reliability and availability conditions and mitigate failures ( Figure 6). The Hospital Electrical System Block Diagrams As can be seen in Figure 7, the ATS manages the generators-if it does not work, then the generators must be activated manually, which hurts the system. Additionally, it can be emphasized that only one ATS is installed. Thus, the question arises: how do the above circuit behaviours answer to the expected security system? To respond to this question, the present situation was simulated and a solution to solve the identified handicap with block diagrams, as shown in Figures 8 and 9, is proposed [30]. In the block diagram of Figure 7, the hypothesis of a fault in the main electrical power is emphasized when the UPS takes over the primary function. In this situation, the ATSs activate the generator that replaces the UPS while waiting until the main electrical power is on again; unfortunately, if one of the ATS, UPS, and generator fails, then a fatal accident occurs, which permits to infer that this is a fragile module. In the block diagram of Figure 8, if there is a current fault from the main power, UPS 1, 2, and 3 will turn on the main power's functions. 
Then, the ATS activates Genset 1, 2, and 3, replacing the UPS function, while waiting for an intervention from the maintenance team; if one of the UPS, Genset, or ATS units fails, it will be replaced by another UPS (or Genset or ATS) because there is a redundancy of three units; thus, the probability of fatal accidents is extremely low. This design can be considered a good design because it is deemed very reliable; however, its cost and maintenance are more expensive because more equipment needs to be installed. It can be concluded that the components of the system are critical to the functioning of the hospital's electrical system, and the ATS is the most critical item. Because of the preceding, the electrical sequences are discussed and analysed, carefully targeting the identification of the main functions and failures of each module for the installed load. However, because the hospital does not provide historical data, Petri nets are used to analyse this case study. Modelling and Analysing with a Fuzzy Inference System For computing, the authors use the MATLAB fuzzy toolbox and the Mamdani fuzzy method. Fuzzification Data Processing After analysing the electricity system of the hospital, using Petri nets and the block diagram design to find the most critical instruments or items in the asset, fuzzy MATLAB is now used to determine how reliable and available the system is according to its several states, determining the input and output functions of the system by the specified setpoints; it uses information and conditions such as electrical main power worth 420, Genset 1 and 2 worth 700, ATS worth 140, and UPS 1 and 2 worth 220. The removal of all inputs and outputs is presented in Figures 9-12. Thus, we can conclude that the fuzzy set for input "Electrical Main Power" is as follows: Using the fuzzy set operator "AND", the value taken is the lowest, and thus: {0.215 + (0.3 * 2) + 0.3 + (0.27 * 2)}/6 = 0.28 ≈ 0.3 (minimum total value of the input variable) (Figure 14). The other way to solve the Centroid of Gravity method is using calculus, as follows. The defuzzification method uses the centroid of gravity (COG), x* = ∫ x·μ(x) dx / ∫ μ(x) dx. Therefore, the centre of gravity of the calculated drawing area is at point x = 60 and point y = 0 as a balance of the average electrical current in the said hospital system (Figure 15). Fuzzy Logic Designer The fuzzy logic designer in this study involves parameters including six "inputs": (a) electrical main power (350 MVA); (b) two Gensets, Genset 1 and Genset 2 (1000 KVA); (c) one automatic transfer switch (ATS); and (d) two UPS units, UPS 1 and UPS 2 (300 KVA). The input is shown in Figure 16. In Figure 17, from (a) to (f), it is clear that the elements contained therein are intervals and parameters; however, the approach can be completed with Table 1, from (a) to (f), which corresponds to each item of Figure 17, from (a) to (f), respectively. The Membership Function Editor of the fuzzy logic design for "output" variables is designed based on the input voltage variation: if the voltage received by the system is between 34.5% and 55.5% for Under load and between 64.5% and 85.5% for Normal load, then the output that appears in the fuzzy MATLAB simulation is shown in Figure 18. In Figure 18, it is clear that the elements enclosed in it are the intervals and parameters; however, they can be supported by Table 2.
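As an illustration of the inference steps just described (fuzzification, AND/min aggregation, and centroid-of-gravity defuzzification), the following is a minimal sketch in Python/NumPy. The triangular membership functions, the two rules and the numerical setpoints are hypothetical placeholders, not the values actually used in the authors' MATLAB model of the hospital.

```python
# Minimal Mamdani-style inference sketch (illustrative values only).
import numpy as np

def trimf(x, a, b, c):
    """Triangular membership function on universe x."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12),
                                 (c - x) / (c - b + 1e-12)), 0.0)

# Universe of discourse for the output "system availability" (percent).
y = np.linspace(0.0, 100.0, 1001)
out_low = trimf(y, 0.0, 25.0, 50.0)
out_normal = trimf(y, 40.0, 65.0, 90.0)

def availability(main_power, ups_load_pct):
    """Evaluate two hypothetical rules with min (AND) and max aggregation."""
    # Fuzzification of the crisp inputs (membership degrees in [0, 1]).
    power_under = trimf(np.array([main_power]), 0.0, 200.0, 420.0)[0]
    power_normal = trimf(np.array([main_power]), 350.0, 420.0, 500.0)[0]
    ups_normal = trimf(np.array([ups_load_pct]), 40.0, 65.0, 90.0)[0]

    # Rule 1: IF power is Under AND UPS load is Normal THEN availability is Low.
    w1 = min(power_under, ups_normal)
    # Rule 2: IF power is Normal AND UPS load is Normal THEN availability is Normal.
    w2 = min(power_normal, ups_normal)

    # Mamdani implication (min) and aggregation (max) of the clipped output sets.
    aggregated = np.maximum(np.minimum(w1, out_low),
                            np.minimum(w2, out_normal))

    # Centroid-of-gravity defuzzification: COG = sum(y * mu) / sum(mu).
    return float(np.sum(y * aggregated) / (np.sum(aggregated) + 1e-12))

print(availability(main_power=420.0, ups_load_pct=70.0))
```

The same pattern extends to all six inputs of the hospital model; only the number of membership functions and rules grows.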
Rules of Editor The next step is to apply the fuzzy operator "AND & THEN" in fuzzy rules, and the fuzzy rules that are by data collected and processed according to fuzzy logic with the following 17 rules: (1) To support the fuzzy rules above, it is necessary to sort out the working orders of some important equipment of the electrical power system of the hospital that are being analysed. Based on MATLAB software, the fuzzy inference system was used to simulate how reliable and available their functions are in order to prevent any failure. The support of the fuzzy rules that show the simulation of the referred electrical circuits functioning is shown in Table 3. Synthesis The synthesis of the steps shown in previous sections is shown in Figure 23, representing the inference process corresponding to five inputs, 17 rules system, and one output plot. Based on the description above, the analysis of the electrical power system of a large hospital can be described using Petri nets and a fuzzy inference system based on the following steps: (1) Creating an asset register and numbering system hierarchically; (2) Creating a functional block diagram; (3) Creating a process flow chart; (4) Establishing the system boundary definitions; (5) Creating a Petri net modelling and a fuzzy inference system; and (6) Describing the work function and the operational potential failures. Based on the steps above, the following results can be obtained that support the actual operational documents in the field: (1) To identify the reliability and the weak points of the system; (2) To redesign the system aiming to remove the weakest points of the system to guarantee the asset reliability; (3) To simulate the most important solutions to improve the system reliability; and (4) To choose the best solution for the desired system reliability. Conclusions The paper demonstrates the usefulness and relevance of Petri nets in the dynamic modelling and analysing of the hospital's electrical power supply systems. The paper demonstrates how Petri nets can help to identify the weaknesses in a complex electrical system, to simulate more reliable solutions, and to validate them. With Petri nets, it is possible to identify the most critical components of the electrical system in a hospital. As there is no historical maintenance available, the authors used the fuzzy inference system to analyse the system with excellent results, as shown in the paper. The paper emphasizes the Petri nets and fuzzy inference system as a powerful tool to support maintenance management, providing the analysis and simulation approach for this type of system aiming to increase their reliability and availability. Based on the simulations of Petri nets, it is possible to identify the most critical devices in the electrical energy system of a large European hospital. The case study used a fuzzy inference system that demonstrates that the function of the assets, on average, reaches only 45% of reliability and availability since the function of the assets in their usefulness is only between 50% to 75%. To solve this weakness, the authors propose to install redundant automatic transfer switches (ATSs) to increase the asset's reliability and availability. 
Based on the preceding, the contribution of the approach carried out throughout the paper can be stated: it is based on Petri nets and fuzzy logic to identify the reliability weak points in electrical power systems and to evaluate the new performance after the improvements are made, in order to reach the desired availability. Moreover, the approach can be generalized to any other organization, regardless of its nature. Future developments will be based on the comparison between the approach presented in this paper and a stochastic one, namely when a Reliability Centred Maintenance policy is used.
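To complement the discussion, the following is a minimal executable sketch of the 5-tuple Petri net definition given in the Petri Nets Systems section (places, transitions, input/output arc weights, and an initial marking with a token-firing rule). The two-place mains/generator net used here is a hypothetical toy example, not the hospital model built in HiPS.

```python
# Toy Petri net: places, transitions, arc weights, marking and token firing.
class PetriNet:
    def __init__(self, places, transitions, inputs, outputs, marking):
        self.places = places            # finite set of places P
        self.transitions = transitions  # finite set of transitions T
        self.inputs = inputs            # I(p, t): arc weight, place -> transition
        self.outputs = outputs          # O(t, p): arc weight, transition -> place
        self.marking = dict(marking)    # M0: initial assignment of tokens

    def enabled(self, t):
        """A transition is enabled if every input place holds enough tokens."""
        return all(self.marking[p] >= w
                   for (p, tt), w in self.inputs.items() if tt == t)

    def fire(self, t):
        """Firing removes tokens from input places and adds them to output places."""
        if not self.enabled(t):
            raise ValueError(f"transition {t} is not enabled")
        for (p, tt), w in self.inputs.items():
            if tt == t:
                self.marking[p] -= w
        for (tt, p), w in self.outputs.items():
            if tt == t:
                self.marking[p] += w

# Hypothetical two-state example: external mains supply vs. emergency generator.
net = PetriNet(
    places={"mains_ok", "generator_running"},
    transitions={"mains_fails", "mains_restored"},
    inputs={("mains_ok", "mains_fails"): 1,
            ("generator_running", "mains_restored"): 1},
    outputs={("mains_fails", "generator_running"): 1,
             ("mains_restored", "mains_ok"): 1},
    marking={"mains_ok": 1, "generator_running": 0},
)
net.fire("mains_fails")
print(net.marking)  # {'mains_ok': 0, 'generator_running': 1}
```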
8,335.6
2021-03-15T00:00:00.000
[ "Computer Science" ]
Light-Driven Raman Coherence as a Non-Thermal Route to Ultrafast Topology Switching A grand challenge underlies the entire field of topology-enabled quantum logic and information science: how to establish topological control principles driven by quantum coherence and understand the time-dependence of such periodic driving? Here we demonstrate a THz pulse-induced phase transition in Dirac materials that is periodically driven by vibrational coherence due to excitation of the lowest Raman-active mode. Above a critical field threshold, there emerges a long-lived metastable phase with unique Raman coherent phonon-assisted switching dynamics, absent for optical pumping. The switching also manifests itself in a non-thermal spectral shape, a relaxation slowing down near the Lifshitz transition where the critical Dirac point (DP) occurs, and diminishing signals at the same temperature at which the Berry curvature induced Anomalous Hall Effect vanishes. These results, together with first-principles modeling, identify a mode-selective Raman coupling that drives the system from strong to weak topological insulators, STI to WTI, with a Dirac semimetal phase established at a critical atomic displacement controlled by the phonon pumping. Harnessing vibrational coherence can be extended to steer symmetry-breaking transitions, i.e., Dirac to Weyl ones, with implications for THz topological quantum gate and error correction applications. Dynamic driving by periodic lattice vibrations represents a powerful approach to manipulate topological band structures, in stark contrast to equilibrium tuning methods, e.g., temperature, chemical substitution and static strain/electric/magnetic fields [1,2]. Ultrafast non-thermal manipulation of topology [3,4], particularly at preferred terahertz (THz)-cycle clock rates, is key for full implementation of dynamical protocols needed to both match current information and sensing technologies and exceed their limits via topological functionalities [5][6][7][8]. Despite recent intriguing studies [9][10][11][12], topological states driven by THz optical phonons have not been explored, especially in Dirac semimetals. ZrTe 5 is a model Dirac system [13][14][15] for establishing such topological quantum switching by periodic driving from phonons because of its minimal single nodal (Dirac) point and extreme sensitivity to small structural changes across a broad range of phases, from STI to Dirac semimetal to WTI. However, only thermal- or strain-induced transitions [17,18] have been studied. We implement a dynamical topology-switching scheme using intense THz laser pulses (red line) to excite a Raman-active (A 1g ) optical phonon mode, as illustrated in Fig. 1a. The subpicosecond THz driving has a near-single-cycle electric field profile in the time domain and a broadband spectrum with central frequency ∼1.2THz (gray shade, Fig. 1b). The atomic displacement associated with the A 1g eigen-mode (Fig. 1c) is a translational rigid-chain motion which mostly involves opposite displacements, along the b-axis, of the dimer Te (Te d ), apical Te (Te a ) and Zr atoms of the neighboring Zr-Te units. This motion results in a modulation of the atomic positions along the b-axis that determines the van der Waals coupling and, in turn, controls the band inversion at the Γ point.
Most intriguingly, this mode could enable an exclusive topology switch without symmetry breaking because it has the extreme sensitivity to topology while preserves the inversion symmetry of the lattice. Coherent excitation of the A 1g phonon expects to create a periodically-driven state via lattice vibrations that modulate the topological bands and switch from STI (top, Fig. 1a) to Dirac semimetal (middle) to WTI (bottom) phases. Dephasing of this topological coherent state, via Dirac fermion-phonon interaction, leads to conversion of Raman phonon coherence into population, i.e., into finite atomic displacement associated with the establishment of a final state with highly non-thermal characteristics. However, the salient spectroscopy features for the ultrafast non-thermal phase transition in any Dirac materials were not established until now. In this paper, we provide evidence for the distinct topology switching in ZrTe 5 driven by THz Raman phonon coherence. Our results are consistent with the calculations of A 1g mode-selective electronic band structures. We used a bulk single crystal sample that exhibits a 3D linear dispersion and bulk bandgap less than 30 meV [19][20][21]. Coherent phonon emissions after intense THz pump excitation at 4.1K for various field strengths E THz =5, 22, 86, 184, 386, 552 and 736kVcm −1 are sampled in the time domain by a weak optical pulse (Methods), as shown in Fig 1d. A pronounced multi-cycle oscillation is clearly visible in the sample emission. The Fourier transform (FT) spectra of these coherent beatings at 736kVcm −1 display two dominant peaks centered at ∼1.2THz and 2.1THz. The static Raman spectrum (blue line, Fig. 1b) from the same sample, shown together, identifies their Raman symmetry (dash lines). Note that the strongest emission peak ∼1.2THz matches very well with the A 1g mode, unlike for the B 2g mode dominant in the static Raman spectra. This observation clearly shows that the intense THz driving strongly excites a Raman A 1g coherence in the driven state, unlike for the equilibrium state, which periodically modulates the interlayer spacing. In addition, the three infrared (IR) active phonon modes with frequencies 0.63THz, 1.5THz and 2.3THz, as seen in linear THz transmission (Fig. S2, supplementary) are negligibly small in the intense THz-driven state (Fig. 1d), i.e., the preferred coupling of the topological electronic bands to the A 1g mode. This Raman mode-selective coupling attests the excitation scheme ( Fig. 1c) due to the extreme sensitivity of the band inversion to the interlayer spacing, which is reproduced by density functional simulations below. To characterize the observed periodically-driven non-equilibrium topological state dressed by Raman coherence Fig. 2a plots the THz differential transmission change ∆E/E 0 (red circles) after excitation by the intense THz pump pulse (gray shade) as a function of the pumpprobe time delay ∆t pp at 4.1K. The measured ∆E/E 0 signal at gating pulse delay t gate =0ps (inset) originates mainly from phonon renormalization and recovery induced by Dirac fermion-phonon (e-ph) interaction (further discussed in Fig. 3a). Shown together is the pump-induced coherent phonon emission (black line, overlaid) that is measured simultaneously as a function of ∆t pp . These results reveal the build-up of a metastable state, which occurs exclusively during the coherent Raman phonon oscillations in time. At the early times during the THz pulse, marked by t pulse (Fig. 
2a), there is only small pump-induced ∆E/E 0 signals. This excludes the THz heating of electronic states near the Fermi surface, which would lead to quasi-instantaneous increase in the transient signals on the arrival of the pump pulse. In contrast, the transient state evolution, seen from the pronounced ∆E/E 0 signals, only occurs at later times after the pulse, but before phonon dephasing, marked as t dephasing , i.e. during the period of the pronounced coherent phonon vibrations (blue line, Fig. 2a). This emergent behavior after the incident THz excitation dominates the driven state dynamics. The formation process is followed by a quasi-steady temporal regime that marks the establishment of a final metastable state after the dephasing of the Raman phonon coherence by, e.g., the strong e-ph couping. In contrast, the build-up behavior during coherent oscillations is absent for high-frequency pump pulses tuned at 1.55 eV (Fig. 2b). Here we only see a sub-ps rise which now occurs mostly during the photoexcitation. This stark difference clearly indicates the non-thermal nature of the THz-driven state mediated by coherent Raman phonons. Experimental evidence associating the observed phase evolution with the topological switching is presented in Figs. 2c and 2d as follows. The first evidence is to compare the temperature dependence of the pump-probe ∆E/E 0 signals in the meta-stable states (at ∆t pp = 10ps) with that of the Anomalous Hall Effect (AHE) that directly probes the Berry curvature Ω k generated by the Weyl nodes. As shown in Fig. 2c, the nonlinear THz signal (blue diamond) quickly diminishes at the same temperature, T Berry ∼160K, where AHE Resistivity ρ 0 AHE vanishes (black circles). In the AHE measurement, application of a magnetic field transforms a Dirac semimetal into a Weyl semimetal by breaking time reversal symmetry. Consequently, Weyl nodes behave like magnetic monopoles that generate large Berry curvatures and act like an effective magnetic field. This gives rise to a non-zero Here we obtained the temperature dependence of ρ 0 AHE , i.e., the saturation value of ρ AHE (B) (Fig. S1, Supplementary materials), by subtracting the ordinary Hall signals (linear background) at high magnetic field from the experimentally measured Hall resistivity [22]. It is clearly visible that ρ 0 AHE in ZrTe 5 emerges below T Berry when the dominant carriers are Dirac fermions with linear dispersion near the conical point with conserved chirality. The T Berry correlates very well with the critical temperature associated with the THz-driven metastable phase transition (blue diamond, Fig. 2c). Therefore, the metastable phase has the same topological origin as the chiral magneto-transport and cannot be established by excitation of normal Fermi surface dominant above T Berry . Note also that the sign change of ρ 0 AHE in the vicinity of T Lif shitz ∼60K where the critical DP occurs separating the STI-WTI transitions [17] (Fig. S1b, supplementary), agrees with the rapid rise of the pump-probe signals, marked by the black dash line, associated with the metastable state. The second evidence is a distinct nonlinear pump fluence dependence of the pump-probe signals with a larger size than the change, ∆ thermal , required for the thermally-driven topological transition. By increasing temperatures from 4.1K to 160K (T Berry ), as shown in the inset of Fig. 2d, the ∆ thermal can be directly determined ∼0.09 corresponding to the change of THz field transmission during the STI-DP-WTI transition. 
Here we compare the ∆ thermal with the THz pump field dependence of the differential transmission ∆E/E 0 signals at a fixed time ∆t pp =10ps (Fig. 2d). We emphasize two key points. First, it is clearly visible that the pump-induced ∆E/E 0 is negligibly small at THz field strengths less than E th ∼ 75kV/cm. This threshold behavior of the formation dynamics (Fig. 2a) is not limited by our noise floor, which is a hallmark of the non-equilibrium phase transition to a THz-driven metastable state. Second, at slightly higher field above E th , the pump-induced differential transmission surpasses the ∆ thermal value for the thermally-induced STI-DP-WTI topological switching (black dash line). This indicates that the sufficiently large lattice displacement above E THz drives the system cross the topological phase boundary to new band structures determined by phonons. Next we identify further some distinguishing spectral and temporal features associated with the THz-driven metastable phase that are different from the thermal states. First, Fig. 3a reveals a distinct, non-thermal, spectral shape in the non-equilibrium response function. At equilibrium, the frequency-dependent conductivity σ 1 /σ DC (inset, Fig. 3a) reveals two strong IR-active phonons along the a-axis (probe direction) with resonant peaks at ω 1,2 IR ∼1.5 and 2.5THz in the 4.1K trace (red line). These IR phonon modes can contribute to the the Raman A 1g phonon generation via the ionic Raman mechanism that involves the IR modes as mediator and IR-Raman coupling due to anharmonicity [16]. At elevated temperatures, these modes progressively shift to higher frequencies up to 200K (magenta line). In the THz-driven phase, the pump-induced conductivity change ∆σ 1 /σ DC at ∆t pp =10 ps shows spectral oscillations(red circles, Fig. 3a) with pronounced absorptive features (red arrows). In contrast, the normal state thermalization leads to dominantly inductive spectral shape (black line, Fig. 3a) which can be obtained by subtracting σ 1 traces at higher temperature and 4.1K, i.e., between STI and WTI thermal states. This result highlights the difference between the driven phase evolution and temperature-/laser-heating induced phase thermalization process. Second, Fig. 3b plots the relaxation dynamics that measures the lifetime of the THz-driven state from 4.1K to 120K. The temporal profile is consistent with the slow build-up, peaked at ∼10ps, and ∼100ps decay of phonon renormalization as seen in the σ 1 /σ DC spectra shown for various ∆t pp delays (inset, Fig. 3b). The transient phase decays with a single exponential profile over ∼ 120ps, see e.g. the 60K trace (black line). Such relaxation is nearly 2 orders of magnitude longer than that reported for the case of optical excitation using higher energy photons [23,24], due in part to the minimal heating of the Fermi surface due to the THz pumping. Most intriguingly, the relaxation time exhibits a non-monotonic temperature dependence. It firsts becomes longer with temperature increase from 4.1K (gray line), reaches a maximum at ∼60K (black line), and finally decreases with temperature up to 120K (blue line), as shown in Fig. 3a. Critical to note that the longest lifetime appears ∼T Lif shitz at which the critical Dirac point appears [17]. This dynamical slowing down further underscores the topological origin of the THz-driven phase transition. Fig. 4. First, without any atomic displacements from equilibrium (λ=0.0) ( Fig.4a), ZrTe 5 has a narrow band gap of 0.04 eV. 
Both the valence and conduction band edges appear along Γ-Y and are off the Γ point, towards the Y point. The projection (green shadow) on Te d p orbitals clearly shows the band inversion between the valence and conduction bands, which agrees with the gapped massless Dirac state observed in experiments [13]. As one of the indicators, the 2D topological index on the k z =0 plane is 1 (Fig.S3a), shown by the odd number of crossings for the Wannier charge centers (WCCs) moving along k y . The overall topological invariant index is (1;110) for the initial state. For λ=3.0 (Fig.4b), the band gap increases to 0.07 eV but the band inversion along Z-Γ-Y directions remains. The system is still a gapped massless (Dirac) state [26]. Most interestingly, in contrast to the above, by moving in the other direction with more positive λ, the band gap decreases. The valence and conduction bands touch at the Γ point when λ=2.15 (Fig.4c). Then the band gap reopens as λ increases further. For λ=3.0 (Fig.4d), the re-opened band gap has no band inversion along Z-Γ-Y, as seen from the orbital projection of the Te d p orbitals (inset, green line). The corresponding 2D topological index on the k z =0 plane is now 0, shown by the even number of crossings for the WCCs moving along k y in Fig.S3b. The overall topological index becomes (0;110) for a WTI. Thus, with the coherently excited Raman A 1g mode within one THz pulse cycle above threshold E th , ZrTe 5 can be driven into coherent topological oscillations between STI and WTI states, with an interesting critical Dirac point in between. The dephasing of the phonon vibration, from multiple Dirac fermion-phonon scattering and/or disorder effects, results in non-thermal phonon populations, which lead to a renormalized spectral shape different from thermal ones shown in Fig. 3a. This critical bulk DP point (Method) is marked by the dashline in the λ dependence of the band gap (Fig 4e), which can exist at the Γ point in ZrTe 5 as the phase boundary between gapped Dirac state and WTI. Such transition can be driven by the A 1g Raman phonon mode consistent with the THz tuning experiment. In summary, we identify a previously-inaccessible tuning scheme via mode-selective Raman coherence that can control the band topology. We demonstrate THz-driven topological phase transition during coherent lattice oscillations in a Dirac material. Harnessing the resonant THz-driven coherence of specifically tailored modes with an intense THz pulse electric field may become a universal principle for steering other symmetry-breaking transitions to Weyl states or phase.
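To make the time-domain analysis described above more concrete, the following is a minimal, illustrative sketch of how a coherent-phonon frequency and dephasing time can be extracted from a sampled emission trace, by taking a Fourier transform and fitting a damped cosine. The synthetic 1.2 THz signal, the sampling window and the noise level are placeholders chosen for illustration and do not correspond to the measured data or to the authors' actual analysis code.

```python
# Illustrative extraction of a coherent-phonon frequency from a time trace.
import numpy as np
from scipy.optimize import curve_fit

# Synthetic damped oscillation standing in for the measured THz emission.
t = np.linspace(0.0, 20.0, 2000)          # pump-probe delay in ps
f0, tau = 1.2, 6.0                        # assumed A1g frequency (THz) and dephasing (ps)
signal = np.exp(-t / tau) * np.cos(2 * np.pi * f0 * t) + 0.02 * np.random.randn(t.size)

# Fourier transform: the dominant peak gives the coherent-phonon frequency.
freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])   # in THz, since time is in ps
spectrum = np.abs(np.fft.rfft(signal))
print("FT peak at %.2f THz" % freqs[np.argmax(spectrum[1:]) + 1])

# A damped-cosine fit yields frequency and dephasing time simultaneously.
def damped_cos(t, amp, f, tau, phi):
    return amp * np.exp(-t / tau) * np.cos(2 * np.pi * f * t + phi)

popt, _ = curve_fit(damped_cos, t, signal, p0=[1.0, 1.0, 5.0, 0.0])
print("fit: f = %.2f THz, tau = %.1f ps" % (popt[1], popt[2]))
```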
3,591
2019-12-04T00:00:00.000
[ "Physics" ]
Towards an Analysis of Daylighting Simulation Software The aim of this article was to assess some of the main lighting software programs habitually used in architecture, subjecting them to a series of trials and analyzing the light distribution obtained in situations with different orientations, dates and geometry. The analysis examines Lightscape 3.2, Desktop Radiance 2.0, Lumen Micro 7.5, Ecotect 5.5 and Dialux 4.4. Introduction In 1966 Hopkinson, Petherbridge and Longmore published the book "Daylighting" [1], which is a true compendium of the information on natural lighting which existed until that point.This book clearly shows the great complexity of the procedures for quantifying natural light, and how the information on offer there, such as the value of estimated average lighting or lighting for a concrete point in an interior space, was limited. In 1970, the CIE published document No. 16 [2] on calculation methods available for natural lighting.This document detailed the existence of more than fifty calculation procedures.In addition, this text highlights the narrow scope of these procedures, as well as the poor results attained, usually following drawn-out and tedious processes. One of the methods most frequently used to assess light distribution within buildings is the analysis of scale models, which are studied in an artificial sky simulator.This avoids laborious calculation OPEN ACCESS procedures.However, as studies by Thanachareonkit, Scartezzini and Andersen [3] show, there is a wide divergence between results obtained from a scale model and those observed in a real model. The invention of personal computers brought about the appearance of an entire range of computer programs capable of providing full reports on natural lighting within any type of space, under any sort of sky and in any geographical location, measured on a specific day and time.A common feature of all these is their extreme vagueness as they offer practically no information on their basis and hypotheses for calculation and do not even mention the sources used for data calculation on the conditions of the sky. As a result of this situation, several publications have appeared with comparative analyses of the more widely used programs, designed to help lighting designers choose the most suitable programs. Antecedents and Objectives As was mentioned earlier, many studies have been carried out to prove which is the most suitable form of lighting software.Some of the most relevant ones are described below. The study by Kopylov and Khodulev [4] resolves the analysis of three calculation programs, Lightscape 3.1, Inspirer 5.3 and Desktop Radiance, although little analysis is offered of the last of these in this study.As a result of this, most of the results shown relate to Lightscape and Inspirer.The first part of the study consists in the analysis of a cube of certain measurements where the luminance values of six points of one of its faces are established using a physical formulation.Subsequently, the luminance obtained thanks to the formulation is compared with the results produced by the Lightscape and Inspirer programs, and some tables are created defining the margin of error in relation with the calculation time.Susan Ubbelohde's study [5] analyses the following programs: Lumen Micro, Super Lite, Radiance and Lightscape.A truly fascinating prior analysis is carried out for all the programs, describing the different qualities of each. 
Following the description, trials of all programs are carried out on a specific model: building 1022 of Natoma Street in San Francisco, by the architect Stanley Saitowitz.The results obtained with each program are also contrasted with the measurements carried out on the original building. The work of Roy [6] carries out an extensive description both of the history of lighting calculation programs and of the programs themselves.This test analyses the most common programs at the time of study, Adeline, equivalent to Radiance 2.0 in Windows 2000 version, Lightscape, Microstation and RadioRay. Bryan's article [7] begins with a brief description of the more common simulation programs among which it is worth noting Lightscape 3.2, Desktop Radiance 1.02, Lumen Micro 2000 and FormZ Radiozity 3.8.Subsequently, the article details the virtues and flaws of the different calculation programs in relation with different aspects. The article by Lau and Mistrick [8] begins with an extensive introduction on the different variables for choosing a lighting calculation program.It is surprising to note that without prior analysis or reference to any sort of trial, the article in question categorically states that without a doubt Desktop Radiance is the best calculation program. In 2005, the CIE published [9] a series of trials that were to act as a basis for the evaluation of the precision of calculation programs.One of the authors who used the CIE proposals for his own work was Maamari [10], who compares the precision of the Lightscape 3.2 and Relux 2004 programs.The author concludes that it is advisable to carry out complementary tests in addition to those established by the CIE, in order to cover other aspects of light propagation. Chang-Sum and Seung-Jin [11] established one of the most interesting proposals in terms of computer simulation studies.Taking into account that Desktop Radiance is the most widely-used tool in this field of research they compared this program with a scale model with ten photometers.The results of these trials show significant differences between Radiance and the measurements on the scale model.As a result, the authors established a correction factor, applicable to the results provided by Radiance. In addition to the articles summarized above, there are many more with the same objective, such as the article by Ng [12] of the University of Hong Kong, or that of Houser [13] of the University of Pennsylvania.Other relevant studies are those carried out by Mardaljevic [14,15], comparing the results of lighting software under real sky conditions, and those by Reinhart [16], which focus mainly on the application of calculation tools to architectural design. It is hard to obtain an overall picture of the efficiency and precision of the aforementioned software from all the articles observed.This conclusion is reached while observing that all the trials carry out lighting analyses for very specific conditions, and rule out the variable of studying the model on different dates and times.This is why it is necessary to express the results in values relating to the levels of exterior lighting, using the concept of DF (Daylight Factor) [17], which shows the quotient of lighting levels inside the model and those obtained outside.This enables the observation of light distribution regardless of day and time of study. 
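Since the comparisons that follow are expressed through the Daylight Factor, a minimal sketch of the quantity being compared may help: DF is simply the interior illuminance at a point divided by the simultaneous unobstructed exterior illuminance under the same overcast sky, usually quoted as a percentage. The illuminance values below are arbitrary placeholders for illustration.

```python
# Daylight Factor: interior illuminance as a fraction of exterior illuminance.
def daylight_factor(e_interior_lux, e_exterior_lux):
    """DF (%) = 100 * E_in / E_out, both measured under the same overcast sky."""
    return 100.0 * e_interior_lux / e_exterior_lux

# Placeholder readings: 250 lux at an interior point, 10,000 lux outdoors.
print(daylight_factor(250.0, 10000.0))  # -> 2.5 (%)
```

Because both illuminances scale together as the overcast sky brightens or dims, the DF is independent of the day and time of study, which is what allows results from different hours to be compared directly.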
There are another two fundamental absences in the studies examined previously; firstly, with the exception of the study carried out by Ubbelohde [5], there is a lack of comparison with a real model.Secondly, none of the works mentioned take into account the orientation or geometric variations of the model.The variation in the model's dimensions helps to study how the calculation program responds to different calculation situations: the simpler the model, the easier the distribution of lighting levels, while if the model is more complex, the distribution of lighting can heighten certain margins of error which condition the software's evaluation.This aspect is mentioned exclusively in the work of Kopylov-Khodulev [4]. It cannot be denied that it is hard to find information on the calculation systems of each program.Some articles, like that by Ubbelohde [5], do carry out an exhaustive description of each calculation program, although it must be said that much of the data provided by this article lacks contrasting views.As a result of this it is hard to describe the calculation system for each program, which demonstrates the lack of literature on the foundations and algorithms used. In addition, the analyses mentioned earlier do not contemplate any of the current calculation programs, such as Ecotect 5.5 or Dialux 4.4.We must remember that the field of technological development through computers is rapidly progressing, with many programs becoming obsolete in very short periods of time. In consequence, this article aims to carry out a series of trials on an ensemble of widely-used programs, assessing them according to the theoretical principles of lighting.This article falls within the lines of research on Daylighting which the Institute of Architecture and Building Science of the University of Seville has been carrying out for many years [18,19]. 
Methodology for Calculation In the trial explained below we will consider a simple model which can be easily reproduced and permits quick and precise calculation. We rule out the use of the clear sky hypothesis, as in this context a high number of variables is introduced, preventing universal conclusions. Analyses in overcast sky conditions are more advisable for contrasting time variations, as in these conditions the coefficients of uniformity and the daylight factors remain invariable. The overcast sky model used in this article is supported by the definition coined by Moon-Spencer [20], where the luminance values are distributed in accordance with the following law: Lθ = Lz·(1 + 2·sin θ)/3, where θ is the angle of elevation above the horizon and Lz is the zenith luminance (a brief numerical illustration of this law is given below, after the aims of the first trial). This implies that the lowest luminance value in an overcast sky vault occurs at the horizon and is equivalent to a third of the maximum luminance at the zenith: Lhorizon = Lz/3. The model for calculation is defined as a cube of 3 × 3 × 3 meters, with an opening of 1 × 1 meters on one of its sides (Figure 1). The location of this opening is variable, as we will study in this first trial. The surfaces of this cube are perfectly diffuse and their reflection coefficients vary in accordance with their location: the value of the ceiling will be 0.80, the walls 0.50 and the floor 0.2. The opening of 1 × 1 meters is centered on one of the sides, letting the light into the interior of the model completely, without filtering it. This model will be studied over different trials, which will be described subsequently. Extensive research has been carried out on the programs to be analyzed in this study, evaluating those with the greatest representation and presence worldwide. As selection criteria for the programs for the trial, we have ruled out those which are not employed professionally for the calculation of lighting. As a final condition we employed the same programs which feature prominently in the aforementioned articles. For the definitive selection, we decided to use Lightscape 3.2, Desktop Radiance 2.0, Lumen Micro 7.5, Ecotect 5.5 and Dialux 4.4. Parameters have been provided for all the programs that allow an optimum precision of the results obtained, considering a calculation processing time of under an hour per model. The calculation parameters for each simulation program are shown below in Table 1. Orientation Trial The first trial carries out a study on the model and analyses levels of lighting in overcast sky conditions in accordance with the position of the window. Accordingly, a specific day of study, common to all calculations, will be set and five analyses will be carried out for each program. For each analysis the window will be situated in a different position: North, South, East, West and, finally, in the ceiling. The development of this trial has two main aims:
• On the one hand, to confirm that the different programs give equal results for the different orientations of the side opening.
• Secondly, to contrast the difference in the levels of illuminance obtained for the gap at the zenith position with those observed for the gaps in a side position.
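As referenced above, a short numerical sketch of the Moon-Spencer overcast-sky law used in all of the trials is given here: the luminance of a sky element depends only on its elevation angle, so the zenith is three times brighter than the horizon regardless of orientation, date or time. The sampling of elevation angles is arbitrary and purely illustrative.

```python
# Moon-Spencer overcast sky: luminance depends only on the elevation angle theta.
import math

def overcast_luminance(theta_deg, l_zenith=1.0):
    """L(theta) = Lz * (1 + 2*sin(theta)) / 3, with theta measured up from the horizon."""
    return l_zenith * (1.0 + 2.0 * math.sin(math.radians(theta_deg))) / 3.0

for theta in (0, 30, 60, 90):
    print(f"elevation {theta:2d} deg -> relative luminance {overcast_luminance(theta):.3f}")
# The horizon value is 1/3 of the zenith value; 90 deg recovers the full zenith luminance.
```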
The results obtained in this trial are shown in Figures 2,3.The graph shown highlights the Average Daylight Factors for each program on an overcast sky calculation, where the variable is the position of the opening on the model.The following conclusion is reached: The illuminance levels reached with the five programs do not vary with the orientation of the window, that is to say, all programs interpret the overcast sky as a vault which emits luminance which is constant on each parallel of its surface, regardless of the day and time of study. In this way, all five programs have acceptable results, although the differences in the values obtained are striking, as Desktop Radiance 2.0 doubles the illuminance levels in side openings when compared with Lightscape and Ecotect. The graph for Maximum Daylight Factors shows that the illuminance at the zenith for Desktop Radiance 2.0, Lumen Micro 7.5 and Dialux 4.4 programs is not much higher than the levels in the side openings.Nevertheless, with the Lightscape 3.2 and Ecotect 5.5 programs it is observed that the illuminance levels at the zenith are almost double those of the side openings.From this it is possible to deduce that overcast sky interpretation differs according to the program used. Temporary Variation Trial The second trial consists in analyzing how light is distributed according to the time on a specific day.As was shown in the previous study, in overcast sky conditions the orientation of the window makes no difference, so we will place it for instance at the South.Subsequently, the trial will measure the coefficients of uniformity and the natural lighting factors.The analyses will be for a specific day, in this case, June 21st, and the results obtained every hour between 9:00 and 18:00 will be examined. Figure 4 shows the different coefficients of uniformity, according to program and time.As is observed, the uniformity values are fixed or slightly variable in Lightscape, Lumen Micro, Ecotect and Dialux.However, for Desktop Radiance, the uniformity coefficients resulting from the program's measurements vary visibly with the time of study.These coefficients range from 7.53 at 9:00 am, reaching a maximum at 11:00 am with a value of 8.65, and falling progressively until 18:00 with a value of 2.66.This is due to Desktop Radiance using algorithms that add the factor of the turbidity of the sky to the definition of overcast sky established by Moon and Spencer.As is observed, Ecotect and Dialux show overly elevated uniformity coefficients compared with other programs; their values range between 13.66 and 15.23.That is to say, the maximum illuminance levels are up to fifteen times higher than the minimum levels. Calculations for the Daylight Factor for each program at the times on the day of study are carried out below. Figure 5 shows the results of the DF values of each program, in accordance with the time.However, in Radiance 2.0 the DF values are variable, peaking at 18:00 h, and remain practically constant between 9:00 and 14:00 h.The results of Lumen Micro 7.5, Ecotect 5.5 and Dialux 4.4 are relatively similar, while those obtained with Lightscape 3.2 are approximately half of these. 
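The quantities plotted in the two trials above (average and maximum Daylight Factors and the Max/Min coefficient of uniformity) reduce to a few lines of post-processing once a program has produced a grid of interior illuminance values. The sketch below uses made-up sensor readings and an assumed exterior illuminance purely for illustration.

```python
# Summary metrics used in the trials, computed from a grid of interior readings.
import numpy as np

def trial_metrics(interior_lux, exterior_lux):
    """Return (average DF %, maximum DF %, coefficient of uniformity Max/Min)."""
    df = 100.0 * np.asarray(interior_lux, dtype=float) / exterior_lux
    return df.mean(), df.max(), df.max() / df.min()

# Placeholder grid of interior illuminances (lux) under a 10,000 lux overcast sky.
readings = [120.0, 180.0, 260.0, 410.0, 350.0, 150.0]
avg_df, max_df, cu = trial_metrics(readings, 10000.0)
print(f"average DF = {avg_df:.2f}%, maximum DF = {max_df:.2f}%, uniformity = {cu:.2f}")
```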
Geometric Variation Trial For this trial we modify the model under study, using the height of the model as a variable. In order to do so, we take the original model, with a square base of 3 m a side, with a square opening centered in the ceiling, measuring 1 m a side. The study observes the behavior of the light on five models of different heights, starting at 1.5 m and going up to 7.5 m, varying this parameter in 1.5-m intervals. The trial is carried out in conditions for overcast sky, on June 21st at 15:00, ensuring equal conditions for all models and therefore using common characteristics for a joint evaluation. The Coefficients of Uniformity of the maximum illuminance levels over minimum, in accordance with the height of the model and referring to each calculation program, are observed below. As is observed in Figure 6, the behavior of almost all programs is similar in this trial. However, one of the programs produces highly different values: Ecotect 5.5 presents considerable variations in the coefficients of uniformity, with these values increasing as the height of the model increases, establishing it as barely sensitive to the variation of the height of the model. Trial with Artificial Sky Subsequently, comparisons are made of each program with the artificial sky, thus ensuring a comparison which makes it possible to properly determine the validity of the tools used. The artificial sky used in this trial belongs to the laboratory of the Department of Architectural Constructions I of the Higher Technical School of Architecture of Seville. Its design corresponds with that of a parallelepiped model of artificial sky, where a constant luminance reflector emits light inside a cube adapted to the lamp's measurements. On the reflector there are 16 fluorescent tubes providing a homogeneous emission of light flux. The walls of the hexahedron are mirrors simulating the location of the horizon in the infinite, just like the conditions of a real sky. The reproduction of the model under study is placed on the floor of the cube, which is matte white. A parallelepiped sky, contrasted and calibrated following the study by Navarro, Sendra and Barros [21], where the model to be studied is placed, will be used (Figure 7). An appropriate artificial sky simulates the conditions of an overcast sky of the Moon-Spencer type. A cube measuring 30 × 30 × 30 cm with a 10 × 10 cm opening centered in the ceiling is placed inside the artificial sky (Figure 8). The scale model simulates the model which has been tested throughout this article, on a 1:10 scale. This scale model has been manufactured in lightweight white cardboard, as this is considered a reflector with a 0.95 reflection coefficient. To prevent the transmission of light through the cardboard, the scale model has been lined with aluminum foil, thus rendering the walls of the model completely opaque. A multiway Megatron Limited photometer with twelve measurement points has been used. The results of the measurement, which was carried out using the photometer inside the model exposed to the artificial sky, are shown below. Figure 9 shows the levels of illuminance at each measurement point in red, while the daylight factor for each point measured appears in blue. Once results have been obtained for the artificial sky, as is observed in Figure 10, these are contrasted with the measurements provided by each program, reproducing the scale model as a computer model. Neither Lumen Micro nor Dialux allows reflection coefficients higher than 0.90, and therefore the D.F. values and the uniformity coefficients will be slightly lower than those obtained in the actual model. The comparison of the daylight factors shows significant differences between the computer calculations and the measurements carried out with the aid of an artificial sky. The highest daylight factors were measured on the scale model, while Figure 11 shows that the values obtained with the programs are approximately half of these. Secondly, the uniformity coefficients of the artificial sky and the different programs are checked (Figure 12). As is observed, the coefficients of almost all programs are very close to those obtained with the artificial sky, with the exception of Desktop Radiance (Radiance 2.0), which shows Max/Min quotients higher than the rest, reaching a value of 1.88. Conclusions By observing the trials described earlier, the following conclusions on the precision of lighting simulation software under overcast sky conditions can be reached: The application of these five lighting simulation tools produces significant differences in daylight factor results, both for average and maximum illuminance, and in coefficients of uniformity. Calculation programs use very different interpretations of the sky vault in overcast sky conditions. In general it is possible to identify different calculation criteria for each program, as can be observed from the daylight factors shown in Figures 2 and 3 of the orientation trial. Specifically, it is apparent in Figure 5 that Desktop Radiance does not maintain daylight factors uniformly in the time variation trial. This observation leads to two deductions. The first of these is that the position of the sun can be considered a variable within the calculation algorithm for overcast sky conditions, as Radiance uses a definition of sky that is more complex than that of other programs, applying the variable of turbidity in the distribution of luminance of the sky. The second deduction is that it is possible that the calculation parameters considered by the authors for this program (see point three) have not been sufficient to determine a precise calculation. Both the autonomous program Ecotect and Dialux show very high uniformity coefficients in many of the trials. The results of these uniformity coefficients contrast with those observed when measuring the scale model in artificial sky conditions. In the geometric variation trial, when models reach a certain degree of complexity, some calculation programs like Ecotect show different readings for the uniformity coefficients. In conclusion, there is currently a wide range of natural lighting simulation programs, but neither the calculation criteria nor the results reached are uniform. Accordingly, it is advisable to contrast the calculation tests of the computer programs with a scale model in artificial sky conditions, always using several calculation programs with a view to preventing errors in the results obtained. Figure 2. Daylight Factor of average levels of illuminance. June 21st, overcast sky. Figure 6. Coefficients of Uniformity (Max/Min illuminance levels) varying the height of the model from 1.5 m to 7.5 m. June 21st at 15:00, overcast sky. Figure 7. Parallelepiped artificial sky used in the trial. Figure 8. Model of calculation for artificial sky. Figure 9. Measurements of the photometer in the scale model of artificial sky. Figure 10. Results of the measurements of artificial sky. Figure 11. Comparison of Daylight Factor between software and artificial sky. Figure 12. Comparison of the Uniformity Coefficients between software and artificial sky.
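To make the quantities compared in the trials above concrete, the short sketch below computes the two indicators used throughout: the daylight factor at a point and the Max/Min coefficient of uniformity. It is an illustration only; the twelve readings and the outdoor illuminance are placeholder values, not measurements from the scale model or from any of the programs.

```python
# Minimal sketch (placeholder data): daylight factor and Max/Min uniformity
# from a grid of illuminance readings.

def daylight_factor(e_indoor_lux, e_outdoor_lux):
    """Daylight factor (%) at a point: indoor illuminance relative to the
    simultaneous unobstructed outdoor illuminance under the same sky."""
    return 100.0 * e_indoor_lux / e_outdoor_lux

def uniformity_max_min(readings_lux):
    """Coefficient of uniformity as used in these trials: max over min illuminance."""
    return max(readings_lux) / min(readings_lux)

# Hypothetical twelve-point measurement (one value per photometer cell), in lux.
readings = [148, 152, 160, 171, 185, 190, 188, 176, 163, 155, 150, 147]
outdoor = 5000.0  # assumed simultaneous outdoor illuminance, lux

df_per_point = [daylight_factor(e, outdoor) for e in readings]
print("D.F. per point (%):", [round(v, 2) for v in df_per_point])
print("Max/Min uniformity:", round(uniformity_max_min(readings), 2))
```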
4,939.2
2011-06-29T00:00:00.000
[ "Engineering", "Environmental Science" ]
Magnetic Phase Diagram of Dense Holographic Multiquarks in the Quark-gluon Plasma We study the phase diagram of the dense holographic gauge matter in the Sakai-Sugimoto model in the presence of the magnetic field above the deconfinement temperature. Even above the deconfinement, quarks could form colour bound states through the remaining strong interaction if the density is large. We demonstrate that in the presence of the magnetic field for a sufficiently large baryon density, the multiquark-pion gradient (MQ-$\nabla\phi$) phase is more thermodynamically preferred than the chiral-symmetric quark-gluon plasma. The phase diagrams between the holographic multiquark and the chiral-symmetric quark-gluon plasma phase are obtained at finite temperature and magnetic field. In the mixed MQ-$\nabla\phi$ phase, the pion gradient induced by the external magnetic field is found to be a linear response for small and moderate field strengths. Its population ratio decreases as the density is raised and thus the multiquarks dominate the phase. Temperature dependence of the baryon chemical potential, the free energy and the linear pion gradient response of the multiquark phase are well approximated by a simple analytic function $\sqrt{1-\frac{T^{6}}{T^{6}_{0}}}$ inherited from the metric of the holographic background. I. INTRODUCTION Discovery of the AdS/CFT correspondence [1] and its generalization in terms of the holographic principle have provided us with alternative theoretical methods to explore the physics of strongly coupled gauge matter. Holographic models have been constructed to mimic the behaviour of strongly coupled gauge matter in various situations. The Sakai-Sugimoto (SS) model [2,3] is a holographic model which contains chiral fermions in the fundamental representation of U(N_c). Its low energy limit is the closest holographic model of QCD so far. It can also accommodate distinctively the chiral symmetry restoration and the deconfinement phase transition in the non-antipodal case [4]. It provides the interesting possibility of the existence of an exotic nuclear phase where quarks and gluons are deconfined but the chiral symmetry is still broken. In the SS model, there are two background metrics describing a confined and a deconfined phase. The deconfined phase corresponds to the background metric with a black hole horizon. The Hawking temperature of the black hole is identified with the temperature of the dual "QCD" matter. When gluons are deconfined, the thermodynamical phase of the nuclear matter can be categorized into three phases: the vacuum phase, the chirally broken phase and the chiral-symmetric phase. In the deconfined phase, the interaction between quarks and gluons becomes the screened Coulomb potential. If the coupling is still strong, bound states of quarks could form (see Refs. [5][6][7][8][9][10] for multiquark related studies). The phase diagram of the holographic nuclear matter in the SS model is studied in detail in Ref. [11] and extended to include multiquarks with colour charges in Ref. [10]. It has certain similarity to the conventional QCD phase diagram speculated from other approaches, e.g., the existence of a critical temperature line above which chiral symmetry is restored. The phase diagram also shows the thermodynamic preference of the multiquark phase with broken chiral symmetry at moderate temperature in the situation when the density is sufficiently large.
As an implication, it is thus highly likely that matters in the core of neutron stars are compressed into the multiquark nuclear phase. A thorough investigation on the multiquark star suggests higher mass limits of the neutron stars if they have multiquark cores [12]. When the magnetic field is turned on, the phase structure becomes more complicated. Magnetic field induces the pion gradient or a domain wall as a response of the chiral condensate of the chirally broken phase [13]. In the confined phase, this is pronounced [14]. However, it is demonstrated in Ref. [15] that the pion gradient is subdominant to the contribution from the multiquarks in the chirally broken deconfined phase. It was also shown in Ref. [15] that for sufficiently large density, the multiquark phase is more thermodynamically preferred than the chiral-symmetric quark-gluon plasma for small and moderate magnetic field strengths. Therefore it is interesting to explore the phase diagram of the deconfined nuclear matter in the presence of the external magnetic field. We establish two phase diagrams between the chirally broken multiquark (χSB) and the chiral-symmetric quark-gluon plasma (χS-QGP), one at fixed temperature, T = 0.10, and another at fixed field, B = 0.20. The magnetic phase diagram of the similar model for zero baryon density is investigated in Ref. [16]. The phase diagram at finite density is explored in Ref. [17] with the approximation f (u) ≃ 1. We found that for T 0.10, this approximation is no longer valid. Our main results demonstrate that for a given magnetic field and moderate temperature, the most preferred nuclear phase in the SS holographic model is the multiquark-pion gradient (MQ-▽ϕ) phase provided that the density is sufficiently large. We also study the temperature dependence of the baryon chemical potential, the free energy, and the linear response of the pion gradient of the mixed MQ-▽ϕ phase and show that they inherit the temperature dependence mostly from the SS background. Extremely strong magnetic fields could have been produced in many situations. The Higgs mechanism in the cosmological electroweak phase transition could create enormous magnetic fields in the region between two different domains with different Higgs vacuum expectation values [18] which could play vital role in the phase transitions of the nuclear soup at later times. At the hadron and heavy ion colliders, colliding energetic charged particles could produce exceptional strong magnetic field locally. The local magnetic fields produced at RHIC and LHC are estimated to be in the order of 10 14−15 Tesla [19]. On the astrophysical scale, certain types of neutron stars called the magnetars could produce magnetic fields as strong as 10 10 Tesla [20]. This article is organized as the following. In Section 2, the setup of the deconfined SS model with additional baryon vertex and string sources are discussed. Main results are elaborated in Section 3. Section 4 concludes the article. II. HOLOGRAPHIC SETUP OF THE MAGNETIZED MULTIQUARK PHASE The setup we will use is the same as in Ref. [15], the Sakai-Sugomoto model with additional baryon vertex and strings (baryon vertex is introduced in Ref. [21,22]). Starting from a 10 dimensional type IIA string theory with one dimension compactified into a circle which we will label x 4 . Two stacks of D8-branes and D8-branes are then located at distance L from each other in the x 4 direction at the boundary. 
This separation will be fixed at the boundary and it will play the role of the fundamental scale of our holographic model. Open-string excitations with one end on the D8 and D8 will represent quarks with different chiralities. In the background where the D8 and D8 are parallel, excitations for each chirality are independent and there is a chiral symmetry in the background and at the boundary. For background with connecting D8 and D8, chiral symmetry is broken and there is a chiral condensate. When the energy of the connecting configuration is minimal and there is no extra sources, we define the corresponding boundary gauge matter to be in a vacuum phase. Since the partition function of the string theory in the bulk is conjectured to be equal to the partition function of the gauge theory on the boundary, the free energy of the boundary gauge matter is equivalent to the superstring action in the bulk (modulo a periodicity factor) [23]. We turn on non-normalizable modes of the gauge field a V 3 , a A 1 , a V 0 (defined in units of R D4 /2πα ′ ) in the D8-branes and identify them with the vector potential of the magnetic field, B (defined in units of 1/2πα ′ ), the gradient of the chiral condensate, ▽ϕ, and the baryon chemical potential, µ, at the boundary respectively. These curious holographic correspondence between the branes' fields and the thermodynamical quantities of the gauge matter at the boundary allows us to study physics of the strongly coupled non-Abelian gauge matter at finite density in the presence of the external magnetic field. Electric field can also be added using other components of the gauge field on the D8-branes [16,24] but we will not consider such cases here. The background spacetime of the Sakai-Sugimoto model is in the form is the volume of the unit four-sphere Ω 4 and ǫ 4 represents the volume 4-form. l s and g s are the string length scale and the string coupling respectively. R is the compactified radius of the x 4 coordinate. This radius is different from the curvature R D4 of the background in general. The dilaton field is denoted by φ which will be eliminated by the function of u as stated above. The direction of the magnetic field is chosen so that the vector potential is The baryon chemical potential µ of the corresponding gauge matter is identified with the non-normalizable mode of the DBI gauge field at the boundary by The five-dimensional Chern-Simon term of the D8-branes generates another axial part of the U(1), a A 1 , by coupling it with B and a V 0 . In this way, the external magnetic field induces the axial current j A associated with the axial field a A 1 . The non-normalizable mode of this field at the boundary corresponds to the response of the chiral condensate to the magnetic field which we call the pion gradient, ▽ϕ. External field causes the condensate to form a domain wall which can be characterized by the gradient of the condensate with respect to the direction of the applied field. Therefore the pion gradient also acts as a source of the baryon density in our gauge matter. Additional sources of the baryon density and the baryon chemical potential can be added to the configuration in the form of the baryon vertex and strings [10,11]. where n s = k r /N c is the number of radial strings in the unit of 1/N c . Since the radial strings could merge with strings from other multiquark and generate a binding potential between the multiquarks, this number therefore represents the colour charges of the multiquark in the deconfined phase. 
It is interesting to note that when there is only string source representing quark matter, the quark matter becomes thermodynamically unstable under density fluctuations [11]. However, adding baryon vertex together with the strings makes the multiquark configuration stable under the density fluctuations [10]. The multiquark phase is even more thermodynamically preferred than the χS-QGP when the density is sufficiently large and the temperature is not too high. With this setup, then the DBI and the Chern-Simon actions of the D8-branes configuration can be calculated to be where defines the brane tension. The factor 3/2 in the Chern-Simon action fixes the edge effect of the region with uniform magnetic field as explained in Ref. [14]. We can write down the equations of motion with respect to each gauge field a V 0 , a A 1 as d, j A are the corresponding density and current density at the boundary of the background (u → ∞) given by In terms of the gauge fields, they are In order to solve these equations, we need to specify the boundary conditions. Due to the holographic nature of the background spacetime, the boundary conditions correspond to physical requirement we impose to the gauge matter. If we want to address chirally broken phase of the gauge matter, we will take a A 1 (∞) ≡ ▽ϕ to be an order parameter of the chiral symmetry breaking and minimize the action with respect to it. This results in setting On the other hand, if we want to study the chiral-symmetric gauge matter (or chiral-symmetric quark-gluon plasma for N c = 3 case), x ′ 4 and a A 1 (∞) will be set to zero. For vacuum phase, a V 0 , a A 1 and d, j A will be set to zero. In any cases, since the total action does not depend on x 4 (u) explicitly, the constant of motion gives where and C(u) ≡ u 5 +B 2 u 2 , D(u) ≡ d+3Ba A 1 (u)−3B▽ϕ/2. The calculation of x ′ 4 (u c ) is described in the Appendix as a result from the equilibrium and scale fixing condition The equations of motion Eqn. (7), (8) can be solved numerically under the constraint (15). The value of µ, ▽ϕ, u c and the initial values of a V 0 (u c ), a A 1 (u c ) are chosen so that a V 0 (∞) = µ, a A 1 (∞) = ▽ϕ and L 0 = 1 are satisfied simultaneously. III. MAGNETIC PHASE DIAGRAM OF THE DENSE NUCLEAR PHASE Generically, the action (5) and (6) are divergent from the u → ∞ limit of the integration and we need to regulate it using the action of the vacuum which is also divergent. The contribution from the region u → ∞ is divergent even when the magnetic field is turned off and it is intrinsic to the DBI action in this background. The divergence can be understood as the infinite zero-point energy of the system and thus could be systematically removed by regularisation. Therefore the regulated free energy is given by where The three nuclear phases above the deconfinement temperature are governed by the same equations of motion, each with specific boundary conditions as the following, magnetized vacuum phase: We will demonstrate later that in the mixed phase, the pion gradient is generically dominated by the multiquark when the chiral symmetry is broken. In Ref. [15], it is shown that the pure pion gradient phase is always less preferred thermodynamically than the mixed phase of MQ-▽ϕ. It is interesting to note that for the pure pion gradient phase, a large magnetic field is required in order to stabilize the generated domain wall [13]. This critical field is determined by the mass of the pion in the condensate, B crit ∼ m 2 π /e. In Ref. 
[15], this critical behaviour is confirmed in the holographic SS model (the zero-temperature situation is studied in Ref. [25]). More investigation of the pure pion gradient phase in the holographic model should be conducted especially when the field is large since the distinctive feature of physics from the DBI action becomes apparent in this limit. We will leave this task for future work and focus our attention to the mixed MQ-▽ϕ phase in this article. The action of the magnetized vacuum when we set a V 0 , a A 1 = 0 and d, j A = 0 is The position u 0 where x ′ 4 → ∞ of the magnetized vacuum configuration increases slightly with temperature as is shown in Fig. 1. The difference between each temperature decreases as the magnetic field gets larger and all curves converge to the same saturated value u 0 = 1.23 in the large field limit. We can study the temperature dependence of the magnetized multiquark nuclear matter by considering its baryon chemical potential and the free energy as shown in Fig. 2. Both the chemical potential and the free energy decrease steadily as the temperature rises, regardless of the magnetic field. This is originated from the temperature dependence of f (u) = 1 − where for d = 1, B = 0.10; µ 0 = 1.1849, F 0 = 0.7976 respectively. For the baryon chemical potential (free energy), the best-fit value of T 0 is 0.269 (0.233). The fittings are shown in Fig. 3. This could be explained by noting that the regulated free energy is given by µd + Ω(µ, B). The contribution from the first term is dominant therefore the free energy has almost the same temperature dependence as the chemical potential. However, there is a c which for small temperature fractions modifies the temperature function in the following manner, where C 1,2 are some arbitrary constants and C 0 , T 0 are given by It should be noted from Fig. 3 that the temperature dependence is significant for T 0.10 and the approximation f (u) ≃ 1 is not accurate for temperature in this range. The characteristic temperatures we found here are consistent with the phase diagram of the multiquark in Fig. 7. In the multiquark phase when the magnetic field is turned on, the pion gradient is induced by the field in addition to the multiquark. The multiquark phase thus contained the mixed content of multiquarks and the pion gradient. For moderate fields (not too large), the response is linear ▽ϕ ∝ B. In contrast to the case of pure pion gradient phase, the domain wall in the mixed MQ-▽ϕ phase is stable among the surrounding multiquarks even for small field. The critical magnetic field to stabilize the domain wall in the case of pure pion gradient is not required in the mixed phase. where m 0 = 0.347, T 0 = 0.177. The curve fitting is shown in Fig. 4. The density dependence is encoded in m 0 = m 0 (d), T 0 = T 0 (d). As the density increases, the slope of the linear response of the pion gradient becomes smaller as is shown in Fig. 5. The ratio of the pion gradient density and the total baryonic density R ▽ϕ ≡ d ▽ϕ /d = 3B▽ϕ/2d [14] for B = 0.10, T = 0.10 is plotted in the log-scale in Fig. 5 (b). It could be well approximated by from Eqn. (25). This implies that the multiquark states are more preferred than the pion gradient in the presence of the magnetic field, the denser the nuclear matter, the more stable the multiquarks become and the lesser the population of the pion gradient. Finally we compare the free energy of the MQ-▽ϕ phase and the chiral-symmetric quarkgluon plasma phase. 
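Before turning to that free-energy comparison, the temperature dependence just described can be illustrated with a short fitting sketch. Only the fit form µ(T) ≈ µ₀√(1 − T⁶/T₀⁶) is taken from the discussion above; the sampled points are synthetic placeholders standing in for the numerically obtained chemical potential, and scipy's curve_fit is used as a generic least-squares routine.

```python
# Sketch of the fit described above: mu(T) ~ mu_0 * sqrt(1 - (T/T_0)^6).
# The sampled points are synthetic placeholders, not the actual numerical data.
import numpy as np
from scipy.optimize import curve_fit

def mu_fit(T, mu0, T0):
    # clip the argument so trial values of T0 below max(T) do not produce NaNs
    return mu0 * np.sqrt(np.clip(1.0 - (T / T0) ** 6, 0.0, None))

rng = np.random.default_rng(0)
T = np.array([0.02, 0.06, 0.10, 0.14, 0.18, 0.22, 0.25])
mu = 1.1849 * np.sqrt(1.0 - (T / 0.269) ** 6) + rng.normal(0.0, 1e-3, T.size)

(mu0_fit, T0_fit), _ = curve_fit(mu_fit, T, mu, p0=[1.2, 0.28])
print(f"best-fit mu_0 = {mu0_fit:.4f}, T_0 = {T0_fit:.3f}")
```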
For high density, d = 100, this is shown in Fig. 6. For a given density, the multiquark phase is more thermodynamically preferred than the χS-QGP for small and moderate fields. As the magnetic field gets larger, the χS-QGP becomes more thermodynamically preferred. When the field becomes very strong, the transition into the lowest Landau level finally occurs [26]. For a fixed density, increasing magnetic field inevitably results in the chiral symmetry restoration. The phase transition between the MQ-▽ϕ and the χS-QGP is a first order since the free energy is continuous at the transition and the slope has a discontinuity. It implies that the magnetization, M(d, B) = − ∂F E ∂B , of the nuclear matter abruptly changes at the transition. On the other hand, for a fixed field and the moderate temperature, the increase in the baryon density could make the multiquark phase more stable than the χS-QGP. This is shown in the phase diagram in Fig. 7. At a given magnetic field, the multiquark phase could become the most preferred magnetized nuclear phase provided that the density is made sufficiently large and the temperature is not too high. In contrast, the effect of the temperature is the most dominant for chiral-symmetry restoration even when the field is turned on. Sufficiently large temperature will induce chiral-symmetry restoration for most densities as is shown the Fig. 7(b). Nevertheless, theoretically we can always find sufficiently large density above which the multiquark phase is more preferred. The transition line between the MQ-▽ϕ and the χS-QGP phases in the (d, B) phase diagram can be approximated by a power-law for the multiquark with n s = 0 (0.2). This power-law is weaker than the transition line of the χS-QGP to the lowest Landau level studied in Ref. [26] for the antipodal SS model (B ∼ d 2/3 ). The multiquarks with more colour charges (n s > 0) are less preferred thermodynamically but they require higher densities. On the other hand, the transition line in the (d, T ) phase diagram is an increasing function of d but weaker than the logarithmic of the density. Nevertheless, theoretically for a fixed B, T , we can always find sufficiently large density above which the MQ-▽ϕ phase is preferred. The high density region is actually dominated by the multiquark phase indeed. IV. CONCLUSION We diagram is weaker than the logarithmic of the density but nevertheless it is an increasing function with respect to the density. These imply that for sufficiently large density, the chirally broken multiquark phase is the most preferred nuclear phase even in the presence of the external magnetic field. The situation when density becomes extremely large and being dominant occurs in the core of dense star such as the neutron star. Therefore it is very likely that the core of dense warm star composes primarily of the multiquark nuclear matter even when an enormous magnetic field is present such as in the core of the magnetars. It is possible that a large population of the warm magnetars has multiquark cores. These warm dense objects could be relatively more massive than typical neutron stars. Fixing the characteristic scale L 0 to 1 for the brane configuration requires balancing three forces in the gravity picture. The D8-brane tension must be in equilibrium with the tidal weight of the D4 source and the string tension of the colour strings. The derivation of the x ′ 4 (u c ) presented here is the same as in Ref. [15], it is included for completeness. 
We vary the total action with respect to u c to obtain the surface term. Imposing the
5,017.6
2011-03-22T00:00:00.000
[ "Physics" ]
Precise quantification of silica and ceria nanoparticle uptake revealed by 3D fluorescence microscopy Summary Particle_in_Cell-3D is a powerful method to quantify the cellular uptake of nanoparticles. It combines the advantages of confocal fluorescence microscopy with fast and precise semi-automatic image analysis. In this work we present how this method was applied to investigate the impact of 310 nm silica nanoparticles on human vascular endothelial cells (HUVEC) in comparison to a cancer cell line derived from the cervix carcinoma (HeLa). The absolute number of intracellular silica nanoparticles within the first 24 h was determined and shown to be cell type-dependent. As a second case study, Particle_in_Cell-3D was used to assess the uptake kinetics of 8 nm and 30 nm ceria nanoparticles interacting with human microvascular endothelial cells (HMEC-1). These small nanoparticles formed agglomerates in biological medium, and the particles that were in effective contact with cells had a mean diameter of 417 nm and 316 nm, respectively. A significant particle size-dependent effect was observed after 48 h of interaction, and the number of intracellular particles was more than four times larger for the 316 nm agglomerates. Interestingly, our results show that for both particle sizes there is a maximum dose of intracellular nanoparticles at about 24 h. One of the causes for such an interesting and unusual uptake behavior could be cell division. Introduction Measuring the interaction between nanoparticles and cells is a mandatory step for the investigation of nanoparticles designed for medical treatment, and also for a correct risk assessment of nanoparticles. In both cases, knowledge regarding the kinetics of particle internalization gives the dose as a function of the time and allows for the investigation of a variety of parameters on that might influence the uptake behavior. Typical examples are particle characteristics such as size, morphology, chemical composition, surface charge and functionalization [1][2][3]. In addition, access to the number of intracellular particles is essential in studies aimed to compare the effect of similar particles on different cell types [4]. What all these investigations have in common, though, is the need for a fast and accurate method to quantify the uptake of nanoparticle by cells. In vitro cell culture experiments are well-known models to study the uptake of nanoparticles into human cells. Basically, a monolayer of cells is grown on the bottom of a culture well and nanoparticles are added to this culture to interact with the cells. Fluorescence microscopy is commonly the method of choice to visualize this interaction because it can be performed on live cells with high spatial and temporal resolution. Finally, outcomes of the uptake process are normally assessed via qualitative and semi-quantitative analyses of images. The need for a method to rapidly quantify the absolute number of nanoparticles internalized by cells led us to the development of a highly innovative method that integrates high resolution confocal microscopy with automatic image analysis. This method is called Particle_in_Cell-3D and was described in detail in a previous publication [5]. In this work we briefly describe Particle_in_Cell-3D and present how it was successfully applied to precisely quantify the cellular uptake of silica and ceria nanoparticles. 
Silica nanoparticles have a wide range of applications such as in chemical mechanical polishing, cosmetics, food, additives to pharmaceutical drugs, and in biotechnological and biomedical fields [6][7][8][9]. Ceria nanoparticles can be also found in many applications, as in ultraviolet absorbers, automotive catalytic converters, fuel additives, and oxygen sensing [10][11][12][13]. Due to the extensive range of applications and to the potential risks of nanomaterials, a growing number of studies regarding the cytotoxicity of silica and ceria nanoparticles can be found in the literature. As regards silica nanoparticles, several investigations showed that the toxicity increases with decreasing particle sizes, increasing doses and longer exposure times [14][15][16]. In the case of ceria nanoparticles, very contradictory findings have been reported. On the one hand, the anti-inflammatory, antioxidant and radio-protective properties have been described as beneficial applications in nanomedicine [17][18][19]. On the other hand, oxidative stress and impaired cell viability were shown to be a function of the particle dose and the exposure time [1,20]. However, most of the studies concerning the interaction of silica and ceria nanoparticles with cells cannot be directly compared as they were performed by applying different cell types and a variety of different particles. Nanoparticles, such as ceria released from automotive catalytic converters, can be taken up via the respiratory tract and then be transferred into the blood stream [21]. Next, the nanoparticles will be in contact with endothelial cells lining the inner surface of our blood vessel system [22,23]. Endothelial cells play a crucial role in many physiological processes and an altered endothelial cell function can be found in innumerous diseases of the cardiovascular, pulmonary, and neurologic systems [24,25]. Therefore, endothelial cells such as the ones used in the present study (HUVEC and HMEC-1) represent a very appropriate model system to estimate the impact of nanoparticles on human health. Results and Discussion Particle_in_Cell-3D Particle_in_Cell-3D [5] is a custom-made macro for the widely used ImageJ software [26] and can be downloaded from the ImageJ Documentation Portal [27]. It is a semi-automatic image analysis routine designed to quantify the cellular uptake of nanoparticles by processing image stacks obtained by two-color confocal fluorescence microscopy. One emission channel is reserved for the plasma membrane and the other one for the nanoparticles. This means that cell membrane and particles must be fluorescently labeled with spectrally separable markers. The two image stacks acquired can then be processed by Particle_in_Cell-3D. Once the images are loaded, it will execute a series of ImageJ commands to accomplish its goals. The initial part (files selection, input of analysis parameters and 3D reconstruction of the cell) are user-assisted. After these preliminary steps, automatic processing takes place ( Figure 1). Particle_in_Cell-3D uses the image of the membrane to define two subcellular regions or interest: intracellular volume and membrane region. Each particle (or agglomerate of particles) is pseudo-colored according to its location and quantified according to its fluorescence intensity. A final analysis report delivers information about the position of each object, the number of nanoparticles forming that object, and its location in x,y,z coordinates. 
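As an illustration of the two core operations just described, assigning each detected object to a subcellular region and converting its integrated fluorescence intensity into a particle number, a minimal sketch is given below. It is not the published ImageJ macro: the object, the 3D masks, the voxel size and the single-particle intensity are hypothetical stand-ins (the 48,090-count value is only used as a plausible calibration figure of the kind reported later for the silica particles).

```python
# Minimal sketch (not the published ImageJ macro) of object classification and
# intensity-based particle counting. All inputs here are hypothetical.
import numpy as np
from dataclasses import dataclass

@dataclass
class DetectedObject:
    x: float          # centroid, micrometres
    y: float
    z: float
    intensity: float  # integrated fluorescence intensity (a.u.)

def classify(obj, inside_mask, membrane_mask, voxel_um):
    """Assign an object to 'membrane', 'intracellular' or 'extracellular' by
    looking up its centroid in precomputed 3D boolean masks (z, y, x order)."""
    iz = int(obj.z / voxel_um[0])
    iy = int(obj.y / voxel_um[1])
    ix = int(obj.x / voxel_um[2])
    if membrane_mask[iz, iy, ix]:
        return "membrane"
    return "intracellular" if inside_mask[iz, iy, ix] else "extracellular"

def particle_count(obj, single_particle_intensity):
    """Particles per object, assuming self-quenching in agglomerates is negligible."""
    return max(1, round(obj.intensity / single_particle_intensity))

# toy 3D cell: a sphere of "inside" voxels surrounded by a membrane shell
zz, yy, xx = np.indices((20, 20, 20))
r = np.sqrt((zz - 10) ** 2 + (yy - 10) ** 2 + (xx - 10) ** 2)
inside_mask, membrane_mask = r < 6, (r >= 6) & (r < 9)

obj = DetectedObject(x=2.5, y=2.5, z=2.5, intensity=145000.0)
print(classify(obj, inside_mask, membrane_mask, voxel_um=(0.25, 0.25, 0.25)),
      particle_count(obj, single_particle_intensity=48090.0))
```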
All input parameters, processed images, and results are saved and can be accessed at any time. Furthermore, as a calibration experiment is needed for measuring the fluorescent intensity of individual nanoparticles, Particle_in_Cell-3D has a routine to perform these measurements. Main features The main advantages of this method are its speed, reliability and accuracy. The complete analysis of one cell is performed in a few minutes. Moreover, the results are consistent, that is to say, Particle_in_Cell-3D substitutes the subjective character of human-assisted image analysis by its unbiased outcomes. The cell segmentation strategy employed by Particle_in_Cell-3D includes the formation of a three-dimensional membrane region. The width of this region is set by the user and defines an enlarged transition region between extra-and intracellular spaces. It is much wider than the real cell membrane. The accuracy of the cell segmentation strategy and the typical thickness of the enlarged membrane region were studied by comparing the results achieved with Particle_in_Cell-3D with quenching The respective image of silica nanoparticles labeled with perylene, a fluorescent dye. The 3D location of an intracellular particle is marked by the crossing yellow lines. (c) A smoothing filter is applied and the image of the cell is transformed into a white mask. The image stack of masks is further processed to deliver a 3D reconstruction of the cell boundaries. Intracellular and membrane region are also defined in this step. (d) The cell boundaries, or regions of interest, are then used to segment the image of the nanoparticles (yellow outline). The segmentation procedure occurs throughout the image stack, leading to a 3D localization of the particles with respect to the cell. (e) Quantitative image analysis takes place. The intensity of each object (particle or agglomerate) is compared to the intensity of a single particle previously measured in a calibration procedure. Nanoparticles are pseudo-colored according to the cellular region. In this example the cell membrane is shown in cyan, the intracellular nanoparticles appear in red, and the membrane-associated nanoparticles in yellow. (f) 3D representation of nanoparticle uptake after evaluation. Intracellular nanoparticles can be seen through the window intentionally open in the membrane region (cyan). 3D scale bars = 5 µm. experiments. It was shown that the typical width of the membrane region is about 1.4 µm and that our method is able to create a 3D reconstruction of the cell. As regards the accuracy, the counting strategy of Particle_in_Cell-3D is based on the fluorescence intensity of the nanoparticles. The mean intensity of a single nanoparticle, obtained through a calibration experiment, is compared to the intensity of each object and determines the number of nanoparticles forming this object. It is therefore assumed that the selfquenching of dyes in particle agglomerates is negligible. This approach was proved to be accurate by independent stimulated emission depletion (STED) microscopy, a super-resolution technique [28,29]. Although developed for the absolute quantification of the nanoparticle uptake by cells, this method was made flexible to allow for the quantification in absolute and also in relative values. For example, Particle_in_Cell-3D was used to compare the uptake efficiency of therapeutic nanoparticles for gene delivery functionalized with different targeting ligands [30]. 
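Returning to the segmentation step described at the start of this subsection, the enlarged membrane region of user-defined width can be thought of as a 3D morphological dilation of the membrane mask. The sketch below illustrates this idea; it is an assumption-based reconstruction, not the routine actually used by Particle_in_Cell-3D, and the mask, voxel size and target width are toy values (the 1.4 µm width echoes the typical value quoted above).

```python
# Assumption-based sketch (not the Particle_in_Cell-3D routine) of building an
# enlarged membrane region from a binary membrane mask by 3D dilation.
import numpy as np
from scipy.ndimage import binary_dilation

def membrane_region(membrane_mask, width_um, voxel_um):
    """Dilate a binary membrane mask so the region spans roughly width_um in
    total (the region grows on both sides of the original mask)."""
    iterations = max(1, round(width_um / (2.0 * min(voxel_um))))
    return binary_dilation(membrane_mask, iterations=iterations)

# toy example: a thin spherical shell widened to roughly the 1.4 um quoted above
zz, yy, xx = np.indices((40, 40, 40))
r = np.sqrt((zz - 20) ** 2 + (yy - 20) ** 2 + (xx - 20) ** 2)
shell = (r > 11.5) & (r < 12.5)
region = membrane_region(shell, width_um=1.4, voxel_um=(0.25, 0.25, 0.25))
print("membrane voxels:", int(shell.sum()), "-> enlarged region voxels:", int(region.sum()))
```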
In addition, our method was successfully applied to measure the influence of flow conditions on the cellular uptake of nanoparticles. The flow is generated by a novel microfluidic reactor that can be combined with live-cell imaging and is able to cover the entire physiological range of shear rates [31]. Comparison to other methods Customary techniques performed for achieving the dosage of particles taken up by cells include flow cytometry, mass spectroscopy, electron and light microscopies [32][33][34][35][36][37][38][39]. Flow cytometry provides sound statistics due to the large number of cells evaluated in a short time. Nevertheless, it does not deliver spatial information about the position of nanoparticles inter-acting with the cells, e.g., membrane-associated particles and intracellular particles. Mass spectroscopy offers very high sensitivity, but is a sample-destructive technique and spatial information is not obtained. Moreover, results are normally expressed in arbitrary units, and not in absolute numbers. Electron microscopy allows one to achieve detailed information with very high spatial resolution, but the price to pay is to work on fixed cells, with an elaborated sample preparation and timeconsuming measurements. Light microscopy can be used on live cells to acquire loads of data relatively fast. On the other hand, standard light microscopes such as confocal and wide-field instruments are limited by diffraction. The resolution of light microscopes is not enough to resolve particles smaller than approximately 200 nm and a direct quantification of nanoparticles is not possible. Complications to count nanoparticles are further increased by their tendency to agglomerate in biological media [40]. Our digital method was designed to circumvent the abovementioned restrictions of conventional light microscopy. It does not enable the absolute quantification of particles by overcoming the diffraction barrier, but by inferring particle numbers based on the fluorescence intensity of particles. Cell type-dependent uptake of silica nanoparticles In a preceding publication [4] we found that both the uptake behavior and the cytotoxicity of silica nanoparticles are cell type-dependent, but not interconnected. In this section, we want to present in detail how Particle_in_Cell-3D was used to study the cell type-dependent uptake of 310 nm silica nanoparticles into human vascular endothelial cells (HUVEC) and cancer cells derived from the cervix carcinoma (HeLa). The nanoparticle uptake by single cells was measured through confocal microscopy in a time series between 1 and 24 h. The concentration of nanoparticles was 39.5 µg·mL −1 (or 30000 nanoparticles per cell) in all experiments. We found that within the first 4 h of incubation the number of intracellular particles was up to 10 times higher for HUVEC than for HeLa cells. However, after 10 or 24 h of interaction, the amount of particles taken up by HeLa cells strikingly exceeded the amount of silica particles taken up by HUVEC cells. Characterization of silica nanoparticles In order to allow for the investigation with live-cell imaging, silica nanoparticles were labeled with perylene dye. A detailed description of the synthesis can be found in a previous publication [41]. From experiments on the labeling efficiency of perylene, it was estimated that dye molecules cover only about 0.16% of the surface of the particles and, therefore, should not influence the interaction between particles and cells. 
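As a rough, assumption-laden check of the incubation dose quoted above (39.5 µg·mL⁻¹, corresponding to about 30,000 particles per cell), the sketch below converts a mass concentration of 310 nm silica spheres into particle numbers. The particle density, medium volume and cell number per well are not stated in this excerpt and are assumed here; with these guesses the estimate lands in the same order of magnitude as the quoted figure.

```python
# Order-of-magnitude check only; density, medium volume and cell count are
# assumptions, not values taken from the study.
import math

diameter_nm = 310.0
density_g_per_cm3 = 2.0          # assumed density of amorphous silica
conc_ug_per_ml = 39.5
medium_ml = 0.3                  # assumed medium volume per well
cells_per_well = 9_000           # assumed cell number per well

radius_cm = diameter_nm * 1e-7 / 2.0
particle_mass_ug = (4.0 / 3.0) * math.pi * radius_cm**3 * density_g_per_cm3 * 1e6

particles_per_ml = conc_ug_per_ml / particle_mass_ug
particles_per_cell = particles_per_ml * medium_ml / cells_per_well
print(f"~{particles_per_ml:.2e} particles/mL, ~{particles_per_cell:,.0f} per cell")
```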
In fact, cytotoxicity measurements of labeled silica particles compared to unlabeled silica particles showed that the label did not influence the interaction between nanoparticles and cells. The size of the silica particles, 310 ± 37 nm, was determined by transmission electron microscopy (TEM). In addition, the hydrodynamic diameter of the particles over time was determined by dynamic light scattering (DLS) measurements in water and in cell medium. Depending on the properties of the nanoparticles, they may agglomerate in a given cell medium [40]. In the case at hand, the silica particles became slightly agglomerated as the mean particle size increased from 450 nm, when measured in water, up to sizes between 550 nm and 650 nm for all time points investigated. Besides the size, the zeta potential of the particles was determined to be −14.1 ± 1.5 mV in cell medium. For the quantitative evaluation with Particle_in_Cell-3D, it was necessary to measure the mean fluorescence intensity of a single silica nanoparticle. This calibration experiment was carried out by using the same microscope setup used for the cellular uptake experiments, but instead of having cells incubated with nanoparticles, the particles were deposited and spread on a cover slip and, in order to maintain the same environmental conditions, cell medium was added to the particles. The acquired images were evaluated with the subroutine 'Calibration' of our macro and the mean intensity showed a Gaussian distribution with a mean value of 48090 pixel intensities per nanoparticle for silica particles in the cell medium for HeLa cells and 49430 pixel intensities per nanoparticle for silica particles in the cell medium for HUVEC cells. Quantification of silica-nanoparticle uptake In order to investigate the cell type-dependency of the uptake kinetics of silica nanoparticles, living cells were incubated for different time periods: 1, 2, 3, 4, 10 and 24 h. After incubation, the cell medium the containing nanoparticles was removed and the plasma membrane was stained. Confocal image stacks were then acquired and analyzed with Particle_in_Cell-3D. Figure 2 shows representative 3D perspectives of silica nanoparticles internalized by HUVEC and HeLa cells after 3 and 24 h. By using this method it was possible to precisely localize and quantify the particles interacting with the cells. The number of intracellular particles varied considerably from cell to cell. About 30 cells were evaluated per time point, thus resulting in more than 360 cells in total. The statistics for the number of taken up particles per HUVEC or HeLa cells are plotted in Figure 3. A time-dependent increase of nanoparticles from 1 to 24 h is clearly seen for both cell types. Interestingly, HUVEC cells were more efficient than HeLa cells to incorporate particles within the first 4 h. However, the situation changed completely after 10 or 24 h, when the number of intracellular particles for HeLa cells was significantly larger than that for HUVEC cells. Strikingly, our results regarding the cytotoxicity of silica nanoparticles [4] did not reflect our finding for the uptake kinetics. Exposure to silica nanoparticles over 24 h induced cell death in HUVEC but not in HeLa cells. Yet, after 24 h the number of particles internalized by HeLa cells was twice as large as the number of particles incorporated by HUVEC cells. 
Quantitative determination of nanoparticle uptake with Particle_in_Cell-3D helped to show that the nanotoxicity of materials cannot be generalized and transferred from one cell type to another. Size-dependent uptake kinetics of ceria nanoparticles This section is devoted to present quantitative results on the particle size-dependent uptake kinetics of ceria nanoparticles of 8 nm and 30 nm. A massive agglomeration of nanoparticles in cell medium was found. Ceria nanoparticles of 8 nm and 30 nm clustered into 417 nm and 316 nm agglomerates, respectively. Nanoparticles at a concentration of 10 µg·mL −1 were incubated with human microvascular endothelial cells (HMEC-1) for 3, 24, 48 and 72 h and imaged through live-cell confocal microscopy. Cytotoxicity assays performed on similar nanoparticles have shown that, in general, the impact of ceria nanoparticles on endothelial cells (HUVEC and HMEC-1) is not significant, and that adverse effects can only be observed at concentrations as high as 100 µg·mL −1 [42]. Such doses exceed the maximum possible in vivo concentrations. Characterization of ceria nanoparticles In order to be investigated with fluorescence microscopy, the particles were marked with Atto 647N. The synthesis of the ceria nanoparticles investigated in this study is described in the literature [43]. The labeling of these particles with Atto 647N did not alter the biological response of the cells, as assessed by cytotoxicity assays. HMEC-1 cells were incubated over 48 h with 100 µg·mL −1 of either non-labeled or Atto (CeO 2 -30nm). The membrane region outlining the cells appears in gray. The intracellular nanoparticles can be visualized in magenta and particles interacting with the membrane appear in yellow. The agglomerates are taken up by cells inside endosomes and accumulate at the perinuclear region. The amount of internalized particles is increasing over 24 h, but after this incubation time, however, the number of particles inside the cells starts to decrease. This effect is more remarkable for the 8 nm nanoparticles than for the 30 nm nanoparticles. 3D scale bars = 5 µm. 647N-labeled ceria nanoparticles. After this period, the relative adenosine triphosphate (rATP) content was analyzed to determine the metabolic impact of nanoparticles on cells. One hundred percent rATP content would mean that the cellular viability of the cells treated with nanoparticles matches the viability of untreated cells. As shown by Strobel et al. [42], incubation with non-labeled 8 nm and 30 nm ceria nanoparticles resulted in rATP values (mean ± standard deviation) of 82.0 ± 5.6% and 76.3 ± 10.8%, respectively. The rATP contents measured after the exposure to Atto 647N-labeled nanoparticles of 8 nm and 30 nm were 80.1 ± 6.2% and 79.5 ± 14.9%, respectively. Therefore, the fluorescent labeling of the ceria nanoparticles presented in this work did not significantly alter the cytotoxicity of these particles on HMEC-1 cells. The primary size of the two nanoparticles was determined through TEM. One particle type has a diameter of 8 nm and is spherical (CeO 2 -8nm), while the other particle type has a diameter of roughly 30 nm (CeO 2 -30nm) (ellipsoid of 27 nm × 30 nm). It has been shown that the smaller the nanoparticles, the stronger the agglomeration [40]. This has been confirmed in the determination of the hydrodynamic diameter of these particles. DLS measurements were carried out and the size of CeO 2 -8nm increased up to 417 nm in cell medium. 
In the case of the CeO 2 -30nm particles, the diameter in cell medium was determined to be 316 nm. The zeta potential was also assessed in cell medium: −11.3 mV for the 8 nm particles and −12.3 mV for the 30 nm particles. The same procedure described for the silica nanoparticles in the previous section was used to measure the mean fluorescence of single ceria particles. The results were intensities of 131201 pixels (CeO 2 -8nm) and 742814 pixels (CeO 2 -30nm), respectively. There is an important particularity to be mentioned here. The mean intensity of the single particles is in fact the mean intensity of single agglomerates, as it was not possible to obtain single nanoparticles of primary sizes for the calibration experiments. Those agglomerates, however, are in fact the particles that interact with the cells. Quantification of ceria nanoparticle uptake With the purpose of investigating the size-dependent uptake kinetics of ceria nanoparticles for a longer time than traditionally, HMEC-1 cells were incubated with 8 nm (417 nm) and 30 nm (316 nm) nanoparticles for 3, 24, 48 and 72 h. Figure 4 presents illustrative images of the interaction of ceria nanoparticles with endothelial cells. Approximately 15 single cells were measured per time point and per particle type, resulting in a total of 115 cells analyzed in great detail by Particle_in_Cell-3D. These quantitative results are presented in Figure 5 and show that the number of incorporated particles increases steeply between 3 and 24 h, with no significant difference between the two particle sizes. The number of internalized agglomerates of CeO 2 -8nm nanoparticles increased from 337 ± 66 to 2069 ± 248, whereas it increased from 363 ± 37 to 2567 ± 297 for CeO 2 -30nm agglomerates. After this point in time, however, the number of intracellular particles is decreased back to initial levels, 185 ± 61 agglomerates for CeO 2 -8nm and 836 ± 155 for CeO 2 -30nm particles. The dilution of intracellular nanoparticles has been shown to be caused by cell division, as reported in a recent publication [44]. As cells undergo mitosis, intracellular particles of the mother cells are shared with the daughter ones. Cell division may therefore have direct influence by decreasing the number of taken up particles with time. Since the doubling time of HMEC-1 cells is 28.6 h [45], and the dilution of intracellular ceria nanoparticles occurs after 24 h, cell division probably plays an important role in our findings. The number of internalized nanoparticles after 3 h is practically the same for both particle sizes. These numbers then escalates to reach a maximum at around 24 h. After 48 and 72 h, however, the number of particles incorporated by the cells is reduced back to amounts similar to that measured after 3 h. The histograms show the mean ± standard error of the mean of at least two independent experiments (n = 12-16). Results were statistically different (*p < 0.05) for an incubation time of 48 h and highly statistically different (**p < 0.01) for 72 h. Cell division is probably among the dominant causes for the observed dilution of nanoparticles. Yet, other time-dependent parameters may also influence the uptake dynamics. For example, degradation of intracellular particles, exocytosis, cell uptake behavior (e.g., cell-cycle phase dependency, and load capacity), and the number of nanoparticles available for uptake. 
Conclusion The possibility to quantify nanoparticles on the single-cell level is an important step to better understand the mechanisms of nanoparticles-cell interactions. In this work it was demonstrated that results achieved with Particle_in_Cell-3D were decisive to determine the cell type-dependent uptake kinetics of silica nanoparticles. Moreover, the quantification of intracellular ceria nanoparticles showed that there is a significant difference in the uptake kinetics of 8 nm (agglomerate size 417 nm) and 30 nm (agglomerate size 316 nm) nanoparticles. After 48 h, the particles that form smaller agglomerates, i.e., 30 nm nanoparticles, are internalized more efficiently by endothelial cells. In addition, our findings offered a new insight into the remarkable dilution of intracellular nanoparticles, possibly influenced by cell division. Particle_in_ell-3D can be applied to investigate the dose-dependent effects for the risk assessment of nanoparticles. Additionally, this method can be used to study which factors are determinant for the successful attachment, internalization and cargo release of nanoparticles designed for medical applications. Experimental Nanoparticle characterization Nanoparticle size was determined by transmission electron microscopy (TEM). TEM micrographs were acquired by a JEM 2011 (JEOL, Japan) transmission electron microscope. The nanoparticle dispersion was diluted with EtOH or MeOH and applied onto a carbon-coated copper grid (Plano, Formvar coalfilm on 200 mesh-net). The sizes of the nanoparticles were then determined from TEM images through digital image analysis with the ImageJ software [26]. Zeta potentials and hydrodynamic diameter (through dynamic light scattering) were measured in ultrapure water and in cell medium (see section 'Cell culture' for details) with a Zetasizer Nano (Malvern Instruments, UK). In order to break down agglomerates, the resulting solution was vortexed for 10 s, treated in an ultrasonic bath for 10 min and vortexed again for 10 s. Uptake experiments For live-cell imaging experiments, cells were seeded 24 h before imaging in 8-well Nunc™ Lab-Tek™ II chamber slides (Thermo Fisher Scientific Inc., Germany) at a density of 1.1 × 10 4 cells·cm −2 . HeLa and HUVEC cells were incubated with silica nanoparticles as described before [4]. HMEC-1 cells were incubated with ceria nanoparticles in humidified 5% CO 2 atmosphere at 37 °C. The 10 µg·mL −1 solution of ceria nanoparticles was prepared in the same cell medium used for cell growth. Before addition to cells, the solution was vortexed for 10 s, treated in an ultrasonic bath for 10 min and vortexed again for 10 s. After the incubation time, and just before measurements, the cell membrane was stained with a solution of 10 µg·mL −1 wheat germ agglutinin, Alexa Fluor® 488 (Life Technologies) in cell medium, incubated at 37 °C for 1 min, and washed twice with warm cell medium. Cytotoxicity assay The procedure for the determination of the relative cellular ATP level of ceria nanoparticles is described in detail by Strobel et al. [42]. Live-cell imaging Imaging was performed on a Zeiss spinning disk confocal fluorescence microscope equipped with a Zeiss Plan Apochromat 63× /1.40 Oil/DIC objective. Samples were in 5% CO 2 atmosphere at 37 °C during imaging and were illuminated with laser light alternating between 488 nm and 639 nm, exciting the cell membrane stain and the Atto 647N dye (labeling the ceria nanoparticles), respectively. 
Image sequences were captured with an electron multiplier charge-coupled device camera (Evolve 512, Photometrics, USA). Several planes of the cells were imaged with a spacing of 250 nm and a detection time of 100 ms per confocal section. Statistics The unpaired Student's t-test was used for statistical analyses. Values were expressed as the mean ± standard error of the mean. Results were considered to be statistically different at p < 0.05 and highly statistically different at p < 0.01. For the determination of the relative ATP content, values represent the means ± standard deviation (n = 3).
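The statistical comparison described above (an unpaired Student's t-test on per-cell particle counts, reported as mean ± SEM with significance thresholds at p < 0.05 and p < 0.01) can be reproduced with a few lines of scipy; the two groups below are hypothetical counts, not the measured data.

```python
# Sketch of the statistical comparison described above; the groups are hypothetical.
import numpy as np
from scipy import stats

group_a = np.array([2069, 1850, 2210, 1990, 2320, 1780, 2150, 2005])  # e.g. condition A, per-cell counts
group_b = np.array([2567, 2410, 2720, 2300, 2650, 2480, 2590, 2700])  # e.g. condition B, per-cell counts

def mean_sem(x):
    return x.mean(), x.std(ddof=1) / np.sqrt(x.size)

t_stat, p_value = stats.ttest_ind(group_a, group_b)  # unpaired, two-sided Student's t-test
for name, g in (("group A", group_a), ("group B", group_b)):
    m, sem = mean_sem(g)
    print(f"{name}: {m:.0f} ± {sem:.0f} (mean ± SEM)")
label = "**p<0.01" if p_value < 0.01 else "*p<0.05" if p_value < 0.05 else "n.s."
print(f"t = {t_stat:.2f}, p = {p_value:.4f} ({label})")
```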
6,057.8
2014-09-23T00:00:00.000
[ "Biology", "Materials Science" ]
A Secure IoT-Based Cloud Platform Selection Using Entropy Distance Approach and Fuzzy Set Theory With the growing emergence of the Internet connectivity in this era of Gen Z, several IoT solutions have come into existence for exchanging large scale of data securely, backed up by their own unique cloud service providers (CSPs). It has, therefore, generated the need for customers to decide the IoT cloud platform to suit their vivid and volatile demands in terms of attributes like security and privacy of data, performance efficiency, cost optimization, and other individualistic properties as per unique user. In spite of the existence of many software solutions for this decision-making problem, they have been proved to be inadequate considering the distinct attributes unique to individual user. This paper proposes a framework to represent the selection of IoT cloud platform as a MCDM problem, thereby providing a solution of optimal efficacy with a particular focus in user-specific priorities to create a unique solution for volatile user demands and agile market trends and needs using optimized distance-based approach (DBA) aided by Fuzzy Set Theory. Introduction One of the greatest inventions of Gen Z, Internet has rapidly emerged over the last two decades, connecting people and organizations together into one giant family. This connectivity has generated the urgency of Internet of Things (IoT) [1], which involves sensors, software devices, and other technologies, for the purpose of maintaining the security and privacy of humongous data transmission among other devices and systems [2]. For this sole purpose, several distinct IoT platforms have come into existence with their own unique cloud service providers (CSP) at the backend. But, like every coin has two sides, i.e., it has also led to a problematic situation when it comes to the selection of ideal CSP for a selected set of attributes that purview a finite set of requirements, to assist the process of decisionmaking where one has to deliberate multiple attributes, possible scenarios, market trends, and user biases [3]. According to the best of research and knowledge and observation, no compatible and comprehensive study and solutions been done for this integrated set of requirements in the field of cloud service provider selection (CSPS). However, there exist a lot of work that includes some of our set of factors quality which elaborately and accurately formulates algorithm for some quality factors and some for the technical aspects [4,5]. Divergent from the preexisting schemes, thus, providing a flexible, realistic, and compatible methodology towards cloud service selection (CSS) considering all the possible factors under the sun required for an ideal cloud service. 1.1. Significance of the Research. In spite of the existence of many software solutions for this decision-making problem, they have been proved to be inadequate considering the distinct attributes unique to individual user. Research Gaps in the IoT-Based Cloud Service Providers. The requirement of decision-making among the various cloud service providers amalgamated with IoT applications has led to the emergence of several software solutions in recent years. However, multiple demerits can be observed in the performance and efficiency of these platforms [6]. The current short comings present in the IoT-based cloud service provider solutions involve numerous dimensions including the inefficiency of the platforms to extend for supporting the heterogeneous sensing technologies. 
Other demerits include the proprietorship of data, providing insinuations of privacy and security [7]. The processing and sharing of information can also be counted as another gap especially in scenarios where it is essential to support novel services. The absence of assistance provided by application developers is another shortcoming faced by several IoT cloud platforms [4]. Furthermore, most of these IoT platforms do not possess the property of expansion for the addition of new components to withstand the emergence of new technologies and provide economies of scale. Lastly, the delivery of the purchased software to the respective connected devices is also not supported by a majority of the marketplaces dedicated for IoT applications. Multicriteria Decision-Making (MCDM) techniques provide a scientific and easy solution. MCDM deals with organizing variegated attributes which come under the purview of decision-making. It specializes in handling issues where the proximity of attributes is close, and human cognitive abilities are not able to take the logical decisions. It does so by performing bargains or trade-offs by replacing one criterion by equivalent another. This paper presents an integrated set of factors that contribute the solution to the problematic issue of the selection of an optimal IoT cloud platform. In a nutshell, the qualities mentioned below make the proposed methodology novel when compared to state-ofthe-art techniques: (1) Identification and categorization of selection attributes (SA): after thorough and detailed studying of more than fifty research papers, few factors, i.e., selection attributes (SA) were filtered out. About 90 factors were carefully studied, and explicitly observed and relevant factors were mined out by removing redundant elements and those which were similar to each other. Finally, these factors were then categorized into broad three categories after extensive reasoning and filtration, namely, Literature Review This section of the research is concerned with the existing studies in the field of selecting optimal cloud computing service provider for IoT-based applications, where the problem of service selection has been represented as an MCDM problem. To search the relevant data, the keywords like Cloud Computing for IoT, Cloud Platforms for IoT Services, IoT based Cloud Service Selection, IoT Service Selection Attributes, and Cloud Service Selection for IoT were used. As a result of this research, a total of 104 research papers from various highly reputed journals and conferences were analyzed in detail. Now, these papers were screened by examining their primary focus, whether it is related to the cloud service selection or not. Then, in the second screening, the approaches used and the case studies mentioned along with the selection attributes (SA) which were mentioned in these research papers were deliberate to make a comparative study of the same. The comprehensive tabular literature survey is shown in Table 1. This paper presents and develops a hybrid decisionmaking framework using two methodologies, namely, Fuzzy Set Theory and Matrix Multicriteria Decision-Making (MMCDM) where identification and categorization of 14 selection attribute are prepared into three categories, namely, quality factors, technical factors, and economic factors. 
After removing redundant features and filtering unnecessary information, thereby making this framework relatively less vulnerable to prerequisites and limitations as compared to available frameworks and techniques in cloud computing service selection. Security and Privacy Challenges in Cloud-Based IoT Platforms. While IoT and its applications are well explored and secure, the cloud-based IoT platforms are still comparatively less explored and nascent in nature [18]. Categorized in two purviews, static and mobile-based platforms both have variegated challenges on grounds of security and privacy. There are multiple security challenges including identity privacy that deals with protection of details of user of the cloud devices like his/her personal real-world information. Other threats include disclosure of the real-time location of user termed as location privacy [19]. Node compromising attack is also one of the most enduring threats to user's privacy as it includes planned attacked to gain access to user's private information [20]. Removal or addition of transmission multiple layers is a very mundane breach performed by various IoT users; it involves manipulating the concept of reward Table 1 Citation/name Methodology Advantages Disadvantages [8] This study proposes a multistep approach to evaluate, categorize, and rate cloud-based IoT platforms via implementing Multicriteria Decision-Making (MCDM), probabilistic linguistic term sets (PLTSs), and finally, a probabilistic linguistic best-worst (PLBW) is used to score all platforms Though the proposed method seems complex but a real-time implementation via case study provides cogent proof of its efficiency. It also outperforms individual scoring, classification, and evaluating methods. The data used in the case study is limited which explains the flow of the method but falls back to prove its cogency. Moreover, inclusion of latest hybrid techniques in the domain for comparative analysis could further edify the study's significance. [9] Cloud service provider selection approach is proposed via application of Multicriteria Decision-Making (MCDM), analytical hierarchical process (AHP), technique for order of preference by similarity to ideal solution (TOPSIS), and the best-worst method (BWM). Case study is presented to support the same. The study successfully identifies and provides solutions the drawbacks of classical multicriteria decision-making (MCDM) approaches in terms of accuracy, time required, and complexity of computation. AHP is outperformed by proposed approach. The use case scenario presented used stimulated scenarios and data that raises question against the cogency of the proposed study. [10] Additive manufacturing based cloudbased service providing framework is proposed to include both hard and soft services for the ease of customer use. These include data-based testing, design, 3D printing, remote control of printers, and face recognition using AI. This study understands and provides solution to the real-time consumer or customer problems. Its feature providing framework proves to be easy, feasible, and effective. The study only provides a framework along with it merit without any details of implementing or developing the framework for real world application. [11] The study is aimed at identifying various determinants that cause deprecation of various ministry of micro, small, and medium enterprises in India, contributing a huge impact on Indian economy. Data is collated from 500 Indian MSMEs. 
Multiple criteria include social influence, Internet of Things, perceived ease of use, trust, and perceived IT security risk, among others. Advantages: This study evaluates real-time data from 500 MSMEs, which proves its cogency; moreover, it provides insight that can be directly used by policy makers to create maximum impact. Disadvantages: A comparative analysis with other policy-insight algorithms, along with the impact of implementing the recommended changes, would create more clarity and value for the research.

[12] Methodology: A comparative analysis is performed to obtain the best cloud-based IoT platform for a business or organization by considering multiple criteria and functional and nonfunctional requirements among five major platforms, namely, Azure, AWS, SaS, ThingWorx, and Kaa IoT, applying techniques such as the analytical hierarchical process (AHP), K-means clustering, and statistical tests. Advantages: The hierarchical method of requirement classification gives the method an edge, and the statistical tests applied to the results create an increased sense of cogency and significance for the study. Disadvantages: The set of cloud-based IoT platforms considered is limited, creating a false sense of performance with respect to evaluating more than five platforms; moreover, hierarchical requirement classification is very time- and effort-intensive.

[13] Methodology: IoT applications built via cloud-based platforms are assayed for security challenges and data inconsistency issues that arise due to third-party auditors and phishing attacks; strategies to prevent these are also provided. Advantages: The objective of the study is highly relevant to the need of the hour, providing valid and much-needed information, along with recommendations to handle these issues. Disadvantages: The scope of the study is limited to theoretical analysis, without any real data implementation or case study to prove the cogency of the points mentioned in the paper.

3.2. Distance-Based Approach. The distance-based approach (DBA) is an effective and efficient MCDM method. Identifying and defining the optimized state of the multiple attributes involved in the process is the first step of the proposed method. The optimal state, represented by the vector OP, is the set of best values of the criteria over the range of alternatives. The best values can be maxima or minima, depending on the type of criterion. As indicated in Figure 1, the vector "OP" is the optimal point in a multidimensional space. It acts as a reference point against which the values of all the alternatives are analyzed quantitatively. In other words, the arithmetic difference between the current values of the alternatives and their corresponding optimal values is taken, which represents the ability of the considered alternatives to achieve the optimal state. The decision-making problem to be dealt with is then the search for a viable solution on the basis of its proximity to the optimal state. In Figure 1, "H" represents the feasible region and "Alt" the alternatives. The distance-based technique aims to determine the point in the "H" region that is in closest proximity to the optimal point. To implement this approach, let i = 1, 2, 3, ..., n index the alternatives and j = 1, 2, 3, ..., m the selection attributes. A matrix is created to represent the entire set of alternatives along with their respective criteria, as shown in (1).
[14] Methodology: This study establishes the need for authorization in cloud-enabled IoT systems by assaying the various security threats that such a setup encounters via two case studies, and it proposes a control-based authorization system. Advantages: The aim of the study is very cogent and current, addressing recent developments in cloud-based IoT applications; the case studies presented support the study and contribute to the significance of the proposed framework. Disadvantages: The framework proposed for control-based authorization lacks any implementation or effort towards prototype development.

[15] Methodology: An attack distribution detector is proposed to prevent the malfunctioning of trust boundaries in IoT-based applications, which can lead to severe data theft; a downsampler-encoder-based cooperative data generator is proposed to discriminate noisy data that may cause such malfunctions. Advantages: The continuous updating and verification of the model gives it optimal results and performance in detecting probable data thefts; the model outperforms earlier machine learning and deep learning techniques. Disadvantages: Inclusion of the latest hybrid techniques in the domain for comparative analysis could further strengthen the study's significance.

[16] Methodology: Various cogent issues with IoT middleware are brought to attention while proposing a state-of-the-art IoT middleware that can integrate with MQTT, CoAP, and HTTP as application-layer protocols. Advantages: The problem addressed by the In.IoT framework is cogent, and its relevance is shown very accurately in the study. Disadvantages: A comparative analysis with classical middleware and the latest hybrid techniques could further strengthen the significance of the study.

[17] Methodology: An intrusion detection technique for cloud-based IoT applications is proposed by implementing machine learning, to obtain state-of-the-art accuracy and an in-depth analysis of the source and type of intrusion. Advantages: The survey of 95 developments in intelligence-based intrusion detection techniques gives the study significant relevance and grounds for a comparative analysis with the proposed technique. Disadvantages: Though the study shows optimal accuracy, false-positive results still hamper its cogency.

Returning to the distance-based approach: this matrix is known as the decision matrix [d]. Next, we take the priority weights of these attributes according to the opinions of various experts and calculate their averages. We take the sum of these averages and divide it by each of these averages. The result is another matrix with a single row and as many columns as there are attributes; this matrix is known as the priority weights matrix [PW], as shown in (2). Using Equations (3), (4), and (5), the decision matrix is standardized to minimize the impact of different units of measurement and to simplify the process, where d_j is the average of each attribute over all alternatives, S_j is the standard deviation of each attribute over all alternatives, d_ij is the value of each attribute for an alternative, and d'_ij is the standardized value of each attribute for that alternative. The final matrix is known as the standardized matrix [d'] and is represented in (6). The best value of each attribute is then selected over the set of alternatives; the best values can be maximum or minimum values, depending on the type of attribute specified. The matrix formed from this set of values is known as the optimal matrix [O], as shown in (7). The distance of each alternative from its optimal state is calculated as the numerical difference between the values of each of its attributes and their corresponding optimal counterparts. The resulting values form a matrix called the distance matrix [O'], as represented in (8).
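One consistent reading of Equations (3)-(5) and (8), assuming the standard z-score form with a population standard deviation (an assumption, since the original typesetting of the equations is not reproduced here), is:

```latex
\bar{d}_j=\frac{1}{n}\sum_{i=1}^{n} d_{ij},\qquad
S_j=\sqrt{\frac{1}{n}\sum_{i=1}^{n}\bigl(d_{ij}-\bar{d}_j\bigr)^{2}},\qquad
d'_{ij}=\frac{d_{ij}-\bar{d}_j}{S_j},\qquad
O'_{ij}=O_j-d'_{ij},
```

where O_j is the best (maximum or minimum, depending on the attribute type) standardized value of attribute j over the set of alternatives.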
Each value of this matrix is then squared and multiplied by their corresponding priority weights, as explained by Equation (9). Wireless Communications and Mobile Computing The resulting matrix is called the weighted distance matrix [W] as shown in (10). Equation (11) is used to calculate the composite distance, "CD" between each alternative to the optimal state. The one-column matrix formed as a result of this equation is called the composite distance matrix [CD] as shown in (12). The last step of this method involves calculating the rank of each alternative by using their composite distance values. The smallest value gets the 1st rank, the second smallest value gets the 2nd rank, and so on. This is how the DBA MCDM approach is used for cloud service provider selection. Below, Figure 2 represents the model development of the methodology. Estimation-of-Distribution Algorithms. These algorithms are general metaheuristics applied in optimization to represent a recent alternative to classical approaches [21]. EDAs build probabilistic models of promising solutions by repeatedly sampling and selecting points from the underlying search space. EDAs typically work with a population of candidate solutions to the problem, starting with the population generated according to the uniform distribution over all admissible solutions [22]. Many distinct approaches have been proposed for the estimation of probability distribution, Implementation, Results, and Discussions Evaluating various cloud service providers using DBA (distance-based approach) methodology with Fuzzy Set Theory to calculate ranks based upon selection attributes is described by the following steps: (2) Identification of selection attributes: three major factors were identified which were, namely, quality factors, technical factors, and economic factors. They were further classified as: quality factors (functionality, reliability, usability, efficiency, maintainability, and portability), technical factors (storage capacity, CPU performance, memory utilization, platform design, and network speed), and economic factors (service induction cost, maintenance cost, and promotion cost) after the detailed analysis and intensive study of the cloud service providing industry and its various prerequisites along with understanding the market where this industry thrives ( Figure 3). Wireless Communications and Mobile Computing distinguishes real-world problems based upon human comprehensive skills rather than absolute Boolean logic. In other words, the fuzzy system implements scales rather than 0/1 for coherent human understanding where 0 represents absolute fallacy, 1 represents absolute truth, and the middle values represent the fuzziness or the fuzzy values. In this study, we have implemented a triple fuzzy number scale which uses a triplet set of the form [a, b, c] with a sensory scale (Tables 2 and 3) [28]. A survey was conducted among a group of 40 selected experts associated with the technical field. The ( Table 2) questionnaire consisted of 14 pristine questions, based upon which a priority weights matrix was created consisting of the weights or values of the assorted attributes. While in the second questionnaire (Table 3), the nine already selected CSPs were appraised on the grounds of the 14 categorized selection attributes by an adept team of 5 experts. 
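As an illustration of how linguistic survey responses of this kind can be turned into crisp numbers, the sketch below converts expert ratings into a crisp score via a triple fuzzy number. The linguistic scale, the centroid defuzzification (a + b + c)/3, and the simple averaging across experts are assumptions made only for illustration; the paper's actual scale and aggregation are given in its Tables 2 and 3, which are not reproduced here.

```python
# Hypothetical linguistic-to-TFN mapping; the paper's own scale (Tables 2 and 3) may differ.
TFN_SCALE = {
    "very low":  (0.0, 0.1, 0.3),
    "low":       (0.1, 0.3, 0.5),
    "medium":    (0.3, 0.5, 0.7),
    "high":      (0.5, 0.7, 0.9),
    "very high": (0.7, 0.9, 1.0),
}

def defuzzify(tfn):
    """Centroid defuzzification of a triple fuzzy number [a, b, c] (assumed form)."""
    a, b, c = tfn
    return (a + b + c) / 3.0

def crisp_score(expert_ratings):
    """Average several experts' linguistic ratings into one crisp value."""
    crisp = [defuzzify(TFN_SCALE[r.lower()]) for r in expert_ratings]
    return sum(crisp) / len(crisp)

# Example: five experts rating one CSP on one selection attribute.
print(crisp_score(["high", "medium", "high", "very high", "medium"]))
```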
The data extracted from the questionnaires mentioned above were converted from a linguistic scale to the TFN (Triple Fuzzy Number) scale and then averaged to a fuzzy number. (4) Determination of weights and performance ratings: the expert-assigned linguistic terms were first converted into corresponding TFNs using the fuzzy scale and then defuzzified to get crisp score values. The data were extracted from the questionnaires and then evaluated using a combination of mathematical formulas and the concepts of aggregation and averaging. (5) Creating performance rating matrices: a decision matrix of the performance ratings (Figure 4(a)) and a single-row matrix of the priority weights (Figure 4(b)) were created under expert guidance using the fuzzy scale and MCDM. (6) Calculating the standardized matrix: the root mean square of each selection attribute is carefully evaluated; the previously determined mean is then subtracted from each value and the result divided by the corresponding root mean square of that particular selection attribute to obtain the standardized matrix (Figure 5). (7) Creating the optimal and distance matrices: the optimal matrix is estimated by taking the best value of each selected attribute of the standardized matrix (Figure 6(a)), i.e., the maximum values for quality and technical factors and the minimum values for economic factors. Additionally, the distance matrix is calculated by finding the distance between each value of a particular selection attribute and its corresponding best value (Figure 6(b)). (8) Calculating the weighted and composite distance matrices: by squaring the respective values of the distance matrix and multiplying them by the corresponding priority weights, the weighted distance matrix was obtained (Figure 7(a)). This matrix was then used to evaluate the composite distance matrix by calculating the square root of the total sum for each alternative (Figure 7(b)). (9) Ranking of cloud service providers: finally, the alternatives are ranked in increasing order of their corresponding values in the composite distance matrix. Therefore, rank 1 is the most preferable while the maximum rank, i.e., rank 9, is the least preferable considering the given set of alternatives (Table 4). The selection of a cloud service provider is a problematic task, as many decision-making parameters are taken into consideration, such as security, cost optimization, availability, reliability, and fault tolerance, to name a few. Most of the mentioned factors are not constant but individualistic: every consumer who requires a cloud service provider has an almost unique set of demands and requisites, and each selected attribute carries a different weight from the others, i.e., prioritized attributes are not rare. Considering this scenario, the Multicriteria Decision-Making technique has shown significant efficacy and is widely implemented in the field, as it provides both individualistic and consistent results. Table 4 shows the ranking of nine cloud service providers based on fourteen carefully discerned attributes categorized into three categories, namely, quality factors (functionality, reliability, usability, efficiency, maintainability, and portability), technical factors (storage capacity, CPU performance, memory utilization, platform design, and network speed), and economic factors (service induction cost, maintenance cost, and promotion cost).
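To make the step-by-step procedure above concrete, the following is a minimal sketch of the distance-based ranking on a small, made-up decision matrix. The provider names, weights, ratings, and the use of a population standard deviation in the standardization step are placeholders and assumptions for illustration, not the paper's survey data.

```python
import numpy as np

# Hypothetical crisp decision matrix: rows = alternatives (CSPs), columns = attributes.
# Benefit attributes are better when larger; cost attributes are better when smaller.
d = np.array([
    [7.2, 6.5, 8.1, 3.0],   # CSP-A
    [6.8, 7.9, 7.4, 2.5],   # CSP-B
    [8.0, 6.1, 6.9, 3.8],   # CSP-C
])
attr_is_cost = np.array([False, False, False, True])  # e.g. last column = maintenance cost
pw = np.array([0.35, 0.25, 0.25, 0.15])               # priority weights from the expert survey

# Standardize each attribute: subtract its mean, divide by its standard deviation.
d_std = (d - d.mean(axis=0)) / d.std(axis=0)

# Optimal matrix: best standardized value per attribute (max for benefit, min for cost).
optimal = np.where(attr_is_cost, d_std.min(axis=0), d_std.max(axis=0))

# Distance, weighted distance, and composite distance for each alternative.
dist = optimal - d_std
weighted = pw * dist ** 2
composite = np.sqrt(weighted.sum(axis=1))

# Rank 1 goes to the smallest composite distance.
ranks = composite.argsort().argsort() + 1
for name, cd, r in zip(["CSP-A", "CSP-B", "CSP-C"], composite, ranks):
    print(f"{name}: composite distance = {cd:.3f}, rank = {r}")
```

On real data the decision matrix and the priority weights would come from the defuzzified questionnaire responses described above rather than from hard-coded values.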
The step-by-step procedure described above, in which the priority weights and the decision matrix are extracted from survey data using a fuzzy scale and then standardized, optimized, and refined by the priority weights, proves to be a simple, effective, and reliable method for selecting an optimal cloud service provider. Figure 8 shows a graphical representation of the same. Conclusion and Future Work The current extensive use of cloud-based IoT services for computation, storage, infrastructure, and other needs has led to a greater demand for an efficient methodology to decide which cloud service provider meets one's unique and individualistic requirements for the ever-changing solutions in the field of IoT. Given the scenario of current cloud-based IoT applications with multiple service providers and varied requirements, many decision-making criteria and methodologies already exist. These include TOPSIS, a useful and straightforward technique for ranking several possible alternatives according to their closeness to the ideal solution; AHP; VIKOR, which is based on an aggregating fuzzy merit that represents the closeness of an alternative to the ideal solution by compromising between two or more options to reach a unified opinion across multiple criteria; and PROMETHEE, which compares the available measures by the technique of outranking. Despite the pre-existing techniques to classify, evaluate, and rate the various IoT-based cloud service providers, the continuously changing set of attributes, the challenges to users' privacy and security, user authentication, location privacy, the disparate demands of customers, and the colossal pool of available attributes ranging from performance and cost optimization to quality make it very difficult to obtain consistent results for one's ever-changing characteristics and agility, even after examining the given set of methodologies in the available literature. Therefore, this research addresses the scenarios, demands, and unique characteristics mentioned above by using an optimized matrix methodology aided by the distance-based approach (DBA); some of its salient features are as follows: (a) it considers a broad set of categories that are further graded into subattributes, i.e., quality, technical, and economic factors, each individually optimized for ultimate efficacy; (b) the concept and procedure are simple, straightforward, reader-friendly, and easily captured and understood by anyone; (c) it takes into consideration the priorities among attributes as extracted from users through survey data. Conclusively, the presented research methodology uses a distance-based approach, optimized to consider priorities as set by the data extracted from the user survey, in a simple, lucid yet compelling procedure to select an ideal cloud service provider for IoT applications. This study considers nine alternatives, i.e., popular cloud service providers, and 14 attributes or deciding criteria. Privacy and security are the two most prominent emerging challenges in IoT applications as provided by cloud service providers, owing to the nascent nature of the field. Though IoT-based applications have already been explored from the aspects of privacy and security, implementing IoT applications via cloud-based platforms leads to a new set of possible threats.
In future work, this study intends to evaluate varied cloud-based IoT platforms from the aspects of security and privacy by analysing them under the purview of three criteria. Firstly, the future work will deal with users' individual privacy and security threats such as location privacy, breach of personal information, protection of users' hardware and software devices, and user profile authentication. Other criteria include privacy and security challenges for a multilevel organization, namely, secure route establishment, isolation of malicious nodes, self-stabilization of the security protocol, and preservation of location privacy. Lastly, this study would assay multiple case studies of security and privacy breaches on leading cloud-based IoT platforms to perform a comparative analysis of the same. Data Availability The data will be provided based on a request by the evaluation team. Consent All the authors of this paper have participated voluntarily.
5,959.8
2021-05-17T00:00:00.000
[ "Computer Science" ]
Teachers' perceptions about collaboration as a strategy to address key concepts in mathematics Background: The aim of the study is to investigate teachers' perceptions about how peer collaborative work in designing lessons as a team helped them to identify threshold concepts in the teaching and learning of foundation phase mathematics in the Motheo District of Education. Methods: A qualitative approach, with a case study design, was used to combine data from observation and focus group discussions, interviews and group task sheets. Classroom observation took place during a workshop conducted by a subject advisor from the Motheo District of Education in collaboration with the researcher. Teachers were purposively selected from seven schools in the Motheo District of Education based on cluster sampling, as a way of reviving their professional development through the acquisition of mathematical teaching skills involving innovative approaches to the teaching and learning of early childhood mathematics. Seven mathematics teachers, one from each school, were interviewed during the workshop. Results: Underpinned by a collaborative theory, the findings of the study revealed that peer collaboration in mathematics teaching was key to helping the participant teachers identify threshold concepts in mathematics that they had initially found difficult as individual teachers. This assisted them in teaching the subject effectively at the foundation phase level. The study, furthermore, established that collaboration by mathematics teachers was necessary in order to overcome the paucity of global mathematics teaching skills for early childhood mathematics, to foster learners' knowledge of mathematical concepts and to stimulate their interest in the subject. Conclusion: It is recommended that more structured collaborative work amongst teachers in general should be encouraged to enable teachers to overcome the problem of content gaps in their areas of specialisation. Introduction The dawn of democratic governance in South Africa in 1994 has been followed by a series of reforms in the education system (Khuzwayo 2005), with much attention being focused on teacher preparation and readiness for classroom teaching and learning. The Department of Education's policy documents are based on the assumption that teachers' content knowledge has a significant influence on learners' learning. Research conducted in many parts of the world reveals that teachers' content knowledge makes a difference in their classroom instructional practices as well as in their learners' achievement (Mishra & Koehler 2006; Newborn 2001; Shulman 1986). Research conducted by Loewenberg Ball, Thames and Phelps (2008) states that teachers must really know the subject that they teach, because if they themselves do not know the subject well, they will find it difficult to have the knowledge they need to help learners learn the content.
Robert-Hull, Jansen and Cooper (2015) advocate that the way and manner in which candidates are prepared to be teachers in many parts of the world have a critical influence on what teachers can do to change the teaching environment and on what their learners learn in school. Teachers should be prepared adequately for their profession. Shepherd (2012) alluded to the fact that, for both quantity and quality, South African school teachers were ranked low, based on learner performance, because of poor foundations in mathematics and science compared to other developing countries in Africa; most developing countries are still finding their feet in the development of Early Childhood Education (ECE)/foundation phase (FP) programmes as a public service for the public good, and their efforts are characterised by a lack of commitment to the implementation of various policies by their respective governments (Britto, Yoshikawa & Boller 2011). He further indicated that many mathematics and science teachers could not teach the subjects properly because of the lack of content knowledge for FP, and as a result, poor teaching contributed to the poor performance amongst learners in mathematics and the sciences (Shepherd 2012). Teaching mathematics is a fundamental process that should demonstrate both subject content knowledge (SCK) and pedagogical content knowledge (PCK) (Shulman 1986). Being equipped with such knowledge enables teachers to be more effective, flexible and fluent thinkers, confident in their use and application of knowledge to FP learners and processes. However, Rigelman (2007) explained that the mathematics representations and explanations given to learners in the classroom are often characterised by the teachers' poor conceptual understanding or knowledge of the subject. Newborn (2001) contends that in America, the types of knowledge essential for the teaching and learning of mathematics in elementary schools have been a research area for the last 40 years. Borasi (1990) maintains that mathematics, like any other subject at school, is an important means of bringing a basic understanding of logic and invention to people. Therefore, it would appear that mathematical knowledge, like any other subject knowledge, is valued and must be presented in a manner that portrays conceptual understanding. This statement is affirmed in research conducted by Tsang and Rowland (2005), who state that to teach mathematics effectively, teachers must have good mastery of the substantive and syntactic structures of mathematics.
Teachers must not only be capable of telling learners about the accepted facts, concepts and principles of different branches of mathematics, but also be able to explain to learners why a particular mathematical principle is deemed warranted, why it is worth knowing and how it relates to other principles within the same branch and across other branches of mathematics (Tsang & Rowland 2005). This is in line with a statement by Ghazali (2019) that: [S]tudents need to learn mathematics in ways that enable them to recognize when mathematics might help to interpret information or solve practical problems, apply their knowledge appropriately in contexts where they will have to use mathematical reasoning processes, choose mathematics that makes sense in the circumstances, make assumptions, resolve ambiguity and judge what is reasonable in the context. However, it seems that in many countries, teacher preparation programmes for FP learners in rural areas are sometimes delayed for no apparent reason and do not facilitate the acquisition and development of the content knowledge teachers require to teach the curriculum well. Some of these programmes are not implemented early, at preschool, for the learners to acquire skills, but are put off until they attain the age of 6 years or more, depending on the child's ability to manipulate objects (Preston & Haines 2014); hence the current study. The current study was triggered by a request from a senior education specialist in the Motheo education district of the Free State province of South Africa to organise an intervention programme to support mathematics teachers in the district, with the ultimate aim of improving the quality of teaching and learners' performance in mathematics in the early stages of the FP level. Prior to this intervention programme, the researcher realised that there was a deficiency in teachers' professional learning and development through collaboration in the district, which is needed to support the progressively complex skills learners require from teachers to learn mathematics in preparation for further education and work in the 21st century. Darling-Hammond, Hyler and Gardner (2017:v) state that sophisticated forms of teaching are needed to develop student competencies such as deep mastery of challenging content, critical thinking, complex problem-solving, effective communication and collaboration, and self-direction. According to these researchers, this is a very effective professional development (PD) strategy needed to help teachers learn and refine the pedagogies required to teach mathematics skills. However, research has shown that many PD initiatives in many parts of the world appear ineffective in supporting changes in teacher practices and student learning, hence the current study. Further interrogation by the researcher revealed teachers' views, which indicated that they experienced problems in handling some of the mathematical concepts and that they had to consult their colleagues in most cases for assistance. This lack of SCK, it seems, was the result of forced redeployment, where redeployed teachers were required to teach mathematics. A workshop was organised by the researcher and mathematics subject advisors in order to address teachers' lack of SCK.
This paper, reporting on the workshop, aims to identify teachers' perceptions about peer collaboration amongst mathematics teachers as a way to address difficult key concepts or to identify the threshold concepts in early childhood mathematics teaching and learning. To address the above problem, the following main question is posed: How can collaborative work on designing lessons as a team help teachers identify threshold concepts in the teaching and learning of early childhood mathematics? Literature review: Peer collaboration amongst teachers Peer collaboration and co-operation amongst teachers are regarded as key factors in improving teachers' PD. International researchers such as Ni Shuilleabhain and Seery (2018), as well as Fullan and Hargreaves (1992), argue that in order for any relevant and successful fundamental change to occur in the classroom, teachers must be encouraged at all times to collaborate with their peers. This serves as part of a learning curve for the teachers in their communities in order to overcome the problem of content gap experienced in some topics within some subjects. Teacher collaboration in a community can provide a powerful structure within which individual teachers can attempt to understand and reflect on new approaches to teaching and learning relevant to their own school context, learners and culture (Dogan, Pringle & Mesa 2016;Vescio, Ross & Adams 2008). What works best for learners is what teachers, as well as the Department of Education, agitate for in schools. Studies have shown that the concept of teamwork is pervasive within the United States Army but found to be limited in the world of academia (Charbonneau et al. 2010). A review of related literature in international contexts has revealed that effective collaboration amongst peer teachers for lesson planning is a form of development of teacher knowledge. Furthermore, collaboration encourages a more learner-centred approach to teaching and learning of mathematics (Dudley 2013;Lewis, Perry & Hurd 2009;Murata et al. 2012;Ni Shuilleabhain 2016). In collaborative engagement with peers, with the view of helping each other, teachers are able to deal directly with the curriculum, identify the aims and objectives of their teaching and plan lessons which are definitively linked to both the philosophy and content of the curriculum (Cajkler et al. 2014;Takahashi & McDougal 2016). It is believed that the feeling of wanting to help one another to accomplish a goal is like leaders empowering their subordinates to accomplish a certain mission or different missions in the classroom environment. This is relevant to everyday teachers' interactions with learners. Peer collaborative work aims at planning and designing lessons through reflection on action and helps teachers to understand mathematical concepts (either difficult or easy), stimulate critical thinking amongst learners and promote learning through hands-on activities. The ability to persistently and carefully consider what and how teachers teach, and to reflect on their actions as teachers to determine what works best for their learners, is central to successful teaching because reflection is a vital component of learning how to teach well (Myers 2012). In some cases, some teachers and learners do not easily understand mathematical concepts. In those instances, collaboration and co-operation amongst teachers is the only way for teachers to gain this understanding. 
In this study, teachers' perceptions about collaboration were introduced as a new school-based teacher model of PD facilitated through a workshop. Teachers of a particular community met and collectively discussed and identified key mathematical concepts that needed to be taught but with which they were not familiar. Doing so facilitated and improved the teaching of mathematics and further assisted teachers to improve their understanding of certain mathematical concepts. Theoretical framework This study is underpinned by Vygotsky's sociocultural theory which states that because learning takes place amongst individuals, it is an inherently social process activated through the zone of proximal development (Dillenbourg 1999). The study also incorporates Shulman's five proposed content-specific domains of teacher knowledge, as reported by Pasley (2011). Shulman (1986:9) defines SCK as 'the amount and organisation of knowledge in the minds of a teacher' and includes knowledge of mathematics facts, concepts and procedures and their relationships. Pedagogical content knowledge is described as a particular form of content knowledge that represents how the aspects of content are to be taught for conceptual understanding and mathematics knowledge. In teaching mathematical concepts, teachers' knowledge of the concept should consider the importance of the shaping effect of learners' experiences. It is through experiences that the impact of human culture on understanding and acceptance occurs, and it is where an individual constructs the rules and conventions of language with the extensive functional outcomes manifested around us in human society where learning occurs, as advocated by Jaworsk (1994). Vygotsky's views on this theory contributed significantly to social constructivist epistemology, which dwells on how learning is mediated by collaboration in accordance with the context and by sharing of personal experiences with peers through open discussion in the process of overcoming teaching and learning obstacles. This type of learning is a type of social interaction which concentrates on cognitive development of individuals through discussion and sharing of information (Lantolf & Thorne 2006;Lin 2015). The aim of every teacher is to establish a professional learning community conducive for his or her learners as one of the effective means for enhancing teachers' PD effort in the teaching environment. Studies have shown that different methods or forms such as theory-driven approach and Manabu Sato's learning community theory can be applied to structure all components of teacher PD workshops that impact positively on teachers' teaching beliefs, knowledge and skills acquisition for better teaching and learning (Darling-Hammond et al. 2017;Lin & Wu 2016). Accordingly, this can be done in different forms based on the specific needs of individual schools and teachers' needs or objectives. The need for professional support through collaboration would help teachers establish a platform convenient for collegial dialogue amongst colleagues in order to pool wisdom and ideas that support their understanding of difficult areas in order to optimise learning and teaching. 
Teachers embark on this collaborative discussion to ensure that their students understand the need to learn mathematics in ways that enable them to recognise when mathematics might help to interpret information or solve practical problems, apply their knowledge appropriately in contexts where they will have to use mathematical reasoning processes, choose mathematics that makes sense in the circumstances, make assumptions, resolve ambiguity and judge what is reasonable in the context (Commonwealth of Australia 2008:11). This was an important aspect in this study as participant teachers collaboratively shared their ideas in order to identify strategies of teaching certain concepts that they considered as mathematics content gaps. The purpose of the workshop was to create a learner-centred approach at the FP level combined with collaboration to develop greater understanding of certain mathematical concepts for effective teaching and learning of mathematics especially at the FP level (Lewis 2016). Methodology This research followed a qualitative approach with a case study, as the research sought to understand the experiences of teachers who collaborated as a way to identify and understand certain mathematical concepts that were perceived by the participating teachers to be difficult to teach at the FP level as well as the relevant principles that enhance effective teaching and learning of ECE. Following a discussion with the mathematics district curriculum specialist (DCS), mathematics teachers from certain schools especially where we have FP classes were invited to participate in the research. Participants (30 mathematics teachers) were purposefully selected from various schools in a Free State Education District by means of a cluster sampling technique. At least one teacher was selected from each cluster, with a total of 15 clusters based on FP levels. The aim was to have at least one representative from each cluster, whereby he or she would share the skills and strategies acquired through this collaboration with his or her cluster members at a convenient time in teaching of ECE learners. The purposive sampling technique was used to select 30 FP level mathematics teachers for the purpose of this study. Ten mathematics teachers each from the following three categories were used for the selection of the participant teachers: 10 mathematics teachers from high-achieving schools either in the current or previous teaching experience (five from urban schools and five from rural schools), 10 mathematics teachers from averageperforming schools (five from urban schools and five from rural schools) and 10 mathematics teachers from lowperforming schools (five from urban schools and five from rural schools) in the Free State province. These schools were identified by relying on the Annual National Assessment results for the subsequent 3 years in the province and have experience of teaching FP levels, skills and strategies used in teaching these children for meaningful understanding, as most of their parents do not have time for their children at home. The purpose of the selection strategy was to share a variety of opinions from different teaching and learning environments. Observation, focus group discussions, group task sheets and in-depth interviews were used as data collection strategies. Evidence was gathered by observing a group of 30 mathematics teachers who participated in a workshop (see Figure 1, Figure 2 and Figure 3), with seven participants being interviewed. 
Because the researcher did not want to interfere unnecessarily in the lesson planning and discussion of concepts, the data collected were essentially limited to feedback on the topics selected by the teachers for discussion, based on their objective for the discussion and curriculum coverage. The dataset gathered for this study over the progression of the sequence included: • Objectives regarding the collaboration to identify threshold concepts effective for ECE. • Action plans to solve the problems under discussion using physical or real objects of ECE. • Reports and feedback pertaining to the resolution of the problem (P1) elaborated on by individual teachers in the study. • Final reports on the resolutions of subsequent problems to be discussed or solved with learners in the class regarding the actual teaching of learners. • Contingency plans that show teachers' reflective practices used to identify other threshold concepts in early childhood-level mathematics teaching and learning and to assist learners further to solve problems where possible. • Reports from individual teachers indicating the success or failure of the collaboration in identifying certain mathematical concepts. • The way forward for identifying more threshold concepts for future discussion. In addition to the data listed above, the researcher requested information from the participant teachers on the performance of their learners in their various schools prior to the collaboration, the mathematics curriculum and the textbooks used in teaching and learning. The researcher also used the field notes he had collected to assist in triangulating the data gathered from the teachers. Teachers were later interviewed for further details to augment their outputs. Instrument Observation and focus group discussions, interviews and group task sheets were used for data collection (see Figure 1, Figure 2 and Figure 3), which, according to Maree (2016), can be used to collect data from the sources. Research procedure Before any discussion of threshold concepts took place, three of the participant teachers taught three different lessons, with the group observing. An observation schedule, designed according to a strand or strands of teaching mathematics for proficiency, was used to identify particular concepts and how these were presented to the group. The lessons were also videotaped for the purpose of reflection and discussion. Doing so assisted the researcher in critically analysing teachers' mathematics knowledge. The video recording assisted the group in successfully identifying the listed threshold concepts for group discussions. During the course of the discussions, equal opportunities were given to each participant teacher to contribute, share their opinions and clarify points in relation to the identification of threshold concepts that assist learners with problem-solving. This was followed by discussion and in-depth interviews. Throughout the study, recordings were made and analysed of participants who, when teaching and during in-depth interviews, actually demonstrated interesting mathematics knowledge for identifying key concepts in their presentations and in the task sheet. Trustworthiness and credibility of the study Based on the purpose of this study, relevant participants and suitable instruments were selected. The data collected in this research were recorded electronically and transcribed for analysis.
The participant teachers were given the opportunity to review the transcriptions to ensure that they were accurate. The researcher used semi-structured questions to guide the interviews, video recorded the participants as they presented their views based on the questions asked or under discussion. The researcher also studied the lesson plans for lessons presented during this study. The researcher also ensured descriptive validity which actually goes into the details of what actually had been gathered at the field; hence he used open and transparent procedures in gathering the raw data without any fabrication of any part of information (Maree 2016). Data analysis Prior to the analysis, all recordings were transcribed. The researcher analysed the data collected, that is, raw data captured from the responses of the participants who were asked the same set of questions, including some body language as well as facial expressions exhibited by the participants, which were recorded in field notes. An inductive analysis approach was used, and the data analysis was guided by specific evaluation objectives, which involved a detailed reading of the raw data to derive the concept, themes or models. This understanding of inductive analysis in research starts with an area of study and allows the theory to emerge from the data (Miles & Huberman 1994). An analysis of transcriptions, in terms of answering questions and teaching using appropriate mathematics knowledge added significantly to the richness of the research data. Ethical considerations Ethical clearance was obtained from the University of the Free State, Ethical Clearance Number: UFS-HSD2018/0395. 17/07/2018 Findings and results The purpose of the research was to establish mathematics teachers' perceptions about peer collaboration amongst teachers as a way to address difficult key concepts or identify the threshold concepts in early childhood mathematics teaching and learning. The data analysed in this research were gathered through observation, focus group discussion, interviews and task sheets. After teachers engaged in collaborative discussion and shared ideas with one another and with the researcher (see Figure 1, Figure 2 and Figure 3), the participant teachers were able to create and construct appropriate learning opportunities that assisted them in identifying threshold concepts in mathematics for teaching and learning of primary school mathematics. The results are presented according to focus groups followed by observations, task sheet responses and, finally, the interview data. Focus group and discussion-based results of professional development The teachers formed groups comprising at least seven teachers per group of four (see Figure 2 and Figure 3). Gender disparities were considered to avoid bias and cluster sampling avoided aspects relating to age, experience, race and resources of the various school. Questions were randomly distributed to group members for discussion according to the themes that had emerged from observation during lesson presentation. The researcher and research assistants carefully monitored what was discussed when questions were presented to a particular group, and all information was recorded and checked by the assistants. 
Before focus group discussions commenced, every participant teacher was made aware that when a question was directed at a particular group to elicit comments or viewpoints, they could answer or contribute if no further contributions were forthcoming from that group as a way to develop participant teachers professionally (see Figure 3). The purpose was to ensure that there was positive change in the lives of teachers regarding their teaching and learning of ECE as well as their learners. Darling-Hammond et al. (2017:v) define effective PD as structured professional learning that results in changes in teacher practices and improvements in student learning outcomes. According to them, the use of relevant methodology in structuring effective PD through collaboration involves seven features which include being content focused, incorporating active learning, supporting collaboration, using models of effective practice, providing coaching and expert support, offering feedback and reflection and being of sustained duration, which provides teachers with adequate time to learn, practise, implement and reflect upon new strategies that facilitate changes to their teaching practice. This idea was adhered to in this study; therefore, during the focus group discussions, all group members were asked to pay attention to the answers given by the particular group in order to deliberate successfully and achieve better understanding of the emerging categories or themes from the discussion. During the focus group discussions, the following comments were provided by participants in their various groups about the value of collaboration in workshops (see Figure 3): 'Actually, in our group, we were really impressed by the way various teachers solved problems on fractions. For the fact that we are mathematics teacher does not mean that we know everything. Some of the skills, methods, strategies demonstrated by some teachers through this research has been overwhelming in dealing with ECE mathematics. We are really blessed to have collaborative work like this that exposed us to different opinions of solving mathematics problems or different ways of identifying mathematical concepts which makes teaching of mathematics in FPs very ease.' (Group 3, teacher 2) 'We need to embark on this kind of project very often whether we like it or not because we get to know many things during group discussion which is really difficult for most of us to understand when we plan alone as individual teachers in our respective schools. We could now see different ways of addressing mathematical problems or identifying mathematical concepts that will definitely help us to guide our future learners by showing different skills, activities, concepts and models to make our teaching enjoyable and understandable to our learners which we never knew at the beginning (See Figure 1). Really, we thank the organisers of this programme because we have known now that working together as groups and planning and sharing of ideas openly like this is really valuable and helps to build our skills of identifying some concepts with ease in ECE.' (Group 1, 'Unlike the teacher who only challenged the learner by mere talking without any illustrations is not a good way or procedure of teaching FP learners. 
The reason why most teachers cannot develop their skills and strategies by applying practical work in their teaching is that, in most cases, the problem may come from the Department of Education, whereby you are being forced to complete the syllabi at all cost without taking into consideration the cognitive level of the learners. When this happens, you will be forced to teach abstractly without doing illustrations of this nature and this does not help you as a teacher teaching FP learners. This must in fact, be looked into by those at the management level in order to recruit teachers with relevant skills and strategies to improve teaching and learning of FP mathematics.' (Group 1, teacher 1) 'There no doubt that anybody here will oppose collaborative work looking at what we have acquired here through challenging, discussion, probing and demonstration. We need to advise the department to make provision for this type of workshop which we hardly get so that we will be able to share our ideas. We are lucky that DCS for mathematics is here with us and we hope to see him taking this request to the provincial manager.' (Group 4) Teachers' perceptions on observation and task sheet analysis During classroom observation of lessons presented by individual teachers, the focus was on the way teachers presented their lessons in relation to their content knowledge and PCK so as to identify key concepts based on the topic presented in line with what Darling-Hammond et al. (2017) advocated in order to develop deep mastery of challenging content, critical thinking, complex problem-solving, effective communication and collaboration and self-direction. It was observed that even though teachers were able to mention some of the key concepts in the activity projected (See Figure 1), some teachers found it difficult to identify certain key concepts in some of the topics they have to teach. For example, based on Figure 1, an activity was projected on the screen and teachers were asked to identify the key concepts in this mental maths activity. This is what some teachers had to say: 'This topic is about 'factors' and 'multiples of 2, 4 and 20.' (Teacher C) 'This particular example is very easy but at times, you find it very difficult to identify some key concepts in topics like fractions, 3D shapes and naming of intercepts in geometry, so we always need to do collaborative work like this to empower us to overcome any barriers in teaching of early childhood mathematics of this nature.' (Teacher A) 'Even though collaborative work is good for teachers for PD; however, it needs time and commitment to make it work effectively. But looking at our time schedules nowadays, it's not easy to have teachers come together to have this kind of engagement. I can't imagine having a collaborative work like this in my lifetime.' (Teacher C) Teachers' perceptions on group discussion for professional development During group discussions, comments and questions raised by the teachers on the task sheet provided opportunities to understand the content and produce golden rules for effective teaching and identification of mathematical concepts. This form of providing coaching and expert
Peer collaboration amongst mathematics teachers is a way to address difficult key concepts or to identify the threshold concepts in early childhood mathematics teaching and learning, which threaten teachers' feelings and confidence. The solutions presented for questions revealed the kind of mathematical knowledge teachers possessed and how that knowledge paved the way for them to identify certain concepts in some aspects of their teaching and learning of mathematics regarding both content and method. Teachers shared ideas through group discussion, and it revealed what was happening in their classes to promote the development of mathematical proficiency. By identifying mathematical concepts during teaching and learning, examining how teachers present their lessons to their learners and linking ideas in different contexts produce meaningful learning. How the concepts could be linked to real-life situations were checked in line with the kind of concept being taught. The categories and themes identified based on knowledge of mathematics (content) and knowledge of instructional practices (method) were used to compare what Kilpatrick et al. (2001) call the 'five strands of mathematics proficiency', namely, conceptual understanding, procedural fluency, strategic competence, adaptive reasoning and productive disposition. This was followed by fellow teachers' judgment on their presentations by the other participating teachers. A fraction question was posed to the teachers so that they could demonstrate their skill in identifying mathematical concepts to facilitate the understanding of their learners. Question 1: If the answer to a sum of a particular problem is 2 7 , demonstrate how you would assist a Grade 7 learner in order for him or her to write down the relevant numbers correctly. What will the numbers be? Explain your answers. The following were the responses by the groups: . This is because addition of these fractions will result in the answer 2 7 . During focus group discussions, an attempt was made by the various group members to answer this question; however, some only provided single solutions without explaining their chosen answer. Group A, for instance, simply wrote without giving further explanation, thus failing to consider alternatives that could have been used. Considering the purpose of the study, as well as teachers' experience of teaching mathematics at the Senior Phase level, this question might have been too simple and they might have believed further explanation was unnecessary and not worth giving. What is interesting though is Group D's lack of response to or opinion on the question but their challenge to the credibility of Group B's answer, even though the first three groups gave different answers. What should be questioned, though, is that all these teachers provided single solutions. This probe by Group D gave a platform for the teachers to deliberate on ways that helped them to understand and identify certain concepts easily, which they had initially found difficult. In actual fact, establishing that the sum of the numbers is Initially, the teachers believed the sum involved two numbers being added; however, as discussions continued, it was evident that the participant teachers knew the sum was not just a mere adding of only two numbers, but addition of different numbers. It was clear that teachers found engaging in such a collaborative discussion was useful because skills for identifying mathematical concepts could be acquired easily. 
However, time and commitment are needed to make this method or approach work effectively. Discussion of results Extensive research into teacher communities is not common in mathematics education; therefore the need for collaboration is worth considering (Gellert 2008). This study was conducted through a teacher collaboration workshop comprising (30) mathematics teachers from various schools in a Free State Education District. The strategy used was peer collaboration teamwork that helped to identify key concepts in mathematics teaching and learning. The purpose was to supplement traditional approaches to mathematics teaching and learning currently used by teachers in the schools (Maree et al. 2005). Teacher proficiency levels, which are a factor related to the content as well as the application of pedagogical knowledge that needs to be mastered by mathematics teachers for teaching mathematics in schools, were investigated by this study through the sharing of ideas (Darling-Hammond & Sykes 2003;Darling-Hammond et al. 2017;Johnson & Kritsonis 2006). Results discussed here form part of a larger qualitative study that investigated difficulties experienced by mathematics teachers in teaching mathematical concepts in schools. The study revealed that peer collaboration in early childhood mathematics teaching is key to helping teachers identify threshold concepts in mathematics that they had initially found difficult as individual teachers (See Figures 1-3). Collaboration helped them to teach the subject effectively at the FP level. This finding is in line with the claims by Jaworsky (1994), who explained that in teaching mathematical concepts, teachers' knowledge for presenting concepts should reference the importance of shaping learners' experiences, because this is where the impact of the human culture of understanding and acceptance occurs and where the rules and conventions of language use are constructed by an individual with the extensive functional outcomes manifested where learning occurs. This supports teachers' collaborative efforts when they share ideas with one another, as they are given the opportunity to address learners' problems effectively and identify mathematical concepts in teaching and learning. In the same way, Vygotsky's views on this theory contribute significantly to social constructivist epistemology, which dwells much on how learning is mediated in a collaborative manner in accordance with the context, and by sharing personal experiences with peers through open discussion in order to overcome teaching and learning obstacles. This type of learning is similar to social interaction, which concentrates on cognitive development of individuals through discussion and sharing of information (Lantolf & Thorne 2006;Lin 2015). The study, furthermore, established that collaboration by mathematics teachers of different calibres is necessary to overcome the paucity of global mathematics teaching skills for childhood-level mathematics, in order to foster learners' knowledge of mathematical concepts and to stimulate their interest in the subject. Sarason (1993) maintains that if one wants to change the sphere of learners, one needs to first change the education of teachers; hence teachers were fully involved in this research for almost 3 consecutive weeks, after which the intervention yielded fruitful results. Sarason (1993) maintains further that it is necessary to prepare educators for what life is like in classrooms, schools, school systems and society. 
It is an interesting phenomenon to embark on mathematics teachers' development through collaboration, which serves as a platform characterised by notions of negotiation and identification of certain topics that seem crucial for individual teachers who normally prepare their lessons in isolation. The purpose was to create a learner-centred approach to effective teaching and learning of mathematics, as advocated by Lewis (2016). Ni Shuilleabhain and Seery (2018) and Fullan and Hargreaves (1992) argue that in order for any relevant and successful fundamental change to occur in classrooms, teachers must be encouraged at all times to collaborate with their peers. Engaging teachers in such collaboration is therefore key to helping each other identify the key concepts in mathematics teaching. It is believed that mathematics is a sequential process or development, fixed to a certain person, topic, environment or idea that changes or influences the life of that person through thinking and doing. Researchers such as Chamoso, Cáceres and Azcárate (2012) and Schon (1983) support this notion. Teachers came together and shared ideas in order to overcome the challenges posed by dealing with mathematical concepts. Within the boundaries of a PCK demonstration, Shulman (1986) explains that: [T]he knowledge for the most regularly taught topics in one's subject area, the most useful forms of representation of those ideas, the most powerful analogies, illustrations, examples, explanations, and demonstration in a word, the ways of representing and formulating the subject make it comprehensible to others. This indicates the vital importance of what the teacher knows, how much he or she knows of the content and that he or she knows how to present it so that learners understand. Peer discussion, thinking, doing and sharing of ideas should always go hand in hand in mathematics learning, as it is these activities that help stimulate learners' creativity in divergent ways through coaching (Vygotsky 1978). This could be even more effective when teachers meet as peers and discuss their experiences based on what they know and how they do it, in order to develop important concepts for teaching and learning of mathematics. There are various opinions on what it means for a teacher to know the content that must be taught or the appropriate way to present it to the learners for conceptual understanding (Pasley 2011). Research has demonstrated that teachers can develop themselves better professionally in teaching for mathematics by strengthening the relationship between their instructional practices and their underlying knowledge base through collaboration (Gellert 2008). Thus, communities of mathematics teachers at primary schools could improve their mathematics knowledge and routines of teaching the subject to help learners understand better if they engage in solid collaboration amongst themselves. Doing so will help teachers acquire new skills and knowledge, and they will be able to put their new visions of mathematically rich classroom activities into practice, where development of their knowledge base precedes the development of their instructional practices through the sharing of ideas (Gellert 2008;Gellett 2003). Conclusions and recommendations From the findings of this study, it can be concluded that teachers demonstrated different kinds of mathematical knowledge, knowledge of instruction and knowledge of curriculum to identify threshold concepts in mathematics. 
Through extensive collaboration, teachers can develop and acquire knowledge and skills relevant to tracking unnecessary misconceptions amongst learners in the mathematics classroom and hence develop an interest in understanding mathematical concepts in everyday life. The study concluded that collaboration was beneficial for teachers in the following ways: it helps in providing coaching and expert support for teachers, which involves the sharing of expertise about content and evidence-based practices, and it focuses directly on teachers' individual needs, as well as their learners, for PD through content-focused discussion. It further incorporates active learning amongst teachers whereby they share their problems and find solutions through collaborative support. They make use of models of effective practices that offer sustainable feedback and reflection, which provide teachers with adequate time to learn, practise, implement and reflect upon new strategies to facilitate changes in their teaching practice. Based on the results, it is recommended that a teacher collaboration network should be organised for teachers. Teachers demand PD programmes such as workshops and in-service training to be fully implemented to assist the teachers to grow and develop professionally. Teachers also indicated that there is a need to enforce teamteaching amongst mathematics teachers, which encourages monitoring of the progress of all the mathematics teachers in the schools in the province. It is further recommended that collaborative class observation, discussion and mutual result reflection should be engaged in on a regular basis.
9,662.8
2022-07-21T00:00:00.000
[ "Mathematics", "Education" ]
An Analytic Framework for Assessing Artificial Intelligence and Assistive Automation Enabled Command and Control Decision Aids for Mission Effectiveness Author Note: This work is supported by the United States Army Combat Capabilities Development Command (DEVCOM) Analysis Center (DAC) under Support Agreement No. USMA 2255. The views expressed in this paper are those of the authors and do not reflect the official policy or position of the U.S. Military Academy, U.S. Army, U.S. Department of Defense, or U.S. Government. The authors would like to thank the DAC Combat Simulations team for their support throughout the project. Abstract: The U.S. Army has significant interest in operationalizing Artificial Intelligence and Assistive Automation (AI/AA) technologies on the battlefield to help collate, classify, and clarify multiple streams of situational and sensor data to provide a Commander with a clear, accurate operating picture to enable rapid and appropriate decision-making. This paper offers a methodology integrated with combat simulation output data into an analytic assessment framework. This framework helps assess AI/AA enabled Decision Aids for command and control with respect to mission effectiveness. Our methodology is demonstrated via a real-world operational vignette of an AI/AA-augmented Battalion assigned to clearing a sector of the battlefield. Results indicate that the simulated scenario with an AI/AA advantage modeled led to a higher expected mission effectiveness score. Introduction The U.S. Army is currently developing Decision Aids that incorporate Artificial Intelligence and Assistive Automation (AI/AA) technologies into the operational battle space.According to the U.S. Army Maneuver Center, soldiers can be up to 10 times more effective in combat when assisted by AI/AA systems such as Decision Aids (Aliotta, 2022).A Decision Aid is a tool designed to assist Commanders in combat scenarios by reducing their decision time while improving decision quality and mission effectiveness (Shaneman, George, & Busart, 2022); these tools help collate operational data streams to assist Commanders with battlefield sense-making to help them make informed, real-time decisions.One problem associated with using AI/AA enabled Decision Aids is that the Army currently lacks a validated framework to assess tool usage in an operational environment.As such, in this paper we describe our research, design, and development of an analytic framework coupled with modeling and simulation to assess AI/AA Decision Aids for command and control in terms of mission effectiveness. As part of our analytic framework development, we conducted extensive literature review along with stakeholder analysis with over 30 stakeholders who are knowledgeable in the domains of AI/AA, Decision Aids, command and control, and modeling and simulation.These stakeholders were placed into focus groups based on their familiarity with aforementioned topics.We conducted virtual focus group meetings with each group, gathered feedback, and used it to drive our findings, conclusions, and recommendations (FCR).Concurrently, we developed a realistic battlefield vignette and scenario.Using this scenario and our FCR output, we collaborated with the U.S. 
Army DEVCOM Analysis Center (DAC) to develop a functional hierarchy of objec-tives to measure through modeling and simulation.We transferred our hypothetical combat scenario into One Semi-Automated Forces (OneSAF), a simulation software that utilizes computer-generated forces, offering models of entities and behaviors that are partially or entirely automated, and intended to support Army readiness (PEOSTRI, 2023).Using the Analytical Hierarchy Process, we elicited assessment decision-maker preferences and computed weights to objectives in the functional hierarchy and created a spreadsheet model that incorporates output data from OneSAF and provides a quantitative value score.Using A-B testing, we gathered scores for a baseline simulation as well as one in which AI/AA effects were modeled.We compared results of the A and B scenarios and assessed the effects that AI/AA had on mission effectiveness of friendly forces in the simulation. Literature Review Analytic assessment frameworks enable quantitative and/or qualitative data to be evaluated for a multiple criteria decision problem.The qualitative frameworks such as the Kano Model (Violante & Vezzetti, 2017), French Question and Answering (Hordyk & Carruthers, 2018) and Qualitative Spatial Management (Pascoe, Bustamante, Wilcox, & Gibbs, 2009) are used mainly for stakeholder input and brainstorming (Srivastava & Thomson, 2009) without intensive calculation or labor.Quantita-tive assessment frameworks are data-driven and provide a mathematical methodology to determine a system's functions through measures of performance and measures of effectiveness.The Analytic Hierarchy Process (AHP) is applicable to our problem given its use of hierarchical design with pairwise decision maker preference comparison to provide qualitative and quantitative analysis through comparative weighting (Saaty, 1987).While AHP has been used in many applications, to our knowledge this methodology has not been used to assess AI/AA enabled Decision Aids or coupled with A-B testing for assessment. Command and control (C2) systems are used to provide a more detailed, accurate, common operating picture of the battlefield in order to enable effective decision-making; these C2 systems are largely built to increase situational awareness (SA).Studies have shown that Commanders using digitized information display methods, something an AI/AA enabled Decision Aid could enhance, display greater levels of SA than Commanders using radio communications to gather information (McGuinness & Ebbage, 2002).The value gained from AI/AA integration with C2 can be likened to a "cheat" in a combat video game: it provides an information advantage on how the enemy operates and helps friendly forces avoid costly consequences (McKeon, 2022).Research on C2 systems and SA have helped drive the development of the vignette and scenario described herein. 
Modeling and simulation (M&S) is a simplified representation of a system or process that allows us to make predictions or understand the behavior through simulations.M&S generates data that allows one to make decisions and predictions based off certain scenarios (TechTarget, 2017).This allows the Army to generate and draw conclusions from operational scenarios that have been experienced and ones that the Army expects to face in the future.Simulations help drive the Army's capability assessment.Testing and evaluation often takes place alongside assessment and consists of analyzing models to learn, improve, and draw conclusions from, while also assessing risk.There are many different M&S tools used throughout the military.For example, the Infantry Warrior Simulation (IWARS) is a combat simulation focused on individual and small unit forces to assess operational effectiveness (USMA, 2023).The Advanced Framework for Simulation, Integration and Modeling (AFSIM) is a multi-domain M&S framework for simulation focused on analysis, experimentation, and wargaming (West & Birkmire, 2020).Within the scope of our project, One Semi-Automated Force (OneSAF) is used to model combat situations we have created in order to simulate the effects of having AI/AA advantages on the battlefield. As mentioned, the goal of AI/AA-enabled Decision Aids is to increase quality and speed of decision-making.AI can be utilized for different scenarios and it can provide support to battlefield Commanders and warriors in multiple ways.For example, AI/AA enabled Decision Aids can help warriors in both air and ground combat be able to "analyze the environment" better and "detect and analyze targets" (Adams, 2001).AI/AA enabled Decision Aids can help mitigate human error and create information and decision advantage on the battlefield (Cobb, Jalaian, Bastian, & Russell, 2021).These example information triage advantages gained by AI/AA enabled Decision Aids guided our operational vignette and M&S scenario development. Operational Vignette and Scenario Development In our operational vignette, 1st Battalion is assigned with a small village up to a designated line of advance.The vignette follows Captain Roy, the Battalion Intelligence Officer (BN S2), as he prepares the intelligence situational template (SITTEMP) using an AI/AA enabled Decision Aid (i.e., assistant) which rapidly collects and incorporates accumulated Red intelligence and open source intelligence-derived situational data.It then follows Major Jones and Captain Smith, the Battalion Operations Offi-cer (BN S3) and the Assistant S3 (AS3), as they develop maneuver courses of actions (COA) using the AI/AA enabled Decision Aids to evaluate "what-if" scenarios Finally, it switches to Lieutenant Kim, the Battalion Assistant S2 (BN AS2), as she devel-ops named areas of interest (NAI) based on the selected maneuver scheme and then works to coordinate adequate Intelligence, Surveillance, and Reconnaissance (ISR) coverage between her internal assets and upper echelon resources.Assumptions made as part of the vignette include that the time period is 2030, neither side will use nuclear weapons or take action that represents an existential threat to the other, weather conditions affect BLUE and RED forces equally, the time of the year is fall season with warm and humid weather. 
Stakeholder Analysis and Functional Hierarchy Development As part of background research for solution framing, we engaged with 32 civilian and military stakeholders who are experts in AI/AA and its contributions to decision-making and simulation-based modeling.The stakeholder analysis process we conducted is as follows: 1) Define and Identify Stakeholders; 2) Define Focus Groups; 3) Assign Stakeholders to Focus Groups; 4) Develop Questions Specific to each Focus Group; 5) Contact Stakeholders and Schedule Focus Group Sessions; 6) Conduct Focus Group Sessions; 7) Synthesize and Analyze Stakeholder Feedback; and 8) Develop FCR matrices.We used the results of the FCR matrices to develop a functional hierarchy diagram of the objectives, measures and metrics to generate/collect from the simulated scenarios.These objectives, measures and metrics were then ranked against each other in terms of importance to the mission set.This set the foundation for using the Analytic Hierarchy Process (described below). Analytic Hierarchy Process and A-B Testing The AHP is a methodology, originally proposed by Thomas Saaty in 1987, that utilizes a series of pairwise comparisons derived from experts' judgment that places each function and sub-function from a functional hierarchy into a prioritized scale.The various attributes are then ranked against each other through tangible data or qualitative opinions of experts.These rankings are then placed on a scale of 1-9 as seen in Table 1.After each attribute is given its weight 1-9, the criteria and sub-criteria are given weights that demonstrate their relative importance (Saaty, 1987). Table 1. AHP Relative Ranking Scale Once these initial pairwise comparisons are complete, there is a series of four axioms that govern the AHP.These axioms state that given two sub-criteria, A i , and A j , the expert can give a preference judgment denoted by θ ij .The preferences share an inverse relationship such that θ ij = 1/θ ji .Further, when comparing two criteria, A i , can never be infinitely more preferred than A j , such that θ ij ∞.Finally, all impactful decisions in the problem can, and should, be formulated using a hierarchy.After the sub-criteria (or alternatives) are ranked through pairwise comparison, the eigenvector method is used to compute the relative values and weights of these criteria.Equation 1 for this method is proposed by Saaty, and is computed as follows: In Equation 2, v is the vector of relative values and λmax is the maximum eigenvalue.Furthermore, by raising the matrix θ to the power of k and normalizing the result, the principal eigenvector can be determined. 
(2) In the case, e T = (1, 1…,1,1).The v vector is then normalized to the w vector, where ∑ 1.Once the w vector is determined, λmax is determined in Equation 3: (3) This totals up the ranking scores of each relative metric, and then this sum is divided by the total sum of all the metric scores added together.This achieves a relative weight for each criterion and sub-criteria.Once the AHP determines the weights, the sub-criteria weights are multiplied by the relative weights (or eigenvalues) for each criteria to get a localized weight.This calculates a globalized score that represents what each sub-criteria contributes to the scenario.The sub-criteria scores should add up to the relative weights for the main criteria they fall under.When multiple decision makers are involved, one may take geometric mean of the individual evaluations at each level (Saaty, 1987) Our methodology also includes A-B testing to compare Scenario A (without AI/AA) with Scenario B (with AI/AA) to assess the impact of the Decision Aid on C2 mission effectiveness.A-B testing was originally designed for web traffic control where two variants of a product undergo statistical analysis in order to determine the best version (King, Churchill, & Tan, 2017). Modeling and Simulation OneSAF is a tool for modeling and simulating real/future operational scenarios.Our goal in utilizing OneSAF is to make the model as similar to our vignette as possible.We created SITTEMPs for BLUE/RED and used them to input desired entities into the OneSAF simulation.Once the entities were placed, we created actions and maneuvers for each entity to reflect Major Jones and Captain Roy's roles.After emplacing the RED entities we identified the area of interest, the village, Lieutenant Kim's role.Then, we setup two distinct scenarios: A (no AI/AA advantage) and B (AI/AA advantage).In Scenario A, RED is occupying a village and BLUE is set to clear it.RED is set to defend their current battle position and employ defensive measures through direct and indirect fires.The first phase of actions input for BLUE is for Alpha and Charlie Company to move tactically to their battle positions, establishing security and preparing to support Bravo Company.In the next phase, Bravo moves tactically while the Headquarters Company follows.Once Bravo is in position, they begin seizing the objective while Alpha and Charlie clear it.Alpha and Charlie's main goal is to assist Bravo and minimize BLUE casualties.Once Bravo finishes seizing and clearing the objective, the scenario is over. To emulate the AI/AA enabled Decision Aid capability within Scenario B, we expanded on Scenario A by introducing new actions and adjusting existing settings.We provided BLUE with up-to-date information on enemy movements, terrain, and other factors that impact their ability to move, simulating the ability to make more informed decisions and move more quickly.Specifically, we increased the movement speed of BLUE from 4.15 km/hour to 88.99 km/hour.BLUE actors could now move at a speed anywhere between 0 and 88.99 km/hour.In addition to this, we introduced a new action for Alpha company to perform reconnaissance, which provides BLUE with real-time data, situational awareness, target identification, and threat detection, much like an AI/AA enabled Decision Aid would. 
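Because the extracted forms of Equations (1) through (3) above are garbled, the following sketch restates the standard Saaty procedure they describe: a reciprocal pairwise matrix, its principal eigenvector normalized to weights, and the consistency index derived from the maximum eigenvalue, followed by a roll-up of hypothetical simulation statistics into a weighted mission-effectiveness score in the spirit of the spreadsheet model discussed later. The pairwise matrix, metric names, bounds, and raw values are illustrative only, not the study's elicited preferences or outputs.

```python
# Minimal AHP weighting sketch (Saaty's eigenvector method) plus a
# weighted mission-effectiveness roll-up. All numbers are hypothetical.
import numpy as np

def ahp_weights(theta):
    """Weights from a reciprocal pairwise matrix via the principal eigenvector."""
    eigvals, eigvecs = np.linalg.eig(theta)
    k = np.argmax(eigvals.real)
    v = np.abs(eigvecs[:, k].real)          # principal eigenvector
    w = v / v.sum()                         # normalize so the weights sum to 1
    lam_max = eigvals[k].real
    n = theta.shape[0]
    ci = (lam_max - n) / (n - 1)            # Saaty consistency index
    return w, lam_max, ci

def value_score(x, lo, hi, higher_is_better=True):
    """Map a raw simulation statistic onto a 1-100 value scale."""
    frac = np.clip((x - lo) / (hi - lo), 0.0, 1.0)
    if not higher_is_better:
        frac = 1.0 - frac
    return 1.0 + 99.0 * frac

# Hypothetical 3-criterion comparison (e.g., friendly losses vs. time to
# objective vs. enemy killed) on the 1-9 scale, with theta[j, i] = 1 / theta[i, j].
theta = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])
w, lam_max, ci = ahp_weights(theta)

# Hypothetical scenario statistics: (raw value, lo, hi, higher_is_better).
metrics = [(4.0, 0.0, 20.0, False),     # friendly losses
           (55.0, 30.0, 120.0, False),  # time to reach objective (minutes)
           (12.0, 0.0, 30.0, True)]     # enemy killed
mission_effectiveness = sum(wi * value_score(*m) for wi, m in zip(w, metrics))
print(w.round(3), round(ci, 3), round(mission_effectiveness, 1))
```

Running the same roll-up on the Scenario A and Scenario B statistics, as in the A-B testing described above, yields the two comparable effectiveness scores.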
With increased insights and the transfer of real-time data into BLUE's decision-making process, an additional action was added, for increased support from Charlie as Alpha performed recon.By identifying potential targets and threats during recon, Charlie can take proactive measures to avoid or neutralize these threats.After incorporating these modifications into Scenario B, we were able to create a scenario that reflects an AI/AA advantage and provides BLUE with the tools and insights needed to make more informed decisions and succeed on the battlefield.To accurately assess the impacts of AI/AA capabilities within Scenario B, in comparison to Scenario A, we used OneSAF's Web Replication Tool (WRT) and Data Collection Specification Tool to run the scenarios multiple times and collect data for analysis.Using the WRT, we ran both Scenario A and B simulations 30 times and exported the data to CSV file format.After analyzing the output data, we determined what best lined up with our predetermined metrics as detailed earlier.Unfortunately, we were not able to measure all metrics, due to a lack of data produced within the simulations.Based off the data we were able to collect, we measured the number of BLUE losses, the time before detection by RED, the time to reach mission goals, the number of RED kills, the time to locate RED, and the number of shots versus hits.For metrics we were unable to collect data from, we made the values congruent across both scenarios with respect to the mean, 95th and 5th percentile data bins. Results and Discussion Using the AHP and stakeholder preferences for our objectives, we created a spreadsheet model to analyze the simulation output data for both scenarios.The model takes stakeholder input to determine global weights for criteria and subcriteria.The model then pulls raw data from the OneSAF simulation and converts it to usable data from normalizing scales that give the data a score 1-100.The data used in this analysis is the mean, median, 95th percentile, and 5th percentile values.This data is multiplied by its respective weight to produce a global score for each objective and sub-objective.These values are then summed to make a final mission effectiveness score to quantify how well the forces performed in the simulation.The model is applied to both scenarios within A-B testing, one with AI/AA effects incorporated into the simulation and one without.The data that was collected is seen in Table 2.For metrics where data was not collected, we made the values congruent across scenario A and B with respect to the statistical data bins.Therefore, there was no variability across scenario A and B, but there was variability between each statistical bin to simulate inherent variability in the data.The eight scores from the two simulations are depicted in Table 3, which are compared to determine the impact of the AI/AA effects on friendly force mission effectiveness.For each of the six AHP metrics displayed in Table 2, several statistics (mean, median, 95th and 5th percentiles) from the 30 simulation iterations are provided for both scenarios, allowing us to see the variability of the simulation output data.Overall, these results of the A/B testing indicate that the AI/AA enabled Decision Aid (modeled in Scenario B) generally had a positive impact on C2 mission effectiveness.For example, the mean, 95th percentile and 5th percentile values for the number of friendly losses was lower in Scenario B compared to Scenario A. 
As another example, the sensor to shooter time was lower in Scenario B for all simulation statistic values.Notably, the AI/AA enabled Decision Aid did not increase the number of enemy killed. Next, we plugged these simulation statistic values for each scenario into the AHP scales to convert into values from 1-100, allowing us to then compute the mission effectiveness score for each scenario.These scores are represented in Table 3.While the mean, 95th and 5th percentile mission effectiveness scores are higher when AI/AA is modeled in Scenario B, there are nearly negligible results in the median.It is clear to see that Scenario A performs worse on average and has more variability in the bounds.Scenario B, on the other hand, has a more compact spread with greater results on average.There is not a huge difference in these results, but our experimentation does indicate that AI/AA enabled Decision Aids generally improved C2 mission effectiveness for our operational vignette. Proceedings of the A potential reason for these results is based in the nature of the simulations and grading scale.Metrics, such as time before detection, are values that should be maximized.However, since Scenario B speeds up the BLUE force, this inherently reduces the time before detection, giving Scenario B a lower score (although the speed increase makes BLUE more lethal).The same principal is true for kills, which should be maximized.Since the BLUE force is faster and spends less time on the objective, there is less time to shoot, so naturally the amount of kills will drop.This dynamic is somewhat counter-intuitive in nature, but it helps explain why Scenario B mission effectiveness scores are not strictly better across all simulation statistics. Conclusions: Summary, Limitations, and Future Work In this work we demonstrated a novel methodology that serves as an analytic framework to assess AI/AA enabled Decision Aids for command and control in terms of mission effectiveness.By developing an operational vignette and subsequent scenario through modeling and simulation, and then leveraging the simulation output data in the Analytic Hierarchy Process for A-B testing, we demonstrated how AI/AA enabled Decision Aids can enhance friendly force capabilities in combat.The main limitation of this research stems from limited capabilities within OneSAF for modeling and simulation that accurately represents the effects of having an AI/AA enabled Decision Aid modeled within the scenario.For example, OneSAF does not easily support the integration of external-to-software algorithms via a software development kit.This makes it very challenging to integrate an actual AI/AA algorithm into the modeling and simulation environment to enable within-simulation inference needed to modeling emergent behaviors/actions.Moreover, OneSAF did not have the proper actions/systems in place to produce outputs for some of the measures/metrics that we needed for complete assessment.Note that we examined the simulation output data from OneSAF for each measure/metric and then categorized each measure/metric output as reliable, somewhat reliable, or unreliable.We chose to only use data for measures/metrics we classified as reliable for the AHP.Thus, our tool was not fully utilized and the mission effectiveness scores of the A-B tests were affected by a lack of data.Future work will expand on this methodology by exploring OneSAF deeper to find more simulation actions/systems, adjusting the current measures/metrics to other vignettes/scenarios, applying other 
multiple criteria decision analysis techniques other than AHP for comparison, developing a more enhanced analytic tool (rather than using a spreadsheet), and investigating ways to better model AI/AA effects within the simulation. Table 2. Results of the Statistical Analysis on the 30 Simulation Iterations. Table 3. Results of the AHP.
4,858
2023-12-01T00:00:00.000
[ "Computer Science", "Engineering" ]
Nonreference Image Quality Evaluation Algorithm Based on Wavelet Convolutional Neural Network and Information Entropy The image quality evaluation method, based on the convolutional neural network (CNN), achieved good evaluation performance. However, this method can easily lead the visual quality of image sub-blocks to change with the spatial position after the image is processed by various distortions. Consequently, the visual quality of the entire image is difficult to reflect objectively. On this basis, this study combines wavelet transform and CNN method to propose an image quality evaluation method based on wavelet CNN. The low-frequency, horizontal, vertical, and diagonal sub-band images decomposed by wavelet transform are selected as the inputs of convolution neural network. The feature information in multiple directions is extracted by convolution neural network. Then, the information entropy of each sub-band image is calculated and used as the weight of each sub-band image quality. Finally, the quality evaluation values of four sub-band images are weighted and fused to obtain the visual quality values of the entire image. Experimental results show that the proposed method gains advantage from the global and local information of the image, thereby further improving its effectiveness and generalization. Introduction Image quality evaluation has a wide range of applications in image compression, image restoration, and video processing. Existing image quality evaluation methods mainly include subjective and objective types [1,2]. Subjective evaluation method directly evaluates or scores the image quality on the basis of people's relevant experiences. The commonly used rating grades are excellent, good, medium, poor, and very poor. The subjective evaluation method is simple and has high accuracy. However, it needs to rely on people's subjective experiences, and the labor costs are high. As such, this method is difficult to promote in practical applications, especially real-time processing. Although the performance of the objective evaluation method is not as good as that of the subjective evaluation method, the image quality can be automatically evaluated by establishing the image distortion evaluation model, thereby reducing labor costs. The objective evaluation method has strong real-time performance. Therefore, it has become a popular research topic in the field of image quality evaluation in recent years. In general, objective evaluation methods include full reference [3,4], reduced reference [5,6], and nonreference [7][8][9][10][11][12]. This study focuses on the nonreference quality evaluation of images, i.e, referring to the original image information in image quality evaluation is unnecessary.Generally speaking, the nonreference image quality evaluation method is also called blind image quality (BIQ) evaluation method. It does not need reference image at all, and estimates the image quality according to the features of the distorted image. The simplest objective evaluation methods are the peak signal-to-noise ratio (PSNR) and mean square error (MSE). These methods are simple to implement with high computational efficiency and are widely used in the visual quality evaluation of image processing. However, the two methods do not consider human visual perception psychology. The evaluation results are relatively different from the subjective judgments, and the intrinsic characteristics of human visual perception are difficult to reflect. 
In recent years, deep neural networks, especially convolutional neural network (CNN) technology, which achieved better results than traditional methods in image recognition, target detection, and image restoration, achieved unprecedented development in the field of image processing [13,14]. Existing image quality evaluation methods based on CNNs mainly use two CNN characteristics. One is the local receptive field, i.e, people perceive the visual content of images from partial to global. In the image, the correlation between the pixels of the nonlocal area is low, and the correlation between the pixel points of the local area is large. The other is weight sharing, i.e., the CNN obtains the local information of the image by using the same filter operator. In general, all local information in the image obtained by the filter operator is consistent. Therefore, in an actual image quality evaluation, various filter operators may be set to extract effective features, and different feature maps may be extracted by different filter operators. The general CNN framework includes an input layer and convolutional multilayer. A pooling layer between the convolutional layers is mainly used to reduce the size of the feature map and the dimensionality of data. Therefore, the CNN gradually extracts the high-level semantic features of the image by continuously stacking the convolutional and pooling layers and finally converts the feature vector into a classifier or a fully connected layer. Several researchers introduced image quality evaluation methods based on CNN. For example, Kang et al. [15] fused convolutional and fully connected layers to design a deep neural network that predicts image quality scores. This network used an image block with a size of 32 × 32 as an input and used the quality score of the entire image to represent the quality score of the image block. Kim et al. [16] designed a two-stage deep neural network model to evaluate image quality. In the first stage, an end-to-end deep neural network model was trained and imputed as an image block with a size of 32 × 32. The score of the image block was calculated using objective quality evaluation algorithm and used as the output of the deep neural network. In the second stage, the image block of the entire image was inputted into the deep neural network obtained in the first stage, and the features corresponding to all image blocks were merged and outputted as the quality scores of the entire image. Bare et al. [17] used 32 × 32 image blocks as inputs to the CNN similar to the network framework proposed in [16] and adopted the full reference quality evaluation algorithm [18] to calculate the quality score of the image block as the output of the CNN. They constructed the image quality evaluation network framework on the basis of the residual network method [19]. The two models proposed in [16,17] have an evident defect, i.e., the quality score calculated by the objective image quality evaluation method is used to represent the subjective quality score of the image block. Although [17] used the full reference quality evaluation algorithm [18], which can accurately predict the image quality score, the score calculated by the algorithm still had a certain gap with the subjective score. In [20], the author believed that people provide qualitative scores of images, such as very good, good, bad, and very poor. This qualitative evaluation was converted into feature vectors to design the image quality evaluation method. Kim et al. 
[21] proposed an end-to-end CNN model, which inputs distorted and error images. The model was used to learn the optimal weights automatically and fuse the error images for obtaining the visual quality score of the distorted images. On the basis of the image pair generation strategy, [22] proposed a deep CNN training model, which achieved good image quality evaluation performance. Bosse et al. [23] proposed full reference and nonreference image quality evaluation methods based on deep CNN. The proposed networks mainly included the feature extraction, feature fusion, and pooling layers. Ma et al. [24] proposed an end-to-end blind image quality evaluation method combined with image distortion type prediction and quality prediction and designed a multitask CNN image quality evaluation network model. [25][26][27] proposed the corresponding nonreference image quality evaluation method, which achieved good results. The existing image quality evaluation method based on CNN uses the average value of the image sub-block to represent the quality evaluation value of the entire image. This method can detect lowand high-quality image regions and achieve good image quality evaluation results. However, the visual quality of the partial image sub-block tends to change with the spatial position. After the image is subjected to distortion processing, the quality evaluation based on the partial image sub-block has difficulty reflecting the visual quality of the entire distorted image. On the contrary, image sub-blocks with similar distortion types (e.g., blurred or smooth regions) may also have significantly different visual qualities. The main contributions of this paper are summarized as follows: 1. We present a wavelet convolution neural network for image quality assessment. The product neural network extracts the feature information in multiple image directions, thereby further improving the effectiveness and generalization of the image quality evaluation method. 2. We adopt the information entropy as the weight of quality prediction of sub-band image, and demonstrate that the distribution of information entropy is close to the image region of human visual perception. Using this strategy, the subjective and objective consistencies of the image quality evaluation can be further improved. Application of the Discrete Wavelet Transform (DWT) Wavelet transform is an effective tool to combine time domain and frequency domain. In most applications, discrete signals are used. Therefore, discrete wavelet transform (DWT) must be used instead of continuous wavelet transform. Wavelet transform can decompose the signal by band-pass filter. The result of the band filtering operation will be two different signals, one will be related to the high frequency components and the other related to the low frequency component of the original signal. To compute the DWT of an image I(x, y) of size M × N, it must identify the wavelet scale function W ϕ to define the approximation coefficients and the wavelet function W ψ responsible for horizontal, vertical and diagonal coefficients {H, V, D} following the equations below: with: where j 0 is the start resolution and the scale parameter j is always greater or equal to j 0 . In general, we choose j 0 = 0 and N = M = 2 j in order that j = 0, 1, ..., j − 1 and m, n = 0, 1, ..., 2 j − 1. Calculation of Information Entropy After an image is transformed by wavelet [28], a series of sub-band images with different resolutions can be obtained. 
Figure 1 shows the results of a Barbara image with a size of 512 × 512 decomposed by two layers of wavelets. The upper leftmost part of each layer in Figure 1 is a low-frequency image, and the upper right, lower left, and upper right corners are the vertical high-frequency, horizontal high-frequency, and diagonal sub-band images, respectively. The second layer decomposes the low-frequency image of the first layer into a low-frequency sub-band image (upper left corner in Figure 2) and a high-frequency sub-band image in the vertical, horizontal, and diagonal directions. Subsequently, the third layer wavelet transform repeats this process to continue to decompose the low-frequency image of the second layer, and the like. The above evaluation shows that the multiscale analysis of wavelet transform can efficiently describe the global and local information of the image. Generally, a low-frequency image reflects the global information of the entire image, but a high-frequency sub-band image reflects the local details, such as edge, contour, and other image areas with mutations. Therefore, this section calculates the corresponding information entropy of each wavelet sub-band image on the basis of the information of multiple directions. Then, each information entropy is used as the visual quality weight of the corresponding sub-band image to describe the effects of different sub-band images on the quality of the entire image. The calculation process of information entropy is summarized as follows: Distorted image I is imput, and S-layer wavelet decomposition is performed for distorted image to obtain low-frequency, horizontal, vertical, and diagonal sub-band images, denoted as I L , I H , I V , and I D , respectively. Then, each sub-band image is divided into image sub-blocks that do not overlap, and the information entropy of each sub-block is calculated. Finally, the average information entropy of all sub-blocks is obtained and used as the visual content weight of the cost function. The number of layers S of the wavelet decomposition is set to 1. The information entropy of each sub-block is calculated as follows: where p(w i ) denotes the probability of wavelet coefficient w i appearing in the sub-block image, and ∑ n i=1 p(w i ) = 1. w i represents the wavelet coefficients of the sub-block image, N B is the number of all wavelet coefficients of each sub-block image. Generally, information entropy reflects the intensity of image information to a certain extent. The larger the information entropy of an image, the larger the amount of information, and the better the visual quality of the image. Moreover, the information entropy of the image includes rich structural information, which can be used to measure the sensitivity of the local image. Therefore, people are inclined to evaluate the visual quality of images from areas with high acuity. Figure 2 presents the information entropy map of the low-frequency and three high-frequency sub-band images after the wavelet transform of Barbara image. Figure 2 also shows the large amount of structural information and the distribution of the information entropy, which is close to the image area of human visual perception. Therefore, the wavelet information entropy of the image can be used as the visual weight to improve the subjective and objective consistencies of the image quality evaluation. 
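As a concrete illustration of the decomposition and the block-wise entropy weighting described above, the sketch below performs a one-level "db1" transform with PyWavelets and computes the mean Shannon entropy of non-overlapping blocks in each sub-band. The 8 × 8 block size and the histogram binning used to estimate p(w_i) are our own assumptions, since the text fixes neither.

```python
# One-level 2-D DWT and block-wise information entropy (db1, 8x8 blocks assumed).
import numpy as np
import pywt

def subband_entropy(band, block=8, bins=64):
    """Mean Shannon entropy over non-overlapping blocks of one sub-band."""
    h, w = band.shape
    ents = []
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            patch = band[r:r + block, c:c + block].ravel()
            p, _ = np.histogram(patch, bins=bins)   # estimate p(w_i) by binning
            p = p / p.sum()
            p = p[p > 0]
            ents.append(-(p * np.log2(p)).sum())
    return float(np.mean(ents))

img = np.random.rand(512, 512)                       # stand-in for a distorted image
cA, (cH, cV, cD) = pywt.dwt2(img, "db1")             # low-frequency, horizontal, vertical, diagonal
H = [subband_entropy(b) for b in (cA, cH, cV, cD)]   # entropy weights H_1..H_4
print(np.round(H, 3))
```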
Low-frequency sub-band Horizontal sub-band Vertical sub-band Diagonal sub-band Proposed Image Quality Evaluation Algorithm To improve the robustness and generalization of image quality evaluation methods, this study combines wavelet transform and CNN to design a nonreference image quality evaluation method based on wavelet CNN. Figure 3 shows the flow chart of the proposed algorithm. Concretely, the wavelet CNN method is used for designing the nonreference algorithm for image quality evaluation.In Figure 3, the S-wavelet transform is initially performed on the input image to be measured (the image is decomposed using a one-level wavelet transform), and the low-frequency, horizontal, vertical, and diagonal sub-band images are obtained. Because the image is decomposed by S-level wavelet, the number of sub-band images is 3S+1, i.e., the value of K is 3S+1. In our work, we decompose the image with one level wavelet transform, then we get four sub-band images, so the value of K is 4. Besides this, and "db1" is selected as the wavelet filter. These parameters do not depend on the analyzed image (such as size, texture, etc.). This is easy to handle in a unified way. Next, the information entropy of each sub-band image, denoted as H i , i = 1, 2, 3, 4, is calculated. The four sub-band images are used as the inputs of the CNN to output their quality prediction values through the CNN, represented as CNN_IQA1, CNN_IQA2, CNN_IQA3, and CNN_IQA4, respectively. The quality prediction values of the four sub-band images adopt the same CNN structure. Figure 4 shows the detailed flow. Input image Low-frequency subband entropy Horizontal sub-band entropy Diagonal sub-band entropy Image Local Contrast Normalization Preprocessing Before using the CNN to predict the quality of the sub-band image, the image is normalized for local contrast, i.e., removing redundant features that are weakly related to image quality. The process of local contrast normalization preprocessing is as follows: Wavelet decomposition is performed on the distorted image via wavelet transform to obtain a low-frequency sub-band image and three sub-band images in horizontal, vertical, and diagonal directions. Furthermore, to remove redundant feature information that is comprehensively weakly related to image quality, local contrast normalization preprocessing is performed on the four sub-band images. The specific process is summarized as follows: where I(i, j) represents the initial pixel value at (i, j) in the distorted image,Ĩ (i, j) is the normalized value at (i, j) in the distorted image, and µ I and σ I are the pixel mean and standard deviation of the local area of the image, respectively. The value of constant C is set to 1.0. The calculation processes of µ I and σ I are as follows: where w m,n represents the weight of the Gaussian function window. The window size is set to 3 × 3, and the values of M and N are both 3. Sub-Band Image Quality Prediction Based on CNN The low-frequency sub-band image generally reflects the global information of the image, and the high-frequency sub-band image reveals the local detailed information. Therefore, the performance of the image quality evaluation method is further improved to use the global and local information of the image fully. In this section, the CNN is used to predict the low-frequency and three high-frequency sub-band images simultaneously. 
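Before the patches enter the network, the local contrast normalization described above can be sketched as follows, following the (I − μ)/(σ + C) form with a 3 × 3 Gaussian weighting window and C = 1 as stated in the text; the Gaussian standard deviation is an assumption.

```python
# Local contrast normalization of a sub-band patch: (I - mu) / (sigma + C)
# with Gaussian-weighted local statistics over an effective 3x3 window.
import numpy as np
from scipy.ndimage import gaussian_filter

def local_contrast_normalize(img, sigma=1.0, C=1.0, truncate=1.0):
    """MSCN-style normalization; truncate=1.0 with sigma=1.0 gives a 3x3 window."""
    img = img.astype(np.float64)
    mu = gaussian_filter(img, sigma=sigma, truncate=truncate)            # local mean
    var = gaussian_filter(img * img, sigma=sigma, truncate=truncate) - mu ** 2
    sigma_local = np.sqrt(np.maximum(var, 0.0))                          # local std
    return (img - mu) / (sigma_local + C)

patch = np.random.rand(32, 32) * 255.0
print(local_contrast_normalize(patch).shape)
```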
Initially, the method described in Section 2 of this paper is used to calculate the information entropy of each sub-band image, i.e., the information entropy of low-frequency, horizontal, vertical, and diagonal sub-band images, denoted as H i , i = 1, 2, 3, 4. The four information entropy is used as the weight of the quality prediction, and then the CNN model is trained on the four sub-band images by supervised learning. The quality prediction values of the four sub-band images adopt the same CNN. Figure 4 shows the architecture of the proposed network, which is a 32 × 32 − 26 × 26 × 50 − 13 × 13 × 50 − 400 − 100 − 1 structure. The input is locally normalized 32 × 32 image patches of sub-band image. The first layer is a convolutional layer which filters the input with 50 kernels each size 7 × 7 with a stride of 1 coefficient. The convolutional layer produces 50 feature maps each of size 26 × 26. The obtained feature map is used as the input data of the pooling layer, then 50 feature maps with a size of 13 × 13 will be obtained. Two fully connected layers of 400 nodes and 100 nodes each come after the max pooling. The last layer is a simple linear regression with a one-dimensional output that give the quality score. In Figure 4, the calculation process of the quality prediction value of each sub-band image is as follows: (1) the wavelet-decomposed sub-band images, including low-frequency, horizontal, vertical and diagonal sub-band images, are inputted. Then, local contrast normalization preprocessing is performed on the four sub-band images on the basis of the method in .1. (2) after the subband image is preprocessed, the sub-blocks are divided, assuming that the divided image subblock size is 32 × 32. A 32 × 32 subblock image is used as input data for the CNN. (3) the CNN parameters, including convolutional, pooling, and two fully connected layers, are designed. The convolutional layer uses a convolution kernel with a size of 7 × 7, the number of convolution kernels is 50, and the sliding window step size is set to 1 in the convolution process. Then, the image subblock is convolved with the convolution kernel, and the size of the feature map activated by an activation function is (32 − 7)/1 + 1 = 26 pixels. Thus, 50 feature maps with a 26 × 26 size are obtained; the activation function is RReLU, which is expressed as where x represents an input and a is a small normal number. a is set to 0.01 in the present study. The obtained feature map is used as the input data of the pooling layer. The maximum pooling method is adopted in the present invention, and the step size is set to 2. Then, the feature map size obtained after the pooling is (26 − 2)/2 + 1 = 13 pixels. Hence, 50 feature maps with a size of 13 × 13 will be obtained. LRN represents the local response normalization process, which aims to enhance the generalization of the CNN. In Figure 4, the third and fourth layers are connected, use RReLU as the activation function, and adopt dropout processing. The purpose is to discard some of the elements in the fully connected layer from the network at a probability of 0.5 to avoid over-fitting phenomenon. Finally, the quality prediction values of each sub-band image, which are represented as q 1 , q 2 , q 3 , and q 4 , are obtained using nonlinear regression loss function Softmax loss. In a word, the learning process is summarized as follows: Let p i and y denote the input patch and its ground truth score respectively, f (p i ; w) refers to the predicted score of p i with network weights w. 
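The per-sub-band network just described (a 32 × 32 patch mapped to 50 feature maps of 26 × 26, 13 × 13 after pooling, then fully connected layers of 400 and 100 nodes and a scalar output, with RReLU activations and dropout of 0.5) can be sketched in PyTorch as below. The fixed RReLU slope, the LRN placement, and the framework choice are assumptions rather than details taken from the paper; the training objective applied to each patch is given in the passage that follows.

```python
# Minimal PyTorch sketch of one sub-band quality network; the paper trains
# one such network per sub-band image.
import torch
import torch.nn as nn

class SubbandIQANet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 50, kernel_size=7, stride=1),   # 32x32 -> 26x26, 50 maps
            nn.RReLU(0.01, 0.01),                         # fixed small slope a = 0.01
            nn.MaxPool2d(kernel_size=2, stride=2),        # 26x26 -> 13x13
            nn.LocalResponseNorm(size=5),                 # LRN, placement assumed
        )
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(50 * 13 * 13, 400), nn.RReLU(0.01, 0.01), nn.Dropout(0.5),
            nn.Linear(400, 100), nn.RReLU(0.01, 0.01), nn.Dropout(0.5),
            nn.Linear(100, 1),                            # predicted patch quality
        )

    def forward(self, x):
        return self.regressor(self.features(x))

net = SubbandIQANet()
print(net(torch.randn(8, 1, 32, 32)).shape)   # torch.Size([8, 1])
```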
Each image patch was independently regressed onto the global subjective-quality score. The objective function can be written as where . denotes the l 1 norm. It is known that support vector regression (SVR) with −insensitive loss was successfully applied to learn the regression function for nonreference image quality assessment. Please note that the above loss function is equivalent to the loss function used in −SVR with = 0. This problem can be solved by using the stochastic gradient descent (SGD) and back propagation (BP) method. In experiments we perform SGD for 60 epochs in training and keep the model parameters that generate the highest Pearson linear correlation coefficient (PLCC) on the test data set. The CNN model was trained via a patchwise optimization, and, during testing, the outputs of multiple patches composing an input image were averaged to obtain a final predicated subjective score. Proposed Algorithm On the basis of Sections 2, 3.1 and 3.2 of this paper, the proposed algorithm steps are summarized as follows: (1) Distorted image I is inputted, and a single-layer wavelet decomposition is performed on distorted image to obtain low-frequency, horizontal, vertical, and diagonal sub-band images, denoted as I L ,I H , I V , and I D , respectively. On the basis of the method described in Section 2, each sub-band image is divided into image sub-blocks that do not overlap, and the information entropy of each sub-block is calculated. Finally, the average of information entropy of all sub-blocks is obtained, and the value is used as the quality prediction weight for the entire sub-band image. (2) On the basis of the method described in Section 3.2, the images are initially normalized and preprocessed, and then the wavelet CNN model is trained. The quality prediction values of the low-frequency and three high-frequency sub-band images, denoted as q 1 , q 2 , q 3 , and q 4 , respectively, are predicted using the wavelet CNN model. (3) Image quality fusion processing. The information entropies of the low-frequency, horizontal, vertical, and diagonal sub-band images are used as the weights of quality prediction values. Then, they are fused to obtain the quality evaluation value of the entire image. The fusion process can be expressed as follows: where Q represents the quality prediction value of the entire image, K is the number of sub-band images,H i (i = 1, 2, 3, 4) is the average information entropy value of the No. i sub-band image, and indicates the quality prediction value of the No. i sub-band image. Specifically,H 1 ,H 2 ,H 3 ,H 4 are the information entropies and q 1 , q 2 , q 3 , and q 4 are the quality prediction values of the low-frequency, horizontal, vertical, and diagonal sub-band images, respectively. (4) The test is conducted on an arbitrary image database, the image quality evaluation score is obtained by the wavelet CNN model, and the performance of the image quality evaluation method is evaluated. The pseudocode of proposed Algorithm 1 is summarized as follows: Algorithm 1 NIQA algorithm based on wavelet CNN and information entropy. (10), obtain the predicated quality scores of four sub-band images, which denoted as q 1 ,q 2 , q 3 , and q 4 ; 5: Compute the quality score of entire image via Equation (11); 6: Test the wavelet CNN model on an arbitrary database to obtain the corresponding image quality prediction results. 
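The fusion formula itself is lost in the extracted text. A natural reading of Equation (11), consistent with the statement that the sub-band entropies serve as the weights of the sub-band predictions, is the entropy-normalized weighted average sketched below; this is our reconstruction under that assumption, not a verbatim equation from the source, and the values shown are hypothetical.

```python
# Entropy-weighted fusion of the four sub-band quality predictions,
# assuming Q = sum(H_i * q_i) / sum(H_i).
import numpy as np

def fuse_quality(q, H):
    """q: per-sub-band predicted scores q_1..q_4, H: mean sub-band entropies H_1..H_4."""
    q = np.asarray(q, dtype=float)
    H = np.asarray(H, dtype=float)
    return float((H * q).sum() / H.sum())

q = [62.3, 58.1, 57.4, 55.0]     # hypothetical predictions (LL, horizontal, vertical, diagonal)
H = [7.1, 4.2, 4.0, 3.5]         # hypothetical mean entropies
print(round(fuse_quality(q, H), 2))
```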
Experimental Results and Analysis To verify the performance of the proposed method, the experimental configuration of this study is in a Windows 10 environment, the processor is Intel Core i7-8550U, and the memory is 16 GB. The software tool is MATLAB R2018, and the deep learning library uses MatConvNet (V1.0-beta22). Data Settings On the basis of the image quality evaluation method described in this paper, we use the LIVE [29], TID2008 [30], and TID2013 [31] image databases to evaluate the proposed image quality evaluation method. The LIVE image database [29] has a total of 779 distorted images, including JPEG2000 (169), JPEG (175), Gaussian white noise (145), Gaussian blur (145), and fast fading (145). The distorted image in the LIVE image database is obtained by adding different types and levels of distortion to the 29 reference images. The TID2008 database [30] includes 25 reference images, 1700 different types and degrees of distorted images, and 17 types of distortion, including additive Gaussian noise, JPEG compression, salt and pepper noise, Gaussian blur, JPEG2000 compression, and brightness change. The subjective evaluation score of the distorted image is based on the observer's subjective evaluation in the form of differential average subjective score. The value of difference mean opinion score (DMOS) reflects the subjective quality of the distorted image, and if the value is small, then the corresponding subjective evaluation quality is high. The TID2013 database [31], an enhanced version of TID2008, includes 25 reference images and 3000 distorted images. A total of 24 types of distortion are observed: changing color saturation, multiple Gaussian noise, comfort noise, lossy compression, color image quantization, chromatic aberration, and sparse sampling. The DMOS value of the database is obtained by the 524,340 data provided by the 971 observers, and the mean opinion score (MOS)value range is [0,9]. The variety of distortions in the database makes the database abundant and become a color distortion database. Therefore, numerous image quality evaluation algorithms include the database in the comparative experiment. To analyze the image quality evaluation performance of the proposed and other methods, two indicators are used, namely, Pearson linear correlation coefficient (PLCC) and Spearman rank-order correlation coefficient (SROCC). PLCC is mainly used to evaluate the accuracy of image quality evaluation methods. The larger the value, the better the accuracy of the corresponding evaluation method. PLCC can be defined as: where X i and Y i stand for the MOS and the model prediction of the i-th image, respectively.X andȲ are expressed as the mean of subjective score and prediction score, respectively. n represents the number of test sets. SROCC mainly reflects the consistency between objective and subjective evaluations. The larger the value, the better the performance.SROCC can be defined as: where n is the test image number and d i is the rank difference between the MOS and the model prediction of the i-th image. Tables 1 and 2 show the performance comparison results of the proposed and other methods on the LIVE database. The evaluation methods include the structural similarity methods proposed in [4], the DIIVINE evaluation method based on natural scene statistics in [10], the BRISQUE evaluation method based on spatial domain in [11], the BLIINDS-II method based on DCT in [12], the evaluation method based on CNN in [15], and the BIECON evaluation method in [16]. 
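Although the printed forms of the two correlation definitions are garbled in the extracted text, both reduce to the standard Pearson and Spearman correlations between the model predictions and the subjective scores; a short SciPy sketch follows, with hypothetical score vectors.

```python
# PLCC and SROCC between predicted scores and subjective (D)MOS values.
import numpy as np
from scipy.stats import pearsonr, spearmanr

mos  = np.array([3.1, 4.5, 2.2, 3.8, 4.9, 1.7])   # subjective scores (hypothetical)
pred = np.array([3.0, 4.2, 2.5, 3.6, 4.8, 2.0])   # model predictions (hypothetical)

plcc, _ = pearsonr(mos, pred)
srocc, _ = spearmanr(mos, pred)
print(f"PLCC={plcc:.3f}  SROCC={srocc:.3f}")
```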
In the experiments corresponding to Table 1, Since the proposed image quality approach requires a training procedure to calibrate the regressor module, we divide the LIVE database into two randomly chosen subsets-80% training and 20% testing-such that no overlap between train and test content occurs.In the experiments corresponding to Table 2, the distorted images of the 23 original images in the LIVE database are selected as the training samples, and the distorted images of the 6 original images are used as the test samples for experiments. Further, to eliminate performance bias, we repeat this random train-test procedure 1000 times and report the median of the performance across these 1000 iterations. The results in Tables 1 and 2 show that the image quality evaluation performance of the proposed method has improved to a certain degree in most cases compared with the above mentioned evaluation methods, thereby proving its effectiveness. The reason is that this study uses the same feature of CNN and wavelet transform, i.e., multiscale analysis, which can obtain image information in multiple directions, thereby improving the generalization of image quality evaluation methods. In addition, using information entropy as the weight of visual quality evaluation can reflect the influence of different sub-band images on the visual quality of the entire image and improve the subjective and objective consistencies of image quality evaluation. To assess the data dispersion, we added some statistical analysis work to evaluate the data dispersion of PLCC and SROCC on LIVE for the proposed image quality assessment method. The standard variance values of PLCC and SROCC were added in Table 3. From Table 3, it can be seen that the data variance of PLCC and SROCC is small, which shows that the proposed method has relatively stable performance. Furthermore, to evaluate the robustness of the proposed image quality algorithm on LIVE dataset, the Box plots of PLCC and SROCC correlation are showed in Figures 5 and 6. It can be seen that the results obtained on the LIVE dataset demonstrate the robustness of the proposed method on different distortion types. Cross-Validation on the TID2008/TID2013 Databases To verify the adaptability of the proposed method to the new sample, this section trains the model in the LIVE image database and tests it in the TID2008 and TID2013 image databases. Since the TID2008 and TID2013 database contain more distortion types, only four types, i.e., JPEG2000 compression, JPEG, WN, and BLUR are selected in the experiment.FF distortion does not exist in the TID2008 and TID2013 database. Since the range of DMOS scores is from 0 to 100 in LIVE database, and the range of mean opinion score(MOS) scores is from 0 to 9 in TID2008. Therefore, to make a fair comparison, we adopt the same method as [32] to perform a nonlinear mapping on the predicted scores produced by the model trained on LIVE database. The detail of mapping method can be refer to literature [32]. Besides this, the TID2008 is spitted into two parts of 80% and 20% randomly. 80% of the data is randomly selected for estimating parameters of the logistic function, and 20% is used for testing, i.e., evaluating the transformed prediction scores. This random spit procedure are repeated 100 times in this work. Tables 4 and 5 show the results of the cross dataset test. 
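Before turning to those cross-dataset results, note that the paper defers the nonlinear score mapping to [32]. One common choice in cross-database IQA evaluation is a monotonic logistic regression fitted between the predicted scores and the target database's MOS; the sketch below illustrates that option only, under the explicit assumption that it stands in for, rather than reproduces, the mapping of [32], and with hypothetical score values.

```python
# Illustrative logistic mapping from LIVE-scale predictions onto a MOS scale.
import numpy as np
from scipy.optimize import curve_fit

def logistic5(q, b1, b2, b3, b4, b5):
    return b1 * (0.5 - 1.0 / (1.0 + np.exp(b2 * (q - b3)))) + b4 * q + b5

q_pred = np.array([55.0, 40.0, 70.0, 62.0, 35.0, 48.0])   # hypothetical predictions
mos    = np.array([5.2, 6.4, 3.1, 4.0, 7.0, 5.8])          # hypothetical target MOS

p0 = [np.ptp(mos), 0.1, float(np.mean(q_pred)), 0.0, float(np.mean(mos))]
params, _ = curve_fit(logistic5, q_pred, mos, p0=p0, maxfev=20000)
print(np.round(logistic5(q_pred, *params), 2))              # mapped scores
```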
It can be seen that the proposed image quality algorithm outperforms previous state-of-the-art methods. Similarly, we adopt the same procedure as above to test the generalization ability of our method on the TID2013 database. Furthermore, for comparison with other algorithms, we added comparison results between our method and dipIQ [25], CORNIA [32], and ILNIQE [33] in Tables 6 and 7, which show the results of the cross-dataset test. From Tables 6 and 7, we can see that the performance of the proposed method is satisfactory. Overall, the test results on the two image databases show that, in most cases, the proposed method performs better than the other image quality evaluation methods.

Parameter Setting and Performance Evaluation The wavelet CNN model proposed in this paper involves several model parameters, such as the size and number of convolution kernels and the proportion of the training dataset. This section analyzes the effects of different settings of these parameters on network performance.

Size of the Convolution Kernel The network model is trained and tested using convolution kernels of different sizes without changing the network structure. Table 8 shows the performance comparison for the different kernel sizes. Convolution kernels of different sizes have only a small influence on network performance; hence, this study selects a 7 × 7 convolution kernel. Figure 7 shows the relationship between the number of convolution kernels and the prediction results. The prediction performance of the network increases as the number of convolution kernels increases. However, when the number of convolution kernels exceeds 50, the prediction result tends to stabilize, while a further increase in the number of kernels dramatically increases the time required for network training. Therefore, considering both the accuracy and the time efficiency of network prediction, the number of convolution kernels is set to 50.

Analysis of Prediction Results with Different Training Set Sizes To verify the prediction performance of the network under different training set sizes, SROCC and PLCC are evaluated as functions of the training set ratio: the training set size is varied from 10% to 90% of the data, the network is trained on that portion, and the remaining images are used for testing. Figures 8 and 9 show the experimental results. Although the prediction performance decreases as the size of the training set decreases, it does not deteriorate severely. As shown in Figure 8, when the training proportion is reduced from 90% to 30%, the SROCC degradation is less than 5% on all three databases. In Figure 9, the PLCC performance degrades slightly more, by approximately 7%. The reason is that the designed network is a data-driven method, and reducing the amount of data affects the generalization of the network model to a certain extent. Increasing the diversity and the amount of training data will therefore help improve the adaptability of the network model to new samples.

Computational Cost In this section, we report the computational time of the proposed wavelet CNN model on the different image datasets.
Please note that all results were obtained on a PC with a 1.8 GHz CPU and 16 GB of memory, using MATLAB R2018. We measure the processing time on the three image datasets with a 32 × 32 block size. Table 9 shows the average processing time. "LIVE + TID2008" in Table 9 means that the model is trained on the LIVE database and then tested on TID2008 data; "LIVE + TID2013" is defined analogously. As can be seen from Table 9, the training time of the proposed wavelet CNN model is relatively long, mainly because the model runs in a CPU environment. To improve the computational efficiency of the model, we will run it in a GPU environment in future work.

Conclusion In this study, the idea of the wavelet transform is introduced into CNN-based image quality evaluation, and an improved no-reference quality evaluation method based on a wavelet CNN (WCNN) is proposed. The method uses the multiresolution property of the wavelet transform to obtain sub-band images in multiple directions as the input data of the CNN, thereby increasing the diversity of the training data. Considering the sparse approximation capability of the wavelet transform, both the global and the local information of the image are used to improve the effectiveness of the quality evaluation. In addition, the rich structural information captured by the image information entropy reflects the degree of image change; in this study, information entropy is therefore used as the weight in the visual quality evaluation, which brings the predicted image quality closer to human visual perception. Finally, the experimental simulations and performance analysis prove the effectiveness and robustness of the proposed algorithm: the proposed method achieves an SROCC of 0.964 and a PLCC of 0.967, outperforming most of the existing NR-IQA methods considered in this article. In the future, we will improve the existing network structure and combine it with other methods, such as multi-task learning, to design a new image quality evaluation method.
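The pipeline summarized in the conclusion, wavelet sub-bands fed to a CNN and entropy-weighted pooling of per-sub-band quality scores, can be sketched as follows. This is our own illustrative Python reconstruction (the paper's implementation is in MATLAB/MatConvNet); the single-level Haar decomposition, the helper names, and the pooling details are assumptions for illustration, not the authors' exact configuration.

```python
# Illustrative sketch (not the authors' code): 2-D wavelet decomposition into
# directional sub-bands, plus entropy weights used to pool per-sub-band
# quality scores into a final score. The CNN that produces the per-sub-band
# scores is omitted; the caller supplies them.
import numpy as np
import pywt

def subbands(img):
    """Return the four sub-band images of a grayscale image."""
    cA, (cH, cV, cD) = pywt.dwt2(img.astype(float), "haar")
    return {"LL": cA, "LH": cH, "HL": cV, "HH": cD}

def entropy(band, bins=256):
    """Shannon entropy of a sub-band, used here as its pooling weight."""
    hist, _ = np.histogram(band, bins=bins, density=True)
    p = hist[hist > 0]
    p = p / p.sum()
    return float(-(p * np.log2(p)).sum())

def pooled_score(img, band_scores):
    """Entropy-weighted average of per-sub-band quality scores."""
    bands = subbands(img)
    w = {k: entropy(v) for k, v in bands.items()}
    total = sum(w.values())
    return sum(w[k] / total * band_scores[k] for k in bands)

# usage:
# img = ...  # H x W grayscale array
# score = pooled_score(img, {"LL": 42.0, "LH": 55.0, "HL": 50.0, "HH": 61.0})
```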
8,251.6
2019-10-31T00:00:00.000
[ "Computer Science", "Engineering" ]
Simulation and Analysis of Outdoor Microcellular Radio Propagation Characteristics Based on the Method of SBR/Image In this paper, the outdoor microcellular radio propagation characteristics at 3.5 GHz are simulated and analyzed by the method of SBR/Image (shooting and bouncing ray tracing/image). A good agreement is achieved between the simulated results and the results given in the published literature, so the correctness of the method has been validated. Several simulated propagation parameters for LOS (line-of-sight) and NLOS (non-line-of-sight) scenarios are compared. The analysis of these results provides a foundation for the coverage planning of outdoor microcellular systems.

Introduction As mobile communication traffic has increased dramatically in recent years, microcellular systems [1]-[5] are needed to accommodate more users with a limited frequency resource. It is therefore important to predict propagation characteristics (such as path loss and RMS delay spread) for better wireless coverage of outdoor microcellular systems. Ray tracing techniques are usually employed to study radio wave propagation in outdoor microcellular environments. The most popular ray tracing techniques are the image method, brute-force ray tracing, the deterministic ray tube method, and the method of SBR/Image. The image method [6] is efficient because it does not require reception tests; however, the images of scatterers are difficult to find in complex environments. Brute-force ray tracing [7] can be used in complex environments, but it needs reception tests. The deterministic ray tube method [8] saves computer resources, but it needs to create a ray tree based on the actual environment. The SBR/Image method [9] can be used in complex environments, and it can find all propagation paths from the transmitter to the receiver with high accuracy and computational efficiency. This makes it a valuable method for predicting radio wave propagation.

Simulation Environment Figure 1 shows the top view of the simulation environment of [10], which is a rectangular area with dimensions 380 × 180 m. The relative permittivity ε_r and the conductivity σ are chosen as 3 and 0.005 S/m for the buildings, and 15 and 7 S/m for the ground. In the simulation, vertically polarized omnidirectional antennas with 0 dBi gain are used for both the transmitter and the receiver. The heights of the transmitter and receiver antennas are 25 m and 1.5 m, respectively. The frequency of the transmitted signal is 3.5 GHz, and the transmitted power is 0 dBm. The receiver trajectories include the line-of-sight street (A-C-B), the non-line-of-sight street (A-C-D), the parallel street (E-F), and the perpendicular street (G-H).
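Before presenting the results, the image-method idea underlying SBR/Image can be illustrated with the simplest possible configuration: a direct ray plus a single ground-reflected ray (the classical two-ray model), evaluated at the frequency and antenna heights listed above. The Python sketch below is our own illustration under a perfect-reflection assumption (reflection coefficient of −1); it is not the authors' simulator, which also traces rays reflected and diffracted by the building walls.

```python
# Two-ray (direct + ground-image) path loss sketch at 3.5 GHz with the antenna
# heights used in the paper. Reflection coefficient assumed to be -1.
import numpy as np

C = 3e8                 # speed of light, m/s
F = 3.5e9               # carrier frequency, Hz
LAM = C / F             # wavelength, m
HT, HR = 25.0, 1.5      # transmitter / receiver antenna heights, m

def two_ray_path_loss_db(d):
    """Path loss (dB) of the direct plus ground-reflected ray vs. ground distance d (m)."""
    r_direct = np.sqrt(d**2 + (HT - HR) ** 2)
    r_reflected = np.sqrt(d**2 + (HT + HR) ** 2)   # path via the ground image of the Tx
    k = 2 * np.pi / LAM
    # complex sum of the two ray contributions (amplitudes ~ 1/r)
    field = np.exp(-1j * k * r_direct) / r_direct \
            - np.exp(-1j * k * r_reflected) / r_reflected
    # normalize so that a single free-space ray reproduces the Friis path loss
    return float(-20 * np.log10(np.abs(field) * LAM / (4 * np.pi)))

if __name__ == "__main__":
    for d in (10, 50, 100, 200, 380):
        print(f"d = {d:4d} m : path loss ≈ {two_ray_path_loss_db(d):6.1f} dB")
```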
Simulation Results Figure 2 shows the signal path loss versus the distance between the transmitter and the receiver. A good agreement is achieved between the simulated results and the results given in the literature [10], so the correctness of our method has been validated. It is found that the signal path loss increases with distance, and it increases rapidly when the receiver moves along the NLOS street, because there is no direct ray and the diffracted rays become dominant. When the receiver moves along the LOS street A-C-B, the NLOS street A-C-D, the parallel street E-F, and the perpendicular street G-H, the path loss and the RMS delay spread are presented in Figure 3 and Figure 4, respectively. In Figure 3, the path loss of the LOS path shows the lowest decay because the direct ray is dominant. In the street A-C-D, the signal path loss increases rapidly when the receiver enters the NLOS region (C-D). The path loss also increases with distance when the receiver moves along the parallel street (E-F); however, it is clearly much larger than that of the LOS path. The plotted path loss curves of streets A-C-D and E-F agree closely for distances greater than 200 m. In Figure 3, the curve of street G-H first decreases and then increases: an obvious decrease of the path loss is observed when the receiver moves toward the crossroad.

The RMS delay spread is the square root of the second central moment of the power delay profile and is an important parameter for characterizing wide-band multipath channels. In this paper, the RMS delay spread of the four paths is plotted in Figure 4. The delay spread of the NLOS path increases rapidly compared with the LOS path, which means that the intersymbol interference (ISI) is larger than in the LOS region. When the receiver moves along the parallel street (E-F) and the perpendicular street (G-H), the delay spread presents sharp fluctuations because more obstructions hinder the radio wave from reaching the receiver compared with the LOS street.

The Doppler shift of each ray is given by

f_D = \frac{v}{\lambda}\cos\alpha,

where v is the receiver speed, \lambda is the wavelength, and \alpha is the angle between the direction of motion and the direction of arrival of the ray. So each path will cause a Doppler shift when the transmitter or the receiver is moving; therefore, the Doppler shift can be determined (assuming that the transmitter is fixed and the speed of the receiver is 1 m/s). The comparison of the Doppler shift when the receiver moves along the four streets is shown in Figure 5. The Doppler shift of the LOS path is very flat and its value is lower than −10 Hz. In the path A-C-D, the Doppler shift presents severe oscillations when the receiver enters the NLOS part (C-D), because there are more diffracted rays rather than a direct ray; it varies between −10 Hz and 0 Hz. The Doppler shift of the parallel street E-F shows sharp fluctuations for distances below 125 m; as the distance increases, the curve becomes flat because the mean direction of arrival varies little. The Doppler shifts in path A-C-B, path A-C-D, and path E-F are all negative because the receiver moves away from the transmitter. The Doppler shift of the perpendicular street G-H shows severe oscillations and takes positive values; it varies between −15 Hz and 10 Hz. The range of Doppler shifts in this simulation provides a theoretical foundation for the coverage of outdoor microcellular systems.

The angles \theta_A and \varphi_A give the direction from which a propagation path arrives at a receiver point. The direction of arrival is therefore given by the unit vector

(\sin\theta_A\cos\varphi_A,\ \sin\theta_A\sin\varphi_A,\ \cos\theta_A),

where the angle \theta is defined as the polar angle of the arrival direction measured from the vertical (z) axis, and the angle \varphi is defined as the azimuth angle measured in the horizontal plane. Figure 6 and Figure 7 show the distribution of the mean angle of arrival (angles \theta and \varphi) of all received points for two paths (the LOS path A-C-B and the perpendicular path G-H). In Figure 6(a), the mean angle of arrival (\varphi) in path A-C-B varies between 177° and 182° and is distributed around 180°. The corresponding angle \varphi in path G-H is shown in Figure 6(b); it varies between 27° and 334° and has a wider variation range compared with the mean angle of arrival in path A-C-B. In Figure 7(a), the mean angle of arrival (\theta) in path A-C-B varies between 58° and 87° and is distributed around 85°. The angle \theta in path G-H is shown in Figure 7(b); it varies between 79° and 87° and is distributed around 83°. The distribution of the mean angle of arrival (\theta) shows little difference between the LOS path A-C-B and the perpendicular path G-H.

Figure 1. Top view of the simulated environment. Figure 3. The comparison of the path loss of the four streets. Figure 4. The comparison of the delay spread of the four streets. Figure 5. The comparison of the Doppler shift of the four streets.

Conclusion In this paper, the method of SBR/Image is employed to study the radio wave propagation in an outdoor microcellular environment at 3.5 GHz. The simulated results show good agreement with the results in the literature, so the correctness of the method has been validated. The path loss curve of the LOS path is flat and increases slowly with distance, whereas the corresponding path loss in the NLOS street shows much higher attenuation. The delay spread of the NLOS street presents sharp fluctuations compared with that of the LOS path, which means that the intersymbol interference (ISI) is strengthened. The Doppler shifts of path A-C-B, path A-C-D, and path E-F are all negative because the receiver moves away from the transmitter; in the perpendicular street G-H, the Doppler shift shows severe oscillations and takes positive values. The mean angle of arrival (\varphi) in the perpendicular path G-H has a wider variation range compared with that in the LOS path A-C-B, while the distribution of the mean angle of arrival (\theta) shows little difference between the LOS path A-C-B and the perpendicular path G-H. The analysis of the above results provides a theoretical foundation for the coverage of outdoor microcellular systems.
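To make the definitions used above concrete, the short sketch below computes the RMS delay spread from a power delay profile and the Doppler shift of a single ray for a 1 m/s receiver at 3.5 GHz. It is our own illustration of the stated formulas, not part of the authors' SBR/Image code, and the ray data in the example are hypothetical.

```python
# Sketch: RMS delay spread from a power delay profile, and per-ray Doppler
# shift for a moving receiver (transmitter fixed, receiver speed 1 m/s).
import numpy as np

C, F = 3e8, 3.5e9
LAM = C / F

def rms_delay_spread(delays_s, powers_w):
    """Square root of the second central moment of the power delay profile."""
    tau = np.asarray(delays_s, float)
    p = np.asarray(powers_w, float)
    p = p / p.sum()
    mean_tau = np.sum(p * tau)
    return float(np.sqrt(np.sum(p * (tau - mean_tau) ** 2)))

def doppler_shift(v_mps, alpha_rad):
    """Doppler shift of one ray; alpha is the angle between the receiver's
    velocity and the ray's direction of arrival."""
    return v_mps / LAM * np.cos(alpha_rad)

if __name__ == "__main__":
    # hypothetical ray-tracing output for one receiver location
    delays = [0.4e-6, 0.7e-6, 1.2e-6]   # seconds
    powers = [1.0, 0.3, 0.05]           # relative linear power
    print("RMS delay spread [ns]:", rms_delay_spread(delays, powers) * 1e9)
    print("Doppler of a ray arriving head-on [Hz]:", doppler_shift(1.0, np.pi))
```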
1,934.8
2015-03-17T00:00:00.000
[ "Engineering", "Physics" ]
Dark-matter-free Dwarf Galaxy Formation at the Tips of the Tentacles of Jellyfish Galaxies When falling into a galaxy cluster, galaxies experience a loss of gas due to ram pressure stripping. In particular, disk galaxies lose gas from their disks, and very large tentacles of gas can be formed. Because of the morphology of these stripped galaxies, they have been referred to as jellyfish galaxies. It has been found that star formation is triggered not only in the disk, but also in the tentacles of such jellyfish galaxies. The observed star-forming regions located in the tentacles of those galaxies have been found to be as massive as 3 × 10^7 M⊙ and with sizes >100 pc. Interestingly, these masses and sizes agree with those of dwarf galaxies. In this work, we make use of the state-of-the-art magnetohydrodynamic (MHD) cosmological simulation IllustrisTNG-50 to study massive jellyfish galaxies with long tentacles. We find that, in the tentacles of TNG-50 jellyfish galaxies, the star formation regions (gas+stars) that form can be as massive as ∼2 × 10^8 M⊙. A particular star-forming region was analyzed. This region has a star formation rate of 0.04 M⊙ yr^−1, is metal-rich, has an average age of 0.46 Gyr, and has a half-mass radius of ∼1 kpc, typical of standard dwarf galaxies. Most importantly, this region is gravitationally self-bound. Overall, we identify a new type of dwarf galaxy being born from the gas tentacles of jellyfish galaxies that, by construction, lacks a dark matter halo.

INTRODUCTION Among the mechanisms that affect the evolution of galaxies as they enter galaxy clusters, ram pressure stripping (RPS; Gunn & Gott 1972) is believed to play an important role in removing the gas content and quenching the star formation of galaxies (Cortese et al. 2019). As a galaxy interacts with the intra-cluster medium (ICM), the latter exerts a pressure force on the galaxy's interstellar medium (ISM), stripping it away and sometimes removing it completely. Ram pressure is strongest close to the cluster center, where the density of the ICM is highest and the velocity of an infalling galaxy reaches its maximum. The net result of this mechanism is the loss of a galaxy's ISM, which in turn forms trails of gas in the direction opposite to the galaxy's motion. These trails have been observed at multiple wavelengths, tracing molecular and atomic gas (e.g., Jáchym et al. 2014, 2017, 2019; Moretti et al. 2018, 2020), as well as ionized gas appearing as Hα emission (e.g., Fossati et al. 2016; McPartland et al. 2016; Poggianti et al. 2019). Clumpy Hα can indicate the presence of active star formation outside the galaxy. The presence of young, massive stars in the tails has also been traced in the UV and optical bands (e.g., Kenney et al. 2019; George et al. 2018; Poggianti et al. 2019). The morphology these galaxies present while undergoing stripping led them to be dubbed "jellyfish galaxies". Long, one-sided tails that often point away from the cluster center are one of the tell-tale indicators of ram pressure. As RPS is a purely hydrodynamical force, only the gas component is directly influenced by it, while the stars remain largely unaffected, both from a geometrical and from a dynamical point of view. By the same token, the detection of a recently formed, young stellar component in the tails of these objects strongly suggests that this star formation occurs in situ within the stripped gas. The GASP survey (GAs Stripping Phenomena in galaxies with MUSE; Poggianti et al.
2017) has provided us with one of the most detailed views of the properties of this unique type of galaxy in the local (0.04 < z < 0.07) universe. Among the ∼60 RPS galaxies belonging to this sample, ionized gas powered by star formation has been detected in the tentacles, when the latter are present (Gullieuszik et al. 2020). Furthermore, in a few cases the superb resolution provided by ALMA has allowed the detection of CO at the same locations (Moretti et al. 2018, 2020), further supporting the idea that stars can form in this environment, outside of the galaxy. Among the works analyzing this issue in the framework of GASP, Gullieuszik et al. (2020) studied the star formation in the tentacles and found an average star formation rate (hereafter SFR) of 0.22 M⊙/yr per galaxy cluster. In addition, they estimated a total mass of stars formed in the tentacles of 4 × 10^9 M⊙ per galaxy cluster since z ∼ 1. Going into further detail, Poggianti et al. (2019) studied star formation in the tails by focusing on star-forming clumps and found that their stellar masses range from 10^5 to 3 × 10^7 M⊙ (with a median of 3 × 10^6 M⊙) and that their core radii range from 100 to 400 pc (with a median of 160 pc), similar in mass and size to ultra-compact dwarf galaxies (UCDs). Further studies from the same group, exploiting HST follow-up observations of galaxies that are extreme examples of stripping (Gullieuszik et al. 2023), characterized the star clumps as a population, finding that star formation in this environment is turbulence-driven, something that is found to be common in main-sequence galaxies as well (Giunchi et al. 2023).

The results of these observational works prompted us to study and analyze star formation in the tentacles of jellyfish galaxies detected in the IllustrisTNG-50 simulation, a state-of-the-art cosmological simulation. The main goal of this paper is to look for the presence of self-gravitating objects in the tails of RPS-affected galaxies whose masses and sizes resemble those of standard dwarf galaxies. The discovery of such objects in a magnetohydrodynamical simulation may allow us to better understand the sequence of events that leads up to the formation of such objects, and further support the hypothesis that ram pressure is a viable mechanism for the production of RPS dwarf galaxies (DGs), a secondary type of DG in the same family as the so-called tidal dwarf galaxies (TDGs), which are formed from the tidal debris of interacting galaxies. These RPS DGs, whose existence was already speculated upon in Poggianti et al. (2019), would be dark matter (DM) free by definition and would display properties that fall in the range observed between UCDs and standard dwarfs.

This article is organized as follows: In Section 2 we describe the TNG-50 simulation and our selection method to identify jellyfish galaxies. In Section 3 we present our results. Finally, we present our conclusions in Section 4.

SELECTION In this work we study the regions of star formation in the gas stripped from jellyfish galaxies at a cosmological time of z=0.1 in TNG-50 (Pillepich et al. 2019; Nelson et al. 2019b). In the following sub-sections we briefly describe the TNG-50 simulation and our sample selection.
The TNG simulations consist of a set of simulations with different domain sizes, and resolutions.In particular TNG-50 has a (51.7 Mpc) 3 volume box, with a baryonic mass resolution m b = 8.5 × 10 4 M ⊙ , and a DM mass resolution m DM = 4.5 × 10 5 M ⊙ .The minimum allowed adaptive gravitational softening length for gas cells (comoving Plummer equivalent) is ϵ gas,min = 74 pc and for the stars and DM ϵ DM, * = 288 pc. The most important physical ingredients included in the TNG simulations are: The gas radiative processes, the star formation in the dense interstellar medium, the evolution of the stellar population and the chemical enrichment from supernovae Ia, II, as well as from AGB stars (Nelson et al. 2019b). However, because of the resolution limit in TNG-50 there are physical processes like small-scale turbulence, thermal instabilities, and molecular cloud formation, which can not be explicitly modelled (Vogelsberger et al. 2013). The star formation is modelled with a density threshold (0.1 cm −3 ) as described in Springel & Hernquist (2003).In such star formation recipe the gas parcels are stochastically converted into star particles when their density is greater than n H = 0.1 cm −3 (Kennicutt 1983) on a time-scale proportional to the local dynamical time of the gas. Jellyfish galaxies have been studied before in the TNG-50 context.For example, Rohr et al. (2023) analyzed a set of first-infalling jellyfish galaxies in TNG-50, where they find that jellyfish galaxies are a significant source of cold gas accretion into the ICM.Moreover, Göller et al. (2023) confirmed that star formation can be triggered within the RPS tentacles of jellyfish galaxies despite of not finding an overall larger SFR in jellyfish galaxies, compared to their control galaxy sample. It is true that in some regions the media of the RPS tails could be more diffuse as compared to that of the inter-stellar medium (∼1 particle/cm 3 ).However, in our case the gas density in some regions of the tentacles of the main galaxy can be dense enough to reach the threshold imposed by TNG-50 to form stars.In this case then, the sub-grid recipes used in TNG-50 do not affect our results. In summary, the TNG-50 simulation combines a largescale volume with a high mass resolution, a powerful tool to study large star formation regions in the tails of jellyfish galaxies. Sample selection In this study the jellyfish galaxies are identified at a redshift z=0.1.We follow the criteria in Yun et al. (2019), which select satellite galaxies belonging to massive galaxy clusters, (hereafter FoF groups; Friends of Friends groups) with a total mass > 10 13 M ⊙ .They consider only those galaxies whose positions are located between > 0.25 R 200 and < R 200 from the center of the galaxy cluster (R 200 denotes the virial radius of the FoF group).The mass of the stellar component of the jel- lyfish candidate is selected to be > 10 9.5 M ⊙ , so that a minimum number of stellar particles (3 thousand) is assured, with the resolution of TNG-100.We consider only the galaxies whose total gas mass to total star mass ratio (M gas /M stars ) is greater than 0.01.Additionally, Yun et al. (2019) filter their sample by visual inspection requiring an asymmetric distribution of galaxy gas elongated in a preferred direction, and require no companion (interacting) galaxy to avoid tidal effects. 
At z = 0.1, there are 23 FoF groups that meet the jellyfish group mass criteria stated above.Applying the rest of the criteria, one ends up with 442 galaxies.Since we are interested in the star formation within the tails of jellyfish galaxies, from that total group of 442 galaxies we select only the most massive ones with total masses greater than 2.5 × 10 11 /h M ⊙ , since we want the most massive star formation regions (> 10 7 M ⊙ ).We end up with 23 galaxies, which have an average gas mass of 2×10 10 /h M ⊙ , an average stellar mass of 9×10 10 /h M ⊙ , an average DM mass of 5.3 × 10 11 /h M ⊙ , and an average total mass of 6.4 × 10 11 /h M ⊙ . From the 23 galaxy sample, we find that seven galaxies have a galaxy neighbour (or multiple neighbours) within 50 ckpc from their centers.We eliminate those galaxies, since we want to isolate the effect of RPS triggering large regions of star formation.The latter reduces the sample to 15 galaxies. In order to identify regions of star formation within the tentacles of jellyfish galaxies (massive enough to form dwarf galaxies), one must select those galaxies which have larger amounts of total gas outside their disks.Typically the jellyfish stage occurs at the moment when the RPS effect on a galaxy is maximum (i.e. when the galaxy falls into the cluster of galaxies).Since stars do not undergo the RPS effect, a measure of jellyfish-ness could be the ratio of the half-mass radius of the stellar component to the half-mass radius of the gas component.We computed the stellar to total gas half-mass ratio for all 15 galaxies from the sample and tagged as candidates those galaxies whose stellar to gas half-mass ratio r h, * /r h,gas < 1/3 (the gas distribution of the galaxy is extended beyond the disk). We found six galaxies which have gas half-mass radius three times larger than their stellar counterparts.But, only three of them present an extended distribution of gas.Those extended (tentacle-like) regions of gas are our targets to look for large regions of star formation outside the host galaxy.The further away the star forming regions in the tentacles of a jellyfish galaxy are from the host galaxy, the better chance star formation regions have to survive (tidal) gravitational interactions with their host galaxy.From those three galaxies with extended gas distribution, we looked for stellar and gas over-densities within their jellyfish tentacles and found evidence for their presence in one of those three galaxies.This galaxy is the object of our study. RESULTS We analysed the galaxy with ID 119447, that full-fills the criteria presented in Section 2. This galaxy has a total mass of 4.16 × 10 11 M ⊙ (with M DM = 3 × 10 11 M ⊙ , M gas = 2.3 × 10 10 M ⊙ , and M * = 8.65 × 10 10 M ⊙ ).It has a SF R = 2.14 M ⊙ /yr, and a half-mass radius of 19 kpc. 
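For clarity, the sample selection described in the previous section amounts to a short chain of catalog cuts. The sketch below is our own illustration of those cuts applied to a hypothetical satellite table; the column names (and the value of h) are assumptions for illustration, not actual TNG-50 field names or the group-catalog interface.

```python
# Illustrative sketch of the selection cuts described above, applied to a
# hypothetical structured array of cluster satellites at z = 0.1.
import numpy as np

H = 0.6774  # little h, used here only to convert the /h masses quoted in the text

def select_candidates(cat):
    """cat: structured array with one row per satellite galaxy (masses in Msun)."""
    m = np.ones(len(cat), dtype=bool)
    m &= cat["group_mass"] > 1e13                              # host FoF mass
    m &= (cat["r_cluster"] > 0.25 * cat["r200"]) & (cat["r_cluster"] < cat["r200"])
    m &= cat["m_star"] > 10**9.5                               # stellar mass
    m &= cat["m_gas"] / cat["m_star"] > 0.01                   # gas-to-star ratio
    m &= cat["m_total"] > 2.5e11 / H                           # most massive systems
    m &= cat["n_neighbours_50ckpc"] == 0                       # isolate RPS from tides
    m &= cat["rh_star"] / cat["rh_gas"] < 1.0 / 3.0            # extended gas: jellyfish-ness
    return cat[m]
```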
In Figures 1 and 2, we show maps of the stellar (yellowblue), and gas (pink-purple) distributions at six different evolutionary times (from top to bottom redshifts z=0.42, 0.26, and 0.18; and z=0.15, 0.12, and 0.1, respectively).The white arrows indicate the direction to the center of the cluster of galaxies, while the gray arrow indicates the direction of motion of the galaxy with ID 119447.Each map in Figures 1 and 2 come with three panels.The top panel shows the amount of gas (magenta), and stars (blue) as a function of time (given in redshift).The gray vertical line shows the redshift at which the galaxy maps are plotted.The middle panel shows the cluster-centric distance of the galaxy, as a function of time (given in redshift), the vertical gray line corresponds again to the redshift at which the galaxy maps are plotted.The bottom panel shows the X-Y orbit of the galaxy around the galaxy cluster center.The small white dot shows the center of the cluster, and the yellow star shows the position on the orbit of the galaxy of the galaxy, at the particular time when the galaxy maps are plotted.The white arrows in all panels represent the direction to the center of the galaxy cluster, and the gray arrows represent the direction of motion of the galaxy. The top panel of Figure 1 shows the galaxy when it reaches its first peri-cluster passage (infall) to the galaxy cluster, showing a slightly increase of stellar mass and a decrease in gas mass.We observe that the gas distribution is quite deformed due to RPS already at this time (z=0.42).The middle panel in Figure 1 shows the galaxy at its apo-cluster distance (z=0.26),where the galaxy changes direction (gray arrow) and starts to move towards the center of the galaxy cluster.Again the gas distribution appears quite disturbed.Figure 2 shows three snapshots of the galaxy moving in its orbit as it approaches a second infall.Already at z=0.15, the gas of the galaxy starts to be stripped (see pink lines in top subpanels of Figure 2).Between z=0.12 and z=0.1, we observe a major decrease in gas while the stellar component slightly increases.In the latter period of time a prominent tentacle is formed. At z=0.1 (bottom panel of Figure 2), the galaxy with ID 119447 presents clear gas and stellar over-densities.In the top panel of Figure 3, we show overlapped maps of neutral gas (green), SFR (rainbow), and DM (white).We observe that long (∼ 80 ckpc/h) tentacles of neutral (green) gas emerge outside the DM distribution of the galaxy 119447. In Figure 3, we circle in magenta a particular region at the tip of one prominent gas tentacle.Star formation is present along this tentacle.At the tip of the tentacle (see a close-up of tip region in question; A2 in the top panel of Figure 3), the SFR is ∼ 0.03 M ⊙ /yr, which is as large as the SFR at the outskirts of the disk of the galaxy 119447 (see dark blue-pink colors in the top panel of Figure 3).We also observe that this star formation region is quite extended. The magenta circle, at bottom panel of Figure 3 (B) shows the stellar mass map over-plotted to the DM mass map.The tip of the gas tentacle is present here as a stellar over-density. 
The fact that stars do not trace the tentacle, gives us a hint that the tentacle has a RPS nature, and that tidal effects (e.g.galaxy-galaxy gravitational interac- tions) were not involved.To make sure that in the formation of the tentacle of gas no tidal effects were involved, we computed the cluster-centric distance of our galaxy (ID 119447) from z=1.5 to z=0.1.Our galaxy experienced two infalls to the cluster.The first peri-cluster distance happens at z=0.42 (see top panel of Figure 1), and the second infall happens at z=0.05.Both infalls of the galaxy into the cluster, are characterized by only gas loss.No stellar stripping happens while our galaxy falls towards the cluster centre (see top right sub-panels in each panel of Figures 1 and 2). In addition in the time span analyzed and shown in Figures 1 and 2 (z=0.42-0.1), the galaxy with ID 119447 has not experienced a major merger.We conclude that the reduction in gas (but not in the stars) has a pure ram pressure nature.It is important to notice that the tentacle where we find the stellar and gas over-density lays well outside the DM halo of the galaxy (see white distribution in Figure 3). The region within the magenta circle shown in Figure 3, is a very promising candidate to be a dwarf galaxy born from a RPS tentacle formed while the galaxy with ID 119447 entered its cluster of galaxies for the second time. It has to be noted that few smaller stellar over densities appear linked to smaller gas tentacles.For example, in the top and middle panels of Figure 2 one can see in the bottom-left region a small stellar over-density linked to a thin and short gas tentacle. The Dwarf galaxy candidate We trace back the gas that forms the tentacle from where we find the stellar and gas over-density (highlighted with a white contour in Figures 1 and 2).In the first infall of the galaxy at z=0.42 (see top panel of Figure 1) the gas has been already stripped from the galaxy.Then, the stripped gas is blown across the galaxy by the change in the galaxy direction (see bottom panel of Figure 1).Later, the gas in the galaxy gets stripped again (see Figure 2) as it approaches the second infall (pericenter) in the opposite direction of motion.The middle and bottom panels of Figure 2, show the formation of the RPS tentacle. The formation of the tentacle and eventually the formation of the stellar over-density found at the tip of it, goes back in time since the first infall of the galaxy, where the galaxy has already had a jellyfish phase at z=0.42. We analyze in more detail the tip of the tentacle described in the previous subsection, located at a distance of 75 ckpc/h from the center of the galaxy with ID 119447.We built a map of stellar mass centered at the center of mass of this region, shown in the top panel (A) of Figure 4.In this Figure, the white circle represents 1.5 times the stellar-half-mass radius of the galaxy with ID 119447 (5.7 ckpc/h).In the bottom panel (B) of Figure 4, we show the gas mass map of the SF region within the magenta circle in Figure 3 (labeled as A2). 
In order to check whether this SF region with an overdensity in stellar mass is in fact an independent object, we have to make sure that the region is self-bound.We computed the total mass fraction of stars and gas as a function of the total energy of the substructure.We tagged the particles with total energies lower than zero, since they define a self-bound and independent object.The total stellar-self-bound mass is M * ,dwarf = 1.7 × 10 7 M ⊙ (253 stellar particles).The total gas-self-bound mass is M gas,dwarf = 2 × 10 8 M ⊙ (2036 gas particles).Then, our self-bound region has a total baryonic mass of 2.17 × 10 8 M ⊙ , and a gas fraction of ∼ 90%. Our assumption is that self-bound objects formed at the tips of jellyfish tentacles should be DM free by construction.To prove it to be true, we computed the number of DM particles within a radius of 5.7 ckpc/h (white circles in Figure 4).Within that volume we found 38 DM particles (≈ 1.7 × 10 7 M ⊙ ).We then computed the total energy associated to these DM particles, and found that none of them were bound; i.e.E tot,DM > 0. In order to study the star formation activity of the self-bound region, we built the instantaneous SFH as a function of the redshift1 .The SFH is presented in the bottom panel of Figure 4.There are clearly two star formation episodes.The first one peaks at z=0.15 (see top panel of Figure 2), when the gas from the main galaxy has piled up, as the galaxy moves forward from the apocenter and experiences RPS.The second episode peaks at z=0.11, when a prominent tentacle of gas starts forming (see middle panel of 2).The mean (mass weighted) age of the stars in our self-bound region is 0.46 Gyr.The next step was to build the cumulative mass profile of the particles belonging to our self-bound region (A2 of Figure 3).We computed a stellar half-mass radius r h, * ≈ 1 ckpc/h, and a gas half-mass radius r h,gas ≈ 1.45 ckpc/h.These sizes, together with the stellar mass and the gas mass of the self-bound region, are consistent with those of a "standard" dwarf galaxy.In view of these characteristics, our self bound-region could be classified as a new type of dwarf galaxy: a RPS dwarf galaxy, which by construction lacks of a DM halo. As a next step to a better characterization of our RPSdwarf candidate, we computed its cumulative mass profile, which we show as a blue line in Figure 5.We compare the mass distribution obtained, with the mass distribution of a Plummer sphere.The Plummer mass distribution relates the half-mass radius with the scale length (r h =1.3×a), therefore for our RPS-dwarf galaxy, a = 0.89 ckpc/h.We show the analytical Plummer mass distribution as a black dashed line in Figure 5.Our RPSdwarf galaxy presents a mass distribution that can be approximated by a Plummer mass distribution, as spherical systems (i.e.elliptical dwarf galaxies, spheroidal dwarf galaxies, globular clusters, etc.) do. A pertinent comparison of the properties of our RPSdwarf galaxy, would be with those of TDGs, since TDGs also form without the need of a DM halo, and their gas (and consequently their in-situ stars) is already enriched. 
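A minimal sketch of the boundedness test and structure measurements described above (and of the concentration parameter used in the comparison with TDGs below) is given here. It is our own simplified illustration: potential energies are obtained by direct pairwise summation in the candidate's rest frame, which is adequate for a few thousand particles, and the softening value and array layout are placeholders rather than TNG conventions.

```python
# Sketch: tag particles with total energy < 0 as self-bound, then measure the
# half-mass radius and the concentration C = 5 log10(r80/r20) of the bound set.
import numpy as np

G = 4.30091e-6  # gravitational constant in kpc (km/s)^2 / Msun

def total_energies(pos_kpc, vel_kms, mass_msun, softening_kpc=0.3):
    """Kinetic + gravitational potential energy per particle (direct summation)."""
    v = vel_kms - np.average(vel_kms, weights=mass_msun, axis=0)  # rest frame
    kinetic = 0.5 * mass_msun * np.sum(v**2, axis=1)
    dx = pos_kpc[:, None, :] - pos_kpc[None, :, :]
    r = np.sqrt(np.sum(dx**2, axis=-1) + softening_kpc**2)
    np.fill_diagonal(r, np.inf)                                   # no self-interaction
    potential = -G * mass_msun * np.sum(mass_msun[None, :] / r, axis=1)
    return kinetic + potential

def bound_mask(pos, vel, mass):
    return total_energies(pos, vel, mass) < 0.0

def radius_enclosing(pos, mass, center, fraction):
    """Radius enclosing the given fraction of the total mass."""
    r = np.linalg.norm(pos - center, axis=1)
    order = np.argsort(r)
    cum = np.cumsum(mass[order])
    return r[order][np.searchsorted(cum, fraction * cum[-1])]

def concentration(pos, mass, center):
    r20 = radius_enclosing(pos, mass, center, 0.2)
    r80 = radius_enclosing(pos, mass, center, 0.8)
    return 5.0 * np.log10(r80 / r20)

# usage (pos, vel, mass would be the star+gas particles of the candidate region):
# keep = bound_mask(pos, vel, mass)
# r_half = radius_enclosing(pos[keep], mass[keep], pos[keep].mean(axis=0), 0.5)
# c = concentration(pos[keep], mass[keep], pos[keep].mean(axis=0))
```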
Therefore, we compare the concentration of our RPSdwarf galaxy to the concentration of TDGs reported by Vega-Acevedo & Hidalgo-Gamez ( 2022).The concentration parameter is defined as C = 5×log(r(80%)/r(20%)), where r(80%) is the radius at which 80% of the stellar mass is contained.Analogously, r(20%) is the radius at which 20% of the stellar mass is contained.For our RPSdwarf galaxy we computed C = 5×log10(2.122/0.632)= 2.63.In Figure 5 we compare the concentration value of our RPS-dwarf galaxy, with those of the TDGs reported in Vega-Acevedo & Hidalgo-Gamez (2022).Even if the concentration value is high for our RPS-dwarf galaxy (magenta circle in Figure 5), it still has a very similar value to the TDG Arp305E (C = 2.4±0.4,and M * = 10 7 M ⊙ ) reported in Vega-Acevedo & Hidalgo-Gamez (2022). As we mentioned before, the half-mass radius of the stellar component of our RPS-dwarf galaxy is ∼ 1 ckpc/h.Then, we compare it with the TDGs sample of Dabringhausen & Kroupa (2013).In the bottom panel of Figure 5, we show the half-mass-radius as a function of the stellar mass.The magenta point represents the value for our RPS-dwarf galaxy, and the blue points show the corresponding values of the TDG's sample reported in Dabringhausen & Kroupa (2013). Regarding the rotation, TDGs can show strong velocity gradients which are consistent with rotation (Weilbacher et al. 2002;Bournaud et al. 2004Bournaud et al. , 2007)).However, it has to be noted that the TDGs for which rotation was reported, have a total mass at least an order of magnitude larger than our RPS dwarf galaxy.To compute the rotation profile of our RPS dwarf galaxy, we first identify the position angle (PA) of the rotation axis in the plane of the sky of our RPS dwarf galaxy.We followed the approach of Côté et al. (1995) (see also, Bellazzini et al. (2012) and Bianchini et al. (2013)).We computed a P A = 50 • and an amplitude (A) of 2.6 km/s, which gives an estimate of the internal rotation V r (Bianchini et al. 2013).Additionally, we computed the value of the velocity dispersion σ = 10.6 km/s for our RPS dwarf galaxy.The velocity dispersion is very similar to those simulated TDGs in Ploeckinger et al. (2015) which range from ∼ 15 to ∼ 6 km/s.Now, we can compare the ordered and the random motions in our RPS dwarf galaxy by computing the ratio V r /σ = 0.25, a value similar to those of large globular clusters (e.g.M 15 has a V r /σ = 0.23 Bianchini et al. (2013)).A system whose gravitational support is rotation-dominated has V r /σ > 1, for example V r /σ = 1 − 2 for the TDGs reported by Lelli et al. (2015). We proceed to compute the total SFR of our RPSdwarf galaxy, obtaining a mean SFR of 0.04 M ⊙ /yr.Using the panchromatic STARBurst IRregular Dwarf Survey (STARBIRDS;McQuinn et al. 2018A) McQuinn et al. (2018B) reported the SFRs and stellar masses of starburst and post-starburst dwarf galaxies.They compare the stellar mass of their starburst dwarf galaxies as a function of their SFR.Comparing the values of SFR and stellar mass of our RPS-dwarf galaxy, we find that our object would be classified as a starburst dwarf galaxy (Lee et al. 2011;McQuinn et al. 2018B). Moreover, Marasco et al. (2022) analysed a starburst dwarf galaxy sample from the DWarf galaxies Archival Local survey for Interstellar medium investiga-tioN (Dwalin; Cresci et al. 
in prep.), and showed the SFR as a function of the stellar mass of these galaxies.The SFR and stellar mass computed for our RPS-dwarf galaxy falls very close to two starburst dwarf galaxies: Tol65 and UM461. On the other hand, it is important to note that Poggianti et al. ( 2019) studied star forming regions within the tails of jellyfish galaxies, finding over 500 of them.Their SFR range from ∼ 0.007 to 1 M ⊙ /yr, and their stellar masses are in the range M * = 10 5 − 3 × 10 7 M ⊙ , and sizes between ∼ 100 and 800 pc.In Figure 6 we show the star forming regions reported by Poggianti et al. (2019) (blue dots), and over-plotted the value of our RPS-dwarf galaxy (magenta dot).It is important to note that our RPS-dwarf galaxy is located where the RPS clumps have the highest values of SFR, and its mass is consistent with the jellyfish clump's distribution. Since our RPS-dwarf galaxy is being formed from already enriched material, we expect its metallicity to be higher than the metallicity of "standard" dwarf galaxies with a similar stellar mass.As we mentioned before, a better comparison in metallicity would be with TDGs, since they are also formed from recycled material previously metal-enriched.Recci et al. (2015), compared the gas oxygen abundances of a sample of gas-rich dwarf galaxies (Lee et al. 2006) and a sample of TDGs (Duc et al. 2014;Boquien et al. 2010).They found that the oxygen abundance (12 + log(O/H)) for the TDGs sample is ∼ 8 − 9.Moreover, Sweet et al. (2014) identified TDGs with 12 + log(O/H) > 8.6. In the bottom panel of Figure 6 we compare the oxygen abundances of the gas-rich dwarf galaxies reported by Lee et al. (2006) (green dots), and the TDGs reported by Boquien et al. (2010) (blue dots), and Duc et al. (2014) (yellow dots).We computed the oxygen abundance for our RPS-dwarf galaxy, obtaining 12 + log(O/H) = 9.5 (magenta dot in the bottom panel of Figure 6), somewhat higher than those reported for star-forming "standard" dwarfs and TDGs.It is important to note that for the progenitor galaxy (ID 19447) 12 + log(O/H) = 10.15, from such enriched gas our RPS-dwarf galaxy is formed, explaining its high value of oxygen abundance.Yet this value of oxygen abundance for the progenitor galaxy is high when compared to star-forming galaxies.In particular, using the mass-metallicity relation of Tremonti et al. (2004), inferred from 53 thousand star-forming galaxies at z ∼ 0.1, for the progenitor galaxy (subhalo with ID 119447) whose stellar mass is 8.65 × 10 10 M ⊙ we obtain 12 + log(O/H) = 9.1, one dex lower than the value from TNG-50.Moreover, if we take the luminosity-metallicity relation of Tremonti et al. (2004) one computes a value of 12 + log(O/H) = 9.2.If the discrepancy of one dex is applied to our RPS-dwarf galaxy, we obtain a value of 12 + log(O/H) = 8.5 (black dot in the bottom panel of Figure 6), which matches the mass-metallicity region of TDGs in the bottom panel of Figure 6. We followed our detected RPS-dwarf galaxy forward in time (from z=0.1 to z=0).We found that it survives slightly more than 1 Gyr.After that, our RPS dwarf falls back to the main galaxy and starts to be tidally disrupted by it (at z=0).Despite its fate, if we take the definition of Bournaud (2010), according to whom a TDG is defined as a long lived object if its age is at least 1 Gyr, then our RPS dwarf galaxy would be classified as a long lived object. CONCLUSIONS Detail observations of SF regions in the tentacles of jellyfish galaxies (Moretti et al. 2018(Moretti et al. , 2020;;Poggianti et al. 
2019;Giunchi et al. 2023) have been able to identified individual SF regions along their tentacles.Such SF regions are observed to have stellar masses up to 3 × 10 7 M ⊙ , and have sizes up to 800 pc. If a SF region (triggered by RPS) within the tentacles of jellyfish galaxies is self-bound (with the latter physical parameters of mass and size), it could give rise to a new type of intracluster dwarf galaxy; a stripped purely baryonic dwarf galaxy (Kapferer et al. 2008;Poggianti et al. 2019).More recently, Werle et al. (2024) have studied the stellar populations of the star forming complexes in the tails of strongly RP-stripped galaxies, finding their characteristics to be consistent with those of the population of dwarf galaxies in clusters.We use the state of the art cosmological-magneto-hydrodynamic simulation TNG-50, to look if such dwarf galaxies exists in simulations, and if they could be born solely as a consequence of RPS. We analyzed a population of jellyfish galaxies in the cosmological simulation TNG-50, to look for most massive stellar over-densities within their long gas tentacles.We found a particularly large star forming region at the tip of a tentacle of the jellyfish galaxy with ID-119447. The stellar component of our RPS dwarf galaxy is made of 253 stellar particles, and 2036 gas particles.The latter number of particles guarantees that our RPS dwarf galaxy is well resolved (Joshi et al. (2021) classified the lower stellar mass limit of dwarf galaxies in TNG-50 as those with 120 stellar particles). We have to point out, that the resolution of TNG-50, prevent us to study smaller (< 0.5 ckpc/h) star forming objects in the tail of jellyfish galaxies (as independent self-bound objects), and the ones (well resolved) detected in the simulation would be biased to be larger.Still, that was our goal: Finding dwarf-galaxy sized objects formed from the RPS gas in jellyfish galaxies. We found that this region is self-bound, has a half-mass radius r h, * ≈ 1 ckpc/h, and comprises a population of gas and stars (M * ,dwarf = 1.7 × 10 7 M ⊙ and M gas,dwarf = 2 × 10 8 M ⊙ ). This dwarf-galaxy-like object has a concentration parameter C = 2.63, it has a smooth stellar massdistribution that can be approximated with a Plummer model.Its SFR has a value of 0.04 M ⊙ /yr, in agreement with the SFR values for star forming galaxies with similar stellar mass (McQuinn et al. 2018B;Marasco et al. 2022).It is metal rich according to TNG-50 (12 + log(O/H) = 9.8), or (12 + log(O/H) = 8.5) as derived from the metallicity-mass relation of Tremonti et al. (2004) for its stellar mass, and it does not have a DM component. 
The existence of such an object in the TNG-50 simulation, proves it is possible to form a dwarf galaxy similar in mass and size as those of standard dwarf galaxies, via RPS.In this work we imposed strict selection criteria to find the optimal galaxy from where a RPS dwarf galaxy could form (a SF region with an in-situ stellar population with M * >= 10 7 M ⊙ ), but many more RPS dwarfs might be present in TNG-50, and even more low-mass SF regions associated to the tentacles of jellyfish galaxies.As an example, there is a region in the bottom left side of the top panel of Figure 2 where an elongated gas structure can be identified showing at its tip, a stellar over-density.It could be a second RPS dwarf formed in this galaxy.Therefore, in a following paper we will study the formation of RPS-dwarf galaxies in a wider range of radius and masses and try to ascertain how commonly this new type of dwarf galaxy can arise, as well as study the dynamical evolution (and fate) of RPS-dwarf galaxies. Using a state of the art cosmological simulation, we corroborate the galaxy formation scenario, which forms dwarf galaxies via RPS: ram pressure stripped dwarf galaxies.These RPS dwarf galaxies are second-generation galaxies as they form from recycled (previously metal enriched) gas, and by construction do not have a DM halo.We find for the first time that these RPS dwarf galaxies could be as large as standard dwarf galaxies.Then, RPS dwarf galaxies would be a different class of dwarf galaxies, whose physical properties resemble those of standard dwarf galaxies. Fig. 1 . Fig. 1.-Maps of the stellar (yellow-blue), and gas (pink-purple) distributions at the evolutionary times z=0.42 (top panel), z=0.26 (middle panel), and z=0.18 (bottom panel).The white arrows indicate the direction to the center of the cluster of galaxies.The gray arrows indicate the direction of motion of the galaxy with ID 119447.The corresponding evolutionary time (given in redshift) is shown in the lower left corner of each panel.The three sub-panels located at the right side of The Figure: (top sub-panel) the gas (magenta) and stellar (blue) mass as a function of time (redshift).The middle sub-panel shows the cluster-centric distance of the galaxy as a function of time (redshift).The bottom sub-panel shows the X-Y orbit of the galaxy around the galaxy cluster center.The small white dot shows the center of the cluster, and the yellow star shows the position on the orbit of the galaxy, at the time where the galaxy maps are plotted. Fig. 3 . Fig. 3.-The top panel (A) shows overlapping maps of neutral gas (green), SFR (rainbow), and DM (white) of the galaxy with ID 119447 at z=0.1.In magenta we circle the ram pressure stripped dwarf galaxy candidate.Panel A2 shows a close-up of the location of the ram pressure stripped dwarf galaxy.The bottom panel (B) shows the overlapping maps of stellar mass (rainbow) and DM (white) of the galaxy with ID 119447.Again, the magenta circles the ram pressure stripped dwarf galaxy candidate. Fig. 4 . Fig. 4.-The top panel (A) shows the XY stellar mass map of the ram-pressure-stripped dwarf galaxy candidate.The middle panel (B) shows the XY map of the gas mass.In both upper panels (A and B) the white circle has a radius of ≈ 5.7 ckpc/h (1.5 times the stellar-half-mass radius of the galaxy with ID 119447).The bottom panel shows the smoothed stellar mass fraction of the rampressure-stripped dwarf galaxy, as a function of redshift, showing two main episodes of star formation at z ∼ 0.15 and z ∼ 0.11. Fig. 5 . 
Fig. 5.-The top panel shows the cumulative stellar mass fraction of the ram-pressure-stripped dwarf galaxy (blue line).The black dashed line is the analytical Plummer model with a scale factor a = 0.89 ckpc/h (see text).The middle panel shows the concentration parameter as a function of the total stellar mass, for the sample of TDGs reported in Vega-Acevedo & Hidalogo-Gamez (2022) (blue points).The concentration parameter for our rampressure-stripped dwarf galaxy is shown as a magenta point.The bottom panel shows the half-mass radius as a function of stellar mass in the TDGs sample reported in Dabringhausen & Kroupa (2013) (blue points).The magenta point shows the values of our ram-pressure-stripped dwarf galaxy. Fig. 6 . Fig. 6.-The top panel shows the SFR as a function of stellar mass.We plot the star formation clumps found by Poggianti et al. (2019) (blue dots) and compared those with the value of our RPS dwarf galaxy (magenta dot).The bottom panel shows the oxygen abundance as a function of stellar mass of all TNG-50 subhalos at z=0.1 (magenta dots), irregular galaxies from Lee et al. (2006)(green dots), TDGs sample (blue dots) from Boquien et al. (2010), TDGs sample (yellow dots) from Duc et al. (2014).We compare the latter values with the oxygen abundance of our RPS dwarf galaxy from the TNG-50 data (magenta dot), and from scaling the Tremonti et al. (2004)'s relation (black dot).
8,913.8
2024-04-08T00:00:00.000
[ "Physics" ]
Syntaxin 1B Mediates Berberine's Roles in Epilepsy-Like Behavior in a Pentylenetetrazole-Induced Seizure Zebrafish Model Epilepsy is a neuronal dysfunction syndrome characterized by transient and diffusely abnormal discharges of neurons in the brain. Previous studies have shown that mutations in the syntaxin 1b (stx1b) gene cause a familial, fever-associated epilepsy syndrome. It is unclear as to whether the stx1b gene also correlates with other stimulations such as flashing and/or mediates the effects of antiepileptic drugs. In this study, we found that the expression of stx1b was present mainly in the brain and was negatively correlated with seizures in a pentylenetetrazole (PTZ)-induced seizure zebrafish model. The transcription of stx1b was inhibited by PTZ but rescued by valproate, a broad-spectrum epilepsy treatment drug. In the PTZ-seizure zebrafish model, stx1b knockdown aggravated larval hyperexcitatory swimming and prompted abnormal trajectory movements, particularly under lighting stimulation; at the same time, the expression levels of the neuronal activity marker gene c-fos increased significantly in the brain. In contrast, stx1b overexpression attenuated seizures and decreased c-fos expression levels following PTZ-induced seizures in larvae. Thus, we speculate that a deficiency of stx1b gene expression may be related to the occurrence of clinical seizures, particularly photosensitive seizures. In addition, we found that berberine (BBR) reduced larval hyperexcitatory locomotion and abnormal movement trajectories in a concentration-dependent manner, slowed down excessive photosensitive seizure-like swimming, and assisted in the recovery of the expression levels of STX1B. Under the downregulation of STX1B, BBR's roles were limited: specifically, it only slightly regulated the levels of the two genes stx1b and c-fos and the hyperexcitatory motion of zebrafish in dark conditions, and it had no effect on the overexcited swimming behavior seen in conjunction with lighting stimulation. These findings further demonstrate that STX1B protein levels are negatively correlated with seizures and can decrease the sensitivity of the photosensitive response in PTZ-induced seizure zebrafish larvae; furthermore, STX1B may partially mediate the anticonvulsant effect of BBR. Additional investigation regarding the relationship between STX1B, BBR, and seizures could provide new cues for the development of novel anticonvulsant drugs.
INTRODUCTION Epilepsy is a chronic neurological disease with a high prevalence, characterized by spontaneous seizures, abnormal discharges in the brain, and convulsions. According to statistics, about 1% of the global population suffers from epilepsy, and among children roughly 1 in 200 are affected (Cowan, 2002; Poduri and Lowenstein, 2011). According to the International League Against Epilepsy 2017 Classification of Seizure Types (Basic Version), three major onset types exist: focal onset, generalized onset, and unknown onset, each of which can involve a motor type of seizure. Notably, hyperkinetic seizures have been specified as a subtype of motor onset under focal onset. Patients with motor onset usually suffer a sudden loss of consciousness and symptoms such as rigidity and convulsion. Furthermore, about 20% of epilepsy patients present additional mental health problems related to anxiety and sleep disturbance (Sillanpaa et al., 2016; Besag, 2018). Therefore, epilepsy is a serious social burden and a threat to patients' physical and mental health, and it often brings about considerable economic loss. Photosensitive epilepsy is triggered by visual stimuli and is accompanied by an abnormal electroencephalogram response known as a photoparoxysmal response (Fisher et al., 2005). In recent years, people have come into contact with ever more electronic devices, such as televisions, computers, cameras, and other similar items. Unfortunately, this growth in intermittent photic stimulation has greatly increased the occurrence of light-triggered epileptic seizures, and the incidence of photosensitive epilepsy is therefore also increasing (Poleon and Szaflarski, 2017), with approximately 5% of epilepsy patients being affected (Martins da Silva and Leal, 2017). A recent study employed gene sequencing to identify a chromodomain helicase DNA-binding protein 2 (CHD2) mutation as a cause of the archetypal generalized photosensitive epilepsy syndrome and found approximately five times as many CHD2 variants in photosensitive epilepsy patients as in controls (Galizia et al., 2015; Poleon and Szaflarski, 2017).
According to another study, bromodomain-containing protein 2 might be an underlying susceptibility gene for the photoparoxysmal response (Lorenz et al., 2006). However, despite the efforts of these investigations, the pathogenesis of photosensitive epilepsy is still unclear. Syntaxin 1b (STX1B) is a soluble N-ethylmaleimide-sensitive factor attachment protein receptor (SNARE) located in the presynaptic membrane that mediates the fusion of synaptic vesicles with the target membrane, promotes the release of neurotransmitters, and is expressed in the central nervous system (Sollner et al., 1993; Sudhof, 2013; Zhou et al., 2013). According to previous reports, mutation of stx1b is related to the onset of familial fever-associated epilepsy syndromes. In previous research, stx1b knockdown produced abnormal electrographic activity in zebrafish larvae under hyperthermic conditions (Schubert et al., 2014; Kearney, 2015). Clinical observation found that the presentation of myoclonic astatic epilepsy (MAE) was also related to variants or deletions of the stx1b gene, suggesting that STX1B should be closely considered in the diagnosis of MAE (Vlaskamp et al., 2016). Whether STX1B is involved in photosensitive epilepsy has, to our knowledge, not yet been reported. Berberine (BBR) is a natural compound extracted from the traditional Chinese herb Coptis chinensis and has long been known to be effective against diarrhea. Studies have shown that BBR also has potential therapeutic effects in diabetes (Zhang et al., 2010), hyperlipidemia (Kong et al., 2004; Kim et al., 2009), heart disease (Lau et al., 2001; Zeng et al., 2003), and inflammation (Choi et al., 2006; Lou et al., 2011). In addition, BBR was found to have a neuroprotective effect in multiple central nervous system diseases, such as Alzheimer's disease and epilepsy (Kulkarni and Dhir, 2010; Gao et al., 2014; Hussien et al., 2018). In one study, BBR notably improved cognitive behavior in a rat model of Alzheimer's disease and inhibited the formation of Aβ42, a main constituent of amyloid-β plaques associated with the neurodegenerative condition (Hussien et al., 2018). Another investigation reported that BBR increased the levels of both interleukin 1β and inducible nitric oxide synthase to mediate neuroprotective properties and ameliorated spatial memory impairment in a rat model of Alzheimer's disease (Zhu and Qian, 2006). In a kainate-induced temporal lobe seizure rat model, BBR significantly decreased the incidence of seizures (Mojarad and Roghani, 2014). Furthermore, in a pilocarpine-induced seizure rat model, BBR delayed both latency to the first seizure and time to the development of status epilepticus (Gao et al., 2014). However, few studies on the antiepileptic mechanism of BBR have been published to date. Pentylenetetrazole (PTZ) is a gamma-aminobutyric acid (GABA) receptor inhibitor (Macdonald and Barker, 1977) capable of blocking the inhibitory effect of GABA on neural activity and is often used in seizure models in rodents and zebrafish (Baraban et al., 2005; Stewart et al., 2012; Epps and Weinshenker, 2013; Grone and Baraban, 2015). A number of studies have presented epilepsy-like seizures in zebrafish via PTZ induction models over the past 10 years (Baraban et al., 2005; Ellis and Soanes, 2012; Stewart et al., 2012; Gupta et al., 2014; Rahn et al., 2014; Torres-Hernandez et al., 2015; Barbalho et al., 2016).
Referring to Baraban's research (Baraban et al., 2005), we established a zebrafish seizure model using PTZ and studied the zebrafish convulsive episodes under a dark condition and under lighting stimulation; using this model, the correlations of STX1B with seizures and the anticonvulsant effects of BBR were investigated. We found that BBR can promote the expression of STX1B directly or indirectly and alleviate epilepsy-like seizures, especially photosensitive seizures, in PTZ-induced seizure zebrafish larvae. Zebrafish Feeding and Care AB wild-type line zebrafish (Danio rerio) were obtained from the College of Life Sciences and Technology of Tsinghua University in Beijing, China. The zebrafish were raised under standard laboratory conditions with a 14-h light/10-h dark cycle at a temperature of 28.5°C ± 1°C (Kimmel et al., 1995). Zebrafish embryos and larvae were incubated in rearing water containing 280 mg/L Tropical Marine Artificial Seawater Crystal (CNSIC Marine Biotechnology Co., Ltd., Tianjin, China), with a conductivity of 450 to 550 µS. This research was reviewed and approved by the Laboratory Animal Management and Animal Welfare Committee at the Institute of Medicinal Biotechnology of the Chinese Academy of Medical Sciences. The zebrafish experimental protocols complied with the Ethics of Animal Experiments guidelines set by the Institute of Medicinal Biotechnology of the Chinese Academy of Medical Sciences. Microinjection Two stx1b morpholino oligos and a scrambled morpholino oligo were purchased from Gene Tools, LLC (Philomath, OR, United States). The two stx1b morpholino oligo sequences were as follows: 5′-GTGCGATCCTTCATTTTTCCCCGCC-3′ (stx1b-MO1) and 5′-AAATATCTCTTGAGATGTCCGCTGC-3′ (stx1b-MO2) (Schubert et al., 2014); these stx1b antisense oligos inhibit STX1B expression by binding to the stx1b initiation codon region. The scrambled morpholino oligo with a randomized 25-base sequence designed by Gene Tools (https://store.gene-tools.com/node/333; Philomath, OR, United States) was used as a nonsense control for stx1b-MO. As part of the present study, 0.5 nL of 50 µM stx1b-MO1 or stx1b-MO2 was injected into each embryo at the 1- to 4-cell stage, and the embryos were subsequently cultivated in the rearing water as described above. STX1B overexpression was induced via injection of 0.5 nL of pIRES2-stx1b-EGFP or pIRES2-EGFP (as a mock control) at a concentration of 60 ng/µL. The injected embryos at 5 days postfertilization (dpf) were collected for subsequent experiments. Chemical Treatment Berberine was obtained from the National Institutes for Food and Drug Control (Beijing, China). Valproate (VPA) (valproic acid sodium salt, P4543) and PTZ (P6500) were purchased from Sigma-Aldrich (St. Louis, MO, United States). For the seizure model group, we essentially followed the method described by Baraban et al. (2005). Briefly, zebrafish larvae at 7 dpf were exposed to a PTZ solution at a concentration of 2, 4, or 6 mM for 1 h and then collected for behavioral experiments, or for 2 h and then collected for in situ hybridization and western blotting experiments. Based on the results of the PTZ dose experiment, 4 mM PTZ was used for the subsequent experiments conducted in the PTZ-seizure-related groups. Each group contained 24 larvae.
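As a purely illustrative aside on the microinjection parameters stated above (0.5 nL of a 50 µM morpholino solution per embryo), the delivered amount of oligo follows from volume times concentration. The short Python sketch below is our own back-of-the-envelope arithmetic and is not part of the original protocol:

```python
# Illustrative calculation of the morpholino dose delivered per embryo,
# based on the injection parameters stated above (0.5 nL of 50 uM solution).
# This is a sanity-check sketch only, not part of the original protocol.

AVOGADRO = 6.022e23              # molecules per mole

injection_volume_l = 0.5e-9      # 0.5 nL expressed in litres
concentration_mol_per_l = 50e-6  # 50 uM expressed in mol/L

moles_per_embryo = injection_volume_l * concentration_mol_per_l   # mol
femtomoles_per_embryo = moles_per_embryo * 1e15                   # fmol
molecules_per_embryo = moles_per_embryo * AVOGADRO

print(f"{femtomoles_per_embryo:.1f} fmol per embryo")       # ~25.0 fmol
print(f"{molecules_per_embryo:.2e} molecules per embryo")   # ~1.5e10
```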
For the drug-treated groups, wild-type larvae and injected larvae at 5 dpf were exposed to BBR at a concentration of 25, 50, or 75 µM or to VPA at a concentration of 60, 120, or 240 µM for 2 days (until 7 dpf) after being washed three times with the normal rearing solution. Then, the larvae were exposed to 4 mM PTZ for 1 h and collected for behavioral experiments, or for 2 h and then collected for subsequent experiments, including whole-mount in situ hybridization and western blotting to detect c-fos and stx1b transcription and protein levels. Whole-Mount in situ Hybridization Sense and antisense RNA probes of the genes c-fos and stx1b were synthesized using a digoxigenin RNA labeling kit (1175025; Roche Applied Science, Penzberg, Germany) and complementary DNA fragment templates that were amplified using reverse transcription-polymerase chain reaction and inserted into a pGEM-T plasmid. The c-fos primer pair sequences were as follows: 5′-AACTGTCACGGCGATCTCTT-3′ (forward) and 5′-CTTGCAGATGGGTTTGTGTG-3′ (reverse) (Baraban et al., 2005). The stx1b primer pair sequences were as follows: 5′-GCAGCACCAAACCCTGATGAAA-3′ (forward) and 5′-CCTCCGATACTGGACCGCAAAA-3′ (reverse). Larvae were fixed with 4% paraformaldehyde overnight at 4°C before being stored in methanol at 4°C. Procedures for whole-mount in situ hybridization were performed as described by Whitlock and Westerfield (2000). Western Blotting For western blot analysis, total proteins were extracted from zebrafish larvae with a RIPA lysis kit (C1053; Applygen Technologies Inc., Beijing, China), separated using 12% sodium dodecyl sulfate polyacrylamide gel electrophoresis, and transferred to a nitrocellulose filter (T41524; PALL, Mexico). Protein blots were blocked with 5% milk in Tris-buffered saline for 1 h at room temperature and then incubated with antibodies against STX1B (1:1,000 dilution; 110 403; Synaptic Systems, Coventry, United Kingdom) and β-actin (1:2,000 dilution; A5441; Sigma-Aldrich, St. Louis, MO, United States). The blots were incubated with secondary antibodies (goat anti-mouse or goat anti-rabbit immunoglobulin G from ZSGB-BIO, Beijing, China) for 1 h and visualized with Immobilon Western chemiluminescent horseradish peroxidase substrate (Millipore, Billerica, MA, United States). Western blotting was performed three times in parallel. Behavioral Experiment All zebrafish swimming activity was analyzed at 7 dpf by the ZebraLab Video-Track system version 3.3 (ViewPoint Life Science, Montreal, QC, Canada). The zebrafish larvae were individually placed into the wells of a 48-well plate (1 fish/well). Locomotor distance, velocity, and swimming tracks were separately recorded under two kinds of conditions. In the first, the larvae stayed in a dark box and their swimming actions were recorded for 20 min, during which time the data and movement tracks were collected once every 2 min; a red trajectory indicated abnormal, highly active movement with an overspeed higher than 4 cm/s, and a green trajectory indicated active movement with a velocity between 0.2 and 4 cm/s. The second condition involved a shift experiment between dark and light, in which the zebrafish larvae were subjected to three cycles of 5-min dark and 10-s light periods, with data collected once every 10 s. The experimental procedure and pharmacological manipulations in this study are depicted in the flowchart in Figure 1.
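To make the velocity thresholds of the behavioral analysis described above concrete, the following Python sketch classifies individual swim-velocity readings into the movement categories used here (inactive, active between 0.2 and 4 cm/s, highly active above 4 cm/s) and summarizes one recording bin. The data layout and function names are hypothetical simplifications and do not reproduce the actual ZebraLab export format:

```python
# Minimal sketch of the velocity-based movement classification described above:
# velocities above 4 cm/s count as highly active ("red" trajectory), velocities
# between 0.2 and 4 cm/s as active ("green" trajectory). Input data are invented.

from typing import Dict, List

HIGH_ACTIVE_CMS = 4.0   # cm/s, threshold for seizure-like overspeed swimming
ACTIVE_CMS = 0.2        # cm/s, lower bound of normal active movement

def classify_velocity(v_cms: float) -> str:
    """Assign a single velocity reading to a movement category."""
    if v_cms > HIGH_ACTIVE_CMS:
        return "highly_active"
    if v_cms >= ACTIVE_CMS:
        return "active"
    return "inactive"

def summarize_bin(velocities_cms: List[float]) -> Dict[str, float]:
    """Fraction of readings per category within one 2-min recording bin."""
    counts = {"highly_active": 0, "active": 0, "inactive": 0}
    for v in velocities_cms:
        counts[classify_velocity(v)] += 1
    n = max(len(velocities_cms), 1)
    return {category: c / n for category, c in counts.items()}

# Example: one hypothetical 2-min bin of velocity readings for a single larva.
example_bin = [0.1, 0.5, 3.2, 4.8, 6.1, 0.0, 1.7, 5.3]
print(summarize_bin(example_bin))
```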
PTZ Induced a Zebrafish Epilepsy-Like Seizure Model and Suppressed Expression of the stx1b Gene in Zebrafish Larvae Brains Previous research has reported that the human stx1b gene is associated with familial fever-associated epilepsy syndromes and can rescue the function of stx1b knockdown in zebrafish (Schubert et al., 2014; Kearney, 2015). In the present study, we were interested in whether STX1B is also related to the seizures caused by PTZ and whether it mediates the effects of the antiepileptic drugs VPA and BBR in zebrafish. First, we set up a zebrafish seizure model using PTZ and confirmed the model by use of VPA. In this model, larval swimming distance, velocity, and abnormal trajectory were significantly increased in a PTZ dose-dependent manner and were aggravated particularly under the condition of a shift between dark and light (Figure 2A). VPA showed an obvious therapeutic effect on the seizure-like swimming; specifically, the PTZ-induced larval overspeed swimming was slowed down in a VPA dose-dependent manner under both the dark condition and the dark-light shift condition (Figure 2A). Then, we compared the homology between the human and zebrafish STX1B protein sequences. Each of these two STX1B proteins consists of 288 amino acids, with a positive ratio of 98% and an identity ratio of 96.8%; only 5 amino acids are dissimilar, and a further 4 differing amino acids have similar polarity (Figure 2B). Therefore, it can be speculated that both proteins may have similar biological functions. Western blotting confirmed that the STX1B protein was decreased by PTZ and increased by VPA in larvae (Figure 2C). In addition, in situ hybridization results showed that the stx1b gene was expressed mainly in the brain region and was clearly downregulated by PTZ and recovered by VPA in a dose-dependent manner (Figure 2D). This overexcited behavior was inversely related to the STX1B level. These results indicate that the STX1B level is negatively associated with PTZ-induced seizures in zebrafish and even more closely correlated with photosensitive seizures. Based on these results, we chose a PTZ concentration of 4 mM for our PTZ-induced seizure model and a VPA dose of 120 µM as a positive control in the following experiments. Level of stx1b Correlates Inversely With PTZ-Induced Seizure in Zebrafish Larvae Further, we investigated whether STX1B could affect PTZ-induced seizures, especially under lighting stimulation, using gene knockdown and overexpression methods. A stx1b overexpression plasmid and two stx1b morpholino oligos were separately injected into zebrafish embryos to upregulate or downregulate stx1b gene expression. When the zebrafish embryos injected with the stx1b morpholino oligos were exposed to PTZ, stx1b messenger RNA and protein levels were lower (Figures 3A,B) and the neuronal activity marker c-fos level was higher (Figure 3C) than in those larvae exposed only to PTZ or that received only a morpholino oligos injection. This suggests that the downregulation of STX1B combined with PTZ exposure worsened dysregulation of the two genes' expression. Additionally, behavioral experiments showed that the knockdown of stx1b aggravated the abnormal swimming pathway and velocity, but not the total average velocity and distance, in the dark condition and also intensified the overexcited behavior induced by PTZ under light stimulation, in comparison with the PTZ-only and morpholino oligos injection-only groups (Figure 3D).
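To illustrate how identity and positive (similarity) ratios such as those quoted above are derived from a pairwise protein alignment, the sketch below counts identical and chemically similar positions between two pre-aligned sequences. The short sequences and the simplified similarity grouping are hypothetical placeholders, not the actual 288-residue human and zebrafish STX1B proteins:

```python
# Toy illustration of percent identity and percent "positives" from a pairwise
# alignment. The sequences and the similarity grouping below are invented
# placeholders, not the real human/zebrafish STX1B proteins.

SIMILAR_GROUPS = [set("AVLIM"), set("FWY"), set("ST"), set("KRH"),
                  set("DE"), set("NQ"), set("GP"), set("C")]

def is_similar(a: str, b: str) -> bool:
    """True if two residues fall into the same (simplified) similarity group."""
    return any(a in group and b in group for group in SIMILAR_GROUPS)

def identity_and_positives(seq1: str, seq2: str):
    """Return (percent identity, percent positives) for two aligned sequences."""
    assert len(seq1) == len(seq2), "sequences must be pre-aligned"
    identical = sum(a == b for a, b in zip(seq1, seq2))
    positives = sum(a == b or is_similar(a, b) for a, b in zip(seq1, seq2))
    n = len(seq1)
    return 100 * identical / n, 100 * positives / n

human_like = "MKDRTQELRSAKDSDD"   # placeholder fragment
zebra_like = "MKDRTQELRTAKDSEE"   # placeholder fragment
ident, pos = identity_and_positives(human_like, zebra_like)
print(f"identity {ident:.1f}%, positives {pos:.1f}%")
```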
The behavioral changes between the wild-type group and the groups that underwent morpholino oligos injection without PTZ induction were minor or not observed, meaning that a partial deficiency of STX1B in wild-type larvae did not markedly affect their behavior. A scrambled MO used as a nonsense control for stx1b-MO showed no effects on the expression of stx1b and c-fos and also did not change larval swimming behavior in comparison with the uninjected and PTZ-induction groups (relevant data are supplied in the Supplementary Material). These results imply that the downregulation of STX1B probably promoted the onset of epilepsy-like seizures, particularly in the case of photic stimulation of the PTZ-treated zebrafish. To further verify these outcomes, we constructed a stx1b overexpression vector and injected it into zebrafish embryos. As shown in Figures 4A,B, in the group of PTZ plus stx1b overexpression, the levels of stx1b messenger RNA and protein were higher than in the PTZ-treated group; in addition, the expression of c-fos in the brain was significantly lower than in the PTZ-treated group (Figure 4C). FIGURE 2 | The amino acids shown in dark blue are identical, those in light blue have similar polarity, and those in white/light blue are different. (C) Western blotting tests indicated that STX1B protein was decreased by PTZ (upper) and increased by VPA (lower) in a concentration-dependent manner (n = 3). *P < 0.05 and ***P < 0.001 vs. wild-type; #P < 0.05, ##P < 0.01, and ###P < 0.001 vs. PTZ model; §P < 0.05, §§P < 0.01, and §§§P < 0.001 vs. wild-type in the light condition; P < 0.01, P < 0.01 and P < 0.001 indicated light vs. dark in the same set of conditions. (D) In situ hybridization results show stx1b gene expression in the larval brain inhibited by PTZ and rescued by VPA in a concentration-dependent manner (n = 20). FIGURE 3 | stx1b transcription (A) and STX1B protein (n = 3) (B) were reduced and c-fos gene transcription was increased in the larval (7 dpf) brain (n = 20) (C) by stx1b morpholino oligos injection in the PTZ model, as compared with the PTZ-only and the morpholino oligos-only injection models. The larval swimming experiment (n = 24) (D) showed that average speed and total distance were not changed, but that the abnormal pathway and overspeed were increased following 20 min in the dark condition and that photosensitive seizure was aggravated under the condition of light-dark transition with 5 min in the dark and 10 s in the light for three cycles in the PTZ plus stx1b morpholino oligos larvae, as compared with the two groups of the PTZ-only and the stx1b morpholino oligos-only injection models. The data show average speeds during the 20 min in the dark and the 10 s in the dark-light transformation; the boxes indicate the difference of locomotion distances and speeds between the light-dark transitions. Swimming tracks were recorded at 2 min in the dark condition; the red trajectory indicates overactive movement and the green trajectory indicates active movement. stx1b-MO1 and stx1b-MO2 were two morpholino oligos with different sequences binding to the stx1b messenger RNA initiation region; by using two target oligos, their inhibitory effects were mutually confirmed. ***P < 0.001 vs. wild-type; #P < 0.05 and ###P < 0.001 vs. PTZ model; §§P < 0.01 and §§§P < 0.001 vs. wild-type in the light condition; P < 0.01 and P < 0.001 indicates light vs. dark. Behavioral experiment
results showed that the overexpression of STX1B had no significant effect on the total distance and average velocity of the PTZ-injected zebrafish, but had a notable reducing effect on abnormal trajectory and overspeed locomotion in the dark condition (Figure 4D). Moreover, the overexpression of STX1B significantly slowed down the PTZ-induced larval overexcited response in the dark-light shift condition (Figure 4D). Those results confirm that the upregulation of STX1B alleviated the seizure, including in particular a photosensitive seizure, in PTZ-treated zebrafish, suggesting that the overexpression of STX1B might have a potential protective effect in a PTZ-induced seizure model. FIGURE 4 | Larval seizure-like behavior was reduced by an increased STX1B level in a PTZ-induced seizure model. (A) stx1b transcription in both the wild-type and PTZ models was enhanced in larval (7 dpf) brains by stx1b injection as compared with no injection and mock injection (n = 20). (B) Western blotting confirmed differential levels of STX1B protein among the variant groups; notably, the STX1B level was raised in the stx1b-PTZ group as compared with the PTZ-only group (n = 3). (C) c-fos messenger RNA was decreased by STX1B overexpression in the stx1b-PTZ model versus the PTZ model group or the mock-PTZ group (n = 20). (D) The larval swimming experiment showed that, with STX1B overexpression, average speed and total distance were not obviously changed but abnormal pathway and overspeed were significantly decreased with 20 min in the dark condition, while photosensitive seizure was inhibited under the condition of light-dark shift in the PTZ-model larvae as compared with the two groups of the PTZ-only model and the PTZ plus mock injection model (n = 24). The rectangles indicate differential responses between light-dark transitions in the three groups of the PTZ-model larvae. **P < 0.01 and ***P < 0.001 vs. wild-type; #P < 0.05 and ##P < 0.01 vs. PTZ model; §§P < 0.01 and §§§P < 0.001 vs. wild-type in the light condition; P < 0.001 indicates light vs. dark. Berberine Reduced the PTZ-Induced Seizure-Like Response by Promoting stx1b Gene Expression Previous studies have reported that the use of BBR significantly decreased the incidence of seizures in a seizure rat model (Mojarad and Roghani, 2014) and delayed both latency to the first seizure and time to the development of status epilepticus in a pilocarpine-induced seizure rat model (Gao et al., 2014). However, few studies on the anticonvulsant mechanism of BBR have been reported at this time. In this work, we were interested in researching whether the stx1b gene correlates with the BBR anticonvulsant effect. A larval swimming experiment was first performed, and the results showed that BBR reduced larval average velocity and total movement distance, including abnormal swimming track and overspeed, in the dark condition; in addition, BBR also more obviously alleviated the PTZ-induced overexcited response under the light stimulation condition in PTZ-induced zebrafish, in a dose-dependent manner (Figure 5A). FIGURE 5 | Western blotting results confirmed that BBR recovered STX1B protein levels to almost normal in a dose-dependent manner (n = 3). *P < 0.05, **P < 0.01, and ***P < 0.001 vs. wild-type; #P < 0.05, ##P < 0.01, and ###P < 0.001 vs. PTZ model; §P < 0.05, §§P < 0.01, and §§§P < 0.001 vs. wild-type in the light condition; P < 0.01 and P < 0.01 indicates light vs. dark.
In situ hybridization results showed that BBR inhibited the increase of the c-fos level induced by PTZ and promoted STX1B expression in a concentration-dependent manner in the brain of PTZ-treated larvae (Figure 5B). Furthermore, a western blotting test also confirmed that STX1B protein increased in a BBR concentration-dependent manner in PTZ-seizure larvae (Figure 5C). In these tests, the BBR effect at 75 µM was shown to be nearly similar to that of VPA at 120 µM. These results suggest that BBR probably has a therapeutic effect on PTZ-induced seizures in zebrafish. Therefore, we speculate that BBR might be able to suppress an epilepsy-like seizure by upregulating STX1B expression and also that the level of STX1B is associated with seizure outcome. STX1B Mediated the Therapeutic Effect of Berberine on PTZ-Induced Seizure in Zebrafish Since BBR is likely to suppress the onset of PTZ-induced seizures in zebrafish accompanied by an enhancement of STX1B expression, we evaluated whether BBR depends on the STX1B protein to play its anticonvulsant role in the zebrafish seizure model. In situ hybridization results showed that, under the stx1b morpholino oligos injection condition, BBR only moderately reduced the c-fos level in the brain region of PTZ-treated zebrafish (Figure 6A). Behavioral results revealed that BBR mildly attenuated the increase of the average velocity and total movement distance, including the abnormal trajectory and overspeed (clonus-like convulsions), in the group of PTZ plus stx1b morpholino oligos in the dark condition (Figures 6B,C), suggesting that stx1b knockdown made the inhibitory action of BBR markedly weaker than that in PTZ-only-treated zebrafish (Figure 5A). Moreover, BBR did not prevent an overexcited response in the light stimulation condition (Figures 6B,C). Subsequently, we studied the efficiency of BBR in activating STX1B expression under stx1b knockdown in the PTZ-treated larvae and found that BBR only slightly raised stx1b messenger RNA and protein levels in the PTZ plus stx1b morpholino oligos group, in which the STX1B level was lower than that in the stx1b morpholino oligos group and considerably lower than that in the normal control group (Figures 7A,B). Furthermore, a comparative data analysis between BBR treatments with and without stx1b morpholino oligos injection indicated that STX1B downregulation significantly weakened or even eliminated the efficiency of BBR in suppressing epileptic seizures, including abnormal trajectory and overspeed in the dark condition and STX1B protein levels (Figure 8), as well as photosensitive seizures (Figure 6), in the PTZ-induced seizure zebrafish. Considering the consistent trend between the STX1B level variation and the larval behavior results, we infer that STX1B is an important mediator of BBR's anticonvulsant action, in particular for the inhibition of photosensitive seizures, which may require proper STX1B expression. DISCUSSION STX1B is a synaptic fusion protein that is associated with the release of neurotransmitters, and mutations of the stx1b gene lead to familial fever-associated epilepsy syndromes in humans (Sudhof, 2013; Schubert et al., 2014). FIGURE 6 | Downregulation of STX1B weakened the anticonvulsant effects of BBR in the PTZ-induced seizure zebrafish model.
(A) In situ hybridization showed that there was a change in the c-fos messenger RNA level in the larval (7 dpf) brain that was induced by BBR in the PTZ plus stx1b morpholino oligos group versus the three control groups of wild-type, stx1b morpholino oligos injection, and PTZ plus stx1b morpholino oligos (n = 20). (B,C) STX1B downregulation attenuated the efficiency of BBR inhibition on larval overexcited locomotion in terms of speed and distance under non-stimulation conditions and eliminated the action of BBR under dark-light transitions. Swimming trajectories are presented in 2-min recording charts; red tracks indicate overactive locomotion, while the rectangles indicate the difference between light-dark transitions (n = 24). &P < 0.05, &&P < 0.01, and &&&P < 0.001 vs. PTZ plus stx1b morpholino oligos model; θP < 0.05, θθP < 0.01, and θθθP < 0.001 vs. stx1b morpholino oligos model; §P < 0.05 and §§§P < 0.001 vs. wild-type in the light condition; P < 0.01 and P < 0.01 indicates light vs. dark. Stx1b knockout mice (Stx1b−/−) demonstrated damaged glutamatergic and GABAergic synaptic transmission (Mishima et al., 2014), while Stx1b+/− mice exhibited a reduced release of GABA and a disturbance of the dopaminergic system in the central nervous system (Fujiwara et al., 2017). GABA is an important inhibitory neurotransmitter in the brain, and the roles of GABA and its receptor in epilepsy have been widely studied (Ferando and Mody, 2012). PTZ is a compound regularly used to trigger seizures in animal models; it selectively blocks GABA receptor channels and weakens GABA-mediated neurotransmitter systems, causing neurons to become overexcited (Soares et al., 2017). In this study, we used PTZ to establish a zebrafish seizure model and researched STX1B functions in epilepsy-like seizures, including photosensitive seizures. We found that PTZ induced a decrease in stx1b gene expression accompanying the aggravation of epilepsy-like seizures, and that an increase in STX1B and the alleviation of seizures were observed under treatment with the antiepileptic drug VPA. Moreover, stx1b knockdown made zebrafish more sensitive to PTZ than PTZ treatment alone did (Figure 3). FIGURE 7 | Stx1b morpholino oligos injection suppressed BBR activation of STX1B expression in the PTZ-model larvae. (A) In situ hybridization results show that a change of the stx1b messenger RNA level in the larval (7 dpf) brain was induced by BBR with stx1b morpholino oligos injection in the PTZ-model zebrafish, as compared with the wild-type, stx1b morpholino oligos injection, and PTZ plus stx1b morpholino oligos groups (n = 20). (B) Western blotting results indicated a change of the STX1B protein level similar to the change of the stx1b messenger RNA level under the same treatments (n = 3). &P < 0.05 vs. PTZ plus stx1b morpholino oligos model; θP < 0.05 and θθP < 0.01 vs. stx1b morpholino oligos model. This indicates that STX1B decline is closely related to PTZ-induced epileptic seizures and that STX1B might be a protein marker in a PTZ-induced seizure model for the screening of anticonvulsant drugs. The alignment of human and zebrafish STX1B protein sequences showed that the STX1B proteins have a high homology of 98% (Figure 2A) and that they possess the same structural domains of syntaxin and SNARE (https://blast.ncbi.nlm.nih.gov/Blast.cgi#alnHdr_66393091). Altogether, these results hint that STX1B may exert similar biological functions in zebrafish as in humans.
A photosensitive seizure is a kind of epileptic response to the visual stimuli of color and light. Triggers can include television and computer games, among many others (Martins da Silva and Leal, 2017). At present, the relationship between photosensitive epilepsy and the other genes involved is not very clear, in spite of bromodomain-containing protein 2 and CHD2 being known as likely susceptibility genes in photosensitive epilepsy (Lorenz et al., 2006; Galizia et al., 2015; Poleon and Szaflarski, 2017). However, the correlation of STX1B with photosensitive epileptic seizures has not been reported until now. Photosensitive epilepsy does not occur in only a single kind of epilepsy syndrome; it has also been found in juvenile myoclonic epilepsy, eyelid myoclonia (Jeavons syndrome), and Dravet syndrome (Poleon and Szaflarski, 2017). According to one study, photosensitivity was reported to occur in approximately 31% of those with juvenile myoclonic epilepsy (Wolf and Goosses, 1986). Photosensitive epilepsy usually occurs in adolescents: it is estimated that patients between the ages of 7 and 19 years are about five times more likely than those in other age groups to demonstrate the condition (de Bittencourt, 2004). Therefore, it could be argued that photosensitive epilepsy is a serious threat to the physical and mental health of teenagers. In this study, we explored the correlation between the STX1B level and photosensitive seizures under the condition of a dark-light shift in a PTZ-seizure zebrafish model. Our behavioral experiments show that PTZ treatment with stx1b knockdown made the larvae oversensitive to light stimulation, and the c-fos level (c-fos is recognized as a marker for neuronal activity, and the expression level of c-fos is positively correlated with the degree of epileptic seizure) (Baraban et al., 2005) in the zebrafish brain was significantly higher in the PTZ-treated zebrafish with stx1b knockdown than in the PTZ-only model group. In contrast, STX1B overexpression decreased larval overspeed swimming behaviors under light stimuli and suppressed c-fos expression in the zebrafish brain, as compared with the PTZ group. Therefore, we suppose that the STX1B protein can alleviate PTZ-induced photosensitive seizures. Despite there being no known reports of STX1B correlating with photosensitivity, several studies have implicated CHD2 in photosensitivity and have shown that CHD2 mutation is the first identified cause of the archetypal generalized photosensitive epilepsy syndrome, with CHD2 knockdown markedly increasing zebrafish larval photosensitivity (Galizia et al., 2015). FIGURE 8 | Comparative analysis of epilepsy-like seizure and STX1B protein levels between BBR with and without stx1b morpholino oligos injection in the PTZ-induced seizure zebrafish. (A) Behavioral comparison indicates that the antiseizure effect of BBR was weakened by stx1b morpholino oligo injection. (B) Comparison of STX1B protein levels induced by BBR between stx1b gene knockdown and non-knockdown. Western blotting showed that levels of STX1B protein were significantly decreased by stx1b morpholino oligo injection in the presence of BBR. The histograms are generated from the behavior and western blotting data in Figure 5, the behavior data in Figure 6, and the western blotting data in Figure 7. #P < 0.05, ##P < 0.01, and ###P < 0.001 vs. PTZ model; &P < 0.05 and &&P < 0.01 vs. PTZ plus stx1b morpholino oligos model. ϕP < 0.05, ϕϕP < 0.01, and ϕϕϕP < 0.001 indicate differences in the comparison between uninjected and morpholino oligo injection in the PTZ model under the same concentration of BBR, respectively.
According to other reports, chd2 gene mutations were described in MAE (Carvill et al., 2013; Thomas et al., 2015); at the same time, stx1b gene variants or deletions can also be involved in the etiology of MAE (Vlaskamp et al., 2016). Since both stx1b and chd2 gene mutations can lead to MAE, whether STX1B, like CHD2, is also related to photosensitive epilepsy remains an open question. MAE is an epilepsy characterized by the occurrence of myoclonic-atonic seizures, while myoclonic seizures are a typical symptom in the PTZ-induced seizure model (Frye and Muscatiello, 2001; Poplawska et al., 2015). In association with our results, these studies imply that STX1B may be associated with photosensitive epilepsy. However, the relationship between STX1B and the photosensitive response still needs further clinical study. Berberine was reported to have a protective effect on neurodegenerative and neuropsychiatric disorders with respect to its antioxidant and anti-inflammatory roles (Yoo et al., 2006, 2008; Sedaghat et al., 2017). Some studies have shown that BBR antagonized N-methyl-D-aspartate-induced excitotoxicity in gerbil hippocampal neurons (Yoo et al., 2008) and inhibited morphine-induced locomotor sensitization in mice (Yoo et al., 2006). Moreover, BBR attenuated repeated nicotine-induced behavioral sensitization by decreasing postsynaptic neuronal activation in rats (Lee et al., 2007). These findings suggest that BBR is probably involved in the inhibition of neuronal and locomotor overactivity, but published reports about the action of BBR in epilepsy remain scarce. In the present study, we found that BBR alleviated the overexcitation reaction and decreased the level of c-fos induced by PTZ, yet rescued the level of stx1b transcription suppressed by PTZ. When STX1B was downregulated, BBR's therapeutic effect on a photosensitive seizure was significantly reduced or eliminated, suggesting that BBR's inhibitory effect on a photosensitive seizure was dependent on the presence of the STX1B protein. We speculate that BBR may indirectly activate some transcription factors to enhance the expression of the stx1b gene. In summary, PTZ induces epilepsy-like seizures, including photosensitive seizures, in zebrafish, which may be partially mediated by STX1B deficiency. Adequate STX1B levels can slow down the hyperexcited locomotion induced by PTZ in zebrafish. BBR can suppress PTZ-induced seizures in zebrafish by raising STX1B levels. Further research on the relationship between STX1B, BBR, and seizures may provide new clues for the development of novel antiepileptic drugs. AUTHOR CONTRIBUTIONS J-PZ conceived and designed the project. Y-MZ and BC performed the experiments and treated the data. J-DJ provided substantial discussion for writing the manuscript. Y-MZ and J-PZ wrote the manuscript. FUNDING This work was supported by the CAMS Major Collaborative Innovation Project (No. 2016-I2M-1-011) and the Foundation for Innovative Research Groups of the National Natural Science Foundation of China (No. 81621064). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
8,835.4
2018-11-26T00:00:00.000
[ "Medicine", "Biology" ]
Understanding Microbial Multi-Species Symbioses Lichens are commonly recognized as a symbiotic association of a fungus and a chlorophyll-containing partner, either green algae or cyanobacteria, or both. The fungus provides a suitable habitat for the partner, which provides photosynthetically fixed carbon as an energy source for the system. The evolutionary result of the self-sustaining partnership is a unique joint structure, the lichen thallus, which is indispensable for fungal sexual reproduction. The classical view of a dual symbiosis has been challenged by recent microbiome research, which revealed host-specific bacterial microbiomes. Recent results about bacterial associations with lichen symbioses corroborate the notion of lichens as a multi-species symbiosis. Multi-omics approaches have provided evidence for functional contributions by the bacterial microbiome to the entire lichen meta-organism, while various abiotic and biotic factors can additionally influence the bacterial community structure. Results of current research also suggest that neighboring ecological niches influence the composition of the lichen bacterial microbiome. Specificity and functions are reviewed here based on these recent findings, converging to a holistic view of bacterial roles in lichens. Finally, we propose that the lichen thallus has also evolved to function as a smart harvester of bacterial symbionts. We suggest that lichens represent an ideal model to study multi-species symbiosis, using the recently available omics tools and other cutting-edge methods. INTRODUCTION Twenty years after the theory of evolution by natural selection started to revolutionize biology, the German mycologist Anton de Bary introduced the term symbiosis to the broader scientific community as a living together of dissimilar organisms (de Bary, 1879). One of his prominent examples was lichens, even though their symbiotic nature, revealed earlier by Schwendener (1869), was hardly accepted at that time. Scientific peers still considered them an independent group of organisms with a unique morphology. Meanwhile, every biology textbook includes lichens as an obligate association between a fungal partner (mycobiont) and a photosynthetic partner (photobiont), which can be either cyanobacteria and/or green algae (Nash, 2008). By this association, the photobiont's production of energy via carbon dioxide fixation is enhanced by the sheltering structures of the exhabitant fungal partner. The joint structure, also known as the lichen thallus, is unique and one of the most complex vegetative structures in the entire fungal kingdom. The lichen thallus evolved as early as terrestrial plant life, as the first ancestors of lichens with characteristic morphology can be traced back to the Devonian, 400 million years ago (Remy et al., 1994; Honegger et al., 2013). In this paper, we will show that lichens are not merely a partnership involving two unrelated organismal groups, but include a so far largely neglected bacterial component, which contributes to the biology of the holobiont. We will start with some general aspects of lichen ecology and will then continue with an outline of how modern analytical tools are used to understand lichens as a fascinating case of multi-species symbiosis.
The successful fungal symbiosis, which comprises more than 18,000 named species of fungi, is characterized by a poikilohydric lifestyle, which enables lichens to colonize almost all terrestrial environments, ranging from tropical to polar climatic zones and from coastal to high-altitude habitats. In addition, lichens grow on the surface of almost every kind of substrate including bare soils, rocks, and plants, but they can also be found in freshwater streams and in marine intertidal zones (Nash, 2008), as well as on various man-made material surfaces. The vegetative bodies vary in color, size (a few millimeters to meters), and growth form, and some may persist for several thousand years (Denton and Karlén, 1973). The wide variety of lichen thallus structures, which are primarily determined by the fungal partner, can be roughly divided into the three most common morphological types: crustose, foliose, and fruticose growth forms. Other types exist, but are less frequent (Grube and Hawksworth, 2007). Internally, the vegetative body is either homoiomerous (without stratification), where the mycobiont and photobiont are evenly distributed in the lichen thallus, or heteromerous (with stratification), where at least a fungal upper layer and an algal layer underneath can be distinguished. Crustose lichens are characterized by the attachment of the entire lower surface to the substrate, whereas foliose and fruticose lichens are only partially attached (Büdel and Scheidegger, 2008), and usually have a more or less dense lower fungal layer. Sexual reproduction of the fungal partner requires the development of the species-specific thallus with appropriate algae, since fungal fruit-bodies arise directly in the mature lichen thallus and often incorporate thallus structures. Nevertheless, lichens have also evolved various means of asexual reproduction to disperse the symbiotic partners together in diverse and specific joint propagules (Büdel and Scheidegger, 2008). Even though the literature continues to report on antibacterial or antifungal compounds from lichens (reviewed in Boustie and Grube, 2005), the long-lived thalli provide interesting microhabitats for other eukaryotic and prokaryotic (both bacteria and archaea) microorganisms (Lawrey and Diederich, 2003; Bjelland et al., 2011; Bates et al., 2012). In previous years, attention was increasingly paid to lichen-associated bacteria, which had not been recognized as an integral part of the symbiosis. In this review we discuss recent literature on lichen-associated microbiota with a focus on diversity, functions, dispersal, habitat specificity, and inter-microbiome relations of the Lobaria pulmonaria-associated bacterial community, and we conclude with an outline to promote a holistic view of lichen-bacteria interactions. In the first part we review historic aspects and then discuss recent results to develop a more holistic lichen model. UNRAVELING THE LICHEN-ASSOCIATED MICROBIOME - THEN AND NOW Bacteria associated with lichens were initially mentioned in the first half of the 20th century (Uphof, 1925; Henkel and Yuzhakova, 1936; Iskina, 1938). During these early studies various bacterial genera were reported to be associated with lichens, such as Azotobacter, Pseudomonas (Gammaproteobacteria), Beijerinckia (Alphaproteobacteria), and the Firmicutes genera Bacillus and Clostridium (Iskina, 1938; Panosyan and Nikogosyan, 1966; Henkel and Plotnikova, 1973).
At that time, descriptions of bacteria were based solely on phenotypic and physiological characterizations, indicating a possible role in nitrogen fixation for some of these bacteria. Nevertheless, Lenova and Blum (1983) already estimated that millions of bacterial cells per gram could colonize a lichen thallus. Several decades passed before the first molecular analyses of bacterial isolates started (e.g., González et al., 2005; Cardinale et al., 2006; Liba et al., 2006; Selbmann et al., 2009). While González et al. (2005) only focused on culturable Actinomycetes (with Micromonospora and Streptomyces as predominant genera) of various lichen species from tropical and cold areas, Cardinale et al. (2006) attempted to describe the overall bacterial community composition associated with seven different lichen species from temperate habitats. The latter enabled the identification of several genera affiliated with Firmicutes, Actinobacteria, and Proteobacteria, highlighting Paenibacillus and Burkholderia as ubiquitous genera in lichens. However, culture-dependent methods capture only 0.001-15% of the bacterial diversity in environmental samples (Amann et al., 1995), whereas the majority remains unobserved (Rappé and Giovannoni, 2003). To overcome the limitations of selective bacterial isolation from environmental samples and to obtain a more unbiased and less restricted view of the microbial communities, new techniques were employed to complement the traditional methods. The first culture-independent investigations of lichen-associated microbiota were performed with different fingerprinting methods (Cardinale et al., 2006; Bjelland et al., 2011; Mushegian et al., 2011; Cardinale et al., 2012a) and molecular cloning approaches (Hodkinson and Lutzoni, 2009). Such techniques (e.g., DGGE: Muyzer and Smalla, 1998; T-RFLP: Liu et al., 1997; SSCP: Schwieger and Tebbe, 1998) were used to generate microbial community profiles by amplifying genetic markers (e.g., 16S ribosomal DNA) with universal primers. Based on sequence or length polymorphisms, PCR products are separated, and the degree of sample similarity can be characterized according to the specific band patterns (Smalla et al., 2007). Although many samples can be analyzed in parallel and their profiles can be compared with each other easily, the detailed identification of the bacterial community members is tedious and limited. Margulies et al. (2005) introduced a new time-saving and cost-efficient technology to study the community composition and diversity of environmental samples in depth by large-scale high-throughput sequencing. Bates et al. (2011) described lichen-associated bacteria for the first time based on this next-generation pyrosequencing technology, followed by Grube et al. (2012), Hodkinson et al. (2012), and Aschenbrenner et al. (2014). With the improvement of sequencing technologies and bioinformatics tools, the focus in microbial ecology research shifted from basic taxonomic descriptions to a more detailed and holistic view of microbial communities. Metagenomic, transcriptomic, and proteomic analyses can now shed light on the questions "Who is there?", "What are they capable of?", and "Who is actively doing what?" (Schneider et al., 2011; Aschenbrenner, 2015; Grube et al., 2015). To address these questions, the lung lichen L. pulmonaria (L.) Hoffm.
was used as a model system due to its relatively fast growth and other facilitative characteristics, e.g., epiphytic growth on tree bark and a low number of secondary metabolites that could interfere with the conducted analyses. L. pulmonaria is characterized by a leaf-like structure (foliose lichen) and is mainly found in old-growth forests with unpolluted air. Its sensitivity to air pollution can be employed for indirect evaluations of air quality and ecosystem integrity (Scheidegger and Werth, 2009). It harbors two photosynthetic partners, a phenomenon observed for approximately 4% of all described lichens (Honegger, 1991). However, only the green alga Dictyochloropsis reticulata forms a continuous layer, whereas cyanobacterial Nostoc strains are maintained in spaced, nodule-like internal compartments (cephalodia). COMPOSITION AND DIVERSITY OF THE LICHEN-ASSOCIATED MICROBIOME DRIVEN BY VARIOUS ABIOTIC AND BIOTIC FACTORS The amount of bacteria found on lichens is surprisingly high in relation to the surfaces of higher plant foliage. While a leaf surface comprises only 10⁵ cells/cm², some lichen species analyzed for bacterial abundance exceed this value dramatically (Saleem, 2015). For example, Cladonia rangiferina is colonized by approximately 10⁷-10⁸ bacteria per gram of lichen thallus (Cardinale et al., 2008). Moreover, alpha diversity indices (Shannon index) of bacterial communities were shown to vary between different lichens, e.g., from on average 4.5 (Solorina crocea) to 7.0 (L. pulmonaria) at a genetic distance of 3% among the microbial OTUs based on 16S rRNA gene sequence dissimilarity (Aschenbrenner et al., 2014). L. pulmonaria is mainly colonized by Alphaproteobacteria, with Sphingomonadales as the predominant order, followed by Sphingobacteria, Actinobacteria, and Spartobacteria (Aschenbrenner et al., 2014). Contrarily, shotgun sequencing-based studies suggested Rhizobiales as the main order within Alphaproteobacteria (Erlacher et al., 2015; Grube et al., 2015). These results were additionally confirmed with adapted visualization techniques. Thereby, the predominance of Alphaproteobacteria and Rhizobiales on lichen surfaces was shown with a combined approach of fluorescence in situ hybridization (FISH) and confocal laser scanning microscopy (CLSM). Related to these findings, the lichen-associated Rhizobiales group (LAR1) was reported to be a lichen-specific lineage of Alphaproteobacteria, which can be found among many examined species (Hodkinson and Lutzoni, 2009; Bates et al., 2011; Hodkinson et al., 2012). However, this lineage could not be detected in L. pulmonaria (Aschenbrenner et al., 2014). The observed compositional differences within the same lichen species can be attributed to various reasons such as the metagenomic sequencing approach (amplicon vs. shotgun sequencing), the utilized databases, or the activity of the bacteria in the case of metatranscriptomic analysis (Aschenbrenner, 2015), since less than 10% of a microbial community is metabolically active at any one time (Locey, 2010). While the predominance of Alphaproteobacteria was also reported in other studies (Bates et al., 2011; Hodkinson et al., 2012), the bacterial community composition in general differed among lichen species. These variations are supposed to be driven by various biotic and abiotic factors.
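For readers less familiar with the alpha-diversity values quoted above, the Shannon index H is obtained from the relative abundances p_i of the OTUs as H = -Σ p_i ln(p_i). The following minimal Python sketch shows the calculation; the OTU counts are invented for illustration only, and real lichen communities comprise hundreds to thousands of OTUs, which is how averages around 4.5-7.0 can arise:

```python
# Shannon diversity index H = -sum(p_i * ln(p_i)) over OTU relative abundances.
# The OTU read counts below are invented for illustration only.
import math

def shannon_index(otu_counts):
    """Shannon index (natural log) from a list of OTU read counts."""
    total = sum(otu_counts)
    proportions = [count / total for count in otu_counts if count > 0]
    return -sum(p * math.log(p) for p in proportions)

# Hypothetical community: a few dominant OTUs plus a long tail of rare ones.
counts = [500, 300, 150, 80, 40] + [5] * 200
print(f"H = {shannon_index(counts):.2f}")
```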
Hodkinson et al. (2012), who thoroughly studied the bacterial communities associated with various lichen species comprising 24 mycobiont types with all photobiont combinations from different sampling locations (tropical and arctic regions), highlighted the photobiont type (chlorolichens vs. cyanolichens) and large-scale geography as the main driving forces. Hodkinson et al. (2012) argued that the differences in community composition could be ascribed to both the availability of fixed nitrogen and the type of fixed carbon. Regarding the first, bacteria associated with cyanolichens have access to fixed atmospheric nitrogen due to the cyanobacterial photobiont, whereas those of chlorolichens lack this benefit in nitrogen-restricted environments. Accordingly, chlorolichens, rather than cyanolichens, would preferentially enrich species capable of nitrogen fixation. Another suggestion was that green algae release different types of fixed carbon (sugar alcohols: ribitol, erythritol, or sorbitol) than cyanobacteria (glucose; Elix and Stocker-Wörgötter, 2008), thereby shaping the bacterial community with respect to carbon utilization. Both explanations can only partly explain community differences based on taxonomic descriptions, as bacteria can exchange and share genes encoding certain functions via horizontal gene transfer. This agrees with Burke et al. (2011), who argued that ecological niches are colonized randomly by bacteria equipped with suitable functions rather than following bacterial taxonomy. The attempt to explain observed community compositions becomes more complicated with regard to tripartite lichens, as they carry both types of photobionts, as is the case in L. pulmonaria. Species specificity of bacterial communities associated with chlorolichens was already indicated in previous studies (Bates et al., 2011). Lichenized fungi are able to produce secondary metabolites, which are unique to lichens and comprise several hundred compounds that can be deposited on the extracellular surface of the fungal hyphae (Elix and Stocker-Wörgötter, 2008). As already suggested by Hodkinson et al. (2012), the considerable fraction of secondary metabolites with antimicrobial activities (Kosanić and Ranković, 2015) might exert a selective pressure on lichen-colonizing bacteria as well. However, as L. pulmonaria contains only low concentrations of lichen-specific substances, like many other lichens of the suborder Peltigerineae (Beckett et al., 2003), secondary metabolites might play only a minor role in shaping the community structure of Lobaria-associated bacteria. Differences in bacterial community composition might also be due to the lichen growth type; for instance, previous studies reported that the bacterial community compositions of crustose lichens differed from those of foliose or fruticose lichens (Hodkinson et al., 2012). While the foliose lichens were mainly colonized by Alphaproteobacteria, the crustose lichen Ophioparma sp. was dominated by Acidobacteria (Hodkinson et al., 2012). Another rock-inhabiting crustose lichen, Hydropunctaria sp., was mainly colonized by Cyanobacteria, Actinobacteria, and Deinococcus (Bjelland et al., 2011). But growth type on its own does not explain the predominance of certain taxa, since the foliose lichen Solorina sp. was also dominated by Acidobacteria. This agrees with previous results of Cardinale et al. (2012b), who showed that growth types do not affect the main bacterial community structure.
BACTERIA ARE SPATIALLY STRUCTURED ON LICHENS Thallus sub-compartments of varying age as well as external and internal surfaces offer chemically and physiologically distinct micro-niches and facilitate the formation of various distinct bacterial communities. Based on FISH and CLSM, the lichen-associated eubacteria as well as specific bacterial taxa therein were demonstrated to colonize distinct lichen thallus parts in different abundances and patterns (Cardinale et al., 2008). Confocal laser scanning microscopy of the L. pulmonaria surfaces showed that both the upper and the lower cortices were evenly colonized by Alphaproteobacteria among other eubacteria (Cardinale et al., 2012a; Grube et al., 2015). This was also demonstrated for other dorsiventrally organized lichen thalli such as the leafy Umbilicaria sp. In the case of the shrubby species Cladonia, the outer cortex of the radially organized hollow thallus (podetium) was merely colonized by single-cell colonies and smaller colony clusters, while the highest bacterial density examined on this lichen was found on the internal layer of the podetia, forming a biofilm-like coat (Cardinale et al., 2008, 2012b). Contrarily, bacterial colonization on crustose lichens such as Lecanora sp. was distinctly higher in the cracks between the areoles of the thalli. There were also first indications of endobiotic bacteria within the cell walls of fungal hyphae (Cardinale et al., 2008). Erlacher et al. (2015) previously reported endosymbiotic Rhizobiales in L. pulmonaria, localized at varying depths of the interhyphal gelatinous matrix of the upper cortex and seldom in the interior of fungal hyphae. So far, there is no documentation of bacterial growth in other compartments of L. pulmonaria such as the internal thalline tissue (medulla) or the photobiont layer. The age states in a mature lichen thallus might influence and shape bacterial community structure, which resembles the community succession found, e.g., in the apple flower microbiome (Shade et al., 2013). A recent study has shown that the vegetative propagules of L. pulmonaria were colonized by a more distinct bacterial community than the mature lichen thallus (Aschenbrenner et al., 2014), indicating that the community structure might change over time. In detail, only 37% of thallus-associated bacterial OTUs were shared with the vegetative propagules; conversely, shared OTUs associated with the propagules comprised 55%. While both lichen parts were mainly colonized by Alphaproteobacteria, the lichen thallus was additionally dominated by Deltaproteobacteria, whereas the juvenile vegetative propagules were also colonized in higher abundances by Spartobacteria and Sphingobacteria. Previously, Cardinale et al. (2012b) reported that older thallus parts hosted significantly higher amounts of bacteria than the younger thallus structures, including a change from the predominant Alphaproteobacteria to other taxa such as Actinobacteria, Gamma-, and Betaproteobacteria. Also, Mushegian et al. (2011) observed a spatial diversification of the bacterial compositions between the more diverse and consistent thallus centers (older parts) and the more variable and species-poor edges (younger parts). Cardinale et al. (2012b) referred to these bacterial distribution patterns as anabolic centers in the growing parts and catabolic sinks in the senescing parts of the lichen thallus, respectively.
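The two shared-OTU percentages reported above (37% of thallus-associated OTUs shared with the propagules vs. 55% of propagule-associated OTUs shared with the thallus) use different denominators, which is why they differ for the same intersection. The following Python sketch illustrates this directional overlap; the OTU identifiers and set sizes are invented and chosen only so that the example reproduces the reported percentages:

```python
# Directional overlap between two OTU sets: the same intersection yields
# different percentages depending on which community serves as the denominator.
# OTU identifiers and set sizes below are invented placeholders.

def shared_fraction(reference: set, other: set) -> float:
    """Fraction of OTUs in `reference` that also occur in `other`."""
    return len(reference & other) / len(reference)

thallus_otus   = {f"OTU{i}" for i in range(1, 101)}   # 100 OTUs (mature thallus)
propagule_otus = {f"OTU{i}" for i in range(64, 131)}  # 67 OTUs (vegetative propagules)

print(f"thallus OTUs shared with propagules:   {shared_fraction(thallus_otus, propagule_otus):.0%}")
print(f"propagule OTUs shared with the thallus: {shared_fraction(propagule_otus, thallus_otus):.0%}")
```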
The hypothesis of recycling nutrients in the decaying lichen parts by bacteria can also be underpinned by the presence of specific taxa known for their degradation potential. Sphingomonas sp., which are known to degrade organic matter and xenobiotic substances, were previously isolated from lichens sampled in Arctic and Antarctic regions (Lee et al., 2014), but were also reported in other studies (Hodkinson et al., 2012; Aschenbrenner, 2015). However, other genera such as Paenibacillus and Streptomyces were also mentioned for their functions (e.g., chitinolytic activity) in the degradation of lichen tissues (Cardinale et al., 2006). DISTRIBUTION AND TRANSFER OF HOST-ASSOCIATED BACTERIA Analyses of lichen-associated bacteria revealed differences in community composition and diversity among geographically distant habitats (Printzen et al., 2012; Aschenbrenner et al., 2014). Printzen et al. (2012) analyzed the geographic structure of lichen-associated Alphaproteobacteria in Antarctic regions, indicating that this group is affected by environmental parameters, since thalli from sub-polar habitats had more similar communities than those from extrapolar regions. Hodkinson et al. (2012) explained these large-scale geographical effects by the dispersal efficiency of the lichen hosts, where dispersal happens on small spatial scales rather than over large distances, resulting in a geographic differentiation of the community composition. Aschenbrenner et al. (2014) visualized and described the bacterial colonization of lichen propagules. Their results demonstrate that at least a certain proportion of the lichen microbiome is transferred vertically via these symbiotic structures. These bacterial communities were dominated by Alphaproteobacteria, as was already found by Cardinale et al. (2012a). Interestingly, the bacterial consortia of the lichen propagules were more than only a subset of the parental thallus microbiome and also comprised unique species not shared by the mature thallus. Thus, Aschenbrenner et al. (2014) suggested that the vegetative propagules are equipped with a bacterial starter community. Such bacteria colonizing juvenile structures might influence the subsequent recruitment of new bacteria (Fukami, 2010), thereby shaping the community composition. The importance of the lichen-associated bacteria during the establishment of the lichen symbiosis was already suggested (Hodkinson and Lutzoni, 2009), as the growth of stratified lichen thalli has so far only been successful in cultures based on lichen fragments, which apparently include bacteria. Although vertical transmission of lichen-associated bacteria was only shown in a single lichen species, it is very likely that this strategy of microbiome transfer is also common in other species utilizing vegetative diaspores for reproduction, and definitely in other symbioses. There are various examples reporting on the transmission of host-associated bacteria (Bright and Bulgheresi, 2010), e.g., in marine sponges (Wilkinson, 1984; Li et al., 1998). Bacteria associated with terrestrial invertebrates such as insects are known to assist in nutrient uptake and the provision of essential amino acids and vitamins (Douglas, 1998; Feldhaar and Gross, 2009), but their vertical transmission strategies vary among distinct species (Sacchi et al., 1988; Attardo et al., 2008; Prado and Zucchi, 2012).
In vertebrates, including humans, the transfer of maternal microbes to the child through natural birth and breast feeding as a first inoculum was reported to be important for the baby's health, in particular by shaping the microbiome structure with beneficial microbes (Funkhouser and Bordenstein, 2013). Transfer of plant-associated bacteria from the mother plant, in particular via seeds, was also reported in the plant kingdom (van Overbeek et al., 2011), even though it is common for higher plants to recruit their substantial rhizosphere communities from the surrounding soil (Berg and Smalla, 2009). Vertical transmission was previously shown for the oldest group of land plants, mosses, which belong together with lichens to the group of poikilohydric cryptogams; associated bacteria, especially specific Burkholderia strains, are transferred from the sporophyte to the gametophyte via spores (Bragina et al., 2012, 2013). Lichens as Bacterial Hubs Lichens are pioneers in the colonization of hostile environments with extreme temperatures, desiccation, and high salinity, but they may also become very old, either as individuals or as associations (it is assumed that some non-glaciated sites have been colonized by lichens since the Tertiary). Colonized habitats include arid and semi-arid regions where bare soil can be colonized by, e.g., cryptogamic soil crusts (an association comprising soil particles, lichens, cyanobacteria, algae, fungi, and bryophytes; Beckett et al., 2008), but also more extreme regions such as deserts, where lichens are one of the few successful colonizers. In particular, their capability to become hydrated without contact with liquid water (Printzen et al., 2012), i.e., only by fog, dew or high air humidity (Beckett et al., 2008), ensures survival in these dry areas. This suggests that lichens as slow-growing and long-living host organisms might serve as bacterial hubs in these environments, facilitating bacterial survival by nutrient and water supply, offering a habitat with various micro-niches, and ensuring their distribution over short distances by the dispersal strategies of the lichen host. Thereby the lichens could also be important sources/reservoirs of beneficial bacterial strains for other habitats in an environment. Habitat Specificity Host specificity for cryptogams (i.e., lichens and mosses) was already reported in previous independent studies (Bragina et al., 2012). However, bacterial communities have so far been described almost always without a view of adjacent habitats and potential inter-microbiome relationships. Previously, bacterial specificity was reported in studies of lichen thalli and their underlying rock substrate (Bjelland et al., 2011). A recent study within the doctoral thesis of Aschenbrenner (2015) focusing on this topic unraveled the specificity of the lichen-associated microbiome compared with the neighboring habitats, i.e., moss and bare bark. This comparative analysis highlighted potential habitat specialists and generalists. In this survey, members of the genus Sphingomonas were identified as generalists in all three habitats, whereas members of Mucilaginibacter were described as potential specialists of lichens. The lung lichen frequently establishes on mosses, and the sharing of Nostoc strains between both cryptogams suggests a previously undescribed form of ecological facilitation that is mediated by the shared microbiome fraction (Aschenbrenner, 2015). 
The lung lichen takes up Nostoc strains during growth and incorporates them in the thallus as distinct clusters (known as internal cephalodia in the literature). As Nostoc is enriched on mosses rather than on bark, the growth promoting effect of nitrogen-fixing Nostoc apparently facilitates the efficient development of the lichen thallus, which mostly emerges from moss patches. THE LICHEN-ASSOCIATED MICROBIOME PLAYS A CENTRAL FUNCTIONAL ROLE IN THE LICHEN HOLOBIONT While the host-specific bacterial colonization of various lichen species was demonstrated over the past years, the roles of the bacteria remained largely unknown. This is mainly due to inherent problems to study lichens by experimental approaches (especially re-synthesis of the symbiosis in culture). Metaomics meanwhile emerged as a set of suitable technologies to globally identify potentially beneficial contributions of the bacterial population. Recently, the L. pulmonaria associated microbiome was investigated with an integrated metagenomics and metaproteomics approach to screen for potential functions encoded in genomes and to verify their expression at the protein level , based on a previous pioneering proteomics study (Schneider et al., 2011). The results of Grube et al. (2015) provided strong evidence that the bacterial microbiome is involved in nutrient provision and degradation of older lichen thallus parts, biosynthesis of vitamins and hormones, detoxification processes, and the protection against biotic as well as abiotic stress. Additionally, the high prevalence of bacterial nitrogen fixation was confirmed with -omic data and quantitative RT-PCR. Moreover, a comparison of the whole Lobaria-associated metagenome with a representative set of publicly available metagenomes highlighted its uniqueness. The most closely related metagenomes were found to be those obtained from plant-associated habitats. In particular, Rhizobiales (Alphaproteobacteria) were previously shown to be remarkably abundant in the L. pulmonaria microbiome mainly represented by the families: Methylobacteriaceae, Bradyrhizobiaceae, and Rhizobiaceae. Although they are well known for their beneficial interactions with many higher plants, less is known about their specific roles in terms of the lichens. According to Erlacher et al. (2015) functional assignments based on hierarchical SEED classification indicated an involvement of Rhizobiales in various beneficial functions (e.g., auxin, folate, and vitamin B12 biosynthesis). A further breakdown demonstrated that the predominant Methylobacteriaceae were also the most potent producers of the examined metabolites. These findings suggest the potential for various biotechnological applications of this group. Stress Amelioration and Pathogen Defense Functions are Supported by Metagenomic Data and Culturable Members of the Microbiome Recently, it was shown that the L. pulmonaria associated microbiome includes also various bacteria with antagonistic potential (Cernava et al., 2015a). The most abundant antagonists were assigned to Stenotrophomonas, Pseudomonas, Micrococcus, and Burkholderia. These genera accounted for 67% of all identified antagonistic bacteria. Metagenomic screening revealed the presence of genes involved in the biosynthesis of stress-reducing metabolites. Complementary high-performance liquid chromatography-mass spectrometry (HPLC-MS) analyses enabled the detection of Stenotrophomonas-produced spermidine which is known to reduce desiccation-and high-salinity-induced stress in plants. 
It was also tested if these protective effects can be transferred to non-lichen hosts such as primed tomato (Solanum lycopersicum) seeds. Results indicated a significant increase in the root and stem lengths under water-limited conditions. The application of lichen-associated bacteria in plant protection and growth promotion may prove to be a useful alternative to conventional approaches. However, further studies are required to evaluate the host range and to elucidate the overall applicability (Cernava et al., 2015a). Furthermore, volatile organic compounds (VOCs) profiles from bacterial isolates showed that lichen-associated bacteria are emitting a broad range of volatile substances. These molecules are most likely involved in various interactions (e.g., communication between microorganisms and the host) and might also increase the overall resistance against various pathogens (Cernava et al., 2015b). The Microbiome Provides Complementary Detoxification Mechanisms Besides the evidence for mechanisms conferring enhanced resistance against biotic as well as abiotic stress, the microbiome provided a first evidence for the involvement in the detoxification of inorganic substances (e.g., As, Cu, Zn), the detailed mechanisms remaining unknown. A deeper insight into these beneficial contributions was possible with samples exposed to elevated arsenic concentration (Cernava, 2015). Metagenomic analyses revealed that the overall microbial community structures from different lichens were similar, irrespective of the arsenic concentrations at the sampling locations, whereas the spectrum of functions related to arsenic metabolism was extended. These functions include bioconversion mechanisms that are involved in the methylation of inorganic arsenic and consequently generate less toxic substances. Furthermore, the abundance of numerous detoxification related genes was enhanced in arsenic-polluted samples. Supplementary qPCR approaches have shown that the arsM gene copy number is not strictly related to the determined arsenic concentrations. Additionally, a culture collection of bacterial isolates obtained from three lichen species was screened for the arsM gene. Detected carriers of arsM were later identified as members of the genera Leifsonia, Micrococcus, Pedobacter, Staphylococcus, and Streptomyces. The overall results underscored the important role of the microbiome in host protection and they provided more detailed insights into the taxonomic structure of involved microorganisms. BACTERIAL MICROBIOME ASSEMBLY ON A SYMBIOTIC FUNGAL STRUCTURE The lichen thallus with its various micro-niches represents a miniature ecosystem for microorganisms. While lichenassociated bacteria were previously neglected and often recognized as contamination of lichen thalli, recent research considers them -with increasing evidence -as important and crucial component of the lichen meta-organism. By their microbiomes lichens are ecologically linked with their surrounding environment (Figure 1). Even though a fraction of their microbiome can be transmitted by local dispersal of vegetative propagules, further recruitment of strains occurs from the local resources in the environment. This finally leads to a specific community structure of mature lichen thalli, which shares a core microbiome over larger distance (Aschenbrenner et al., 2014). 
Lichen thalli, already present on Earth since the lower Devonian, and representing the most complex vegetative structures in the fungal kingdom, may have evolved as bacterial enrichment structures. The exposed surfaces of lichens are ideally suited to benefit from functions of adapted and enriched bacteria, or from degradation of spurious non-adapted bacteria caught from the environment. The bacterial harvest may readily be dissipated to the symbiotic corporates via the fungal textures. It is this new perspective of the lichen symbiosis, which offers a wide range of new research questions in the near future. FIGURE 1 | A holistic view of the lichen microbiome diversity and identified functions in the environmental context. Lichen-associated bacterial communities were shown to share substantial fractions of identified taxa with adjacent microhabitats (blue circle). This suggests a dynamic acquisition and exchange of beneficial species. Specific proportions of the microbiome are vertically transmitted to the next generation and used for the establishment of novel populations (red circle and arrows). Highly diverse bacterial populations primarily colonize outer lichen layers, but some can also enter the inter-hyphal matrix (green circle). External factors provide a shared microbial 'core assembly' of the habitat, but host-specific factors (gray circle) determine the lichen-specific bacterial community, which contributes a variety of beneficial functions for the host symbiosis (purple circle). CONCLUSIONS -LICHENS AS A CASE MODEL TO UNDERSTAND MULTI-SPECIES SYMBIOSES Undoubtedly, there exist other cases of symbioses involving multiple organismal groups in terrestrial ecosystems. Similar to lichens, these were originally recognized as dual eukaryotic partnerships, but later shown to involve specific bacterial associations as well (e.g., fungi/leaf-cutter ants, Little and Currie, 2007;mycorrhiza, Garbaye, 1994). Modern tools now overcome the difficulties to re-establish complex symbioses under axenic laboratory conditions, and moreover, they allow us to precisely study symbioses in their environmental context. We consider lichens as ideal research objects for this purpose, because in contrast to many other symbiotic systems, they have an unsurpassed ecological range in general, but with rather specific adaptation of each species to their ecological niches. It will thus clearly be a novel and highly interesting theme in symbiotic research to establish the role of the microbiome in ecological adaptation and evolution of the lichen multi-species symbiosis. AUTHOR CONTRIBUTIONS IA, TC, GB, and MG wrote the manuscript. IA and TC contributed with results from their Ph. D. studies. GB and MG complemented the manuscript with profound experience in the fields of microbiome and lichen research.
7,225.4
2016-02-18T00:00:00.000
[ "Biology", "Environmental Science" ]
Influence Mechanism of External Social Capital of University Teachers on Evolution of Generative Digital Learning Resources of Educational Technology of University Teachers-Empirical Analysis of Differential Evolution Algorithm and Structural Equation Model of Bootstrap Self-extraction Technique A conceptual framework for the influence mechanism of external social capital of university teachers on the evolution of generative digital learning resources of educational technology is constructed in this paper. It elaborates the transmission (mediating) role of knowledge search and knowledge activity of educational technology of university teachers, as well as the positive moderating effects of the interactive memory system and organizational citizenship behavior of university teachers. Professional teachers at 211 and 985 universities in the eastern and central regions of China are taken as the subjects of the questionnaire, and an empirical analysis of the influence mechanism is carried out with a differential evolution algorithm and a structural equation model based on the Bootstrap self-extraction technique. The empirical results show that external social capital of university teachers has a significantly positive effect on knowledge search and knowledge activity of educational technology of university teachers. Knowledge search and knowledge activity of educational technology significantly and positively promote the evolution of generative digital learning resources of educational technology of university teachers. The interactive memory system and organizational citizenship behavior of university teachers significantly and positively moderate the relationships among knowledge search, knowledge activity of educational technology of university teachers and the evolution of generative digital learning resources. 
INTRODUCTION In recent years, the concept of digital teaching and the concept of productive teaching have been popularized in the teaching process of university teachers.With the development and application of generative digital learning resources in educational technology field, the promotion of the evolution of generative digital learning resources of educational technology of university teachers (Hereinafter referred to as evolution of generative digital learning resources in this paper) became an important way for university teachers to enhance the performance of teaching (Yu, Yang, & Cheng, 2009;Zhang & Wang, 2012;Yang & Yu, 2011;Yang, Cheng, & Yu, 2013;Yang & Yu, 2013;Li & Tu, 2007).Therefore, some papers explore the pre-dependent variables that affect the evolution of generative digital learning resources and the influencing factors of the evolution of generative digital learning resources of educational technology, try to find how to promote the evolution of generative digital learning resources of educational technology, and in which way external social capitals of university teachers promote evolution of generative digital learning resources of educational technology effectively.The above mentioned has aroused concern in academic and business fields, and become one of the focuses in theoretical and industrial research.The theory and practice of educational technology show that external social capital, knowledge search and knowledge activity of educational technology of university teachers, and excellent characteristics of organizational citizenship behavior and interactive memory system of university teachers become key causes and influencing factors that affect evolution of generative digital learning resources in educational technology.In view of these reasons, this paper takes knowledge search and the knowledge activity of education technology as transmission medium variable, interactive memory system and organizational citizenship behavior of university teachers as moderating variables to construct conceptual framework of influencing mechanism of external social capital of university teachers on evolution of generative digital learning resources.Differential evolution algorithm and structural equation model based on Bootstrap self-extraction technique are integrated to verify influence mechanism and conduction path, which provides theoretical framework guidance and practical enlightenment for evolution of generative digital learning resources. Theoretical Hypotheses In order to enter into research theme, research background and research scenarios, based on relevant literatures, this paper follows dominant logic and operational thought of relationships between Nomo network and regulation, and combines professional characteristics of university teachers, results of expert interviews, actual questionnaire survey of university teachers and results of spot interviews.This paper selects organizational citizenship behavior and interactive memory system of university teachers as moderating variables, and sets knowledge search and knowledge activity of education technology as transmitting variables.According to causal relationships among variables, this paper proposes theoretical hypotheses, and constructs conceptual framework of influencing mechanism of external social capital of university teachers on evolution of generative digital learning resources based on theoretical hypotheses. 
External social capital of university teachers and knowledge search of education technology of university teachers Social capital is sum of actual or potential resources from or embedded in a network of relationships owned by individuals or social groups (Nahapiet & Ghoshal, 1998;Krause, Handfield, & Tyler, 2007;Zhang, 2010).Typically, social capital is divided into internal social capital and external social capital.External social capital is also known as bridge-type social capital, which is sum of actual resources and potential resources embedded in the external relations network (Nahapiet & Ghoshal, 1998;Krause, Handfield, & Tyler, 2007;Zhang, 2010).External social capital focuses mainly on getting resources from external networks across organizational boundaries.Peng (2010), Peng and Li (2011) divided external social capital into four parts: The first indicator, the intensity of internal and external interaction, is to reflect interaction frequency among internal and external members within certain period of time.The second indicator, external network density, is to reflect extensive communication degree among internal members of organization and external members.The third indicator, the degree of internal and external trust, is to reflect degree of trust among internal and external members of organization.The fourth indicator, internal and external common language, is to reflect degree of interconnection based on professional knowledge and skills among internal and external members of organization (Peng, 2010;Peng & Li, 2011). Through the following ways, four dimensions of external social capital of university teachers promote knowledge search of university teachers.Firstly, knowledge search has breadth and depth (Laursen & Salter, 2006;Chen, Yu, & fan, 2010).Knowledge search depth emphasizes the use of the depth of knowledge of the existing stock on the basis of knowing the existing knowledge, and the breadth of knowledge search emphasizes the breadth of developing and using the new knowledge.High frequency contact and close interaction between university teachers facilitate them to establish trust relationship, exchange and integrate educational technical knowledge resources between each other, improve the absorption and recognition of educational technical knowledge and enhance the depth and breadth of educational knowledge search (Adler & Seok, 2000).Secondly, the external network density and the internal and external common language of university teachers can promote the behavior consistency of university teachers in the internal network, improve the efficiency of educational technology and knowledge transfer between university teachers, and facilitate the communication and exchange of information and knowledge between each other.Furthermore, they can promote the trust and relationship commitment between teachers, implement the cooperation depth of educational technical knowledge of university teachers (Xie, Chen, & Cheng, 2011), and strengthen the interaction and learning in educational technology information, knowledge and resources between university teachers.And the open learning mechanism can help university teachers acquire and accumulate educational technology and knowledge (Xie, Zhao, & Cheng, 2011), which will enhance the depth and breadth of knowledge search of educational technology.Thirdly, the higher the density of the external network, the more conducive to enhance the transferring will of the sender of educational technical knowledge, thus promote the transfer 
efficiency of educational technical knowledge (Zhu, Xu, & Wu, 2011), and facilitate university teachers to have abundant access to network learning resources.At last, the acquired stock, the acquired heterogeneous and the acquired diverse of educational technology information will be enlarged, and the knowledge search breadth of educational technology will be improved, too (Allen, 2000).Fourthly, the degree of internal and external trust and internal and external common language facilitate university teachers to obtain the deeply complicated educational technology and knowledge, explore new solutions to technical information processing and practical problems, and get and integrate educational technical knowledge and experience at different levels and different values.Moreover, they will create and reorganize new educational technology elements and educational technology learning resource, amplify the effects from existing resources of educational technical knowledge, thereby enhancing the depth and breadth of knowledge search of educational technology and obtaining the required technology and knowledge (Uzzi, 1997;Uzzi, 1996).In summary, the following theoretical hypotheses are proposed: Hypothesis 1: External social capital of university teachers (ESC) has a significantly positive effect on knowledge search of educational technology of university teachers (KS). Hypothesis 1.1: Internal and external interaction intensity of university teachers (IES) has a significantly positive effect on KS. Hypothesis 1.2: External network density of university teachers (END) has a significantly positive effect on KS. Hypothesis 1.3: Degree of internal and external trust of the university teachers (IET) has a significantly positive effect on KS. Hypothesis 1.4: Internal and external common language of university teachers (IEC) has a significantly positive effect on KS.Yang (2003), Yang, Zheng, and Chris (2009) divided knowledge into three categories: explicit knowledge, implicit knowledge and active knowledge, and the active knowledge is different from the explicit knowledge and implicit knowledge, which is mainly involved in emotion, personal culture and value orientation.Based on concept of value and shared vision, the active knowledge effectively guides interactions between explicit knowledge and implicit knowledge through ideal, management philosophy, emotional motivation, mission and other forms, which is conducive to organizational learning and strategic decision-making (Yang, 2003;Yang, Zheng, & Chris, 2009).Yu (2011) made a systematic description about active knowledge as follows: on the basis of values, ambitions, ideals and vision, people use emotion, motivation, learning needs, attitudes, ethics, moral standards and other forms of expression to make expectation or emotional experiences from objective things, so as to understand the importance of objective matters.The active knowledge of education technology is seed for university teachers to share belief and play personal role, and the higher activity level of knowledge, the more conducive to university teachers' emotional commitment and motivation, which will promote the sharing of educational technical knowledge between university teachers (Yu, 2011).Internal and external interaction intensity, external network density, internal and external trust and internal and external common language provide an opportunity for university teachers to make internal and external communication, interaction and communication, which facilitates to introduce 
new external knowledge sources of educational technology and attract new educational technology professionals, and set up new team with differentiation and complementary of education technical knowledge.If university teachers make clear orientation of educational technical knowledge on their own niche, it will optimize the allocation of knowledge resources of educational technology, draw knowledge map of educational technology, and share belief of activity knowledge of education technology based on trust.Finally, it will form and agree with each other's values, ambition and ideal vision, promote emotional commitment and motivation among university teachers, and improve level of knowledge activity of education technology.To sum up, the following hypotheses are made: Hypothesis 2: External social capital of university teachers significantly enhances knowledge activity of educational technology of university teachers (KA). Knowledge search of education technology, knowledge activity of education technology and evolution of generative digital learning resources of educational technology of university teachers Learning resource is the core element of the ubiquitous learning ecosystem.The ubiquitous learning requires a large amount of generative learning resource with sustainable development and open structure (Yu, Yang, & Cheng, 2009;Zhang & Wang, 2012;Yang & Yu, 2011;Yang, Cheng, & Yu, 2013;Yang & Yu, 2013;Li & Tu, 2007).According to the different generation ways of learning resources, learning resources can be divided into pregenerated resource and generative resource, and generative resource has a better expansibility, adaptability and evolvability, which can be adjusted according to the needs of teachers and students dynamically.But it is disadvantageous to take a long time to generate, moreover, the generation process is difficult to control, and the quality of resources varies greatly.The typical generation resources include Wikipedia, learning cell, generative network curriculum (Yu, Yang, & Cheng 2009;Zhang & Wang, 2012;Yang & Yu, 2011;Yang, Cheng, & Yu, 2013;Yang & Yu, 2013;Li & Tu, 2007;Yang & Yu, 2013).The evolution of generative digital learning resources of educational technology refers to the improvement and adjustment of the content and structure of the learners to meet the dynamic and personalized learning needs of the learners in a digital learning environment, and adapt to the changing learning environment continuously (Yu, Yang, & Cheng, 2009;Zhang & Wang, 2012;Yang & Yu, 2011;Yang, Cheng, & Yu, 2013;Yang & Yu, 2013;Li & Tu, 2007;Yang & Yu,2013).Evolution of generative digital learning resources of educational technology of university teachers includes five construction dimensions of learning resource contention, learning resource configuration, marking standards, learning resources teaching quality and attribute, learning resource activity (Yang & Yu, 2013). 
Knowledge search of education technology of university teachers is mainly through the following channels to facilitate the evolution of generative digital learning resources.Firstly, to enhance the depth and breadth of the knowledge search of education technology of university teachers is advantageous to explore and create new explicit knowledge and implicit knowledge of educational technology (Li & Si, 2009), stimulate creative idea of educational technology, and enhance the knowledge understanding depth of educational technology (Adler &Seok,2000).The above enhancing is also conducive to the transformation of implicit knowledge into implicit knowledge, explicit knowledge into explicit knowledge, implicit knowledge into explicit knowledge and the explicit knowledge into implicit knowledge.Furthermore, it can promote the spiral of educational technical knowledge, and integrate the spiraling educational technical knowledge into teaching resources of educational technology to promote the generative digital learning resources evolution.Secondly, the effective breadth and depth of knowledge search of education technology is easy for university teachers to implement the separation between the persons and educational technical knowledge by encoding strategy, spread educational technical knowledge, use the knowledge base to store educational technical knowledge, and promote the improvement and the repeated application of educational technical knowledge (Li & Si, 2009).In addition, the effective breadth and depth of knowledge search of education technology will help realize mutual transformation between explicit knowledge and implicit knowledge to form a knowledge spiral of educational technology, explore new methods to process education technical information and solve practical problems of educational technology, and expand the acquired storage of educational technology information.And it can obtain heterogeneous information of educational technology on the bases of keeping homogeneous information of educational technology, create new educational technology elements and reorganize resources, and amplify the effect generated by the knowledge resource of education technology and obtain the required knowledge of educational technology.Finally, it will provide the new explicit and implicit knowledge source for the evolution of generative digital learning resources of educational technology of university teachers, and promote the evolution of generative digital learning resources (Uzzi, 1997;Shi, Sun, & Liu, 2013).Knowledge activity of education technology of university teachers is mainly through the following channels to facilitate the evolution of generative digital learning resources of educational technology of university teachers.Firstly, the higher knowledge activity degree of educational technology of university teachers will be more conducive to the emotional commitment, emotional motivation and knowledge sharing of professional university teachers (Yu, 2011).If the education technical knowledge obtained by sharing is integrated into generative digital learning resource of educational technology of university teachers, it will promote the evolution of generative digital learning resource of educational technology.Secondly, the higher the knowledge activity degree of educational technology of university teachers, the more conducive to the establishment of the relationship of trust and emotional commitment among university teachers, which will form an interactive communication atmosphere, 
promote university teachers to carry out joint planning activities for educational technical knowledge, and build the interoperability and compatibility of educational technical knowledge and skills.Meanwhile, it can strengthen the standardization and consistency of educational technical information and knowledge process, stimulate the conversion between explicit knowledge and implicit knowledge, and apply the standardization and consistency of educational technology information and knowledge process to the generative digital learning resources.And all of these will help the evolution of generative digital learning resources (Shi, Sun, & Liu, 2013).In conclusion, the following theoretical hypotheses are proposed: Hypothesis 3.1: KS significantly promotes the evolution of generative digital learning resources (GDLRE).Hypothesis 3.2: KA significantly promotes GDLRE. The positive moderating effect of interactive memory system of university teachers An interactive memory system is a cooperative division system which is formed by internal team members who rely on each other to encode, store, and extract knowledge in different areas (Hollingshead, 2001;Wang & Xue, 2011;Lewis, 2003;Lewis, 2004).Interactive memory system mainly includes three dimensions: specialization, reliability and coordination (Hollingshead,2001;Wang & Xue,2011;Lewis,2003;Lewis, 2004).The more complicated the educational technical knowledge of university teachers, the more teachers in different fields need to be integrated to decode and encode.The establishment of interactive memory system can bring teachers in different fields together to cooperate with each other in a reliable way, so as to improve the absorptive ability and searching ability of receivers of educational technical knowledge.Related research results show that the interactive memory system can promote communication and cooperation among university teachers, improve the acquisition, sharing, integration and application of knowledge search of educational technology, and enhance the depth and breadth of knowledge search of educational technology.Interactive memory system promotes university teachers to cooperate and communicate, generate the cognition of distribution of internal educational technical knowledge, and understand the position of professional knowledge of educational technology.This system can eliminate barriers to the transfer of educational technical knowledge, reduce the viscosity of educational technical knowledge, and facilitate the search and integration of external and internal educational technical knowledge.Finally, it can promote the evolution of generative digital learning resources. 
Interactive memory system is convenient for university teachers to make a cooperative division of educational technical knowledge, set up a good organization atmosphere, and promote the sharing of educational technical knowledge.Burke and others study the positive promoting effect of organizational atmosphere on informal knowledge sharing behavior (Burke & Weir, 1978).The interactive memory system formed in the university can absorb more external talents, expand knowledge sources of educational technology of university teachers, and provide opportunities for internal and external cooperation that will improve the interactive memory system.It is necessary to establish a differentiated and complementary experts' team, draw a map of educational technical knowledge based on "goals-expertise-members", find a clear ecological position for their own educational technical knowledge, and optimize the allocation of knowledge resources of educational technology and transfer educational technical knowledge with trust.In addition, the interactive memory system can reduce knowledge senders' concerns and the risk to transfer educational technical knowledge, decrease the protection awareness of educational technical knowledge, and enhance the cooperation willingness and trust between knowledge senders and receivers.Active and specialized modules and coordinated operations can shorten the cultural distance, spatial physical distance and knowledge distance between knowledge senders and receivers, and create some good communication channels and organizational environment.Learning and planning a management strategy for educational technical knowledge can improve teachers' interactive study and the incentive system of educational technical knowledge, which will coordinate the action and activity rhythm between each other, and make them to recognize and show respect to mutual culture, values, behavior patterns and action criterion.All of the above mentioned will strengthen interactive media, optimize the network structure of educational technical knowledge, and make university teachers know the distribution of educational technical knowledge between themselves, which will arouse teachers' knowledge activity of educational technology, and promote the evolution of generative digital learning resource.The following hypotheses are derived from the above: Hypothesis 4.1: Interactive memory system (IMS) significantly and positively moderates relationships between KS and GDLRE. Hypothesis 4.2: IMS significantly and positively moderates relationships between KA and GDLRE. 
The positive moderating role of organizational citizenship behavior of university teachers Organizational citizenship behavior is required by the organization, although it is not included in the formal job requirements.Regardless of the formal requirements, organizational citizenship behavior is a kind of outside action that is beneficial to the organization, and it is a spontaneous behavior of organization members that cannot get a formal organizational return, but it has a promoting effect on organizational performance (Bateman & Organ, 1983;Organ, 1990).Organizational citizenship behavior consists of two construct dimensions, namely, generalized compliance and altruism (Smith, Organ, & Near, 1983).In view of the characteristics of university teachers' occupation and the relevant research results of scholars, the indicators of organizational citizenship behavior of university teachers include four dimensions: initiative, altruism, self-development and interpersonal harmony (Bateman & Organ, 1983;Organ, 1990;Smith, Organ, & Near, 1983;Liao, Li, & He, 2016).Organizational citizenship behavior of university teachers can promote university teachers to engage in the teaching work, making them willing to take on extra responsibilities and teaching tasks, help other teachers or organization to complete the task and solve the problem.And this behavior can remove the adverse impact generated from the pursuit of personal interests, thus achieving the harmonious interpersonal relationship, and improving self-development and teaching performance of university teachers (Liao, Li, & He,2016).The occupation characteristics of university teachers need the initiative and dedication of university teachers, and it is easy to create a sense of organizational identity and belonging among university teachers, which pushes university teachers to practice the organizational citizenship behavior actively, feedback the school and society, and promote teachers' behavior internalization and identity (Liao, Li, & He, 2016).The occupation characteristics succeeded in arousing university teachers' improvement of work performance, active acquirement of learning resources of educational technology and active search of educational technology knowledge, and enhancement of knowledge activity and the breadth and depth of knowledge search of educational technology.If the searched and active knowledge of education technology is integrated into generative digital learning resource of educational technology, it will promote the evolution of generative digital learning resources.In conclusion, the theoretical hypotheses are put forward: Hypothesis 5.1: Organizational citizenship behavior of university teachers (OCB) significantly moderates positive relationships between KS and GDLRE. Hypothesis 5.2: OCB significantly moderates positive relationships between KA and GDLRE. Construction of Conceptual Framework of Influence Mechanism According to the theoretical hypotheses, the conceptual framework of influence mechanism of external social capital of university teachers on the evolution of generative digital learning resources is constructed as Figure 1, in which knowledge search and knowledge activity are regarded as transmission mediating variables, and interactive memory system and organizational citizenship behavior as moderating variables. 
Scale Design Referring to the relevant domestic and foreign literature and mature scales, this paper selects evaluating indicators and designs a Likert 1-7 point scale. The related references of the scale are shown in Table 1. Data Acquisition Referring to the related research achievements (Yu, Yang, & Cheng, 2009; Zhang & Wang, 2012; Yang & Yu, 2011; Yang, Cheng, & Yu, 2013; Yang & Yu, 2013; Li & Tu, 2007; Yang, 2015), the learning cell system is a new open knowledge community including six core modules: learning cell, knowledge group, knowledge cloud, learning community, learning tools and individual space. The learning cell is the basic resource unit of the learning system and a typical generative digital learning resource, which can aggregate and generate multiple knowledge groups. In this paper, a number of learning cells belonging to the educational technology subject are selected from the learning cell system, and the research objects are the registered professional teachers of the universities in several learning cells. In the paper, a Likert 5-point scale design and the questionnaire method are used, and the methods of convenience sampling, subjective sampling, stratified sampling and random sampling are combined. By means of on-site interviews, questionnaires, and e-mail, the professional teachers in 211 universities and 985 universities in the central and eastern parts of China were asked to answer the questions according to their actual situation, and all of the teachers need to master related knowledge of generative digital learning resources of educational technology and the related technology of the learning cell and learning system. 750 questionnaires were sent, of which 650 were actually collected. 150 invalid copies were removed, and 500 questionnaires are valid. The effective rate of the questionnaire is 66.67%. Among the subjects, 50% are female and 50% are male. The survey samples are typical with respect to region and the scale of university teachers: 211 and 985 universities in the eastern region account for 70%, and 30% are in the central region. The age and working life of the professional teachers in the survey are approximately normally distributed, and the average working years are more than 10 years with rich teaching experience. The survey work was conducted in stages. An independent-sample t test shows that there is no significant difference between the questionnaires completed early or late, and there is no response bias in the collected questionnaires. Reliability Test and Validity Test of Scale Based on the software SPSS 17.0, exploratory factor analysis is used to test the scale reliability, construct validity and building validity, with the results shown in Table 2. Based on the software AMOS 22.0, confirmatory factor analysis is used to test the assembly validity and convergent validity of the scale, with the result shown in Table 2. 
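To make the reliability criteria used in this test concrete, the following is a minimal sketch, assuming the questionnaire answers are available as a numeric table (respondents in rows, items in columns). The construct label KS, the item names, and the simulated responses are hypothetical and only illustrate how Cronbach's α and the corrected item-total correlation (CITC) reported below can be computed; it is not the SPSS/AMOS workflow actually used in the study.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a block of Likert items (rows = respondents)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

def corrected_item_total(items: pd.DataFrame) -> pd.Series:
    """CITC: correlation of each item with the sum of the remaining items."""
    return pd.Series(
        {col: items[col].corr(items.drop(columns=col).sum(axis=1)) for col in items.columns}
    )

# Hypothetical example: 500 respondents, 4 items of the construct "KS"
rng = np.random.default_rng(0)
latent = rng.normal(size=500)
ks_items = pd.DataFrame(
    {f"KS{i}": np.clip(np.round(4 + latent + rng.normal(scale=0.8, size=500)), 1, 7)
     for i in range(1, 5)}
)

print("Cronbach's alpha:", round(cronbach_alpha(ks_items), 3))
print(corrected_item_total(ks_items).round(3))
```

With responses driven by a single latent factor, as simulated here, α comes out well above the 0.7 threshold cited below; the same two functions applied per construct reproduce the kind of figures summarized in Table 2.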
Cronbach's α of the total scale is greater than 0.7, and the values of CITC and Cronbach's α if item deleted are both higher than 0.5, which shows that the scale is highly reliable. The validity tests include the tests of convergent validity and construct validity. In order to ensure the content validity and index reliability of the scales, mature scales from home and abroad are used. Table 2 shows that the average KMO value is greater than 0.7, the significance value of Bartlett's test is 0.000, i.e. less than 0.001, the cumulative extracted variance of the common factors and the total variance explained are both greater than 50%, and the factor component loadings of each variable are greater than 0.5, all of which show that the scale has good construct validity. The CR values of the scale variables are all higher than 0.6, and the corresponding AVE values are higher than 0.5, showing that the scale has good assembly validity and convergent validity.
Table 1. Scale design and references:
ESC (Nahapiet & Ghoshal, 1998; Krause, Handfield, & Tyler, 2007; Zhang, 2010; Peng & Li, 2011; Peng, 2010; Laursen & Salter, 2006; Chen, Yu, & Fan, 2010; Adler & Seok, 2000); KS;
IMS (Hollingshead, 2001; Wang & Xue, 2011; Lewis, 2003; Lewis, 2004);
KA (Yang, 2003; Yang & Zheng, 2009; Yu, 2011);
OCB (Burke & Weir, 1978; Bateman & Organ, 1983; Organ, 1990; Liao, Li, & He, 2016);
GDLRE (Yu, Yang, & Cheng, 2009; Zhang & Wang, 2012; Yang & Yu, 2011; Yang, Cheng, & Yu, 2013; Yang & Yu, 2013; Li & Tu, 2007; Yang, 2015).
The Process and Results of Empirical Analysis (1) Based on the data collected from the questionnaire and the tests of scale reliability and validity, the differential evolution algorithm is adopted, and outstanding individual samples are searched from the 500 collected professional teachers in the universities. Excellent samples that move into the next generation of the population are used as the empirical analysis samples of the structural equation model (SEM) based on the Bootstrap self-extraction technique. The main purpose of the differential evolution algorithm is to search for individual samples with good fault tolerance and strong learning ability, so that better individuals with strong learning ability enter the next generation to maximize the overall search function (Li, Guo, Li, & Liu, 2016; Guo, Li, & Li, 2014; Deb, 2000). Based on the collected samples of 500 university professional teachers, this paper uses the differential evolution algorithm, and uses the software Matlab and the software Stata to set the executive parameters, as shown in Table 3. The target is to tap the causal relationships between variables, the interactions among the six variables of the theoretical framework of the influence mechanism, and the influencing mechanism of external social capital of university teachers on the evolution of generative digital learning resources of educational technology of university teachers, in which knowledge search and knowledge activity are regarded as transmission variables, and the interactive memory system and organizational citizenship behavior as positive moderating variables. Secondly, construct the individual structure. The population size is indicated by NP, and individual i of the population in generation G is recorded as $\vec{x}_{i,G} = [x_{1,i,G}, x_{2,i,G}, \ldots, x_{D,i,G}]$, where D is the dimension of the individual. 
Fourthly, variation. The variation process follows:

$\vec{v}_{k,G+1} = \vec{x}_{r_1,G} + F \cdot (\vec{x}_{r_2,G} - \vec{x}_{r_3,G})$

In the formula, $\vec{x}_{k,G}$ is the individual k to be varied in the current population; $\vec{x}_{r_1,G}$, $\vec{x}_{r_2,G}$ and $\vec{x}_{r_3,G}$ are individuals selected at random from the current population, with $r_1 \neq r_2 \neq r_3 \neq k$; $\vec{v}_{k,G+1}$ is the varied individual; F ∈ [0, 1] is a scaling factor. DE/x/y/z is usually used to represent different patterns of variation: DE represents the differential evolution algorithm, x is the base item in front of the difference item, y is the number of difference vectors, and z is the crossover model.
Fifthly, crossover. Cr ∈ [0, 1] denotes the crossover probability. There are two forms of crossover; ① the index (exponential) model is:

$u_{j,k,G+1} = \begin{cases} v_{j,k,G+1}, & j = \langle n \rangle_D, \langle n+1 \rangle_D, \ldots, \langle n+L-1 \rangle_D \\ x_{j,k,G}, & \text{otherwise}, \end{cases} \qquad j \in [1, D],$

where $\langle \cdot \rangle_D$ denotes the modulo-D index, n is a randomly chosen starting index and L is the number of consecutive components taken from the varied individual.
Sixthly, selection. After the crossover, the trial individual $\vec{u}_{k,G+1}$ and the target individual $\vec{x}_{k,G}$ are sequentially substituted into the objective function for comparison, and the selection process is:

$\vec{x}_{k,G+1} = \begin{cases} \vec{u}_{k,G+1}, & f(\vec{u}_{k,G+1}) \leq f(\vec{x}_{k,G}) \\ \vec{x}_{k,G}, & \text{otherwise.} \end{cases}$

(A minimal numerical sketch of these steps is given below.)
(2) The differential evolution algorithm is adopted to identify the outstanding individual samples and the next-generation population from outstanding individuals. The excellent searched samples and the next-generation population are used as the empirical analysis samples of the structural equation model (SEM) based on the Bootstrap self-extraction technique. The normal distribution is tested with the software AMOS 22.0. The values of the multivariate skewness coefficient and multivariate kurtosis of the variables are less than 10, and the critical ratios CR corresponding to the kurtosis and skewness of the variables are between -2 and 2, showing that the variables follow the normal distribution. On the basis of the test of variable normal distribution, the original structural equation model based on Bootstrap is established to reflect the latent variables and the causal relationships between latent variables, and the Bootstrap self-extraction technique is used to estimate the path coefficients, with the results shown in Table 4. The goodness-of-fit indices corresponding to the original structural equation model based on the Bootstrap self-extraction technique do not reach the minimum required standards. The rate of chi-square to
CONCLUSION AND RECOMMENDATION This paper constructs a theoretical framework of the influence mechanism of external social capital of university teachers on the evolution of generative digital learning resources of educational technology of university teachers, in which it focuses on the transmission mediation role of knowledge search and knowledge activity of educational technology, as well as the positive moderating effects of the interactive memory system and organizational citizenship behavior of university teachers. Data of the questionnaire survey were obtained from professional teachers in 211 and 985 universities in the eastern and central regions of China; after scale reliability and scale validity were tested, the empirical analysis was carried out by the differential evolution algorithm and the structural equation model based on the Bootstrap self-extraction technique. The empirical analysis results show that external social capital of university teachers has significantly positive effects on knowledge search and knowledge activity of educational technology of university teachers. Knowledge search and knowledge activity of educational technology significantly and positively promote the evolution of generative digital learning resources. The interactive memory system of university teachers significantly and positively moderates the relationships among knowledge search, knowledge activity of educational technology of university teachers and the evolution of generative digital learning resources. Organizational citizenship behavior of university teachers significantly and positively moderates the relationships among knowledge search, knowledge activity of educational technology and the evolution of generative digital learning resources. The practical implications of the empirical results are that university teachers should pay attention to both the inside and outside bridging effects of external social capital, strengthen the internal and external interaction strength of university teachers, the external network density and the internal and external trust degree, and highly cultivate the internal and external common language of university teachers, so as to form a common perception of the same symbols, characters and text. In addition, university teachers should establish an interactive memory system, build professional-reliable-coordinated teams, take the initiative to practice organizational citizenship behavior, promote the depth and breadth of knowledge search of educational technology, enhance knowledge activity of educational technology, and continue to promote the evolution of generative digital learning resources of educational technology of university teachers.
Table 2. Test results of reliability and validity of scale.
Table 3. Executive parameters (executive software: Matlab and Stata; executive parameters are averaged).
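The mutation, crossover and selection steps outlined above follow the canonical differential evolution scheme. The following is a minimal, self-contained sketch of DE/rand/1 with binomial crossover (the second common crossover variant besides the index model described above); the population size, scaling factor F, crossover rate Cr, number of generations and the toy objective function are hypothetical stand-ins, not the executive parameters of Table 3.

```python
import numpy as np

def differential_evolution(obj, bounds, np_size=40, F=0.5, Cr=0.9, generations=200, seed=1):
    """Minimal DE/rand/1/bin: returns the best individual and its objective value."""
    rng = np.random.default_rng(seed)
    D = len(bounds)
    lo, hi = np.array(bounds).T
    # Initial population, uniformly sampled within the bounds
    pop = lo + rng.random((np_size, D)) * (hi - lo)
    fit = np.array([obj(x) for x in pop])

    for _ in range(generations):
        for k in range(np_size):
            # Mutation: three distinct random individuals, all different from k
            r1, r2, r3 = rng.choice([i for i in range(np_size) if i != k], size=3, replace=False)
            v = np.clip(pop[r1] + F * (pop[r2] - pop[r3]), lo, hi)
            # Binomial crossover: each component comes from v with probability Cr
            mask = rng.random(D) < Cr
            mask[rng.integers(D)] = True          # ensure at least one component from v
            u = np.where(mask, v, pop[k])
            # Selection: keep the trial vector only if it does not worsen the objective
            fu = obj(u)
            if fu <= fit[k]:
                pop[k], fit[k] = u, fu

    best = np.argmin(fit)
    return pop[best], fit[best]

# Hypothetical usage: minimise a simple sphere function in 5 dimensions
best_x, best_f = differential_evolution(lambda x: np.sum(x**2), bounds=[(-5, 5)] * 5)
print(best_f)  # should be close to 0
```

In the study itself the objective function would rank respondents by fault tolerance and learning ability before the selected samples enter the SEM; the sketch only shows the mechanics of the operators.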
7,334.6
2017-08-22T00:00:00.000
[ "Education", "Computer Science" ]
Investigation of space-continuous deformation from point clouds of structured surfaces: One approach to estimate space-continuous deformation from point clouds is the parameter-based epochal comparison of approximating surfaces. This procedure allows a statistical assessment of the estimated deformations. Typically, holistic geometric models approximate the scanned surfaces. Regarding this, the question arises how discontinuities of the object's surface, resulting from e.g. single bricks or concrete blocks, influence the parameters of the approximating continuous surfaces and, in further consequence, the derived deformation. This issue is tackled in the following paper. B-spline surfaces are used to approximate the scanned point clouds. The approximation implies solving a Gauss-Markov-Model, thus allowing to account for the measurements' stochastic properties as well as to propagate them onto the surfaces' control points. A parametric comparison of two B-spline surfaces can be made on the basis of these estimated control points. This approach is advantageous with regard to the transition of the space-continuous deformation analysis to a point-based task, thus ensuring the applicability of the well-established congruency model. The influence of the structure's geometry on the surfaces' control points is investigated using terrestrial laser scans of a clinker facade. Points measured in the joints are eliminated using a purpose-developed segmentation approach. A comparison of the results obtained from segmented as well as from unsegmented laser scans for the B-spline approximation and the subsequent deformation analysis provides insight into this influence. Introduction Monitoring artificial objects like dams or bridges is a crucial safety task. The increasing use of continuously improving laser scanners demands ongoing development and investigation of methods to detect space-continuous deformations. This paper shall contribute to those investigations and introduces an approach belonging to the category of parameter-based point cloud comparisons. It comprises the approximation of the scanned point clouds with B-spline surfaces, adopting the same number of control points and the same knot vectors in each epoch. With this condition, the B-spline approximation processes of both epochs can be expressed with a linear Gauss-Markov-Model (GMM), wherein solely the control points of the B-spline surfaces are unknowns. Herein, a stochastic model based on error propagation of the variances within the measuring configuration and the instrument's precisions is incorporated. The final evaluation of deformation consists in the geometrical comparison of the estimated control points of the approximating B-spline surfaces in the classical congruency model [1]. An identified change of a control point indicates a local space-continuous deformation within the B-spline surface. A validation takes place with reference to deformation identified by classic terrestrial measurements including stable- and object-points. The major subject tackled within the newly investigated approach is the influence of the scanned object's structure on the deformation results. Therefore, a segmentation approach is applied to the point clouds that separates regular patterns using an extended region growing approach. In previous related work, including [2-5] or [6], space-continuous deformations of different masonry structures were investigated by applying different approaches of deformation analysis: Paffenholz et al. 
[4] applied a geometry-based point cloud comparison incorporated in the M3C2-algorithm to derive deformations of a bridge under load.In Paffenholz and Wujanz [5] this approach is compared to other commercial point cloud comparison methods.The same specimen was used by [3] applying a parameter-based deformation analysis.Here the scanned point clouds are approximated by B-spline surfaces.The derived deformation is based on the calculated Hausdorff-Distance between points of the approximating surfaces.Alba et al. [2] investigated the deformation of a large dam applying point cloud-based comparison methods like mesh-to-mesh or polynomial surface approximations to resampled point clouds.Kalenjuk et al. [6] conducted a parameter-based deformation monitoring by comparing the orientation of normal vectors of fitted planes in scanned concrete panels of retaining walls.The method introduced in this paper is applied and evaluated on terrestrial laser scan data of an aqueduct arc of the Viennese "Hochquellwasserleitung" (see Figure 1).It shows the structure of a brick wall enhancing geometrical differences in the bricks and the joints.The aqueduct contains a gravity channel dimensioned by <1.5 m width and <2 m height. The volume of floating water comprises about 1.9 m 3 of water per meter, which corresponds to a uniform load of 1.9 t/m.Every three months, the pipes are cleaned within a procedure called "Abkehr".At this process the drinkable water is impounded at a reservoir and the channel is released from the water weight.In this context, the question arises whether deformation resulting from the absence of mechanical loads occurs and whether it can be detected in a statistically sound way.A validation of the space-continuous deformation is performed using a point-based deformation analysis.Here targets installed on top of the aqueduct arc are monitored in the same temporal intervals as the point clouds are scanned. The paper is structured as follows: In chapter 2, the methodological principles utilized within this approach are introduced, followed by the description of the conducted measurement campaigns in chapter 3.In chapter 4 the results are presented.In chapter 5 the results are interpreted and discussed.A summary and outlook on further issues that shall be investigated beyond the contents of this paper is given in chapter 6. 
Segmentation: extended region growing approach B-spline approximations incorporating knot vectors whose internal knots are of multiplicity one demand point clouds with a fairly regular point distribution in order to circumvent local singularities [7]. Within the point clouds of the investigated measuring object the geometrical differences between the bricks and the joints cause discontinuities that might influence the quality of the approximating B-spline surfaces and further the derived deformations. Therefore, points within joints are discarded during the segmentation process, which leads to only small, negligible gaps between the bricks. The used approach for the segmentation is based on an initial region growing (RG) [8] solution incorporating two conditions (see Equation (1)): on the one hand the difference between intensity values of neighboring points and on the other hand the scalar product between normal vectors of neighboring points,

|I_i − I_j| ≤ t_1 and ⟨n_i, n_j⟩ ≥ t_2, (1)

where I_i and I_j denote the intensity values, n_i and n_j the normal vectors of neighboring points, and t_1 and t_2 the thresholds of acceptance. Red-Green-Blue (RGB) values are not incorporated because they were not available within the scanning procedures. This initial result is generated using Opals [9] and ideally ascribes points of one brick to one segment. Insufficiencies occur on the one hand as points within a segment are excluded due to intensity fluctuation or on the other hand as points lying in joints are mistakenly included in segments and, furthermore, several bricks are incorporated into one segment. To eliminate these insufficiencies, an improving approach is applied to the initial RG segmentation solution with the inclusion of information about the dimensions and the orientation of the immured bricks. Here the latter cases are remedied by excluding segments with a significantly higher number of points and a large geometrical extent. The former cases are tackled by fitting bounding boxes of known extent and orientation into the segments. Therefore, the eigenvalues and eigenvectors of a segment (resulting from RG) reduced by its centroid are calculated using the Principal Component Analysis (PCA) [10]. The eigenvector corresponding to the smallest eigenvalue points into the direction orthogonal to the scanned bricks. The eigenvector connected to the largest eigenvalue points into the direction of the longest side of a brick. The cross-product of these two eigenvectors completes a three-dimensional coordinate system. A transformation of this coordinate system into a local upright coordinate system with its center in the centroid of a segment constitutes the next step of the approach. In a following step, a bounding box can be placed within the centroid of the segment incorporating the initially known dimensions of the brick. The assumption that the centroid of one segment equals the centroid of one brick is made as a simplification. When placing the bounding box, all inlying points are ascribed to the same segment. After processing all segments accordingly, the segments get aligned in each row, respectively. This is done by calculating the mean alignment of the segments for each row (mean eigenvector corresponding to the largest eigenvalue). The related edges of the bounding boxes are rotationally corrected by their offset to the calculated mean value. The final step of the segmentation is the gap filling. Here unsegmented areas are considered. Segments are inserted in correspondence to the surrounding segments' position and orientation.
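A minimal sketch of the neighbour-acceptance test behind the initial region growing step might look as follows (Python/NumPy; the function name and data layout are illustrative, not the Opals implementation, and the thresholds correspond to the values t_1 = 25 and t_2 = 0.995 reported later in the paper):

```python
import numpy as np

# Illustrative sketch of Equation (1): a candidate neighbour is added to the growing
# region if its intensity is similar enough and its normal is close enough to parallel.
def accept_neighbour(intensity_seed, normal_seed, intensity_cand, normal_cand,
                     t1=25.0, t2=0.995):
    similar_intensity = abs(intensity_seed - intensity_cand) <= t1
    # Normals are assumed to be unit vectors, so their scalar product is cos(angle).
    similar_normal = float(np.dot(normal_seed, normal_cand)) >= t2
    return similar_intensity and similar_normal

# Usage with placeholder values:
# accept_neighbour(1200.0, np.array([0., 1., 0.]), 1210.0, np.array([0.01, 0.999, 0.04]))
```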
B-spline approximation with inclusion of the variance-covariance matrix through error propagation The basis of the developed deformation analysis lies in the space-continuous approximation of the scanned point clouds using B-spline surfaces. A point on the surface Ŝ can be described in dependence of the surface parameters u and v as the weighted sum of the locally relevant so-called control points P_i,j:

Ŝ(u, v) = Σ_{i=0}^{n} Σ_{j=0}^{m} N_{i,p}(u) N_{j,q}(v) P_{i,j},

where N_{i,p}(u) and N_{j,q}(v) are the B-spline basis functions of degrees p and q. The net of control points can be regarded as a scaffold of the B-spline surface. A local deformation in the scaffold, realized through a change in the location of one or more control points, results in a local deformation of the surface and vice versa. This characteristic allows the transition of an initial space-continuous deformation analysis approach to the well-known and widely established analysis of congruent points. The mathematical model of the approximation of a point cloud using a B-spline surface can be described as linear, assuming a prerequisite introduction of the surface parameters u and v, the number of control points n + 1 and m + 1 in the directions u and v, the knot vectors and the functions' degrees p and q [7]. The linear model corresponds to a GMM, where the known observations S(u, v) are the coordinates of the points p_k of the point cloud and the unknowns are the coordinates of the control points P_i,j. Integrating a suitable stochastic model in the GMM is crucial. Within this approach the variance-covariance matrix (VCM) is built through error propagation of known uncertainty sources of the observations. Insertion of the control points in the classical congruency model The control points estimated separately for the two analysed epochs are introduced in the classical congruency model ([1,11]). As in the point-based deformation analysis, a prerequisite condition for this step is a common definition of the geodetic datum. This is tackled within the present space-continuous approach by retaining the same parametrization for both B-spline approximations of epoch 1 and epoch 2. The following introduction of control points in the congruency model transforms the space-continuous deformation analysis into an equivalent point-based problem. This can be done after proving that for all B-spline approximation models the resulting a posteriori variance factors of unit weight are realisations of the same population. If this condition is statistically proven, the congruency test can be applied, holding the following null hypothesis:

H_0: E{d} = 0, (3)

i.e. the hypothesis states that the expectancy value of the difference d between congruent control points is zero. This results in the following test decision: the test value T, formed as the quadratic form of the differences d weighted with their cofactor matrix and normalized by the merged a posteriori variance factor and the number of congruent coordinates, is compared with the corresponding quantile of the F-distribution. If the null hypothesis of the congruency test cannot be rejected at a predefined significance level α, it follows that no deformation occurred between the sets of control points and in further consequence the B-spline surfaces are identical. Otherwise, if the null hypothesis is rejected at a confidence level 1 − α, deformation occurred within the set of control points and has to be located. For the current study the localisation through decomposition of disclosures according to Gauss is applied [11]. Measurement campaign The measurement campaign includes two one-day field trips acquiring data for epoch 1 (EP1) and epoch 2 (EP2), respectively. EP1 took place before the "Abkehr", i.e. when the aqueduct arc was under the usual load of water weight. EP2 took place during the "Abkehr", i.e.
in absence of mechanical loads due to water. As measuring instrument the Leica MS60 was used for the net measurement as well as for the scanning of the specimen. Each epoch includes a scan of the aqueduct arc accompanied by a terrestrial geodetic net measurement. The latter is used, amongst others, for the registration of the scanned point clouds. The measurement net consists of ten net points, of which eight points were delimited with survey bolts and two points were delimited with marking pipes with head plates. In Figure 2 the measured net is visualized. The scanning was conducted from the net point NP02. For an increase in the stationing accuracy the station height was measured with the integrated laser plumb of the instrument. Within each net measurement two object points (SP03 and SP04) were measured as well. Those are equipped with mini prisms installed on the top edge of the aqueduct arc. This provides initial information on whether deformation can be detected within the point-based comparison and further serves as a validation approach for the results obtained with the space-continuous deformation method introduced in the next chapter. The raw point clouds were centred in the instrument's centroid and registered by applying the scan position coordinates as translation and the station orientation values as rotation around the z-axis, both resulting from the net adjustment. Net adjustment The software JAG3D [12] was employed for the adjustment of the terrestrial measurements as well as for the subsequent point-based deformation analysis. Firstly, separate net adjustments of the two measured epochs EP1 and EP2 were conducted. Here outliers were detected individually and the a priori measurement uncertainties were adjusted using variance component estimation. When the individual statistical tests of the net adjustments passed, respectively, a joint net adjustment with the uncertainty components of the single net adjustments was conducted. Within this joint net adjustment, the coordinates of the net points were determined solely once. In Table 1 the resulting parameters of the single net adjustments and the joint net adjustment are shown. In accordance with the joint net adjustment, the measured object points were introduced into a point-based deformation analysis. The results of the deformation analysis state that the object points did not move (the hypothesis stated in Equation (3) could not be rejected at the confidence level of 95%). The differences between the coordinate elements of the object points measured in the two epochs are visualized together with their standard deviations in Figure 3. Registration of scans As mentioned in chapter 3, the point clouds obtained in the two epochs were registered using the adjusted coordinates of the scanning position (NP02) obtained from the joint net adjustment. The orientation parameter o was applied as rotation parameter around the z-axis. The corresponding values of the two station orientations result from the joint net adjustment. The registration parameters and their uncertainties are noted in Table 2. Segmentation The segmentation process described in chapter 2 was conducted using the following parameters: within the initial RG segmentation in Opals the used thresholds of acceptance (see Equation (1)) were chosen as t_2 = 0.995 and t_1 = 25. In the next step of the segmentation process the bricks' dimensions were incorporated: the immured bricks are of dimension 21.5 × 13.5 × 6.5 cm. In Figure 4 the results of the segmentation process are shown.
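A minimal sketch of the PCA-based bounding-box step of chapter 2, applied with these brick dimensions, could look as follows (Python/NumPy; the function name and data layout are illustrative, not the authors' implementation):

```python
import numpy as np

# Illustrative sketch: orient one region-growing segment with PCA and keep the points
# that fall inside a brick-sized bounding box centred on the segment centroid.
def refit_segment(points, brick_dims=(0.215, 0.135, 0.065)):
    """points: (n, 3) array of one RG segment; brick_dims in metres (21.5 x 13.5 x 6.5 cm)."""
    centroid = points.mean(axis=0)
    centred = points - centroid

    # PCA: eigenvectors of the covariance matrix give the segment orientation.
    eigvals, eigvecs = np.linalg.eigh(np.cov(centred, rowvar=False))  # ascending eigenvalues
    e_long = eigvecs[:, 2]        # largest eigenvalue -> longest brick side
    e_normal = eigvecs[:, 0]      # smallest eigenvalue -> direction orthogonal to the wall
    e_short = np.cross(e_normal, e_long)
    R = np.column_stack([e_long, e_short, e_normal])   # local "up-right" frame

    # Transform into the local frame and test against the known brick extent.
    local = centred @ R
    inside = np.all(np.abs(local) <= 0.5 * np.array(brick_dims), axis=1)
    return points[inside], R, centroid

# Usage: cleaned_points, frame, centre = refit_segment(segment_points)
```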
B-spline approximation One initial task of the B-spline approximation of the point clouds is setting the optimal number of control points. For this, the Bayesian Information Criterion (BIC) [13] was used [14]. Within the range of 4-20 control points in both the u- and v-direction, 13 × 5 control points resulted as the optimal choice, with a degree of 3 in both directions. The functional model therefore consists of a (3k × 3·65) design matrix A, where k is the number of points within a point cloud. The initial B-spline parameter values introduced in A are determined in a preceding step by projection of the points onto an initial surface, i.e. a Coons patch [15]. The initial stochastic model results from error propagation of the measurements' uncertainties and the uncertainties of the scanning position as well as its orientation. The station coordinates' uncertainties are estimated within the net adjustment to be in the range of 0.2 to The set-up covariance matrix is introduced in the GMM to calculate the coordinate components and the uncertainties of the control points. If the null hypothesis of the global test of the GMM is rejected, the stochastic model is iteratively adapted. Observations, i.e. coordinate components of the single points of the point cloud, with a significant value of normalized residuals [11] are downweighted. However, these observations are not downweighted directly; instead, the uncertainties of the corresponding polar measurement components, i.e. the standard deviations of the horizontal directions and zenith angles as well as the slope distances, are adapted. Those are scaled by a factor of 1.05 in each iteration step. This is based on the finding that local fluctuations in the measuring directions, not the stations' uncertainty values, mainly cause the outliers. In the following Table 4 the a posteriori variance factor of the accepted null hypothesis and the maximum values of adapted standard deviations of horizontal directions, zenith angles and slope distances are shown for the point clouds of both epochs EP1 and EP2, in segmented and unsegmented form, respectively. The maximum values occur for the same points having significant normalized residuals in several iteration steps and thus being downweighted more times. The locations of the points of the point cloud whose variances were scaled up during the iteration process are in every case distributed all over the arc, commonly rather in the edge regions of the scans. The parameter values in Table 4 show that within the approximation of the unsegmented point clouds the maximum values of downweighted observations are higher than when using segmented point clouds.
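As an illustration of how this linear model can be set up, the following minimal sketch (Python/NumPy; the helper names, the clamped knot vectors, and the per-coordinate solve are assumptions for illustration, not the authors' implementation) assembles a design matrix of tensor-product B-spline basis values and estimates the control points by weighted least squares:

```python
import numpy as np

# Illustrative sketch, assuming clamped knot vectors and precomputed surface parameters
# (u_k, v_k) for every scanned point; not the code used in the paper.
def basis_row(x, t, p):
    """All B-spline basis function values N_{i,p}(x) for knot vector t (Cox-de Boor)."""
    m = len(t) - 1
    N = np.zeros(m)
    for i in range(m):                       # degree-0 step functions
        if t[i] <= x < t[i + 1] or (x == t[-1] and t[i] < t[i + 1] == t[-1]):
            N[i] = 1.0
    for k in range(1, p + 1):                # raise the degree recursively
        for i in range(m - k):
            left = 0.0 if t[i + k] == t[i] else (x - t[i]) / (t[i + k] - t[i]) * N[i]
            right = 0.0 if t[i + k + 1] == t[i + 1] else \
                (t[i + k + 1] - x) / (t[i + k + 1] - t[i + 1]) * N[i + 1]
            N[i] = left + right
    return N[:m - p]                         # n + 1 values

def design_matrix(uv, t_u, t_v, p=3, q=3):
    """One row per scanned point: products N_{i,p}(u) * N_{j,q}(v), ordered like the control points."""
    return np.array([np.kron(basis_row(u, t_u, p), basis_row(v, t_v, q)) for u, v in uv])

def estimate_control_points(A, l, P):
    """Weighted least squares of the GMM, solved per coordinate component: (A'PA) x = A'Pl."""
    return np.linalg.solve(A.T @ P @ A, A.T @ P @ l)

# Usage with placeholder data: clamped knot vectors for 13 x 5 cubic control points,
# uv of shape (k, 2), l the x- (or y-, z-) coordinates of the k points, P their weight matrix.
```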
Deformation analysis The deformation analysis is performed by inserting the control points estimated in the two epochs in the congruency model. The model is set up for both cases of segmented and unsegmented data, in order to investigate the influence of the object's structure on the outcome of the analysis. In a prerequisite step the a posteriori variance factors are tested as to whether they are realisations of the same population. In both cases the null hypothesis stating that they belong to the same population was accepted, such that the a posteriori variance factors were merged. Table 5 shows the resulting test values calculated for the global congruency test and the corresponding quantile for both cases of segmented and unsegmented data.

Table 5: Test values of the congruency tests.
Epoch comparison | T | F(α = 5%)
Unsegmented PC | 1.48 | 1.17
Segmented PC | 1.39

In both cases the global congruency test indicates a significant test value, meaning that deformation occurs in the collectivity of the control points. In the further localization process, which was implemented as localization by means of decomposition of disclosures according to Gauss [11], the deformed control points are identified. In case of unsegmented point clouds four control points are identified as deformed. Using the segmented point clouds, the number of deformed control points was three. In both cases the deformed control points are not centered in a specific area of the arc. In Figure 5 the position of the deformed control points is marked red for both cases. The white points are stable. The surface underneath the control points is the approximated B-spline surface of EP1, respectively. The coordinate differences of the deformed control points are visualized in Figure 6, together with their corresponding standard deviations. It can be noticed that for three cases (20, 50, 51) the coordinate differences lie below their corresponding standard deviation. The components of all difference vectors lie below their doubled standard deviation (2s). As a consequence of these results, and in view of the results obtained in the point-based case (see Section 4.1), the assumption arose that there might be an unrecognized rigid body movement left, leading to falsely identified unstable points. Therefore, the Iterative Closest Point (ICP) algorithm [16] was applied to the already registered point clouds as an additional pre-processing step. The resulting translation and rotation parameters are given in Table 6. The quality criteria of the B-spline approximation remain unaffected by the application of the ICP algorithm. Therefore, the values are the same as listed in Table 4. The test values of the congruency tests changed noticeably. They indicate that no deformation occurred between the two epochs, for both cases of unsegmented and segmented point clouds. The test values and quantiles are stated in Table 7.

Table 7: Test values of the congruency tests.
Epoch comparison | T | F(α = 5%)
Unsegmented PC | 0.95 | 1.17
Segmented PC | 0.94
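A minimal sketch of this global test decision (Python with NumPy/SciPy; variable names are illustrative and the test statistic follows the standard congruency-test form from the deformation literature, which the paper only references via [11], so it is an assumption rather than the authors' exact formula):

```python
import numpy as np
from scipy import stats

# Illustrative sketch of the global congruency test: the quadratic form of the control
# point differences d, weighted with a pseudo-inverse of their cofactor matrix Q_dd and
# normalized by the merged a posteriori variance factor s0_sq, is compared with an
# F-quantile; not code from the paper.
def global_congruency_test(d, Q_dd, s0_sq, f2, alpha=0.05):
    """d: stacked coordinate differences; f2: redundancy of the merged variance factor."""
    h = np.linalg.matrix_rank(Q_dd)                 # degrees of freedom of the quadratic form
    T = d @ np.linalg.pinv(Q_dd) @ d / (h * s0_sq)  # test value
    quantile = stats.f.ppf(1.0 - alpha, h, f2)      # e.g. the F(alpha = 5%) column in Tables 5 and 7
    return T, quantile, T > quantile                # True -> reject H0 -> deformation indicated

# Usage: T, q, deformed = global_congruency_test(d, Q_dd, s0_sq, f2)
```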
Analysis of the results Several factors are crucial in the proposed approach. The first factor refers to the influence of the object's structure on the B-spline approximation and in further consequence on the outcome of the deformation analysis. Considering the parameters in Table 4, an influence of the object's structure on the epoch-wise B-spline approximation is noticeable, as the maximum values of downweighted observations are significantly smaller using segmented point clouds. This consequence is logical because local discrepancies of the point cloud to the approximated surface in the joints are eliminated. Considering the derived test values T in the global congruency tests in Tables 5 and 7, both results show a smaller test value T using the segmented point clouds. In this context it is relevant to state that the imported point clouds were subsampled to a point spacing of 5 cm before the B-spline approximation, due to computational limits. Nonetheless, an increase of the test value in the congruency test, using unsegmented point clouds, does occur. Still, the resulting statements of the two global congruency tests are in both cases of unsegmented and segmented point clouds the same. In the case of the first global congruency test (Table 5), the null hypothesis is rejected either way. After the additional alignment of the point clouds using the ICP algorithm, the null hypothesis cannot be rejected. The results of the latter test comply with those obtained from the point-based deformation analysis. This result underlines the importance of a very precise registration of the compared point clouds. In the present case, the registration information extracted from the net adjustment is not sufficient, as it neglects a small but crucial bias; this causes the detection of non-existent deformations. Further causes for the detected inconsistencies can be simplifications of the processing procedure, like the coordinate-wise approximation of the B-spline surfaces or the neglect of inter-epochal correlations. Their impact will be investigated in further studies. Some arguments for an erroneous identification of the moved points in Figure 5 are the following: the identified points are distributed over the approximated B-spline surface, whereas, according to the definition of B-splines, a local deformation would presumably affect several neighbouring control points. Furthermore, the coordinate differences of the unstable points are mostly below their corresponding standard deviation. When neglecting covariances, these differences are insignificant. Finally, the simplification of the applied stochastic model can be another cause for erroneous detection. Kermarrec et al. [3] already investigated the importance of the propagation of the correlations between observations and its effects on the test results of the global congruency test. Herein, the stochastic model results from error propagation and is therefore fully populated for each scanned point. However, at the level of the polar measurements, these were regarded as uncorrelated. Additional influences, such as correlating factors in the stochastic model of the point clouds as introduced in [17] or [18], are neglected here.
Summary and outlook In this paper a space-continuous deformation analysis approach based on B-spline approximations was established and tested on point clouds of a regularly structured aqueduct arc. The deformation model can be categorized as a parameter-based point cloud comparison, as the control points of two registered B-spline surfaces are compared by insertion in the congruency model. Within the course of the evaluation, the influence of the scanned surface structure on the B-spline approximation and further on the deformation results was investigated by introduction of unsegmented and segmented point clouds. The latter data set showed no structure, as points within joints were eliminated. The investigated model of deformation analysis holds a lot of promise, as deformation within the set of control points could be rejected in a statistically sound way. This result was obtained in conformity with a pointwise object point comparison. An influence of the surface's structure on the B-spline approximations could be registered by comparison of the magnitude of downweighted observations using segmented and unsegmented point clouds. The test values within the congruency test were smaller using segmented point clouds as well. However, the test decision was equivalent using either unsegmented or segmented data. The importance of a highly precise point cloud registration appeared to be crucial within the evaluation. Likewise, the incorporation of a sufficiently accurate stochastic model proved to be of utmost importance. These aspects will be investigated more deeply in future research.
Figure 2: Measurement net with the net points NP01-NP10 and the object points SP03 and SP04. The observations are illustrated as lines between net points.
Figure 3: Difference between the object points from the compared epochs with corresponding standard deviation centered on the horizontal axis.
Figure 5: Visualization of the stable and deformed control points evaluated using (left) unsegmented point clouds and (right) segmented point clouds.
Figure 6: Difference between the deformed control points from the compared epochs with their derived standard deviation centered on the horizontal axis for (l) unsegmented point clouds and (r) segmented point clouds.
Table 1: Parameters of measurement adjustments.
Table 2: Point cloud registration parameters.
Table 3: Uncertainty values used in the error propagation.
Table 5: Test values of the congruency tests.
Table 7: Test values of the congruency tests.
5,654.6
2023-01-10T00:00:00.000
[ "Mathematics" ]
Economic Framework of Smart and Integrated Urban Water Systems : Smart and integrated urban water systems have important roles in advancing smart cities, but their contributions go much further by supplying needed public services and connecting other sectors to meet sustainability goals. Achieving integration and gaining access to financing are obstacles to implementing smart water systems and both are implicit in the economic framework of smart cities. Problems in financing the start-up of smart water systems are reported often. The local and diverse nature of water systems is another barrier because an approach that works in one place may not work in another with different conditions. The paper identifies the challenges posed by the economic framework and provides examples from four cities with diverse characteristics. It outlines pathways to advance implementation of smart water systems by improving control strategies, advancing instrumentation and control technologies, and, most of all, helping to transform cities by raising customer awareness and trust through reliable and useful water information. Introduction Smart cities have many interrelated parts, and their water systems provide essential functions by supplying needed public services and connecting other sectors with water to aid them in meeting economic, social, and ecological goals. To play these roles effectively, urban water systems should be integrated and take on smart attributes [1]. However, while technological aspects of smart urban water systems are impressive, the systems face challenges to achieve integration, employ smart technologies effectively, and sustain adequate financing sources [2]. Achieving integration and gaining access to adequate financing are principal obstacles to advancing the state of the art of smart urban water systems. While they provide multiple types of services to diverse stakeholders and are vital to cities, they confront financial challenges caused by their multiplexed economic framework. Managing these challenges is complex in cities with decentralized power structures, especially considering issues of affordability and financial limitations. The paper describes the economic framework of smart water systems and smart cities, and it explains the resulting incentives and controls that it imposes on them. It also identifies challenges the systems face, and it outlines future directions to help advance the implementation of smart water systems in smart cities. The future directions go along three lines. One is improvement in control technologies, which is the main topic that is discussed by researchers studying smart water systems. A second is the advancement of instrumentation and controls, which is driven mainly by commercial incentives. Most significant, however, is the third line, which is about how customers use smart water information to help transform cities. Several brief case studies of smart urban water systems and integrated approaches are summarized as examples of current situations. These were selected from a growing inventory of case studies that have been made available through recent studies of the International Water Resources Association [2]. The selected cases illustrate a leading-edge city (Singapore), a new smart city in Korea (Paju), a growing medium city in the western United States (Fort Collins), and a large city in Mexico with a very limited water supply (Juarez).
While the case citations are brief, they range across the major issues along the three lines of discussion, advancing technologies, new control methods, and customer interactions in urban areas. Interest in smart water systems is advancing rapidly, but their implementation and success will be shaped more by economic and social forces than by advances in technology. The literature about smart water systems mainly focuses on examples of new technologies, and this paper aims to focus more directly on the economic forces that must be confronted by system managers to facilitate improved urban water services. Urban Water Systems Urban water systems are conglomerates of the infrastructures and operating controls of water supply, wastewater, stormwater, and recycled water systems [3]. They serve the social, constructed, and natural subsystems of living cities and they connect elements of these subsystems through interdependences [4]. For example, water supply connects to public safety by providing standby fire protection, stormwater systems can add to open space and recreational opportunities, and urban water systems draw supplies from ecosystems and can in turn nourish them. This general perspective explains how water interacts with other subsystems, but a city is a system of systems, and it requires more information to explain how urban water systems work and are controlled. A concept for the interdependences among water supply, wastewater, and reclaimed elements is shown in Figure 1, and stormwater systems can be shown to cut across these as they involve entire cities. Stormwater harvesting can be added to the diagram, as well as the biosolids management that results from wastewater treatment. The urban water system works with raw water diverted and conveyed to treatment, distribution, and to users. Wastewater is collected, treated, and discharged to the receiving stream. Biosolids (or sludge) are dewatered, digested, and disposed of. The stormwater system traverses the city to convey diffused waters, some of which end up in the collection system. Water can be recycled from the wastewater or stormwater system, and raw water can be used directly for purposes such as fire suppression or landscape irrigation. Integration of urban water systems requires cooperation among the management structures for the separate services. This can be formal, as in placing all services in a single organizational unit, or it can be informal, with cooperation occurring on an ad hoc basis.
Formal integration requires a comprehensive approach that includes diversifying sources, protecting water at its source, and integrating operations for water storage, distribution, treatment, recycling, and disposal. This requires that the systems are operated as one utility to facilitate efficiencies like the recycling of wastewater or reducing water footprints. Informal integration is much weaker and depends on incentives as well as opportunities for joint work [5]. As urban water systems are nested within the economic, social, and natural systems of cities there are several facets of integration. One integrates the services, such as when the separate services are operated as one utility. Another facet of integration is between the utilities and the external water environment of the streams and aquifers in and around the city. The urban water system impacts the interflows between ground and surface water and the heat island effects in cities, as well as flood plain health and urban ecology. A test for the effectiveness of integration is whether it enables the city to reach net-zero impact status in providing water, preventing pollution, and sustaining urban ecology. Why Integration Is Needed Integration among the separate urban water subsystems is a logical partner to smart systems. Application of controls to the separate water systems has been evolving for decades, but without integration, the efforts will fall short of addressing the major issues that cities face. This reality is widely recognized in the water industry and among urban leaders, and concepts such as One Water are emerging to develop appropriate strategies [6][7][8]. As an example, one of the cases cited here, the city of Singapore, faces the need to develop a total One Water approach if it is to meet its water needs in the future. Another of the cases, Juarez, which also faces a severely limited water supply, is addressing its smart water system with a different approach. Integrated urban water resources management through the One Water approach has the potential to affect the shape and functionality of cities [9]. If the One Water approach is to achieve its goals, it must provide customers with integrated information to avoid confusion. For example, customers would not tolerate one message about water use and another about wastewater, especially when they are confronted with so many new categories of information in today's world. If integration can give people a sense of living in a total water environment, they will be more likely to accept a bill for services that pays for all systems involved. They should be informed how smart systems are improving efficiencies through combinations of services with multiple use facilities, for example, detention ponds and stormwater harvesting. Integration can improve information flow in different ways and, if handled effectively, has the potential to improve customer trust in utilities. Another important advantage of integrated approaches is to protect the urban water ecology, which can be built into the combined system to minimize water withdrawals and protect water quality and species [10,11]. Smart Water Systems in Smart Cities A smart urban water system would join the constellation of other smart systems in a smart city, which use computer controls and information to collect data, use that data to improve operations, and communicate with citizens about all aspects of their lives in the cities [12]. 
In a smart city, a smart integrated urban water system would feature infrastructure controls, collection of system and user information, application of the information to control water operations and to inform citizens, and it would be available for emergency management. The concept is illustrated in Figure 2, which shows a system to be controlled and actuators to implement controls based on decisions informed by models and data. The data collection function informs system operators and users as well. Data from the system can be used for operations and troubleshooting. Emergency information can be sent to the actuators. The concept has evolved from the early supervisory control and data acquisition (SCADA) and control systems to today's smart concept [13]. More specifically, prior to the evolution of smart systems, the concept would not have included the transmission of data shown by the dotted lines stemming from the general flow of information and control commands. So, the ability of the smart system to provide the information for operations, customers, and emergency management comprises the most promising new feature. These features of smart water systems have evolved from the past when information-based controls started with the advent of computers.
This led to the development of the SCADA systems, and later sophisticated databases and geographical information systems were added to create technology-based platforms for system management. When personal digital assistants became available around the turn of the century, they were widely used in control and data acquisition functions. Currently, these technologies have advanced with the development of smartphones, smart meters, and improved process controls [14]. Toward the future, more automation, advanced use of information for system management tasks such as leak detection, and more attention to cybersecurity are expected. Additionally, user information is expected to improve substantially. Controls, Instruments, and Utilization of Smart Water Systems The advancement of technologies for instrumentation and controls is driving the trend toward smart systems in different sectors of the economy. The concept of smart systems is being applied in cities, for example with smart buildings and smart transportation systems, among others. All have similar attributes based on the availability of sensors, actuators, and controls of subsystems for functions such as lighting, energy management, scheduling, and more. Advancement in control methods follows the availability of technologies but also learning and experience about successful strategies. For example, in the case of smart water systems, some computer-based simulation models have proved to be useful and effective, while others have been developed but discarded due to lack of practical application. A possible example of this situation is the use of control methods that seem promising but have high levels of complexity and maintenance requirements that create a risk of malfunction. The use of inflatable dams placed inside of combined sewers to control overflows seems to fit this category [15]. These were proposed 50 years ago in San Francisco, for example, to address the expensive and perplexing problem of the aging combined sewers. However, their application has been limited. While much of the interest in smart water systems has been driven by technologies and control methods, how customers use smart water information to help transform cities is more strategic. If the past is a guide to the future, it is expected that technologies and control methods will continue to advance incrementally, new business opportunities will be created, and experts and managers will continue to seek out new methods to experiment with them. Meanwhile, the benefits of these advances to customers and how the evolving smart systems are accepted and improve quality of life in cities remain larger questions. This issue is not limited to smart water systems, of course, because people today are confronted with a rapidly changing information environment which affects their daily lives in many ways. Economic Framework of Smart Water Systems An economic framework for smart water systems will involve the supply and demand for the public and private goods provided, their direct and indirect benefits and costs, and their linkages and impacts due to interdependences with other sectors. The organization of system management to facilitate collective action is also involved, as are social issues stemming from interactions with system customers and other stakeholders [16]. The demands for water, wastewater, stormwater, and recycled water services are different. The water supply and recycled water are commodities that can be sold like toll goods [17]. 
Most explanations of water sector demands focus on them, but the demands for other services must also be assessed. Wastewater service comprises carrying unwanted used water away from properties and requires paying a portion of the treatment and disposal. Stormwater service involves draining properties and public areas and paying to prevent pollution from urban runoff. The supply of each of these services involves a combination of responding to demands and meeting mandates. The direct and indirect benefits and costs of the services stem from the demands that are to be met [18]. The benefits of water supply services involve applying the water for uses that range from meeting essential drinking water requirements to satisfying social needs, like health-related hygiene in the home. For wastewater, the direct benefit to customers is the removal of the unwanted residual water, and environmental protection is an indirect benefit. Stormwater services have multiple benefits, including protecting property and facilitating traffic movement in the city. Their multi-faceted attributes have made them a key part of a sustainability assessment protocol [19]. Taken together, the benefits from urban water services range across multiple categories and cannot be quantified easily. The indirect benefits of urban water services demonstrate the connector roles of water services to various aspects of the economy and society. They stem from linkages and interdependencies with the other sectors, like employment, housing, energy supply, urban transportation, and the environment. Social impacts from water and its interactions can be considered indirect benefits and costs, and there is a substantial literature discussing how to define them [20]. Smart water systems will operate in the context of these interdependencies during normal and emergency times. For example, quality housing depends on meeting all water demands effectively during normal times. If there is a failure, the quality of the housing deteriorates. In the case of emergencies, the interdependencies can lead to systemic failures. For example, during flooding the failure of a stormwater system might trigger failure if the flooding shuts down a part of an electric power system [21]. The economic framework also involves the organization of urban water systems to facilitate effective operation and integration. The organization will depend on the structure of government and on public-private cooperation. Each of the services can be a separate utility or various degrees of consolidation and integration can be forged. Finance is perhaps the most important issue in the economic framework of smart water systems. The management organizations can operate as utilities, but public goods are included among their services, and financing them may require subsidies along with fees for services. Each of the water services has its own financial structure. For example, water supply is like a public utility. Wastewater involves charges that are mandatory and may be offered like a utility, even if its services do not involve the distribution of a commodity. Stormwater has a unique financial structure and has been moving toward a utility model [22]. Recycled water is difficult to finance but is positioned as a utility to facilitate it [23]. By adding smart capabilities to integrated urban water systems new possibilities for effectiveness, equity, and reliability are created. 
Perhaps the greatest benefits are to fulfill demands more effectively and to extend services on a cost-effective basis to more people. Smart systems with their information and computing technologies should be able to promote integration through linkages for communication and controls [24]. By sensing the status of systems and their performance, greater reliability should be fostered [25]. In the category of improving performance by reducing failures, smart systems can monitor environmental water quality and water in piped systems to prevent exposure of people to negative situations [26]. Additionally, hydrometeorological sensing networks can provide advance notice of urban flood problems and warnings can be issued [27]. Blocking cyber terrorism is another potential benefit, although the smart capability added by linking the systems may open them to new possibilities for attack [28]. Urban customers can expect more information about total benefits from water services and they can learn more about the total picture of water in cities through education programs. They can also learn about water finance and conservation with information about their bills. Subject to privacy concerns, information can be extended down to the water use of individual fixtures by using the technology of the Internet of Things (IoT). Examples of Smart Integrated Water Systems The selection of brief case studies as examples is based on innovation and illustrates a range of situations. Singapore is one of the world's major cities, and it is chosen as the primary case study because it has developed its concepts comprehensively over a period of years. Paju Smart Water City in South Korea is smaller than Singapore and its experience is explained due to its connection with an evolving smart city. Fort Collins, Colorado is still smaller and considered a medium city in the United States that is evolving its smart systems in ways that are characteristic of leading-edge cities. Juarez, Mexico is a large city in an arid zone with major issues in sustaining an adequate water supply. It has used smart systems to address major issues of sustainability. In Singapore, the imminent loss of its source of water motivated the city to create an integrated water system, and this led to the development of a "four taps" approach to water management that includes local catchments, imported water, reuse, and desalination. The four taps provide for integration of sources, reuse provides a link between water supply and wastewater infrastructure, and a separate stormwater system is also used [29]. Singapore's water agency (PUB) has explained its approach to smart water management [30]. The integration will connect the smart drainage grid, smart plants, smart water grid, and smart sewer grid. Technologies to enable the smart features will include machine learning for decision support systems, big data to provide insights about the system, the Internet of things to connect sensors and devices, process simulations for scenarios using digital twins, and robots or unmanned vehicles to perform manual tasks. Singapore anticipates using many smart features in its individual systems. For example, the drainage grid will have extensive hydrometeorological monitoring, the smart plants will feature automation, the smart water grid will include pressure flow and water quality, and the smart sewer grid will include illegal discharge tracing. 
When these features are fully operational, they will aid Singapore in its quest to be self-sufficient in water and to have greater water security. Paju is a developing city in the northern part of South Korea, near the 38th parallel. Its population is about 425,000, and it has been the focus of developing a smart water system with the sponsorship of K-Water, South Korea's national water agency. Paju is explained here as a case study and is one of the selections in the study by the International Water Resources Association about smart water management. Korea has devoted substantial attention to the development of improved integrated water management and the use of intelligent water resources operations technology to promote water security and improved operations [31,32]. This emphasis on using technology is carried out in conjunction with multiple objectives, such as water safety and security, including a "tastier and healthier water supply". This requires the use of smart water management to provide real-time information about tap water quality to gain public trust. K-water is pursuing objectives to manage source water, optimize water treatment, operate intelligent networks, customize industrial water supply, and optimize wastewater treatment. These goals are translated into water quality management, water quantity management, risk management, energy management, pipe network analysis, and demand forecasting. So far, the major accomplishments of the Paju Smart Water City Pilot Project focus on improving the reliability of tap water, better customer service, and relieving distrust by citizens of tap water. Prior to the project, only about 1% of the population used tap water for drinking, but after the pilot project, this percentage increased to 36.3%. Reports of satisfaction have also improved and the need for water purifiers will be significantly reduced. Fort Collins, Colorado has an integrated utility where four services are managed together: water supply, wastewater, stormwater, and electric power. The city has recently completed a study of how to foster additional integration by linking raw irrigation water with its water supply system to utilize water sources and infrastructure better [33]. Fort Collins exhibits a small amount of water recycling, but its gravity water supply system would make it expensive to treat wastewater and pump it back uphill. Moreover, the water law system that operates in Colorado places certain barriers in the way of water recycling. The smart water features in the Fort Collins system are typical of those in US cities that are implementing new technologies. For example, the city has district metering areas with valves and remote pressure gauges, access to radar-based weather forecasting, and nondestructive sensors for main wastewater sewers. Fort Collins is customer friendly with its utility services and engages actively with water users by reporting about periodic water uses to encourage conservation. Fort Collins can access other technologies such as advanced metering infrastructure and real-time modeling for water distribution, and it implements them according to need.
The city has undertaken an aggressive smart water program based on measurement and control to interact with customers in proactive ways for water conservation. The features of the system are smart meters with high accuracy, pressure control valves, advanced metering infrastructure, and a web platform for customer outreach. The project is to be self-funded and progress through 2021 has been promising. Implementation started with major users and will be extended to additional smart meter applications and system controls. The eventual goal is to eliminate most water losses, measure and manage water use carefully, and improve the capacity of the utility to continue to serve the growing city, despite water supply limitations. More case studies are available, and a synthesis can be found in [2]. Many case studies of integrated approaches are also available [35]. The lessons learned converge on a few principal challenges. These will be summarized next, and future directions will be summarized afterward with the identification of trends that are evident from the cases. Challenges to Integration An understanding of significant barriers to implementing integrated and smart water systems has emerged from the pilot studies. Some of the challenges were summarized in the smart water management study by (IWRA, 2021), which had that as a goal. Other barriers have been identified through studies such as [36], which assessed studies about the implementation of integrated strategies. The understanding of challenges converges to a few main problems. Most barriers to integration deal with organizational stovepipes and the difficulty in merging or even gaining cooperation among public organizations [37]. Another challenge is the complexity of integrated urban water systems, with their different levels of infrastructure and operations management and their separate regulatory structures. To move the needle will require transformations in institutional cultures to overcome resistance to change. Concessions may be needed to overcome resistance to integration. For example, participants in one case study overcame stovepipes partially by allowing the merged organizations to retain their separate SCADA systems. In terms of implementing smart attributes, financial barriers were identified most often, with a focus on higher-than-expected costs and lower revenues than needed. Lack of financial support for demonstration and training was also mentioned. A related barrier was the large amount of time and institutional commitment required. The institutional commitment required competes with other capacity issues, which also pose limiting factors for the implementation of smart systems. Capacity is needed among staff to plan and manage programs, as well as from support organizations to provide services and equipment. For example, in support organizations, the large-scale demand for the production of smart meters may be a constraint. On the utility side, the task of reducing staff positions that perform traditional but now outdated functions can be difficult. Additionally, other stakeholders such as allied agencies must have enough capacity to participate productively, to understand, and support the transition to smart systems. The local nature of water systems and the diversity it creates constitutes a barrier because, even if an approach works in one place, it may not be possible to replicate it in another with different conditions. 
This means that pilot studies may have limited applicability, except for very specific components or practices in smart systems. It is also difficult to assess the benefits of smart systems if incremental performance improvements and providing enhanced information to customers do not yield politically attractive results in the near term. This problem, when combined with the limited funding for smart systems, may defer programs indefinitely. Given the problems in large cities, the greatest opportunities for innovation may be in small and medium-size cities where different approaches in governance are more feasible. A reality in the vision of integrated urban water systems is the case of struggling cities where public services fall way below minimum standards [38]. Smart water cities hold the potential to improve quality of life and urban environments anywhere that effective governance is in place. When it is, it seems inevitable that urban water systems will change, that their performance will respond to the felt needs of people in the cities, and that smart systems can help to effect behavioral changes such as greater confidence in tap water for drinking, water conservation, greater use of recycling, and improved urban water environments. The behavioral changes will occur as people learn more about their water systems, which can occur through the provision of more useful and effective information about the systems. People can learn about health effects, positive environmental management, and preparation for emergencies, among other water-related subjects. The model shown in Figure 2 provides a way to forecast how these changes can occur. The primary driver will be user information for daily needs and emergency preparation. People have ongoing needs for quality drinking water service, responsible wastewater management, and drainage of properties and common areas. While they may take the services for granted in some cases, with creative approaches to water education, better cooperative arrangements between water managers and water users can take place. Organizational and governance changes required to facilitate future improvements are likely, although in some situations progress may be slow. Resistance to change has deep roots on both an individual and an organizational basis. The possibility of improvements brought about by new technologies may drive change more rapidly than top-down political decisions intended to force changes prematurely. As new generations of water managers take the reins, they will be more familiar with technological possibilities and search for ways to change the way business is conducted. With ongoing changes in technologies, it seems certain that unexpected possibilities will develop. The recent availability of social media, smart devices, wireless communications, machine learning, and others has ushered in management possibilities that were heretofore unknown. These have already changed many institutions and, although urban water organizations may be slower and less agile than private businesses, they will change as well. The results should be positive, making systems more robust and reliable and benefiting consumers as well, both in quality of service and in effectiveness.
7,263.2
2022-03-02T00:00:00.000
[ "Engineering" ]
Integrating Resonator to Enhance Magnetometer Microelectromechanical System Implementation with ASIC Compatible CMOS 0.18 μm Process In this study, a multi-function microelectromechanical system (MEMS) was integrated with a MEMS oscillator, using the resonant frequency oscillation characteristics of the oscillator to provide the Lorentz current of the magnetometer and enhance the dynamic range of the readout, which eliminates the off-chip clock and current generator. The resonant frequency can be tuned by adjusting the bias voltage of the oscillator to further adjust the sensitivity of the magnetometer. With the mechanical Q value characteristic, a large dynamic range can be achieved. In addition, a readout circuit with a nested chopper and correlated double sampling (CDS) is used to reduce the noise and achieve a finer resolution, and a calibration circuit compensates for errors caused by the manufacturing process. The frequency tuning range of the proposed structure is 17,720–19,924 Hz, and the measured tuning range is 110,620.36 ppm. The sensitivities of the x-, y-, and z-axes of the magnetometer with a driving current of 2 mA are 218.3, 74.33, and 7.5 μV/μT at an ambient pressure of 760 torr. The resolutions of the x-, y-, and z-axes of the magnetometer with a driving current of 2 mA are 3.302, 9.69, and 96 nT/√Hz at an ambient pressure of 760 torr. Introduction In recent years, the magnetometer (MAG) has become one of the key elements in the inertial measurement unit (IMU), used for navigation, alignment, height detection, and so on. The signal of angular velocity can be obtained by the MAG sensor with the use of software [1]. With this, the conventional 9-axis IMU can be implemented without requiring a gyroscope. In the natural environment, received signals come with different magnitudes and frequency ranges. Therefore, designing a wide-range MAG becomes more important for wearable device applications. This paper refers to the previous paper titled "A Multi-function Microelectromechanical Systems Implementation with ASIC compatible CMOS 0.18 µm process" [2]. In the previous paper, a multi-function MEMS including a three-axis MAG and a three-axis accelerometer (ACC) was implemented. The magnetic sensor has many advantages as a replacement for the MEMS gyroscope and reduces the hardware cost. However, the resolution was 44.06-87.46 nT/√Hz and was limited by the circuit noise equivalent magnetic field (CNEM) and the equivalent Brownian noise (BNEA). Moreover, the curvature was still too large and the sensitivity of the three-axis MAG, at 7.1-10.7 µV/µT, was too small. In this paper, the length of the beam is decreased in the MEMS to reduce the curvature, which sacrifices sensitivity. To address the reduced sensitivity and the problems encountered in the previous paper, the MAG-integrated resonator is used. The Lorentz-current MAG integrates a MEMS resonator to replace the original DC bias current. Thus, the MAG and the MEMS resonator resonate at close resonance frequencies to expand the sensitivity tuning range. In the proposed MEMS oscillator with fishbone structure, the resonance frequency has a large tuning range and can provide Lorentz current with different frequencies. It can adjust the sensing range of the MAG from microtesla to tesla values and is suitable for many applications.
The MEMS oscillator with the fishbone structure is analyzed by changing the DC bias between the sense electrode and the shuttle to enhance the spring softening effect [3]. Moreover, the entire circuit can be simply divided into three parts. The first part is a readout circuit including a noise-reducing chopper architecture, frequency-division multiplexing, and time-division multiplexing methods. Then the calibration circuit eliminates the output DC offset due to process variations. Finally, the Lorentz current generator is composed of a MEMS oscillator and a sustaining amplifier. The design specifications indicate the sensing range of the MAG at an ambient pressure of 760 torr. The development of a multi-axis complementary metal-oxide semiconductor (CMOS) MEMS resonant magnetic sensor using Lorentz and electromagnetic forces was presented in [4], which used in-plane coils to drive a suspended spring-mass structure and produced an input AC current. An ultra-sensitive Lorentz force MEMS MAG with a pico-tesla limit of detection was presented in [5], with an input bias current of 7.245 mA. An integrated fluxgate MAG for use in isolated current sensing was presented in [6] and used a Forster-type fluxgate with simplified excitation current. A z-axis MAG for MEMS inertial measurement units using an industrial process was presented in [7]; its overall sensitivity of 150 µV/µT at 250 µA of peak driving current was still small. A real-time 32.768 kHz clock oscillator using a 0.0154 mm² micromechanical resonator frequency-setting element was presented in [8] and reduced power consumption, operating with only 2.1 µW of DC power. A sub-150 microwatt back-end-of-line (BEOL)-embedded CMOS-MEMS oscillator with a 138 dBΩ ultra-low-noise trans-impedance amplifier (TIA) was presented in [9]; a phase-noise figure-of-merit of 190 dB was achieved at 1 kHz offset with a resonator Q of 1900. Phase-noise reduction in a CMOS-MEMS oscillator under nonlinear MEMS operation was presented in [10], which analyzed a phase-controlled closed loop and the frequency stability of the self-sustained oscillator. Process, Modeling, and Design of MEMS Sensor, and Structure of MEMS Magnetometer The proposed multi-function MEMS, including magnetometer (MAG) and accelerometer (ACC) functions, was fabricated with the standard UMC 0.18 µm 1-poly, 6-metal (1P6M) CMOS-MEMS process. The typical differential equation of mechanical equilibrium is given in [11]. The frequency response of displacement to an external actuation force is x(jω) = F_ext(jω)/(m_e(jω)² + b_e(jω) + k_e), and Hooke's law is given as F_ext = k_e·x if ω << ω_0, where m_e is the effective mass and x is its displacement, k_e is an effective spring constant, b_e is an effective damping coefficient, and F_ext is any external actuation force. When an actuation force operates at the natural frequency ω_0, the magnitude response of displacement becomes |x(jω_0)| = Q·|F_ext(jω_0)|/k_e. This amplification of the displacement magnitude response by Q times at the resonance frequency is one of the most typical characteristics of the resonator and is used to expand the sensitivity tuning range of the MAG in this paper. The model of the MEMS MAG includes the stator, rotor, anchors, fingers, springs, and proof mass, and is shown in Figure 1a. The length and number of folding springs [12] will influence the resonance frequency of the structure. In addition, the length of the beam is decreased in the structure to reduce the curvature.
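As a quick numerical illustration of the Q-times amplification described above, the short sketch below evaluates the second-order magnitude response using the effective mass, spring constant, and quality factor reported later in this paper for the proposed oscillator at 760 torr; the test force is an arbitrary placeholder.

```python
import numpy as np

# Minimal sketch of x(jw) = F_ext / (k_e - m_e*w^2 + j*b_e*w) and the Q-times
# amplification at resonance.  Mechanical values are those reported later for
# the proposed oscillator at 760 torr; the test force is arbitrary.
m_e = 6.61e-10                      # effective mass [kg]
k_e = 9.22                          # effective spring constant [N/m]
Q   = 187.5                         # quality factor
w0  = np.sqrt(k_e / m_e)            # natural angular frequency [rad/s]
b_e = np.sqrt(m_e * k_e) / Q        # damping coefficient [N*s/m]

def disp(F_ext, w):
    """Displacement magnitude for a force amplitude F_ext at angular frequency w."""
    return F_ext / abs(k_e - m_e * w**2 + 1j * b_e * w)

F = 1e-9                            # 1 nN test force
print(f"f0 = {w0 / (2 * np.pi):.0f} Hz")
print(f"gain at resonance = {disp(F, w0) / disp(F, 0.0):.1f}  (expected ~Q)")
```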
The sizes of the structure's different parts are shown in Table 1 and are marked in Figure 1b, and the resolution of the UMC 0.18 µm CMOS MEMS process is 0.001 µm. In comparison to the previous paper [2], the sizes of the structure are reduced to improve the curvature of the structure. Operational Principle of MEMS Magnetometer The magnetic field B(jω) with an applied current I(jω) flowing through a suspended conductor of length L will result in the Lorentz force F_mag(jω), which can be shown as F_mag(jω) = L·(I_l(jω) × B(jω)) [13]. The suspended conductor of length L is designed to be 642.2 µm to increase the length of the current flow and obtain greater sensitivity, and the width of the suspended conductor is 2.6 µm. This structure widens the beam area to reduce the cross-axis interference phenomenon and uses the time-sharing method to reduce mutual interference when the x, y, and z currents are applied. Models of sensing the in-plane and out-plane magnetic fields are shown in Figure 2a,b. Simulation Results of Finite Element Method for MEMS Sensor A 3-D model of the MAG is established by the CoventorWare tool, and the element unit is 0.5 × 0.5 × 0.5 (µm³).
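For orientation, the Lorentz force implied by the numbers above can be checked directly, assuming the field is perpendicular to the current path; the sketch uses the conductor length and driving current quoted above and the upper end of the magnetic-field sweep used in the simulations below.

```python
# Quick check of the Lorentz force magnitude implied by the numbers above,
# assuming the field is perpendicular to the current so that |F| = L * I * B.
L_beam = 642.2e-6    # suspended conductor length [m]
I_lor  = 2e-3        # driving (Lorentz) current [A]
B      = 6000e-6     # flux density [T], the upper end of the simulated sweep below
F_mag = L_beam * I_lor * B
print(f"F_mag = {F_mag:.2e} N")      # about 7.7 nN
```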
The resonance frequency f_O is obtained through modal analysis in MemMech, which gives the displacement and resonance frequency; the x-, y-, and z-axis resonance frequencies are 10.059, 17.555, and 18.776 kHz, the generalized masses are 9.51, 3.95, and 3.75 × 10⁻¹⁰ kg, and the structure at the resonance frequency will have a Q-times larger displacement than the original displacement and therefore greater sensitivity. From the movement displacement of the x-, y-, and z-axis modal analysis, we can confirm the design's current direction and the MAG's plane motion direction along the x-, y-, and z-axes, as shown in Figure 3. Since the applied magnetic force is converted to a pressure that the CoventorWare tool can simulate, the expected applied current is 2 mA [6,14]. Pressure is applied to the beam with a magnetic field of 0–6000 µT at an interval of 400 µT to simulate deformation for detecting the magnetic field in the in-plane and out-plane directions; the displacement is 0.0153/– µm and 0.0005/0.000167 µm and the sensitivity is approximately 2.55 × 10⁻⁷/– µm/µT and 8.33 × 10⁻⁸/2.783 × 10⁻⁸ µm/µT. The MAG moves in the x- or y-axis direction; conductor_0 (stator) and conductor_1 (rotor) stacked with METAL1-5 are shown in Figure 4a, the initial capacitance is C_0 = 1.103 × 50 × 0.8 = 44.12 fF, the number of fingers is 40, and the overlap part of the finger is 80% of the original. When detecting the x- or y-axis magnetic field, the MAG moves in the out-plane direction; with conductor_1 (rotor) stacked with METAL2-5 and conductor_0 with METAL1-3 and conductor_2 with METAL4-5 as the stator, the average initial capacitance value is C_0 = (1.001 + 1.229)/2 × 0.8 × N = 14.41 fF (Figure 4b), and N = 18, which corresponds to the x- and y-axes. The capacitance variance (∆C) is related to the displacement by ∆C = C_0 × ∆d/d.
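As a small check of the ∆C = C_0·∆d/d relation, the z-axis detection case reported in the next paragraph (1.815 fF of capacitance change at 6000 µT) can be combined with the in-plane C_0 above to see how large a relative gap change that detection corresponds to; the sketch below is purely illustrative.

```python
# Applying dC = C0 * (dd/d) to the z-axis detection case reported in the next
# paragraph (dC = 1.815 fF at 6000 uT) with the in-plane C0 = 44.12 fF above.
C0_fF = 44.12
dC_fF = 1.815
rel_change = dC_fF / C0_fF
print(f"dd/d = {rel_change:.3f}  (~{rel_change * 100:.1f} % of the gap)")
```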
When detecting a z-axis magnetic field of 6000 µT, the displacement and capacitance variance of the MAG along the x-axis at the frequency f_O are 256 nm and 1.815 fF, and in the y-axis direction they are 53.6 nm and 0.624 fF. When detecting an x-axis magnetic field of 6000 µT, the displacement and capacitance variance of the MAG along the z-axis at resonance are 30.8 nm and 0.058 fF. Then, we calculate the three-axis capacitance change values for 16 g acceleration at the resonance frequency: ∆C_x = 0.0205 fF, ∆C_y = 0.0205 fF, and ∆C_z = 0.296 fF. Because the Q value has an important influence on the structure, the general damping constant is divided into squeeze and slide damping [11]. The viscosity of air at 760 torr and 300 K is 1.86 × 10⁻⁵ kg/ms and is set as the default value in the simulator, according to [15,16]. Then, finite element method (FEM) simulation software can be used to simulate the damping coefficient of the structure's out-plane motion; the slide damping coefficient under out-plane motion at an ambient pressure of 760 torr is the dominant contribution and is about 6.874 × 10⁻⁷ N·s/m at the resonance frequency. The Q value can be estimated as Q = 2π·f_0·M/b, where f_0 is the resonance frequency, M is the mass, and b is the damping coefficient; the estimated Q value is about 57. When moving in the in-plane motion along the y-axis, the combined squeeze and slide coefficient for the in-plane motion at an ambient pressure of 760 torr is b = 4.433052. Finally, the equivalent Brownian noise for the ACC is 1.44615 µg/√Hz [17]. In addition, the CoventorWare tool is used to simulate the applied pressure and convert it into force (F) and displacement (x), and according to Hooke's law F = kx, the x- and z-axis spring constants are calculated. Figure 5 shows a fully differential electrostatic MEMS transducer resonator composed of driving electrodes, movable elements, and sensing electrodes. Assume that the cross-section area A is the overlap between rotor and stator, the gap d_0 is the spacing when the MEMS is in the neutral position, and ε is the permittivity of the material. If there is a displacement x between stator and rotor, the capacitance is given by C_drive = εA/(d_0 − x). Operational Principle of MEMS Resonator V_dc is the DC bias voltage applied to the movable element. If we let V_ac = 0 V, the electrostatic force that attracts the moving part to both the driving and sensing electrodes is given by F = (1/2)(∂C/∂x)V_dc². Because the structure is symmetric along the x-axis, the electrostatic forces of the sensing and driving parts will cancel each other out and the structure will be in equilibrium. Then, if we add a non-zero AC voltage to the driving electrode and assume that |V_dc| >> |V_ac|, the force due to the driving electrode will become F = (1/2)(∂C_drive/∂x)(V_dc + V_ac)² ≈ (1/2)(∂C_drive/∂x)V_dc² + (∂C_drive/∂x)V_dc·V_ac, and the first part will be cancelled by the electrostatic force caused by the sensing electrode. The total actuating force can be expressed as F_actuation = (∂C_drive/∂x)V_dc·V_ac, and the system will become a resonator and reach the maximum displacement dependent on the frequency and Q factor of the MEMS structure. Figure 5. Model of the general electrostatic MEMS transducer resonator.
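A short numerical sketch of the force linearization just described, using an idealized parallel-plate ∂C/∂x; the geometry values and voltages below are assumptions chosen only for illustration, not the fabricated dimensions.

```python
# Sketch of the force linearization: with |Vdc| >> |Vac| the net electrostatic
# force on the shuttle reduces to (dC/dx) * Vdc * Vac.  Parallel-plate geometry
# and all numeric values here are illustrative assumptions, not device data.
eps0 = 8.854e-12                 # permittivity of free space [F/m]
A    = 11.14e-6 * 100e-6         # assumed electrode overlap area [m^2]
d0   = 2.5e-6                    # assumed gap [m]
Vdc, Vac = 20.0, 0.1             # bias and AC drive amplitude [V]

dC_dx   = eps0 * A / d0**2                      # parallel-plate dC/dx near x = 0
F_drive = 0.5 * dC_dx * (Vdc + Vac)**2          # toward the driving electrode
F_sense = 0.5 * dC_dx * Vdc**2                  # toward the sensing electrode (no AC there)
F_exact  = F_drive - F_sense
F_linear = dC_dx * Vdc * Vac                    # linearized actuation force
print(f"exact = {F_exact:.3e} N, linearized = {F_linear:.3e} N")
```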
The output current i_o is shown in Figure 4 and consists of the motional current and the feedthrough current caused by C_f: i_o = V_dc(∂C_sense/∂x)(∂x/∂t) + C_f(∂V_ac/∂t); the former part is the motional current caused by the movement of the rotor and the last term is the undesired part originating from the feedthrough capacitance. We set the frequency of the driving voltage to the resonance frequency f_o to obtain the largest ∂C_drive/∂x term and maximize the actuating force. Additionally, we generally set C_f much smaller than C_sense so that the feedthrough current can be neglected. When the structure is operated at the resonance frequency, the motional current can be approximated as i_m ≈ 2π·f_0·V_dc·(∂C_sense/∂x)·x, where f_0 is the resonance frequency of the moving structure and x is the displacement. The value of ∂C_sense/∂x is given by ∂C_sense/∂x ≅ ε_0·N_gn·h_r·ζ/d_0 [19], where N_gn denotes the number of gaps in the comb fingers; ζ is equal to 1.1 and is the constant that models the capacitance due to fringing electric fields; ε_0 is the permittivity constant; h_r is the thickness of the structure, which is the total thickness stacked from METAL1 to METAL6; and d_0 is the gap spacing between rotor and stator. Thus, by substituting the parameters, the output current can be evaluated when the driving voltage V_ac is at the resonance frequency f_0. Structure of MEMS Resonator A fully differential MEMS oscillator structure was implemented, as shown in Figure 6a. The prototype of this MEMS oscillator is based on [20]. This structure consists of two driving ports, two sensing ports, and a movable shutter. The shutter is suspended above the substrate by four symmetrical springs with two folds, each connected to four anchors through clamp-clamp beams instead of directly connecting the springs to the anchors. The structure of the shutter and electrodes is stacked from METAL1-6 to increase the variance of capacitance. The DC bias V_dc is supplied to the shutter through the surrounding anchor. The geometric parameters of the structure are shown in Table 2. To minimize the feedthrough capacitance [21], the proposed MEMS oscillator was designed as shown in the different views in Figure 6a,b. The electrodes were changed from a comb to a fishbone design. The purpose of the design is to solve the problem of feedthrough current and increase the tuning range of the resonance frequency as the bias voltage is varied. The structure is symmetric along the oblique diagonal line. The fingers on each side are set with different gap spacings, as shown in Figure 6c. In this way, simulation results will show that the sensitivity and the spring softening effect increase. A zoomed-in view of the fingers is shown in Figure 6c. There are two different gap spacings between rotor and stator, 2.5 and 5 µm, which gives an unbalanced force from the beginning to create an offset along the x-axis. When the AC drive signal is applied, the spring on the offset side is softened and produces a large change in resonance frequency. The driving electrodes retain the comb design to maintain the linearity of the driving force. The sizes of the proposed MEMS structure are shown in Table 2.
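To get a feel for the magnitudes involved, the sketch below evaluates ∂C_sense/∂x and the motional current at resonance; N_gn, h_r, ζ, and V_dc follow the values quoted in the modeling section, while the gap d_0 and the vibration amplitude are assumptions used only for illustration.

```python
import math

# Rough magnitude estimate for dCsense/dx and the motional current at resonance.
# d0 and the vibration amplitude x are assumed values, chosen only to illustrate.
eps0, zeta = 8.854e-12, 1.1
N_gn, h_r, d0 = 28, 11.14e-6, 2.6e-6            # d0 assumed
dCs_dx = eps0 * N_gn * h_r * zeta / d0          # ~1.2 nF/m
Vdc, f0, x = 20.0, 18.8e3, 50e-9                # bias [V], resonance [Hz], amplitude [m] (assumed)
i_m = 2 * math.pi * f0 * Vdc * dCs_dx * x       # motional current at resonance
print(f"dCsense/dx = {dCs_dx:.3e} F/m, i_m = {i_m:.3e} A")
```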
Model of MEMS Resonator The MEMS oscillator can be modeled as an equivalent series resistor, inductor, and capacitor (RLC) circuit in parallel with a feedthrough capacitor, C_f [19,22], as shown in Figure 7a. To evaluate the effective R, L, and C values, the electromechanical transformer turn ratio η_e is defined as the actuating force F_act over the driving voltage V_ac, i.e., η_e = F_act/V_ac = V_dc(∂C_drive/∂x). With the effective transformer turn ratio, the equivalent RLC values can be obtained as R_m = b_e/η_e², L_m = m_e/η_e², and C_m = η_e²/k_e, where m_e, k_e, b_e, and Q are the equivalent mass, spring constant, damping coefficient, and quality factor, respectively. C_o is the capacitance between the moving part and the sensing electrode when the shutter is in equilibrium. R_m is the motional impedance, which indicates the power loss in the MEMS structure; with a large R_m, the following sustaining amplifier must have a high gain. As the feedthrough C_f gets larger and larger compared to C_m, the gain and phase plot shown in Figure 7b is critically influenced and cannot satisfy Barkhausen's criterion at the resonance frequency, and the oscillator may fail to oscillate. Modeling and FEM Simulation of Proposed MEMS Oscillator We conducted the modal analysis of the proposed MEMS structure in MemMech; the resonance frequencies were 16.41, 18.79, and 21.96 kHz, and the generalized masses were 2.14, 6.61, and 3.46 × 10⁻¹⁰ kg. Then, the initial capacitance between stator and rotor and the feedthrough capacitance between the driving and sensing ports were simulated in MemElectro, which can set the electrostatic boundary conditions to simulate the charge and capacitance between conductors. A summary of the capacitance array is shown in Table 3, and it can be seen that the feedthrough capacitance from the positive and negative driving ports to the sensing ports is nearly the same, which means C_feed1 is equal to C_feed2 and the feedthrough current can theoretically be cancelled. After finishing the FEM simulation, the resonance frequency was 18.796 kHz, the effective mass in the desired mode was 6.6108 × 10⁻¹⁰ kg, the squeeze damping coefficient was 1.9948 × 10⁻⁷ N·s/m, the slide damping coefficient was 2.1713 × 10⁻⁷ N·s/m, the total damping coefficient was 4.1661 × 10⁻⁷ N·s/m, the effective spring constant was 9.22 N/m, and the feedthrough capacitance was 1.156 fF. With the mechanical parameters, we can evaluate η_e, R_m, C_m, and L_m. As ζ, V_dc, h_r, and N_gn were set to 1.1, 20 V, 11.14 µm, and 28, respectively, the electromechanical transformer turn ratio η_e is 2.34 × 10⁻⁸ C/m. As a result, R_m, L_m, and C_m are 760 MΩ, 1.2 MH, and 59.2 aF, respectively. The RLC equivalent model and the important parameters under 760 torr are shown in Figure 8a; C_feed is 1.156 fF, C_1 is 21.555 fF, C_2 is 45.374 fF, R_m is 760 MΩ, L_m is 1.2 MH, and C_m is 0.0593 fF. In addition, the quality factor Q is an important specification of the oscillator.
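The mapping from the simulated mechanical parameters to the motional RLC values can be checked numerically; the short sketch below uses the values quoted above and reproduces the reported ~760 MΩ, ~1.2 MH, and ~59 aF.

```python
import math

# Mapping the simulated mechanical parameters quoted above onto the motional
# RLC values of the equivalent circuit (a numerical cross-check, not new data).
m_e = 6.6108e-10        # effective mass [kg]
k_e = 9.22              # effective spring constant [N/m]
b_e = 4.1661e-7         # total damping coefficient [N*s/m]
eta = 2.34e-8           # electromechanical transformer ratio [C/m]

R_m = b_e / eta**2      # -> ~7.6e8 Ohm (760 MOhm)
L_m = m_e / eta**2      # -> ~1.2e6 H  (1.2 MH)
C_m = eta**2 / k_e      # -> ~5.9e-17 F (59 aF)
f_0 = math.sqrt(k_e / m_e) / (2 * math.pi)      # ~18.8 kHz
print(f"R_m = {R_m:.3g} Ohm, L_m = {L_m:.3g} H, C_m = {C_m:.3g} F, f0 = {f_0:.0f} Hz")
```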
At an ambient pressure of 760 torr, the quality factor can be calculated as Q = √(m_e·k_e)/b_e, equal to 187.5. The proposed MEMS structure has a larger ∂C_sense/∂x than the MEMS structure without the fishbone (the original). Neglecting the fringing capacitance, the variant capacitance of the sensing electrodes can be expressed in terms of the displacement ∆x and the overlapping area A. As seen in Figure 8, as the displacement gets larger, the nonlinearity in the proposed MEMS structure increases. By operating the MEMS structure in the linear region, for the same area, the proposed MEMS has a larger capacitance sensitivity compared to the original. The variance of capacitance in the proposed structure is 1.25 times larger than the original. The magnification can be made even larger by lengthening the fishbone finger. The second advantage of the proposed structure is the wide tuning range of the resonance frequency due to a strong spring softening effect. Assuming that V_dc and V_ac are fixed, ∂C_drive/∂x will determine the magnitude of the electrostatic spring constant K_ess. The resonance frequency, which is given by f_0 = (1/2π)·√((k_e − K_ess)/m_e), will decrease as the electrostatic spring constant K_ess increases. The electrostatic spring constant will increase as the cross voltage between the stator and rotor increases.
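To connect the softening relation above with the numbers reported next, the sketch below evaluates f_0 for K_ess = 0 and converts the simulated tuning end points into ppm; the mass and spring constant are the values quoted earlier for the proposed oscillator.

```python
import math

# Spring-softening relation f0 = (1/(2*pi)) * sqrt((k_e - K_ess)/m_e) and the
# conversion of the simulated tuning end points into ppm (values quoted nearby).
m_e, k_e = 6.6108e-10, 9.22

def f_res(k_ess):
    return math.sqrt((k_e - k_ess) / m_e) / (2 * math.pi)

print(f"f0 with K_ess = 0: {f_res(0.0):.0f} Hz")          # ~18,796 Hz

f_low, f_high = 16_379.14, 18_796.0                       # simulated tuning end points [Hz]
tuning_ppm = (f_high - f_low) / f_high * 1e6
print(f"simulated tuning range ~ {tuning_ppm:,.0f} ppm")  # close to the reported 128,590.5 ppm
```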
Finally, for the proposed MEMS structure the spring constant is 9.22 N/m, the variant capacitance per driving voltage is 5 aF/V, the resonance frequency at V_dc = 0 V is 18,796 Hz, the tuning range of the resonance frequency is 16,379.14–18,796 Hz, and the maximum tuning range is 128,590.5 ppm. System Architecture The system architecture, shown in Figure 9, is divided into the MEMS readout circuit, the calibration circuit, and the Lorentz current generator (MEMS oscillator). Readout Circuit The readout architecture is shown in Figure 11 [2]. When the MEMS sensor is driven by an external acceleration or magnetic field, the rotor will move and cause a varied capacitance. The variation of capacitance is modulated with a 400 kHz pulse signal and converted into a voltage signal by the capacitance-to-voltage (C/V) circuit, and the nested chopper amplifier is used to reduce the residual offset [23]. A fully differential bridge capacitive sensing scheme is used [11]. After the first demodulation, the signal band is converted to about 25 kHz and the second-order biquad filter separates the demodulated and undesired signals. After the correlated double-sampling (CDS) circuit with the demodulation function, the amplitude of the signal is doubled [11]. A buffer is needed to drive the large capacitor [24], and two cascading RC filters are implemented for the filtering function. Then frequency-division multiplexing reads multiple signals at once and modulates each signal to a different frequency. Time-division multiplexing (TDM) reduces the mutual interference of the MEMS three-axis signals, and the power consumption can be reduced. Calibration Circuit The settling time of the calibration operation at node A in the two cascading RC filters is used to execute the calibration circuit. The calibration circuit eliminates the DC offset of the output resulting from process variation and includes the two cascading filters. Node A defines the settling time of the calibration operation, a continuous-time comparator determines the polarity of the offset, the control logic circuit controls the switches to reduce the DC offset, the interface between the calibration and readout circuits uses successive-approximation register (SAR) based logic to switch the unbalanced capacitance and reduce the DC deviation of the differential input, and a 5-bit resistor-to-resistor (R-2R) digital-to-analog converter (DAC) provides the analog output and connects to the control logic circuit as a unity-gain voltage follower. After the first (coarse) operation, the second (fine) operation of the calibration circuit will complete 10 calibration cycles [2,25].
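The SAR-based trimming described above can be illustrated with a purely conceptual model: a comparator reports the offset polarity and a 5-bit code switches trimming capacitance, MSB first. The offset value and the helper names below are hypothetical and only illustrate the search, not the actual control logic; the 20.83 aF step is the minimum capacitive resolution quoted in the Conclusions.

```python
# Conceptual sketch of SAR-style offset trimming: a comparator reports the
# residual offset polarity and a 5-bit code switches trimming capacitance.
# The offset model and step are hypothetical; only the 20.83 aF resolution
# figure is taken from the paper's Conclusions.
N_BITS     = 5
CAP_LSB_AF = 20.83                       # minimum capacitive resolution [aF]

def comparator(residual_af):
    """True while the residual differential offset is still positive."""
    return residual_af > 0

def sar_calibrate(initial_offset_af):
    code = 0
    for bit in reversed(range(N_BITS)):              # MSB-first successive approximation
        trial = code | (1 << bit)                    # tentatively set this bit
        if comparator(initial_offset_af - trial * CAP_LSB_AF):
            code = trial                             # keep the bit: offset not yet cancelled
    return code, initial_offset_af - code * CAP_LSB_AF

code, residual = sar_calibrate(initial_offset_af=350.0)
print(f"trim code = {code:05b}, residual offset ~ {residual:.1f} aF")
```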
Resonator Circuit A MEMS resonator cascaded with a multi-stage trans-impedance amplifier (TIA) and an output buffer to form a closed loop is shown in Figure 10a. To overcome the resistive loss (R_m) in the MEMS structure, a sustaining amplifier with high gain is required. The automatic gain control circuit is added to control the linearity of the output signal. To maintain the oscillation, the loop gain of the system must be higher than unity with zero phase shift to satisfy Barkhausen's criterion. The first stage of the TIA is implemented with an integrator-differentiator-based TIA topology and uses its advantage of high gain with sufficient bandwidth; the integrator and differentiator are shown in Figure 10b,c. The integrator first presents a phase shift of 90° and the differentiator compensates the 90° of phase shift back. This results in a total phase shift approximately close to zero with high gain. The overall gain of the TIA depends on R_1 and R_2, which represent the effective resistances of the MOS devices in the integrator and differentiator and are varied through V_control in Figure 10b; the gain of the TIA changes as R_2 is varied with the V_control of Figure 10c. Since the gain of the sustaining amplifier is still far below the order of giga-ohms, a capacitor feedback amplifier (Figure 11a) is used to amplify the signal. In addition, a current-mirror amplifier with a resistive load is used as the output stage (Figure 11b). With this, the output swing of the amplifier can be as wide as 0.2 to 1.6 V. The simulation results at a frequency of 18.8 kHz for the TT, SS, and FF corners are as follows: the sustaining amplifier gain is 197.72, 198.41, and 196.67 dB, the sustaining amplifier phase is −1.18, −1.19, and −1.1°, and the output DC level of the sustaining amplifier is 1.07, 1.13, and 1.01 V. Since there is no feedback loop in the output stage to lock the output DC level, the level will deviate. To solve the problem, a decoupling capacitor is used to block the DC part and pass the AC part of the signal.
An automatic gain control (AGC) circuit is used to control the output swing of the MEMS oscillator by setting a reference voltage. The AGC circuit consists of a peak detector and an integrator. The operating principle is illustrated in Figure 12a: the output voltage of the oscillator, V_out, is compared with the voltage stored on C_1 using an amplifier. If V_out is greater than V_C1, the output of the amplifier will be raised close to the supply rail (V_DD). The voltage on C_1 will then increase because of the charging current from the low-threshold-voltage n-type MOS (NMOS) device. Conversely, if V_out is smaller than V_C1, V_C1 will be pulled down to ground by the discharging NMOS. Thus, by balancing the charging and discharging speed well, V_C1 can follow the peak value of V_out. Similarly, in the integrator, we compare the voltage on C_1 with the reference voltage V_c. If V_C1 is greater than V_c, it means the amplitude of the oscillator is higher than the expected value, so the output of the integrator will be pulled down and V_control will rise. As shown in Figure 10c, the gain of the differentiator is changed as V_control is raised. Because the conduction resistance is reduced when the gate voltage of the NMOS increases, the gain of the TIA will decrease and lower the amplitude of the MEMS oscillator. Finally, the peak value of V_out will be close to the reference voltage V_c. Figure 12b,c shows the simulation results of the AGC circuit, with a reference voltage of 1.3 V. Figure 13a shows a layout view of the chip. The ACC and MAG are combined in one structure. The MEMS structure in the middle is the proposed MEMS oscillator. The testing circuits of the readout circuit and the sustaining amplifier are included in the chip. The dimensions of the chip are 2538.64 × 1849.37 µm². Figure 13b shows the scanning electron microscope (SEM) view of the proposed MEMS oscillator. Figure 14a shows the top view of the MEMS oscillator captured by white light interferometry (WLI), and the displacement along the z-axis between the two beams on the sides is only 0.1 µm. As shown in Figure 14b, the curvature at the anchor is 0.351 µm. The distance between stator and rotor, which determines the overlapping area of the electrodes, is only 1.008 µm. The measurement result shows that the fishbone finger maintains the flatness of the structure. The maximum displacement caused by curvature along the z-axis is 2.385 µm in the middle of the finger, 0.618 µm on the sides of the finger, 1.5 µm in the springs, and 1.775 µm in the beam.
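The AGC loop described at the beginning of this passage can also be illustrated with a purely behavioral model: a peak detector tracks the oscillation amplitude and an integrator steers V_control until the peak settles near the 1.3 V reference. The amplitude model and loop gains below are hypothetical and serve only to show the regulation behavior, not the transistor-level circuit.

```python
# Behavioral sketch of the AGC loop: a peak detector tracks the oscillation
# amplitude and an integrator raises V_control until the peak settles near the
# 1.3 V reference.  The amplitude model and gains are hypothetical.
V_REF = 1.3          # reference voltage [V]
K_INT = 0.05         # integrator gain per cycle (illustrative)
K_AMP = 0.4          # assumed sensitivity of amplitude to V_control (illustrative)

v_control = 0.0
amplitude = 0.0
for _ in range(200):
    amplitude = 2.0 * (1.0 - K_AMP * v_control)   # hypothetical oscillator amplitude
    v_c1 = amplitude                              # ideal peak detector output
    v_control += K_INT * (v_c1 - V_REF)           # raise V_control while peak > V_ref
print(f"settled amplitude ~ {amplitude:.3f} V, V_control ~ {v_control:.3f} V")
```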
Measurement and Discussion The measurement setup for the spring softening effect is shown in Figure 15a. The frequency responses of the MAG for simulation, measurement, and error percentage are 17.62 kHz, 17.99 kHz, and 2.06% for the in-plane resonance frequency; 18.87 kHz, 18.58 kHz, and 1.56% for the out-plane resonance frequency; 25.33, 395.8, and 93.6% for the in-plane Q value; and 155.57, 142.29, and 9.3% for the out-plane Q value. The frequency responses of the MEMS oscillator for simulation, measurement, and error percentage are 18.8 kHz, 19.5 kHz, and 3.49% for the in-plane resonance frequency and 183, 134.8, and 36% for the in-plane Q value. The measurement result of the proposed MEMS oscillator shows that, with the bias voltage changed from 0 to 27 V, the resonance frequency is reduced by only 2204 Hz. The frequency tuning ranges of the proposed oscillator for simulation and measurement are 16,379.14–18,796.1 Hz and 17,720–19,924 Hz, and the tuning ranges for simulation and measurement are 128,590.5 and 110,620.36 ppm. The measurement of the circuit is performed on a printed circuit board (PCB), and a low-dropout regulator (LDO) and a TIA are used to connect with the MEMS oscillator and amplify the small output current sensed from the electrodes. Then, the testing circuit of the readout circuit is measured. To model the capacitance variation at 100 Hz caused by acceleration and at 18,800 Hz caused by the Lorentz current, 100 and 18,800 Hz sinusoidal test signals with a magnitude of 4 mVpp are given.
Figure 16a,b shows the output waveforms and the output range corresponding to the varied input. The output range is from about 0.5 to 1.13 V, which is not an overestimation compared to the measurement. The contribution of noise is shown in Figure 16c. The noise floor of the testing readout circuit is equal to 11.967 µV/√Hz compared to 5.72 µV/√Hz in the simulation. Comparing the gain in DC and periodic AC analysis (PAC), the simulation, measurement, and error results are 36.1 dB, 41.76 dB, and 13.55%; the 3 dB bandwidths are 54 kHz, 53.34 kHz, and 1.23%; the output upper bounds are 1.2 V, 1.13 V, and 6.37%; the output lower bounds are 0.48 V, 0.5 V, and 3.68%; and the noise floor values are 5.72 µV/√Hz, 11.97 µV/√Hz, and 52.2%. For the TIA test circuit of the sustaining amplifier in the MEMS oscillator, an input current is needed to measure the gain of the TIA. Hence, an on-chip Gm-cell is implemented to provide a small input current. The simulated trans-conductance of the Gm-cell is about 15 µA/V. As a result, by inputting a voltage with a differential of 0.1 mV, an input current equal to 1.5 nA can be obtained. We gave differential inputs of a 1 kHz sinusoidal wave with a 0.1 mV difference as the testing signal. Figure 17a shows the measurement of the output signal under the testing input. Figure 17b shows the output voltage under different inputs. Because of the high gain, the noise at the input will be amplified and disturb the output. To calculate the gain of the TIA, the root mean square is used, and the gain can be evaluated as Gain_TIA = Vout_rms/Vin_rms. The gain of the tested TIA circuit for simulation, measurement, and error is 122.14 dB, 128.903 dB, and 5.24%.
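A small numerical cross-check of the Gm-cell test arrangement and the RMS-based gain definition above; reading the trans-conductance as 15 µA/V is an interpretation consistent with the quoted 0.1 mV input and 1.5 nA current, and the RMS readings in the example are placeholders.

```python
import math

# Cross-check of the Gm-cell test arrangement described above.  Reading the
# trans-conductance as 15 uA/V is an interpretation consistent with the quoted
# 0.1 mV differential input and 1.5 nA input current.
gm   = 15e-6           # Gm-cell trans-conductance [A/V]
v_in = 0.1e-3          # differential test input [V]
i_in = gm * v_in
print(f"input current = {i_in * 1e9:.2f} nA")   # ~1.5 nA

def gain_db(v_out_rms, v_in_rms):
    """RMS-based gain evaluation, Gain = Vout_rms / Vin_rms, in dB (as in the text)."""
    return 20 * math.log10(v_out_rms / v_in_rms)

# Example with placeholder RMS readings (illustrative only):
print(f"gain = {gain_db(1.0, 0.1e-3):.1f} dB")
```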
Comparisons of the MEMS magnetometer are shown in Table 4. The sensitivity and resolution for the previous paper [2] in the three axes were 7.1–10.7 µV/µT and 44.06–87.46 nT/√Hz. The sensitivity and resolution in this paper are 7.5–218.3 µV/µT and 3.032–96 nT/√Hz. References [4][5][6][7] proposed different structures and used processes different from the CMOS MEMS process. Comparisons of the MEMS oscillator (OSCI) are shown in Table 5, defining a figure of merit (FOM) [10]. The resonance frequency of the MEMS oscillator (OSCI) at V_dc = 0 V is 18,796 Hz, and the tuning range of the resonance frequency is 16,379.14–18,796 Hz. The maximum tuning range is 128,590.5 ppm, better than those of the other papers. Conclusions This study integrates the MAG and ACC in one multi-function MEMS structure, then uses a MEMS oscillator to enhance the sensitivity and resolution of the three-axis MAG. The sensitivities in the three axes improve from 7.1–10.7 µV/µT to 7.5–218.3 µV/µT and the resolutions from 44.06–87.46 nT/√Hz to 3.032–96 nT/√Hz. With the bias voltage of the MEMS oscillator changed from 0 to 27 V, the resonance frequency is reduced by only 2204 Hz, less than that of a ring oscillator. For the MEMS sensor, three solutions for the curvature are presented. The first is the proposed MEMS structure with the fishbone, and the curvature along the z-axis is improved from 7.5 to 2.38 µm and from 1.8 to 0.618 µm for the sensing electrodes in the middle and on the sides. The second solution is the readout circuit using noise-reduction technology, the frequency-division multiplexing method, and the time-division multiplexing method. The third solution is implementing a capacitance calibration circuit. The SAR-based calibrator can eliminate the offset between differential outputs with a minimum capacitive resolution of 20.83 aF, and the offset caused by the curvature can be greatly reduced from hundreds of millivolts down to below 10 mV after calibration. In addition, the multi-function MEMS with the oscillator can be placed at an ambient pressure of 10 torr, which can enhance the Q value by 10 times, increase the displacement, and reduce the damping coefficient to one-tenth.
The sensor can then have a larger vibration amplitude when a magnetic field is applied and can obtain a larger capacitance change, which also makes it possible to reduce the sensing size and thereby produce less residual stress.
11,522
2021-05-29T00:00:00.000
[ "Engineering", "Physics" ]
Modified NPV Model as a New Evaluation Approach of Investment Decision : This paper explores an appropriate strategy for evaluating enterprises' investment decisions. As a consequence of the scarcity of market resources and limited enterprise budgets, when expanding business, it is necessary for enterprises to decide which projects are worth investing in and their priority. The data is collected from an insurance enterprise, which faced an investment decision in 2015. The enterprise received applications from its Dalian, Qingdao, and Ningbo branches concerning the construction of their subordinate institutions. The projects are assessed with the widely applied Net Present Value (NPV) model, which serves as a golden rule in this field. The net cash flow is the direct yield for an enterprise, and it is calculated by discounting predictable future income up to 2020 and subtracting the initial investment in 2015, with the time value of money represented by a discount factor. According to the calculation, the NPV of establishing an institution in the Ningbo Branch is the highest, which signifies that it is cost-effective and should be prioritized. In this research, the risk of discount rate fluctuation is considered and incorporated into the original calculation structure, forming a modified version, Net Present Value at risk (NPV at risk). There are differences between the values produced by the two models, which can have an important impact on enterprises' investment decisions. Introduction By the end of 2021, the investment end of the insurance industry had continued to release policy dividends. The China Banking and Insurance Regulatory Commission has frequently issued statements of deregulation in certain fields, including the opening of investment projects for new facilities and products, the approval of high-profit and large-scale business, and the optimization of rating requirements. Therefore, the insurance industry is well grounded and well positioned to implement corporate expansion in 2022. Owing to the scarcity of social resources and limited budgets, trade-offs exist among different investment projects, for instance, in the selection of where to establish an institution. Mature enterprises own branches, and each branch will apply to the head office for capital resources as a consequence of market opportunities, project requirements, or foreseeable benefits. The enterprise is then required to decide on the investee according to the reports of the branches and an evaluation of policy, market, and demand. Enterprise expansion is a common phenomenon over the world, and the model that determines an enterprise's decision on expansion investment is significant, as it will directly influence future earnings. This paper investigates the evaluation model for enterprises to make investment decisions. Limited resources force enterprises to take the priority of branch expansion seriously. Among evaluation models, the NPV model has been the most widely applied recently. In Application of NPV Method in Venture Capital Project Evaluation, Gao [1] expounds how NPV operates in evaluating venture capital projects, including the determination of final value, discount rate, present value, and equity ratio with consideration of the stage of capital investment. It finally verified that the NPV method is suitable for venture capital evaluation, although it needs to be modified with after-tax profit and the P/E ratio, among other factors.
Besides, Bai [2] has suggested that NPV can incorporate coefficients of critical success factors, which directly contribute to the condition of the diversification project. In his paper entitled Application of NPV Method in Diversification Project, the time of delivery is a success factor for the transportation industry. After multiplying and prescribing, the results obtained are more accurate but differ from those calculated by the traditional NPV model. Other models that can evaluate investment decisions are less complete than NPV; whenever there is a contradiction, NPV has always been the primary principle. According to Lu [3], in the Comparison between NPV and IRR Method, although NPV is the golden rule, IRR has several advantages. It is calculated without using the discount rate as a benchmark, which weakens the effects of inflation or opportunity cost, and it reflects the efficiency of fund utilization. However, all of these shortcomings can be compensated for through optimization, such as combining the benefit indicator NPV with an efficiency indicator, the Net Present Value Index. Objectively speaking, limitations of NPV are unavoidable. NPV can hardly reflect real circumstances because of market fluctuations. Tang [4] once illustrated, in Evaluation of the Option in NPV Rule During Investment, that in reality there are few static nodes where investors can either execute or abandon. Therefore, dynamic analysis and probability status are required to be investigated in the NPV model. Understanding the variation of the discount rate and the significance of the option is recognized as the source of value. Moreover, in Objections to The Current NPV Method, Yuan [5] argued that although the initial investment in the current NPV model deducts funding costs, the discount rate is still subject to funding rates. Without taking the financing cost into account, every calculation will repeatedly offset it, which increases the cost of capital and over-adjusts the amount of cash outflow. Therefore, enterprises should pay attention to the treatment of financing costs, ensuring that the deviation of NPV stays within an acceptable range. As a consequence, scholars have carried out research on improving it. According to Limitations of NPV Method and Its Improvement, Wang [6] affirmed at the beginning the value of the NPV model for the financial feasibility evaluation of investment projects. His paper is divided into four sections, including the basic principle of the NPV method, its limitations, and improved ideas combined with the value of options. In terms of the modification, Wang recommended calculating the discount rate by utilizing the Capital Asset Pricing Model (CAPM) and applying its lowest rate instead of an interval. Meanwhile, he also emphasized the timing of investing and advocated dealing with the changeable market environment with a new form, NPV+ROV. Similarly, Shu and Yuan [7] also proposed measures for optimization in Defects of NPV Method in Investment and Its Improvement. The authors first acknowledged the NPV model and then criticized its authority with regard to the excessive deduction of financing cost, the double counting of interest cost, and the difficulty of reflecting reality with a weighted average. It is advised to adjust the discount rate based on perceived risk and to strengthen the management of options with the Black-Scholes model. Additionally, Guo [8] dissected and compared the derivative products of the NPV model in Discuss the Evolution of NPV Rule: APV and EVA.
While NPV has its applicability, Guo indicated the necessity of sensitivity analysis, which covers the predictability of cash flow and the degree of influence of the discount rate. Adjusted Present Value (APV) considers the variation of the cost of capital and is calculated separately based on different sources of cash flow, including equity and debt. The basic expression of Economic Value Added (EVA) is residual income, which is directly measured by deducting the capital cost from the after-tax net operating profit. Weakening the discounting process simplifies decision-making, and EVA has become increasingly popular as it focuses on shareholder value creation. In particular, NPV lacks coverage of risk. In Firm Projects and Social Behavior of Investors, Hudakova [9] claimed that the NPV of similar projects within an industry is usually normally distributed. Adding statistical knowledge, she designed a risk parameter which equals the difference between the industry-average NPV and the NPV obtained by a particular enterprise. Projects are thus analyzed against the whole industry and the impact of market risk is considered. In order to improve the model so that it better reflects reality, adding risk segments is beneficial for enterprises to consider all possible events. As far as Zhang [10] is concerned, NPV at risk is expressed with a given confidence level, as a normal distribution is adopted to interpret the confidence level of the NPV value. In NPV at Risk in Economic Evaluation of Multi-investor Construction Projects, owing to the long period, large investment base, and high risk of construction projects, the NPV-at-risk approach is more appropriate, combining the weighted average cost of capital model and the Monte Carlo method. In this paper, the risk factor is designed as the difference between the predicted annual discount rates and their average, referring to the concept of variance in statistics, and it is added to the denominator of each discounted return term. By analyzing the institution construction project of Changan, it is discovered that there are differences between the values calculated by NPV and by its improved version with the risk factor. The traditional NPV model discounts expected future cash inflows to the present at the average rate of 11.7% and takes the difference with the costs of the relevant projects in Qingdao, Ningbo, and Dalian. Based on the NPV rule, if the budget can be satisfied, the branch with the larger NPV should be invested in preferentially, which is Ningbo, meaning that the discounted net cash value of establishing institutions in the Ningbo Branch is larger, with a simultaneous consideration of construction payment and possible revenue. However, the NPV model with the risk parameter improves the handling of the discount rate. Instead of calculating a single weighted average of 11.7% over all expected discount rates, the volatility of the discount rate in the following years is introduced. On account of the large differences among the original NPV values and the small differences among those discount rates, the final expansion decisions are the same under both models. Nevertheless, the results may differ with other data. In reality, owing to the varying market environment, plenty of unavoidable risks are hidden in the implementation process of every project, and the future inflow of cash cannot be guaranteed. Hence, the structure of the NPV model with a risk parameter has more reference value.
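To make the two evaluation approaches just outlined concrete before turning to the data, the sketch below implements both formulas; the initial cost, cash flows, and annual discount rates are illustrative placeholders, not the figures from Table 1 of the study.

```python
# Minimal sketch of the two evaluation formulas used in this paper.  The initial
# cost, cash flows, and annual discount rates below are illustrative placeholders.
def npv(c0, cash_flows, r_avg):
    """Traditional NPV: discount every year's inflow at the single average rate."""
    return -c0 + sum(cf / (1 + r_avg) ** t for t, cf in enumerate(cash_flows, start=1))

def npv_at_risk(c0, cash_flows, rates):
    """NPV at risk: add each year's risk premium (that year's rate minus the
    average) to the discount rate in that year's denominator."""
    r_avg = sum(rates) / len(rates)
    return -c0 + sum(cf / (1 + r_avg + (r - r_avg)) ** t
                     for t, (cf, r) in enumerate(zip(cash_flows, rates), start=1))

c0    = 2_440_000                                              # preparation cost [yuan], illustrative
cfs   = [600_000, 800_000, 1_000_000, 1_200_000, 1_400_000]    # forecast inflows 2016-2020, illustrative
rates = [0.105, 0.110, 0.115, 0.120, 0.125]                    # assumed annual discount rates
print(f"NPV         = {npv(c0, cfs, sum(rates) / len(rates)):,.0f}")
print(f"NPV at risk = {npv_at_risk(c0, cfs, rates):,.0f}")
```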
Data The data utilized in this research is obtained from the insurance market through a city branch of Changan Liability Insurance Co., Ltd. Changan Liability Insurance Co., Ltd. was established on November 7, 2007, covering liability insurance, property loss insurance, credit insurance, guarantee insurance, and so on. In terms of its scale, Changan Liability Insurance Co., Ltd. has established provincial-level and city-level institutions and approximately 300 branches nationwide, providing more than 14 trillion Chinese yuan of social risk protection and serving nearly 20 million customers. This research concentrates on its decisions on enterprise expansion: under the premise of limited capital, it is required to consider the priority of branch selection when investing in institution construction. This paper investigates a project which was expected to launch in three city branches of Changan Liability Insurance Co., Ltd., namely Qingdao, Ningbo, and Dalian. Therefore, it is necessary to undergo feasibility analysis, for instance, forecasting the net cash flow, which is a direct yield for the firm, according to market and historical information. Among these, the initial cash investment is called the preparation cost in the insurance industry, including the rental cost of institutional construction, labor training cost, water and electricity costs, etc. Future estimated annual cash flow is divided into cash inflow and outflow. Premium is inflow, while outflow generally contains sales expenses, administrative expenses, customer service expenses, and legal taxes and fees. Concrete data is listed in Table 1 and exhibited in Fig. 1. The preparation of the investment involves, firstly, estimating the growth rate of cash flows on the basis of the historical trend. Secondly, investors are supposed to take the current year's cash flow as the base and combine it with the estimated growth rate to forecast cash flows over the next few years. Moreover, the discount rate in the insurance industry is calculated for the reserve, which is defined as the net cash flows after deducting premiums payable for future liabilities, and which represents an insurer's biggest expense. On the basis of the "Benchmark Yield Curve for Measurement of Insurance Contract Reserves" compiled by China Central Government Bond Registration and Clearing Co., Ltd., Changan drew on the higher annual discount rates of previous years (lower estimated cash flow for a conservative calculation), which was 10.5%, and adopted it in 2015. The discount rates used in the calculations for the following years were set with a view to predictable insurance market fluctuations. Model This paper investigates an enterprise expansion project in which it is necessary to consider the priority of branch selection owing to limited resources. The different branch applications are evaluated with the assistance of the NPV model and its modified version, NPV at risk, as follows: NPV The establishment of institutions is a significant decision with far-reaching influence for every enterprise. The consumption level, market demand, and operating capacity of different locations are different, which can directly affect the profitability and continuity of the enterprise in the future. Due to the restricted initial investment, enterprises cannot set up subsidiaries in all ideal regions, that is, expansion is limited, so it is necessary for enterprises to select a location for establishing an institution.
NPV has come to be regarded as a golden rule because it focuses on net cash flow, the direct earnings of the business, and also takes into account the time value of money through a discount factor. In general, investors prefer projects with a larger positive NPV in order to maximize profitability. The model can be written as NPV = -C_0 + Σ_{t=1}^{n} CF_t / (1 + r̄)^t, where C_0 is the initial cash investment, CF_t is the cash inflow in year t, and r̄ is the average of the predicted discount rates. NPV at risk However, with the diversification of the market, the NPV model lacks consideration of the predictable risks of establishing and later operating an institution, and there is great uncertainty about whether the target institution can achieve its expected cash flows. In this research a risk factor is incorporated into the NPV calculation, and the NPV-at-risk method is thus proposed. In particular, the paper represents the risk factor by a risk premium, obtained by computing the weighted average of the discount rates and taking the difference between that average and the current year's discount rate. The NPV at risk can then be written as NPV_risk = -C_0 + Σ_{t=1}^{n} CF_t / (1 + r_t + ε_t)^t, where r_t is the market discount rate in year t and ε_t is the risk premium for that year. Result Based on the data collected, the calculation results of the NPV and NPV-at-risk models are reported separately in the following tables. As Table 2 shows, the initial screening condition is that the NPV value must be greater than zero; in the present case, opening subsidiary institutions in Qingdao, Ningbo and Dalian can all generate positive value. When the budget can be satisfied, the branch with the larger NPV should therefore be selected for investment first, signifying that its discounted net value is larger, and investing in the Ningbo branch is the more efficient choice for enterprise development. However, it is also necessary to consider the initial investment, that is, the preparation cost of constructing the corresponding institution at each branch. Under the NPV model, investment in the Ningbo branch appears more profitable at present, but its required input is relatively high, nearly 2.44 million yuan. The enterprise should therefore determine whether its existing funds can support the project investment. As Table 3 shows, investing in the Ningbo branch is also the more efficient choice for enterprise expansion under the NPV-at-risk model, so the final outcome is similar to the one reached by the NPV model. In this circumstance, owing to the limited variation of the annual discount rates and the large cash-flow base, the different calculation processes of the two models do not influence the project selection, although the computed values differ. It is still essential to use an appropriate approach, as the eventual decision may change with a different model or different data. As the market environment is continuously changing, combining a risk factor with the present NPV model is beneficial, since it offers a more rigorous reference for enterprises. Conclusion In conclusion, this paper investigates approaches to evaluating enterprise investment decisions and analyzes the traditional NPV model together with the modified version proposed here, which incorporates a risk factor. The discussion is built around the case of an enterprise deciding the priority for constructing institutions at its city-level insurance branches, namely Ningbo, Qingdao and Dalian.
According to the NPV calculation, investing in the Ningbo branch brings the more substantial returns. In view of market fluctuations and the potential error introduced by using a single average discount rate, this paper redesigns the NPV formula and introduces the risk factor into the denominator, forming NPV at risk. The values obtained from the two models differ; however, owing to the large differences among the NPV values of the three branches, the eventual decision is not affected in this case, although the distinction may still matter in other situations. The direction of investment is of great significance. When facing expansion, an enterprise should evaluate the benefits and expenses of each investment option according to the model. Besides the expected future cash inflows and the preparation outflow, many other elements influence the investment decision, for instance the fluctuation of the discount rate discussed in this research and the risk attached to receiving future income. This is a financial issue to which all business managers should pay attention. The selection of projects directly affects an enterprise's condition and social standing, including the demand for its products and services, how well it keeps pace with a booming market, its profits and the degree to which its targets are achieved. However, this paper covers only the risk of encountering different discount rates, departing from the traditional structure in which a single weighted-average rate is applied throughout the NPV calculation. In future research, scholars are encouraged to take other influential factors into account and bring them into the NPV framework gradually, so as to optimize and complete the NPV model and better reflect the actual estimated net value.
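As a compact illustration of the two discounting rules compared in this paper, the sketch below evaluates a hypothetical branch under both the traditional NPV rule and the NPV-at-risk rule. It follows the reading given in the Model section, in which year t is discounted at that year's rate plus a risk premium equal to the deviation of that rate from the average; this reading, and all numerical values, are assumptions of the sketch rather than the paper's worked figures.

```python
# Minimal sketch of the two valuation rules compared in this paper. The cash flows,
# preparation cost and discount rates are illustrative placeholders, not the data
# of Tables 1-3, and the risk premium is taken here as |r_t - r_bar| (one plausible
# reading of the Model section).

def npv_traditional(c0, cash_flows, r_bar):
    """Traditional rule: NPV = -C0 + sum_t CF_t / (1 + r_bar)^t."""
    return -c0 + sum(cf / (1.0 + r_bar) ** t for t, cf in enumerate(cash_flows, start=1))

def npv_at_risk(c0, cash_flows, rates):
    """NPV at risk: discount year t at the year-specific rate plus the deviation of
    that rate from the average of the predicted rates (the risk premium)."""
    r_bar = sum(rates) / len(rates)
    total = -c0
    for t, (cf, r_t) in enumerate(zip(cash_flows, rates), start=1):
        eps_t = abs(r_t - r_bar)            # risk premium for year t
        total += cf / (1.0 + r_t + eps_t) ** t
    return total

# Hypothetical branch: 2.0 million yuan preparation cost, five years of cash inflows,
# and discount rates drifting around an average of roughly 11.7%.
c0, flows = 2.0, [0.8, 0.9, 1.0, 1.1, 1.2]
rates = [0.105, 0.112, 0.117, 0.122, 0.128]
print(f"traditional NPV: {npv_traditional(c0, flows, sum(rates) / len(rates)):.3f}")
print(f"NPV at risk:     {npv_at_risk(c0, flows, rates):.3f}")
```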
4,025
2022-01-01T00:00:00.000
[ "Business", "Economics" ]
Progress to a Gallium-Arsenide Deep-Center Laser Although photoluminescence from gallium-arsenide (GaAs) deep-centers was first observed in the 1960s, semiconductor lasers have always utilized conduction-to-valence-band transitions. Here we review recent materials studies leading to the first GaAs deep-center laser. First, we summarize well-known properties: nature of deep-center complexes, Franck-Condon effect, photoluminescence. Second, we describe our recent work: insensitivity of photoluminescence with heating, striking differences between electroluminescence and photoluminescence, correlation between transitions to deep-states and absence of bandgap-emission. Room-temperature stimulated-emission from GaAs deep-centers was observed at low electrical injection, and could be tuned from the bandgap to half-the-bandgap (900–1,600 nm) by changing the electrical injection. The first GaAs deep-center laser was demonstrated with electrical injection, and exhibited a threshold of less than 27 mA/cm2 in continuous-wave mode at room temperature at the important 1.54 μm fiber-optic wavelength. This small injection for laser action was explained by fast depopulation of the lower state of the optical transition (fast capture of free holes onto deep-centers), which maintains the population inversion. The evidence for laser action included: superlinear L-I curve, quasi-Fermi level separations satisfying Bernard-Duraffourg’s criterion, optical gains larger than known significant losses, clamping of the optical-emission from lossy modes unable to reach laser action, pinning of the population distribution during laser action. Introduction The ongoing quest for "thresholdless" [1] semiconductor lasers has led to the development of new materials (e.g., quantum wells [2], wires, and dots [3,4]) and new optical resonators (e.g., microdisks [4] and photonic bandgap crystals [1]).In a novel effort towards "thresholdless" lasers, we recently demonstrated [5] that native deep-acceptor complexes in gallium-arsenide (GaAs) exhibited laser action at very low current densities.Moreover, in contrast to conventional semiconductor devices, whose operating wavelengths are determined by the bandgap energy, we showed [6,7] that the room-temperature stimulated-emission from GaAs deep-centers can be tuned very widely from the bandgap (∼900 nm) to half-the-bandgap (1600 nm).Here we review both historical work and our progress towards this first GaAs deep-center laser. 
First, in Section 3, we summarize some well-known properties of deep-centers in highly n-doped GaAs: the nature of the deep-acceptor complexes, the Franck-Condon effect, the observed photoluminescence.Second, we describe our recent work on GaAs deep-centers: the total radiative output in photoluminescence, the insensitivity of the photoluminescence with respect to a 90 • C rise above room temperature, the dependence of the photoluminescence (PL) and electroluminescence (EL) on the pump power, a correlation between transitions to deep-states and the absence of bandgap emission, the fast capture of free holes onto deep-centers.An important aspect of our work was the observation of a significant difference between photoluminescence and electroluminescence.In our work, the PL could not be saturated, and the PL spectra retain the same shape for all optical pump powers.In stark contrast, the EL is found to saturate at long wavelengths, and to show a strong spectral blue-shift with increasing injection.These observations were explained by a small hole diffusion length and fast capture of free holes onto deep-centers.This fast capture of free holes onto deep-centers is consistent with the absence from all our deep-center samples of bandedge emission in photoluminescence. Next, in Section 5, we report our work on room-temperature stimulated-emission from GaAs deep-centers at low electrical injection.The evidence for stimulated-emission includes: a superlinear L-I curve, a quasi-Fermi level separation large enough to satisfy the Bernard-Duraffourg criterion, and an optical gain large enough to overcome significant loss.We found that the room-temperature stimulated-emission from GaAs deep-centers can be tuned very widely from the bandgap (about 900 nm) to half-the-bandgap (1600 nm) by changing the electrical injection. Section 6 presents our work on the GaAs deep-center laser.The first GaAs deep-center laser was demonstrated with electrical injection, and exhibited a threshold of less than 27 mA/cm 2 in continuous-wave mode at room temperature at the important 1.54 µm fiber-optic wavelength.This small injection which achieves laser action can be explained by a fast depopulation of the lower state of the optical transition (i.e., fast capture of free holes onto deep-centers).The latter helps to maintain the population inversion.The evidence for laser action included: a superlinear L-I curve, an optical gain large enough to overcome significant loss, a clamping of the optical emission from lossy modes that do not show laser action, and a pinning of the population distribution during laser action. 
Methods In our work, all samples were grown by molecular beam epitaxy on semi-insulating GaAs substrates.All samples were representative of several dozen growths.We chose to write about these specific samples because we had taken more comprehensive data from these samples.Sample A consists of 2,488 Å of the GaAs deep-centers, above which was a 1,418 Å GaAs layer p-doped at 3.2 × 10 19 cm −3 .Sample B consists of 2,974 Å of the GaAs deep-centers above which was no p-layer.Sample C consists of 2,271 Å of the GaAs deep-centers, above which was a 1,294 Å GaAs layer p-doped at 2.5 × 10 19 cm −3 .The GaAs deep-center layers in samples A, B, C were grown at 570 • C under As-rich conditions and high Si-dopant flux (4.5 × 10 19 cm −3 ).Sample D was a control sample of 21 periods of high-quality 120 Å InGaAs/120 Å InAlAs MQWs lattice-matched to indium phosphide (InP).Sample E was a control sample of 2,325 Å of bulk InGaAs lattice-matched to InP.Transmission, photoluminescence (PL), and Hall measurements were performed on samples A, B and C. Electroluminescence (EL) and current-voltage were measured on samples A and C. PL was performed on samples D and E. Sample F consisted of 3000 Å of undoped GaAs buffer, 1,149 Å of AlAs etch stop, 2,271 Å of GaAs deep-centers, 399 Å of Al 0.45 Ga 0.55 As, above which was a 1,294 Å GaAs layer p-doped at 3.2 × 10 19 cm −3 .Sample G consisted of 3,000 Å of undoped GaAs buffer, 42 periods of a distributed Bragg reflector (DBR, 1,148 Å of GaAs and 1,340 Å of AlAs), 2,486 Å of GaAs deep-centers, 399 Å of Al 0.45 Ga 0.55 As, above which was a 3143 Å GaAs layer p-doped at 3.2 × 10 19 cm −3 .The growth which yielded Samples H-N consisted of 3,000 Å of undoped GaAs buffer, 35 periods of a DBR (1104 Å of GaAs and 1,266 Å of Al 0.86 Ga 0.14 As), 2,108 Å of GaAs deep-centers, 399 Å of Al 0.45 Ga 0.55 As, above which was a 1,937 Å GaAs layer p-doped at 3.2 × 10 19 cm −3 .The Si donor concentration in the deep-center layer was always 4.5-4.8× 10 19 cm −3 .We estimate that the concentration of Si Ga -V Ga in the deep-center layer was about ∼1.5-2 × 10 19 cm −3 .Further details were reported previously [7].Devices were fabricated using standard photolithography, wet etches, and Ti-Au contacts.Pixels are shown in Figure 1i.Individual devices were isolated from each other by etching 128 µm × L mesas in the n-type deep-center layer.Current was injected through 104 µm × L mesas in the p-type GaAs layer.In Samples F and G, L was 75 µm and 150 µm, respectively.The pixels in Samples F and G were fabricated with wet etches (phosphoric acid).The wet etches left the mesa edges with a random roughness.The isolation etch of Sample G extended 3.5 periods into the DBR.The pixels in Samples H-N were fabricated via reactive ion etching (RIE) with an inductively coupled plasma.The isolation etch of Samples H-N extended 4.5 periods into the DBR.The Ti-Au contacts were 20 µm wide.The electrical injection utilized current pulses which were 25-400 µs wide at 50% duty cycle.The optical emission was measured through a SPEX 1681B spectrometer, and collected by either an InGaAs photodetector or a photomultiplier tube. 
All samples had similar layer structures, and were operated at similar current densities.The main difference between the samples F-N was the presence or absence of a resonant cavity.Sample F was not placed within a resonant cavity or waveguide.A DBR was placed underneath the active layer in Samples G-N to increase the optical path for resonant normal wave vectors K Z .In Samples F and G, Figures 1c-d respectively show that wet-etched rough facets preclude the optical-feedback characteristic of resonant cavities.In the six Samples H-N, Figure 1e shows that RIE facets made possible a resonant cavity and optical-feedback. Photoluminescence and Electroluminescence Studies We recently [7] developed a new growth technique which uses a large n-type doping to thermodynamically favor the formation of large concentrations of compensating deep-acceptors.The deep-acceptors effectively compensate the material only if their associated energy-levels lie below the midgap.(Deep-levels which lie above the midgap are usually donor levels.)Our growth conditions thus favor the formation of deep-levels below midgap, and not above midgap.This allows the formation of a high-quality pseudo-bandgap between the conduction-band and midgap (Figure 2a).The relative absence of states within this pseudo-bandgap makes the radiative efficiency large.Thus, the new material has energies as in Figure 2a, rather than Figure 2b.In Figure 2b-c, E C , E V , E d , and E U are, respectively, the conduction-and valence-band edges, the deep-levels, and an upper-state resonant with the conduction-band.Figure 2c shows the radiative transition between the state E U near the conduction-band and a deep-state E d1 , as well as the fast capture [7] of free holes onto deep-centers.The literature [8,9,10] says that the upper state E U corresponds to a state centered on the donor in a donor-V Ga complex, whereas the lower state E d corresponds to a state centered on the V Ga in the complex. Total radiative output The novel material shows a total radiative output which is at least comparable to high-quality InGaAs quantum-wells lattice-matched to InP. Figure 3a shows [7] room temperature PL from GaAs deep-centers and from InGaAs.The excitation in Figure 3a,b was a 10 mW HeNe laser.All data was normalized to a sample thickness of 0.25 µm.Curve a in Figure 3a shows PL from the as-grown sample A of 2,488 Å of GaAs deep-centers.Below, we estimate a deep-center internal radiative efficiency of about 90%.Curve b in Figure 3a shows PL from sample D, the 21 periods of high-quality 120 Å InGaAs/120 Å InAlAs MQWs lattice-matched to indium phosphide (InP).Curve c in Figure 3a shows PL from sample E, the 2,325 Å of bulk InGaAs lattice-matched to InP.Significantly, sample A showed a total PL (integrated over wavelength) greater than from both high-quality InGaAs MQWs lattice-matched to InP (curve b in Figure 3a) and bulk InGaAs (curve c in Figure 3a). In these photoluminescence (PL) studies, the epilayer thickness was always less than the characteristic absorption length of the excitation laser.Figure 5a indicates this by showing that the excitation creates minority holes throughout the entire deep-center layer. Two measurements of radiative efficiency The internal radiative efficiency was assessed in two ways [7].First, the measured radiative-efficiency in the novel material was checked with a method reported by H. C. 
Casey [24,25,26,27].Our measured PL were compared with our brightest samples of p-type GaAs (various thicknesses and concentrations of beryllium (Be) doped layers).Casey and Panish [25] and numerous others [24,26,27] have shown that the internal radiative efficiency of p-GaAs varies between 5% and 95%, and is a well-known function of the p-type doping.Thus, p-type GaAs has a radiative efficiency which is well calibrated and documented in the literature.We found that our brightest p-type GaAs calibration samples have internal radiative efficiencies which are in good agreement with the literature [24,25,26,27], and are thus ideal control samples.Second, we calibrated all elements of our optical setup.We directly measured the PL which is captured by a F/1.5 lens, and focussed onto a calibrated photodetector with a F/4 lens.We assumed that the externally measured PL consists [28,29] of only that portion of the internal PL radiation which is incident upon the sample surface at less than the critical angle.This gives a second estimate of the internal radiative energy.Both methods showed that sample A, the as-grown GaAs deep-centers (curve a in Figure 3a), had an internal radiative efficiency of slightly more than 90%.This internal efficiency describes radiation into all optical modes in all directions at all wavelengths and in both polarizations.(It is not the definition of internal efficiency in lasers, which describes radiation into a single optical mode at one single wavelength and one polarization.) Evidence of a high-quality energy-gap having few nonradiative traps within the original bandgap Figure 3b shows [7] room-temperature PL, obtained with a HeNe laser, from GaAs deep-centers (sample B) over a wider wavelength range.Significantly, no PL is observed at the bandedge (0.85 µm) (Figures 3a,b, 5b, 7b) from any of the deep-center samples.For deep-centers in nominally n-type GaAs, this indicates an absence of free holes.The observed PL spectra are broad, and extend from 1.0 µm to 1.9 µm.This broad PL spectra results from transitions from states near the conduction-band (E U in Figure 2b) to the many deep-acceptors (Si Ga -V Ga , V Ga , Si As , Si Ga -V Ga -Si Ga , and their ionization states) whose energies extend from the midgap down to the valence-band.Moreover, no PL is observed at wavelengths longer than 1.9 µm.This absence of long-wavelength transitions is consistent with an absence of deep-acceptor states (which would act as either radiative and nonradiative traps) between the conduction-band and midgap.This absence of states is consistent with the observed large radiative efficiency.Our observations indicate that holes created by the excitation laser relax quickly from the valence-band to deep-states near midgap. Absence of saturation of the photoluminescence Figure 4a shows room-temperature PL [7] from sample B, the 2,974 Å of GaAs deep-centers, as a function of the excitation peak power (from a few mW to 2 W).The excitation was a 816 nm GaAs laser having a 700 µm × 200 µm spot size.Significantly, the shape of the PL spectra in Figure 4a remains unchanged for all excitation intensities.Two peaks (at 1.31 µm and 1.45 µm) are always observed, and the relative heights of the two peaks remain unchanged for all excitation intensities.Figure 4b shows that the PL peak at 1.31 µm increases linearly with excitation intensity, even up to 2 W. 
Thus, even with a 2 W optical excitation (equivalent to 1.1 kW/cm 2 ), we were unable to saturate the deep-level transitions.Thus, with increasing optical excitation, the PL spectra retain the same shape, and the PL increases linearly.In sharp contrast to the PL, the EL spectral shape changes significantly with injection, as shown [7] in Figure 5b (and Figure 8a).Figure 5b shows the PL spectrum (dashed-dotted line), on top of which is superimposed several EL spectra (solid and dashed lines), for sample C. The EL spectra have been normalized so that the EL peaks lie on top of the PL spectrum.Figure 5b shows that the EL at any specific current excites only a subset of the transitions (wavelengths) in the original PL spectrum.Moreover, as the current is incrementally increased, the EL spectrum shifts incrementally to shorter wavelengths.The latter indicates that the exact value of the current can be used to select specific transitions (wavelengths).This indicates inhomogeneous broadening of the PL. Figure 6.The donor-V Ga complex is known [8] to show a Franck-Condon shift [7].a, Arrows c and d show a Franck-Condon spectral shift of absorption away from luminescence.b, Photoluminescence [7] at different excitation wavelengths from the GaAs deep-centers.The excitation at 808 nm (solid curve) yields much brighter PL than the excitation at 980 nm (dashed curve).The new material absorbs efficiently only at short wavelengths (<980 nm), whereas the PL occurs at long wavelengths (1-1.7 µm).c, The measured transmission [7] indicates that, at wavelengths (1-1.7 µm) of bright PL from the GaAs deep-centers, the absorption loss is very small.It is well-known [7,11,19,20,21,22,23] that n-type GaAs is compensated by donor-vacancy-on-gallium (donor-V Ga ) complexes under As-rich conditions.It is also well known [7,8,11,13,17,18] that the donor-V Ga complex shows a Franck-Condon spectral shift of absorption away from luminescence, because V Ga is highly coupled to the lattice.(Arrows a and b in Figure 6a show luminescence and absorption at the same energies.Arrows c and d in Figure 6a show a Franck-Condon shift where absorption occurs at higher energies than luminescence.The literature [8] says that the upper state E U corresponds to a state centered on the donor in the donor-V Ga complex, whereas the lower state E d corresponds to a state centered on the V Ga in the complex.Vacancies are highly coupled to lattice-vibrations.The configuration coordinate in Figure 6a describes the coupling of vacancies to lattice-vibrations.)We now show that both transmission and photoluminescence at different excitation wavelengths are consistent with this well-known Franck-Condon shift. 3.7. Photoluminescence at different excitation wavelengths shows that absorption occurs at shorter than 1 µm, but emission occurs at longer than 1 µm Figure 6b shows the PL [7] from the GaAs deep-centers at two different excitation wavelengths.The excitation at 808 nm (solid curve) yields much brighter PL than the excitation at 980 nm (dashed curve).All data in Figure 6b correspond to the same number of incident photons.The PL from different excitation wavelengths (i.e., photoluminescence excitation) is often used as a measure of absorption.Thus, Figure 6b shows that efficient absorption in the novel material occurs only at short wavelengths (<980 nm), whereas the PL occurs at long wavelengths (1-1.7 µm).This is consistent with the well-known Franck-Condon shift associated with V Ga -complexes. 
Transparency in the novel material is achieved at near-zero injection Figure 6c shows that [7], at wavelengths (1-1.7 µm) of bright PL from the GaAs deep-centers, the material is nearly transparent even at zero injection.The measured transmission through the GaAs deep-centers in Figure 6c indicates an absorption loss of less than 3.6 cm −1 at 1.6 µm wavelengths.This absorption loss of 3.6 cm −1 is considerably less than the typical bandedge absorption (10 4 cm −1 ).Thus, the injection which achieves transparency in the novel material is much less than in direct-gap semiconductors.Figure 6c also shows that, in the novel material, absorption occurs at short wavelengths (<1 µm), whereas the PL occurs at long wavelengths (1-1.7 µm).Again, this is consistent with the well-known Franck-Condon shift associated with V Ga -complexes. The new material has PL showing a high degree of temperature insensitivity It is well known that the optical-emission from high-quality MQWs changes dramatically with temperature.The latter results from the strong temperature dependence of the bandgap energy in conventional semiconductors.In stark contrast to conventional semiconductors, we report virtually no change in both the spectral shape and peak height of the PL [7] from the GaAs deep-centers between 295 K (curve a [dashed curve] in Figure 7a) and 385 K (curve b [solid curve] in Figure 7a).For comparison, the PL peak from high-quality InGaAs MQWs shifts from 1.53 µm to 1.60 µm between 295 K (curve c [dashed curve] in Figure 7a) and 385 K (curve d [solid curve] in Figure 7a).This is accompanied by a drop (not shown) in the InGaAs PL peak at 385 K to 0.7 times the PL peak at 295 K. Figure 7b shows that the PL (normalized to the peak) from the GaAs deep centers at 77 K (solid curve) shifts to slightly longer wavelengths at 295 K (dashed curve).The PL at 77 K was 1.8 times that at room temperature. Electroluminescence spectra from p-n junction Figure 8a shows room temperature EL spectra [7] from a p-n junction where the n-layer is the deep-center-layer.Details of the EL allow us to evaluate some important lifetimes in the material.We show below that the blue-shift of the EL in Figure 8a relative to the PL (Figure 3) results from the small volume (one L P into the deep-center-layer) over which free holes exist in Figure 8a.This is equivalent to a fast lifetime for capture of free holes by deep-centers. Electroluminescence in the absence of a p-layer A useful control sample [7] is shown in Figure 9.This device consists of only the n-type deep-center-layer with no p-layer.Measurements with this device were useful because the mechanism for hole injection into the deep-center layer differs significantly from that in a p-n junction.Without a p-layer, holes are created in the deep-center-layer via impact ionization of majority electrons all along the electron paths.The latter occupy most of the volume of the entire deep-center layer.When the holes are created over a large volume (Figure 9a), the EL spectral shape (Figure 9b) looks a lot like the PL (Figure 3).This makes sense because, in PL, holes are also created over a large volume (the entire deep-center-layer).The active volume in PL is indicated in Figure 5a by an epilayer thickness which was always less than the characteristic absorption length of the excitation laser. 3.12.Possible explanations for blue-shift of EL spectra from p-n junction: heating or internal electric fields? 
Figures 9 and 7a show [7] that the blue-shift of the EL in Figure 8a cannot be explained by either device heating or a Stark effect due to an internal electric field.The voltage and current used for electron-injection to obtain the EL in Figure 9b are somewhat larger than those used in the p-n junction 8a.This is significant because any I-V heating would be greater in Figure 9 than in Figure 8a.Moreover, the electric field across the deep-center-layer (and any Stark effect) is greater in Figure 9 (15 V drop) than in Figure 8a (5.5 V drop).Since the EL in Figure 9b and Figure 8a incur similar I-V heating and internal electric field (and Stark effect), then heating and electric field (and Stark effect) cannot explain the spectral blue shift in the p-n junction EL of Figure 8a relative to the EL of Figure 9b and to the PL of Figure 3.This is consistent with our earlier observation in Figure 7a that the PL at 385 K is virtually the same as the PL at 295 K. Absence of bandedge emission and absence of free holes Significantly, no bandedge PL (0.85 µm) is observed [7] in Figures 3a,b, 5b, 7b.The absence of bandedge PL from the n-type deep-center-layer indicates an absence of free holes.The latter indicates that free holes are quickly trapped by deep-centers before a conduction-to-valence-band transition occurs.The lifetime τ dv,h for hole capture into a deep-center is indicated in Figure 1b.The hole-diffusion-length L P is related to τ dv,h through, L 2 P = (k B T /q) µ h τ dv,h , where µ h is the hole mobility in the deep-center-layer.A fast τ dv,h implies a short L P .This has important consequences for EL. Figure 8b shows that the electrically injected holes from the p-region are immediately captured by deep-centers in the first L P of the n-type deep-center-layer.[7] from a device which does not have a p-layer.a, In a device consisting of only the n-type deep-center-layer, holes are created via impact ionization of electrons over a large volume of the deep-center-layer (all along the electron paths).b, The EL spectra from the deep-center-layer looks a lot like the PL when the holes are created in a large volume of the deep-center-layer (all along the electron paths).This is unlike Figure 8a, where the EL spectra exhibits a spectral blue-shift relative to the PL, and where the holes in Figure 8a The EL spectra in Figure 8a can be explained [7] by a small fixed value of L P , and the small number of deep-centers within a small L P .At low injection, holes scatter up to deep-levels near midgap within one L P of the junction.At higher injection, holes have filled all states near midgap within one L P of the junction, and holes start to populate deep-levels closer to the valence-band within one L P of the junction.This makes possible transitions involving higher photon energy (E 2 as well as E 1 in Figure 8b), as electrons combine with holes located at deep-levels closer to the valence-band.Thus, at higher injection, the EL spectra in Figure 8a shift to shorter wavelengths. 
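The relation quoted above, L_P^2 = (k_B T/q) µ_h τ_dv,h, can be put into numbers with a short sketch. The mobility and capture time below are assumed placeholder values (the text quotes only a fast hole capture), so the result is an order-of-magnitude illustration of why L_P is small, not a value reported in the reviewed work.

```python
import math

# Order-of-magnitude sketch of L_P^2 = (k_B T / q) * mu_h * tau, the relation quoted
# above between the hole capture lifetime onto deep-centers and the hole diffusion
# length. Mobility and lifetime are assumed placeholders, not measured values.

kT_over_q = 0.0259        # V, thermal voltage at room temperature
mu_h = 200.0              # cm^2 / (V s), assumed hole mobility in the compensated layer
tau_capture = 100e-15     # s, assumed (femtosecond-scale) hole capture time

L_P = math.sqrt(kT_over_q * mu_h * tau_capture)   # cm
print(f"hole diffusion length L_P on the order of {L_P * 1e7:.0f} nm")
```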
The solid curves in Figure 8a show that the EL at longer than 1.32 µm increases for small injection, but saturates at a low current.For example, the EL at 1.45 µm in Figure 8a remains the same for all injection between 2 mA to 10 mA.This saturation of the EL brightness at long wavelengths can be explained by a filling with holes of all midgap states within one L P of the p-n junction.Since additional holes at higher injection no longer reach midgap states, the number of transitions to midgap states remains the same, and the EL at long wavelengths saturates at these injection levels. At currents greater than 10 mA (dashed in Figure 8a), the EL at longer than 1.32 µm, surprisingly, starts to rise again.This increase in the long-wavelength EL is accompanied by the presence of free holes: bandedge EL (0.85 µm) is observed only for injections greater than 10 mA in Figure 8a.This is sensible because, when most deep-centers within one L P of the junction have captured a hole, the repulsive Coulomb force makes it difficult for additional holes to be captured by the same deep-centers.Thus, at these higher currents, some free holes exist within one L P of the junction (Figure 8b), and these free holes can give rise to bandedge EL.The latter holes can also be trapped into unoccupied states near midgap further from the junction (L P 2 >L P in Figure 8b) in the interior of the deep-center layer.This explains why bandedge EL (dashed in Figure 8a) occurs simultaneously with a sudden rise in the EL at long wavelengths (1.32-1.7 µm) beyond the saturated values at low current. Figure 10.This is an ideal optical material [7]: the four-level system has a large radiative output, and exhibits fast depopulation of electrons from the lower state of the optical transition. Fast depopulation of the lower-state of the optical transition Previously [7,30,31], using estimates of the hole diffusion length in the deep-center material, we showed that the hole capture into (i.e., the depopulation of electrons out of) the lower state of the optical-transition is very fast (10-100 fs) at room-temperature.This hole capture lifetime is consistent with previous measurements [10] of the coefficient for capture of a free hole onto the native deep-acceptors (mainly V Ga and their complexes) in n-type GaAs.This lifetime for depopulation of the lower state is also consistent with previous direct measurements [32,33] (via fast pump-probe experiments) of the 100 fs trapping of holes by the vacancy-on-Ga-site.The physics which explains the fast hole capture onto deep-centers is that, in compensated semiconductors, the deep-acceptor complexes are negatively charged, and thus exhibit a large capture cross-section for positively-charged holes.Thus, we demonstrated that the novel material constitutes a four-level system which shows both a bright optical-transition and fast depopulation of the lower state of the optical-transition.This is summarized in Figure 10.Such a four-level system is known [34,35] to be an ideal optical material for lasers. 
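A back-of-the-envelope argument shows why such fast emptying of the lower level makes the four-level scheme easy to invert. In a simplified picture where the lower level is refilled only by the optical transition and emptied again by hole capture, the steady-state occupation ratio is just the ratio of the two lifetimes; the nanosecond upper-state lifetime used below is an assumed placeholder.

```python
# Back-of-the-envelope illustration (assumed numbers) of why femtosecond depopulation
# of the lower state keeps a four-level system inverted: in steady state, if the lower
# level is fed only by the optical transition and emptied by hole capture,
#   n_lower / n_upper ~ tau_lower / tau_upper.

tau_lower = 100e-15    # s, assumed depopulation time of the lower state (hole capture)
tau_upper = 1e-9       # s, assumed radiative lifetime of the upper state

ratio = tau_lower / tau_upper
print(f"n_lower / n_upper ~ {ratio:.0e}  (inversion is maintained whenever this is << 1)")
```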
Summary Thus far, we have shown that the deep-centers in the novel material showed a total radiative output which is at least comparable to that from the same thickness of high-quality InGaAs-quantum-wells-on-InP, and a large internal-radiative-efficiency for emission into all wavelengths longer than the bandgap.This indicates that a new high-quality energy gap has formed between the conduction-band and about midgap.Radiative emission is observed at energies greater than half-the-bandgap.The well-known Franck-Condon spectral shift was observed via both transmission measurements and photoluminescence at different excitation wavelengths.Moreover, the deep-centers showed very temperature-insensitive photoluminescence over a rise of 90 • C above room-temperature.We also found that the PL is inhomogeneously broadened.An important aspect of our work was the observation of a significant difference between photoluminescence and electroluminescence.In our work, the PL could not be saturated, and the PL spectra retain the same shape for all optical pump powers.In stark contrast, the EL is found to saturate at long wavelengths, and to show a strong spectral blue-shift with increasing injection.These observations were explained by considering the number of deep-centers which are probed in PL versus EL.Since the characteristic absorption length of the pump laser was greater than the deep-center layer thickness, the PL probes a large volume, the entire deep-center layer, and thus, a large number of deep-centers.With a p-n junction, the EL probes only a small volume, the first L P of the deep-center layer, and thus, only a small number of deep-centers.Our observation of a small L P indicates fast capture of free holes onto deep-centers.This fast capture of free holes onto deep-centers is consistent with the absence from all our deep-center samples of bandedge emission in PL.Finally, the novel material is found to be ideal for lasers, with fast femtosecond depopulation of the lower state of the optical transition. Regimes of behavior in the L-I curve Solution of the laser rate equations [34,36] shows three regimes of behavior.At low injection, spontaneous-emission (also known as fluorescence or light-emitting-diode (LED) behavior) is indicated [34,36] as a +1 slope in a log-log plot of optical-emission as a function of current density J (the "L-I" curve).This indicates that the optical-emission is proportional to the first power of J.At a higher injection, stimulated-emission is observed, and rises much more quickly as J s , where s > 1.On a log-log plot, stimulated-emission shows [3,4,34,36,37,38,39] a superlinear slope s, (s > 1).Typical values of the superlinear slope s range from 2.5-3.5, for large microdisk lasers [37,38] at room-temperature, to 2.9-11, for very small microdisk lasers [3,4,39] at low temperature.This superlinear growth of the optical-emission continues until the photon number reaches 1/β [36], where β is the spontaneous emission coefficient, beyond which the laser output increases linearly with J.This linear dependence of the L-I curve at high injection indicates a pinning of the population inversion at its threshold value.At threshold, the slope of the L-I curve on a log-log plot has its greatest value [34]. 
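The three regimes described above can be reproduced with a toy steady-state solution of the standard single-mode rate equations. The equations and parameter values below are generic illustrations, not a fit to the deep-center devices; sweeping the carrier number parametrically avoids any root finding.

```python
import numpy as np

# Toy steady-state solution of single-mode laser rate equations, illustrating the
# three L-I regimes discussed above (slope ~1, superlinear, then ~1 again):
#   dN/dt = P - N/tau_s - G*N*S
#   dS/dt = G*N*S + beta*N/tau_s - S/tau_p
# Parameter values are illustrative, not fitted to the GaAs deep-center devices.

tau_s, tau_p, G, beta = 1.0e-9, 1.0e-12, 1.0e4, 1.0e-4   # illustrative units
N_th = 1.0 / (G * tau_p)                                  # carrier number where the gain clamps

N = N_th * (1.0 - np.logspace(-6, -0.001, 400))[::-1]     # sweep N up toward threshold
S = (beta * N / tau_s) / (1.0 / tau_p - G * N)            # steady-state photon number
P = N / tau_s + G * N * S                                 # pump rate that sustains (N, S)

slope = np.gradient(np.log10(S), np.log10(P))             # local log-log slope of the L-I curve
print(f"slope well below threshold: {slope[5]:.2f}")       # about 1 (spontaneous emission, LED-like)
print(f"maximum slope:              {slope.max():.1f}")    # much greater than 1 (stimulated emission)
print(f"slope far above threshold:  {slope[-5]:.2f}")      # about 1 again (output linear in pump)
```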
Superlinear L-I at specific wavelengths One criterion for demonstrating stimulated-emission is identified in three classic papers which reported the first demonstration of stimulated-emission in GaAs [40], bulk GaN [41], and GaN microdisks [38]. The key is that, in situations where laser action is not achieved, the stimulated-emission shows both a spectral shift and broadening with increasing injection. No Fabry-Perot modes are observed. (See Figure 3 in [40], Figure 2 in [41], and Figures 1 and 2 in [38].) Thus, in the absence of laser action, the criterion for stimulated-emission was neither the observed spectral broadening nor the absence of Fabry-Perot modes. Rather, the criterion for stimulated-emission was a superlinear L-I curve at a specific wavelength: see Figures 1 and 2 in [38]. This is especially true in dye lasers, where the stimulated-emission spectrum is broad: see Figure 8 in [42]. The superlinear L-I curve must be demonstrated at a specific wavelength because, in stimulated-emission, a photon gives rise to additional photons at the same wavelength.

Figure 11. Room-temperature stimulated-emission measured from the edge of Sample F [6]. The superlinear rise of the optical-emission (plotted in arbitrary units) as J^3.6 is the signature of stimulated-emission.

Figure 11 shows the optical-emission [6] measured from the sample edge (edge emission) as a function of J at room-temperature at specific wavelengths. At every wavelength, and for a significant range of J, the optical-emission is seen to increase by two to three orders-of-magnitude with a fast rise of J^3.6. This superlinear rise of the optical-emission as J^s at specific wavelengths is the signature of stimulated-emission. Note that, as the injection is increased, the stimulated-emission can be tuned widely from long wavelengths (about half-the-bandgap) to short wavelengths (near the bandgap). (This observed wavelength tuning range (900-1,600 nm) was limited by the response of our photodetector. The actual tuning range may be a bit wider.) In Figure 11, also note that, as the stimulated-emission at shorter wavelengths rises, the stimulated-emission at longer wavelengths clamps.

Figure 12. Behavior of the emission in Figure 11 with increasing injection. a, At small injection, holes scatter up to midgap states. b, At higher injection, the large number of holes at energies (e.g., E_d2) further down from the midgap dramatically increases the shorter-wavelength (e.g., hν2) optical-emission rate. Holes recombine radiatively (e.g., hν2) with electrons before they can scatter up to midgap states. The optical-emission at hν2 "uses up" the holes needed for long-wavelength emission.
Carrier population distribution and the L-I curve Figure 12 explains both the shift to shorter wavelengths of the stimulated-emission (the superlinear L-I) and the clamping of the long-wavelength emission in Figure 11 with increasing injection. Figure 12a shows that, at small injection, holes scatter up to midgap states, and stimulated-emission occurs at long wavelengths. Figure 12b shows that, at higher injection, many holes arrive at energies (e.g., E_d2) further down from the midgap and closer to the valence band. The large number of holes at these energies (e.g., E_d2) dramatically increases the shorter-wavelength (e.g., hν2) optical-emission rate. Hence, holes recombine radiatively (e.g., hν2) with electrons before they can scatter up to midgap states. Thus, the shorter-wavelength (e.g., hν2) optical-emission processes "use up" the holes needed for longer-wavelength (e.g., hν1) optical emission. This explains the shift to shorter wavelengths of the stimulated-emission in Figure 11, as the long-wavelength optical-emission clamps. The Bernard-Duraffourg criterion Another criterion for demonstrating stimulated emission between two energy bands is a sufficiently large quasi-Fermi level separation, as derived in a classic paper by Bernard and Duraffourg [43]. Our previous work [6,7] showed that, as the injection increases, the hole quasi-Fermi-level within the first L_P of the deep-center layer drops from near midgap to near E_V, as indicated in Figure 13a-c. This is manifest in the measured spectra, Figure 13d, as a shift of the optical-emission from half the bandgap energy (1.6 µm) to shorter wavelengths (1.0 µm) near the bandgap. The observed spectral blue shift of the optical emission corresponds to a similar increase in the separation ∆E_F between the electron and hole quasi-Fermi levels, the quantity that enters the criterion of Bernard and Duraffourg [43]. The latter showed that when ∆E_F exceeds the transition energy E_Ud, the stimulated emission exceeds the absorption at E_Ud. Figure 13d shows that ∆E_F increases with increasing injection. Consistent with Figure 13d [6] and the Bernard-Duraffourg result, Figure 11 shows that the superlinear L-I (stimulated emission) is achieved at shorter wavelengths, as ∆E_F exceeds the transition energies associated with shorter wavelengths, with increasing injection. This is the Bernard-Duraffourg signature of stimulated emission, and is depicted in Figure 13b-c. Increasingly superlinear L-I with a resonant cavity Figure 11 shows that the onset of stimulated-emission occurs at a J less than 1 A/cm^2 (at wavelengths longer than 1.3 µm). This stimulated emission was observed in the absence of a resonant cavity or waveguide. Any enhancement due to cavity effects was avoided by deliberately etching mesa edges with a random roughness. With a longer optical path and higher quality optical confinement, we would expect the stimulated-emission to increase, and the exponent s in the functional dependence J^s of the stimulated emission to be larger. Indeed, Figure 15 shows [5] this to be true: the superlinear exponent s is about 3 without a resonant cavity (Figure 11), and is 64 with a resonant cavity (Figure 15a). The resonant cavity is formed by locating a distributed-Bragg-reflector (DBR) under the active layer, and reactive ion etching (RIE) a pixel to a depth of 4.5 DBR periods, as shown in Figure 1e.
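A quick numerical reading of the Bernard-Duraffourg condition discussed above makes the wavelength dependence concrete: gain at a transition energy E requires ∆E_F > E, so shorter wavelengths demand a larger quasi-Fermi level separation and hence more injection. The conversion E(eV) ≈ 1239.84/λ(nm) is standard; the wavelengths are those quoted in the text.

```python
# Transition energies for the wavelengths discussed in the text; by the
# Bernard-Duraffourg criterion, the quasi-Fermi level separation dE_F must exceed
# the transition energy before stimulated emission outweighs absorption there.

for wavelength_nm in (1600, 1540, 1300, 1000, 900):
    energy_eV = 1239.84 / wavelength_nm
    print(f"{wavelength_nm} nm  ->  transition energy {energy_eV:.2f} eV "
          f"(dE_F must exceed this for net gain)")
```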
Observation of a gain larger than a significant loss A final piece of evidence for stimulated-emission is the observation [5,6] of an optical gain large enough to overcome a significant loss.In the next section, when we discuss laser action, we will also discuss both the nature of the optical modes and the optical emission spectra from deep-centers in the presence of a resonant cavity.Here, we will summarize the results, and defer the more detailed explanations until the next section.In the presence of a resonant cavity, we would expect that the largest spectral peak in the optical emission to correspond to a low-loss optical mode.It is thus both surprising and significant that the spectra [5] in Figures 17c, 18b, and 19c below all show the low-loss total-internal-reflection (TIR) mode, labeled G, to be suppressed, while the very lossy vertical mode, labeled F, dominates the spectra as a narrow peak.The latter signifies that enough material gain exists to overcome the large optical loss, which we will find to be a 70% transmission loss with each reflection of the lossy vertical mode at the sample surface.This large optical gain, which is needed to overcome the large loss, is another indicator of stimulated-emission. Summary Thus far, we have demonstrated stimulated-emission from deep-centers in highly n-doped GaAs, and electrical injection was the pump mechanism.The evidence for stimulated-emission includes: a superlinear L-I curve, a quasi-Fermi level separation large enough to satisfy the Bernard-Duraffourg criterion, and an optical gain large enough to overcome significant loss.We have demonstrated that the room-temperature stimulated-emission from GaAs deep-centers can be tuned very widely from the bandgap to half-the-bandgap by changing the electrical injection.Room-temperature deep-center stimulated-emission is demonstrated at electrical injections less than 1 A/cm 2 .This small injection which achieves stimulated-emission will be explained below by the fast capture of free holes onto deep-centers.Figure 14.Room-temperature L-I curves [5] measured in cw-mode from the same pixel on Sample G at different wavelengths in a single-pass geometry.At current densities greater than 65 A/cm 2 , the optical-emission at 1.35 µm becomes proportional to the first power of J, and the longer wavelength (e.g, 1.45 µm and 1.55 µm) optical-emission clamps at a constant value.The latter indicates that, at these injection levels, all the additional carriers supply the optical emission into the low-loss total-internal-reflection mode, rather than the longer wavelength lossy modes.These two observations indicate gain pinning and single-pass laser action.6. 
Laser Action Evidence for laser action Our evidence [5] for laser action from GaAs deep-centers is summarized here: (1) L-I curves, showing a superlinear regime at low injection, where the stimulated-emission rises three orders-of-magnitude with the functional dependence J s , and a regime linear in J at higher injection, indicating gain-pinning and laser action.The exponent s in the stimulated-emission regime is found to be larger for longer optical paths and higher quality optical confinement.(2) In the single pass geometry of Sample G below, laser action is observed for the longest wavelength total-internal-reflection (TIR) mode of Figure 1g.At an injection high enough for laser action in this TIR mode, the optical emission from lossy modes, such as the vertical mode of Figure 1f, is clamped at a constant value.See Figure 14.(3) With a resonant cavity and RIE facets, we observed a pinning of the carrier distribution among the energy-levels at all injections greater than threshold.Without a resonant cavity, previous work [6,7] showed that an increasing injection results in a marked shift in the carrier distribution, and a rise in shorter wavelength emission.(4) With a resonant cavity and RIE facets, the dominant mode in the optical emission spectrum (Figures 17c, 18b, and 19c) is the lossy vertical mode of Figure 1f.In order to be dominant, this lossy mode must undergo significant optical gain.A significant gain is another indicator of laser action. Relevant optical modes Figure 1d shows a single-pass measurement from the edge of Sample G.A bottom DBR was added in Sample G to increase the optical path for resonant normal wave vectors K Z .Wet-etched rough facets preclude the optical-feedback characteristic of resonant cavities.Figure 1f shows the "vertical" waveguide mode, which is a mode that reaches the sample surface at normal incidence and whose longitudinal wave vector K X is nearly zero.This vertical mode is very lossy, because 70% of the power reaching the sample surface is transmitted vertically, and lost from the waveguide.Figure 1g shows the longest wavelength total-internal-reflection (TIR) mode.Here, rays from within the semiconductor are incident upon the sample surface at the critical angle θ C for TIR. Figure 1h shows shorter wavelength TIR modes.Here, rays from within the semiconductor are incident upon the sample surface at an angle greater than θ C .These shorter wavelength modes make fewer passes through the active region.The vertical mode of Figure 1f makes the largest number of passes through the active region, but is quite lossy. 
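The numbers attached to the mode picture above can be checked with a short Fresnel and total-internal-reflection estimate. Assuming a refractive index of about 3.5 for GaAs near 1.5 µm (a textbook value assumed here, not quoted in the text), the normal-incidence transmission into air comes out close to the 70% per-bounce loss attributed to the vertical mode, and the critical angle sets which rays remain trapped as TIR modes.

```python
import math

# Consistency check (assumed n ~ 3.5) of the 70% per-bounce loss of the vertical mode
# and of the critical angle that defines the total-internal-reflection (TIR) modes.

n_gaas, n_air = 3.5, 1.0
reflectance = ((n_gaas - n_air) / (n_gaas + n_air)) ** 2          # normal-incidence Fresnel reflectance
print(f"normal-incidence transmission into air: {100 * (1 - reflectance):.0f}%")    # about 69%
print(f"TIR critical angle: {math.degrees(math.asin(n_air / n_gaas)):.1f} degrees")  # about 16.6 degrees
```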
Increasingly superlinear L-I with better optical confinement Sample F was not placed within a resonant cavity or waveguide. Figure 11 [6] shows that the single-pass L-I curves from Sample F rise a significant two to three orders-of-magnitude at every wavelength at a superlinear rate of J^3. This superlinear rise as J^3 is the signature of stimulated-emission. We found that the superlinear slope s of the stimulated-emission is larger for longer optical-paths. Figure 14 shows the single-pass L-I curves from the same pixel on Sample G at different wavelengths. With the longer optical-path, the stimulated-emission at 1.35 µm, the longest wavelength for TIR, shows a rise as J^9 (which is a much sharper rise than from Sample F). At J>65 A/cm^2, the optical-emission at 1.35 µm becomes proportional to the first power of J. At these J, the longer wavelength optical-emission (e.g., at 1.45 µm for the lossy vertical mode of Figure 1f, and also at 1.55 µm) clamps at a constant value (zero slope in the L-I curve). The latter indicates that, at J>65 A/cm^2, all the additional carriers supply the optical emission into the low-loss TIR mode, rather than the longer wavelength lossy modes. These two observations indicate gain pinning and single-pass laser action from Sample G. Wavelengths shorter than 1.35 µm also show stimulated-emission, and correspond to the modes in Figure 1h. Longer optical path with a resonant cavity In the six Samples H-N, RIE facets made possible a resonant cavity and optical-feedback. See Figure 1e. Figures 15 to 19 show L-I curves and spectra from the six samples H-N [5]. Each pixel has been labeled by its length L, the measurement wavelength λ, the threshold current density (where the log-log plot of L-I has its greatest slope), and the functional form J^s of the stimulated-emission. With the optical-feedback resulting from RIE facets, the stimulated-emission is seen to rise even more sharply as J^s, where s is as large as 27, or even 64. (See samples H and K in Figures 15a and 16.)

Figure 15. Room-temperature L-I curve [5] measured in cw-mode at a fixed wavelength from a resonant cavity with both a bottom DBR and RIE facets. The pixel has been labeled by its threshold current density (where the log-log plot of L-I shows the greatest slope), and the functional form J^s of the stimulated emission. a, With the optical-feedback resulting from RIE facets, the stimulated-emission from Sample H is seen to rise even more sharply as J^s, where s is as large as 64. b, Sample J shows a "S-shaped" L-I curve, which indicates a transition from spontaneous-emission, to stimulated-emission, towards laser-action.

Figure 15b shows sample J to have an "S-shaped" L-I curve, which indicates a transition from spontaneous-emission, to stimulated-emission, towards laser-action. The stimulated-emission at 1.54 µm from sample L is seen in Figure 17 to rise a significant three orders of magnitude as J^11, beyond which the optical-emission quickly becomes linear in J. The latter indicates gain pinning and laser action. The threshold is less than 2 A/cm^2. Figures 18 and 19 show the stimulated-emission at 1.54 µm from samples M and N to rise as J^2.3 and J^2.5, respectively, beyond which the optical-emission quickly becomes linear in J. The latter indicates gain pinning and laser action. The threshold is less than 69 mA/cm^2 and 27 mA/cm^2, respectively, for samples M and N.
Optical emission spectra

Figure 20 shows the spectra from Sample G at the indicated J [5]. Figures 17c, 18b, and 19c show the measured room-temperature spectra [5] for samples L, M, and N. The inset in Figure 17c shows that the peak at 1.54 µm is TE polarized. After correcting for the spectrometer resolution, the width of the spectral peaks in Figures 17c, 18b, 19c, and 20 was 12 nm. (Since the vertical mode of Figure 1f is quite lossy and has a low Q, the Fabry-Perot modes in the spectra of Figures 17c, 18b, 19c, and 20 would show significant spectral overlap, and thus could not be individually resolved.)

Figure 16. Room-temperature L-I curves [5] measured in cw-mode from a resonant cavity with both a bottom DBR and RIE facets. a, With the optical feedback resulting from RIE facets, the stimulated emission from Sample K is seen to rise sharply as J^s, where s is as large as 27. b, The constant vertical separation on a log-log plot of the L-I curves at different wavelengths from the same pixel indicates that the optical emission has the same spectral shape for all indicated J, and that the carrier distribution is pinned among the energy levels. (Without the resonant cavity, the optical emission shows a significant shift in the population distribution with increasing injection.)

6.7. Loss in optical modes, and the observation of a gain larger than a significant loss

Another indicator of laser action is the observation of an optical gain large enough to overcome a significant loss [5]. We would expect that the spectral peaks in Figures 20, 17c, 18b, and 19c correspond to the low-loss longest wavelength TIR mode of Figure 1g, rather than the lossy vertical mode of Figure 1f. (The labels F and G in Figures 20, 17c, 18b, and 19c correspond to the modes shown in Figures 1f and 1g, respectively.) Indeed, the single-pass measurement of Sample G in Figure 20 shows its spectral peak G at the low-loss TIR mode. In conventional vertically emitting lasers, the lossy vertical mode of Figure 1f never shows laser action unless both a top and bottom DBR are present. (Likewise, in conventional slab waveguide lasers, laser action does not take place unless both a top and bottom cladding layer are present, and the lossy vertical mode of Figure 1f never shows laser action.) Without a top DBR, the vertical mode of Figure 1f suffers a 70% transmission loss with every reflection at the sample surface. Thus, in the six samples H-N, it is very significant that the lossy vertical mode dominates Figures 17c, 18b, and 19c as the narrow spectral peak F, while the low-loss TIR mode, labeled G in Figures 17c, 18b, and 19c, is suppressed. The latter signifies that enough material gain exists to overcome the large 70% transmission loss incurred by the vertical mode with each trip to the sample surface. This large gain, which is needed to overcome the large loss, is another indicator of laser action. Since the vertical mode makes far more passes through the active layer than any of the TIR modes (compare Figures 1f, g, h), the lossy vertical mode acquires a higher net gain, and dominates Figures 17c, 18b, and 19c.

Figure 17. Room-temperature measurements [5] in cw-mode from a resonant cavity (bottom DBR and RIE facets). a, The stimulated emission at 1.54 µm from Sample L is seen to rise a significant three orders of magnitude as J^11, beyond which the optical emission quickly becomes linear in J. The latter indicates gain pinning and laser action. b, The constant vertical separation on a log-log plot of the L-I curves at different wavelengths from the same pixel indicates that the optical emission has the same spectral shape for all indicated J, and that the carrier distribution is pinned among the energy levels. (Without the resonant cavity, the optical emission shows a significant shift in the population distribution with increasing injection.) c, Optical-emission spectra from Sample L at 17 A/cm^2. All samples having a resonant cavity showed that the low-loss TIR mode, labeled G in Figure 17c, is suppressed, while the lossy vertical mode, labeled F in Figure 17c, dominates the spectra as a narrow peak. The latter signifies that enough material gain exists to overcome the large 70% transmission loss incurred by the vertical mode with each trip to the sample surface.

Figure 18. Room-temperature measurements [5] in cw-mode from a resonant cavity with both a bottom DBR and RIE facets. a, The stimulated emission at 1.54 µm from Sample M is seen to rise as J^2.3, beyond which the optical emission quickly becomes linear in J. The threshold current density is observed to be less than 69 mA/cm^2. b, Optical-emission spectrum from Sample M at 7 A/cm^2. All samples having a resonant cavity (bottom DBR plus RIE facets) showed that the low-loss total-internal-reflection mode, labeled G in Figure 18b, is suppressed, while the lossy vertical mode, labeled F in Figure 18b, dominates as the narrow spectral peak. This signifies that enough material gain exists to overcome the large 70% transmission loss incurred by the vertical mode with each trip to the sample surface.

Estimate of the optical gain

The fact that the lossy vertical mode dominates the optical emission spectra is quite striking. The reason is that our measurement geometry (from the sample edge) was chosen to collect all emission from TIR modes, whose Poynting vector is parallel to the sample surface. This geometry does not collect most of the emission from the lossy vertical mode, whose Poynting vector is normal to the sample surface. Yet, the lossy vertical mode dominates the optical emission spectra even in this suboptimal measurement geometry. With a 70% transmission loss at each reflection at the sample surface and a very long optical path length, the only way that, even in this suboptimal measurement geometry, the lossy vertical mode can achieve a higher optical emission than any of the TIR modes is if net gain is achieved in one round trip (from the surface down vertically to the DBR and back up vertically to the surface). Using 210 nm for the thickness of the deep-center layer and a loss dominated by the 70% transmission, the gain for the lossy vertical mode is found to be 2.9 × 10^4 cm^-1. (The thickness of the electrically pumped portion of the deep-center layer, one hole diffusion length, is much less than 210 nm, so the gain of the lossy vertical mode may be higher than this estimate.) This is a large gain at these injection densities (27 mA/cm^2 to 2 A/cm^2).

Figure 19. Room-temperature measurements in cw-mode from a resonant cavity. a, The stimulated emission at 1.54 µm from Sample N rises as J^2.5, beyond which the optical emission quickly becomes linear in J. The threshold current density is observed to be less than 27 mA/cm^2. b, The constant vertical separation on a log-log plot of the L-I curves at different wavelengths from the same pixel indicates that the optical emission has the same spectral shape for all indicated J, and that the carrier distribution is pinned among the energy levels. (Without the resonant cavity, the optical emission shows a significant shift in the population distribution with increasing injection.) c, Optical emission spectrum from Sample N at 25 A/cm^2. With a resonant cavity, the low-loss TIR mode, labeled G in Figure 19c, is suppressed, while the lossy vertical mode, labeled F in Figure 19c, dominates the spectra as a narrow peak. Thus, enough material gain exists to overcome the large 70% transmission loss incurred by the vertical mode with each trip to the sample surface.

Figure 20. Room-temperature optical-emission spectra [5] measured from the same pixel on Sample G in a single-pass geometry at different current densities. As expected, the low-loss total-internal-reflection mode, labeled G in Figure 20, dominates the spectra as narrow peaks, while the lossy vertical mode, labeled F in Figure 20, is suppressed. (Since no top DBR has been placed on Sample G, the normal component K_Z of the wave vector has a broad continuum of values, as determined by the broad cavity resonance in Figure 23a below. Consequently, the Fabry-Perot modes are broadened by the width of the cavity resonance.)

Three observations explain how a large optical gain is achieved even at low injection. First, the Franck-Condon effect allows transparency to be achieved even at nearly zero injection. Second, the electrically pumped region extends only one hole diffusion length into the deep-center layer. The small volume of the electrically pumped region contains a small total number of deep-centers, and allows for a smaller threshold current. Moreover, the injected holes occupy a smaller volume, and thus achieve a higher concentration. Third, fast depopulation of the lower state of the optical transition (e.g., fast capture of free holes onto deep-centers) allows a population inversion to be easily maintained.
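The 2.9 × 10^4 cm^-1 figure quoted above can be checked with a short back-of-the-envelope script. The sketch below assumes that the round trip is simply twice the 210 nm layer thickness and that the 70% surface transmission is the only loss; it reproduces the quoted order of magnitude.

```python
import math

# Round-trip gain needed for the lossy vertical mode to break even, assuming
# the only loss is the 70% transmission at the top surface and the gain
# medium is traversed twice (down to the DBR and back up).
transmission_loss = 0.70        # fraction of power lost per surface reflection
layer_thickness_cm = 210e-7     # 210 nm deep-center layer, in cm
round_trip_length = 2 * layer_thickness_cm

# Require: exp(g * round_trip_length) * (1 - transmission_loss) >= 1
g_threshold = math.log(1.0 / (1.0 - transmission_loss)) / round_trip_length
print(f"gain needed: {g_threshold:.2e} cm^-1")   # ~2.9e4 cm^-1
```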
Net gain over a wide wavelength range We have observed that the lossy vertical mode of Figure 1f (labeled F in Figures 17c, 18b, and 19c) has sufficient single-pass gain to overcome the 70% transmission loss with every reflection at the sample surface.This implies that shorter wavelength TIR modes (e.g., Figures 1g-h) also show net gain in a single-pass [5] because TIR modes suffer no transmission loss at the sample surface.Gain (and laser action) is then observed over a wide wavelength range [44].Indeed, at every wavelength between 1.2 µm and 1.6 µm, the L-I curves from samples K, L, and N in Figures 16b, 17b, and 19b all show stimulated-emission regimes, followed by gain pinning and laser action.A consequence of the optical-resonator is thus to reduce the threshold at all wavelengths between 1.2 µm and 1.6 µm.Figures 8,11,13,14,and 20 show that, in a single-pass through Samples F and G, the optical-emission at shorter wavelengths increases with increasing injection [5,6].The shift to shorter wavelengths of the optical emission with increasing injection was reported in Figure 4 in [7] for a sample fabricated without a waveguide or optical resonator.For a one decade increase in the injection, from 2 A/cm 2 to 20 A/cm 2 , the optical-emission from Sample F, which has no optical resonator, shows a dramatic shift from a peak centered at 1.58 µm to a peak centered at 1.27 µm.This is shown [6] in Figure 13.For a similar increase in the injection, from 2 A/cm 2 to 17 A/cm 2 , the optical-emission from Sample L (which does have an optical resonator) shows no change in the spectral shape [5].This is indicated by the constant vertical separation of the curves at different wavelengths from the same pixel in Figure 17b.Similarly, for a 400-fold increase in the injection, from 27 mA/cm 2 to 10 A/cm 2 , the optical-emission from Sample N (which does have an optical resonator) shows no change in the spectral shape.This is indicated by the constant vertical separation of the curves at different wavelengths from the same pixel in Figure 19b.For a similar increase of 400-fold in the injection, Figure 13 shows the optical-emission from Sample F, which has no optical resonator, to exhibit a dramatic shift from a peak centered at 1.59 µm to a peak centered at 1.1 µm.This shift to shorter wavelengths of the optical emission with increasing injection indicates that the carrier population shifts significantly among the energy-levels in the absence of an optical resonator. Carrier population pinning In contrast, when an optical resonator is added, the spectra radiated by the six samples H-N all had the same shape [5] (shown in Figures 17c, 18b, and 19c) at all current densities measured.(This is indicated by the constant vertical separation of the curves at different wavelengths from the same pixel in the log-log plots of Figures 16b, 17b, and 19b.)Thus, in the presence of an optical resonator, the carrier population has the same distribution among the energy-levels for all injection currents shown in Figures 16b, 17b, and 19b.This striking observation indicates that, even though the photon number is rising sharply, the carrier distribution among the energy-levels is pinned.Carrier population pinning is another indicator of laser action. 
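As an illustration of how the exponents quoted above (L proportional to J^s, with s ranging from about 2.3 up to 27) can be extracted from an L-I curve, the following sketch fits the slope of a log-log plot; the data points are synthetic placeholders, not the measured curves.

```python
import numpy as np

# Illustrative estimate of the exponent s in L ~ J^s from an L-I curve,
# using a linear fit of log(L) vs log(J) over the superlinear region.
# The data below are synthetic placeholders, not measured values.
J = np.array([0.5, 1.0, 2.0, 4.0, 8.0])   # current density (A/cm^2)
L = 1e-3 * J**11                           # emission with s = 11, arbitrary units

s, log_prefactor = np.polyfit(np.log(J), np.log(L), 1)
print(f"fitted exponent s = {s:.1f}")      # recovers s = 11
```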
Observation of increased radiative recombination rate with a resonant cavity

The data in Figures 15 to 19 actually show that the radiative recombination rate increases in the presence of a resonant cavity. This can be argued as follows. The hole population N_holes,dl in deep levels is related to the injected flux F_injected of holes per unit volume per unit time and the recombination rate R_eh of holes with electrons through

dN_holes,dl / dt = F_injected − N_holes,dl · R_eh (1)

In the steady state, the injected flux equals the recombination flux, and the steady-state concentration of holes in deep-levels is

N_holes,dl = F_injected / R_eh (2)

Thus, for a fixed injection, a smaller steady-state concentration of holes in deep-levels implies a higher recombination rate R_eh. Figures 21 and 22 show the electron and hole population distributions in Samples F and H-N, respectively (without and with a resonant cavity, respectively). Figure 21 shows band filling at higher injection in the absence of a resonant cavity. At higher injection, the additional holes populate states further down from the midgap and closer to the valence band. Here, in the absence of a resonant cavity, the number of holes in deep-levels increases with increasing injection. This describes the spectra shown in Figure 13. The earlier discussion of Figures 15 to 19 shows that, in the presence of a resonant cavity, the electron and hole populations remain pinned for all current injections greater than threshold. This is shown in Figure 22. The spectra in Figures 15 to 19 show that holes remain pinned mainly in midgap states for all injections above threshold. Thus, unlike the situation in Figure 21, the hole population in deep-levels within a resonant cavity no longer increases with increasing injection.

In fact, for a fixed injection current density, the hole concentration in the presence of a resonant cavity (Samples H-N in Figure 22) is smaller than the hole concentration in the absence of a resonant cavity (Sample F in Figure 21). For a fixed injection, a higher hole concentration is associated with Sample F in Figure 21 because higher hole concentrations are accompanied by a shift of the hole population to states closer to the valence band. Samples H-N in Figure 22 show no such shift of the hole population to states closer to the valence band. According to Equation (2), the smaller steady-state hole concentration implies a higher recombination rate R_eh in Samples H-N in the presence of a resonant cavity. This makes sense because the stimulated emission rate is proportional to the photon population, and the latter is greater in the presence of a resonant cavity.
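A minimal numerical illustration of the steady-state relation (2): for a fixed injected flux, a larger recombination rate R_eh yields a smaller steady-state hole concentration. The numbers below are placeholders chosen only to show the trend, not values taken from the measurements.

```python
# Steady state of dN/dt = F_injected - N * R_eh  =>  N_ss = F_injected / R_eh.
# Illustrative numbers only; they are not taken from the measurements.
F_injected = 1e24                 # injected holes per cm^3 per s (placeholder)
for R_eh in (1e4, 1e5, 1e6):      # recombination rates per s (placeholder values)
    N_ss = F_injected / R_eh
    print(f"R_eh = {R_eh:.0e} 1/s  ->  steady-state N = {N_ss:.1e} cm^-3")
# A smaller steady-state hole concentration at fixed injection implies a larger R_eh.
```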
Stimulated-emission and Einstein B coefficient

The previous section showed that, for a fixed injection, the steady-state concentration of holes is a measure of the electron-hole recombination rate: a small steady-state concentration of holes would result from a large electron-hole recombination rate. We also showed that the electron-hole recombination rate is definitively larger in the presence of a resonant cavity. Since the resonant cavity enhances the number of photons and the stimulated emission into resonant modes, the increased electron-hole recombination is a result of an increased stimulated-emission rate. This direct observation of an increased stimulated-emission rate indicates that the radiative emission dominates in the presence of the resonant cavity. The latter indicates a high radiative efficiency. The observation of an increased stimulated-emission rate is surprising because the photon population is small: the injected current density (and the photon density) is very low, and the resonant cavity has very low Q (the transmission loss through the sample surface is 70% and the RIE etch was only 4.5 periods into the bottom DBR). The noticeably larger stimulated-emission rate, even at the low photon densities resulting from low injection in a low-Q cavity, indicates a sizable Einstein B coefficient. In a previous work [7], we estimated a sizable Einstein B coefficient of 8.2 × 10^-10 cm^3/s by equating the injection flux with the total electron-hole recombination flux. Although such estimates always involve a significant margin of error, they are, at least, consistent with our observation of a high stimulated-emission rate even at the low photon densities which result from a low injection density in a low-Q cavity.

Historically, it has been widely thought that point defects should show only weak radiative transitions. However, the deep-centers in this material are not simple point defects, but are believed to be native deep-acceptor complexes: e.g., complexes consisting of a vacancy-on-gallium-site and a donor-on-gallium-site. The radiative transition occurs between an electron on a donor-on-gallium-site and a hole on a vacancy-on-gallium-site. Since the vacancy-on-gallium-site and the donor-on-gallium-site are next-nearest neighbors, the wave-function overlap and the optical dipole are significant. A similar situation occurs in F-center (i.e., color-center) lasers, wherein a strong radiative transition occurs at anion vacancies in alkali halides. A future research direction is to understand this optical transition strength.

Polarization of the optical emission

In the presence of the resonant cavity, the narrow spectral peak at 1.54 µm in Figures 17c, 18b, and 19c is found to be TE-polarized [5]. This is demonstrated in the inset in Figure 17c. This is a sensible observation, because the reflectivity of TE-polarized radiation is greater than that of TM-polarized radiation for all incident angles. With the greater reflectivity at each interface, TE-polarized radiation shows better confinement in the DBR waveguide.
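As a side illustration of the polarization argument above, the following sketch compares the single-interface Fresnel reflectances for TE and TM polarization. The refractive indices are rough placeholder values for a DBR-like interface, not fitted parameters for this structure.

```python
import numpy as np

# Fresnel reflectances for TE (s) and TM (p) polarization at one dielectric
# interface, illustrating that TE reflectivity is never below TM reflectivity.
# The indices are placeholder values for two DBR layers (low index into high).
n1, n2 = 2.9, 3.5
theta_i = np.radians(np.linspace(0.0, 89.0, 90))
theta_t = np.arcsin(n1 * np.sin(theta_i) / n2)     # Snell's law, no TIR since n1 < n2

r_te = (n1*np.cos(theta_i) - n2*np.cos(theta_t)) / (n1*np.cos(theta_i) + n2*np.cos(theta_t))
r_tm = (n2*np.cos(theta_i) - n1*np.cos(theta_t)) / (n2*np.cos(theta_i) + n1*np.cos(theta_t))

print(bool(np.all(r_te**2 >= r_tm**2 - 1e-12)))    # True: R_TE >= R_TM at all angles
```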
Effect of lossy cavity on mode structure Most laser cavities are designed to have low loss.The spectral width of these low-loss optical modes is usually much less than the spectral spacing between modes.The large loss inherent in the GaAs deep-center laser mode, shown in Figure ??, has unexpected implications for the mode structure.The broad spectral width of lossy optical modes has important implications for the number of modes which go into laser action.For example, in random lasers with non-resonant feedback [48], where the spectral broadening of each mode is greater than the spectral spacing between modes, the large number of modes within the spectral broadening are coupled together.These modes go through threshold and laser action together.Thus, even at threshold, these random lasers are not single-mode, but have a large number of coupled modes going through threshold together.The GaAs deep-center laser is exhibiting something similar.The large number of modes within the spectral width of the broad cavity resonance in Figure ??a simultaneously go through threshold and laser action together.Thus, even at low injection, the GaAs deep-center laser is not expected to be single-mode. Small injection to achieve laser action The surprisingly small injection which achieves stimulated-emission and laser action can be explained [7] by the very efficient fast capture of free holes onto deep-centers.The physics which explains the fast hole capture onto deep-centers is that, in compensated semiconductors, the deep-acceptor complexes are negatively charged, and thus exhibit a large capture cross-section for positively-charged holes.The coefficient for capture of a free hole onto the native deep-acceptors (mainly V Ga and their complexes) in n-type GaAs has been measured [10] to be 2 × 10 −6 cm 3 -s −1 .For our deep-acceptor concentration of 2 × 10 19 cm −3 , this coefficient yields a capture lifetime for free holes of about 25 fs, and corresponds to a free hole diffusion length of 25 Å.Recent pump-probe measurements [32,33] have determined the capture lifetime of a free hole onto V Ga to be 100 fs.This agrees with our previous estimate [7] based upon determining a free hole diffusion length L P of 100 Å in the deep-center material.This hole diffusion length is thus 10 4 times smaller than in high quality GaAs [49].The hole diffusion length is indicated schematically in the device layer structure of Figure 13a as L P .The latter is important for two reasons.First, when L P is smaller, then holes are injected into a smaller region (one L P away from the p-n junction), and thus achieve a higher concentration within the smaller region.Second, when L P is smaller, then electrical injection probes a smaller total number of deep-centers: only those deep-centers within one L P of the p-n junction.A smaller total number of deep-centers constitutes a smaller population which is to be inverted.Population inversion is then easier to achieve within a smaller L P within the n-type deep-center layer because of: first, the higher hole concentration, and, second, the smaller total number of deep-centers, whose population is to be inverted.(In regions of the n-type deep-center layer which are more than one L P away from the p-n junction, the population is not inverted, but the absorption [7,8] is very small at the wavelengths of the deep-center transitions.)Thus, the small injection which achieves laser action is a consequence of the fast depopulation of the lower state of the optical transition (i.e., fast capture 
of free holes onto deep-centers).The latter is important for maintaining the population inversion.Moreover, a Franck-Condon effect allows transparency to be achieved at nearly zero injection.Finally, a large stimulated-emission rate (Einstein B coefficient) is observed even at the low photon densities which result from a low injection density in a low-Q cavity. Summary Thus far, we have demonstrated the first GaAs deep-center laser.Electrically-pumped broad-area lasers exhibited a threshold of less than 27 mA/cm 2 in cw-mode at room-temperature at the 1.54 µm wavelength.In Sample G having wet-etched facets, the longest wavelength TIR mode (Figure 1g) shows a superlinear L-I curve at low injection, and a linear regime in the L-I curve at higher injection.At injections high enough for laser action in this TIR mode, the longer wavelength optical emission is clamped at a constant value.The latter indicates that, at these injection levels, all the additional carriers supply the optical emission into the low-loss TIR mode, rather than the longer wavelength lossy modes.Both observations indicate single-pass laser action and gain pinning.With a resonant cavity and RIE facets, the stimulated-emission from sample L rises a significant three orders of magnitude as J 11 with a threshold less than 2 A/cm 2 .The threshold is less than 69 mA/cm 2 and 27 mA/cm 2 , respectively, for samples M and N.With a resonant cavity and RIE facets, samples H-N all show a pinning of the carrier distribution among the energy-levels at all injections greater than threshold.(Without a resonant cavity, previous work [6,7] showed that an increasing injection results in a marked shift in the carrier distribution, and a rise in shorter wavelength emission.)With a resonant cavity and RIE facets, the dominant mode in the optical emission spectrum (Figures 17c, 18b, and 19c) is the lossy vertical mode of Figure 1f.In order to be dominant, this lossy mode must undergo an optical gain which is large enough to overcome a significant loss.A large optical gain is another indicator of laser action. 
Conclusions We have reviewed recent work which allowed the first demonstration of a GaAs deep-center laser.First, we summarized some well-known properties of deep-centers in highly n-doped GaAs: the nature of the deep-acceptor complexes, the Franck-Condon effect, the observed photoluminescence.Second, we describe our recent work: the total radiative output in photoluminescence, the insensitivity of the photoluminescence with respect to a 90 • C rise above room temperature, the dependence of the photoluminescence and electroluminescence on the pump power, striking differences between electroluminescence and photoluminescence, a correlation between transitions to deep-states and the absence of bandgap emission, the fast capture of free holes onto deep-centers.We observed room-temperature stimulated-emission from GaAs deep-centers at low electrical injection.The evidence for stimulated-emission included: a superlinear L-I curve, a quasi-Fermi level separation large enough to satisfy the Bernard-Duraffourg criterion, and an optical gain large enough to overcome significant loss.The room-temperature stimulated-emission from GaAs deep-centers can be tuned very widely from the bandgap (about 900 nm) to half-the-bandgap (1,600 nm) by changing the electrical injection.The first GaAs deep-center laser was demonstrated with electrical injection, and exhibited a threshold of less than 27 mA/cm 2 in continuous-wave mode at room temperature at the important 1.54 µm fiber-optic wavelength.This small injection which achieves laser action can be explained by a fast depopulation of the lower state of the optical transition (i.e., fast capture of free holes onto deep-centers).The latter helps to maintain the population inversion.The evidence for laser action included: a superlinear L-I curve, an optical gain large enough to overcome significant loss, a clamping of the optical emission from lossy modes that do not show laser action, and a pinning of the population distribution during laser action. Outlook Three obvious directions of future research are: very low threshold lasers, tunability over a wide spectral range, and very short pulse generation.In this review, stimulated emission and laser action were observed at very low current density.This was in spite of the fact that the optical cavities showed very significant loss (70% transmission loss through the sample surface, and only a shallow RIE etch of 4.5 periods into the bottom DBR).A higher Q cavity would result in further reduction of the operating current.The addition of a top DBR or a top cladding layer in a slab waveguide are obvious paths for increasing the Q of the cavity.We also showed that the stimulated emission is not pinned to the bandgap energy, but could be tuned from 900-1700 nm.This should allow for semiconductor lasers having a very wide spectral tuning range.A wide spectral tuning range is useful for spectroscopy, lab-on-a-chip, chemical species identification, fiber-optics, and medicine.Optical gain spanning 1.3 to 1.6 µm in the same semiconductor is very unique, and could prove useful for very dense wavelength division multiplexing.Such tunable semiconductor lasers would be compatible with integration on GaAs and mass production.A further application for GaAs deep-centers is the generation of pulses having a very short duration.The latter requires optical gain over a wide spectral range, such as was observed in this review. Figure 1 .Figure 2 Figure 3 . 
Figure1.Lifetimes and measurement geometries[5].a, Lifetimes for electrons.b, Lifetimes for holes.c, Single-pass measurement from the edge of Sample F. The active layer was not inside a resonant cavity or waveguide.d, Single-pass measurement from the edge of Sample G.A bottom mirror (DBR) was added to increase the single-pass optical length.The rough wet-etched facets preclude the optical-feedback characteristic of resonant cavities.e, Edge emission from Samples H-N.The RIE facets allow a resonant cavity, with its characteristic optical-feedback.f, Lossy "vertical" waveguide mode, which is normally incident upon the sample surface and whose longitudinal wave vector K X is nearly zero.70% of the incident power is transmitted vertically and lost through the top surface.g, Low-loss longest wavelength total-internal-reflection (TIR) mode.Here, rays from within the semiconductor are incident upon the sample surface at the critical angle θ C for TIR.h, Shorter wavelength TIR mode.Here, rays from within the semiconductor are incident upon the sample surface at an angle greater than θ C .This mode makes fewer passes through the active region.i, Top view of the semiconductor surface showing the pixel dimensions. Figure 4 .Figure 5 . 3 . 5 . Figure 4. Room temperature PL[7] as a function of the optical pump power.a, The room temperature PL retains its spectral shape for all excitation laser peak powers up to 2 W. b, The PL peak at 1.31 µm as a function of the excitation laser peak power. Figure 7 . Figure7.a, The PL[7] from GaAs deep-centers and high-quality InGaAs MQWs at 295 K (dashed lines) and 385 K (solid lines).b, The PL of GaAs deep-centers at 295 K (dashed line) and 77 K (solid line).All PL have been normalized to the peak values. Figure 8 . Figure 8. a, Room temperature EL spectra [7] of p-n junction with the deep-center-layer as the n-region.b, Energy band diagram showing the p-layer on the left and the n-type deep-center layer on the right.At low injection, free holes are captured onto deep-centers within one L P of the p-n junction, and scatter up to midgap states.(Holes are labeled in red).At higher injection, the additional holes fill all deep-states within one L P of the p-n junction from the midgap down to the valence-band.As these deep-states fill up, the EL saturates at the long wavelengths corresponding to transitions to these deep-states. Figure 9 . Figure 9. Room temperature EL[7] from a device which does not have a p-layer.a, In a device consisting of only the n-type deep-center-layer, holes are created via impact ionization of electrons over a large volume of the deep-center-layer (all along the electron paths).b, The EL spectra from the deep-center-layer looks a lot like the PL when the holes are created in a large volume of the deep-center-layer (all along the electron paths).This is unlike Figure8a, where the EL spectra exhibits a spectral blue-shift relative to the PL, and where the holes in Figure8aexist only in a small volume (the first L P ) of the deep-center-layer. Figure 9. 
Room temperature EL[7] from a device which does not have a p-layer.a, In a device consisting of only the n-type deep-center-layer, holes are created via impact ionization of electrons over a large volume of the deep-center-layer (all along the electron paths).b, The EL spectra from the deep-center-layer looks a lot like the PL when the holes are created in a large volume of the deep-center-layer (all along the electron paths).This is unlike Figure8a, where the EL spectra exhibits a spectral blue-shift relative to the PL, and where the holes in Figure8aexist only in a small volume (the first L P ) of the deep-center-layer. Figure 12 . Figure 12.Electron and hole population distributions which explain both the shift to shorter wavelengths of the stimulated-emission (the superlinear L-I) and the clamping of the long-wavelength emission in Figure11with increasing injection.a, At small injection, holes scatter up to midgap states.b, At higher injection, the large number of holes at energies (e.g., E d2 ) further down from the midgap dramatically increases the shorter-wavelength (e.g., hν 2 ) optical-emission rate.Holes recombine radiatively (e.g., hν 2 ) with electrons before they can scatter up to midgap states.The optical-emission at hν 2 "uses up" the holes needed for long-wavelength emission. Figure 13 . Figure 13.Room-temperature spectra measured from the surface of Sample F [6]. a, Layer structure showing p-doped layer, n-type deep-center layer, and hole diffusion length.b, At low current, the hole quasi-Fermi level E F h is above midgap, and most of the optical emission is at long wavelengths (1.6 µm).At high current, E F h is pulled far below midgap, and most of the optical emission is at shorter wavelengths (1.1 µm).Here, ∆E F exceeds the transition energy E U d .This is the Bernard-Duraffourg signature for stimulated emission at E U d .d, Room-temperature optical-emission spectra measured from the sample surface at different injection in the absence of a resonant cavity.The observed blue shift in the optical emission corresponds to a similar increase in ∆E F with increasing injection. Figures 14 to 19 Figures 14 to19 show[5] measured L-I curves and spectra at room-temperature in cw-mode.Figure14shows the single-pass L-I curves from the same pixel on Sample G at different wavelengths.With the longer optical-path, the stimulated-emission at 1.35 µm, the longest wavelength for TIR, shows a rise as J 9 (which is a much sharper rise than from Sample F).At J>65 A/cm 2 , the optical-emission at 1.35 µm becomes proportional to the first power of J.At these J, the longer wavelength optical-emission (e.g., at 1.45 µm for the lossy vertical mode of Figure1f, and also at 1.55 µm) clamps at a constant value (zero slope in the L-I curve).The latter indicates that, at J>65 A/cm 2 , all the additional carriers supply the optical emission into the low-loss TIR mode, rather than the longer wavelength lossy modes.These two observations indicate gain pinning and single-pass laser action from Sample G. Wavelengths shorter than 1.35 µm also show stimulated-emission, and correspond to the modes in Figure1h. 
Figure 18. Room-temperature measurements [5] in cw-mode from a resonant cavity with both a bottom DBR and RIE facets. a, The stimulated emission at 1.54 µm from Sample M is seen to rise as J^2.3, beyond which the optical emission quickly becomes linear in J. The threshold current density is observed to be less than 69 mA/cm^2. b, Optical-emission spectrum from Sample M at 7 A/cm^2. All samples having a resonant cavity (bottom DBR plus RIE facets) showed that the low-loss total-internal-reflection mode, labeled G in Figure 18b, is suppressed, while the lossy vertical mode, labeled F in Figure 18b, dominates as the narrow spectral peak. This signifies that enough material gain exists to overcome the large 70% transmission loss incurred by the vertical mode with each trip to the sample surface.

Figure 20. Room-temperature optical-emission spectra [5] measured from the same pixel on Sample G in a single-pass geometry at different current densities. As expected, the low-loss total-internal-reflection mode, labeled G in Figure 20, dominates the spectra as narrow peaks, while the lossy vertical mode, labeled F in Figure 20, is suppressed. (Since no top DBR has been placed on Sample G, the normal component K_Z of the wave vector has a broad continuum of values, as determined by the broad cavity resonance in Figure 23a. Consequently, the Fabry-Perot modes are broadened by the width of the cavity resonance.)

Figure 21. Electron and hole population distribution in Sample F (no resonant cavity). a, At small injection, holes occupy mainly midgap states. b, At higher current injection, band filling occurs. In the absence of a resonant cavity, the number of holes in deep-levels increases with increasing injection, as seen in Figure 13.

Figure 22. Electron and hole population distribution in Samples H-N (with resonant cavity). In the presence of a resonant cavity, Figures 15 to 19 show that the hole population remains pinned in midgap states for all injections above threshold. The hole population in deep-levels no longer increases with increasing injection. Hence, the rate of radiative recombination of holes with electrons is greater than in Figure 21.

Figure 23. The Fabry-Perot modes of the GaAs deep-center laser. a, Reflection at normal incidence from the top surface of the wafer. Since the device structure has no top DBR, the normally incident optical modes show a 70% transmission loss, and the cavity resonance is broad. b, The lossy vertical mode (blue arrows) has a K_Z determined by the broad cavity resonance in Figure 23a. c, If K_Z were fixed at a single value, then the Fabry-Perot spectrum would be determined by the discrete values of the longitudinal K_X. d, Since K_Z has a continuum of values, as determined in Figure 23a, the Fabry-Perot modes are broadened by the width of the resonance.
Spectral broadening of the Fabry-Perot modes

Individual Fabry-Perot modes of the GaAs deep-center laser were difficult to resolve, because the large waveguide loss broadens the spectrum of individual modes. This is shown in Figure 23. It is important to note that the device is not a vertical-cavity surface-emitting laser (VCSEL), because the structure has no top DBR mirror. Thus, an optical mode, originating from within the semiconductor and normally incident upon the sample surface, would show a spectrally broad cavity resonance. The cavity resonance is indicated in Figure 23a as the spectrally broad dip in the reflection stopband at normal incidence. Figure 23b shows the lossy vertical mode as blue arrows. The normal component K_Z of the wave vector of this mode attains a broad continuum of values, as determined by the broad cavity resonance in Figure 23a. If K_Z were fixed at a single value, then the Fabry-Perot spectrum would be determined by the discrete values of the longitudinal component K_X of the wave vector, as pictured in Figure 23c. These discrete values of K_X are determined by boundary conditions on the electric and magnetic fields. Since the actual K_Z has a continuum of values, determined by the broad cavity resonance in Figure 23a, the Fabry-Perot modes are broadened by the width of the resonance, as shown in Figure 23d. The Fabry-Perot modes merge, and individual modes are difficult to resolve. Thus, since the vertical mode of Figure 1f is quite lossy and has a low Q, the Fabry-Perot modes in Figures 17c, 18b, and 19c would show significant spectral overlap, and thus could not be individually resolved.
18,464.6
2009-10-22T00:00:00.000
[ "Physics" ]
Hybrid classical integrability in squashed sigma models We show that SU(2)_L Yangian and q-deformed SU(2)_R symmetries are realized in a two-dimensional sigma model defined on a three-dimensional squashed sphere. These symmetries enable us to develop the two descriptions to describe its classical dynamics, 1) rational and 2) trigonometric descriptions. The former 1) is based on the SU(2)_L symmetry and the latter 2) comes from the broken SU(2)_R symmetry. Each of the Lax pairs constructed in both ways leads to the same equations of motion. The two descriptions are related one another through a non-local map. Introduction The notion of integrability is of significance in theoretical and mathematical physics. It enables us to study physical quantities non-perturbatively and often prove strong-weak dualities exactly as in the case of sine-Gordon and massive Thirring models [1]. Similarly, integrability would be an important building block toward the proof of AdS/CFT [2] (For an overview, see [3]). In this direction the symmetric coset structure of AdS spaces and spheres would play an important role [4]. A classification of symmetric cosets potentially applicable to AdS/CFT is performed in [5]. In applications of AdS/CFT to condensed matter physics, there is a motive to consider gravitational backgrounds, such as Schrödinger [6,7] and Lifshitz [8] spacetimes, represented by non-symmetric cosets [9]. As other examples, anisotropic geometries like warped AdS spaces and squashed spheres also appear as gravity duals to field theories in the presence of a magnetic field [10]. In condensed matter physics a magnetic field is of importance to vary the system, and hence the anisotropic geometries are very interesting to study. In this letter we will focus upon the classical integrable structure of a two-dimensional sigma model defined on a three-dimensional squashed sphere. Since the squashed sphere is described as a non-symmetric coset, it is not so obvious in comparison to symmetric cases such as principal chiral models [11]. The squashed sphere is described as a one-parameter deformation of round S 3 and the metric of squashed S 3 is represented by the left-invariant one-form J ≡ g −1 dg with an SU(2) group element g. The where ε ab c is the totally antisymmetric tensor. The constant C measures the deformation from S 3 . When C = 0 , the metric (1.1) describes the round S 3 with radius L . For C = 0 , the S 3 isometry SO(4) = SU(2) L × SU(2) R is broken to SU(2) L × U(1) R . The infinitesimal transformations under SU(2) L × U(1) R are given by Let us consider a two-dimensional non-linear sigma model whose target space is the squashed sphere (1.1). The action is given by The coordinates and metric of base space are x µ = (t, x) and η µν = diag(−1, +1) . Suppose that the value of C is restricted to C > −1 so that the sign of kinetic term is not flipped. Note that the Virasoro and periodic boundary conditions are not imposed here. Instead, we impose the boundary condition that the variable g(x) approaches a constant element rapidly as it goes to spatial infinity, That is, J µ (x) vanishes rapidly as x → ±∞ . The equations of motion are Multiplying T 3 and taking the trace, we obtain the conservation law for U(1) R , Then the expressions in (1.5) are simplified as We will show that the equations of motion (1.7) are reproduced from the two descriptions, 1) the rational description with SU(2) L and 2) the trigonometric one with U(1) R . 
Rational description First let us consider a description based on the SU(2) L symmetry. The SU(2) L Noether current j L µ is given by Then the conservation laws follow from (1.7). The number of dynamical degrees of freedom in this system is just three. It agrees with that of the conserved charges for SU(2) L . Thus the equations of motion and the conservation laws of SU(2) L are equivalent. The U(1) R current is automatically conserved due to the conservation laws of SU(2) L . Although the current (2.1) does not satisfy the flatness condition, it can be improved by adding a topological term so that it does. The improved currentj L µ is given bỹ and satisfies the flatness condition [12]: 3) The anti-symmetric tensor ǫ µν on the base space is normalized as ǫ tx = +1 . The coefficient of the last term in (2.2) is fixed so that the flat condition (2.3) is satisfied. For the improved SU(2) L current, the current algebra is deformed by the squashing parameter C as follows: Here we have used the vector index notation withj L,a µ ≡ −2Tr(T ajL µ ) . Due to the improvement, an infinite number of conserved charges can be constructed, for example, by following [13]. The first two of them are Here and θ(x) is a step function. Although the current algebra is deformed, the Yangian algebra [14] is still realized and the Serre relations are also satisfied [12]. This is the case even after adding the Wess-Zumino term [15], though the current algebra becomes much more complicated. It is a turn to construct a Lax pair. With the improved SU(2) L current, it can be constructed as a linear combination, where λ is a spectral parameter. Then the commutation relation leads to the whole equations of motion (1.7) . With (2.4), the monodromy matrix U L (λ) is defined as The symbol P means the path ordering. Due to (2.5), The classical r-matrix is derived by evaluating the Poisson bracket of the monodromy matrices. Following the prescription in [16], it is evaluated as and the classical r-matrix is Note that the resulting r-matrix does not contain C and is of the familiar rational type. Thus it satisfies the classical Yang-Baxter equation as a matter of course. Trigonometric description Next we shall consider another description based on the broken SU(2) R symmetry. We first show that the broken SU(2) R symmetry is enhanced to a q-deformed SU(2) R symmetry. Recall that the U(1) R current is given by The normalization is taken for later convenience. Now let us consider the following currents, The field χ(x) contained in (3.2) is given by and non-local. Thus the currents in (3.2) are also non-local ‡ . To show the conservation laws of non-local currents in (3.2) , it is necessary to use the boundary condition (1.4) and the identities, The Poisson brackets of j R,± t and j R,3 The conserved charges are constructed as Then the transformation laws generated by Q R,± are where ξ ± are new non-local fields given by . ‡ Note that a non-local symmetry concerning SU (2) R is discussed also in [17] from a T-duality argument. However, the one discussed here is different from this. A modification is done motivated by [18]. It is now straightforward to check the invariance of the equations of motion (1.7) directly and thus the transformation laws (3.4) give rise to an "on-shell" symmetry. When C = 0 , the transformation laws (3.4) are reduced to the usual SU(2) R ones. 
Thus the Poisson brackets of the charges lead to a q-deformed SU(2) algebra [14,19] : Here we have rescaled Q R,± as The normalization of (3.1) is fixed so that the expression of the second commutator in (3.5) is obtained. It is worth noting the C → 0 limit where round S 3 is reproduced and hence the SU(2) R Yangian should be recovered. This is the case as we can see by expanding the non-local where Q R,± (0) and Q R,± (1) are the SU(2) R Yangian generators. The third component of the Yangian generators is supplemented from the Poisson bracket of the + and − components. On the other hand, when considering the C → ∞ limit, Tr(T ± J µ ) and Tr(T 3 J µ ) have to vanish for the finiteness of Q R, 3 . This implies that a single element of SU(2) is specified. In analogy with the XXZ model, the C → ∞ limit resembles the Ising model limit. The fact that a single point is preferred would be analogous to that a ferromagnetic ground state is picked up in the Ising model. It is a turn to consider a Lax pair given by [20] S a andS a are related to J a t and J a x as follows: Here λ is a spectral parameter and w a (λ) are defined as The location of pole α is specified as By definition, α can take a complex value, while C must be real. Therefore α should be real or purely imaginary. When we take α = iβ (β: real) , then C = tan 2 β . Then the range of C is naturally restricted to the physical region C ≥ −1 . By rescaling λ as λ = αλ and taking the α → 0 limit in (3.7), the Lax pair of rational type for SU(2) R is reproduced. The commutation relation leads to the equations of motion (1.7) with the help of the flatness of J = g −1 dg . Then the monodromy matrix is defined as and it is conserved, Following the prescription in [16], the Poisson bracket of the monodromy matrices is evaluated as The resulting classical r-matrix is given by and is of trigonometric type. This classical r-matrix also satisfies the Yang-Baxter equation. Finally let us discuss the equivalence between the two descriptions. The current circumstance is quite similar to the Seiberg-Witten map [21]. On the one hand, The improvement term added in (2.2) may be regarded as a constant two-form flux. On the other hand, the existence of q-deformed SU(2) R implies a "quantum space" such as a noncommutative space. Discussions In this letter we have shown that SU(2) L Yangian and q-deformed SU(2) R symmetries are realized in a two-dimensional sigma model defined on a three-dimensional squashed sphere. According to these hidden symmetries, we have presented the two descriptions, 1) the rational description and 2) the trigonometric one. They are related one another via a non-local map and hence are equivalent. Recall that one may consider the Seiberg-Witten map in a field theory equipped with a magnetic field. On the other hand, warped AdS spaces, which are obtained from the squashed sphere through double Wick rotations, appear as gravity duals of condensed matter systems in the presence of a magnetic field. Therefore, the equivalence discussed here would be rather natural as a sigma model realization of Seiberg-Witten map in the field theory dual. The next question is what is the interpretation of this equivalence in gravitational theories. Warped AdS spaces appear also in the Kerr/CFT correspondence [22]. A threedimensional slice of the near-horizon extreme Kerr geometry [23] is described as a warped AdS 3 space. It would be interesting to consider the role of q-deformed SU(2) R in this direction. It may lead to a new source of entropy. 
Another issue is to construct the Bethe ansatz based on the SU(2)_L Yangian and q-deformed SU(2)_R symmetries. Such a Bethe ansatz should be called a "hybrid" Bethe ansatz, since it would be composed of the S-matrices of the XXX and XXZ models for the left and right sectors, respectively. In fact, quantum solutions are already known [24][25][26], though the classical integrable structure we revealed here has not been discussed there. It would be interesting to consider them in the context of AdS/CFT.
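As a numerical aside to the rational description above, the following sketch checks that the familiar su(2) rational r-matrix, taken here in the standard normalization r(λ − µ) = Σ_a T^a ⊗ T^a / (λ − µ) (an assumption about the overall normalization, which is left implicit above), satisfies the classical Yang-Baxter equation to machine precision.

```python
import numpy as np

# Check the classical Yang-Baxter equation
#   [r12(u12), r13(u13)] + [r12(u12), r23(u23)] + [r13(u13), r23(u23)] = 0
# for the rational r-matrix r(u) = (sum_a t^a (x) t^a)/u with t^a = sigma^a/2.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
t = [s / 2 for s in (sx, sy, sz)]
I2 = np.eye(2, dtype=complex)

def embed(a, b, positions):
    """Place a, b at the given tensor slots (0, 1, 2); identity elsewhere."""
    ops = [I2, I2, I2]
    ops[positions[0]], ops[positions[1]] = a, b
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

def r(u, positions):
    return sum(embed(ta, ta, positions) for ta in t) / u

def comm(A, B):
    return A @ B - B @ A

lam = [0.7, -0.3, 1.9]                       # arbitrary spectral parameters
u12, u13, u23 = lam[0]-lam[1], lam[0]-lam[2], lam[1]-lam[2]
cybe = (comm(r(u12, (0, 1)), r(u13, (0, 2)))
        + comm(r(u12, (0, 1)), r(u23, (1, 2)))
        + comm(r(u13, (0, 2)), r(u23, (1, 2))))
print("max |CYBE| =", np.abs(cybe).max())    # ~1e-16, zero to machine precision
```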
2,682.4
2011-07-19T00:00:00.000
[ "Physics" ]
Whole Exome Sequencing Identifies a Novel COL1A1 Missense Mutation Causing Dentinogenesis Imperfecta Type I Without Skeletal Abnormalities

Background: Osteogenesis imperfecta (OI) is a genetic disorder characterized by bone fragility, blue sclerae and dentinogenesis imperfecta (DGI), which are mainly caused by a mutation of the COL1A1 or COL1A2 genes that encode type I procollagen. Methods: The ultrastructure of dentin was analyzed by micro-CT, scanning electron microscopy, energy-dispersive spectroscopy analysis, nanoindentation testing and toluidine blue staining. Whole-exome sequencing (WES) was performed to identify the pathogenic gene. The function of the mutant COL1A1 was studied by real-time PCR, western blotting, and subcellular localization. Functional analysis in dental pulp stem cells (DPSCs) was also performed to explore the impact of the identified mutation on this phenotype. Results: WES identified a missense mutation (c.1463G > C) in exon 22 of the COL1A1 gene. However, the case reported herein exhibited only DGI-I in the clinical phenotype, with no bone disease or any other common abnormal symptom caused by COL1A1 mutation. In addition, ultrastructural analysis of the tooth affected with non-syndromic DGI-I showed that the abnormal dentin was accompanied by disruption of odontoblast polarization, reduced numbers of odontoblasts, loss of dentinal tubules, and reduction in hardness and elasticity, suggesting severe developmental disturbance. Moreover, the odontoblast differentiation ability of DPSCs isolated and cultured from the DGI-I patient was enhanced compared with that of DPSCs from an age-matched, healthy control. Conclusion: This study helped the family members to understand the disease progression, provides new insights into the phenotype-genotype association in collagen-related diseases, and may improve the clinical diagnosis of OI/DGI-I. In particular, based on the odontogenic differentiation markers DSPP and OCN, hDPSCs from the mutant proband showed an over-mineralization trend that may influence the quality of dentin, the expression of mutant COL1A1 protein was increased in HEK293T cells, and, to the best of our knowledge, the present study is the first to explore the influence of a COL1A1 mutation on odontoblastic differentiation based on hDPSCs.

Introduction

Dentinogenesis imperfecta (DGI) is a rare autosomal dominant disease that is traditionally classified as DGI-I, DGI-II and DGI-III, which represent a group of hereditary developmental conditions that affect the structure and composition of dentine [1]. While types II and III involve only the teeth, type I is the dental manifestation of osteogenesis imperfecta (OI), a connective tissue disorder characterized by bone fragility, which may be associated with blue sclerae, DGI and hearing loss. OI is traditionally classified as type I, type II, type III, and type IV, ranging from very mild types with nearly no fractures through variable skeletal deformities to intrauterine fractures and perinatal death [2,3]. However, owing to the high heterogeneity of patients with OI, establishing a definitive clinical diagnosis from the traditional classification can be difficult, particularly without biochemical or molecular genetic information [4]. Mutations in the type I collagen genes, COL1A1 and COL1A2, have been identified in approximately 90% of cases with OI. OI can severely impair patients' quality of life because the main causative gene product, type I collagen, is the major structural protein of bone, dentin, and other fibrous tissues [5][6][7].
Therefore, we can think that the mutation in type I collagen gene might alter the collagen brils, which may affect the formation and stability of bone and dentin minerals and nally result in a variety of abnormal phenotypes [8]. Although a lot of type I collagen genes mutations had been reported, DGI without OI has never been linked with COL1A1 mutations [9,10], and little is known on phenotype changes of dentin structure and ultrastructure in patients with DGI-I [11][12][13]. The main pathological feature in DGI-I is the abnormality of dentin mineralization. Mineralization represents a homeostasis and depends on the normal differentiation of human dental pulp stem cells (DPSCs) [14,15]. Moreover, DPSCs are highly considered for odontogenesis and reparation of pulp tissue [16]. Interestingly, thus far, no data exist on the potential functional roles that DPSCs may have during dentin development in DGI-I. On this account, human DPSCs can be a valuable model to investigate odontoblastic differentiation impacted by COL1A1 mutation. Here, we describe a patient was heterozygous for the novel mutation c.1463G>C (p.G488A) in COL1A1. Notably, she didn't have any bone problems or other phenotypes associated with OI, but only clinically evident DGI phenotypes such as opalescent teeth, obliterated pulp chambers and marked cervical constriction of bulbous crown. Meanwhile, we elucidate morphological alterations of defective dentin in patients affected by DGI-I, by ultrastructural and DPSCs-based analyses. In this study, we report that COL1A1 mutation causes non-syndromic human DGI-I. Patient and clinical examination An otherwise healthy 18-year-old Chinese female presented with abnormity of tooth colour, came to Nanfang Hospital (Guangdong province, China) for speci c treatment. Clinical assessment and radiographic examinations were performed on the subjects of the family. All procedures in this study were approved by the institutional review board and ethics committee of Nanfang Hospital, an a liate of Southern Medical University. Micro-CT Analysis With their informed consent, the wisdom teeth extracted from patient (III:2) and the age-matched control female were subjected for ultrastructural analysis. To obtain detailed 3D structural information inside the samples, micro-CT was performed using a μCT-Sharp (Micro-M90 China) with the following settings:70 kV, 100 µA, an isotropic resolution of 20 µm and a scan angle of 360°. 3D models of the teeth and dental pulp were reconstructed with Med Project analysis software. The CT images were calibrated using hydroxyapatite mineral of known densities [0.25 g·cm -3 and 0.75 g·cm -3 ] as elsewhere reported [40]. Measurement of the mineral density of the enamel and dentine of each tooth was carried out using Image J software. Scanning electron microscopy (SEM) The whole tooth sample was embedded in epoxy and sectioned into slices at a thickness of 5mm along the mesial-distally plane using a precision cutter. Samples were sputter-coated with gold using an auto sputter coater (Agar Scienti c, Elektron Technology, UK). A Hitachi SU-70 scanning electron microscope (Hitachi, Japan) was used to observe the microstructure of samples at × 1k and × 10k magni cations. Energy-Dispersive Spectroscopy (EDS) analysis EDS analysis was realized on the same teeth evaluated for SEM observations. Quantitative element analysis of Ca, P, Na and Mg was carried out and quantitative analysis to locally determine the composition of the target tissue (in weight %). 
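A minimal sketch of the two-point phantom calibration described in the Micro-CT Analysis subsection above, mapping reconstructed grey values to mineral density; the phantom grey values below are assumed placeholders, since only the phantom densities (0.25 and 0.75 g·cm-3) are quoted.

```python
import numpy as np

# Two-point linear calibration from CT grey values to mineral density, using
# hydroxyapatite phantoms of known density. The grey values are illustrative
# placeholders; in practice they come from the reconstructed images.
phantom_density = np.array([0.25, 0.75])        # g/cm^3
phantom_grey    = np.array([1200.0, 3100.0])    # mean grey value of each phantom (assumed)

slope, intercept = np.polyfit(phantom_grey, phantom_density, 1)

def grey_to_density(grey):
    return slope * grey + intercept

print(f"{grey_to_density(2600.0):.2f} g/cm^3")  # density of a dentin ROI with grey 2600
```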
Toluidine Blue Staining The teeth without decalci cation were mesial-distally cut at a thickness of 10μm with Leica Histocore Autocut (Germany). The slices were stained with 1% toluidine blue (TB) and observed under Leica DMI6000B (Leica Microsystems, Germany). Nanoindentation test The tooth slices were polished until no discernible scratches could be seen under an optical microscope. The Hardness and Young's modulus of enamel and dentin were measured by using nanoindentation instrument (TI-900, TriboIndenter, Hysitron, USA) with Berkovich diamond indenter. The detecting areas were randomly selected at a distance of 1mm along enamel-dentinal junction (EDJ). The experimental parameters are as follows: the strain rate is 0.05 s -1 , the depth limit is 2 μm, peak hold time for 10s, 200μm apart [41,42]. The drift rate of the material caused by temperature uctuations in the environment was monitored to correct all test data throughout a loading-hold-unloading cycle for each indentation test. Mutation analyses Genomic DNA was extracted from peripheral blood of the proband by phenol-chloroform method and was delivered to the Genesky-Shanghai (China) for whole-exome sequencing (WES) analysis. Then, to con rm the causative mutation, co-segregation analyses in all family members were performed. High-resolution melting analyses using 200 genomic DNA samples from random individuals were performed to investigate the mutation frequency in the general population. Cell transfection and Subcellular localization HEK293 cells were cultured in 12-well dishes and transfected with WT or MUT plasmids using Lipofectamine™ 2000 transfection reagent (Invitrogen). After 36h of transfection, the cells were rinsed three times with phosphate-buffered saline (PBS, Sigma-Aldrich, USA) and nuclei were stained with 0.1μg/ml 4′, 6-diamidino-2-phenylindole (DAPI, Sigma) for 10 min at room temperature. Subsequently, a confocal uorescence microscope (LSM 880, Carl Zeiss AG, Germany) was then used to image the cells. Quantitative real-time polymerase chain reaction Quantitative RT-PCR was applied to examine the expression of COL1A1 DSPP and OCN. After 36 hours transfection of HEK293 cells or after 14 days odontogenic differentiation of hDPSCs, total RNA was isolated using Trizol reagent (Invitrogen) and reverse transcribed into cDNA using the PrimeScript™ RT reagent Kit (Takara, China). These genes primers have been published elsewhere [43,44]. Gene expression levels were calculated using the (2 -ΔΔCT ) method. Western blotting analysis Western blot was applied to examine the expression of COL1A1 DSPP and OCN. After 36 hours transfection of HEK293 cells or after 14 days odontogenic differentiation of hDPSCs, cells were collected and washed with cold PBS and lysed with cell lysis buffer (Beyotime, China) supplemented with 1% phenylmethanesulfonyl uoride (PMSF, Beyotime) to prevent protein degradation. Total protein (20 μg) was separated by 10% SDS-polyacrylamide gel and transferred onto a polyvinylidene di uoride (PVDF) membrane (Millipore, USA). After being blocked in 5% nonfat milk in Tris-buffered saline containing 0.1% Tween-20 for 1 h at room temperature, the membranes were then incubated with anti-EGFP (Ray Antibody Biotech, China), anti-DSPP (Santa Cruz, USA), anti-OCN (Abcam, USA) and anti-GAPDH (Sigma, USA) overnight at 4 °C. 
The next day, the membranes were incubated for 1h at 37°C with the corresponding secondary antibodies (Proteintech, China), and the immunoreactive proteins were visualized with the ECL Kit (Beyotime, China) according to the manufacturer's instructions. Cultivation of hDPSCs and Alizarin Red S staining Isolation of hDPSCs was performed as described elsewhere [43]. For odontoblastic differentiation experiments, the cells were cultured in an odontogenic medium (OM), consisting of DMEM, 10% of FBS, 50mg/mL ascorbic acid (Sigma, USA), 5 mM β-glycerophosphate (Sigma, USA), and 10 nM dexamethasone (Sigma, USA). For ARS staining, when hDPSCs were 70% con uent, the ordinary medium was replaced with the OM to induce the odontogenic differentiation. After 14 days, the induced cells were xed for 15 min at room temperature in 4% paraformaldehyde and then stained for 30 min with 2% ARS (Beyotime, China). Statistical analyses Results are presented as means ± standard deviation (SD) of at least three independent biological replicates. Biological replicates were analyzed as at least three technical replicates per experimental point. The signi cance of differences was determined using one-way analysis of variance. The observed differences were considered statistically signi cant at p values< 0.05. Clinical phenotype The teeth of the proband were typically amber and translucent and show signi cant attrition, especially in molar teeth (Fig. 1a-e). Radio-graphic examination of the teeth revealed bulbous crowns with prominent cervical constrictions. The pulp chambers and root canals of affected teeth were smaller than normal or completely obliterated (Fig. 1f). Radiographs of limb bones and knee revealed no signi cant osteopenia, bony destructive process, periosteal reactions, or evidence of any acute fractures, dislocations, or injuries ( Fig. 1g-j). Besides, bone mineral density, serum calcium, alkaline phosphatase, sclera and echocardiography revealed no remarkable ndings. The overall characteristics of the clinical and radiographic results supported a clinical diagnosis of DGI-I (Fig. 4a). Ultrastructure of the teeth Micro-CT analysis shown that a bulbous shape and color change in the proband teeth (Fig. 2a) meanwhile the 3D image of pulp showed an irregularly obliterated pulp chamber and scattered pulp stones. The mineral density measurement showed that the DGI-I teeth had similar scores in the enamel, but lower scores in the dentin compared to the control teeth (Fig. 2b). The SEM images of the control dentin showed the regularly organized dentin tubes and an evenly calci ed matrix, while the DGI-I teeth presented very few dentin tubules and enlarged malformed dentin tubes (Fig. 2c). At high magni cation, the peritubular dentin of the control teeth is highly calci ed and minerals are densely packed, however, the peritubular dentin is more porous and unevenly less calci ed. The TB staining observed that there was severe disorganization of the dentin tubules in DGI-I teeth with more irregular dentin in the area towards the pulp. Moreover, the number of odontoblasts adjacent to the mineralized dentin layer was visibly reduced and an obvious difference in odontoblast morphology was observed among them. The roof odontoblasts of the control teeth were columnar in shape, with the nucleus located at the basal end of each odontoblast. However, in the patient's teeth, the odontoblasts became attened as a result of lost polarity and the odontoblast layer appeared disorganized (Fig. 2e). 
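Relating back to the Statistical analyses subsection above, the following sketch shows a one-way ANOVA comparison of the kind used for the group differences reported in the next subsections; the hardness values are illustrative placeholders with three replicates per group, not the measured data.

```python
from scipy import stats

# One-way ANOVA comparing dentin hardness between groups (placeholder values, GPa).
control_dentin = [0.91, 0.88, 0.94]
dgi_dentin     = [0.52, 0.47, 0.55]

f_stat, p_value = stats.f_oneway(control_dentin, dgi_dentin)
print(f"F = {f_stat:.1f}, p = {p_value:.4f}")   # p < 0.05 -> significant difference
```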
Mechanical properties of enamel and dentine Nanoindentation test shown the nanoindentation load-displacement curves of the enamel and dentin, which indicated the dentin hardness values and elastic modulus of the DGI-I teeth were signi cantly reduced compared to the control values. But there was no difference in enamel value (Fig. 3a). Exact values of mechanical properties (average and standard deviation) are summarized in (Fig. 3c). EDS data analysis shown elemental measurements of P concentration was lower in the DGI-I teeth than the control teeth, whereas Na, Mg and Ca had no differences (Fig. 3b). The identi ed DGI-I showed mutation of COL1A1 WES analysis showed that a novel heterozygous missense variant (c.G1463C, p.G488A) in COL1A1 exon 22 was found to be the cause for DGI-I in the proband of the family. Sanger sequencing shown that this mutation was not identi ed in any other members of the family (Fig. 4b). Meanwhile, no mutations are detected in the genomic DNA samples from 200 healthy individuals (data not shown). I-TASSER indicated that the COL1A1 c.1463G>C mutation changed the tertiary structure of the protein, causing the changes of portions of the alpha-helix and random coil structure. The Gly488 position is highly conserved in the other known EDA proteins, suggesting that it has an important function in the protein. Functional analysisafter plasmid transfection As shown in Fig. 5c, there was no differences in the subcellular localization of the MUT versus WT protein. In addition, no difference was observed in the levels of mRNA between cells transfected with the MUT plasmid compared with those transfected with WT plasmid. However, western blot analysis revealed that the expression of mutant COL1A1 protein was increased compared with the WT protein (Fig. 5d). Changes in odontogenic genes and proteins Flow cytometric analysis of the surface markers of hDPSCs and the adipogenic and odontoblast differentiation abilities of hDPSCs were shown in the supplementary information. To determine whether the COL1A1 mutation affected hDPSCs differentiation, we analyzed the changes in the levels of odontogenic-speci c mRNA and protein markers in induced hDPSCs using qRT-PCR and western blotting, respectively. The expression of COL1A1, DSPP, and OCN in hDPSCs with the COL1A1 mutation was signi cantly higher than that in control hDPSCs for 14 days after differentiation (Fig. 6a). Moreover, western blotting showed that the protein expression of these genes in DGI-I hDPSCs was signi cantly upregulated compared with that in the control hDPSCs after odontoblastic differentiation (Fig. 6b). The results proven that the DGI-I hDPSCs had a higher odontogenic differentiation ability, and ARS staining con rmed it also (Fig. 6c). Discussion As the most abundant tooth matrix protein, type I collagen plays crucial roles in maintaining the integrity of tooth structure and tooth strength. It is an ordered heterotrimer that consisted of two α1(I) chains and one α2(I) chain, which are encoded by COL1A1 and COL1A2 genes, respectively [17]. Mutations in COL1A1 or COL1A2 show as the following ways: one is quantitative defect including frameshift, nonsense, etc. lead to the synthesis of a reduced amount of normal type I collagen; the other is structural defect including missense mutation, mainly involving glycine replacement within Gly-Xaa-Yaa repeat. 
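As a small illustration of how the cDNA change c.1463G>C maps onto the reported p.G488A substitution, the sketch below locates the affected codon and translates the wild-type and mutant codons; the wild-type codon is assumed to be GGC purely for illustration, since the third base is not specified here (any GGN codon encodes glycine).

```python
# Map the cDNA change c.1463G>C onto its codon and amino-acid change.
CODON_TABLE = {"GGA": "Gly", "GGC": "Gly", "GGG": "Gly", "GGT": "Gly",
               "GCA": "Ala", "GCC": "Ala", "GCG": "Ala", "GCT": "Ala"}

cdna_pos, ref, alt = 1463, "G", "C"
codon_number = (cdna_pos - 1) // 3 + 1          # -> 488
offset = (cdna_pos - 1) % 3                     # -> 1, the middle base of the codon

wt_codon = "GGC"                                # assumed glycine codon (illustrative)
assert wt_codon[offset] == ref
mut_codon = wt_codon[:offset] + alt + wt_codon[offset + 1:]
print(f"codon {codon_number}: {wt_codon} ({CODON_TABLE[wt_codon]}) -> "
      f"{mut_codon} ({CODON_TABLE[mut_codon]})")  # p.G488A
```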
In the collagen triple helix, a Gly-substitution missense mutation produces structural deformation of the triple helix, leading to destabilization of the helical structure and affecting the synthesis of collagen [17][18][19][20]. In our study, a single-base substitution in a Gly codon led to a Gly-to-Ala substitution, c.1463G>C (p.G488A) (Fig. 4b). The identity of the residue replacing Gly appears to be closely related to the degree of clinical severity in OI cases. Substitutions of Gly by Ala, the smallest replacement residue, are often mild [21,22], which is consistent with the proband showing only a DGI-I phenotype. Our tertiary structural analysis revealed that the effects of the Gly substitution on the conformation were relatively local (Fig. 5b).

Differences in dentin formation between molars can be related to the hard-tissue development of the tooth germs, eruption times, and length of exposure to oral factors [23][24][25][26]. The phenotype differed greatly among the molar teeth of the proband: the first and second molars displayed totally obliterated pulp chambers, whereas the third molars only had some irregular pulp stones without excessive dentin formation or obliteration of the pulp cavity, which made it possible to verify the odontoblast differentiation ability of dental pulp stem cells in follow-up studies.

Human DPSCs can differentiate into odontoblasts that secrete a mineralized matrix with the mineral and molecular characteristics of dentin, and their normal differentiation is essential for dentin development and formation, providing a valuable model for investigating odontoblastic differentiation [14]. In this study, we compared the odontogenic abilities of hDPSCs from the proband with those from a healthy control. For this purpose, we performed ARS staining to monitor mineralization, and we examined the expression levels of the odontogenic differentiation markers DSPP and OCN. The results provided further evidence that the hDPSCs from the mutant proband showed an over-mineralization trend compared with the control, which may therefore influence the quality of dentin formation. In addition, the expression of the mutant COL1A1 protein was also increased in hDPSCs, which is consistent with the results in HEK293T cells. To the best of our knowledge, the present study is the first to explore the influence of a COL1A1 mutation on odontoblastic differentiation based on hDPSCs.

From a microstructural point of view, dentin consists of a mineral-rich (hypermineralized) tubular phase, termed peritubular dentin, next to a collagen-rich fibrillar network phase called intertubular dentin [27]. The fibrillar collagen comprises about 85% type I collagen and 15% types III and V collagen [28]. Consistent with former reports, the SEM images showed that the tubules in the mutant dentin were almost completely occluded by peritubular dentin (Fig. 2c), which reduces the apparent size and number of the pores [29]. Previous studies have shown increased mineralization to be a characteristic feature of OI bone, achieved by densely packed mineral particles as a result of defective collagen and leading to high fragility [30,31]. Recently, some scholars proposed that enlarged dentin collagen fibrils might cause poor packing of the collagen molecules and ultimately affect dentin mineralization [28]. However, we also observed that the quality of mineralization in the DGI-I dentin was far from satisfactory (Fig. 2b). 
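As context for the hardness and elastic-modulus values discussed next, the following minimal sketch illustrates how such values are conventionally extracted from nanoindentation load-displacement curves like those in Fig. 3a, using the standard Oliver-Pharr analysis; the input numbers, the ideal Berkovich tip-area function, and the assumed Poisson ratios are illustrative assumptions rather than the study's measured data.

```python
import math

def oliver_pharr(p_max_mN, s_mN_per_nm, h_max_nm,
                 nu_sample=0.3, e_indenter_GPa=1141.0, nu_indenter=0.07):
    """Hardness and elastic modulus from a single unloading curve (Oliver-Pharr).

    p_max_mN: peak load (mN); s_mN_per_nm: unloading stiffness dP/dh at peak load
    (mN/nm); h_max_nm: depth at peak load (nm). Diamond-tip constants are standard;
    the sample Poisson ratio is an assumption.
    """
    eps = 0.75                                      # geometric constant, Berkovich tip
    h_c = h_max_nm - eps * p_max_mN / s_mN_per_nm   # contact depth (nm)
    a_c = 24.5 * h_c ** 2                           # ideal Berkovich area function (nm^2)
    hardness_GPa = p_max_mN / a_c * 1e6             # 1 mN/nm^2 = 1e6 GPa
    e_reduced_GPa = (math.sqrt(math.pi) / 2) * s_mN_per_nm / math.sqrt(a_c) * 1e6
    # Remove the indenter compliance to obtain the sample modulus.
    e_sample_GPa = (1 - nu_sample ** 2) / (1 / e_reduced_GPa
                                           - (1 - nu_indenter ** 2) / e_indenter_GPa)
    return hardness_GPa, e_sample_GPa

# Illustrative numbers only (not measured data): 10 mN peak load,
# 0.05 mN/nm unloading stiffness, 600 nm maximum depth.
H, E = oliver_pharr(10.0, 0.05, 600.0)
print(f"hardness = {H:.2f} GPa, elastic modulus = {E:.1f} GPa")
```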
In the case of the DGI-I teeth, the hardness was found to be significantly lower and the exposed collagen overall presented a lower elasticity than the control samples (p < 0.05), which is consistent with the high brittleness observed clinically (Fig. 3c). In this study, the hardness values of normal dentin were in good agreement with previous studies of dentin [32,33]. Moreover, the element P showed a lower level in the DGI-I dentin than in normal dentin, which supports the positive association between dentin hardness and mineral content [34,35].

Odontoblasts are neural-crest-derived cells secreting predentin and dentin, and their dysfunctional status may account for a variety of structural changes in dentin from patients with DGI-I [36,37]. As seen in our study, the irregular shapes and inverted polarity of the odontoblasts further confirmed that the COL1A1 mutation can result in abnormal dentin. In view of the over-mineralization trend of the cultured hDPSCs, the abnormal odontoblast morphology, the decreased hardness of the dentin, and the clinically obliterated dental pulp, we can assume that an initial, slow ''entombing'' of the dysfunctional odontoblasts is followed by fast, disordered matrix deposition and mineralization, eventually leading to complete pulp obliteration [38,39].

Conclusion In conclusion, we report a novel mutation in exon 22 of COL1A1 causing non-syndromic DGI-I in a Chinese family, which expands the known pathogenic spectrum of the COL1A1 gene. The detailed molecular and clinical features will be useful for exploring phenotype-genotype correlations.

Abbreviations OI: osteogenesis imperfecta; DGI: dentinogenesis imperfecta; WES: whole-exome sequencing; DPSCs: dental pulp stem cells; SEM: scanning electron microscopy; EDS: energy-dispersive spectroscopy; TB: toluidine blue; EDJ: enamel-dentinal junction; WT: wild type; MUT: mutant type; DMEM: Dulbecco's modified Eagle's medium; FBS: fetal bovine serum; ARS: Alizarin Red S staining.

Figure 1 Clinical images. a-e Intraoral views of the proband. The teeth of the proband were typically amber and translucent and showed significant attrition, especially in the molar teeth. f-j Panoramic radiographs and radiovisiography images. The pulp chambers and root canals of the affected teeth were smaller than normal or completely obliterated. Radiographs of the bones and knee revealed no significant osteopenia, bony destructive processes, periosteal reactions, or evidence of any acute fractures, dislocations, or injuries.

Figure 2 Teeth ultrastructural analyses. a 3D reconstruction of the tooth CT data and 3D reconstruction of the pulp chambers. b Typical CT sections through the teeth, presented in false colour calibrated with respect to mineral density to generate mineral density maps. c SEM of representative exfoliated teeth. The SEM images of the control dentin showed regularly organized dentin tubules and an evenly calcified matrix, while the DGI-I teeth presented very few dentin tubules and enlarged, malformed dentin tubules. d Toluidine blue staining of the teeth. The control dentin shows regularly organized lines, while the proband dentin has irregular lines and waved structures, which are loosely packed. Moreover, the number and morphology of the odontoblasts adjacent to the mineralized dentin layer were visibly different. d, dentin; od, odontoblast; pd, predentin.

Figure 5 Effect of the mutation on COL1A1 function. a Conservation analysis of this variant by PolyPhen-2. 
The result showed that amino acid 488 of COL1A1 is highly conserved among different species. b The 3D structure of the mutated COL1A1 predicted by I-TASSER differed from that of the wild type. c Subcellular localization of COL1A1 in HEK293 cells. The mutant COL1A1 was localized in the cytoplasm, similar to the wild-type protein. d The mRNA and protein expression levels of COL1A1 in HEK293 cells. Mutant COL1A1 mRNA expression did not differ from that of the wild type in HEK293 cells (P > 0.05), whereas mutant COL1A1 protein expression was increased compared with the wild type. Values are means ± SD of three independent experiments (*P < 0.05 and **P < 0.01).
5,196.8
2021-04-22T00:00:00.000
[ "Medicine", "Biology" ]
Cefepime-Induced Neurotoxicity Cefepime is a common antibiotic used to treat various infections such as pneumonia, skin infections, and intra-abdominal infections owing to its broad gram-positive and gram-negative spectrum. However, patients with acute kidney injury, end-stage renal disease, and renal transplantation are disproportionately at higher risk of developing complications from the administration of cefepime, secondary to its predominantly renal excretion. Current guidelines prescribe renal dosing of cefepime, dependent on the glomerular filtration rate, to prevent toxicity. This study presents a rare case in which an acutely hospitalized patient undergoing chronic renal transplant rejection was administered renal-dose cefepime. Despite renal dosing, the patient developed neurotoxicity that manifested as delirium, inability to tolerate oral intake, and non-convulsive status epilepticus. Solely adjusting for renal dysfunction may be inadequate to prevent the accumulation of cefepime metabolites, and toxicity may present in an atypical manner. Such possibilities emphasize the need for continued evaluation of a patient's mentation during cefepime administration. The incidence of cefepime-induced neurotoxicity needs to be evaluated and researched thoroughly.

Introduction Cefepime is a fourth-generation cephalosporin that is excreted primarily by the kidneys. Cefepime-induced neurotoxicity is a well-documented adverse effect in patients with renal failure. Symptoms are correlated with decreased cefepime clearance due to a reduced glomerular filtration rate (GFR) as well as increased central nervous system (CNS) penetration due to blood-brain barrier dysfunction [1]. The symptoms include depressed consciousness, encephalopathy, aphasia, myoclonus, and seizures [2]. While the mechanism of action behind this phenomenon is not well understood, it is thought to be related to concentration-dependent gamma-aminobutyric acid (GABA) antagonism. Most neurotoxicity case reports are associated with inappropriate dosing of cefepime [1]. However, a minority of reported cases (<25%) occur despite appropriate dosing of the medication. Typically, treatment involves discontinuation of the precipitating drug [3]. Here we present a unique case of a 64-year-old woman with renal failure who became altered on hospital day six due to non-convulsive status epilepticus while receiving renally dosed cefepime.

Case Presentation A 64-year-old Caucasian woman with a past medical history of scleroderma with pulmonary fibrosis, renal transplant 18 years ago, chronic pericardial effusion, and hypertension presented to the hospital with two days of right-sided neck pain and stiffness associated with numbness and tingling of her hands. The preliminary work-up was remarkable for an acute kidney injury with severe electrolyte derangements including hypocalcemia, hypomagnesemia, and hyponatremia. After two days of aggressive electrolyte repletion, the patient's symptoms resolved, but kidney function continued to slowly deteriorate. The regional transplant center was contacted in coordination with the hospital nephrology team, who deemed this an acute on chronic renal transplant rejection. The patient began to develop progressively worsening urinary retention requiring repeated straight catheterizations and eventually a Foley catheter. During this time she began to develop diffuse abdominal pain, prompting an infectious work-up in which two blood cultures and a urine culture were positive for Pseudomonas aeruginosa. 
A ten-day course of cefepime was started for the concurrent infection, with consideration of initiating hemodialysis in the setting of continued worsening renal function. The infectious disease team was consulted for further recommendations, given the patient's immunocompromised state, and for de-escalation of antibiotics since the cultures were pan-sensitive. On day six of cefepime administration, the patient developed acute delirium after undergoing placement of a tunneled dialysis catheter. Initially, the delirium was attributed to side effects of sedation, but it began to worsen. Interestingly, she was always able to answer standard orientation questions and partake in a linear conversation, but she developed a fluctuating mental status. This was emphasized by her husband, who endorsed unusual conversations with his wife, and was further evident from inappropriate affect, including hysterical laughter. The infectious work-up and computed tomography of the head were unrevealing. In light of the worsening mental status and progressively decreasing oral intake, an electroencephalogram (EEG) was ordered to further assess the altered mental status (Figure 1).

FIGURE 1: Electroencephalography (EEG). The black encircled area shows triphasic waves rather than frontal sharp waves, most likely consistent with subclinical status epilepticus.

The EEG was remarkable for subclinical status epilepticus. The patient was loaded with levetiracetam and placed on maintenance dosing for the remainder of the admission. Cefepime was discontinued and meropenem was started on infectious disease recommendations because of the reported rare side effect of altered mental status from cefepime. Over the course of the next three days, the patient's mental status, oral intake, and hemodynamic stability improved remarkably. At her outpatient primary care physician appointment two weeks later she was reported to be doing well and had no associated complaints or confusion.

Discussion Cefepime, a common fourth-generation cephalosporin, has been reported in the literature for its ability to cause neurotoxicity, particularly in patients with renal failure. While the mechanism of cefepime-induced neurotoxicity (CIN) is still being researched, it has been hypothesized to be related to a decrease in GABA release from nerve terminals through a mechanism that is not fully understood, resulting in hyperexcitation of the neurons and depolarization of the postsynaptic membrane [1]. This manifests clinically as seizures, myoclonus, and encephalopathy [4]. It was first reported in the literature in 1999 in a patient with end-stage renal disease who was found to have high cefepime levels and subsequently developed altered mental status, myoclonus, and generalized tonic-clonic seizures [2]. CIN occurs primarily in patients with renal dysfunction, as the antibiotic is primarily renally excreted. For this reason, the U.S. Food and Drug Administration has recommended that the drug be renally dosed [5]. Other risk factors for neurotoxicity include inappropriate dosing of the drug, previous brain injury (due to CNS penetration), older age, and disruption of the blood-brain barrier secondary to sepsis, uremia, or CNS infection [3]. Symptoms of CIN, including encephalopathy, seizures, and EEG changes, generally begin to appear approximately four days after initiation of the antibiotic. 
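As an aside on the GFR-dependent renal dosing noted above, the following minimal, hypothetical sketch shows the kind of arithmetic involved: the creatinine clearance estimate uses the standard Cockcroft-Gault formula, but the dose-reduction tiers and the patient values are illustrative placeholders only, not the actual cefepime label recommendations, which should always be taken from the prescribing information.

```python
def cockcroft_gault_crcl(age_years: float, weight_kg: float,
                         serum_creatinine_mg_dl: float, female: bool) -> float:
    """Estimate creatinine clearance (mL/min) with the Cockcroft-Gault formula."""
    crcl = ((140.0 - age_years) * weight_kg) / (72.0 * serum_creatinine_mg_dl)
    return crcl * 0.85 if female else crcl


def illustrative_dose_fraction(crcl_ml_min: float) -> float:
    """Placeholder dose-reduction tiers; NOT the actual cefepime label values."""
    if crcl_ml_min > 60:
        return 1.0   # full dose
    if crcl_ml_min > 30:
        return 0.5   # e.g., roughly a 50% reduction
    return 0.25      # e.g., roughly a 75% reduction, in the spirit of the
                     # 50-75% reduction described in this report


# Purely illustrative inputs (not the patient's actual data):
# a 64-year-old woman, 60 kg, serum creatinine 2.5 mg/dL.
crcl = cockcroft_gault_crcl(64, 60, 2.5, female=True)
print(f"estimated CrCl = {crcl:.0f} mL/min, "
      f"illustrative dose fraction = {illustrative_dose_fraction(crcl)}")
```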
Encephalopathy generally manifests as tremor, aphasia, myoclonus, drowsiness, stupor, coma, confusion, delirium, and agitation. In a minority of cases (approximately 13%), a seizure was the only isolated symptom [3,5]. The diagnosis of CIN is based on neurological symptoms starting days after cefepime initiation, EEG findings consistent with generalized periodic discharges with a triphasic wave pattern, and resolution of symptoms and EEG abnormalities following discontinuation of the medication. Furthermore, CIN is a diagnosis of exclusion, and other causes of toxic and metabolic encephalopathy must be ruled out prior to making the diagnosis [5]. Treatment of CIN involves discontinuation of the drug. In a minority of cases with severe presentations, hemodialysis has been initiated for rapid removal of the drug [3]. Antiepileptic drugs are not indicated unless the patient presents with convulsive seizures or non-convulsive status epilepticus [5]. Resolution of symptoms and EEG abnormalities should be expected approximately one to three days after discontinuation of cefepime [3].

Our case presents a patient with acute renal failure requiring hemodialysis who was treated with renally dosed cefepime for coverage of her Pseudomonas bacteremia secondary to a urinary tract infection. CIN developed approximately six days after the initiation of cefepime, manifested by encephalopathy and EEG abnormalities consistent with subclinical status epilepticus. Other organic causes, including metabolic etiologies and stroke, were ruled out. Three days after discontinuation of cefepime, the patient demonstrated an increase in oral intake and returned to her baseline mental status. This case adds to the compilation of literature surrounding CIN owing to its unique presentation of encephalopathy and EEG abnormalities in a patient with acute renal failure who received appropriately renally dosed cefepime. Previous literature surrounding CIN has focused on inappropriate renal dosing of cefepime as the major risk factor for neurotoxicity. In our patient, the cefepime dose was reduced by 50-75% from the recommended dose in accordance with her decreased creatinine clearance. Nonetheless, the patient developed signs and symptoms of CIN within the expected time frame, which resolved appropriately with discontinuation of the antibiotic. We theorize that CIN may have developed in our patient despite appropriate dosing due to both her older age and disruption of the blood-brain barrier from uremia and sepsis. While the majority of cases regarding CIN have focused on inappropriate dosing of cefepime as the major risk factor, this case demonstrates that cefepime-induced neurotoxicity can occur even when cefepime is appropriately renally dosed. It is important to understand this risk when initiating cefepime in an individual with renal failure and to have a low threshold for discontinuation of the medication if the patient begins to demonstrate signs or symptoms of CIN. Future research surrounding the incidence and severity of CIN with renally dosed cefepime is warranted given the increasing use of the antibiotic for the treatment of sepsis.

Conclusions Recognizing cefepime-induced neurotoxicity can be challenging due to the multitude of factors that are more commonly associated with encephalopathy and seizures. However, in the absence of other discernible sources, cefepime-induced toxicity should be considered as a diagnosis of exclusion. 
Dose adjustment can reduce the incidence of toxicity, but without close monitoring even therapeutic levels can lead to neurotoxicity. While this is one independent case at our institution, further research is needed to identify risk factors other than poor renal function to help recognize the true incidence and assist clinicians in their management. Additional Information Disclosures Human subjects: Consent was obtained or waived by all participants in this study. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
2,265.2
2021-09-01T00:00:00.000
[ "Medicine", "Biology" ]