Risk Analysis of Bankruptcy in the U.S. Healthcare Industries Based on Financial Ratios: A Machine Learning Analysis

The prediction of bankruptcy risk poses a formidable challenge in the fields of economics and finance, particularly within the healthcare industry, where it carries significant economic implications. The burgeoning field of healthcare electronic commerce, continuously evolving through technological advancements and changing regulations, introduces additional layers of complexity. We collected financial data from 1265 U.S. healthcare industries to predict bankruptcy based on 40 financial ratios, using multi-class classification machine learning models across various industry subsectors and market capitalizations. The exceptionally high post-tuning accuracy rates, exceeding 90%, along with strong performance metrics, solidified the robustness and exceptional predictive capability of the gradient boosting model in bankruptcy prediction. The results also demonstrate the power and sensitivity of financial ratios in predicting bankruptcy. The Altman models highlight the return on investment (ROI) as the most important parameter for predicting bankruptcy risk in healthcare industries. The Ohlson model identifies return on assets (ROA) as an important ratio specifically for predicting bankruptcy risk within industry subsectors. Furthermore, it underscores the significance of both the ROA and the enterprise value to earnings before interest and taxes (EV/EBIT) ratios as important parameters for predicting bankruptcy based on market capitalization. Recognizing these ratios enables proactive decision making that enhances resilience. Our findings contribute to informed risk management strategies, allowing for better management of healthcare industries in crises like those experienced in 2022 and even on a global scale.

Introduction

Bankruptcy risk analysis, a critical domain in economic and financial research, presents significant challenges due to the varied benchmarks across industries and their complex interactions with diverse factors in finance, law, and economics [1,2]. Particularly within the healthcare sector, these challenges are magnified due to the sector's substantial influence on public health and socio-economic stability. This sector not only plays a pivotal role in employment and serves as a key focus for investment but also substantially impacts the broader medical and financial ecosystems through the repercussions of hospital and health insurance provider bankruptcies [1-10]. Moreover, the correlation between healthcare sector financial distress and personal bankruptcies, notably driven by uninsured medical expenses, underscores this issue's severity; nearly two-thirds of all U.S.
personal bankruptcies are medically related [7,8].Additionally, the interaction between instabilities in the healthcare industry bankruptcies has been recognized [11].In this context, the burgeoning field of electronic commerce (e-commerce) emerges as a significant factor.The evolution and integration of e-commerce have revolutionized investment patterns and financial stability across many sectors, including healthcare.E-commerce platforms enhance the accessibility and distribution of healthcare services and products, thereby influencing the financial dynamics of healthcare institutions.These platforms also affect consumer behavior and spending on health services, potentially increasing the financial risk for healthcare providers that fail to adapt to digital marketplaces.Furthermore, e-commerce introduces new regulatory challenges and competitive pressures that can exacerbate financial instability in the healthcare sector.As e-commerce continues to expand, its impact on the stock market dynamics that underpin the financial health of medical entities cannot be overlooked.Therefore, the interplay between e-commerce and healthcare financial stability is critical for predicting potential bankruptcies in this sector.This study aims to address this nuanced landscape by focusing on the bankruptcy risk in U.S. medical and healthcare entities, influenced by stock market dynamics. The year 2022 presents a unique context for analyzing bankruptcy risk in the U.S. healthcare industries, characterized by challenging economic conditions.This period saw a bear market, record-high inflation levels not seen in over four decades, and a marked increase in interest rates [Federal Reserve, 2023], all unfolding alongside the ongoing recovery from the COVID-19 pandemic.Such conditions precipitated significant market volatility, acutely impacting growth-sensitive sectors, notably medical and healthcare.Our research is positioned to delve into bankruptcy risk within this particular timeframe, aiming to provide healthcare stakeholders with critical insights.By analyzing this period, we intend to alert them to potential financial instabilities and propose preemptive strategies to effectively navigate and mitigate bankruptcy risks during comparable periods of economic adversity. Literature Review Given the capability of financial ratios to reflect critical elements of a company's financial status, several studies have investigated the use of these ratios in assessing bankruptcy risk, particularly in sectors outside of medical and healthcare.These studies typically employ traditional statistical methods, such as discriminant and regression analysis, to analyze bankruptcy risks.Supriyanto et al. conducted an investigation into financial distress in mining companies, with a specific focus on financial ratios [12].Similarly, Amalia et al. examined financial ratios to predict company bankruptcy in the cigarette industry [13]. Lee et al. explored the financial ratios of savings banks in relation to bankruptcy through quantitative empirical analysis using statistical models [14].Additionally, Tian et al. 
studied bankruptcy prediction across international markets with financial ratios by employing adaptive statistical techniques and a discrete hazard statistical model [15].Traditional statistical methods can generally identify the significance and relationship of each individual parameter, such as financial ratios, with bankruptcy.Nevertheless, given the multi-factorial nature of bankruptcy, it is imperative to consider the impact of all financial ratios simultaneously for bankruptcy prediction.Moreover, the intricate interconnections between these ratios should also be taken into account during this predictive analysis.Therefore, these traditional statistical analyses may not be fully capable of discerning the intricate interplay between financial parameters and bankruptcy [16]. Artificial intelligence and machine learning methods have demonstrated a greater aptitude than traditional statistical analysis for addressing concerns in evaluating bankruptcy risk.Having emerged as pivotal technologies in the fourth industrial revolution [17,18], these methods enable the development of sophisticated models that consider both the multifaceted roles of financial ratios and the interrelationships between them.Recent studies have delved into the application of diverse machine learning algorithms to ascertain which models are most effective in predicting bankruptcy and financial distress [19][20][21][22].These investigations aim to enhance the accuracy and reliability of bankruptcy predictions by leveraging the advanced analytical capabilities of machine learning.Additionally, some of these studies have applied these models to assess the dynamics of financial distress and bankruptcy in non-healthcare industries across various subsectors [23][24][25][26][27][28][29][30][31][32][33][34][35].Carmona et al. implemented innovative machine learning algorithms to predict bankruptcy in French firms [36].They assessed the prediction of business financial distress and bankruptcy in French firms using a novel machine learning model.In another study, they proposed an enhanced gradient boosting machine learning algorithm for bankruptcy prediction, and their experimental results substantiated its superiority in comparison to traditional feature selection methods [37].Lombardo et al. contributed to the prediction of corporate bankruptcy by utilizing machine learning algorithms that estimate survival probabilities and predict defaults, employing time-series accounting data [38].They also suggested an innovative method capable of detecting emerging risks within particular sectors and industries, thus aiding in the identification of market segments where firms remain undisturbed by disruptions [39].Bragoli et al. also developed a predictive model that classified solvent and bankrupt firms, utilizing the enhanced gradient boosting machine learning algorithm [40]. Some studies have concentrated on predicting bankruptcy risk during specific critical time periods characterized by economic crises.This focus is crucial as economic downturns significantly alter the financial landscape, thereby impacting the accuracy and relevance of bankruptcy prediction models.Liu et al. focused on high-tech and startup businesses in Europe after the 2008 economic crisis, emphasizing the predictive capabilities of machine learning, which could potentially even contribute to preventing failures or acquisitions [41].Papík et al. 
explored the prediction of financial distress and bankruptcy risk among small-and medium-sized enterprises (SMEs) during the COVID-19 crisis using machine learning [42].They advised against the use of qualitative indicators in SME prediction models and noted a shift in the relationship with historical financial data between 2019 and 2020 due to the COVID-19 pandemic.Liu et al. explored the influence of public health crises on the economic prospects of SMEs by utilizing machine learning analysis [43].Their investigation into the predictive aspects of credit default risks for SMEs post COVID-19 underscores the effectiveness of machine learning methods in assessing and controlling credit risk, thereby elucidating its impact on SMEs. Objective Previous research has laid important groundwork in bankruptcy risk analysis; however, unresolved issues persist, and new challenges have emerged, particularly in the context of the economic climate of 2022 [1,38].There is considerable uncertainty regarding the most effective financial metrics for predicting bankruptcy in the healthcare industry.Notably, the application of bankruptcy prediction within the healthcare industry, especially through machine learning techniques, remains underexplored.This study innovatively focuses on predicting bankruptcy across various subsectors and market capitalizations within the healthcare industry.We utilized a comprehensive set of financial ratios for prediction, acknowledging their widespread applicability and establishing a new precedent in financial analysis.Financial ratios offer a diverse set of financial metrics that provide a concise yet powerful snapshot of a company's financial health and resilience [44].This research aims to address these gaps by focusing on the bankruptcy risk analysis of healthcare industries in the United States, with particular emphasis on the economically tumultuous year of 2022.Our primary objective was to employ machine learning algorithms to analyze the intricate relationship between financial ratios and bankruptcy risk in the healthcare industry.Additionally, we aimed to identify the most effective financial ratios that are most crucial and sensitive for predicting bankruptcy in these sectors.Our approach seeks to provide strategic insights for stakeholders in the healthcare stock market, including investors, policymakers, shareholders, and healthcare professionals.Through this, we aspire to contribute to the financial stability and resilience of the medical and healthcare sectors by facilitating informed decision making. Data Collection Due to the lack of precise financial information on the medical and healthcare stock market in available financial databases, we gathered financial data from the U.S. 
healthcare stock market via Sec.gov, covering 1265 companies. The data were sourced from official 2022 reports, including balance sheets, income statements, and cash flow statements, all publicly available. For the purpose of this study, the time-dependent financial ratios were derived from data spanning a full calendar year, specifically from 1 January 2022 to 1 January 2023. This approach was deliberately chosen to align with standard financial reporting practices, which typically reflect the financial status of an entity at both the beginning and the end of each year. The calculated ratios in Table 1 encapsulate the financial dynamics within this period, offering a snapshot of each company's financial health at these critical fiscal junctures. The detailed formulas and definitions for these 40 financial ratios are provided in Table 1 and Supplementary Material S1. These ratios encompass a multitude of nuanced sub-parameters across key financial dimensions, including but not limited to indebtedness, cash flow dynamics, profitability metrics, and sector-specific peculiarities, all of which are fundamentally extrapolated from an extensive array of financial data pertaining to stocks. For instance, the debt-to-equity ratio sheds light on a company's leverage structure, whereas ratios such as the operating cash flow ratio, price-to-free cash flow ratio, and the cash ratio collectively provide a multifaceted understanding of its liquidity situation. In a similar vein, the current ratio and the debt ratio deliver a concise yet informative overview of the company's debt landscape. On the front of profitability analysis, indicators such as the gross profit margin, net profit margin, and return on equity are of paramount importance. Further, ratios like price-to-sales and price-to-book value are essential in gleaning insights on the peculiarities inherent to the specific industry in question. Collectively, these ratios are imperative in constructing a comprehensive and insightful portrayal of a firm's financial vitality and its strategic standing within its respective sector.

Study Design and Machine Learning Analysis

This study employed a machine learning classification approach to analyze bankruptcy risk. Forty financial ratios (detailed in Table 1) served as features within the machine learning methodology, with the bankruptcy value for each of the 1265 industries as the prediction target. To ensure rigorous computation of these targets (bankruptcies), we utilized three widely recognized empirical bankruptcy prediction models: the Altman Z-score model, the modified Altman model, and the Ohlson model, commonly referred to as the O-score. These models have been empirically validated in numerous studies for their effectiveness in predicting bankruptcy risk, providing a robust and reliable framework for the analysis [45-49]. The formulations for these three bankruptcy models are provided in Equations (1)-(3). The machine learning classification algorithms used to predict bankruptcy were logistic regression, decision tree, naive Bayes, K-nearest neighbors (KNN), and support vector machine (SVM), along with ensemble learning algorithms including adaptive boosting (AdaBoost), gradient boosting, and random forest. A comprehensive explanation of the machine learning algorithms employed in the current study, along with their corresponding equations, is provided in Supplementary Material S2.
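Equations (1)-(3) themselves are not reproduced in this excerpt. For orientation only, the widely cited published forms of the three scores are sketched below; the coefficients are the textbook values, and the modified Altman variant shown is the common four-variable non-manufacturer form, which may differ in detail from the authors' exact specification.

```latex
% Widely cited forms of the three bankruptcy scores (textbook coefficients; a sketch,
% not necessarily the exact Equations (1)-(3) used in the paper).
\begin{align*}
  % Original Altman Z-score: X_1 = working capital/total assets, X_2 = retained
  % earnings/total assets, X_3 = EBIT/total assets, X_4 = market value of equity/
  % total liabilities, X_5 = sales/total assets.
  Z   &= 1.2\,X_1 + 1.4\,X_2 + 3.3\,X_3 + 0.6\,X_4 + 1.0\,X_5,\\
  % A common modified (non-manufacturer) Altman form, with X_4 taken as the book
  % value of equity over total liabilities.
  Z'' &= 6.56\,X_1 + 3.26\,X_2 + 6.72\,X_3 + 1.05\,X_4,\\
  % Ohlson O-score, with TA deflated by a GNP price-level index in the original.
  O   &= -1.32 - 0.407\,\ln(\mathrm{TA}) + 6.03\,\tfrac{\mathrm{TL}}{\mathrm{TA}}
         - 1.43\,\tfrac{\mathrm{WC}}{\mathrm{TA}} + 0.0757\,\tfrac{\mathrm{CL}}{\mathrm{CA}}
         - 1.72\,\mathrm{OENEG} - 2.37\,\tfrac{\mathrm{NI}}{\mathrm{TA}}
         - 1.83\,\tfrac{\mathrm{FFO}}{\mathrm{TL}} + 0.285\,\mathrm{INTWO} - 0.521\,\mathrm{CHIN}.
\end{align*}
```

In these standard forms, lower Z (or Z'') values and higher O values indicate greater bankruptcy risk; OENEG and INTWO are indicator variables (total liabilities exceeding total assets, and two consecutive years of negative net income, respectively), FFO is funds from operations, and CHIN is the scaled change in net income.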
Market capitalization serves as a powerful indicator of a company's size, reflecting not only the equity value perceived by the market but also providing insight into the financial robustness of the firms.Previous studies have shown that the market capitalization of industries is of particular importance in the analysis of bankruptcy risks [54][55][56].On the other hand, in examining bankruptcy risk in the healthcare industries, industry subsector classification plays a pivotal role, as each subsector contributes distinct elements to the overall risk profile.Given the significance of both market capitalization and industry subsectors in bankruptcy risk analysis, this study employs two approaches: Bankruptcy risks are analyzed based on the industry subsector and the market capitalization of industries.This dual perspective enables a nuanced assessment of how industry dynamics and company size characteristics interact to influence bankruptcy likelihood.To implement these approaches, we applied two distinct multi-class machine learning classifications.Firstly, the targets were classified into four different industry subsector classes.Secondly, the targets were classified into five different market capitalization classes.It is important to note that we had three targets (Altman Z-Score, modified Altman, and Ohlson bankruptcy models), and as a result, we repeated the entire machine learning multi-class classification procedure three times.Finally, the algorithm's predictions were compared to the test outcomes using confusion matrices to evaluate accuracy, precision, recall, and F1 scores.The algorithm with the best performance metrics was selected as the final method for bankruptcy prediction.It is worth mentioning that all targets were not completely balanced.Hence, we employed stratified sampling in the train-test split to preserve the target variable bankruptcy distribution in both training and testing sets.Additionally, StratifiedKFold cross-validation was used, demonstrating the suitability of our chosen methodology for maintaining the proportion of classes across folds.These primary techniques effectively solved issues related to minor imbalances during the prediction process, making advanced methods such as over-sampling or under-sampling unnecessary.The machine learning code was implemented using Python software, version 3.11. It should be noted that the four industry subsector classes, defined based on the healthcare sectors of the U.S. industries, include (1) biotechnology; (2) medical, which comprises medical devices, instruments and supplies, and medical distribution; (3) pharmacology, encompassing drug manufacturers-specialty and generic, as well as general-and pharmaceutical retailers; and (4) healthcare, which covers diagnostics and research, health information services, medical care facilities, and healthcare plans.The five market capitalization classes were defined as follows [57]: (1) large-cap, with a market value of USD 10 billion or more; (2) mid-cap, with a market value between USD 2 billion and USD 10 billion; (3) small-cap, with a market value between USD 300 million and USD 2 billion; (4) micro-cap, with a market value between USD 50 million and USD 300 million; and (5) nano-cap, with a market value of less than USD 50 million. 
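To make the workflow described above concrete, the following is a minimal Python sketch (not the authors' code) of a stratified train-test split, StratifiedKFold cross-validation, and multi-class gradient boosting evaluation; the file and column names are hypothetical placeholders.

```python
# Minimal sketch of the stratified multi-class classification workflow described above.
# Assumes a DataFrame with 40 financial-ratio columns and a multi-class target
# (0 = solvent, 1-4 = bankrupt by industry subsector); all names are hypothetical.
import pandas as pd
from sklearn.model_selection import train_test_split, StratifiedKFold, cross_val_score
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import classification_report, confusion_matrix

df = pd.read_csv("healthcare_ratios_2022.csv")              # hypothetical input file
X = df[[c for c in df.columns if c.startswith("ratio_")]]   # the 40 financial ratios
y = df["target_subsector"]                                  # label from a bankruptcy model

# Stratified split preserves the class distribution in both training and testing sets.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

model = GradientBoostingClassifier(random_state=42)

# StratifiedKFold keeps class proportions constant across folds.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
cv_acc = cross_val_score(model, X_train, y_train, cv=cv, scoring="accuracy")
print("CV accuracy: %.3f +/- %.3f" % (cv_acc.mean(), cv_acc.std()))

model.fit(X_train, y_train)
y_pred = model.predict(X_test)
print(confusion_matrix(y_test, y_pred))        # basis for confusion-matrix heatmaps
print(classification_report(y_test, y_pred))   # accuracy, precision, recall, F1
```

The same pipeline would be repeated for the market-capitalization labels and for each of the three bankruptcy-model targets.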
This study recognizes that dividing industries into various subsectors and market capitalizations has enabled a multi-class classification that includes some minority classes.Recent research suggests that for non-medical data, such as financial datasets, resampling methods can effectively address the prediction challenges associated with minority classes [58].However, the research also advises that, in most cases, standard classifiers are preferable for non-medical datasets similar to those analyzed in this study.Based on these insights, this study employs standard classifiers and does not utilize resampling or cost-sensitive methods. Statistical Analysis For the assessment of baseline characteristics within our dataset, we employed statistical analysis methods and computed measures, including the mean and standard error (SE), for 40 distinct financial ratios.The Shapiro-Wilk test was applied to determine the normality of the distribution within our dataset.To ascertain the statistical significance of individual features, we utilized the one-way analysis of variance (ANOVA).A p-value threshold of 0.05 was set for determining statistical significance. Results Table 2 presents a summary of the financial ratios, including the mean, SE, and p-values, which underscore the observed differences in the financial ratios.A three-dimensional principal component analysis (PCA) was conducted on the dataset, comprising industries and their corresponding financial ratios (Figure 1).The resulting graphical panels highlight variations in the distribution of industries, categorized by industry subsectors and market capitalization tiers.This method of dimensionality reduction effectively preserves the inherent variance within the dataset, facilitating a more nuanced examination of the complex attributes of the industries [59].Within the derived three-dimensional space, individual industries are represented as points, with their positions reflecting the synthesized financial ratios.The results, as depicted in Figure 2, reveal a significant bankruptcy risk for healthcare industries in the U.S. during 2022.The bankruptcy risk, calculated based on the Altman Z-Score, modified Altman, and Ohlson models, is depicted at 88.5%, 62.1%, and 43.2%, respectively, as shown in the non-green segments of Figure 2. 
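As a brief sketch of the screening described in the Statistical Analysis subsection above (reusing the hypothetical DataFrame from the earlier snippet), the Shapiro-Wilk test and one-way ANOVA can be run per ratio with SciPy:

```python
# Hedged sketch of the per-ratio normality test and one-way ANOVA described above;
# `df`, `ratio_1`, and `group` are hypothetical column names.
from scipy import stats

stat, p_normal = stats.shapiro(df["ratio_1"])               # Shapiro-Wilk normality test
groups = [g["ratio_1"].values for _, g in df.groupby("group")]
f_stat, p_anova = stats.f_oneway(*groups)                   # one-way ANOVA across classes
print(f"Shapiro-Wilk p = {p_normal:.4f}, ANOVA p = {p_anova:.4f}")  # significant if p < 0.05
```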
The observed disparities in bankruptcy risk evaluations are attributable to the distinct methodological underpinnings of the Altman models and the Ohlson model. The Altman models, originally calibrated for manufacturing firms, predominantly rely on financial ratios reflective of this sector's characteristics. In contrast, the Ohlson model boasts a design intended for broad applicability across diverse corporate environments, such as service-centric industries, and integrates an extensive array of financial ratios. In our investigation, StratifiedKFold cross-validation was systematically applied to each classifier, ensuring that the folds preserved a proportional distribution of classes. This procedure, in conjunction with detailed hyperparameter optimization (as documented in Supplementary Material S3), was critical in reducing the likelihood of data leakage. The application of these meticulous stratification and tuning strategies led to enhancements in the models' predictive accuracy, with increases ranging from 7.1% to 10.3%. In addition, all models exhibited acceptable accuracy post tuning, with the sole exception of the naive Bayes classifier (Supplementary Material S3). It should be noted that the naive Bayes model, despite its probabilistic foundation and advantages in simplicity and computational speed, did not achieve as high an accuracy as the other models in predicting bankruptcy within the U.S. healthcare industries. The relatively lower performance of this model may be explained by its assumption of independence among predictors, a condition presumably not met by the complex interactions inherent in our dataset. The consistency of performance improvements post tuning across the remaining models substantiates the robustness of our analytical framework and suggests the absence of overfitting in our results. Our final tuned models exhibited acceptable accuracy rates for predicting bankruptcy targets derived from the Altman Z-score model, with performance by classifier as follows: SVM (87.3%), KNN (82.3%), naive Bayes (68.0%), logistic regression (86.3%), decision tree (86.4%), AdaBoost (87.3%), random forest (90.6%), and gradient boosting (90.8%), as detailed in Supplementary Material S3 for industry subsectors. The accuracies for the modified Altman bankruptcy model were 88.1%, 83.3%, 66.7%, 87.3%, 87.4%, 84.1%, 90.3%, and 90.6%, respectively, while the Ohlson model showed comparable performance with accuracies of 87.9%, 80.0%, 69.1%, 87.8%, 88.2%, 74.5%, 89.3%, and 90.0%. Classifications based on market capitalization also yielded acceptable accuracy rates, broadly aligning with those based on industry subsectors (Supplementary Material S3). The results indicate that, in addition to accuracy, the precision, recall, and F1 scores for the models fall within a satisfactory range. Finally, the results of our analysis demonstrated that both the random forest and gradient boosting algorithms exhibited exceptional accuracy and robustness in predicting bankruptcy based on all three bankruptcy models utilized. Upon closer examination of the results, the gradient boosting model was found to have a marginally higher accuracy compared to the random forest algorithm. Therefore, we chose to focus our analysis on this particular algorithm.
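The hyperparameter search itself is documented in Supplementary Material S3 and is not reproduced here. As a generic illustration of how such tuning can be combined with stratified cross-validation, the following sketch uses an arbitrary parameter grid (not the settings reported in the paper) and the training data from the earlier snippet.

```python
# Illustrative hyperparameter search for the gradient boosting classifier;
# the grid values are placeholders, not the tuned settings used by the authors.
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV, StratifiedKFold

param_grid = {
    "n_estimators": [100, 300, 500],
    "learning_rate": [0.01, 0.05, 0.1],
    "max_depth": [2, 3, 4],
}
search = GridSearchCV(
    GradientBoostingClassifier(random_state=42),
    param_grid,
    cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=42),
    scoring="accuracy",
    n_jobs=-1,
)
search.fit(X_train, y_train)        # X_train, y_train as in the earlier sketch
print(search.best_params_, search.best_score_)
```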
Figure 3 presents a heatmap of the confusion matrix for our final tuned gradient boosting model, offering a visual representation of the model's performance across different classes. The gradation of colors reflects the proportion of true positives, true negatives, false positives, and false negatives, thereby providing an intuitive overview of classification accuracy and misclassification patterns. The heatmap of the confusion matrix, as depicted in Figure 3, further corroborates the high accuracy and robust performance of the gradient boosting model in predicting bankruptcy based on financial ratios. The predominance of high values along the matrix's main diagonal and lower values in off-diagonal cells indicates a substantial rate of correct predictions relative to the misclassifications. This visualization underscores the gradient boosting model's effectiveness in distinguishing between the various classes.
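A heatmap of this kind can be produced directly from a fitted classifier; a minimal sketch, reusing the objects from the earlier snippets, follows (it is illustrative only and not the plotting code used for Figure 3).

```python
# Minimal confusion-matrix heatmap, analogous in spirit to Figure 3;
# reuses `model`, `X_test`, and `y_test` from the earlier sketch.
import matplotlib.pyplot as plt
from sklearn.metrics import ConfusionMatrixDisplay

ConfusionMatrixDisplay.from_estimator(
    model, X_test, y_test, cmap="Blues", normalize="true"
)
plt.title("Gradient boosting: solvency (0) vs. bankruptcy classes")
plt.show()
```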
Given the critical importance of minimizing false negatives in bankruptcy prediction, where failing to identify an impending bankruptcy could have drastic financial repercussions, the receiver operating characteristic (ROC) curve's ability to illustrate the trade-off between sensitivity (true-positive rate) and specificity (false-positive rate) across different thresholds makes it a fitting choice.It facilitates the identification of an optimal balance, ensuring that the model minimally overlooks actual cases of bankruptcy, which is of paramount concern in financial analytics.Therefore, we compared ROC curves in Figure 4 based on different bankruptcy models and classifications for the gradient boosting model.The areas under the ROC curves for the Altman Z-score, modified Altman, and Ohlson bankruptcy prediction models were notably high (greater than 90%) for classifications based on industry subsectors and market capitalization.The high ROC curve areas show the exceptional capability of our gradient boosting algorithm in differentiating bankrupt from solvent healthcare industries.This suggests not only the accuracy of our predictions but also confirms their consistency and robustness across diverse industry contexts and varying scales of market capitalization.The superior performance of the gradient boosting model and the suitable size of our dataset reinforce the notion that this efficient and transparent model is advantageous, particularly in scenarios where complexity is unnecessary, and high-dimensional deep learning approaches do not confer additional benefits. Discussion Predicting the probability of bankruptcy remains among the most formidable challenges in contemporary economic and financial research [1,2].Specifically, the evaluation of bankruptcy risk in the industries and, more precisely, within the medical and healthcare sectors holds immense significance due to its profound financial implications.Consequently, the present study was designed to achieve a precise prediction of bankruptcy risk for U.S. healthcare industries based on comprehensive financial ratios that encapsulate a wide array of financial metrics.This allows for the early detection of financial fragilities, the protection of economic interests, and the provision of timely insights for stakeholders.Effective prediction aids in the enhancement of management strategies, the attainment of well-informed advancement and risk control measures, and ensures the provision of high-quality healthcare services to not only the American population but potentially the global community as well.Our results indicated high bankruptcy risks as predicted by Altman, modified Altman, and Ohlson bankruptcy models, which can be attributed to the distinctive economic landscape of 2022.This can be related to significant industry volatility driven by the COVID-19 pandemic, escalating inflationary pressures, and a series of aggressive interest rate hikes in 2022.These factors led to increased borrowing costs that impacted market liquidity and valuations, particularly within growth-oriented sectors such as healthcare, which are traditionally more sensitive to interest rate changes. 
The High Robustness and Predictive Power of Gradient Boosting Machine Learning Algorithm Logical and meaningful improvements in cross-validation accuracies following hyperparameter tuning (by 7.1-10.3%),combined with the remarkably high levels of post-tuning accuracy (exceeding 90%) for the gradient boosting model, as well as elevated precision, recall, and F1 scores-as evidenced by the confusion matrices-and areas under the ROC curve surpassing 0.9 jointly affirm the robustness and exceptional predictive capability of this model.Therefore, the gradient boosting model can efficiently predict the bankruptcy risk of healthcare industries based on financial ratios with high accuracy and robustness.The study by Carmona et al. on predicting financial distress in French firms corroborated and further supported our findings regarding the effectiveness of gradient boosting algorithms in financial distress prediction [36,37].The results also revealed that even the accuracies of our other machine learning algorithms were higher than those reported in some corresponding studies related to the prediction of bankruptcy risk based on other financial parameters, even with a larger database [60].This underscores the power and sensitivity of financial ratios in predicting bankruptcy within the healthcare industries, providing creditors, investors, and other stakeholders with the opportunity to take proactive measures to mitigate financial challenges. Important Financial Ratios Sensitive to Bankruptcy Prediction The feature importance analysis presented in Figure 5 indicated that, according to both Altman models, the most important and sensitive ratio for predicting bankruptcy risk across various industry subsectors and market capitalizations is the return on investment (ROI).ROI is a measure comparing net income to investment.A high ROI indicates that the investment's gains are favorable compared to its cost.Conversely, the Ohlson model identified the return on assets (ROA) as the important ratio in predicting bankruptcy risk specific to industry subsectors, with a feature importance of 7.8%.ROA is a financial metric showing a company's profitability relative to its total assets.It helps determine if a company efficiently uses its assets to generate profit.For the predictions based on market capitalization, the Ohlson model highlighted not only the ROA ratio with an importance of 6.3% but also the enterprise value to earnings before interest and taxes (EV/EBIT) ratio, which has an importance of 6.6%.The comprehensive details regarding the relative importance of all parameters under investigation are illustrated in Figure 5. Notwithstanding the absence of prior scholarly work specific to bankruptcy risk prediction within the healthcare industries, the extant literature on bankruptcy assessment in disparate industries also reinforces the significance of several ratios analogous to those we identified as pivotal [12][13][14][15]61]. 
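The feature-importance ranking discussed above can be read directly from a fitted gradient boosting model; a short sketch, reusing the fitted model and hypothetical ratio column names from the earlier snippets, is shown below.

```python
# Rank the 40 financial ratios by impurity-based importance from the fitted
# gradient boosting model (cf. the ROI, ROA, and EV/EBIT findings reported above).
import pandas as pd

importances = pd.Series(model.feature_importances_, index=X.columns)
print(importances.sort_values(ascending=False).head(10))   # top-10 ratios
```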
The importance and sensitivity of the ROI as illuminated by the Altman models suggest that investors and analysts in medical and healthcare sectors place considerable emphasis on the immediate efficiency of capital usage. This ratio's predictive potency could potentially stem from its ability to encapsulate the sector's unique capital structures and reinvestment patterns critical to sustaining operations in the complex healthcare market. On the other hand, the Ohlson model's emphasis on ROA accentuates the critical role of effective asset utilization in the financial viability of healthcare firms, hinting at a market environment where operational efficiency is a harbinger of fiscal health. Additionally, the Ohlson model's accentuation of the EV/EBIT ratio underscores the market's valuation of profitability relative to a company's total valuation. The nuanced divergence observed between the models regarding the most influential ratios offers compelling insights into the multifaceted nature of bankruptcy risk assessment in this sector, underscoring the need for a multidimensional analytic approach in financial prognostication efforts within the healthcare industries. Consequently, for an augmented and holistic forecast of bankruptcy risk, it is incumbent upon both researchers and practitioners to amalgamate the varied lenses provided by the Altman and Ohlson models.

It should be noted that our investigation, as described, revealed widespread financial distress among U.S. healthcare industries in 2022. This trend might partly be attributed to the key financial ratios identified by these models, which likely reflect the broader fiscal challenges faced by these entities. These bankruptcy models were originally conceptualized to be effective over a two-year period; hence, the important and sensitive parameters suggested by these models theoretically remain relevant and effective until 2024. Financial analysts should continue to monitor these ratios closely to maintain the solvency of the medical and healthcare sectors. Therefore, recognizing these ratios not only offers a retrospective analysis of past financial difficulties but also provides strategic foresight. By vigilantly tracking these financial indicators, stakeholders can better prepare to address fiscal vulnerabilities proactively, thereby reducing the risk of similar financial challenges in the future. In addition to managing the aforementioned financial ratios based on our findings, recent advancements in medical technology also help address some challenges in the financial aspects of healthcare industries [62-71]. Therefore, technological advancements can be beneficial beyond just controlling our recognized financial ratios.

Limitations and Future Directions

It should be noted that, although 40 financial ratios covering key financial domains were utilized to predict bankruptcy, future research could enhance our findings by incorporating additional predictive parameters. Alzayed et al.
recently highlighted the importance of corporate governance alongside financial ratios in the context of bank failure predictions [72]. Hence, incorporating factors like corporate governance, competitive dynamics, and regulatory considerations may also prove significant in enhancing the prediction of bankruptcy for the healthcare industries, an area ripe for future academic exploration. Future studies could benefit from repeating our analysis using data from a more typical financial year as a control group. This comparison would allow researchers to discern whether the financial metrics identified as crucial during the crisis conditions of 2022 hold similar significance in more stable times. Such a study would also help in evaluating the robustness of these metrics across varying economic conditions, providing a deeper understanding of their utility in continuous risk management and strategic financial planning in the healthcare sector. Advancements in natural language processing in future studies could provide mechanisms to evaluate market sentiment and management discourse, while machine learning models like the gradient boosting algorithm demonstrated in this study could be further refined by integrating sector-specific data for a more tailored risk assessment. Exploring the impact of emergent global health issues and economic disruptions beyond COVID-19 may also be essential in shaping more resilient financial prediction frameworks for the sector.

Conclusions

This study predicted the risk of bankruptcy in the U.S. healthcare industries based on financial ratios across different industry subsectors and varying market capitalizations, using machine learning analysis. The results suggest that financial ratios serve as robust predictors of bankruptcy, and the gradient boosting algorithm can significantly enhance the predictive power of conventional models. This study has elucidated important financial ratios that served as potential harbingers of financial distress and heightened bankruptcy risk within U.S. healthcare industries during 2022. The identification of these ratios is instrumental not only for signaling potential red flags but also for equipping stakeholders with the insights necessary to devise strategies aimed at risk mitigation and ensuring the sustainability of the healthcare industries. The comprehensive perspective offered through this research enriches the academic discourse on bankruptcy prediction and offers strategic foresight for stakeholders within healthcare industries. Ultimately, the findings lay a robust foundation for future research and offer a framework for informed decision making that serves investors, policymakers, shareholders, and healthcare professionals, thereby contributing significantly to the financial stability and resilience of the medical and healthcare sectors.
Figure 1. Three-dimensional principal component analysis (PCA) of U.S. healthcare industries features: This figure illustrates the PCA plots of 1265 U.S. healthcare industries, visualized in three-dimensional space through the first three principal components. This analytical approach is used to identify patterns regarding (a) bankruptcy across different medical industry subsectors and (b) bankruptcy relative to market capitalization categories. Each data point represents a unique stock, with its spatial positioning derived from an aggregation of features within the dimensions defined by principal components 1, 2, and 3. In panel (a), industry subsector classes are labeled as 0 for solvency and from 1 to 4 for bankruptcy in biotechnology, medical devices, pharmacology, and healthcare services, respectively. In panel (b), market capitalization classes are defined as 0 for solvency and from 1 to 5 for bankruptcy in large-cap, mid-cap, small-cap, micro-cap, and nano-cap categories, respectively.

Figure 2. Percentage differences in solvency/bankruptcy prediction across various bankruptcy prediction models. Pie charts display the differences in bankruptcy and solvency percentages based on various industry subsectors and market capitalization classifications using three bankruptcy models. Industry subsector classes are labeled as 0 for solvency and from 1 to 4 for bankruptcy in biotechnology, medical devices, pharmacology, and healthcare services, respectively. Market capitalization classes are defined as 0 for solvency and from 1 to 5 for bankruptcy in large-cap, mid-cap, small-cap, micro-cap, and nano-cap categories, respectively.

Figure 3. Heatmaps of the confusion matrices showcasing the classification models' performance in differentiating between solvency and bankruptcy across industry subsectors and market capitalization classes. Industry subsector classes are labeled as 0 for solvency and from 1 to 4 for bankruptcy in biotechnology, medical devices, pharmacology, and healthcare services, respectively. Market capitalization classes are defined as 0 for solvency and from 1 to 5 for bankruptcy in large-cap, mid-cap, small-cap, micro-cap, and nano-cap categories, respectively.

Figure 4. The ROC curves illustrating the true-positive rate against the false-positive rate at various threshold settings for the classification models applied to industry subsectors and market capitalization classes. ROC, receiver operating characteristic.
Figure 5. Feature importance analysis highlighting the relative importance of each financial ratio in determining bankruptcy risk based on different industry subsectors and market capitalizations.

Table 1. The formulas for the 40 financial ratios (features) used to evaluate the bankruptcy risk of healthcare industries in the U.S. The symbol # represents the financial ratio (FR) number.

Table 2. Statistical summary of financial ratios (features), including mean, SE, and p-value. FR, financial ratio; SE, standard error.
Short-Term Wind Power Prediction Based on Encoder–Decoder Network and Multi-Point Focused Linear Attention Mechanism Wind energy is a clean energy source that is characterised by significant uncertainty. The electricity generated from wind power also exhibits strong unpredictability, which when integrated can have a substantial impact on the security of the power grid. In the context of integrating wind power into the grid, accurate prediction of wind power generation is crucial in order to minimise damage to the grid system. This paper proposes a novel composite model (MLL-MPFLA) that combines a multilayer perceptron (MLP) and an LSTM-based encoder–decoder network for short-term prediction of wind power generation. In this model, the MLP first extracts multidimensional features from wind power data. Subsequently, an LSTM-based encoder-decoder network explores the temporal characteristics of the data in depth, combining multidimensional features and temporal features for effective prediction. During decoding, an improved focused linear attention mechanism called multi-point focused linear attention is employed. This mechanism enhances prediction accuracy by weighting predictions from different subspaces. A comparative analysis against the MLP, LSTM, LSTM–Attention–LSTM, LSTM–Self_Attention–LSTM, and CNN–LSTM–Attention models demonstrates that the proposed MLL-MPFLA model outperforms the others in terms of MAE, RMSE, MAPE, and R2, thereby validating its predictive performance. Introduction Wind power is a clean and renewable energy source that is widely used in power systems.As the amount of wind power generation equipment installed increases [1], more wind power is connected to the power grid system.As a renewable natural resource, wind power itself has a high degree of uncertainty, and the amount of generated power is also uncertain, which poses a great safety hazard when connected to the grid.Therefore, the prediction of wind power is an indispensable safety guarantee for power grid security.In general, wind power prediction methods can be divided into three categories: physical methods, statistical methods, and artificial intelligence methods [2].Physical methods typically utilise wind speed, humidity, pressure, and temperature information from numerical weather prediction (NWP) to model the relationship between wind speed and wind power [3].The NWP method first predicts the future wind speed, and then calculates the wind power through the wind power curve [4].However, the NWP method necessitates the utilisation of meteorological prediction products in real time during actual application, which inevitably increases the prediction cost [5].Statistical methods include autoregressive (AR) models [2], autoregressive moving average (ARMA) models [6], and multiple autoregressive moving average (M-ARMA) models [7].Because statistical methods make predictions under certain assumptions, this results in most statistical methods being unable to solve the problem of nonlinear time series wind power data prediction [8].Several scholars have combined statistical methods with machine learning methods to predict wind power data.In the latest wind power prediction research based on the combination of statistical methods and machine learning, Wan et al. 
[9] proposed a method (CBC) for generating nonparametric prediction distributions using high-order statistics.This method combines machine learning with conditional moments and cumulants, which can describe the overall uncertainty in the prediction process and use the unique additivity of high-order cumulants to quantify the overall uncertainty of the estimated conditional moments.Three different series expansions, namely, Gram-Charlier, Edgeworth, and Cornish-Fisher, were used to improve the overall performance and generalization ability. With the continuous development of technology, more and more artificial intelligence methods have been proven to have excellent performance in the field of wind power forecasting, including backpropagation neural networks (BP) [10], support vector machines (SVM) [11], and graph neural networks (GNN) [12].In terms of short-term wind power prediction methods, multilayer perceptrons (MLP), light gradient boosting machines (Light-GBM) [13], and convolutional neural networks (CNN) [14] are widely used.Liu et al. [15] proposed a wind farm cluster power prediction model based on power fluctuation pattern recognition and spatiotemporal graph neural network prediction.In this study, the extreme points of the data were first statistically analyzed and the wind farm cluster power was divided into different fluctuation processes.Then, four indicators for judging the division of power fluctuation patterns were summarized from the two aspects of time stability and amplitude fluctuation in these fluctuation processes.Finally, the dynamic spatiotemporal correlation between adjacent wind farm sites was considered under different fluctuation modes and a spatiotemporal graph neural network was used to predict each fluctuation mode.In the latest study on wind power forecasting using graph neural networks, Yang et al. [16] considered the correlation between multiple wind farms and proposed the wind farm cluster (WFC) short-term power forecasting method based on global information adaptive perceptual graph convolution.First, a method for calculating the dynamic correlation coefficient between wind farms was proposed, with the graph structure at each moment obtained through this method.Then, the key features and dynamic correlation coefficients between multiple wind farms were obtained by using graph embedding and clustering algorithms.Finally, an adaptive graph convolution network was established to predict wind power. Because wind power data represent a kind of time series data, each element has strong temporal correlation.This characteristic of wind power data poses a challenge to the above methods, as they cannot fully capture this relationship.To address this issue, recursive neural network (RNN) [17] approaches have garnered significant interest from scholars.Notable RNN networks, such as long short-term memory (LSTM) neural networks, have demonstrated remarkable efficacy in wind power prediction.Wen et al. [18] proposed a new time series prediction model, LSTM-Attention-LSTM, for nonstationary multivariate time series data.Their model uses two LSTM networks for the encoder and decoder, with an attention mechanism placed between the encoder and decoder.They verified this model based on multiple real datasets, proving that the model can effectively improve the accuracy of multivariate and multistep time series data prediction.Zhou et al. 
[19] employed the K-means clustering method to categorize diverse factors influencing wind power, and proposed a novel K-means-LSTM prediction model for wind power prediction.Chen et al. [20] conducted a feature screening process on the multiple factors affecting wind power and subsequently proposed a novel wind power prediction model combining CNN and BiLSTM.Tang et al. [21] considered the impact of four meteorological variables on wind power generation: wind speed, wind direction, air pressure, and temperature.They used the CNN-LSTM architecture to extract key feature information from the data and used the attention mechanism to assign different weights highlighting the most critical features, thereby achieving more accurate wind power prediction.Ye et al. [22] divided NWP data according to fluctuation trends, extracted different fluctuation features, and used the improved grey wolf optimizer to optimize the hyperparameters of the LSTM-based Seq2Seq model for prediction.Wang et al. [23] proposed a method for predicting wind power generation through the wind power conversion relationship.In their study, wind speed data were first decomposed into multiple subcomponents using empirical mode decomposition (EMD), then these subcomponents were divided into three frequency components (high, medium, and low frequency) using K-means clustering.Finally, three machine learning models, namely, SVM, XGBoost regression, and Lasso regression, were used to predict these three components.The WPC model was then used to calculate the output power of wind power generation based on the predicted wind speed value.Dai et al. [24] proposed an offshore wind power prediction model based on ensemble empirical mode decomposition (EEMD) and an LSTM network.The input wind power data were decomposed into different signal components using EEMD, while the LSTM network was used to obtain different predicted wind power for each group of decomposed components.These predictions were then combined to obtain the final prediction results.In the latest study on wind power generation prediction based on variable modal decomposition (VMD), Tan et al. [25] used the VMD algorithm to decompose wind power data into several subsequences in order to reduce the nonstationarity of the data, then used BiLSTM for wind power prediction, with an improved MPA method (IMAP) used to optimize the parameters of the BiLSTM network.Lei et al. [26] proposed a soft measurement model based on an LSTM network; they used VMD to preprocess the data and the isolation forest algorithm to detect anomalies in the original sequence during preprocessing.Then, an LSTM network was used to predict each modal component separately and the prediction of each component was summed up and output to obtain better prediction results.Zhong et al. [27] employed principal component analysis to reduce the dimensionality and denoise NWP data, after which they used an LSTM network with hyperparameters optimized by a genetic algorithm (GA) to predict wind power.Zhao et al. 
[28] utilized a graph convolutional neural network to extract features based on the shared spatial characteristics between wind power data.Subsequently, an LSTM network was employed to extract temporal features and perform wind power prediction based on spatial and temporal characteristics.The above studies demonstrate that artificial intelligence methods are both efficient and feasible for wind power prediction.In particular, recurrent neural networks (RNNs), represented by LSTM networks, are more accurate in capturing temporal correlations and have better prediction performance than traditional shallow networks when applied to predicting time series data such as wind power data. In order to predict future short-term power generation through NWP data, this paper proposes a novel hybrid prediction model named MLL-MPFLA.The model first employs a multilayer perceptron (MLP) to extract multidimensional features from the wind power dataset, accelerating the feature extraction process.Next, an LSTM-based encoder-decoder model is utilized to capture temporal features within the dataset.The final wind power prediction results are obtained by integrating both the multidimensional and temporal features.Additionally, a multi-point focused linear attention mechanism is introduced into the decoding process of the LSTM-based encoder-decoder model.This approach allows for the weighted combination of different subspace features, enabling comprehensive integration of features across multiple dimensions for more accurate predictions.The main contributions of this paper are as follows: Experimental validation: To verify the effectiveness of the proposed model, we conducted comparative experiments using real wind power generation data from a wind farm in Xinjiang, China.The model was compared with the MLP, LSTM, LSTM-Attention-LSTM, LSTM-Self_Attention-LSTM, and CNN-LSTM-Attention models, focusing on three key aspects: performance metrics, error analysis, and prediction effectiveness.The remainder of this paper is organised as follows: Section 2 provides a concise overview of the pertinent methodologies; Section 3 delineates the overarching model architecture and improvements to the focused linear attention mechanism; Section 4 illustrates the predictive efficacy of the proposed MLL-MPFLA model on wind power data and analyses the experimental outcomes; finally, Section 5 offers a summary and conclusion to the paper. 
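Before the component descriptions that follow, a highly simplified PyTorch-style skeleton of an MLP feature extractor feeding an LSTM encoder-decoder is sketched below. This is not the MLL-MPFLA model: the paper's multi-point focused linear attention is replaced here by a plain multi-head attention placeholder, and all class names, layer sizes, and dimensions are hypothetical.

```python
# Highly simplified sketch of an MLP feature extractor feeding an LSTM
# encoder-decoder for one-step wind power forecasting; a standard attention
# module stands in for the paper's multi-point focused linear attention.
import torch
import torch.nn as nn

class SimpleSeq2SeqForecaster(nn.Module):
    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(                 # multidimensional feature extraction
            nn.Linear(n_features, hidden), nn.ReLU(), nn.Linear(hidden, hidden)
        )
        self.encoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.attn = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
        self.head = nn.Linear(hidden, 1)          # predicted wind power

    def forward(self, x):                         # x: (batch, time, n_features)
        feats = self.mlp(x)                       # per-step feature extraction
        enc_out, (h, c) = self.encoder(feats)     # temporal encoding
        dec_in = enc_out[:, -1:, :]               # last encoded step as decoder input
        dec_out, _ = self.decoder(dec_in, (h, c))
        ctx, _ = self.attn(dec_out, enc_out, enc_out)  # attend over encoder states
        return self.head(ctx).squeeze(-1)         # (batch, 1) one-step forecast

# Example usage with random NWP-like inputs (8 meteorological features, 24 steps).
model = SimpleSeq2SeqForecaster(n_features=8)
y_hat = model(torch.randn(16, 24, 8))
print(y_hat.shape)   # torch.Size([16, 1])
```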
Multilayer Perceptron (MLP) MLP is a deep learning model based on a feedforward neural network. It can be used to solve various machine learning problems, including classification, regression, and clustering. Additionally, it can be used for data feature extraction [29]. Its structure can be divided into three types of layers: the input layer, the hidden layers, and the output layer. There is a single input layer and a single output layer, while there can be multiple hidden layers. Each layer is composed of numerous neurons, all of which are fully connected to the previous layer [30]. Each layer receives the output of the previous layer, applies a nonlinear activation function, and produces the output of the current layer. The input data are received by the input layer of the MLP, processed by the nonlinear activation functions of the hidden layers, and finally output at the output layer. This hierarchical structure endows the MLP with considerable expressive capacity, enabling it to address nonlinear problems and high-dimensional data [31]. In addition, it can be trained using the backpropagation algorithm; through repeated training iterations, the MLP learns the intricate nonlinear relationships between input features, thereby facilitating the extraction of features from the data. Long Short-Term Memory Neural Network (LSTM) An RNN is a neural network structure with recurrent connections that has been specifically designed to process sequence data with temporal correlations. In an RNN, the connections between the neurons form a loop, which allows the network to process sequence data step-by-step while retaining the previous information state. Although RNNs have strong expressive power in processing sequence data, they also have several limitations, including difficulty in capturing the temporal correlation between long sequences, gradient vanishing, and gradient explosion. In order to address these issues, Hochreiter and Schmidhuber proposed the LSTM network, which captures long-term dependencies between data by introducing a gating mechanism. LSTM networks have three key gating units and two key variables [32]. The gating units are the input gate, forget gate, and output gate. Among the two key variables, one is primarily responsible for short-term memory, namely the hidden state h, which records the current time step information, while the other is responsible for long-term memory, namely the cell state C, which records the characteristics of the entire time series. When time series data pass through these gated units, the hidden state h and cell state C are continuously updated and forgotten through learning in order to obtain more accurate dependencies between the data. This process can be represented by the following functions [33,34]:

I_t = σ(W_i · [h_{t−1}, x_t] + ξ_i),
F_t = σ(W_f · [h_{t−1}, x_t] + ξ_f),
O_t = σ(W_o · [h_{t−1}, x_t] + ξ_o),
C̃_t = tanh(W_c · [h_{t−1}, x_t] + ξ_c),
C_t = F_t ⊙ C_{t−1} + I_t ⊙ C̃_t,
h_t = O_t ⊙ tanh(C_t),

where I_t, F_t, and O_t correspond to the outputs of the input gate, forget gate, and output gate, respectively, which are process variables used to calculate the final output; C̃_t is the candidate cell state; C_t is the cell state at time step t, which updates the information stored in the cell state at the current step from the cell state C_{t−1} at the previous step, the candidate cell state C̃_t at the current step, and the gate outputs I_t and F_t at the current step; x_t is the value of the input sequence at time step t; h_t is the hidden state at time step t, which can represent all the information of the entire sequence and is calculated from the output gate result O_t and the cell state C_t at the current step; W_i, W_f, W_o, and W_c are the weight matrices; ξ_i, ξ_f, ξ_o, and ξ_c are the biases; σ(·) represents the sigmoid function; and tanh(·) represents the hyperbolic tangent function.
Encoder-Decoder Network Encoder-decoder networks were originally employed in the translation of text or the answering of language questions. Subsequently, scholars applied the LSTM architecture to the prediction of time series data, achieving favourable outcomes. The encoder-decoder network proposed by Kyunghyun Cho et al. [35] and Sutskever et al. [36], which they called the sequence-to-sequence (Seq2Seq) model, contains two independent RNNs called the encoder and the decoder. The encoder extracts the input sequence features and encodes them into a context vector C, which is then used as the initial hidden state input of the decoder and combined with the input time series data to obtain a new output sequence from the decoder. This process is referred to as the encoding-decoding process. In an encoder-decoder network, the context vector C produced by the encoder can assist the decoder in extracting temporal features between time series data to a greater extent, thereby enabling the decoder to achieve enhanced performance in time series data prediction tasks. However, although encoder-decoder networks are more effective at time series data prediction than a single RNN, they exhibit certain limitations. For instance, if the input time series data are of considerable length, then the input sequence may be forgotten, resulting in inadequate acquisition of the long-term characteristics of the data. The encoder context vector C derived in this manner is unable to fully reflect the overall characteristics of the entire long-term data series. In order to address the issue of long-term series, an attention mechanism is typically employed in the encoding-decoding process. The attention mechanism combines the context vector C obtained by the encoder with the input sequence of the decoder and recomputes an attention output as the input of the decoder according to different weights; it then uses the decoder to obtain a new prediction result. The advantage of this approach is that different weights can be assigned according to the relative importance of different data features at different times. Furthermore, the weighting process allows for a more accurate understanding of the overall data dependency of long time series, which in turn enables the generation of more accurate output results. Focused Linear Attention From the perspectives of both computational power and feature extraction, Han et al.
[37] used a simple and efficient mapping function and an effective feature extraction module to introduce an efficient replacement for the self-attention mechanism, called the focused linear attention mechanism. The focused linear attention mechanism not only reduces the computational complexity from O(N²) to O(N) but also has efficient feature extraction capabilities. In both the self-attention mechanism and the focused linear attention mechanism, three weight matrices are defined to compute the dependencies between the elements. These matrices are referred to as the query matrix, key matrix, and value matrix, denoted by Q, K, and V. The SoftMax attentional similarity in self-attention is calculated as follows [38]:

Sim(Q_i, K_j) = exp(Q_i K_j^T / √d),  O_i = Σ_{j=1}^{n} [ Sim(Q_i, K_j) / Σ_{j=1}^{n} Sim(Q_i, K_j) ] V_j,   (9)

where Sim(·, ·) is the formula for calculating the similarity, the calculation order is (QK^T)V, and the computational complexity is O(N²). In the focused linear attention mechanism, Q, K, and V are similarly used to obtain the dependency relationship between each element. Unlike the self-attention mechanism, the similarity calculation method in the focused linear attention mechanism is as follows:

Sim(Q_i, K_j) = θ(Q_i) θ(K_j)^T,

where the function θ(x) = f(ReLU(x)), with f(x) = (∥x∥ / ∥x^p∥) x^p. Subsequently, the self-attention mechanism in Equation (9) can be rewritten using the similarity calculation method of the linear attention mechanism, resulting in the expression in Equation (10):

O_i = Σ_{j=1}^{n} [ θ(Q_i) θ(K_j)^T / Σ_{j=1}^{n} θ(Q_i) θ(K_j)^T ] V_j.   (10)

According to the associative law of matrix multiplication, the calculation order is converted from (QK^T)V to Q(K^T V), which can be obtained as follows:

O_i = [ θ(Q_i) Σ_{j=1}^{n} θ(K_j)^T V_j ] / [ θ(Q_i) Σ_{j=1}^{n} θ(K_j)^T ],   (11)

reducing the computational complexity from O(N²) to O(N). While this result represents a reduction in computational complexity, it also entails a loss of the ability to extract the features containing the most information. In order to solve the problem of insufficient feature extraction with the linear attention mechanism, a depth-wise convolution module (DWC) is added to the focused linear attention calculation, which is used to compute several local features adjacent to each query vector in order to ensure the diversity of the overall features of the output. The output of the overall focused linear attention mechanism can be expressed as follows:

O = θ(Q) θ(K)^T V + DWC(V).   (12)

The focused linear attention mechanism offers two key advantages. First, it reduces the computational complexity of the model. Second, it has a higher feature extraction capability for data. However, the focused linear attention mechanism has a tendency to focus excessively on one aspect of the feature extraction process when applied to time series data prediction. We propose an improved version of the focused linear attention mechanism. This new mechanism allows features to be extracted from time series data in multiple subspaces. In addition, it can fully consider the data features in different subspaces and more fully understand the feature relationships in long-term time series data.
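To make the computation above concrete, the following is a minimal PyTorch sketch of focused linear attention in the spirit of Eqs. (9)-(12). It is an illustrative implementation written for this discussion, not the authors' code: the module name, the focusing power p, the small stabilising constants, and the way the depth-wise convolution is applied over the sequence dimension are all assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FocusedLinearAttention(nn.Module):
    """Sketch of focused linear attention: O = theta(Q) theta(K)^T V + DWC(V)."""
    def __init__(self, dim: int, p: int = 3, kernel_size: int = 3):
        super().__init__()
        self.q_proj = nn.Linear(dim, dim)
        self.k_proj = nn.Linear(dim, dim)
        self.v_proj = nn.Linear(dim, dim)
        self.p = p
        # Depth-wise 1-D convolution over the sequence, one filter per channel.
        self.dwc = nn.Conv1d(dim, dim, kernel_size, padding=kernel_size // 2, groups=dim)

    def focused_map(self, x: torch.Tensor) -> torch.Tensor:
        # theta(x) = f(ReLU(x)) with f(x) = (||x|| / ||x^p||) * x^p, applied row-wise.
        x = F.relu(x) + 1e-6                       # keep values non-negative and non-zero
        xp = x ** self.p
        return xp * (x.norm(dim=-1, keepdim=True) / xp.norm(dim=-1, keepdim=True))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, dim)
        q = self.focused_map(self.q_proj(x))
        k = self.focused_map(self.k_proj(x))
        v = self.v_proj(x)
        # Compute K^T V first (associative law), so the cost is linear in seq_len.
        kv = torch.einsum("bnd,bne->bde", k, v)            # (batch, dim, dim)
        z = 1.0 / (torch.einsum("bnd,bd->bn", q, k.sum(dim=1)) + 1e-6)
        out = torch.einsum("bnd,bde,bn->bne", q, kv, z)    # normalised Q (K^T V)
        # Depth-wise convolution on V restores local feature diversity.
        return out + self.dwc(v.transpose(1, 2)).transpose(1, 2)

if __name__ == "__main__":
    attn = FocusedLinearAttention(dim=8)
    print(attn(torch.randn(2, 16, 8)).shape)  # torch.Size([2, 16, 8])
```

The sketch makes the complexity argument visible: the similarity matrix QK^T is never formed, only the dim-by-dim product K^T V, which is why the cost grows linearly with the sequence length.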
Overall Architecture LSTM networks have demonstrated excellent results in predicting time series data.However, their performance in multi-step prediction of multivariate time series data is unsatisfactory.Therefore, in order to enhance the accuracy of multidimensional and multistep prediction, an MLP is employed to perform preliminary multidimensional feature extraction on the input time series data.MLP does not require convolutional computation and can process data quickly; therefore, it can be used to quickly extract multidimensional features from an input sequence.Then, a layer of the LSTM network acts as an encoder to extract the temporal correlation features and encode them to obtain the context vector C of the input sequence.Subsequently, another layer of the LSTM network is employed as a decoder to decode the context vector C.This is done in order to analyse and predict the input sequence based on the multidimensional features and temporal features stored in the context vector C.In the decoding process, a multi-point focused linear attention mechanism is utilised.This is done with the intention of fully considering the different features of the input sequence in the multivariate dimension and time dimension in multiple different subspaces.By calculating the multidimensional features and temporal features in multiple subspaces, a more comprehensive and accurate understanding of the feature relationship between time series data can be obtained.The prediction results obtained in different subspaces are weighted to improve the accuracy of the prediction output, and the final result is output.Figure 1 illustrates the overall MLL-MPFLA model structure. Multidimensional Feature Extraction Based on MLP An MLP is a basic neural network model that consists of one or more fully connected layers in which each neuron layer is connected to all neurons in the previous layer.MLP models are typically employed to address classification and regression problems.In addition to these tasks, they can also be utilized for data feature extraction.In this study, we consider the relationship between wind speed, temperature, pressure, and other multivariate factors influencing wind power.Long-term wind power data are initially segmented into sequences of fixed length and subsequently subjected to an MLP comprising two hidden layers for extraction of the multidimensional features.The data following the input layer are processed by linear transformation and a ReLU(•) activation function, then transmitted to the first hidden layer.To prevent overfitting, the output result is subjected to dropout processing after linear transformation in the first hidden layer prior to transmission to the subsequent layer.The second hidden layer combines the output of the first hidden layer with the original data; after linear transformation, the result is transmitted to the output layer as the output of the second hidden layer.The final data after feature extraction are obtained by linear transformation in the output layer. Encoder-Decoder Network Based on LSTM Since Kyunghyun Cho et al. 
[35] first proposed the encoder-decoder network model, it has gained considerable popularity among scholars engaged in the field of natural language processing. In this paper, we apply the model to the task of time series data prediction and compare it with traditional prediction models such as LSTM and MLP. Our results demonstrate that the encoder-decoder, which is typically composed of two recurrent neural networks, provides significantly enhanced prediction accuracy. In this paper, an LSTM network is employed as both the encoder and the decoder in light of its proven efficacy in extracting temporal features from time series data. In the encoder, the temporal features of the input sequence are extracted by the LSTM network and converted into a vector representation of fixed dimension. This conversion process is designed to retain the time correlation characteristics between the data in the entire sequence to the greatest extent possible. The specific conversion methodology is outlined below. For the sake of simplicity, we assume that the input time series data are represented by x = [x_1, x_2, ..., x_n], where x_t represents the input data at time step t. At time step t, the LSTM network converts the input data x_t and the context vector C_{t−1} of the previous step into the context vector C_t of the current step. This conversion is represented by the function f(·):

C_t = f(x_t, C_{t−1}).

Consequently, the input time series data x = [x_1, x_2, ..., x_n] can be passed through the encoder to obtain a context vector C containing the temporal features and multidimensional feature information of the entire input sequence. The hidden state h_t of the decoder at time step t takes the output y_{t−1} of the decoder at the previous step and the context vector C of the encoder as input. These are combined with the hidden state h_{t−1} of the decoder at the previous step to obtain the hidden state h_t at the current step. The function g(·) represents the transformation of the decoder's hidden state:

h_t = g(y_{t−1}, C, h_{t−1}).

After obtaining the hidden state h_t of the decoder at time step t, the probability output of the output y_t at the current step is calculated by combining the output y_{t−1} at the previous step. As the encoding-decoding operation delves deeper into the temporal dependency relationship between time series data, a greater number of temporal features that influence the probability output are taken into account during the calculation process. This results in more accurate prediction outcomes than those of a single LSTM network. Figure 2 illustrates the encoder-decoder network based on the LSTM network.
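The two stages just described (MLP feature extraction followed by LSTM-based encoding and autoregressive decoding) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the class names, layer sizes, dropout value, and the use of the last observed power value to start the decoder are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class MLPFeatureExtractor(nn.Module):
    """Two-hidden-layer MLP whose second layer also sees the raw inputs (illustrative sizes)."""
    def __init__(self, in_dim: int, hidden_dim: int = 64, out_dim: int = 8, dropout: float = 0.1):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden_dim)
        self.drop = nn.Dropout(dropout)
        # Second hidden layer combines the first layer's output with the original data.
        self.fc2 = nn.Linear(hidden_dim + in_dim, out_dim)
        self.out = nn.Linear(out_dim, out_dim)

    def forward(self, x):                        # x: (batch, seq_len, in_dim)
        h1 = self.drop(torch.relu(self.fc1(x)))
        h2 = self.fc2(torch.cat([h1, x], dim=-1))
        return self.out(h2)                      # (batch, seq_len, out_dim)

class Seq2SeqLSTM(nn.Module):
    """LSTM encoder-decoder: the encoder's final state is the context passed to the decoder."""
    def __init__(self, feat_dim: int, hidden_dim: int = 64, horizon: int = 4):
        super().__init__()
        self.encoder = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.decoder = nn.LSTM(1, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)
        self.horizon = horizon

    def forward(self, feats, last_power):
        # feats: (batch, seq_len, feat_dim); last_power: (batch, 1) last observed power value
        _, (h, c) = self.encoder(feats)          # context vector C encoded as (h, c)
        y_prev = last_power.unsqueeze(1)         # (batch, 1, 1)
        preds = []
        for _ in range(self.horizon):            # autoregressive decoding, one step at a time
            out, (h, c) = self.decoder(y_prev, (h, c))
            y_prev = self.head(out)              # y_t computed from the decoder state h_t
            preds.append(y_prev)
        return torch.cat(preds, dim=1).squeeze(-1)   # (batch, horizon)

if __name__ == "__main__":
    extractor = MLPFeatureExtractor(in_dim=8)
    model = Seq2SeqLSTM(feat_dim=8)
    x = torch.randn(2, 16, 8)                    # 16 past steps, 8 input variables
    print(model(extractor(x), last_power=torch.randn(2, 1)).shape)   # torch.Size([2, 4])
```

In the full MLL-MPFLA model the plain decoder step shown here is augmented with the multi-point focused linear attention described in the next subsection.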
Multi-Point Focused Linear Attention Mechanism It is typical to incorporate an attention mechanism into the encoder-decoder network. This mechanism combines the hidden states of the two time series data inputs in the encoder and decoder, thereby facilitating more comprehensive feature extraction. Nevertheless, the prediction model based on the attention mechanism requires further enhancement in terms of prediction accuracy. With the objective of further improving the prediction accuracy, in this paper we employ a multi-point focused linear attention mechanism. The focused linear attention mechanism is improved by combining the characteristics of time series data; we call the resulting improved attention mechanism the multi-point focused linear attention mechanism. In the focused linear attention mechanism, the SoftMax similarity calculation method is not used; instead, the linear similarity calculation method is adopted. Although this reduces the computational complexity, it has the disadvantage of insufficient feature extraction from the data. To address this issue, the focused linear attention mechanism employs a deep convolution module to convolve and extract multiple adjacent features in close proximity to each V, thereby extracting more data features [37]. This process is described by Equation (12). Because the focused linear attention mechanism performs convolutional feature extraction on V, and because V is obtained through linear calculation, part of the original information contained in the time series data is lost, resulting in incomplete feature extraction from the time series data. Taking this into account, we perform convolutional feature extraction directly on the input data of each time step in the multi-point focused linear attention mechanism, replacing the deep convolution module in the focused linear attention mechanism. We use CONV(x) to represent the convolutional feature extraction operation on the input sequence of each time step, which replaces the DWC(V) module in Equation (12). Then, the output of the improved focused linear attention mechanism can be described by Equation (15):

O = θ(Q) θ(K)^T V + CONV(x).   (15)

The advantage of this approach is that it can fully consider the characteristics of the time series data and use the original data for feature extraction directly, reducing the loss of features to ensure that more complete features are extracted from the input time series data sequence. During calculation, the focused linear attention mechanism may focus unduly on the features in a certain subspace while ignoring the feature information of other subspaces. To address this issue, the multi-point focused linear attention mechanism proposed in this paper employs a strategy that fully leverages the feature information across multiple subspaces. This involves initializing the focused linear attention mechanism into multiple groups of distinct Q, K, and V, calculating the attention output corresponding to each group of Q, K, and V, then weighting the multiple different attention outputs to obtain a new attention output. As shown in Figure 3, the data x_t at the current time step are matrix-multiplied with multiple sets of different projection matrices to obtain multiple sets of different Q, K, and V; a brief code sketch of this computation is given below, followed by the formal definitions in Equations (16)-(18).
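The following is a compact PyTorch sketch of the multi-point mechanism as we read it from the description above: n subspaces with their own Q/K/V projections, focused linear attention per subspace with the convolution CONV(x) applied to the raw input, then concatenation and a final projection W_m. All names, the number of subspaces, and the convolution settings are illustrative assumptions rather than the exact configuration used in the experiments.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def focused_map(x, p=3, eps=1e-6):
    """theta(x) = f(ReLU(x)), f(x) = (||x|| / ||x^p||) x^p, applied along the feature axis."""
    x = F.relu(x) + eps
    xp = x ** p
    return xp * (x.norm(dim=-1, keepdim=True) / xp.norm(dim=-1, keepdim=True))

class MultiPointFocusedLinearAttention(nn.Module):
    """Sketch: per-subspace projections, focused linear attention plus CONV(x),
    then concatenation and a final projection W_m."""
    def __init__(self, dim: int, n_sub: int = 8, kernel_size: int = 3):
        super().__init__()
        self.n_sub = n_sub
        self.q = nn.ModuleList(nn.Linear(dim, dim, bias=False) for _ in range(n_sub))
        self.k = nn.ModuleList(nn.Linear(dim, dim, bias=False) for _ in range(n_sub))
        self.v = nn.ModuleList(nn.Linear(dim, dim, bias=False) for _ in range(n_sub))
        self.conv = nn.Conv1d(dim, dim, kernel_size, padding=kernel_size // 2)
        self.w_m = nn.Linear(n_sub * dim, dim, bias=False)

    def forward(self, x):                                      # x: (batch, N, dim)
        local = self.conv(x.transpose(1, 2)).transpose(1, 2)   # CONV(x): local features of the raw input
        outputs = []
        for i in range(self.n_sub):
            q = focused_map(self.q[i](x))
            k = focused_map(self.k[i](x))
            v = self.v[i](x)
            kv = torch.einsum("bnd,bne->bde", k, v)            # K^T V first: linear in N
            z = 1.0 / (torch.einsum("bnd,bd->bn", q, k.sum(1)) + 1e-6)
            a = torch.einsum("bnd,bde,bn->bne", q, kv, z) + local   # per-subspace output A_n
            outputs.append(a)
        return self.w_m(torch.cat(outputs, dim=-1))            # Concat(A_1..A_n) W_m

if __name__ == "__main__":
    attn = MultiPointFocusedLinearAttention(dim=8, n_sub=4)
    print(attn(torch.randn(2, 16, 8)).shape)                   # torch.Size([2, 16, 8])
```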
Equation (16) describes the projection step:

Q_n = x_t W_n^Q,  K_n = x_t W_n^K,  V_n = x_t W_n^V,   (16)

where Q_n, K_n, and V_n respectively represent the Q, K, and V of the nth subspace at time step t, x_t ∈ R^{N×C} represents the input data at time step t, and W_n^Q, W_n^K, W_n^V ∈ R^{C×C} are projection matrices. Then, the corresponding attention outputs are calculated based on the multiple sets of Q, K, and V. Equation (17) describes this process:

A_n = θ(Q_n) θ(K_n)^T V_n + CONV(x_t),   (17)

where Q_n, K_n, and V_n respectively represent the Q, K, and V of the nth subspace at time step t, while A_n represents the attention output of the nth subspace at time step t. Finally, we concatenate the multiple sets of attention outputs and multiply them by a projection matrix to obtain the final attention output. Equation (18) describes this process:

M_t = Concat(A_1, A_2, ..., A_n) W_m,   (18)

where M_t represents the multi-point focused attention output at time step t, Concat(·) represents matrix concatenation, and W_m ∈ R^{nC×C} is the projection matrix. After calculating the attention output of the multi-point focused linear attention mechanism, this attention output can be used to more accurately analyze the input time series data during the decoding process, thereby obtaining better prediction results. The multi-point focused linear attention mechanism calculates multiple sets of different initial values Q, K, and V in the same improved focused linear attention mechanism to fully consider different features in multiple subspaces. Compared with the focused linear attention mechanism, it can capture more relational features of time series data in multiple different subspaces, thereby further improving the accuracy of time series data prediction results. Experiment and Analysis In this section, we first provide a description of the real dataset and the preprocessing used in our experiments. Then, we describe the experimental verification conducted on this dataset, with five other commonly used prediction methods included as benchmark models for comparison with the proposed composite prediction model. The effectiveness of the proposed MLL-MPFLA model is demonstrated by comparing these models on several performance indicators.
Experimental Data and Preprocessing The dataset utilised in this study is described in this subsection. This dataset is derived from the actual wind power generation data of a power plant in Xinjiang, China, as documented in the Aliyun Tianchi dataset. The dataset contains 3649 samples collected every 15 min. Each sample includes eleven environmental influencing factors along with the actual power generation data. The eleven influencing factors include the wind speed at 10 m, 30 m, 50 m, and 70 m from the power generation equipment, the wind direction at 10 m, 30 m, 50 m, and 70 m from the power generation equipment, and the temperature, air pressure, and humidity near the power generation equipment at the current moment. Because the wind direction data are not highly correlated with the power generation, in this study only the impacts of the wind speed at 10 m, 30 m, 50 m, and 70 m, together with the temperature, air pressure, and humidity, on the actual power generation are considered. Selected data from the dataset are shown in Table 1. In this study, the total samples are divided into two parts, as illustrated in Figure 4; the first 80% of the samples are designated as training samples, while the remaining 20% constitute test samples. Table 2 provides a statistical description of the dataset. For data preprocessing, considering that wind power data are discrete, the data were smoothed first. The advantage of this approach is that it can reduce the noise interference in the original data, eliminate the impact of random fluctuations, and enable the neural network model to better analyze and process the data. In this paper, Kalman filter smoothing was selected; other smoothing methods include exponential smoothing, polynomial smoothing, Gaussian smoothing, and more. Then, we used Z-score standardization to convert the data to a unified scale. After Z-score standardization, the mean of the data is 0 and the standard deviation is 1. Finally, we divided the entire dataset into multiple segments using a sliding window of size 20. In each segment, the first 16 data points are used as the model input and the last four are used as labels to verify the prediction results. Therefore, the MLL-MPFLA model can use the wind power generation data of the past four hours (i.e., sixteen wind power generation data points) to predict the wind power generation in the next hour (i.e., the next four moments). Because the data in this dataset are highly complete, with no missing data, we did not perform any missing data processing. The data used in all experiments described in this article are based on the above preprocessing approach. Evaluation Metrics In order to evaluate the accuracy of the MLL-MPFLA model in wind power generation prediction, four commonly used quantitative indicators are used as performance evaluation metrics: mean absolute error (MAE), root mean square error (RMSE), mean absolute percentage error (MAPE), and the coefficient of determination (R²). These can be respectively expressed by the following formulas:

MAE = (1/n) Σ_{i=1}^{n} |P_i − T_i|,
RMSE = √[ (1/n) Σ_{i=1}^{n} (P_i − T_i)² ],
MAPE = (100%/n) Σ_{i=1}^{n} |(P_i − T_i) / T_i|,
R² = 1 − [ Σ_{i=1}^{n} (P_i − T_i)² / Σ_{i=1}^{n} (T_i − T̄)² ],

where P_i is the predicted value, T_i is the true value, and T̄ is the mean of the actual values.
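The windowing scheme and the four metrics are straightforward to express in code. The sketch below is illustrative only: the random series stands in for the smoothed power data, and the function names are our own.

```python
import numpy as np

def zscore(x):
    """Z-score standardization: zero mean, unit standard deviation per column."""
    return (x - x.mean(axis=0)) / x.std(axis=0)

def sliding_windows(series, window=20, n_in=16):
    """Split a series into windows of 20 points: the first 16 as input, the last 4 as targets."""
    X, Y = [], []
    for start in range(len(series) - window + 1):
        seg = series[start:start + window]
        X.append(seg[:n_in])
        Y.append(seg[n_in:])
    return np.array(X), np.array(Y)

def mae(p, t):
    return np.mean(np.abs(p - t))

def rmse(p, t):
    return np.sqrt(np.mean((p - t) ** 2))

def mape(p, t):
    return 100.0 * np.mean(np.abs((p - t) / t))

def r2(p, t):
    return 1.0 - np.sum((p - t) ** 2) / np.sum((t - t.mean()) ** 2)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    power = 1.0 + 100.0 * rng.random(3649)      # stand-in for the smoothed power series
    X, Y = sliding_windows(power)
    X = zscore(X)                               # standardize the model inputs
    print(X.shape, Y.shape)                     # (3630, 16) (3630, 4)
    pred = Y + rng.normal(0, 2.0, Y.shape)      # fake predictions, just to exercise the metrics
    print(mae(pred, Y), rmse(pred, Y), mape(pred, Y), r2(pred, Y))
```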
Analysis of Wind Power Generation Prediction Results In order to assess the efficacy of the proposed MLL-MPFLA model, five commonly used prediction models were selected as benchmark models for comparative experiments.The specific settings of the benchmark models are presented in Table 3.The LSTM-Attention-LSTM and CNN-LSTM-Attention models were proposed in [18,21], respectively.In this paper, we conducted cross-validation through a large number of experiments and select the best hyperparameters in the MLL-MPFLA model based on empirical settings.The specific hyperparameter settings are shown in Table 4, taking the number of hidden units in the LSTM network's decoder and encoder as an example.We first set the initial value of the number of units to 8 and increased the number of units by multiples of 8 each time until the best parameter setting was obtained.We verified whether the parameters were optimal by comparing the MAE and RMSE indicators.The encoder and decoder in the MLL-MPFLA model used the same LSTM hyperparameter settings, with 3 hidden layers, 512 hidden units, 0.001 learning rate, 0.05 dropout, and 260 training epochs.The convolution layer parameters for convolution feature extraction of the input time series were set as follows: the number of channels was set to 16, the number of convolution kernels to 16, the convolution kernel size to 1 × 1, and the stride of the convolution to 1.In the MLP used for multidimensional feature extraction, the number of units in the first hidden layer was set in the same way as in the above method; the specific number of hidden layers units was set to 512.In order to ensure the consistency of the output data dimension, the number of units in the second hidden layer was set to 8. Following [29], we used two hidden layers and a dropout value of 0.1.In the encoding-decoding process, we utilised the multi-point focused linear attention mechanism in 8 subspaces for weighted sum prediction.The hyperparameter setting method of the benchmark models was the same as for the MLL-MPFLA model: the number of MLP hidden layers was set to 3, the number of hidden layers units in the first and second hidden layers to 512, the number of hidden layer units in the third hidden layer to 8, the dropout to 0.1, the activation function was ReLU(•), and the number of training rounds was 10.The number of hidden layers of the LSTM was set to 3, the number of hidden layer units to 512, the learning rate to 0.001, the dropout to 0.05, and the training round to 150.In LSTM-Attention-LSTM, the LSTM hyperparameter settings used by the encoder and decoder were the same: the number of hidden layers was set to 3, the number of hidden layer units to 512, the learning rate to 0.001, the dropout to 0.05, and the number of training epochs to 260.In LSTM-Self_Attention-LSTM, the LSTM hyperparameter settings used by the encoder and decoder were the same: the number of hidden layers was set to 3, the number of hidden layer units to 512, the learning rate to 0.001, the dropout to 0.05, and the number of training epochs to 260.In the CNN-LSTM-Attention model, the number of LSTM hidden layers was set to 2, the number of hidden layer units to 256, the learning rate to 0.001, and the dropout to 0.05.For the CNN, the number of channels was set to 256, the number of convolution kernels to 4, and the number of training epochs to 100.All of the above LSTM networks were implemented using the LSTM class in Pytorch 2.2.2.Considering that the prediction results of neural networks are random, multiple 
experiments were conducted in order to reduce random errors, taking the average of the results. We conducted five repetitions, with the experimental results shown in Table 3 and Figure 5. The prediction results are shown in Figure 6a. Furthermore, all of the aforementioned models were executed on a server equipped with a 3.5 GHz Intel Core i7-13700K processor, an NVIDIA GeForce RTX 4090 graphics processing unit (GPU), and 32 GB of memory, as illustrated in Table 5. Performance Analysis of the Models Table 3 and Figure 5 demonstrate that the MLL-MPFLA model proposed in this paper outperforms the five benchmark models in short-term wind power prediction. Figure 5 shows intuitively that the MLL-MPFLA model has the lowest MAE, RMSE, and MAPE indicators along with the highest R² indicator. From Table 3, the proposed model's MAE is the lowest at 5.2124, its RMSE is the lowest at 7.0972, and its R² is the highest at 0.9843. The LSTM-Self_Attention-LSTM model is the second-best performer, with an MAE value of 9.9060, an RMSE value of 13.1949, and an R² value of 0.9457. The MLP model is the least effective, with MAE, RMSE, and R² values of 23.2081, 30.4275, and 0.7119, respectively. The R² index is a measure of the degree of fit between the prediction result and the true value, with higher values indicating a greater degree of fit. As illustrated in Table 3, the R² index of the proposed MLL-MPFLA model exhibits a notable increase relative to the benchmark models, improving by 0.2724, 0.1646, 0.0931, 0.0386, and 0.1310, respectively. This indicates that the MLL-MPFLA model provides the best fit. The MAE and RMSE results for MLP are 23.2081 and 30.4275, respectively. Compared with MLP, the MAE and RMSE of the proposed model are reduced by 17.9957 and 23.3303, respectively. The superiority of MLL-MPFLA over MLP lies in the extraction and analysis of the temporal characteristics of wind power data and the use of the multi-point focused linear attention mechanism to fully consider the impact of temporal characteristics on power generation. From the analysis of the MAPE index in Figure 6a, it can be seen that the MAPE index of MLP is high because the prediction error for certain data points is large when MLP predicts data close to 0; thus, no meaningful comparison with the MAPE index of the MLL-MPFLA model is possible. The superiority of MLL-MPFLA over the two attention-based comparative models (LSTM-Attention-LSTM and LSTM-Self_Attention-LSTM) lies in its deeper extraction and analysis of the multidimensional features of wind power data and its use of a more efficient multi-point focused linear attention mechanism to fuse the multidimensional time series data features in multiple subspaces and fully extract the features of the time series data, thereby obtaining better prediction performance. In addition, compared with CNN-LSTM-Attention, the MAE, RMSE, and MAPE of the proposed model are reduced by 10.7471, 14.5932, and 46.3280%, respectively. This is because the proposed model uses a special encoding-decoding operation to enhance the feature extraction capability for time series data. In addition, the proposed model uses a multi-point focused linear attention mechanism with stronger feature extraction capability, which is an indispensable factor in its achieving better prediction results. Through the above analysis, we can draw the following conclusions. The MLL-MPFLA model proposed in this paper delivers the best performance. It analyses and combines the multidimensional and temporal features of time series data, then weights the prediction results of different dimensions through the multi-point focused linear attention
mechanism, thereby obtaining superior prediction performance. Error Analysis of Model Prediction Results The error of each model is shown in Figure 7, where the error calculation Error = P i − T i , P i represents the predicted value and T i represents the true value.It can be observed that the error of the proposed model is smaller than that of other models, indicating that the accuracy of the prediction results is relatively high.In theory, it is desirable for the difference between the predicted value and the true value to be infinitely close to 0; however, from the actual prediction results it can be seen that this is difficult to achieve.In actual prediction tasks, a smaller difference between the predicted value and true value indicates a better prediction effect.The red curve in Figure 7 represents the prediction error of the MLL-MPFLA model, exhibiting a floating range near 0. Overall, the prediction error is smaller than that of the five compared benchmark models.For the shallow MLP neural network, only the impact of multiple environmental factors on the power generation is considered, without considering the impact of time series characteristics on power generation; thus, the prediction result has a large error.The MLL-MPFLA model fully extracts and analyzes the multidimensional characteristics and time series characteristics of wind power data at the same time, meaning that the prediction error is greatly reduced compared with the MLP.For the LSTM network, although it can extract and analyse the time series characteristics of wind power data, its ability to extract multidimensional features of wind power data is obviously insufficient compared with the MLL-MPFLA model, resulting in a higher prediction error.For the LSTM-based encoder-decoder network, the time characteristics of the data can be extracted and analyzed; although the prediction error is significantly reduced compared to MLP and LSTM, it is still higher than our proposed MLL-MPFLA model.This is because the MLL-MPFLA model not only designs a separate multidimensional feature extraction module for wind power data but also uses a superior multi-point focused linear attention mechanism, resulting in the prediction error of the MLL-MPFLA model being lower than that of the LSTM-based encoder-decoder network.In addition, although the prediction errors shown in Figure 7 are smaller at some moments than those of the MLL-MPFLA model, the overall proportion of these points with smaller errors is very small.This phenomenon is due to the random nature of the neural network model's prediction output, which results in the appearance of points that are closer to the true power value, thereby reducing the error compared to the MLL-MPFLA model.With the exception of a few points that may be attributed to randomness, the overall error analysis indicates that the prediction accuracy of the MLL-MPFLA model is superior to that of the other models. Effectiveness Analysis of Model Prediction Results A comparison of the prediction results with those of the other five benchmark models is presented in Figure 6. 
Figure 6a shows the comparison of the prediction results of all models, while Figure 6b-f shows local enlarged prediction diagrams of the five benchmark models. Figure 6g is a local enlarged prediction diagram of the proposed MLL-MPFLA model, in which the bars represent the error size at each point. The figure illustrates that the prediction result curve of the proposed model exhibits the highest degree of fit with the true value curve, accompanied by the smallest error, indicating the most accurate prediction results. In addition, it can be seen from Figure 6b that the prediction error of MLP near the value of 0 is large; in particular, when the data fluctuate greatly near this value, the prediction effect is the worst. This situation causes the MAPE index to soar, making the MAPE index of MLP higher than that of MLL-MPFLA. The reason for this phenomenon is that the shallow neural network MLP does not extract the temporal features of the time series data and its extraction of multidimensional features is not sufficient, resulting in the worst prediction performance and the most obvious decline in fit compared with the MLL-MPFLA model. The LSTM network demonstrates commendable performance in the time series data prediction task; however, numerous factors have an impact on the prediction outcomes of wind power data in this study. The single LSTM network has a limited effect on multivariate feature extraction, and its prediction effect is significantly inferior to that of the MLL-MPFLA model. In comparison to the LSTM network, the MLL-MPFLA model has a distinct network module for deep extraction of the multidimensional features of the time series data. Additionally, it employs a multi-point focused attention mechanism to assign varying weights to the prediction results. Through continuous learning and training, the optimal weight matrix can be identified, enabling the generation of optimal prediction results. While the prediction efficacy of the LSTM-based encoder-decoder network is considerably superior to that of the shallow MLP neural network, its fit remains inferior to that of the MLL-MPFLA model. This is primarily reflected in the substantial discrepancy in prediction outcomes when the data exhibit significant fluctuations. This phenomenon is due to the model's incomplete learning of the multidimensional features that influence wind power data, which results in suboptimal prediction outcomes when the data exhibit significant fluctuations. From the fit analysis of the prediction results of each model, it can be seen that the proposed MLL-MPFLA model fully considers the impact of multidimensional features and temporal features on the prediction results, uses a more efficient multi-point focused linear attention mechanism, and obtains the best prediction results compared with the other five benchmark models. Generalization Experiment Without readjusting the hyperparameters of the proposed model, the generalization of the model was verified using the public ETTh1 dataset [39]. The experimental results are shown in Figure 8. It can be seen from the figure that the prediction results of the MLL-MPFLA model are highly consistent with the real data. Similarly, without readjusting the hyperparameters of the benchmark models, the other five benchmark models were used on the same dataset. The R² indexes of MLP, LSTM, LSTM-Attention-LSTM, LSTM-Self_Attention-LSTM, CNN-LSTM-Attention, and MLL-MPFLA are 0.6464, 0.8042, 0.7436, 0.7860, 0.6890, and 0.9145, respectively. From the R² index, it can be seen that the MLL-MPFLA model has the highest degree of fit on different datasets.
In conclusion, the MLL-MPFLA model proposed in this paper demonstrates a notable enhancement in comparison to other benchmark models across the three dimensions of performance indicators, result errors, and prediction result fitting effects.This evidence substantiates the effectiveness and reliability of the proposed model in wind power data prediction and validates its potential as a robust analytical and predictive tool for power grid security maintenance.The prediction time of all methods was statistically analyzed under the server configuration shown in Table 5.The results show that the prediction time required by all models is less than 0.2 s for the test data (size 4 × 16 × 8 bytes), which can meet the needs of most real environments, including resource-constrained environments.However, in the MLL-MPFLA model, because the hyperparameters were empirically set through a large number of experiments, the hyperparameters need to be reset when the dataset changes.In addition, as with most prediction models, the prediction effect of our proposed model will tend to decline when the prediction step size increases. Conclusions The prediction of wind power generation represents an effective measure for the stable operation of power grids.The superiority of the MLL-MPFLA model proposed in this paper is evident in its ability to separately extract multidimensional features and temporal features of time series data while fully considering the correlation between the two.Furthermore, a more efficient multi-point aggregation linear attention mechanism is employed to fully consider the varying importance of different features from multiple subspaces, enabling more accurate predictions.The following is a summary of the full text.First, an MLP is employed to extract the multidimensional features of a multitude of factors that influence power generation.Subsequently, the multidimensional features and temporal features are integrated and predicted in conjunction with the LSTM-based encoder-decoder network model.The advantage of this approach is that the time series data can be fully mined and associated in both the multivariate dimension and the time correlation dimension.In the decoding process, the multi-point focused linear attention mechanism is used to weight the different features of the wind power data in multiple subspaces.This approach fully considers the distinct features present in each subspace and integrates features across multiple dimensions, thereby enhancing the accuracy of the prediction.A case study of a wind power dataset from Xinjiang, China was conducted to compare the MLL-MPFLA model with five benchmark models: MLP, LSTM, LSTM-Attention-LSTM, LSTM-Self_Attention-LSTM, and CNN-LSTM-Attention.The efficacy of the MLL-MPFLA model was then demonstrated through a comparative analysis of four evaluation metrics (i.e., MAE, RMSE, MAPE, and R 2 ), an error analysis of the prediction results, and an effect analysis of the prediction curves.In summary, the MLL-MPFLA model proposed in this paper can make accurate predictions of future short-term power generation based on wind power data generated in a previous period of time.It can then make correct responses in advance according to the prediction results, ensuring the safe maintenance of the power grid and reducing the occurrence of accidents.Because the hyperparameters of our model are empirically set through experiments, in subsequent work optimization methods such as Bayesian optimization could be used to reduce the workload of empirical 
hyperparameter setting by automatically optimizing the hyperparameters of the model. In addition, it would be possible to introduce the attention mechanism into the extraction of multidimensional features and improve the ability of the model to extract multidimensional features of data through the attention mechanism, allowing it to achieve higher prediction accuracy while enhancing its ability to predict data at more unknown time points in the future.
Figure 1. Framework of the composite MLL-MPFLA model for short-term wind power prediction.
Figure 2. Detailed process of the LSTM-based encoder-decoder network in the proposed MLL-MPFLA model.
Figure 3. Detailed process of the multi-point focused linear attention mechanism in the proposed MLL-MPFLA model.
Figure 4. Actual wind power data in the dataset.
Figure 5. Comparison of MLL-MPFLA evaluation metrics with the five benchmark models.
Figure 6. Short-term wind power prediction results for the different methods: (a) shows all predicted results, (b-f) show the partial prediction results of the five benchmark models, and (g) shows the MLL-MPFLA partial prediction results.
Figure 7. Error comparison between MLL-MPFLA and the five benchmark models.
Table 1. Selected data from the experimental dataset.
Table 2. Statistical information of the dataset.
Table 3. Evaluation metrics from five experiments on MLL-MPFLA and the five benchmark models.
Table 5. Server configuration information.
Similarly, the MAE, RMSE, and MAPE of LSTM are 17.3552, 24.0803, and 38.7232% respectively.Compared with LSTM, the MAE, RMSE, and MAPE of the MLL-MPFLA model are reduced by 12.1428, 16.9831, and 17.4987%, respectively.The superiority of MLL-MPFLA over the baseline LSTM network lies in its deeper extraction and analysis of the multidimensional characteristics of wind power data and the use of an encoder-decoder network based on LSTM, which enhances the LSTM network's ability to analyze the temporal characteristics of data.Compared with LSTM-Attention-LSTM, the MAE, RMSE, and MAPE of the proposed model are reduced by 8.0202, 11.5786, and 13.3184%, respectively.Compared with LSTM-Self_Attention-LSTM, the MAE, RMSE, and MAPE of the proposed model are reduced by 4.6936, 6.0977, and 12.2826%, respectively.
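As a small arithmetic check of the reductions quoted above, the differences follow directly from the tabulated metrics (the MAPE reduction is expressed in percentage points). The snippet below only restates numbers already given in the text; the implied MLL-MPFLA MAPE is an inference, not a value reported explicitly.

```python
# Reported metrics (from Table 3 / the text): MLL-MPFLA vs. LSTM.
mll = {"MAE": 5.2124, "RMSE": 7.0972}
lstm = {"MAE": 17.3552, "RMSE": 24.0803, "MAPE": 38.7232}

for key in ("MAE", "RMSE"):
    print(f"{key} reduction: {lstm[key] - mll[key]:.4f}")
# MAE reduction: 12.1428, RMSE reduction: 16.9831 -> matches the values quoted above.

# The quoted MAPE reduction of 17.4987 percentage points implies an MLL-MPFLA MAPE of:
print(f"implied MLL-MPFLA MAPE: {lstm['MAPE'] - 17.4987:.4f} %")   # 21.2245 %
```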
11,992.4
2024-08-25T00:00:00.000
[ "Engineering", "Environmental Science", "Computer Science" ]
The effect of confinement on thermal fluctuations in nanomagnets We study the magnetization dynamics in nanomagnets excited by stochastic magnetic fields to mimic temperature in a micromagnetic framework. The effect of confinement arising from the finite size of the structures is investigated, and we visualize the spatial extension of the internal magnon modes. Furthermore, we determine the temperature dependence of the magnon modes and focus specifically on the low frequency edge modes, which are found to display fluctuations associated with switching between C- and S-states, thus posing an energy barrier. We classify this fluctuating behavior in three different regimes and calculate the associated energy barriers using the Arrhenius law. Mesoscopic spin systems can be used as a playground for investigations of magnetic ordering and dynamics. [1][2][3][4] A range of mesoscale magnetic structures have been fabricated using nanolithography, spanning from analogues to the 1D and 2D Ising model systems, 5,6 to extensive two-dimensional frustrated artificial spin ice (ASI) structures. [7][8][9] The elements are often treated as point-like magnetic dipoles and, more recently, as artificial magnetic atoms. These analogies only hold to a certain point when describing thermal fluctuations and transitions in mesoscopic systems. Furthermore, it has become evident that the analogy to a point-like dipole can even be misleading, resulting in misinterpretations and quantitative discrepancies between observations and calculations. [10][11][12][13] The reason for this originates in contributions from both static and dynamic textures in the magnetization of the elements. [14][15][16][17] Even though extensive work has been done on the magnetization dynamics in such elements, [17][18][19][20] little is known about the effect of the extension of the elements on the thermal excitations. Here, we investigate the influence of temperature on the inner magnetization of the Ising-like mesospins. The model system we use for these investigations consists of elongated, stadium-shaped nanomagnets, as illustrated in Fig. 1, with an aspect ratio of length (L) : width (W) : thickness (t) = 90 : 30 : 1. Henceforth, we will refer to these magnetic elements as mesospins. We use the micromagnetic simulation package MuMax3 for all the calculations. 21 Effects such as exchange, crystalline anisotropy, and demagnetization are taken into account by means of an effective field. The mesospins are assumed to have magnetic properties close to those of Permalloy (Py), with a saturation magnetization of M_s = 10^6 A/m and an exchange stiffness of A_ex = 10^−11 J/m. The Gilbert damping constant is set to α = 0.001. The structure is divided into cells, the size of which is given by l_x × l_y × l_z = 2.5 nm × 2.5 nm × t nm. The in-plane component of the cell size is smaller than the exchange length, given by l_ex = √(2A_ex/(μ_0 M_s²)) = 4.0 nm, ensuring reliable simulation results. 21
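The quoted exchange length follows directly from the stated material parameters; a quick numerical check (the variable names are ours):

```python
import math

MU0 = 4 * math.pi * 1e-7      # vacuum permeability (T*m/A)
A_EX = 1e-11                  # exchange stiffness (J/m)
M_S = 1e6                     # saturation magnetization (A/m)

# Exchange length l_ex = sqrt(2 * A_ex / (mu0 * M_s^2))
l_ex = math.sqrt(2 * A_EX / (MU0 * M_S**2))
print(f"l_ex = {l_ex * 1e9:.1f} nm")   # ~4.0 nm, consistent with the value quoted above
```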
A typical problem that appears when simulating magnetization dynamics in a micromagnetic framework is the appearance of a Van Hove singularity, resulting from the discretization of the magnetic continuum, and the corresponding cutoff for spin waves with a wavelength smaller than twice the cell size. 22 However, with the current cell size, we find that this singularity is moved far beyond 100 GHz, i.e., much larger than the frequency region of interest (f < 30 GHz). The temperature is simulated by a time-varying thermal field H_i^therm(t), with the following properties:

⟨H_i^therm(t)⟩ = 0,
⟨H_i^therm(t) · H_j^therm(t′)⟩ = D δ(t − t′) δ_ij,

where i and j denote cell indices, D is the power of the fluctuations, δ(t) is the Dirac delta function, and δ_ij is the Kronecker delta. The first equation implies that the thermal field vanishes upon averaging. The second equation defines delta correlations both in space and time. The delta correlation in time is justified by considering that the correlation time of the thermal fluctuations is a few picoseconds, i.e., of the order of the inverse Debye frequency, as thermal fluctuations of the spin degrees of freedom originate from interaction with phonons. This timescale is much smaller than that typical of magnetization dynamics. The delta correlation in space is justified by considering that the correlation length of thermal fluctuations is typically a few unit cells, i.e., much smaller than the micromagnetic cell size. 14,22 As such, the thermal field in a micromagnetic framework is effectively random in space and time, which is numerically realized through a random vector g, the size of which varies with a Gaussian distribution around unity and whose direction is randomized for every time step and cell. The fluctuation power D can be found from the fluctuation-dissipation theorem, 23 and thus the expression for the thermal field becomes

H_i^therm(t) = g_i(t) √( 2α k_B T / (μ_0 M_s γ V Δt) ),   (1)

where α is the Gilbert damping constant, k_B is the Boltzmann constant, T is the temperature, M_s is the saturation magnetization, γ is the gyromagnetic ratio, V is the volume of the cell, and Δt is the time step. A sixth order Runge-Kutta-Fehlberg solver is used in MuMax3 to calculate the thermal fluctuations using adaptive time steps. 24 The spatial and temporal randomness of the field ensures excitation of all eigenmodes in the structures, as opposed to methods where more homogeneous magnetic fields are used for the excitations. 25 The time window of the simulations is typically 25 ns, within which the magnetization vector m(x, y, t) is recorded every 5 ps, resulting in a frequency resolution of 0.04 GHz and a range of 0-100 GHz. In order to obtain reliable spectra, each simulation is run four times with different thermal seeds, after which the resulting spectra are averaged. The spatial dependence of the magnon amplitudes can be found by taking the Fourier transform of the fluctuating components via m_{y,z}(x, y, f) = F{m_{y,z}(x, y, t)}. 26,27 Furthermore, the spatial dependence can be averaged out in order to obtain the spectrum, via ⟨m_{y,z}(x, y, f)⟩_{x,y} = m_{y,z}(f). The magnon spectral density n can be extracted from m_{y,z}(f) by using the following relation: 28 In the thermodynamic limit, the magnon spectrum is continuous for isotropic ferromagnets. When the size is finite, a gap will be obtained at |k| = 0. Figure 2 shows the full spectrum of magnons per unit area for two different mesospin sizes, taken at T = 100 K.
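The stated sampling parameters (5 ps interval, 25 ns window) fix the frequency resolution and range quoted above. The following sketch illustrates how such a spectrum can be obtained from a fluctuating magnetization component; the toy signal is a placeholder for the simulated m_y(t), and the function names are ours.

```python
import numpy as np

DT = 5e-12            # sampling interval: 5 ps
T_WINDOW = 25e-9      # simulation window: 25 ns
N = int(T_WINDOW / DT)

def magnon_spectrum(m_t):
    """Amplitude spectrum of a fluctuating magnetization component sampled every 5 ps."""
    m_f = np.fft.rfft(m_t - m_t.mean())          # remove the static component before transforming
    freqs = np.fft.rfftfreq(len(m_t), d=DT)      # frequency axis in Hz
    return freqs, np.abs(m_f)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.arange(N) * DT
    # Toy signal: a 6.3 GHz and a 2.5 GHz mode buried in noise (placeholder for simulated m_y(t)).
    m_y = (0.01 * np.sin(2 * np.pi * 6.3e9 * t)
           + 0.005 * np.sin(2 * np.pi * 2.5e9 * t)
           + 0.002 * rng.standard_normal(N))
    freqs, amp = magnon_spectrum(m_y)
    print(f"frequency resolution: {freqs[1] / 1e9:.3f} GHz, max frequency: {freqs[-1] / 1e9:.0f} GHz")
    print(f"strongest mode at {freqs[np.argmax(amp)] / 1e9:.2f} GHz")
```

With these settings the resolution comes out at 0.04 GHz and the maximum resolvable frequency at 100 GHz, matching the values given in the text.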
Standing magnon modes emerge in the longitudinal and transversal directions, the order of which we indicate with the integers v and w, respectively. The uniform (v, w) = (0, 0) mode shows up at f = 6.3 GHz for the mesospin with L = 450 nm and splits into higher order longitudinal modes as the frequency is increased. One exception to this is that the (1, 0) mode has a lower frequency than the uniform mode, whereas all the other modes with v > 1 have a frequency higher than the Kittel mode. This is a consequence of the dynamic dipolar interaction in the case that k ∥ m. In this configuration, the dispersion relation, f(k), has a minimum for k ≠ 0, i.e., a magnon with a finite wavelength has the minimum frequency. The frequency gaps between the transverse magnon modes are much larger than the gaps for the longitudinal modes, as a result of the difference in extension. In addition to the modes in the interior of the elements, we observe edge modes, the lowest order of which is seen at f = 2.5 and 5.5 GHz. The L = 270 nm mesospin shows only a single edge mode, centered around 1.8 GHz. An increase in temperature leads to an increase in occupied magnon modes, as shown in Fig. 3(a), and at low temperatures, only the lowest lying states are occupied. Between 250 and 300 K, we observe an increased occupation of states in the gap region at f < 3 GHz. To get a better picture of the change of available states with temperature, we investigated the magnon occupation numbers (MONs). We obtain this quantity using n(E, T) = D(E, T) F(E, T), where D(E, T) are the magnon occupation numbers and F(E, T) is the thermal distribution function. Magnons are bosons, following Bose-Einstein statistics. However, since each cell in the micromagnetic simulation is a coarse grained average over a large ensemble of quantum mechanical spins, a classical description of the cells should be sufficient. Therefore, we use the Rayleigh-Jeans distribution, which scales as F(E, T) ∝ T/E. 29 The MONs are calculated using D(E, T) = n(E, T)/F(E, T), and the results are plotted in Fig. 3(b). For frequencies f > 5 GHz, we observe a slight decrease in the resonance frequencies with increasing temperature, which likely results from a decreased effective field due to a lower overall magnetization, a mechanism which is captured by Bloch's law. Additionally, mode hybridization occurs for two modes located around 10 GHz. Using amplitude maps (not shown here), we find that the mode with the lower frequency is a center mode and the higher frequency mode is an edge mode. The most striking difference in the temperature dependence of D(E, T) can be seen in the low frequency region, i.e., f < 3 GHz, which is populated exclusively by edge modes. We can discern three different thermal regimes for the behavior of these edge modes, the numbers of which are indicated in Fig. 3(b). Below 200 K, the mesospin with L = 450 nm features a low frequency mode at 2.4 GHz, which decreases slightly in frequency as the temperature is increased. At T > 200 K, we observe the emergence of additional states spanning the range of 0 to 2 GHz, which implies a transition between two different regimes. A third regime can be identified, as the L = 270 nm mesospin features a mode at a similar position that increases significantly in frequency as the temperature is increased, with no available states below these frequencies. The mode moves from 1 GHz at low temperatures to 2.4 GHz at 300 K, and the peak position of this mode scales as f ∝ T^{1/4}.
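The division by the Rayleigh-Jeans factor is a simple pointwise operation once the spectral density is available on a frequency grid; a minimal sketch, with a placeholder spectral density and our own function names, is given below.

```python
import numpy as np

H = 6.62607015e-34      # Planck constant (J*s)
K_B = 1.380649e-23      # Boltzmann constant (J/K)

def rayleigh_jeans(E, T):
    """Classical occupation factor F(E, T) ~ k_B T / E."""
    return K_B * T / E

def occupation_numbers(n_spectral, freqs, T):
    """D(E, T) = n(E, T) / F(E, T), with E = h f."""
    E = H * freqs
    return n_spectral / rayleigh_jeans(E, T)

if __name__ == "__main__":
    freqs = np.linspace(0.5e9, 30e9, 60)         # 0.5-30 GHz grid
    n_spectral = 1.0 / freqs                     # placeholder spectral density
    D = occupation_numbers(n_spectral, freqs, T=100.0)
    print(D[:3])
```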
We observe for this particular mode that the ellipticity decreases with increasing temperature (see the supplementary material). For the mesospin with L = 360 nm, we find behavior indicative of a transition between these three regimes, with the first transition occurring at T = 10 K and the second occurring at T = 200 K. In order to uncover the origin of these transitions, we inspect the averaged transverse m_y components at the edges of the L = 450, 360, and 270 nm mesospins, as shown in the upper, middle, and lower panels of Fig. 4(a), for a temperature of T = 50 K. At this temperature, the mesospin with L = 450 nm is in regime I and can be seen to oscillate around a non-zero value of m_y. We interpret this result as the mesospin being locked into either a C- or an S-state [see the inset of the top panel in Fig. 4(a)], where it precesses. The mesospin with L = 360 nm is in regime II, which is characterized by irregular switching of the edge magnetization in the transverse direction, which occurs at longer timescales than the precessional motion in the locked C- or S-state [see Fig. 4(a), middle panel, and the supplementary material]. This slow switching process explains the increased intensities at lower frequencies in the magnon occupation numbers [see Fig. 3(b), left panel]. These two distinct regimes imply the presence of an energy barrier, the number of transitions over which is determined by the temperature. The L = 270 nm mesospin is in regime III over the whole temperature interval and oscillates constantly around m_y = 0, as illustrated in the lower panel of Fig. 4(a). This behavior is consistent with the absence of an energy barrier between C- and S-states for this mesospin size, meaning that the L = 270 nm mesospin does not have an S- or C-state configuration as a ground state. It is, thus, a balance between demagnetization energy and exchange energy which determines whether an energy barrier is formed, a line of reasoning that is similar to flux closure/single domain magnetization transitions in mesoscopic structures of low aspect ratio. 30 We can estimate the height of the barriers in the L = 360 nm and L = 450 nm mesospins using the Arrhenius law (τ = τ_0 e^{ΔE/(k_B T)}), as illustrated in Fig. 4(b). Here, τ is the inverse switching rate, given by the average time spent in either configuration having a positive or negative m_y, τ_0 is the inverse attempt frequency, and ΔE is the height of the energy barrier. The variable τ can be found by dividing the total simulation time by the number of switches. One should, in this case, be careful not to take into account "false" switching events, i.e., the edge magnetization must spend sufficient time in a metastable state. 31 We demand that the time it takes to equilibrate should be larger than τ_eq = 1 ns and disregard switching events that occur within a shorter interval after an initial switching event. A long simulation of 1 μs was performed in order to obtain sufficient statistics on the switching. The uncertainty is determined from the deviation of switching rates between the two different edges. By fitting the Arrhenius law, we find a significant difference in the activation energy: 6 and 57 meV for the L = 360 nm and L = 450 nm mesospins, respectively. The inverse attempt frequencies are τ_0 = 3.08 × 10⁻⁹ s (L = 360 nm) and τ_0 = 1.35 × 10⁻⁹ s (L = 450 nm). The energy landscapes and the corresponding values for the energy barriers are illustrated in Fig. 4(c).
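The Arrhenius fit amounts to a linear regression of ln τ against 1/(k_B T). A minimal sketch of such a fit is shown below; the switching times are synthetic placeholders generated from the quoted L = 450 nm values, purely to illustrate the procedure.

```python
import numpy as np

K_B = 8.617333e-5     # Boltzmann constant in eV/K

def arrhenius_fit(temperatures, tau):
    """Fit ln(tau) = ln(tau_0) + dE / (k_B T); returns (dE in eV, tau_0 in s)."""
    slope, intercept = np.polyfit(1.0 / (K_B * np.asarray(temperatures)), np.log(tau), 1)
    return slope, np.exp(intercept)

if __name__ == "__main__":
    # Placeholder switching times (s) at a few temperatures, mimicking tau = tau_0 exp(dE / k_B T).
    T = np.array([50.0, 100.0, 150.0, 200.0, 250.0])
    tau_synthetic = 1.35e-9 * np.exp(0.057 / (K_B * T))
    dE, tau0 = arrhenius_fit(T, tau_synthetic)
    print(f"dE = {dE * 1e3:.1f} meV, tau_0 = {tau0:.2e} s")   # recovers ~57 meV and ~1.35e-9 s
```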
The data points for the two highest temperatures belonging to the 360 nm nanomagnet are seen to deviate from the otherwise linear relation. This is due to too many false switching events, thereby masking true switching events, and therefore, we assign no weight to these data points in the fitting procedure. This shortcoming calls for more sophisticated methods to more accurately determine the dynamics of the switching. Additionally, we evaluated the correlation of the magnetic state of the edges by calculating the Pearson correlation coefficient, ρ(m_{y,1}, m_{y,2}), numerically, as described in the supplementary material. All the tested temperatures and mesospin sizes show a weak anticorrelation within the range of −8% < ρ < 0%, except for the constant switching of the L = 360 nm mesospin at 250 K (ρ = 0.8%) and the L = 450 nm mesospin at 50 K (ρ = −16%). The weak anticorrelation likely originates from the weak stray-field interaction between the m_y components, which favors oppositely aligned magnetization in the lateral direction. The observed fluctuations might play a strong role in the spectral response and symmetry breaking in vertices of ASI arrays, with temperature, as presented here, being a further tuning parameter [17]. The theoretical and simulation approach described for addressing thermal excitations in mesoscopic magnetic systems is potentially useful for resolving emergent collective behavior. The latter is particularly important for solving issues related to the ordering and thermal excitations of coupled mesospins [13,32-34]. This knowledge may even find its application in logic and computation [35], such as the design of neuromorphic-like architectures based on ASIs and their magnonic properties [36]. See the supplementary material for the details concerning the comparison of magnon mode frequencies to analytic expressions. We also include raw temperature-dependent switching data used for the determination of relaxation times for our structures.
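The edge-correlation analysis amounts to a single Pearson coefficient between the two edge time series. A minimal sketch, with synthetic, weakly anticorrelated data standing in for the simulated traces:

import numpy as np

def edge_correlation(m_y_edge1, m_y_edge2):
    # Pearson correlation coefficient between the two edge m_y time series
    return np.corrcoef(m_y_edge1, m_y_edge2)[0, 1]

# Synthetic placeholder traces, weakly anticorrelated by construction
rng = np.random.default_rng(seed=0)
edge1 = rng.standard_normal(10_000)
edge2 = -0.1 * edge1 + rng.standard_normal(10_000)
rho = edge_correlation(edge1, edge2)  # small negative value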
3,722.6
2020-11-09T00:00:00.000
[ "Physics" ]
Presenting a Social Value Database and Simulator for Public Health Abstract Background There is increasing recognition that Public Health Institutes need to build on the traditional value for money approach, to find ways to capture, measure and show the full range of their outcomes, impacts and related value. As part of a drive to measure value and impact in public health and demonstrate how investment in health can contribute to an Economy of Well-being, Public Health Wales has developed an interactive database to capture and illustrate the social value of public health services and interventions. Methods Scoping reviews of both academic and grey literature were undertaken to populate a database of health economics evaluations of public health interventions, focusing on Social Return on Investment (SROI). In addition, a simulation methodology was developed which allows the evidence to be manipulated and made relevant to individual contexts to help inform investment decisions at a local level. Results To date, the database has accumulated in excess of 50 SROI evaluations of various public health interventions, across areas including mental health, behaviour change, physical activity, nutrition, employment and primary care. The evaluations are based on European and International contexts, are published in both grey and academic sources, and are of varying quality. Conclusions SROI is a credible method for measuring the value of wider social, economic and environmental outcomes achieved from public health interventions. The Social Value Database and Simulator presents a collation of studies and analysis utilising innovative health economics methods. Key messages • Public Health Wales’ Social Value Database and Simulator collates economic evaluations of public health interventions, to be used by policy makers to enable improved investment in health and well-being. • Social Return on Investment is a credible method for measuring the wider impact created by public health interventions.
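For readers unfamiliar with SROI, the ratio is typically reported as the discounted value of modeled social outcomes per unit of investment. The sketch below illustrates only that arithmetic; the discount rate and figures are placeholders and do not describe how the Public Health Wales database or simulator is implemented.

def sroi_ratio(yearly_social_value, investment, discount_rate=0.035):
    # Present value of modeled social outcomes divided by the investment
    pv = sum(v / (1.0 + discount_rate) ** t
             for t, v in enumerate(yearly_social_value, start=1))
    return pv / investment

# Hypothetical intervention with placeholder figures, for illustration only
ratio = sroi_ratio(yearly_social_value=[40_000, 55_000, 60_000],
                   investment=50_000)
print(f"Each 1 unit invested returns about {ratio:.2f} in modeled social value")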
Background: The validity of self-reported disease prevalence estimates in health surveys may be low when compared to data from medical records in administrative registers. Such discrepancies reflect a low content validity of the survey question, which may ultimately compromise the application of these survey data for public health purposes. The aim of the present study was to examine the agreement of self-reports of seven diseases with data from administrative registers, both overall and by sociodemographic characteristics. Methods: Prevalence estimates of self-reported current and/or previous diabetes, asthma, rheumatoid arthritis, osteoporosis, myocardial infarction, apoplexy, and cancer, respectively, were derived from the Danish National Health Survey in 2017 (n = 183,372 adults aged ≥16 years). Individual-level data were linked to registry data on the same diseases. Sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), kappa, and total agreement between self-reported and registry-documented prevalence estimates were examined. Results: For all included diseases, the specificity was >92%, and the sensitivity varied between 59% (cancer) and 95% (diabetes). NPV was >94% for all diseases and PPV varied between 13% (rheumatoid arthritis) and 93% (cancer). Total agreement varied between 91% (asthma) and 99% (diabetes), whereas kappa was lowest for rheumatoid arthritis (0.21) and highest for diabetes (0.88). Sociodemographic variables were significantly associated with total agreement, with sex, age, and educational level exhibiting the strongest associations. Conclusions: Overall, total agreement, specificity, and NPV between self-reported and registry-documented disease prevalence estimates are high, but PPV and kappa vary greatly between diseases. The latter findings reflect a low content validity of the applied survey question for specific diseases. This should be taken into account when interpreting similar results from surveys. Key messages: The validity of self-reported disease prevalence estimates may be low when compared to data from medical records. We found positive predictive values and kappa to vary greatly between diseases. Future studies should aim at designing survey questions properly in order to ensure a high content validity of the applied question.
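All of the agreement statistics quoted above can be derived from the 2×2 table of self-reported versus registry-documented disease status. A minimal sketch, treating the registry as the reference standard and omitting the survey weighting and stratification such an analysis would normally include:

def agreement_metrics(self_report, registry):
    # Compare 0/1 self-reported status with 0/1 registry status
    pairs = list(zip(self_report, registry))
    tp = sum(1 for s, r in pairs if s == 1 and r == 1)
    fp = sum(1 for s, r in pairs if s == 1 and r == 0)
    fn = sum(1 for s, r in pairs if s == 0 and r == 1)
    tn = sum(1 for s, r in pairs if s == 0 and r == 0)
    n = tp + fp + fn + tn
    observed = (tp + tn) / n  # total agreement
    # Expected (chance) agreement for Cohen's kappa
    expected = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "total_agreement": observed,
        "kappa": (observed - expected) / (1 - expected),
    }

# Tiny made-up example (not survey data): 1 = disease, 0 = no disease
survey   = [1, 0, 1, 1, 0, 0, 0, 1, 0, 0]
registry = [1, 0, 0, 1, 0, 0, 1, 1, 0, 0]
metrics = agreement_metrics(survey, registry)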
1,274
2022-10-01T00:00:00.000
[ "Medicine", "Economics" ]
Stationary Anonymous Sequential Games with Undiscounted Rewards Stationary anonymous sequential games with undiscounted rewards are a special class of games that combine features from both population games (infinitely many players) and stochastic games. We extend the theory for these games to the cases of total expected reward as well as to the expected average reward. We show that equilibria in the anonymous sequential game correspond to the limits of those of related finite-population games as the number of players grows to infinity. We provide examples to illustrate our results. Games with a continuum of atomless players go back to the road traffic model introduced by Wardrop in 1952 to model the choice of routes of cars, where each driver, modeled as an atomless player, minimizes its expected travel delay. In Wardrop's model, there may be several classes of players, each corresponding to a different origin-destination pair. The goal is to determine what fraction of each class of players would use the different possible paths available to that class. The equilibrium is known to behave as the limit of the equilibria obtained in games with finitely many players, as their number tends to infinity [2]. It is also the limit of Nash equilibria for some sequence of dynamic games in which randomness tends to average away as the number of players increases [3]. Another class of games that involves a continuum of atomless players is evolutionary games, in which pairs of players that play a matrix game are selected at random; see [4]. Our objective is again to predict the fraction of the population (or of populations in the case of several classes) that plays each possible action at equilibrium. A Wardrop-type definition of equilibrium can be used, although there has been a particular interest in a more robust notion of equilibrium strategy, called Evolutionary Stable Strategy (we refer the reader to [5,6]). In both games described above, the player's type is fixed, and the actions of the players directly determine their utilities. Extensions of these models are needed whenever the player's class may change randomly in time, and when the utility of a player depends not only on the current actions of players but also on future interactions. The class of the player is called its individual state. The choice of an action by a player should then take into account not only the game played at the present state but also the future state evolution. We are interested in particular in the case where the action of a player impacts not only the current utility but also the transition probabilities to the next state. In this paper, we study this type of extension in the framework of the first type of game, in which a player interacts with an infinite number of other players. (In the road traffic context, the interaction is modeled through link delays, each of which depends on the total amount of traffic that uses that link.) We build upon the framework of anonymous sequential games, introduced by Jovanovic and Rosenthal in 1988 in [7]. In that work, each player's utility is given as the expected discounted utility over an infinite horizon. The theory of anonymous sequential games with discounted utilities was further developed in [8][9][10][11][12]. Conditions under which Nash equilibria in finite-player discounted-utility games converge to equilibria of the respective anonymous models were analyzed in [13][14][15][16]. Applications of this kind of model have also been numerous: from stochastic growth [17] and industry dynamics [18][19][20][21] models to dynamic auctions [22][23][24] and strategic market games [25,26].
Surprisingly, the cases of expected average utility and total expected utility have remained open ever since 1988, even though models of this kind have been applied in some networking contexts [27,28]. Our main contribution in this paper is to give conditions under which such extensions are possible. Similar extensions have been proposed and studied for the framework of evolutionary games in [29,30]. The analysis there turns out to be simpler, since the utility in each encounter between two players is bilinear. The structure of the paper is as follows. We begin with a section that presents the model and introduces in particular the expected average and the total expected reward criteria. The two following sections establish the existence of stationary equilibria for the average and the total reward (Sects. 3 and 4, respectively). Section 5 is concerned with showing that the equilibria for the models of the two previous sections, which deal with an infinite number of players, are limits of those obtained for some games with a large finite number of players, as this number goes to infinity. We end with two sections that show how our results apply to some real-life examples, followed by two paragraphs containing open problems and conclusions.

The Model The anonymous sequential game is described by the following objects:
- We assume that the game is played in discrete time, that is, t ∈ {1, 2, . . .}.
- The game is played by an infinite number (continuum) of players. Each player has his own private state s ∈ S, changing over time. We assume that S is a finite set.
- The global state, μ_t, of the system at time t is a probability distribution over S. It describes the proportion of the population that is at time t in each of the individual states. We assume that each player has the ability to observe the global state of the game, so from his point of view the state of the game at time t is (s_t, μ_t) ∈ S × Δ(S).
- The set of actions available to a player in state (s, μ) is a nonempty set A(s, μ), with A := ∪_{(s,μ)∈S×Δ(S)} A(s, μ) a finite set. We assume that the mapping A is an upper semicontinuous function.
- The global distribution of the state-action pairs at any time t is given by the measure τ_t ∈ Δ(S × A). The global state of the system μ_t is the marginal of τ_t on S.
- An individual's immediate reward at any stage t, when his private state is s_t, he plays action a_t, and the global state-action measure is τ_t, is u(s_t, a_t, τ_t). It is a (jointly) continuous function.
- The transitions are defined for each individual separately with the transition function Q : S × A × Δ(S × A) → Δ(S), which is also a (jointly) continuous function. We will write Q(·|s_t, a_t, τ_t) for the distribution of the individual state at time t + 1, given his state at time t, s_t, his action a_t, and the state-action distribution of all the players.
- The global state at time t + 1 is obtained by aggregating the individual transitions: μ_{t+1}(·) = Σ_{(s,a)∈S×A} τ_t(s, a) Q(·|s, a, τ_t).

Any function f : S × Δ(S) → Δ(A) satisfying supp f(s, μ) ⊂ A(s, μ) for every s ∈ S and μ ∈ Δ(S) is called a stationary policy. We denote the set of stationary policies in our game by U.
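A minimal sketch of these objects for finite S and A is given below. The reward and transition functions are placeholders, and the population update implements the natural aggregation implied by the definitions of τ_t and Q; it is meant only to make the bookkeeping concrete, not to reproduce any specific game from the paper.

import numpy as np

S, A = 3, 2  # sizes of the finite individual-state and action sets

def Q(s, a, tau):
    # Placeholder transition kernel Q(.|s, a, tau) returning a distribution
    # over S; in the model it may depend (continuously) on tau.
    p = np.full(S, 1.0 / S)
    p = p + 0.05 * (a + 1) * (np.arange(S) == s) + 0.1 * tau.sum(axis=1)
    return p / p.sum()

def state_action_distribution(mu, policy):
    # tau(s, a) = mu(s) * f(a|s) for a stationary policy f
    return mu[:, None] * policy

def next_global_state(tau):
    # mu_{t+1}(s') = sum_{s, a} tau_t(s, a) * Q(s'|s, a, tau_t)
    mu_next = np.zeros(S)
    for s in range(S):
        for a in range(A):
            mu_next += tau[s, a] * Q(s, a, tau)
    return mu_next

# One step of the population dynamics under a uniform stationary policy
mu = np.array([0.5, 0.3, 0.2])
policy = np.full((S, A), 1.0 / A)
tau = state_action_distribution(mu, policy)
mu = next_global_state(tau)  # remains a probability vector over S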
Average Reward We define the long-time average reward of a player using stationary policy f, when all the other players use policy g and the initial state distribution (both of the player and his opponents) is μ_1, as the long-run average of his expected stage rewards u(s_t, a_t, τ_t); we denote it by J(μ_1, f, g). Further, we define a stationary strategy f and a measure μ ∈ Δ(S) to be an equilibrium in the long-time average reward game iff J(μ, f, f) ≥ J(μ, g, f) for every other stationary strategy g ∈ U, and, if μ_1 = μ and all the players use policy f, then μ_t = μ for every t ≥ 1.

Remark 2.1 The definition of the equilibrium used here differs significantly from that used in [7]. There the stationary equilibrium is defined as a state-action distribution τ with τ_S = μ, such that the Bellman equation for a player maximizing his discounted reward against others playing according to τ is satisfied τ-a.s. Our definition directly relates it to the reward functionals.

Total Reward To define the total reward in our game, let us distinguish one state in S, say s_0, and assume that A(s_0, μ) = {a_0} independently of μ for some fixed a_0. Then the total reward of a player using stationary policy f, when all the other players apply policy g, the initial distribution of the states of his opponents is μ_1, and his own is ρ_1, is defined as the expected sum of his stage rewards up to T, where T is the moment of the first arrival of the process s_t at s_0. We interpret it as the reward accumulated by the player over the whole of his lifetime. State s_0 is an artificial state (so is action a_0), denoting that a player is dead. μ_1 is the distribution of the states across the population when he is born, while ρ_1 is the distribution of initial states of new-born players. The fact that after some time the state of a player can again become different from s_0 should be interpreted as meaning that after some time the player is replaced by some new-born one. The notion of equilibrium for the total reward case will be slightly different from that for the average reward. We define a stationary strategy f and a measure μ ∈ Δ(S) to be in equilibrium in the total reward game iff the total reward of a player using f is no smaller than that obtained with any other stationary strategy g ∈ U, where the initial individual-state distribution is ρ = Q(·|s_0, a_0, τ(f, μ)) and (τ(f, μ))_{sa} = μ_s (f(s))_a for all s ∈ S, a ∈ A, and, if μ_1 = μ and all the players use policy f, then μ_t = μ for every t ≥ 1.

Existence of the Stationary Equilibrium in Average-Reward Case In the present section, we present a result about the existence of stationary equilibrium in anonymous sequential games with long-time average reward. We prove it under the following assumption: (A1) The set of individual states of any player, S, can be partitioned into two sets S_0 and S_1 such that, for every state-action distribution of all the other players τ ∈ Δ(S × A), the states in S_0 are transient under every stationary policy, while the set S_1 is strongly communicating. There are a couple of equivalent definitions of the "strongly communicating" property used above appearing in the literature. We follow the one formulated in [31], saying that a set S_1 of states in a Markov Decision Process is strongly communicating iff there exists a stationary policy f_τ such that the probability of going from any state s ∈ S_1 to any other s ∈ S_1 for a player using f_τ is positive (note that this stationary policy may depend on τ, as we consider the properties of the Markov chain of individual states of a player under a fixed state-action distribution of all the other players). Assumption (A1) appears often in the literature on Markov decision processes with average cost and is referred to as the "weakly communicating" property; see e.g., [32], chapters 8 and 9 (all the properties appearing in these assumptions are commonly used in the Markov decision process literature; readers who are not familiar with them, or who are interested in the mutual relationships between these properties, are referred to [31,33]). It guarantees that the optimal gain in a Markov decision process satisfying it is independent of its initial state.
As we will see, it also guarantees that this optimal gain is continuous in τ, which will be crucial in proving the existence of an equilibrium in our game. It is also worth noting that without assumption (A1) the average-reward anonymous sequential game may have no stationary equilibria at all. This is shown in the following example (which is a reworking of Example 3 in [34]).

Example 3.1 Let us consider an average-reward anonymous sequential game with S = {1, 2, 3}, in which the decision is only made by players in state 1. For simplicity, we will denote this decision by a in what follows. The immediate rewards for the players depend only on their private state as follows: u(s) = 3 − s. Finally, the transition matrix of the Markov chain of private states of each player is …; it violates assumption (A1), as, e.g., when τ_11 ≥ 1/4, for the pure strategy assigning a = 0 in state 1, states 1 and 2 are absorbing in the Markov chain of individual states of a player and state 2 is transient, while when it assigns a = 1, states 1 and 2 become communicating. We will show that such a game has no stationary equilibrium. Suppose that (f, μ*) is an equilibrium. We will consider two cases:
(a) τ_11 ≥ 1/4 for the τ corresponding to μ* and f. Then p* = 0, and so if a player uses action 1 with probability β, the stationary state of the chain of his states, when his initial state's distribution is μ*, is (…, μ*_3), and his long-time average reward is a strictly decreasing function of β (recall that μ*_1 ≥ τ_11 ≥ 1/4). Thus his best response to f is the policy that assigns probability 1 to action a = 0 in state 1. But if all the players use such a policy, τ_11 = 0, which contradicts our assumption that it is no less than 1/4.
(b) τ_11 < 1/4. Then it can easily be seen that the stationary state of any player's chain, when he uses action 1 with probability β ∈ [0, 1], is independent of the initial distribution of his state μ* and equal to (2/(5+β), β/(5+β), 3/(5+β)), which gives him a reward of (4+β)/(5+β) = 1 − 1/(5+β), which is clearly a strictly increasing function of β. Thus the best response to f is to play action a = 1 with probability 1, which, if applied by all the players, results in the stationary state μ* = (1/3, 1/6, 1/2) and consequently τ_11 = 1/3, contradicting the assumption that it is less than 1/4.
Thus this game cannot have a stationary equilibrium.

Now we are ready to formulate the main result of this section.

Theorem 3.1 Every anonymous sequential game with long-time average reward satisfying (A1) has a stationary equilibrium.

Before we prove the theorem, let us introduce some additional notation. We will consider a Markov decision process M(τ) of an individual faced with a fixed (over time) distribution of state-action pairs of all the other players. For this fixed τ ∈ Δ(S × A), let J_τ(f, μ) denote the long-time average payoff in this process when the player uses stationary policy f and the initial distribution of states is μ. By well-known results from dynamic programming (see e.g., [32]), in a weakly communicating Markov decision process (this is such a process by (A1)), the optimal gain is independent of μ.
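The computations in Example 3.1 — a stationary distribution of the individual-state chain and the resulting long-run average reward as a function of the mixing probability β — can be reproduced mechanically. The transition matrix of the example is not given in the text above, so the matrix in the sketch below is a placeholder with a similar flavor; only the helper functions are the point.

import numpy as np

def stationary_distribution(P):
    # Invariant distribution pi of a finite Markov chain (rows of P sum to
    # 1): the left eigenvector of P for eigenvalue 1, normalized to sum to 1
    vals, vecs = np.linalg.eig(P.T)
    k = int(np.argmin(np.abs(vals - 1.0)))
    pi = np.real(vecs[:, k])
    return pi / pi.sum()

def average_reward(beta, transition_of_beta, rewards=(2.0, 1.0, 0.0)):
    # Long-run average of u(s) = 3 - s for a player mixing action 1 with
    # probability beta in state 1, given the induced transition matrix
    pi = stationary_distribution(transition_of_beta(beta))
    return float(pi @ np.asarray(rewards))

def transition_of_beta(beta):
    # Placeholder matrix (not the one from the example): a larger beta
    # shifts mass in state 1 from state 3 toward state 2
    return np.array([
        [0.4, 0.1 + 0.2 * beta, 0.5 - 0.2 * beta],
        [0.5, 0.0, 0.5],
        [0.3, 0.1, 0.6],
    ])

# Average reward for a few mixing probabilities beta
rewards_vs_beta = [average_reward(b, transition_of_beta) for b in (0.0, 0.5, 1.0)]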
We denote this uniform optimal gain by G(τ ) that is Lemma 3.2 states a crucial feature of G. It is preceded by another technical one. Lemma 3.1 Suppose that μ(n) are invariant measures of Markov chains with common finite state set S and with transition matrices P(n), respectively. Then if μ(n) → μ and P(n) → P, then μ is an invariant measure for the Markov chain with transition matrix P. Proof By the definition of invariant measure every μ(n) satisfies for every s ∈ S If we pass to the limit, we obtain which means that μ is an invariant measure for the Markov chain with transition matrix P. Lemma 3.2 Under (A1), G is a continuous function of τ . Proof Let τ n be a sequence of probability measures on S × A converging to τ . Since all of the MDPs we consider here have finite state space, each of them has a stationary optimal policy 6 , say policy f n is optimal in M(τ n ). Next, let μ n be an invariant measure corresponding to strategy f n in M(τ n ) (by (A1) such a measure exists, maybe more than one). Such an invariant measure must satisfy otherwise f n would not be optimal. Note next that U and (S) are compact sets. Thus, there exists a subsequence of f n converging to some f 0 ∈ U and then a subsequence of μ n converging to some μ 0 . Without a loss of generality, we may assume that these are sequences f n and μ n that are convergent. By the continuity assumption about Q and Lemma 3.1, μ 0 is an invariant measure corresponding to f 0 in M(τ ). If we next pass to the limit in (1), we get To show the inverse inequality, suppose that f ∈ U is an optimal policy in M(τ ). By (A1) the states in the Markov chain of individual states for a user applying f in M(τ ) can be divided into a class of transient states and a number of communicating classes. Let S * be a communicating class such that the ergodic payoff in this class is equal to G(τ ). Define now the policies g n in the following way: (Here f τ n is a communicating policy derived from assumption (A1)). One can easily notice that under these policies applied in M(τ n ), all the states from S \ S * would be transient. Now let μ n be an invariant measure corresponding to g n in M(τ n ). Again using Lemma 3.1 we can show that the limit (possibly over a subsequence) of the sequence μ n , say μ 0 is an invariant measure of the limit of g n , which is equal to f on S * . At the same time, μ 0 s = 0 for s ∈ S \ S * , so we can write (from the definition of invariant measure) for every s ∈ S * : which means that μ 0 is also an invariant measure for f , and it is entirely concentrated on S * . But since S * is a communicating class under f , this implies On the other hand ending the proof. Proof of Theorem3. 1 We consider two multifunctions of τ ∈ (S × A): and let (τ ) := B(τ ) ∩ C(τ ). We will show that has a fixed point, and then that this fixed point corresponds to an equilibrium in the game. First, note that C(τ ) is the set of all the possible stationary state-action measures in M(τ ). By Theorem 1 in [35] it is also the set of occupation measures corresponding to all the possible stationary policies and all the possible initial distributions of states in M(τ ). Since G(τ ) is the optimal reward in this MDP, there exists a stationary policy and thus an occupation measure corresponding to it, for which the reward is equal to G(τ ). This implies that for any τ , (τ ) is nonempty. Further, note that for any τ , both B(τ ) and C(τ ) are trivially convex and thus so is their intersection. 
Finally, as an immediate consequence of Lemma 3.1, C has a closed graph. On the other hand, the closedness of the graph of B is a trivial consequence of the continuity of u (by assumption) and G (by Lemma 3.2). The graph of is the intersection of the two and thus is also closed. The existence of a fixed point of follows now from Glickberg's fixed point theorem [36]. Now suppose τ * is this fixed point. Since it is a fixed point of C, it satisfies This implies that if the initial distribution of states is τ * S , and players apply stationary policy f defined for any τ ∈ (S × A) by 7 for any fixed a 0 ∈ A, the distribution of state-action pairs in the population is always τ * . On the other hand, since τ * ∈ B(τ * ), f is the best response of a player when the state-action distribution is always τ * and thus together with τ * S an equilibrium in the game. Existence of the Stationary Equilibrium in Total Reward Case In this section, we show that also for the total reward case under some fairly mild assumptions, the game has an equilibrium. What we will assume is the following: (T1) There exists a p 0 > 0 such that for any fixed state-action measure τ , and under any stationary policy f , the probability of getting from any state s ∈ S \ {s 0 } to s 0 in |S| − 1 steps is not smaller than p 0 . We write that this assumption is fairly mild, as it is not only necessary for our theorem to hold but also for the total cost model to make sense, as it is trivially shown in an example below. The immediate rewards for the players depend only on their private state as follows: Finally, the transitions of the Markov chain of private states of each player are defined as follows: if his action in state 1 is 1, he moves with probability 1 to state 2; if his action is 2, he moves with probability 1 to state 3. In state 2 he moves to 1 with probability 1, while in state 3 he dies with probability 1 2 and stays in 3 with probability 1 2 . This game is completely decoupled (in the sense that neither the rewards nor the transitions of any player depend on those of the others), so it is easy to analyze. It is immediate to see that under pure stationary policy choosing action 1 in state 1, a player never dies. Moreover, he receives payoffs of 1 and −1 in subsequent periods, so his reward (the sum of these rewards over his lifetime) is not well defined. If we try to correct it by defining the total reward as the lim sup or lim inf of his accumulated rewards after n periods of his life, we obtain a strange situation that the policy choosing action 1 is optimal for each of the players in the lim sup version of the reward and his worst possible policy for the lim inf version. Remark 4.1 The total reward model, specifically when (T1) is assumed, bears a lot of resemblance to an exponentially discounted model where the discount factor is allowed to fluctuate over time, which suggests that the results in the two models should not differ much. Note, however, that there is one essential difference between these two models. The 'discount factor' in the total reward model (which is the ratio of those who stay alive after a given period to those who were alive at its beginning) appears not only in the cumulative reward of the players but also in the stationary state of the game, and thus also in the per-period rewards of the players. Thus this is an essentially different (and slightly more complex) problem. 
On the other hand, the fact that each of the players lives for a finite period and then is replaced by another player, with a fixed fraction of players dead and fixed fractions of players in each of the states when the game is in a stationary state, makes this model similar to the average reward one. In fact, using the renewal theorem, we can relate the rewards of the players in the total reward model with those in the respective average-reward model. This relation is used a couple of times in our proofs. Now we can formulate our main result of this section. Theorem 4.1 Every anonymous sequential game with total reward satisfying (T1) has a stationary equilibrium. As in the case of the average reward, we start by defining some additional notation. Let M(τ ) be a modified Markov decision process of an individual faced with a fixed (over time) distribution of state-action pairs of all the other players. This modification is slight but important, namely we assume that in M(τ ) the state s 0 is absorbing, and the reward in this state is always 0. This is a classic MDP with total reward, as considered in the literature. For this fixed τ ∈ (S × A), let J τ ( f, ρ) denote the total payoff in this process when the player uses stationary policy f , and the initial distribution of states is ρ, and let G(τ ) denote the optimal reward in M(τ ) that is Q(·|s 0 , a 0 , τ )). We can prove the following auxiliary result: Proof By a well-known result from the theory of Markov decision processes (see e.g., [32], Lemma 7.1.8 -assumption of this lemma is satisfied as a consequence of assumption (T1) and the fact that function u is continuous on a compact set and thus bounded), J τ ( f, ρ) is the limit of the rewards in respective discounted MDPs J τ β ( f, ρ) as the discount factor β approaches 1. Since by a well-known result (see e.g., Lemma 8.5 in [37]), discounted rewards are continuous functions of stationary policies, and they are linear in the initial distribution of states, to show the continuity of J it is enough to prove that the convergence of J τ β ( f, ρ) is uniform. Let us take an ε > 0 and set M = max s,a,τ |u(s, a, τ )| (such a number exists, as u is a continuous function defined on a compact set). The probability that after m(|S| − 1) steps the state of an individual did not reach s 0 is for any stationary policy f by assumption (T1) not greater than (1 − p 0 ) m . This means that for any m, which for m big enough, say bigger than m ε , is not greater than ε 3 . Note that analogously for m ≥ m ε and any β ∈ (0, 1) Next, note that for β big enough, say β > β ε . Combining (5), (6), and (7) and using the triangle inequality, we obtain for β > β ε , which ends the proof. Proof of Theorem 4. 1 We shall consider two multifunctions of τ ∈ (S × A): and let (τ ) := B(τ ) ∩ C(τ ). We will show that has a fixed point and that it corresponds to an equilibrium in the game. First, note that (τ ) is nonempty for any τ ∈ (S × A), as any invariant measure corresponding to an optimal stationary policy in M(τ ) belongs to (τ ) (such an optimal stationary policy exists according to Theorem 7.1.9 in [32]). Next, we show that (τ ) is convex for every τ ∈ (S × A). By the renewal theorem (Theorem 3.3.4 in [38]) the total reward occupation measure corresponding to some stationary policy f is equal to the average-reward occupation measure under the same policy multiplied by the expected lifetime. 
This implies that for any ρ ∈ (τ ) (the notation used here is the same as in the definition of B(τ )) Next, note that the time spent in state s 0 , playing action a 0 is independent of the policy used by the player. We shall denote this time by T 0 . Again by the renewal theorem we can write Substituting this into (8) we obtain or equivalently The set of probability measures ρ satisfying the above equality is clearly a polytope, and hence a convex set. What we are left to show is that the graph of is closed. Suppose that τ n , τ, ρ n , ρ ∈ (S × A), τ n → τ , ρ n → ρ and ρ n ∈ (τ n ) for every n. By Lemma 3.1 ρ ∈ C(τ ). Clearly, as ρ n → ρ, also f ρ n → f ρ . Since ρ n ∈ B(τ n ), for any stationary strategy g. By Lemma 4.1 and the continuity of Q also implying that ρ ∈ B(τ ) and hence also in (τ ). This means that all the assumptions of the Glicksberg theorem [36] are satisfied, and has a fixed point. Now suppose τ * is this fixed point. Because τ * ∈ C(τ * ), it satisfies On the other hand, notice that for any stationary policy g and any initial state distribution ρ, and so τ * ∈ B(τ * ) implies that f τ * is the best response of a player, when the state-action distribution of his opponents is always τ * , and hence ( f * , τ * S ) with f * (s, τ ) ≡ f τ * (s) for any τ ∈ (S × A) is an equilibrium in the game. The Relation with Games with Finitely Many Players Main point of criticism of anonymous games in general is that the limiting situation with an infinite number of players does not exist in reality, and thus it is not sure that the results obtained for a continuum of players are relevant for the real-life ones, when the number of players is finite, but large. In this section, we present some results connecting the anonymous game models from previous sections with similar models with finitely many players. In these results we will, in addition, use the following two assumptions. (AT1) Q(·|a, s, τ ) = Q(·|a, s) for all τ ∈ (S × A) and A(·, μ) = A(·) for all μ ∈ (S). (AT2) For any f ∈ U and τ ∈ (S × A) the Markov chain of individual states of an individual using f , when the state-action distribution of all the other players is τ , is aperiodic. The assumptions similiar to (AT1) appear in a recent paper [39] on stochastic games with a finite number of players and average reward. They are also used in some papers on Markov evolutionary games [29] and in a recent application of anonymous sequential games to model power control problem in a wireless network [27]. It allows to decouple the Markov chains of individual states for each of the players (so that the dependence between different players is only through rewards)-this decoupling is crucial in our proofs of convergence of games with finite number of players to respective anonymous models. Importantly though, in most engineering applications individual state of a player is either his energy (or some other private resource) level or his geographical position for which assumption (AT1) is naturally satisfied. We will need to define some additional notation. -We say that an n-person stochastic game is the n-person counterpart of an anonymous game iff it is defined with the same objects S, A, u, and Q, with the difference that the number of players is n, and in consequence the global states and stateaction distributions are defined on subsets of (S) and -We will consider a wider set of possible policies in the game. 
Namely, we will consider a situation, when each player uses a stationary policy over the whole game, but this policy is chosen at the beginning of the play according to some probability distribution. This means that any probability distribution from (U) will be a policy in the game. -We will consider a different (i.e., standard) definition of policies and equilibrium in the average-reward game. We will say that policies ( f 1 , . . . , f n ) form a Nash equilibrium in the average-reward n-person game iff for any player i and any initial distribution of the global state μ 1 , f 1 , . . . , g i , . . . , f n ) for any other policy g i . If this inequality holds up to some ε, we say that ( f 1 , . . . , f n ) are in ε-equilibrium. For both models (with average and total reward) we will also consider the notion of equilibrium defined as for the anonymous game-we will call it then a weak equilibrium (and analogously define weak εequilibrium). Now we can prove the following two results: Theorem 5.1 Suppose ( f, μ) is an equilibrium in either an average-reward anonymous game satisfying (A1) and (AT1) or a total reward anonymous game satisfying (T1) and (AT1). Then for every ε > 0 there exists an n ε such that for every n ≥ n ε ( f, μ) is a weak equilibrium in the n-person counterpart of this anonymous game. A stronger result is true for the average-reward game. Theorem 5.2 For every ε > 0 there exists an n ε such that for every n ≥ n ε the nperson counterpart of the average-reward anonymous game satisfying (A1), (AT1) and (AT2) has a symmetric Nash equilibrium (π n , . . . , π n ), where π n ∈ (U). Moreover if ( f, μ) is an equilibrium in the anonymous game, then π n is of the form: f is the communicating policy induced by the assumption (A1) 8 , S l are ergodic classes of the individual state process of a player when he applies policy f , and μ * is the probability measure on these ergodic classes corresponding to measure μ over S 9 . Proof of Theorem5.1 Let J l (μ, g, f ) denote the reward in the n-person counterpart of the given average-reward anonymous game for a player using stationary strategy g against f of all the others, when initial distribution of individual states is μ. where σ (g) is the occupation measure of the process of individual states of the player using policy g, and m n f is a probability measure over the set (S × A) denoting the frequency of the appearance of different state-action measures of all the players over the course of the game. Note further that σ (g) is under (AT1) independent of n and the same as in the limiting (anonymous game) case. On the other hand, m n f converges weakly as n goes to infinity to the invariant measure corresponding to the policy f and the initial distribution μ. Thus, for any stationary strategy g. The thesis of the theorem follows immediately. To prove the total reward case, we first need to write the reward in the n-person counterpart of the given total reward anonymous game for a player using stationary strategy g against f of all the others, when initial distribution of states is μ: with s 1 distributed according to Q(·|s 0 , a 0 ) (recall that by (AT1) this distribution is independent of the global state-action distribution τ ). 
This by the renewal theorem equals where σ (g) is the occupation measure of the process of individual states of the player using policy g, and m n f is a probability measure over the set (S × A), denoting the frequency of the appearance of different state-action measures of all the players over the course of the game. But both σ (g) and E g,μ,Q T (g) are under (AT1) independent of n and the same as in the limiting (anonymous game) case, while m n f converges weakly to the invariant measure corresponding to the policy f and the initial distribution μ as n → ∞. The thesis of the theorem is now obtained as in the average-reward case. Proof of Theorem5.2 Fix ε > 0 and take n ε such that for every n ≥ n ε ( f, . . . , f ) is a weak ε-equilibrium in the n-person counterpart of the average-reward game. Next, note that for every n ∈ N the process of individual states of each of the players using policy π n has a unique invariant measure μ. Since by (AT2) the process of individual states is aperiodic, it will converge to this invariant measure and, consequently, the reward for a player using policy f against f of all the others will be equal to J (μ, f, f ) for any initial state distribution. However, since in a weakly communicating Markov decision process the optimal gain is independent of this distribution, and since ( f, . . . , f ) is a weak ε-equilibrium in the n-person game for n ≥ n ε , it is also a Nash ε-equilibrium in the n-person game. Application: Medium Access Game In the remainder of the paper, we present two simple examples of application of our framework to model some real-life phenomena. The Model The first example we present is a medium access game (MAC) between mobile phones. We are inspired by a power control game studied in [30] that considered a Markov Decision Evolutionary Game (MDEG) framework, which can be viewed as a special case of an anonymous sequential game (ASG), where the level of the power used for transmission is controlled. The total energy available for transmission is assumed to be constrained by the battery energy level, and the objective of a user is to maximize the amount of successful transmissions of packets taking into account the fact that the lifetime of its battery is limited. The MDEG framework is restricted to pairwise interactions, so that formalism was used under the assumption of a sparse network. The MAC game problem that we shall study below will show how this type of restriction can be removed by using the ASG framework (instead of the MDEG framework). The model can be described as follows: Time is slotted. At any given time t, a mobile finds itself competing with N t other mobiles for the access to a channel. N t is assumed to have Poisson distribution with parameter λ. We shall formulate this as a sequential anonymous game as follows. -Individual state A mobile has three possible states: F (full), AE (Almost Empty) and E (Empty). -Actions There are two actions: transmit at high power H or low power L. At state AE a mobile cannot transmit at high power, while at E it cannot transmit at all. -Transition probabilities From state AE the mobile moves to state E with probability p E and otherwise remains in AE. At state E the mobile has to recharge. It moves to state F after one time unit. A mobile in state F transmitting with power r moves to state AE with probability proportional to r and given by αr for some constant α > 0. -Payoff Consider a given cellular phone that transmits a packet. 
Assume that x other packets are transmitted with high power and y with low power to the same base station. A packet transmitted with low power is received successfully with some probability q if it is the only packet transmitted, i.e., y = 0, x = 0. Otherwise it is lost. A packet transmitted with high power is received successfully with some probability Q > q if it is the only packet transmitted at high power, i.e., x = 0. The immediate payoff is 1 if the packet is successfully transmitted; it is otherwise zero. In addition, there is a constant cost c > 0 for recharging the battery. The aggregate utility for a player is then computed as the long-time average of the per-period payoffs. Suppose p is the fraction of the population that transmits at high power in state F, and that μ_F, μ_AE, and μ_E are the fractions of players in the respective states. Then the probability of success for a player transmitting at high power, as well as the probability of success when a player transmits at low power, can be expressed in terms of p and these fractions alone. These values do not depend on the actual numbers of players applying the respective strategies, only on the fractions of players in each of the states using different actions. Thus, instead of considering an n-player game for any fixed n, it is reasonable to apply the anonymous game formulation with the corresponding state-action distribution τ.

The Solution The stationary state of the chain of private states of a player using a policy f prescribing him to use high power with probability p when in state F can be computed explicitly, and from it his expected average reward. The latter is either a strictly increasing, a constant, or a strictly decreasing function of p, depending on whether AD > BC, AD = BC, or AD < BC, where A, B, C, and D are the coefficients appearing in that expression, and thus the best response of a player against the aggregated state-action vector τ is p = 1 when AD > BC, any p ∈ [0, 1] when AD = BC, or p = 0 when AD < BC. This leads to the following conclusion: since by Theorem 3.1 this anonymous game has an equilibrium, one of the three following cases must hold: (a) if the corresponding inequality holds, then all the players use high power in state F at equilibrium; (b) if the reverse inequality holds, then all the players use low power in state F at equilibrium; (c) if none of the above inequalities holds, then we need to find a p* satisfying the indifference condition, and all the players use the policy prescribing to use high power with probability p* in state F at equilibrium.

Remark 6.1 It is worth noting here that some generalizations of the model presented above can be considered. We can assume that there are more energy levels and more powers at which players could transmit in our game (similarly as in [27]). We can also assume that the players do not always transmit, only with some positive probability (then the individual state becomes two-dimensional, consisting of the player's energy state and an indicator of whether he has something to transmit or not). Both these generalizations are tractable within our framework, though the computations become more involved.

General Linear Framework In the next example we consider a game satisfying some additional assumptions. Let K = S × A. Let u(τ) be a column vector whose entries are u(k, τ). We consider now the special case where u(k, τ) is linear in τ. Equivalently, there are some vector u_1 over K and a matrix u_2 of dimension |K| × |K| such that u(τ) = u_1 + u_2 τ. Similarly, we assume that the transition probabilities are linear in τ. Then the game becomes equivalent to solving a symmetric bilinear game, that is, finding a fixed point of (2)-(3). A linear complementarity formulation can be used and solved using Lemke's algorithm.
From the solution τ , the equilibrium ( f, τ S ) can be derived with a help of equation (4). Maintenance-Repair Game: The Model The maintenance-repair example presented below can be seen as a toy model. Its main purpose, however, is to show, how the abovementioned method can be used in a concrete game satisfying the linearity conditions mentioned above. Each car among a large number of cars is supposed to drive one unit of distance per day. A car is in one of the individual states: good (g) or bad (b). When a car is in a bad state, then it has to go through some maintenance and repair actions and cannot drive for some (geometrically distributed) time. A single driver is assumed to be infinitesimally "small" in the sense that its contribution to the congestion experienced by other cars is negligible. We assume that there are two types of behavior of drivers. Those that drive gently, and those that take risks and drive fast. This choice is modeled mathematically through two actions: aggressive (α) and gentle (γ ). An aggressive driver is assumed to drive β times faster than a gentle driver. Utilities A car that goes β times faster than another car, traverses the unit of distance at a time that is β times shorter. Thus the average daily delay it experiences is β times shorter. We assume that at a day during which a car drives fast, it spends 1/β of the time that the others do. It is then reasonable to assume that the contribution to the total congestion is β times lower than that of the other drivers. More formally, let η be a delay function. Then the daily congestion cost D of a driver is given as u(g, α, τ ) = u(g, γ, τ )/β, u(g, γ, τ ) = −η (τ (g, γ ) + τ (g, α)/β)) . For the state b we set simply which represents a penalty for being in a non-operational state. It does not depend on a nor τ . Transition probabilities: We assume that transitions from g to b occur due to collisions between cars. Further, we assume that the collision intensity between a car that drives at state g and uses action a is linear in τ . More precisely, We naturally assume that c a α > c a γ for a = α, γ and that c α a > c γ a for a = γ, α. If a driver is more aggressive than another one, or if the rest of the population is more aggressive, then the probability of a transition from g to b increases. We rewrite the above as Q(b|g, a, τ ) = c a · τ (g, .). Once in state b, the time to get fixed does not depend any more on the environment, and the drivers do not take any action at that state. Thus ψ := Q(g|b, a, τ ) is some constant that is the same for all a and τ . The Solution We shall assume throughout that the congestion function η is linear. It then follows that this problem falls into the category of Sect. 7.1. Let τ be given. Let a driver use a stationary policy p. Then the expected time it remains in state g is σ (p, τ ) = 1 Q (b|g, p, τ ) . The expected repair time of a car (the period that consists of consecutive time it is in state b) is given by ψ −1 . Thus the total expected utility during that time is Thus the average utility is given by where μ is an arbitrary initial distribution and where π(τ ) is the stationary policy that is obtained from τ as in (4). Let p * be a stationary equilibrium policy and assume that it is not on the boundary, i.e., 0 < p * α < 1. We shall consider the equivalent bilinear game. Let ρ * be the occupation measure corresponding to p * . It is an equilibrium in the bilinear game. 
Since the objective function is linear in ρ, ρ * should be such that each individual player is indifferent between any stationary policy. In particular, we should have J (μ, 1 α , π(τ )) = J (μ, 1 γ , π(τ )), where 1 a is the stationary pure policy that chooses always a. We thus obtain the equilibrium occupation measure ρ * as a τ that satisfies The equilibrium policy p * is obtained from ρ * as in (4), and the equilibrium stationary measure is ρ * S . Perspectives The framework analyzed in this paper generalizes that of Jovanovic and Rosenthal [7], addressing the cases of expected average reward and total expected reward, which have not yet been studied in the literature. Lack of this kind of results so far is not so unexpected, if we take into account that game-theoretic tools were for many decades used mostly in economic contexts, where discounted rewards are usually more appropriate in multistage models. The time scales that are of interest in dynamic games applied to Engineering, and in particular to telecommunication networks, are often much faster than those occurring in economic models, which makes reward criteria focusing on the long-run behavior of the analyzed systems more appropriate. These are young and quickly developing fields of application of game theory that generate those new models. Our paper tries to fill in the existing gap. It has to be noticed, however, that a lot remains to be done. First of all, the results presented here are limited to the games with finite state and action spaces. One natural generalization would be finding conditions for equilibria to exist in analogous games played on infinite, and possibly noncompact, state, and action sets. Second interesting problem is, whether similar results hold, when the global state of the game does not converge to some stationary distribution, at least under some subclass of stationary policies. In such a case not only the proofs may become more involved, but also the notion of equilibrium needs redefining. Finally, for applications it may be quite important to extend the present model to semi-Markov case, where the moments when a player makes his decisions remain discrete but follow some controlled impulse process. Another important issue, addressed just briefly in Sect. 7.1, concerns designing tools for computing equilibria in such games. As it is widely known, one of the main problems when dealing with dynamic games, stochastic games in particular, when the number of players becomes large, is the so-called curse of dimensionality-in stochastic games the sizes of state and action spaces growing exponentially being a particular problem. Although equilibrium-existence results often exist for games with big finite number of players, the computation becomes impossible usually already for a relatively small number. The anonymous game formulation simplifies visibly the structure of the game and thus gives much bigger chances of computing the equilibria, which then could be used as approximate equilibria for the models with large finite number of players. In addition, the equilibria computed for the limiting, anonymous case, may have (under appropriate conditions) a property, used in Medium Access Game example presented in Sect. 6 that they are equally good approximations of equilibria of games with a different large number of players, which can be used when the exact number of players is unknown. In Sect. 
7.1 we present a way to transform the problem of finding equilibrium in an anonymous game with linear utilities into that of finding an equilibrium in a bimatrix game. Designing similar tools for more general case is another important problem to solve. Finally, the resemblance of the framework considered in this paper to that of the classical traffic assignment problem suggests, it may be natural to try to extend tools from the traffic assignment framework to the one studied in this paper, such as the convergence of dynamic decentralized learning schemes (e.g., the replicator dynamics). More generally, it remains to study, whether anonymous sequential games provide good approximations for similar games but with finitely many players, and vice versa. Our paper answers this question only in a very limited range, while this type of question is frequently asked in mean field games, which are games played by an infinite population of players, see [40] and references therein. Conclusions The framework of the game defined in this paper is similar in nature to the classical traffic assignment problem in that it has an infinity of players. In both frameworks, players can be in different states. In the classical traffic assignment problem, a class can be characterized by a source-destination pair, or by a vehicle type (car, pedestrian or bicycle). In contrast to the traffic assignment problem, the class of a player in our setting can change in time. Transition probabilities that govern this change may depend not only on the individual's state but also on the fraction of players that are in each individual state and that use different actions. Furthermore, these transitions are controlled by the player. A strategy of a player of a given class in the classical traffic assignment problem can be identified as the probability it would choose a given action (path) among those available to its class (or its "state"). The definition of a strategy in our case is similar, except that now the probability for choosing different actions should be specified not just in one state. Our paper provides definitions and tools for the study of anonymous sequential games with the total cost and with the average cost criteria, which have not been covered in the existing literature. It provides conditions for the existence of stationary equilibrium, and illustrates through several examples, how to compute it. The contribution of the paper is not only in extending previous results to new cost criteria but also in providing an appropriate definition of equilibria for the new cost criteria.
12,331.2
2011-12-15T00:00:00.000
[ "Economics", "Mathematics" ]
High repetition rate operation of saturated table-top soft x-ray lasers in transitions of neon-like ions near 30 nm We report average powers exceeding 1 microwatt in laser transitions of Ne-like ions at wavelengths near 30 nm. Gain-saturated operation was obtained at a repetition rate of 5 Hz exciting solid targets with pump pulses of ~1 J energy and 8 ps duration impinging at grazing incidence of 20 degrees. Gain-length products of about 20 were obtained in the 30.4 nm and 32.6 nm transitions of Ne-like V and Ne-like Ti respectively. Strong lasing was also observed in Ne-like Cr at 28.6 nm and in the 30.1 nm line of Ne-like Ti. ©2005 Optical Society of America OCIS codes: (140.7240) UV, XUV, and X-ray lasers; (340.7480) X-rays References and Links 1. B. R. Benware, C. D. Macchietto, C. H. Moreno, and J. J. Rocca, “Demonstration of a High Average Power Tabletop Soft X-Ray Laser,” Phys. Rev. Lett. 81, 5804-5807 (1998). 2. M. Frati, M. Seminario, J. J. Rocca, “Demonstration of a 10-μJ tabletop laser at 52.9 nm in neonlike chlorine,” Opt. Lett. 25, 1022-1024 (2000). 3. S. Sebban, R. Haroutunian, P. Balcou, G. Grillon, A. Rousse, S. Kazamias, T. Marin, J. P. Rousseau, L. Notebaert, M. Pittman, J. P. Chambaret, A. Antonetti, D. Hulin, D. Ros, A. Klisnick, A. Carillon, P. Jaegle, G. Jamelot, J. F. Wyart, “Saturated Amplification of a Collisionally Pumped Optical-Field-Ionization Soft X-Ray Laser at 41.8 nm,” Phys. Rev. Lett. 86, 3004-3007 (2001). 4. A. Butler, A. J. Gonsalves, C.M. McKenna, D. J. Spence, S. M. Hooker, S. Sebban, T. Mocek, and I. Bettaibi, and B. Cros, “Demonstration of a Collisionally Excited Optical-Field-Ionization XUV Laser Driven in a Plasma Waveguide,” Phys. Rev. Lett. 91, Art. 205001 (2003). 5. S. Sebban, T. Mocek, D. Ros, L. Upcraft, P. Balcou, R. Haroutunian, G. Grillon, B. Rus, A. Klisnick, A. Carillon, G. Jamelot, C. Valentin, A. Rousse, J. P. Rousseau, L. Notebaert, M. Pittman, D. Hulin, “Demonstration of a Ni-Like Kr Optical-Field-Ionization Collisional Soft X-Ray Laser at 32.8 nm,” Phys. Rev. Lett. 89, Art. 253901 (2002). 6. P. V. Nickles, V. N. Shlyaptsev, M. Kalachnikov, M. Schnürer, “Short Pulse X-Ray Laser at 32.6 nm Based on Transient Gain in Ne-like Titanium,” Phys. Rev. Lett. 78, 2748-2751 (1997). 7. J. Dunn, Y. Li, A. L. Osterheld, J. Nilsen, J. R. Hunter, V. N. Shlyaptsev, “Gain Saturation Regime for Laser-Driven Tabletop, Transient Ni-Like Ion X-Ray Lasers,” Phys. Rev. Lett. 84, 4834-4837 (2000). 8. K. A. Janulewicz, A. Lucianetti, G. Priebe, W. Sandner, P. V. Nickles, “Saturated Ni-like Ag x-ray laser at 13.9 nm pumped by a single picosecond laser pulse,” Phys. Rev. A 68, Art. 051802 (2003). 9. R. Keenan, J. Dunn, V. N. Shlyaptsev, R. Smith, P. K. Patel, D. F. Price, “Efficient pumping schemes for high average brightness collisional x-ray lasers,” in Soft X-Ray Lasers and Applications V, E. E. Fill, S. Suckewer, eds., Proc. SPIE 5197, 213-220 (2003), and R. Keenan, J. Dunn, P. K. Patel, D. F. Price, R. F. Smith, V. N. Shlyaptsev, “High repetition rate grazing incidence pumped X-ray laser operating at 18.9 nm,” Phys. Rev. Lett. (to be published). 10. B. M. Luther, Y. Wang, M. A. Larotonda, D. Alessi, M. Berrill, M. C. Marconi, V. N. Shlyaptsev, J. J. Rocca, “Saturated high-repetition-rate 18.9-nm tabletop laser in nickellike molybdenum,” Opt. Lett. 30, 165167 (2005). 11. M. A. Larotonda, B. M. Luther, Y. Wang, Y, Liu, D. Alessi, M. Berrill, A. Dummer, F. Brizuela, C. S. Menoni, M. C. Marconi, V. N. Shlyaptsev, J. Dunn, and J. J. 
Rocca, “Characteristics of a Saturated 18.9-nm Tabletop Laser Operating at 5-Hz Repetition Rate,” IEEE J. Select. Topics Quantum Elecron. 10, 1363-1367 (2004) 12. Y. Wang, M. A. Larotonda, B. M. Luther, D. Alessi, M. Berrill, V. N. Shlyaptsev, and J. J. Rocca, “Demonstration of saturated high repetition rate tabletop soft x-ray lasers at wavelengths down to 13.9 nm,” Phys. Rev. Lett. (submitted). 13. R.J. Thomas and J. M. Davila. “EUNIS: a solar EUV normal incidence spectrometer,” in UV/EUV and Visible Space Instrumentation for Astronomy and Solar Physics, O. H. W. Siegmund, S. Fineschi, M. A. Gummin, eds., Proc. SPIE 4498, 161-172 (2001). 14. J. Nilsen, “Analysis of a picosecond-laser-driven Ne-like Ti x-ray laser,” Physical Review A 55, 3271-3274 (1997). 15. G. J. Tallents, Y. Abou-Ali, M. Edwards, R. E. King , G. J. Pert, S. J. Pestehe, F. Strati, R. Keenan, C. L. S. Lewis, S. Topping, O. Guilbaud, A. Klisnick, D. Ros, R. Clarke, D. Neely, and M. Notley, “Saturated and Short Pulse Duration X-Ray Lasers,” in X-Ray Lasers:2002, J. J. Rocca, J. Dunn, and S. Suckewer, eds., AIP Conf. Proc. 641, 291-297 (2002). Introduction There is much interest in the development of compact soft x-ray lasers capable of generating high average powers for applications.This requires operation of the soft x-ray amplifiers in the gain-saturated regime at high repetition rate.The first soft x-ray lasers to achieve high average powers used collisional electron impact excitation of Ne-like ions in a capillary discharge plasma [1,2].Capillary discharge excitation has produced average powers of a few mW in the 46.9 nm line of Ne-like Ar.Collisional optical-field-ionization lasers operating at 10 Hz repetition rate in Pd-like Xe at 41.8 nm [3,4] and in Ni-like Kr at 32.8 nm [5] have also been reported to reach gain saturation.Saturated soft x-ray amplification in transitions of Nelike and Ni-like ions excited by transient collisional electron excitation was also obtained but only at repetition rates of one shot every several minutes in plasmas heated by picosecond duration pulses of 3-7 J energy [6][7][8].In recent work the energy necessary to pump transient collisional soft x-ray lasers has been significantly reduced using a grazing incidence pumping geometry that increases the absorption of the pump beam in the gain region [9][10][11][12].This pumping geometry significantly increases the energy deposition efficiency of the pump beam into the gain region by taking advantage of refraction to increase the path length of the pump rays through this region of the plasma.Excitation of Mo plasmas at a grazing incidence angle has resulted in gain saturated operation in the 18.9 nm line of Ni-like Mo at 5-10 Hz repetition rate [9][10][11].Most recently saturated laser operation at 5 Hz repetition rate was obtained in several transitions of Ni-like ions with wavelengths ranging from 16.4 nm to 13.9 nm by grazing incidence heating of plasmas with 8 picosecond pulses of 1 J energy [12].It is of significant interest to extend these results to other isoelectronic sequences, as different applications require access to different wavelengths.For example, the characterization of extreme ultraviolet optics for solar coronal studies would significantly benefit from compact high repetition lasers with wavelengths between 30 and 37 nm that includes both the HeII 30.4 nm line and strong lines of FeXI-XVI [13]. 
Herein we report the extension of gain-saturated high repetition rate laser-pumped transient soft x-ray lasers to transitions in Ne-like ions using grazing incidence pumping. High average power soft x-ray laser operation was obtained for the first time to our knowledge in the 2p⁵3p ¹S₀ → 2p⁵3s ¹P₁ transitions of Ne-like Ti and V at 32.6 nm and 30.4 nm respectively. We also observed strong lasing in the corresponding line in Ne-like Cr at 28.6 nm, and in the 30.1 nm 2p⁵3d ¹P₁ → 2p⁵3p ¹P₁ line of Ne-like Ti, whose inversion relies on strong re-absorption of the 2.335 nm resonant transition linking the 3d ¹P₁ laser upper level to the ion ground state [14]. Setup The pump beam geometry is similar to the one used in recent experiments with Ni-like ions [10][11][12]. The targets were 4 mm wide polished slabs with a thickness of 2 mm for Ti and V and 1 mm for Cr. They were irradiated with pulses from a Ti:sapphire laser system operating at a center wavelength of 800 nm, consisting of a mode-locked oscillator and three stages of chirped-pulse amplification. A beam splitter was placed at the exit of the third amplifier stage to direct a fraction of the energy of the uncompressed laser pulses (120 ps duration) into the pre-pulse arm. The rest of the laser energy was compressed to 8 ps to form the main heating pulse. Pre-pulses of 0.35 J for Ti and 0.5 J for V and Cr were used to form a plasma by irradiating the target at normal incidence. This pre-pulse was preceded by a 10 mJ pre-pulse about 5 ns earlier. The pre-pulses were focused into a 4.1 mm long × 30 µm wide line using the combination of a spherical and a cylindrical lens. The plasma was allowed to expand to reduce the density gradient, and it was subsequently rapidly heated by the 8 ps duration pulse with ~1 J of energy impinging at a selected grazing incidence angle onto the target. The short pulse was focused into a line of the same size utilizing an f = 76.2 cm parabolic mirror placed at 7 degrees from normal incidence. The normal to the target surface was tilted from the axis defined by the pre-pulse beam to form grazing incidence angles of 17, 20 or 23 degrees with respect to the axis of the short pulse beam. The plasma emission was attenuated with calibrated Al filters and a set of metallic meshes of measured transmissivity. The soft x-ray laser beam was monitored using a flat-field spectrograph composed of a 1200 l/mm gold-coated variably spaced spherical grating and a 1 square inch back-illuminated CCD detector array placed in the image plane of the grating. Results Figure 1 shows on-axis spectra corresponding to 4 mm long plasmas of Ti, V and Cr irradiated at a grazing incidence angle of 20 degrees. In the Ti experiment the energy of the picosecond pulse was 1 J. In the V and Cr experiments the energy of the main pre-pulse was increased to 0.52 J at the expense of the energy of the picosecond pulse, which in these cases was ~0.9 J. In all cases, the 3p ¹S₀ → 3s ¹P₁ line of the Ne-like ions is observed to clearly dominate the spectra. In the case of Ti, lasing was also observed in the 30.1 nm 3d ¹P₁ → 3p ¹P₁ line of the Ne-like ion, but its intensity was weaker for the range of pump parameters investigated. Figure 2 shows the variation of the soft x-ray laser output intensity as a function of the angle of incidence of the short pulse beam for all three lasers. At an incidence angle of 17 degrees lasing was observed for the 3p ¹S₀ → 3s ¹P₁ lines of the Ne-like ions of all three species (see Fig. 2).
However, at this angle the pump beam is deposited in a region where the electron density is lower than the optimum value for maximum soft x-ray laser output intensity. The output intensity of all three lasers was observed to increase significantly for an angle of 20 degrees, for which refraction helps to couple the pump beam into a region of higher electron density (2×10²⁰ cm⁻³). At the steeper angle of incidence of 23 degrees a significant fraction of the beam energy is absorbed in a higher density region where the electron density gradients are too steep for optimum amplification. Also contributing to a lower laser output at this angle is the shorter duration of the gain and the increased mismatch between the velocity of the traveling wave of the pump and the speed of light in the plasma. Figure 3 shows the output intensity of the 30.4 nm line of Ne-like V as a function of time delay between the main pre-pulse and the short pulse for a grazing incidence angle of 20 degrees. Strong lasing was observed to occur over a wide range of time delays. The optimum delays were observed to be approximately 600 ps for Ne-like Ti and 450 ps for Ne-like V and Cr. This result, which follows the same trend observed for lasing in Ni-like ion transitions [12], is related to the fact that a more highly ionized pre-plasma is required for lasing at higher Z, allowing less time for plasma cooling during expansion and recombination. The maximum intensity of the 30.1 nm line of Ne-like Ti occurs at a delay of 520 ps, an earlier time than the optimum for the 32.6 nm line. At this delay the intensity of the 30.1 nm line is typically half that of the 32.6 nm line. Figure 4 shows the variation of the laser intensity of the 30.4 nm line of Ne-like V as a function of plasma length. The solid line represents a fit of the data with the expression derived by Tallents et al. for the variation of the laser intensity with plasma length taking into account gain saturation [15]. For short plasma lengths the laser output intensity is observed to increase exponentially with a small-signal gain coefficient of g = 72 cm⁻¹, until saturation is reached. The gain-length product reaches 21.7 for a 4 mm target, which exceeds the gain-length product value of ~15 at which most collisionally excited soft x-ray lasers have been observed to reach gain saturation. A similar measurement for the 32.6 nm line of Ne-like Ti yielded a comparable gain-length product, g×l = 18.4. The gain-length product for the 28.6 nm line of Ne-like Cr was not measured, but the laser output intensity was lower than for the other two elements.
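The gain measurement just described lends itself to a small worked example. The sketch below is not the authors' analysis: instead of the saturated expression of Tallents et al. [15], it fits the widely used Linford formula for unsaturated amplified spontaneous emission to a few hypothetical intensity-versus-length points, simply to illustrate how a small-signal gain coefficient of the order of 70 cm⁻¹ is extracted from such data.

```python
# Minimal sketch (not the authors' analysis): estimate a small-signal gain
# coefficient from intensity-vs-length data using the Linford formula for
# unsaturated amplified spontaneous emission. The paper fits the saturated
# expression of Tallents et al. [15]; the points below are hypothetical
# placeholders chosen to lie below saturation.
import numpy as np
from scipy.optimize import curve_fit

def linford(L_cm, g, A):
    """Unsaturated ASE intensity vs. amplifier length L (cm), gain g (1/cm)."""
    gL = g * L_cm
    return A * (np.exp(gL) - 1.0) ** 1.5 / (gL * np.exp(gL)) ** 0.5

lengths = np.array([0.08, 0.12, 0.16, 0.20])         # cm (hypothetical)
intensity = np.array([8e-5, 1.2e-3, 1.8e-2, 0.29])   # arbitrary units (hypothetical)

(g_fit, a_fit), _ = curve_fit(linford, lengths, intensity, p0=(60.0, 1e-6))
print(f"fitted small-signal gain g ≈ {g_fit:.0f} 1/cm")
```

Note that the gain-length product of 21.7 quoted in the paper comes from the saturated fit over the full target length, not from a simple product of the small-signal gain and the 4 mm length.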
Operation at 5 Hz repetition rate was demonstrated for both Ne-like Ti and V by moving the targets at a constant velocity corresponding to 40 µm per shot. Figure 5 illustrates a series of 250 contiguous laser shots at this repetition rate for the 30.4 nm line of Ne-like V. The 250 consecutive shots have a distribution characterized by a standard deviation which is 35% of the mean. Operation at 5 Hz repetition rate yielded laser pulses with an energy of up to 540 nJ, estimated from the counts on the CCD taking into account the quantum efficiency of the detector and the losses. The average pulse energy was 300 nJ, corresponding to an average output power of about 1.5 µW. For the 32.6 nm line of Ne-like Ti the maximum soft x-ray laser pulse energy observed was estimated to be 780 nJ. The average energy for this line was 530 nJ, corresponding to an average output power of about 2.6 µW. This is to our knowledge the first report of laser average powers in excess of 1 microwatt in this region of the spectrum. Conclusions In summary, microwatt average power laser-pumped Ne-like ion lasers were demonstrated for the first time to our knowledge. This demonstration of saturated high repetition rate table-top lasers in Ne-like Ti and Ne-like V and its possible extension to other isoelectronic lines will significantly increase the diversity of soft x-ray laser wavelengths available for applications requiring high average powers. Fig. 1. Single-shot on-axis spectra of 4 mm long line-focus plasmas showing lasing in the 2p⁵3p ¹S₀ → 2p⁵3s ¹P₁ transition of Ne-like Ti, V and Cr ions. In all three cases, this laser line dominates the spectrum. Fig. 2. Variation of output laser intensity as a function of grazing incidence angle for Ne-like Ti, V and Cr. Each point represents the mean of 15 or more consecutive laser shots. In all three cases the laser operates best at 20 degrees. At this angle the standard deviation of each data set ranges from 14% to 38% of the mean. Fig. 3. Laser output intensity of the 30.4 nm line of Ne-like V as a function of time delay between the main pre-pulse and the short pulse. Lasing is strong for delays ranging from 400 ps to 600 ps. Each point is the average of 10 or more laser shots; the error bars correspond to ± the standard deviation of the set. Fig. 4. Intensity of the 30.4 nm line of Ne-like V as a function of plasma length. A fit of the data results in a gain coefficient of 72 cm⁻¹ and a gain-length product of 21.7. Each point is the average of 10 or more laser shots; the error bars correspond to ± the standard deviation of the set. Fig. 5. Shot-to-shot variation of the intensity of the 30.4 nm Ne-like V laser line at 5 Hz repetition rate. The 250 consecutive shots have a distribution characterized by a standard deviation which is 35% of the mean.
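The average output powers quoted in the Results above follow directly from the measured per-pulse energies and the 5 Hz repetition rate; the quick check below simply multiplies the two (numbers taken from the text).

```python
# Quick consistency check of the quoted average powers (numbers from the text).
rep_rate_hz = 5.0
avg_pulse_energy_nj = {"Ne-like V, 30.4 nm": 300.0, "Ne-like Ti, 32.6 nm": 530.0}
for line, energy_nj in avg_pulse_energy_nj.items():
    avg_power_uw = energy_nj * 1e-9 * rep_rate_hz * 1e6   # W converted to microwatt
    print(f"{line}: ~{avg_power_uw:.2f} microwatt average power")
```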
3,840.2
2005-03-21T00:00:00.000
[ "Physics" ]
SELENBP1 overexpression in the prefrontal cortex underlies negative symptoms of schizophrenia Significance Selenium-binding protein 1 (SELENBP1) is up-regulated in the prefrontal cortex of patients with schizophrenia as per postmortem reports, including the present study. However, no causative link between SELENBP1 and schizophrenia has yet been established. Here, we examined the anatomical deformities, physiological properties, electroencephalographic characteristics of the frontal cortex, and behaviors of animal models overexpressing human SELENBP1 to prove the role of SELENBP1 in schizophrenia pathogenesis. The animals exhibited several anatomical and electroencephalographic features of schizophrenia in the frontal cortex. Importantly, they showed behavioral endophenotypes related to the negative symptoms of schizophrenia as well as reduced sociability. These findings provide a causative link between PFC SELENBP1 upregulation and negative symptoms of schizophrenia. The selenium-binding protein 1 (SELENBP1) has been reported to be up-regulated in the prefrontal cortex (PFC) of schizophrenia patients in postmortem reports. However, no causative link between SELENBP1 and schizophrenia has yet been established. Here, we provide evidence linking the upregulation of SELENBP1 in the PFC of mice with the negative symptoms of schizophrenia. We verified the levels of SELENBP1 transcripts in postmortem PFC brain tissues from patients with schizophrenia and matched healthy controls. We also generated transgenic mice expressing human SELENBP1 (hSE-LENBP1 Tg) and examined their neuropathological features, intrinsic firing properties of PFC 2/3-layer pyramidal neurons, and frontal cortex (FC) electroencephalographic (EEG) responses to auditory stimuli. Schizophrenia-like behaviors in hSELENBP1 Tg mice and mice expressing Selenbp1 in the FC were assessed. SELENBP1 transcript levels were higher in the brains of patients with schizophrenia than in those of matched healthy controls. The hSELENBP1 Tg mice displayed negative endophenotype behaviors, including heterotopias-and ectopias-like anatomical deformities in upper-layer cortical neurons and social withdrawal, deficits in nesting, and anhedonia-like behavior. Additionally, hSELENBP1 Tg mice exhibited reduced excitabilities of PFC 2/3-layer pyramidal neurons and abnormalities in EEG biomarkers observed in schizophrenia. Furthermore, mice overexpressing Selenbp1 in FC showed deficits in sociability. These results suggest that upregulation of SELENBP1 in the PFC causes asociality, a negative symptom of schizophrenia. SELENBP1 | schizophrenia | social behavior | frontal cortex | Brodmann area 9 Identifying the molecular changes causally involved in the pathogenesis of psychiatric diseases is a major challenge in neurobiological studies of mental disorders. Several studies probing genetic risk factors associated with schizophrenia have recently made advances along these lines, showing that the expression levels of the selenium-binding protein SELENBP1 (1) are altered in the brain and blood of patients with schizophrenia (2)(3)(4)(5)(6). A subsequent study analyzing gene expression in specific brain regions of patients with schizophrenia reported SELENBP1 upregulation in the prefrontal cortex (PFC) (7). Diagnostic symptoms of schizophrenia include negative symptoms characterized by social withdrawal, apathy, and emotional blunting (8,9). 
Patients with schizophrenia with impaired social cognition show reduced volume of the right prefrontal white matter (10)(11)(12)(13)(14). The social impairments observed in patients with schizophrenia are similar to those in individuals with PFC damage (15). Consistent with this, similar deficient social behaviors have been observed in animals with PFC lesions (16) and in animals with a PFC knockdown of phospholipase C-β1 (PLC-β1), which is associated with the pathogenesis of schizophrenia (17). Therefore, the PFC may be a nexus for the negative symptoms of schizophrenia (18). The etiology of schizophrenia has a significant genetic component (19)(20)(21). Genetic studies have investigated the potential causal role of susceptibility-related genes in the brain tissues of patients with schizophrenia by examining schizophrenia-like behaviors and neuropathological features of genetically modified animals (22). Neuregulin 1 (Nrg1)-deficient mice and transgenic (Tg) mice expressing dominant-negative Disc1 (disrupted in schizophrenia-1) exhibit pathological features similar to those found in the brains of patients with schizophrenia and behavioral phenotypes similar to those observed in animal models of schizophrenia (21,23). However, there has been no causal link between SELENBP1 upregulation and the manifestation of diverse schizophrenia symptoms. Here, we sought to address this issue. To replicate earlier findings on SELENBP1 upregulation in the PFC region of patients with schizophrenia, we measured expression levels of SELENBP1 transcripts in postmortem Brodmann area 9 (BA9) of patients with schizophrenia. We also generated and characterized Tg mice expressing human SELENBP1 (hSELENBP1), showing that these mice displayed brain correlates of schizophrenia and behavioral phenotypes characteristic of the negative symptoms of schizophrenia. To prove a causative link between overexpression of SELENBP1 in the PFC and social deficits, we generated and characterized a mouse model in which Selenbp1 was transduced into the neonatal frontal cortex (FC). SELENBP1 Upregulation in the BA9 Region of Patients with Schizophrenia. Previous postmortem brain studies have revealed the upregulation of SELENBP1 in the BA9 region of patients with schizophrenia (4)(5)(6)(7)(24). To confirm these findings, we collected postmortem BA9 samples from five patients with schizophrenia (SCZ A-E), one patient with schizoaffective disorder (SCZ-aff), and six healthy controls (Healthy A-F), matched by sex, race, age, postmortem interval (PMI), and tissue pH (Table 1). The SELENBP1 transcript levels in these six matched pairs of human brain samples were measured by quantitative reverse transcription-PCR using four sets of SELENBP1-specific primers targeting the 3′ untranslated region (set #1), exon 2 to 4 (set #2), exon 3 to 4 (set #3), and exon 7 (set #4) (Fig. 1A and SI Appendix, Table S1). These analyses revealed alterations, mostly upregulation, in SELENBP1 expression in BA9 tissues from SCZ A-E and the patient with SCZ-aff compared to those from Healthy A-F individuals (Fig. 1 B and C and SI Appendix, Table S2). The relative expression levels of SELENBP1 transcripts in patients with schizophrenia were higher than those in controls (primer set #1, t 5 = 2.05, P < 0.05, one-sample t test, one-tailed, Fig. 1B), consistent with previous findings (4,7). These results confirm that SELENBP1 levels are up-regulated in the BA9 region of patients with schizophrenia.
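As an illustration of the matched-pair comparison described above, the sketch below computes a per-pair relative expression from ΔCt values and applies a one-tailed, one-sample t test. The ΔCt numbers are hypothetical placeholders, and the 2^-ΔΔCt fold-change convention used here is the standard qPCR formulation, stated as an assumption rather than a quotation of the paper's exact expression.

```python
# Illustrative sketch of a matched-pair qPCR comparison (hypothetical numbers).
# DeltaCt = Ct(SELENBP1) - Ct(GAPDH); per-pair fold change assumed here to be
# 2 ** -(dCt_patient - dCt_control), the standard delta-delta-Ct convention.
import numpy as np
from scipy import stats

dct_patient = np.array([5.1, 4.8, 5.6, 4.9, 5.3, 5.0])   # hypothetical
dct_control = np.array([6.0, 5.9, 5.8, 6.1, 5.7, 6.2])   # hypothetical

fold_change = 2.0 ** -(dct_patient - dct_control)

# One-sample t test of log2 fold changes against 0 (i.e., a ratio of 1), one-tailed.
t_stat, p_two_sided = stats.ttest_1samp(np.log2(fold_change), popmean=0.0)
p_one_sided = p_two_sided / 2 if t_stat > 0 else 1 - p_two_sided / 2
print(f"mean fold change = {fold_change.mean():.2f}, t = {t_stat:.2f}, one-tailed p = {p_one_sided:.3f}")
```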
We also analyzed copy number variations (CNVs) in two SELENBP1-overexpressing PFC samples and found a partial overlap of these CNVs with the CNVs of long non-coding RNAs (lncRNAs) of autism spectrum disorder (ASD)-related genes (SI Appendix, Table S3). Higher Levels of SELENBP1 and Neuroanatomical Defects in the Brain Regions of hSELENBP1 Tg Mice. We generated Tg mice carrying a human SELENBP1 gene regulated by the chicken β-actin promoter and genotyped Tg founders and their progeny by PCR using a specific set of primers that detected transgene integration (SI Appendix, Fig. S1A). A 733-bp PCR product was detected in genomic DNA samples extracted from the tails of Tg mice but not in those from non-Tg mice (SI Appendix, Fig. S1B). In addition to increased levels of SELENBP1 transcripts, we found higher levels of SELENBP1 protein in primary tissues and the PFC of Tg mice than in those of non-Tg mice (PFC, t 7 = −36.89, P < 0.001; muscle, t 7 = −3.17, P < 0.05; pancreas, t 7 = −8.25, P < 0.001; skin, t 7 = −5.25, P < 0.01; unpaired t test; SI Appendix, Fig. S1C). Immunohistochemical staining of the mouse brain showed elevated levels of SELENBP1 in multiple brain regions, including the PFC and hippocampus, in Tg mice compared with non-Tg littermates (SI Appendix, Fig. S1D). Interestingly, the cortices were malformed in hSELENBP1 Tg mice. Specifically, cortical thickness was significantly reduced in hSELENBP1 Tg mice compared with non-Tg mice (t 16 = 2.33, P < 0.05; unpaired t test; SI Appendix, Fig. S2 A and B). Furthermore, some SELENBP1 Tg mice (#439 and #473) exhibited heterotopias or ectopias (SI Appendix, Fig. S2C) caused by inappropriate migration through the marginal zone, resulting in a lack of layer I (SI Appendix, Fig. S2C), without exhibiting any other severe brain developmental abnormalities (SI Appendix, Fig. S2D). Reduced Excitability of Layer 2/3 Pyramidal Neurons in the PFC of hSELENBP1 Tg Mice. To evaluate the functional integrity of the PFC of hSELENBP1 Tg mice, we measured the intrinsic firing properties of layer 2/3 pyramidal neurons in PFC brain slices from non-Tg and hSELENBP1 Tg mice (Fig. 2 A and B). To this end, we injected depolarizing currents (20-pA increments, 10 steps, 1-s duration) into layer 2/3 pyramidal neurons in the PFC from a resting membrane potential of -60 mV (Fig. 2C) and compared the activity between the two groups. A two-way repeated-measures (RM) ANOVA of the frequency of action potentials (group, non-Tg vs. SELENBP1 Tg; repeated variable, injected current) revealed no effect of group (F 1,21 = 2.27, P = 0.15) but showed a significant effect of current (F 1,21 = 135.65, P < 0.001); it also showed a significant interaction between the effects of group and current (F 9,189 = 3.79, P < 0.001; Fig. 2 D and E). Post hoc analyses showed that neuronal excitability was reduced at +120 pA (Fig. 2F). No difference in the membrane capacitance of the recorded neurons was detected between the two groups (t 21 = 0.54, P = 0.60; Fig. 2G). Thus, these results indicate that SELENBP1 upregulation in the FC during schizophrenia pathogenesis leads to a reduction in neuronal excitability. Next, we examined whether there were between-group differences in evoked beta (15 to 25 Hz) and gamma power (26 to 50 Hz) during S1 (0 to 0.05 s) and S2 (0.5 to 0.55 s) (Fig. 3F). The beta (t 20 = 3.07, P < 0.01) and gamma (t 20 = 2.17, P < 0.05) powers of the hSELENBP1 Tg mice showed significantly less reduction from S1 to S2 than those of non-Tg mice (Fig. 3 G and H).
No between-group differences in S1 (ERP1) and S2 (ERP2) were found across the entire frequency range (2 to 100 Hz) (t 20 < 0.67, P > 0.51; Fig. 3I). Most EEG responses of the parietal cortex exhibited no between-group differences and different characteristics from those of the FC (SI Appendix, Fig. S3). Fig. 3 (caption). The FC EEGs of the non-Tg (n = 12) and hSELENBP1 (n = 10) mice were measured by averaging the ERPs of 100 repetitions of two identical 5 kHz, 50 ms tones (S1 and S2) with a 50 ms interstimulus interval. (B and C) Animals in the two groups exhibited significantly larger P20, N40, and P20-N40 in S1 than in S2 (*). hSELENBP1 Tg mice had smaller P20 and P20-N40 amplitudes in S1 than non-Tg mice (**). (D and E) Significant between-group differences in normalized ratios S2/S1 and S1-S2 (*). (F) FC grand average spectrograms. (G and H) Less reduction in beta and gamma power of the hSELENBP1 Tg mice from S1 to S2 compared to non-Tg mice (*). (I) No between-group differences in S1 (ERP1) and S2 (ERP2) across the entire frequency range. Data are presented as means ± SEM. Fig. 1 (caption, fragment). … Table S1) targeting the 3′ UTR (set #1), exon 2 to 4 (set #2), exon 3 to 4 (set #3), and exon 7 (set #4) of human SELENBP1 transcript variants. Ex, Exon; UTR, untranslated region. (B) Differences in SELENBP1 transcript levels in the PFC of five patients with schizophrenia (SCZ A-E; SI Appendix, Table S2) and one with schizoaffective disorder (SCZ-aff; SI Appendix, Table S2) relative to that in six matched healthy controls (Healthy A-F; SI Appendix, Table S2), measured using primer set #1 (P < 0.05), set #2 (P = 0.17), set #3 (P = 0.08), and set #4 (P = 0.13). SELENBP1 expression in matched healthy controls (Healthy A-F, SI Appendix, Table S2) and … … and Reduced Sucrose Consumption. We next investigated the schizophrenia-like endophenotypes of hSELENBP1 Tg mice, first evaluating the social behaviors of mice using a three-chamber social approach and novelty task. In the social approach task (Fig. 4A-1), as measured by exploration time, non-Tg mice preferred to explore the novel mouse (stranger 1) over an inanimate object (t 16 = 2.44, P < 0.05; paired t test; Fig. 4A-2). In contrast, hSELENBP1 Tg mice did not exhibit this behavior (t 13 = 1.25, P = 0.23; paired t test; Fig. 4A-3), indicating sociability deficits. In the social novelty task (Fig. 4B-1), when the inanimate object was replaced with another novel mouse (stranger 2), both non-Tg (Fig. 4B-2) and hSELENBP1 Tg (Fig. 4B-3) mice preferred stranger 2 to stranger 1 (non-Tg: t 16 = 4.65, P < 0.001; Tg: t 13 = 3.50, P < 0.01; paired t test), indicating that social novelty recognition was intact in Tg mice. We further found that hSELENBP1 Tg mice exhibited impairment in nesting behavior (Fig. 4C-1), a behavioral measure of schizophrenia-like social withdrawal (25, 26) (t 28 = 2.86, P < 0.01; independent t test; Fig. 4C-2). Specifically, whereas non-Tg mice built an identifiable nest at a distinct location in the cage, hSELENBP1 Tg mice did not form nests and tended to scatter pieces of nesting material over the cage floor (Fig. 4C-2). Tests of sucrose preference showed that hSELENBP1 Tg mice consumed less sucrose than non-Tg mice (independent t test, t 15 = 2.98; Fig. 4D), indicating anhedonia in Tg mice. In the forced swim test (FST), mostly used to assess depressive-like behavior, no between-group difference was found (independent t test, t 18 = −0.43, P = 0.67; Fig. 4E). Hyperlocomotion is a positive symptom of schizophrenia in humans and rodents (8,9).
Since the deficits observed in the social approach tasks might be caused or influenced by anxiety, we measured anxiety-like behaviors in hSELENBP1 Tg mice using open-field and elevated plus-maze tasks. In the open-field task, hSELENBP1 Tg and non-Tg mice showed no significant difference in locomotor activity (t 15 = −0.10, P = 0.92; independent t test; Fig. 4F-1) and did not differ in the percentage of time spent in the central area of the open-field arena (t 15 = 0.33, P = 0.75; independent t test; Fig. 4F-2). Similarly, in the elevated plus-maze task, we found no significant between-group differences in the percentage of entries into the open arms (t 13 = −0.37, P = 0.72; independent t test; Fig. 4G-1) or the total number of arm entries (t 13 = −1.02, P = 0.33; independent t test; Fig. 4G-2). Additionally, we examined sensorimotor gating status of hSELENBP1 Tg mice using the prepulse inhibition task and found no between-group differences (group, F 1,18 = 1.63, P = 0.22; intensity, F 2,36 = 6.18, P < 0.01; interaction, F 2,26 = 0.94, P = 0.40; Fig. 4H). An assessment of working memory using the Y-maze task showed no differences in the percentage of spontaneous alternations between hSELENBP1 Tg mice and non-Tg mice (t 29 = 1.30, P = 0.20; independent t test; Fig. 4I-1) or the total number of arm entries (t 29 = 0.10, P = 0.93; independent t test; Fig. 4I-2). These results demonstrate no effects of hSELENBP1 on locomotion, working memory, or anxiety-like behavior. Overexpression of Selenbp1 in the FC Causes Sociability Deficits in Mice. Before examining the behavioral effects of modulating Selenbp1 expression in the FC, we measured the expression levels of endogenous Selenbp1 in the cortex region of wild-type C57B/6 mice. These experiments showed that Selenbp1 expression in the mouse brain was high in neonates (2-d-old mice) and low thereafter (SI Appendix, Fig. S4A). To examine whether the behavioral phenotypes of negative symptoms observed in hSELENBP1 Tg mice depend on the FC, we injected lentiviral vectors expressing recombinant mouse Selenbp1 or DsRed2 into the FC of 2-d-old wild-type mice (SI Appendix, Fig. S4 B and C). We confirmed the FC expression levels of transduced and endogenous Selenbp1 transcripts in mice injected into the FC with the Selenbp1encoding lentiviral vector (SI Appendix, Fig. S4D). In contrast to the diminished preference for sucrose observed in hSELENBP1 Tg mice than in non-Tg mice, mice with FC-transduced Selenbp1 showed no differences in sucrose preference compared to control DsRed2 mice (t 11 = −0.35, P = 0.74; independent t test; Fig. 5C). There were no between-group differences in the total distance of spontaneous locomotion (independent t test, t 27 = −0.13, P = 0.90; Fig. 5D-1) or the percentage of time spent in the central sector of the open-field task (t 27 = −1.13, P = 0.27; independent t test; Fig. 5D-2). Moreover, the number of entries into each arm (t 24 = 1.51, P = 0.15; independent t test; Fig. 5E-1) and percentage of entries into the open arms in the elevated plus-maze test (t 24 = −1.21, P = 0.24; independent t test; Fig. 5E-2) did not differ between the groups. Finally, there were no between-group differences in the percentage of spontaneous alternations (t 19 = −0.15, P = 0.89; independent t test; Fig. 5F-1) or the total number of arm entries (t 19 = −0.91, P = 0.37; independent t test; Fig. 5F-2) in the Y-maze spontaneous alternation task. 
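Most of the behavioral comparisons reported above reduce to two test types: a paired t test when the same animal's exploration of two targets is compared (e.g., stranger 1 versus the object) and an independent t test when genotypes or treatment groups are compared. A minimal sketch with hypothetical exploration times and nesting scores:

```python
# Minimal sketch of the two t-test types used for the behavioral comparisons
# above; the exploration times and nesting scores are hypothetical placeholders.
import numpy as np
from scipy import stats

# Paired comparison within animals: time spent with stranger 1 vs. the object.
stranger1 = np.array([120, 95, 140, 110, 130, 105], dtype=float)   # seconds
obj       = np.array([ 80, 90,  85, 100,  70,  95], dtype=float)
t_paired, p_paired = stats.ttest_rel(stranger1, obj)

# Independent comparison between genotypes: e.g., nesting scores.
non_tg = np.array([4.0, 3.5, 4.5, 4.0, 3.8])
tg     = np.array([2.0, 2.5, 1.8, 3.0, 2.2])
t_ind, p_ind = stats.ttest_ind(non_tg, tg)

print(f"paired:      t = {t_paired:.2f}, p = {p_paired:.3f}")
print(f"independent: t = {t_ind:.2f}, p = {p_ind:.3f}")
```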
Additionally, we examined the attentional status of DsRed2 controls and mice with FC-transduced Selenbp1 using the attentional set-shifting task (ASST), a behavioral task for measuring cognitive symptoms in schizophrenia animal models (27). All mice in both groups showed comparable performances in all discrimination sessions, except for one discrimination session (SI Appendix). Specifically, DsRed2 controls and mice with FC-transduced Selenbp1 showed no significant differences in the number of trials to reach the criterion in simple discrimination (t 26 = …). Discussion Postmortem neurochemical investigations of patients with schizophrenia, including those reported in the present study, imply an association between increased PFC SELENBP1 expression and schizophrenia development. However, no study has demonstrated a causal genetic link between increased PFC SELENBP1 expression and schizophrenia. To provide experimental evidence for such a causal relationship, we generated Tg mice carrying hSELENBP1 and transduced mouse Selenbp1 into the FC of wild-type mice. hSELENBP1 Tg mice exhibited neuroanatomical correlates, a decreased excitability of putative glutamatergic neurons in the PFC, impaired sensory gating of the FC, and negative behavioral endophenotypes of schizophrenia. We also observed a deficit in the social approach task and IDS discrimination of ASST in mice with FC-transduced Selenbp1. To determine the relationship between increased SELENBP1 expression and schizophrenia-like endophenotypes, we generated Tg mice carrying human SELENBP1 and characterized them neuroanatomically, electroencephalographically, and behaviorally. hSELENBP1 Tg mice exhibited thinner cortices, heterotopias, and ectopias, all of which are observed in the brains of patients with schizophrenia (28,29). We also measured the frequency of action potentials evoked by depolarizing currents in PFC 2/3-layer pyramidal neurons of hSELENBP1 Tg mice and found a decreased excitability of these neurons. Additionally, the FC ERPs to a paired tone presentation, a translational endophenotype for schizophrenia (30), were measured in the hSELENBP1 Tg mice. hSELENBP1 Tg mice had smaller ERP amplitudes and evoked beta and gamma power than non-Tg mice, indicating auditory sensory gating impairment. Finally, we extensively examined the behavioral endophenotypes of hSELENBP1 Tg mice using several behavioral tasks, showing that these mice exhibited schizophrenia-like negative phenotypes: less social preference in the three-chambered social approach task, deficits in nesting behavior, and the absence of sucrose preference. However, hSELENBP1 Tg mice exhibited no differences compared to non-Tg mice in other behaviors relevant to schizophrenia-like endophenotypes, including locomotion, social novelty, sensorimotor gating, and working memory. We also found no difference between hSELENBP1 and non-Tg mice in the FST. Interestingly, abnormalities in the FC ERPs were reported in serine racemase knockout (SRKO) mice, a genetic mouse model of N-methyl-D-aspartate (NMDA) receptor hypofunction (31). SRKO mice exhibit altered glutamatergic neurotransmission and a deficit in the social novelty task but no deficits in sensorimotor gating (31,32). The multiple similarities between SRKO and hSELENBP1 Tg mice suggest that the reduced sociality occurs with NMDA receptor hypofunction in the PFC. However, further studies are required to examine the integrity of GABAergic neurons and molecular changes in the PFC of hSELENBP1 Tg mice.
Numerous studies have implicated PFC dysfunction or hypofunction in the negative symptoms of schizophrenia, including impaired social behavior (10)(11)(12)(13)(14)33). Consistent with this, social behavior is impaired by PFC lesions in mice (16) and PFC dysfunction in mice and humans (34,35). Notably, the social impairments of patients with schizophrenia are similar to patients with PFC damage (15). PLC-β1 is involved in postnatal-cortical development and neuronal plasticity (36), and mice lacking PLC-β1 exhibit schizophrenia endophenotypes (37). Our previous studies showed that the knockdown of PLC-β1 in the PFC of the mice disrupted social behavior (17,38). Here, we sought to determine whether Selenbp1 transduction in the FC impaired social behavior. First, to determine the optimal mouse age for injecting a lentiviral vector encoding Selenbp1, we assessed the age-dependent expression of endogenous SELENBP1 in the cortex of the mouse brain (SI Appendix, Fig. S4A). Endogenous SELENBP1 expression was high on postnatal days 2 and 7, with little expression after that. Based on these observations and previous reports that injection of nerve growth factor into the neonatal FC induces schizophrenia-like endophenotypes in adults (39), we injected LV-mCMV-Selenbp1 into the FC of postnatal mice on day 2. This experimental intervention disrupted social and attentive behaviors; however, the other behaviors examined remained intact. Therefore, further studies are needed to elucidate the relationship between expression levels of SELENBP1 or SELENBP1-associated markers in the FC and the degree of social impairment. Notably, a neonatally treated or lesioned animal might be an optimal neurodevelopmental model of schizophrenia since schizophrenia is the end state of abnormal neurodevelopment-initiated years before the onset of brain disorders (40)(41)(42). As exemplified here with hSELENBP1 Tg mice, this mouse model is a valuable tool for performing multiple-level analyses to link SELENBP1 overexpression in the brain to specific anatomical, electrophysiological, electroencephalographical, and behavioral outcomes. However, as described above, endogenous SELENBP1 expression was high during the postnatal period, with little expression after that. In addition, SELENBP1 mRNA levels are increased in the PFC of patients with schizophrenia (5). Considering these points, we generated mice transduced with Selenbp1 in the FC on postnatal day 2 and performed several behavioral analyses of these mice, focusing on PFC-dependent tasks. Mice with FC-transduced Selenbp1 exhibited deficits in social behaviors similar to hSELENBP1 Tg mice but not in nesting behavior and sucrose preference. These discrepancies may be explained by regional (whole brain vs. FC) and temporal (lifelong vs. developmental period) differences between the two mouse models. For example, the study assessing naturalistic behaviors, anxiety, and cognition of mice with reduced NMDA receptor expression in the whole brain mentioned a possibility that nesting behavior deficits of these mice were indicative of a global impairment (43). Therefore, the reduction of nesting behavior in hSELENBP1 Tg mice might indicate a global impairment rather than a phenotype of schizophrenia. Previous gene expression analyses of postmortem samples from patients with schizophrenia revealed increased SELENBP1 expression in the PFC (7). 
We verified these findings, demonstrating increased SELENBP1 levels in the PFC of patients with schizophrenia compared with those of controls across diverse demographic and postmortem factors, including sex, race, age, PMI, and tissue pH ( Fig. 1 and SI Appendix, Tables S1 and S2). Consistent with these observations, neuroimaging studies have shown an association between PFC abnormalities and negative symptoms (10)(11)(12)(13)(14). The extensive molecular genetic study of schizophrenia implicates glutamatergic dysfunction (44). Interestingly, putative glutamatergic neurons were impaired in the hSELENBP1 Tg mice. Therefore, animal models in the present study may be a translational tool for studying schizophrenia development. Furthermore, given the involvement of various genetic determinants in highly heritable brain disorders, such as schizophrenia and ASD, as demonstrated by genome-wide association studies (45,46), we examined CNVs and relevant loci in PFC tissues of two patients with schizophrenia with up-regulated SELENBP1. These analyses identified overlapping CNVs in lncR-NAs of the ASD-related genes, HERC2, and GOLGA8, in the 15q11.1-11.2 schizophrenia/ASD-associated duplication region (SI Appendix, Table S3 for details), suggesting that the pathogenesis of SELENBP1 upregulation in schizophrenia may be related to that of ASD. SELENBP1 has been suggested to play a role in the development of schizophrenia; however, the mechanism remains unknown, largely because SELENBP1 is not robustly expressed in the brains of adult humans and rodents (47). Moreover, unlike the vast majority of selenium-related proteins that are abundantly expressed in neurons, SELENBP1 is predominantly expressed in astrocytes (47). Furthermore, in addition to impaired neuronal signaling in the pathophysiology of schizophrenia, several studies have suggested abnormalities in glia and impaired interactions between glia and neurons (48,49). Hence, neuropathological and behavioral characterization of animals overexpressing cell-type specific SELENBP1 would more clearly define the role of SELENBP1 and provide clues to reveal its mechanism in schizophrenia. Several studies investigating the development of schizophrenia point to abnormalities in neurodevelopmental processes, such as genetic divergence, exposure to environmental risk factors, inflammation, and an increase in oxidative stress (50)(51)(52). Moreover, because physiological sulfide compounds can alleviate oxidative stress, they can provide complementary protection of antioxidant gene expression and storage of sulfides in response to oxidative events (53,54). Recent genetic and biochemical studies have shown that human SELENBP1 has a 54% similarity to bacterial methanethiol oxidase at the amino acid level (55). Consistent with this, SELENBP1 acts as a methanethiol oxidase that catalyzes the conversion of methanethiol into hydrogen sulfide (H 2 S), formaldehyde, and hydrogen peroxide (H 2 O 2 ) in a human fibroblast cell line and mouse erythrocytes (55). In this context, excess hydrogen sulfide and polysulfide production may play a role in the pathogenesis of schizophrenia (51). Thus, it is expected that increased SELENBP1 expression in the brain could produce an excess of sulfur-containing compounds and induce schizophrenia-like abnormalities in behavior and cellular morphology, a possibility that warrants further investigation. 
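For reference, the methanethiol oxidase activity attributed to SELENBP1 above can be written as a single overall reaction (a standard formulation of the MTO reaction, given here as background rather than quoted from this paper):

$$\mathrm{CH_3SH + O_2 + H_2O \;\longrightarrow\; HCHO + H_2S + H_2O_2}$$

The products on the right are the formaldehyde, hydrogen sulfide, and hydrogen peroxide named in the text.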
In summary, because human SELENBP1 is a highly conserved protein (56), social-behavioral impairments in both Tg mice carrying the human SELENBP1 gene and mice transduced with SELENBP1 into the FC provide clinical and molecular clues to understand the role of SELENBP1 in schizophrenia development. Furthermore, in light of reports that negative symptoms of schizophrenia are a discrete category and are related to dysfunction or hypofunction of dissociable brain circuits, including those in the PFC (18,57), our results suggest that increased expression of SELENBP1 in the PFC (BA9) of individuals with schizophrenia is a molecular change relevant to the negative symptoms. The present results also support previous reports documenting the downregulation of SELENBP1 CNVs and proteins in the blood of patients with schizophrenia (2, 3), suggesting that SELENBP1 and its associated biological correlates could potentially be peripheral diagnostic biomarkers of the negative symptoms. Materials and Methods Collection of Postmortem Human Brain Tissue. Postmortem human brain tissue (Brodmann area 9), collected during autopsy at the Cuyahoga County Medical Examiner's Office (Cleveland, OH), was supplied by Dr. Stockmeier (Postmortem Brain Core Facility, University of Mississippi Medical Center). Tissue collection and retrospective psychiatric assessments of all subjects were approved by the Institutional Review Board of the University of Mississippi Medical Center (IRB protocol 1999-1002) and University Hospitals Cleveland Medical Center (IRB protocol 11-88-233; see Table 1 for details on the subjects). The Institutional Review Board of Chungnam National University approved the study procedure (IRB No. 201810-BR-168-10). Quantitative PCR. Total RNA from postmortem human PFC tissues was isolated using TRIZOL (Invitrogen). RNA (1 μg) was reverse transcribed into cDNA using oligo d(T) primers and reverse transcriptase (PrimeScript RT-PCR; TAKARA Biomedical Inc.), and 70 ng of the resulting cDNA was used for real-time quantitative PCR (SsoAdvanced Universal SYRB Green Supermix, Bio-Rad). Oligonucleotides targeting SELENBP1 (primer sets #1 to 4; Fig. 1A and SI Appendix, Table S1) were designed using PRIMEQUEST (Integrated DNA Technologies). The cycle threshold (Ct) values for each target transcript were normalized using the Ct value of glyceraldehyde-3-phosphate dehydrogenase (GAPDH) as the reference gene. The relative expression of SELENBP1 transcripts in patients with schizophrenia was assessed in each pair matched according to sex, race, age, PMI, and tissue pH according to relationship 2 (ΔCt patient -ΔCt healthy) . Then differences in matched pairs were analyzed by one-sample t test using Bio-Rad CFX Manager (Bio-Rad). Transgene Construction and Generation of hSELENBP1 Tg Mice. The plasmid, CAGGS-human SELENBP1, was linearized with HindIII and SspI and purified. Linearized plasmid DNA was injected into the male pronucleus of fertilized eggs in C57BL/6J mice. These eggs were transplanted into the oviducts of pseudo-pregnant mice. The Tg founders were bred in wild-type C57BL/6J mice. Animal care and experimental procedures were approved by the Institutional Animal Care and Use Committee of Chungnam National University guidelines (CNUIACUC No. CNU-01110). For details regarding Tg mouse production and protocols, see the SI Appendix. Lentivirus Construction, Production, and Intracranial Injection into the FC of Neonatal Mice. 
Full-length cDNAs for Selenbp1 (NM_009150.3) and DsRed2 were cloned into the pLenti-M1.4 lentiviral vector backbone containing an IRES-puro r gene cassette under the control of the murine cytomegalovirus (mCMV) immediate-early promoter. Second-generation lentiviral vectors pseudotyped with vesicular stomatitis virus G (VSV-G) were generated by co-transfection of HEK293T cells with the pLenti-M1.4 transfer vector plasmid, psPAX2 plasmid, and VSV-G envelope plasmid using Lipofectamine (Invitrogen, Waltham, MA). Intracranial injections into the FC of neonatal mice were performed as previously described (58). See the SI Appendix for further details. Detection of SELENBP1. To verify SELENBP1 expression in Tg and non-Tg mice, as well as normal C57BL/6NTac mice, we performed western blotting, immunohistochemistry, and immunofluorescence staining using mouse anti-SELENBP1 antibodies from OriGene Technologies (TA504700; Rockville, MD) and MBL Inc. (M061-3; Woburn, MA). For details regarding reagents and protocols, see the SI Appendix. Stereotaxic Surgery and EEG Recordings. For FC and parietal cortex EEG recordings, age-matched littermates of 5-to 6-mo-old male and female mice underwent the stereotaxic surgery and were given at least 7 d to recover before recording the FC and parietal cortex EEG to auditory stimuli, as reported previously with minor modifications (31). See the SI Appendix for details. Behavioral Measurements. A battery of behavioral tests (open-field test, elevated plus-maze test, three-chamber social approach and novelty task, Y-maze spontaneous alternation, nest-building test, forced swim task, sucrose preference task, prepulse inhibition task, and attention set-shift task) was performed using age-matched littermates of 5-to 6-mo-old male mice as reported previously, with minor modifications (59). See the SI Appendix for details. Statistical Analyses. All data are expressed as means ± SEM. A one-sample t test, paired t test, independent t test, and ANOVA were used for statistical analyses of all parameters. Group differences were assessed using Sidak's post hoc test, where necessary. SPSS Statistics 25 (IBM) and Prism 9 software (GraphPad Software) were used for statistical analyses and graphical figures, respectively. The alpha level was set to 0.05. Data, Materials, and Software Availability. All study data are included in the article and/or SI Appendix.
7,009.8
2022-12-13T00:00:00.000
[ "Psychology", "Biology", "Medicine" ]
Diurnal Variations of Endogenous Steroids in the Follicular Phase of the Menstrual Cycle Rationale: The diurnal variations of cortisol and of other upstream glucocorticoid steroids have been well described. However, diurnal variations of other steroids in the steroid synthesis pathways have not been fully addressed in the literature. Objective: To explore possible diurnal variations of several endogenous steroids. Methods: Blood samples were taken every fourth hour during 24 hours in 10 healthy, drug-naïve pre-menopausal women in the follicular phase of the menstrual cycle. Using the LC-MS/MS technique, serum was analyzed for concentrations of glucocorticoids (cortisol, cortisone, 11-deoxycortisol), androgens (androstenedione, testosterone, DHEA), pregnenes (pregnenolone, 17OH-pregnenolone), progestins (progesterone, 17OH-progesterone), and estrogens (estrone, estradiol). The concentration of the anesthetic steroid allopregnanolone was analyzed using the radioimmunoassay (RIA) technique. The blood samples were divided into six time intervals: 02:01-06:00, 06:01-10:00, 10:01-14:00, 14:01-18:00, 18:01-22:00, and 22:01-02:00. Each steroid was tested for possible diurnal variation using repeated measures ANOVA for within-subject variation. Results: All steroids except the estrogens exhibited a significant diurnal variation (p<0.05). Apart from allopregnanolone, all the steroids peaked in concentration at 08:00 (i.e., just after awakening). Allopregnanolone had a flatter curve, its highest concentrations occurring throughout the day and its peak concentrations at about 12:00. Conclusions: The present study suggests that when assessing concentrations of steroids in the glucocorticoid group and those in the pregnene, androgen, and progestin groups, as well as allopregnanolone, it might be necessary to account for a diurnal variation. However, a possible interaction between menstrual-cycle phase and the hypothalamus-pituitary-adrenal (HPA) axis and the diurnal variations of the steroids should be confirmed by future studies. Introduction Many studies have been conducted assessing biological changes in patients with both somatic and psychiatric conditions. Most studies of posttraumatic stress disorder (PTSD) have focused on the primary stress pathway: the HPA axis, which is activated during acute stress.
The hypothalamus secretes corticotropin-releasing factor (CRF), which stimulates the pituitary to release adrenocorticotropic hormone (ACTH), resulting in the production of glucocorticoids and other steroids in the adrenal cortex. It is known that the secretion of cortisol is associated with the natural circadian rhythm, and the assessment of cortisol concentrations should be adjusted for the diurnal variation, with peak values in the morning just after awakening [1]. As cortisol is one of the end products of the glucocorticoid biosynthesis pathway, it can be assumed that the biosynthesis of glucocorticoids upstream and other steroids (Figure 1) is also affected by diurnal variations. Dehydroepiandrosterone (DHEA) and its sulfate (DHEAS) are synthesized from cholesterol, via pregnenolone (sulfate) and 17-hydroxypregnenolone, almost exclusively by the adrenals; only about 10% of plasma DHEA is derived from the gonads [2]. DHEAS is quantitatively the major steroid hormone secreted by the adrenals, and its plasma concentration in young adults is 10 to 20 times the cortisol concentration. DHEAS has a low metabolic-clearance rate, and as a consequence of this, DHEAS is practically constant during the day and night [3]. However, the concentration of DHEA is 500 to 1,000 times lower than that of DHEAS, and studies have shown that DHEA levels show both pulsatile and nyctohemeral variations, in parallel with cortisol and ACTH pulses [4]. DHEA is also a precursor of testosterone and estrone. One could therefore surmise that the concentrations of these steroids also have diurnal variations. Alterations in the neuroactive steroid allopregnanolone in PTSD patients have also been suggested. Allopregnanolone, another stress steroid, is anesthetic and anxiolytic. It is converted from progesterone by the rate-limiting enzyme 5α-reductase and has been found at decreased concentrations in cerebrospinal fluid and serum in both patients with PTSD and those with depression [5,6]. It has been determined that allopregnanolone does not manifest a diurnal variation during the luteal phase of the menstrual cycle [7], but there are no reports on its diurnal variation during the follicular phase. Recent studies suggest that alterations occur in steroid concentrations soon after a traumatic event, suggesting that acute biological responses may serve as risk or resilience factors for the development of PTSD. To assess whether an acute response to a trauma is associated with the steroid concentrations, one needs to know whether it is necessary to account for a diurnal variation of that particular steroid. Participants Ten non-traumatized women were recruited through advertisement. Before entering the study, all participants received written and oral information about the study and signed a written consent form. Everyone received a medical screening consisting of a physical examination and a test of thyroid and liver function. The study was approved by the local medical ethics committee in Stockholm (2011/851-31/3). Inclusion criteria: Women between 18 and 40 years old, who were not taking any hormonal contraceptives or daily medications, and who were physically and mentally healthy were included. Participants had to experience regular menstrual cycles, and all blood sampling was conducted during the follicular phase of their menstrual cycles (days 6-12).
Participants had to score low on the Beck Depression Inventory (BDI), the Stanford Acute Stress Reaction Questionnaire (SASRQ), and the Structured Clinical Interview for DSM-IV (SCID-I) interview. These scales are clarified later in this article. Exclusion criteria: Potential participants were excluded if they had a history of psychosis, major or bipolar depression, alcohol or substance abuse, neurological disease, endocrine disease, polycystic ovarian syndrome (PCOS), premenstrual dysphoric disorder (PMDD) or if they were pregnant. Further, women were excluded if they were smokers or were taking any daily medications, such as antidepressants, anxiolytics, and sedative drugs. Potential participants were also excluded from the study if they reported having taken any benzodiazepines in the three months before the challenge (including any occasional medication) or having consumed alcohol within 72 hours before the blood sampling. Psychometrics used during inclusion: The Beck Depression Inventory [10] is a 21-item inventory measuring depressive mood and vegetative symptoms of depression. Cut-off points for the sum scores were 0-9 (no depression), 10-16 (mild depression), 17-29 (moderate depression), and ≥ 30 (severe depression). Participants were allowed to score a maximum of nine points. The Stanford Acute Stress Reaction Questionnaire, SASRQ [11] is mainly used to diagnose ASD, but in this study it was used as a total score for measuring PTSD symptoms. To be included, participants had to have low scores (<10 out of a possible 150 points). The PTSD Module of the Structured Clinical Interview for DSM-IV (SCID-I) [12] was also used to establish the absence of posttraumatic stress symptoms. Participants had to score zero out of a possible six points. Procedures The study was conducted in a hospital setting, in a room at the Emergency Clinic for Raped Women at Stockholm South General Hospital, Sweden. An intravenous catheter was inserted in the forearm, at a minimum of one hour before the first blood draw. Blood samples were drawn every fourth hour for 24 hours during the follicular phase (days 6-12) of the menstrual cycle. On the day of the study, participants followed their daily routines. At night, the intravenous catheter made it possible for the research nurse to take blood samples without interfering with the women's sleep. Participants slept in an ordinary bed in a quiet research room and according to their usual diurnal cycles. The blood samples were collected in two untreated vacutainer tubes. The samples were then centrifuged at 3,200 rpm for 10 minutes. Equal aliquots of serum were then transferred to 2 ml microtubes, labeled with an ID code, and stored at -70°C until transfer on dry ice for analysis. A4, DHEA and PROG were purchased from Steraloids Inc. (Newport, RI, USA). The internal standards were deuterium labeled analogues of the steroids d3-Te, d3-Pregn, d2-11DC, d9-17OHProg, d3-17OHPregn, d4-F, d3-E, (Cambridge Isotope Laboratories, Andover, MA, USA), d4-E1, and d3-E2 (CDN Isotopes, Toronto, ON, Canada). All other chemicals were of the highest purity commercially available. Steroid analyses Samples were analyzed as previously described [8,[13][14][15][16]. Briefly, steroids were extracted from samples; DHEA, A4, Te, Pregn, 17OHPregn, 17OHP and Prog were derivatized with hydroxylamine to form oxime derivatives; estrone and estradiol were derivatized with dansyl chloride to form dansyl derivatives. 
The limit of quantification (LOQ) was 0.05 ng/mL for Pregn, 17OHProg, and 11DC; 0.25 ng/mL for 17OHPregn; 1 ng/mL for Prog; 0.01 ng/mL for Te and A4; 0.05 ng/ mL for DHEA; and 1 pg/mL for E1 and E2 [14]. The intra-assay and inter-assay CV were <8% and <11%, respectively [8,13,14]. All steroids were analyzed in positive-ion mode using an electrospray ion source on a triple-quadruple mass spectrometer (AB Sciex5500; Foster City, CA, USA). The HPLC system consisted of series 1260 and 1290 HPLC pumps (Agilent Technologies, Santa Clare, USA), and an HTC PAL autosampler (LEAP Technologies, NC, USA) equipped with a fast wash station. The quadruples Q1 and Q3 were tuned to unit resolution, and the mass spectrometer conditions were optimized for the maximum signal intensity of each steroid. Two mass transitions were monitored for each steroid and its internal standard (IS). Quantitative data analysis was performed using Analyst ® 1.5.2 software. Calibration curves were generated with every set of samples using six calibrators; three quality control samples were included with every set of samples. The Specificity of the analysis in every sample was evaluated by comparing concentrations determined using the primary and the secondary mass transitions [17]. Radioimmunoassay (RIA): The allopregnanolone analysis method has been described in detail elsewhere [18]. Briefly, in this study allopregnanolone was separated from cross-reacting steroids by celite chromatography and thereafter the quantification was made by RIA using a polyclonal rabbit antiserum raised against 3α-hydroxy-20-oxo-5α-pregnane-11-yl-carboxymethyl-ether coupled to bovine serum albumin, provided by RH Purdy, (The Scripps Research Institute, La Jolla, CA, USA) [19]. The rabbit antiserum was used in a dilution of 1/5000. The antibody solution was prepared using [11,12] 3 H-allopregnanolone, 3 × 10 6 cpm/32 ml (Perkin-Elmer Life Sciences, Boston, USA) solution containing 65 mM boric acid (Merck) buffer, pH=8.0, bovine serum albumin 100 mg/ml (Sigma, St Louis, USA), human gamma globulin solution 20 mg/ml (Octapharma, Sweden), and antibody in a milliliter ratio of the antibody solution: 30:1:1:0.006. The solution was allowed to equilibrate overnight at 8°C. Antibody solution (200µl) was added to all sample tubes, and the mixture once again was allowed to stand overnight at 8°C. After the addition of 200 µl saturated ammonium sulfate, each tube was again mixed and centrifuged at 20,000 rpm for 20 minutes. Thereafter, the supernatant was aliquoted into a counting vial and diluted with 3.0 ml Optiphase scintillation medium (Wallac, Finland). The samples were counted in a RackBeta (Wallac, Finland) scintillation counter. The sensitivity of the assays was 25 pg, with an intra-assay coefficient of variation for allopregnanolone of 6.5% and an interassay coefficient of variation of 8.5%. The RIA used does not detect the 3β-epimer (isoallopregnanolone) or the 5β-epimer (pregnanolone). Data analyses The blood draws were performed six times, once during each of the intervals 02:01-06:00, 06:01-10:00, 10:01-14:00, 14:01-18:00, 18:01-22:00, and 22:01-02:00. Mean and standard deviation for each time interval and steroid were calculated. The blood samples were tested for each steroid`s diurnal variation using repeated measures ANOVA for within-subject variation. Spearman correlation coefficient was calculated to assess the correlation between concentrations of the steroids. Results were considered significant when the p-value was less than 0.05. 
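A minimal sketch of the within-subject analysis described above, written in Python rather than the SPSS used in the study; the concentrations are hypothetical placeholders laid out in long format, with one value per participant and time interval.

```python
# Sketch of a within-subject repeated-measures ANOVA across the six time
# intervals, plus a Spearman correlation between two steroid series.
# Data are hypothetical placeholders, not the study's measurements.
import numpy as np
import pandas as pd
from scipy.stats import spearmanr
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
subjects = [f"S{i}" for i in range(1, 11)]                        # 10 participants
intervals = ["02-06", "06-10", "10-14", "14-18", "18-22", "22-02"]
diurnal_mean = np.array([180, 420, 300, 220, 160, 150], float)    # morning peak (hypothetical)

rows = []
for s in subjects:
    for t, mean in zip(intervals, diurnal_mean):
        rows.append({"subject": s, "interval": t, "cortisol": rng.normal(mean, 30)})
df = pd.DataFrame(rows)

# Repeated-measures ANOVA: does concentration vary with time interval within subjects?
res = AnovaRM(df, depvar="cortisol", subject="subject", within=["interval"]).fit()
print(res)

# Spearman correlation between two steroid series (second series hypothetical).
rho, p = spearmanr(df["cortisol"], df["cortisol"] * 0.5 + rng.normal(0, 20, len(df)))
print(f"Spearman rho = {rho:.2f}, p = {p:.3g}")
```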
Concentrations of steroids at each time interval Serum concentrations for each steroid and time interval are listed in Table 1. Concentrations of progesterone in the follicular phase were low, and most samples were below the sensitivity of the assay. Therefore, progesterone was excluded from further analyses. Diurnal variations All steroids, with the exception of estrone and estradiol, had a diurnal variation with significant within-subject variation (see Table 1). Apart from allopregnanolone, all steroids peaked in concentration during time interval 2 (i.e., around 08:00, just after awakening). For cortisol, cortisone, pregnenolone, 17OH-pregnenolone, DHEA, 17OH-progesterone, and androstenedione, the mean concentration at time interval 2 was significantly higher than the mean concentrations in all the other time intervals. Allopregnanolone exhibited a flatter curve than the other steroids did, its highest concentrations occurring in time intervals 2, 3, and 4 (i.e., between 06:00 and 18:00). The highest serum concentration of allopregnanolone was seen during time interval 3. The serum concentration in time interval 3 was higher than in the rest of the time intervals, but not significantly higher than that in time interval 2. Serum concentrations for all steroids during the 24 hours are shown in Figures 2-5. (Table 1 note: data for progesterone are not listed, as concentrations in the follicular phase were below the LOQ of the assay; p-values refer to within-subject variation for each steroid using repeated measures ANOVA; NS, not significant.) Bivariate correlations As seen in Table 2, apart from the estrogens, which correlated only with each other, high correlations occurred between almost all the steroids. Discussion The present study is unique, as it is the first to explore the relationship between concentrations of steroids in the pathway of steroid biosynthesis in reproductive-age women during the follicular phase of the menstrual cycle. The major findings of the present study were that all measured steroids, with the exception of the estrogens, showed a diurnal variation in the follicular phase of the menstrual cycle. For all measured steroids, with the exception of allopregnanolone, peak concentrations were observed around 08:00. Allopregnanolone remained elevated throughout the day, with its highest concentration occurring around 12:00. Further, we saw correlations between a majority of the steroids. The finding that almost all steroids have a natural circadian rhythm is interesting and further demonstrates the importance of adjusting for this in studies examining differences in serum concentrations of steroids in certain patient groups when the subjects are pre-menopausal women in the follicular phase of their menstrual cycles. The finding of a diurnal variation in allopregnanolone is interesting because no variation was seen in the luteal phase of the menstrual cycle in patients with PMDD or in controls [7]. This suggests that allopregnanolone is secreted from the adrenals in the follicular phase with a rhythm similar to that of the glucocorticoids, but on a flatter curve. The somewhat different slope of the curve might be caused by the longer biosynthesis pathway to allopregnanolone; however, this should be explored in further studies. In the study on patients with PMDD [7], both patients and healthy controls had comparable concentrations of cortisol.
However, patients with higher concentrations of allopregnanolone displayed blunted nocturnal cortisol levels. The researchers suggested that the diurnal secretion of cortisol in the luteal phase could be influenced by concentrations of allopregnanolone. Further, they suggested that the timing of blood sampling and individual levels of allopregnanolone could explain the discrepancies in studies examining the HPA axis in PMDD patients. One could also assume that, when interpreting the HPA axis in PTSD patients, allopregnanolone concentrations should be considered. Not adjusting for female patients being in the luteal phase of the menstrual cycle (i.e., when allopregnanolone concentrations are known to be high [20]) could potentially blunt the results. In a recent study, Inslicht et al. [21] discussed difficulties in interpreting neurosteroid responses in pre-menopausal women, suggesting that the reproductive hormones may be involved in modulating the HPA axis. In the present study, all participants were in the follicular phase of the menstrual cycle, and no correlation was seen between allopregnanolone and cortisol (r = 0.108, p = 0.429). However, correlations were found between allopregnanolone, pregnenolone, and DHEA and between all steroids (except the estrogens) and androstenedione. Estrone and estradiol correlated with each other but not with any of the other steroids. Conclusions In sum, the present study suggests that, just as for the glucocorticoids, there is a natural circadian rhythm for allopregnanolone, the pregnenes, androgens, and progestins in the follicular phase of the menstrual cycle. However, the possible interaction between menstrual-cycle phase and the HPA axis, as well as the diurnal variations of the steroids, should be examined further in future studies.
3,873.8
2016-02-28T00:00:00.000
[ "Biology", "Medicine" ]
Simple and Efficient Microwave Assisted N-Alkylation of Isatin We present herein the results of microwave promoted N-alkylations of isatin (1) with different alkyl, benzyl and functionalized alkyl halides. Reactions were carried out under different conditions, always employing methodologies compatible with MW assisted chemistry. Generation of isatin anion employing diverse bases and solvents or using the preformed isatin sodium salt was tested. The best results were achieved using K2CO3 or Cs2CO3 and a few drops of N,N-dimethylformamide or N-methyl-2-pyrrolidinone. These reactions present noteworthy advantages over those carried out employing conventional heating. Introduction N-alkylation of isatin (1, Scheme 1) reduces the lability of the isatin nucleus towards bases, while maintaining its typical reactivity. Thus, N-substituted isatins 2 have been frequently used as intermediates and synthetic precursors for the preparation of a wide variety of heterocyclic compounds [1,2]. In addition, properly functionalized N-alkyl isatins present different biological activities [1], and in recent years compounds showing potent cytotoxicity in vitro [3], antiviral activity [4] and potent and selective caspase inhibition [5] have been reported, among others. As part of our ongoing investigations, we needed to use a series of isatinacetic acid derivatives. This fact led us to explore synthetic methods to obtain N-alkylisatins 2 (Scheme 1). Literature procedures include: a) direct synthesis from N-alkylanilines and b) N-alkylation of isatin. Direct synthesis involves tedious multistep processes which usually give N-alkylisatins in low to moderate overall yields [6]. N-Alkylation of isatin (1) is usually carried out generating the highly conjugated isatin anion (1 -) [7] with different bases, followed by treatment with appropriate alkylating agents, generally alkyl halides or sulphates (Scheme 1). These methods had been extensively reviewed [1,8] and include the use of bases such as NaOH, NaH, CaH 2 and K 2 CO 3 in different solvents. Synthesis of N-functionalized isatins using a parallel synthesis employing a polymer supported strong base for the deprotonation step has been recently reported [9]. Scheme 1. Synthetic route for N-substituted isatins 2. Though some of the above mentioned methods provide good yields of N-alkylisatins, they generally present drawbacks related to: a) the base lability of the isatin nucleus [10], b) use of hazardous reagents such as metal hydrides, which require anhydrous solvents, c) use of aprotic organic solvents with high water solubility and high boiling points, leading to complex workups, d) use of carcinogenic solvents in some cases and e) side reactions due to the presence of keto-carbonyls (i.e. reductions when metallic hydrides are used, aldolisation when K 2 CO 3 in acetone is employed). Besides, reaction times are in general lenghty, with consequent formation of by-products, and hence low yields and difficulties in product isolation. Our interest in this type of reactions prompted us to test the use of microwave (MW) irradiation as an alternative energy source. MW heating has gained popularity in the last decades as it remarkably accelerates a wide variety of reactions and minimizes thermal decomposition of the products. Since the initial work of Gedye [11] and Giguere [12], a rapidly increasing number of reports and reviews have been published demonstrating the importance of such methodology [13]. 
However, to the best of our knowledge, the potential of this method has not yet been exploited for the type of reactions of interest in this case [14]. We present herein results of MW assisted synthesis of N-alkylisatins 2 by N-alkylation of isatin (1) with different alkyl, benzyl and functionalized alkyl halides (Scheme 1, Table 1) and their comparison with those obtained under conventional heating. Reactions were carried out under different conditions, always employing methodologies compatible with MW assisted chemistry. Results and Discussion We initially examined reactions under "dry" conditions [16], irradiating the mixture of neat reactants, either generating the isatin anion (1⁻) in situ (Method A) or employing the pre-formed sodium isatin salt (Na⁺1⁻) (Method B). In all cases decomposition of reactants or recovery of unreacted starting material was observed. On the other hand, results improved notably when a few drops of a polar aprotic solvent, enough to humidify the reaction mixture, were added, giving a polar mixture that is more prone to MW absorption [17]. This is a fundamental requirement in the cases where the sodium salt of isatin or alkylating agents with a high melting point (e.g., N-methyl-N-phenylchloroacetamide) are used. Using ethyl chloroacetate as the alkylating agent, we optimized reaction conditions by testing several parameters such as different bases and solvents. The best results were obtained employing K2CO3 or Cs2CO3 in N,N-dimethylformamide (DMF) or N-methyl-2-pyrrolidinone (NMP). Results obtained with different alkyl halides are shown in Table 2. Employing low or medium power settings, full conversions are achieved in a few minutes and moderate to high yields of compounds 2 are obtained. The use of NMP is especially important when poorly reactive halides are employed (see entry 19). We also observed that, in general, the use of Cs2CO3 as the base facilitates the workup, but yields are lower in some cases (see entries 5 and 17). In experiments carried out under conventional heating we encountered longer reaction times and lower yields. As an example, using K2CO3/DMF, the cinnamyl derivative 2e was obtained in high yield in two minutes, whereas under classical heating four hours were required (Entry 8). Furthermore, to reach satisfactory yields high amounts of solvent are required, thus making product isolation more difficult. Reaction of isatin with phenacyl bromide, either under conventional heating or in the MW promoted reaction, leads to the N-substituted derivative 2l in acceptable yields, although the MW procedure provided the best results (Entry 22). Variable amounts of epoxide 3 (Scheme 2), resulting from addition of the halomethyl ketone anion (A) onto the isatin β-carbonyl and further cyclization, were obtained as a side product [18]. MW promoted N-alkylation of isatin using the preformed sodium salt (Na⁺1⁻) requires higher power to complete the reactions, but the yields do not surpass 70% (entries 4, 6, 9, 15, 18 and 20). In the reaction with phenacyl bromide, the method facilitates workup and improves yields by minimizing epoxide formation (Entry 23). The absence of an excess of base, which makes the formation of carbanion A difficult, accounts for these results (Scheme 2). As an alternative, techniques combining MW irradiation with the use of the isatin sodium salt supported on mineral surfaces under solvent-free conditions were used (Method C), an eco-friendly methodology which has received attention in recent years [16].
Under such conditions, high powers were required. In order to avoid reactant and product decomposition, reactions were conducted with intermittent heating. According to Varma et al. [19], this method is designed to avoid overheating of the reactants when a household microwave oven is employed. However, yields did not exceed 62%, and could not be improved using phase transfer catalysis (Entries 7, 16 and 21). Scheme 2. Probable mechanism for the synthesis of epoxide 3. Conclusions We have developed a simple and efficient MW assisted synthesis of N-alkylisatins 2 by N-alkylation of isatin (1) using a household oven. The procedure involves the use of K2CO3 or Cs2CO3 and a few drops of DMF or NMP, and is a general one for reactions with alkyl, benzyl and functionalized alkyl halides of different reactivity. The use of MW irradiation offers many advantages over conventional heating: it remarkably decreases reaction times, requires less solvent, thus facilitating reaction workups, and increases yields. General Melting points were taken on a Büchi capillary apparatus and are uncorrected. The ¹H- and ¹³C-NMR spectra were recorded on a Bruker MSL 300 MHz spectrometer. DMSO-d6 was used as the solvent, and the standard concentration of the samples was 20 mg/mL. Chemical shifts are reported in ppm (δ) relative to TMS as an internal standard. Splitting multiplicities are reported as singlet (s), broad signal (bs), doublet (d), double doublet (dd), triplet (t), double triplet (dt), quartet (q), and multiplet (m). Electron-impact MS were performed on a Shimadzu QP-1000 instrument at 70 eV. High resolution spectra were obtained with a model VG AutoSpec three-sector (EBE) mass spectrometer (Waters, Milford, MA, USA) at a scan rate of 1 scan/4 s, operating with variable magnetic field at 8000 resolving power (10% valley definition) using perfluorokerosene (PFK) as the reference compound. TLC analyses were carried out on Silica gel 60 F254 using chloroform:methanol (9:1) as solvent. Preparative thin layer separations (PLC) were carried out by centrifugally accelerated radial chromatography using a Chromatotron model 7924T. The rotors were coated with Silica Gel 60 PF254 and the layer thickness was 2 mm. Chloroform and increasing percentages of methanol were used as eluent. Reagents, solvents and starting materials were purchased from standard sources and purified according to literature procedures. Reactions involving solid or high-boiling reagents under MW irradiation were conducted in a domestic MW oven (BGH 16260) employing open vessels. An adaptation for reflux heating [20] was used when volatile alkylating agents were employed. General procedure for synthesis of compounds 2 employing conventional heating A mixture of isatin (1, 147 mg, 1 mmol), potassium carbonate (182 mg, 1.3 mmol), the corresponding alkyl halide (1.1 mmol) and DMF (5 mL) was heated in an oil bath at the appropriate temperature and monitored by TLC. When the reaction was completed, the reaction mixture was poured into ice-water. If the product crystallized, the resulting solid was filtered, washed with water and purified by recrystallization or by chromatographic methods. If not, the suspension was extracted with chloroform and the organic layer was washed with water, then dried and concentrated in vacuo, affording compounds 2. Details of the reactions (temperatures, times and yields) are listed in Table 2. With either higher temperatures or smaller amounts of solvent, the yields diminished.
General procedures for synthesis of compounds 2 employing MW irradiation Method A: Generating the isatin anion (1 -) in situ Reaction conditions were selected using ethyl chloroacetate as alkylating agent. Na 2 CO 3 , K 2 CO 3, Cs 2 CO 3, CaH 2 , TEA, LiOH, NMM, NaOEt were tested as bases. The following polar aprotic solvents were evaluated: DMF, DMA, HMPT, MeCN, DMSO and NMP. The best results were obtained using K 2 CO 3 or Cs 2 CO 3 and a few drops of DMF or NMP. The following general procedure was employed: an intimate mixture of isatin (1, 147 mg, 1 mmol), the appropriate alkyl halide (1.1 mmol), base (1.3 mmol) and some drops of the corresponding solvent (giving a slurry at room temperature) was exposed to MW irradiation. The reaction mixture was cooled to room temperature and mixed thoroughly with ice-water. Compounds 2 were isolated following the procedure indicated above. Solvents, powers, times and yields are listed in Table 2 (entries 1- 3, 5, 8, 10-14, 17, 19 and 22). Method B: Employing preformed isatin sodium salt A solution of sodium (0.8 g) in absolute ethanol (16 mL) was added to isatin (6 g) suspended in absolute ethanol (24 mL), the mixture being well shaken to avoid caking. The violet-black isatin sodium salt (Na + 1 -) was collected, well washed with alcohol and finally with benzene until the washings were colourless and then dried. An intimate mixture of isatin sodium salt (Na + 1 -) (169 mg, 1 mmol), the appropriate alkyl halide (1.1 mmol) and some drops of the corresponding solvent was exposed to MW irradiation and the reaction products isolated as was indicated in the method A. Solvents, powers, times and yields are given in Table 2 (entries 4, 6, 9, 15, 18, 20 and 23). Method C: Employing supported reagents To a solution of isatin sodium salt (Na + 1 -, 169 mg, 1 mmol) in the minimum amount of water, neutral alumina (400 mg) was added. The mixture was evaporated with a rotary evaporator and the solid was dried 1 h at 110ºC. The corresponding alkyl halide (1.1 mmol) was adsorbed onto isatin on alumina and the mixture was irradiated by microwave in a Pyrex beaker (15 mL). After cooling at room temperature, the mixture was extracted with dichloromethane. The product was purified after evaporation of the solvent. Powers, times and yields are listed in Table 2 (Entries 7, 16 and 21).
2,844.8
2008-04-01T00:00:00.000
[ "Chemistry" ]
Combining Partial Specifications using Alternating Interface Automata To model real-world software systems, modelling paradigms should support a form of compositionality. In interface theory and model-based testing with inputs and outputs, conjunctive operators have been introduced: the behaviour allowed by composed specification s1 ∧ s2 is the behaviour allowed by both partial models s1 and s2. The models at hand are non-deterministic interface automata, but the interaction between non-determinism and conjunction is not yet well understood. On the other hand, in the theory of alternating automata, conjunction and non-determinism are core aspects. Alternating automata have not been considered in the context of inputs and outputs, making them less suitable for modelling software interfaces. In this paper, we combine the two modelling paradigms to define alternating interface automata (AIA). We equip these automata with an observational, trace-based semantics, and define testers, to establish correctness of black-box interfaces with respect to an AIA specification. Introduction The challenge of software verification is to ensure that software systems are correct, using techniques such as model checking and model-based testing. To use these techniques, we assume that we have an abstract specification of a system, which serves as a description of what the system should do. A popular approach is to model a specification as an automaton. However, the huge number of states in typical real-world software systems quickly makes modelling with explicit automata infeasible. A form of compositionality is therefore usually required for scalability, so that a specification can be decomposed into smaller and understandable parts. Parallel composition is based on a structural decomposition of the modelled system into components, and it thus relies on the assumption that components themselves are small and simple enough to be modelled. This assumption is not required for logical composition, in which partial specification models of the same component or system are combined in the manner of logical conjunction. Formally, for a composition to be conjunctive, the behaviour allowed by s1 ∧ s2 is the behaviour allowed by both partial specifications s1 and s2. Such a composition is important for scalability of modelling, as it allows writing independent partial specifications, sometimes called view modelling [3]. On a fundamental level, specifications can be seen as logical statements about software, and the existence of conjunction on such statements is only natural. Conjunctive operators have been defined in many language-theoretic modelling frameworks, such as for regular expressions [12] and process algebras [5].
Conjunction for Inputs and Outputs A conjunctive operator ∧ has also been introduced in many automata frameworks for formal verification and testing, such as interface theory [8], ioco theory [3] and the theory of substitutivity refinement [7]. Within these theories, systems are modelled as labelled transition systems [15] or interface automata [1] (IA), and actions are divided into inputs and outputs. An informal example of some (partial) specification models, as could be expressed in these theories, is shown by the automata in Figure 1, in which inputs are labelled with question marks, and outputs with exclamation marks. The specifications represent a vending machine with two input buttons (?a and ?b), which provides coffee (!c) and tea (!t) as outputs, optionally with milk (!c+m and !t+m). The first model, p, specifies that after pressing button ?a, the machine dispenses coffee. The second model, q, specifies that after pressing button ?b, the machine has a choice between dispensing tea, or tea with milk. The third model, r, is similar, but uses non-determinism to specify that button ?b results in coffee with milk or tea with milk. The fourth model, p ∧ q ∧ r, states that all former three partial models should hold. Here, we use the definition of ∧ from [3], but the definition from [7] is similar. An input is specified in the combined model if it is specified in any partial model, making both buttons ?a and ?b specified. Additionally, an output is allowed in the combined model if it is allowed by all partial models, meaning that after button ?b, only tea with milk is allowed. Conjunctions of states This form of conjunctive composition acts as an operator on entire models. However, a partial specification could also describe the expected behaviour of a particular state of the system, other than the initial state. For example, suppose that the input ?on turns the vending machine on, after which the machine should behave as specified by p, q and r from Figure 1. This, by itself, is also a specification, illustrated by s in Figure 2. However, the formal meaning of this model is unclear: transitions connect states, whereas p ∧ q ∧ r is not a state but an entire automaton. A less trivial case is partial specification t, also in Figure 2: after obtaining any drink by input ?take, we should move to a state where we can obtain a drink as described by specifications p, q, r and t. Thus, we combine conjunctions with a form of recursion. This cannot easily be formalized using ∧ as an operator on automata, like in [3,7,8]. Defining conjunction as a composition on individual states would provide a formal basis for these informal examples. Conjunctions of states are a main ingredient of alternating automata [6], in which conjunctions and non-determinism alternate. Here, non-determism acts as logical disjunction, dually to conjunction. Because of this duality, both conjunction and disjunction are treated analogously: both are encoded in the transition relation of the automaton. This contrasts the approach of defining conjunction directly on IAs, where non-determinism is encoded in the transition relation of the IA, whereas conjunction is added as an operator on IAs, leaving the duality between the two unexploited. In fact, the conjunction-operator in [3] even requires that any non-determinism in its operands is removed first, by performing an exponential determinization step. 
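To make the composition rule just described concrete, the following is a small illustrative sketch, not taken from the paper, of one step of such a conjunctive operator on already-determinized partial specifications. The encoding (plain dictionaries, a "?" prefix for inputs and a "!" prefix for outputs) and all state names are ours.

```python
# Partial specifications as {state: {action: successor}}; inputs "?", outputs "!".
p = {"p0": {"?a": "p1"}, "p1": {"!c": "p0"}}
q = {"q0": {"?b": "q1"}, "q1": {"!t": "q0", "!t+m": "q0"}}
# r is given here in already-determinized form, as required by the operator of [3]
r = {"r0": {"?b": "r1"}, "r1": {"!c+m": "r0", "!t+m": "r0"}}

def step(specs, states, action):
    """One step of the conjunction: which partial specs follow the action, and where to."""
    followers = []
    for spec, state in zip(specs, states):
        trans = spec[state]
        if action in trans:
            followers.append((spec, trans[action]))
        elif action.startswith("!"):
            return None            # an output must be allowed by *all* partial specs
        # a partial spec that leaves an input unspecified imposes no constraint on it
    return followers or None       # an input must be specified by *some* partial spec

specs, states = (p, q, r), ("p0", "q0", "r0")
after_b = step(specs, states, "?b")                   # q and r both follow ?b
specs2, states2 = zip(*after_b)
print(step(specs2, states2, "!t"))                    # None: plain tea is forbidden by r
print([s for _, s in step(specs2, states2, "!t+m")])  # ['q0', 'r0']: tea with milk is allowed
```

On this toy encoding, ?b is specified because at least one operand specifies it, while !t is rejected because not every operand allows it; the sketch also suggests why the operands are required to be deterministic first, since a non-deterministic operand would force the composition to track sets of its states rather than a single one.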
For example, model r in Figure 1 is nondeterministic, and must be determinized to the form of model q before p ∧ q ∧ r is computed. This indicates that it is hard to combine conjunction and nondeterminism in an elegant way, without understanding their interaction. Despite their inherent support for conjunction, alternating automata are not entirely suitable for modelling the behaviour of software systems, since they lack the distinction between inputs and outputs. In this respect, alternating automata are similar to deterministic finite automata (DFAs). Distinguishing inputs and outputs in an IA allows modelling of software systems in a less abstract way than with the homogeneous alphabet of actions of DFAs and alternating automata. Contributions We combine concepts from the worlds of interface theory and alternating automata, leading to Alternating Interface Automata (AIAs), and show how these can be used in the setting of a trace semantics for observable inputs and outputs. We provide a solid formal basis of AIAs, by combining alternation with inputs and outputs (Section 3.1), defining a trace semantics for AIAs (Section 3.2), by lifting the input-failure refinement semantics for non-deterministic interface automata [11] to AIAs, providing insight into the semantics of an AIA, by defining a determinization operator (Section 3.3) and a transformation between IAs and AIAs (Section 3.4), and defining testers (Section 4), which represent practical testing scenarios for establishing input-failure refinement between a black-box implementation IA and a specification AIA, analogously to ioco test case generation [15]. The definition of input-failure refinement [11] is based upon the observation that, for a non-deterministically reached set of states Q, the observable outputs of that set are the union of the outputs of the individual states in Q, whereas the specified inputs for Q are the intersection of the inputs specified in individual states in Q. For conjunction, we invert this: outputs allowed by a conjunction of states are captured by the intersection, whereas specified inputs are captured by the union. In this way, our AIAs seamlessly combine the duality between conjunction and non-determinism with the duality between inputs and outputs. Proofs can be found in the extended technical report [10]. Preliminaries We first recall the definition of interface automata [1] and input-failure refinement [11]. The original definition of IAs [1] allows at most one initial state, but we generalize this to sets of states. Moreover, [1] supports internal actions, which we do not need. Transitions are commonly encoded by a relation, whereas we use a function. In examples, we represent IAs graphically as in Figure 1. For the remainder of this paper, we assume fixed input and output alphabets I and O for IAs, with L = I ∪ O. For (sets of) sequences of actions, * denotes the Kleene star, and ε denotes the empty sequence. We define auxiliary notation in the style of [15]. Definition 2. Let s ∈ IA, Q ⊆ Qs, q, q′ ∈ Qs, ℓ ∈ L and σ ∈ L*. We omit the subscript for interface automaton s when clear from the context. We use IAs to represent black-box systems, which can produce outputs, and consume or refuse inputs from the environment. This entails a notion of observable behaviour, which we define in terms of input-failure traces [11]. Definition 3. For any input action a, we denote the input-failure of a as ā. Likewise, for any set of inputs A, we define Ā = {ā | a ∈ A}.
The domain of input-failure traces is defined as FT_I,O = L* ∪ L* · Ī. For s ∈ IA, we define Ftraces(s) = traces(s) ∪ {σā | σ ∈ L*, a ∈ I, a ∉ in(s after σ)}. Thus, a trace σā indicates that σ leads to a state where a is not accepted, e.g. a greyed-out button which cannot be clicked. Any such set of input-failure traces is prefix-closed. Input-failure traces are the basis of input-failure refinement, which we will now explain briefly. This refinement relation was introduced in [11] to bridge the gap between alternating refinements [1,2] and ioco theory [15]. Similarly to normal trace inclusion, the idea is that an implementation may only show a trace if a specification also shows this trace. Moreover, the most permissive treatment of an input is to fail it, so if a specification allows an input failure, then it also must allow acceptance of that input, as expressed by the input-failure closure: for all σ ∈ L*, a ∈ I and ρ ∈ FT_I,O, σā ∈ S ⟹ σaρ ∈ S. The input-failure closure of S is the smallest input-failure closed superset of S. Input-failure refinement on IAs is then defined as inclusion of the Ftraces of the left-hand model in the input-failure closure of the Ftraces of the right-hand model, and input-failure equivalence as refinement in both directions. The input-failure closure of the Ftraces serves as a canonical representation of the behaviour of an IA. That is, two models are input-failure equivalent if and only if the closure of their input-failure traces is the same, as stated in Proposition 5. Formally, input-failure refinement is thus a preorder, making it suitable for stepwise refinement. Alternating Interface Automata Real software systems are always in a single state, but the precise state of a system cannot always be derived from an observed trace. Due to non-determinism, a trace may lead to multiple states. In IAs, this is modelled as a set of states, such as the set of initial states, the set T(q, ℓ) for state q and action ℓ, and the set s after σ for IA s and trace σ. The domain of such non-deterministic views on an IA with states Q is thus the powerset of states, P(Q). In a set of states Q, traces from any individual state in Q may be observed. Alternation Alternation generalizes this view on automata: a system may not only be nondeterministically in multiple states, but also conjunctively. When conjunctively in multiple states, only traces which are in all these states may be observed. Alternation is formalized by exchanging the domain P(Q) for the domain D(Q). Formally, D(Q) is the free distributive lattice, which exists for any set Q [14], where equivalence of terms is completely defined by the usual axioms of a bounded distributive lattice (including distributivity). In short, (D(Q), ∨, ∧, ⊥, ⊤) forms a distributive lattice. Expression ⟨q⟩ is named the embedding of q in D(Q), and the operators ∨ and ∧ are named disjunction and conjunction, respectively. For the remainder of this paper, we make no distinction between expressions and their equivalence classes. For finite n, we introduce shorthand n-ary operators ⋁ and ⋀ in the usual way. We distinguish the embedding ⟨q⟩ ∈ D(Q) from q itself. We require this distinction only in Definition 18, where we will point this out. Otherwise, we do not need this distinction, so we write q instead of ⟨q⟩. Intuitively, disjunction q1 ∨ q2 replaces the non-deterministic set {q1, q2}. This is formalized by extending IAs with alternation. Configurations ⊤ and ⊥ are analogous to the empty set of states in an IA s: if Ts(q, ℓ) = ∅, this means that state q does not have a transition for ℓ.
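As an aside, the free distributive lattice D(Q) just described admits a simple concrete encoding that may help intuition: a configuration can be stored in disjunctive normal form as a set of conjunctions, each conjunction being a set of states, with ⊥ the empty disjunction and ⊤ the disjunction containing only the empty conjunction. This encoding is only illustrative; the paper does not prescribe any particular representation.

```python
# Illustrative DNF encoding of configurations in D(Q): a frozenset of conjunctions,
# each conjunction a frozenset of states. Not a canonical form (absorption is not applied).
BOTTOM = frozenset()                       # the empty disjunction
TOP = frozenset({frozenset()})             # the single empty conjunction

def embed(q):
    """Embedding of a single state q into D(Q)."""
    return frozenset({frozenset({q})})

def disjoin(e1, e2):
    """e1 ∨ e2: union of the disjuncts."""
    return e1 | e2

def conjoin(e1, e2):
    """e1 ∧ e2: distribute the conjunction over the disjuncts."""
    return frozenset({c1 | c2 for c1 in e1 for c2 in e2})

# (q1 ∨ q2) ∧ q3 equals (q1 ∧ q3) ∨ (q2 ∧ q3), as required by distributivity
e = conjoin(disjoin(embed("q1"), embed("q2")), embed("q3"))
print(e)                               # {frozenset({'q1', 'q3'}), frozenset({'q2', 'q3'})}
print(conjoin(e, BOTTOM) == BOTTOM)    # e ∧ ⊥ = ⊥
print(conjoin(e, TOP) == e)            # e ∧ ⊤ = e
```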
In terms of input-failure refinement, not having a transition for an input means that the input is underspecified, whereas not having a transition for an output means that the output is forbidden. This distinction is made explicit in AIA by using ⊤ to represent underspecification and ⊥ to represent forbidden behaviour. We will formalize this in Section 3.2. Definition 7 also allows output transitions to ⊤, meaning that the behaviour is unspecified after that output. Automata models which do not allow distinct configurations ⊤ and ⊥ commonly represent such underspecified behaviour with an explicit chaotic state [3,4] instead. We graphically represent AIAs in a similar way as IAs, with some additional rules. A transition T(q0, ℓ) = q1 is represented by a single arrow from q0 to q1. We represent T(q0, ℓ) = q1 ∨ q2 by two arrows q0 → q1 and q0 → q2 labelled ℓ, analogous to non-determinism in IAs. Conjunction T(q0, ℓ) = q1 ∧ q2 is shown by adding an arc between the arrows. Nested expressions are represented by successive splits, as shown in Example 8. A state q without an outgoing arrow for an output ℓ ∈ O represents T(q, ℓ) = ⊥, and a state without input transitions for input ℓ indicates T(q, ℓ) = ⊤. For ℓ ∈ O, a transition T(q, ℓ) = ⊤ is shown with an arrow to ⊤, denoting underspecification, but note that ⊤ is a configuration, not a state. Example 8. Figure 3 shows AIAs sA and sB. Moreover, AIA sB combines the partial specifications from Section 1. Before defining trace semantics for AIAs, we extend the transition function from single actions to sequences of actions, by defining an after-function on AIAs. This function transforms configurations by substituting every state according to the transition function, similarly to the approach for alternating automata in [6]. Like before, we omit the subscript if clear from the context. We also define (s after σ) = e0s after σ, where e0s denotes the initial configuration of s. Example 11. Consider sB in Figure 3. We evaluate sB after ?on ?b !t, as follows: Intuitively, this means that giving a tea without milk after ?on ?b is forbidden. In contrast, tea with milk is allowed, and leads to configuration q10B. Input-Failure Semantics for AIAs IAs are equipped with input-failure semantics, based on the traces and underspecified inputs of the IA. We lift this to AIAs via the after-function, using that ⊥ indicates forbidden behaviour, and ⊤ indicates underspecified behaviour. Compare Definition 4 and Definition 12 for input-failure refinement for IAs and for AIAs. For AIAs, refinement is defined directly over their Ftraces, whereas for IAs, the input-failure closure of the Ftraces is used for the right-hand model (and optionally for the left-hand model, according to Proposition 5). In this regard, AIAs are a more direct and natural representation of input-failure traces, since the input-failure closure is not needed. Another motivation to represent input-failure traces with AIAs is the connection between the distributive lattice D(Q) and the lattice of sets of input-failure traces: ∧ and ∨ are connected to intersection and union of input-failure traces, respectively, and ⊤ and ⊥ represent the largest and smallest possible input-failure trace sets. Note that transitions T(q, a) = ⊥ are not allowed for an input a: in that case, Ftraces(q) would contain trace ε, but it would not contain extension a nor ā of ε, meaning that after trace ε it is not allowed to accept nor to refuse a. We can lift configurations ⊤ and ⊥, as well as ∧ and ∨, to the level of AIAs. This provides the building blocks to compose specifications.
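The after-function just described, which substitutes every state of a configuration by the result of the transition function, can be sketched on the same illustrative DNF encoding used above. The three-state automaton below is a hypothetical example rather than one of the paper's models; missing entries default to ⊤ for inputs and ⊥ for outputs, matching the convention explained above.

```python
# Sketch of 'after' on an AIA with transition function T: Q x L -> D(Q).
# Configurations are frozensets of frozensets of states (DNF), as in the earlier sketch.
BOTTOM = frozenset()
TOP = frozenset({frozenset()})

def disjoin(e1, e2):
    return e1 | e2

def conjoin(e1, e2):
    return frozenset({c1 | c2 for c1 in e1 for c2 in e2})

# hypothetical AIA: inputs start with "?", outputs with "!"
T = {
    "q0": {"?a": frozenset({frozenset({"q1"}), frozenset({"q2"})})},   # q1 ∨ q2
    "q1": {"!x": frozenset({frozenset({"q0"})})},
    "q2": {"!x": frozenset({frozenset({"q0"})}),
           "!y": frozenset({frozenset({"q0"})})},
}

def trans(q, action):
    """Missing transitions: underspecified (⊤) for inputs, forbidden (⊥) for outputs."""
    default = TOP if action.startswith("?") else BOTTOM
    return T[q].get(action, default)

def after(config, action):
    """Substitute every state by its transition result, respecting ∨ and ∧."""
    result = BOTTOM
    for conjunct in config:                  # disjunction over the conjuncts
        term = TOP
        for q in conjunct:                   # conjunction over the states
            term = conjoin(term, trans(q, action))
        result = disjoin(result, term)
    return result

c = frozenset({frozenset({"q0"})})           # configuration q0
c = after(c, "?a")                           # q1 ∨ q2
print(after(c, "!x"))                        # both disjuncts allow !x and lead to q0
print(after(c, "!y"))                        # only q2 allows !y; the q1 disjunct yields ⊥
```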
Specifications s and s ⊥ can be used to specify that any or no behaviour is considered correct, respectively. The operators ∧ and ∨ on specifications fulfill the same role as existing operators in substitutivity refinement [7], and have similar properties, described in Proposition 14. AIA Determinization In case of nestings of ∧ and ∨, the after-set s after σ may not be clear immediately, so a transition function producing configurations without ∧ and ∨ is easier to interpret. For this reason, we lift the notions of determinism and determinization from IAs [11] to the alternating setting. Definition 17. Let s ∈ AIA and e ∈ D(Q s ). Then e is deterministic if e = or e = ⊥ or e = q for some q ∈ Q s . Furthermore, s is deterministic if for all σ ∈ L * , configuration s after σ is deterministic. Compare the notions of determinism for IAs and AIAs. For every trace σ, a deterministic IA s is in a singleton state (s after σ) = {q}, unless (s after σ) = ∅ (that is, σ is not a trace of s). For AIAs, this singleton set {q} is replaced by the embedding q , and ∅ is replaced by or ⊥, depending on whether this set was reached by an undespecified action or a forbidden action. We now define determinization, where we require the distinction between q and q to avoid ambiguity. Example 20. Figure 4 shows (the reachable part of) the determinizations of s A and s B from Figure 3. In det(s A ), state q 0 A ∧ q 2 A has no outgoing !x-transition. Example 20 shows that an input is specified by a conjunction of states in the determinization if any of the individual state specify this input, whereas an output is allowed by a conjunction of states only if all of the individual state allow this output. In the setting of IA, [11] already established that this works in a reversed way for non-determinism, following their definition of determinization: all individual states of a disjunction should specify an input to specify it in the determinization, and any individual state should allow an output to allow it in the determinization. Their so-called input-universal determinization is an instance of the determinization from Definition 18, using only disjunctions. This duality arises from Definition 10 of after, since the determinization directly represents the after-function: the determinizations in Example 20 correspond to the after-sets such as those derived in Example 11. This correspondence is formalized in Proposition 21. Proposition 21. Let s ∈ AIA and σ ∈ L * . Then A known result [6] is that alternating automata are exponentially more succinct than non-deterministic automata, and double exponentially more succinct than deterministic automata. Although alternating automata are not a special case of AIAs (as AIAs lack the accepting and non-accepting states of alternating automata), we expect AIAs to be exponentially more succinct than IAs, as well. Connections between IAs and AIAs IAs and AIAs are used to represent sets of input-failure traces, and are in that sense interchangeable. First, we show that any IA can be translated to an AIA. Definition 29 formalizes how disjunction in an AIA corresponds to nondeterminism in IA. Specifically, if no transitions are present for some output in an IA, then the transition function of the corresponding AIA gives ∅ = ⊥ for this output, analogous to the explicit case for inputs. Note that the graphical representation of an IA and that of its induced AIA are the same. The translation from AIAs to IAs is more involved. 
For disjunctions of states (q after ) = q 1 ∨ q 2 , the translation of Definition 24 can simply be inverted, but this is not possible for conjunctions. As such, we represent any configuration by its unique disjunctive normal form. Definition 27. Let e ∈ D(Q). Then DNF(e) is the smallest set in P(P(Q)) such that e = { Q | Q ∈ DNF(e)}. The set DNF(e) can be constructed by using the axioms from Definition 6. Testing Input-Failure Refinement So far, we have introduced refinement as a way of specifying correctness of one model with respect to another. Often, a specification is indeed a model, but we use it to ensure correctness of a real-world software implementation. To this end, we assume that this implementation behaves like a IA. We cannot see the actual states and transitions of this IA, but we can provide inputs to it and observe its outputs. We assume that this IA must have an initial state, i.e. it is non-empty. In this section, we introduce a basis for model-based testing with AIAs, analogously to ioco test case generation [15]. Given a specification AIA, we derive a testing experiment on non-empty implementation IAs, in order to observe whether input-failure refinement holds with respect to the specification. This requires an extension of input-failure refinement to these domains. Testers for AIA Specifications From a given specification AIA, we derive a tester. We model this tester as an IA as well, which can communicate with an implementation IA through a form of parallel composition. The tester eventually concludes a verdict, indicating whether the observed behaviour is allowed. To communicate, the inputs of the implementation must be outputs for the tester, and vice versa (note that I and O denote the inputs and outputs for the implementation, respectively). The tester should not block or ignore outputs from the implementation, meaning that the tester should be input-enabled. If the tester intends to supply an input to the implementation, it should also be prepared for a refusal of that input. A verdict is given by means of special states pass or fail. Lastly, to give consistent verdicts, a tester should be deterministic. This leads to the following definition of testers. Definition 34. A tester for (an IA or AIA with) inputs I and outputs O is a deterministic, input-enabled IA t = (Q t , O, I ∪ I, T, q 0 t ) with pass, fail ∈ Q t , such that pass and fail are sink-states with out(pass) = out(fail) = ∅, and a ∈ out(q) ⇐⇒ a ∈ out(q) for all q ∈ Q t and a ∈ I. Testing is performed by a special form of parallel composition of a tester and an implementation. If the tester chooses to perform an input while the implementation also chooses to produce an output, this results in a race condition. In such a case, both the input or the output can occur during test execution. We assume a synchronous setting, in which the implementation and specification agree on the order in which observed actions are performed (in contrast to e.g. a queue-based setting [13], in which all possible orders are accounted for). These assumptions are in line with the assumptions in e.g. ioco-theory [15], and lead to the following definition of test execution. Definition 35. Let i ∈ IA be non-empty, and let t be a tester for i. We write We say that i fails t if q 0 t | q 0 i σ − → fail | q i for some σ and q i , and i passes t otherwise. We reuse the notions of soundness and exhaustiveness from [15], to express whether a tester properly tests for a given specification. Definition 36. 
Let s ∈ AIA and let t be a tester for s. Then t is sound for s if for all i ∈ IA with inputs I and outputs O, i fails t implies i ≰if s. Moreover, t is exhaustive for s if for all i ∈ IA, i passes t implies i ≤if s. A simple attempt to translate a specification AIA s to a sound and exhaustive tester would be similar to the determinization of s, but replacing every occurrence of ⊥ and ⊤ by fail and pass, respectively. However, this tester is quite inefficient. If a tester reaches pass after both σa and σā, then this input a does not need to be tested after σ. Specifically, this is the case if and only if trace σa leads to specification configuration ⊤. We thus improve the tester for a given specification as follows. The resulting tester for specification sB of Figure 3 is shown in Figure 5. A tester t is a test case if for all qt ∈ Qt, |out(qt)| ≤ 1, and there are no infinite sequences of successive states q0t, q1t, . . . ∈ Qt \ {pass, fail}. The test case generation algorithm of [15] is non-deterministic, since it must choose at most one input in every state, and it must choose when to stop testing. We avoid defining a separate test case generation algorithm, and instead use Theorem 39 to obtain sound test cases. If specification s1 is weakened to s2, such that tester(s2) is a test case, then soundness of tester(s2) for s1 is guaranteed by the theorem. Such a weakened singular specification s2 describes a finite, tree-shaped part of the original specification s1. It can be created from s1 similarly to test case generation in [15]. In every state σ of the tree, we either decide to pick one input specified in s1 and also specify that in s2; or we do not specify any input, but only outputs; or we leave any successive behaviour unspecified (⊤). Test cases based on singular specifications are inherently sound, and for any incorrect implementation, it is possible to find a singular specification which induces a test case that detects this incorrectness. Theorem 42. If s2 is a singular specification for s1, then tester(s2) is a sound test case for s1. Theorem 43. Let i ∈ IA and s1 ∈ AIA. If i ≰if s1, then there is a singular specification s2 for s1 such that i fails tester(s2). Example 44. Specification sB in Figure 3 can be weakened to singular specification sC shown in Figure 6 (a weakened version sC of the vending machine, together with the test case tester(sC); question and exclamation marks are interchanged in tester(sC) to indicate that the input and output alphabets have been interchanged with respect to sC). Indeed, sB ≤if sC holds, which can be established by comparing sC with det(sB) in Figure 4. Therefore tester(sC) is a sound test case for sB. Conclusion and Future Work Alternating interface automata serve as a natural and direct representation for sets of input-failure traces, and therefore also for refinement of systems with inputs, outputs, non-determinism and conjunction. We have used the observational nature of input-failure traces to define testers, describing an experiment to observationally establish refinement of a black-box system. The disjunction and conjunction of alternation bring interface automata specifications closer to the realm of logic and lattice theory. On the theoretical side, a possible direction is to extend configurations from distributive lattices to a full logic. On the practical side, classical testing techniques acting on logical expressions, such as combinatorial testing, could be translated to our black-box configurations of states.
A possible criticism of our running example of a vending machine sB in Figure 3 may be that its representation as an AIA is not concise, since the determinization det(sB) is much smaller and more understandable than sB itself. This is because the individual specifications offer a choice between outputs, such as tea with or without milk, whereas the intersection of all choices is a singleton. A more natural encoding for this example is to express the types of drink with data parameters, and the restrictions on them by logical constraints. This requires an automaton model in the style of symbolic transition systems [9], which could be enriched with the concepts of alternation of AIAs. Interface automata typically contain internal transitions, and the interaction between internal behaviour and alternation is not immediately clear. A possible approach to extend AIAs with internal behaviour is to lift the ε-closure of [1], the set of states reachable via internal transitions, to the level of configurations.
6,820.8
2020-02-20T00:00:00.000
[ "Computer Science" ]
Numerical solution of an integral equation arising in the problem of cruciform crack using Daubechies scale function This paper is concerned with obtaining an approximate numerical solution of a classical integral equation of some special type arising in the problem of cruciform crack. This integral equation has been solved earlier by various methods in the literature. Here, approximation in terms of Daubechies scale functions is employed. The numerical results for the stress intensity factor obtained by this method for a specific forcing term are compared to those obtained by various methods available in the literature, and the present method appears to be quite accurate. Introduction Integral equations occur naturally in many areas of mathematical physics. Many engineering and applied science problems arising in water waves, potential theory and electrostatics are reduced to solving integral equations. The problem of finding the crack energy and the distribution of stress in the vicinity of a cruciform crack leads to the integral equation (1.1). This is an integral equation of some special type, since the kernel L(x, t) has a singularity at (0, 0) only. f(x) is a prescribed function relating to the internal pressure, given by (1.3). Since the cracks are in the shape of a cross, the problem is known as the cruciform crack problem. Of interest here is the stress intensity factor, which is directly proportional to the stress intensity at the crack tip and is determined by the value of the unknown function at x = 1. How the integral equation (1.1) occurs in the problem of cruciform crack is explained by Stallybrass [8], who solved the integral equation in closed form using the Wiener-Hopf technique and provided numerical results for the stress intensity factor. Rooke and Sneddon [6] solved this integral equation approximately by using an expansion in terms of Legendre functions and obtained numerical results which are very close to those of Stallybrass [8], although the convergence is slow. The two methods appear to be somewhat elaborate. The integral equation (1.1) has also been solved numerically by various other methods from time to time. For example, Elliot [2] employed the method of sigmoidal transformation to obtain an approximate solution for the case f(x) = 1. It is not obvious if this method is useful for other forms of f(x). Tang and Li [9] solved the integral equation approximately by employing a Taylor series expansion for the unknown function and obtained very accurate numerical estimates for the stress intensity factor. However, they made use of Cramer's rule in the mathematical analysis, so the calculation becomes unwieldy as the number of terms in the approximation increases, which makes the method unattractive. Bhattacharya and Mandal [1] solved the integral equation approximately by two different methods, one based on expansion of the unknown function in terms of Bernstein polynomials and the other based on expansion in terms of rationalized Haar functions. Singh and Mandal [7] also solved it by using Legendre multi-wavelets. All these methods provide numerical results for the stress intensity factor which are very close to the exact results given by Stallybrass [8]. Expansion in terms of Bernstein polynomials, Haar functions or Legendre multi-wavelets suggests expansion in terms of other functions such as Daubechies scale functions, since these provide a somewhat new tool in the numerical solution of integral equations. In this paper, Daubechies scale functions are employed to expand the unknown function.
K-Daubechies scale function is employed to find approximate solution of integral equation taking K = 3 . It may be noted that K = 1 corresponds to Haar wavelets. As the result can be improved taking larger value of K, so the results obtained by using K-Daubechies scale function are better than the results using the rationalized Haar functions. Though Legendre multi-wavelets give satisfactory results, K-Daubechies scale function has some interesting features like compact support, fractal nature and no explicit form at all resolutions. Only the knowledge of the low-pass filter coefficients in two-scale relation is required throughout the calculation. For these reasons, Daubechies scale function is used as an efficient and new mathematical tool to solve integral equations. At x = 1 , the expansion of (x) reduces to a finite expansion because most of Daubechies scale functions vanish. Actually, the integral equation (1.1) produces a system of linear equations in the unknown coefficients. After solving this linear system, the unknown function (x) is evaluated at x = 1 so as to obtain numerically the value of the stress intensity factor. For different values of in the expression of internal pressure f(x) given by (1.3), (1) is obtained and compared to known results available in the literature. It is found that the method is quite accurate as the approximate values of (1) obtained by the present method are seen to differ negligibly from exact values. Basic properties of Daubechies scale function and wavelets Daubechies discovered a whole new class of compactly supported orthogonal wavelets, which is generated from a single function (x) , known as Daubechies scale or refinable function. K-Daubechies scale function (K ≥ 1) has 2K scaling coefficients and has compact support [0, 2K − 1] . It may be noted that K = 1 corresponds to the Haar wavelets. Using the explicit form of f(x) in (1.3) and the two-scale relation (2.1), the expression in (3.4) reduces to the form Now, using the Gauss-type quadrature rule with complex nodes and weights for integrals involving Daubechies scale function (cf. Panja and Mandal [5]), we obtain where The determination of the nodes x i and weights w i is described by Panja and Mandal [5]. The basic trick for the calculation of the integral (3.5) is described by Kessler et al. [3] and Panja and Mandal [4]. If (x) is the scale function with compact support [0, 2K − 1] (K ≥ 1) , then it produces a system of orthonormal basis sn given by (2.5). From [4]. Now, the calculation of (3.6) is described. Using the twoscale relation (2.1), from (3.6), we obtain (3.13) (3.14) where I s m,n is given by the relation If −(2K − 2) ≤ m, n ≤ 0 , then Θ m,n,l 1 ,l 2 (x, t) in (3.17 ) has singularity at (0, 0) , and for these values of m, n, the values of I s m,n cannot be determined using the relation (3.18). If −(2K − 2) ≤ m, n ≤ 0 , the recursion relation for I s m,n is obtained as Again using the Gauss quadrature rule involving the Daubechies scale function, (3.19) is reduced to the form Here, h l j for j = 1, 2 (l j = 0, 1, 2 … , 2K − 1) are the low-pass filters. Basic trick for calculating weights w with a program in MATHEMAT-ICA has been discussed by Panja and Mandal [5]. Also, I s m,n = 0 for m or n ≤ −(2K − 1) or m or n ≥ 2 s . We present here the numerical values of I s m,n for( K = 3)-Daubechies scale functions taking s = 3 for those values of m and n for which Θ m,n,l 1 ,l 2 (x, t) has singularity at (0, 0). 
Table 1 shows the values of I s m,n for N = 5, whereas Table 2 shows the values of I s m,n for N = 7. Numerical results A comparison between the numerical values of the stress intensity factor obtained here by using Daubechies scale functions and the exact results of Stallybrass [8] is given in Table 3 for different values of the parameter in (1.3), namely 1, 2, 3, … , 10. The table shows the exact values according to Stallybrass [8] and the results obtained by the present method, together with their relative errors. The numerical results are displayed in Fig. 1(a-e). For the sake of clarity, five figures are drawn, wherein the stress intensity factor is depicted against the parameter for different integral values. In each figure, the value obtained from Stallybrass's [8] exact result is denoted by the symbol "◻", and the values obtained from other approximate methods are also shown. The figures are self-explanatory. However, as the result obtained by the sigmoidal transformation method is available only for the parameter value 1, it is not shown here. From these figures, it is obvious that all the methods, including the present method, provide very accurate results. Conclusion Here, a numerical scheme based on expansion in terms of the K-Daubechies scale function is employed for obtaining an approximate numerical solution of an integral equation of some special type arising in the classical problem of cruciform crack in elasticity. Comparison of the numerical results obtained by the present method with the exact results obtained by Stallybrass [8] shows that the method is quite accurate. The method works nicely for moderate values of K (e.g., K = 3). The results can be further improved by taking larger values of K (K > 3).
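As a closing illustration of the overall structure of such schemes (expand the unknown in a finite basis, reduce the integral equation to a linear system, and evaluate the solution at x = 1), a heavily simplified sketch follows. It deliberately replaces the paper's machinery: a piecewise-constant (Haar-like, K = 1) basis stands in for the K = 3 Daubechies scale functions, the kernel and forcing term are placeholders rather than the actual cruciform-crack ones, and a second-kind Fredholm form of the equation is assumed.

```python
# Simplified collocation sketch for u(x) + integral_0^1 L(x,t) u(t) dt = f(x);
# the kernel, forcing term and basis below are placeholders, not the paper's.
import numpy as np

n = 64                                        # number of piecewise-constant basis functions
edges = np.linspace(0.0, 1.0, n + 1)
mid = 0.5 * (edges[:-1] + edges[1:])          # collocation points (cell midpoints)
h = 1.0 / n

def L(x, t):                                  # placeholder kernel
    return x * t / (1.0 + x * x + t * t)

def f(x):                                     # placeholder forcing term
    return np.ones_like(x)

# collocating at the midpoints with a midpoint quadrature rule gives (I + h*Lmat) c = f(mid)
A = np.eye(n) + h * L(mid[:, None], mid[None, :])
c = np.linalg.solve(A, f(mid))

u_at_1 = c[-1]                                # value of the expansion on the last cell,
print(u_at_1)                                 # the analogue of evaluating the unknown at x = 1
```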
2,074.8
2019-11-12T00:00:00.000
[ "Mathematics" ]
Tip-Clearance Measurement in the First Stage of the Compressor of an Aircraft Engine In this article, we report the design of a reflective intensity-modulated optical fiber sensor for blade tip-clearance measurement, and the experimental results for the first stage of a compressor of an aircraft engine operating in real conditions. The tests were performed in a ground test cell, where the engine completed four cycles from idling state to takeoff and back to idling state. During these tests, the rotational speed of the compressor ranged between 7000 and 15,600 rpm. The main component of the sensor is a tetrafurcated bundle of optical fibers, with which the resulting precision of the experimental measurements was 12 µm for a measurement range from 2 to 4 mm. To get this precision the effect of temperature on the optoelectronic components of the sensor was compensated by calibrating the sensor in a climate chamber. A custom-designed MATLAB program was employed to simulate the behavior of the sensor prior to its manufacture. Introduction The development of more and more efficient aircraft engines is a continuous challenge that provides several benefits. In addition to monetary profit, carbon-emission reductions, longer service lives, and flight-range capabilities of the aeroplanes are improved, thanks to the reduction of fuel burnt [1]. The performance of the engine can be significantly improved by minimizing the leak flows through the gap between the blade tip and the casing of the compressor or the turbine. Therefore, this distance, known as tip clearance (TC), plays a major role in the aerodynamic efficiency of axial compressors and turbines [2]. The TC value varies with the operation condition of the engine (ground idle, takeoff, cruise, and landing) [3], as well as with the engine aging [4]. These fluctuations of the TC are due to two types of loads, namely engine and flight loads. The first kind of load encompasses centrifugal, thermal, internal engine pressure, and thrust loads, whereas flight loads are comprised by inertial (gravitational), aerodynamic (external pressure), and gyroscopic loads [5]. An accurate and real-time TC measurement is necessary to prevent any blade contacting with the casing and to lessen the leak flows in the fan, compressor, and turbine sections of the engines, which serves to optimize the engine performance. That is, precisely, the purpose of active-clearance control systems, in which the TC is limited by directing air to the casing by means of valves to control the thermal expansion of the casing, and to keep the TC to a minimum so that the engine efficiency increases. In contrast with power-system turbines where TC common values range from 2 to 8 mm, in aircraft turbines TC values are usually lower than 3 mm [6], so a resolution better than 25 µm is required for the whole measurement interval [7]. Currently, several kinds of sensors are employed to carry out TC measurements. The most common method is the employment of capacitive sensors [8,9]. These sensors are robust, small, and low-cost, but their accuracy is limited to 30 µm [10]. Microwave sensors have also been employed for TC measurements [11,12]. Since their signal depends on several variables, they require a complex calibration and advanced processing of signals. Besides, they provide a limited spatial resolution as compared to optical sensors, and it is not an economical technology [13]. 
Another option is eddy-current (inductive) sensors, which have the advantage of not requiring a direct view of the blade tip, so the sensor is not exposed to the harsh conditions of the engine [14], but their calibration is highly dependent on the tip shape and temperature [15]. Finally, optical sensors offer multiple inherent advantages [16], which is why they are increasingly employed in the aircraft industry [17][18][19]. Regarding TC measurement, optical sensors yield the best resolution. However, they are seriously affected by contamination and debris, thus being suitable only for the clean parts of the engine such as the fan and the compressor, or for the testing of rigs during the engine-development phase [20]. Optical sensors for TC measurements have been developed using diverse techniques such as Doppler positioning [21] or interferometry [22]. Nevertheless, the simplest and most affordable devices to obtain high accuracy [23] and bandwidth are intensity-based sensors [24,25]. Several configurations of intensity-modulated sensors using trifurcated bundles for TC measurements have been previously proposed by other authors [15,26,27]. However, their performance was demonstrated only in laboratory conditions and using a rig instead of a real engine. In addition, their measurement ranges and stand-off distances are not suitable for measurements in real engines. In this paper, we present the results obtained for an intensity-based sensor whose principal component is a tetrafurcated bundle of optical fibers. It was specifically designed to carry out tip-clearance and tip-timing measurements in the compressor of a real engine in a simultaneous and independent way. In Section 2, the experimental set-up and the sensor design are explained. In Section 3, the results obtained in the compressor of a real aircraft engine are presented and discussed. The conclusions of the work are summarized in Section 4. Sensor Design In previous works, we developed a reflective intensity-modulated optical fiber sensor to carry out TC measurements in turbines [20] and rotating components of aircraft engines [28]. The essential component of this sensor was a bundle of optical fibers. We previously employed trifurcated bundles, with one common leg on one side and three independent legs on the other. One of these legs was connected to a light source (illuminating fiber), and the other two legs were connected to their respective photodetectors (receiving fibers). The illuminating fiber was located in the center of the common leg, and the light emitted by this fiber was reflected by the target and collected by two rings of receiving fibers surrounding the illuminating fiber. Each ring of receiving fibers was gathered into a leg on the other end of the bundle, which was connected to a photodetector in order to convert the optical signal into an electrical one. Finally, the quotient of the two voltage signals (V1 and V2) was related to the distance of the target by a linearized calibration curve such as the one shown in Figure 1 (for a more detailed explanation of the sensor operation see [20,29]). With respect to the works of other groups, we introduced two important improvements in the sensor design [29]. Firstly, we used a single-mode fiber as the illuminating fiber to reduce the modal noise at the output of the bundle. The second improvement was the use of asymmetric gains for the photodetectors, which increases the sensitivity of the sensor.
The resulting sensitivity was more than double that obtained using a configuration with symmetric gains, as shown in [29]. In the calibration curve depicted in Figure 1, two different regions can be distinguished. Region I (front-slope region) provides more sensitivity than region II (back-slope region). In addition, region I is less sensitive to noise, since the amplitudes of V1 and V2 are higher there than in region II. On the other hand, the measurement range from 1 to 1.6 mm may be too short, and it requires placing the bundle tip very close to the blades. In previous tests, we employed region II due to these constraints. However, on this occasion we decided to make some changes in the bundle design so that the most sensitive region I could be used for the tests. Once the sensor was assembled in the casing of the engine, the required measurement range for the TC was estimated to be the interval from 2 to 4 mm. Therefore, the first necessary variation in the bundle design consists in shifting the measurement range so that it starts at 2 mm. The beginning of region I is determined by the target distance at which the reflected light starts to enter the second ring of receiving fibers, so the distance between the center of the illuminating fiber and the fibers of the second ring of receiving fibers was increased in the new design. To move this ring away, we could have inserted a considerable number of needles between the first and second rings of receiving fibers until achieving the required distance, but we decided to insert fewer needles and to introduce another ring of receiving fibers. The fibers of this ring were gathered in another independent leg, so what we have is a tetrafurcated bundle (see Figure 2). This leg allows carrying out tip-timing measurements with another photodetector whose gain, and therefore whose bandwidth, does not depend on the gain of the photodetectors used for the TC configuration [30]. The common leg of the bundle has a threaded head to facilitate its coupling to the casing of the engine, whereas the legs on the other side have conventional FC connectors. The second necessary variation in the bundle design consists in widening the measurement range of region I. We could modify two characteristics of the optical fibers to achieve this objective. The first option is to use an illuminating fiber with a lower numerical aperture. The upper limit of the measurement range for region I is determined by the distance of the target at which the reflected-light cone completely covers the second ring of receiving fibers. Thus, the measurement range becomes wider as the numerical aperture of the illuminating fiber becomes smaller, as illustrated in Figure 3. Since a single-mode fiber with a numerical aperture of 0.12 is employed as the illuminating fiber, there is little margin to reduce it. The other option to extend the range is to increase the diameter of the receiving fibers so that the target distance can be greater before the second ring is completely covered by the reflected light. The effect of increasing the diameter of the receiving fibers is depicted in Figure 4.
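As a rough illustration of this geometric argument, the sketch below estimates the target distance at which the reflected-light cone first reaches a ring of receiving fibers, assuming a mirror-like target and a simple cone model (the emitting fiber imaged at twice the target distance); the core radius and the helper functions are illustrative assumptions, not the detailed models of [23,31-33].

```python
import numpy as np

def illuminated_radius(d_um, na=0.12, r_core_um=4.5):
    """Radius (um) of the reflected-light spot at the fiber end face.

    Simple cone model: with a mirror-like target at distance d_um, the emitting
    fiber appears imaged at 2*d_um, so the spot radius grows roughly as
    r_core + 2*d*tan(theta), with theta = asin(NA).  The core radius of the
    single-mode illuminating fiber is an assumed value.
    """
    theta = np.arcsin(na)
    return r_core_um + 2.0 * d_um * np.tan(theta)

def distance_reaching_ring(ring_radius_um, na=0.12, r_core_um=4.5):
    """Target distance (um) at which the light cone first reaches a given ring radius."""
    theta = np.arcsin(na)
    return (ring_radius_um - r_core_um) / (2.0 * np.tan(theta))

# A lower numerical aperture pushes the limits of region I to larger distances:
for na in (0.12, 0.10):
    print(na, round(distance_reaching_ring(1070.0, na=na)))   # outer-ring radius from Figure 5
```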
Based on our previous experience, we finally decided to employ the fibers from the manufacturer Fiberguide Industries shown in Figure 5, where R1 = 300 µm, R2 = 700 µm, and R3 = 1070 µm. In the same figure, a microscope picture of the cross section of the manufactured bundle is also depicted. The final distances in our manufactured bundle were R1 = 324 µm, R2 = 686 µm, and R3 = 1125 µm, somewhat different from the design radii.
Before manufacturing the bundle, we developed a MATLAB program in order to verify the behavior of the sensor. This software was developed according to the theoretical models described in the literature [23,31-33]. As all these papers assume that the target is a mirror instead of a blade, we adjusted the parameters of the reflected light in order to get more realistic results. This parameter optimization was achieved by performing several experimental measurements using the trifurcated bundles available in our laboratory. Figure 6 depicts the experimentally obtained calibration curve for the sensor employing the tetrafurcated bundle in region I, together with the simulation. Table 1 shows the distance differences between the two curves for the same values of V2/V1 obtained along the measurement range. We can see that the simulation provides good results except for the last part of the measurement range, where the distance difference increases significantly.
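The comparison summarized in Table 1 amounts to inverting the two (monotone) calibration curves and evaluating the distance difference at equal values of V2/V1. The sketch below illustrates only the procedure; the curves themselves are made-up stand-ins, since the measured and simulated data are not reproduced here.

```python
import numpy as np

# Stand-in, monotone calibration curves over the 2-4 mm range (illustrative only).
d_mm  = np.linspace(2.0, 4.0, 41)                # target distance (mm)
q_exp = 0.20 + 0.90 * (d_mm - 2.0)               # "experimental" V2/V1 (made up)
q_sim = 0.20 + 0.88 * (d_mm - 2.0) ** 1.03       # "simulated"  V2/V1 (made up)

def distance_for_quotient(q, q_curve, d_curve):
    """Invert a monotone calibration curve: distance at which V2/V1 equals q."""
    return np.interp(q, q_curve, d_curve)

# Distance difference between the two curves at equal values of V2/V1
# (the quantity listed in Table 1), expressed in micrometres.
q_common = np.linspace(max(q_exp[0], q_sim[0]), min(q_exp[-1], q_sim[-1]), 20)
diff_um = 1e3 * np.abs(distance_for_quotient(q_common, q_exp, d_mm)
                       - distance_for_quotient(q_common, q_sim, d_mm))
print(diff_um.round(1))
```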
The rest of the components of the sensor are depicted in Figure 7. A laser module from Frankfurt Laser Company (HSML-0660-20-FC, Friedrichsdorf, Germany) was employed as the light source. It has a nominal output power of 20 mW at 660 nm. An optical isolator (IO-F-660 from Thorlabs, Newton, NJ, USA) was placed between the laser and the bundle in order to avoid any reflection that could destabilize the light source. Finally, the photodetectors employed were PDA100A-EC units from Thorlabs. Photodetectors 1 and 2 were employed for TC measurements with transimpedance gains of 1.51 × 10^3 V/A (BW = 1.5 MHz) and 1.51 × 10^5 V/A (BW = 60 kHz), respectively. Photodetector 3 was used for tip-timing measurements and its gain was 4.75 × 10^4 V/A (BW = 200 kHz). Regarding the calibration of the sensor, we employed a linear translation stage and followed a procedure similar to our previous works [20]. Since it was impossible to use a compressor blade for the calibration process, the most similar blade available in our laboratory was employed. The ambient temperature of the test cell was expected to be quite low (5-10 °C), so we checked the effect of temperature on the calibration curve of the sensor. We introduced all the sensor components in a climate chamber and carried out two calibrations, at 20 °C and 10 °C. The resulting curves and their linearization for the measurement range are depicted in Figure 8. Even though the distance difference to the linearized calibration curve was quite small at the beginning of the measurement range, it reached 100 µm in the last part of the measurement range, so we employed the calibration obtained at 10 °C for the tests.
Experimental Set-up The performance of the optical sensor was tested in the SO-3 engine that powers the TS-11 "Iskra" combat jet trainer. Its compressor is composed of seven stages, and the optical sensor was installed in the first one. This stage has 28 blades made of 18H2N4WA steel, with a length of 100 mm, a chord of 37 mm, and a maximum width of 1.5 mm. The surface of the blade is rough and usually presents some corrosion, which makes the measurement more difficult. The tests were performed in the test cell of the Air Force Institute of Technology in Warsaw. Figure 9 depicts the upper view of the blade and the engine in the test cell. A special bracket was designed to fix the tip of the bundle in the casing of the engine. The tip of the bundle was placed at an approximate distance of 3 mm from the blade tip. Both the bracket and the final arrangement of the bundle in the area of the casing corresponding to the first stage of the compressor can be observed in Figure 10. At this part of the engine the gas temperature is approximately the same as the ambient temperature. The rotational speed of the engine depends on its operating condition, and the revolutions per minute (rpm) of the rotor range from 6900 rpm in idling condition to a maximum of 15,600 rpm during takeoff. The signals provided by the photodetectors are very sharp due to the small thickness of the blades, and their amplitudes are quite different for each of the blades, as can be seen in Figure 11.
Whereas V1 is around 100 mV, certain blades produce peaks of several volts in V2. Therefore, we had to use two different PXIe-6358 data-acquisition modules (National Instruments, Austin, TX, USA) in order to take advantage of the 16-bit resolution of each module. The sampling frequency for the signal acquisition was 500 kS/s. Figure 11. Signals obtained from both photodetectors, V1 (red) and V2 (black), when the engine was in idling state. Results The tests consisted of four complete cycles in which the engine ran from idling operational state to takeoff and back to idling state again. The starting and final rotational speed was approximately 7000 rpm, and the maximum speed reached during takeoff was 15,600 rpm. In the first and third cycles the engine acceleration and deceleration were linear: the rotational speed increased continuously up to 15,600 rpm and then decreased back to idling state. In the second and fourth cycles the rotational speed increased in steps of about 1000 rpm (see Figure 12b), whereas the deceleration was linear as in the other cycles. The flight profiles for cycles 1 and 2 are shown in Figure 12. During each cycle, three signals were acquired: the OPR (once per revolution) signal and the signals corresponding to the outputs of the photodetectors, V1 and V2. These signals were stored and post-processed off-line using a LabVIEW program designed specifically for the analysis of TC measurements with this sensor. The program divides the whole acquisition into individual revolutions making use of the OPR signal. In this way, it can evaluate the quotient V2/V1 during one revolution and find the TC values for each of the 28 blades of the compressor. The minimum of these values is considered the TC for that revolution. To calculate the TC value for each blade, the calibration curve in Figure 8 is employed to convert the value of the quotient V2/V1 into distance. In Figure 13, the three signals (OPR, V1 and V2) are depicted at 7600 rpm and at 15,600 rpm. For the sake of clarity, the amplitude of the signal V1 has been magnified by a factor of 10. In this figure, it is clearly seen that the amplitude of the signals becomes higher as the rotational speed increases. This is due to the fact that the faster the engine turns, the closer the blades are to the casing and the higher the reflected light intensity that reaches the bundle of optical fibers.
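A minimal sketch of this per-revolution processing (implemented in LabVIEW in the paper) is shown below; the edge and blade-detection thresholds, the helper names, and the simplification of converting every blade sample and keeping the smallest distance are assumptions made for illustration.

```python
import numpy as np

def tip_clearance_per_revolution(opr, v1, v2, quotient_to_mm):
    """Sketch of the off-line TC processing described above.

    opr, v1, v2    : 1-D sample arrays acquired at the same rate.
    quotient_to_mm : callable mapping V2/V1 to distance through the linearized
                     calibration curve of Figure 8 (assumed to be available,
                     e.g. a fitted numpy.poly1d).
    Returns one TC value per revolution: the smallest blade distance found in it.
    """
    # Revolution boundaries: rising edges of the once-per-revolution signal
    # (the 0.5 V threshold is an assumption).
    edges = np.flatnonzero((opr[1:] > 0.5) & (opr[:-1] <= 0.5))
    tc = []
    for start, stop in zip(edges[:-1], edges[1:]):
        v1_rev, v2_rev = v1[start:stop], v2[start:stop]
        blade = v1_rev > 0.05          # samples with a blade in front of the probe (assumed threshold)
        if blade.any():
            distances = quotient_to_mm(v2_rev[blade] / v1_rev[blade])
            tc.append(np.min(distances))
    return np.array(tc)
```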
In Table 2, the TC measurements during the acceleration of cycles 1 and 3 are shown. In order to calculate the TC at each rotational speed, an interval of 0.2 s (100,000 samples) was considered a "constant-speed" interval. The rotational speed in each measurement is the average over the interval. In both cycles, the TC decreases as the rotational speed increases, showing reasonable behavior. The TC values are quite similar during both acceleration cycles, the most significant differences appearing between 12,000 and 15,000 rpm. At 15,000 rpm, the sensor shows a maximum difference of 96 µm. In Table 3, we can observe the results during the acceleration of cycles 2 and 4. In these cases, the transition from idling state to takeoff is accomplished in steps of approximately 1000 rpm. These cases yield longer intervals in which the rotational speed is constant, so we decided to employ 0.6-s intervals (300,000 samples) to determine the TC at each speed. The behavior of the sensor is correct during both cycles, and the maximum TC difference between the cycles is 24 µm. In Figure 14, the measurements during the four cycles have been plotted. Except for the first cycle in the range from 12,000 to 15,000 rpm, the TC measurement exhibits the same evolution in all cycles and the sensor provides similar TC values for all the speeds of each cycle. In Table 4, the TC values obtained during the engine deceleration in every cycle are presented. For all cases, the deceleration is linear, and the evolution of the measurement is shown in Figure 15. As can be seen in the graph, the four cycles behave in a highly analogous way, yielding a maximum difference of 23 µm among the measurements, which is very similar to the previous case.
In Figure 16, the TC values for each revolution of the acceleration in Cycle 3 have been plotted. In order to help visualize the TC evolution during the engine acceleration, a smoothed representation of the results is also shown. In Figure 17, the TC measurements of each cycle of acceleration and deceleration are depicted. These charts allow us to verify that there are no apparent signs of hysteresis in the performance of the sensor. Regarding the accuracy of the results, unfortunately no other sensor could be installed to measure TC during the tests. The amplitude of the signals provided by the inductive sensors installed for tip-timing measurement depended on speed, and they were not calibrated for tip-clearance measurement. Consequently, there is no reference with which the results can be compared. However, it is worth noting that the laboratory tests carried out prior to the measurements in the test cell with the real engine provided errors lower than 1% over the measurement range, with a minimum resolution of 1 µm. With respect to the sensor precision, these preliminary tests give a standard deviation in the laboratory measurements of 2 µm. In order to compare this value with the measurements in the real engine, the TC for each rotational speed has been assigned to the nearest value of the rotational speed shown in Table 5, in such a way that we have eight values for each rotational speed (except for 15,600 rpm, for which we have only four). As can be observed in Table 5, the maximum value of the standard deviation over all cycles is 34 µm. Cycle 1 contributes significantly to this value due to the strange behavior of the measurements during its acceleration. If we discard this cycle, the maximum standard deviation drops to 12 µm, which is an excellent value considering that the measurements were taken at slightly different rotational speeds.
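The grouping behind Table 5 (assigning each measurement to the nearest reference speed and computing the spread within each group) can be sketched as follows; the function name and the use of the sample standard deviation are assumptions.

```python
import numpy as np

def std_by_nearest_speed(speed_rpm, tc_um, reference_speeds_rpm):
    """Assign every TC measurement to the nearest reference speed (as in Table 5)
    and return the sample standard deviation of the grouped TC values per speed."""
    speed_rpm = np.asarray(speed_rpm, dtype=float)
    tc_um = np.asarray(tc_um, dtype=float)
    refs = np.asarray(reference_speeds_rpm, dtype=float)
    nearest = np.abs(speed_rpm[:, None] - refs[None, :]).argmin(axis=1)
    return {float(r): tc_um[nearest == k].std(ddof=1)
            for k, r in enumerate(refs) if np.count_nonzero(nearest == k) > 1}
```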
Conclusions The TC measurement for the first stage of a compressor of an aircraft engine was carried out using an optical fiber sensor. The engine in the test cell worked in real conditions. The tests consisted of four cycles from idling state to takeoff and back to idling state. Even though it was not possible to install a reference sensor against which to check the accuracy of the results, the sensor showed correct behavior with respect to the variation of the rotational speed. The standard deviation of the measurements, discarding Cycle 1 due to its strange behavior in the last part of the acceleration, was 12 µm. This is a notable value considering that the measurements were taken at slightly different speeds and that the conditions were adverse due to the corrosion present on the blades. It is also worth noting the short time needed to calibrate and install the sensor in the engine, and the feasibility of performing independent tip-timing measurements using leg three of the optical fiber bundle. In conclusion, correct behavior of the sensor was demonstrated for the first stage of the compressor, so we can expect similar results in harsher environments such as the turbine. In order to obtain satisfactory results there, the high-temperature and contamination issues must be correctly tackled with the necessary modifications in the bundle design.
8,559.6
2016-11-01T00:00:00.000
[ "Engineering", "Physics" ]
Influence of Nitrogen Ion Implantation on the Disc Brake Material of Motor Vehicles Component Weaknesses of local disc brakes include low hardness and poor wear and corrosion resistance. To overcome these weaknesses, it is necessary to modify the surface properties of the material. The aim of this research is to study the influence of nitrogen ion implantation on the surface properties of a disc brake material. The implantation process was carried out for various ion doses (3.107×10^16, 3.148×10^16, 3.728×10^16, 4.039×10^16, and 4.350×10^16 ions/cm^2) at an ion energy of 60 keV and a beam current of 30 μA. Hardness and wear properties were tested using a microhardness tester and a wear testing machine, respectively. Meanwhile, the crystalline structure of the un-implanted (raw) and implanted materials at the optimum dose was analyzed using XRD. From the hardness test results, the hardness of the raw material is 59.82 VHN, and after implantation it reached a maximum of 109.78 VHN, an increase of 83%, while the wear rate is 22.9×10^-9 mm^2/kg for the raw material and after implantation it reaches a minimum of 2.5×10^-9 mm^2/kg, a decrease of 88%. These conditions were obtained at a dose of 3.728×10^16 ions/cm^2. Based on the XRD analysis, 45.5% Fe2N and 54.5% Fe3N compounds are formed. Introduction A braking system is one of the critical safety components of an automobile. It is mainly used to decelerate vehicles from an actual speed to the desired speed. Friction-based braking systems are still the conventional device to convert kinetic energy into thermal energy through friction between the brake pads and the rotor faces [1][2][3][4]. All braking systems depend upon frictional force to stop, control, or prevent motion [5,6]. Because these components are always rubbing against the surfaces of other components, they wear out quickly and their service life is reduced. To reduce the wear rate or to extend the fatigue life or service life, the surface of the component needs to be improved. Several surface treatment methods are used to improve the surface quality, such as carburizing, nitriding, carbonitriding, induction hardening, shot peening, physical vapor deposition (PVD) and chemical vapor deposition (CVD), as well as the ion implantation technique [7,8]. These treatments form a hardened surface layer with compressive residual stress, and therefore the fatigue life is improved by the surface layer [9,10]. Ion implantation is a surface modification technique by which atoms and molecules are ionized, accelerated in an electrostatic field, and implanted into the near-surface of a substrate. This technique modifies the structure of metals by forming new crystalline phases, metastable or amorphous, and thus improves the surface properties [11]. A great advantage is the negligible effect of ion implantation on the dimensions of the treated element; hence, the process can be applied in the final stage of manufacturing of products that already have their final dimensions [12]. Besides the improvement of tribological properties, ion implantation contributes to an increase in mechanical strength. This is associated with an increase in the microhardness of the implanted samples. The implantation process is accompanied by the appearance of compressive stresses and inclusions of nitrides, carbides, and borides.
The implantation-induced hardening process depends on the type and dose of implanted ions and the temperature of the implanted material. In most surface-treatment applications, the implanted elements are nitrogen, carbon, and boron, which harden the surface alloy as a consequence of fine particle formation by precipitation. The introduction of new atoms into the crystal lattice is not the only effect of ion implantation; the damage caused in the crystal structure of the target by the energetic collision cascades must also be taken into account [13]. As each ion penetrates the target, it undergoes a series of collisions, displacing host atoms along the way. Both the ion and the dislodged target atoms can continue and cause further damage, so the energy is spread over many moving particles. Therefore, after implantation of high doses of ions, an initially crystalline target will be so perturbed that it changes to a highly disordered state [14]. The amount of crystallographic damage can be enough to cause partial amorphization of the metal surface, depending on the dose, energy, temperature (which governs the self-annealing that can repair some or all of the damage as it is generated), and ion species (heavy ions displace a greater volume of target atoms per ion) [15]. Finally, the implantation of a high dose of ions induces significant compressive stress that can contribute to blocking fissures and closing corrosion channels [16]. Methodology The material used in this study was a local disc material; the type of component is presented in Figure 1. In this experiment, the local disc component was cut into disc-shaped specimens 4×14 mm in size using water jet cutting. The specimens were ground with SiC papers from 80 up to 5000 mesh and polished mechanically with 1 µm diamond paste. The polished specimens were washed with acetone in an ultrasonic cleaner and dried at room temperature. The chemical compositions of the samples are listed in Table 1. The samples were implanted using a 150 keV/2 mA ion implanter. Implantation of the samples was performed at doses of 3.107×10^16, 3.148×10^16, 3.728×10^16, 4.039×10^16, and 4.35×10^16 ions/cm^2 at an ion energy of 60 keV. The temperature increase of the samples was caused solely by the incoming ion beam, without any additional heating. Analysis of the Hardness and Wear Rate The effect of nitrogen ion implantation on the microhardness of the samples was tested using a Vickers microhardness tester (type MTX7), while the wear properties were tested using the Ogoshi High Speed Universal Wear Testing Machine, with the results shown in Figure 2. It can be concluded that the hardness and wear rate of the samples improved after nitrogen ion implantation, and the extent of improvement increases with the dose. The maximum hardness appears at a nitrogen ion dose of 3.728×10^16 ions/cm^2. In this condition, the hardness increases from 59.77 VHN (raw material) to 109.78 VHN (83% higher), while the wear rate decreases over the dose range from 3.107×10^16 ions/cm^2 to 3.728×10^16 ions/cm^2, a reduction of 88%. Above this dose the hardness decreases while the wear rate increases, which may be caused by defects due to excess implanted nitrogen ions.
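The relative changes quoted above follow directly from the raw values; a minimal calculation, using the numbers given in the text, is shown below.

```python
# Raw values quoted in the text.
h_raw, h_implanted = 59.77, 109.78        # Vickers hardness (VHN)
w_raw, w_implanted = 22.9e-9, 2.5e-9      # wear rate (mm^2/kg)

hardness_gain  = 100.0 * (h_implanted - h_raw) / h_raw     # relative increase in hardness
wear_reduction = 100.0 * (w_raw - w_implanted) / w_raw     # relative decrease in wear rate
print(f"hardness +{hardness_gain:.1f} %, wear rate -{wear_reduction:.1f} %")
```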
XRD Analysis of Implanted and Un-implanted Samples The phase composition of the nitrogen-implanted samples was analyzed using X-rays in the Θ-2Θ mode with CuKα radiation (λ = 1.548 Å). The presence of nitrides in the samples implanted at the optimum conditions was revealed by the spectrum in Figure 4 and Tables 3 and 4. Figure 3 and Table 2 show the XRD pattern of the un-implanted sample material. Conclusion Based on the experiments, the following can be concluded: 1. The hardness of the un-implanted material is 59.77 VHN; after implantation for various times or ion doses, the hardness increased and reached an optimum of 109.78 VHN, an increase of 83%, while the wear rate decreased from 22.9×10^-9 mm^2/kg to a minimum of 2.5×10^-9 mm^2/kg, a decrease of 88%. This optimum condition was achieved at 60 minutes of implantation time, corresponding to an ion dose of 3.728×10^16 ions/cm^2. 2. Based on XRD analysis, and after analysis using Crystallography Open Database (COD) entry number 96-411-3942, it is observed that for the un-implanted samples the observed phase is α-Fe; after implantation at the optimum conditions, the formed phases are 54.5% Fe3N and 45.5% Fe2N. The formation of these phases causes the increase in hardness and the reduction in wear rate.
1,843.8
2019-09-29T00:00:00.000
[ "Materials Science", "Engineering" ]
BPSK Circuit Based on SDC Memristor Digital communication based on memristors is a new field. The main principle is to construct modulation and demodulation circuits by using the resistance variation characteristics of the memristor. Based on the establishment of the Knowm memristor simulation model, firstly, the modulation circuit is designed by using the polarity and symmetry of the memristor combined with the commercial current feedback amplifier AD844. It is proved that the modulated signal based on the memristor is a strong function of phase, and the demodulation circuit is designed accordingly. All simulation circuits are based on actual commercial physical device models. The analytical expression of the output signal of the modulation and demodulation circuit is deduced theoretically, and the communication performance of the whole system is simulated with LTSpice. At the same time, the influence of the parasitic capacitance of the memristor on the circuit performance is also considered. After the simulation verification, hardware experiments on the modulation and demodulation circuits were carried out. The waveforms of the modulated signal and the demodulated signal were measured with an oscilloscope. The experimental results are fully consistent with the simulation and theoretical results. Introduction The concept of the memristor was put forward by Chua for the first time [1,2]. It was not until May and June 2008 that three papers were published in a row in Nature to report the discovery of the memristor [3][4][5]: HP Labs found that a two-layer titanium dioxide film sandwiched between two platinum sheets had the characteristics of a memristor. This was the first confirmation, from theory and experiment, of the physical existence of nano-memristors, which caused a great shock in industry and academia, and memristors became a new, hot research field [6]. However, the memristor designed by HP Labs is still unavailable in the commercial market because of its fabrication complexity and high cost, which delays the commercialization of real-time applications of memristors. Recently, the Knowm company in the United States released the first commercial discrete memristor [7], which enables the academic community to carry out experiments on memristors in the laboratory. Memristors can be applied to analog signal processing because of their rich dynamic characteristics [8]. Using the characteristic that the memristance is a strong function of signal frequency and phase, a memristor can even be applied to the field of digital communication, and there are already relevant reports. For example, the memristor is proposed for spectrum enhancement (frequency multiplier, modulator, mixer, etc.) in [9]. Recently, attempts to use memristor characteristics in communication have been reported, such as a UWB receiver [10] and memristor-based modulators [9,[11][12][13][14]. Ref. [12] proposed an amplitude modulation (AM) circuit based on the linear doping drift model. In addition, frequency shift keying (FSK), amplitude modulation (AM), binary phase shift keying (BPSK), and amplitude shift keying (ASK) modulators are also proposed in [11]. In [13], a simulation circuit of the memristor element is experimentally studied and used in new designs of FSK, ASK, and phase shift keying (PSK) modulators.
In addition, a memristor-based random modulator has been used in the design of a new compressive sensing system architecture, exploiting the storage and switching characteristics of memristor devices [14]. The architecture of a memristor-based memory reference receiver has been introduced, in which the operation is performed by calculating the correlation between the reference waveform and the received signal [10]. Refs. [15,16] summarized some of the above work and presented, as an application, a modulation and demodulation scheme together with a complete transceiver based on memristors. However, the modulation or demodulation circuits designed in the published literature are based either on memristor emulators or on ideal linear or nonlinear memristor models. They are still far from an actual physical circuit based on a memristor, so hardware verification has not been possible. Given the above shortcomings, this paper proposes a complete modulation/demodulation transceiver link for a BPSK communication system based on an actual commercial memristor model and carries out experimental verification with a hardware circuit. This paper is organized as follows. In Section 2, the model of the commercial Knowm memristor is presented and programmed in LTSpice. It is worth noting that, unlike traditional deterministic memristor models, this model introduces random variables, so it has a wider range of applications. Section 3 designs the modulation circuit based on this model, and Section 4 designs the demodulation circuit. These two parts not only give a detailed circuit system diagram but also clearly explain the working principle of the circuit. Section 5 presents the simulation and experimental results. Modeling and Simulation of the Knowm Memristor Different from the traditional deterministic memristor model, random variables can be used to model the memristor. The specific idea is to regard the memristor as a collection of conductive channels with different resistance values; the channels themselves can be composed of nanoscale particles such as ions. By applying an external voltage, the channels are switched between ON and OFF to change the resistance of the device. As the number of conducting channels increases, the memconductance becomes larger, because the device has more conductive paths. A series of memristors can be modeled by modifying the number of channels. Ref. [17] gives the model of the Knowm memristor, in which x is the state variable, that is, the number of normalized conductive channels, and β = 1/V_T is the reciprocal of the thermal voltage. The other parameters are shown in Table 1. Modulation Circuit BPSK is the simplest form of PSK (phase shift keying). It is a digital modulation scheme that realizes data modulation by changing the phase of the carrier. BPSK uses two phases 180° apart to represent "1" and "0". It is suitable for low-cost passive transmitters and has the best anti-interference performance. In addition, it is widely used in wireless local area networks (LAN), radio frequency identification (RFID), and Bluetooth communication. Figure 4 shows the BPSK modulation circuit diagram, and Table 2 shows the parameters in the diagram. This circuit scheme is not new and is taken directly from [13]; however, we use the real memristor model instead of the ideal model used in [13]. In the circuit, three AD844 chips are connected as shown in the figure.
The AD844 chip is a current feedback operational amplifier (CFOA) from the AD company (Analog Devices). This type of op-amp provides a closed-loop bandwidth that is determined primarily by the feedback resistor and is almost independent of the closed-loop gain. The AD844 is free from the slew-rate limitations inherent in traditional op-amps and other current-feedback op-amps. It combines high bandwidth and very fast large-signal response with excellent DC performance. Although optimized for use in current-to-voltage applications and as an inverting-mode amplifier, it is also suitable for many noninverting applications. The AD844 can be used in place of traditional op-amps, but its current feedback architecture results in much better AC performance, high linearity, and an exceptionally clean pulse response. The power supply voltage of the AD844 is ±15 V; terminal z of the AD844 is the current output terminal, and terminal v is the voltage output terminal. The z-terminal output current i of CFOA1 copies the input current of the amplifier, and then i is split into i1 and i2. The currents i1 and i2 provide a bias current for memristors M1 and M2 through CFOA2 and CFOA3, respectively, and generate voltages on them, which are output at terminal v of CFOA2 and CFOA3 and then superimposed at the output node to form the output voltage Vo. The operating principle of the circuit is explained as follows: the circuit based on the current feedback operational amplifier CFOA1 is used to convert the unipolar binary digital input signal into a current. The output current of CFOA1 is distributed between two equal resistors R2 and R3, which in turn provide bias for the two memristors M1 and M2 connected to terminal z of CFOA2 and CFOA3.
If the current flowing through resistor R1 is i_R1, then i_R1 is precisely copied at the z-terminal of CFOA1, so i = i_R1. The carrier voltage at the in-phase terminal of CFOA2 is 6 sin(2πft) V. Considering R2 = R3 and applying Kirchhoff's voltage and current laws, Equations (3) and (4) are obtained, from which the memristor equations follow. Because the polarity of the two memristors is opposite, the resistance of one increases with the current flowing while that of the other inevitably decreases; therefore, one of the two memristances is larger and the other is smaller. When a sinusoidal carrier voltage is applied at the in-phase terminal of CFOA2, the sinusoidal voltage drops generated on the two memristors have opposite polarity and different amplitude, and the output voltage is the superposition of the voltage drops on the two memristors. Therefore, according to the binary digital input at the in-phase terminal of CFOA1, the output voltage takes one of two opposite phases.
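A behavioural, circuit-free sketch of the BPSK mapping just described is given below: the bit stream flips the sign, and therefore the phase, of the carrier, mimicking the superposition of the two opposite-polarity memristor branches. The 1 kHz carrier follows the paper; the bit rate, sampling rate, and unit amplitude are assumptions.

```python
import numpy as np

fc, fb, fs = 1e3, 100.0, 100e3        # 1 kHz carrier as in the paper; bit rate and sampling rate assumed
bits = np.array([1, 0, 1, 1, 0])      # unipolar binary input applied to CFOA1

t = np.arange(int(fs * len(bits) / fb)) / fs
carrier = np.sin(2 * np.pi * fc * t)
bit_wave = np.repeat(2 * bits - 1, int(fs / fb))    # +1 for bit "1", -1 for bit "0"

# Behavioural stand-in for the two opposite-polarity memristor branches: whichever
# branch is driven into its low-resistance state dominates the superposition at the
# output node, so the sign (i.e., the phase) of the carrier follows the input bit.
bpsk = bit_wave * carrier                            # 0° phase for "1", 180° phase for "0"
```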
Demodulation Circuit From the perspective of the analytical expression, the memristor state equation is a first-order differential equation; rearranging it gives Equation (11), whose solution, Equation (12), contains an arbitrary constant C determined by the initial resistance of the memristor. Since v(t) is a function of phase, x is also a function of phase, which can be written as x(θ), where θ is the phase. Therefore, according to Equation (1), the memconductance is also a function of phase. Figure 5 reflects the relationship between the input signal phase and the memristor conductance: from 0 to 1 ms the signal has a 0° phase, and from 1 to 2 ms it has a 180° phase. It can be seen that the memristor conductance is a strong function of the input signal phase. Figure 6a,b shows the system block diagram and circuit diagram of the proposed BPSK demodulator, respectively, and Table 3 shows the relevant parameter values. Although the topology of this circuit is not new and can be found in [13], we performed a modification here, inverting the amplifier and adopting the real memristor model instead of the ideal model in [13]. Table 3. List of parameters of Figure 6 (S.I. units): 50 Ω; R4 = 1 kΩ; R5 = 100 kΩ; C1 = 10 nF; C2 = 100 µF. The circuit principle is explained as follows: as shown in Figure 6b, the BPSK signal to be demodulated is applied to the voltage divider formed by the memristor element and the resistor, and the amplitude of the divider output V1 depends on the memristance Rm. As mentioned earlier, Rm is a strong function of the phase of the input signal; since Rm varies with the input phase, the amplitude of V1 also varies with the phase. In other words, the voltage divider converts the change of the sinusoidal input phase into a change of the sinusoidal output amplitude, thereby creating a memristor-based PSK-to-ASK (amplitude shift keying) converter. The envelope detector after the voltage divider in Figure 6b generates a DC signal proportional to the ASK signal amplitude. Resistor R2 creates a discharge path to automatically reset the output of the envelope detector at the beginning of each bit without resetting the circuit, thereby creating a multilevel output based on the phase of the input signal. The power dissipated through the discharge path can be determined from the difference between the energy stored in the capacitor during the high level and during the low level, divided by the discharge time, that is, 0.5 C1 (Vh^2 - Vl^2)/td, where Vh is the capacitor voltage during bit "1", Vl is the capacitor voltage during bit "0", and td is the discharge time. The circuit does not need any carrier recovery circuit, which reduces the related overhead of other traditional BPSK demodulators and makes it suitable for ultra-low-power applications. In addition, the demodulator has a simple structure, small volume, and low power consumption. It can be used in the monolithic integrated design of wireless implantable neural recording systems. Figure 6b shows the BPSK demodulator circuit, starting from the memristor-resistor voltage divider. It converts the different-amplitude currents associated with the different phases of the received signal into different-amplitude voltages through the series resistance (R1). An inverting amplifier is used to amplify the output voltage of the voltage divider so that it exceeds the forward voltage of the diode, since the output level of the voltage divider alone is not high enough to be used. Then a simple passive envelope detector detects the signal peak, and finally a first-order low-pass filter (LPF) reduces the ripple at the output. Generally, in BPSK, bit "1" is represented by several sinusoidal periods with a 0° phase shift, while bit "0" is represented by several sinusoidal periods with a 180° phase shift. Therefore, the memristance corresponding to bit "1" is low during the bit time (as shown in Figure 5); the voltage drop across R1 is sinusoidal with a relatively large peak value, so the output of the envelope detector is high, indicating the voltage level of logic "1". On the contrary, when a 180° phase-shifted sinusoidal waveform is applied, the average value of Rm is larger during bit "0", so the voltage drop across R1 is sinusoidal with a relatively small peak value, and the output of the envelope detector corresponds to the voltage level of bit "0". This is a very simple BPSK demodulator, which does not need expensive hardware and avoids the use of carrier recovery circuits.
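The PSK-to-ASK conversion, envelope detection, and comparator stages can be sketched behaviourally in the same spirit. Because the actual phase-to-amplitude conversion is performed by the memristor divider, it is represented here by an assumed gain (1.0 for bit "1", 0.4 for bit "0"); the discharge time constant is chosen between the carrier period and the bit duration, in line with the selection rule discussed next.

```python
import numpy as np

fc, fb, fs = 1e3, 100.0, 100e3                  # 1 kHz carrier as in the paper; bit rate and fs assumed
bits = np.array([1, 0, 1, 1, 0])
t = np.arange(int(fs * len(bits) / fb)) / fs
bit_wave = np.repeat(2 * bits - 1, int(fs / fb))

# Assumed phase-to-amplitude conversion of the memristor/resistor divider:
# full amplitude for bit "1", attenuated amplitude for bit "0".
ask = np.where(bit_wave > 0, 1.0, 0.4) * np.sin(2 * np.pi * fc * t)

# Diode-capacitor envelope detector with a resistive discharge path (tau = R2*C1),
# chosen so that carrier period (1 ms) < tau < bit duration (10 ms).
tau = 3e-3
alpha = np.exp(-1.0 / (fs * tau))
env = np.zeros_like(ask)
for n in range(1, len(ask)):
    env[n] = max(ask[n], env[n - 1] * alpha)     # charge instantly, discharge through R2

recovered = (env > 0.6).astype(int)              # comparator stage with an assumed threshold
print(recovered[int(fs / fb) // 2::int(fs / fb)])  # one decision near the middle of each bit
```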
This is a very simple BPSK demodulator, which does not need expensive hardware and avoids the use of carrier recovery circuits. It should be noted that the values of C 1 and R 2 must be carefully selected to ensure that the conversion time (the time required for the peak detector to convert between different levels) is sufficiently less than the bit duration and greater than the carrier period, i.e., T c < τ < T b , where T c and T b are the carrier period and bit duration, respectively, and τ = R 2 C 1 is the capacitor-discharge time constant. Finally, the output signal of Figure 6b is sent to the waveform-shaping circuit based on the comparator of Figure 6c to obtain the shaped demodulated signal. The power supply voltage of the LM339 is ±15 V. The reason that we need the comparator is that the rising edge of the signal V out1 out of the LPF is not steep enough because of the non-ideality of the simple RC LPF; by using the comparator, the steepness can be improved and a clearer pulse shape is obtained. Simulation and Experiment Results In this section, SPICE simulation experiments are carried out to verify the modulation/demodulation circuit. Figure 7a shows the transient time-domain simulation circuit diagram of the modulation/demodulation. In the simulation experiment, the memristor model is first written as SPICE code, and then the circuit symbol is automatically generated by LTspice, that is, the block in the figure. The upper sub-circuit of the diagram is the modulation circuit, and its output is output1. Output1 is sent directly to the lower sub-circuit, that is, the demodulation circuit, whose output is output2. Output2 is then sent to the input of the comparator circuit, and the demodulated signal output3 is obtained at its output. The simulation results are shown in Figure 7b. The upper waveform is the modulating signal: the high level is 5 V, representing bit "1", and the low level is 0 V, representing bit "0". The middle waveform is the modulated signal, i.e., output1, and the lower waveform is the demodulated signal, i.e., output3. Due to the parasitic capacitance effect of the memristor, we paralleled a parasitic capacitance of 10 nF across the memristor in the circuit shown in Figure 7a, and the simulated waveform is shown in Figure 8. It can be observed that the pulse width of the output demodulation waveform is widened. This is caused by the charging and discharging of the parasitic capacitor, which tends to maintain the potential across its terminals for a certain time; that is, the RC product becomes larger and the charging and discharging times become longer. The hardware experiment setup is shown in Figure 9a; the required instruments are a regulated power supply, a signal generator, and an oscilloscope. The regulated power supply provides the supply voltage for the three AD844s and one LM339. The signal generator uses two channels to generate the modulating square wave and the carrier sine wave, respectively. Figure 9b shows the modulated waveform, which differs from the simulation waveform because the memristor model used in the simulation is different from the actual device and cannot fully reproduce the behavior of the real device. However, we still observe the sudden change of 180° in phase, which is a significant feature of the BPSK signal. Figure 9c shows the demodulation waveform.
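The timing constraint T c < τ < T b can be checked numerically. In the sketch below the 1 kHz carrier follows the demonstration value used for the hardware experiments and C1 = 10 nF is from Table 3, while the discharge resistor and the bit rate are assumed values chosen only to show the check.

```python
# Check the envelope-detector timing constraint T_c < tau < T_b.
C1 = 10e-9           # from Table 3
R2 = 470e3           # assumed discharge resistor value
f_carrier = 1e3      # 1 kHz demonstration carrier
bit_rate = 100.0     # assumed 100 bit/s, i.e. 10 carrier cycles per bit

T_c = 1.0 / f_carrier
T_b = 1.0 / bit_rate
tau = R2 * C1        # capacitor-discharge time constant

print(f"T_c = {T_c * 1e3:.2f} ms, tau = {tau * 1e3:.2f} ms, T_b = {T_b * 1e3:.2f} ms")
print("constraint satisfied:", T_c < tau < T_b)
```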
The interested reader may refer to [7] for further information about real Knowm SDC memristor device fabrication and electrical characteristics. Conclusions This paper presents a BPSK modulation circuit and demodulation circuit based on a commercial Knowm memristor. Based on the simulation model of the Knowm memristor, the modulation circuit is designed by using the polarity and symmetry of the memristor and the commercial current feedback amplifier AD844. In all cases, the unipolar binary digital input is first converted into a current. This current is used to bias the memristor, changing its resistance and thus exploiting the inherent memristor characteristic of having two values of resistance, R ON and R OFF . It is proved that the modulated signal based on the memristor is a strong function of phase, and the receiving and demodulation circuit is designed accordingly. The designed transceiver circuits employ the commercial passive memristor and hence can be implemented as hardware prototypes with low power consumption. For demonstration purposes, we use a 1 kHz carrier frequency; however, it can easily be extended to several MHz. The modulation/demodulation circuit can be further extended to the modulation and demodulation of ASK and FSK signals. Compared with related work in [11,13,15,16], we use real commercial memristor products. Beyond simulation, we also carried out hardware experiments and measured the modulated and demodulated waveforms. The proposed memristor model is suitable for analog as well as digital applications such as analog computation, neuromorphic circuits, adaptive filters, and signal processing circuits. Because of its low power, it is especially appropriate for low-power biomedical applications.
5,903.8
2022-08-01T00:00:00.000
[ "Computer Science" ]
The Expression of the Nectin Complex in Human Breast Cancer and the Role of Nectin-3 in the Control of Tight Junctions during Metastasis Introduction Nectins are a family of integral protein molecules involved in the formation of functioning Adherens and Tight Junctions (TJ). Aberrant expression is associated with cancer progression but little is known how this effects changes in cell behaviour. This study aimed to ascertain the distribution of Nectins-1 to -4 in human breast cancer and the effect on junctional integrity of Nectin-3 modulation in human endothelial and breast cancer cells. Methods A human breast tissue cohort was processed for Q-PCR and immunohistochemistry for analysis of Nectin-1/-2/-3/-4. Nectin-3 over-expression was induced in the human breast cancer cell line MDA-MB-231 and the human endothelial cell line HECV. Functional testing was carried out to ascertain changes in cell behaviour. Results Q-PCR revealed a distinct reduction in node positive tumours and in patients with poor outcome. There was increased expression of Nectin-1/-2 in patients with metastatic disease, Nectin-3/-4 was reduced. IHC revealed that Nectin-3 expression showed clear changes in distribution between normal and cancerous cells. Nectin-3 over-expression in MDA-MB-231 cells showed reduced invasion and migration even when treated with HGF. Changes in barrier function resulted in MDAN3 cells showing less change in resistance after 2h treatment with HGF (p<0.001). Nectin-3 transformed endothelial cells were significantly more adhesive, irrespective of treatment with HGF (p<0.05) and had reduced growth. Barrier function revealed that transformed HECV cells had significantly tighter junctions that wildtype cells when treated with HGF (p<0.0001). HGF-induced changes in permeability were also reduced. Overexpression of Nectin-3 produced endothelial cells with significantly reduced ability to form tubules (p<0.0001). Immunoprecipitation studies discovered hitherto novel associations for Nectin-3. Moreover, HGF appeared to exert an effect on Nectin-3 via tyrosine and threonine phosphorylation. Conclusions Nectin-3 may be a key component in the formation of cell junctions and be a putative suppressor molecule to the invasion of breast cancer cells. Introduction The Nectins are a family of immunoglobulin-like cell adhesion molecules that have long been thought of as essential components for the formation of cell-cell adhesions and regulators of cellular functions that include cell polarization, differentiation, movement, proliferation and survival [1]. The Nectin family is comprised of four members, Nectin Each Nectin has a c-terminal motif of 4 amino acids (E/A-K-Y-V) that interacts with the PDZ domain of afadin. Nectin-1 has two splicing variants, nectin-1a and -1b/HigR [2][3]. Nectin-2 also has two splicing variants, nectin-2a and -2d [4][5]. Nectin-3 has three splicing variants nectin-3a, -3b and -3d [6]. The extracellular regions of splicing variants are identical, but their transmembrane regions and cytoplasmic regions are different. The cytoplasmic regions of nectin-1a, -2a, -2d, -3a and 3d have a C-terminal conserved motif of 4 amino acid residues (E/A-X-Y-V), which interact with the PDZ domain of afadin through which they are liked to the actin cytoskeleton [7]. The physiological role of Nectins has yet to be satisfactorily clarified [8], although work suggests that they may play a key role in the proper organisation of both adherens junctions (AJ) and tight junctions (TJ) [9]. 
These Ca (2+)-independent cell adhesion molecules first form cell-cell adhesions where cadherins are recruited, forming adherens junctions in epithelial cells and fibroblasts. In addition, Nectins recruit claudins, occludin, and junctional adhesion molecules (JAM's) to the apical side of AJs, forming TJs in epithelial cells. All four Nectin family members have one extracellular region with three Ig-like loops, one transmembrane segment and one cytoplasmic tail [10]. The formation of cisdimers is necessary for the formation of Nectin trans-dimers. Nectin-3 was first described by Satoh-Horikawa [6] as a member of the Nectin family. The first Ig-like loop of Nectin-3 is essential and sufficient for the formation of trans-dimers with Nectin-1, but the second Ig-like loop of Nectin-3 was furthermore necessary for its cell-cell adhesion activity [10]. Although Nectins were initially thought to be only localised at AJs, studies have suggested that a role in the formation or organisation of TJs may be found. Reymond et al. [11] showed that Nectin-3 (PRR3) interacts with afadin by interaction of the Cterminal to the PDZ domain of afadin. Inagaki et al. [12] have shown that the Nectin-afadin system is able to recruit ZO-1 to the Nectin-based cell-cell adhesion sites in non-epithelial cells that have no TJs. Besides their role in physiology, Nectins have been involved in different pathological processes in humans where they serve as virus receptors (poliovirus and herpes simplex virus), they are involved in orofacial malformation (CLPED1) and recently they have been described as markers, actors and potential therapeutic targets in cancer [13][14]. Nectin-2 and Nectin-4 are often overexpressed in tumours, and are associated with a poor prognosis [14]. Indeed, Nectin-2 has been found to be overexpressed in clinical breast and ovarian cancer tissues by using gene expression profile analysis and immunohistochemistry studies [15]. Nectin-2 was over-expressed in various cancer cell lines as well [13]. Interestingly, a polyclonal antibody specific to Nectin-2 suppressed the in vitro proliferation of OV-90 ovarian cancer cells, which express endogenous Nectin-2 on the cell surface. The anti-Nectin-2 antibpdies generated were classified into 7 epitope bins. The anti-Nectin-2 mAbs demonstrated antibody-dependent cellular cytotoxicity (ADCC) and epitope bin-dependent features such as the inhibition of Nectin-2-Nectin-2 interaction, Nectin-2-Nectin-3 interaction and in vitro cancer cell proliferation. A representative anti-Nectin-2 mAb in epitope bin VII, Y-443, showed anti-tumour effects against OV-90 cells and MDA-MB-231 breast cancer cells in mouse therapeutic models, and its main mechanism of action appeared to be ADCC. These findings suggest that Nectin-2 is a potential target for antibody therapy against breast and ovarian cancers [15]. The expression of Nectin-4 was increased in ovarian cancer compared with normal ovaries. Reverse transcriptase-polymerase chain reaction (RT-PCR) and quantitative RT-PCR validated the overexpression of Nectin-4 messenger RNA in ovarian cancer compared with normal ovarian cell lines and tissues. Protein levels were elevated in ovarian cancer cell lines and tissue compared with normal ovarian cell lines. 
Cleaved Nectin-4 was detectable in a number of patient serum samples and in patients with benign gynecologic diseases with high serum CA125 levels, Nectin-4 was not detected in the majority of cases, suggesting that it may serve as a potential biomarker that helps discriminate benign gynecologic diseases from ovarian cancer in a panel with CA125. Fabre-Lafay et al. [16] also found Nectin-4 not to be detected in normal breast epithelium. By contrast, Nectin-4 was expressed in 61% of ductal breast carcinoma vs 6% in lobular type. Expression of Nectin-4 strongly correlated with the basal-like markers EGFR, P53, and P-cadherin, and negatively correlated with the luminallike markers ER, PR and GATA3. All but one ER/PR-negative tumour expressed Nectin-4. The detection of Nectin-4 in serum improved the follow-up of patients with MBC as the association of CEA/CA15.3/Nectin-4 allowed monitoring of 74% of these patients compared to 67% with the association CEA/CA15.3 [16]. Serum Nectin-4 was also found to be a marker of therapeutic efficiency and correlates, in 90% of cases, with clinical evolution. The authors concluded that Nectin-4 was a new tumourassociated antigen for breast carcinoma and a new bio-marker whose use could help refine breast cancer taxonomy and improve patient follow-up [16]. In lung cancer, Maniwa et al. [17] demonstrated that of 127 patients, 25% showed membranous expression of Nectin-3, and others showed negative or cytoplasmic expression. Membranous expression of Nectin-3 was found to be a prognostic factor for decreased overall survival and Multivariate Cox proportional hazards model analyses revealed that membranous expression of Nectin-3 was an independent prognostic factor. In tumours expressing membranous Nectin-3, some did not co-localize with E-cadherin and these patients showed poorer prognosis than other patients for overall survival. Conversely, membranous expression of Nectin-3 with E-cadherin co-localization was found to associate with good prognosis of patients. Breast cancer is the most common cancer in women worldwide and is the principle cause of death from cancer among women globally. Despite the high incidence rates, in Western countries, 89% of women diagnosed with breast cancer are still alive 5 years after their diagnosis, which is due to detection and treatment [18]. The UK and USA have one of the highest incidence rates worldwide (together with the rest of North America and Australia/ New Zealand), making these countries a priority for breast cancer awareness (Worldwide Breast Cancer statistics). Breast cancer has been the most common cancer in the UK since 1997, despite the fact that it is rare in men (Cancer Research UK Statistics). It is by far the most common cancer among women in the UK (2010), accounting for 31% of all new cases of cancer in females. In 2010, there were 49,961 new cases of breast cancer in the UK with 49,564 (99%) in women and 397 (less than 1%) in men, giving a female to male ratio of around 125 to 1. Despite rising survival rates, mainly due to earlier detection and better treatments, many of the cellular processes underlying the disease remain to be determined. Aberrant expression of Nectins has been associated with cancer and evidence has shown that Nectins may be integral to the correct functioning of TJs. TJs in epithelial cells act as cellcell adhesion structures and govern paracellular permeability. Disruption of these functions often leads to the dissociation and metastasis of cancer cells. 
There has not been, to date, a study examining the distribution and expression of all four Nectins in human breast cancer, and the roles of Nectin-3 have yet to be fully determined. This study aimed to ascertain the distribution of Nectins in human breast cancer and to determine the role that Nectin-3 may have in regulating cell behaviour in human breast cancer and endothelial cells. In vivo work was carried out under the strict guidelines of the UK Home Office to ensure that the 3Rs were strictly adhered to. Thus, the minimum number of animals was used in the experiment, with the minimum of suffering and maximum attention to animal welfare. The maximum severity band allowed was moderate, although the procedures carried out in this work were ostensibly only mild. Animals were checked daily and their behaviour and health monitored. Animals were weighed and measured twice weekly to ascertain loss of health (as determined by weight loss greater than 20% or tumour burden greater than 1 cm³). Adverse effects resulted in sacrifice via UK Schedule One procedures. Cell lines and culture conditions The human breast cancer cell lines MDA-MB-231 and MCF-7 were obtained from ECACC and the HECV endothelial cell line from the Interlab Cell Line Collection (ICLI), Naples, Italy, and were routinely maintained in Dulbecco's Modified Eagle Medium/F12 (DMEM/F12) (Sigma-Aldrich, Dorset, UK) supplemented with 10% fetal calf serum (FCS), penicillin and streptomycin (Sigma-Aldrich, Dorset, UK). The cells were incubated at 37 °C, 5% CO2 and 95% humidity. Human breast specimens The human breast cancer tissue cohort consisted of a total of 133 breast samples obtained from breast cancer patients (106 breast cancer tissues and 27 associated background or related normal tissues), with the consent of the patients and local ethical committee approval (Bro Taf Healthboard). The tissues were verified by a pathologist as normal background and cancer specimens, and it was confirmed that background samples were free from tumour deposit. The tissues were immediately frozen in liquid nitrogen following excision. RNA extraction and Reverse Transcription-Polymerase Chain Reaction (RT-PCR) Cells were grown to confluence in a 25 cm³ flask before RNA was extracted using total RNA isolation (TRI) reagent following the protocol provided (Sigma-Aldrich, Dorset, UK). RNA was converted to cDNA using the iScript cDNA synthesis kit (Primer Design Ltd., Southampton, UK). Following cDNA synthesis, samples were probed using actin primers to check the quality of the cDNA and confirm uniform levels within each sample, together with those specific for the transcript (full primer sequences are outlined in Table 1). Conventional PCR was performed using a T-Cy Thermocycler (Beacon Technologies Ltd., The Netherlands) with REDTaq® ReadyMix™ PCR Reaction mix (Sigma-Aldrich, Dorset, UK). Cycling conditions were as follows: 94 °C for 5 min, then 94 °C for 30 s, 55 °C for 30 s and 72 °C for 30 s for 36 cycles, with a final extension phase at 72 °C for 7 min. The PCR products were resolved by electrophoresis on a 2% agarose gel. The gel was then stained with ethidium bromide prior to examination under ultraviolet light, and photographs were taken. The primers used are shown in Table 1. Real-time quantitative Polymerase Chain Reaction (Q-PCR) The Amplifluor system was used to detect and quantify transcript copy number of Nectin-1, Nectin-2 and Nectin-3 in tumour and background samples.
Primers were designed using Beacon Designer software, and included a sequence complementary to the universal Z probe (Intergen, Inc.). Each reaction contained 10 pmol reverse primer (which carries the Z sequence), 10 pmol of FAM-tagged universal Z probe (Intergen, Inc.) and cDNA (equivalent to 50 ng RNA) (primer sequences are shown in Table 1). Sample cDNA was amplified and quantified over a large number of shorter cycles using an iCyclerIQ thermal cycler and detection software (Bio-Rad Laboratories, Hemel Hempstead, UK) under the following conditions: an initial 5 minute period at 94 °C, followed by 60 cycles of 94 °C for 10 seconds, 55 °C for 15 seconds and 72 °C for 20 seconds. Detection of GAPDH copy number within these samples was later used to allow further standardisation and normalisation of the samples. Q-PCR primers are shown in Table 1. Over-expression of Nectin-3 in MDA-MB-231 breast cancer cells and HECV endothelial cells A range of normal human tissues was screened for Nectin-3. Normal breast tissue was chosen as the source of endogenous Nectin-3 expression. The human breast cancer cell line MDA-MB-231 and the human endothelial cell line HECV were chosen for introduction of the Nectin-3 gene. The gene, after amplification from breast tissue cDNA, was cloned into a pEF6/V5-His TOPO TA plasmid vector (Invitrogen Ltd., Paisley, UK) before electroporation into the cells. Expression of the gene was confirmed by RT-PCR. The Nectin-3 expression construct and the empty plasmid were, respectively, used to transfect MDA-MB-231 and HECV cells by electroporation. Stably transfected cells were then used for subsequent assays after being tested at both the transcriptional and translational level. Those cells containing the expression plasmid and displaying enhanced Nectin-3 expression were designated MDAN3 and HECVN3, those containing the closed pEF6 empty plasmid and used as control cells were designated MDApEF6 and HECVpEF6, and unaltered wild-type cells were designated MDAWT and HECVWT. Expression primers were: Nectin3EXF1: 5'-atggcgcggaccctgcggccgtc-3' and Nectin3RX: 5'-ctaaacataccactccctcct-3'. Construction of hammerhead ribozyme transgene targeting human Nectin-3 Hammerhead ribozymes that specifically target human Nectin-3 were constructed based on its secondary structure. Touchdown PCR was used to generate PCR-based ribozymes, which were then cloned into a pEF6/V5-His vector (selection markers: ampicillin and blasticidin, for prokaryotic and mammalian cells respectively), amplified in Escherichia coli, purified, verified and used for electroporation into both the MDA-MB-231 and MCF-7 human breast cancer cell lines. The targets were as follows: Nectin3ribR1: actagtacaatgcctgtcaaaacttttcgtcctcacggact and Nectin3rib1F: ctgcagaacggtgagatatgccttgctgatgagtccgtgagga. SDS-PAGE, Western blotting and co-immunoprecipitation Cells were grown to confluence, detached and lysed in HCMF buffer containing 0.5% SDS, 0.5% Triton X-100, 2 mM CaCl2, 100 µg/ml phenylmethylsulfonyl fluoride, 1 µg/ml leupeptin, 1 µg/ml aprotinin and 10 mM sodium orthovanadate for 1 hour; sample buffer was then added and the protein boiled at 100 °C for 5 min before being spun at 13,000 g for 10 min to remove insolubles. Protein concentration was quantified using the Bio-Rad Protein Assay kit (Bio-Rad Laboratories, Hertfordshire, UK). Equal amounts of protein from each cell sample were loaded onto a 10% or 15% (depending on protein size) acrylamide gel and subjected to electrophoretic separation.
The proteins were transferred onto nitrocellulose membranes, which were blocked and probed with specific primary antibodies (1:500), followed by peroxidase-conjugated secondary antibody (1:1000). Protein bands were visualized with the Supersignal West Dura system (Perbio Science UK Ltd., Cramlington, UK) and detected using a CCD-UVIprochemi system (UVItec Ltd., Cambridge, UK). Co-immunoprecipitation samples were prepared as follows: cell lysate containing the protein of interest was probed with primary antibodies (1:100 dilution) and placed on a rotating wheel for 2 hours, allowing primary antibodies to bind to their targets. One hundred microlitres of conjugated A/G protein agarose beads (Santa-Cruz Biotechnologies Inc., USA) were added to each sample to render the antibody-protein complex insoluble, followed by overnight incubation on the rotating wheel. The supernatant was discarded and the pellet was washed in 200 µl of lysis buffer and resuspended in 200 µl of 2X Laemmli sample buffer concentrate (Sigma-Aldrich, Dorset, UK), then denatured for 5 minutes by boiling at 100 °C. Trans-epithelial resistance (TER) and Paracellular Permeability Cells were seeded into 0.4 µm transparent pore size inserts (Greiner Bio-One, Stonehouse, UK) at a density of 50,000 cells in 200 µl of medium within 24-well plates and grown to confluence; the medium was then removed and replaced with fresh Dulbecco's Modified Eagle's medium containing 15 mM HEPES and L-glutamine (Lonza Laboratories, Verviers, Belgium). Medium alone was added to the base of the wells (control) or with 40 or 50 ng/ml HGF [19]. Resistance across the layer of cells was measured using an EVOM volt-ohmmeter (EVOM, World Precision Instruments, Aston, Herts, UK), equipped with static electrodes (WPI, FL, USA), for a period of 4 h. Paracellular permeability (PCP) was determined using fluorescently labelled dextran, FITC-Dextran 40, of molecular weight 40 kDa. Human breast cancer cells were prepared and treated as in the TER study, but with the addition of Dextran-40 to the upper chamber. Medium from the lower chamber was collected at intervals up to 2 h after addition of HGF. The relative fluorescence of these collections was read on a multichannel fluorescence reader (Denly, Sussex, UK). In vitro cell growth assay MDA-MB-231 and HECV cells were seeded into a 96-well plate at a density of 3,000 cells/well to obtain density readings after 1, 2, 3, 4 and 5 days. Within each experiment four duplicates were set up. After the appropriate incubation periods, cells were fixed in 4% formaldehyde in BSS for 5-10 minutes before staining for 10 minutes with 0.5% (w/v) crystal violet in distilled water. The crystal violet was then extracted from the cells using 10% acetic acid. Absorbance was determined at a wavelength of 540 nm on a plate-reading spectrophotometer. In vitro cell matrix adhesion assay The cell-matrix attachment assay was carried out as previously described [19]. Briefly, 45,000 cells were seeded onto the Matrigel basement membrane (10 µg/well) in 200 µl of normal medium and incubated at 37 °C with 5% CO2 for 40 minutes. After the incubation period, the medium was aspirated and the membrane washed 5 times with 150 µl of BSS to remove the non-attached cells, then fixed in 4% formaldehyde (v/v) in BSS for 10 minutes before being stained in 0.5% crystal violet (w/v) in distilled water. The number of adherent cells was counted from 5 random fields per well and 5 duplicate wells per sample, under a microscope.
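For field counts of the kind described above (five random fields per well, five duplicate wells per sample), a minimal sketch of how such counts might be summarised per well and compared between two samples. The counts themselves are hypothetical and the choice of a Mann-Whitney test is only one of the options used later in the statistical analysis section.

```python
import numpy as np
from scipy import stats

# Hypothetical adherent-cell counts: rows = duplicate wells (5), columns = random fields (5).
sample_a = np.array([[34, 41, 29, 38, 36],
                     [31, 40, 35, 33, 37],
                     [36, 30, 39, 34, 32],
                     [38, 35, 33, 40, 36],
                     [29, 37, 34, 31, 35]])
sample_b = np.array([[22, 27, 19, 25, 24],
                     [20, 26, 23, 21, 25],
                     [24, 18, 26, 22, 20],
                     [25, 23, 21, 27, 24],
                     [19, 25, 22, 20, 23]])

# Average the five fields within each well, then compare the per-well means.
means_a = sample_a.mean(axis=1)
means_b = sample_b.mean(axis=1)
u, p = stats.mannwhitneyu(means_a, means_b, alternative="two-sided")
print(f"mean adhesion: {means_a.mean():.1f} vs {means_b.mean():.1f}, Mann-Whitney p = {p:.3f}")
```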
In vitro invasion assay Cell culture inserts (BD Falcon™ Cell Culture Inserts, BD Bioscience, Erembodegem, Belgium) were placed into a 24-well plate using forceps and coated in Matrigel (BD Biosciences, Oxford, UK). The working solution of Matrigel was prepared at a concentration of 0.5 mg/ml, 100 µl was added to each insert, and the inserts were allowed to dry overnight. Once dried, the inserts were rehydrated in 100 µl sterile water for 1 hour. The water was then aspirated and cells were seeded into the inserts over the top of the artificial basement membrane at a density of 30,000 cells in 200 µl per well. The plates were then incubated for 3 days at 37 °C with 5% CO2. After the incubation period, the Matrigel layer together with the non-invasive cells was cleaned from the inside of the insert with a tissue paper. The cells which had migrated through the pores and invaded into the Matrigel were fixed in 4% formaldehyde (v/v) in BSS for 10 minutes before being stained in 0.5% crystal violet (w/v) in distilled water. The cells were then visualized under the microscope at ×40 magnification; 5 random fields were counted and duplicate inserts used for each test sample. In vitro Cytodex-2-bead motility assay Cells were pre-coated onto Cytodex-2 beads (GE Healthcare, Cardiff, UK) for 2 hours. The medium was aspirated and the beads washed in medium to remove non-adherent or dead cells. The beads were resuspended in 5 ml of medium. Cells were aliquoted into a 24-well plate, 5 duplicate wells per sample (300 µl/well), and incubated overnight. Following incubation, cells that had migrated from the Cytodex-2 beads and adhered to the base of the wells were washed gently in BSS and fixed in 4% formaldehyde (v/v) in BSS for 10 minutes before being stained in 0.5% crystal violet (w/v) in distilled water. Five random fields per well were counted under the microscope. Tubule formation assay A volume of 100 µl serum-free medium containing 250 µg Matrigel (250 µg/well) was seeded into a 96-well plate and left to gel in an incubator for 30 minutes, followed by a heating oven until dry. Before use, the Matrigel was rehydrated in 100 µl of serum-free medium and cells were seeded at a density of 40,000 cells/well. Following incubation at 37 °C for 1 hour, the medium was aspirated and a second layer of Matrigel was added, followed by incubation at 37 °C for 30 minutes to gel. Medium was then added and the cells left overnight to allow tubules to form. In vivo development of mammary tumours Athymic nude mice (nu/nu) were purchased from Charles River Laboratories (Charles River Laboratories, Kent, UK) and maintained in filter-top units according to Home Office regulations and ethical requirements. Each group consisted of 5 mice, and each mouse was injected in both flanks with a mix of 2 × 10⁶ cancer cells in 100 µl of a 0.5 mg/ml Matrigel suspension. Two groups were included: MDA-MB-231 pEF6 and MDA-MB-231 N3exp. The mice were weighed and tumour size measured twice weekly using vernier calipers under sterile conditions. Those mice that developed tumours exceeding 1 cm³ or that suffered 25% weight loss during the experiment were terminated under Schedule 1 according to the UK Home Office and the UK Coordinating Committee on Cancer Research (UKCCCR) instructions. Tumour volume was determined using the following formula: tumour volume = 0.5236 × width² × length.
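A minimal sketch of the tumour-volume calculation just described. The caliper readings are hypothetical, and the 1 cm³ (1000 mm³) check simply restates the humane end-point given in the protocol above.

```python
def tumour_volume_mm3(width_mm: float, length_mm: float) -> float:
    """Ellipsoid approximation used in the text: V = 0.5236 * width^2 * length."""
    return 0.5236 * width_mm ** 2 * length_mm

# Hypothetical twice-weekly caliper readings (width, length) in mm for one tumour.
readings = [(3.0, 4.5), (5.5, 8.0), (8.0, 11.0), (11.0, 16.0)]
for width, length in readings:
    v = tumour_volume_mm3(width, length)
    flag = " (exceeds the 1 cm^3 end-point)" if v > 1000 else ""
    print(f"{width} x {length} mm -> {v:.0f} mm^3{flag}")
```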
Statistical analysis Statistical analysis was performed using MINITAB version 13.32 (Minitab Inc., State College, PA, USA) using a two-sample Student's t-test and the non-parametric Mann-Whitney confidence interval test or Kruskal-Wallis test, where appropriate. Expression of Nectins in human breast cancer at the transcript level Nectin-1 was found to have higher expression with increasing NPI ( Figure 1A and B). Nectin expression and patient survival When patient outcome was analysed, it was seen that Nectin-1 was elevated in patients that had died from breast cancer, and in all poor outcomes overall ( Figure 1C). In comparison, Nectin-2, -3 and -4 were all reduced in patients with metastatic disease and those who had died of the disease ( Figure 1C and 1D). When looking more closely at those with metastatic disease, we found that this was also true for metastasis to bone ( Figure 1E and 1F); however, these did not reach significance. Long-term survival curves were calculated using Kaplan-Meier survival curves (Fig. 1G). When considering patient outcome in those with ductal cancer, it was found that Nectin-1 was increased in those with poor outcome (with metastasis and death), but reduced in those who had bone metastasis. Protein expression and distribution of Nectin-1, -2 and -3 in patient tissues When paired tumour/background tissues were screened for Nectin protein expression, there appeared little overall difference in tumour expression for both Nectin-1 and Nectin-2 ( Figure 1H, left and center panels), although both strongly stained tumour cells, whereas only endothelial cells were strongly stained in background tissue. However, Nectin-3 was much reduced in tumour tissues when compared to background ( Figure 1H, right panel). Further investigation showed that tumour cells had reduced staining of the cytoplasm but concentrated complexes of Nectin-3 within the nuclear area (Figure 2, left panel). This was not the case in cells in background tissues. Due to this unusual distribution we decided to investigate Nectin-3 further. Screening of Nectins in human cancer cell lines PCR primers were initially designed to amplify a 300 bp region of each Nectin for fast screening of human breast cancer and endothelial (HECV) cell lines ( Figure 3A, Table 1). Only BT-482 cells contained the full-length Nectin-3 transcript ( Figure 4A, top). Of all the regions amplified, only the regions amplified to produce a 386 bp and an 844 bp fragment produced the correct products ( Figure 3B). All the breast cancer cells analysed had the correct 386 bp fragment, apart from MCF-7, ZR-751, BT-549 and MDA-MB-435S cells. Only MDA-MB-436 cells had the 844 bp fragment. It appears from these results that Nectin-3 is expressed as a truncated form in nearly all the human breast cancer cells analysed. Although all PCR experiments were consistent and carried out a minimum of three times, we decided to examine the expression of Nectin-3 with regard to confluency. We chose two breast cancer cell lines: MDA-MB-231, aggressive cells that only expressed the 386 bp region, and MCF-7 cells, less aggressive and not expressing any regions of Nectin-3. We extracted mRNA from both cell lines after growth to reach 25%, 59% and 100% confluency. We were interested to see that, using the PCR primers amplifying the 386 bp region, the transcript signal increased with increasing confluency for both cell lines (top, Figure 3C). Moreover, this was confirmed using Western blotting (bottom, Figure 3C). We then decided to over-express Nectin-3 in MDA-MB-231 and MCF-7 cells to observe any changes in cell aggressiveness.
Construction of over-expression of Nectin-3 Nectin-3 (complete gene) was cloned into MDA-MB-231 and MCF-7 human breast cancer cells and expression confirmed using RT-PCR ( Figure 5). Moreover, the human endothelial cell line HECV also received the Nectin-3 gene for further investigation ( Figure 3D). MDAN3 cells were significantly less motile compared to MDA-MB-231 wild-type cells (MDAWT), even under the influence of HGF (hepatocyte growth factor), a well-known motogen and metastasis factor (p<0.001; Figure 4A). Invasion was also significantly reduced in MDAN3 cells (p<0.02; Figure 4B). In vitro growth assays showed that MDAN3 cells were also significantly slower growing than MDAWT cells ( Figure 4C), even under the influence of HGF ( Figure 4D), p<0.05. When looking at changes in TJ function, it was found that barrier function (as measured using TER) was significantly increased in MDAN3 cells, implying that the modified cells had ''tighter'' TJ. This increased barrier function was also able to resist the effect of HGF, when compared to MDAWT cells ( Figure 4E), p<0.001. In vivo tumour growth assays showed that Nectin-3 over-expression significantly reduced tumour growth over 28 days ( Figure 4F). Effect of Nectin-3 expression on human endothelial cell behaviour Over-expression of Nectin-3 effected significant changes in human endothelial cells. TER was significantly increased in HECVN3 cells in comparison to HECVWT ( Figure 5A), even under the influence of HGF ( Figure 5B), p<0.00001. Paracellular permeability of HECVN3 cells was reduced compared to HECVWT, in both treated and non-treated cells ( Figure 5C and D, p<0.05). There was, however, no significant difference in growth rate of Nectin-3 over-expressing cells ( Figure 5E and 5F). Adhesion of HECVN3 cells was significantly reduced compared to HECVWT/HECVpEF6 control cells with or without HGF ( Figure 5G), p<0.05. When a tubule formation assay was carried out, HECVN3 cells produced tubules of significantly reduced size, even when treated with HGF, which is a strong angiogenic factor ( Figure 5H), p<0.0001. Investigation of Nectin-3 protein binding Due to the discrepancy in expression of Nectin-3, as seen from RT-PCR, we decided to investigate the binding of Nectin-3 in cells using two antibodies, (A) which binds to the C-terminus of Nectin-3 and (B) which binds to an internal region. A number of immunoprecipitations were then carried out with both antibodies, in order to determine possible binding partners of Nectin-3. Figure 6A shows the immunoprecipitation results. There was, overall, little difference in precipitations between the two different antibodies used, (A) mapping to the C-terminus of Nectin-3 and (B) mapping to an internal region. Positive precipitations were observed for α-catenin, ZO-1 (with both the C-terminus and the internal antibody), Nectin-1 and -2, Ezrin and Nectin-4, weakly for ZO-2 at the internal region but strongly at the C-terminus, and weakly for ZO-3. There was also interaction with β-catenin, γ-catenin and Moesin (two isoforms showing for (A), weak signal for (B)), Radixin and Actin. There was also a strong precipitation with MAGI-2. We also repeated the precipitation with a selection of the antibodies and probed with Nectin-3 (A), and found strong precipitation with SIPA-1, ZO-1, Occludin (at an internal region), CAR, β-catenin and ROCKI ( Figure 6B). We were surprised to see the precipitation with Ezrin and again repeated this precipitation with both (A) and (B); again, this was positive ( Figure 6C).
The phosphorylation status of Nectin-3, the effect of HGF and ROCK inhibitor As over-expression of Nectin-3 in both MDA-MB-231 and HECV cells prevented the effect of HGF on cell motility, barrier function and invasion or tubule formation (all key contributors to metastasis), we went on to determine whether HGF exerted any effect on the phosphorylation status of the protein. From the immunoprecipitation shown in Figure 6D, it appeared that HGF exerted little effect on serine phosphorylation of Nectin-3, and no effect on either tyrosine or threonine phosphorylation. As we were surprised to observe an interaction between Nectin-3 and ROCKI, we looked at the effect of the ROCK inhibitor (Y-27632) on Nectin-3 phosphorylation. We found increased tyrosine phosphorylation of Nectin-3 after ROCK inhibitor (10 µM) treatment over 30 mins ( Figure 6D, middle). There was also a weak increase in threonine phosphorylation ( Figure 6D, bottom). Western blotting was then carried out to determine any changes in expression of the other Nectin proteins after Nectin-3 over-expression in MDA-MB-231 cells ( Figure 6E). It can be seen that, whilst there was no difference in protein levels for Nectin-1, both Nectin-2 and Nectin-4 showed some reduction in levels. We then went on to investigate whether Nectin-3 over-expression caused a change in protein levels of the proteins we found to have potential binding interactions with Nectin-3 ( Figure 6F). Surprisingly, the over-expression of Nectin-3 in these Discussion The Nectin protein family is still little investigated in cancer. Our study has shown that high levels of Nectin-1 and Nectin-2 are associated with poor prognosis and patient outcome in human breast cancer. Previous studies on Nectin-1 have concentrated on its role in sensitivity to herpes oncolytic therapy in squamous cell carcinoma, thyroid cancer and head/neck carcinoma [20][21]. However, it has been demonstrated that there is a link between the reduction in breast cancer cell invasion caused by SNAI1-triggered epithelial to mesenchymal transition (EMT) and the downregulation of Nectin-1 [22]. The over-expression of Nectin-2 has previously been described in breast and ovarian cancer tissues using gene arrays and immunohistochemistry [13]. The authors further determined that Nectin-2 was over-expressed in various breast and ovarian cell lines using flow cytometry, concluding that Nectin-2 could serve as a target for antibody therapy in these cancer types. Increased levels of Nectin-2 have also been found to be a biomarker for poor prognosis and metastatic disease in squamous cell and adenosquamous carcinoma and adenocarcinoma of the gallbladder [23]. We found Nectin-3 and Nectin-4 to be reduced in breast cancer and associated with good prognosis and patient outcome. However, in lung adenocarcinoma, it has been reported that membranous expression of Nectin-3 is an independent prognostic indicator [17]. Interestingly, the authors demonstrated this was only true in patients where Nectin-3 did not co-localise with E-cadherin; where there was co-localisation, patient prognosis was favourable. In contrast to the results described here, an earlier study on Nectin-4 using a smaller cohort described no expression in normal breast epithelium, but expression in 61% of the ductal cancers examined, and found that nearly all ER/PR-negative tumours expressed Nectin-4 [16]. When testing serum from patients, the authors found there to be a correlation between serum Nectin-4 and disease progression.
In ovarian cancer, Nectin-4 has also been reported to be increased in tumour cells and tissues, compared to normal [15]. Our immunohistochemical staining for Nectin-3 in human breast tissues revealed the protein to be concentrated as small inclusions in the nuclear region. These inclusions were seen only in cells from tumour tissues. This could have a direct bearing on the role of Nectin-3 in the cell. Nectin-3 was first described by Satoh-Horikawa et al. [6]. They found a novel Ca2+independent homophilic binding cell-cell unit located at cadherin based adherens junctions. The authors isolated three splicing variants which were nectin-3a (largest), -3b (middle), and -3c (smallest). Nectin-3a was found to consist of three extracellular domains, a transmembrane region and a cytoplasmic tail with a PDZ-binding motif. Nectin-3a formed a cis-homo-dimer and showed Ca2+ -independent trans-homo-interaction to cause homophilic cell-cell adhesion. Nectin-3a furthermore showed trans-hetero-interaction with nectin-1 or -2 but did not form a cis-hetero-dimer with nectin-1 or -2. Moreover, Nectin-3a interacted with actin and colocalised with Nectin-2 [6]. In comparison, nectin-3c lacked the C-teminal PDZ motif and was unable to interact with actin. In our current study, all four Nectins had aberrant expression in the cancer cells lines investigated. Nectin-3 was only fully expressed in one breast cancer cell line (BT-482). Sequential overlapping amplification via RT-PCR showed that the majority of breast cancer cell lines expressed the first domain of Nectin-3. This truncated Nectin-3 does not correspond to any of the Nectin-3 spliced variants, being too short. Interestingly, increased confluency of human breast cancer cells resulted in increased transcript of Nectin-3. Over-expression of Nectin-3 in human breast cancer cells resulted in a significantly reduced aggressive phenotype, cells that were less motile, less invasive, slower growing; however, these cells had increase TJ function. In addition, expression of Nectin-3 in human endothelial cells also imbued cells with increased barrier function and reduced ability to undergo tubulogenesis. It appears that Nectin-3 could be involved in barrier function and that the aberrant expression observed in wild type cells could prevent correct assembly of cell-cell junctions. Each member of the Nectin family forms homo-cis-dimers, followed by formation of homotrans-dimers, causing cell-cell adhesion [24]. Nectin-3 also forms hetero-trans-dimers with either Nectin-1 or Nectin2 and the formation of these is much stronger than that of the homotrans-dimers. It has been shown that the first Ig-like domain of Nectins-1, -2, and -3 contain a highly conserved peptide sequence that corresponds to aa 118-132, aa 125-139, and aa 142-156, respectively (6). The authors drew an analogy with E-cadherin, where this conserved sequence is crucial for trans-interaction between cells. Nectin-3a does not form a cis-hetero-dimer with Nectin-1a or -2a and it is likely that a portion(s) of the extracellular region, which is different from that necessary for trans-interaction, determines cis-dimerization specificity [6]. Nectins have the potential to recruit the E-cadherin-beta-catenin complex to the Nectin based cell-cell adhesion sites through afadin and a-catenin [25]. Moreover, Nectins have been shown to recruit ZO-1 also [26]. Fukuhara et al. 
[27] have shown that Nectin-1 plays a role in the localisation of TJ components, Claudin-1 and occludin, in the formation of the junctional process in MDCK cells. Claudin-1 and occludin accumulated at the apical sites of Nectin-1a-based cellcell adhesion sites during the formation of the junctional complex. The accumulation of Claudin-1 and occludin could be inhibited by Nectin inhibitors, gD and Nef-3, which inhibited the trans interaction of Nectin-1a. Nectin inhibitors also impaired the barrier function of TJ [27]. Such results suggest that trans interaction of Nectin-1 is necessary for the localisation of Claudin-1 and occludin as well as the formation of TJs. Claudin-1 and occludin interact with ZO-1 through its C-terminal and Nectin-1 recruits ZO-1 to the Nectin based cell-cell adhesion sites through afadin in a cadherin-independent manner [26]. Thus Nectin-1 recruits Claudin-1 and occludin through their cytoplasmic-tail binding proteins afadin and ZO-1. Nectin-1 is also involved in the localisation of JAM at TJs [27]. During the formation of cell-cell junctions, the trans-interaction of Nectins first occurs at the initial cell-cell contact sites, and then promotes the formation of cadherin-based AJs and the subsequent formation of claudinbased TJs [28]. It can therefore be postulated that Nectin-3 has a hitherto unreported role in the successful organization of TJs. The Immunoprecipitation experiments we carried out showed a number of potential binding partners amongst well known TJ proteins that could suggest a similar role for Nectin-3 in the recruitment of TJ components to the cell membrane. This could be an essential mechanism in breast epithelial cells to maintaining cell adhesion and barrier function. In cancer cells therefore, aberrant expression of Nectin-3 could lead to the prevention of TJ protein recruitment, lack of formation of TJs and hence loss of cellcell integrity. Increasing evidence has placed TJs as the key structure that cancer cells must overcome in order to successfully metastasize [29]. Also of interest was the possible interaction of Nectin-3 with proteins from the ERM family, i.e. ezrin, radixin and moesin. The ERM protein family act as molecular cross-linkers between actin filaments and proteins anchored in the cell membrane [30]. They participate in a complex intracellular network of signal transduction pathways and play a key role in the regulation of adhesion and polarity of normal cells through interactions with various membrane molecules. Ezrin and related molecules are concentrated at surface projections such as microvilli and membrane ruffles where they link the microfilaments to the membrane. Actin binding proteins allow cross-linking of actin filaments and regulation of actin filaments prior to cell motility. ERM proteins are believed to act as membrane organisers and linkers between plasma membrane molecules such as CD44 and ICAM-2 and the cytoskeleton [31][32]. There is now compelling scientific and clinical evidence that adhesion molecules and the ezrin family are important structures in controlling cell functions such as adhesion as well as controlling the progressive nature of cancer cells [33]. It is therefore probable that control of the assembly or disassembly of Figure 6. Immunoprecipitation study of Nectin-3 (A, an antibody to the C-terminal region; B, an antibody to an internal region) and proteins that are involved in cell to cell adhesion (A). Immunoprecipitation of relevant proteins probed with Nectin-3 (B). 
Ezrin precipitation and confirmation of Nectin-3 interaction (C). Phosphorylation study of Nectin-3 after treatment with HGF (40 ng/ml) and/or the ROCK inhibitor Y-27632 (D). Effect of Nectin-3 over-expression on the protein expression of Nectin-1, -2 and -4 (E). Effect of Nectin-3 over-expression on the protein expression of potential binding partners (F). doi:10.1371/journal.pone.0082696.g006 cell-cell adhesion and changes in the cell actin cytoskeleton leading to motility could involve some interaction between Nectin-3 and ezrin, for example. This is an area that could be fertile for further research. The possible interaction between ROCKI and Nectin-3 could also be another exciting area of research. Changes in the phosphorylation status of Nectin-3 shown in this current study, demonstrates that the Y-27632 ROCK inhibitor increased both tyrosine and threonine phosphorylation. There have been no studies investigating the phosphorylation status of Nectin-3 and limited studies on the phosphorylation of other Nectins. Nectin-2d is tyrosine phosphorylated in response to cell-cell adhesion [8] and knockdown of afadin or Nectin-3 in NIH3T3 cells caused relatively rapid suppression of the PDGF (platelet-derived growth factor)-induced phosphorylation of Akt; moreover, an increase in the phosphorylation of Akt occurred in afadin-or Nectin-3knockdown NIH3T3 cells after treatment with PDGF [34], suggesting that the Nectin-afadin complex is involved in the (PDGF)-induced activation of phosphatidylinositol 3-kinase (PI3K)-Akt signaling for cell survival. Conclusion In conclusion, it appears that Nectin family members have disparate expression in human breast cancer and that the aberrant expression of Nectin-3 is associated with metastatic disease. The expression and interaction of Nectin-3 in breast cancer and endothelial cells indicates that Nectin-3 may be a key component in the formation of cell junctions and be a putative suppressor molecule to the invasion of breast cancer cells.
9,287.6
2013-12-26T00:00:00.000
[ "Biology", "Medicine" ]
Understanding the foundations of the structural similarities between marketed drugs and endogenous human metabolites Background: A recent comparison showed the extensive similarities between the structural properties of metabolites in the reconstructed human metabolic network (“endogenites”) and those of successful, marketed drugs (“drugs”). Results: Clustering indicated the related but differential population of chemical space by endogenites and drugs. Differences between the drug-endogenite similarities resulting from various encodings and judged by Tanimoto similarity could be related simply to the fraction of the bitstrings set to 1. By extracting drug/endogenite substructures, we develop a novel family of fingerprints, the Drug Endogenite Substructure (DES) encodings, based on the ranked frequency of the various substructures. These provide a natural assessment of drug-endogenite likeness, and may be used as descriptors with which to derive quantitative structure-activity relationships (QSARs). Conclusions: “Drug-endogenite likeness” seems to have utility, and leads to a simple, novel and interpretable substructure-based molecular encoding for cheminformatics. Introduction In a recent study (O'Hagan et al., 2015), motivated by the recognition that drugs do, and probably have to, hitchhike on metabolite transporters in order to get into cells (Dobson and Kell, 2008;Dobson et al., 2009a,b;Giacomini et al., 2010;Kell et al., 2011Kell et al., , 2013Kell, 2013Kell, , 2015Kell and Goodacre, 2014;Kell and Oliver, 2014), we have used the recent availability of a curated reconstruction of the human metabolic network, Recon2 Thiele et al., 2013), to ask the question as to how similar in structural terms marketed drugs are to the molecules (hereafter "endogenites") involved in endogenous human metabolism. While the results depended quite considerably on the exact 2D descriptor used to encode the structures, it was noted that for the commonly used MACCS166 descriptor (Durant et al., 2002;Todeschini and Consonni, 2009) in the implementation described (and see http://www. dalkescientific.com/writings/diary/archive/2014/10/17/maccs_key_44.html), there was at least one endogenite with a Tanimoto similarity (TS) exceeding 0.5 for more than 90% of marketed drugs. As noted in those references (Durant et al., 2002;Todeschini and Consonni, 2009), the MACCS166 descriptor consists of a string of 166 binary elements representing the presence or absence of 166 (slightly arbitrary and not necessarily druglike) features. We note that not all the MACCS keys represent substructures, some are rather simple, e.g., "has one or more element [x] atoms." Most of the cheminformatic tool kits (e.g., RDkit, CDKit) are implemented using SMARTS queries; these can only approximate the original MDL MACCS keys. In some cases the intended behavior of the key (query) was ambiguous, in other cases, a SMARTS query is unable to replicate the original MDL query as intended. Nevertheless, the various toolkit MACCS fingerprints are claimed to be sufficiently close to the original MDL versions. The 166 subset were based on the MDL MACCS key that were made public. The RDKit implementation is described at http://rdkit. org/Python_Docs/rdkit.Chem.MACCSkeys-pysrc.html. 
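As a concrete illustration of the MACCS166/Tanimoto comparison discussed above, a minimal RDKit sketch is shown below; the two SMILES strings (aspirin and salicylic acid) are arbitrary examples rather than molecules drawn from the study sets, and the bit counts printed alongside the similarity relate to the point made later about bitstring density.

```python
from rdkit import Chem, DataStructs
from rdkit.Chem import MACCSkeys

# Two example structures encoded as 166-bit MACCS keys.
drug = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")        # aspirin
endogenite_like = Chem.MolFromSmiles("OC(=O)c1ccccc1O")    # salicylic acid

fp_drug = MACCSkeys.GenMACCSKeys(drug)
fp_endo = MACCSkeys.GenMACCSKeys(endogenite_like)

ts = DataStructs.TanimotoSimilarity(fp_drug, fp_endo)
print(f"bits set: {fp_drug.GetNumOnBits()} vs {fp_endo.GetNumOnBits()}, Tanimoto = {ts:.2f}")
```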
It was concluded that while this "does not mean, of course, that a molecule obeying the rule is likely to become a marketed drug for humans, it does mean that a molecule that fails to obey the rule is statistically most unlikely to do so" (O'Hagan et al., 2015), implying that the degree of endogenite-likeness could indeed be a useful chemical filter in drug discovery programmes. Others too have noted the general "natural metabolite-likeness" of drugs (e.g., Feher and Schmidt, 2003;Karakoc et al., 2006;Gupta and Aires-De-Sousa, 2007;Dobson et al., 2009b;Ranganathan, 2009, 2011;Peironcely et al., 2011;Zhang et al., 2011;Chen et al., 2012;Walters, 2012;Hamdalla et al., 2013;Manallack et al., 2013), often using supervised methods of machine learning, though in our own work (O'Hagan et al., 2015), especially to avoid the dangers of overtraining (Broadhurst and Kell, 2006), we purposely confined ourselves to using unsupervised methods only. We also noted (O'Hagan et al., 2015) that a rather smaller fraction of molecules in typical drug discovery libraries obeyed the rule. Partly for reasons of space, however, the previous study (O'Hagan et al., 2015) left a considerable number of questions rather open. These included, for instance, which fingerprint method might be most "suitable" (and whether "better" ones existed), whether similarity measures should be based on a suitable fusion of the results from using different fingerprints (e.g., Ginn et al., 2000;Hert et al., 2004;Whittle et al., 2006; FIGURE 1 | A "mind map" of the manuscript. Gardiner et al., 2009;Chen et al., 2010;Medina-Franco et al., 2011;Willett, 2013a,b), which substructures were most important in determining endogenite-likeness, which parts of metabolite space were most fully populated by drugs, whether results differed markedly if we used other clustering methods, and so on. The purpose of the present paper is to develop and provide some of these analyses. It is concluded that drugs are indeed like metabolites when viewed in a variety of orthogonal ways, and that the substructures found within endogenites and marketed drugs provide a novel and useful means of encoding chemical structures in a simple and easy-to-understand manner. Figure 1 gives an overview of the paper in the form of a "mind map" (Buzan, 2002). Molecular Data We used the same molecules for marketed drugs as before (O'Hagan et al., 2015); they were provided in their entirety as Supplementary files to that paper (O'Hagan et al., 2015) and are not reproduced here. The number of endogenites was lowered to 1057 to remove wildcards in lipids with variable chain lengths, since for some purposes we were here specifically interested in molecular weights, but the endogenites were otherwise identical too. Data for Maybridge fragments and Chembridge molecules were downloaded from their respective websites, and other data were downloaded as indicated in the text. Software We used the KNIME environment (Berthold et al., 2008;Mazanetz et al., 2012;Meinl et al., 2012) throughout, along with a variety of its cheminformatics toolkits such as CDK (Beisken et al., 2013) and RDKIT (Riniker et al., 2013). Details were as given previously (O'Hagan et al., 2015) (and note that the MACCS fingerprints there were not hashed; a correction has been appended at the journal). Quite a few of the nodes used R code, written by O'Hagan and incorporated into the "R Snippet" KNIME node, with substructure counting via the RDKit Substructure Counter node. 
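Since substructure counting underlies the analyses that follow (and the DES encodings introduced in the abstract), a minimal sketch of how a single substructure can be counted across a set of molecules with RDKit is given below; the SMARTS pattern and SMILES strings are arbitrary examples, and the KNIME Substructure Counter node mentioned above wraps equivalent functionality.

```python
from rdkit import Chem

# Count occurrences of one example substructure (a carboxylic acid group)
# in a small set of example molecules.
pattern = Chem.MolFromSmarts("C(=O)[OX2H1]")
smiles = ["CC(=O)Oc1ccccc1C(=O)O",   # aspirin: one carboxylic acid
          "OC(=O)CCC(=O)O",          # succinic acid: two
          "c1ccccc1"]                # benzene: none

for s in smiles:
    mol = Chem.MolFromSmiles(s)
    n = len(mol.GetSubstructMatches(pattern, uniquify=True))
    print(f"{s}: {n} match(es)")
```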
Results and Discussion Fingerprints Even (as in O'Hagan et al., 2015) using just 2D fingerprints, the apparent closeness of drug and endogenite molecules to each other (as judged by their Tanimoto similarity coefficients) was differentially "rugged" (the hierarchical clustering showed many more small clusters for drugs than for metabolites), and could differ quite substantially depending on which fingerprint was used (see also e.g., Eckert and Bajorath, 2007; Leach and Gillet, 2007; Faulon and Bender, 2010; Koutsoukas et al., 2014; Maggiora et al., 2014; Medina-Franco and Maggiora, 2014). To explore this further, we decided to compare the drug and metabolite spaces, alone and with each other, using a modification of the approach. Because, of course, the nearest metabolite to itself has a TS of 1, we decided to proceed as follows: 1. For each querying molecule (whether a drug or an endogenite) rank the queried molecules (whether drug or endogenite) and determine the TS of the 90th percentile of closeness. 2. Do this for each fingerprint encoding. 3. For each query molecule and each queried molecule, find the maximum value of the TS among the eight fingerprints tested. 4. Plot the TS of the 90th percentile of the queried molecule against the fraction of the querying molecules tested. Considering first the endogenites (as compared to each other), we see (Figure 2A) that the RDKit encoding shows the greatest similarities for metabolites that are ranked as being the most similar, but that the MACCS and Layered encodings preserve the greater appearances of similarity as the overall similarities decrease. Using these encodings, 40-50% of molecules still had molecules whose TS at the 90th percentile was 0.5 or above. By contrast (Figure 2B), these fractions were uniformly lower for drugs vs. drugs, consistent with the rather spikier or "patchy" population of the normalized chemical space relative to that of endogenites (many of which, especially CoA and steroid/sterol derivatives, share many structural similarities) (O'Hagan et al., 2015). The drug-endogenite comparison (Figure 2C, with the drugs being the query molecules) gives data broadly similar to those shown in Figure 2A of O'Hagan et al. (2015), where closeness to only the very nearest metabolite was plotted, consistent with a view that a querying drug is more commonly close in structural terms not just to a single endogenite but to many such that occupy that part of endogenite space. Figure 2 also shows the data for the "maximum" TS (Gardiner et al., 2009) among the different fingerprints when only the nearest metabolite is returned. Finally, the complementary endogenite-drug comparison, with the endogenite being the query molecule, shows similar but complementary behavior (Figure 2D). One conclusion, given the fact that more than 90% of marketed drugs are seen to be similar to at least some metabolites, and that one might therefore wish to use this as a filter in the analysis of candidate drug libraries, is that for these kinds of comparisons the MACCS, RDKit, Layered or "maximum" fingerprint choice is most likely to return such a result. Another way of looking at such data is to compare the distributions of the nearest Tanimoto similarities between marketed drugs and metabolites for the different encodings (Figure 3A).
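A minimal Python/RDKit sketch of the four-step ranking procedure listed above is given below; it uses only two of the eight encodings (MACCS and the RDKit topological fingerprint) and short placeholder SMILES lists standing in for the 1383 drugs and 1057 endogenites, so it illustrates the logic rather than reproducing the analysis.

```python
# Sketch of the ranking procedure: for each query molecule, compute Tanimoto
# similarities to all queried molecules per encoding (steps 1-2), take the
# per-pair maximum across encodings (step 3), and record the 90th-percentile TS (step 4).
import numpy as np
from rdkit import Chem, DataStructs
from rdkit.Chem import MACCSkeys

def fingerprints(mols):
    # two of the eight encodings used in the paper, purely as an illustration
    return {"MACCS": [MACCSkeys.GenMACCSKeys(m) for m in mols],
            "RDKit": [Chem.RDKFingerprint(m) for m in mols]}

def percentile90(query_smiles, queried_smiles):
    q = fingerprints([Chem.MolFromSmiles(s) for s in query_smiles])
    t = fingerprints([Chem.MolFromSmiles(s) for s in queried_smiles])
    out = {name: [] for name in list(q) + ["maximum"]}
    for i in range(len(query_smiles)):
        per_enc = {name: DataStructs.BulkTanimotoSimilarity(q[name][i], t[name])
                   for name in q}
        for name, sims in per_enc.items():
            out[name].append(np.percentile(sims, 90))            # per-encoding 90th percentile
        fused = np.max(np.array(list(per_enc.values())), axis=0)  # per-pair "maximum" fusion
        out["maximum"].append(np.percentile(fused, 90))
    return out

# Placeholder lists; the real analysis used the full drug and endogenite sets.
drugs = ["CC(=O)Oc1ccccc1C(=O)O", "CN1C=NC2=C1C(=O)N(C)C(=O)N2C"]
endogenites = ["Oc1ccccc1C(=O)O", "C(C(=O)O)N", "OCC1OC(O)C(O)C(O)C1O"]
print(percentile90(drugs, endogenites))
```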
It is clear from such a plot (Figure 3A) that not only is the closeness of the "nearest" metabolite different for the different encodings but that the encodings cover metabolite space differentially. [Figure 4 caption, partial: similarities of ZINC database compounds to each other and to marketed drugs; in each case library compounds are more similar to each other than to marketed drugs. (D) Topological polar surface area and molecular weight distributions of drugs, Recon2 compounds and five "rule-of-3"-compliant (Congreve et al., 2003) libraries of 500 fragments each that are sold for drug screening purposes; the inset is scaled to show all marketed drugs.] At least for the Morgan and Feat Morgan encodings, that resemble ECFP and FCFP (Landrum et al., 2011), this can be ascribed in part to the much smaller number of bits in the encoding that have the value 1 (Figure 3B), since the value for the TS is partly a function of this (Flower, 1998; Godden et al., 2000; Holliday et al., 2002, 2003; Wang et al., 2007; Al Khalifa et al., 2009). [In a similar vein, we also looked at the use of a strategy that doubles the length of the bitstring encoding by adding its complement (Knuth, 1986), such that 50% of the bits are 1 and 50% are 0. This was not beneficial, as the high density of zeroes in the original merely doubled the number of similar bits (data not shown).] We also observed previously that the distribution of metabolite- (endogenite-) likenesses differed significantly between marketed drugs and (many of) the kinds of molecules typically found in drug discovery libraries. A convenient way of encoding these is simply to look at the distribution of bitstring densities (of 1s) for the appropriate encoding between the molecules (Flower, 1998). Thus, Figure 4A shows that these differ very significantly for random samples taken from Recon2, from marketed drugs, and from the ZINC (Irwin et al., 2012) databases, with drug candidates typically being less like metabolites than are drugs (see also Chen et al., 2012; Walters, 2012), regardless of the database used (Figures 4B,C). The distributions of topological polar surface area (TPSA) and molecular weight (see Abad-Zapatero et al., 2010) are shown (Figure 4D) for endogenites (Recon2), for marketed drugs, and for 5 libraries of small-molecule "fragments" (Maybridge "Ro3"-compatible libraries; Congreve et al., 2003). For a given molecular weight, endogenites are typically significantly more polar than are marketed drugs or fragments, especially for lower molecular weights. Thus, when compounds are ranked by molecular weight (MW), the median MWs for drugs, endogenites and fragments are 335, 291, and 179-185 (depending on the library). For these molecules the TPSA values are 69, 124, and 30-69 Å², respectively. A noteworthy point (see also Gopal and Dick, 2014), however, is that fully one quarter of marketed drugs are not in fact larger than typical fragments (Figure 4D); indeed, when ranked by increasing molecular mass, the 500th marketed drug (of 1383) has a MW of just 297. We also looked to see whether metabolites that were known substrates (from the Recon2 map) for known transporters (see also Sahoo et al., 2014) exhibited any greater likelihood to be those with the nearest TS to the query drug; no significant evidence for or against this was found (data not shown), and of course they may be, and may need to be, endogenite-like at their targets too.
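The bitstring-density comparison discussed above can be sketched in a few lines of Python/RDKit; the Morgan fingerprint here stands in for the ECFP-like encodings, and the two molecules are placeholders rather than compounds drawn from the study's sets.

```python
# Fraction of "on" bits (bitstring density) for two encodings of each molecule.
from rdkit import Chem
from rdkit.Chem import MACCSkeys, AllChem

def bit_density(fp):
    return fp.GetNumOnBits() / fp.GetNumBits()

for smi in ["CC(=O)Oc1ccccc1C(=O)O", "OCC1OC(O)C(O)C(O)C1O"]:   # placeholder molecules
    mol = Chem.MolFromSmiles(smi)
    maccs = MACCSkeys.GenMACCSKeys(mol)
    morgan = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)  # ECFP4-like
    print(smi,
          f"MACCS density = {bit_density(maccs):.3f}",
          f"Morgan density = {bit_density(morgan):.3f}")
```

Comparing such densities across sets of molecules (Recon2, marketed drugs, ZINC samples) is exactly the kind of distribution the text refers to.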
Clustering Using Self-organizing Maps Teuvo Kohonen's Self-Organizing (Feature) Map (Kohonen, 1989, 2000; Oja and Kaski, 1998) is a well-known unsupervised learning method of clustering data according to a measure of their similarity. It was therefore of interest to see how "drug" and "endogenite" spaces were organized when represented as such a map. To this end, we used the MACCS encoding for marketed drugs, with 10 × 10 nodes and 10 clusters (numbers chosen to give a reasonable but not excessive degree of clustering, given the number of drugs). Figure 5A (left side) shows the distribution of the different numbers of drugs as clustered (by color, based on the similarity of their weight vectors) into the different nodes (circles), while the right-hand side of the same figure represents a projection of Recon2 metabolites as projected onto the trained network. The number of circles for each cluster varies quite significantly, from 2 to 15, while the heterogeneous distribution of metabolites shows clearly that some parts of drug space are much less close to multiple metabolites than are others (e.g., the "orange"- and "lemon"-colored clusters). This is especially obvious when the data are displayed as a contour map (Figure 5B). In the converse approach, we trained a self-organizing map (SOM) on Recon2; in this case (Figure 5C) the number of nodes per cluster varied from 1 to 21, showing again that metabolite space has some significantly larger clusters than does drug space, while the projection of drugs onto metabolite space (Figure 5D) shows a highly significant clustering into a particular area of metabolite space, consistent with the finding that there was a significant preference for some metabolites (O'Hagan et al., 2015). Substructural Basis for Drug-endogenite Likenesses Our previous analyses of drug-endogenite likenesses looked at the molecules "as a whole." However, it is obvious that some substructures may be more common in endogenites than in marketed drugs and vice versa, a simple example being the recognition that human endogenites do not contain halogen atoms while various drugs do (e.g., of the 1381 marketed drugs, 148 of them contain at least one fluorine atom). Thus, Figure 6 shows the distribution of atom types for the three classes drugs, endogenites, and library compounds. Starting arguably with Bemis and Murcko (1996, 1999), a number of papers have analyzed the frequency of occurrence in FDA-approved, marketed drugs of various substructures, including heterocycles, rings (Aldeghi et al., 2014; Taylor et al., 2014), the chronological (and relatively recent) appearance of S and F in drugs (Ilardi et al., 2014), and even metallodrugs (Mjos and Orvig, 2014). Papers also exist in which fingerprinting methods have been used to distinguish drugs from metabolites (e.g., Ranganathan, 2009, 2011; Peironcely et al., 2011; Walters, 2012; Hamdalla et al., 2013). However, while Chen et al. (2012) did note that human metabolites and natural products tended to have fewer terminal rings than do marketed drugs, no one has compared the substructures found in marketed drugs with those found in the human endogenites represented in Recon2, which is what we now do here. Using the Indigo substructure analyser in KNIME, we extracted relevant substructures from both endogenites and marketed drugs, and ranked them according to the normalized frequency of their appearances.
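As a rough stand-alone illustration of this ranking step (the study itself extracted its substructures with the Indigo MCS analyser inside KNIME), the following Python/RDKit sketch counts the normalized frequency of a few placeholder SMARTS substructures in placeholder drug and endogenite lists and ranks them; none of the patterns or molecules are taken from the study's data.

```python
# Sketch: rank candidate substructures by their normalized frequency of occurrence
# in a "drug" set vs. an "endogenite" set (placeholder SMARTS and SMILES only).
from rdkit import Chem

substructures = ["c1ccccc1", "C(=O)O", "C(=O)N", "F"]                 # placeholder SMARTS
drugs = ["CC(=O)Oc1ccccc1C(=O)O", "CN1C=NC2=C1C(=O)N(C)C(=O)N2C"]     # placeholder "drugs"
endogenites = ["Oc1ccccc1C(=O)O", "C(C(=O)O)N", "OCC1OC(O)C(O)C(O)C1O"]

def normalized_frequency(smarts, smiles_list):
    patt = Chem.MolFromSmarts(smarts)
    hits = sum(Chem.MolFromSmiles(s).HasSubstructMatch(patt) for s in smiles_list)
    return hits / len(smiles_list)

rows = [(s, normalized_frequency(s, drugs), normalized_frequency(s, endogenites))
        for s in substructures]
for smarts, f_drug, f_endo in sorted(rows, key=lambda r: r[1], reverse=True):
    print(f"{smarts:10s}  drugs={f_drug:.2f}  endogenites={f_endo:.2f}")
```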
The top 60 substructures in each clade are shown in Figure 7, while all are illustrated diagrammatically in the inset to Figure 7A, with the full Table of data being supplied as Supplementary Information. It is clear from Figures 7A,B that while there are indeed some clear similarities between drugs (blue) and endogenites (red) (Figure 7A), with a greater frequency of more substructures in drugs (Figure 7B), there are also some substantial differences (Figure 7C) in the frequency of various substructures between endogenites and present marketed drugs (those substructures that occur frequently in drugs are sometimes referred to as "privileged"; Tounge and Reynolds, 2004; Costantino and Barlocco, 2006; Schnur et al., 2006). It is probably also worth noting that in some sense substructures may be related to the fragments that have proved so useful in drug screening (e.g., Hall et al., 2014), and that proposals exist that one might concentrate on those that are metabolite-like (Davies et al., 2009) or natural-product-like (Over et al., 2013). Use of Drug/endogenite Substructure Presence as an Encoding Strategy While some encodings, such as MACCS (Durant et al., 2002), use the presence or absence of particular substructures as the basis for their binary scoring, the substructures so chosen are somewhat arbitrary (or at least not necessarily based on any knowledge of the structures of marketed drugs nor endogenites). Armed with the substructures of Figures 7A-C (Supplementary Information), we used each of the substructures found (whether in endogenites, drugs or both) as a 1419-bit presence/absence encoding, on the basis that these substructures ought at least to form the basis of useful drug molecules in the future, as they must include or contribute to the concept of "drug-likeness" (Muegge, 2003; Lipinski, 2004; Oprea et al., 2007; Abad-Zapatero et al., 2010, 2014; Camp et al., 2012; Garcia-Sosa et al., 2012; Yusof and Segall, 2013), not least since approved drugs occupy only a rather particular subset of the chemical Universe (Ruddigkeit et al., 2012, 2013). [Figure 7 caption: Frequency of representation of different substructures in endogenites and marketed drugs. Self-organizing maps were run as in Figure 5 on 10 separate occasions; for each SOM node, using the MCS (maximum common scaffold) analyser from Indigo within KNIME, all substructures were extracted; this was performed 10 times, and duplicates were removed.] We refer to this encoding as the Drug-Endogenite-Substructure (DES) encoding. Given its origins and basis, the DES encoding is necessarily likely to indicate more clearly than many encodings the drug-metabolite similarities, and such data are given in Figure 8, both for the full set of substructures so extracted (Figure 8A) and for truncated versions reduced as per the ranking order in the full Supplementary Information (Figures 8B-D). In this case, it is clear that there are advantages in not being too comprehensive, and that using the DES encoding with the top 10% of drug-endogenite substructures results in a drug-endogenite similarity even greater than that found previously (O'Hagan et al., 2015) using the MACCS encoding; this again would seem to reflect the fraction of bits set to 1 in the bitstring that results from the encoding. This is also true for molecules taken at random from the ZINC database (Figure 8D).
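A minimal sketch of how a DES-style presence/absence bitstring can be computed once a ranked list of substructure SMARTS is in hand is shown below; the three SMARTS are arbitrary placeholders for the 1419 drug/endogenite substructures provided in the Supplementary Information, and the helper name des_fingerprint is ours, not the paper's.

```python
# Sketch: DES-style encoding, one presence/absence bit per ranked substructure.
import numpy as np
from rdkit import Chem

# Placeholder for the ranked drug/endogenite substructure list (1419 SMARTS in the paper).
ranked_smarts = ["c1ccccc1", "C(=O)O", "C(=O)N"]
patterns = [Chem.MolFromSmarts(s) for s in ranked_smarts]

def des_fingerprint(mol, patterns, first=0, last=None):
    """Presence/absence bits for substructures ranked first..last (the 'X DES Y' idea)."""
    subset = patterns[first:last]
    return np.array([int(mol.HasSubstructMatch(p)) for p in subset], dtype=np.uint8)

mol = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")          # placeholder molecule
print(des_fingerprint(mol, patterns))                      # full encoding
print(des_fingerprint(mol, patterns, first=0, last=2))     # truncated, top-ranked variant
```

The first/last arguments mirror the truncated encodings explored in Figure 8 and the subscripted naming suggested in the next section.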
The KNIME element that calculates the bitstring from the molecular structure encoded in SMARTS strings was mainly written in R, and is provided as Supplementary File 2 (Scaffold2DES-Fingerprint.7z). Given the supplementary information it is possible to cut substructures from both the most and least frequently found substructures in the list. We suggest that these encodings might also be useful for various purposes, and might usefully be referred to as X DES Y, where X and Y are numbers referring to the first and last of the substructures used. [We note that one might also use something like an evolutionary algorithm for subset selection (e.g., Broadhurst et al., 1997) and other kinds of optimization (Kell and Lurie-Luke, 2015), but as noted above we have chosen to avoid supervised methods for these purposes here.] A common use of these kinds of encodings is in the calculation of quantitative structure-activity relationships (Geldenhuys et al., 2006; Tropsha, 2010; Stålring et al., 2011; Warr, 2011; Ruusmann et al., 2014). We assessed the ability of the DES and other encodings to predict the binding of various drugs to three candidate targets, using data taken from the internet. Thus, Figure 9A shows the out-of-bag prediction from a random forest-based (Breiman, 2001; Svetnik et al., 2003; Knight et al., 2009) QSAR using data on the dopamine D2 receptor downloaded from http://www.bindingdb.org/. In this case we used a random forest learner that was based on the "ensemble tree learner" KNIME node and the full DES encodings, and compared it with the other encodings. The DES encoding was of comparable utility to the other encodings used, although we note that these are log-log plots and that the slopes of the lines are rather less than unity, so there would be inaccuracy in linear plots (Kell et al., 2011; Kell and Oliver, 2014). Figure 9B shows the same QSAR, using only the fractions of the DES encodings indicated. Clearly one can learn very effectively using just the commonest 20% of substructures. Figures 9C,D show a similar analysis for factor Xa inhibition (Fontaine et al., 2005) using data downloaded from http://www.cheminformatics.org/datasets/, while Figure 9E splits the data (as did the original authors) into training (out-of-bag predictions) and test sets, as is arguably preferable (Broadhurst and Kell, 2006; Kell and Oliver, 2014). Lastly here (Figure 9F), those data were also split into two output classes based on whether the molecule was a "good" or "poor" inhibitor for factor Xa; obviously the DES encoding admits a highly accurate classifier. Finally, to show the generality of the utility of the new encodings (Figure 10), we used the various encodings to devise quantitative structure-activity relationships for two datasets from the ChEMBL bioactivity database (Bento et al., 2014), here using partial least squares (Wold et al., 2001) and the regression error characteristic (Bi and Bennett, 2003; Mittas and Angelis, 2010) to indicate that reasonable predictions could be obtained by methods other than random forests.
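A minimal scikit-learn sketch of the random-forest QSAR with out-of-bag predictions described above is given below; the study itself used the KNIME "ensemble tree learner" node, so this is a stand-in, and the X matrix and y vector are random placeholder data so the snippet runs, not the BindingDB or factor Xa sets.

```python
# Sketch: random-forest QSAR with out-of-bag (OOB) predictions, as a stand-in
# for the KNIME "ensemble tree learner" workflow; placeholder data only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 300)).astype(float)   # placeholder DES-like bit matrix
y = rng.normal(size=200)                                 # placeholder log(activity) values

rf = RandomForestRegressor(n_estimators=500, oob_score=True, random_state=0)
rf.fit(X, y)

oob_pred = rf.oob_prediction_                            # out-of-bag prediction per sample
r = np.corrcoef(y, oob_pred)[0, 1]
print(f"OOB R^2 = {rf.oob_score_:.3f}, Pearson r = {r:.3f}")

# Truncated-encoding variant: keep only the top-ranked fraction of bits,
# analogous to using just the commonest substructures.
rf_top = RandomForestRegressor(n_estimators=500, oob_score=True, random_state=0)
rf_top.fit(X[:, :60], y)
print(f"OOB R^2 (truncated encoding) = {rf_top.oob_score_:.3f}")
```

With real fingerprints and activities in place of the random arrays, the OOB predictions play the same role as the out-of-bag plots in Figure 9.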
Conclusions The concept of drug-endogenite likenesses continues to appear to have utility, and substructure analyses of drugs and endogenites (for which we provide all the data) show both similarities and differences that have led us to implement here a simple substructure-based cheminformatics encoding family, DES, that has a clear and interpretable basis. [Figure 9/10 caption, partial: (E) same as (C) save that data were split into training (out-of-bag predictions) and test sets as per the data at http://www.cheminformatics.org/datasets/; (F) classification of the data from (C,D), using a Receiver Operator Characteristic curve, based on whether the molecule was a "good" or "poor" inhibitor. For the PLS analyses (Figure 10), data were split into training and test sets, and training data were pre-processed using a low-variance filter and a correlation filter prior to PLS (5 latent variables); the test data were used for plotting the scatter plot and the REC curve; PLS was carried out using the R plsdepot package via the KNIME R Integration and Scripting Nodes; the REC curve plot also shows the curve for using the mean value as predictor, taken as a reference worst-case method.] We note a strong tendency for the Tanimoto similarity metric to favor bitstrings (and hence encodings that lead to them) that are highly populated with ones, and this will bear further analysis. However, we anticipate that variants of the DES encoding may provide useful filters for assessing drug- and endogenite-likenesses and for other cheminformatics purposes. Author Contributions DBK and SO'H conceived of the study, participated in its design and coordination and helped to draft the manuscript. SO'H wrote the workflows. All authors read and approved the final manuscript. Authors' Information DBK is a Research Professor at the University of Manchester, a role to which he returned full time following a 0.8 FTE 5-year secondment as Chief Executive of the Biotechnology and Biological Sciences Research Council. He was previously Director of the Manchester Centre for Integrative Systems Biology (www.mcisb.org). His interests include systems biology, chemical biology, pharmaceutical drug transporters, synthetic biology, and iron metabolism. His website is http://dbkgroup.org and he tweets as @dbkell. At Google Scholar his work has been cited more than 30,000 times, with an H-index of 90. SO'H has a Ph.D. in Chemistry from Warwick University, and following a period in industry is now a Computer Officer at the University of Manchester, specializing in cheminformatics, chemometrics, machine learning and the closed-loop automation of scientific instrumentation. Acknowledgments DBK thanks the Biotechnology and Biological Sciences Research Council for financial support (grant BB/M017702/1). We thank Dr Neil Swainston for extracting the subset of transporters from Recon 2. This is a contribution from the Manchester Centre for Synthetic Biology of Fine and Speciality Chemicals (SYNBIOCHEM). Supplementary Material The Supplementary Material for this article can be found online at: http://journal.frontiersin.org/article/10.3389/fphar. Additional Data Files The following additional data are available with the online version of this paper. Additional data file 1 (VolcanoPlotData.xlsx) lists (in order of abundance) all of the substructures extracted from the endogenites and marketed drugs used herein, for which a truncated version is visualized as Figure 7. Additional data file 2 (Scaffold2DES-Fingerprint.7z): KNIME node elements for computing the DES encoding(s).
5,864.6
2015-05-13T00:00:00.000
[ "Chemistry", "Computer Science", "Medicine" ]
Is the Coleman de Luccia action minimum?: AdS/CFT approach We use the anti-de Sitter/conformal field theory (AdS/CFT) correspondence to find the least bounce action in an AdS false vacuum state, i.e., the most probable decay process of the metastable AdS vacuum within the Euclidean formalism by Callan and Coleman. It was shown that the $O(4)$ symmetric bounce solution leads to the action minimum in the absence of gravity, but it is non-trivial in the presence of gravity. The AdS/CFT duality is used to evade the difficulties particular to a metastable gravitational system, such as the problems of negative modes and unbounded action. To this end, we show that the Fubini bounce solution in CFT, corresponding to the Coleman de Luccia bounce in AdS, gives the least action among all finite bounce solutions in a conformal scalar field theory. Thus, we prove that the Coleman de Luccia action is the least action when (i) the background is AdS, (ii) the AdS radii, $L_+$ and $L_-$, in the false and true vacua, respectively, satisfy $L_+ / L_- \simeq 1$, and (iii) a metastable potential gives a thin-wall bounce much larger than the AdS radii. I. INTRODUCTION The vacuum decay process can be important both in the early and later Universe. In the early Universe, vacuum decay may lead to the graceful exit of open inflation [1][2][3]. In the later Universe, the possible Higgs metastability [4,5], predicted in particle physics, would eventually lead to the nucleation of a negative-energy vacuum bubble and destroy the structure of the present Universe. Also, string theory predicts the existence of many vacuum states with various values of the cosmological constant, which is known as the string landscape [6]. In the landscape picture, a universe could have various cosmological constants by experiencing vacuum decay. To quantify the decay rate Γ, we consider the Euclidean path integral under the semi-classical approximation and obtain Γ = A e^{-B} from the bounce solution [7,8], where A is a pre-factor and B is the on-shell Euclidean action of the bounce. The pre-factor A can be estimated by the energy scale of a metastable system, and the exponent B governs the order of magnitude of the decay rate. Therefore, determining the factor B is rather important to estimate the probability of a vacuum decay. Finding the most probable process among all possible processes is equivalent to finding the least Euclidean action among all possible bounce solutions in the Euclidean formalism. In the absence of gravity, Coleman, Glaser, and Martin (CGM) have proven [9] that the O(4)-symmetric vacuum bubble leads to the least action under some conditions. However, with gravity, there exist serious issues, e.g., the negative mode problem [10,11] and the unboundedness problem [12], and it is non-trivial whether the maximally symmetric non-trivial solution, i.e., an O(4) bounce, leads to the minimum action in the presence of gravity. For the vacuum decay processes that we are interested in, gravity can be strong and one cannot remove the gravitational degrees of freedom from the system. In this sense, we could say that the theory of vacuum decay has been facing the aforementioned serious issues.
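For reference, the decay-rate formula quoted above can be written out in the standard Callan-Coleman form; the conventions below (in particular measuring B relative to the false-vacuum action) are the usual ones and are assumed here rather than copied from the source's own equations.

```latex
% Standard Callan-Coleman form of the semiclassical decay rate (conventions assumed).
\Gamma \;\simeq\; A\, e^{-B},
\qquad
B \;=\; S_E\!\left[\phi_{\mathrm{bounce}}\right] \;-\; S_E\!\left[\phi_{\mathrm{false\ vacuum}}\right].
```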
We consider how the anti-de Sitter/conformal field theory (AdS/CFT) correspondence [13][14][15] can shed light on the issues. We assume that the correspondence holds for a metastable AdS and CFT, that is, there exists a one-to-one correspondence between the partition functions of a bounce solution on the AdS and CFT sides. We then find the least action on the CFT side, where gravity is absent, which infers what the least action is on the AdS side by virtue of the AdS/CFT correspondence (see Figure 1). As mentioned, finding the least action among possible bounce solutions in the presence of gravity is challenging, but we can use the AdS/CFT correspondence to evade the complicated issues caused by gravity. We will then argue that the CdL bounce would correspond to the Fubini bounce under certain conditions. We then prove that the CdL bounce on the CFT side is always spherically symmetric and hence it is given by the Fubini bounce. Knowing that the spherically symmetric thin-wall bounce on the AdS side gives the same action as the Fubini bounce on the CFT side, we conclude that the spherically symmetric bounce gives the least action on the AdS side under certain conditions. This paper is organized as follows. In Sec. II, we set up the condition under which our strategy works. We consider a metastable scalar field theory in AdS_{D+1} and review a way of how to determine the corresponding CFT_D with the correct coupling constant based on Ref. [16]. In Sec. III, we prove that the Fubini bounce solution gives the least action among possible finite non-trivial solutions to a metastable conformal scalar field theory. We then provide our conclusions in Sec. IV. Throughout the paper, we use the natural units with c = ℏ = 1 and G = 1. II. CORRESPONDENCE BETWEEN A METASTABLE AdS_{D+1} AND CFT_D In this section, we consider the correspondence between a metastable AdS_{D+1} and a metastable CFT_D, with actions given by (1) and (2), respectively, where ξ_D ≡ (D−2)/(4(D−1)) (see, e.g., Ref. [17]), U(ψ) is a metastable potential, and the coupling constant λ will be determined later but is negative in order for the CFT to be a metastable system. Following the AdS/CFT correspondence, we assume that there is a one-to-one correspondence between bounce solutions ψ = ψ̄ in AdS_{D+1} and ϕ = φ̄ in CFT_D such that the corresponding partition functions are equal, where the left (right) hand side of this equality is the partition function of a bounce solution nucleated in the bulk (on the boundary). As the partition functions of the bounce and the initial false vacuum determine the transition amplitude in the Euclidean path integral formalism, we may expect that the corresponding bubbles on the AdS and CFT sides would be nucleated with the same transition amplitude. Such a one-to-one correspondence means that the transition amplitude of the most probable decay process in the metastable AdS is equivalent to that on the metastable CFT side. Our goal in this paper is to confirm that, with this assumption, the CdL nucleation process is the most probable process at least in the AdS background.
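The assumed equality of partition functions described above can be written schematically as follows; the symbol Z and the bars marking on-shell bounce configurations are our notation, not fixed by the source.

```latex
% Schematic statement of the assumed bulk/boundary correspondence for bounce
% configurations (notation ours; the source's own displayed equation is not reproduced).
Z_{\mathrm{AdS}_{D+1}}\!\left[\psi = \bar{\psi}\right]
\;=\;
Z_{\mathrm{CFT}_{D}}\!\left[\phi = \bar{\varphi}\right].
```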
Given the metastable potential in AdS, U(ψ), how can we determine the coupling constant λ in CFT? We can demonstrate the determination of λ from the AdS side when U(ψ) satisfies the conditions shown below. We here consider a metastable potential U(ψ) for which the O(D + 1) bounce solution, i.e., the CdL solution, has a large wall with the exterior and interior AdS radii, L_+ and L_-, respectively, and a potential barrier of tension σ such that 0 < q/σ − 1 ≪ 1 and the condition (4) holds, where Σ ≡ 8πσ/(D − 1) (5). Here the tension of the wall is given by σ ∼ V_top Δϕ, where Δϕ is the separation of the true and false vacuum states in field space and V_top is the height of the potential barrier. The two quantities q and σ are associated with the bulk and surface energy, respectively, and the balance between them determines the size of the CdL bubble. Under the condition (5), one of the possible bounce solutions, the CdL solution, has a bubble radius R_0 that is much larger than the false-vacuum AdS radius, R_0 ≫ L_+ (6), where the explicit form of R_0 is derived below. For the CdL solution, all degrees of freedom in the bulk, i.e., the thin wall or a probe brane, live in the vicinity of the AdS boundary, and its dynamics can be translated into that of the CFT [16]. Then one can read the unknown coupling constant λ from the bulk side by sending the probe brane to the vicinity of the AdS boundary at r ≫ L_+ (here r is the radial coordinate in the static AdS patch) and obtaining the effective action of the probe brane in the canonical form [16]. In the following, we review the procedure of Ref. [16]. The dynamics of a thin-wall spherical bubble can be described by the Israel junction conditions [18]. This means that the effective degrees of freedom of the bulk reduce to a scalar quantity, i.e., the radius of the bubble. As we consider a spherical probe brane, the first Israel junction condition is trivially satisfied and the second Israel junction condition reduces to (7), where r = R(τ) denotes the radius of the brane and τ is its proper time. The junction condition (7) reduces to (8)-(9). Note that q > 0 should hold for the positivity of the exterior and interior extrinsic curvatures. From (9), we find that the radius at the moment of the bubble nucleation is R = R_0, at which the potential term in (9) becomes zero, and R_0 → ∞ for σ → q. Using the asymptotic time, t, (9) can be rewritten as (10). The action leading to the integrated equation of motion (10) is given by (11), where Ω_{D−1} denotes the area of the (D − 1)-dimensional unit sphere and a dot denotes the derivative with respect to t. One can show that the action (11) indeed reproduces the integrated equation of motion (10): computing the Hamiltonian (total energy) of the bubble E as in (12) and setting E = 0, since the total energy of the nucleated bubble is zero, one finds that (12) reduces to (10).
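The junction-condition equations (7)-(9) referred to above are not reproduced in this extract. For orientation only, the standard thin-wall form of the second Israel junction condition for a spherical brane of tension σ separating an interior AdS region (radius L_-) from an exterior one (radius L_+) is sketched below; the normalization conventions (G = 1, Σ as defined above) are assumptions of this sketch and may differ from the paper's own expressions.

```latex
% Standard thin-wall junction condition for a spherical brane between two AdS regions
% (conventions assumed; not copied from the source's Eqs. (7)-(9)).
\sqrt{\dot{R}^{2} + f_{-}(R)} \;-\; \sqrt{\dot{R}^{2} + f_{+}(R)} \;=\; \Sigma\, R,
\qquad
f_{\pm}(r) \;=\; 1 + \frac{r^{2}}{L_{\pm}^{2}},
\qquad
\Sigma \;\equiv\; \frac{8\pi\sigma}{D-1}.
```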
In the following, we obtain the translation R(t) → ϕ(t), by which the action (11) reduces to the canonical form (13) in the non-relativistic situation Ṙ ≪ 1. This is the case when the bubble is nucleated with a small velocity. In this procedure, we can read off the λ in CFT from the bulk side. Expanding L(Ṙ, R) in (11) with respect to Ṙ and comparing it with (13), one can read off (14), and substituting this relation and Ṙ = 0 in (12), one finds (15) for R ≫ L_+, where (16) holds. The potential term (15) reduces to (17), as the Ricci scalar is R_bdy = (D − 1)(D − 2)/L_+² on the AdS boundary, whose topology is R × S^{D−1}. Identifying λ in (2) with λ_AdS, we obtain the CFT action corresponding to the metastable AdS satisfying the condition of (4). Let us consider the correspondence in the Wick-rotated space (t → −i t_E) and perform the conformal transformation leading to R × S^{D−1} → R^D. The latter procedure is possible as (18) holds, where u ≡ exp(t_E) and the factor 1/u² is the conformal factor of the transformation. Performing the conformal transformation, R_bdy vanishes and (17) becomes (19). Then, the D-dimensional Fubini bounce [19] becomes a solution to the equations of motion for (2) with R_bdy = 0. Here, the Fubini bounce is given by (20), where Δ = (D − 2)/2 is the mass dimension of a scalar field ϕ and b is an arbitrary constant determining the size of the bounce. As the conformal transformation does not affect the action, the original CFT of (2) also admits the on-shell Fubini action. Remarkably, the Fubini action with λ = λ_AdS is equivalent to the Coleman de Luccia action in the limit of L_+/L_- → 1, as it has the form of (21) for q/σ → 1. The exterior and interior AdS radii, L_+ and L_-, respectively, should satisfy L_+/L_- − 1 ∼ 1/N ≪ 1 for the AdS/CFT correspondence to be valid, where N is a large integer. In the context of AdS/CFT, N is the number of branes, and for N ≫ 1, the spacetime near the branes is approximated with AdS spacetime. Also, a nucleated bubble can be regarded as bundled n branes with n ≪ N [16]. In the following section, we show that the Fubini bounce leads to the least bounce action among all finite bounce solutions by extending the theorem on the minimum action proven by Coleman, Glaser, and Martin [9] to cover a scalar CFT (see Sec. III). Based on the relation of (22), we argue that the CdL bounce action gives the most probable transition amplitude among all possible processes, at least in our setup. III. MOST PROBABLE DECAY PROCESS IN THE METASTABLE CFT In this section, we will prove that the Fubini bounce gives the least Euclidean action of the metastable CFT. To this end, we extend the theorem proven by Coleman, Glaser, and Martin [9] (hereinafter, we refer to it as the CGM theorem). In the former part of this section, we will briefly review the CGM theorem, and in the latter part, we will extend the CGM theorem so that it applies to the Fubini bounce. A. CGM theorem CGM have shown that there exists at least one non-trivial solution to the differential equation (23), ∇²ϕ = V′(ϕ), and that the solution leading to the lowest action is spherically symmetric and monotone if V(ϕ) is admissible. Here, ∇² is the Laplacian in the D-dimensional Euclidean space {x_1, x_2, ..., x_D}, and V(ϕ) is said to be admissible if i) V is continuously differentiable for all ϕ, ii) V(0) = V′(0) = 0, iii) V is somewhere negative, and iv) there exist positive numbers a, b, α, and β such that the conditions (24) and (25) hold. Notice that this is not the case for the potential of (19), as we need β = 2D/(D − 2) and a = 0 to satisfy the inequality.
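The explicit expression for the Fubini bounce (20) is not reproduced in this extract. For orientation, the standard D-dimensional Fubini solution for a conformal potential proportional to λϕ^{2D/(D−2)} with λ < 0 takes the form below; the overall normalization constant c(λ, D) depends on how the potential (19) is normalized and is therefore left unspecified here.

```latex
% Standard form of the D-dimensional Fubini bounce (cf. Eq. (20) of the source);
% the normalization c(\lambda, D) is convention dependent and is not fixed here.
\varphi(x) \;=\; c(\lambda, D)
\left( \frac{b}{\,b^{2} + \lvert x - x_{0} \rvert^{2}\,} \right)^{\Delta},
\qquad
\Delta \;=\; \frac{D-2}{2},
```

with b an arbitrary scale setting the size of the bounce and x_0 its centre, in line with the description in the text.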
The main theorem proven by CGM is: The CGM Theorem. In D-dimensional Euclidean space with D > 2, for any admissible V, the equation of motion (23) has at least one monotone spherical solution vanishing at infinity, other than the trivial solution of ϕ = 0. Furthermore, this solution has Euclidean action less than or equal to that of any other solution vanishing at infinity. If the other solution is not both spherical and monotone, the action is strictly less than that of the other solution. Before proving the theorem, CGM defined the reduced problem as follows. Definition. "The reduced problem" is the problem of finding a function vanishing at infinity which minimizes T for some fixed negative W, where T and W denote the gradient (kinetic) and potential parts of the Euclidean action, respectively. It is equivalently stated as the problem of minimizing the corresponding scale-invariant ratio (denoted X below) with negative W. The CGM theorem is proven by showing that the following theorems hold. Theorem A. If a solution of the reduced problem exists, then, for an appropriate value of W, it is a solution of (23) that has an action less than or equal to that of any non-trivial solution of (23). Theorem B. There exists at least one solution to the reduced problem. All solutions to the reduced problem are spherically symmetric and monotone. The proof of Theorem B is composed of a sequence of statements with short proofs. CGM start from an infinite minimizing sequence, {ϕ_n}, n ∈ Z_+, such that T[ϕ_n] approaches the infimum of T with a fixed negative W. The sequence is chosen so that ϕ_n is differentiable and has compact support, and T[ϕ_n] is finite. Notice that such a choice of the sequence is always possible. Then, CGM have proven the following statements. (a) [CGM Statement 4] There exists a sequence of spherical and monotone functions, {ϕ^sph_n}, such that the corresponding bound holds for all n ∈ I. Here, I is an infinite subset of Z_+. (b) [CGM pp. 220-221] There is no non-spherical or non-monotone function that has the same R as the spherical monotone rearrangement of the original function. [CGM Statement 8] Φ satisfies the corresponding bound. Here, (a) and (b) show that the solution to the reduced problem is always spherically symmetric and monotone, and (c)-(f) show the sequence converges to the actual minimum of X satisfying lim_{r→∞} Φ(r) = 0 and W[Φ] < 0. Hence, these statements prove Theorem B. The other statements of CGM are used to prove Statements 4, 6, 8 and 10 shown above. The dependencies of the statements are summarized in Appendix A. B. Extension of the CGM Theorem We consider non-trivial solutions to the differential equation (23) with the potential of (35), where γ = 2D/(D − 2) and λ is a negative constant. The theorem we here prove is described below. The Main Theorem. In D-dimensional Euclidean space with D > 2, the equation of motion (23) with the potential of (35) has at least one monotone spherical solution vanishing at infinity, other than the trivial solution of ϕ = 0. Furthermore, the solution has the Euclidean action (26), which is less than or equal to that of any other solution vanishing at infinity. If the other solution is not both spherical and monotone, the action is strictly less than that of the other solution. To prove the Main Theorem, we show that Theorems A and B hold in our setup. Theorem A has been proven without the condition of (25), and thus our main focus is on Theorem B.
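The functionals T and W entering the reduced problem above are not displayed in this extract; in CGM's standard splitting of the Euclidean action, which we assume is the convention intended here, they read as follows.

```latex
% CGM-style splitting of the Euclidean action into gradient and potential parts
% (assumed convention; the source's own equation numbers are not reproduced here).
T[\phi] \;=\; \frac{1}{2}\int d^{D}x\,\left(\nabla\phi\right)^{2},
\qquad
W[\phi] \;=\; \int d^{D}x\; V(\phi),
\qquad
S_{E}[\phi] \;=\; T[\phi] + W[\phi].
```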
As we have mentioned, Theorem B has been proven by showing (b) and Statements 4, 6, 8 and 10. As summarized in Appendix A, (b) and Statements 4 and 6 hold independently of (25), and Statement 10 follows from Statement 8. However, the proof of Statement 8 by CGM depends on (25) and does not apply to our case. This can be understood in the following way. Since V(ϕ) < 0 for any ϕ ≠ 0, there is a possibility that a sequence of Φ_n having a fixed negative W converges to a Φ that is zero almost everywhere. If such a sequence exists, we obtain W[Φ] = 0 although W[Φ_n] < 0 for all n, which contradicts Statement 8. In fact, we can construct such a sequence utilizing the scale invariance of the theory. (One can see that any value of b in (20) gives the same bounce action, which is the consequence of the scale invariance.) For any sequence that converges to the Fubini bounce, we can execute the scale transformation at each step of n so that the new sequence converges to the Fubini bounce with b → 0 or b → ∞, which is zero almost everywhere. Let us move on to the proof of the Main Theorem. Since the proof of Statement 4 by CGM applies to our setup, there exists a minimizing sequence, {ϕ_n}, such that ϕ_n is spherically symmetric and monotone for all n. Hereafter, we write the functions in terms of the radius from the center, r = e^y. We prove the following propositions. Proposition 1. There exists a minimizing sequence of spherically symmetric monotone functions, {Φ_n}, such that (i) Φ_n(e^y) e^{(D−2)y/2} is symmetric under y → −y and monotone for y > 0 for all n, and (ii) there exists a bounded continuous function, f(y), such that Φ_n(e^y) e^{(D−2)y/2} → f(y) pointwise for all y and uniformly on any finite interval. Proposition 2 (Statement 8'). For the minimizing sequence of the preceding proposition, the conclusion of CGM's Statement 8 holds for the limit function. Since the scale transformation corresponds to a translation in y space, Proposition 1 excludes the sequences that converge to the Fubini bounce with b → 0 or b → ∞. Then, Proposition 2 replaces Statement 8 of CGM. Once Proposition 2 is proven, Statement 10 of CGM immediately follows from Proposition 2 and Statement 9 of CGM, which completes the proof of Theorem B and the Main Theorem. Definition (Spherical rearrangement). Let F(x) be a non-negative measurable function on R^d (d ≥ 1) that vanishes at infinity. A spherically symmetric monotone function, F^sph(r), is obtained by symmetrizing F(x) around the origin, r ≡ |x|, such that A({x : F(x) ≥ M}) = A({x : F^sph(|x|) ≥ M}) for any positive value M. Here, A is the Lebesgue measure. Then, F^sph(r) is said to be a spherical rearrangement of F(x). (See Figure 2.) The spherical rearrangement has the following properties. Proof of Proposition 1. With f_n(y) = ϕ_n(e^y) e^{(D−2)y/2}, W and T can be rewritten as (39) and (40). Let f^sph_n be the spherical rearrangement of f_n in y space. Then, from (39) and (40), there exists a subsequence of {f^sph_n} that is a minimizing sequence satisfying property (i). Then, the sequence with property (ii) is obtained by applying Statement 6 of CGM to this sequence. [Appendix A note: a parent bullet depends on its child bullets, and ♣ indicates that the statement requires (25); for the details of each statement, see [9]; Statement 5 (C), (D) and (F) further depend on other statements.] [Figure 1 caption: A schematic picture showing the role of the AdS/CFT correspondence in our strategy to find the least bounce action in the presence of gravity.] [Figure 2 caption: A schematic picture of the spherical rearrangement F^sph(x); the area of the level set of F = M_i (i = 1, 2, 3) is equal to that of F^sph = M_i.]
4,811.8
2023-08-04T00:00:00.000
[ "Physics" ]
Cognitive Load Approach to Digital Comics Creation: A Student-Centered Learning Case Featured Application: The present work has applications in the field of primary and secondary education. The work describes how educators can take advantage of digital comics creation for the learning of applied science in school. The study has implications not only for educators of applied sciences but also those of other educational disciplines. The study also outlines directions for future research to further clarify the appropriate instructional approach that could render digital comics an effective educational method. Abstract: The use of comics and their creation is an especially promising tool to enable students to construct new knowledge. Comics have already been adopted in many applied sciences disciplines, as the combination of text and images has been recognized as a powerful learning tool. Educational activities and tools, however, must not create an overload on students' working memory that could hinder learning. In the current study, we investigated, through pre-test and post-test performance, the effect of digital comics creation on students' efforts to construct new knowledge. Furthermore, through the multidimensional NASA-TLX, we assessed the cognitive load imposed on students. The results were in favor of digital comics creation, ranking it as an efficient instructional activity. Specifically, the students' performance after digital comics creation improved, and the load imposed on students was normal. Also, studying the weighting procedure between the NASA-TLX dimensions, frustration and temporal demand were found to be the most aggravating dimensions. Finally, implications for teachers and future research recommendations are discussed. Introduction New technologies offer the contemporary classroom the possibility of using educational multimedia, which include combinations of at least two different media types, such as text (written or spoken), pictures, video, and animations [1]. The presentation of educational material in the form of multimedia is based on the fact that learning with words and images is more effective than learning with words or images alone [2]. However, instructional multimedia design might suffer from several conditions that potentially hinder students' knowledge construction. These conditions arise from Cognitive Load Theory (CLT) and the Cognitive Theory of Multimedia Learning (CTML). Learning is an active process of knowledge construction in which instruction aims at guiding learners' cognitive processes [3]. Sweller's CLT describes the mental workload (MWL) as a result of the number of informational units that must be held in the working memory, which is limited in capacity [4]. Mayer's CTML provides a useful means by which instructional efficiency can be analyzed because it sets out the principles that should govern educational multimedia in order to support learning. In this context, effectiveness is related to fostering learning, that is, the construction of knowledge [5]. Learning occurs when working memory successfully processes information, leading to new schema creation. The control of cognitive load is important for meaningful learning, and consequently, CLT and CTML contribute to the generation of a suitable instructional design [6]. Multimedia designers, according to CLT and CTML, should produce instructional materials, taking care not to overload the working memory [7].
A key element in the above theories is the measurement of the MWL imposed on learners by the educational activities they are asked to carry out. Reliable assessment of a learner's MWL would enable a range of new and improved instructional activities [8]. The NASA-TLX, created by Hart and Staveland [9], is a widely used and reliable method to assess MWL [10,11]. The NASA-TLX is a subjective, self-reported, multidimensional method that assesses the MWL across six dimensions. A digital learning activity, with its specific technology, influences students' cognitive load and is worthy of investigation under the theories of CLT and CTML. Research has shown the effectiveness of text and picture integration in school activities [12]. This integration is used, for example, in science textbooks that use different types of visual representations to support information reporting and explain content [13,14]. Similarly, Lee [15] noted that visual representations can aid the understanding of scientific ideas; thus, they are included in modern science textbooks. Mainali [16] also stated that most textbooks today, in order to promote understanding of mathematics, use a wide variety of diagrams and pictures. Among different types of text and picture integration, comics have gained researchers' interest. Visual narratives existing in comics present information in a sequence of images and are often combined with written text. Similarities in brain responses have been found in the processing of visual narrative sequences and sentences [17]. Hughes et al. [18] stated that comics, with their power of visual communication and narrative dialogue, can help students to deconstruct and reconstruct meaning. Comics creation, additionally, provides students with a popular and accessible medium to communicate their knowledge. Moreover, digital comics creation is based on computers, which appeal to students. Comics have been used as instructional tools in various disciplines, such as chemistry, computer science, biology, physics, nanotechnology, and programming, satisfying students and deepening their engagement [19][20][21][22]. Comics provide students with an easy way to access information and can motivate them [23], while comics creation maintains this motivation for a longer period [19]. Comics also enhance students' active participation in classroom educational tasks and promote a positive attitude toward the learning process [24]. More research has focused on how comics might be used in a classroom than on comics creation. Students prefer creating digital comics in the classroom [25], but a significant aspect of using comics creation is whether it can benefit students in their knowledge construction. In addition, as we mentioned above, the MWL imposed on students by an educational activity is a decisive factor in students' effort. Our research objectives included: a. The investigation of the effect of digital comics creation on students' effort in knowledge construction and on the imposed MWL. We used the NASA-TLX method, which is acknowledged as a reliable method of assessing MWL [10,26]. b. The examination of whether comics creation by students can lead to the satisfaction of the principles of effective design of artifacts, which combine text and pictures, and support learning.
The manuscript is structured as follows. The next two sections describe the theoretical framework and the research hypothesis that was investigated in the current study. The two subsequent sections describe the materials and the research methodology followed and the corresponding results. The last two sections discuss the findings of our research, its limitations, and some proposals for future work and finally present our conclusions. Cognitive Load Theory, Cognitive Theory of Multimedia Learning, and Cognitive Load Assessment Cognitive Load Theory is an educational theory that provides a conceptual model for the kind of cognitive processes that take place during instruction [3]. CLT explains how students' ability to process new information and construct knowledge in long-term memory can be affected by the information processing load caused by the learning tasks [27]. CLT is concerned with the limitations of human cognitive processing in relation to learning [28] and plays a key role in connecting cognitive science and instructional practice. CLT assumes that the human cognitive architecture consists of a working memory that is limited in capacity and includes partially independent subcomponents to deal with auditory/verbal and visual information [28]. CLT argues that any learning task imposes a cognitive load on working memory and identifies different components of mental resources that compete for students' working memory capacity [29]. This load depends on the characteristics of the learning task, the way the learning material is presented or the type of instructional activities that the learner participates in, and the process in which the learner relates relevant information from their long-term memory to information from the present learning task [30,31]. Hence, some learning tasks impose a higher cognitive load on working memory than others. Since there is a limit to working memory capacity, instructional design should intend to reduce the unnecessary working memory load in order to free capacity for learning-related processing, thereby enabling the transformation of currently attended information into long-term memory traces [32]. Conversely, if the cognitive load exceeds the working memory capacity, learning could be hampered [27,31,33]. Therefore, learning may occur when the cognitive load that is associated with an instructional design does not exceed the available processing capacity of the working memory resources. Learning materials have an inherent complexity that stems from the number of information units together with the number of connections between them [32]. Consequently, a balance must be found between the elements that comprise the cognitive load. This means that the load must be managed appropriately so that the simultaneous processing of all information elements leaves some spare cognitive capacity, allowing learners to invest available processing resources in schema acquisition and automation [6,28]. CLT has influenced CTML, which focuses on how to structure multimedia instructional messages and apply more effective cognitive strategies, aiming to help people to learn efficiently [5,34]. Mayer [2] defined a multimedia instructional message as communication containing words and pictures that intends to foster learning. The words can be spoken or written, and the pictures can correspond to any form of graphical imagery.
CTML is based on three assumptions, namely, the dual channel assumption, limited capacity assumption, and active processing assumption. According to these assumptions, humans have separate but interconnected channels with a limited capacity for processing auditory and visual information, and they construct knowledge by selecting appropriate incoming information, organizing it into a coherent mental representation, and integrating it with prior knowledge activated from the long-term memory. CTML considers three kinds of demands on students' information processing system during learning: extraneous processing, which is caused by poor instructional design and impedes learning; essential processing, which concerns the essential information presented; and generative processing, which aims to make sense of the material presented [2,34]. Multimedia learning research has provided and evidenced several principles for the effective design of multimedia instruction [2,3,7,12,29,31]. Mayer [2], with the aim of reducing extraneous processing, managing essential processing, and fostering generative processing, formulated some multimedia instructional principles to be followed when multimedia instructional messages are created. Specifically, the aim of reducing extraneous processing involves the coherence principle, the signaling principle, the redundancy principle, the spatial contiguity principle, and the temporal contiguity principle. The aim of managing essential processing involves the segmenting principle, the pre-training principle, and the modality principle. Finally, the aim of fostering generative processing involves the multimedia principle, the personalization principle, and the voice principle. These principles state that people learn better (i) when extraneous material is excluded rather than included (coherence principle), (ii) when cues that highlight the organization of the essential material are added (signaling principle), (iii) when graphics and narration are used together rather than graphics, narration and printed text together (redundancy principle), (iv) when corresponding words and pictures are placed near each other rather than far from each other on the page or screen (spatial contiguity principle), (v) when corresponding words and pictures are presented at the same time (temporal contiguity principle), (vi) when a multimedia lesson is presented in learner-paced segments rather than as a continuous unit (segmenting principle), (vii) when they know the names and characteristics of key components in a multimedia message (pre-training principle), (viii) when graphics and narration are used rather than graphics and printed text (modality principle), (ix) when words and pictures are used rather than words alone (multimedia principle), (x) when the words in a multimedia presentation are in a conversational style rather than in formal style (personalization principle), and (xi) when the words in a multimedia message are spoken by a standard-accented human voice rather than a machine voice (voice principle).
Digital technology offers new opportunities for learning, but several design factors involved in digital learning can expose students to a higher cognitive load [29]. Therefore, in an educational environment, managing the cognitive load can help improve learning outcomes. Cognitive load can be measured by self-report measures. The NASA-TLX, developed by Hart and Staveland [9], is one of the most widely used measures of cognitive workload and has been proven to be a reliable and effective tool for evaluating the cognitive load in different fields of knowledge [10,11,26,35-37]. The NASA-TLX, being more sensitive than other measures [38], can capture the multidimensionality of cognitive load, assessing it in six dimensions [39]. Three of these analyze dimensions of the demands placed on individuals, specifically, the mental demand, which concerns the mental and perceptual activity; the physical demand, which concerns the amount of physical effort; and the temporal demand, which is associated with the perception of time. The other three analyze dimensions that are related to the willingness of individuals, specifically, performance, which concerns the degree of goal accomplishment; effort, which concerns the amount of effort; and the frustration level, which is associated with the feeling of insecurity, discouragement, irritation, and stress. When the NASA-TLX is administered, the information provided by the participants concerns both the score and the weight for each dimension. The weight of each dimension reflects the relevance of this dimension to the task. Weighting consists of 15 binary comparisons among the six dimensions and represents the subjective importance of each dimension. For each pair of comparisons, one element is chosen that the participant considers the most important source of his/her load. The total weight of each dimension depends on the number of times that dimension was selected in the binary comparisons. The weight of each dimension ranges from zero (if the dimension is never chosen) to five (if the dimension is chosen in every pair of comparisons). The weight and the score of each dimension are provided after task completion. The participant estimates, on a scale from 0 to 100, his/her subjectively sensed magnitude of cognitive load with respect to each dimension. Subsequently, using the above data, a weighted index for each dimension is calculated by multiplying the score of each dimension by its corresponding weight. Finally, the global index is calculated by adding up all the weighted scores and dividing this sum by 15 [9,10]. There are several studies that have pointed out the worth of evaluating each NASA-TLX dimension independently in order to assess the effect of each dimension [40,41]. The global score can provide information about the overall load of a task, while each dimension score can show where the load lies in this task.
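The weighted NASA-TLX arithmetic described above can be sketched in a few lines of Python; the raw scores and pairwise-comparison counts below are invented example numbers for a single hypothetical participant, not data from the study.

```python
# Sketch of the NASA-TLX weighted scoring: weight_i = number of times dimension i
# was chosen in the 15 pairwise comparisons (0..5); weighted index_i = score_i * weight_i;
# global workload = sum of weighted indices / 15. Example values only.
raw_scores = {        # 0-100 rating per dimension (hypothetical participant)
    "mental": 35, "physical": 10, "temporal": 40,
    "performance": 70, "effort": 45, "frustration": 30,
}
weights = {           # times chosen in the 15 pairwise comparisons (must sum to 15)
    "mental": 3, "physical": 1, "temporal": 3,
    "performance": 3, "effort": 2, "frustration": 3,
}
assert sum(weights.values()) == 15

weighted = {d: raw_scores[d] * weights[d] for d in raw_scores}
global_index = sum(weighted.values()) / 15
print(weighted)
print(f"Global NASA-TLX workload: {global_index:.1f} / 100")
```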
Comics in Education Comics, with their multimodal features [42], present a story using a set of images and words and can be placed in a narrow conceptualization of the multimedia area [2,31]. The frame or panel, the bordered illustration containing visual information, is the fundamental unit of a comic. The shape and size of the frame are designed to make the emotions more noticeable, affect the speed of the action, and emphasize what the creator thinks is important [43]. The gutter, the space between the frames, is sometimes a single dividing line, but at other times, it may imply action to connect the frames through closure [43,44]. Images in comics, whether in the background or the foreground, have various characteristics and can provide information about the characters in the story, where and how the story develops, feelings, attitudes, opinions, etc. In addition, through technical features such as color, camera angle, zoom, etc., images can make understanding of this information easier for readers. Text expressing speech, thoughts, and sounds in comics appears in speech and thought balloons, captions, and sound effects. Balloons can be of various shapes to convey the characters' intentions, feelings, and situations. Captions convey the narrator's voice, and sound effects present environmental sounds that cannot be represented graphically or by the characters' or narrator's voices. Comics, by combining text and visual representations, exploit both systems for incoming information according to CLT and CTML. The comic reader is allowed to take control of time and space by ceasing reading to look at and study the images. Text and picture integration facilitates comprehension, enabling comics to be an interesting educational tool that helps students strengthen their ability to synthesize information [12,20,45]. Comics attract students' interest and help them deconstruct and reconstruct meaning [18]. They have the ability to develop critical thinking, motivate students, and offer clues that help students remember what they have learned [22][23][24][46][47]. Lin et al. [48] stated that science comics, which combine visual representations and scientific explanations, can provide learners with visualized learning. Comics also have the ability to make the subjects more alive and accessible [20] and, consequently, to transfer knowledge. Moreover, comics provide students with the opportunity to nurture some of their intelligences according to Gardner's multiple intelligences theory [49]. Digital comics creation combines the medium of comics, which children are familiar with, and computers, which attract students and motivate them to participate in the learning activities. Students prefer creating digital comics in school [25], and comics creation keeps them active in the classroom [20]. By asking students to create comics, the processes of problem solving, planning, and revision are activated [49,50]. Students, during comics creation, identify the most essential content of the learning subject and then reformulate it and retell it in the new format. Students think of digital comics creation as a technology that is enjoyable, easy to use, and useful in a school environment that can offer them learning opportunities [25]. Therefore, in the current study, we tested the following null hypothesis: H0: There is no difference in students' performance before and after creating digital comics.
The impact of digital comics creation on students' attempts was also examined through the measurement of the cognitive load imposed on their cognitive system.

Materials and Methods In the current study, we investigated the efficiency of digital comics creation as a classroom activity in students' knowledge construction attempts in a computer science course. High school students aged thirteen and fourteen years old attending a school in a suburb of Athens, Greece, participated in the survey. Our experiment followed a common cognitive load research design that was connected with learning tests in order to understand the extent to which digital comics creation contributes to successful learning [51].

A total of 42 students, 16 boys and 26 girls, participated in the experiment. Initially, in order to familiarize students with the comic elements and their digital creation, the students attended a presentation about the comic medium and its basic elements. Students also attended a demonstration of the software interface and the software functions. Subsequently, students created a comic strip with the teacher's guidance in order to become familiar with the software.

Next, the students attended a presentation describing the issues of the learning subjects. Two subjects from the national curriculum were used in the study: computer viruses and multimedia. Before the comics creation procedure, the students took a 10-minute test that served as a pre-test for the digital comics creation effect, in which they had to answer questions about issues of the learning subject. Next, the students created comic strips displaying all issues of the subject under study and discussed with the teacher any relevant questions that arose during the comics creation procedure, explaining these issues. After comics creation, the students took a 10-minute test on the issues of the learning subjects, such as what a computer virus is, how a computer virus enters a computer, what forms of information representation are combined in multimedia, and what image resolution expresses. This test served as a post-test for the digital comics creation effect. Answers to each item for all tests were marked as entirely correct, partially correct, or entirely wrong, and a total mark with a maximum value of twenty was calculated. Missing answers were marked as entirely wrong.

Subsequently, the students received the NASA-TLX questionnaire and filled it out anonymously.

Students used the ComicsFun software created by [25] to create their comics. To create a comic, they first chose the number of frames and the frame they were going to fill and then added the characters and the balloons they needed to narrate their story. Since research has shown, on the one hand, that an excessive cognitive load is produced when students create drawings and, on the other hand, that students can benefit if they are given, for example, cut-out figures [52,53], the software helps students by providing them with a set of character and object images.

The software interface was simple enough not to complicate the activity. The buttons had funny images to attract students' interest, and, when possible, they had images similar to those from other applications that students were familiar with, enhancing the software's ease of use. Buttons were grouped by functionality in order to help students identify related functions easily. Button functionality was indicated by hints and by more explanatory information presented in the status bar.
Figure 1 presents a screenshot of the software interface. It was designed to be suitable to the students' computer skills and to provide an introduction to the activity [25]. Some of the students' comics, translated into English, are shown in Figure 2.

Results All data were analyzed using SPSS statistical software, version 20, with a significance level of 0.05. In our experiment, the data were not normally distributed. Students performed better in the post-test. Thus, in order to check whether there was a significant improvement in students' performance, we conducted a Wilcoxon signed-rank test, which showed a significant difference between the performances in the pre- and post-tests (p < 0.001). This indicated that digital comics creation helped students in their knowledge construction effort, consequently giving no support to hypothesis H0.

The students, after creating their digital comics, also filled in the NASA-TLX questionnaire. In order to simplify the comprehension of the study and achieve a more plausible analysis of its results, we used the scale defined by [10]. Specifically, scores between 0 and 20% indicate a very low workload; between 20 and 40%, a low workload; between 40 and 60%, a normal workload; between 60 and 80%, a high workload; and between 80 and 100%, a very high workload. According to this scale, the overall MWL was normal (mean value = 42.21), which is considered a positive MWL in terms of motivation to keep students active. The mental demand was low to normal (mean value = 32.98), the physical demand was very low (mean value = 11.79), the temporal demand was low to normal (mean value = 36.79), the effort was normal (mean value = 45.36), the performance was high (mean value = 71.19), and the frustration was low to normal (mean value = 32.86). The results regarding the overall workload as well as those across the six dimensions were similar in general terms, except for physical demand and performance, implying that digital comics creation does not burden MWL and its dimensions unevenly. The physical demand scored the lowest, as expected, because digital comics creation does not demand any particular physical effort on behalf of the students. Performance scored the highest, implying that the attempt to create successful digital comics was the most demanding aspect for the students. The results also showed that digital comics creation was not considered difficult, which is notably positive for instructional goal fulfillment.

Concerning the weighting procedure, we found the weights of frustration and temporal demand to be the highest (3.17 and 3.12, respectively), indicating that students placed great importance on the time dimension and on the feelings of insecurity and discouragement captured by the frustration dimension. The weights of mental demand, performance, and effort were 2.71, 2.60, and 2.43, respectively. The physical demand weight was the lowest, with a value of 0.98.
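As a rough illustration of the two analyses reported above, the following Python sketch runs a Wilcoxon signed-rank test on paired pre-/post-test marks and maps a mean NASA-TLX score to the workload bands of [10]. The score arrays are hypothetical placeholders, and the study itself used SPSS rather than Python.

```python
# Sketch of the paired-sample test and the workload banding used above.
from scipy.stats import wilcoxon

def workload_band(score):
    """Map a 0-100 NASA-TLX score to the bands defined in [10]."""
    for upper, label in [(20, "very low"), (40, "low"), (60, "normal"), (80, "high")]:
        if score <= upper:
            return label
    return "very high"

pre  = [8, 11, 9, 12, 10, 13, 7, 9]      # hypothetical pre-test marks (max 20)
post = [12, 15, 13, 14, 13, 17, 10, 12]  # hypothetical post-test marks

stat, p = wilcoxon(pre, post)
print(f"Wilcoxon W = {stat}, p = {p:.4f}")
print(workload_band(42.21))  # -> "normal", matching the overall MWL reported above
```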
Finally, to better understand the relationship between the workload and the dimensions as proposed by the NASA-TLX, we conducted a Spearman correlation analysis for higher robustness, and only significant correlations with a minimum absolute value of 0.3 are mentioned because such a correlation denotes at least a medium correlation strength [8].Specifically, the following dimensions were significantly correlated: temporal demand and mental demand (rs = 0.452, p < 0.01), indicating that when the time pressure increased, the mental demands increased; mental demand and effort (rs = 0.458, p < 0.01), indicating that when the mental demand increased, the students tried harder to accomplish the task; mental demand and frustration (rs = 0.424, p < 0.01), indicating that when the mental demand increased, the students felt more stress; temporal demand and performance (rs = −0.305,p < 0.05), indicating that the limited time the students had to create the digital comics negatively influenced the successfulness of the comics; temporal demand and frustration (rs = 0.396, p < 0.01), indicating that students felt more stress and insecurity due to the limited available time; and, finally, frustration and effort (rs = 0.510, p < 0.01), indicating that the higher the frustration was, the higher the effort that students exerted. Study Observations and Lessons Learnt The present study investigated the effectiveness of digital comics creation on students' knowledge construction attempts in real classroom settings.In an educational setting, such as the classroom, the educational activity that teachers choose must provide students with a stimulating experience, leading to a better understanding of the concepts and helping them acquire knowledge.Students' test performance before and after the digital comics creation was compared.We also used the multidimensional NASA-TLX to determine the MWL the students experienced while creating their digital comics. The results showed a significant difference between pre-test and post-test performance, implying that digital comics creation can assist teachers' efforts to improve knowledge construction.Digital comics creation gives students the opportunity to participate in the learning process, thus promoting higher engagement and, consequently, enhancing knowledge construction.This is in accordance with the literature showing how Information and Communication Technology (ICT) helps students construct new knowledge [54].Linardatos and Apostolou [25] stated that students perceive digital comics creation as useful because they see it as a tool capable of providing learning opportunities.Digital comics creation, by determining the most important content from the subject under study, includes active processing of the materials.Students, while creating their digital comics, divided comic strips into frames, put images inside each frame, and added suitable text to speech or thought balloons.Digital comics creation, thereby, motivates students to select only the essential information from the lesson, expressing it in their own words and reorganizing it using their existing knowledge.According to research, this way of prompting students can enhance learning [7].Digital comics creation offers students the opportunity to evaluate their knowledge levels of the learning subject, make the necessary modifications, and integrate new information with prior knowledge into a mental model. 
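The dimension-level Spearman screening reported in the Results (significant correlations with |rs| >= 0.3) can be sketched in a few lines of Python, as below. The data frame is a random stand-in for the per-student questionnaire scores, and the study itself used SPSS.

```python
# Sketch of the pairwise Spearman screening among the six NASA-TLX dimensions.
from itertools import combinations
import numpy as np
import pandas as pd
from scipy.stats import spearmanr

DIMENSIONS = ["mental", "physical", "temporal", "performance", "effort", "frustration"]

def strong_correlations(df, r_min=0.3, alpha=0.05):
    """Keep only significant correlations of at least medium strength."""
    results = []
    for a, b in combinations(df.columns, 2):
        rs, p = spearmanr(df[a], df[b])
        if p < alpha and abs(rs) >= r_min:
            results.append((a, b, round(rs, 3), round(p, 4)))
    return results

# Hypothetical stand-in for the questionnaire scores (one row per student).
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.integers(0, 101, size=(42, 6)), columns=DIMENSIONS)
for a, b, rs, p in strong_correlations(df):
    print(f"{a} ~ {b}: rs = {rs}, p = {p}")
```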
However, all of these actions put strain on students' working memory, affecting the MWL that they experienced.Since working memory has a limited capacity [7] we investigated MWL through the NASA-TLX.The results showed that students experienced a normal MWL, which influenced them to be engaged with the digital comics creation activity and gain its corresponding benefits.Each frame in a comic presents information that is a continuation of the information of the previous frame, giving students guided access to information.The unguided discovery of information would put a heavy burden on students' cognitive load. The load for the NASA-TLX dimensions had a wide range, from low to high.The low case was, as expected, on behalf of the physical demand dimension since digital comics creation is an activity that takes place exclusively in front of a computer.The high case concerned the performance dimension, which is related to personal effectiveness in creating the digital comics.Students are interested in their performance in a school environment and, consequently, in creating successful comics that present the subject under study.This is of great importance because students were engaged in the activity and tried to accomplish the tasks. The mental demand, the temporal demand, and the frustration dimensions were low to normal, while the effort dimension was normal.As Nikulin et al. [10] mentioned, these values could be a positive indicator of motivation.The students, as creators, can now determine the information contained in the comics, thus influencing the number of interacting elements that need to be simultaneously processed in working memory.As the number of elements within a learning task and the interactions between them increase, the experienced cognitive load also increases. Students also chose the most significant source of workload during digital comics creation, comparing the NASA-TLX dimensions.The frustration and temporal demand dimensions were the most weighted, and the physical demand was again, as expected, the least weighted.Students rated limited in-classroom time as the most aggravating factor in their attempts to create digital comics.Equally important to the students was the feeling of insecurity and discouragement.Students might have had some ideas about how to create their comics, but these ideas may have proven not to be implementable.This could have resulted in the loss of valuable time, thereby increasing frustration.Students need more time to create digital comics and need to feel less stressed.Teachers, in order to lessen these negative factors, might exemplify digital comics creation by using digital comics created by other students.These results also show that students need more quickly implementable in-classroom activities and activities that make them feel more secure.Teachers need to take these characteristics into account when instructional tasks are incorporated into the classroom. 
The correlation analysis of the NASA-TLX dimensions pointed out a relationship between temporal demand and mental demand.The higher the temporal demand was, the higher the mental demand became.Since we used real classroom conditions, the students needed to create the comics correctly and comprehensively, presenting the issues under study, within the limited time period of the teaching hour.This caused an increase in mental demand.The analysis also pointed out a relationship between mental demand and effort.When the digital comic is mentally demanding, the students are impelled to try harder to create the digital comics and present the necessary information, gaining more benefits from the activity. Another relationship was between mental demand and frustration.The more mentally demanding the comics the students had to create were, the more pressure was put on students, which made them feel insecure about being able to create them.Choosing specific characters or creating a story that ended up not being implementable also could have added to the frustration dimension.However, as can be seen from the results, this worked to the benefit of the students, motivating them to continue their efforts and helping them reap the benefits of digital comics creation.The temporal demand was also related to the performance dimension.The limited available time the students had to create their comics negatively affected the performance they wanted to achieve.Possibly, they wanted to create more complex comics in terms of the story and characters.However, the more complex comics may have imposed an additional load on the students unrelated to the subject being studied, affecting the effectiveness of digital comics creation.This needs further investigation. The temporal demand was also related to the frustration dimension.Due to limited time, students might have felt pressure.But, as mentioned above, offering more time might negatively affect the efficiency of digital comics creation.If students had seen some comics created by other students, they might have felt less pressure and insecurity.The same effect might have been observed if students had not been creating digital comics for the first time.This also needs further research.Finally, a relationship between frustration and effort was noticed.The greater the pressure and stress the students felt, the harder they tried to create their comics and present the information in the comic format. 
Approaches to learning that combine text and images can enhance learning.However, in order to obtain these benefits, students need to be engaged in active cognitive processing.Students, nevertheless, often face difficulties with this process.Research on multimedia learning has provided some design principles, enabling beneficial instructional messages to be constructed [2,3,5,7,12,55,56].All these principles aim to avoid problematic designs that would hinder learning and overload the student's cognitive system.The students' digital comics satisfied some of these principles, keeping cognitive load ata normal level.The students' digital comics were simple enough and usually dialogic, mostly containing characters with spoken words.These comics mainly consisted of three frames (which was the maximum number of the available frames they were asked to create), enabling students to focus on key information.Students, as creators, depending on the way they created the comics, influenced the degree that these principles were applied.Specifically, digital comics created by students follow the coherence principle because they contain the information that students consider necessary and appropriate to express the issue under study, thus removing any nonessential information.In addition, students had at their disposal a set of character images along with images of objects to include in their comics, minimizing any extra load that could be imposed by the drawing procedure. Furthermore, the signaling principle was followed.The speech balloon (or thought balloon) is linked with the speaker (or thinker), directing the student's attention to the character's words (or thoughts) that contain the essential learning information.Moreover, students could change the font size and underline or bold words that they considered important to understand the comics. The redundancy, spatial contiguity, and temporal contiguity principles were also applied to students' comics.Students, due to the limited time and space they had, did not include superfluous or unnecessary information that would impede learning.Furthermore, due to the available images the students could use, the included images and text representations did not overlap.In comics, text usually corresponds to characters' words.Therefore, it is placed close to the characters' images, inside the same frame.Thus, the student does not try to combine information presented in previous frames with the frame they currently see. Additionally, concerning the segmenting principle, students present the subject under study in frames, segmenting the information where they think it is appropriate, thus reducing the cognitive load they process.Segmentation, in our study, was student-paced, and therefore, it allowed the student to process as much information as he/she was able to, controlling the rate at which he/she received information.Some students may apply different segmentation than others depending on their abilities and prior experience.Further research is needed to determine the characteristics of this segmentation process. 
The pre-training principle was also applied in our study.Students, before making the comics for the subject under study, had already made comics about an issue they liked in order to be familiar with the comic creation environment.Further research will determine exactly how long this familiarization should be.Moreover, before students created their comics, they attended a presentation describing the issues of the learning subject.Teachers can also use comics created by other students to exemplify the process.Further research is also needed on this case. Finally, the modality, multimedia, personalization, and voice principles are applied to students' comics.Presenting words as spoken text empowers students to learn better.Words in comics are presented as a simulating speaking activity by the characters' mouths.Comics, by nature, follow the multimedia principle because they are a combination of text and images that can serve as a scaffold enabling students to construct their mental models.In comics, text is expressed by characters' words, which are included in the same frame as the characters' images.Therefore, the words are in a conversational style rather than a formal style, and they do not have mechanical speech characteristics but follow human speech. From the above, we see that the students' comics, along with the design of the activity (e.g., a comic strip with a maximum of three frames and limited available time) helped students apply these principles, positively affecting the cognitive load they experienced.Students need to create a clear and effective presentation of the information on the subject under study.Comics, by combining text and images and breaking the information into frames, allow students to benefit from the proven value of this combination and allow them to determine the way they access the necessary information. Limitations and Future Work Research on digital comics creation efficiency needs to be applied to other learning conditions as well.Since students can create digital comics using online applications, further research is needed to investigate how this way of creating digital comics could enhance learning.In addition, research in various disciplines with more participants of various ages and from different levels of education will shed more light on the impact of digital comics creation on students' MWL. It is worth noting that there were few cases of students whose MWL was particularly low or particularly high.Determining the cause requires further investigation.Students who experienced a high load may not have been experienced computer users, or the specific process requiring the creation even of a simple story may have been too taxing.This could potentially be solved if a comic was given with the first frame completed and the students had to continue its development, or if the students were given suggestions for the story beginning and chose one.Students might need more guidance to overcome the difficulties they might have faced with the software or the way the information was presented.Exemplifying the process could help them.Students who experienced a low load might have perceived digital comics creation as too simple compared with the applications they usually use outside of school.They might want an activity that could also incorporate other media, such as sounds, videos, or animations.Further research must be carried out to shed light on these issues. 
In addition, students might have been further motivated if they could create more complex comics or draw their characters' images.However, these comics may impose an additional load on students unrelated to their learning needs.Also, comics with more frames and, consequently, more information might need more signaling.Students might experience different MWL during these cases depending on their prior experience.A longer familiarization period and the use of comics created by other students might lessen the above differences and affect the way principles like segmenting and signaling are applied.Further research is needed to clarify the above cases. Finally, other instruments assessing the cognitive load, individual assessment of the various tasks that the digital comics creation consists of (e.g., the story creation, finding characters' images), or a dual-task methodology might reveal more information about the resources that burden students' working memory and thus affect the cognitive load. Conclusions In this paper, we investigated the educational efficiency of digital comics creation in a computer science course in a real classroom setting.Digital comics creation was found to help students in their knowledge construction attempts.This finding is compliant with prior research stating that deep understanding occurs when students are encouraged to engage in productive learning activities [52].During digital comics creation, the students process the subject under study and present its issues in a new format-the comic format.Hagaman and Reid [57] concluded that a paraphrasing strategy could lead to reading comprehension improvement, and Leopold and Leutner [53] stated that students learned better with pictorial summaries. We also used the six NASA-TLX dimensions (mental demand, physical demand, temporal demand, effort, performance, and frustration) to assess the MWL and obtain knowledge of what causes this load.The average load and the loads on most dimensions imposed by digital comics creation were normal.The load on the performance dimension was high, indicating that students considered creating successful comics very important.The load on the physical demand dimension was very low since digital comics creation does not require physical strain on behalf of the students.Additionally, digital comics creation required low to normal mental demand and effort.Also, the frustration and temporal demand were the most aggravating on the students' attempts. Moreover, comics display information using the content forms of text and images.Mayer [2], stating that multimedia can be as simple as a still image with words, presented some principles that educational materials containing different content forms must satisfy in order to help students to construct knowledge.Students, as comic creators, can apply these principles to their comics, reaping the corresponding benefits.If digital comics creation is combined with appropriate instructional design, students' efforts can be facilitated and become more effective. 
Finally, our study, despite its limitations, shows that digital comics creation can help students in a real classroom achieve learning goals. Offering students an active role in their own learning process might stimulate them to invest greater effort and thus improve their learning. The appropriate use of ICT can help teachers transform the learning environment into a student-centered one [1,58,59], and digital comics creation might contribute to this. Of course, further research is needed to replicate the results and to determine the parameters that will enhance the positive effect of digital comics creation.
9,118.2
2023-07-05T00:00:00.000
[ "Education", "Computer Science" ]
Classification of Generation By Population by Region in Indonesia Using K-Means Algorithm Population groups can be classified into several generations according to the year of birth of the population. This classification is important because each generation has different characteristics and traits. This research was conducted to group the provinces of Indonesia according to the population size of each generation. The analysis uses population census data collected by the central statistics agency (BPS). The generation grouping uses the K-Means algorithm with 3 clusters. Based on the calculations carried out for the 3 clusters, cluster 1 has 25 provinces, cluster 2 has 3 provinces, and cluster 3 has 6 provinces. Based on the 2020 census, generation Z is currently the largest group in the population, while the Pre Boomer generation is the smallest, so the available data can provide a mapping of the 34 provinces that helps improve communication patterns between generations and supports the provision of public facilities that every generation can use.

Introduction The 2020 population census conducted by the central statistics agency from February to September 2020 [1] recorded an Indonesian population of 270,203,917 people, whose distribution can be classified into generations according to the year of birth of the population. Based on the results of the 2020 census, Indonesia's population is dominated by generation Z, born between 1997 and 2012, followed by the millennial generation, born from 1981 to 1996. The classification of the population groups follows the literature of William H. Frey. Understanding the generational composition of each region can support a good communication process between generations. From this background, generation grouping is needed to make it easier to know how the generation clusters are distributed across the provinces of Indonesia. Based on the above assessment, several methods from previous research can be used for the clustering process [2][3][4]. Clustering is a data analysis method used to solve problems by grouping data [5][6][7]. For the calculation process, we use the K-Means method, a data mining algorithm for grouping data [8]. In this research, the data used are divided into post generation Z, generation Z, millennial, generation X, boomer, and pre boomer groups for the populations of the 34 provinces, and the provinces are grouped by whether the generations very much dominate, dominate, or dominate less.

Research and Methodology The research uses the complete demographic records of Indonesia available on the BPS website for the 2020 census data, together with references related to the problem from books and related journals in order to obtain problem-solving approaches, and applies the K-Means algorithm in the calculation process based on 3 specified clusters: very dominating, dominating, and less dominating.

Data Collection Stages In the data collection process, the researchers took data from secondary sources, namely the population census records compiled from February 2021 to September 2021, collected online or by BPS officers; the data can then be accessed on the BPS website.
Stages of Data Processing and Analysis The generation clustering in 34 provinces that have been obtained will be processed first to be able to determine a cluster. The clustering process divides into 3 classes based on the data provided. Then the data is analyzed by calculating the weight of each index by selecting a randomly selected centroid number for the cluster. Stages of Application of K-Means Algorithm Method To be able to complete the K-Means algorithm several stages can be done including a) Determining the number of clusters formed from available data is 3 clustering: Very domineering, Dominating, and Less domineering. b) Determining cluster values randomly, for initial data the specified value comes from West Sumatra Province, Riau Islands Province, and South Kalimantan Province. The results of the cluster value determination can be seen in table 2. c) From each line that has been calculated, determine the cluster closest to the center of the cluster. This stage can be seen in table 3. d) Determining the value for the center of the latest cluster to perform recalculation from the initial stage until the overall data from each cluster that we have no change back then the final result can be obtained and we can find out the number of clusters. This can be seen from the processing results with Rapidminer in figures 1,2 and 3. Results and Discussion To conduct the process of grouping generation classification in the territory of Indonesia is done first with the selection of centroid data conducted randomly from 33 provinces from data obtained from BPS. After determining the centroid center then calculated based on the available data so that 3 clusters were obtained and determined the closest distance from the centroid center and the value of the cluster for each provincial data. The results of the calculation can be seen in table 3. To perform the calculation process with the Rapidminer application, the data we have is carried out the import process into the application by adjusting the data type and determination of the id, as seen in figure 1. Figure 1. Transformation Data Process After doing the process of reading the data, the next step is to determine the results of clustering, with K = 3 in the RapidMiner application, thus producing the cluster data output in figure 2. Conclusion Based on the results of research that has been done can be drawn conclusions: a. K-Means algorithm used is able to map generation clustering into 3 clusters, namely the dominant cluster has 25 provinces, the dominant cluster has 6 provinces and the non-dominant cluster has 3 provinces obtained from 34 provinces in Indonesia. b. From the results of the research that has been done, researchers suggest that further research be conducted to provide public facilities owned by a province that can be accessed by every generation.
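For illustration, the clustering workflow described in this paper can be sketched in Python with scikit-learn as follows; the generation counts are synthetic placeholders for the BPS census figures, and the resulting cluster labels would still need to be mapped to the very dominating / dominating / less dominating categories.

```python
# Minimal sketch: group 34 provinces into 3 clusters by generation counts.
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans

GENERATIONS = ["post_gen_z", "gen_z", "millennial", "gen_x", "boomer", "pre_boomer"]

# Synthetic stand-in for the BPS census data (one row per province).
rng = np.random.default_rng(42)
data = rng.integers(50_000, 5_000_000, size=(34, len(GENERATIONS)))
df = pd.DataFrame(data, columns=GENERATIONS)

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(df[GENERATIONS])
df["cluster"] = km.labels_
print(df["cluster"].value_counts())  # number of provinces per cluster
```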
1,444.8
2021-12-30T00:00:00.000
[ "Computer Science" ]
Electronic And Nonlinear Optical Features of Inorganic Ga12N12 Nanocage Decorated With Alkali Metals (Li, Na and K) The effect of alkali metals (Li, Na and K) interaction on the nonlinear optical response (NLO) of Ga 12 N 12 nanocage has been performed using density functional theory (DFT) calculations. The results show that the exo-M@Ga 12 N 12 structures are energetically favorable with negative interaction energies in the range of ‒ 1.50 to ‒2.28 eV. The electronic properties of decorated structures are strongly sensitive to interaction with the alkali metals. The HOMO-LUMO gap of Ga 12 N 12 is reduced by about 70% due to the decoration with alkali metals. It is obtained that the adsorption of alkali metals over the tetragonal ring of Ga 12 N 12 nanocage remarkably enhances the first hyperpolarizability up to 6.5×10 4 au. The results display that decorating Ga 12 N 12 nanocage with alkali metals can be introduced it as a novel inorganic nanomaterial with significant NLO properties. decorated structures are ‒ 1.62 (Na@fN) > ‒ 1.60 (Na@R4) > ‒ 1.50 (Na@R6) > ‒ 0.72 (Na@inside) as well as ‒ 1.691 (K@R4) > ‒ 1.679 (K@R6) > +0.281 eV (K@inside). The results show that the most stable complexes of lighter alkali metals (Li, Na) are formed by the interaction of their adsorption with the electronegative N atom of the Ga 12 N 12 nanocage. Introduction Since the discovery of C60 fullerene in 1985 by Kroto et al. [1], many theoretical and experimental researches have been done on fullerene carbon systems. Carbon fullerenes Cn (20 < n < 60) have been synthesized experimentally and have attracted much attention. Detailed studies of the carbon clusters are important in many applied fields, such as astrophysics, interstellar chemistry, electronics, and combustion processes [2][3][4][5]. The discovery of interesting properties and applications of carbon fullerenes led to extensive research in the design and synthesis of inorganic fullerenes Therefore, a variety of different inorganic-based fullerene-like nanocages have been reported. The design and synthesis of novel materials with excellent nonlinear optical (NLO) properties has attracted great interest in experimental and theoretical fields over the past several decades due to their potential application in optical, electro-optical devices, optical switching and other laser devices [22][23][24][25][26][27][28][29]. Among many strategies for enhancing the NLO response of materials, introducing the diffuse excess electron, such as alkali metals, proposed an efficient approach to improve the NLO properties of different systems. The excess electron is a kind of special anion with dispersivity, which plays an important role to improve the NLO properties of different systems and the first hyperpolarizability (β0) [30,31]. In addition to their various other applications, they are potential candidates for materials with large nonlinear optical response [32][33][34][35][36][37][38][39][40][41]. In this paper, we study the decorated Ga12N12 structures with alkali metals both exohedrally and endohedrally. The effect of decorating in different sites of the nanocage on NLO properties is investigated in detail. 
Computational Details All calculations are performed using the Gaussian 09 quantum chemistry code [42] with default convergence criteria; the SCF convergence criterion is set to 10^-8 Hartree on the density (SCF=Tight), the convergence of geometry optimizations is set to maximum and root-mean-square (RMS) forces of 4.5×10^-4 and 3.0×10^-4 Hartree·Bohr^-1, respectively, and maximum and RMS displacements of 1.8×10^-3 and 1.2×10^-3 Bohr, respectively. The geometries of all considered structures are fully optimized at the B3LYP/6-31+G(d) level of theory, and the nature of the stationary points is checked by frequency analysis at the same computational level. The spin-unrestricted approach is applied to describe the geometry optimization, electronic structure, and NLO properties of M@Ga12N12 (M = Li, Na, K), whereas the restricted approach is used for the isolated clusters. The corresponding <S^2> values for the spin-unrestricted approach are in the range of 0.752−0.754 for these structures, which is very close to the value of 0.750 for a pure doublet state, indicating that the spin contamination is negligible and the computational results are reliable. The first static hyperpolarizability is evaluated using the analytical Coulomb-attenuated hybrid exchange-correlation functional CAM-B3LYP and the 6-31+G(d) basis set. The magnitude of the applied electric field is chosen as 0.001 au for the calculation of the hyperpolarizability. The interaction energy (Eint) between the alkali metal and the nanocage is computed as

Eint = E_M@nanocage − (E_nanocage + E_M),

where E_M@nanocage is the total energy of the M@nanocage complex, and E_nanocage and E_M are the energies of the isolated nanocage and the alkali atom, respectively.

Energies of the frontier molecular orbitals The energy of a system in a weak and homogeneous electric field can be written as [43,44]

E = E^0 − μ_α F_α − (1/2) α_αβ F_α F_β − (1/6) β_αβγ F_α F_β F_γ − ...,

where E^0 is the total molecular energy without the electric field and F_α is the electric field component along the α direction; μ_α, α_αβ, and β_αβγ denote the dipole moment, the polarizability, and the first hyperpolarizability, respectively. The dipole moment (μ), mean polarizability (α), anisotropy of polarizability (Δα), and first hyperpolarizability (β0) are defined as

μ = (μ_x^2 + μ_y^2 + μ_z^2)^(1/2),
α = (α_xx + α_yy + α_zz)/3,
Δα = {[(α_xx − α_yy)^2 + (α_yy − α_zz)^2 + (α_zz − α_xx)^2 + 6(α_xy^2 + α_yz^2 + α_xz^2)]/2}^(1/2),
β0 = (β_x^2 + β_y^2 + β_z^2)^(1/2), with β_i = β_iii + β_ijj + β_ikk (i, j, k = x, y, z).

The polarizability (α) is a second-rank tensor, a 3 × 3 matrix with 9 elements. The diagonal elements describe the response parallel to the applied electric field, and their values α_ii (i = x, y, z) are used to calculate the mean (isotropic) polarizability α. Some materials also become polarized in directions perpendicular to the applied electric field; the anisotropy of polarizability (Δα) is therefore calculated from the diagonal and off-diagonal elements of the polarizability tensor according to the expression above. The first hyperpolarizability (β0) is a third-rank tensor, a 3 × 3 × 3 matrix with 27 components, and is known as the nonlinear optical response (NLO) coefficient. Time-dependent density functional theory (TD-DFT) calculations were performed with the CAM-B3LYP/6-31+G(d) method to obtain the crucial excited states and the differences of dipole moments between the ground state and the crucial excited state.

Optimized structures The geometry of the inorganic Ga12N12 nanocage was optimized at the B3LYP level (bond lengths in Å; cf. [19]). The bond lengths of the b64 bonds are larger than those of the b66 bonds, indicating that greater p orbital participation is responsible for the lengthening of the b64 bonds relative to the b66 bonds.
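As a practical illustration, the scalar descriptors defined above can be assembled from the tensor components printed by a quantum-chemistry package. The short Python sketch below assumes the common definitions written above (including off-diagonal terms in Δα); the component names are placeholders, and no actual Gaussian output parsing is implied.

```python
# Sketch: build the scalar NLO descriptors from tensor components.
import math

def mean_polarizability(a):
    """a: dict with keys 'xx', 'yy', 'zz', 'xy', 'yz', 'xz' (atomic units)."""
    return (a["xx"] + a["yy"] + a["zz"]) / 3.0

def polarizability_anisotropy(a):
    return math.sqrt(((a["xx"] - a["yy"])**2 + (a["yy"] - a["zz"])**2 +
                      (a["zz"] - a["xx"])**2 +
                      6.0 * (a["xy"]**2 + a["yz"]**2 + a["xz"]**2)) / 2.0)

def first_hyperpolarizability(b):
    """b: dict of beta components such as 'xxx', 'xyy', 'xzz', ..."""
    bx = b["xxx"] + b["xyy"] + b["xzz"]
    by = b["yyy"] + b["yxx"] + b["yzz"]
    bz = b["zzz"] + b["zxx"] + b["zyy"]
    return math.sqrt(bx**2 + by**2 + bz**2)
```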
The distance between the nitrogen atoms (lN-N) and the gallium atoms (lGa-Ga), that is, the diameters of the four-membered rings in Ga12N12, is 2.755 and 2.621 Å, indicating that the four rings are rhombic, not square. The HOMO and LUMO distribution pictures of the pristine Ga12N12 nanocage are also shown in Fig. 1. It is clear that HOMO orbitals are on nitrogen atoms and LUMO orbitals are on gallium atoms. In other words, nitrogen atoms are electron donors due to having non-bonded electron pairs and gallium atoms are electron acceptors due to having empty orbitals. The present computational study is performed to understand the influence of alkali metals interaction with Ga12N12 nanocage on its electronic and nonlinear optical properties. The most important atomic distance (l) and bond lengths of all structures are listed in The interaction energies (Eint) of alkali metals decorated Ga12N12 nanocage are computed as Eq. (1). The obtained interaction energy values are listed in Table 1 Electronic properties The obtained frontier molecular orbital energies (εL and εH), energy gap (Eg), percentage of variation of Eg (%ΔEg) and dipole moment of all studied structures are listed in Table 1 Therefore, it can be concluded that the interaction of alkali metals with Ga12N12 leads to the formation of a higher energy level as the location of the new HOMO level between the original HOMO and LUMO of the pristine Ga12N12, which is responsible for significant narrowing of energy gap. The picture of FMOs is displayed in Fig. 4. In these pictures, the presence of the LUMOs on the alkali metal is observed and it shows that the charge transfer (CT) from the alkali metal atoms to the nanocage has taken place. Linear and nonlinear optical properties The calculated dipole moment (μ), polarizability (α), the first hyperpolarizability (β0) and its components (βx, βy, and βz), anisotropy of polarizability (Δα) and the Bader charge of metal atom of considered structures are listed in Table 3. Table 3 and Fig. 5 The diffuse excess electron release from alkali metals into the Ga12N12 causes a large NLO response. And this diffuse of electrons from outside the nanocage is much more effective in the position of the R4 ring and front of nitrogen atom. It is noteworthy that the interaction in these two positions causes the highest amounts of negative interaction energy (Eint) and increased the first hyperpolarizability. In other words, complexes with significant NLO response have the highest polarizability (α) and the most interaction energy among the studied complexes. Therefore, it can be confirmed that a more stable system can show remarkable first hyperpolarizability. The ionization potential energy (IPE) of alkali metal atoms (Li, Na, and K), the interaction distance and the position of them with the Ga12N12 nanocage are important factors in the diffuse excess electron to the nanocage. Heavier alkali metals have easier electron diffusion into the nanocage and cause a large NLO response. And also, the interaction distance between the alkali atom and the nanocage, plays a very important role in the amount of electron transfer, and the shorter the interaction distance leads to a greater NLO response. These factors challenge with each other. The interaction distances of the structures considered in Table 1. Among studied structures, some of them lead to insignificant NLO response (M@inside). 
The results of this table indicate that the shortest interaction distance is belonged to Li metal, but the greatest values of β0 are related to Na@R4, K@R4, Na@fN and Li@fN structures. Therefore, the ionization potential, interaction distance and the position of alkali metals with Ga12N12 play an essential role in changing the NLO response of structures. TD-DFT calculations To investigate the decorating effect of an alkali atom and its position on β0 value of Ga12N12, the time-dependent density functional theory (TD-DFT) computations were performed to obtain the crucial excited states of the all systems and the two levels of expression can be used [45][46][47]: where ΔE, f0, and Δμ are the crucial transition energy, the largest oscillator strength, and the difference of dipole moment between the ground state and the crucial excited state (the excited state with the largest oscillator strength), respectively. Our computed results are listed in Table 4. The maximum absorption wavelength (λmax) and dominant transitions of all studied systems are also given in Table 4. Pristine Ga12N12 nanocage has an electron excitation with 3.18 eV which its wavelength appear at 390.1 nm in near-UV region as displayed in Fig. 6. The transition energies (ΔE) of decorated nanocage are in the range of 1.13 to 1.75 eV and are much smaller than the value of the Ga12N12 structure. According to eq. 5, the β0 value is inversely proportional to the third power of transition energy, so the β0 value increases with decreasing transition energy. Also, the β0 value is proportional to the values of f0 and Δμ. The data in Table 4 show that the f0 and Δμ for all decorated structures are larger than the pristine Ga12N12. The UV-Visible spectrum of all structures is plotted in Fig. 6. Therefore the obtained results show that the decoration of Ga12N12 nanocage with alkali metals could be introduced as an effective strategy to induce remarkable first hyperpolarizability and it could be considered as promising innovative nonlinear optical inorganic-based nanomaterial. Compare with previous reports In the study of the effect of alkali metals on the nonlinear optical properties of nitride of other elements of this group (B12N12 and Al12N12), several references can be mentioned [15][16][17]. In 2014, Niu et al. investigated the effect of alkali metals on NLO properties of inorganic Al12N12 nanocage [15], and they reported that the excess electron play a key role in enhancing the static first hyperpolarizability of Al12N12 nanocage. In 2016, Hou et al. [16] and Shakerzadeh et al. [17] separately studied the interaction of alkali metals with B12N12 nanocage on NLO properties of Mdoped structures. For comparison, significant values of β0 from the three references are listed in Table 5. Based on the data in Table 5, the interaction of alkali metals with the group (III) nitride nanocages (B12N12, Al12N12 and Ga12N12) has significantly increased the nonlinear optical response. Depending on the type of alkali atom, nanocages and the location of the alkali atom, in some structures the β0 value has increased by about 10 4 -10 5 au. Decoration of the inorganic nanocages with alkali metal atoms has improved the NLO properties of the decorated inorganic nanocages. This increase is due to the charge transfer and excess electron from the alkali metal atom to the nanocages. 
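For reference, the two-level expression invoked in the TD-DFT discussion above is usually written in the following standard textbook form (not a reproduction of the paper's own equation numbering), with ΔE, f0, and Δμ as defined in the text; it makes explicit why β0 grows as the transition energy decreases and as f0 and Δμ increase.

```latex
% Standard two-level approximation for the static first hyperpolarizability
\beta_0 \;\propto\; \frac{\Delta\mu \, f_0}{\Delta E^{3}}
```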
Conclusion The present DFT study on interaction of alkali metals with Ga12N12 nanocage shows that the electro-optical properties of Ga12N12 are remarkably sensitive to interaction with alkali metals. It is found these decorations narrow the HOMO-LUMO gaps of Ga12N12 nanocage. The results display that the interaction process are energetically favorable with negative interaction energies in the range of -1.50 to -2.28 eV. Indeed, it is found that the adsorption of alkali metals over the tetragonal ring of Ga12N12 nanocage remarkably enhances their first hyperpolarizability. Exohedrally adsorption of alkali atoms on Ga12N12 nanocage leads to significant increases in dipole moment, polarizability and remarkable NLO response. In particular, the adsorption of Na atom on nanocage (Na@R4) with Eint = -1.62 eV leads to outstanding NLO response of 64723.4 au. causes a dramatic response to NLO. It is indicated that the ionization potential of alkali metal atoms, interaction distance and the position of them with Ga12N12 nanocage are important factors in NLO response. It seems that it can be concluded that decorated Ga12N12 nanocage with alkali metals can be promising for the design and synthesis of novel NLO nanomaterial. Table 2. The interaction energy (Eint), HOMO and LUMO energies (ε H and ε L ) and energy gap (E g ) in eV, percentage of variation of E g (%ΔEg) and dipole moment in Debye of optimized structures Table 3. The calculated dipole moment (μ), polarizability (α), the first hyperpolarizability (β 0 ) and the Bader charge of metal atom in the considered structures Table 4. The calculated transition energy (ΔE), the difference of dipole moment (Δµ) between the ground state and the crucial excited state, the largest oscillator strength (f 0 ), maximum absorption wavelengths (λmax) and the dominated transition of all studied structures The optimized structure and HOMO-LUMO distribution of Ga12N12 nanocage The stable decorated structures of M@Ga12N12 (M=Li, Na, K) Figure 3 The total density of states (TDOS) spectrum of all studied nanocages Figure 4 The picture of FMOs of obtained Na@Ga12N12 and K@Ga12N12 structures The obtained values for μ, α, Δα and β0 of all decorated structures in terms the position and type of alkali atom Figure 6 The UV-Visible spectrum of all studied structures The maximum wavelength (λmax) of all studied structures in terms the position and type of alkali atom
3,455.2
2021-11-03T00:00:00.000
[ "Materials Science", "Physics" ]
Measurement and simulation of the relatively competitive advantages and weaknesses between economies based on bipartite graph theory The input-output table is very comprehensive and detailed in describing the national economic systems with abundant economic relationships, which contain supply and demand information among various industrial sectors. The complex network, a theory, and method for measuring the structure of a complex system can depict the structural characteristics of the internal structure of the researched object by measuring the structural indicators of the social and economic systems, revealing the complex relationships between the inner hierarchies and the external economic functions. In this paper, functions of industrial sectors on the global value chain are to be distinguished with bipartite graph theory, and inter-sector competitive relationships are to be extracted through resource allocation process. Furthermore, quantitative analysis indices will be proposed under the perspective of a complex network, which will be used to bring about simulations on the variation tendencies of economies’ status in different situations of commercial intercourses. Finally, a new econophysics analytical framework of international trade is to be established. Introduction Compared with firm surveys and fine industrial classification of trade, Input-Output (IO) tables enjoy more feasibility in measuring both standard and vertical trades. With the availability and utilization of global IO database, especially Inter-Country Input-Output (ICIO) tables, it is possible to construct quantitative indices to assess what degree of impact a particular sector in a country has made on the Global Value Chain (GVC). This is because it better captures the international source and use of intermediate goods than any previous databases. As a result, a large number of researchers propose distinct approaches to the measurement of sectors' function or status. Beyond all question, IO table as a quantitative technique of economic analysis presents the interdependencies between different branches of a national or regional economy in details. Its property of being in the form of checkboard enables it to reflect the movements of products or services within the whole economic system from production consumption to distributive utilization, which is actually the formation and distribution of values respectively. The dual identities of each sector on the network as the producer and consumer at the same time, demand it not only to produce and distribute providing inputs for the other sectors but also to consume inputs from other sectors to accomplish its own fabrication. This is indeed the inner identity proposed by Karl Marx. The sectors in the IO table could be regarded as nodes while interindustry value stream contributes to weighted and directed edges in the construction of network models. In consideration of both availability and authority, IO table is definitely the priority-first data format to establish mathematics model, e.g., it can show flows of final and intermediate goods and services defined according to industry outputs. In addition, it is provided as a matrix, which can be directly or with minor modification adopted as complex network's adjacency matrix, establishing weighted and directed networks.
From an empirical perspective, a handful of studies have characterized the structure of IO networks to better understanding the topology of inter-sector dependences and their repercussions on the industrial economics. For instance, Blöchl, et al. adopted TiVA database at OECD-WTO to establish 37 countries' IO networks and derived two indicators for weighted and directed networks, which are, random walk centrality to reveal the most immediately affected nodes by a shock based on Freeman's closeness centrality, and counting betweenness to identify the most accumulatively affected nodes based on Newman' random walk betweenness [1]. Kagawa, et al. proposed an optimal combinatorial method to find industries with large CO2 emissions through industrial relations based on IO table, depicting environmentally important industrial clusters in Japanese automobile supply chain [2]. McNerney, et al. studied the structure of inter-industry relationships using networks of capital flows between industries in 20 national economies, and found that these networks vary around a typical structure characterized by a Weibull link weight distribution [3]. Martha, et al. investigated how economic shocks propagate and amplify through the IO network connecting industrial sectors in developed economies [4]. With the development of IO database, related researches are based not only on independent national systems but also on multi-regional even global systems, with wide adoption of WIOD as the data source. For instance, Ando measured the importance of industrial sectors under the impact of American gross output in the global IO model [5]. Antràs, et al. derived two distinct approaches to measure industry upstreamness and prove their significant impact on trade flows [6]. Cerina, et al. analyzed the subgraph structure and dynamics attributions of a global network with community detection techniques, pinpointing the key industries and economic entities with PageRank centrality and community coreness [7]. Grazzini and Spelta set up the cost effect index to testify the robustness of global IO network and the interdependency of intermediate inputs in production [8]. Johnson and Noguera combined input-output and bilateral trade data to quantify cross-border production linkages and computed bilateral trade in value added [9]. Amador and Cabral applied visualization tools and measures of network analysis on valueadded trade flows in order to understand the nature and dynamics of GVC [10]. Xing, et al. established industrial complex network under the perspective of econophysics, and then analyzed the spreading effect in the form of economic shock [11], furthermore, they quantified the global industrial impact of countries on the GVC based on biased random walk process [12]. The present researches mainly mine the IO data from different aspects as an econophysics context implied in the form of networks but restricted to static analyzing endogenous variables ignoring the process of fining and refining of variables to maintain equilibrium, let alone providing measurements and advises on optimal control of the evolutionary tendency of industrial structures. Bipartite graph Bipartite graph, or bigraph, divides the vertex set of a simple graph G as two nonempty sets V 1 and V 2 with no intersection in between. Letting the two nodes relevant to each edge in G belonging to V 1 and V 2 respectively, it can be noted as G = (V 1 ,V 2 ,E), in which V 1 and V 2 are the bisections of G with E as the set of edges. 
For the bipartite graph G, if |V 1 | = m and |V 2 | = n, and there exists an edge between two vertexes, when and only when one of the nodes belongs to V 1 and the other belongs to V 2 , the graph can be referred to as the complete bipartite graph of vertexes m and n, noted as K m,n . The bipartite graph has a wide application in complex network analysis, including cooperation and competition networks (mainly dealt with through affiliation networks), for either cooperation or competition is the common existence in social networks consisting of people of units of people. Networks of scientists (authors and papers), patents declaration (patents and holders), commodity (goods and consumers), public transportation (routes and stops) etc. can all be clarified as affiliation networks to be digested as bipartite graphs in the manner of two-mode networks. Two kinds of vertexes exist in this kind of networks, one is that of participants and the other of objects. For the cooperation or competition, focusing on the interaction among vertexes of the same kind is the practical target in building two-mode networks, It is more than common to project the networks onto one kind of the vertexes (often those of participants) reaching a one-mode network. Through this projection, edges have been granted the property to reflect the relationship of cooperation or competition on the same object by two participants. This one-mode network obtained is called the complete subgraph of the object, as shown in Fig 1. In Fig 1, the squares in the up are the objects, while the lower circles are the participants, and the edges in black belong to the two-mode networks, while those in red to the one-mode networks contributing to complete subgraphs, as each of the edges is gained through projection of two edges in the two-mode networks. Most commonly, the one-mode network obtained through projection has edges of no weights. Yet recent researches on two-mode networks find that these weights could be gained through the definition of co-occurrences, which is the counting of the fellowship of two participants in the same object, say the number of papers of two scientists as co-authors. Newman made an extension of the process on scientist network [13], and Padrón believed that this modeling process could bring distinctive simulation on the potential cooperation or competition relationship [14]. Resource Allocation Process In order to minimize the information loss in the process of projection of two-mode networks, and also to take the scarcity of vertex into consideration, the Resource Allocation Process (RAP) is adopted in this paper as the algorithm of projection [15]. Let V 1 in G = (V 1 ,V 2 ,E) as the vertex set of participants, represented as P, and V 2 as that of the objects, represented as O, then a bipartite graph G = (P,O,E) is reached, in which, E is the set of edges, while vertexes in sets P and O are (p 1 ,p 2 ,Á Á Á,p n ) and (o 1 ,o 2 ,Á Á Á,o m ) respectively. The initial resource allocated to the i th participant is f(p i )!0. First of all, all the resources of P flow in the direction of O, and the resource allocation of the l th vertex in O is where, k(p i ) is the degree of p i , a il is a n×m matrix: With all the resource flown back to set P, the final distribution to vertex p i is This formula could be rewritten as: The resource allocation process of O!P is shown in Fig 3. 
The w^P_{ij} in Eq (4) can be written as

w^P_{ij} = (1 / k(p_i)) Σ_{l=1}^{m} a_il a_jl / k(o_l),   (5)

where w^P_{ij} is the relationship strength produced by the two resource allocation processes between p_i and p_j, so the adjacency matrix W^P = {w^P_{ij}}_{n×n} of the complete object subgraph can be constructed through RAP, as shown in Fig 4. In short, the core of RAP is to have resources distributed to each participant and object in the network, with w^P_{ij} representing the proportion of resources distributed from participant i to participant j through the objects: each participant distributes its resources equally among the objects it takes part in, and each object then redistributes the resources it has received equally back to its participants through the edges of the bipartite graph. Herein lies the fundamental difference between RAP and the traditional bipartite graph projection, as shown in Fig 5. RAP has the following three characteristics:

1. The adjacency matrix W^P of the complete object subgraph is asymmetric, with w^P_{ij} / k(p_j) = w^P_{ji} / k(p_i).
2. As two participants jointly take part in the same objects multiple times, their relationship strength rises quickly and then saturates.
3. The relationship strength between two participants is determined not only by the number of times they jointly take part in the same object, but also by how many other participants that object has.

RAP can be further extended to bipartite graphs with weighted edges, where resources are no longer distributed equally and the weight represents the degree of membership of a participant vertex in an object vertex. The formula becomes

w^P_{ij} = (1 / s(p_i)) Σ_{l=1}^{m} w_il w_jl / s(o_l),

where s(p_j) is the strength (total edge weight) of participant node p_j, s(p_j) = Σ_{l=1}^{m} w_jl, and s(o_l) is defined analogously for object node o_l.

IO analysis using bipartite graph

The IO table is good at presenting the complicated interdependence among industrial sectors from a global perspective, with a clear record of the amount of resources one sector gains from its upper-stream sectors. Research on the IO table therefore mainly takes advantage of depicting the topological structure of the economic system through measurements on intermediate products, as an indication of input and output relationships, so as to shed light on the rules of value flows and on industrial structural features. Viewed from the perspective of bipartite graphs, the rows of the IO table indicate the supply from upper- to lower-stream industrial sectors and the columns indicate the demand from lower- to upper-stream ones, so the IO table is obviously proficient in showing the cooperation or competition relationships among different industrial sectors. Yet no coopetition relationship among industrial sectors can be reflected through direct structural measurement on the IO network, and adequate matrix transformations have to be introduced for this goal. Porter proposed that the nature of competitive strategy is building the relations between a corporation and its environment [16]. Indeed, if more than one supplier or consumer exists for a single industrial sector, cooperation or competition shows up, for the scarcity of resources limits any flow of intermediates from upper- to lower-stream sectors. This notion is defined and extended from Porter's competitive advantage, rather than from the purely conceptual "competition" of economics.
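Continuing the same toy example, a minimal sketch of how the matrix W^P of Eq (5) could be assembled, together with a check of the asymmetry property in characteristic 1; it covers only the unweighted RAP and uses invented data.

```python
import numpy as np

a = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1],
              [0, 0, 1]], dtype=float)
k_p, k_o = a.sum(axis=1), a.sum(axis=0)

# Eq (5): w[i, j] = (1 / k(p_i)) * sum_l a_il * a_jl / k(o_l)
W_P = (a / k_p[:, None]) @ (a / k_o[None, :]).T

# Characteristic 1: W_P is asymmetric, but w_ij / k(p_j) = w_ji / k(p_i).
lhs = W_P / k_p[None, :]
print(np.allclose(lhs, lhs.T))   # True
```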
Traditional IO theory uses direct consumption and complete consumption coefficients to express this scarcity, with influence and reaction coefficients describing the relations between an industrial sector and its environment. Yet it still bears the shortcoming that its focus is restricted to the linear technical-economic relationships among industrial sectors and between gross output and final use, neglecting the scarcity of productive resources as a constraint on cooperation and competition relationships. This paper is therefore devoted to a modeling analysis of IO data with bipartite graph theory, aiming at recovering the competition relationships among lower-stream industrial sectors. It should be noted that the flows in a bipartite graph run from the participant nodes to the object nodes, whereas the flows in IO networks run in exactly the opposite direction, from the upper-stream sectors to the lower-stream ones, in order to show the value flows. Thus, although the following analysis is based on the complete object subgraph of a bipartite graph, the flows on its edges are in the opposite direction to those of the bipartite graph. The upper-stream sectors in the IO table are referred to as the object nodes of the bipartite graph, and the lower-stream ones as the participant nodes, as shown in Fig 6(a). The squares A, B and C indicate the upper-stream industrial sectors, and the circles the lower-stream ones; ab, ba, ac, ca, bc and cb are the IO values between the upper- and lower-stream industrial sectors, in other words the weights on the edges of the bipartite graph, while aa, bb and cc indicate the input of an industrial sector's own products into itself, i.e., the weights on its self-loop. When constructing a bipartite graph from IO data, the most important point is to format all the data and the information they carry so as to be applicable to the two-mode network. Three fundamental conditions need to be distinguished:

1. A provides production resources simultaneously to B and C, with the IO values giving a quantitative measurement of the process, as shown in Fig 7(a). Competition arises between B and C, and it intensifies if they share more upper-stream providers. A single competition strength can be defined in the course of the projection, and the multiple competition strength will certainly be larger than the single one, because it is intensified when more than one provider such as A exists for both B and C.
2. A provides production resources simultaneously to B and, as feedback, to itself, as shown in Fig 7(b). The single competition strength depends on ab and aa, but there is no multiple competition between A and B, for A cannot be multiplied in this case.
3. A provides production resources only to itself as feedback, as shown in Fig 7(c), and no competition exists.

So, whenever an industrial sector shares with another sector one or more upper-stream industrial sectors as providers of production resources, edges depicting their competition relationship exist in the complete object subgraph. The three fundamental conditions discussed above coexist interdependently in IO networks, which hampers traditional methods from reconstructing the direct and indirect competition among industrial sectors. RAP is therefore adopted in this scenario to implement the projection from the upper-stream industrial sectors (objects) onto the lower-stream ones (participants).
Underlying database

With the advent of ICIO databases, it has become theoretically and empirically possible to analyze the GVC, which is composed of abundant international and domestic industrial value chains, because such tables provide globally consistent bilateral trade flows and allow comparison of production networks in different regions. As a sort of value-type IO table, WIOD was chosen as the underlying database because it provides time-series data for 40 independent countries/regions (with Taiwan as an inalienable part of Chinese territory) plus the rest of the world (RoW), covering the period from 1995 to 2011. These tables have been constructed within a clear conceptual framework on the basis of officially published IO tables in conjunction with national accounts and international trade statistics [17]. Furthermore, WIOD contains three different types of data, the World Input-Output Table (WIOT), the Regional Input-Output Table (RIOT) and the National Input-Output Table (NIOT), all of which are value-type IO data. Since this paper pays more attention to theoretical rather than empirical analysis, the RIOT of WIOD was chosen to establish the industrial complex networks and to analyze the GVC constituted by worldwide industrial chains. The RIOT covers 6 economic entities, namely the Eurozone, the other EU (non-Eurozone) countries, the North American Free Trade Agreement (NAFTA), China, East Asia and BRIIAT, as shown in Table 1, as well as RoW (detailed names and abbreviations of the sectors in the RIOT of WIOD are given in S1 Table).

Modeling

The RIOT data of the WIOD database are chosen as the source of modeling data for this paper. The proposed modeling process includes two steps. The first is to set up the intermediate input matrix so as to portray the topological structure of the global economic system from the IO table. The second is to mine the direct and indirect competition relationships among the industrial sectors of the economic entities based on bipartite graph theory and the RAP method.

GIVCN-RIOT model

In order to establish an industrial complex network, a sector within a region is considered as a node and an inter-industry IO relationship as a tie, whose weight represents the sale and purchase relationship between producers and consumers. Thus a graph G = (V, E, W) containing n nodes is created, representing the sectors within a nation or region, denoted as the node set V. Pairs of nodes are linked by ties reflecting their interdependencies, constituting an asymmetric tie set E; in valued graphs, the set E can in fact be replaced by the weight set W.

Table 1. Economic entities in the RIOT of WIOD (columns: Regions, Abbr., Countries).

3. Self-loops are abundant among the nodes of the GIVCN-RIOT model, sometimes with very large edge weights, which clearly indicates that consuming one's own products as intermediates for production is common for many industrial sectors.

GIVCN-RIOT-BIPARTITE model

With a data structure that encloses the competition and cooperation relationships among industrial sectors, the GIVCN-RIOT model reveals the mechanism of creation, distribution, transfer and addition of value on the GVC. Classical IO analysis adopts the direct consumption and complete consumption coefficient matrices to show the direct and indirect technical-economic relationships among industrial sectors, before using influence and reaction coefficients to measure the pulling effect and demand intensity of one sector on another.
2. Edges are directed from the upper-stream industrial sectors to the lower-stream ones, indicating the flow directions of the intermediates. Edges between the two categories of nodes form the edge set E′. The self-loop of each node reflects an industrial sector's consumption of part of its own output as input, so industrial sectors with self-loops are considered in this paper to be the upper- or lower-stream industrial sectors of themselves; that is, the object node has an edge to its shadow node in E″, and E = E′ ∪ E″.
3. Similar assumptions apply to the sets of weights. The set of weights between the upper- and lower-stream industrial sectors is W′, and that of the industrial sectors' consumption of their own output as input is W″. The weight set of the whole network is W = W′ ∪ W″. Among N−1 competitors, the lower-stream industrial sector i obtains the amount w′_li of intermediates from its upper-stream industrial sector l, while the amount of self-consumption of its own output is denoted w″_li (when l = i, the upper- and lower-stream industrial sectors are practically the same one).

Based on the above assumptions, the GIVCN-RIOT model is transformed from a simple graph G = (V, E, W) into a bipartite graph G = (O, P′, P″, E′, E″, W′, W″), which is named the GIVCN-RIOT-BIPARTITE model, as shown in Fig 10. Nodes drawn as rectangles in Fig 10 represent the set of object nodes O composed of the upper-stream industrial sectors, and those drawn as dots represent the set of participant nodes P composed of the lower-stream ones. A distinction has also been made between the two categories of participant nodes; e.g., NAFTA35′ represents the self-consumption of its own output by the NAFTA sector "Private Households with Employed Persons". Edges exist only between nodes of different categories in the GIVCN-RIOT-BIPARTITE model, and the self-loops of the GIVCN-RIOT model become edges between nodes and their own shadow nodes.

GIRCN-RIOT model

With the global economic system described by the GIVCN-RIOT-BIPARTITE model, the lower-stream industrial sectors consume the limited output of the upper-stream ones, which reflects the scarcity of production resources. When several lower-stream industrial sectors rely on the same upper-stream sector as the feeder of production resources, this scarcity turns into competition relations among the lower-stream industrial sectors. With the help of the projection algorithm RAP, the competitive relations implied in the GIVCN-RIOT model can be shown by its complete object subgraph, and the projection formula is as follows, where w_l is the gross output of the upper-stream industrial sector l, numerically equal to the output weight of industrial sector l in the GIVCN-RIOT-BIPARTITE model, and w^P_ij is the competitive strength of industrial sector i against j, both of lower-stream status, when they compete for intermediates from a common upper-stream industrial sector as production resources; these strengths constitute the edge weight set W^P = {w^P_ij}, i,j ∈ {1,2,···,N}. The edge e^P_ij connecting node v_i to v_j in the complete object subgraph depicts how sector i, by obtaining intermediates from its upper-stream sectors, has influenced the benefit of sector j, with the edge weight w^P_ij indicating the degree of that influence. The entries on the diagonal of matrix W^P are set to zero, because it is the competitive relations among different industrial sectors that are analyzed in this paper.
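As a rough, self-contained sketch of the first modeling step described above, the snippet below assembles a GIVCN-style weighted directed graph (including self-loops) from a small invented intermediate-flow block; the sector labels and values are hypothetical, and the subsequent RAP projection onto the lower-stream sectors is not repeated here.

```python
import numpy as np
import networkx as nx

# Hypothetical intermediate-input block: rows = supplying (upper-stream) sectors,
# columns = using (lower-stream) sectors; diagonal entries are self-consumption.
sectors = ["EURO01", "EURO02", "CHN01", "CHN02"]
Z = np.array([[3.0, 1.0, 0.5, 0.0],
              [0.5, 4.0, 1.0, 0.5],
              [1.0, 0.0, 5.0, 2.0],
              [0.0, 0.5, 1.5, 3.0]])

G = nx.DiGraph()
G.add_nodes_from(sectors)
for l, src in enumerate(sectors):          # upper-stream sector l
    for i, dst in enumerate(sectors):      # lower-stream sector i
        if Z[l, i] > 0:
            G.add_edge(src, dst, weight=Z[l, i])   # self-loop when src == dst

print(G.number_of_edges(), list(nx.selfloop_edges(G)))
```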
It is easy to notice that there are obvious agglomerations in the GIRCN-RIOT-2011 model and that competition mainly exists among industrial sectors within economic entities. The integration between the Eurozone countries and the other EU ones is comparatively more intensive. Compared with the GIVCN-RIOT-2011 model, the "Private Households with Employed Persons" sectors of the Euro and non-Euro regions (EURO35 and OEURO35) become detached from the largest connected component, because these two sectors are only source nodes at the front of the network and have no competition with any other industrial sector. Moreover, although all the shadow nodes belong to the set of participant nodes, no connecting edges remain for this sort of node after the projection. This paper emphasizes the competitive relations among industrial sectors, so the connecting edges between shadow nodes and original nodes can also be ignored, thereby eliminating the influence of an industrial sector's consumption of its own output upon its own benefit. Thus there are no shadow nodes in the GIRCN-RIOT-2011 model.

CAI and CWI

The edge weight set W^P of GIRCN-RIOT indicates the direct and indirect competitive relations among industrial sectors. Note that this competitive relation is directed; e.g., w^P_ij is the competitive strength of industrial sector i against j, while w^P_ji is that of the opposite direction. This paper therefore defines the summation of the competitive strengths exerted by an industrial sector as its Competitive Advantage Index (CAI), and the summation of the competitive strengths it is subjected to as its Competitive Weakness Index (CWI). From the perspective of complex networks, CAI and CWI are the out-strength S^OUT and in-strength S^IN of the nodes in GIRCN-RIOT, calculated as

CAI_i = S_i^OUT = Σ_{j≠i} w^P_ij,  CWI_i = S_i^IN = Σ_{j≠i} w^P_ji.

The concept of node strength covers not only the information of the node degree but also the weights of its incident edges, making it an integration of local information on the network. CAI and CWI serve in this paper as benchmarks of the competition of industrial sectors on the GVC, taking into consideration both the scale and the intensity of competition (cumulative distribution data of out-strength and in-strength are in S2 Table). Cumulative distributions of both out-strength and in-strength are shown in Figs 12 and 13. Judging from the cumulative distribution shown in Fig 12, there is a significant difference between CAI and CWI, and the former has a more uneven distribution than the latter (CAI and CWI share the same mean value of 0.755, yet the standard deviation of CAI is 0.594 and that of CWI is 0.126; correlation data of CAI and CWI are in S2 Table). It can be concluded from Fig 13 that there is no correlation between CAI and CWI, indicating that they are determined by the structure of the GVC and the positions of industrial sectors on it, for there is no necessary connection between the competitive advantages and weaknesses of industrial sectors.
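A minimal sketch of the two indices, assuming a competition matrix W^P with zero diagonal has already been obtained from the projection; the numbers are invented for illustration.

```python
import numpy as np

# W_P[i, j]: competitive strength of lower-stream sector i against sector j,
# with zeros on the diagonal (assumed already obtained from the projection).
W_P = np.array([[0.0, 0.3, 0.1],
                [0.2, 0.0, 0.4],
                [0.5, 0.1, 0.0]])

CAI = W_P.sum(axis=1)   # out-strength: competition exerted by each sector
CWI = W_P.sum(axis=0)   # in-strength: competition borne by each sector
print(CAI, CWI)
```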
NCAI and NCWI

On the basis of CAI and CWI, the notions of the National Competitive Advantage Index (NCAI) and the National Competitive Weakness Index (NCWI) are introduced: for each economic entity, NCAI(t) and NCWI(t) aggregate the CAI and CWI, respectively, of the industrial sectors belonging to that entity. Economic globalization has shown that the comparative advantage of classical economic theory cannot fully explain the success and failure of the industrial sectors of economic entities in the global environment, and scholars have begun to investigate the source and formation of competitive advantage from the perspective of the value chain [18]. The CAI and CWI proposed in this paper are intended to show the competitive status of industrial sectors on the GVC from the viewpoint of econophysics, by evaluating the strengths with which lower-stream industrial sectors compete for the limited supply of intermediates from upper-stream industrial sectors. In this way, NCAI and NCWI can be used as indices of an economic entity's competitive strength on the GVC.

Time series analysis

The WIOD database provides RIOT data for 1995-2011, covering 17 years. NCAI and NCWI of each economic entity have been computed from the GIRCN-RIOT models in this paper, giving the time series trends shown in Figs 14 and 15 (the time series data of NCAI and NCWI are in S3 Table). From the perspective of competitive weakness, these economic entities fall into 4 different categories. RoW and the BRIIAT nations have the highest NCWI. NAFTA, the other EU and the East Asian countries have similar NCWI and belong to the second group. The Eurozone countries constitute the third group. China has the lowest NCWI, making itself the fourth group. Taking the above into consideration, it is not hard to see that the integrated competitive advantage of China in the global economic system is continuously improving along with a decreasing NCWI, embodying strong competitive power and tremendous potential.

Simulation

GIVCN-RIOT portrays the flows of intermediates between the industrial sectors of economic entities on the GVC, and GIRCN-RIOT depicts the competitive relationships among these industrial sectors via RAP. Any disturbance of the flows of intermediates among the economic entities would influence the competitive status of the relevant entities on the GVC. A static time series analysis of NCAI and NCWI is carried out first in this paper, followed by dynamic simulations of the changes in competitive strength brought about by changes in trade between China and the other economic entities.

Basic settings

A set of three simulation analyses based on GIVCN-RIOT and GIRCN-RIOT has been carried out to assess the impacts of international trade fluctuations on national competitive advantage and weakness, taking the RIOT data of 2011 as the basis of the whole simulation. Taking economic entities X and Y as an example, there are mainly three kinds of international trade fluctuation between them: (1) X varies its export to Y, (2) Y varies its export to X, and (3) both X and Y vary their exports to each other. The gross value of one entity's export to the other is adjusted in both directions, from 100% of the basis down to 0% (decreasing) and from 100% of the basis up to 200% (increasing). Simulations have been run for every 5% of the fluctuation, and NCAI and NCWI are recalculated in the GIRCN-RIOT model to reveal their developing trends under each scenario (all of the simulation data are in S4 Table).
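A schematic sketch of such a simulation loop is given below. It is self-contained and therefore simplified: the projection used here is a plain proportional-share version rather than the paper's exact RAP formula with gross outputs, and the toy flow matrix, the entity assignment and the helper name competition_matrix are all invented for illustration.

```python
import numpy as np

def competition_matrix(Z):
    """Simplified projection of an IO flow matrix Z (suppliers x users) onto the
    using sectors: each supplier's deliveries are turned into shares, and users
    competing for common suppliers accumulate pairwise competition strength."""
    Z = np.asarray(Z, dtype=float)
    out_total = Z.sum(axis=1, keepdims=True)
    share = np.divide(Z, out_total, out=np.zeros_like(Z), where=out_total > 0)
    W = share.T @ share
    np.fill_diagonal(W, 0.0)
    return W

# Toy 2-entity x 2-sector inter-regional flow matrix (4 sectors in total);
# sectors 0-1 belong to entity X, sectors 2-3 to entity Y.
Z0 = np.array([[4.0, 1.0, 2.0, 0.5],
               [0.5, 3.0, 1.0, 1.0],
               [1.5, 0.5, 5.0, 2.0],
               [0.5, 1.0, 1.0, 4.0]])
entity = np.array([0, 0, 1, 1])

for scale in np.arange(0.0, 2.0001, 0.05):             # 0% ... 200% in 5% steps
    Z = Z0.copy()
    Z[np.ix_(entity == 0, entity == 1)] *= scale        # X's deliveries (exports) to Y
    W = competition_matrix(Z)
    cai, cwi = W.sum(axis=1), W.sum(axis=0)
    ncai = [cai[entity == e].sum() for e in (0, 1)]     # entity-level aggregation
    ncwi = [cwi[entity == e].sum() for e in (0, 1)]
    # store or plot ncai, ncwi versus scale here
```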
China vs. NAFTA Scenario (1). China varies its export to NAFTA from 0% of its basis to 200%, while NAFTA remains stable. As China increases its export to NAFTA relative to the basis, its NCWI rises more rapidly than its NCAI, indicating that China will be confronted with more competitive stress as it tries to take a more active part in globalization. On the other hand, the NCAI of NAFTA increases in step with that of China and even surpasses it when China doubles its export to NAFTA, while NAFTA's NCWI declines rapidly. This is because NAFTA depends heavily on cheap intermediates supplied from China as inputs, and increasing imports from China significantly relieves its stress on the GVC.

China vs. NAFTA Scenario (2). NAFTA varies its export to China from 0% of its basis to 200%, while China remains stable. When the export of intermediates from NAFTA to China increases from 0% to 200% of the basis, the NCAI and NCWI of both economic entities rise, but compared with China, NAFTA's NCAI increases by a larger margin and its NCWI by a smaller one. This shows that China can enhance its competence on the GVC through trade with NAFTA, while NAFTA is confronted with more competitive pressure from China in the same process. The NCAI of NAFTA grows more slowly than its NCWI, whereas China's NCAI grows faster than its NCWI; all of this shows that the scenario is beneficial to China.

China vs. other economies

Along with the strengthening of international trade, China and the other economic entities around the world experience different changes in national competitive advantage and/or weakness. Due to space limitations, this paper focuses on the variation of NCAI and NCWI of China versus the other economic entities under scenario (3), as shown in Figs 19-23. Basically speaking, both China's NCAI and NCWI increase under most circumstances, but its NCAI has a larger growth rate than its NCWI. This shows that strengthening its economic ties with other economic entities is a prerequisite for China's growing into a trade power, for it gains more competitive advantages than weaknesses in the process. Nevertheless, the other economic entities incur different impacts in the same process. The Eurozone countries show a larger increase of NCAI than of NCWI, showing that strengthening their trading partnership with China brings about more opportunities than challenges. Take the trade between China and Germany as an example: it is rather complementary. On the one hand, China's exports to Germany are mainly primary-processing and labor-intensive products, such as mechanical and electrical products, textiles and raw materials, and furniture/toys/miscellaneous products. On the other hand, its imports from Germany are capital-intensive or technology-intensive products, such as mechanical and electrical products and transport equipment. The difference in traded commodities between the two countries is determined by their different industrial structures. As an industrialized, developed country, Germany has an industrial structure dominated by tertiary industries, obtaining added value by importing large quantities of primary-processing products and exporting technology-intensive ones. China's industrialization, by contrast, started relatively late; with comparatively less capital accumulation, its industrial structure is dominated by secondary industries, calling for massive imports of advanced equipment and technologies. It is clear that further developing trade between China and the Eurozone countries is mutually beneficial.
The other European Union countries experience a greater decline of NCAI than of NCWI, showing that their importing of intermediates from China may weaken their economic status in the global market. On the one hand, China has a complementary trade relationship with the Central and Eastern European countries, importing mainly resource-intensive primary products: for instance, China imports copper and copper products from Poland, and primary products and raw materials from Bulgaria, while China's exports to the Central and Eastern European countries are mainly manufactured goods. Enhanced trade with the Central and Eastern European countries therefore brings diversification to China's trade structure, on top of providing raw materials for China's developing economy. On the other hand, China has a competitive export market and commodity structure with respect to the Central and Eastern European countries. For instance, Poland, the Czech Republic and Hungary have a trade commodity structure similar to China's, exporting mechanical and electrical products to the Eurozone countries. But China has an obvious comparative labor cost advantage, and thus more intensive specialization in globalization, than the Central and Eastern European countries. In this way, China enjoys more competitive advantage in exporting to the Eurozone countries, which negatively affects the comparative competitive advantages of the other European Union countries.

The declining NCAI and rising NCWI of the East Asian countries show that their developing bilateral trade with China will weaken their competitive advantages while worsening their competitive weaknesses, and this is actually their motive for joining the TPP. The nature of trade between China and the East Asian countries is the "triangle model": China imports intermediates from the East Asian countries for further processing before finally exporting final products to American and European countries. China has thus played a pivotal role in the manufacturing network of China and the East Asian countries. Mutual promotion of bilateral trade between the two economic entities will boost the competitive strength of China on the GVC and attenuate the competitive advantage of the East Asian countries. Further exports from China might seize the East Asian markets, while the competitive abilities of their industrial sectors abate and their competitive weaknesses grow.

The NCAI of the BRIIAT countries declines more than their NCWI under this circumstance, showing that their weaknesses outweigh possible gains in their trade with China. With comparatively advanced manufacturing industries and technologies, China enjoys an obvious comparative advantage in its trade with the BRIIAT countries. The abundance of natural resources of the BRIIAT countries guarantees their advantage in primary products, which is the ultimate demand of China in this trade. For instance, China imports from Brazil primary products such as soybeans, oil and iron ore; from India resource products such as mineral products, cotton products and copper products; from Australia great quantities of iron ore; and from Russia manufactured goods (mostly labor intensive) as well as energy- and resource-intensive products. These properties enhance the complementarity between the two economic entities under a more distinctive global specialization, but an international trade structure like this enables China to stay ahead of the others in the BRIIAT markets.
On the other hand, exports from BRIIAT are mainly primary products from the upper stream of the GVC, so BRIIAT gains a comparatively small portion of the added value in the developing bilateral trade, with relatively little growth of its competitive weakness. Trade between China and the remaining economies is of limited scale, so any destabilization brings only small variations in the NCAI and NCWI of both parties.

Conclusions

How to reproduce the topological structure of the global economic system from the perspective of systems science and to uncover its laws of operation has long been a major problem puzzling academia. Within a research framework based on econophysics and complex network theory, this paper analyzes the input-output relationships of intermediates among the major economic entities between 1995 and 2011 using RIOT data from WIOD, and extracts the competitive relations among them with RAP. Four indices, CAI, CWI, NCAI and NCWI, are then introduced to reveal the competitive status of industrial sectors and economies on the GVC. The contributions of the paper are as follows:

(1) Construction of the GIVCN-RIOT model based on ICIO data from WIOD to reproduce the topological structure of the global economic system. ICIO data are adopted in this paper not only for their ability to reproduce flows of intermediate products, final products and services, but also for the possibility of comparison on the same basis. The proposed infrastructure of ICIO networks based on econophysics can focus on the topological structure of the GVC beyond partial analyses of international trade. Further mining of network structural characteristics in this way can reveal the

(2) Extraction of the competitive relations over scarce resources on the GVC among economic entities and their industrial sectors based on the Resource Allocation Process. Taking resource scarcity into consideration, this paper uses bipartite graphs to modify the GIVCN-RIOT model, distinguishing the simultaneous roles of industrial sectors on the GVC as upper-stream and lower-stream ones, by constructing the GIVCN-RIOT-BIPARTITE model. Projecting this bipartite graph onto the participants (the lower-stream industrial sectors) yields the GIRCN-RIOT model. This reveals the competition of any lower-stream industrial sector with the others, which is a breakthrough relative to traditional IO analysis. the GVC, to reveal the evolution mechanics of international trade from the perspective of econophysics.
8,958.6
2018-05-29T00:00:00.000
[ "Economics" ]
Strain Transfer Characteristics of Multi-Layer Optical Fiber Sensors with Temperature-Dependent Properties at Low Temperature : Optical fiber sensors are promising candidates for application in extreme environments because of their ability to measure over a large temperature range. The packaging measures that allow the strain-sensing fiber to survive such harsh conditions commonly introduce inevitable strain transfer errors. In this paper, the strain transfer characteristics of a multi-layer optical fiber sensing structure working in a cryogenic environment with temperature gradients are investigated theoretically. A generalized three-layer shear lag model incorporating the temperature-dependent properties of the layers is developed. The strain transfer relationship between the optical fiber core and the matrix is derived in the form of a second-order ordinary differential equation (ODE) with variable coefficients, in which the Young's modulus and the coefficients of thermal expansion (CTE) are treated as functions of temperature. The strain transfer characteristics of the optical sensing structure are captured by solving the ODE boundary value problems for cryogenic temperature loads. Case studies of the cooling process from room temperature to certain low temperatures, and of gradient temperature loads in different low-temperature zones, are addressed. The results show that different temperature load configurations cause different strain transfer error features, which can be described by the proposed model. The protective layer always plays a main role, and the optimal geometrical parameters should be carefully designed. To verify the theoretical predictions, an experimental study on the thermal strain measurement of an aluminum bar with optical fiber sensors was conducted. A LUNA ODiSI 6100 interrogator was used to measure the Rayleigh backscattering spectral shift of the optical fiber at a uniform temperature and a gradient temperature in the liquid nitrogen temperature zone, and reasonable agreement with the theory was obtained.

Introduction

Optical fiber sensors, possessing the great advantages of high sensitivity and flexibility, immunity to electromagnetic interference, light weight and small size, and the ability to provide multiplexed or distributed sensing, have attracted extensive interest in various engineering applications. As an important kind of sensor technology, fiber grating sensors have been widely studied and commercialized for health monitoring and the oil industry; they measure strain, temperature, pressure and other quantities by modifying a fiber so that the quantity to be measured modulates the intensity, phase, polarization, wavelength or transit time of the light in the fiber [1][2][3][4]. Meanwhile, other rather mature optical fiber sensor technologies such as optical time-domain reflectometers, optical frequency-domain reflectometers (OFDRs), fiber-optic gyroscopes and optical fiber current sensors have also attracted broad attention [5]. So far, various optical fiber sensors have become powerful tools in traditional fields, for example bridges, high-speed railways, aircraft, etc. [6][7][8][9]. However, harsh environments such as extremely low to high temperatures, shock, radiation, corrosive conditions, high radio-frequency interference and pressure pose unique challenges and opportunities for fiber optic sensors.
There have been some efforts to develop optical fiber sensors for harsh environments, owing to the excellent properties of silica fiber. By writing Bragg gratings into silica with femtosecond lasers, using either the phase mask method or the point-by-point method, fiber Bragg gratings (FBGs) can be used to sense strain and/or temperature in environments at temperatures below 1000 °C [10]. Considering that the most widely used optical fiber material, fused silica, is incapable of withstanding chemically corrosive environments, a sapphire-FBG-based temperature sensor was fabricated and packaged, showing great linearity of temperature response from room temperature up to elevated temperatures [11]. For structural health monitoring of the next generation of nuclear reactors, different technologies for realizing temperature-resistant FBGs were developed for temperature and strain measurements, especially for components exposed to high temperature and radiation levels [12]. On the other hand, some applications of particular interest involve extremely low temperatures, down to a few kelvin; for example, helium or hydrogen gas leak detection under cryogenic conditions is critically important in the production and use of liquid fuels. Other applications in aerospace vehicles, superconducting magnets and high-energy physics experiments also involve advanced technologies and devices designed to operate in cryogenic environments [13][14][15][16][17]. In 2019, the National High Magnetic Field Laboratory of the USA used hybrid superconducting magnets to achieve the highest magnetic field to date, 45.5 T [18]. The current carried in superconducting magnets can reach thousands of amps, and if the huge electromagnetic energy is improperly controlled it will result in a quench and a disaster for the high-field magnets. At present, the quenching mechanism of superconducting magnets is not completely clear, and quenches retain an element of contingency. Many studies have found that the uncertainty of quenching in superconducting magnets is largely due to the forces arising during operation. Methods based on internal strain measurement of the magnet can detect abnormal temperature and electromagnetic force before the quench hot spot expands and can trigger safety measures in time [19]. Fiber optic sensors are the best choice in this area, because with embedded technology they can effectively monitor the temperature and strain inside the magnet, and they have great advantages such as electromagnetic immunity, small size, corrosion resistance and low loss. Low-temperature environments also pose a challenge to existing sensing technologies. The performance of conventional room-temperature sensors, including their sensitivity, response time and lifetime, degrades rapidly as the temperature drops. A few applications of optical fiber sensors at cryogenic temperatures have been developed lately, such as FBGs embedded in or bonded to substrates (e.g., PMMA, Teflon) with larger thermal expansion coefficients to overcome their low temperature sensitivity, and a continuous liquid-level sensing system for liquid nitrogen and helium tanks [11,17]. Optical fiber sensors including FBG, Raman-scattering, Rayleigh-scattering and Brillouin-scattering types have also been attempted for monitoring the cryogenic temperature of high-temperature superconducting tapes at 77 K or even lower [20][21][22][23].
In these investigations, optical fiber sensors were mainly developed to measure cryogenic temperature, and the deformation of the materials and structures was commonly not considered. The concept of strain transfer originates from the deformation transfer between a fiber-reinforced composite matrix and its fibers. With the development of optical fiber sensors, it has been adopted to describe the transfer relationship between the sensing fiber and the test object. Since the 1990s, researchers have used elastic mechanics in cylindrical coordinates to model this behavior [24]. Subsequent improvements have been made, and the shear lag model from the micromechanics of composite materials has been put forward. Ansari and Yuan [25] first proposed a shear lag model of a three-layer structure including a fiber core, a protective layer and a matrix, and LeBlanc [26] further merged the protective layer with the matrix and gave a shear lag model of a two-layer structure. Subsequently, Li et al. [27,28] studied the mechanical model of strain transfer under non-axial forces, in which two layers of the three-layer structure are sheared. Feng et al. [29] gave the strain transfer relationship in a four-layer structure with cracks. Sun et al. [30] focused on a desensitization method to develop a wide-range FBG sensor for extra-large strain monitoring and to improve its accuracy. Wang et al. [31] reviewed the development of several classic strain transfer theories and used Goodman's hypothesis to obtain a model for asphalt pavements. Recently, homemade polymer-FBG sensors embedded in coils were used to measure the strain responses during excitation and quench training tests. Compared with cryogenic resistance strain gauges and their complex compensation bridges, the polymer-FBG sensors exhibit more advantages in recording the internal strain in the magnets [32]. However, those sensors were operated in extreme cryogenic environments around 5 K, so that the thermal sensitivity of the FBG could be disregarded and only the strains were measured. Although a few attempts to use optical fiber sensors at low temperature have been carried out, challenges still exist in extreme environments. The key part of an optical fiber is a thin glass fiber core covered with a polymer layer for toughening. To make the sensing fiber survive in harsh conditions, additional packaging measures are required, which not only play a protective role but also realize a variety of sensing functions through different structures and functional materials. The polymer material of the adhesive and protective layers expands and softens at high temperature, and shrinks and cracks at low temperature, which causes measurement errors. For strain sensing, the first requirement is to truly reflect the strain information of the structure under test. Because the protective layer isolates the sensing optical fiber core from the test object, a deformation difference occurs, which is defined as the strain transfer error. Strain transfer theory, describing the transfer relationship between the sensing fiber and the test object, is proposed to correct this error and to improve the measurement accuracy [25][26][27][28][29]. It can be found that the composite structures of the optical fiber sensors in the above-mentioned studies are mostly subjected to uniform deformations at normal temperature. However, most optical fiber sensors are sensitive to both temperature and strain, and the two are always mixed.
In extremely low temperature environments, heat is transferred throughout the different layers of the sensor, and material properties of those layers, such as the Young's modulus and the coefficients of thermal expansion, are always temperature dependent. Additionally, in a distributed optical fiber sensor, micro sensors on the order of a millimeter in length are evenly distributed along the optical fiber with a density of several hundred per meter. In cases of large strain or temperature gradient, for example the temperature gradient of a high-temperature superconductor structure of up to 65 K/cm during a quench process [22], the distributed optical fiber sensor can measure the strain distribution properly. Strain transfer errors, which depend on the load configuration, will vary greatly. This study aims to analyze the strain transfer characteristics of distributed optical fiber sensors at low temperature under non-uniform loads. It is of important engineering significance to study whether the response at each position of the distributed fiber reflects the true value under the extreme conditions of cryogenic temperature and large gradients. Taking the temperature-dependent material properties into consideration, a generalized three-layer (i.e., fiber core, protective layer and matrix) shear lag model incorporating temperature-dependent properties is developed to describe the strain transfer relationship between the matrix and the fiber core. To examine the strain transfer response of the optical fiber sensor in the low-temperature zone, several temperature loads, such as uniform temperature, linear gradient and Gaussian-distributed temperature, are addressed. The sensitive parameters of the sensing model with respect to the strain transfer ratio are discussed in detail. Additionally, an experimental study is conducted to verify the theoretical predictions, and the Rayleigh backscattering spectral shifts associated with the thermal strain of aluminum bars with embedded optical fibers are measured.

Fundamental Equations

An ideal embedded optical fiber sensing model can be briefly described as shown in Figure 1, which is a typical sensor structure commonly used for sensing strain and/or temperature. The core sensing element of the optical fiber is the glass fiber core, which is coated with a thin polymer interlayer and a thick matrix layer. The Young's modulus of the thin interlayer is much lower than that of the fiber core or matrix, which is unfavorable for strain transfer: a lower Young's modulus means greater deformation during stretching, shearing and torsion, so the interlayer absorbs a part of the strain from the matrix layer. The strain of the matrix can be caused by external mechanical and temperature loads. The glass fiber core does not sense the mechanical load directly, but indirectly feels the strain through the interlayer. Unlike a mechanical load, a temperature load affects all layers simultaneously. Generally, in order to meet various working conditions, the interlayer is usually a multi-layer structure; treating it as a single equivalent layer greatly reduces the difficulty of theoretical analysis. Additionally, several assumptions are adopted to simplify the theoretical modeling and establish a relatively concise equation: the interfaces of the three-layer structure are perfectly bonded without interfacial slip, and only the normal stress in the fiber core is considered, owing to its small radius. Given the sensing model and the operating environment, the matrix of the optical fiber sensor is assumed to undergo only normal strain in the axial direction. The strain response of the matrix is transmitted to the glass fiber core through the protective interlayer. The deformation of the protective interlayer generates a buffering effect in the mechanical model, so its mechanical properties are critically important for such a sensing structure. The structure and stress state of the three-layer optical fiber sensor are shown in Figure 2a. The optical fiber length is L, the radial coordinate is denoted as ρ, and the central axis as z. The three layers of the structure are denoted by the subscripts F, P and M, corresponding to the fiber core, protective layer and matrix layer, respectively. Figure 2b,c illustrate the stresses and deformation of infinitesimal elements of the different layers. Because of the extreme environment of the optical fiber sensors, for example extremely low temperature, the material properties are always temperature dependent. The Young's modulus and coefficients of thermal expansion of the protective layer and matrix are considered as functions of temperature, E_i = E_i(T), α_i = α_i(T) (i = F, P, M). Furthermore, for the measurement of a structure in a temperature gradient environment, or for a long continuous distributed fiber with multiple sensors, the temperature variation along the fiber cannot be neglected, so that T = T(z). In such cases, the material properties become functions of the z-axis.
The equilibrium equation for the glass fiber core along the z-axis, from Figure 2b, can be obtained in the form of Equation (1) [24], where σ_F and τ_PF are the axial stress (the normal stress parallel to the z-axis) and the shear stress (parallel to the z-axis on the cylindrical surface of the columnar micro-element body), and ρ_F denotes the radius of the fiber core. For the protective layer, the equilibrium equation is given as Equation (2) [28], where σ_P is the axial stress in the protective layer, τ_P represents the shear stress at radius ρ along the radial direction, and τ_PF is the shear stress at the interface between the fiber core and the protective layer. Because of the ideal interface between the two materials, the continuity condition of the interface stresses holds, τ_PF = τ_FP. From Equations (1) and (2), one can obtain the shear stress of the protective layer in the form of Equation (3). The relative displacement between the matrix and the fiber core can then be obtained from the shear deformation of the protective layer, which gives Equation (4), where the elastic shear constitutive relationship is used; u_M and u_F refer to the displacements of the matrix and fiber core along the z-direction, respectively, E_P and µ_P are respectively the Young's modulus and Poisson's ratio of the protective layer, and γ_P(ρ, z) is the shear strain of the protective layer at position (ρ, z). To further simplify the above equation, the stresses in the glass fiber core and the protective layer can be expressed by the thermoelastic constitutive relationships, in which ε_Fe, ε_Pe and ε_F, ε_P are respectively the elastic strains and total strains in the two materials, and ε_FT, ε_PT denote the thermal strains caused by the temperature change from T_0 to T_1. Differentiating both sides of Equation (4) with respect to z yields Equation (7). Since the layers are very thin, the elastic parameters and strains can reasonably be assumed to be independent of the radial coordinate. Additionally, because the fiber core is strained together with the middle layer, the elastic strain gradients are expected to be of the same order [27], that is, dε_Fe/dz ≅ dε_Pe/dz. By ignoring the higher-order infinitesimals related to the modulus, Equation (7) can be further reduced to Equation (8). The fiber core is a kind of silica whose temperature-dependent effect can be omitted compared with that of the protective layer and matrix, i.e., ε_FT ≪ ε_F, ε_M, and the Poisson's ratio and coefficient of thermal expansion of the fiber core are almost independent of temperature, so that dα_F/dT ≅ 0.
Therefore, Equation (8) is rewritten as Equation (9), which presents the relationship between the strain of the fiber core and that of the matrix material. An important index for evaluating the performance of optical fiber sensing structures is the strain transfer ratio, defined as η = ε_F/ε_M.

Nondimensional Forms of Equations

To obtain general equations for the strain transfer characteristics of the multi-layer fiber sensor, a set of nondimensional variables is introduced. Equation (9) can then be rewritten in a nondimensional form, Equation (12), whose general form is Equation (14), in which the variable coefficients (such as R_2 and R_3) are determined by the temperature-dependent material properties. It can be found that Equation (14) is a second-order ordinary differential equation with variable coefficients, which is commonly difficult to solve analytically. However, for the general usage of the optical fiber sensor in a conventional manner, for example when the material properties are independent of temperature, one easily finds R_2 = 0 and R_3 = 1, and Equation (14) reduces to the simpler Equation (16), which is the same as that developed for a three-layer fiber sensor by Li et al. [27,28]. This indicates that the present generalized model can degenerate into the simpler one reported in the literature. Additionally, the fiber sensor is assumed to be free from axial stress at both ends, because the matrix material is not in contact with the fiber beyond the ends of the interface between the fiber core and the protective layer. This leads to the boundary condition, Equation (17), that the strain transferred to the optical fiber core is zero at both ends of the fiber.

Numerical Solution to the ODE

For the second-order ODE with variable coefficients (Equation (14) or (16)) and the boundary conditions (Equation (17)), the shooting method is utilized for the numerical solution. This well-developed method takes its name from treating the two-point boundary value problem for a second-order differential equation, with prescribed initial and final values of the solution, as an initial value problem in which z plays the role of the time variable, z = 0 being the "initial time" and z = 1 being the "final time". Varying the initial slope gives rise to a set of profiles suggesting the trajectories of a projectile "shot" from the initial point. To obtain a high-precision numerical solution of the differential equation, the explicit fourth-order Runge-Kutta approach is used. For the simplified case of the conventional manner, where the material properties of the layers are independent of temperature, the solution of Equation (16) can fortunately be obtained in an analytical form, in which the constants of integration C_1 and C_2 are easily determined from the boundary conditions of Equation (17).

Temperature-Dependent Material Properties

For most optical fiber sensors, the material properties of their composite structures are temperature dependent, especially over a large temperature range. As the temperature gradually drops, the material properties of the sensing structure show considerable differences compared with those at room temperature.
Generally, the temperature dependence of material properties, for example the Young's modulus and the coefficients of thermal expansion, can be expressed in polynomial form as [32][33][34]

E_i(T) = Σ_{j=0}^{5} a_ij T^j,  α_i(T) = Σ_{j=0}^{4} b_ij T^j,

where the subscripts i (= F, P, M) represent the different layers of the sensor structure, and a_ij (j = 0, 1, ···, 5) and b_ij (j = 0, 1, ···, 4) are the fitting coefficients from experiments. One possible and common material combination of the matrix, the protective layer and the fiber core is aluminum, Teflon and glass, respectively. Figure 3 illustrates their Young's modulus and coefficients of thermal expansion as functions of temperature over a large temperature zone [33,34]. One can see that, compared with those of the glass fiber core, the properties of the matrix and protective layer show significant variations with temperature. The Young's modulus of the matrix and protective layer increases roughly linearly as the temperature drops. The coefficients of thermal expansion show a quite noteworthy feature: the value of the matrix decreases slightly, while that of the protective layer first drops, reaches a minimum at about 60 K, and then increases as the temperature keeps decreasing. Different materials thus have large differences in properties, which cannot be omitted over a large temperature zone or in a temperature-gradient environment.

Different Temperature Loads

For a structure under mechanical loads, there are abundant investigations of its deformation measured with optical fiber sensors, showing their effectiveness at work. Simultaneous strain-temperature sensing characteristics have also been considered, where the interference caused by temperature when measuring strain can usually be handled by temperature compensation technology, such as a temperature compensation block, a thermometer, dual optical fibers, etc. Here, we mainly consider temperature loads. For a structure subjected to temperature loads, there are usually two thermal equilibrium states. One is that the temperature of the whole structure is spatially uniform, so that the temperature gradient can be ignored. The other is that the temperature in the structure is temporally constant but a temperature gradient distribution exists. In particular, in structures with a heat source, the temperature distribution is similar to a Gaussian bell curve, and the temperature gradient changes sharply with obvious peaks. In the following examples, we consider these two cases.

Uniform Temperature Load

Consider a cooling process: thermal deformation occurs inside the structure due to the expansion and contraction of the materials. The multi-layer sensing model made of different materials can also develop local stress due to thermal mismatch. Since the CTE and the Young's modulus depend non-linearly on temperature, the internal stress also depends non-linearly on the amplitude of the temperature change. However, for the simple case of a uniform temperature load, the material properties at a given temperature can be determined from the experimental temperature-dependent curves.
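A small sketch of how such polynomial fits might be evaluated; the coefficient arrays below are hypothetical placeholders, not the experimental fits of the cited references.

```python
import numpy as np

# Hypothetical fitting coefficients (lowest order first) for one layer;
# real values would come from the cited experimental fits [33, 34].
a_P = np.array([2.2e9, -3.0e6, 1.0e3, 0.0, 0.0, 0.0])    # E_P(T) in Pa
b_P = np.array([5.0e-5, 2.0e-7, -1.0e-9, 0.0, 0.0])      # alpha_P(T) in 1/K

def young_modulus(T, a):
    """E(T) = sum_j a_j * T**j, the polynomial form used for each layer."""
    return np.polyval(a[::-1], T)                          # polyval wants highest order first

def cte(T, b):
    """alpha(T) = sum_j b_j * T**j."""
    return np.polyval(b[::-1], T)

T = np.linspace(4.2, 293.0, 50)
print(young_modulus(T, a_P)[0], cte(T, b_P)[0])            # values at 4.2 K
```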
We consider the cooling process from room temperature (e.g., T_0 = 293 K) to a certain low temperature T_1. The strain in the matrix is evaluated by ε_M = ∫_{T_0}^{T_1} α_M(T) dT. The thermally deformed matrix transfers its strain to the optical fiber core through the protective layer, and the strain measured by the fiber sensor can be obtained from the theoretical model. Figure 4 shows the strain transfer characteristics of the optical fiber sensor under a uniform temperature drop. Since the fiber material has a small CTE, the thermal deformation of the glass fiber core is far smaller than that of the matrix. The matrix pulls the fiber core through the protective layer, so the fiber core and the matrix shrink simultaneously. Figure 4a shows that, for different temperature drops (e.g., T_1 = 200 K, 77 K, 4.2 K), the matrix is in contraction with a different constant strain in each case, while the strain in the fiber core is also compressive and follows a U-shaped profile. At the end positions of the optical fiber sensor (z = 0, 1) the strains are zero, while the strain at the midpoint of the fiber core is close to that of the matrix. The analytical results are in good agreement with the numerical predictions, as shown in Figure 4a. For a lower temperature (e.g., 4.2 K or 77 K) there is a quite large region where the strain in the fiber core is much closer to the strain in the matrix. This means that the strain measured by the sensor is consistent with the real strain of the matrix and good strain transfer performance is achieved; in other words, the optical fiber sensor is effective at low temperatures. The main reason is that the Young's modulus of the protective layer increases as the temperature decreases, which improves the strain transfer ratio. Figure 4b further illustrates how the strain transfer ratio of the optical fiber sensor depends on the temperature drop at different locations. It clearly shows that the strain transfer ratio decreases as the temperature T_1 increases, especially at higher temperatures, and the value at the midpoint (z = 0.5) is almost 1.0 over a quite large range of low temperatures and is larger than at other locations along the sensor length.

The performance of the optical fiber sensor usually relies on its geometrical parameters. Figure 5 illustrates the maximum strain transfer ratio of the fiber sensor for different temperature drops. Figure 5a plots the dependence of the strain transfer ratio on the relative radius ratio of the protective layer, showing a decrease as the radius ratio ρ increases.
This is because an increase in the radius of the protective layer makes the deformation transferred from the matrix to the fiber core much smaller. Figure 5b shows that the maximum strain transfer ratio increases greatly with the fiber sensor length. When the ratio of sensor length to fiber radius L is larger than 30, the maximum strain transfer ratio reaches about 1.0 for cooling temperatures T_1 below 100 K. For a small temperature drop, for example T_1 = 200 K, the strain transfer ratio is somewhat smaller and increases with the sensor length. A longer sensor length means that the accumulated drag of the interface induces more deformation in the fiber core. At low temperature, a thinner protective layer and a longer embedded sensor length therefore give a larger strain transfer ratio, consistent with the literature results for a three-layer fiber sensor [28].

Gradient Temperature Load

Non-uniform temperature loads on structures are common in practice. Structures in a cooling or heating process usually involve a large temperature gradient; for instance, the temperature gradient in a superconductor structure can reach 65 K/cm during a quench process [22], and a gradient of about 200-300 K/cm occurs for laser heating of an absorbing layer [35]. Under a gradient temperature load, the temperature is no longer the same at different positions in the structure, the material properties become spatially dependent, and the equations describing the micro-elements change with position. The thermoelastic response of the fiber core to the matrix then differs from that under a uniform temperature load. In this section, we consider two cases of the optical fiber sensing structure subjected to gradient temperature loads.

(a) Linear Temperature Distribution Load

In this case, we consider a linear gradient temperature distributed along the sensing structure in the form T_1(z) = T_0 + z · ∆T, where T_0 denotes the reference temperature of the thermal deformation (e.g., T_0 = 4.2 K, 77 K) and ∆T is the temperature increase (e.g., ∆T = 30 K, 50 K, 70 K). The thermoelastic strain in the matrix is evaluated by ε_M(z) = ∫_{T_0}^{T_1(z)} α_M(T) dT.
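The following sketch evaluates the matrix thermal strain ε_M(z) = ∫_{T_0}^{T_1(z)} α_M(T) dT for the linear temperature profile T_1(z) = T_0 + z·∆T by numerical quadrature; the CTE polynomial is the same hypothetical placeholder used above, not the measured data.

```python
import numpy as np

def cte_matrix(T):
    """Placeholder CTE polynomial for the matrix material [1/K] (not the data of [33,34])."""
    return 1.0e-6 + 7.0e-8 * T

def matrix_strain(z, T0, dT, n_quad=200):
    """epsilon_M(z) = integral_{T0}^{T0 + z*dT} alpha_M(T) dT via the trapezoidal rule."""
    T1 = T0 + z * dT
    T_grid = np.linspace(T0, T1, n_quad)
    return np.trapz(cte_matrix(T_grid), T_grid)

T0, dT = 4.2, 50.0                      # liquid-helium region, 50 K rise along the sensor
z = np.linspace(0.0, 1.0, 11)
eps_M = np.array([matrix_strain(zi, T0, dT) for zi in z])
print(np.c_[z, eps_M])                  # strain grows nonlinearly along the sensor
```

Because the CTE itself varies with temperature, the resulting strain profile deviates from a straight line even though the imposed temperature profile is linear, which is the behaviour discussed next.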
Additionally, since the strain transfer ratio η differs along the sensor, an average strain transfer ratio is introduced for the sensing structure, η̄ = ∫_0^1 η dz. Figure 6 presents the strain distributions in the matrix and the fiber core of the sensing structure under a linear gradient temperature for different low-temperature regions; Figure 6a,b show the results for T_0 = 4.2 K and T_0 = 77 K, respectively. The thermal strain in the matrix does not increase linearly, as the temperature gradient distribution would suggest, but exhibits a concave form in the liquid-helium temperature region (T_0 = 4.2 K), as shown in Figure 6a, while the strain increases almost linearly along the sensor length in the liquid-nitrogen temperature zone (T_0 = 77 K), as plotted in Figure 6b. This feature becomes more obvious for a high gradient value ∆T, because the coefficient of thermal expansion of the matrix increases with temperature, and this trend is more prominent at T_0 = 4.2 K than at T_0 = 77 K. Although experimental observations for direct comparison are lacking, the thermoelastic strain induced by the linear gradient temperature considered here is analogous to the nearly linear gradient bending strain measured in a small region of a cantilever [36]. Additionally, due to the boundary conditions of the sensing structure, the deformation of the fiber core is forced to zero at both ends. Near the left end, the strains in the matrix and fiber core agree well, while near the right end a more obvious difference between them appears.

Figure 7 plots the strain transfer ratio as a function of the temperature gradient at different positions along the sensor length, exhibiting different characteristics with ∆T. One can see that the transfer ratios η are even larger than 1.0 near the left end, while they are usually less than 1.0 near the right end of the sensor. This feature is more obvious in the liquid-helium temperature region, shown in Figure 7a, than in the liquid-nitrogen temperature region, shown in Figure 7b.
This is mainly because of the material properties, in particular the difference in elastic modulus between the protective and matrix layers and especially their CTEs at low temperature, as illustrated in Figure 3, which results in discrepancies between the strain states in the different layers. The strain in the protective layer can even be slightly higher than that in the matrix, so that η is greater than 1.0 near the left end of the sensor structure; this feature is more remarkable for a higher gradient temperature.

To give a better understanding of the strain transfer ratio of the sensing structure and its performance in different low-temperature zones, the average ratios as functions of the geometrical parameters are presented in Figure 8. The average values are always less than 1.0, which is reasonable for the strain transfer mechanism of a multi-layer structure. Figure 8a illustrates the average strain transfer ratio as a function of the radius ratio of the protective layer thickness of the sensing structure. The average transfer ratio decreases with the thickness of the protective layer, and the values for the liquid-nitrogen region (T_0 = 77 K) are larger than those for the liquid-helium region (T_0 = 4.2 K). With increasing sensor length, the average strain transfer ratio increases noticeably for the different temperature gradients, as shown in Figure 8b. Reducing the thickness of the protective layer and increasing the sensor length can therefore improve the strain transfer efficiency of the sensing structure at low temperature.

(b) Gaussian Temperature Distribution Load

In an extreme condition, a point-like heat source is generated locally in the structure, and the temperature and thermoelastic deformation near the hot spot approximately follow a bell curve. For example, a recent investigation considered a spatially distributed fiber-optic sensor designed for temperature measurements in the steel industry, where a high temperature was generated by small point-like heating elements [37].
We consider the sensing structure subjected to a Gaussian temperature distribution of the form T_1(z) = T_0 + f(z) δT, where f(z) = e^{A(z−0.5)^2} is the Gaussian distribution function with A = −4 ln 2, giving a half-height width of 1 and a temperature peak located at the midpoint (z = 0.5). The strain in the matrix is evaluated by ε_M(z) = ∫_{T_0}^{T_1(z)} α_M(T) dT, and the low-temperature regions of liquid helium and liquid nitrogen are considered (T_0 = 4.2 K, 77 K).

Figure 9 illustrates the strain distributions in the matrix and the fiber core. The strain response of the optical fiber core is not able to reflect the true matrix strain completely: the thermoelastic strain of the matrix caused by the temperature rise at the middle position is usually higher than that evaluated for the optical fiber core, while the situation is reversed near the two ends of the sensing part. For a larger temperature peak, the discrepancy between the strains in the matrix and the fiber core becomes significant. Similar features appear in the different low-temperature regions (Figure 9a,b). These discrepancies in the strain distributions of the different layers are mainly caused by the temperature-dependent material properties of the layers, particularly in the different low-temperature zones. The deformation configuration generated by the point heat source is complicated, and our model provides a possible way to explain and correct this inconsistency. However, because of the errors introduced by the temperature and strain gradients, the difference between the real strain values measured and the predictions at each position requires further analysis.
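A minimal sketch of the Gaussian temperature profile and of the averaged transfer ratio η̄ = ∫_0^1 η dz is given below. The matrix strain follows from the same quadrature as before, while the fiber-core strain profile is a purely hypothetical stand-in tapered to zero at the ends, not the solution of the full model; it only illustrates how the average ratio is formed.

```python
import numpy as np

A = -4.0 * np.log(2.0)                         # half-height width of 1, peak at z = 0.5

def gaussian_temperature(z, T0, dT_peak):
    """T1(z) = T0 + exp(A * (z - 0.5)**2) * dT_peak."""
    return T0 + np.exp(A * (z - 0.5) ** 2) * dT_peak

def cte_matrix(T):
    """Placeholder CTE polynomial for the matrix [1/K]."""
    return 1.0e-6 + 7.0e-8 * T

def matrix_strain_profile(z, T0, dT_peak, n_quad=200):
    """epsilon_M(z) = integral_{T0}^{T1(z)} alpha_M(T) dT, evaluated pointwise."""
    eps = np.empty_like(z)
    for i, zi in enumerate(z):
        T_grid = np.linspace(T0, gaussian_temperature(zi, T0, dT_peak), n_quad)
        eps[i] = np.trapz(cte_matrix(T_grid), T_grid)
    return eps

z = np.linspace(0.0, 1.0, 201)
eps_M = matrix_strain_profile(z, T0=77.0, dT_peak=50.0)

# Hypothetical fiber-core strain (in practice it comes from the shooting-method
# solution); here it is simply tapered to zero to mimic the boundary conditions.
eps_F = eps_M * np.sin(np.pi * z)

eta = np.divide(eps_F, eps_M, out=np.zeros_like(eps_F), where=eps_M != 0)
eta_avg = np.trapz(eta, z)                     # average strain transfer ratio over the length
print("average strain transfer ratio:", eta_avg)
```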
Figure 10 shows the strain transfer ratio as a function of the temperature peak for the Gaussian temperature gradient case at different positions along the sensor length, exhibiting different characteristics. At the different positions along the sensor there are quite different strain transfer ratios, slightly larger than or less than 1.0, through a mechanism similar to that found for the linear gradient temperature load. The average transfer ratios as functions of the geometrical parameters are further presented in Figure 11. They clearly show that the average strain transfer ratios decrease linearly as the thickness of the protective layer increases, while they increase nonlinearly with the sensor length. The average ratios are usually higher for the sensing structure in the liquid-helium temperature region (T_0 = 4.2 K) than in the liquid-nitrogen temperature region (T_0 = 77 K). We also note that the influence of the temperature peak on the average strain transfer ratio is not obvious. For the global performance of the optical fiber sensing structure, the protective layer always plays the main role in the strain transfer, so the geometrical parameters should be carefully optimized.

Figure 11. Comparison of the average strain transfer ratio of the optical fiber sensor for different cryogenic temperature regions as a function of the geometrical parameters under a Gaussian temperature gradient load: (a) the radius ratio ρ, (b) the ratio of sensor length to fiber radius L.

Experiment Investigation at Low Temperature

In order to verify the theoretical finding that the strain transfer characteristic of the optical fiber sensing structure is related to temperature, strain measurements of a distributed optical fiber sensor at low temperature have been carried out. An OFDR based on Rayleigh backscattering in a single-mode optical fiber was used to analyze the signals.
The Rayleigh backscattering spectra (RBS) shift is affected by the refractive index, which is determined by temperature and strain. The RBS shift ∆λ of an optical fiber bonded to a structure and caused only by a temperature change ∆T can be expressed as ∆λ = λ[(1 − P_e)α + ξ]∆T, where α is the thermal expansion coefficient of the structure material, ξ is the thermo-optic coefficient of the optical fiber material, P_e is the elasto-optic coefficient of the optical fiber material, and λ is the RBS wavelength. A LUNA ODiSI 6100 optical distributed sensor interrogator was used to record the RBS shift of the optical fiber; a spatial resolution of 0.65 mm is obtained at a sampling rate of 20 Hz. A temperature data acquisition (DAQ) system based on LabVIEW software and NI devices was developed to measure the temperature distributions of the samples.

Uniform Temperature Change

The thermal strain, related to the RBS shift signal of the optical fiber, was measured for a uniform temperature variation from 77 K to 289 K. To obtain a uniform temperature load, the sample with the optical fiber sensor is placed in a thick copper tube coated with thermal insulation material, and cotton is packed at both ends of the copper tube to prevent convection with the ambient air, as shown in Figure 12a. The optical fiber RBS shift signal is collected by the LUNA ODiSI 6100 interrogator, which is time-synchronized with the temperature acquisition device, so the RBS shift is directly associated with temperature. The sample is a T6061 aluminum bar with an embedded Corning SMF-28 Ultra optical fiber. The bonding material is STYCAST 2850 FT epoxy resin, and the geometrical dimensions are shown in Figure 12b. Since the bonding layer is much thicker than the fiber core, it can be treated as one layer of the multi-layer structure. The initial low temperature of 77 K is obtained by immersing the sample in liquid nitrogen. With the liquid nitrogen removed, the temperature of the sample returns to room temperature (e.g., 289 K) very slowly and uniformly by natural recovery, as shown in Figure 12c. Taking the RBS shift (S) at 77 K as the reference value, the distribution of the RBS shift in the sample for different positions (z) and temperatures (T) is presented in Figure 13a. Good RBS shift signals were obtained except for abnormal signals near the two ends of the sample (z = 266 cm and 276 cm). Small fluctuations at some locations are likely due to minor defects in the epoxy resin or at the boundary, but they do not affect the overall measurement. While the temperature rises slowly, the temperature distribution of the aluminum bar is uniform and stable, its thermal deformation is therefore uniform, and the influence of the thermo-optic effect on the optical fiber RBS shift is the same everywhere. At a given temperature, the variation of the RBS shift along the fiber is therefore contributed only by the deformation measured by the optical fiber. In the very low temperature range from 77 K to 220 K, the RBS shifts at different positions along the sample have almost the same value except at the two ends. When the temperature reaches 260 K or higher, the RBS shift profile is consistent with the prediction of the theoretical model: the measured values in the middle region of the sample are large and those near the two ends are small. This indicates that the strain transfer ratio near the ends decreases and a higher value is obtained at the midpoint of the sample.
The main reason is that the epoxy resin has a large elastic modulus at low temperature, so a high strain transfer ratio is obtained. With increasing temperature, the epoxy resin becomes soft, its elastic modulus decreases, and the strain transfer ratio becomes low.

In Figure 13b, the RBS shift ratio of the sample at different temperatures is compared. In order to reduce the fluctuation caused by nonuniformity, the average value of the RBS shift (S̄) within 0.5 cm of a given position z_0 of the optical fiber (z_0 − 0.5 < z < z_0 + 0.5) is introduced. When the temperature is below 220 K, the RBS shift ratio at each point is around 0.99. When the temperature is higher than 240 K, the RBS shift ratio decreases noticeably, and the decrease becomes more obvious closer to the ends. The experimental results show that low temperature is, to some extent, beneficial to strain transfer.

Temperature with Great Gradient

We further measured the thermal strain of a sample with an optical fiber under a temperature gradient load. As shown in Figure 14a, a winding resistance heater supplied with 100 W is set at the midpoint of an aluminum bar sample with a length of 30 cm. Ten tiny thermocouples (named T1, T2, ..., T10) are evenly arranged along the sample to measure the rapidly changing temperature; their response frequency is above 10 Hz. The data acquisition method is the same as in the previous experiment. In order to obtain an accurate measurement, the epoxy resin diameter in this sample is adjusted to 0.5 mm, and the diameter of the aluminum rod is reduced to 10 mm for better heat conduction, as shown in Figure 14b. The sample was cooled to 77 K with liquid nitrogen and then removed from the liquid nitrogen while power was supplied to the heater at the same time. When thermocouple T5 or T6 reaches 300 K, the heater is turned off, and heat conduction makes the temperature redistribute evenly, as shown in Figure 14c. The temperatures at symmetrical positions measured by the thermocouples are very close, for example T5/T6, T4/T7, T3/T8, T2/T9, and T1/T10. During the heating period, the RBS shifts at different times (t_1, t_2, t_3, t_4, and t_5) are extracted for the subsequent analysis.
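The windowed averaging used for Figure 13b above can be reproduced with a short sketch; the RBS-shift trace below is synthetic and only illustrates the ±0.5 cm moving average, not the measured data.

```python
import numpy as np

def windowed_average(z_cm, signal, z0_cm, half_width_cm=0.5):
    """Mean of 'signal' over the window z0 - 0.5 cm < z < z0 + 0.5 cm."""
    mask = np.abs(z_cm - z0_cm) < half_width_cm
    return signal[mask].mean()

# Synthetic RBS-shift trace along the fiber (illustrative values only).
z_cm = np.arange(266.0, 276.0, 0.065)                   # 0.65 mm spatial resolution
rng = np.random.default_rng(0)
shift = -1.5 + 0.02 * rng.standard_normal(z_cm.size)    # arbitrary shift with noise

z0 = 271.0
s_bar = windowed_average(z_cm, shift, z0)
print(f"averaged RBS shift around z0 = {z0} cm: {s_bar:.3f}")
```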
The evolution of the RBS shift along the optical fiber sensor with time is shown in Figure 15a. The heater is triggered at t = 13 s, which produces a temperature rise in the middle area of the sample, and the heat then propagates along the bar. In the initial stage of heating, the heat has not yet propagated to the ends and the temperature in most of the sample remains at 77 K, so the RBS shift remains zero, as shown in red in the figure. The optical fiber outside the sample (z < 188 and z > 218) is always unaffected by the heater, so its RBS shift is also zero. The heater is turned off when T5/T6 reaches 300 K (at t = 40 s). As seen from the figure, the RBS shift values near the heater area are high, the heat is transferred quickly, and the temperature changes gradually during the heating and heat conduction processes.

Figure 14. Aluminum bar embedded with optical fiber under a gradient temperature load: (a) schematic diagram of the experimental setup and sample, (b) cross section of the sample, (c) temperature characteristics of the sample measured by thermocouples.

For comparison, the RBS shift and temperature at five times (t_1, t_2, t_3, t_4, and t_5) are presented in Figure 15b, in which the dimensionless RBS shift ratio S̄ = S/S* and temperature ratio T̄ = (T − 77)/(T* − 77) are used (the reference values of the RBS shift S* and temperature T* are taken at z = 200.5 cm).
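The normalization used for the comparison in Figure 15b can be formed as in the sketch below; the arrays are placeholders for the interrogator and thermocouple readings, and the reference values correspond to the position z = 200.5 cm mentioned above.

```python
import numpy as np

def normalize(shift, temperature, shift_ref, temperature_ref, T_base=77.0):
    """Return (S_bar, T_bar) with S_bar = S/S* and T_bar = (T - 77)/(T* - 77)."""
    s_bar = shift / shift_ref
    t_bar = (temperature - T_base) / (temperature_ref - T_base)
    return s_bar, t_bar

# Placeholder readings at a few positions (not measured data).
shift = np.array([0.2, 0.8, 1.6, 0.8, 0.2])                     # RBS shift along the bar
temperature = np.array([100.0, 180.0, 300.0, 180.0, 100.0])     # thermocouple values [K]
shift_ref, temperature_ref = 1.6, 300.0                         # reference values at z = 200.5 cm

s_bar, t_bar = normalize(shift, temperature, shift_ref, temperature_ref)
print("S_bar:", s_bar)
print("T_bar:", t_bar)
```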
From the figure, one can see that the continuous distribution curves of the RBS shift ratios detected by the optical fiber sensor agree well with the temperature ratios at the 10 points measured by the thermocouples. Furthermore, to obtain the temperature distribution along the sample, a fitting function T̄ = A_1 + A_2 exp(A_3 z² + A_4 z + A_5) is constructed from the measured temperatures, where A_i (i = 1, 2, 3, 4, 5) denote the fitting parameters. The fitted temperature distribution corresponds approximately to an analogous Gaussian temperature distribution load. From Figure 15b one can also see that, at the five times, the ten temperature ratios match the RBS shift very well, and good consistency is obtained in all parts of the sample except the resistance heater region. The main characteristics of the RBS shift ratios are qualitatively comparable with the thermal strains predicted in the previous section for a Gaussian temperature distribution in the sensing structure. Additionally, the high temperature, which implies a low elastic modulus of the epoxy, together with the large gradient, makes the strain transfer ratio smaller around the heater, as predicted theoretically, so that the strain measured by the optical fiber sensor differs considerably from the true value under this condition. However, in most of the region far away from the heater good measurements are always obtained, and in practice the values near the heater can be deduced by a proper fitting function of the RBS shifts.

Conclusions

The strain transfer characteristics of a three-layer sensing model based on strain optical fiber sensors at low temperature have been studied theoretically and experimentally. Owing to the harsh working conditions of the sensing structure, the Young's modulus and CTE of the materials are both temperature dependent. Different thermal loads, including a constant temperature variation and temperature gradients, have been considered for the optical sensing structure. The following conclusions can be drawn from the investigation:
(1) The proposed sensing model successfully captures the strain transfer characteristics of the three-layer optical sensor structure in the presence of a temperature gradient, and the deformations in the different layers are accurately obtained. Meanwhile, the traditional model for strain transfer analysis under uniform temperature loading is recovered from the proposed model as a degenerate case.
(2) With decreasing temperature, the Young's modulus of the protective layer of the optical sensor increases, so a quite good strain transfer performance is achieved. As a result, the measurement of the optical fiber strain sensor is more reliable and accurate at low temperature than at room temperature.
(3) Owing to the temperature-dependent properties of the layers of the fiber sensor, the strain transfer ratios can even exceed 1.0 near the sensor ends at low temperature under a high gradient temperature load, while the average strain transfer ratios are commonly less than 1.0. The protective layer always plays the main role in the strain transfer for the global performance of the optical fiber sensing structure, and the geometrical parameters should be carefully optimized, which can be achieved by reducing the thickness of the protective layer and increasing the sensor length of the multi-layer sensing structure.
(4) Experiments on a sample embedded with an optical fiber sensor were conducted.
The thermal strains related to the RBS shifts of the optical fiber were measured for a uniform temperature variation and for a temperature gradient load produced by a resistance heater, qualitatively verifying the theoretical predictions of the main characteristics under low-temperature conditions.
13,993
2021-01-01T00:00:00.000
[ "Engineering", "Physics", "Materials Science" ]
Matter-antimatter asymmetry and dark matter stability from baryon number conservation : There is currently no evidence for a baryon asymmetry in our universe. Instead, cosmological observations have only demonstrated the existence of a quark-antiquark asymmetry, which does not necessarily imply a baryon asymmetric Universe, since the baryon number of the dark sector particles is unknown. In this paper we discuss a framework where the total baryon number of the Universe is equal to zero, and where the observed quark-antiquark asymmetry arises from neutron portal interactions with a dark sector fermion N that carries baryon number. In order to render a baryon symmetric universe throughout the whole cosmological history, we introduce a complex scalar χ, with opposite baryon number and with the same initial abundance as N. Notably, due to the baryon number conservation, χ is absolutely stable and could have an abundance today equal to the observed dark matter abundance. Therefore, in this simple framework, the existence of a quark-antiquark asymmetry is intimately related to the existence (and the stability) of dark matter.

Introduction

The Standard Model (SM) of Particle Physics describes with outstanding precision the results of a myriad of experiments involving particle reactions. However, several cosmological observations suggest that the SM should be extended. Two of the most solid pieces of evidence for New Physics beyond the SM are the existence of dark matter in our Universe [1], Ω_DM,0 h² = 0.120 ± 0.001 (1.1), and the existence of a cosmic asymmetry between the number of SM matter particles and their antiparticles, commonly expressed as the difference between the number density of baryons and antibaryons normalized to the entropy density [1], Y_B,0 = [(n_B − n_B̄)/s]_0 = (8.75 ± 0.23) × 10⁻¹¹ (1.2). Furthermore, observations have revealed that the density in the form of dark matter is comparable to the density of Standard Model Matter (SMM), Ω_DM/Ω_SMM ∼ 5. In 1967, Sakharov presented three necessary conditions that must be simultaneously fulfilled in order to generate a baryon asymmetry in our Universe [2]: (i) baryon number violation; (ii) C and CP violation; (iii) departure from thermal equilibrium. Many concrete models fulfilling these three conditions have been proposed which generate a baryon asymmetry in qualitative agreement with observations (see e.g. [3-7]). On the other hand, in these models the dark matter is typically not accounted for and is assumed to be produced through a different mechanism, involving different particles and interactions, and occurring at different cosmic times. Hence, the similarity between the densities of protons and dark matter is merely coincidental. A popular framework to explain this similarity consists in postulating that the dark sector is asymmetric under a global dark symmetry, U(1)_D, while the visible sector is asymmetric under a global baryon symmetry, U(1)_B. If appropriate conditions are fulfilled in the dark sector, analogous to the Sakharov conditions, an asymmetry between dark matter particles and antiparticles could be generated, which is then transferred to the visible sector [8-14]. Alternatively, both asymmetries could be generated simultaneously in the dark and the visible sectors [15-18]. For reviews on asymmetric dark matter, see [19-21].
From the observational standpoint one cannot conclude that the Universe is baryon asymmetric, since the baryon number of the dark sector particles is unknown. Instead, one can only assert that the visible sector is baryon asymmetric, or more strictly, that the Universe contains an asymmetry between the total number of quarks and antiquarks, given by Y_∆q,0 = (2.63 ± 0.07) × 10⁻¹⁰, which is obtained by multiplying Eq. (1.2) by 3. In this paper we will argue that not all the Sakharov conditions are necessary to generate a quark-antiquark asymmetry. We will assume that the baryon and lepton numbers are exact symmetries of Nature, and we will show that a cosmic quark-antiquark asymmetry can be generated when the three following conditions are satisfied: (i) C and CP are violated in the dark sector, (ii) there is departure from thermal equilibrium, (iii) there are portal interactions between the dark sector and the quarks. These conditions are (arguably) less restrictive than the Sakharov conditions, and seem plausible in dark sector scenarios.

To illustrate the idea, we will consider a simple framework where the dark matter particle is a complex scalar carrying baryon number, and where the dark sector interacts with the visible sector via a "neutron portal" [22]. Imposing an initial asymmetry between the number of dark matter particles and antiparticles, we will show that the asymmetry in the dark sector is transmitted to the visible sector, thus generating an asymmetry between the number of quarks and antiquarks. In this way, the yield of baryons in the visible sector and the yield of dark matter particles are naturally comparable. Furthermore, in this simple framework the dark matter stability is ensured by the conservation of the baryon number, and does not require additional ad hoc symmetries. Therefore, in this scenario the existence of dark matter in our Universe today is intimately related to the existence of a quark-antiquark asymmetry.

The paper is organized as follows. In Section 2 we present our scenario and qualitatively describe its main characteristics. In Section 3 we present the Boltzmann equations for the temperature evolution of the yields of the various particle species, and we estimate the present value of the dark matter abundance and the quark-antiquark asymmetries in terms of the initial conditions of our scenario. In Section 4 we discuss the constraints on the neutron portal and the prospects of detecting signals from the dark sector, and in Section 5 the prospects of detecting a dark matter signal. Finally, in Section 6 we present our conclusions.

Dark sector baryons and their impact on the visible sector

We consider a hidden sector containing a complex scalar χ and a Dirac fermion N, both singlets under the Standard Model gauge group, with masses m_χ and m_N respectively. We assume that these fields transform under an exactly conserved U(1)_B symmetry, with charges B(χ) = −1 and B(N) = +1. The baryon numbers of the proton and the neutron are defined as usual, B(p) = +1 and B(n) = +1, and so are the baryon numbers of the quarks and antiquarks, B(q) = +1/3 and B(q̄) = −1/3.
The kinetic and mass terms in the Lagrangian of the dark sector baryons are given in Eq. (2.1). Further, the Lagrangian contains interaction terms among the dark sector particles, which we describe via the dimension-5 effective operators of Eq. (2.2), where the superscript c denotes charge conjugation. The first term does not change the baryon number in the fermionic current, while the second term changes the baryon number by two units, hence the notation Λ_0 and Λ_2 for the respective suppression scales of the dimension-5 operators. Lastly, the gauge symmetry and the baryon symmetry allow interaction terms between the dark sector and the visible sector, Eq. (2.3), with H the Standard Model Higgs doublet and u_R and d_R the right-handed up and down quarks. These two terms respectively correspond to a Higgs portal interaction and to a neutron portal interaction [17,22,23]. Portals involving heavier generation quarks are also possible, but will not be discussed in what follows for simplicity.

In order to generate an asymmetry in the visible sector, we assume that some unspecified mechanism in the dark sector produces an excess of N over N̄ at high temperatures. Since this mechanism operates exclusively in the dark sector, the conservation of baryon number requires the generation of an excess of χ over χ* so that the total baryon number of the Universe remains zero, as depicted in Fig. 1. A possible mechanism generating this excess could be the out-of-equilibrium, CP-violating, and B-conserving decays of heavy fermions φ_i → χN, χ*N̄, which would generate an asymmetry between the number densities of N and N̄, and the corresponding asymmetry between χ and χ*, rendering a total baryon number equal to zero. A quantitative description of this mechanism will be presented elsewhere [24]. Due to the neutron portal interaction in Eq. (2.3), the scatterings N d̄ ↔ ud and N ū ↔ dd, as well as the decays N → udd, inject a net baryon number into the visible sector (which could be partially converted into a net lepton number via sphaleron transitions, depending on the temperature at which the asymmetry in the dark sector is generated). Therefore, the excess of dark sector fermionic baryons over antibaryons is leaked to the visible sector via the neutron portal, ultimately generating an excess of quarks over antiquarks.

Note that the Higgs portal does not transmit baryon number from the dark sector to the visible sector. Hence, we will set it to zero for simplicity, although it could have phenomenological implications, as we will briefly discuss in Section 5. Notably, due to the Lorentz symmetry and the baryon number conservation, χ and χ* are absolutely stable, although they can annihilate with one another, generating a relic population of χ. Therefore, in this simple framework the existence of a quark-antiquark asymmetry in the visible sector is intimately related to the existence of dark matter in our Universe. We stress that the stability of the dark matter does not require any ad hoc new symmetry, but is simply due to the conservation of the total baryon number in the Universe. In the next section, we will describe in detail the evolution of the yields of the different particles, and the expectations for the relic abundance of dark matter and the quark-antiquark asymmetry.
Evolution of the particle number densities and asymmetries

The Boltzmann equations for the yields of the different particle species are written in terms of x ≡ m_χ/T and the entropy density of the Universe at temperature T, s = (2π²/45) g_⋆,s T³, with g_⋆,s the number of relativistic degrees of freedom, together with an overall numerical factor λ. The Boltzmann equations depend on the thermally averaged rates of different processes. Firstly, there are reactions involving only dark sector particles: χχ* ↔ N N̄, which is induced by the effective operator in the first term of Eq. (2.2), suppressed by Λ_0; χχ ↔ N̄ N̄ and the C-conjugated reaction χ*χ* ↔ N N, induced by the effective operator in the second term of Eq. (2.2), suppressed by Λ_2; and χN ↔ χ*N̄, also induced by the same term and suppressed by Λ_2. The explicit expressions for the cross sections of these processes depend on the square of the center-of-mass energy. Secondly, there are reactions between dark sector particles and visible sector particles, N d̄ ↔ ud and N ū ↔ dd, as well as the decay N → udd, all mediated by the neutron portal interaction, suppressed by Λ_n, with the corresponding cross sections and decay rate. To simplify the discussion, we will assume that the neutron portal interaction is sufficiently strong to bring the dark sector baryons into thermal equilibrium with the visible sector; at a temperature T, this condition requires the bound on Λ_n given in Eq. (3.12). The equilibrium number density of a dark sector particle species i with mass m_i, number of internal degrees of freedom g_i and chemical potential µ_i is then given as usual by n_i^eq = (g_i/2π²) m_i² T K_2(m_i/T) e^{µ_i/T}, where K_2 is the modified Bessel function of the second kind, Eq. (3.13).

It will be convenient to work with the total yields of the different particle species, along with their corresponding asymmetries. The Boltzmann equations describing their temperature evolution follow, where the yields Y_χ, Y_χ*, Y_N and Y_N̄ are implicit functions of the total yields and of the asymmetries. Lastly, the asymmetry present in N is transmitted to the visible sector through the neutron portal, giving rise to an asymmetry between the total number of quarks and antiquarks, ∆q ≡ Σ_i ∆q_i, whose time evolution is governed by a Boltzmann equation containing a constant c that characterises the efficiency of the conversion of the baryon asymmetry stored in the left-handed quarks into a lepton asymmetry stored in the left-handed leptons via sphaleron processes. If the asymmetry in the dark sector is generated at temperatures above 130 GeV, the point at which sphalerons drop out of thermal equilibrium [25], then c = 36/111 [26]. In turn, the asymmetry between the total number of leptons and antileptons, ∆ℓ ≡ Σ_i ∆ℓ_i, can be calculated from the corresponding relation.

Figure 2. Representative evolution of the total yields of χ + χ* and N + N̄ (dash-dotted) and the yields of the different asymmetries between particles and antiparticles (solid) for the initial condition depicted in Fig. 1. See the main text for details.
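For reference, the Maxwell-Boltzmann equilibrium yield entering these Boltzmann equations can be evaluated as in the sketch below; it implements the textbook expression n_i^eq = (g_i/2π²) m_i² T K_2(m_i/T) e^{µ_i/T} divided by the entropy density, with g_⋆,s fixed to a single number for illustration rather than the full temperature-dependent function.

```python
import numpy as np
from scipy.special import kn   # modified Bessel function of the second kind, integer order

def entropy_density(T, g_star_s=106.75):
    """s = (2*pi^2/45) * g_{*,s} * T^3 (g_{*,s} held fixed for illustration)."""
    return 2.0 * np.pi**2 / 45.0 * g_star_s * T**3

def equilibrium_yield(m, T, g=2, mu=0.0, g_star_s=106.75):
    """Y_eq = n_eq / s with n_eq = g/(2*pi^2) * m^2 * T * K2(m/T) * exp(mu/T)."""
    n_eq = g / (2.0 * np.pi**2) * m**2 * T * kn(2, m / T) * np.exp(mu / T)
    return n_eq / entropy_density(T, g_star_s)

m_chi = 1.9          # GeV, the dark matter mass favoured later in the text
for x in (1.0, 5.0, 10.0, 20.0):
    T = m_chi / x
    print(f"x = {x:5.1f}  Y_eq = {equilibrium_yield(m_chi, T):.3e}")
```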
If the initial asymmetry is instead generated after sphaleron freeze-out, then we set c = 1 and no asymmetry is transferred to the leptons. Let us consider a scenario where both the baryon and the lepton numbers are exactly conserved in Nature. As initial condition, we assume that at a temperature T_in ≫ 130 GeV there is no asymmetry between quarks and antiquarks (nor between leptons and antileptons) but that there is a CP-violating mechanism in the dark sector that generates a primordial asymmetry between N and N̄. The conservation of the baryon number requires a corresponding asymmetry between χ and χ*. This initial condition is sketched in Fig. 1, where we also show the different portals relating the various particle species in our model. Under this plausible assumption, the dark matter relic abundance and the quark-antiquark asymmetry are determined by the initial asymmetry in the dark sector, and by the energy scales Λ_0, Λ_2 and Λ_n, which determine the strengths of the different portal interactions.

For this representative set-up, the various yields qualitatively evolve with temperature as shown in Fig. 2. At the high temperature T_in all particle species are ultra-relativistic and their equilibrium yields are related solely by their different numbers of internal degrees of freedom, see Eq. (3.13); for the particles in the hidden sector, this implies Y_N^eq = 2Y_χ^eq. Further, the initial condition sketched in Fig. 1 fixes the relations among the yields of the asymmetries. Let us first discuss the evolution of the asymmetries with temperature. At temperatures very close to T_in, the scatterings N ū ↔ dd and N d̄ ↔ ud (with rates depending on Λ_n) effectively transfer a fraction of the baryon asymmetry from the dark sector to the visible sector, which is then distributed between leptons and quarks by sphaleron transitions. At the same time, the processes χχ ↔ N̄ N̄ and χN ↔ χ*N̄ (with rates depending on Λ_2) contribute to the washout of the asymmetry within the hidden sector, thereby influencing the size of the asymmetries in the visible sector. The effect of the washout on ∆N follows from Eq. (3.19). Given that at high temperatures all the yields are well approximated by their equilibrium values, the Boltzmann equation for Y_∆N simplifies, and similarly for Y_∆χ; its analytical solution can then be written down. The requirement that no more than 50% of the asymmetry is washed out implies the lower limit on Λ_2 given in Eq. (3.27). In order to simplify our discussion, we will assume in what follows that Λ_2 satisfies this lower limit and that the initial asymmetry in ∆N (and ∆χ) is very weakly washed out. The efficient redistribution of the asymmetry within the visible sector is displayed in Fig. 2.
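To illustrate the washout argument, the sketch below integrates a linearised washout equation dY_∆/dx = −W(x) Y_∆ and compares it with the exponential solution Y_∆(x) = Y_∆^in exp(−∫ W dx'); the washout rate W(x) is a generic placeholder, not the cross-section-derived rate of the actual Boltzmann equation.

```python
import numpy as np

def washout_rate(x, strength=2.0):
    """Generic placeholder washout rate, decreasing with x like an annihilation term."""
    return strength * np.exp(-x)

def integrate_asymmetry(Y_in, x_grid):
    """Forward-Euler integration of dY/dx = -W(x) * Y on a fine grid."""
    Y = np.empty_like(x_grid)
    Y[0] = Y_in
    for i in range(1, x_grid.size):
        dx = x_grid[i] - x_grid[i - 1]
        Y[i] = Y[i - 1] * (1.0 - washout_rate(x_grid[i - 1]) * dx)
    return Y

x = np.linspace(0.01, 20.0, 20000)
Y_num = integrate_asymmetry(1.0, x)

# Analytic solution: Y(x) = Y_in * exp(-int_{x_in}^{x} W dx'), via cumulative trapezoid.
W_int = np.concatenate(([0.0], np.cumsum(0.5 * (washout_rate(x[1:]) + washout_rate(x[:-1])) * np.diff(x))))
Y_ana = 1.0 * np.exp(-W_int)

surviving_fraction = Y_num[-1]
print("surviving asymmetry fraction (numerical):", surviving_fraction)
print("surviving asymmetry fraction (analytic): ", Y_ana[-1])
print("more than 50% survives:", surviving_fraction > 0.5)
```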
If that is the case, the asymmetries in the different particle species at a temperature slightly below T_in take simple expressions. These asymmetries remain constant until a temperature T_decay at which the decays of N and N̄ inject an additional quark-antiquark asymmetry into the visible sector (and a lepton-antilepton asymmetry if the sphalerons are still in equilibrium at the epoch of the decays). For the range of parameters of interest (see Section 4), we find that the decays typically occur when the sphalerons are out of equilibrium, so that the remaining ∆N asymmetry is entirely converted into a quark-antiquark asymmetry, whereas the lepton-antilepton asymmetry stays frozen. The decrease in Y_∆N at T ∼ T_in and then at T ∼ T_decay, along with the corresponding changes in Y_∆q and Y_∆ℓ, can be seen in Fig. 2. In the resulting expressions for the present-time asymmetries in the visible sector, the factor of 3 arises from the fact that N generates three quarks in its decay.

Let us now turn to the evolution of the total yields Y_χ^tot and Y_N^tot, Eqs. (3.16) and (3.18). Again, to simplify our discussion we will assume that Λ_2 satisfies the condition in Eq. (3.27). Further, we will assume Λ_0 ≪ Λ_n, so that the rate of conversion of N into quarks is slow compared to the rate of conversion of χ into N. This implies that the dark matter abundance is determined by the freeze-out of the annihilation process χχ* → N N̄. Under these assumptions the Boltzmann equations (3.16) and (3.18) simplify to a form that amounts to a secluded-sector freeze-out scenario, albeit with a hidden sector temperature identical to that of the visible sector. The behaviour of Y_χ^tot and Y_N^tot is sketched in Fig. 2. The total dark matter abundance at the current epoch can be calculated using standard tools (see e.g. [27]), where the freeze-out temperature is determined by the condition Γ_{χχ*→N N̄}(x_f.o.) = H(x_f.o.), and Y_χ^tot(T_f.o.) can be well approximated by its equilibrium value at freeze-out. Besides, due to the fact that χχ* pairs can only annihilate into N N̄ pairs, one finds that Y_N^tot(x_f.o.) ≃ 3Y_N^eq(T_in), which could additionally lead to an epoch of early matter domination if N is sufficiently long-lived [28-30]. Some implications of this early phase of matter domination will be discussed in Section 4.

Figure 3. Possible final states from the initial state of Fig. 1, corresponding to a scenario where dark matter antiparticles have efficiently annihilated (left panel) or only partially annihilated (right panel). All N have decayed into quarks. We have assumed that the initial asymmetry within the dark sector is generated while sphalerons are still in thermal equilibrium. Today, most leptons are in the form of neutrinos and anti-neutrinos, with a small relative asymmetry leaked via sphaleron processes. In both cases, the total B − L number of the Universe is conserved and equal to zero.

In the simplest scenario, all dark matter antiparticles annihilate, resulting in the final state sketched in the left diagram of Fig. 3.
In this case the total dark matter yield reduces to the asymmetry yield, where in the last step Eq. (3.23) has been used. Therefore, in this simple scenario the dark matter abundance and the quark-antiquark asymmetry are determined by the same parameter, the initial asymmetry in the dark sector, Y_∆N^in. One can then adjust the initial condition Y_∆χ^in to generate the observed quark-antiquark asymmetry, Y_∆q,0 ≃ 2.6 × 10⁻¹⁰, and the dark matter mass to generate the observed dark matter abundance, Ω_DM,0 h² ≃ 0.12; we obtain m_χ ≃ 1.9 GeV, Eq. (3.38). In the case where not all dark matter antiparticles annihilate, then Y_χ^tot(x_f.o.) = Y_∆χ(x_f.o.) + 2Y_χ*(x_f.o.), and Eq. (3.36) must be replaced accordingly. This scenario is sketched in the right diagram of Fig. 3. In this case, the initial asymmetry in the dark sector necessary to reproduce the quark-antiquark asymmetry is still given by Eq. (3.31). However, since the total dark matter yield is larger, the observed dark matter abundance is reproduced for a smaller value of the dark matter mass. Here Y_χ*(x) can be calculated by particularizing Eq. (3.2) to the weak washout regime, Eq. (3.41). Since at freeze-out the dark matter antiparticles were still in thermal equilibrium, one can approximate Y_χ*(x_f.o.) = Y_χ*^eq(x_f.o.), and the equilibrium distribution is easily obtained by setting the right-hand side of Eq. (3.41) to zero. Let us stress that these conclusions are quite insensitive to the concrete values of the portal strengths Λ_2, Λ_0 and Λ_n, provided that (i) the washout of the asymmetry is weak, (ii) Λ_n is small enough to keep the hidden sector thermalized with the visible sector, and (iii) Λ_0 ≪ Λ_n, so that N is stable on the timescale of the freeze-out. Other scenarios are also possible by adjusting the initial conditions in the Boltzmann equations. On the other hand, a crucial assumption of our scenario is the existence of a neutron portal that transmits the asymmetry in the dark sector to the visible sector. This neutron portal could lead to experimental signatures, which are discussed in detail in the next section.

Constraints on the neutron portal

The particle N can have implications in the visible sector through the neutron portal of Eq. (2.3), e.g. through the decay of N into quarks, the production of N in proton-proton collisions, or the generation of a mass mixing term with the neutron below the QCD confinement scale. The various constraints are summarized in Fig. 4 in the parameter space defined by the mass of N (m_N) and the energy scale of the neutron portal (Λ_n). The region allowed by all the constraints, shown in white, is bounded and could in principle be probed in its totality. Let us describe the various constraints on the parameter space in detail.

The standard Big Bang Nucleosynthesis (BBN) scenario is extremely successful in describing the evolution of the Universe after ∼ 1 s. In particular, observations indicate that the quark-antiquark asymmetry at the time of BBN does not differ significantly from the quark-antiquark asymmetry at the time of recombination. In order to preserve the standard BBN scenario, we require that the yield of N is largely depleted at ∼ 1 s, so that the decays have practically no impact at later times. Using the expression for the width of N and requiring conservatively Γ_{N→udd}^{-1} ≲ 0.1 s, we exclude the region Λ_n ≳ 10⁵ GeV (m_N/GeV)^{5/4}, indicated in Fig. 4 as a hatched orange region.
The neutron portal also leads to the production of N in proton-proton collisions through the partonic processes ud → N d and dd → N ū.We estimate the non-resonant production cross section at √ s = 14 TeV to be where we have estimated the effect of the partonic distributions in the protons in the parameter to be f PDF ≈ 10 −2 [31].Depending on the lifetime of N , the signal at colliders could be in the form of missing p T (if stable within the detector's volume), in the form of a displaced vertex (if the decay length is macroscopic), or in the form of dijets (if the decay length is microscopic).We show in Fig. 4 the line for which the decay length lies outside of the ATLAS or CMS detectors, ct lab = 100 m, where we have taken a Lorentz factor γ = √ ŝ/(2m N ) with √ ŝ ∼ 2 TeV for the partonic center of mass energy.We also indicate in the plot the values of Λ n corresponding to a production cross section σ pp→N +jet = 1 fb, 10 fb, 100 fb and 1 pb, and in green the ballpark area of values that can be probed at the LHC with an integrated luminosity of L = 100 fb −1 , which corresponds to effective interactions with strength Λ n ≲ 6 TeV.We note that for small values of Λ n the effective field theory breaks down at LHC energies, and instead a dedicated search for the new particles mediating the neutron portal should be performed.A detailed collider analysis is however beyond the scope of this paper.We also note that the collider constraints become weaker if the portal between the dark sector and the visible sector involves sea quarks, e.g. this variant of the portal could be probed in the decays of heavy mesons and baryons [32]. The mass of N is bounded from below from the requirement that the proton must be the lightest fermion carrying baryon number.Otherwise the proton could decay, e.g.p → N π + .This requirement translates into the lower limit m N ≥ 938 MeV (this limit on the mass could be avoided if the width is suppressed by a large Λ n , however this region is in tension with BBN [33]).Lastly, the mass of N is bounded from above from the requirement that the dark matter is not overproduced.More specifically, the annihilation process χχ * → NN must be efficient enough to deplete most of the dark matter density.Naively, this requires m N < m χ , however, as argued in [34,35], the annihilation can also occur in a "forbidden" channel, due to the existence of sufficiently energetic dark matter particles in the tail of the Maxwell-Boltzmann distribution.The thermally averaged annihilation cross section in a forbidden channel is approximately given by [35] where ∆ = (m N − m χ ) /m χ is the relative mass splitting and f (∆) is a function of ∆, which approximates to f (∆) ≈ 1+∆ for ∆ ≫ 1.In Eq. (4.3) the Boltzmann suppression of the dark matter annihilation cross section is explicit.A more rigorous upper limit on m N is derived by ensuring that dark matter overproduction is avoided for the largest possible value of the annihilation cross section, Eq. (4.3).From the s-wave unitarity requirement ⟨σv⟩ NN →χχ * ≤ 4π/m 2 N (x f.o./π) 1/2 , we obtain m N ≲ 2.7 GeV.For the milder requirement that the effective field theory remains valid, which corresponds to Λ 0 ∼ m χ , we obtain m N ≲ 2.4 GeV.This limit is shown in Fig. 
4 in red, where the relaxation of the constraint at large Λ n values is attributed to the freeze-out of dark matter taking place during a period of matter domination. In this case, the value of the total annihilation cross section required to obtain the correct dark matter relic abundance is smaller than in the standard WIMP paradigm [36]. Constraints from Dark Matter searches Dark matter signals via the Higgs portal The Higgs portal λ χH |χ| 2 |H| 2 leads to potential dark matter signals in collider experiments and in direct detection experiments (signals in indirect detection experiments could also arise if there is a relic population of dark matter antiparticles). In our scenario, the predicted dark matter mass is ≃ 1.9 GeV; therefore, the Higgs portal could induce the invisible decay of the Higgs into a dark matter particle-antiparticle pair, h → χχ * . The rate depends on λ χH , on the Higgs vacuum expectation value v ≃ 246 GeV and on the Higgs mass m h ≃ 125 GeV. Current experimental searches constrain the Higgs branching ratio into invisible final states to be Br (h → invisible) ≲ 20% [37,38]. Using that the Higgs width into visible particles is Γ vis ≃ 4 MeV, one obtains λ χH ≲ 10 −2 [39]. The Higgs portal interaction also induces the scattering of dark matter particles off nuclei. The spin-independent scattering cross section off a nucleon N depends on the dark matter-nucleon reduced mass µ = m N m χ /(m N + m χ ) and on f N ≈ 0.3, which encodes the quark and gluon content of a nucleon [40,41]. For dark matter in the GeV mass range, the best current sensitivity is provided by the DarkSide 50 experiment [42], which excludes cross sections larger than 2 × 10 −42 cm 2 for a dark matter mass of 1.9 GeV. This limit translates into λ χH ≲ 0.07, less sensitive than the constraint from the invisible Higgs decay. In addition to these phenomenological constraints on the Higgs portal coupling, it is worthwhile mentioning a theoretical constraint stemming from the naturalness of the dark matter mass. The dark matter mass term receives after electroweak symmetry breaking a contribution δm 2 χ = λ χH v 2 . Therefore, our favored value m χ ≃ 1.9 GeV points to λ χH < 6 × 10 −5 , unless there is a fine cancellation between the mass term in the Lagrangian Eq. (2.1) and the contribution to the mass from the electroweak symmetry breaking. This strong constraint would make any Higgs portal-induced dark matter signal very difficult to detect. Dark matter signals via the neutron portal The second portal of the dark sector to the Standard Model is the neutron portal. This term induces, at energies below the QCD confinement scale, a mass mixing term between the dark matter particles and the neutrons which leads to the scattering of dark matter particles with nuclei and the "transmutation" of neutrons into antineutrons, as depicted in Fig. 5. We can estimate the rate of dark matter scatterings off neutrons; the result is many orders of magnitude below the current sensitivity of the DarkSide 50 experiment [42] and of any foreseeable dark matter direct detection experiment. We also do not expect to have detectable signals of dark matter scatterings from neutron stars. In principle, the "transmutation" of neutrons into antineutrons χn → χ * n could also lead to observable signatures. We estimate the cross section for this process by simply replacing Λ 0 by Λ 2 in Eq.
(5.4).We have also normalized the "transmutation" cross section to Λ 2 = 10 11 GeV, which is a typical value for which the particle-antiparticle asymmetries are not significantly washed out, cf.Eq. (3.27).The searches for neutron-antineutron oscillation at Super-Kamiokande [46] can be recast into a limit on the neutron-antineutron transmutation rate induced by dark matter particles in the Milky Way halo.We estimate this limit to be σ χn→χ * n ≲ 10 −49 cm 2 , which is far above the expected cross section in our framework.The prospects for indirect dark matter detection are strongly influenced by whether the symmetric component has been depleted at freeze-out.In the asymmetric case, the only annihilation channel available is χχ → NN .This process is highly suppressed in the present realization due to the large effective scale Λ 2 , and no constraints on the model parameters are expected.Nevertheless, it is noteworthy that an asymmetric complex scalar dark matter candidate could have an open direct annihilation channel today.In the symmetric scenario, it is expected that strong χχ * → NN annihilations will occur, as χ is a light and thermal dark matter candidate.However, due to the conservation of baryon number, N must eventually decay into neutrons, whose masses would already account for a significant portion of the energy budget in the final state of the annihilation.The decay of N into semi-relativistic neutrons would emit a soft cascade of various hadrons and/or a gamma-ray of energy E γ = m N − m n , which could provide a window into both the nature of dark matter and its link to the neutron portal. Conclusions We have presented a scenario that accommodates both dark matter and a quark-antiquark asymmetry.We have postulated the existence of a spin-0 and a spin-1/2 particle in the dark sector, both singlets under the Standard Model gauge group and carrying baryon number.As initial conditions, we have assumed that at very high temperatures, the Universe has zero baryon and lepton numbers but that the dark sector contains an asymmetry between particles and antiparticles.We have argued that the asymmetry in the spin 1/2 particle can be transmitted to the visible sector through (baryon conserving) neutron portal interactions, thus resulting in a quark-antiquark asymmetry (and possibly a lepton-antilepton asymmetry via sphalerons). On the other hand, the spin-0 particle is stable due to the baryon number conservation and constitutes a dark matter candidate.In this framework, the B − L number of the visible sector is exactly compensated by an opposite asymmetry in the dark sector, thus linking the observed quark-antiquark asymmetry to the existence of dark matter.Under reasonable assumptions, we expect the dark matter mass to be ∼ 1.9 GeV if it is fully asymmetric or potentially lighter if a population of dark matter antiparticles remains after freeze-out.The scenario also predicts the mass of the exotic spin-1/2 particle to be comparable to that of the dark matter.Such a particle could be produced at the LHC or in flavor physics experiments through the neutron portal, generically leaving the detector before decaying.This particle would then produce a signal of missing energy and an apparent violation of baryon number due to the imbalance in the baryon number of the visible sector particles involved in the reaction. 
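As a quick numerical check of the invisible-width bound quoted above, one can assume the standard complex-scalar Higgs-portal rate Γ(h → χχ*) = λ²_χH v² β /(16π m_h), with β = (1 − 4m²_χ/m²_h)^{1/2}. This formula is our reconstruction of the elided expression, not taken verbatim from the text, but it reproduces the quoted interplay between the ~20% branching-ratio limit and λ χH ≲ 10⁻²:

```python
import math

# Cross-check of the invisible-Higgs constraint, assuming the standard complex-scalar
# Higgs-portal width Gamma(h -> chi chi*) = lambda^2 v^2 beta / (16 pi m_h).
# This formula is our reconstruction of the elided equation, not taken from the paper.

V_EW, M_H, M_CHI = 246.0, 125.0, 1.9   # GeV
GAMMA_VIS = 4.0e-3                     # GeV, visible Higgs width quoted in the text

def br_invisible(lam):
    beta = math.sqrt(1.0 - 4.0 * M_CHI**2 / M_H**2)
    gamma_inv = lam**2 * V_EW**2 * beta / (16.0 * math.pi * M_H)
    return gamma_inv / (gamma_inv + GAMMA_VIS)

if __name__ == "__main__":
    print(f"Br(h -> invisible) for lambda = 1e-2: {br_invisible(1e-2):.2f}")
    # ~0.19, i.e. right at the ~20% experimental bound, matching lambda_chiH <~ 1e-2.
```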
Finally, we have briefly discussed the prospects for observing dark matter signals in our scenario. We find that the most promising avenue to detect signals lies in the Higgs portal, either through the invisible Higgs decay width or in direct detection experiments, akin to the singlet scalar dark matter model.

Figure 1. Sketch of the initial abundances of the different particle species in our model, together with the interaction terms among them. Light yellow (blue) indicates the particles with B − L < 0 (B − L > 0); the total B − L of the Universe is equal to zero.

Figure 4. Constraints on the neutron portal energy scale (Λn) and mass of N (mN ) from cosmology, proton stability, and collider experiments, along with contours of production cross section and decay length at the LHC with √ s = 14 TeV. The allowed region is shown in white.

Figure 5. Processes induced by the neutron portal: dark matter scattering off nuclei (left panel) and dark matter-induced "transmutation" of a neutron into an antineutron (right panel).
7,903
2023-07-05T00:00:00.000
[ "Physics" ]
B meson production in Pb+Pb at 5.02 ATeV at LHC: estimating the diffusion coefficient in the infinite mass limit In the last decade a Quasi-Particle Model (QPM) has been developed to study charm quark dynamics in ultra-relativistic heavy-ion collisions supplying a satisfactory description of the main observables for $D$ meson and providing an estimate of the space-diffusion coefficient $D_s(T)$ from the phenomenology. In this paper, we extend the approach to bottom quarks describing their propagation in the quark-gluon plasma within an event-by-event full Boltzmann transport approach followed by a coalescence plus fragmentation hadronization. We find that QPM approach is able to correctly predict the first available data on $R_{AA}(p_T)$ and $v_{2}(p_T)$ of single-electron from B decays without any parameter modification w.r.t. the charm. We show also predictions for centralities where data are not yet available for both $v_{2}(p_T)$ and $v_{3}(p_T)$. Moreover, we discuss the significant breaking of the expected scaling of the thermalization time $\tau_{th}$ with $M_Q/T$, discussing the evolution with mass of $D_s(T)$ to better assess the comparison to lQCD calculations. We find that at $T=T_c$ charm quark $D_s(T)$ is about a factor of 2 larger than the asymptotic value for $M \rightarrow \infty$, while bottom $D_s(T)$ is only a $20-25\%$ higher. This implies a $D_{s}(T)$ which is consistent within the current uncertainty to the most recent lattice QCD calculations with dynamical quarks for $M \rightarrow \infty$. I. INTRODUCTION The main goal of the ongoing heavy-ion collisions performed at Relativistic Heavy Ion Collider (RHIC) and Large Hadron Collider (LHC) is the study of a state of matter named Quark-Gluon Plasma(QGP) that behaves like a nearly perfect fluid having a remarkably small value of shear viscosity to entropy density ratio, η/s ≈ 0.1.Heavy quarks, namely charm and bottom, thanks to their large masses, are considered as a solid probe to characterize the QGP phase [1][2][3].They are produced by pQCD processes, hence at variance with the bulk matter, their initial production is to a large extent known.Furthermore, they have a formation time τ 0 < 0.08f m/c τ QGP so probing also the strong electromagnetic fields expected in the initial stage of the collision [4][5][6].The large mass implies a larger thermalization time w.r.t.light counterpart and appears currently to be comparable to the one of the QGP itself [1,7].Therefore, HQs can probe the whole evolution of the plasma and, being produced out-of-equilibrium, they are expected to conserve memory of the history of the system evolution.Furthermore, recently it has been suggested a relevance of the early glasma phase on their dynamics in both pA and AA collisions [8][9][10].There is a general agreement that the observed R AA (p T ) and v 2 (p T ) imply that in the low-intermediate p T region charm quark dynamics is affected by large non perturbative effect [1,[11][12][13][14][15][16].In order to take into account the non-perturbative effects of the interactions, some approaches make use of pQCD framework [17][18][19] with large coupling, or supplemented by Hard-Thermal Loop (HTL) [20,21].Another way to account the non-perturbative QCD effects at non-zero temperature is to encode the lQCD thermodynamical expectations with effective temperature dependent particle masses like in Quasi-Particle Model (QPM) [22][23][24] or similarly, but including the off-shell dynamics, in the DQPM [15,25].A more sophisticated approach is based on 
a T-matrix calculation under a potential kernel that correctly reproduces the free energy as evaluated in lattice QCD for a HQ pair in the infinite mass limit [12,26]. In the past years, the QPM approach has successfully described the main observables of D mesons, leading to an extrapolation of the spatial diffusion coefficient D s (T ) of the charm quark in agreement with the available lQCD calculations in the quenched approximation [1, 2, 7, 8, 11, 12, 16, 19-21, 23, 27-35]. On the other hand, one has to consider that a proper comparison should be done with more recent calculations in lQCD where the quenched approximation is relaxed. Furthermore, the comparison with the charm quark suffers from its finite mass which, while being significantly larger than the QGP temperature, is still nearly comparable with the average thermal momentum ∼ 3T , itself comparable with the exchanged momentum ∼ gT . In this respect, the extension of the study to the bottom sector allows one to investigate the quark mass dependence of the interaction toward the infinite mass limit assumed in the present lQCD calculations [36][37][38][39][40]. Hence, from the phenomenological point of view, bottom also allows one to test the scaling of the thermalization time τ th with the heavy quark mass, an aspect that, to our knowledge, we address here for the first time. Two main observables have been studied in uRHICs for HF hadrons: the heavy-meson nuclear modification factor R AA (p T ) [41][42][43], and the so-called elliptic flow, v 2 (p T ) [44,45]. The first observable describes the change of the spectrum in nucleus-nucleus collisions with respect to a simple proton-proton superposition, while the second is related to the anisotropy in the particle angular distribution, giving information about the coupling of the HQs with the plasma. Further efforts have been made to extend the analysis to higher order anisotropic flows v n [46][47][48][49], which can give more constraints on the extraction of the transport coefficients that are strictly related to the initial event-by-event fluctuations. Further investigation of the HQ dynamics can be addressed by using the Event-Shape-Engineering technique [50,51], whose results seem to be satisfactorily described by models available in the literature [49,[52][53][54], at least within the current experimental data uncertainties. In this paper, within our approach already widely employed to study the charm dynamics, we want to study the bottom dynamics through the nuclear modification factor R AA (p T ) and the elliptic and triangular flows v 2,3 (p T ) of B mesons and of electrons from semi-leptonic B meson decay, which can be compared to the available experimental data from the ALICE collaboration. Moreover, we discuss the extrapolation of the spatial diffusion coefficient D s for the bottom quark, comparing our results with the D s of the charm quark from the previous analysis and with the available lQCD data points evaluated in the infinite mass limit for the heavy quarks. In particular, we discuss the D s dependence on the heavy quark masses in our QPM approach, evaluating the discrepancy of both the charm and bottom mass scales with respect to the saturation value of D s reached in the infinite mass limit. The paper is organized as follows. In section II, we briefly describe the Boltzmann transport approach used to describe the HQ evolution and the hybrid hadronization approach by coalescence plus fragmentation. In section III, the results for the main observables in the bottom sector are shown, in particular our predictions for the R AA , v 2,3 of B
mesons and electrons from semi-leptonic B meson decays.In the section IV, we discuss the D s (T ) of bottom quark in comparison with the D s (T ) obtained of charm quark from the previous analysis and the available lQCD data points.Finally, section V contains a summary and some concluding remarks. II. TRANSPORT EVOLUTION OF BOTTOM QUARK IN QGP The results shown in this paper have been obtained using a transport code developed to perform studies of the dynamics of relativistic heavy-ion collisions at both RHIC and LHC energies [7,[55][56][57][58][59][60].In our approach, the space-time evolution of gluons (g) and light quarks (q) as well as of heavy quarks (Q) distribution functions is described by mean of the Relativistic Boltzmann Transport (RBT) equations given by: where f i (x, p) is the on-shell phase space one-body distribution function for i − th parton species (i = q, g) and C[f q , f g , f Q ](x, p) in the right-hand side of Eq. 2 is the relativistic Boltzmann collision integral allowing to describe the short range interaction between heavy quark and particles of plasma.In our calculations the HQs interact with the medium by mean of two-body collisions regulated by the scattering matrix of the processes g + Q → g + Q and q(q) + Q → q(q) + Q and the collision integral describing HQ scattering takes the form: where |M (g,q)+Q | 2 are the transition amplitude of the process.As shown by the above eq.s,we are discarding the impact of heavy quarks (charm or bottom) on the bulk dynamics, which is quite a solid approximation.Furthermore, in our simulations we are employing a bulk with thermal massive quarks and gluons according to a Quasi-Particle Model (QPM) which is able to reproduce the lattice QCD Equation of State: pressure, energy density and interaction measure T µ µ = −3P , giving a softening of the equation of state consistent with a decreasing speed of sound approaching the cross-over region [61].However, the main feature of the QPM on the HQ dynamics results in a significantly stronger coupling of HQs with the bulk medium respect to the pQCD coupling at lower temperature particularly as T → T c (see details in Refs.[7,31,62]).In the collision integral C[f q , f g ](x, p) for gluon and light quark, the total cross section is determined in order to keep the ratio η/s = 1/(4π) fixed during the evolution of the QGP, see Refs.[56,58,59] for more details.In this way, we simulate the dynamical evolution of a fluid with specified η/s by means of the Boltzmann equation.We include initial state fluctuations by means of a modified Monte Carlo Glauber model as used in Ref. [60] to study the light flavour v n and recently extended to study the dynamics of charm quarks [49].Charm and bottom quark r-space distributions follow the number of binary nucleon-nucleon collisions N coll from the Monte Carlo Glauber model.Further details of the initial condition implementation can be found in [49,60].At last, for the charm and bottom quark initial distributions in momentum space, we have used the spectra calculated at Fixed Order + Next-to-Leading Log (FONLL) [63] which describe the D-meson spectra in proton-proton collisions after fragmentation. 
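The transport equation and the collision integral referred to above did not survive extraction; for orientation, their standard schematic form (our reconstruction in the notation of this section, not a verbatim copy of the paper's own equations) is

p^{\mu}\,\partial_{\mu} f_{i}(x,p) = \mathcal{C}[f_{q},f_{g},f_{Q}](x,p), \qquad i = q,\, g,\, Q ,

with the heavy-quark collision integral built from the 2 → 2 amplitudes,

\mathcal{C}[f_Q](x,p) = \frac{1}{2E_{p}} \sum_{j=q,\bar{q},g} \int d\Gamma_{k,p',k'} \, |\mathcal{M}_{j+Q}|^{2} \, (2\pi)^{4}\delta^{4}(p+k-p'-k') \left[ f_{Q}(p')f_{j}(k') - f_{Q}(p)f_{j}(k) \right] ,

where d\Gamma_{k,p',k'} denotes the Lorentz-invariant phase-space measure of the incoming light parton and of the two outgoing particles.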
In our simulations, the hadronization hypersurface is given by the space-time region where the local temperature of a cell falls below the critical temperature T = 155 MeV, in agreement with the statistical hadronization model [64]. The corresponding distribution functions are employed to undergo the hadronization process by coalescence plus fragmentation, an approach that has been widely discussed and employed for the charm quark; for details see Refs. [65][66][67]. In the following, we describe the main features and parameters for the bottom case. For the case of the bottom quark, following Refs. [65,66,68,69] as for the charm quarks, we adopt as B meson Wigner function a Gaussian shape in relative coordinates, where x r = x 1 − x 2 and p r = (m 2 p 1 − m 1 p 2 )/(m 1 + m 2 ). The σ r are the widths, which can be related to the root mean square charge radius of the hadron, with Q i the charge of the i-th quark. The width parameter σ r depends on the hadron species and can be calculated from the charge radius of the hadrons, taken from the quark model [70,71]; the corresponding widths for B mesons are shown in Table I, which lists the mean square charge radii in fm 2 and the width parameters σ p,i in GeV. Resonant states are suppressed according to the statistical thermal weight with respect to the ground state. We consider the B * (5325) = lb and the B 1 (5721), B(5840) 0 = sb states. Finally, as for the charm quark sector [66,69], an overall normalization of the coalescence probability is fixed to guarantee that in the limit p → 0 all the bottom quarks hadronize by coalescence into a heavy hadron. This is imposed by requiring that the total coalescence probability gives lim p→0 P tot coal (p) = 1. In our hybrid hadronization approach, the bottom quarks that do not hadronize via coalescence are converted to hadrons via fragmentation, with probability for each bottom quark given by P frag (p T ) = 1 − P coal (p T ). Therefore, in order to obtain the final hadron spectra coming from fragmentation, we evaluate the convolution, integrating over the momentum fraction z, between the momentum distribution of the heavy quarks which do not undergo coalescence and the Kartvelishvili fragmentation function [72] as implemented in FONLL, where z = p had /p b is the momentum fraction carried by the heavy hadron formed in the heavy quark fragmentation and α is a parameter that we determine to reproduce the experimental HF meson spectra in pp collisions measured at the LHC; in particular, we obtain a value α = 25.
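As a concrete, purely illustrative sketch of the fragmentation branch just described, the snippet below samples the momentum fraction z from the Kartvelishvili form D(z) ∝ z^α (1 − z) with α = 25 and applies it only to the bottom quarks not assigned to coalescence; the coalescence probability used here is a simple placeholder, not the Wigner-function overlap of the text.

```python
import numpy as np

rng = np.random.default_rng(0)
ALPHA = 25.0  # Kartvelishvili exponent quoted in the text for B mesons

def sample_z(n):
    """Sample z from D(z) ~ z^alpha (1-z), i.e. a Beta(alpha+1, 2) distribution."""
    return rng.beta(ALPHA + 1.0, 2.0, size=n)

def p_coal(pt):
    """Placeholder coalescence probability: 1 at pt -> 0, falling with pt.
    The real P_coal comes from the Wigner-function overlap, not from this form."""
    return np.exp(-pt / 3.0)

def hadronize(pt_bottom):
    """Return B-meson pT values from bottom-quark pT values (fragmentation branch only)."""
    pt_bottom = np.asarray(pt_bottom, dtype=float)
    frag = rng.random(pt_bottom.size) > p_coal(pt_bottom)   # quarks that do not coalesce
    z = sample_z(frag.sum())
    return pt_bottom[frag] * z                               # p_had = z * p_b

if __name__ == "__main__":
    pts = rng.exponential(scale=4.0, size=10000)      # toy bottom-quark spectrum
    print(f"mean z = {sample_z(100000).mean():.3f}")  # ~ (alpha+1)/(alpha+3) ~ 0.93
    print(f"{hadronize(pts).size} B mesons from fragmentation")
```

III.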
NUCLEAR MODIFICATION FACTOR RAA AND ANISOTROPIC FLOWS v2,3 IN BOTTOM SECTOR We have generated our prediction for B meson observables employing the QPM modeling for the HQ interaction already employed to study the D meson dynamics [7,49,54], described in Sect.II.We stress that the model parameters and in particular the coupling entering the HQ scattering matrices, have not been modified going from charm to bottom, hence the difference come merely from the different mass value.In our model, event-by-event fluctuations generate an initial profile in the transverse plane ρ ⊥ (x ⊥ ), that changes in every event, and which is responsible for the initial anisotropy in coordinate space.This anisotropy is quantified in terms of eccentricities n of the initial fireball.The charm or bottom quarks interact with the QGP constituents during the evolution; they convert the initial eccentricity of the overlap region into a final anisotropy in momentum space, that is characterised in terms of the Fourier coefficients v n (p T ).The results shown in this paper have been obtained using the two particle correlation method to calculate elliptic (v 2 ) and triangular flow (v 3 ) [73,74].In order to guarantee a numerical solution that reaches convergence and stability for R AA (p T ) and v 2,3 (p T ) up to p T ∼ 10 GeV/c, we have used a total number of test particles N test = 4 • 10 5 per unit of rapidity and a lattice discretization with ∆x = ∆y = 0.5 fm and ∆η = 0.1.The only available observable to infer information about B mesons production in nucleus-nucleus collisions are the nuclear modification factor and elliptic flow of leptons coming from semi-leptonic B mesons decay in P b − P b collisions at √ s N N = 5.02 TeV at various centralities.In this section, we first show the comparison between our results and the experimental data for the nuclear modification factor R AA of electrons from semi-leptonic B meson decay and we provide predictions for B mesons R AA at LHC energies for two different centrality classes.The term "electron" throughout this paper is used for indicating both electrons and positrons.We evaluate the nuclear modification factor R AA (p T ) as the ratio between the particle spectrum in nucleus-nucleus collisions and the spectrum in proton-proton collisions scaled with the number of binary collisions.The modification of the parton distributions in nuclei, referred to as shadowing, has been taken into account by the parametrization provided in EPS09 [75].In order to evaluate the R AA (p T ), we have implemented in our code the decay channel B(→ c) → e taking into account the semi-leptonic decay matrix weighted by the different branching ratio of the decay.In particular, we consider the semi-leptonic decay channel χ b → χ c + e + ν e characterized by a BR ≈ 10% and where χ c describes the various species of D mesons.In Fig. 1, we show the nuclear modification factor R AA for bottom quark together with our prediction for B mesons and electrons from B meson decay in P bP b √ s = 5.02 T eV collisions in both 0 − 10% and 30 − 50% centrality classes.The B mesons R AA has a behaviour similar to the one observed in the charm sector [7].The hadronization via coalescence plus fragmentation gives, as expected, a shift of the peak to higher momenta which is smaller with respect to the one estimated with the same model for D mesons.This effect comes from the coalescence process that form B meson at a certain momentum combining bottom quarks with light quarks at lower momenta. 
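For reference, the nuclear modification factor defined in words above is commonly written as

R_{AA}(p_T) = \frac{ dN_{AA}/dp_T }{ \langle N_{coll} \rangle \, dN_{pp}/dp_T } ,

which is the form assumed here. The two-particle correlation method used for v 2 and v 3 can likewise be illustrated with a minimal Q-vector (two-particle cumulant) sketch; this is a generic toy, not the actual analysis code of Refs. [73,74]:

```python
import numpy as np

def vn_two_particle(phis_per_event, n=2):
    """Two-particle cumulant flow v_n{2} from lists of azimuthal angles, one list per event.

    Uses the Q-vector identity <2> = (|Q_n|^2 - M) / (M(M-1)) per event, averaged with
    pair-multiplicity weights; v_n{2} = sqrt(<<2>>)."""
    num, den = 0.0, 0.0
    for phis in phis_per_event:
        phis = np.asarray(phis, dtype=float)
        m = phis.size
        if m < 2:
            continue
        qn = np.exp(1j * n * phis).sum()
        num += abs(qn) ** 2 - m        # sum of cos(n*dphi) over distinct pairs
        den += m * (m - 1.0)
    mean2 = num / den
    return np.sqrt(mean2) if mean2 > 0 else 0.0

if __name__ == "__main__":
    # Toy events: dN/dphi ~ 1 + 2*0.1*cos(2*phi), so the input elliptic flow is v2 = 0.1.
    rng = np.random.default_rng(1)
    events = []
    for _ in range(2000):
        phi = rng.uniform(0, 2 * np.pi, 500)
        keep = rng.random(500) < (1 + 0.2 * np.cos(2 * phi)) / 1.2   # accept-reject
        events.append(phi[keep])
    print(f"v2{{2}} ~ {vn_two_particle(events, n=2):.3f}")           # ~ 0.1
```

In the toy example the particles are generated with an input v 2 of 0.1, which the estimator recovers.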
We have also evaluated the v 2 (p T ) and v 3 (p T ) with the two-particle correlation method for both B mesons and electrons from B meson decay, as shown in Fig. 2 and Fig. 3 for the 0 − 10% and 30 − 50% centrality classes respectively. We predict a non-zero elliptic flow v 2 and triangular flow v 3 for bottom quarks in both the 0 − 10% and 30 − 50% centrality classes. This suggests that bottom quarks take part in the collective motion in a way similar to what is observed for charm quarks [49,77], but with an efficiency of conversion of the eccentricity ε 2 into v 2 that is only about 15% smaller in most central collisions and about 40% smaller at 30 − 50% centrality. Regarding the conversion of ε 3 , we find an efficiency of conversion for the bottom quark about 30% smaller than for the charm quark for v 3 at both 0 − 10% and 30 − 50% centralities. We observe that moving from central to peripheral collisions we get an enhancement of the elliptic flow as a consequence of the geometry of the overlapping region in more peripheral collisions, which are characterized by a larger eccentricity ε 2 (ε 2 ≃ 0.13 at 0−10% and ε 2 ≃ 0.42 at 30−50%). On the other hand, our results show a comparable v 3 for the 0 − 10% and 30 − 50% centrality classes, suggesting the triangular flow is generally related to the event-by-event fluctuations of the triangularity ε 3 of the overlap region (ε 3 ≃ 0.11 at 0−10% and ε 3 ≃ 0.21 at 30−50%). Similar results have been observed in the light quark sector and recently in the charm quark sector [49,58,78]. Comparing the green dashed line with the blue solid lines, we observe that the role of the hadronization is to give an enhancement of the anisotropic flows in both centralities, which is largely lost in the electrons from B decays, where the v 2 and v 3 at p T > ∼ 3 GeV in both centralities are on average at least 25 − 30% smaller than the B meson ones. Furthermore, our simulations show a good agreement between the v 2 and the available experimental data in the 30−50% centrality class from the ALICE collaboration [79], suggesting that, despite the large mass, the coupling of bottom quarks to the bulk medium is strong enough to collectively drag them in the expanding fireball. Therefore, within the current data uncertainty our QPM approach is able to correctly predict the available data, not only for D mesons [7], but also for B mesons. This confirms that the QPM provides a reasonably good description of the HQ dynamics and in particular of the evolution with mass of the transport coefficient. IV. SPATIAL DIFFUSION COEFFICIENT FOR CHARM AND BOTTOM One usually compares the information on the HQ interaction in terms of the heavy quark spatial diffusion coefficient D s , a quantity that, measuring the space dispersion per unit time, can be calculated also in lQCD and is defined in the limit p → 0. Furthermore, it can be related to the thermalization time of HQs through Eq. 7. In our QPM approach, the scattering matrices for the interaction processes between the bulk and the heavy quarks are the same for charm and bottom, with the only difference coming from their masses (M c = 1.4 GeV for charm quarks and M b = 4.6 GeV for bottom quarks). In Fig. 4, the spatial diffusion coefficient 2πT D s for both charm (black solid line) and bottom quarks (green dashed line) is shown in comparison to the available lQCD calculations. We can see from Fig.
4 that the D s of the QPM for the charm quark (black solid line) and the lQCD data show a good agreement within the current uncertainties, as already pointed out in several Refs. [1,7,80]. However, the lQCD data up to 2020 are obtained in the infinite M Q limit and in a quenched medium, while the phenomenological QPM approach is for finite HQ mass and in a medium including dynamical fermions. Even if commonly discarded until now, it is important to study the D s dependence on the heavy quark masses in our QPM approach in order to appropriately compare the results to the lQCD calculations. As mentioned above, the charm mass, even if quite large with respect to Λ QCD , is still comparable to the average momentum of the medium ∼ 2 − 3 T and to the exchanged momentum q ∼ g T [1,30]. Hence, it can be envisaged that its mass scale is not yet large enough to reach the limit where the thermalization time scales as τ th ∼ (M Q /T ) D s (as usually assumed) and D s is mass independent. In fact, in Fig. 4, we can see that at T c the D s (T ) in the QPM for the charm quark (black solid line) is about 50% larger than the bottom quark one (green dashed line), which is a significant breaking of the mass independence of D s (T ) and implies a significant breaking of the M/T scaling for the thermalization time τ th . The 2πT D s (T ) shown in Fig. 4 for the charm quark corresponds to an average thermalization time at low momenta of about 5 fm/c in the range of temperatures 1 − 2 T c . This would lead to an estimated bottom relaxation time τ th (b) = (M b /M c ) τ th (c) ∼ 3.3 τ th (c) ∼ 16.5 fm/c, according to the τ th ∼ M scaling in the large mass limit. Instead, the decrease of D s (T ) with the HQ mass in the QPM implies a bottom thermalization time τ th (b) ∼ 11 fm/c, significantly smaller than the one extrapolated by an M/T scaling of τ th (c) and essentially only slightly larger than a factor of 2 w.r.t. τ th (c). FIG. 4: Spatial diffusion coefficient Ds(T ) for the charm quark (solid black line) and the bottom quark (dashed green line), labelled QPM (Catania) Charm and QPM (Catania) Bottom, compared to the lQCD expectations [36][37][38][39][40] [Kaczmarek (2014), Francis (2015), Brambilla (2020), Altenkort (2023)]; T c = 0.155 GeV. In the same panel we show the Ds(T ) of the charm quark suitably scaled in order to reach the saturation value for M → ∞ [QPM (Catania) M → ∞]. For more details see Fig. 5. This aspect can be more clearly visualized in Fig. 5, where we plot by a red dot-dashed line the ratio between D s (M charm ) and D s (M ) as a function of M/M charm in both the pQCD and QPM approaches at T = T C . We can see that, in the region of the charm and bottom masses, D s is strongly mass dependent and reaches a saturation value only at considerably larger masses, with D s (M charm )/D s (M → ∞) ≃ 1.9 in the QPM. The effect, as can be expected, is stronger for the QPM, which incorporates non-perturbative dynamics, with respect to the case of pQCD, where we find D s (M charm )/D s (M → ∞) ≃ 1.4. In the right panel of Fig. 5, we show how the ratios D s (M charm )/D s (M bottom ), D s (M bottom )/D s (M * ) and D s (M charm )/D s (M * ) evolve as a function of temperature, where the value M * = 15 GeV (M/M charm > 10) represents the mass of a fictitious super-heavy partner approaching the M Q → ∞ limit. We note that the bottom mass scale is quite close to the infinite mass limit with respect to the charm quark case, with a discrepancy of only about 20 − 25%; this differs from the discrepancy between D s (M charm ) and D s (M bottom ), which is about 50% at T = T c = 0.155 GeV and not smaller than 30% at higher temperatures, while, as said, D s (M charm ) may lie up to about a factor of 1.5-2 above the D s (M → ∞) as evaluated in lQCD.
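The thermalization times quoted in this paragraph follow from the relation between D s and τ th referred to above (Eq. 7, not reproduced here). Assuming the standard Einstein-type form τ th = (M Q /T ) D s (our assumption for the elided relation, and the one the M Q /T scaling argument presupposes), the quoted values can be reproduced with a few lines; the 2πT D s inputs below are hypothetical round numbers chosen only to illustrate the conversion.

```python
import math

HBARC = 0.1973  # GeV fm

def tau_th_fm(mass_gev, temp_gev, two_pi_T_Ds):
    """Thermalization time in fm/c from the dimensionless 2*pi*T*D_s.

    Assumes the Einstein-like relation tau_th = (M/T) * D_s, i.e. the p -> 0 limit
    discussed in the text; at finite momentum M should be replaced by the kinetic energy."""
    Ds = two_pi_T_Ds / (2.0 * math.pi * temp_gev)   # D_s in GeV^-1
    return (mass_gev / temp_gev) * Ds * HBARC       # convert GeV^-1 to fm

if __name__ == "__main__":
    M_C, M_B = 1.4, 4.6   # GeV, as in the text
    T = 0.2               # GeV, representative of the 1-2 T_c window
    # Hypothetical 2*pi*T*D_s values, chosen only to illustrate the scaling:
    print(f"charm : {tau_th_fm(M_C, T, 4.5):.1f} fm/c")   # ~ 5 fm/c ballpark
    print(f"bottom: {tau_th_fm(M_B, T, 3.0):.1f} fm/c")   # smaller 2piTDs -> ~11 fm/c
```

In Fig.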
4, going back to the comparison to lQCD calculations, we have plot by red dot-dashed line the spatial diffusion coefficient in the M HQ → ∞ within the QPM approach.We observe that the 2πT D s (T ) in the large mass limit is quite close to the new lQCD data (red triangles [40]) which are obtained performing calculations in 2+1 flavours QCD with dynamical fermions differently from the other lQCD data obtained in quenched approximations.Therefore, while the QPM D s (T ) for charm is quite close to the older lQCD data in quenched approximation, the D s (T ) implied by QPM doing the appropriate comparison in the infinite mass limit, has a better agreement with lQCD simulations evaluated taking into account dynamical fermions that are the more pertinent one to compare to. V. CONCLUSION In this letter, we have studied the propagation of bottom quarks in the quark-gluon plasma (QGP) by means of an event-by-event Boltzmann transport approach.In particular, we have studied within the QPM approach the R AA and v 2,3 at LHC energies of B mesons and electrons from semi-leptonic B meson decay in different centrality class selection.The study has been developed without any tuning, employing the same parameters of the model namely the coupling, as in previous studies on D mesons [7,31].We find a good agreement with the available data on electrons for R AA (p T ) at 0 − 10% and v 2 (p T ) at 30 − 50% centrality, while for R AA (p T ) at 30 − 50% and v 2,3 at 0 − 10% and, in general directly for B mesons, data are not yet available.Our results suggest that bottom quark takes part in the collective expanding medium even if the large mass of bottom quark with respect to the charm one M B ∼ 3.3M C leads to a mass ordering effect on the collective flows resulting in a smaller but still significant value for both v 2 and v 3 of B meson that mainly comes directly from a similarly large v 2,3 of the b quark. Within kinetic theory in the M/T → ∞ limit thermalization time should scale linearly with M HQ , thus resulting in a D s (T ) parameter which is a mass independent measure of the QCD interaction.However, in the QPM approach the mass difference among charm and bottom leads to a D s (c)/D s (b) ratio of about a factor of 1.5 at T ∼ T c decreasing slightly to 1.3 at higher temperatures (T ∼ 3 − 4 T c ).This means that at the mass scale of charm quark the infinite mass limit used in lQCD is not yet reached; on the other hand, for the bottom mass scale, there is a discrepancy of only about a 20% w.r.t. 
the infinite mass limit. However, once the mass scale dependence is taken into account, the QPM approach appears to be in satisfactory agreement with the most recent lQCD calculations that include dynamical fermions [40], differently from the previous lQCD data in the quenched approximation. To our knowledge, this is the first time this aspect is explicitly discussed and quantified. However, it would be appropriate that the various models of heavy quark dynamics that aim to evaluate D s (T ) by comparison with lQCD calculations explicitly evaluate the mass dependence of D s (T ) implicitly present in their approach, with the aim of achieving a more pertinent and solid comparison to the new and upcoming lQCD calculations. Finally, we mention that a first estimate of the thermalization time for the bottom quark leads to values τ th (b) ∼ 10 − 12 fm/c, which is about a factor of 2 larger than for charm and thus quite smaller than the factor of 3.3 suggested by a simple M HQ /T scaling. We however warn that such estimates of the thermalization time through Eq. 7 hold in the p → 0 limit, while in the realistic case at finite momentum one should substitute M HQ in Eq. 7 with the HQ kinetic energy. This, for a realistic charm quark distribution at LHC energy, means an increase of about a factor of 2, while for bottom only about 30%. Hence, estimating the thermalization time of HQs in uRHICs directly from the lQCD 2πT D s (T ), discarding both the finite mass and the finite momentum, can lead to a significant underestimate of the thermalization time of the HQ, in particular for charm quarks.

FIG. 1: Nuclear modification factor RAA(pT ) for bottom quarks (green dashed line), B mesons from coalescence plus fragmentation (blue solid line) and electrons from B meson decay (red dot-dashed line) in P bP b √ s = 5.02 T eV collisions at mid-rapidity and in the 0 − 10% (left panel) and 30 − 50% (right panel) centrality classes. The electron nuclear modification factor at 0 − 10% is compared to the experimental measurements. Data taken from Ref. [76].

FIG. 3: Elliptic flow v2(pT ) (left) and triangular flow v3(pT ) (right) for bottom quarks, B mesons and electrons from B meson decays in P bP b √ s = 5.02 T eV collisions in the 30−50% centrality class. Same legend as in Fig. 1. The elliptic flow v2(pT ) of electrons is compared to the available experimental data from [79].

FIG. 5: (left) Ratio Ds(M charm )/Ds(M ) as a function of M/MC for both the QPM and pQCD approaches at T = TC . (right) Ratio among the spatial diffusion coefficients Ds calculated within a QPM interaction for three different heavy-quark masses, MHQ = 1.4, 4.6, 15 GeV.

TABLE I: B meson mean square charge radii r 2 ch (in fm 2 ) and width parameters σ p,i (in GeV), taken from the quark model [70,71].
6,939.8
2023-04-06T00:00:00.000
[ "Physics" ]
How Does Government Spending Affect Labour Force Participation and Unemployment Within the WAMZ Countries? This study utilizes static and dynamic models in examining the short run and long run impacts of government spending on labour force participation and unemployment within the West African Monetary Zone (WAMZ) over the period 1991-2018. While the static models are estimated using the Pooled Ordinary Least Squares (POLS) technique and the Least Squares Dummy Variables (LSDV) technique, the dynamic models are estimated using the GMM-IV technique. The GMM-IV technique better addresses endogeneity issues relative to the other techniques utilized and also, the parameters obtained from the technique are confirmed to be consistent by the Arellano-Bond test for zero autocorrelation. Accordingly, this technique is given preference in this paper. The results from the technique reveal that government spending increases the labour force participation rate but has an ambiguous impact on unemployment rate. In the long run, the parameter estimates largely remain unchanged in terms of their sign and significance; however, they increase in size. Based on these findings, this paper firstly recommends that policy makers intensify efforts in increasing government spending; as a reduction may impact negatively on the labour force participation rate. Secondly, this paper recommends the formulation and implementation of fiscal policies that are robust enough to reduce the unemployment rate as they increase the labour force participation rate. Introduction This paper examines the impact of government spending on labour force participation and unemployment within the West African Monetary Zone (WAMZ). The need for this study arises from two profound reasons. Firstly, the literature is not at a consensus as regards the impact of government spending on labour force participation and unemployment. While some findings and theories reveal that government spending has a positive impact on these variables (see Abubakar, 2016;Murwirapachena et al., 2013;Brückner & Pappa, 2013;Calidoni, 2005) others suggest the existence of a negative impact (see Bidemi, 2016;Cottarelli, 2012;Pope, 2017;Ahearn et al., 2006). The lack of unanimity within the literature justifies the need for further studies. Secondly, the empirical literature on labour supply in WAMZ, to which this study contributes directly does not consider the impact of government spending on labour force participation. The few that do, fail to rigorously address endogeneity bias and heterogeneity issues. Accordingly, this study contributes to the literature as follows. Firstly, it examines the short run and long run impacts of government spending on labour participation and unemployment using three estimation techniques viz. the Pooled Ordinary Least Squares (POLS), Least Squares Dummy Variables (LSDV) and the GMM-IV technique. While the POLS method deals with potential endogeneity arising from the omission of observed variables, the LSDV technique addresses potential endogeneity arising from the omission of unobserved variables and the GMM-IV estimation technique deals with potential endogeneity arising from reverse causality between government spending and labour force participation rate as well as between the former and unemployment. Secondly, this paper examines the level and slope effects of the global financial crisis. Thirdly, this study focusses on the effect of government spending on unemployment within the WAMZ countries. 
The results obtained generally reveal that government spending has a positive impact on the labour force participation rate but an ambiguous impact on unemployment. As such, government spending may be increasing the labour force participation rate but not necessarily reducing unemployment. In the long run, the parameter estimates largely remain unchanged in terms of their sign and significance. However, they increase in size. Based on these findings, this paper firstly recommends that policy makers intensify efforts in increasing government spending, as a reduction may impact negatively on the labour force participation rate. Secondly, this paper recommends the formulation and implementation of fiscal policies that are robust enough to reduce the unemployment rate as they increase the labour force participation rate. The rest of this study is organized as follows. Section 2 reviews the literature. Section 3 provides the empirical models and methodology. Section 4 presents the data analysis. Section 5 summarizes and concludes. Literature Review This section analyses both the theoretical and the empirical literature on government spending, unemployment and the labour force participation rate. Theoretical Literature The labour supply literature predicts that an increase in social spending disincentivises recipients of such spending from working. More specifically, this strand of the literature suggests that social spending shifts the budget line of an agent and, if leisure is a normal good, such an increase would subsequently reduce the number of hours an agent needs to work to attain a stipulated standard of living. The aftermath of this may be a reduction in hours supplied by labour and a potential rise in the unemployment rate (see Røed & Strøm, 2002; Aaberge & Colombino, 2006). Consistent with the labour supply literature are the orthodox neoclassical economists' wage-based theories on unemployment. This strand of the literature argues that the level of unemployment is a function of the prevailing wage rate. When the wage rate rises so high that it exceeds the equilibrium level, labour supply equally rises considerably and ultimately surpasses labour demand, thereby resulting in a significant rise in the unemployment rate. Nonetheless, this strand of the literature argues that the labour market possesses a self-correcting mechanism which ultimately reduces the unemployment rate. Due to this self-correcting mechanism, the orthodox neoclassical economists suggest that discretionary fiscal expansions are not required to address unusually high rates of unemployment. The wage-based theories however note that the wage mechanisms of the labour market may not be effective enough to sustain market-clearing wages, and as such, unemployment hardly reduces to zero (see Pigou, 1933; Say, 1971). Nonetheless, the general theory on employment, interest and money of Keynes (1936) reveals that during periods of economic downturns, when many workers get retrenched, the self-correcting mechanism of the labour market may fail to promptly address unemployment issues. In such periods, government intervention through public spending is required to reduce the unemployment rate. Keynes (op. cit.) argues that the government spends copiously towards establishing vital social amenities; and by so doing, many workers regain employment, labour supply increases and the unemployment rate ultimately reduces.
Empirical Literature This sub-section examines the empirical literature on government spending, unemployment and the labour force participation rate. The Empirical Literature on the Nexus Between Government Spending and Unemployment While there exists a strand of the literature which finds that government spending reduces the unemployment rate, there equally exists another strand which arrives at diametrically opposite findings. Both strands are discussed below. 2.2.1.1 Studies That Find That Government Spending Reduces Unemployment Bidemi (2016) utilizes the Error Correction Model (ECM) in investigating the impact of government expenditure on unemployment in Nigeria over the period 1980-2013. The study finds that government spending reduces unemployment. This study deserves commendation for robustly examining the long run impact of government spending on unemployment. Nonetheless, the study fails to adequately address potential endogeneity bias arising from reverse causality between government expenditure and unemployment. Also, Cottarelli (2012) in a largely descriptive study on a sample comprising several developing countries shows that government spending in the form of unemployment benefits reduces unemployment when the benefits are distributed over short durations. This study deserves credit for critically reviewing the existing literature on unemployment. Nonetheless, this research fails to conduct adequate formal econometric tests to justify its findings. The graphs and figures provided at best reveal the existence of a correlation between government spending and unemployment, and not necessarily causation. Studies That Find That Government Spending Does Not Reduce Unemployment Abubakar (2016) employs the Vector Autoregressive (VAR) methodology in investigating the impact of fiscal policy shocks on unemployment in Nigeria over the period 1981-2015. The findings of the study oppose the argument that government spending reduces unemployment. This study deserves commendation for rigorously highlighting the dynamic behaviour of unemployment through the VAR model utilized. Nonetheless, the results obtained are not free from the standard shortcomings often ascribed to VAR models (see Chari et al. 2007 and Ramey 2009). Also, Murwirapachena et al. (2013) utilize a Vector Error Correction Model (VECM) in examining the impact of fiscal policy on unemployment in South Africa over the period 1980-2010. The study finds that government consumption spending increases the rate of unemployment within South Africa. Nonetheless, the time-span of the dataset utilized in this study fails to capture recent years and as such, the results obtained may have limited implications for current trends. The Empirical Literature on the Nexus Between Government Spending and Labour Force Participation Just as there exists a strand of the empirical literature which finds that government spending reduces the labour force participation rate, there equally exists another which finds that government spending increases the labour force participation rate. Both strands are discussed below. Studies That Find That Government Spending Reduces Labour Force Participation Pope (2017) in a descriptive study on the relationship between government expenditure and economic growth shows that social spending reduces the labour force participation rate.
The study reveals that social spending reduces the incentive to work and encourages low income groups to be overly reliant on government handouts. Nonetheless, this study fails to conduct econometric tests to validate the negative relationship between government spending and labour force participation. Similarly, Ahearn et al. (2006) employ bivariate probit regressions in investigating the impact of government payments on the off-farm labour participation rate. The study finds that such payments do not increase the labour force participation rate. Unlike other existing studies, this research deserves commendation for disaggregating the government payments into both coupled and decoupled payments. Studies That Find That Government Spending Does Not Reduce Labour Force Participation Brückner and Pappa (2013) employ a structural VAR approach in investigating the impact of fiscal expansions on a panel of 10 OECD countries. The study finds that government spending has a positive impact on the labour force participation rate. This study deserves commendation for employing a variety of VAR specifications in verifying its results. Nonetheless, the study ignores key variables such as trade openness and population density. Similarly, Calidoni (2005) utilizes fixed effects panel methods in investigating the effects of government expenditure on labour force productivity growth within OECD countries over the period 1976-2000. The study finds that healthcare spending and social contributions have a significantly positive impact on labour productivity growth. The research goes on to provide a time trend analysis which opposes the argument that government spending reduces the labour force participation rate. This study deserves credit for conducting rigorous robustness tests validating the results obtained. Nonetheless, the study fails to adequately address potential endogeneity arising between the public spending variable and the individual effects. Empirical Model The empirical methodology adopted in this study is similar to those of O'Nwachukwu (2017) and Ogbeide et al. (2016). However, the contributions of this paper are different from those of these studies. Unlike O'Nwachukwu (2017), this research adopts both static and dynamic models in investigating the impact of government spending on labour force participation within the WAMZ countries. While the static model is estimated using the POLS and LSDV methods, the dynamic model is estimated using the GMM-IV technique. Static Model (1) (2) For each country i at time t, the dependent variables are the labour force participation rate and the unemployment rate respectively; total government spending is the main regressor, and a set of control variables, country and time fixed effects, the relevant parameter estimates and an error term complete the specification. Each of the variables is expressed in logarithmic form; as such, the model captures the elasticity of the dependent variables to percentage changes in the explanatory variables. The above static model will be estimated using the POLS and LSDV techniques. Dynamic Model (3) Equations (2) and (3) include the lagged dependent variable and as such make it possible to identify the long run impact of government spending on labour force participation and unemployment.
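The display equations (1)-(3) did not survive extraction; a schematic reconstruction consistent with the verbal description above (in our own notation, which need not match the authors') is

\ln y_{it} = \beta_{0} + \beta_{1}\,\ln GOV_{it} + \gamma' \ln X_{it} + \mu_{i} + \lambda_{t} + \varepsilon_{it} \quad \text{(static model, } y = \text{LFPR or UNEM)}

\ln y_{it} = \rho\,\ln y_{i,t-1} + \beta_{1}\,\ln GOV_{it} + \gamma' \ln X_{it} + \mu_{i} + \lambda_{t} + \varepsilon_{it} \quad \text{(dynamic model)}

where X_{it} collects the control variables, \mu_{i} and \lambda_{t} are the country and time fixed effects, and \varepsilon_{it} is the error term. In the dynamic specification the long-run effect of government spending is \beta_{1}/(1-\rho), which, for 0 < \rho < 1, exceeds the short-run coefficient; this is consistent with the larger long-run estimates reported in the results section.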
In order to robustly address potential endogeneity between government spending and the dependent variables, equation (2) is estimated with the GMM-IV estimation technique of Arellano and Bond (1991). Testable Hypotheses Due to the absence of a consensus within the literature about the impact of government spending on unemployment and the labour force participation rate, this paper adopts a two-tailed test. The hypotheses are stated below: H0: Total government spending has no significant impact on unemployment and the labour force participation rate within the WAMZ countries. H1: Total government spending has a significant impact on unemployment and the labour force participation rate within the WAMZ countries. Endogeneity This study robustly addresses potential endogeneity issues as follows. Firstly, the POLS method - through the inclusion of control variables - deals with potential endogeneity arising from the omission of observed variables. Secondly, the LSDV technique - through the inclusion of the time and country fixed effects - addresses potential endogeneity arising from the omission of unobserved variables (such as heterogeneity across countries and policy changes over time). Thirdly, the GMM-IV estimation technique, through the inclusion of the lagged dependent variable, deals with potential endogeneity arising from reverse causality between government spending and the labour force participation rate as well as between the former and unemployment (a simplified illustration of this instrumenting strategy is sketched at the end of this sub-section). Sample Selection This paper examines the impact of government spending on labour force participation and unemployment within the West African Monetary Zone (WAMZ) over the period 1991-2018. The WAMZ countries include: Gambia, Ghana, Guinea, Liberia, Nigeria and Sierra Leone. The study employs annual data, as is popular in the literature examining government spending within the WAMZ countries. As such, the use of annual data facilitates comparability. Also, this study adopts panel data analysis due to the fact that the WAMZ countries are similar in terms of their political and economic institutions. Dependent Variables The Labour Force Participation Rate represents the first dependent variable. It is measured as the ratio of the working population to the working age group. Similarly, Unemployment captures the number of unemployed individuals expressed as a percentage of the labour force. Data are sourced from the World Bank's World Development Indicators. Primary Independent Variable The primary explanatory variable is captured by Total Government Spending. It is measured as the share of government spending within the GDP. Data are sourced from the World Bank's World Development Indicators. Control Variables Based on the existing theories within the literature (see Philips, 1987; Okun, 1983), the control variables included in the model are: Inflation, Population Density, Natural Resource Rent, Trade Openness, Investment and Foreign Direct Investment inflows. Data are sourced from the World Bank's World Development Indicators. Table 1 reveals that the highest labour force participation rate contained in the data is 75 percent and the lowest is 54.7 percent. Similarly, the highest government spending as a percentage of GDP is 19.5 percent and the lowest is 0.9 percent.
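As referenced above, a simplified illustration of the instrumenting strategy for the dynamic model is sketched below. It is an Anderson-Hsiao-type first-difference IV estimator rather than the full Arellano-Bond GMM-IV used in the paper, the variable and column names are hypothetical, and the data are simulated purely to show that the construction recovers known parameters.

```python
import numpy as np
import pandas as pd

def fd_iv(df, y="lnLFPR", x="lnGOV"):
    """First-difference IV sketch for a dynamic panel y_it = rho*y_i,t-1 + beta*x_it + mu_i + e_it.

    Differencing removes the country effect mu_i; the differenced lag D.y_i,t-1 is then
    instrumented by the further lag D.y_i,t-2 (an Anderson-Hsiao-type simplification of
    the Arellano-Bond GMM-IV estimator used in the paper)."""
    d = df.sort_values(["country", "year"]).copy()
    d["dy"] = d.groupby("country")[y].diff()
    d["dx"] = d.groupby("country")[x].diff()
    d["dy_l1"] = d.groupby("country")["dy"].shift(1)
    d["dy_l2"] = d.groupby("country")["dy"].shift(2)
    d = d.dropna(subset=["dy", "dx", "dy_l1", "dy_l2"])

    X = d[["dy_l1", "dx"]].to_numpy()   # regressors (dy_l1 endogenous)
    Z = d[["dy_l2", "dx"]].to_numpy()   # instruments (just-identified IV)
    b = np.linalg.solve(Z.T @ X, Z.T @ d["dy"].to_numpy())
    return {"rho": b[0], "beta_gov": b[1]}

if __name__ == "__main__":
    # Simulated panel with known rho = 0.9 and beta = 0.02, to verify the estimator.
    rng = np.random.default_rng(3)
    rows = []
    for c in range(6):                                  # six WAMZ-like countries
        mu, ylag = rng.normal(), 0.0
        for t, xt in enumerate(rng.normal(size=60)):
            yt = 0.9 * ylag + 0.02 * xt + mu + 0.01 * rng.normal()
            rows.append({"country": c, "year": t, "lnLFPR": yt, "lnGOV": xt})
            ylag = yt
    print(fd_iv(pd.DataFrame(rows)))   # approximately rho ~ 0.9, beta_gov ~ 0.02
```

The full Arellano-Bond estimator extends this idea by using all available lags as instruments in a GMM framework, which is also what permits the Arellano-Bond test for zero autocorrelation cited in the results.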
Also, the table reveals that the first-order autocorrelation coefficients for both labour force participation rate and unemployment are positive and large in size further justifying the inclusion of lagged dependent variables in the models adopted. This study accounts for this through the GMM-IV estimation technique. Main Results from POLS, LSDV and GMM-IV Estimators As regards the results obtained using the POLS and LSDV techniques, this paper deals with autocorrelation and heteroscedasticity issues through the use of the Newey-West standard errors (Newey and West, 1987). Also, unit root tests were carried out and the tests revealed that the variables utilized in the study are stationary. Table 2 shows the results obtained from the regression of labour force participation rate on government spending using the POLS, LSDV and GMM-IV estimation techniques. The results from the POLS estimation technique reveals that a percentage increase in the government spending brings about a 0.072 percent rise in the labour force participation rate at 1 percent significance level. Compared to the POLS, results from the LSDV and GMM-IV techniques show that the impact of a percentage increase in government spending on labour force participation rate remains positive and significant at 1 percent significance level; however, the size of the parameter estimate reduce slightly to 0.0176 and 0.002 respectively. Table 2 also shows that the models utilized in the three estimation techniques are generally significant at the 1 percent significance level. Also, Table 2 reveals that the fixed effects are generally significant at 1 percent significance level and this reveals the potential existence of heterogeneity across countries and policy changes over time; which the LSDV technique rigorously addresses. Additionally, Table 2 shows that the parameter estimates of the lagged dependent variable is positive and significant and this underscores the need for a dynamic model. Also, the Table reveals that the parameter estimates of the GMM-IV technique are confirmed to be consistent by the Arellano-Bond test for zero autocorrelation. Since, the GMM-IV technique better addresses endogeneity issues relative to the other techniques utilized, this technique is given preference in this paper. ISSN 2327-5510 2020 Moving on to the results obtained on the regression of government spending on unemployment using the POLS, LSDV and GMM techniques. The results generally indicate that the impact of government spending on unemployment is rather ambiguous. Specifically, the results from the POLS estimation technique shows that a percentage increase in the government spending brings about a 0.052 percent rise in the unemployment rate. Compared to the POLS, results from the LSDV and GMM-IV techniques show that the impact of a percentage increase in government spending on unemployment rate remains positive; however, the size of the parameter estimate reduce slightly to 0.0139 and 0.006 respectively. Nonetheless, these parameter estimates should be interpreted with caution as they are not significant. Source: Author's own computation. Note: The parentheses contain the standard errors. *p < .10. **p < .05. ***p < .01. Results for the Control Variables This sub-section provides a brief interpretation of the results obtained on the control variables based on the GMM-IV technique the preferred technique of this paper. 
Results for the Control Variables This sub-section provides a brief interpretation of the results obtained for the control variables, based on the GMM-IV technique, the preferred technique of this paper. As regards the results obtained from the regression of labour force participation on government spending, Table 2 reveals that a percentage increase in inflation brings about a 0.001 percent rise in the labour force participation rate at the 1 percent significance level. Also, a percentage increase in population density brings about a 0.012 percent rise in the labour force participation rate at the 1 percent significance level. Additionally, a percentage increase in GDP brings about a 0.011 percent reduction in the labour force participation rate at the 1 percent significance level. Likewise, a percentage increase in natural resource rents brings about a 0.002 percent reduction in the labour force participation rate at the 1 percent significance level. Similarly, a percentage increase in trade openness brings about a 0.001 percent reduction in the labour force participation rate at the 1 percent significance level. Although investment and foreign direct investment have a negative impact on the labour force participation rate, the parameter estimates obtained for both are not significant. Moving on to the results obtained from the regression of unemployment on government spending, Table 2 reveals that a percentage increase in inflation brings about a 0.001 percent rise in the unemployment rate. Also, a percentage increase in population density brings about a 0.173 percent decrease in the unemployment rate. Additionally, a percentage increase in GDP brings about a 0.075 percent increase in the unemployment rate. Likewise, a percentage increase in natural resource rents brings about a 0.007 percent reduction in the unemployment rate. Similarly, a percentage increase in trade openness brings about a 0.070 percent reduction in the unemployment rate. Also, a percentage increase in investment brings about a 0.027 percent decrease in the unemployment rate. Further, a percentage increase in foreign direct investment brings about a 0.001 percent decrease in the unemployment rate. Nonetheless, these results should be interpreted with caution: apart from trade openness and investment -which are both significant at the 1 percent significance level -the other parameter estimates are not significant. Government Spending, Labour Force Participation and Unemployment in the Long Run In this sub-section, this paper utilizes the GMM-IV estimator in examining the long run impact of the variables on unemployment and labour force participation.
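Although the paper does not spell out the formula, long-run effects in dynamic panels of this kind are conventionally obtained by scaling each short-run coefficient by one minus the coefficient on the lagged dependent variable. A sketch of that relationship, in generic notation rather than the paper's own, with an illustrative (assumed) autoregressive coefficient:

```latex
% Generic dynamic panel specification (notation is ours, not the paper's):
% y_{it} = \rho y_{i,t-1} + \beta g_{it} + \gamma' z_{it} + \mu_i + \varepsilon_{it}
\[
  \theta_{\mathrm{LR}} \;=\; \frac{\beta}{1-\rho}, \qquad |\rho| < 1 .
\]
% Illustration: a short-run estimate of 0.002 combined with an assumed \rho = 0.9
% gives 0.002/(1-0.9) = 0.02, the order of magnitude reported in Table 3.
```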
Table 3 shows that although the parameter estimates largely remain unchanged in terms of their signs and significance, their absolute values increase in size. More specifically, a percentage increase in government spending brings about a 0.020 percent increase in the labour force participation rate in the long run at the 1 percent significance level. The parameter estimates for the control variables obtained in this regression also reveal that a percentage increase in inflation brings about a 0.012 percent rise in the labour force participation rate in the long run at the 1 percent significance level. Also, a percentage increase in population density brings about a 0.123 percent rise in the labour force participation rate in the long run at the 1 percent significance level. Additionally, a percentage increase in GDP brings about a 0.109 percent reduction in the labour force participation rate in the long run at the 1 percent significance level. Likewise, a percentage increase in natural resource rents brings about a 0.019 percent reduction in the labour force participation rate in the long run at the 1 percent significance level. Similarly, a percentage increase in trade openness brings about a 0.019 percent reduction in the labour force participation rate in the long run at the 1 percent significance level. Both investment and foreign direct investment retain their negative impact on the labour force participation rate, and their parameter estimates remain non-significant in the long run. Turning to the results obtained for the long run impact of government spending on unemployment, Table 3 shows that a percentage increase in government spending brings about a 0.103 percent increase in the unemployment rate in the long run. The parameter estimates for the control variables obtained in this regression also reveal that a percentage increase in inflation brings about a 0.023 percent rise in the unemployment rate in the long run. Also, a percentage increase in population density brings about a 2.66 percent decrease in the unemployment rate in the long run. Additionally, a percentage increase in GDP brings about a 1.161 percent rise in the unemployment rate in the long run. Likewise, a percentage increase in natural resource rents brings about a 0.120 percent reduction in the unemployment rate in the long run. Similarly, a percentage increase in trade openness brings about a 1.081 percent reduction in the unemployment rate in the long run. Both investment and foreign direct investment retain their negative impact on the unemployment rate. Again, these parameter estimates should be interpreted with caution as they are not significant. Table 3. Government Spending, Labour Force Participation and Unemployment in the Long Run. Source: Author's own computation. Note: The parentheses contain the standard errors. *p < .10. **p < .05. ***p < .01. Controlling for the Global Financial Crisis In this sub-section, the 2007-2009 global financial crisis is included as an additional variable. Following Arestis and Phelps (2018), this research captures the crisis years using a dummy variable which takes the value of 1 during the crisis years and 0 outside that period. Also, an interaction term comprising the crisis dummy and government spending is included in the model.
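A minimal sketch of how such a crisis dummy and interaction term can be added to the specification, again with assumed column names and the 2007-2009 crisis window stated above:

```python
# Illustrative construction of the crisis dummy and its interaction with spending.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("wamz_panel_1991_2018.csv")

df["crisis"] = df["year"].between(2007, 2009).astype(int)  # 1 in 2007-2009, else 0
df["crisis_x_gov"] = df["crisis"] * df["gov"]              # slope-shift term

crisis_model = smf.ols(
    "lfpr ~ gov + crisis + crisis_x_gov + infl + popden + nrr + open + inv + fdi",
    data=df).fit(cov_type="HAC", cov_kwds={"maxlags": 2})

# 'crisis' captures the level effect; 'crisis_x_gov' the change in the spending slope.
print(crisis_model.params[["crisis", "crisis_x_gov"]])
```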
As regards the regression of the labour force participation rate on government spending, results from the POLS technique, as seen in Table 4, show that the level effect of the global financial crisis is 0.510 percent at the 1 percent significance level. This implies that, on average, the labour force participation rate increases by this percentage during the crisis years. Also, the slope effect of the global financial crisis is -0.220 percent at the 1 percent significance level. This implies that the impact of government spending on the labour force participation rate falls by 0.220 percent during the crisis years. Compared to the POLS results, the parameter estimates from the LSDV and GMM-IV techniques remain unchanged in terms of their signs; however, there are some differences in terms of their size and significance. Moving on to the regression of the unemployment rate on government spending, results from the POLS technique, as seen in Table 4, show that the level effect of the global financial crisis is 1.236 percent at the 1 percent significance level. This implies that, on average, the unemployment rate increases by this percentage during the crisis years. Also, the slope effect of the global financial crisis is -0.611 percent, also at the 1 percent significance level. This implies that the impact of government spending on the unemployment rate falls by 0.611 percent during the crisis years. Again, the other parameter estimates are similar to those obtained from the baseline regression model. Exclusion of Liberia from the Sample Due to data limitations with regard to Liberia, this study carries out robustness tests by excluding Liberia from the sample. As seen in Table 5, the parameter estimates obtained from the multiple estimation techniques utilized are largely similar to those presented previously. This confirms that the results obtained from the baseline regression are not affected by the data limitations experienced with regard to Liberia. Conclusion This paper examines the impact of government spending on labour force participation and unemployment within the West African Monetary Zone (WAMZ). The relevance of this paper derives from two crucial reasons. Firstly, the literature has not reached a consensus as regards the impact of government spending on labour force participation and unemployment. While some findings reveal that government spending has a positive impact on these variables (see Abubakar, 2016; Murwirapachena et al., 2013; Brückner & Pappa, 2013; Calidoni, 2005), others suggest the existence of a negative impact (see Bidemi, 2016; Cottarelli, 2012; Pope, 2017; Ahearn et al., 2006). The absence of a consensus within the literature highlights the academic and empirical relevance of this study. Secondly, the empirical literature on labour supply in the WAMZ, to which this study contributes directly, does not consider the impact of government spending on labour force participation; the few studies that do fail to rigorously address endogeneity bias and heterogeneity issues. Accordingly, this study contributes to the literature by examining the short run and long run impacts of government spending on labour force participation and unemployment using three estimation techniques, viz. the Pooled Ordinary Least Squares (POLS), Least Squares Dummy Variables (LSDV) and GMM-IV techniques. Since the GMM-IV technique better addresses endogeneity issues relative to the other techniques utilized, it is given preference in this paper. The results obtained from the study generally reveal that government spending has a positive impact on the labour force participation rate. Meanwhile, the results also indicate that the impact of government spending on unemployment is ambiguous. As such, government spending may be increasing the labour force participation rate but may not necessarily be reducing unemployment. The findings of this study are similar to those of Bruckner and Pappa (2012). Also, in the long run, the parameter estimates largely remain unchanged in terms of their signs and significance; however, they increase in size. Based on these findings, this study firstly recommends that policy makers intensify efforts to increase government spending, as a reduction may impact negatively on the labour force participation rate. Secondly, this paper recommends the formulation and implementation of fiscal policies that are robust enough to reduce the unemployment rate while increasing the labour force participation rate. At this point it is necessary to observe that this study encountered data limitations with regard to Liberia.
Nevertheless, in the sensitivity analysis section, the regression model was re-estimated with the exclusion of Liberia and the results obtained were similar to those obtained from the baseline regression model. This suggests that the results obtained in this study are not biased or driven by the Liberian data. Finally, future research may widen the scope of this study by considering the impact of government spending on labour force participation and unemployment within the ECOWAS countries.
6,500.2
2020-04-02T00:00:00.000
[ "Economics" ]
Investigation of the Effect of Larestan's Pipeline Water on the Mechanical Properties of Concretes Containing Granite Aggregates In this study, the compressive strength of the concretes made with the pipeline water of Larestan has been investigated. Although the water used for concrete must be clean, standard, and generally drinkable, in Larestan city the pipeline water is non-potable; nevertheless, this type of water is still being used in concrete mixtures by companies and contractors. Since in the initial tests the compressive strength of the normal samples did not satisfy the standards, 50% of the aggregate was replaced with granite aggregate with the purpose of increasing the strength of the samples. Then four types of samples were made: (1) normal concrete with pipeline water, (2) normal concrete with potable water, (3) granite concrete with pipeline water, and (4) granite concrete with potable water. The results showed that the compressive strength of the normal samples is not standard when the pipeline water is used. This issue can be seen during the first four weeks of the samples, whereas these samples are placed in the standard zone by replacing 50% of the normal aggregates with granite aggregate. This may be attributed to the compensating effect of granite aggregates in opposition to the damaging effect of the water. Also, by using the granite aggregates in the mixture, the compressive strengths of the samples were standard and almost identical in both cases of pipeline water and tap water. As a result, the concretes made in this city must include additives for increasing the strength, or tap water should be used as a replacement for pipeline water. Introduction The use of concrete in construction has widened recently. Concrete is a good material under compressive loading, and thus it can perform well as a column in structures. The compressive strength is the primary parameter in the design, implementation, and quality control of concretes. Moreover, compressive strength is the most used parameter by national and international agencies, and other parameters such as modulus of elasticity, tensile strength, and compressive strain are expressed as a function of compressive strength [1]. There are many important factors that can affect the specifications of the concrete and its strength, such as the degree of consolidation [2], water-to-cement ratio [3], moisture [4], cement hydration [5], replacements [6], and type and size of aggregates [7,8]. Also, water is one of the parameters that can affect the specification of concrete. Usually, water represents about one-third of the whole concrete mixture and its direct influence is on the water-cement ratio. But another aspect of water influence is the content of the water. The American Concrete Institute (ACI) suggests that any drinkable water is suitable for use in concrete. Currently, the use of some types of water like seawater is prohibited because of the high content of chloride, as the chloride in water intensifies the corrosion of reinforcement [9]. The concrete industries consume about one billion cubic meters of water annually. This volume of water is just for mixing, so an enormous additional amount is used for washing the mixer, equipment, and pumps, as well as curing the concrete [10]. Studies of prediction showed that, by 2025, 75% of the water demand for concrete use will be in areas that face water scarcity [11].
As a result, the investigation for replacement of other sources of water such as seawater [9], wastewater [10], underground water [12], and recycled water [13] has started. Some experimental work on compressive strength of concrete with 100% replacement of treated wastewater instead of tap water has been done. The two types of samples had the same curing process. The result showed that the compressive strength of samples with treated-water replacement is about 85-94% of normal concrete [14]. An investigation on the use of treated grey water (TGW) and raw grey water (RGW) showed an increased rate in the initial setting time and a decrease in the slump. Also, an increase in the compressive strength of TGW samples was seen but, in RGW samples, the compressive strength decreased [10]. The application of seawater, tap water, and salt water in the concrete has been investigated. The compressive strength of the samples was tested after 20 years. The seawater samples had earlier strength gain but they didn't show different compressive strength after a long time [15]. On the other hand, in some investigations, the compressive strength of concrete samples decreased due to the use of seawater. In this study, the difference between the compressive strength of tap water samples and that of seawater samples was less but, after 12 months, the difference got larger and reached 10 MPa in some cases [16]. Also, an investigation on the use of magnetic field treated water (MFTW) showed a better compressive strength of concrete compared to tap water concrete and it had a better strength at an early stage [17]. This early strength could happen because of the interaction between magnetic water and cement hydration. However, tap water in some countries or some areas of a country is very scarce. So finding a replacement for tap water in these areas is important and this new type of water should satisfy the criteria of standards. Larestan is a city in the south of Iran. It is located in Fars province and is about 370 km from Shiraz and 220 km from Bandar Abbas (Figure 1). The prevailing climate there is hot and dry. So the water resources are very scarce in this area. In the past, the people used underground water or rainwater gathered in a kind of source called "Berkeh." Nowadays, with the development of cities, they became unusable and have been replaced by plumbing. But plumbing water in this city is not drinkable and the people must get the tap water from other sources or by using a water desalination machine. Since getting tap water is not easy in the work zones, the contractors use plumbing water in their construction processes such as making concretes, mortars, curing, and so forth. The aim of this study is to investigate the properties of concretes that have been made by the plumbing water of this city. Although the plumbing water of this city is not drinkable, no one has investigated the effect of that type of water on the concrete structures. Since the water has a salty taste, it is probable that the water contains chloride ions. The presence of chloride is one of the main reasons for reinforcement corrosion and many other problems in the concrete [19] and also has an effect on the cement paste by raising Friedel's salt creation, increasing the Ca(OH) 2 content, and retarding the diffusion of chloride ion [20,21]. However, Shi et al.
[22] investigated the effect of seawater on the mechanical properties, mineralogy, and microstructure of alkali-activated materials (AAMs) and showed that seawater is suitable for use as mixing water of the AAMs in the marine environment. Nevertheless, until now, no investigations were conducted on the effect of water on the concretes and steel bars which are made in this region. Materials and Methods As mentioned, there are many solutions to improve concrete performance. Some alternatives are improvement or replacement of aggregate, changing of mix design, or using additives. Based on Iran's concrete standard, the quality of the materials must satisfy certain criteria to prevent any damage to the structures. So, in tropical zones where the possibility of corrosion is higher, the use of more covers for reinforcement and making concrete with low porosity are inevitable [23]. The materials below have been used in this study to create concrete samples; further information on them is given in the following. Cement. Cement is the most consumed material around the world, of which people use four billion tons annually. This is about 560 kg for every person [24]. Generally, the increase of cement in the concrete and consequently decrease of w/c ratio in the mix will produce stronger concrete [3,25,26]. Also, the quality of the concrete is important in compressive strength of the samples. The use of organic cement instead of normal Portland cement has been investigated. The compressive strength of organic samples was about 1/3 of normal samples. But this ratio was 1/2 in the tensile test [27]. The used cement in this study is type II Portland cement that is provided from Bandar Abbas. Since Larestan's cement factory produces white cement and there isn't any Portland cement factory in this area, almost all of the required cement of the region is provided from there. The physical and chemical properties (XRF analysis) of the cement can be seen in Tables 1 and 2 respectively. 2.2. Aggregates. Sand, gravel, and crushed stone that have been known as natural aggregates form the most part of the concrete mix in both aspects of volume and mass. For any concrete construction, many mined aggregates are needed. Investigation on the aggregates consumption showed that more than 40 billion tons are consumed annually. From this huge amount, between 67% and 75% belongs to concrete making [28]. According to the standard, the used aggregate must be clean and durable and must have no harmful chemicals [29]. In some studies, the replacement of aggregate was mentioned as a good condition and in some cases the properties decreased. This can be referred to the origin of aggregates and their strength. But it is clear that the specifications of the aggregate such as shape, angularity, strength, and durability can affect the compressive strength of concretes. The effect of using marble as a replacement for natural coarse aggregate in concrete has been investigated. The natural aggregate was replaced by marble aggregate in the weight percentage of 0-100%. The results showed 14% better workability for marble aggregate samples and an increase of 40% and 18% in the compressive strength at 7 days and 28 days, respectively [30]. Figure 2 shows the used aggregates in this study, which were crushed aggregates and aggregates from local mines in Larestan (Figures 2(a) and 2(b)). Both fine aggregate and coarse aggregate were mixed with 50% of granite aggregate in the second type of concrete samples (Figure 2(c)).
The reason for avoiding full replacement is the high cost of granite aggregate. Granite is one of the strongest aggregates that can be used in making concrete to increase the strength. An investigation on the use of granite aggregate as a replacement for fine aggregate showed a 22% increase in the compressive strength [31]. Also, positive effects on the mechanical properties of concrete were seen in some studies with the replacement of granite [32]. Also, the gradation curves for aggregates have been shown in Figures 3(a) and 3(b) for fine aggregates and coarse aggregates, respectively. The black-dashed lines are the margins of limitations between which the curves are located and they show the acceptable gradation of aggregates for concrete mixing. Water. The water used must be clean and must have no chemical content, and, generally, tap water from any source is suitable for making concrete. Any other types of water may have some effects on the properties of concrete or corrosion of the reinforcement [23]. As an example, the use of salty water like seawater is not permitted because after using it in making concrete it can cause some problems such as corrosion, swelling, and fracture [33]. Investigations on the use of magnetized water in making concrete showed that, in comparison to normal concrete, it has the same or even better compressive strength, while it decreases the required cement [34]. Regarding the country's vast territory, a large variety of water quality is expected and it is possible that in some areas the quality does not meet the standards. So it seems necessary to investigate this issue. As a result, providing a good quality of water for making and curing of concrete is inevitable. One of these probable areas that are expected to not have a good quality of water is Larestan in the south part of Iran. The chemical properties of the water in Larestan are shown in Table 3. This table is based on the monthly field measurements of the pipeline water characteristics of the city in different stations which have been reported by the local water administration. Also, for better comparison, the quantity of each indicator for the drinking water of Tehran is shown as well. The water in Tehran is drinkable and it is known as standard water for making concrete in Iran. These parameters were compared to the acceptable limits in the Iranian Water Standard Book No. 1053 and the status is shown in the table. The acceptable limit covers the physical, chemical, and biological properties as well as the radioactivity of drinkable water. Considering the above table, some noticeable points can be discussed. The degree of hardness of the water is very high and it is classified as very hard water based on the content of CaCO 3 in the classification of the WQA (Water Quality Association) for water hardness [37] (Table 4). The temperature of the concrete mix is one of the most important parameters in the design steps, especially in hot areas where the rate of water loss is higher. As can be seen, the temperature of Larestan's water is higher than the acceptable limits, so the vaporization of water must be included in addition to the previous parameters. Experimental Method In this study, four different types of mixture have been used. The first one is the samples made with the plumbing water of the city and the second type is the samples made with drinkable water (or tap water).
Moreover, the granite aggregates are used in these two categories to support the concrete samples against the probability of weakness due to the effect of salty water. Normal water was used in the curing procedure for all the samples. As a result, four groups of samples were created and, by following the same curing procedure for all samples, the variable parameters can be seen from the compressive strength of the normal samples and the granite samples (Table 5). The drinkable water that was used was obtained in the lab from the "water desalination system" that can be seen in Figure 4. This system works with the reverse osmosis procedure and the output is tap water. The ACI absolute volume method has been chosen for the concrete mixture design and the slump of the mixture was taken as 6 cm. Also, the water-to-cement ratio has been considered as 0.45 and the maximum aggregate size has been limited to 19 mm. The materials were mixed in the manual concrete mixer at the lab and then they were poured into the molds. The molds should be clean and clear of any physical or chemical wastes, so they were cleaned and dried before use. Plastic molds were used in this study because they have no corrosion or sticking probability in making concrete (Figure 5). The molds are cubical with the dimensions of 15 × 15 × 15 cm. Each mold was filled in three stages and each stage was consolidated with a minimum of 10 strokes of the tamping rod to avoid honeycombing of the concrete. Four different types of concrete were made in this study. The first one is the samples with the normal aggregates where the tap water was used. The second one has the same design and the type of water used is the pipeline water. The two other types of concrete were created by replacing 50% of the aggregates with granite aggregate. Because there are 5 timing stages for breaking the samples, a total of 60 concrete samples were made in the laboratory. All samples were left in their mold for 24 hours to achieve the initial set of the concrete. Then the samples were named separately, moved to the curing place, and kept there until the time of the test (Figure 6). The curing conditions are as follows: (i) samples are completely soaked in water (Figure 7); (ii) the samples were rotated so that the curing conditions were the same for all sides of the samples; (iii) the water temperature is kept at 30°C; (iv) the type of water used for the curing procedure is tap water. Since there are two different types of concrete samples, the curing system and conditions were the same for both of them. The temperature of the water is controlled with a digital thermometer. For better analysis and less error in the test, three samples of each type of concrete were made and tested. The compressive strength test was based on ASTM C39-86 and was carried out with an automatic hydraulic jack. Based on the concrete standards, the compressive strength of any concrete which is made using any type of water other than tap water must be at least 90% of the strength of the normal concrete. So this study has investigated the effect of Larestan's water on the short-term and long-term performance of concretes.
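The 90% criterion mentioned above lends itself to a simple check; the sketch below compares average strengths of a pipeline-water mix and a tap-water reference mix (the numbers in the example call are placeholders, not values from the study's tables).

```python
# Illustrative check of the "at least 90% of the reference strength" criterion.
def meets_water_criterion(test_strengths, reference_strengths, ratio=0.90):
    """Return True if the mean strength of the test mix reaches `ratio`
    times the mean strength of the reference (tap-water) mix."""
    mean_test = sum(test_strengths) / len(test_strengths)
    mean_ref = sum(reference_strengths) / len(reference_strengths)
    return mean_test >= ratio * mean_ref

# Placeholder strengths in MPa for three cubes per mix at one age (not measured data)
print(meets_water_criterion([24.1, 23.5, 24.8], [28.0, 27.4, 28.9]))
```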
Results Based on the previous plan for breaking the samples, they were broken on days 7, 14, 28, 56, and 90. For each test, three samples were used and the average strength was reported. The variance of the outputs was acceptable for all samples because no dispersion larger than 4 MPa (±2 MPa) was seen in the results. Also, the samples were checked after the test for the probability of honeycombing of the concrete or any other problems, but no defect was seen in any of them. The main compressive strength that is presented in the tables and figures is the average of these measurements. The output data of breaking the samples for both tap water and pipeline water used in the concrete, based on the age of the samples, are presented in Tables 6 and 7 and Figures 8 and 9. (Figure 9: Comparison of compressive strength in tap water concrete and pipeline water concrete by replacing 50% of granite aggregate.) The granite concrete samples showed less reduction in the strength by using pipeline water. There is more reduction on the initial days than on the final days but, for the normal concretes, the reduction percentage was higher compared to the granite samples. This can also be the effect of granite aggregates on the shrinkage of the samples or the influence of the water ingredients on the aggregates. The percentage of the reduction in concrete strength is shown in Figure 10 for normal concrete and granite concrete. Figure 10 shows that the total amounts of reduction for the granite concrete samples are less than 10% for all the stages. Also, these samples showed less reduction in strength over time. But, for the normal samples, the reduction of strength was higher than the limitation on the initial days and it continued until the 56th day. Even at the final stage, the reduction of strength was still near to the upper limit and it can go higher with any fault in the concrete-making procedures. (Figure 11: Comparative chart of compressive strength results for normal concrete samples made by using tap water and pipeline water.) The comparative charts of the compressive strengths of the samples are shown in Figures 11 and 12 for normal concrete and granite concrete, respectively. The dashed line shows the maximum limit for each concrete mixture design. As a result, the compressive strength of normal concretes made with pipeline water is not acceptable in the initial days and it is very near to the lower limit of the standard in the final stages. The standard strength is almost achieved on the 56th day. So any loading or proceeding of the construction must be stopped during this time. But by using the granite aggregates the reduction in the strength will be less and acceptable. This probably is the compensatory effect of granite on the weakening effect of pipeline water. Any other kind of reinforcement can be used for increasing the strength of the concrete in opposition to the weakness of pipeline water. As a result, the use of the ordinary pipeline water for making concretes in this city is not acceptable, at least for the initial days, and the concretes must be mixed with some reinforcing materials for increasing the strength. Also, a different type of cement may have an effect on the initial strength of the concrete and increase it to within the limitations. Moreover, the effect of the curing water must be studied for both mix designs. Conclusion In this study, the effect of the pipeline water of Larestan on the compressive strength of concretes has been investigated. Four groups of samples were made and tested to illustrate the effect of pipeline water on the concretes. Also, the granite aggregates were used as a 50% replacement in the concrete samples with the purpose of increasing the strength.
The achieved conclusions are as follows: (1) The compressive strength of concrete made by using pipeline water is less than 90% of that of the same concrete made by using tap water, which violates the requirement of the concrete standard. So, at this time, the concretes made in this city cannot satisfy the standard criteria. (2) By replacing 50% of the aggregates with granite aggregate, the compressive strength of the concrete increased by more than 12%. This increase was higher for the samples made by using pipeline water. As a result, 50% replacement with granite aggregates is suitable for standardizing the concretes made in this city using the pipeline water. (3) The effect of the water on the strength reduction in the normal concrete was higher compared to that in the granite concrete. This may be attributed to the damaging effect of pipeline water on the concrete strength, which has been compensated by the granite aggregates. This reduction was more than 10% in the normal concretes (non-standard) but for the granite concretes it was within the standard. (4) In the case of using normal pipeline water concrete, any loading or proceeding of construction must be stopped during the first 56 days due to the lower strength of normal pipeline water concrete in the initial days. In practice, however, it can be seen that in this city the concretes are loaded just after 7 days. (Figure 12: Comparative chart of compressive strength results for granite concrete samples made by using tap water and pipeline water.) Data Availability The results of the compressive strength of the samples, pictures of the broken samples, and pictures of the work procedure are available from the corresponding author upon request. Conflicts of Interest The authors declare that there are no conflicts of interest regarding the publication of this paper.
5,303.2
2019-11-30T00:00:00.000
[ "Materials Science" ]
Analysis of Quenching Parameters in AISI 4340 Steel by Using Design of Experiments This paper aims to investigate the effects of quenching parameters (temperature and time of austenitizing and cooling rate) on the microstructure, hardness and distortion of AISI 4340 steel by Design of Experiments (DoE). The factorial design was used to determine the influence of the factors on the response variable. After quenching, the samples were characterized by optical and scanning electron microscopy, hardness testing and dimensional analysis. Navy C-ring samples are used to determine the distortions after quenching due to the development of residual stresses caused by non-uniform cooling. Results show that the cooling rate has a significant effect on the steel after quenching; however, the suitability of all factors is important to achieve the desired properties. Introduction Over the years, industries have begun to use different types of materials; however, steel is one of the most widely used due to its low cost 1 . Steel has many engineering applications and is used in many processes involving high strain rates and high stresses 2 . Furthermore, steel can have its mechanical properties easily improved by heat treatment 1 . Among the large number of steels, AISI 4340 is widely used due to a combination of high mechanical strength, ductility and hardness. This steel is applied in tractor and airplane crankshafts, shafts with high mechanical demands and vehicles in general. The aeronautical industry uses AISI 4340 for diverse applications including tools that are applied in aircraft manufacturing 3 . Good mechanical properties are required for AISI 4340 applications, so heat treatments should be taken into consideration 2 . Heat treatment can be used to improve machinability and formability, increase mechanical strength and restore the ductility of the material. Quenching is a commonly used heat treatment in steels to obtain desired properties in several industrial segments. In the quenching heat treatment, the steel is heated to the austenitization temperature and rapidly cooled 4,5 . The formation of a single-phase austenitic structure and the dissolution of carbides and other phases are the major aims of austenitizing. For this purpose, the austenitizing temperature range should be high enough to homogenize the austenite and low enough to avoid excessive grain growth 6,7 . Thus, the main parameters of the quenching process are the temperature and time of austenitizing and the cooling rate. The grain size is a function of the austenitization temperature and time. A coarse austenite grain size may promote quenching cracks and increase the fraction of retained austenite. On the other hand, the finer austenite grain size which is obtained at lower austenitization temperatures leads to finer martensite units, which provide higher strength and toughness 6,7 . The most common quenching media are mineral oils, water, aqueous solutions and salts 4,5,8 . The severity of a quenching medium depends on its ability to mediate heat transfer at the hot metal interface during quenching. Its selection depends on the hardenability of the alloy and the cooling rate required to achieve the desired microstructure 4,5 . During the quenching process, residual stresses and distortions develop in response to non-uniform cooling and phase transformations [8][9][10][11][12] .
The distortions resulting from manufacturing processes such as heat treatment can increase the cost of producing a component by 20-40% where additional machining steps are required. Consequently, it is very important to predict and minimize distortions 13,14 . Some methods are used to evaluate and simulate the distortions promoted after quenching. The Navy C-ring has been one of the most common types of samples used to observe dimensional changes (distortion). The sample geometry, with thin and thick sections, prevents uniform cooling and causes phase transformation at different times, which can result in residual stresses and distortions 14,15 . The distortion of the C-rings is usually examined by the change in dimensions. The change in internal and external diameter and gap opening occurs by expansion and contraction of the C-rings after quenching 13 . In this work, quenching parameters of AISI 4340 steel were evaluated in order to verify how the time and temperature of austenitization and the cooling medium influence the specimen distortions and hardness. In this way, Design of Experiments (DoE) was used to generate the experiments as well as to verify the behavior of the variables and their interactions. DoE consists of three stages: pre-experimental planning, execution of the experiments and statistical analysis of the data collected. The pre-experimental planning stage is very important because in this phase the factors and their levels, as well as the response variables, should be selected according to the objective of the experiment 16 . Experiment planning methods highlight factorial designs, which allow a particular process to be studied with few experiments. Factorial designs allow engineers to determine the process factors that have a significant effect on the response variable and also allow interactions between different factors to be measured, which makes this technique an important tool for process optimization 17 . In this sense, the 2^k factorial design was used; this technique consists of an experiment with k factors (process variables), where each factor is tested at two levels (-1 (minimum), +1 (maximum)) with r replicates. The factorial design requires running experiments for all possible combinations of the levels of the factors. Thus, the results of such an experiment can lead to improved process yield and reduced design and development time, as well as reduced cost of operation 17 . Experimental Procedure The chemical composition of AISI 4340 is shown in Table 1. Navy C-ring samples were obtained according to Figure 1 and submitted to a quench heat treatment with different temperatures (800, 850 and 900 °C) and times (30, 45 and 60 min) of austenitization and quenchant media (water and oil). For microstructural characterization the samples underwent a conventional metallographic process with sectioning, hot mounting, grinding, polishing and etching with Nital. Hardness measurements (HRC -Rockwell C) were performed before and after quenching using a load of 150 kgf. The measurements of the ring dimensions were performed before and after the heat treatments by using a profile projector with an accuracy of 0.001 mm. The gap openings were measured with a digital caliper rule with an accuracy of 0.01 mm. The statistical analysis was performed with the MINITAB® software; the parameters were maintained at two levels, as shown in Table 2. Two center points without replicates were used, totaling ten experiments, according to Table 3. It is worth noting that the choice of these levels was only possible after preliminary tests, which allowed the conditions for the development of the work to be obtained. The choice of austenitization temperature was based on the dilatometric test, Figure 2. The sample was heated to 1300 °C at a rate of 10 °C/min to obtain the values of Ac1 (727 °C) and Ac3 (764 °C).
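As a sketch of what such a two-level factorial design looks like in code (the factor names and levels follow the description above; the implementation itself is only illustrative and is not how the study generated its runs in MINITAB):

```python
# Illustrative full-factorial design for the three quenching factors at two levels.
from itertools import product
import pandas as pd

levels = {
    "temperature_C": [800, 900],     # austenitizing temperature, -1 / +1
    "time_min": [30, 60],            # soaking time, -1 / +1
    "medium": ["oil", "water"],      # quenchant, -1 / +1
}

runs = [dict(zip(levels, combo)) for combo in product(*levels.values())]
design = pd.DataFrame(runs)          # 2^3 = 8 runs
# The study adds two centre points (850 degC, 45 min) to these 8 runs, giving 10 experiments.
print(design)
```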
Results and Discussion The analysis of variance (ANOVA) was used to determine the factors of the heat treating process that affect the hardness and distortion of the steel. The results of hardness and dimensional distortion obtained from the experiments are shown in Table 4. The distortion percentages were calculated by comparing the measurements before and after quenching. Analysis of hardness The hardness measurements were performed in the cross section, as shown in Figure 3. The mean Rockwell C hardness measurements and their respective standard deviations are listed in Table 4. There was no significant variation of hardness between the C-ring samples. All quench processes resulted in an increase in the hardness value, since the sample without heat treatment has an average hardness of 31.0 ± 1.0 HRC. This increase in hardness was expected: rapid cooling makes it difficult for the carbon atoms to diffuse, thus promoting a distortion of the structure and giving rise to martensite (body-centered tetragonal structure) 4,5,18 . Table 5 shows the analysis of variance of the hardness values; a confidence level of 95% was adopted (α = 0.05). Only the cooling medium showed a P-value less than the 5% significance level and an F-value larger than Fisher's tabulated value (F95% (1,4) = 7.71), which shows that statistically there was a difference between the cooling media in terms of average hardness. It is also noted that there was interaction between the factors, except between time and temperature. Therefore, the suitability of all factors is important for the quenching process. The ANOVA result indicates that the model presented an excellent fit, R2(adj) equal to 97.56%. The coefficient of determination (R2) measures the proportion of the total variability that is explained by the model. The main effects for hardness as a function of temperature, time and cooling medium are shown in Figure 4. Note that with increasing temperature and soaking time there was a slight increase in the hardness. At higher austenitizing times and temperatures it is possible to dissolve a greater amount of carbides 19,20 . This leads to an increase in the carbon content in the formed martensite; carbon supersaturation increases the deformation of the lattice and the hardness of the material 18,21 . Water has a higher hardening severity than oil, which generates high thermal stress in the material 4,5 . The interaction plots are shown in Figure 5 and indicate that one factor has an impact on the other factors. Figure 5 (a) shows that hardness increased with higher temperature and water. It is also observed that temperature had a greater effect on hardness when oil was used. When the samples were cooled in water the variation of austenitization temperature had little influence on the hardness. Figure 5 (b) depicts the effect of time and quenchant medium on hardness. It is observed that greater hardness was achieved by using a long time and water. When the C-rings were cooled in oil the average hardness practically did not change, regardless of the time used.
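A sketch of how such a factorial ANOVA can be reproduced outside MINITAB, for example with Python's statsmodels (the data frame and column names are assumptions; the study's own analysis was done in MINITAB as stated above):

```python
# Illustrative two-level factorial ANOVA for hardness (not the study's actual data).
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical columns: temperature_C, time_min, medium, hardness_HRC
df = pd.read_csv("quench_runs.csv")

model = ols("hardness_HRC ~ C(temperature_C) * C(time_min) * C(medium)", data=df).fit()
anova_table = sm.stats.anova_lm(model, typ=2)   # main effects and interactions
print(anova_table)
print("Adjusted R2:", model.rsquared_adj)
```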
Dimensional analysis of the Navy C-ring samples According to the percentages of distortion after heat treatment, Table 4, all samples had their dimensions increased. A high distortion was observed in the gap opening; such behavior was also observed in other works 9,22 . The final gap opening and outer diameter of the Navy C-ring are associated with martensite formation in the thicker portion of the specimen. The formation of martensite results in a volumetric expansion of the material 22 . The analysis of variance was performed and the effects of the factors on the gap opening percentage were evaluated as shown in Table 6. Note that none of the factors had significant effects. The ANOVA result indicates that the model presented a good fit, R2(adj) equal to 77.23%. The main effects, Figure 6, allowed the influence of each factor on the measured distortion to be evaluated. The percentage of average distortion increased when the cooling medium used was water. This behavior was expected, since low-viscosity fluids promote higher heat transfer rates and, consequently, larger distortion due to non-uniform temperature distribution 5,9 . It is noted that an increase in the temperature and time results in an increase in the percentage of distortion. High temperatures and long holding times lead to an increase in the solubilization of the carbides, increasing the distortion. In addition, by increasing the austenitization temperature, the thermal gradient between the sections of the piece austenitized at 900 °C was higher when compared to samples heated to 800 °C. Microstructural characterization Microscopic examination is an extremely useful tool in the study and characterization of materials. Structural characterization allows qualitative information to be obtained, such as observation of the morphology and homogeneity of the structures, as well as quantitative information, such as the volumetric fraction of the phases present and the distribution and size of the grains in the microstructure. The study of the microstructure is also useful to predict the mechanical properties of alloys and to show whether an alloy was correctly heat treated 23 . The AISI 4340 steel has high hardenability, so it is expected that all the analyzed samples present the martensitic phase. The micrographs, Figure 7, show darkening of the martensitic structure. Note that there was no significant difference between the samples cooled in water and oil. The two cooling media had a sufficient heat extraction rate to form martensite. Based on the thermal treatments performed, the presence of the ferrite structure is not expected, but the white regions may contain retained austenite and ferrite. SEM observation shows that both the sample heated at 800 °C (Fig. 8 (a)) and the sample heated at 900 °C (Fig. 8 (b)) showed a microstructure with a lath appearance, this being a characteristic aspect of the martensitic microstructure. Conclusions The use of the experiment planning approach allowed the effect of the austenitization temperature, soaking time and cooling medium on the hardness and the distortion of the C-rings to be evaluated. The models showed good adjustments, R2(adj) equal to 97.56% for hardness and 77.23% for distortion. The analysis of the interactions between the parameters was shown to be an important element for the control of the quenching process, evidencing that the joint action of parameters with little individual influence on hardness can lead to significant effects. The hardness increases with the homogenization of the austenite and the distortion increases with increasing austenitizing temperature and time.
The cooling medium was the only parameter that showed a significant effect on the hardness value, and is thus one of the most important parameters of the quenching process. The use of oil as the cooling medium for the quenching of AISI 4340 steel proved to be the more suitable choice: samples quenched in this medium had high hardness and a lower percentage of distortion.
3,155.6
2018-11-14T00:00:00.000
[ "Materials Science" ]
Neural Insights into the Relation between Language and Communication The human capacity to communicate has been hypothesized to be causally dependent upon language. Intuitively this seems plausible since most communication relies on language. Moreover, intention recognition abilities (as a necessary prerequisite for communication) and language development seem to co-develop. Here we review evidence from neuroimaging as well as from neuropsychology to evaluate the relationship between communicative and linguistic abilities. Our review indicates that communicative abilities are best considered as neurally distinct from language abilities. This conclusion is based upon evidence showing that humans rely on different cortical systems when designing a communicative message for someone else as compared to when performing core linguistic tasks, as well as upon observations of individuals with severe language loss after extensive lesions to the language system, who are still able to perform tasks involving intention understanding. Introduction Communication can be viewed as a matter of coding and de-coding linguistic information. The speaker codes information and puts his thoughts into words, while the listener de-codes the linguistic information, taking the input from the speaker and translating it back into a thought. In this scenario, it is the code (in this case language) that matters for communication. Individuals with a common code can communicate because they share that code. This is an intuitively appealing view given that communication in our everyday lives so often relies on language, be it in face-to-face conversation, talking on the phone, writing an e-mail, or other forms of exchange. The position that it is the code that matters for communication is nicely phrased by the philosopher John Searle: "One can in certain special circumstances 'request' someone to leave the room without employing any conventions, but unless someone has a language one cannot request of someone that he e.g., undertake a research project on the problem of diagnosing and treating mononucleosis in undergraduates in American universities." (Searle, 1969, p. 38) By this view, we are capable of communicating to some degree without language, but real communication requires language and all essential communication is linguistic. In terms of cognitive architecture, this has led to the proposal that understanding others and communicating with others by necessity involves the language system (e.g., Carruthers, 2002). By contrast, numerous scholars have argued for at least an additional inferential ability which crucially underlies our communicative skills, as we will describe below. In the context of this special issue (Understanding human intentional communication), we consider how evidence from human neuroscience provides insight into the question of whether the capacity for language and the capacity to communicate are cognitively (and neurally) distinct, or whether they are best understood as a single cognitive capacity. Language and communication It is obvious that literal de-coding of linguistic utterances cannot be an adequate and complete explanation for communicative behavior (it is "commonsensical" in Sperber and Wilson's words; Sperber and Wilson, 1995, p. 23), and is clearly illustrated by two examples: 1) "Do you know what time it is?" 2) "Oh man, a beer would do me good after all this hard work" The expected answer to (1) is not just the affirmative "yes" (except in slapstick movies) and (2) may be more adequately interpreted as a request for a drink than as a factual statement about the belief of the speaker with regard to the relationship between beer and emotional wellbeing. In natural communication it is the capacity to infer someone's intention from an utterance which seems more fundamental than the linguistic de-coding of the message. This common-sense statement has been discussed and refined at length in the study of pragmatics ("the study of language usage," Levinson, 1983; see e.g., Sperber and Wilson, 1995; Tirassa, 1999; Levinson, 2006; Tomasello, 2008; Airenti, 2010; Bara, 2010). For the present purpose it is important to highlight the argument that to be able to adequately understand or generate a communicative act, one needs to be able to infer another person's intentions or beliefs. Although it may seem trivial to claim that language and communication are not the same cognitive construct, there is a potent literature arguing that the cognitive system for language crucially underlies our ability to infer others' intentions. Some argue that the structure of human language is crucial for representing higher-order beliefs, as required for mentalizing (Carruthers, 2002; Pyers, 2006; Newton and de Villiers, 2007). For instance, it is proposed that the human mind can only construct representations in which one proposition is embedded inside of another (e.g., Mary thinks [that the money is in the safe]) through the mediation of language and, in particular, the recursive capacity of the grammar. Evidence for this position comes from the finding that language development and performance on false belief tasks, requiring such higher order structures, are strongly correlated (see Milligan et al., 2007 for review). In a false belief task a participant is confronted with a scenario in which one of the characters has an incorrect belief about the state of the world. The dependent variable is whether the participant (often a child) will evaluate the character's behavior based upon the present ("actual") state of the world, or based upon the false belief that the character has. If the participant takes the belief of the character into account, she is said to possess a "theory of mind" about others, or to "mentalize" about others' beliefs, desires, and intentions (Wimmer and Perner, 1983; Baron-Cohen et al., 1985). A second proposal is that verbs describing speech or cognitive-mentalizing states such as thinking and remembering are necessary for representing the intentions of others. An intriguing demonstration supporting such a link is the finding that deaf adults learning a sign language only start to perform well on false belief tasks after they master typical "mentalizing" verbs such as "believe" or "think" (Pyers and Senghas, 2009). In this paper we explore the question of the relationship between language and communication/intention understanding from the perspective of human neuroscience. In particular, we address the question whether there is evidence from neuroimaging and neuropsychological studies to suggest that language and communication are supported by overlapping or distinct parts of the brain.
a neuraL perspective evidence from patient popuLations If inferences regarding someone's beliefs and intentions necessarily require the resources of the language faculty, either from the lexicon (e.g., mentalizing verbs) or the recursive capacity of the grammar in embedding one proposition within another, then individuals with severe language impairment would fail on theory of mind (ToM) tasks. Patients with severe agrammatic aphasia usually display deficits in the comprehension and production of verbs, and these impairments are particularly evident on low imageability, abstract verbs such as those that describe mental states (McCarthy and Warrington, 1985). Severe agrammatism is also characterized by difficulties in de-coding the structure of sentences, with impairment in assigning correct agent-patient roles in reversible sentences such as "the diver splashed the dolphin"/"the dolphin splashed the diver." More complex structures containing subordinate clauses provoke even greater difficulties in comprehension. There are parallel difficulties in creating structured sentences, with output at best consisting of strings of words (usually nouns), and at worst, restricted to social forms such as "hi," "yes," or "bye." Despite the presence of such profound language impairments, patients with severe aphasia are able to succeed on tests of false belief understanding (Varley and Siegal, 2000;Varley et al., 2001;Apperly et al., 2006). These patients all had extensive damage to left hemisphere perisylvian cortex including the traditional fronto-temporal language network (see Figure 1). That these patients could perform false belief tasks despite their severe damage to the language network and concomitant linguistic difficulties, is a first indication that mentalizing tasks do not necessarily rely upon the language system. With regard to the inter-relationship between language and communication, patients with severe aphasia have often been observed to communicate better than they talk. Through the use of alternative communicative resources such as drawing, facial expression, and gesture some severely aphasic individuals are able to convey quite sophisticated messages (see Siegal and Varley, 2002 for an example). Goodwin (2006) provides an in-depth analysis of the communicative abilities of a severely aphasic man, with a special focus on conversational aspects. For instance, by using expressive prosodic cues (e.g., emphasis and pitch), the aphasic person was able to communicate messages while only using non-sense syllables (Goodwin, 1995(Goodwin, , 2006. In a recent study, we investigated the capacities for communicative intention generation in several aphasic patients in an experimentally controlled set-up (Willems et al., submitted). Three profoundly aphasic patients engaged in a communicative paradigm called tacit communication game (TCG), which involves two players with different communicative roles. The paradigm consists of a 3 × 3 grid on which each player can move around his/her visual token, consisting of a simple shape (Figure 2). The overall goal is to move the tokens to a preconfigured end-state. There is an imbalance in knowledge between the two players: one player knows the desired end-state of a trial, whereas the other does not. Hence, one player has to use his own token to convey the desired end-position and orientation of the other player's token. Since the means of communication in this paradigm is novel (moving Varley and Siegal (2000). 
Taken together, the capacity to form pragmatic inferences requiring some degree of mentalizing can be preserved in aphasia, but non-aphasic brain-injured patients may display the reverse dissociation, with retention of core linguistic systems of syntax and lexis, but disruption of pragmatic-mentalizing capacity. However, interpretation of data from patients such as those with closed head injury is complex. These individuals may have deficits in cognitive capacities such as inhibiting a potent response, and these may impact on performance on arguably high-order types of language tasks such as understanding deceit or irony. Consequently, it remains unclear as to whether an impairment observed in mentalizing represents a primary deficit, or whether it is secondary to disruption of another cognitive process that is necessarily engaged in high-order cognitive processing. In addition to illuminating the role of language in theory of mind reasoning, these studies indicate the continuing importance of patient-lesion studies in cognitive neuroscience. While functional imaging studies reveal the activations associated with a particular behavioral-cognitive performance, the lesion method represents a means of determining whether the activations reflect a necessary neurocognitive component of the processing network (Bird et al., 2004). If performance is maintained despite "knockout" of a substrate, the findings point either to a non-mandatory processing component or, more generally, to plastic, adaptive neural networks underpinning some forms of cognition.
Figure 1 | An axial slice of an anatomical scan of one of the patients tested on a false belief task in Varley and Siegal (2000). Note the extensive damage in the left hemisphere, encompassing the whole cortical language network. The images are displayed following radiological convention, which means that the left hemisphere is on the right side of the image.
Evidence from human neuroimaging
Very few neuroimaging studies have looked directly at the relationship between communicative and linguistic abilities. There is a relatively large literature investigating the neural basis of "mentalizing" or ToM (see Amodio and Frith, 2006; Frith and Frith, 2006 for reviews), as well as a sizeable number of studies investigating psycholinguistic factors in language production/comprehension such as semantic, syntactic, and phonological factors (see Bookheimer, 2002; Indefrey and Levelt, 2004; Hagoort, 2005; Vigneau et al., 2006 for review), but only a few studies have looked at communicative and linguistic capacities within the same experiment.
Both these studies suggest that producing a communicative act for another person relies on different brain areas than those involved in language. Furthermore, the areas that are activated during communicative message generation are those that have previously been observed to be activated in response to mentalizing tasks. Besides these direct comparisons of language and communicative processes, there have been a number of studies investigating intention understanding during communication as such, without necessarily focusing on the relationship with language. As we noted above, the neuroimaging literature on mentalizing is extensive (see Amodio and Frith, 2006; Frith and Frith, 2006 for reviews) and we will focus here only on studies that involve materials specifically tailored to investigating communicative intentions. Walter et al. (2004) investigated the neural distinction between understanding of private intentions/beliefs and understanding of communicative intentions. In their fMRI study, participants viewed short cartoon stories involving people performing actions driven by private, non-communicative intentions (e.g., changing a broken light bulb because you want to read) as well as stories in which characters act with a clear communicative intent (e.g., pointing to a bottle to request it). Their main finding was increased activation in MPFC for the communicative stories as compared to the individual intentional action stories. Moreover, the latter condition did not activate MPFC more strongly than a control condition of nonintentional, physical interactions (e.g., a leaf being blown away by the wind). This study highlights the importance of a communicative component in driving MPFC activation as opposed to intention recognition per se (see also Ciaramidaro et al., 2007). Kampe et al. (2003) investigated which areas become more activated when someone is called by his own name ("Hey John!") versus by another person's name. The rationale is that calling someone by his name is a potent indicator of the intention to communicate with this person, whereas shouting another person's name is not. The main result was activation in parts of the mentalizing system (MPFC and temporal poles) when participants heard and saw someone calling their name as opposed to another's name. Adopting a different approach, Tylen et al. (2009) used photographs of objects or signs that had either a communicative intention or no communicative intention. For instance, they compared photographs of an arrow on the ground (communicating direction) or chairs blocking a parking space (signaling "do not park here") versus objects lying on the floor and chairs around a table, not signaling a communicative intent. The results showed increased activation to communicative versus non-communicative objects in a set of language related areas including the inferior frontal gyri, but not in the traditional mentalizing network.
These results are not easy to interpret in the context of this review since it is conceivable that looking at the intentional photos leads to greater or more elaborate vocalizing/ language processing than looking at the non-intentional pictures. Another contrasting finding comes from Schippers et al. (2009) who did not observe sensitivity of the MPFC (or other mentalizing network areas) to the intentional observation of communicative gestures as compared to observation of the same gestures with the explicit instruction not to interpret their meaning. Finally, Noordzij et al. (2009) found right posterior temporal cortex to be specifically involved in the generation as well as understanding of an intentional communicative act. They required Sassa et al. (2007) presented healthy young participants with short movie clips of a person handling a familiar object (e.g., someone playing guitar). Participants responded to these movie clips in two different task settings. In one task they talked to the person on the screen in a "casual," communicative manner (Communicative trials), whereas in the other task they were required to describe the scene presented in the movie and not to direct their speech to the person in the movie clips (Descriptive trials). Both conditions involved speech production, but only in the Communicative trials was there an intentionally communicative component to the speech. Comparison of Communicative to Descriptive trials showed increased activation in medial prefrontal cortex (MPFC), left temporo-parietal junction and the temporal poles bilaterally. This set of regions is part of what has been described as the mentalizing network (Amodio and Frith, 2006;Frith and Frith, 2006). Both Descriptive and Communicative trials compared to baseline led to activation in parts of the traditional language production network, such as left inferior frontal gyrus ("Broca's area"; Indefrey and Levelt, 2004), but these regions were not sensitive to the Communicative/Descriptive manipulation. Willems et al. (2010a) directly tested sensitivity of cortical areas to an increase in communicative intent on the one hand and general linguistic processes on the other hand. Participants were engaged in a communicative paradigm called the "Taboo game." In this set-up the participant's task is to describe a "target word" (e.g., "beard") to another individual without using certain predetermined "Taboo words." That is, there was one person inside an MR scanner generating verbal descriptions of various words, while the other player listened to these descriptions outside of the MR scanner and guessed the target word. There were two experimental manipulations: First, communicative intent was manipulated by changing whether the listener already knew the target word or not. Importantly, the participant was aware of whether the other player knew the target word or not. If the listener already knows the target word, the utterance that the participant is creating is not helpful to the listener. We labeled these trials Non-targeted. By contrast, in Targeted trials, the utterance provided by the participant was generated in order to help the other player guess the target word. Second, the linguistic difficulty of a trial was manipulated. 
If the Taboo words are closely related to the Target word (e.g., "moustache," "chin," and "man" in the case of the target word "beard"), one needs to search a wider semantic space to come up with a helpful description, which makes the task more semantically difficult as compared to when the Taboo words are more distantly related to the target word. These trials were labeled Difficult and Easy. The distinct manipulation of Communicative intent and Linguistic difficulty was neatly reflected in activation patterns in different brain regions (Figure 3). A part of MPFC was more strongly activated to Targeted as compared to Non-Targeted trials (the Communicative intent manipulation), but did not show sensitivity to the linguistic difficulty manipulation (Figure 3A). By contrast, left inferior frontal, and left inferior parietal cortex were more strongly activated to Difficult as compared to Easy trials, but were not influenced by the communicative intent manipulation (Figure 3B). In summary, this study provides neural evidence for a dissociation between communicative message generation and lexico-semantic language processes. discussion The work that we have reviewed here argues for a neural dissociation between communicative and linguistic capabilities. First, there is evidence from lesion patients who despite severe damage to the language system perform well on mentalizing tasks, as well as on tasks involving the generation of a communicative message for another person. Second, neuroimaging in the neurologically healthy population indicates that distinct parts of the brain are involved in the generation of a communicative message as compared to linguistic processes. Hence, our main conclusion is that communicative abilities should be best understood as neurally -and cognitively -distinct from language and that successful communication does not necessitate, nor rely upon a functioning language system. We take this as strong evidence for the proposal healthy volunteers to engage in the visuo-spatial communication paradigm that we described above (TCG). Participants communicated the position and orientation of a visual token to another individual, using only limited visuo-spatial means. It was observed that activation in right posterior superior temporal cortex/temporo-parietal junction was increased when an individual designed a communicative act for another person, as well as when that second person interpreted the communicative act of the first person. This region has been implicated in mentalizing tasks before, and echoes the findings of mentalizing deficits in right hemisphere damaged patients (Happé et al., 1999 see Amodio andFrith and Frith, 2006 for reviews). The overlap is interpreted as evidence for similar mechanisms engaged in generation and interpretation of communicative intentions. Figure 3 | A neural dissociation between communicative and linguistic abilities. Results from an fMRI study in young healthy participants. Participants generated a description of a given concept ("Target word") while they were prohibited to use certain Taboo words. 
Two factors were manipulated: (1) Communicative intent: The participants created the description either for another individual ("Targeted trials") or not for a specific other individual ("Non-Targeted trials") and (2) Linguistic difficulty: Taboo words were either semantically closely related to the Target word, making it more difficult to come up with a description, or were semantically less closely related to the Target word, which makes it more Easy to come up with a description. (A) Shows the result of comparing Targeted versus Non-Targeted trials, (B) shows results of the Difficult versus Easy comparison. The results show that medial prefrontal cortex was sensitive to the Communicative intent manipulation (A), but not to the linguistic difficulty manipulated, whereas an opposite pattern was observed in left inferior frontal cortex (B). From this association, it has been taken that the capacity to understand intentions of others (that is, to possess a ToM) is crucially dependent upon certain aspects of language being in place. A full exposition of this literature is beyond the scope of our paper, but it should be noted that this conclusion relies on equating performance on false belief tasks with intention recognition or mentalizing abilities. This relationship has been criticized because false belief tasks tax multiple cognitive systems, and plausibly involve other factors than "just" mentalizing, such as working memory load (e.g., Bloom and German, 2000). Furthermore, false belief tasks may only tap into a subset of ToM capabilities and indeed, research with preverbal infants seems to suggest that equating false belief understanding with mentalizing abilities is not well justified. That is, despite the absence of syntactic structures and lexical forms that have been claimed to be necessary for the representation of false beliefs, there is evidence that preverbal infants show the capacity to understand another person's intentions Liszkowski, 2006;Liszkowski et al., 2008;cf. Aureli et al., 2009), and to attribute false beliefs to another individual (Onishi and Baillargeon, 2005;Baillargeon et al., 2010). Some have argued that, although the abilities of infants show that intention recognition can precede language development, it is not until relatively late in preschool years (around 4 years of age) that children develop "the real thing" for ToM, namely false belief understanding (Pyers, 2006). This argument is reminiscent of the quote with which we started our contribution. Searle argued that communication without language is perhaps possible, but that this is just a marginal phenomenon, as one cannot communicate about abstract and difficult concepts such as mononucleosis without language (see Introduction and Searle, 1969). In the case of false belief understanding and language development, this argument breaks down when we again recognize that false belief tasks include components other than intention recognition alone. Moreover, the compelling evidence for intention recognition abilities in preverbal infants cannot be marginalized by referring to the inability of children under 4 years of age to pass a false belief task. On top of the evidence from pre-verbal infants, we described findings from severely aphasic patients who, despite their severe limitation in language ability, are able to pass false belief tests. 
This casts further doubt upon the relationship between language and mentalizing abilities in the sense that even on standard measures of mentalizing, performance can be maintained without a fully functioning language system. It is possible that the role of language in mentalizing may be restricted to configuring the capacity for ToM in early childhood, and thus if language is impaired in later life as in acquired aphasia, mentalizing ability is not lost with the loss of language. However, the convergence of evidence from infancy and adults with aphasia strengthens the case as to the considerable autonomy between these two cognitive capacities. We take this as strong evidence for the proposal that communicative and linguistic abilities are cognitively distinct (Levinson, 2006; Tomasello, 2008; Airenti, 2010; Bara, 2010).
Separate capacities or extended language network?
It would be a mistake to interpret our conclusion of separate cognitive capacities for language and communication as meaning that language and communication have little to do with each other. On the contrary, as we described in the introduction, it is trivially the case that language is used mainly and perhaps almost exclusively in a communicative manner in everyday life. Indeed, part of the success story of the human species is due to its capacity to use language as a communicative device. The fact that our capacity to understand the intentions of others is interlinked with normal language use has led some to propose an "extended language network," encompassing mentalizing-related areas such as MPFC as well as areas more traditionally implicated in language (Ferstl et al., 2008). We are sympathetic to the notion that language entails more than the traditional semantic, syntactic and phonological processing of spoken and written words. Indeed, neuroimaging studies show that "non-linguistic" input such as from hand gestures and from visual information activates parts of the traditional language network (Willems et al., 2008, 2009; Holle et al., 2008; Straube et al., 2009), and that areas outside of the traditional language network can be involved in language understanding. An example of the latter is activation of the motor cortex when participants read action-related language (e.g., "He kicks the ball"; Hauk et al., 2004; Tettamanti et al., 2005; Aziz-Zadeh et al., 2006; Willems et al., 2010b,c). However, we feel that incorporating mentalizing abilities into an extended language network is not a helpful conceptualization. We showed evidence for a separation of mentalizing abilities and linguistic abilities in the human brain, such as in the case of patients who have lost the capacity for language, but are still able to communicate. Moreover, lexico-semantic processing can be distinguished from communicative message generation in the healthy human brain. The separation of linguistic and communicative abilities therefore seems a more fruitful characterization rather than calling both "language," and allows for some forms of communication that are not linguistic. Although it is clear that there is not a single, monolithic neural network only involved in language, it seems reasonable to use the term "language network" for areas involved in the traditional semantic, syntactic and phonological processing triangle as a shorthand in scientific literature as well as in clinical practice. However, it must be realized that other parts of the brain are crucially involved in everyday language production and comprehension.
A counter-argument from development
There is a sizeable literature which argues for the opposite conclusion to the one that we have reached, namely that language and mentalizing/communicative abilities are causally related and that the capacity to mentalize about others' beliefs, intentions and desires crucially depends upon language abilities. An important source of evidence for this position is the observation that performance on false belief tasks correlates with several aspects of language development (Milligan et al., 2007; see Pyers, 2006 for discussion).
Conclusion
In summary, we reviewed evidence from neuroimaging in healthy participants as well as results from neuropsychological populations which show that the generation of a communicative message is best thought of as a capacity which is distinct from core linguistic processes. Hence the perspective from neuroscience compellingly argues for loosening the presumed causal ties between communicative abilities and language, and between mentalizing abilities and language (e.g., Sperber and Wilson, 1995; Tomasello et al., 2005; Levinson, 2006; Tomasello, 2008; Airenti, 2010; Bara, 2010). A similar conclusion can be reached from research in infants. The claimed interdependence of mentalizing and language seems to be mainly due to the heavy reliance in experimental studies on false belief tasks, which are theoretically well motivated (Wimmer and Perner, 1983), but should perhaps not be taken as the only proxy for testing mentalizing abilities (Bloom and German, 2000). It will be a challenge for future research to develop new paradigms to test communicative abilities which suffer less from confounding factors (see de Ruiter et al., 2010 for an example). Given the paucity of available data, there are many directions for future research. First, future work on healthy adults should be aimed at investigating how the two neural systems interact, since they are obviously closely related. Second, research in healthy as well as in neuropsychological populations should be used to gain more adequate tools for assessment and improvement of communicative abilities in those with severe language difficulties. Finally, neuroimaging work in developing populations should investigate the intricate interplay between neural language development and development of mentalizing abilities.
Acknowledgments
Supported by the Netherlands Organisation for Scientific Research (NWO Rubicon 446-08-008) and the Niels Stensen Foundation. Publication costs of this article were paid through a grant from the Netherlands Organisation for Scientific Research Open Access initiative. We are most grateful for the constructive criticism and recommendations from two reviewers.
6,823.6
2010-05-04T00:00:00.000
[ "Linguistics" ]
The intersection of DNA replication with antisense 3′ RNA processing in Arabidopsis FLC chromatin silencing How noncoding transcription influences chromatin states is still unclear. The Arabidopsis floral repressor gene FLC is quantitatively regulated through an antisense-mediated chromatin silencing mechanism. The FLC antisense transcripts form a cotranscriptional R-loop that is dynamically resolved by RNA 3′ processing factors (FCA and FY), and this is linked to chromatin silencing. Here, we investigate this silencing mechanism and show, using single-molecule DNA fiber analysis, that FCA and FY are required for unimpeded replication fork progression across the Arabidopsis genome. We then employ the chicken DT40 cell line system, developed to investigate sequence-dependent replication and chromatin inheritance, and find that FLC R-loop sequences have an orientation-dependent ability to stall replication forks. These data suggest a coordination between RNA 3′ processing of antisense RNA and replication fork progression in the inheritance of chromatin silencing at FLC. Nuclei extraction: After labelling, the whole seedlings were ground in liquid nitrogen to a very fine powder. The powder was gradually and gently resuspended in nuclei extraction buffer (20 mM MOPS pH7, 20 mM NaCl, 90 mM KCl, 2 mM EDTA pH8, 0.5 M sucrose, 0.1% v/v 2mercaptoethanol) using 10 ml of buffer for each gram of seedlings. The suspension was filtered through a double layer of Miracloth into conical tubes and spun at 1000 rcf. The pellet was resuspended into nuclei extraction buffer and loaded on the top of 2.5 M sucrose and 60% Percoll gradient and spun for 45 min at 1000 rcf. The upper phase was collected and diluted 1:4 with nuclei extraction buffer, mixed gently and spun at 1000 rcf for 10 min. The supernatant was removed, nuclei were washed twice in extraction buffer and resuspended. Plugs preparation: The nuclei were resuspended in a small amount of nuclei extraction buffer and an equal volume of 1.4% low melting point agarose was added mixing gently. The nuclei solution was cast into plug molds (BioRad) and plugs were allowed to polymerize at 4 o C. To deproteinize the nuclei the plugs were incubated in proteinase K solution (0.5 M EDTA pH8, 1% sodium lauroylsarcosine, 1 mg/ml proteinase K) for 16 hours in a water bath at 50 o C, the proteinase K solution was then replaced with freshly made solution and incubated for an additional 8 hours. The proteinase K solution was removed, and plugs were washed three times in an excess of wash buffer (10 mM Tris-HCl pH 8 and 10 mM EDTA pH 8) for three hours each time under constant, but gentle agitation. Plugs were stored in buffer 5 from FiberPrep kit (Genomic Vision, Bagneux, France) and shipped to Genomic Vision, Bagneux France, for processing. DNA combing and immunodetection: DNA combing and immunodetection were performed according to the EasyComb service procedures (Genomic Vision, Bagneux, France). Briefly, from the plugs containing the nuclei, single and long DNA molecules were extracted and stretched at a constant speed on specifically treated coverslips. After immunodetection of CldU and ssDNA, coverslips were scanned with FiberVision® scanner and images were analyzed. Image and data analysis: Two intact CldU (magenta) replication tracks, which were no more than four microns apart and were flanked by counterstained ssDNA (blue) were classed as a fork pair. 
Only those tracks fulfilling these criteria were selected and their length was measured using Fiji software (https://imagej.nih.gov/ij/), a version of ImageJ. We single-labelled our tracks with only CldU therefore we cannot discriminate between diverging replication forks, which derive from the same origin, and converging replication forks, which instead derive from separate origins. However, we minimized the presence of such converging events by keeping the labelling time to the minimum that allowed us to detect measurable tracks. The labelling time of 30 minutes we used produced relatively short tracks, when taking into consideration the length of the tracks obtained by other researchers using equivalent labelling times with mammalian cell cultures. The uptake of thymidine analogues by an intact multicellular living organism, like our seedlings, would be expected to take longer than the uptake by cells in cultures, hence the reduced length of tracks. However, the length of our replication tracks is consistent with that reported in the only previously published analysis of Arabidopsis DNA labelled fibers dating back to 1978, in which fibers were labelled with tritiated thymidine (3). The symmetry index of a fork pair was calculated by dividing the shorter track by the longer track within each fork pair. Symmetry index =1 when the left and right tracks within a fork pair are of equal length and so they are symmetric, numbers lower than 1 point to asymmetric fork pairs. In Fig.1, in the graphs the whiskers span the 5 to 95 percentiles, the dots represent the outliers, the box extends from the 25 th to the 75 th percentiles, the line in the middle of box represents the median and the cross represents the mean. DT40 cell culture and mutants The DT40 cell culture is described in (4), the WT ∆G4 cell line in (5), the primpol ∆G4 line in (6), the WT GAA10 and primpol (GAA10) lines in (7). The 3' region of FLC was PCR amplified using the primers FLC forward CTATCAGGCGCGCCTGCTTCCAAACTTAAAAGCTTAAAC and FLC reverse TATTCTAGGCGCGCCCCTTCATGGATGACGGAACTACGG, which have an AscI site added, and digested with AscI. The construct was then cloned into the BU-1 targeting construct using the MluI sites and screened for orientation by Sanger sequencing. The targeting constructs were then digested with NotI and transfected into WT ∆G4 and primpol ∆G4 cells by electroporation. Successfully transfected cells were selected by puromycin treatment and flow cytometry. The targeting cassette was removed by transfecting the cell with a Cre expressing vector and successful removal was screened for by flow cytometry as described (5). Fluctuation analysis BU-1 fluctuation analyses were performed as described previously in (5,8) with some minor changes. In brief, confluent cells (0.4 × 10 6 to 2 × 10 6 /ml) were plated into 96-well plates to obtain a single cell per well either by limiting dilutions or BU-1 positive sorting on the MOFLO sorting cytometer. Cells were grown for 20 generations and then directly stained with anti-BU-1a conjugated with phycoerythrin (Invitrogen 21-1A4-PE MA5-28754) at 1:100 dilution for 10 min at 37 o C. Cells were analyzed by flow cytometry using an LSR Fortessa cytometer (BD Biosciences). At least two independent fluctuation analyses were performed with at least two cell lines derived from individual transfected clones per construct. Within each experiment 22-48 individual clones were analyzed for each cell line. 
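The symmetry index and the box-plot summary described above amount to a couple of simple calculations; the sketch below shows one way to reproduce them, assuming the left/right CldU track lengths for each fork pair have been measured in Fiji and exported as plain numbers (the variable names and sample values are illustrative, not data from the study).

```python
import numpy as np

def symmetry_index(left_lengths, right_lengths):
    """Symmetry index per fork pair: the shorter track divided by the longer track.
    A value of 1 means the two tracks are equal; values below 1 indicate asymmetry."""
    left = np.asarray(left_lengths, dtype=float)
    right = np.asarray(right_lengths, dtype=float)
    return np.minimum(left, right) / np.maximum(left, right)

def box_summary(values):
    """Summary statistics matching the plotting convention described in the text:
    whiskers at the 5th/95th percentiles, box from the 25th to the 75th percentile,
    plus the median and the mean."""
    v = np.asarray(values, dtype=float)
    return {
        "p5": np.percentile(v, 5),
        "p25": np.percentile(v, 25),
        "median": np.median(v),
        "mean": v.mean(),
        "p75": np.percentile(v, 75),
        "p95": np.percentile(v, 95),
    }

# Illustrative example: three fork pairs with left/right CldU track lengths in microns.
left = [4.2, 3.1, 5.0]
right = [4.0, 4.8, 5.1]
si = symmetry_index(left, right)
print(si)                # e.g. [0.952 0.646 0.980]
print(box_summary(si))
```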
DNA:RNA immunoprecipitation (DRIP) DRIP was performed mostly as described in (7). In brief, 30 million DT40 cells were harvested by centrifugation, washed in PBS and lysed using hypotonic lysis buffer (10 mM Tris-HCl pH 7.5, 10 mM NaCl, 2.5 mM MgCl2) with 0.5% (vol/vol) NP-40. The nuclei were pelleted by centrifugation, resuspended in nuclei lysis buffer (25 mM Tris-HCl pH 7.5, 1% SDS, 5 mM EDTA) and treated with Proteinase K (Thermo Fisher) overnight. SDS and contaminating proteins were removed by adding 5 M KOAc pH 5.5 and centrifuging. DNA was precipitated from the supernatant with isopropanol. Samples were then split and treated with 40 µg RNase A (Thermo Fisher, EN0531) alone or in combination with 20 U RNase H (NEB, M0297) overnight. The resulting DNA was then sonicated using the Bioruptor sonicator (Diagenode) with 30 seconds on, 30 seconds off for 30 cycles on the high setting, yielding an average fragment size of 1 kb. Input control fractions were taken and samples were subsequently diluted to 1 ml with IP dilution buffer (16.6 mM Tris-HCl pH 7.5, 1.2 mM EDTA, 165 mM NaCl, 1.1% Triton X-100, 0.01% SDS) and immunoprecipitated with 10 µg S9.6 antibody overnight at 4°C. Protein G Dynabeads (Thermo Fisher, 10003D) were then added to the samples and incubated for 1 hour at 4°C. The beads were then washed with low-salt, high-salt and LiCl wash buffers and TE buffer. The DNA:RNA hybrids were eluted for 2 h at 65°C in elution buffer (10 mM sodium phosphate buffer pH 7.0, 140 mM NaCl, 0.05% Triton X-100) and purified with a PCR Purification Kit (QIAGEN). The samples were then analyzed by qPCR using the primers FLC forward CTGCTGGACAAATCTCCGA and FLC reverse GGATTTTGATTTCAACCGCCGA, with the signal quantified as a percentage of the input signal. DNA secondary structure prediction G-quadruplexes were predicted using the online platform QGRS mapper (10) with a maximum length of 30 nucleotides, a minimum G-group of 2, and a loop size of 0 to 36. Triplexes were predicted using the Triplex Bioconductor package in R (11), which predicts triplexes based on the purine content.
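The DRIP-qPCR signal above is reported as a percentage of the input. A common way of computing this from raw Ct values is sketched below; the dilution correction, the Ct numbers and the variable names are assumptions for illustration rather than details taken from the protocol.

```python
import math

def percent_input(ct_input, ct_ip, input_fraction=0.1):
    """Express an IP qPCR signal as percent of input.

    ct_input: Ct of the (diluted) input fraction
    ct_ip: Ct of the immunoprecipitated sample
    input_fraction: fraction of the sample kept as input (e.g. 0.1 for a 10% input)

    The input Ct is first adjusted to 100% by subtracting log2(1/input_fraction),
    then the ratio is computed assuming a doubling per cycle.
    """
    adjusted_input = ct_input - math.log2(1.0 / input_fraction)
    return 100.0 * 2.0 ** (adjusted_input - ct_ip)

# Illustrative numbers only.
print(percent_input(ct_input=24.0, ct_ip=28.5, input_fraction=0.1))  # ~0.44% of input
```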
1,996.4
2021-07-01T00:00:00.000
[ "Biology" ]
ADAPTIVE GUARANTEED ESTIMATION OF A CONSTANT SIGNAL UNDER UNCERTAINTY OF MEASUREMENT ERRORS
The goal of the research is the accuracy enhancement problem of guaranteed estimation when measurement errors are not realized in the worst way, i.e. the environment in which the object operates does not behave as aggressively as is built into the a priori data on the permissible set of error values. Research design. The problem of adaptive guaranteed estimation of a constant signal from noisy measurements is considered. The adaptive filtering problem is, according to the results of measurement processing, to choose from the whole set of possible realizations of errors the one that would generate the measurement sequence. Results. An adaptive guaranteed estimation algorithm is presented. The adaptive algorithm construction is based on a multi-alternative method based on the Kalman filter bank. The method uses a set of filters, each of which is tuned to a specific hypothesis about the measurement error model. Filter residuals are used to compute estimates of realized measurement errors. The choice of the realization of possible errors is performed using a function that has the meaning of the residual variance over a short time interval. Conclusion. The computational scheme of the adaptive algorithm, a numerical example, and a comparative analysis of the obtained estimates are presented.
Introduction
The estimation problem of a constant signal $x$ from noisy measurements is considered [1]:
$y_k = x + v_k, \quad k = 1, 2, \ldots, N,$  (1)
where $x \in \mathbb{R}^1$ is a constant value (useful signal) and $v_k \in \mathbb{R}^1$ are the measurement errors. Under natural conditions, the values of the measurement errors $v_k$, $k = 1, \ldots, N$, are unknown (uncontrolled). A priori information about the measurement errors is formalized by choosing a hypothesis about the properties of the errors $v_k$. The following hypotheses are traditional.
1. The measurement errors $v_k$ are random and given by a probability density function with known parameters.
2. The measurement errors $v_k$ are uncertain quantities: $v_k \in V$, where $V$ is a given convex set of their possible values.
Acceptance of the hypothesis about the probabilistic nature of measurement errors makes it possible to formulate the problem within the framework of the stochastic approach as the problem of finding the optimal estimate in the mean square sense and to use statistical methods [2]. The most common is the use of the least-squares method (LS) [1, 2], i.e. minimizing the function of squared residuals, which yields the estimate
$x_N^{*} = \arg\min_{x} \sum_{k=1}^{N} (y_k - x)^2.$
In the guaranteed estimation problems under uncertainty relative to disturbances and measurement errors, the admissible sets of their possible values are determined. The solution is chosen from the condition of optimization of guaranteed bounded estimates corresponding to the worst realization of disturbances and measurement errors. The result of the guaranteed estimation is an unimprovable bounded estimate (information set), which turns out to be overly pessimistic (reinsurance) if the a priori admissible set of measurement errors is too large compared to their realized values. The admissible sets of disturbances and measurement errors may turn out to be only rough upper estimates on a short observation interval.
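For an interval prior set and an interval error set, the information set mentioned above reduces to intersecting intervals implied by each measurement. The sketch below illustrates this basic set-membership recursion; the numerical bounds and measurements are placeholders, not values from the article, and the function simply returns None when the a priori data turn out to be inconsistent.

```python
def information_set(y, x_bounds, v_bounds):
    """Interval guaranteed to contain the constant x, given measurements
    y_k = x + v_k with v_k in [v_lo, v_hi] and a prior x in [x_lo, x_hi].
    Returns None if the intersection becomes empty (a priori data inconsistent)."""
    x_lo, x_hi = x_bounds
    v_lo, v_hi = v_bounds
    for yk in y:
        # Measurement-consistent interval for x implied by y_k = x + v_k.
        x_lo = max(x_lo, yk - v_hi)
        x_hi = min(x_hi, yk - v_lo)
        if x_lo > x_hi:
            return None
    return (x_lo, x_hi)

# Illustrative example with a deliberately wide error set V = [-0.5, 0.5].
measurements = [1.12, 0.93, 1.05, 0.88]
print(information_set(measurements, x_bounds=(-5.0, 5.0), v_bounds=(-0.5, 0.5)))
# -> approximately (0.62, 1.38): with a wide V the set shrinks slowly,
#    which is the pessimism the adaptive algorithm tries to reduce.
```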
Recurrent algorithms are most widely used in solving problems of processing noisy measurements, when an estimate of an unknown quantity is formed by the sequential processing of each available measurement and the results obtained at the previous processing step. For the considered problem (1), the recurrent LS method coincides with the relations of the Kalman filter (KF) [3, 4]. However, any inaccuracy in the knowledge of the probabilistic characteristics of the errors $v_k$ can cause divergence of the filtering process [5-8]. Moreover, in many situations the application of stochastic estimation methods can be difficult: due to the small number of available measurements, based on the results of which the search for the best estimate is carried out, or due to the absence of probabilistic characteristics of the measurement errors. Besides, the assumption about the random nature of measurement errors is not always justified [5, 8]: often it is only known that the measurement errors $v_k$ are bounded. Given a set of possible values of the measurement errors, it is possible to formulate the problem within the guaranteed (set-membership) approach as the problem of finding the bounded set of possible values of an unknown quantity [9-26]. In this case, the problem solution is selected from the condition of the optimization of guaranteed bounded estimates corresponding to the worst realization of measurement errors [8, 12, 18, 24]. The advantage of guaranteed estimation methods is the absence of random filtering errors [10-15, 21, 23, 27]. However, the resulting bounded estimate (information set) may turn out to be overly pessimistic (reinsurance) if the set of possible values of measurement errors is too wide [8, 17, 18]. The problem of developing an adaptive algorithm for guaranteed estimation therefore becomes relevant [28]. The adaptive guaranteed estimation problem is, according to the results of measurement processing, to choose from the whole set of possible realizations of errors the one that would generate the sequence of measurements [8]. One of the central issues of modern estimation theory [29-32] is the synthesis of adaptive filters capable of providing a sufficiently accurate estimate of the state vector in the absence of accurate a priori information about disturbances and measurement errors. In [6, 7, 29, 32], various algorithms for adaptive filtering of stochastic systems with unknown values of the noise covariance matrices are discussed. This article is focused on the problem of adaptive guaranteed estimation of a constant signal from noisy measurements. The development of an adaptive estimation algorithm is based on a multi-alternative method based on a Kalman filter bank, which was first proposed in [33] for estimating random processes with unknown constant parameters [34, 35]. This method has found wide application in problems with a multi-alternative description of a system state or process [36-38]. The work continues research [39, 40].
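As noted above, for problem (1) the recurrent LS estimate coincides with the Kalman filter relations. A minimal sketch of that recursion for a constant signal is given below; the initial estimate, initial variance and noise variance are illustrative assumptions, not values prescribed by the article.

```python
import numpy as np

def kalman_constant(y, x0, p0, r):
    """Recursive least-squares / Kalman filter for estimating a constant signal x
    from measurements y_k = x + v_k with measurement-noise variance r."""
    x_hat, p = x0, p0
    estimates = []
    for yk in y:
        k_gain = p / (p + r)                   # Kalman gain
        x_hat = x_hat + k_gain * (yk - x_hat)  # update with the measurement residual
        p = (1.0 - k_gain) * p                 # posterior variance
        estimates.append(x_hat)
    return np.array(estimates)

# Illustrative example: true signal x = 1, Gaussian errors with sigma = 0.17.
rng = np.random.default_rng(0)
y = 1.0 + 0.17 * rng.standard_normal(100)
est = kalman_constant(y, x0=0.0, p0=1.0, r=0.17**2)
print(est[-1])   # final estimate, close to 1
```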
Statement of the problem Consider the estimation problem solution of unknown constant signal from a single realization of measurements (1) in the framework of a guaranteed (set-membership) approach. A priori information about the initial value 0 x of a variable and errors k v is represented in the form of admissible sets of the corresponding quantities [9-12, 16-20, 24, 26] are respectively left and right bounds of the set 0 X , v  , v  are respectively left and right bounds of the set V . The result of guaranteed estimation is the construction of the information set k X that is guaranteed to contain an unknown signal x [10][11][12][13][14][15][16][17][18][19][20][21][22][23][24]. The information set is defined as follows [18,23]: where   k X y is the measurement consistent set The presence of the estimate k X (4) is fundamentally important from determining the consistency of a priori information (2) [23]. The algorithm efficiency mainly depends on a priori estimate V which is adequate to the realized errors k v : 1. Errors in the set V definition, i.e. a failure of the assumptions (2) when k v V  , can lead to the fact that the information set k X may be empty at some time step k : k X   . Errors in set 0 X definition can also lead to such a situation. 2. If the set V is too wide, then the information set k X will regularly within the measurement con- . In this case, measurement processing is useless, i.e. it does not lead to an increase in the estimation accuracy -a decrease in estimation errors. Consider an algorithm for solving the guaranteed estimation problem for the case, when a prior admissible set V is too wide, as a result Adaptive algorithm of guaranteed estimation By following the LS-method and the KF, consider the measurement residual formed as the difference between the measured value and the estimate obtained at the previous step [4,8,9]   Substituting the measurement equation (1) into this equation, we find that , 1, 2, ..., , is the estimation error of unknown signal x . Thus, the residual k  (7) corresponding to the current moment of time k is an estimate of the realized measurement error k v , and the estimation error of the measurement error is equal to the estimation error of the signal . x As for estimation error k e , it is known that is the centered set symmetric about zero, (8) is guaranteed and means that the actual estimation error k e can take any value from the set 0 1 k X  . Taking into account the constraint (8) on the error value k e , the permissible set of measurement errors k v can be represented as In the equation for the measurement residual (6) substitute the estimate * 0 x given a priori for the estimate obtained at the previous time step * The value * 0 e x x   (12) is the error of the initialization of x . The centered set is the set of possible values of errors e (12), symmetric about zero. Taking into account equations (10) and conditions (11), (13), represent the admissible set of measurement errors in the form Thus, the width of the permissible set k V (14) of measurement errors k v is determined by the width of the permissible set of errors e in setting information about the actual value x (13). Explain this choice. As shown above, if the admissible set , given too wide, so that 0 X V  , then the information set k X is within the measurement consistent set: . According to the minimax principle, the estimation error and is constant over the entire considered time interval. 
The admissible set (13) of the estimation error e can be represented as the sum of two subsets The value and sign of the actual estimation error e are unknown. Therefore, we can talk about accepting one of two hypotheses, a hypothesis 0 H :  . The acceptance of the hypothesis 0 H with the fulfillment of conditions The acceptance of the hypothesis 0 H , while An error in setting the set k V (17) can lead to the fact that the information set k X at some time step k may turn out to be empty: k X   . In this case, further construction of the sets using the filter equations (4), (5) becomes impossible. However, it may turn out that x . To characterize the actual quality of estimation, one can use a sequence of a posteriori residuals of measurements [8,9,39,40]. * * , , , 1, 2, ..., , where * k x is the estimate of unknown signal x obtained by the time step k . Of particular interest is to obtain the best (optimal) estimates * k x . These estimates can be obtained by solving the problem of minimizing the function * 2 1 min . Function (19), which is the sum of the squares of a posteriori residuals, carries information about the estimated error of estimation [8,9]. Therefore, the criterion for choosing the admissible set of realized measurement errors is the accuracy of the obtained point estimates * k x of the signal x for different values of k V (16), (17). In this case, the algorithm accuracy for the selected set k V is estimated by averaging over the considered measurement interval. Thus, it is possible to specify the following guaranteed estimation algorithm, which is adaptive to the realized measurement errors.  respectively, rather accurately describes the behavior of actual estimation errors on the measurement interval 1 2 , , ..., l y y y . 3. Following the accepted hypotheses, the admissible sets of measurement errors are calculated (16) and (17), respectively. We will consider the results of the estimation algorithm for different admissible sets of measurement errors. 4. The estimate of the signal x obtained on this measurement interval will be denoted * l x and will be found by the criterion of the minimum squared residuals (19), comparing the results of the algorithm for different admissible sets of measurement errors. 5. For the next measurement interval 1 2 , , ..., l l l l y y y    as a priori estimate of the signal x , we will consider the estimate obtained from measurements at the last time steps l : The measurement processing on the interval 1, k l l l    is carried out in the same way as the measurement processing on the interval 1, k l  . The application of the algorithm does not require storing l measurements, but only calculating and storing estimates with the width of the measurement interval equal to l . Represent a multi-alternative model of the algorithm in the following form. Step 2. Calculate * 0 x following (11). Accept the hypothesis Step 3. Calculate k  following (10) and the admissible set of measurement errors k v following (16). Step 2. Calculate * 0 x following (11). Accept the hypothesis Step 3. Calculate k  following (10) and the admissible set of measurement errors k v following (17). Step 5. If k X   , go to Step 2 of Algorithm 1. Otherwise, go to Step 6. Step 6. Calculate k  following (18). Step 7. Define 1 k k   go to Step 3. If k l  , go to Step 8. Step 8. Calculate Step 9. If the value Numerical simulations The problem of constant signal estimation from noisy measurements is considered , 1, ..., . 
, the number of measurements is 100 N  . The noisy measurements k y (20) and measurement errors k v are shown in Fig. 1. The measurement errors are assumed to be zero mean Gaussian white noise sequence with standard deviation 0.17 v   . The prior admissible sets are taken as follows: Fig. 1b shows, the realization of measurement errors is such that 3 Fig. 1 . The processes considered in the example: a -noisy measurements k y ; b -measurement errors k v The measurement interval was divided into 5 equal sub-intervals. According to the results of measurement processing, the information set of possible values of the signal x is obtained (Fig. 2 The information set of possible values of the signal x computed by "non-adaptive" filter is (Fig. 2  . The quantity  shows what part of the prior uncertainty is the information set [41]. The information set computed by the adaptive guaranteed algorithm does not exceed 2%   1.56   of the prior uncertainty value, while the information set computed by the "non-adaptive" guaranteed algorithm exceeds 11%   11.74   of the prior uncertainty value. Application of the Kalman Filter Recurrence equations of LSE [3,8]   As mentioned above, equations (21), (22) are the KF equations for the considered problem (20). The variance of measurement errors is known: 2 v r   . Initial conditions for the KF are: From a comparison of the results of the adaptive guaranteed estimation and the KF (Fig. 3, Table 1), it follows that the implementation of the adaptive guaranteed estimation algorithm made it possible to reduce the initial uncertainty in the knowledge of the signal x by 64 times, and the use of the Kalman filter -by 20 times. x of the adaptive guaranteed algorithm, respectively, we have (Fig. 4, Table 2) * 3 ,3ˆ1 00% 17.74%, 100% 50%. max max x of the adaptive guaranteed algorithm. The Kalman estimate turns out to be more accurate since the real probability distribution law of measurement errors k v is Gaussian. The estimate of the guaranteed algorithm is selected based on the worst realization of measurement errors. In the case of a single realization of measurements   1 N k k y  , the solution of the guaranteed estimation problem, when the estimate is a point which is equidistant from bounds of the information set (middle point of the interval), is nonrational [41]. In the considered example, the true value of the signal x is on the border of the information set. However, in practice, such a situation cannot be recognized. Consider the measurement errors k v in terms of uniformly distributed in the interval   , v v  white noise at level of about 0.5 v  (Fig. 5). The prior admissible sets are Initial conditions for the KF are: Fig. 5 shows, the realization of measurement errors is such that at some time steps 0,5 k v  . A comparison of the results of "non-adaptive" guaranteed estimation and the KF is shown in Fig. 6 and Table. 3. Thus, in the case when the admissible set of measurement errors ,  is adequate to the realized measurement errors so that the measurement errors can take values on the set bound or close to its bound, the guaranteed estimation errors are minimal. For the considered realization of measurement errors (Fig. 5), at time steps 13, 34, 84 k  the values of measurement errors are closest to the boundary values. At these time steps, the guaranteed algorithm provides the most accurate estimates. In this case, the application of adaptive methods is not required. 
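The uncertainty-reduction figures quoted in the example (the share of the prior uncertainty and the "reduced by N times" factors) follow from simple interval-width ratios; a sketch with placeholder intervals is given below.

```python
def uncertainty_ratio(prior_interval, posterior_interval):
    """gamma: width of the information set as a percentage of the prior width,
    together with the corresponding reduction factor."""
    prior_width = prior_interval[1] - prior_interval[0]
    post_width = posterior_interval[1] - posterior_interval[0]
    gamma = 100.0 * post_width / prior_width
    return gamma, prior_width / post_width

# Illustrative values only (not the exact intervals from the example).
print(uncertainty_ratio((-5.0, 5.0), (0.92, 1.08)))   # gamma ~ 1.6 %, ~62x reduction
```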
Conclusion
The article has proposed a solution to the problem of adaptive guaranteed estimation of a constant signal from noisy measurements. It is based on a multi-alternative method in which a set (bank) of filters is used, each tuned to a specific hypothesis about possible realizations of the measurement errors. Filter residuals are used to compute estimates of the realized measurement errors. The choice of the possible realization of errors is made using a function that has the meaning of the residual variance over a short time interval.
4,328.8
2020-12-01T00:00:00.000
[ "Mathematics" ]
Review on Coffee Production and Marketing in Ethiopia
Coffee, Ethiopia's largest export crop, is the backbone of the Ethiopian economy. However, Ethiopia has not yet fully exploited its position as the producer of some of the best coffees in the world. Hence, information has been gathered on different aspects of coffee with the objectives of reviewing coffee production and marketing in Ethiopia, and the marketing actors and margin distribution of coffee in Ethiopia. Data from FAOSTAT, CSA and different published materials on coffee were used. According to the review, lack of competitiveness, lack of infrastructure, inadequate access to services, low value addition, inadequate technology transfer and research, competition from khat and rainfall variability are among the major constraints of coffee production in Ethiopia. Price volatility, poor access to markets, little market promotion and weak incentive mechanisms, and low prices were reported to be the major problems of coffee marketing in Ethiopia. Licensing more traders and inspecting their activities, enhancing infrastructural and institutional facilities, and improving coffee production technologies through the development and extension of improved coffee varieties and other related agronomic practices were among the major recommendations forwarded from the review.
Forest Coffee: Management in this system is minimal, limited mainly to clearing to allow movement in the forest during harvesting time (Tadesse, 2015). According to Labouisse et al. (2008) it includes simple coffee gathering and forest production where coffee trees are simply protected and tended for convenient picking. This system is found in southeastern and southwestern parts of the country (mainly in areas like Bale, Bench-Maji, Illubabor, Kafa, Jimma, Shaka, and West Wollega) (Boansi and Crentsil, 2013). These areas are the centers of origin of Coffea arabica. This system accounts for about 10% of the total coffee production of the country (Melkamu, 2015).
Semi-Forest Coffee: Semi-forest coffee is more intensive, with increased farming interventions (e.g. thinning of trees, understory clearance and weed cutting, and planting of coffee seedlings) (Moat et al., 2017). Farmers acquire forest land for coffee farms, and then thin and select the forest trees to ensure both adequate sunlight and proper shade for the coffee trees (Melkamu, 2015). It is a type of coffee production system in which the forest coffee system is converted to a semi-managed forest coffee system through reduction of plant composition, diversity and density. This is the dominant production system in southwestern Ethiopia (mainly Bench-Maji, Illubabor, Jimma, Kafa, Shaka, and Wollega) and in the Bale Mountains of southeastern Ethiopia (Tadesse, 2015). This system accounts for about 35% of the total coffee production of the country (Tesfu, 2012).
Garden Coffee: Garden coffee is a further step in the cultivation process. Seedlings are taken from forest coffee plantations and transplanted closer to farmers' dwellings. In this system, coffee is grown in smallholdings under a few shade trees, usually combined with other crops and fruit trees (Tesfu, 2012). It accounts for approximately 50% of national production and is located near the residences of growers. It is planted at low densities and is mostly fertilized with organic materials (Boansi and Crentsil, 2013).
Geographically, this coffee production system is mainly found in the southern and eastern and some in southwestern parts of the country; and specifically in Gedeo, Guji, Hararghe, Jimma, Sidama, Wollega and some other places (Tadesse, 2015). Plantation Coffee: Plantation coffee is grown on plantations owned by the state and on some well managed smallholders coffee farms. In this system, recommended agronomic practices like improved seedlings, spacing, proper mulching, manuring, weeding, shade regulation and pruning are practiced (Melkamu, 2015). This sector includes a few large private and state farms mainly located in the south-west, as well as many smallholder plantations spread all over the coffee growing areas. It accounts for about 10% of national production (Labouisse et al., 2008). Coffee production in Ethiopia Coffee is grown by over 4 million small holder farmers. Farmers engaged in growing and producing stimulant crops such as coffee are greater in number than those growing fruits (CSA, 2018). It employs 15 million people, or roughly 15 percent of the country's population at different points along the value chain. Nearly 95 percent is cultivated on small plots, generally less than half a hectare. Ethiopia is the world's sixth largest coffee producer, accounting for 4 percent of production. It is also the largest producer in Africa, accounting for about 40 percent of continental production (Francom, 2018). Number of coffee producers has increased from 2012/13 to 2016/17 and then declined. Regarding total area of land allocated for the production of coffee, it has increased over the considered years though at different rates. Table (1) also indicated that there was a fluctuation in yield of coffee over the last six years in the country. From figure 2, one can easily understand that the trend of coffee yield has declined. Negative value of trend line of yield implied that yield has declined over the considered 25 years. The maximum and minimum of 8.65 and 5.2 quintals per hectares was obtained in 1999 and 2012 respectively. The low value of R-square in yield trend line implies that the trend line of yield did not fit the data or there was high variation of yield over the past 25 years in Ethiopia. Opportunities of Coffee Production in Ethiopia Genetic diversity and favorable Environments, Agro-forestry based production system, already known brands in the world market, trademarked and licensed benefit to all, Modern Marketing System (ECX), Encouraging policy and coffee price are among major opportunities of coffee production in Ethiopia (Tesfu, 2012). The Ethiopian coffee sector has bright prospects. The country has suitable altitude, optimum temperature, low labor costs and fertile soil. It can sustainably produce and supply fine specialty coffee with potential of producing all coffee types of the various world coffee growing origins (Jose, 2012). Another opportunities of coffee production in Ethiopia are; high national and international demand for the product, increasing interest of private sector with high investment potential, high support by both regional and federal governments (Berhanu, 2017). The Ethiopian Coffee and Tea Development and Marketing Authority has been re-established as per the proclamation endorsed by the House of Peoples' Representatives on December 2015, with a view to boosting the country's benefit from the sector. 
The Authority has mandates and responsibilities; to strengthen modern extension services to attain higher level of production and increased productivity, to establish quality based effective and efficient marketing systems, and to support, supervise and regulating of coffee processing industries (Zelalem, 2016). Ethiopian coffee is top in both color and taste. To maintain these qualities, there is a well-established and linked structure that connects coffee farmers, processing-plant owners, governmental organization and coffee processing (Melkamu, 2015). Production constraints of coffee in Ethiopia Coffee production in Ethiopia is constrained by lack of competitiveness, lack of infrastructure, in adequate access to services, low value addition, and in adequate technology transfer and research (Jose, 2012). In recent days, khat, a plant chewed by humans for its stimulating effect, is competing for farm land with coffee. Some small holder coffee farmers resorted to producing khat instead of coffee as they are increasingly attracted by the high prices and greater yield they get from cultivation of khat. A significant number of farmers particularly in the eastern part of the country have switched from coffee production to khat production. Khat is drought, diseases and pest resistant plant which can be harvested three to four times a year and generates better income for farmers than other cash crops including coffee (Tolera and Gebremedin, 2015). Moat et al. (2017) reported that the other challenge of coffee production in Ethiopia is the variability of weather pattern such as rainfall variability on the onset of the wet season, extension of dry season and more extreme (drier and hotter). According to Tadesse (2003) deforestation and change in land use are threatening coffee forest gene pools in Ethiopia. This has been aggravated with the recent coffee price crisis on the world market as a result of market liberalization. Farmers are shifting their coffee farm or forest to other monoculture crop production. Tesfu (2012) also added deforestation and land degradation, diseases, predominant traditional production, failure of using appropriate coffee technologies, inadequate services (credit, inputs, equipments), and lack of sustainability and competitiveness in the coffee sector are challenging coffee production and quality improvement in Ethiopia. Coffee export Ethiopia produces and exports one of the best highland coffees in the world (Samuel and Eva, 2008). Total earnings from goods export grew by 3% in 2018 over the same quarter of last year on account of the rise in export earnings from Coffee (19.1%), Oilseeds (4.9%), Leather and Leather products (27.7%), Fruits and Vegetables (16%), Meat and Meat products (10.1%), Flower (8.1%), Electricity (23.8%) and other exports (35.1%). Earnings from coffee picked up 19.1% in 2018 as compared to last year same quarter and reached USD 215.6 million on account of a 16.5% rise in export volume and 2.2% increase in international price. As a result, the share of coffee in total merchandise export earnings increased to 31.8% from 27.5% a year earlier (NBE, 2018). Countries such as Germany, France, Italy, Belgium, Sweden, Norway, Finland, Denmark, UK, Switzerland, USA, Japan, Saudi Arabia, Canada, Taiwan, South Korea, Australia and South Africa are traditional buyers of Ethiopian coffee (Melkamu, 2015). Agricultural exports share of total exports was declined from 86% in 2013/14 to 84% in 2016/17. 
However, the share of coffee in total exports increased from 30% in 2013/14 to 33% in 2016/17. Figure 2 shows that the trend in coffee yield has declined. This implies that the increase in both export quantity and value was not due to higher yield; rather, it resulted from the increase in area allocated to coffee and in output harvested. The highest and lowest export values were 1,023,691,000 and 129,177,000 US$, respectively. The former was recorded in 2014 while the latter was obtained in 1993. Figure 4: Trends of coffee export quantity and value. Source: Own computation from FAOSTAT data, 2019. Domestic consumption Ethiopia is not only a major producer and exporter of coffee but also the largest consumer in Africa. The share of consumption in total production in the years immediately following the initiation of the reform was relatively low, decreasing from as high as 65% in 1987 to 25% in 2003. The relatively small share of domestic consumption in production in the early years of the reform could be attributed to increases in exports observed in the country during that period, which resulted from increases in the number of exporters following the liberalization of internal marketing. It has, however, taken on an increasing trend since 2004 (Boansi and Crentsil, 2013). Similarly, Francom (2018) reported that coffee consumption in Ethiopia is growing, albeit slowly, as the population expands. In 2015/16 total production was 6.4 million 60 kg bags, of which 3.7 million were consumed within Ethiopia (Moat et al., 2017). An interesting new development regarding coffee consumption in Ethiopia's major cities is the emergence of small roadside stalls selling coffee to passers-by. The small roadside stalls serve coffee in a traditional manner. They have emerged and flourished in Ethiopia's major towns, growing very popular among coffee consumers who are frustrated by the escalating price of coffee and the deteriorating quality of coffee served in cafes and coffee shops. Unlike regular coffee shops, the small roadside stalls pay neither VAT nor house rent, making their cost of serving coffee much lower and more competitive than that of regular coffee shops (Alemayehu, 2014). Coffee in the local market is mainly coffee destined for export through the Ethiopian Commodity Exchange (ECX) but rejected for failing to meet ECX's quality standards. Yet the local coffee price is usually higher than international Coffea arabica prices (Alemayehu, 2014). According to Getachew (2011), mixing coffee with other grain crops such as barley is becoming common in the country. Coffee plays a vital role in both the cultural and social life of the Ethiopian community. Among coffee-producing countries in the world, Ethiopia ranks first in coffee consumption. Of the 200,000-250,000 tons of average annual production, about 50% is consumed in the country. The preparation and drinking of coffee is a unique culture in Ethiopia: the coffee ceremony. Coffee is not drunk alone; it is a social activity to be shared with others. Sharing coffee with others means you are at peace with them, and it cultivates community and friendship (Melkamu, 2015). Empirical Review on Marketing Margin Analysis of coffee along its value chain in Ethiopia According to Getachew (2011), the total marketing margin for coffee marketing in eastern Ethiopia was Birr 20 per kilogram of dried cherries produced and sold by farmers.
Among the actors in the coffee marketing channel, retailers receive the highest marketing margin, which is more than 79% of the total marketing margin. They also have the highest margin as a percentage of the cost price (average buying price plus marketing costs), at 31.75%. For instance, retailers earn a profit of Birr 2891.76 from selling 1 quintal of hulled and prepared coffee, while wholesalers obtain a profit of only Birr 394.4, roughly sevenfold less than the amount earned by retailers. Even farmers, who are engaged in coffee production the whole year, obtained only 45% of the margin gained by retailers. It can also be seen that 67% of the retail value (the final price paid by the consumer) goes to farmers, while the remaining 33% is taken by traders (7% by wholesalers and 26% by retailers). Similarly, Zekarias et al. (2012) studied market chains of forest coffee in southwest Ethiopia and reported that producers obtained less net benefit than the other intermediaries. Net benefit analysis was used for the actors in the chain to evaluate the performance of the market: a larger average net profit was obtained by the intermediaries than by the producers, and producers benefited less from the chain than the other actors. Marketing Constraints of Coffee in Ethiopia Despite the positive image of the country as the birthplace of coffee, a strong local coffee culture, genetic diversity and easy branding opportunities, diverse agro-ecology and climatic conditions, unique and distinct coffee quality characteristics, and a favorable national agricultural ecosystem for coffee development, the country has so far failed to fully capitalize on its potential. Price volatility has been a direct factor in increasing poverty in rural communities; up to 85% of coffee farmers have cited coffee price volatility as a leading risk factor for their farms. Episodic pricing pressures on smallholders lead some to shift away from traditional forest coffee production systems towards more immediately profitable zero-shade systems that can yield higher returns in the short term. However, this is having a profound impact on the degradation of coffee landscapes over longer time scales.
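The margin shares quoted in this review reduce to simple ratios of prices along the chain. The short sketch below shows the arithmetic; the per-kilogram prices are hypothetical and merely chosen so that the resulting shares reproduce the 67%/7%/26% split reported above, not actual survey values.

```python
# A minimal sketch of how producer and trader shares of the retail value are
# computed from per-kilogram prices along the chain. Prices are hypothetical;
# only the formulas follow the margin analysis described in the review.
producer_price = 67.0     # Birr/kg received by the farmer (hypothetical)
wholesale_price = 74.0    # Birr/kg at which the wholesaler sells (hypothetical)
retail_price = 100.0      # Birr/kg paid by the final consumer (hypothetical)

producer_share = producer_price / retail_price
wholesaler_margin = (wholesale_price - producer_price) / retail_price
retailer_margin = (retail_price - wholesale_price) / retail_price
total_marketing_margin = (retail_price - producer_price) / retail_price

print(f"producer share of retail value:   {producer_share:.0%}")
print(f"wholesaler share of retail value: {wholesaler_margin:.0%}")
print(f"retailer share of retail value:   {retailer_margin:.0%}")
print(f"total gross marketing margin:     {total_marketing_margin:.0%}")
```

With these illustrative prices the script prints 67%, 7%, 26%, and 33%, mirroring the distribution of the retail value among farmers, wholesalers, and retailers described above.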
3,428.6
2020-04-01T00:00:00.000
[ "Economics" ]
How Permanent Are the Permanent Macrodipoles of Anthranilamide Bioinspired Molecular Electrets? Dipoles are ubiquitous, and their impacts on materials and interfaces affect many aspects of daily life. Despite their importance, dipoles remain underutilized, often because of insufficient knowledge about the structures producing them. As electrostatic analogues of magnets, electrets possess ordered electric dipoles. Here, we characterize the structural dynamics of bioinspired electret oligomers based on anthranilamide motifs. We report dynamics simulations, employing a force field that allows dynamic polarization, in a variety of solvents. The results show a linear increase in macrodipoles with oligomer length that strongly depends on solvent polarity and hydrogen-bonding (HB) propensity, as well as on the anthranilamide side chains. An increase in solvent polarity increases the dipole moments of the electret structures while decreasing the dipole effects on the moieties outside the solvation cavities. The former is due to enhancement of the Onsager reaction field and the latter to screening of the dipole-generated fields. Solvent dynamics hugely contributes to the fluctuations and magnitude of the electret dipoles. HB with the solvent weakens electret macrodipoles without breaking the intramolecular HB that maintains their extended conformation. This study provides design principles for developing a new class of organic materials with controllable electronic properties. An animated version of the TOC graphic showing a sequence of the MD trajectories of short and long molecular electrets in three solvents with different polarities is available in the HTML version of this paper. ■ INTRODUCTION −3 Protein helices represent some of the best examples of molecular electrets. 4,5−9 The structural fragility of polypeptides composed of α-amino acids, however, limits their utility outside their native environment.Moreover, being susceptible to redox degradation, protein backbones mediate electron and hole transfer solely via tunneling, limiting the practical application of long-range CT to about 2 nm. 10,11imilar to protein αand 3 10 -helices, 12 anthranilamide (AA) oligomers possess permanent macrodipoles originating from the ordered orientation of their amide and hydrogen bonds (HBs) (Figure 1). 13Unlike proteins, however, many AA structures exhibit reversible oxidation, and the aromatic moieties along their backbones provide sites for charge hopping important for long-range CT. 14 Furthermore, the electric dipoles of AA residues strongly impact CT kinetics. 15,16As X-ray analysis shows, AA oligomers assume extended conformations, 17 which was also supported by quantum mechanical (QM) calculations. 18Nevertheless, such QM analyses are applicable only to small oligomers, and X-ray crystallography provides only a "rigid" picture of the AA structures.Using force fields (FFs) to describe structures and integrations as they evolve, molecular dynamics (MD) addresses these challenges facing experimental and QMbased assessment.Proven invaluable for chemistry, biology, and materials science, MD simulations go far beyond the time and length QM scales, provide fundamental insights into structural dynamics and physical properties, and produce important guidelines for experimental designs. 19The standard FFs, however, employ fixed point charges, imposing severe limitations in describing the fluctuating electrostatic environments present during the dynamics of polar conjugates, such as electrets. 
In order to accurately describe dynamic electrostatic interactions involving charge and polarization fluctuations, we developed the polarizable charge equilibration (PQEq) method. 20PQEq implements Gaussian-shaped electrondensity clouds on each atom and describes the charge and polarization fluctuations at the femtosecond time scale.Moreover, we used QM methods to develop a new generation of long-range nonbonded interactions, i.e., universal nonbonded (UNB) interactions, to describe van der Waals (vdW) attraction and Pauli repulsion interactions. 21These methods increase the accuracy of depicting the response to electric fields, providing a powerful tool for estimating the dipole dynamics of systems such as molecular electrets.For the MD simulations, we combine these UNB interactions with the valence bond, angle, and torsion characteristics of the universal force field (UFF), which has parameters for all atoms of the periodic table (up to Lr, Z = 103). 22erein, MD simulations, combining this improved representation of electrostatic and nonbonded interactions, allow us to demonstrate the first dynamic description of the behavior of AA electrets immersed in explicitly introduced solvents with different properties.While the variations of the extended AA conformations are relatively small, our results show significant fluctuations of the AA permanent electric macrodipoles with a clear dependence on the oligomer length and solvent polarity.Moreover, solute−solvent HB interactions and the AA side chains emerge as important modifiers of the molecular dipoles, demonstrating the multifaceted nature of designing large polar systems, such as amide molecular electrets. ■ RESULTS AND DISCUSSION Initial Selection of Electrets and Solvents.Since AAs with ether substituents at position 5 manifest reversible electrochemical oxidation at relatively large positive potentials, making them feasible for transducing high-energy holes, 23 our initial focus is on conjugates composed of Box residues possessing isobutyl ether groups as R 2 side chains (Figures 1 and 2a,b).Conversely, an AA residue with an N-amide at position 5, denoted as Aaa, provides a means for covalent connectivity with favorable electronic coupling for hole injection by photoexcited electron acceptors. 24It motivates the selection of Aaa for capping the termini of the AA oligomers and polymers to form Aaa-Box l−2 -Aaa, where the numbers of residues l = 5 for the density functional theory (DFT) analysis and up to 40 for the MD simulations.Since HB interactions along the backbone chain are important for maintaining the structural integrity of AA electrets, we employ solvents with various polarities that do not form HBs, i.e., toluene (Tol), dichloromethane (DCM), and acetonitrile (MeCN). This study focuses on electret macrodipoles and their dynamics.Originating from displacement of positive and negative charges, dipoles depend on molecular geometry and electronic structure.Prior to diving into the MD analysis of the AA oligomers and their macrodipoles, therefore, it is paramount to discuss the electronic features of the bonding patterns along their backbones.Resorting to QM calculations, the next section demonstrates how the electronic structure of the bonds between the aromatic rings and the amides of the AA conjugates impacts their geometry. Are the AA Electrets Flat?The common notion is that AA oligomers assume flat extended conformations. 
HBs between the amides at each residue and the π-conjugation with the aromatic rings favor planarity of these structures (Figure 1). Nonetheless, DFT calculations reveal close vdW contacts between (1) the AA hydrogen at position 3 and the oxygen of the N-terminal amide, i.e., amide I, as well as (2) the AA hydrogen at position 6 and the hydrogen of the C-terminal amide, i.e., amide II (Figures 2a and S2). Despite the HB and the π-conjugation along the AA backbone, this steric hindrance twists the amides slightly off the plane of the aromatic ring (Figure 2b and Table S9). Based on DFT calculations for Box-containing oligomers with five residues (l = 5), we find that an increase in solvent polarity shifts the dihedral angles between the amides and the aromatic rings away from 180°. That is, nonpolar solvents enhance the planarity of the oligomers (Table S9). Amide I tends to be less twisted out of the aromatic ring plane than amide II; i.e., ϕ is closer to 180° than φ (Figure 2a and Table S9). In contrast, the vdW steric hindrance of amide I with the aromatic ring is larger than that of amide II (Figure 2a), considering the sizes of carbonyl oxygens and amide hydrogens and the lengths of C=O and N−H bonds. Differences in the π-conjugation of the two amides with the aromatic ring can elucidate the reason for this conundrum. Analyzing the topology of the total charge density (ρ(r)) obtained from atoms-in-molecules (AIM) theory 25 provides insight into the effects of the solvent environment on the flexibility of bonds between π-conjugated moieties. Defined in terms of the cylindrical asymmetry of ρ(r) around the axis connecting two atoms, the ellipticity (ϵ) of a bond reveals the extent of π-conjugation along it. 25 A single σ-bond has a symmetric ρ(r) distribution and ϵ = 0. Adding a single π-bond between the atoms increases ϵ. Our results show a larger ellipticity of the N I −C 2 than of the C II −C 1 bonds (Figure 2c,d), where the superscripts indicate the atomic positions (Figure 2a). That is, π-conjugation along the N-terminal N I −C 2 bonds is considerably more pronounced than that along the C-terminal C II −C 1 ones. The asymmetric ϵ along the N I −C 2 bond indicates polarization with enhancement of electron density at the nitrogen. Conversely, the C II −C 1 bond is not as polar as the N I −C 2 bond. Furthermore, an increase in solvent polarity decreases the ellipticity of N I −C 2 and C II −C 1 (Figure S3), which is consistent with reducing the partial double-bond character between the amides and the aromatic rings, favoring enhanced twisting between the residues. The difference between the π-bond character of N I −C 2 and C II −C 1 , as revealed by their ellipticities, reflects the larger twists of the C-terminal than the N-terminal amides off the planes of the aromatic rings, i.e., |φ| < |ϕ| (Figure 2a and Table S9). This increased deviation of φ from 180° concurs with the weakened bonds between the carbonyls and the aromatic rings, which is consistent with the π nb -orbital nodes through the amide carbons.
26Conversely, the enhanced electron density on the amide nitrogens strengthens π-conjugation with the aromatic rings, keeping ϕ closer to 180°than φ.Empirical characteristics, such as the Swain and Lupton resonance (R SL ) and field (F SL ) parameters, accounting for π-conjugation and inductive effects of aromatic substituents, respectively, 27 reflect well this difference between the bonding patterns of amide substituents.Both N-acylamides and C-acylamides exert electron-withdrawing inductive effects with F SL ≈ 0.3.Nevertheless, N-acylamides are mesomerically electron-donating with R SL ≈ −0.3, while R SL ≈ 0 for C-acylamides suggests negligible π-conjugation. 27he HBs between the amides and their π-conjugation with aromatic rings counter the steric hindrance with hydrogens 3 and 6, favoring planarity of the AA oligomers.Nonetheless, these structures are not truly planar.The dihedral angles between the amides and the aromatic rings deviate from 180°b y less than 30°(Table S9).These relatively small deviations, however, do not appear to compromise the extended conformation of short AA oligomers of less than about 5 or 10 residues (Figure S4a,b), as X-ray crystallography and NMR analysis reveal. 17,28Adding the multiple deviations of ϕ and φ from 180°upon expansion of the oligomer length beyond 10 residues, however, leads to the emergence of curvatures in the AA backbones (Figure S4c,d).The electret macrodipoles rely on codirectional alignment of the polar functional groups, such as the amides, along the oligomer backbones.Structural deviations from linearity of the AA conjugates thus impact the magnitude of their macrodipoles.Macrodipoles of the AA Electrets.Although the DFT results are informative, they describe single optimized structures in implicit solvents as continuum media characterized by dielectric constants.To elucidate the dynamics of the AA oligomers immersed in explicit solvents with defined molecular structures, we perform MD simulations for Aaa-Box l−2 -Aaa oligomers with l = 5, 10, 20, and 40 (Figures 3, 4a, and S4).In order to accurately describe the dynamic behavior of these molecular structures, including the dynamics of atomic charge fluctuations and polarization, we implement (1) modified UFF 22 for the bonded interactions combined with (2) PQEq (electrostatics), 20 UNB (vdW), 21 and HB 29 for nonbonded interactions.We validate this computational methodology with MD simulations of small aliphatic amides that we previously studied employing NMR, impedance spectroscopy, and DFT calculations. 30The MD simulations reproduce the results from the experimental and DFT analyses (see Supporting Information), giving us confidence in applying this methodology to exploring the structural dynamics of other amide conjugates, such as AA electrets. 
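For readers less familiar with the AIM bond-ellipticity measure used in the preceding discussion, the sketch below shows how it is evaluated from the curvatures of ρ(r) at a bond critical point, taking ϵ = λ1/λ2 − 1 for the two negative Hessian eigenvalues. The Hessian in the example is a made-up numerical illustration, not a value from this work.

```python
# A minimal sketch of the AIM bond ellipticity: eps = lambda1/lambda2 - 1, where
# lambda1 and lambda2 are the two negative curvatures (Hessian eigenvalues) of
# the charge density perpendicular to the bond path at the bond critical point.
# The Hessian below is an illustrative placeholder, not a value from the paper.
import numpy as np

def bond_ellipticity(hessian_rho: np.ndarray) -> float:
    """Ellipticity from the 3x3 Hessian of rho(r) at a (3,-1) bond critical point."""
    eigvals = np.sort(np.linalg.eigvalsh(hessian_rho))   # ascending order
    lam1, lam2 = eigvals[0], eigvals[1]                   # the two negative curvatures
    if lam2 >= 0:
        raise ValueError("need two negative curvatures at a bond critical point")
    return lam1 / lam2 - 1.0

# Example: unequal perpendicular curvatures give a nonzero ellipticity,
# while a cylindrically symmetric sigma bond (lam1 == lam2) gives zero.
hessian = np.diag([-0.55, -0.40, 0.30])   # atomic units, illustrative only
print(f"ellipticity = {bond_ellipticity(hessian):.2f}")   # ~0.38 for this example
```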
The 1-ns MD simulations show an overall extended conformation of the AA oligomers, even for the longest structure, with 40 residues (Figures 3 and S5). The average end-to-end distances increase linearly with increasing oligomer length (Figure 4a,b). For the nonpolar solvent Tol, the AA oligomers exhibit the longest end-to-end distances, which agrees with the DFT result that nonpolar solvents enhance the planarity of the residues. In comparison, the polar solvents MeCN and DCM shorten the average end-to-end distances of the AA oligomers, particularly for the long oligomers (Figures 4b,c and S6). The fluctuations of the end-to-end distance of the pentamer do not exceed 15% (Figures 3a−c and S5a). For the oligomers with l ≥ 10, temporary formation of bends along their backbones emerges, which is more pronounced for DCM and MeCN than for toluene (Figure 3d−i). The amplitudes of the end-to-end distance fluctuations of the 20-mer and 40-mer increase from about 10 to 30 Å when the oligomers are transferred from Tol to MeCN (Figure 4c). These temporary bends extend over several residues and do not lead to π−π-stacking interactions between the aromatic moieties (Figure 3). The planarity of the AA oligomers results from HB interactions between the amides at each residue along the backbone and their π-conjugation with the aromatic rings. The calculated HB energy is about 0.2 eV per residue for Tol and ∼10% smaller for DCM and MeCN, with little dependence on oligomer length (Figure S7). As mentioned above, the AA oligomers intrinsically exhibit slightly twisted dihedral angles due to steric hindrance between the amide backbone and the aromatic ring. The MD simulations show a higher rigidity for the ϕ dihedral angles compared with φ, which agrees with the DFT findings (Figure S8). Amides possess sizable intrinsic permanent electric dipoles.30 Hence, molecules with long backbones containing codirectionally ordered amides should exhibit large dipole moments. Indeed, the calculated dipoles from the MD simulations show a linear increase with the length of the oligomers for all solvents (Figures 4d,e and S9). That is, the average magnitude of the macrodipole is proportional to the number of residues in the oligomer. The dipole magnitude, however, depends significantly on the polarity of the solvents, i.e., |μ MeCN | > |μ DCM | > |μ Tol |, and the predicted dipoles per residue are about 6.0 D for MeCN, 5.0 D for DCM, and 2.2 D for Tol (Figure 4e,f). This trend is consistent with the Onsager reaction field that polar media induce in the solvation cavity.30,31 For the explicit solvent description, the medium polarization involves (1) alignment of the polar solvent molecules along the localized electric fields generated by the solute dipoles, i.e., orientational polarization leading to electrofreeze,32 and (2) shifts in the nuclear coordinates and the electron density of the solvent molecules, i.e., vibrational
and electronic polarizations, respectively. Entropic randomization of the solvent keeps the electrofreeze from prevailing as the distance from the solvation cavity increases. Our MD simulations reveal an intriguing dynamic phenomenon: the macrodipoles exhibit large, rapid fluctuations which intensify with increasing solvent polarity and oligomer length (Figures 3, 4g−i, and S10). These fluctuations are considerably more drastic than the structural dynamics illustrated by the variations in end-to-end distance of the AA oligomers (Figures S6 and S10). This finding suggests that transient arrangements of the solvent molecules surrounding the AA oligomer play a pivotal role in generating these huge, short-lived transient dipoles. It is worth noting that the estimated dipoles of the AA oligomer without the solvents remain small and exhibit minimal fluctuations regardless of the solvent polarity (gray lines in Figures 4g−i and S10). The dipole of the DFT-optimized pentamer also shows contributions from the implicit solvents, but not as large as those from the MD analysis (Table S10). The MD simulations reveal both substantial contributions from the explicitly described solvents to the dipoles of the solvated oligomers and the emergence of transient macrodipoles with large magnitudes which cannot be explained solely by the ordered arrangement of the functional groups of the AA conjugates. These findings are consistent with fluctuations of the medium-induced Onsager reaction field in the solvation cavities. Is HB with the Solvation Media Important? The results showing enhancement of the macrodipoles with increasing medium polarity and oligomer length are for solvents that lack specific intermolecular interactions with the AA conjugates (Figure 4). The power of MD simulation to introduce solvents explicitly provides the means for exploring the effects of specific intermolecular interactions, such as HB, on the properties of the solvated oligomers. To examine how HB between the AA electrets and the solvent affects the structural integrity of these oligomers, we resort to MD simulations on the pentamer, Aaa-Box 3 -Aaa (due to its conformational stability, Figures 3a−c and S5a), with various solvents capable of HB interactions, i.e., dimethylformamide (DMF), tetrahydrofuran (THF), methanol (MeOH), and 1-octanol (OcOH) (Figure 5a). DMF is a broadly used organic solvent with polarity similar to that of MeCN, and THF has polarity similar to that of DCM. These two solvents are HB acceptors but not HB donors. While MeOH and OcOH have polarities similar to those of MeCN and DCM, respectively, they can act as both HB donors and acceptors.
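The dipole traces and 20 ps moving averages discussed above amount to a simple post-processing step over the MD frames. The sketch below illustrates it for a toy neutral fragment; the charges, coordinates, and 1 ps sampling interval are assumptions for illustration, whereas in this study the per-atom charges are supplied by the PQEq model at every step.

```python
# A minimal sketch of the dipole analysis: the instantaneous molecular dipole
# from per-atom partial charges and positions, and a moving average over a
# 20 ps window, as used for the thick curves in Figure 4. All inputs here are
# synthetic placeholders for illustration only.
import numpy as np

DEBYE_PER_E_ANGSTROM = 4.803  # 1 e*Angstrom is about 4.803 D

def dipole_magnitude(charges: np.ndarray, coords: np.ndarray) -> float:
    """|mu| in Debye for charges (e) at coords (Angstrom); valid for a neutral system."""
    mu = (charges[:, None] * coords).sum(axis=0)
    return float(np.linalg.norm(mu) * DEBYE_PER_E_ANGSTROM)

def moving_average(series: np.ndarray, window: int) -> np.ndarray:
    kernel = np.ones(window) / window
    return np.convolve(series, kernel, mode="valid")

# Toy trajectory: 1000 frames sampled every 1 ps, three atoms with zero net charge
rng = np.random.default_rng(1)
base = np.array([[0.0, 0.0, 0.0], [1.2, 0.0, 0.0], [2.4, 0.0, 0.0]])
frames = base + rng.normal(0.0, 0.1, size=(1000, 3, 3))
charges = np.array([-0.4, 0.1, 0.3])

dipole_trace = np.array([dipole_magnitude(charges, xyz) for xyz in frames])
smoothed = moving_average(dipole_trace, window=20)    # 20 frames = 20 ps here
print(f"mean |mu| = {dipole_trace.mean():.2f} D; smoothed range = "
      f"{smoothed.min():.2f}-{smoothed.max():.2f} D")
```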
The 1-ns MD simulations for Aaa-Box 3 -Aaa in solvents that form HBs do not show breaks in its intramolecular HB network.Nevertheless, the AA dipole shows a dependence on the HB capability of the solvents, even though the fundamental trend remains the same; i.e., an increase in solvent polarity increases the oligomer macrodipole (Figure 5b).The oligomer in DMF and THF, which are only HB acceptors, exhibits significantly lower dipoles compared to those in MeCN and DCM, despite the similar solvent polarities.When placed in MeOH and OcOH that act as both HB donors and acceptors, the AA conjugate shows even smaller dipoles than when in DMF and THF.These results indicate that the HB capability of the solvent significantly affects the dipoles of the AA electrets.This finding is consistent with the trends of the estimated intramolecular HB energies of the AA oligomer, showing a decrease with enhanced HB capability of the solvent (Figures 5c and S15).Hence, HB interactions with the solvation media do not necessarily break the HB of the AA structures.Nevertheless, they weaken the intramolecular HB network by intermolecular HB interactions (Figure 5d−f). These MD simulations also reveal that these solute−solvent HB interactions can induce conformational changes in the amide bond.Typically, the trans conformation of the amide is about 0.2 eV more stable than the cis. 35We observe, however, 8.1% cis amides for MeOH that is polar and acts as HB donor and acceptor (Figure S13).Furthermore, the HB solvents induce large fluctuations in the dihedral angles of the electret molecules (Figure S14).These results demonstrate that solvation media with HB propensity compromise the structural integrity and the macrodipoles of molecular electret systems. Do the Side Chains of the AA Electrets Matter?Although the ordered amide orientation and the HB network along the backbone of the electrets are principally responsible for maintaining their extended conformation and macrodipoles, the side chain substituents, i.e., R 1 and R 2 at positions 4 and 5 (Figures 1 and 2a), also affect the AA properties.For example, not only the type of substituent but also its position, i.e., 4 vs 5, affect the electrochemical potentials of the AA residues and their susceptibility to oxidative degradation. 14o examine the effects of the side chains on the macrodipoles, we perform DFT calculations and MD simulations on AA pentamers, each composed of the same electron-rich residue with various R 1 and R 2 substituents (Figure 6a−e).Placing an electron-donating substituent as R 2 at position 5, such as in Box and Ceb, polarizes the aromatic rings in the direction of the macrodipole of the AA electrets, which point from their N-to their C-termini (Figure 1).That is, electron-donating R 2 groups enhance the electret macrodipoles.Attaching an electronegative substituent as R 1 at position 4, such as fluorine in Feb, 36 further enhances this polarization of the aromatic ring The MD simulations, indeed, show average macrodipoles of Feb 5 in solvents with various polarities that are substantially larger than those of Ceb 5 and Box 5 (Figure 6f). 
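The 8.1% cis population observed in MeOH is striking when set against the ~0.2 eV trans preference quoted above: a naive two-state Boltzmann estimate at 300 K, sketched below, gives a cis fraction orders of magnitude smaller, which underlines how strongly the HB-donating solvent must stabilize the cis amides. The two-state estimate is an illustrative assumption of this sketch, not an analysis from the paper.

```python
# Quick check of the numbers quoted above: with the amide trans form ~0.2 eV
# more stable than cis, a two-state Boltzmann estimate at 300 K gives a cis
# fraction far below the 8.1% seen in the MeOH simulations.
import math

K_B_EV = 8.617e-5          # Boltzmann constant in eV/K
delta_e = 0.20             # trans-cis energy gap in eV (value quoted in the text)
temperature = 300.0        # K

boltzmann_factor = math.exp(-delta_e / (K_B_EV * temperature))
cis_fraction = boltzmann_factor / (1.0 + boltzmann_factor)

print(f"two-state equilibrium cis fraction at 300 K: {cis_fraction:.2%}")   # ~0.04%
print("observed cis fraction in MeOH simulations:    8.1%")
```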
Conversely, electron-donating R 1 groups exert an opposite polarization effect, and the macrodipole of Neb 5 is the smallest for all solvents (Figure 6f).In Dmx, these effects from the two ethers, R 1 and R 2 , should cancel each other.Nevertheless, the average dipole of Dmx 5 in Tol is similar to that of Box 5 (Figure 6f).The solvent polarity, therefore, affects to different extents the polarization that side chains R 1 and R 2 induce on the aromatic residues. Counterintuitively, our results show that it is the position of the substituents, rather than their electron-donating capability, that affects the electret macrodipole.Despite the difference between the electron-donating strengths of amines and alkyloxyls 27 and the drastically different potentials for oxidizing the residues that contain them, 14 their effect on the electret macrodipoles are quite similar, as the overlapping trends for Box 5 and Ceb 5 reveal (Figure 6f). Bond ellipticity analysis from DFT calculations of the residues with different side chains elucidates the effects of the R 1 and R 2 substituents on the backbone structure and concurs with the MD findings about the oligomer macrodipoles.Electron-donating substituents at the R 2 position para to the N I −C 2 bond increase its ellipticity, as in Feb, Box, and Ceb (Figure 6l).Conversely, using an electron-donating R 1 group at the meta position, as in Neb, redirects the electron density and lowers the N I −C 1 ellipticity. The side chains have a stronger effect on the C II −C 1 bonds than the N I −C 2 bonds between the amides and the aromatic rings (Figure 6g−m).Electron-donating R 2 substituents at the position meta to the C-terminal amides reduce the π-character of the C II −C 1 bonds, as in Box and Ceb (Figure 6g,h,m).Conversely, an electron-donating R1, i.e., para to C 1 , not only enhances the π-character of the C II −C 1 bonds but also increases its asymmetry, with electron density drawn toward the aromatic ring, as in Neb (Figure 6k,m).With two identical electron-donating substituents, the C II −C 1 bond of Dmx is similar to that of Neb rather than Box (Figure 6j,m), indicating that the R 1 group para to C 1 has a stronger effect on the πcharacter of the C II −C 1 bond than the meta R 2 side chain.While strongly electron-withdrawing along the σ-skeleton, fluorine is slightly electron-donating along the π-bonds, 37 and placing it as R 1 next to an R 2 amine indicates some increase in ellipticity and asymmetry of the C II −C 1 bond of Feb (Figure 6i,m). The side chains affect the polarization and rigidity of the bonds between the backbone amides and the aromatic moieties.An increase in the double-bond C II −C 1 character, as well as pulling the π-electron density toward the aromatic ring along C II −C 1 and N I −C 2 bonds, correlates with decreasing the electret macrodipoles. Do the Macrodipoles Matter?The short answer is "yes, they do".Nevertheless, the inherently strong nature of electrostatic interactions warrants revisiting this question.Polar molecules, indeed, tend to have a propensity for aggregating with opposing orientations of their dipoles.The cancellation of the macrodipoles in such aggregates appears to question the need to pursue and optimize the designs of Journal of the American Chemical Society electret structures.Even AA molecular electrets without side chains R 1 and R 2 �needed for improving solubility and suppressing π-stacking�form aggregates exhibiting macrodipole cancelation. 
Conversely, numerous examples demonstrate the need for macromolecular structures with large electric dipole moments. Technologies employing liquid crystals, comprising assemblies of linear polar molecular structures that improve their order under external electric fields, are an inherent component of everyday life.38,39 Macrodipoles strongly affect CT thermodynamics and kinetics and can play crucial roles in enhancing the rates of desired processes while suppressing undesired ones.3,40 The intrinsic dipoles of protein α-helices are responsible for the functioning of the transmembrane ion channels that keep living cells alive.6,41 With macrodipoles reaching 5 D per residue, polypeptide helices are among the most polar linear molecular structures known.12 The amino acid sequence can control the state of aggregation of these biomolecules,42−44 and designed polypeptide helices without a propensity for aggregation at submillimolar concentrations have allowed demonstration of dipole effects on CT kinetics.7,45,46 Interfacing such macromolecular electrets with solid conductors and semiconductors is essential for device designs and technology developments.3 The amino acid side chains and the method of self-assembly govern the structures of monolayers of polypeptide α-helices formed at liquid−air interfaces or physisorbed on metal surfaces.47 Resorting to strong chemisorption involving, for example, the formation of sulfur−gold bonds, along with sequence designs favoring codirectional orientation, allows self-assembly of polypeptide α-helices on conductive surfaces with their dipoles pointing in the same direction, to or from the solid substrate. The codirectionally oriented dipoles of such self-assembled monolayers of polypeptide helices induce rectification of photocurrents and charge transport.8,48 In addition to the advances that demonstrate the importance of molecular electrets, however, it is important to consider the implementation of their macrodipoles. The orientation of polypeptide α-helices can appear to have no effect on CT between charged electron donors and acceptors attached to them.49 The counterions of the charged moieties and polar solvating media screen dipole-generated fields and suppress or completely eliminate their effects on CT.15,16 The polarization of the media around solvation cavities damps the localized fields originating from solvated dipoles. Concurrently, the same medium polarization enhances the magnitudes of such solvated dipoles. Dipole-generated fields force orientation, along with nuclear and electronic polarization, of the surrounding molecules of polar solvents. The dipoles and the induced dipoles of such polarized media generate a reaction electric field inside the cavity that is codirectional with the field of the solvated (macro)dipole. That is, an increase in medium polarity has two opposing effects on solvated dipoles: (1) it suppresses the propagation of the dipole-generated fields outside the solvation cavities, diminishing the dipole effects on the surrounding species, and (2) it enhances the magnitudes of the solvated dipoles by inducing Onsager reaction fields inside the solvation cavities.31 Increases not only in solvent polarity but also in solvent polarizability induce sizable enhancement of solvated dipoles, as impedance spectroscopy and QM calculations reveal.30
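To attach rough numbers to the reaction-field argument above, the classic Onsager expression for a point dipole centered in a spherical cavity can be evaluated directly. In the sketch below the dipole magnitude, cavity radius, and dielectric constants are assumed values chosen only to indicate scale; none are parameters of this study.

```python
# Illustrative evaluation of the Onsager reaction field for a point dipole mu
# at the center of a spherical cavity of radius a in a continuum of dielectric
# constant eps: R = [2(eps - 1)/(2 eps + 1)] * mu / (4 pi eps0 a^3).
# Dipole, radius, and eps values below are assumptions for scale only.
import math

EPS0 = 8.8541878128e-12        # vacuum permittivity, F/m
DEBYE = 3.33564e-30            # 1 D in C*m

def onsager_reaction_field(mu_debye: float, radius_nm: float, eps: float) -> float:
    """Reaction field magnitude in V/m at the dipole position."""
    mu = mu_debye * DEBYE
    a = radius_nm * 1e-9
    return (2.0 * (eps - 1.0) / (2.0 * eps + 1.0)) * mu / (4.0 * math.pi * EPS0 * a ** 3)

for name, eps in [("Tol", 2.38), ("DCM", 8.93), ("MeCN", 37.5)]:
    field = onsager_reaction_field(mu_debye=50.0, radius_nm=1.0, eps=eps)
    print(f"{name}: reaction field ~ {field:.2e} V/m")
```

The prefactor 2(ε − 1)/(2ε + 1) also makes explicit why the enhancement grows steeply at low polarity and saturates for highly polar media.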
In addition to these two opposing solvent effects on molecular dipoles, an increase in the medium polarity compromises the planarity of the AA electrets. These multifaceted solvent effects on macrodipoles warrant careful approaches to not only the design but also the implementation of molecular electrets. ■ CONCLUSIONS The MD-PQEq methodology allows interrogation of the structural dynamics and dipole properties of large polar systems, such as bioinspired molecular electrets with lengths exceeding 100 Å. Such MD simulations reveal unexpected external and internal effects on the dipole dynamics. Specific interactions with the solvents and polarization from the residue side chains strongly affect the oligomer macrodipoles. An increase in solvent polarity enhances not only the electret dipoles but also the amplitude of their fluctuations. Decreased rigidity of the oligomer backbones accentuates the latter, as DFT calculations demonstrate. When averaged over tens of picoseconds, the macrodipoles appear to be quite permanent. Owing mostly to the solvent dynamics and the reaction-field fluctuations, however, the dipoles manifest huge picosecond transient jumps, making them not so permanent on such fast time scales. Therefore, such macromolecular dipoles should impact processes with different rates differently, providing key guidelines for implementing these bioinspired structures for crafting localized electric fields in charge-transfer and energy-conversion systems. Beyond the AA structures, our findings provide design principles for developing a class of organic materials with novel electronic properties. Furthermore, this study demonstrates the power of the PQEq-MD methodology, in synergy with QM calculations, for multifaceted characterization of the dynamic complexity of large dipolar systems in condensed media. ■ EXPERIMENTAL SECTION MD Simulations. All MD simulations were performed using the RexPoN-integrated version of the LAMMPS 21,50 molecular dynamics package. The time step was set to 1 fs, and a Nose−Hoover thermostat (100 fs damping constant) was employed for NVT (constant particles, volume, and temperature) simulations. After minimization, the systems were first heated from 10 to 300 K over 100 ps. Next, NVT simulations were performed for 1 ns at 300 K. We used the modified universal force field (UFF) 22 for the bonded interactions and PQEq (electrostatics),20 UNB (vdW),21 and UHB (HB) for the nonbonded interactions. More detailed information is provided in the Supporting Information. The electret oligomers were placed in 40 × 30 × 30 Å 3 , 70 × 40 × 40 Å 3 , 100 × 53 × 53 Å 3 , and 165 × 53 × 53 Å 3 boxes for residue lengths l = 5, 10, 20, and 40, respectively. The solvent molecules were placed within each box to match the experimental densities: 1.03 (DO), 1.48 (Chl), 1.33 (DCM), 0.79 (MeCN), 0.86 (Tol), 0.94 (DMF), 0.88 (THF), 0.79 (MeOH), and 0.83 (OcOH) g cm −3 .51 For each system, we performed three independent MD simulations with different initial structures (n = 3 runs) and calculated averages with standard errors from the three replicates. All initial structures were generated by Packmol,52 while VMD 53 was used for visualization and analysis of the MD trajectories. Movies were generated by OVITO.54 Density Functional Theory (DFT) Calculations. We used the B3LYP functional within the DFT framework along with the Grimme dispersion DFT-D3 correction.55 We employed the 6-31G(d) basis set.
56,57Our convergence criteria were 10 −4 au for the average residual forces for geometry optimization and 10 −8 au for the self-consistent field energy.Solvation effects were included using the integral equation formalism variant of the polarizable continuum model (IEFPCM). 58All molecular structures of the monomers and pentamers were optimized using the Gaussian 09 program package. 59or Box oligomer calculations, we truncated the alkyl chains to methyl groups, as the conformations of the flexible alkyl chains were often improperly optimized, trapping the entire structure in a local minimum. ■ ASSOCIATED CONTENT * sı Supporting Information The Supporting Information is available free of charge at https://pubs.acs.org/doi/10.1021/jacs.3c10525.Supplementary computational results and simulation details (PDF) Figure 1 . Figure 1.Schematic illustration of an AA bioinspired molecular electret with the macrodipole originating from ordered amide bonds and polarization induced by HB. Figure 2 . Figure 2. Structural and electronic DFT analyses of a Box AA pentamer.(a) DFT-optimized structure of a unit residue of the AA oligomer, where gray, white, red, and blue represent carbon, hydrogen, oxygen, and nitrogen, respectively.(b) DFT-optimized structure of the Aaa-capped Box oligomer.(c) Average N-terminus and C-terminus bond ellipticity for the Aaa-capped Box oligomer in the gas phase.(d) Localized molecular orbitals for a single resonance structure, showing the π-character of the bonds between the amides and the aromatic rings of the Aaa-capped Box oligomer (isovalue: 0.08). Figure 4 . Figure 4. Structural and dipole analyses of Box AA oligomers with different lengths from 1-ns MD simulations.(a) Chemical structure of the Aaacapped Box oligomer, where we consider residue lengths of l = 5, 10, 20, and 40.(b) Average end-to-end distances for the AA oligomers (as indicated by a red arrow in panel a) in three solvents with differing polarity: MeCN, DCM, and Tol.(c) Average end-to-end distances for the AA oligomers (l = 40) in three solvents over time, where shaded areas represent standard error from three replicas.(d) Average dipoles for the AA oligomers in MeCN, where the average was calculated from moving averages of three replicas with a window size of 20 ps and the shaded areas represent standard error.(e) Calculated dipoles in the three solvents with different polarity.(f) Dipoles per residue, estimated from the total dipoles divided by l + 1, which is the number of backbone amides.Dipole fluctuations of the AA oligomer (l = 40) in (g) MeCN, (h) DCM, (i) and Tol solvents over time, where the thin pink, sky-blue, and light-green lines show the dipole of the AA oligomer at each picosecond, and the thick red, blue, and green lines indicate moving averages with a window size of 20 ps.The gray lines show the dipoles of the AA oligomers in the gas phase calculated by removing solvents from the trajectories. Figure 5 . Figure 5. 
Effects of solvents with different HB capabilities revealed by MD calculations.(a) Structures of the seven solvents used in this study.(b) Calculated average dipole moments of Aaa-Box3-Aaa as a function of Onsager solvent polarity (f 0 ): 33f 0 (x) = 2(x − 1)(2x + 1) −1 and f 0 = f 0 (ε) − f 0 (n 2 ), where ε is the relative static dielectric constant and n 2 , i.e., the square of the refractive index, represents the dynamic dielectric constant at optical frequencies.34(c) Estimated intramolecular HB energy, E HB , per residue.Representative HB interactions between the electret molecule and solvent molecules for (d) MeCN, (e) DCM, and (f) MeOH.Black (intramolecular HB), blue (HB acceptor), and red (HB donor) dotted lines show the different types of HB interactions, and gray, white, red, and blue represent carbon, hydrogen, oxygen, and nitrogen atoms, respectively. Figure 6 . Figure 6.Effects of side chains of the AA electret residues.(a−e)Chemical structures of the AA pentamers for l = 5, with different side chains.(f) Calculated average dipole moments as a function of Onsager solvent polarity for three solvents from 1-ns MD simulations.(g−k) Localized πorbitals of the monomeric residues of each of the pentamers, obtained from DFT calculations for the gas phase (isovalue: 0.08).Bond ellipticities of the (l) N-terminus and (m) C-terminus amides to the phenyl ring of the five oligomers from DFT calculations for the gas phase.
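The Onsager polarity scale used on the x-axes of Figures 5b and 6f follows directly from the definition in the caption above. The sketch below evaluates f0 = f0(ε) − f0(n²) for the three non-HB solvents using approximate room-temperature literature values of ε and n; these inputs are assumptions of this sketch rather than values taken from the paper, so the resulting numbers are indicative only.

```python
# A minimal sketch of the Onsager solvent polarity measure from the caption:
# f0(x) = 2(x - 1)/(2x + 1), evaluated at the static dielectric constant and at
# n^2, then differenced. Epsilon and n below are approximate literature values.
def f0(x: float) -> float:
    return 2.0 * (x - 1.0) / (2.0 * x + 1.0)

solvents = {
    # name: (relative static dielectric constant, refractive index), approximate
    "MeCN": (37.5, 1.344),
    "DCM": (8.93, 1.424),
    "Tol": (2.38, 1.497),
}

for name, (eps, n) in solvents.items():
    polarity = f0(eps) - f0(n ** 2)
    print(f"{name}: f0(eps) - f0(n^2) = {polarity:.3f}")
```

As expected from the trends discussed above, toluene comes out close to zero on this scale, while MeCN sits at the polar end.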
7,732.2
2024-01-16T00:00:00.000
[ "Materials Science", "Chemistry" ]
Modelling Thomson scattering for systems with non-equilibrium electron distributions We investigate the effect of non-equilibrium electron distributions in the analysis of Thomson scattering for a range of conditions of interest to inertial confinement fusion experiments. Firstly, a generalised one-component model based on quantum statistical theory is given in the random phase approximation (RPA). The Chihara expression for electron-ion plasmas is then adapted to include the new non-equilibrium electron physics. The theoretical scattering spectra for both diffuse and dense plasmas in which non-equilibrium electron distributions are expected to arise are considered. We find that such distributions strongly influence the spectra and are hence an important consideration for accurately determining the plasma conditions. INTRODUCTION Thomson scattering (TS) is being developed as a diagnostic for inertial confinement fusion (ICF) experiments, which can cover a wide range of conditions. Weakly coupled plasmas are created from the ablated hohlraum wall and the fill gas, whereas high-density states with temperatures ranging from 1 eV to 10 keV occur during the compression and burn stages. Current theoretical approaches for modelling the scattered spectrum are based on the equilibrium fluctuation-dissipation theorem. However, it is well known that the electrons are not always in equilibrium, e.g. for measurements made on time scales that are short compared to the relaxation time or in strongly driven systems, whereby a drive or probe pulse continuously perturbs the plasma [1,2]. Under these circumstances a more general description is required to model the TS signal. The scattered power spectrum measured in TS experiments is given in terms of the dynamic structure factor (DSF) S^tot_ee(q, ω), which is defined as the Fourier transform of the density-fluctuation autocorrelation function and describes the microscopic correlations between all the electrons in space and time [3]. A completely general form of the DSF can be written in terms of the correlation function for the density response of the fully coupled system, which contains all non-ideal effects such as strong coupling and partial ionization. Unfortunately no such general solution currently exists. However, the electrons are often weakly coupled for ICF conditions due to either high temperature or degeneracy, and thus the random phase approximation (RPA) is often sufficient. The basic conditions of the sample under study, in particular the mean electron density and temperature, can then be inferred by fitting the electronic response to experimental data. ELECTRONIC STRUCTURE IN NON-EQUILIBRIUM PLASMAS The fully coupled dielectric response of an electron gas can be derived for an arbitrary distribution function within a quantum statistical framework using non-equilibrium Green's functions [4]. Specifically, the DSF is defined in terms of the correlation function for the density response, which in a local approximation is related to the polarisation function via χ^R_ee(q, ω; t) = Π^R_ee(q, ω; t)/[1 − V_ee(q) Π^R_ee(q, ω; t)] for a one-component electron plasma. In the RPA the polarisation bubble is constructed from bare propagators, i.e.
undressed Green's functions, in which no self-energy or exchange effects are considered. The time-ordered polarisation function is then simply the product of two single-particle Green's functions. Using the Kadanoff-Baym ansatz in the quasi-particle picture, the resulting expression can be written as Π^>_ee(q, ω; t) = −2i ∫ d³p/(2π)³ f_e(p; t)[1 − f_e(p + q; t)] 2π δ(ℏω + E_e(p) − E_e(p + q)), where f_e(p; t) and E_e(p) = p²/2m_e are the time-dependent distribution function and kinetic energy of the electrons, respectively. Furthermore, Eq. (1) may be equivalently expressed in terms of the structure factor of the non-interacting system, 2π n_e S⁰_ee(q, ω) = i Π^>_ee(q, ω), with the full structure factor given by S_ee(q, ω; t) = S⁰_ee(q, ω; t)/|ε(q, ω; t)|². The scattering process may be qualitatively understood by considering the Feynman diagram for s-wave Compton scattering, averaging over the distribution of initial electron momenta with the blocking factor f_e(p + q; t) for the outgoing states, and including the effects of dynamic screening via the dielectric function ε(q, ω; t) = 1 − V_ee(q) Π^R_ee(q, ω; t) [5]. Finally, taking the equilibrium Fermi-Dirac distribution function for the electrons and inserting into Eqs. (1)-(3) then gives the familiar form of the fluctuation-dissipation theorem, as expected. The theoretical scattering spectrum for a two-component plasma of electrons and ions can be constructed via the approach of Chihara [6], in which the dynamic response is separated into a high-frequency plasmon term, a low-frequency acoustic term and an inelastic bound-free term. For weakly coupled electrons the plasmon term can be taken in RPA using Eq. (5). The non-equilibrium physics must also be included self-consistently in the low-frequency response by means of a modified screening form factor, q_sc(q; t). In the linear response approximation for small momenta this is simply [7] q_sc(q; t) = Z_f V_ee(q) Π^R_ee(q, 0; t)/ε(q, 0; t) for Z_f free electrons per atom. The bound-free term may be neglected for low-Z plasmas or for long wavelength probing radiation with insufficient energy to produce photoionization. In this work we focus only on the plasmon term in order not to confuse the effects of strong coupling and non-equilibrium physics in the low-frequency component of the spectrum. RESULTS In previous work [5,8] we have shown that the spectrum of TS from strongly driven warm dense hydrogen probed with soft x-rays from the FLASH free electron laser (FEL) is significantly modified by non-equilibrium electron physics. In particular, the distribution function was taken from simulations of the FEL-target interaction using an atomic kinetics code, and a time-integrated treatment was shown to reproduce experimental data [2]. Here we consider two other common models for non-equilibrium electron distributions of relevance to TS under ICF conditions. Firstly, we consider collective TS from diffuse, high temperature plasmas created in long-pulse laser-driven targets probed with optical radiation [1]. In this case, strong inverse Bremsstrahlung heating from the probe beam leads to a steady-state distribution function with a roughly super-Gaussian bulk and a Maxwellian tail [9,10]. Figure 1(a) shows the red-shifted plasmon data from Ref. [1] fitted with calculations using this modified super-Gaussian form compared to equilibrium calculations for different conditions. As expected, our model gives qualitatively similar results to those of Ref. [10] since the wave numbers probed are small and the degeneracy is low, so that quantum mechanical effects such as Compton recoil and Pauli blocking are not important.
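The super-Gaussian bulk referred to above can be written as f(v) ∝ exp[−(v/v_m)^m], with m = 2 recovering a Maxwellian and larger m flattening the bulk. The sketch below constructs such a speed distribution with a simple numerical normalization; it omits the Maxwellian tail of the modified form actually fitted in the paper, and all parameters are illustrative assumptions.

```python
# A minimal sketch of a super-Gaussian electron speed distribution,
# f(v) ~ v^2 exp[-(v/v_m)^m], normalized on a uniform speed grid.
# m = 2 is Maxwellian; larger m flattens the bulk, as produced by strong
# inverse Bremsstrahlung heating. Parameters are illustrative only.
import numpy as np

def super_gaussian_speed_pdf(v, v_m, m):
    """Isotropic speed distribution ~ v^2 exp[-(v/v_m)^m], grid-normalized."""
    weights = v ** 2 * np.exp(-(v / v_m) ** m)
    return weights / (weights.sum() * (v[1] - v[0]))

v = np.linspace(0.0, 4.0, 2000)          # speed in units of a thermal velocity
maxwellian = super_gaussian_speed_pdf(v, v_m=1.0, m=2)
flattened = super_gaussian_speed_pdf(v, v_m=1.0, m=5)

# Larger m cuts off the high-velocity wing more sharply than a Maxwellian
print(f"peak of m=2 pdf at v = {v[np.argmax(maxwellian)]:.2f}")
print(f"peak of m=5 pdf at v = {v[np.argmax(flattened)]:.2f}")
```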
It is clear that whilst an equilibrium calculation can reproduce the damping and shift of the plasmon, although the temperature and density differ by ∼20% from the best-fit conditions, a robust fit cannot be obtained over the whole range of scattered wavelengths. On the other hand, the signal can be well fitted over the entire range when the non-equilibrium physics is considered. Furthermore, decreasing the index of the super-Gaussian both shifts and broadens the peak, demonstrating the sensitivity of the plasmon dispersion and damping to the distribution function. The errors introduced by an equilibrium analysis are therefore potentially substantial for subsequent calculations of interest to ICF, e.g. parametric and turbulent instability growth in hohlraums [11]. We have also previously considered the effect of a population of hot electrons on the spectrum of collective TS from isochorically heated Be probed with x-rays [12]. Due to the high density and low temperature of such states the model of Ref. [10] is insufficient as quantum effects are important, and a full quantum statistical treatment is needed. In this case, the plasma was considered to be in equilibrium due to the ∼0.5 ns delay between the pump and the probe. However, it is reasonable to expect that a quasithermal hot tail may persist due to the substantially longer equilibration rate of the hot electrons. Taking a simple two-temperature model for the distribution function, f_e(E; t) = (1 − h) f_c(E) + h f_h(E), where the fraction of hot electrons is h = n_h/n_e and the cold and hot components f_c and f_h are represented by Fermi-Dirac distributions at different temperatures, the detailed balance in particular was shown to be significantly affected [5]. Here we investigate the effect of a similar hot tail on the noncollective spectrum for the same conditions by increasing the scattering angle to θ = 130°. Figure 1(b) shows that the noncollective spectrum is not as sensitive to the hot component, although increasing h noticeably alters both the amplitude and the wings of the Compton peak. Increasing the temperature of the hot tail reduces the prominence of the wings, but further decreases the amplitude of the main peak. It is also interesting to note that for a modest tail (with a large fraction and relatively low temperature) the spectrum appears to develop a small peak (see inset) on the blue-shifted side of the probe, shown by the vertical dashed line, despite the scattering parameter α = κ_e/q being slightly reduced compared to the equilibrium conditions. The main effect of the hot tail on the backscattered spectrum is therefore on the amplitude of the Compton peak. This is potentially important as the charge state required to balance the relative strengths of the inelastic and elastic scattering signals may be in error relative to equilibrium. Specifically, a larger Z_f would need to be invoked, which could be attributed to enhanced impact ionization due to the hot electrons. (The calculations in Figure 1(b) are for n_e = 3 × 10²³ cm⁻³ and include a hot tail with T_h = 100 eV (solid curves) or T_h = 10 keV (dashed curves) and various weights h of 5% (red curves), 10% (blue curves), and 20% (green curves), compared to an equilibrium calculation (black curve) [12].)
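The two-temperature distribution introduced above is straightforward to construct once the cold and hot components are specified. The sketch below does so with fixed, illustrative chemical potentials and temperatures; a consistent treatment would instead adjust each chemical potential so that the components integrate to the intended densities, and none of the numbers here are the conditions of the experiments discussed.

```python
# A minimal sketch of the two-temperature electron distribution
# f(E) = (1 - h) f_FD(E; T_c) + h f_FD(E; T_h), with h = n_h/n_e.
# Chemical potentials are treated as fixed illustrative inputs.
import numpy as np

def fermi_dirac(energy_ev, mu_ev, temperature_ev):
    return 1.0 / (np.exp((energy_ev - mu_ev) / temperature_ev) + 1.0)

def two_temperature(energy_ev, h, mu_c, t_c, mu_h, t_h):
    cold = fermi_dirac(energy_ev, mu_c, t_c)
    hot = fermi_dirac(energy_ev, mu_h, t_h)
    return (1.0 - h) * cold + h * hot

energies = np.linspace(0.0, 500.0, 1000)   # eV
occupation = two_temperature(energies, h=0.05, mu_c=15.0, t_c=10.0, mu_h=-200.0, t_h=100.0)

# Even a small hot fraction dominates the high-energy wing of the distribution
idx_low = np.argmin(np.abs(energies - 5.0))
idx_high = np.argmin(np.abs(energies - 300.0))
print(f"f(E = 5 eV)   = {occupation[idx_low]:.3f}")
print(f"f(E = 300 eV) = {occupation[idx_high]:.2e}")
```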
CONCLUSIONS We have developed a model for TS based on quantum statistics which may be applied to a wide range of experiments of interest to ICF. We have produced theoretical fits to data from optically probed laser-produced Au plasmas and have shown qualitative agreement with previous work; the fit to the spectrum is essentially unmodified since quantum mechanical effects such as Compton recoil or Pauli blocking are not important under these conditions. We have also applied our model to investigating the effect of a hot tail component on the spectrum of x-rays scattered by warm dense Be in the noncollective regime. We found that the spectrum is not as noticeably modified as the forward-scattered spectrum, although a reduction in the height of the Compton feature was observed. Consequently, the charge state inferred from the relative amplitudes of the elastic and inelastic peaks may be in error if the plasma is not in equilibrium. We conclude that a rigorous treatment of non-equilibrium physics in modelling TS is an important consideration for plasmas with ICF-relevant conditions.
2,437.4
2013-11-15T00:00:00.000
[ "Physics" ]
Social Risk Dissociates Social Network Structure across Lateralized Behaviors in Spider Monkeys Reports of lateralized behavior are widespread, although the majority of findings have focused on the visual or motor domains. Less is known about laterality with regards to the social domain. We previously observed a left-side bias in two social affiliative behaviors—embrace and face-embrace—in captive Colombian spider monkeys (Ateles fusciceps rufiventris). Here we applied social network analysis to laterality for the first time. Our findings suggest that laterality influences social structure in spider monkeys with structural differences between networks based on direction of behavioral bias and social interaction type. We attribute these network differences to a graded spectrum of social risk comprised of three dimensions. Introduction Reports of lateralized behavior are widespread, particularly in the visual and motor domains [1,2].Decades of research has led to the general consensus that behavioral lateralization is subserved by asymmetric brain function.These brain-behavior asymmetries may serve to streamline neurobiological processes, thereby increasing behavioral efficiency in unpredictable or arousing situations, such as social interactions [3,4].Thus, laterality may be particularly advantageous in gregarious species such as primates. In a recent synthesis of prior research, Rogers and Vallortigara [1] linked left biases in social behavior to the right hemisphere as a general pattern of lateralization in vertebrates.However, we later showed that not all social behaviors are associated with this pattern of laterality [5].Specifically, we found that two variations of embracing, but not grooming, were lateralized in Colombian spider monkeys.We argued that the differences in lateralization in social affiliative behaviors were due to the social dynamic in which these behaviors occurred, with grooming considered a low-stakes routine state while embraces were high-stakes risky events.In this study, we focused on assessing the behavioral patterns among individuals within a group, and did not take into account the relational patterns of the group as a whole (e.g., interaction history).While consistent with other laterality investigators, this reductionist approach does not capture the true dynamics of a social system, begging the question: does laterality influence social structure? Spider monkeys are one of a handful of primates living in fission-fusion [6], a social dynamic defined by separations and reunions.Embraces are a contact greeting gesture that occur at the time of reunions in spider monkeys [7].In the standard embrace, the hands are wrapped around the body and the face is placed along the trunk [7,8].A variation is the face-embrace, in which faces touch [5].Fission-fusion is characterized by marked unpredictability and low social cohesion compared with species that have a known stable hierarchy, cohesive social groups, and low variability in interactive exchanges [9,10].With these differences in mind, social interactions within species living in fission-fusion may consist of a level of risk unlike that experienced in other social dynamics, and laterality may play a role in negotiating this risk [2].In general, social behavior in fission-fusion species is remarkably multi-dimensional, and can be difficult to tease apart. 
One method for teasing apart complex social systems is social network analysis [11], a concept with roots in the mathematical field of graph theory.Social network analysis is a tool used to compute and visualize structural relationships in relational data.There is a long history of applying network analysis in the study of sociality in primates (for a review, see [12]) and other species [13].Yet social network analysis has never been applied in the area of behavioral laterality.Network analysis alone has the unique ability to characterize and mathematically represent global inter-connected elements [14].Within behavioral laterality, network level information may provide a more sophisticated method to examine topological patterns that represent potential advantages of laterality for behavior, and to accurately depict the multi-dimensional nature of social interaction. As our primary objective, we leveraged social network analysis in the dataset reported by Boeving, Belnap and Nelson [5] to examine whether similarly lateralized behaviors (i.e., embrace and face-embrace) also have similar network structures, and we predicted that these networks would not differ.In our secondary objective, we examined social networks based on direction of laterality (i.e., left or right) regardless of behavior type by pooling embrace and face-embrace into an affiliative category.We hypothesized that laterality would influence network structure, and we predicted that global left and right affiliative networks would diverge.Finally, we examined the influence of both direction of laterality and behavior type on social network structure by creating four sub-networks of left embrace, left face-embrace, right embrace, and right face-embrace.We hypothesized that laterality, but not behavior type, would alter network structure.We predicted that the left sub-networks would differ from the right sub-networks, but that sub-networks within a behavior (i.e., embrace or face-embrace) would not differ. Social Network Construction from Live Coded Behavior We constructed social networks from live coded behavioral observations of 15 captive Colombian spider monkeys (Ateles fusciceps rufiventris).Portions of these data were previously reported in Boeving, Belnap and Nelson [5].To briefly summarize, 186 h of data were captured between May and August 2015 using the Animal Behaviour Pro mobile iOS application on apple iPod 5th generation [15].The application was programmed with information about the individual monkeys to capture initiators and receivers of embrace and face-embrace with the modifier set as side (i.e., left or right positioning).Left or right was recorded with reference to the positioning of the faces regardless of whether there was contact or not.Directionality was not determined by any positioning of the limbs.Data were collected using the continuous sampling method, and ad libitum recording method [16,17] so that all occurrences of the target behaviors could be captured across three equally distributed time periods throughout the day to avoid disruptions due to husbandry procedures.The DuMond Conservancy Institutional Animal Care and Use Committee approved the research, and the study was conducted in accordance with the laws of the United States.The research adhered to the American Society of Primatologists (ASP) Principles for the Ethical Treatment of Non-Human Primates. 
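As a concrete illustration of how such live-coded observations translate into networks, the short sketch below (hypothetical monkey names and counts; pandas assumed available) collapses (initiator, receiver, behavior, side) events into weighted, directed edge lists of the kind used in the analyses that follow.

```python
import pandas as pd

# Illustrative sketch with hypothetical data: aggregate live-coded events of the form
# (initiator, receiver, behavior, side) into weighted, directed edge lists, one per
# network (e.g., global embrace, global left affiliative, ...).
events = pd.DataFrame(
    [
        ("Mono1", "Mono2", "embrace", "left"),
        ("Mono1", "Mono2", "embrace", "left"),
        ("Mono2", "Mono3", "face-embrace", "right"),
        ("Mono3", "Mono1", "embrace", "right"),
    ],
    columns=["initiator", "receiver", "behavior", "side"],
)

def weighted_edges(df):
    """Collapse repeated initiator -> receiver events into a single weighted edge."""
    return df.groupby(["initiator", "receiver"]).size().reset_index(name="weight")

# Global networks pooled by behavior (regardless of side) and by side (regardless of behavior).
global_embrace = weighted_edges(events[events.behavior == "embrace"])
global_left = weighted_edges(events[events.side == "left"])
print(global_embrace)
```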
Social Network Analysis We utilized social network analysis as the computational method to investigate potential structural differences within all networks. Networks were computed and visualized in Cytoscape (http://www.cytoscape.com) (Version 3.4.0; [18]), an open source software project for modeling interaction networks. The network metric of degree centrality, which provides a composite score from the in-degree value (i.e., interactions directed towards a monkey) and out-degree value (i.e., interactions directed by a monkey to others), was examined because this metric quantifies the number of edges (i.e., social interactions) shared between nodes (i.e., monkeys). The degree centrality of a node v in a graph G = (V, E) with |V| nodes and |E| edges is defined as C_D(v) = deg_in(v) + deg_out(v), i.e., the total number of edges directed towards and away from v. Using the metric degree centrality, the total number of interactions for each individual was computed where monkeys with the most connected interactions (initiated or received) were positioned in the center of the graph and monkeys with fewer connected interactions were positioned along the perimeter. Within Cytoscape, we used a variant of the "Kamada-Kawai Algorithm," a spring-embedded algorithm that forces connected nodes together while also forcing disconnected nodes away from the center [19]. We constructed weighted networks because this method is best suited for graphically representing the variation in social bonds [20,21]. All edges were weighted based on frequency of interaction, with thicker edges denoting more interactions and thinner edges denoting fewer interactions. Node size denotes variation in rank of degree centrality, where larger nodes indicate higher values of degree centrality and smaller nodes indicate lower values of degree centrality. Statistical Analysis To examine whether similarly lateralized behaviors (i.e., embrace and face-embrace) have similar network structures, we first pooled frequency data from each behavior separately regardless of side to create global embrace and global face-embrace networks. To investigate the potential effect of laterality on social network structure, we then pooled affiliative frequency data according to side of positioning to create global left affiliative and global right affiliative networks. Finally, we examined the effect of laterality within each type of embrace by constructing four direction × behavior networks: left embrace, right embrace, left face-embrace, and right face-embrace. t-Tests and ANOVA with post hoc comparisons were used to compare the resulting networks.
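The sketch below illustrates the composite degree-centrality computation and an unpaired comparison of two networks' centrality distributions. It uses networkx and scipy as stand-ins for Cytoscape and the statistical software; edge weights and centrality values are hypothetical, and networkx's kamada_kawai_layout is assumed here to play the role of the spring-embedded layout described above.

```python
import networkx as nx
from scipy import stats

# Build a weighted directed network from an edge list and compute the composite
# degree centrality (in-degree + out-degree).  Edge weights are hypothetical counts.
edges = [("Mono1", "Mono2", 7), ("Mono2", "Mono1", 3), ("Mono2", "Mono3", 5), ("Mono3", "Mono1", 2)]

G = nx.DiGraph()
G.add_weighted_edges_from(edges)

# Composite degree centrality: number of edges directed to and from each monkey.
degree_centrality = {v: G.in_degree(v) + G.out_degree(v) for v in G}

# Spring-embedded layout for visualization (a Kamada-Kawai variant).
positions = nx.kamada_kawai_layout(G.to_undirected())

# Comparing two networks' degree-centrality distributions with an unpaired t-test,
# as done for the global embrace vs. face-embrace comparison (values hypothetical).
embrace_dc = [14, 11, 9, 12, 10]
face_embrace_dc = [6, 4, 7, 3, 5]
t, p = stats.ttest_ind(embrace_dc, face_embrace_dc)
print(degree_centrality, round(t, 2), round(p, 4))
```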
Results A total of 1623 social interactions were examined. Of these, 1270 were embraces and 353 were face-embraces, corresponding to 1227 left affiliative and 396 right affiliative interactions. Individual raw frequency scores for each behavior are reported in Table A1. Four juveniles were excluded from further analysis due to multiple zero values for out-degree, which we suggest is age-related and would not accurately portray degree centrality in the spider monkey group. Network degree centrality values for the global comparisons can be found in Table 1. Unpaired t-tests found a significant difference in degree centrality between the global embrace and face-embrace networks (t(28) = 3.43, p < 0.01, d = 1.296; Figure A1), and a significant difference in degree centrality between the global left and right affiliative networks (t(20) = 3.92, p < 0.001, d = 1.753; Figure A2). There was no sex difference in the global left affiliative, global right affiliative, or global embrace networks (all p > 0.05). However, there was a sex difference in the face-embrace network such that females initiated the face-embrace behavior more than males, and males received more of these interactions compared to females (F(1,13) = 4.82, p < 0.05, η² = 0.270). To further examine structural differences between embrace and face-embrace within the context of laterality, we examined the four sub-networks (left embrace, right embrace, left face-embrace, right face-embrace). ANOVA revealed a significant difference in degree centrality among the sub-networks (F(3,40) = 20.72, p < 0.001, η² = 0.608; Figure 1). Post hoc analyses found that each sub-network was different from the others (all p < 0.05). Discussion The primary objective of this study was to examine if behaviors with similar patterns of behavioral laterality would also have similar social network structures. We examined the social affiliative behaviors, embrace and face-embrace, which we previously have shown to be left lateralized in spider monkey behavior [5]. Contrary to our predictions, we found that the network for embrace was structurally different from that of face-embrace. We then explored our secondary objective examining whether the side with which the social affiliative behaviors were performed had an effect on network structure. Here our results confirmed our prediction that the global left affiliative network was structurally different from the global right affiliative network. Finally, our analysis of sub-networks parsing direction within each behavior partially supported our prediction. All four sub-networks were different from each other, suggesting an interaction between laterality and behavior type. We discuss these differences in social network structure in the context of three dimensions of social risk. The concept of risk is often described in the non-human primate literature in the context of risk of aggression from neighboring groups [22], predation [23], and loss of resources [24], all of which are typical challenges for species living in the wild. Rebecchini et al.
[25] first identified embracing as a component of risk in spider monkeys, and Boeving, Belnap and Nelson [5] suggested that embrace risk may be graded according to the type of physical contact with face-embrace having higher risk given the close placement of the faces.By comparison, embrace is lower risk because the faces do not touch.Here, we label this type of risk contact risk.Although embrace and face-embrace have a similar left behavioral lateralization pattern, the finding that they do not have similar network structures supports the conclusion that these behaviors are related but distinct.The graphical representation of the embrace network conveys the robustness of this behavior (Figure A1A).Specifically, most individuals engaged in embracing, and with high frequencies, yielding a network graph with most monkeys having high values for degree centrality.Overall, this pattern indicates strong cohesion in the embrace network.In contrast, the face-embrace network depicts interactive patterns in which only a few males were strongly bonded (Figure A1B).When in-degree and out-degree were examined, both males and females initiated and received within the embrace network, but there was a significant difference in the face-embrace network where females initiated more face-embrace and males received more of this behavior.This sex difference is notable because aggression towards females from male spider monkeys is a known pattern [26], making the social lives of female spider monkeys especially risky.In captivity, intra-group aggression is an important consideration given that wild female spider monkeys emigrate from their natal group [26,27].We envisioned the face-embrace to be the riskier of the two embraces given the close face contact.Yet, with the known pattern of aggression towards females in mind, our social network analysis points to a second aspect of social risk within the face-embrace: partner risk.Social risk in relation to sex roles has been widely discussed in the human literature.For example, female sexual risk taking within certain communities is associated with greater risk of male aggression towards them [28,29].Contact and partner variables have also been examined in the literature on social touch laterality in human kissing [30][31][32][33][34] and embracing [35,36], although these studies have not framed their findings in the context of risk, which may be an avenue in the future to connect these two streams of research. 
A third type of risk identified by our network analyses is laterality risk.This dimension of risk was informed by our analyses that identified a structural difference between the global left affiliative and global right affiliative networks.In the left affiliative network, several monkeys were central.In contrast, the right affiliative network had a significantly different architecture in which fewer monkeys were central to the network, and in which the behavior occurred less frequently.Previous work has suggested that the right hemisphere plays an important role in the monitoring and detection of uncertain events in the environment, while the left hemisphere is more involved in routine behavior [2].This role differentiation between hemispheres is particularly relevant when considering the positioning of the body for embrace and face-embrace.Specifically, if the functional split between hemispheres is correct, then positioning others on the right side for either behavior would be risky.Moreover, face-embrace would be especially risky given the close contact of the face coupled with the hypothesized decrease in ability for social monitoring when engaging others on the right side.It would thus be advantageous to position conspecifics on the left side given the hypothesized neural processing benefit.In line with this hypothesis, the structure of the left lateralized affiliative network pattern can be characterized as a highly cohesive network where all monkeys engaged in the behavior, and engaged frequently (Figure A2A).In contrast, the right lateralized network was lower in cohesion; engagement occurred less frequently, with only a few monkeys reaching high values of degree centrality (Figure A2B).Although not recorded in this study, capturing the sequence of behaviors that follow these risky interactions would further test this theory, and is a goal for future work. Although we collected data over a four-month period, one limitation of this study is that we were not able to assess the stability of these networks over time.Juvenile data were excluded from analyses due to the low frequency of engagement in the behaviors we examined.However, we would expect this pattern to change as individuals mature and develop social bonds.The novel application of social network analysis could quantify this process, not only in primates, but other highly social species.Moreover, here we have utilized a between-networks approach based on our research question, but a within-networks approach across two or more timepoints could provide information about how an individual's position in a network changes as a function of development.A developmental network approach would also broaden our knowledge of the factors that contribute to the emergence of social laterality and its function. 
Taken together, the structural differences between the four sub-networks confirmed a graded spectrum of social risk in spider monkeys along the three dimensions of risk: contact, partner, and laterality (Table 2). The sub-network with the lowest risk (i.e., left embrace) had the most participation and strongest cohesion, whereas the sub-network with the highest risk (i.e., right face-embrace) had the least participation and was the most disjointed of the networks indicating low cohesion (Figure 1). To answer our original question posed in the introduction, these findings suggest that laterality influences social structure. However, we acknowledge that social structure may also influence laterality, or that the relationship is bidirectional. Future work using longitudinal designs may address this point. Additional studies should also aim to include network analyses of other behavioral domains that could be related to laterality, such as cognition and motor skill. In conclusion, social network analysis is an exciting new avenue for characterizing brain-behavior relationships. In using this unique computational method to elucidate factors that drive global differences in social network topology, we advance our understanding of laterality within a social framework. Figure 1. Clockwise from top left: (A) Left embrace; (B) Right embrace; (C) Left face-embrace; and (D) Right face-embrace. Networks are ordered on social risk index (see text for details). Red denotes females, and blue denotes males. Nodes are weighted such that the larger the node, the higher the degree centrality. Edges are weighted such that thickness denotes frequency of interactions. Figure A1. Global embrace and global face-embrace networks differ. Figure A2. Global left affiliative and global right affiliative networks differ. Red denotes females, and blue denotes males. Nodes are weighted such that the larger the node, the higher the degree centrality. Edges are weighted such that thickness denotes frequency of interactions. Table 1. Individual degree centrality values. The higher the degree centrality value, the more highly connected a monkey is to others. Table 2. Dimensions of social risk. Table A1. Individual Raw Frequency Scores.
4,144
2018-09-09T00:00:00.000
[ "Psychology", "Biology" ]
Towards a Mathematical Description of Biodiversity Evolution We outline in this work a mathematical description of biodiversity evolution based on a second-order differential equation (also known as the "inertial/Galilean view"). After discussing the motivations and explicit forms of the simplest "forces", we are led to an equation analogous to a harmonic oscillator. The known solutions for the homogeneous problem are then tentatively related to the biodiversity curves of Sepkoski and Alroy et al., suggesting mostly an inertial behavior of the time evolution of the number of genera and a quadratic behavior in some long-term evolution after extinction events. We present the Green function for the dynamical system and apply it to the description of the recovery curve after the Permo-Triassic extinction, as recently analyzed by Burgess, Bowring and Shen. Even though the agreement is not satisfactory, we point out direct connections between observed drop times after massive extinctions and mathematical constants and discuss why the failure ensues, suggesting a more complex form of the second-order mathematical description. Introduction The search for a mathematical description of reality has a rich and long history [1]. Biological problems, however, have resisted outright general mathematization and prompted several discussions about the nature of living phenomena and their similarity/radical difference with the rest of the physical world [2] in this respect. There is, nevertheless, ample evidence that mathematical tools and models (with a varying degree of phenomenological content and fundamental roots) have played a major role and helped in understanding the evolution of various biological systems and populations. Among all of these developments, population dynamics has a history of almost a century of successful mathematical models, as pioneered by Lotka [3], Volterra [4] and others [5]. As is well known, a general Lotka-Volterra type equation relates the rate of change of the number of individuals of one population to the population itself and other functions involving features that contribute to it. This is conceptually quite simple and flexible enough to accommodate a variety of known biological features that determine the evolution of a given population. However, and quite analogously to the Aristotelian view of physical dynamics, the rate of change is a first order derivative and, therefore, is not sensitive to anything but the instantaneous variations. On this basis, the Lotka-Volterra type of description has been criticized in the past, and several authors [6][7][8] pioneered the formulation of a second order formalism to deal with the temporal variation of the rate of change. The reasons for this "Galilean turn" have been discussed elsewhere [9,10] and established the second order theory on quite firm grounds. It has been shown that a second order theory complies with the recorded evidence of population dynamics and offers a new perspective of several observed phenomena. The aim of this work is to present and discuss a second order description of the biodiversity (more specifically, the number of genera, hereafter denoted N_g) inspired by this Galilean approach. We shall show in Section 2 below that a fundamental equation for the evolution of N_g can be constructed and justified. The general character of the solutions and their relevance to the existing fossil record (including massive extinctions, with the particular example of the Permo-Triassic recovery phase) will be the subject of Section 3.
The conclusions of this work will be presented in Section 4. Second Order Formalism in Population Dynamics and Its Extension to Biodiversity The basic idea of the inertial (Galilean) approach to population dynamics, as discussed by Ginzburg and Colyvan [9], starts with the mathematical form of Malthus' law of population growth: in the absence of external perturbations and given unlimited resources, a population N(t) will grow exponentially, N(t) = N(t = 0) × exp(rt), with r the growth rate. This simple expression can be immediately put in the form: d²(ln N)/dt² = 0, (1) and interpreted as showing that there is no net "acceleration" in this case to the function ln N. The introduction of "external forces" (or rather particularly, predator-prey interactions, finite population limits, etc.) is quite straightforward and assumes the general form F(N, dN/dt) at the r.h.s. (in the first order Lotka-Volterra theory, Equation (1) is replaced by dN/dt = rN and r = f(N), respectively, yielding a quite different view of the population evolution without any inertia). A complete dynamical description is thus achieved, and its solutions can be compared to actual population data, as discussed in [10][11][12] and many other publications. For instance, the "accelerated death" of bacteria subject to sudden starvation is indicative of a second order theory in which the death rate can vary. The extension to tackle the biodiversity problem within this Galilean description framework starts with an analogous postulate, which is even more transparent than the Malthusian law, because it does not involve a logarithmic function: in the absence of any perturbation, the number of genera N_g (assumed to represent biodiversity, so chosen to avoid the problems faced at the lowest species level) will tend to a constant in time, that is: d²N_g/dt² = 0. (2) This is of course a direct consequence of dN_g/dt = const at sufficiently long times of the unperturbed system, when all "forces" cancel out in steady state. The main idea here is that life develops unperturbed, and after a certain time, the number of genera does not change for fixed external conditions (see, for instance, [10]). Note that the general inertial view of McShea and Brandon [12] of a zero-force law (that is, the increase of diversity and complexity with time in the absence of any "force") is also included in this description. Equation (2) states that the growth should be linear with time at most. Strictly speaking, a number of fluctuations seem unavoidable, and we should write something like ⟨N_g⟩, the average number of genera, as a true stochastic variable (see Section 4), but for the moment, we will retain the simpler continuum picture. Motivating Basic Ecological Forces As a further development, we shall assume that putative "forces" stressing the ecosystem on the right hand side are represented by a general function F(N_g, dN_g/dt), to be determined for specific cases. An obvious choice for one of these forces is to assume that the rate of change of the growth rate is proportional to the number of genera itself, having in mind that it is reasonable to expect an increasing number of genera when the diversity is correspondingly large for a process even happening at a constant/growing rate. For a given maximum number of genera N_max (the asymptotic value of N_g at sufficiently large times, akin to the carrying capacity discussed in population dynamics [11]), which will be different for different environments, this "force" can be written as: F₁ = k (N_g − N_max), (3) with k < 0, complying with Equation (2) when this term vanishes for long times and N_g → N_max.
Other "forces" may be added depending on the situation, for example a constant force (note that the −kN_max piece of Equation (3) is already of this type) or a force proportional to the rate itself, −β dN_g/dt (so that the coefficient −β < 0), interpreted as environmental effects that slow down the response of N_g, acting as ecological "friction". This simple term introduces a damping timescale τ ≡ 1/β possibly related to the action of the biosphere in the relaxation of perturbations, much in the same way as is observed in other natural systems (see below). It is clear that non-linear terms containing both N_g and dN_g/dt in the polynomials or other complex functions may be expected on general grounds, but shall not be addressed in this exploratory work. Therefore, the simplest form for the complete differential description of N_g reads: d²N_g/dt² + β (dN_g/dt) + γ (N_g − N_max) = f, (4) with γ ≡ −k > 0 and f a constant stress, to be refined later if necessary. Simple Solutions The "inertial" solution, in which N_g becomes asymptotically constant, is, by construction, the behavior to be expected for an unperturbed system. For shorter times, however, the system may show a different transient evolution provided the initial condition determines a non-zero constant rate initially, and therefore, a linear growth should result after the transient, recovery phase set by a sudden perturbation. In a more general case and provided that the "forces" are fairly represented by Equation (4), the stationary solutions are analogous to a damped harmonic oscillator. These may be written as N_g(t) = A exp(s₁t) + B exp(s₂t), with s₁,₂ = (−β ± √(β² − 4γ))/2. Depending on the relation between γ and β²/4, a damped oscillatory, critically damped or overdamped behavior results [13,14]. The last two solutions possess a characteristic timescale, but no period. The period of the first solution is altered by the damping term with respect to its "natural" value 1/√γ. Note that we have ignored for the moment the constant term related to the value N_max and the constant f, which would induce a phase difference, as is well known from the general study of harmonic oscillators. Taking advantage of the extensive studies performed in several contexts of the damped harmonic oscillator, we may immediately write down one of the most interesting solutions of possible biological interest: the response of the system to a sudden, shortly-lived, large perturbation of the type f(t) = δ(t − t₀). The delta function with unity amplitude as an impulsive force model yields the so-called Green function as the general homogeneous solution: G(t − t₀) = [exp(s₁(t − t₀)) − exp(s₂(t − t₀))]/(s₁ − s₂) for t > t₀, (5) where s₁,₂ are the roots of the quadratic equation s² + βs + γ = 0. Again, real (damped) or imaginary (oscillatory) values may be obtained, depending on the hierarchy between β and γ. Generally, when a more complex force on the r.h.s.
is acting on the system, it would produce a variation of , written as its convolution with the Green function, namely: which allows a complete description whenever a long-term force needs to be modeled (for example, a change in atmospheric composition or some other long-term environmental effect).It should be noted that the precise modeling of evolutionary forces is largely unknown, and even the most natural choice may not have a mathematical form easy to grasp and employ in the r.h.s.[15].With the presentation of these mathematical tools, we will try to interpret the recorded fossil history in the next section, paying attention to the biological meaning of the presented formal solutions Comparison with Existing Fossil Records The study of the biosphere's diversity history is a delicate business, since various biases could occur and mask fundamental facts.Compilations of marine taxa collected from a variety of sources has been the main source for these studies, and generally speaking, they have shown an initial Cambrian radiation followed by a fluctuating pattern affected by large extinctions and, finally (starting in the Triassic), a large exponential-like increase, often taken as a radiation [16].Synoptic data gathered by Sepkoski [17] have been a chief element leading to this view of diversity since the Cambrian explosion. Recent work on this problem [18], however, challenged the former general vision.The main improvement has been a careful treatment of the sampling and taxa identification issues, both somewhat problematic in previous works.The new diversity curve, precisely vs. time (Figure 1 of [18]), has been discussed at length and displays, according to these authors, some novel features to be noted.A very general conclusion of the study is that the net increase in diversity was modest and may be limited overall, not exceeding a small numerical factor since the beginning of the records.In particular, it seems that the increase since the beginning of the Paleogene was about 15% at most.They also obtained that the Meso-Cenozoic radiation started before the late Cretaceous and is not as dramatic as previously thought.Some of the "Big Five" extinctions stand very clearly in the new curve, and some appear less dramatic than before within this 11 My bin of data subject to the procedures detailed in [18]. The application of the results of Section 2 to this curve must be done with great care, and we can only advance some general qualitative trends in the present work.First, we note that constant periods (notably the whole Paleogene plateau) plus (linear?) 
risings and drops are the rule, rather than the exception in this curve. This suggests the purely inertial view of the diversity evolution on the Earth, subject to changes epoch-by-epoch and complying with the expectation of a generally increasing trend for the upper limit (N_max) as argued by many researchers [19][20][21]. It may even be possible to embrace massive extinctions without a major departure from the inertial view, although then, some non-linear force or sudden perturbation should be invoked to model the extinctions themselves. This is where the Green function of the last Section is a useful tool: a sudden large perturbation of short (geological) duration will then produce a recovery with constants s₁, s₂, which are functions of β and γ, directly related to the observed evolution, provided reliable, higher resolution curves can be obtained instead of the relatively large 11 My binning considered in [18]. It is important to note that due to the "extant genera" problem of Alroy et al. [18], the curve may underestimate the growth near present times, a feature that is not problematic in the original Sepkoski compilation [22]. Figure 1 displays a simple prediction of the model in the long-term recovery leading to steady states after two large extinctions. It is assumed here that the extinction event resets the conditions, creating a constant ecological stress f of different magnitude on the r.h.s. of Equation (4), and also that the N_g and dN_g/dt terms compensate for each other. The solutions are then parabolic in time. This is as acceptable as an exponential growth in both situations. In addition to these steady states, the recent work by Burgess, Bowring and Shen [23] provides an excellent opportunity to test the quantitative short-term recovery response of the system described by Equation (4) to a sudden perturbation modeled as a delta function in time, that is its Green function, Equation (5). The authors constructed a time history of the end-Permian mass extinction based on U-Pb zircon dates from five volcanic ashes in the well-studied site of Meishan, China. Using this chronology, the duration of the extinction itself was determined to be 60 ± 48 kiloyears (that is, virtually instantaneous from a geological point of view and even more instantaneous when the fossil record is analyzed [24,25]) and a plot of carbonate carbon isotopic composition history plotted for the aftermath. Under the assumption that the latter quantity is a proxy for N_g (which is, nevertheless, subject to some caveats), a comparison with the theoretical picture may be attempted. A "best fit" curve is shown in blue. The agreement is not satisfactory, although the calculated χ² is acceptable, and a decaying amplitude plus a quasi-frequency (of around 2.6 My⁻¹) in the late curve are present.
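A minimal numerical sketch of this recovery behavior is given below, assuming the damped-oscillator form of Equation (4), d²N_g/dt² + β dN_g/dt + γ(N_g − N_max) = f, with a constant post-extinction stress. All parameter values, time units and the initial condition are illustrative assumptions, not fitted values from [18] or [23].

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch of the second-order diversity model
#   d^2 N/dt^2 + beta * dN/dt + gamma * (N - N_max) = f,
# with a constant stress f after an extinction-like reset.  All symbol names and
# parameter values are illustrative assumptions.

beta, gamma, N_max, f = 0.4, 0.05, 1000.0, -5.0   # 1/Myr, 1/Myr^2, genera, genera/Myr^2

def rhs(t, y):
    N, dN = y
    return [dN, f - beta * dN - gamma * (N - N_max)]

# Post-extinction initial condition: depleted diversity, momentarily static.
sol = solve_ivp(rhs, (0.0, 100.0), [300.0, 0.0], dense_output=True, max_step=0.1)

# Characteristic roots of s^2 + beta*s + gamma = 0 control damping vs. oscillation,
# and tau = 1/beta is the relaxation timescale discussed in the text.
s12 = np.roots([1.0, beta, gamma])
t = np.linspace(0.0, 100.0, 500)
print("roots:", s12, " tau =", 1.0 / beta, "Myr, N(100 Myr) ~", round(sol.sol(t)[0][-1]))
```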
Two general remarks may be in order to achieve a better understanding of the global features. First, the reports on the existence of a 5-10 My recovery timescale present in this extinction [26][27][28] could be directly associated with the exponentially decaying amplitude of the transient originated by the presence of a timescale τ ≡ 1/β, clearly seen in the fit, which falls precisely in this range. Its origin is suggested to be related to the environment itself, rather than being a result of the specific trigger mechanism. In fact, the suggestion [29] that the dynamics of extinctions should shift from just the abiotic effects to the study of a trophic + non-trophic chain of interactions characteristic of the collapse of the biosphere suggests an exploration of the origin of this timescale from first principles. Second, we believe that the poor agreement of the peaks seen in the fit has to be traced back to the presence of a single frequency in the Green function Equation (5). Actually, the knowledge of dynamical systems similar to those described by Equation (4) suggests that the data curve is the result of a beat between two different frequencies. Therefore, the presented description is still short of providing a satisfactory quantitative picture of the recovery of N_g after an extinction event (Figure 2). On the other hand, an important claim in the literature [30,31] is about a strong periodicity of ~62 My in the fossil record. One of the suggested explanations is that short-lived and long-lived genera behave differently in secular timescales and produce a sinusoidal pattern. This feature has no room in a formalism that treats N_g as a single quantity without discriminating both types of genera. However, it is fair to state that such a description would be richer and possibly related to the problem just discussed above, for which we have no answer by now. We finally point out that an attempt to study the global diversity recoveries has been published by Kirchner and Weil [32], and a lagged correlation (at ≈10 My) between extinction and origination rates is reported (dN_g/dt is just the difference of both), even outside the Big Five extinctions. If true, this timescale appears to be a robust property of the global ecosystem and, therefore, also related to a combination of β and γ (and, as pointed out above, perhaps other participating quantities or more complex forms of these parameters), helping to unravel the nature of a successful mathematical description. It is important to note in this context that such a lagged correlation is equivalent to a phase difference in the oscillator-like picture and can be easily justified, being a very common feature of oscillator-like dynamics [14] that remains to be studied.
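The lagged-correlation idea can be illustrated with a few lines of code: take binned extinction and origination rate series and locate the lag that maximizes their correlation. The series below are synthetic stand-ins (not real fossil data), so the recovered ~10 My lag is built in by construction.

```python
import numpy as np

# Sketch of a lagged-correlation analysis between extinction and origination rates.
# Synthetic 1-Myr-binned series; origination responds ~10 Myr after extinction.
rng = np.random.default_rng(0)
t = np.arange(0, 300)                        # Myr
extinction = rng.gamma(2.0, 1.0, t.size)
origination = np.roll(extinction, 10) + rng.normal(0.0, 0.3, t.size)

def lagged_corr(x, y, max_lag=40):
    """Correlation between x and y shifted by each lag in [-max_lag, max_lag]."""
    lags = np.arange(-max_lag, max_lag + 1)
    corrs = []
    for k in lags:
        if k >= 0:
            corrs.append(np.corrcoef(x[: x.size - k], y[k:])[0, 1])
        else:
            corrs.append(np.corrcoef(x[-k:], y[: y.size + k])[0, 1])
    return lags, np.array(corrs)

lags, corrs = lagged_corr(extinction, origination)
print("best lag (Myr):", lags[np.argmax(corrs)])
```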
Conclusions We have outlined in this work a theory of global diversity based on a second-order differential equation.Just like its analogue in population dynamics [9], the Galilean (inertial) character of the resulting description has some advantages over a first-order "Aristotelian" approach, but it is premature to insist on its advantages until the features are developed and compared in more detail with the existing records.At face value, the history of diversity on the planet shows the features of the inertial behavior.In other words, a naive qualitative analysis could conclude that the inertial description of Equation ( 2) suffices to account for most of the reconstructed marine genera curve [18], at least outside of the massive extinction episodes and for relatively short timescales.Alternatively, a different (quadratic) behavior is obtained if the system becomes stressed by a constant force, possibly after an extinction event, resetting the environmental/trophic conditions.After the transient and provided that the forces cancel out quick enough, the system could enter the inertial regime again in which Equation (2) applies and the diversity evolves linearly or tends to a constant instead, or it could evolve quadratically if stressed by a constant ecological force .It is more difficult to obtain and justify these behaviors within a first-order (Aristotelian) formalism, and there are hints for their presence in the vs. time actual curves. Geological long-term trends need also to consider the issue of changing conditions, together with the existence of a growing .Extinctions themselves pose a different problem not present in a pure inertial view: either a large external perturbation should be introduced (like the exogenous Alvarez et al. [33] asteroid), breaking the inertial behavior; or an explicit non-linearity related to the environment/population itself should be identified and modeled.In the first case, Green functions could be more or less directly used to infer the character of relevant quantities, while in the second case, the need of a deeper mathematically-oriented discussion of the biological interactions would be called for, and we would not be dealing with a pure "Galilean" system any more.For the moment, we have established that in the specific comparison with the short-term recovery after the P-T extinction, the Green function of the simplest equation lacks enough features to describe the detailed time-resolved history [23], pointing towards a more complex form than the one suggested in Equation ( 4).Alternatively, one can attempt a different class of descriptions, starting from the analysis of the endogenous/exogenous relation, and try to "build-up" the patterns of extinctions, as recently shown by Stollmeier, Geisel and Nagler [34]. It should be acknowledged that, strictly speaking, a stochastic term added to the oscillator forces would produce an even better description, since an average ( ) would emerge naturally from the effects of the expected fluctuating effects of the environment.Such an improvement is also on the list of future tasks, and the present discussion is intended just to open the debate on these issues. Figure 1 . 
Figure 1. Best fits to the long-term recovery after the Triassic-Jurassic (left red segment) and Cretaceous-Paleogene (right red segment) extinctions. The black curve is the data by Sepkoski [22], which suggests a more dramatic growth for the latter and overall for N_g. The model performs well within a quadratic evolution stemming from the solutions of d²N_g/dt² = f. Figure 2. A comparison of the Green function response, Equation (5) (blue curve), with the time-resolved data corresponding to the Permo-Triassic extinction event analyzed by Burgess, Bowring and Shen [23] (black curve). The origin of the time axis has been set to the end of the extinction interval given by these authors. See the text for details.
4,922.4
2014-09-23T00:00:00.000
[ "Biology", "Mathematics" ]
Resistive cooling circuits for charged particle traps using crystal resonators T. Kaltenbacher,1,2 F. Caspers,1 M. Doser,1 A. Kellerbauer,3 and W. Pribyl2 1Physics and Accelerator Departments, CERN, 1211 Geneva 23, Switzerland 2Institute of Electronics, Graz University of Technology, Inffeldgasse 12, 8010 Graz, Austria 3Max Planck Institute for Nuclear Physics, Saupfercheckweg 1, 69117 Heidelberg, Germany Introduction In many experiments using ion traps, the kinetic energy of confined particles must first be reduced in order to perform high precision measurements [1]. Laser cooling is widely used and very effective, but it is limited to specific ions [1]. Methods which can theoretically be applied for all ions are active-feedback cooling, collisional cooling, RF cooling, evaporative cooling and resistive cooling (occasionally called electronic cooling) [2,3]. In resistive cooling, the trap electrodes are connected to an external circuit to dissipate energy from the ion through induced currents [1]. In other words, the kinetic energy of the confined particles is damped by i²R losses in a resistive circuit. Roughly speaking, the resistor absorbs the particles' energy and is heated up in the process. Since the resistor has a specific physical temperature, it generates Johnson noise that in turn stochastically drives the trapped particles [4]. In the absence of other heating sources, a thermal equilibrium between the particles and the resistive circuit is ultimately established. A single ion confined in an ideal Penning trap, consisting of a solenoidal magnetic field and a three-dimensional electric quadrupole field, performs three fundamental particle motions, namely axial (ω_z), cyclotron (ω_c) and magnetron (ω_m) oscillations [4]. The axial motion is the oscillation parallel to the magnetic field (z direction) and its frequency is determined by the electrical field only [4]. The axial frequency of a single particle with mass m and charge q is given by ω_z = (qV_0/(m d²))^(1/2), (1) where V_0 is the electrostatic potential applied to the electrode and d is a characteristic trap parameter [4]. A large collection of ions in a Penning trap arranges into a plasma whose spatial extent is large compared with the penetration depth of a static electric field, called the Debye length. In thermal equilibrium, the motions of the particles become strongly correlated and they form an ellipsoid which performs a rigid rotation about the trap axis [5]. Along the trap axis, the ellipsoid performs collective oscillations which deform or displace the ion cloud. In the low-temperature limit, the plasma oscillations can be described by a cold-fluid theory [6] and their respective frequencies can be determined analytically. The lowest-order axial mode, also called the 'bounce' mode, corresponds to a periodic axial displacement of the entire plasma. Its oscillation frequency ω_1 = ω_z is identical to the axial frequency of a single ion in a Penning trap. To estimate the cooling time of the axial motion, a simple single-particle model is used, where the particle is harmonically confined between two infinite capacitor plates [7]. According to this model, the energy is damped with a time constant τ given by τ = m(2z_0)²/(q²R), (2) where R stands for the real part of the impedance of the attached external circuit. The induced current is given by i = qv_z/(2z_0), (3) where 2z_0 is the separation of the capacitor plates and v_z is the particles' velocity in the axial direction. From Eq.
(2) one can easily deduce that light, highly charged particles are efficiently cooled. The cooling rate can be further improved by developing a high resistance in the external circuit. In general, the external circuit includes a low-noise amplifier to couple the induced current signal to room temperature, and thus enable stored particle and plasma diagnostics. Traditionally, the impedance Z shown in Fig. 1 is implemented as an inductance L to tune out the parasitic capacitance C of the electrodes [4]. This inductance usually consists of a discrete coil made of copper or superconducting wire. At the resonance frequency ω_r = (LC)^(−1/2) of a parallel LC circuit, the real part R of the impedance is maximal and is given by R = ω_r L Q. (4) Therefore, the quality factor Q of the tuned circuit has to be as large as possible to guarantee efficient resistive cooling. Furthermore, the coil's inductance L is also made as large as possible to increase the resistance in resonance. However, this only holds when the capacitance of the circuit can be freely chosen. Usually the capacitance is given by the Penning trap itself and all parasitic capacitances in the system. A large inductance in turn means that the coil has to have a certain size, since the inductance is basically given by its dimensions and the number of turns. Moreover, superconducting coils require careful shielding from magnetic fields as well as care in the design of the interface between the superconducting and resistive wire [8]. Resistive Cooling with a crystal resonator We propose and have investigated the use of a crystal resonator in parallel resonance instead of a coil. This setup provides several advantages: • The crystal's operation is not negatively affected by the magnetic field of the Penning trap; • The Q factors of crystal resonators are very high, resulting in a very high impedance in resonance; • A crystal resonator circuit is mechanically smaller than a superconducting coil with its shielding. Quartz crystals are used in electronic circuits to provide very stable oscillation frequencies. Figure 2 shows the electrical equivalent circuit of a quartz crystal resonator. Its series and parallel structure leads to two resonances, with the series and parallel resonance frequencies ω_s and ω_p: ω_s = (L_1 C_1)^(−1/2), (5) ω_p = (L_1 · C_1C_0/(C_1 + C_0))^(−1/2) = ω_s (1 + C_1/C_0)^(1/2). (6) The parallel resonance is also called anti-resonance and occurs when the parallel combination of C_1 and C_0 tunes out the inductance L_1. The capacitance of the crystal holder and parasitic capacitances is denoted by C_0. The resistance R_1 stands for electrical losses due to damping, and L_1 and C_1 are related to the mechanical properties of the quartz [9]. The equivalent circuit model, the so-called Butterworth-Van Dyke model, is valid for the single series, parallel resonance combination at frequencies near resonance [10]. Depending on the crystal geometry, a crystal oscillator may also oscillate at odd multiples of its series resonance frequency. Measurement setup Fig. 2: Equivalent circuit for crystal resonator and simplified impedance. Fig. 3: Vector Network Analyzer measurement setup without crystal resonator. To measure high impedances within a 50 Ω system one has to use transmission measurements, since the unknown, high impedance cannot be measured in parallel. A sensitive spectrum analyzer with tracking generator could be used as well as a network analyzer. Our measurements were made with a vector network analyzer (VNA; Agilent 5071C). Figure 3 shows the schematic of the measurement setup without and Fig.
4 with the impedance to be measured (Z m ).The scatter parameter measured with the VNA is S 21 since in the transmission measurement the stimulus is applied to Port 1 and the signal is measured at Port 2. When using a VNA the measured quantity at Port 2 is complex, which in turn means that the signal's magnitude and phase are determined. The measured voltage U 2 is given by For Z L = Z S it is equal to U 0 /2 and is called the reference voltage U ref, since it is used normalize the measured potential.The normalization of the measured quantity including the unknown impedance Z m is expressed by Thus Eq. ( 8) gives the transmission scatter parameter S 21 which is measured with the VNA. To simulate the electrical effects of a connected Penning trap, a coupling capacitance with C C ≥ 10 pF was introduced (see Fig. 5), which represents the electrode's capacitance as well as parasitic capacitances.In fact the load capacitances of the crystal resonators recommended by the suppliers were used as coupling capacitance.A capacitance in series with the quartz shifts the series resonance frequency ω s toward the parallel resonance frequency: Fig. 4: Measurement setup including Vector Network Analyzer and crystal resonator.The parallel resonance frequency is only affected by capacitances in parallel with the quartz [9]. Tab. 1: Summary of the impedance amplitude Z res and the resulting quality factor Q p at parallel resonance ν p = ω p /(2π) of the measured crystal resonators at room temperature. Frequency response First measurements with commercially available 20MHz crystals were carried out in order to determine the range of possible impedance values and to verify that the absolute value of the impedance at parallel resonance is large enough for efficient resistive cooling of trapped particles.Tab. 1 summarizes the measured values of three off-the-shelf crystals in order to show that there is no severe effect of a coupling capacitance C C on the impedance at parallel resonance and to compare their performance.The coupling capacitances were chosen according to the specifications given in the datasheets [11,12,13].The 20MHz AEL Crystals and Euro Quartz resonators are housed in a hermetically sealed can with HC49 holder and nitrogen atmosphere [11,12].The 19.44MHz quartz resonator from KVG Quartz Crystal Technology has a HC43/U holder and is vacuum-sealed [13].All three resonators oscillate in the fundamental mode.The observed shift of the parallel resonance frequency may be due to parasitic capacitances caused by the incorporation of the coupling capacitance into the circuit.Fig. 7: Frequency response of KVG crystal at parallel resonance at room temperature with and without magnetic field. Since Penning traps are usually placed in cryogenic environments with a large DC magnetic field, further measurements were made to investigate the proper operation of the quartz under these conditions.In order to operate quartz crystals in a cryogenic environment, vacuum-sealed crystals must be used since gases like nitrogen used as atmosphere in the crystal housing would freeze out and could severely affect the crystal's function.This is the reason why measurements at 4K were done with the KVG crystal only.The measurements shown in Fig. 6 were made at room temperature and in the absence of a magnetic field.Compared to the previously shown results, the KVG device is clearly superior with a higher impedance of about 4.5 MΩ at parallel resonance (see Tab. 
1).This value is about four times larger than the highest value measured with the Euro Quartz device at room temperature.The parallel resonance frequency was found to be ν p = 19.4787MHz and the quality factor Q p ≈ 173,100 without coupling capacitor.The resonance frequency varied by less than 1 kHz when the coupling capacitors were attached to the circuit, and there was no significant reduction of the impedance at ν p . A large magnetic field was created by two permanent magnets (Webcraft) [14] which can develop a field magnitude of about 1 T close to the magnet.By attaching the magnets directly to the crystal's housing, the distance between the quartz and the magnet was minimized, thereby maximizing the magnetic field.No decisive effects were observed when the magnetic field was present from two different directions.Figure 7 shows the magnitude of the impedance when the magnetic field is perpendicular to the crystal disc ("with magnet 90º") and in parallel ("with magnet 0º") respectively.The cryogenic environment was realized with a Dewar filled with liquid helium.The cool-down from room temperature to liquid-helium temperature was carried out over about 15 min in order to minimize mechanical stress to the crystal.Figure 8 shows the result of the measurement at cryogenic temperature compared to the one at room temperature.As expected, there is an increase of the quality factor Q p and the resonance frequency is shifted toward lower frequencies [15].Q p is increased by factor of two.The measurement results show that the crystal resonator is fully functional in a cryogenic environment and in the presence of a strong magnetic field, as present in a Penning trap.Our results confirm earlier measurements presented in Refs.[16] and [17]. Resonance frequency shift In a real Penning trap, ions may exhibit a range of resonance frequencies, depending on trap imperfections or their interactions with other trapped particles.It is thus desirable to be able to vary the resonance frequency and possibly chirp the parallel resonance of the crystal resonator over a range of frequencies.To shift the parallel resonance frequency of a quartz the schematic shown in Fig. 9 is used.The resistor R B1 is chosen as 1.5 MΩ to be large enough to not degrade the peak impedance at parallel resonance and to decouple the voltage source from the resonance circuit.The capacitance of the blocking capacitor C Bl must be larger than the minimum capacitance C diode of the diode D 1 since C Bl and C D1 are in series and the lowest capacitance dominates.The diode D 1 is a GaAs Schottky diode which has a capacitance of C diode = 5.8 pF at U diode = 0 V and C diode = 1.4 pF at U diode = 30 V and functions at cryogenic temperatures.The voltage-adjustable parallel capacitance is thus tunable from C P = 1.48 pF to 0.82 pF.The result of the parallel resonance sweeping at liquid Helium temperature is shown in Figs. 10 and 11.With this setup a tunability of the parallel resonance of more than 3 kHz is realized. 
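For readers who want to reproduce the qualitative behavior of such measurements, the sketch below evaluates the impedance of the Butterworth-Van Dyke equivalent circuit of Fig. 2 and locates the series and parallel resonances. The component values are illustrative for a ~20 MHz crystal and are not those of the AEL, Euro Quartz or KVG devices.

```python
import numpy as np

# Butterworth-Van Dyke equivalent circuit: a series R1-L1-C1 (motional) branch in
# parallel with the holder capacitance C0, optionally with an extra parallel
# capacitance C_par.  Component values are illustrative, not measured ones.
R1, L1, C1, C0 = 15.0, 0.025, 2.5e-15, 5e-12   # ohm, H, F, F

def bvd_impedance(f, C_par=0.0):
    w = 2 * np.pi * f
    z_series = R1 + 1j * w * L1 + 1.0 / (1j * w * C1)   # motional branch
    z_shunt = 1.0 / (1j * w * (C0 + C_par))              # holder + external parallel C
    return z_series * z_shunt / (z_series + z_shunt)

f_s = 1.0 / (2 * np.pi * np.sqrt(L1 * C1))               # series resonance
f_p = f_s * np.sqrt(1.0 + C1 / C0)                       # parallel (anti-)resonance
f = np.linspace(0.999 * f_s, 1.001 * f_p, 20000)
zmag = np.abs(bvd_impedance(f))
print(f"f_s = {f_s/1e6:.4f} MHz, f_p = {f_p/1e6:.4f} MHz, "
      f"|Z|max = {zmag.max()/1e6:.2f} MOhm near {f[zmag.argmax()]/1e6:.4f} MHz")
```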
The change of the parallel resonance frequency due to a parallel capacitance C_P is given by ω_p(C_P) = ω_s (1 + C_1/(C_0 + C_P))^(1/2), i.e., the anti-resonance is pulled towards lower frequencies as C_P increases. The capacitance obtained by combining the blocking capacitance and the tuning capacitance of the diode in series is given by C_P = C_Bl C_diode/(C_Bl + C_diode). Conclusions and outlook We have proposed a type of resonator which has, to our knowledge, not yet been used for the purpose of resistive cooling. In order to ensure that a resonant circuit created by a single crystal resonator is able to efficiently cool trapped particles, further measurements are necessary. For the near future, measurements using a Penning trap with trapped electrons or protons are planned. If it turns out that the selectivity needs to be reduced in order to couple more modes of trapped particles, the crystal's parallel resonance could be chirped over the limited range of accessible frequencies (see Figs. 10 and 11). Another option might be to incorporate a crystal bandstop filter instead of the single crystal resonator, since the filter combines a high input impedance with a higher bandwidth. The bandstop filter makes it possible to dynamically cover a frequency range comparable to that of a parallel tuned circuit with a coil. Fig. 5: Measurement setup including Vector Network Analyzer, coupling capacitance C_C and crystal resonator. Fig. 9: Schematic for the circuit which allows the parallel resonance frequency of the quartz resonator to be tuned.
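The tuning range described above can be checked with a short calculation: the blocking capacitor in series with the diode capacitance gives the effective parallel capacitance C_P, which pulls the parallel resonance. The BVD values are the same illustrative ones as in the previous sketch, the diode capacitances are the quoted 5.8 pF (0 V) and 1.4 pF (30 V), and C_Bl ≈ 2 pF is an assumption inferred from the quoted C_P range of 1.48-0.82 pF.

```python
import numpy as np

# Varactor tuning sketch: C_Bl in series with the diode capacitance forms the
# parallel capacitance C_P seen by the crystal, which pulls the anti-resonance.
L1, C1, C0 = 0.025, 2.5e-15, 5e-12       # illustrative BVD values (H, F, F), as above
C_Bl = 2e-12                              # assumed blocking capacitor, inferred from quoted C_P range

def series_cap(c_a, c_b):
    return c_a * c_b / (c_a + c_b)

def f_parallel(c_par):
    f_s = 1.0 / (2 * np.pi * np.sqrt(L1 * C1))
    return f_s * np.sqrt(1.0 + C1 / (C0 + c_par))

for bias, c_diode in [("0 V", 5.8e-12), ("30 V", 1.4e-12)]:
    c_p = series_cap(C_Bl, c_diode)
    print(bias, f"C_P = {c_p*1e12:.2f} pF, f_p = {f_parallel(c_p)/1e6:.6f} MHz")
```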
3,325
2011-11-29T00:00:00.000
[ "Engineering", "Physics" ]
A Unified Approach to Solvability and Stability of Multipoint BVPs for Langevin and Sturm–Liouville Equations with CH–Fractional Derivatives and Impulses via Coincidence Theory : The Langevin equation is a model for describing Brownian motion, while the Sturm– Liouville equation is an important mechanical model. This paper focuses on the solvability and stability of nonlinear impulsive Langevin and Sturm–Liouville equations with Caputo–Hadamard (CH) fractional derivatives and multipoint boundary value conditions. To unify the two types of equations, we investigate a general nonlinear impulsive coupled implicit system. By cleverly constructing relevant operators involving impulsive terms, we establish the coincidence degree theory and obtain the solvability. We explore the stability of solutions using nonlinear analysis and inequality techniques. As the most direct application, we naturally obtained the solvability and stability of the Langevin and Sturm–Liouville equations mentioned above. Finally, an example is provided to demonstrate the validity and availability of our major findings. Remark 1.In (1) and (2), the impulse functions I l , J l are only related to U (x l ) since CH D * x l U(x l ) ≡ 0. In addition, (1) is implicit and (2) is explicit. As is well known, the Langevin equation is a famous mathematical model that describes the random motion of particles annihilating in a fluid due to collisions between particles and fluid molecules.Compared with the integer-order Langevin equation, the fractional-order Langevin equation is more accurate in describing the random motion of particles in complex viscoelastic fluids.In recent years, many papers dealing with the fractional Langevin equations have been published.For example, Ahmadova and Mahmudov [1] studied the explicit analytical solutions for several families of Langevin differential equations with general fractional orders.Salem et al. [2] applied Darbo's fixed-point theorem to investigate the existence of solutions for the three-point boundary value problem of a fractional Langevin equation in the noncompact Hausdorff space.Zhao, in [3][4][5], discussed the stability of several types of nonlinear fractional Langevin equations with delays and controls.In [6][7][8], the authors explored the controllability problem of fractional Langevin equations.Other papers are [9][10][11] concerned with the dynamics of stochastic Langevin equations. Furthermore, the Sturm-Liouville equation, which includes the Helmholtz equation, Bessel equation, and Legendre equation, also represents another important class of mathematical and physical equations.Therefore, study of the fractional Sturm-Liouville equation has also become a hot topic in recent years.Afarideh et al. [12] used the pseudospectral method and Chebyshev cardinal functions to solve the Caputo fractional Sturm-Liouville eigenvalue problems.Sadabad and Akbarfam [13] provided an efficient numerical method to estimate the eigenvalues and eigenfunctions of the fractional Sturm-Liouville equation.Allahverdiev et al. [14] obtained a completeness theorem of singular dissipative conformable fractional Sturm-Liouville operators.Goel et al. 
[15] probed the numerical calculation of mixed boundary value problems for the generalized fractional Sturm-Liouville system. Kumar and Mehra [16] adopted the wavelet method to solve the Sturm-Liouville fractional optimal control problem. In fact, there are many research achievements on fractional Langevin and Sturm-Liouville equations, and we will not elaborate further here. However, previous works in the literature have studied the two types of equations separately, and there is rarely a unified approach. Accordingly, it is novel and fascinating to unify Equations (1) and (2) for research purposes. To address the solvability and stability of Equations (1) and (2) together, we consider a general nonlinear impulsive coupled implicit system, denoted (3), which includes (1) and (2) as special cases; the other conditions are the same as in (1). Remark 2. When the boundary conditions are specialized accordingly, (3) becomes an impulsive implicit antiperiodic boundary value problem. The Hadamard fractional calculus proposed by Hadamard in 1892 [17] is a direct and effective extension of Riemann-Liouville (RL) fractional calculus. Its prominent feature is that the logarithmic kernel H(x, s) = (log(x/s))^(ϑ−1) replaces the polynomial kernel G(x, s) = (x − s)^(ϑ−1) of the RL-calculus definition. These two kernels share certain mathematical commonalities. For example, both are singular when 0 < ϑ < 1, that is, G(x, s) → ∞ and H(x, s) → ∞ as s → x. Some of their properties are nevertheless significantly different. As an important class of differential equations, Hadamard-type (H-type) fractional differential equations (FDEs) have received extensive and in-depth research in both theory and application, with fruitful results (see [18][19][20][21][22][23][24][25][26][27][28][29][30][31]). Until now, the exploration of various dynamic properties of H-type fractional differential equations has been a very lively research topic. For example, in [32], the authors discussed the logarithmic decay stability of an H-type fractional equation. Rao et al. [33] considered the problem of multiplicity of solutions for a mixed H-fractional Laplacian system. Zhao [34,35] considered the approximation and Hyers-Ulam-type stability of two classes of H-fractional boundary value problems. In [36,37], the authors studied the numerical calculation of H-fractional equations. Ortigueira et al. [38] explored the unification of H-calculus and RL-calculus. Dhawan et al. [39] applied the upper and lower solution method to analyze a neutral H-fractional equation. Ahmad et al. [40] investigated a coupled system of Hilfer-Hadamard fractional equations. Ben Makhlouf et al. [41] studied the existence, uniqueness, and averaging principle for Hadamard Ito-Doob stochastic delay fractional integral equations. Briefly, the properties, research approaches, and generalizations of the concept of the H-derivative, as well as the effects of delay, impulse, and random factors on H-fractional differential systems, have always attracted the attention of scholars. We further refer to [42][43][44][45][46][47].
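For convenience, the Caputo-Hadamard derivative referred to throughout can be recalled in its standard textbook form; the display below is the usual definition (the notation ϑ, δ and the base point a are the customary ones and are assumed here, since the paper's own display is not reproduced in this excerpt).

```latex
% Standard Caputo--Hadamard derivative of order \vartheta, n-1 < \vartheta \le n
% (textbook convention; notation assumed, not quoted from the paper itself).
\[
  {}^{CH}\!D^{\vartheta}_{a^{+}} u(x)
  = \frac{1}{\Gamma(n-\vartheta)}
    \int_{a}^{x} \Bigl(\log\frac{x}{s}\Bigr)^{\,n-\vartheta-1}
    \bigl(\delta^{n} u\bigr)(s)\,\frac{ds}{s},
  \qquad \delta = x\frac{d}{dx},
\]
\[
  \text{with logarithmic kernel } H(x,s)=\Bigl(\log\tfrac{x}{s}\Bigr)^{\vartheta-1}
  \text{ in place of the Riemann--Liouville kernel } G(x,s)=(x-s)^{\vartheta-1}.
\]
```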
Generally speaking, the study of implicit forms of differential equations is relatively more difficult than explicit forms.Therefore, the research achievements on implicit differential equations are also rarer than those on explicit differential equations.Only a small number of published papers deal with the solvability and stability of implicit Hadamard fractional differential equations (see [48][49][50][51][52][53][54][55][56]).Some academic researchers have applied the theory of coincidence degree to study the solvability of integer-order nonlinear functional differential equations and have achieved fruitful results (see [57][58][59][60][61][62][63][64][65][66][67][68]).In the theory of coincidence degree, the construction of relevant operators is highly skilled, which brings difficulties to the application of this method.Consequently, there are relatively few works [69][70][71][72][73] on the existence of solutions to fractional differential equations via coincidence degree theory. Owed to the aforementioned, it is fascinating and challenging to investigate the solvability of system (3) by coincidence degree theory.The highlights of this paper mainly comprise the following.(a) Our work enriches and fills the gap in the study of nonlocal boundary value problems for implicit and impulsive fractional coupled systems.(b) In the establishment of coincidence degree theory, we cleverly constructed and proved the complete continuity of the relevant operators for the first time in the study of impulsive fractional differential equations.(c) As an important application of our basic results, we obtained the solvability and stability of the Langevin system and Sturm-Liouville system. The remaining content of this paper is arranged as follows.Some necessary concepts and lemmas are stated in Section 2. Section 3 studies the existence, uniqueness, and stability of solutions to (3).Section 4 discusses the solvability and stability of the Langevin system (1) and Sturm-Liouville system (2), and gives an example to check the validity and availability of our basic findings.Finally, we provide a simple conclusion of research approaches, results, and significance in Section 5. Preliminaries This section mainly introduces some basic knowledge required for this article.We first state an important result of the coincidence theory for solving operator equations as follows. Lemma 1 (Mawhin [74]).Let E, F be Banach spaces, ∅ ̸ = Θ ⊂ E, a bounded open subset.If L : E → F is a 0-index Fredholm operator, and then there has to be at least X * ∈ Θ ∩ Dom L s.t.LX * = N (X * , 1) provided that the following is true: where Q, J : F → F are projected and homotopy, respectively. Next, we need to review the basic concepts and results of Caputo-Hadamard fractional calculus. To obtain a prior estimate of the solution to BVP (3), the following lemma is required. where , by Lemma 2 and (3), we have For x ∈ (x 1 , x 2 ], similar to (5), we obtain From ( 5) and ( 6) and the impulsive conditions of (3), we yield that In the same manner, for x ∈ (x l , x l+1 ], 2 ≤ l ≤ m, we obtain and In view of ( 7) and ( 9), we derive that, for 1 It follows from the boundary value conditions in (3) that From ( 10) and (11), we have From ( 5), (8), and ( 12), we gain the integral Equation ( 4).The proof is completed. Solvability and Stability of (3) In this section, we first establish the theory of coincidence degree for BVP (3), and apply Lemma 1 to explore its solvability.Let Define some norms as follows: where Lemma 4. 
L defined by ( 13) is a 0-index Fredholm operator. Proof.L is obviously linear.The kernel of L, Ker L is defined by From ( 13) and (15), it is similar to Lemma 3 that and We derive from ( 16) and ( 17) Obviously, Im L ⊂ Y.For all V ∈ Y, Similar to Lemma 3, Equation ( 19) allows a unique solution U * = (U * 1 , U * 2 ) T as follows: where Based on the definition of 0-index Fredholm operator, we know that Lemma 4 is true.The proof is completed.P : X → X is defined by Obviously, P 2 = P and Ker P = X.Noticing that Ker L is zero space, we yield that Im P = Ker L and X = Ker L ⊕ Ker P. Therefore, L| Dom L∩Ker P : X = Dom L ∩ Ker P → Im L = Y; there exists an inverse operator K P .For each V ∈ Y, K P V = (U * 1 (x), U * 2 (x)) T ∈ X is defined as (19) and (20).Define then . Proof.For all bounded subsets Θ ⊂ X, it suffices to prove that K P (I − Q)N (Θ) is relatively compact.Indeed, it follows from the continuity of F i , I il and which means that K P (I − Q)N (Θ) is relatively compact.The proof is completed. where B > 0 is a constant and U = (U 1 , U 2 ) ∈ X is any solution of the following inequality Theorem 2. BVP (3) is HU-stable provided that the conditions (A1)-(A4) are true. Solvability and Stability of (1) and (2) In this section, we apply our main methods and results to discuss the existence, uniqueness, and HU-stability of solutions for the Langevin system (1) and Sturm-Liouville system (2).Theorem 3. The Langevin Equation (1) has a unique solution in PC[α, β] which is HU-stable, provided that the following conditions (A ′ 1)-(A ′ 4) are fulfilled. ; then, the Langevin system (1) becomes Therefore, the solvability of the Langevin system (1) and BVP (60) is equivalent.It suffices to discuss the existence of solutions for BVP (60).Indeed, let x l U 2 (x)); then, BVP ( 60) is transformed into the form of (3).Condition (A ′ 1) and Condition (A1) correspond exactly.From (A ′ 2) and (A ′ 3), a simple calculation provides that Substituting these values into Condition (A4) yields Condition (A ′ 4).From Theorems 1 and 2, we declare that BVP (60) has a unique solution in X which is HU-stable.The proof is completed. So it suffices to discuss the existence of solutions for BVP (61).In fact, let 61) is transformed into the form of (3).Condition (A ′′ 1) and Condition (A1) correspond exactly.From (A ′′ 2) and (A ′′ 3), a simple computation gives that Substituting these values into Condition (A4) yields Condition (A ′′ 4).From Theorems 1 and 2, we declare that BVP (61) has a unique solution in X which is HU-stable.The proof is completed. To illustrate the availability and correctness of Theorem 1, we provide an example of the three-point boundary value problem with two impulse points as follows. Example 1.Consider the following nonlinear impulsive coupled implicit system where R). Therefore, the condition (A1) holds.We perform a simple calculation to yield that ∂F . Hence, we derive that Thus, the conditions (A2)-(A4) are also satisfied.From Theorems 1 and 2, we conclude that system (60) admits a unique solution, which is HU-stable. Conclusions This section first provides further analysis and then discussion of our main results.In Theorems 1-4, the most important condition is that 0 < ρ 1 , ρ 2 , ρ ′ 1 , ρ ′ 2 , ρ ′′ 1 , ρ ′′ 2 < 1.This condition is determined by the response functions F 1 , F 2 , pulse functions I l , J l , impulsive points x l , boundary value points α, β, ξ l , and coefficients a l , b l together, l = 1, 2, . . 
., m. The more complex calculation in verifying this condition concerns the requirement x k−1 θ > 1, so the condition 0 < ρ 1 , ρ 2 , ρ ′ 1 , ρ ′ 2 , ρ ′′ 1 , ρ ′′ 2 < 1 is more difficult to satisfy. This requires controlling the values of L ij , M ij , N ij , a l and b l , i, j = 1, 2; l = 1, 2, . . ., m. In addition, since this paper considers CH-fractional differential equations with certain singularities, the ODE toolboxes in MATLAB cannot be applied directly in numerical simulations. This requires the design of new numerical algorithms, which is also one of our future research directions. Next, we make a brief summary. Hadamard fractional calculus in the Caputo sense is an important type of fractional calculus, generalizing RL-fractional calculus in the Caputo sense. CH-fractional differential equations are used to solve many practical problems and have become a popular object of study for many academic researchers. There have been some good results in the study of explicit CH-fractional systems. However, studies on the solvability and stability of CH-fractional coupled implicit systems under impulsive influence are relatively rare because it is difficult to estimate the existence region of the solution. Additionally, the theory of coincidence degree is an important route to establishing the existence of solutions to nonlinear differential equations. In this paper, we creatively establish a coincidence degree framework for system (3) with impulsive effects and prove the existence of a solution. Simultaneously, our main results are applied to establish the solvability of the Langevin system (1) and the Sturm-Liouville system (2). Our research objects and findings enrich the theory of CH-fractional differential equations. Our approach also provides a paradigm for uniformly solving such problems.
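On the numerical remark above: because the logarithmic kernel is singular, off-the-shelf ODE solvers do not apply, and a kernel-aware quadrature is needed. The sketch below is a generic explicit product-rectangle (fractional Euler) scheme for a scalar Caputo-Hadamard initial value problem of order 0 < θ < 1; it is offered only as an illustration of the kind of algorithm meant, not as the authors' method, and the test problem and grid are assumptions.

```python
import math

def ch_fode_euler(f, a, b, u0, theta, n_steps=200):
    """
    Explicit product-rectangle scheme for the scalar IVP
        CH-D^theta u(x) = f(x, u(x)),  u(a) = u0,  0 < theta < 1,  a > 0,
    via the equivalent Volterra equation with logarithmic kernel.
    Generic illustration only; not the (unspecified) algorithm of the paper.
    """
    xs = [a * (b / a) ** (k / n_steps) for k in range(n_steps + 1)]  # log-spaced grid
    us = [u0]
    g = math.gamma(theta)
    for n in range(1, n_steps + 1):
        acc = 0.0
        for j in range(n):
            # exact integral of the kernel over [x_j, x_{j+1}] with f frozen at x_j
            w = ((math.log(xs[n] / xs[j])) ** theta
                 - (math.log(xs[n] / xs[j + 1])) ** theta) / theta
            acc += w * f(xs[j], us[j])
        us.append(u0 + acc / g)
    return xs, us

if __name__ == "__main__":
    # Test problem with known solution: CH-D^theta u = 1  =>
    # u(x) = u0 + (log(x/a))^theta / Gamma(theta + 1)
    theta, a, b, u0 = 0.5, 1.0, math.e, 0.0
    xs, us = ch_fode_euler(lambda x, u: 1.0, a, b, u0, theta)
    exact = math.log(b / a) ** theta / math.gamma(theta + 1)
    print(f"numerical u(e) = {us[-1]:.4f}, exact = {exact:.4f}")
```

For this constant right-hand side the product rule integrates the kernel exactly, so the printed values coincide; for genuinely nonlinear right-hand sides the scheme is first-order accurate and serves only as a baseline.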
3,365.6
2024-02-13T00:00:00.000
[ "Mathematics" ]
Modeling human retinoblastoma using embryonic stem cell-derived retinal organoids Summary Retinoblastoma (Rb) is the most prevalent intraocular malignancy in early childhood. Traditional models are unable to accurately recapitulate the origin and development of human Rb. Here, we present a protocol to establish a novel human Rb organoid (hRBO) model derived from genetically engineered human embryonic stem cells (hESCs). This hRBO model exhibits properties highly consistent with human primary Rb and can be used effectively for dissecting the origination and pathogenesis of Rb as well as for screening of potential therapies. For complete details on the use and execution of this protocol, please refer to Liu et al. (2020). MATERIALS AND EQUIPMENT Aliquoting and plating Matrigel At least 1 day before aliquoting, thaw a bottle of Growth Factor Reduced Matrigel (7-10 mg/mL, Corning) on ice at 4°C. Aliquot 2 mg of Matrigel into each pre-chilled 1.5-mL microcentrifuge tube using pre-chilled tips, all on ice. Immediately freeze the Matrigel aliquots at −20°C or −80°C. Aliquots should be stored at −80°C and are stable for at least 1 year. CRITICAL: Avoid multiple freeze-thaw cycles. Keep the Matrigel on ice during the entire aliquoting and plating process to prevent it from solidifying. Pipette tips and tubes should also be pre-chilled before use. Use 12 mL of ice-cold DMEM/F12 medium to thaw and resuspend each Matrigel aliquot (final concentration ~0.16 mg/mL), which can be used to coat two 6-well plates. Mix the Matrigel well, add 1 mL of diluted Matrigel into each well of the 6-well plates (250 µL into each well of a 24-well plate), and ensure that the entire surface of each well is covered. Incubate the plate at 37°C for 60 min, or at 4°C for 12 h. CRITICAL: Matrigel should be removed from the freezer right before the experiment and should still be frozen when the DMEM/F12 medium is added. Note: The plates are incubated with Matrigel at 37°C for at least 30 min, but do not exceed 90 min. Minimize the amount of time that the coated plates are exposed to air; drying can damage the Matrigel coating. Matrigel-coated plates can be stored at 4°C for 2 weeks. 10 mM ROCK inhibitor Y-27632 Dissolve ROCK inhibitor (Y-27632) in sterile DMSO or H2O to a final concentration of 10 mM (1,000×), then aliquot and store it at −80°C. The solution is stable for at least 1 year. 0.1 M Taurine Dissolve 0.125 g taurine in 10 mL sterile H2O and sterilize by filtration through a 0.22 µm filter. This makes a 0.1 M (1,000×) stock that can be divided into 568 µL aliquots and stored at −20°C for 1 year. 5 mg/mL DNase I Reconstitute lyophilized DNase I (100 mg) in 20 mL sterile H2O. Aliquot and store at −20°C for up to 6 months. Avoid freeze/thaw cycles. 100 mM Retinoic acid Dissolve retinoic acid at 30 mg/mL in DMSO to obtain a master stock solution (100 mM), aliquot (vortexing may be needed), and store in light-protected vials at −80°C. The master stock is diluted 20× to give the subsequent stock solution (5 mM) in DMSO, which is stored at −20°C for up to 2 weeks. Note: Retinoic acid is more sensitive to light, heat, and air in solution. Protect it from light, heat, and air. 55 µg/mL recombinant human BMP4 (hBMP4) For a stock solution, reconstitute at 55 µg/mL in sterile 4 mM HCl containing at least 0.1% bovine serum albumin, aliquot, and store at −20°C for up to 3 months. hBMP4 can be stored at 4°C for up to 2 weeks once it is thawed. Note: Store the stock solution in a manual defrost freezer and avoid repeated freeze-thaw cycles.
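The stock recipes above can be sanity-checked with simple dilution arithmetic. The sketch below is illustrative only: the working concentrations follow from the stated 1,000× dilution factors and from the 55 ng/mL hBMP4 dose used later in the protocol, so they are inferred values rather than numbers quoted verbatim in this section.

```python
# Quick arithmetic check of the stock solutions and their working concentrations.

def working_conc(stock, fold_dilution):
    """Concentration after diluting a stock by the given factor."""
    return stock / fold_dilution

stocks = {
    # name: (stock concentration, unit, fold dilution into medium)
    "Y-27632": (10_000, "uM", 1_000),     # 10 mM stock -> 10 uM working
    "Taurine": (100_000, "uM", 1_000),    # 0.1 M stock -> 100 uM working
    "hBMP4":   (55_000, "ng/mL", 1_000),  # 55 ug/mL stock -> 55 ng/mL working
}

for name, (stock, unit, fold) in stocks.items():
    print(f"{name:8s}: {working_conc(stock, fold):,.0f} {unit} working concentration")

# Matrigel coating: one 2 mg aliquot resuspended in 12 mL DMEM/F12
print(f"Matrigel : {2 / 12:.2f} mg/mL coating concentration (protocol quotes ~0.16 mg/mL)")
```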
hESC culture medium hESC cryopreservation medium (23) Add 2 mL of DMSO into 8 mL of TeSR-E8 to make 23 Cryopreservation Medium. Medium can be stored at 4 C for up to 1 week. Note: The dosage of Monothioglycerol must be particularly accurate, excess will affect cell aggregation. Store up to 2 weeks at 4 C. Protect from light. Note: After preparation, the differentiation medium should be stored at 4 C and used within 2 weeks. Note: Filter the medium before adding ethylene glycol and DMSO. Store up to 2 weeks at 4 C. Protect from light. STEP-BY-STEP METHOD DETAILS Thawing hESCs Timing: 30 min 1. Prepare a Matrigel-coated 6-well plate (refer to Aliquoting and Plating Matrigel), and keep the hESC culture medium ready, which has been warmed to 22 C-25 C. 2. Remove the cryopreserved hESC vial from the liquid nitrogen storage tank, transfer to the 37 C water bath to thaw quickly. 3. When most the contents are thawed, slowly transfer it to 15-mL tube, and then add 5 mL of hESC culture medium in a drop wise manner. Gently shake the tube to mix the cells. Centrifuge at 200 3 g for 5 min at 22 C-25 C. 4. Carefully aspirate and discard the supernatant, resuspend the cell pellet in 2 mL hESC culture medium supplemented with 10 mM Y-27632. 5. Completely aspirate Matrigel from one well of precoated 6-well plate, add the cell suspension into the well immediately, Place the plate into 37 C, 5% CO 2 incubator, gently shake to evenly distribute the cells. Note: ROCK Inhibitor Y-27632 can markedly diminish dissociation-induced apoptosis of hESCs, enhance colony formation of dissociated hESCs after passaging. CRITICAL: Do not break apart the colonies too much by excess pipetting. See Figure 2E for optimal aggregate size. Feeding and passaging hESCs Timing: 60 min 6. The next day after thawing hESCs, prewarm DPBS and hESC culture medium to 20 C-25 C, remove the spent medium with debris, wash the cells once with 1 mL prewarmed DPBS, and then add 2 mL fresh and prewarmed hESC culture medium without Y-27632 for continue feeding. Refresh culture medium daily until cells require passaging. CRITICAL: Cell morphology is monitored under the inverted microscope. Healthy undifferentiated hESCs display round colony morphology with high cell density and clear boundary (Figures 2A and 2B), while the differentiated cells exhibit changed morphology (e.g., enlarged cells and cell-cell spacing, fibroblast-like morphology) ( Figure 2C). Scrape off large differentiated colonies with a P1000 pipette tip by visual recognition. Small differentiated colonies will spontaneously disappear upon passaging. 7. Proceed to passage when the hESC colonies are becoming too large or reaching 60%-80% confluent (approx. 3-5 days). Note: For optimal results, cells should be approximately 60%-80% confluent after 3-5 days in culture ( Figure 2B). If cells do not reach this confluency, adjust the timing to start the differentiation later. 8. Prewarm hESC culture medium, DPBS and EDTA Dissociation Buffer to 20 C-25 C. Prepare a Matrigel-coated 6-well plate (refer to Aliquoting and Plating Matrigel), aspirate the Matrigel from the well, and then add 2 mL of hESC culture medium supplemented with 10 mM Y-27632 per well. 9. Aspirate the spent medium from the hESCs to be passaged. Rinse the cells with 2 mL DPBS and EDTA solution sequentially, and then add 1 mL the EDTA solution to each well. 10. Incubate for 2-5 min at 22 C-25 C within the hood. Remove the EDTA solution carefully without disturbing the attached cell layer. 
Use 2 mL of hESC culture medium with 10 µM Y-27632 to wash the colonies off the plate and dissociate cells by gently pipetting up and down three times. CRITICAL: The longer the hESCs are incubated in the EDTA solution, the smaller the resulting colonies will be. Keep the movement of the plate to a minimum to avoid lifting the colonies completely off the plate during incubation ( Figure 2D). Note: Do not break apart the colonies too much; avoid excessive pipetting as hESCs are very sensitive ( Figure 2E). 11. Transfer the desired cell amount per well (the splitting ratio is about 1:6-1:10) into the readied Matrigel-coated plates. 12. Shake the plates back and forth and side to side to distribute the cells, and leave them in the incubator for 12 h to allow maximum cell attachment. 13. On the next day, remove the culture medium from the cells, rinse the cells with 1 mL prewarmed DPBS, and then add 2 mL of prewarmed hESC culture medium (without Y-27632) to each well of the 6-well plate. 14. On each following day, repeat step 13 to change the culture medium, and monitor cells daily. Note: Prolonged culture (exceeding 48 h) in Y-27632-conditioned culture medium will cause significant irreversible changes in cell morphology and may thus affect the state of the hESCs. The conditioned medium should be replaced by hESC culture medium (without Y-27632) the day after thawing or passaging. CRITICAL: Stem cell passage number always has an impact on subsequent applications; we recommend using hESCs between passages 30 and 60. For subsequent applications, it is recommended to use hESCs that have been passaged at least twice after thawing to ensure the cells have returned to normal status. Optional: Pretreatment with 10 µM Y-27632 in hESC culture medium for 2 h before single-cell plating can effectively enhance the survival rate of hESCs (Liu et al., 2016). Alternatives: Other commercial reagents (e.g., ACCUTASE) for single-cell dissociation can be used as a substitute. 16. Using a P1000 pipette tip, gently pipette the TrypLE Select solution in the well up and down three times to make a single-cell suspension. CRITICAL: Avoid excessive pipetting as hESCs are very sensitive. Monitor dissociation under a bright-field microscope to ensure that cells are dissociated into single cells. If some cell lines appear to be more difficult to dissociate, the dissociation time can be extended to 10 min. 17. Transfer the single-cell suspension to a 15-mL centrifuge tube containing 5 mL of hESC culture medium. 18. Count the cell density using a Handheld Automated Cell Counter (Millipore). Meanwhile, centrifuge the cell suspension at 200 × g for 5 min. 19. Remove the supernatant and re-suspend approximately 2 × 10⁶ cells in nucleofector solution prepared according to the manufacturer's protocol by mixing 82 µL P3 primary cell solution and 18 µL supplement 1 (Lonza), mixed with 5 µg of plasmid cocktail, including 2.5 µg of Note: Avoid trapping bubbles when transferring, as it may impact the efficiency of electroporation. Alternatives: Other programs or Nucleofector equipment for hESC electroporation can be optimized or tested to achieve the best transfection efficiency. 21. Following nucleofection, gently transfer the cells into Matrigel-coated plates containing hESC culture medium supplemented with 10 µM Y-27632, and culture them in a 37°C, 5% CO2 incubator. 22. 48 h after electroporation, treat cells with 2 µg/mL puromycin (Gene Operation) for about 7 days.
After puromycin selection, the surviving clones will appear and should be ready for picking, expansion, and further genotyping. 23. For picking clones, prepare a Matrigel-coated 24-well plate (refer to Aliquoting and Plating Matrigel). After coating, replace the Matrigel with 0.5 mL of hESC culture medium supplemented with 10 µM Y-27632 per well. 24. Find cell colonies under the microscope. Using a smaller-gauge needle, cross-hatch the colony so that it will come off the plate in smaller pieces ( Figure 3A). Use a P1000 pipette tip to push the colony off the plate and draw it into the pipette tip. Transfer the colony pieces into one well of the 24-well plate. Note: Pick the colonies under a normal inverted microscope. The microscope can be placed inside a Class II biological hood to allow for a sterile field while picking colonies. Optional: Ideally, the resistance gene (puro) in RB1-mutated hESCs can be further removed using the Cre/LoxP system. CRITICAL: Only biallelic RB1-mutated or knockout hESCs with loss of RB protein function can be used effectively to generate Rb organoids. Characterization of RB1 Mut/Mut hESC lines Timing: 5-7 days With the knock-in of the nonsense mutation p.R320X, the generated RB1 Mut/Mut hESC lines should lack expression of the RB protein (pRB) and sustain the primordial state without changes in pluripotency or genetic integrity. Note: The plating density affects cell viability and differentiation efficiency. We recommend a cell density of 12,000 live cells/well. Approximately 144 × 10⁴ cells are needed per 96-well low-attachment V-bottom plate, re-suspended in 12 mL of differentiation medium I supplemented with 20 µM Y-27632. Plate 100 µL of cell suspension into each well of a 96-well low-attachment V-bottom plate. CRITICAL: 96-well low-attachment V-bottom plates are needed for rapid cell reaggregation ( Figure 4A). Y-27632 can also significantly increase cell reaggregation. 34. Put the plates on a spiral mixer device and shake for 10 min at 60-80 rpm to allow cells to gather at the bottom of the wells. 35. Place the plates into a 37°C, 5% CO2 incubator. 36. On day 6, take out the plate and tilt it slightly; use an 8-channel P100 pipette to carefully aspirate the medium from each well, leaving EB-like aggregates at the bottom of the wells. Aggregates in each well are visible with the naked eye ( Figure 4B). 37. Quickly add 100 µL of fresh differentiation medium I containing 55 ng/mL hBMP4 to each well to re-suspend the aggregates, and incubate at 37°C, 5% CO2 for 3 days. 38. On day 9, carefully remove 50 µL of medium from each well and add an equal volume of fresh differentiation medium I (without hBMP4) back to each well for 3 additional days ( Figure 4C). 39. On day 12/day 15, perform a routine half medium change as described in step 38. Note: Optimal EBs should have a round and smooth morphology; the anterior neuroepithelium develops on the outer surface and is quite optically translucent ( Figure 4D). Suboptimal EBs usually exhibit an uneven surface with dead or unhealthy cells attached, fail to form neuroepithelium, and will eventually form cystic organoids ( Figure 4D). CRITICAL: Timed hBMP4 treatment is critical for differentiation induction. Human BMP4 is added to the culture at 55 ng/mL on day 6, and its concentration is then halved with a half medium change every 3 days ( Figure 4C). Early stage of retinal differentiation Timing: 6 weeks 40.
On day 18, take out the plate and gently tap from both sides to make the aggregates (organoids) free-floating. Organoids can be visible with the naked eye ( Figure 4B). Using a wide-mouth P1000 pipette tip (simply cut the tip with scissors), transfer the organoids from each well into a 15-mL conical tube. CRITICAL: To avoid the damage of aggregates, use wide-mouth pipette tips to transfer aggregates. Gentle operation is also required for the transfer process. 41. Let the organoids to settle at the bottom of the tube (approx. 1-2 min) ( Figure 4B) and carefully remove the supernatant from the top. 42. Add 5 mL differentiation medium II to wash the organoids, let the organoids settle to the bottom again and then remove excess differentiation medium II. 43. Carefully resuspend the organoids with 10 mL differentiation medium II and transfer into a low attachment 9-cm Petri dish using a 10-mL pipette. 44. Manually trisect the organoids using a V-Lance Knife (Alcon Surgical) under the inverted microscope in a Class II biological hood (Methods video S1). Add 10 mL of additional differentiation medium II to the Petri dish. CRITICAL: Turn off the light of the hood to avoid isomerization of retinoic acid in differentiation medium II. 45. Gently shake the dish and incubate at 37 C, 5% CO 2 /40% O 2 for continue induction. 46. Refresh the differentiation medium II every 7 days (Methods video S2). a. For media refreshing, rotate the dish to gather the organoids in the center. b. Carefully aspirating the medium from the surrounding using vacuum aspiration system. c. Aspirate as much media as possible without disturbing organoids. d. Replace with 20 mL fresh differentiation medium II. 47. Gently swirl the Petri dish every day to avoid the organoids adhere to the bottom. 48. On day 20, take out the dish, remove the badly differentiated organoids using a 10-mL pipette, and separate the fused organoids using a V-Lance Knife under the inverted microscope. Place the dish into incubator for further culture. Note: The state and morphology of organoids can monitor in real time using an inverted microscope ( Figures 5A and 5B). Organoids in poor condition tend to adhere to the bottom of Petri dish ( Figure 5A), pipette the adhered organoids with the medium to resuspend it, or remove it using a 10-mL pipette. Alternatives: When organoids are culture in suspension in a Petri dish, half medium can be changed every 5 days. 49. On day 30, repeat step 48 to remove the badly differentiated organoids, separate the fused organoids under the inverted microscope ( Figure 5A). 50. Using a 10-mL pipette, gently distribute the organoids into multiple dishes, with no more than 30 organoids in each dish. Return the dishes to 37 C, 5% CO 2 /40% O 2 incubator, change the medium every 7 days as described in step 46. CRITICAL: Do not put too many organoids in a single plate, which is not conducive to longterm culture. Note: There was no observable difference between Rb and retinal organoids before differentiation day 45 ( Figures 5A and 5B). Tumorigenesis and long-term culture of human Rb organoids Timing: 5 weeks 51. On day 60, tumor-like ''primary foci'', with the obvious uneven density inside and ill-defined edge, are visible in a minority of retinal organoids (named as human Rb organoids) under the microscope ( Figure 5B). It will expand rapidly from the masses thereafter ( Figures 5B and 5C). 52. Perform media change routinely every 5-7 days as described in step 46. 53. 
On day 75, obvious tumor-like ''primary foci'' structures can be seen in most organoids (Figure 5B). 54. Using a 10-mL pipette, gently distribute the organoids into multiple dishes, with no more than 15 organoids in each dish, refreshing the medium every 5 days as described in step 46. CRITICAL: The tumor structures are relatively loose and easy to detach from the organoids. Be careful when changing the medium or collecting samples to avoid damage of Rb organoids ( Figure 5B). 55. After day 90, the tumor-like ''primary foci'' structures will wrap around the entire mass (Figure 5B), and excessively proliferating Rb cells can migrate into the medium in suspension conditions ( Figure 5D). 56. Gently trisect the larger organoids using a V-Lance Knife under the inverted microscope for long-term culture. Note: Avoid organoids from growing too large, which will cause nutrient deficiency in the cells inside the organoids. Characterization and tumor growth monitoring of human Rb organoids Timing: 5-7 days For the characterization of human Rb Organoids, they can be cryopreserved, sectioned, and immunocytochemically stained for the expression of proliferative or Rb marker, such as Ki67, SYK, DEK and p16 INK4a . OPEN ACCESS STAR Protocols 2, 100444, June 18, 2021 58. Carefully aspirate the supernatant, wash the organoids three times with 1 mL DPBS, then add 0.5 mL 4% paraformaldehyde (PFA) solution to fix them at 37 C for 1 h. Wash the organoids three times with PBS. 59. Stain the organoids with Ponceau S solution at 37 C for 5 min, and then wash them with NEG-50 Frozen Section Medium. 60. Carefully embed the organoids into a disposable embedding mold containing NEG-50 Frozen Section Medium. 61. Quickly place the molds into À80 C refrigerator for 20-30 min to freeze the organoids, and then cryosection at a thickness of 12-16 mm on slides using a cryostat. Pause point: Once fixed and embedded, organoids can be stored at À80 C for more than 12 months. After cryosection, the sections can subject to immunocytochemistry or storage at À80 C for up to 12 months 62. For immunocytochemistry, wash the sections three times for 10 min with PBS, block and permeabilize the sections in 4% BSA with 0.5% Triton X-100 for 1 h at 22 C-25 C. 63. Dilute the primary antibody in PBS with 1% BSA with 0.5% Triton X-100. 64. Aspirate the spent blocking buffer of the sections, and incubate with appropriate dilutions (1:100-1:400) of primary antibody at 4 C for 12 h. Note: Different primary antibodies may require specific immunostaining conditions, please refer to the manufacturer's instructions. 65. Dilute the secondary antibody (1:400) in PBS with 1% BSA with 0.5% Triton X-100. 66. Aspirate the primary antibody dilution, wash three times for 10 min with PBS, add 100-200 mL secondary antibody dilution, and incubate for 1 h in the dark at 22 C-25 C. 67. Aspirate the secondary antibody dilution and wash once with PBS. Add 300 nM DAPI nuclear stain solution and incubate for 5-10 min in the dark at 22 C-25 C. 68. Remove the solution and wash the sections three times with PBS. 69. Using absorbent paper absorb moisture around the sections, add 10 mL antifade mounting medium, and gently cover the cover slips. Seal the edges of the coverslip using transparent nail varnish and dry for 3 min. Store the slides at 4 C in the dark. 70. The stained sections can be visualized using confocal microscopy. 
Cryopreservation of human Rb organoids Timing: 1.5 h Organoids are typically cryopreserved intact at the relatively early stage (day 30-50) and can be used for further applications after thawing. 71. During days 30-50, take out the dishes, carefully collect the organoids using a 10-mL pipette and transfer into a 15-mL tube containing 1 mL differentiation medium II. 72. Place the tube on ice for 10 min. 73. Remove the supernatant from the top, replace with 1 mL pre-chilled cryopreservation pretreatment solution, and incubate on ice for 10-15 min. 74. Remove the Pretreatment Solution, add pre-chilled organoid cryopreservation medium to resuspend the organoids. 75. Transfer 200 mL of the cryopreservation medium with 15-20 organoids to a labeled 1.5-mL cryovial. 76. Directly frozen the vials in liquid nitrogen for storage. OPEN ACCESS CRITICAL: Do not use narrow-mouth pipette tip to collect or transfer the organoids, which will seriously affect the survival efficiency of frozen organoids. Note: Human Rb organoids can be cryopreserved for at least one year using this protocol. 77. For organoid thawing, take out the stock vials from liquid nitrogen, add 1 mL prewarmed (20 C-25 C) differentiation medium II to thaw the frozen human Rb organoids quickly. 78. Remove the supernatant, and then wash twice with differentiation medium II. 79. Transfer the organoids to a new Petri dish using a 10-mL pipette, add 15 mL of differentiation medium II for post-maintenance culture. Note: Five days after thawing, neural retina-like structures are clearly visible in the organoids that are alive and in good condition ( Figure 5E). In our experiences, the survival rate of Rb organoids after thawing is ranges between 60%-80%. EXPECTED OUTCOMES As a genetically related malignancy, Rb is caused by RB1 mutations. This protocol describes an efficient method to generate an in-dish human Rb organoid system from the genetically engineered embryonic stem cells with a biallelic RB1-mutation (RB1 Mut/Mut ). The key to establish this Rb organoid system is to generate gene-edited human embryonic stem cells (hESCs) with a biallelic RB1-mutation (RB1 Mut/Mut ) at the first stage. In our experience, only biallelic RB1-mutated hESCs with the loss of RB protein function can be used effectively to generate human Rb organoids. Using this protocol, the efficiency of homozygous RB1-mutation knock in clones is about 10%-15% (5 out of 40 in puro-resistant clones). It takes 2-3 months for generation of these biallelic RB1-mutated hESCs. We have identified that healthy RB1 Mut/Mut hESC lines sustains the primordial state without changing pluripotency, genetic integrity as well as the cell cycle of the hESC lines . RB1 Mut/Mut hESC lines can further differentiate into human Rb organoids in a stepwise manner. Like normal retinal organoid derived from WT hESCs (Kuwahara et al., 2015;Jin et al., 2019;Pan et al., 2020), the morphogenetic and molecular properties of RB1 Mut/Mut hESC-derived Rb organoids recapitulate the developing human retina normally before the differentiation day 60, i.e., multilayered neural retina (NR) containing all retinal cell types . Remarkably, tumor-like ''primary foci'' were clearly visible in Rb organoids at day 60-75( Figure 5B). These tumor-like structures exhibited an obvious uneven density inside, ill-defined edge and larger size ( Figures 5B-5E). And these tumor-like foci expanded rapidly from the masses thereafter ( Figure 5C). 
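The ~10%-15% homozygous knock-in rate quoted above (5 of 40 puromycin-resistant clones) can also be turned into a rough planning estimate of how many clones to pick. The sketch below is only an illustration; it assumes that clones behave independently, which is a simplification and not a claim made in the protocol.

```python
import math

# Planning aid based on the ~10-15% homozygous knock-in rate quoted above.
# Assumes independent clones (an illustrative simplification).

def p_at_least_one(p_clone, n_clones):
    """Probability of >=1 biallelic RB1-mutant clone among n picked clones."""
    return 1.0 - (1.0 - p_clone) ** n_clones

def clones_needed(p_clone, target=0.95):
    """Smallest number of clones giving >= target probability of success."""
    return math.ceil(math.log(1.0 - target) / math.log(1.0 - p_clone))

for p in (0.10, 0.125, 0.15):
    print(f"p = {p:.3f}: pick {clones_needed(p)} clones for >=95% chance "
          f"(40 clones -> {p_at_least_one(p, 40):.3f})")
```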
At this early stage (day 60-75, referring to Rb organoids at the onset of tumorigenesis), the derived organoids can be analyzed for Rb characteristics (e.g., molecular signatures, histological features, tumorigenicity in vivo) and applied in further experiments . After ages of day 90, the derived Rb organoids with significant tumorigenesis were relatively ''mature'' and homogeneous ( Figure 5B). And the tumor-like ''primary foci'' structures wrapped around the entire organoids, and excessively proliferating Rb cells can migrate into the medium in suspension conditions ( Figures 5B and 5D). As described in our original paper , these Rb organoids exhibit properties highly consistent with Rb tumorigenesis, transcriptome, and genome-wide methylation. This organoid system offers an innovative, convenient, and yet elegant model that can be used efficiently and effectively for dissecting the origination of tumor cells and mechanisms of Rb tumorigenesis as well as for screening of novel therapies in terms of efficacy and safety. LIMITATIONS One limitation of this protocol is that differentiation efficiency varies substantially by the state (including the pluripotency, self-renewal ability and differentiation capability) of the cultured hESCs and their derivatives, which might be attributed to genetic background, frozen batches, passages, and culture conditions. In addition, given the loss of RB1 function in the genetically engineered hESCs at the beginning of differentiation, all cells in the Rb organoids derived from these hESCs will lose RB1 function, which will also lead to significant decrease of normal retinal cells in this model system, which may be not suitable for studying tumor microenvironment, invasion or metastasis. TROUBLESHOOTING Problem 1 Failure in establishing RB1 Mut/Mut hESCs with the point mutation of c.958C>T; p.R320X. Potential solution Be sure that the constructed guide-carrying and targeting vector are effective. Before transfecting hESCs, we recommend verifying their effectiveness on 293T cells. In addition, hESCs are difficult to transfect (Liu et al., 2016), and efficient transfection is the key to successfully obtaining the gene-edited hESCs, so the transfection efficiency needs to be improved as much as possible. We routinely get 50%-60% of transfection efficiency. Due to the difference in cell lines and experimental conditions, the recommended program of nucleofection in this protocol can be appropriately adjusted to achieve efficient transfection. As an alternative, hESCs with a biallelic RB1 knockout (RB1 À/À ) can be generated and further used to establish the human Rb organoids, as we demonstrated in our recent publication . Problem 2 Low efficiency to form embryoid bodies. Potential solution Adjust cell state of hESCs and their derivatives. The efficiency of EB formation can be seriously impacted by the state of the cultured hESCs and their derivatives. Adjust the cell state before single-cell seeding by optimizing the culture conditions for expansion, transduction, passage, cryopreservation, and recovery, or changing the culture medium. We achieve best results using hESC culture medium, TeSR-E8 (Stem cell Technologies). Optimize Single-cell Seeding. ROCK inhibitor (Y-27632) is described as required for single-cell passaging (10 mM) and seeding (20 mM) in this protocol. ROCK inhibitor can significantly improve the survival rate of single cells and greatly increase cell reaggregation. 
In addition, 96-well low-attachment V-bottom plates are critical for rapid cell reaggregation. PrimeSurface 96-well low-attachment V-bottom plates (Sumitomo Bakelite) do work well in this protocol and are recommended. Lastly, plating density should be accurately controlled, which also affect the formation of EBs and subsequent differentiation (Jin and Takahashi, 2012). A low number of hESCs per well leads to small embryoid bodies with insufficient differentiation capacity, while an excessive number of cells per well results in large embryoid bodies with apoptotic cells. The recommended planting density (12,000 live cells/well) is suitable for most of the stem cell lines (Deng et al., 2018;Pan et al., 2020). For different cell lines, optimization may be needed to obtain optimal plating density if the efficiency of EB formation is still low after several attempts. Problem 3 No tumorigenesis happens in the organoids. Potential solution Tumorigenesis in the derived Rb organoids usually occurs from day 60 to 75. A possible explanation for the lack of visible tumor might be RB1 heterozygous mutation or mutation loss in genetically engineered hESCs or its derived organoids. Only biallelic RB1-mutated or knockout hESCs with the loss of RB protein function can be used effectively to generate Rb tumor. Confirm the expression of RB protein (pRB) is absent in RB1 Mut/Mut hESCs and its derived organoids. Another possible reason is that retinal organoids induction fails in the early stage of differentiation. Make sure the cellular compositions and morphogenetic properties of the early-stage organoids can recapitulate the developing human retina. If not, modify differentiation conditions or avoid damage to organoid integrity during induction to generate the healthy early-stage organoids containing all major retinal cell types, especially cone precursors, which are the cell-of-origin of Rb Singh et al., 2018). Spontaneously degenerating retinal organoids sometimes show morphology similar to Rb organoids, but they have essential differences. For example, due to the abnormal proliferation of tumor cells and tumor foci, Rb organoids have obvious proliferation and expansion trends, while the degenerating organoids lacks this tendency. This results in Rb organoids being generally larger than degenerating organoids. Of course, by identifying the expression of tumor markers, it is easy to distinguish these two types of organoids . RESOURCE AVAILABILITY Lead contact Further information and requests for resources and reagents should be directed to and will be fulfilled by the lead contact, Zi-Bing Jin (jinzibing@foxmail.com). Materials availability This study did not generate new unique reagents. Data and code availability This study did not generate any unique datasets or code.
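The plating-density point above, together with the plating note earlier in the protocol (12,000 live cells/well, 100 µL per well, 12 mL of differentiation medium I per 96-well plate), can be checked with a few lines of arithmetic. The only assumption in the sketch below is that the 12 mL master mix, enough for 120 wells (one plate plus excess), is what the quoted 144 × 10⁴ cell total refers to.

```python
# Seeding arithmetic for EB formation, using numbers taken from the protocol.

CELLS_PER_WELL = 12_000       # recommended live cells per well
VOLUME_PER_WELL_UL = 100      # uL of suspension plated per well
MASTER_MIX_ML = 12            # mL of differentiation medium I per 96-well plate

wells_covered = MASTER_MIX_ML * 1000 // VOLUME_PER_WELL_UL     # 120 wells' worth
total_cells = CELLS_PER_WELL * wells_covered                   # 1.44e6 cells
density_per_ml = CELLS_PER_WELL * 1000 / VOLUME_PER_WELL_UL    # 1.2e5 cells/mL

print(f"{wells_covered} wells' worth of suspension, "
      f"{total_cells:.2e} cells total (= 144 x 10^4), "
      f"{density_per_ml:.0f} cells/mL")
```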
6,934.2
2021-04-07T00:00:00.000
[ "Medicine", "Biology" ]
The Triad “Cause-Mean-Effect” as a Way of Approximating the Efficiency of Technical or Non-technical Creations or Systems The paper presents a method of approximating the efficiency of a Cause-Mean-Effect (CME) triad of technical or non-technical nature, which may be used in the philosophy of technology, especially to estimate the effects over a longer period of time of a CME triad with cyclical and variable evolution. The method consists in studying the CME triad’s evolution by a graphical representation with three axes, in which the position on its axis of the representation of the cause C or of the effect E indicates its intensity, and the position on its axis of the representation of the mean M represents its value expressed by the maximal possible intrinsic negentropy, OM = SM, and by its reliability p(τ); the efficiency of the CME triad is approximated in a simplified form as the product of the ratios (E/M) and (E/C). The use value of the mean M can be empirically but generally approximated by a relation M = KM·SM·p(τ), wherein KM is a quasi-constant of proportionality whose value is inversely proportional to the value of the utilities necessary for maintaining the reliability p(τ) of the M-mean, and which may also be taken with decreasing value in the case of a relative triad. Examples are presented of studying the efficiency of CME triads associated with technical, non-technical, or mixed systems, which reveal the possibility of using the method in the theory of technical or non-technical systems, in particular in assessing the risk of the society’s regression through the degradation of the environment by the irrational use of some technologies or by the excessive exploitation of natural resources. The link with the known principle of “agglomeration of results” is also evidenced, through the variant of the “tetradic CME triad”, with two different but useful effects, E1 and E2, obtained from a single cause C and a single mean M. Introduction The so-called "philosophy of technology" is currently discussed among philosophers. Philosophy of technology is a sub-field of philosophy that studies the nature of technology and its social effects. The term "philosophy of technology" was first used in the late 19th century by the German philosopher and geographer Ernst Kapp, who had emigrated to Texas (USA) [1]. The Western term "technology" comes from the Greek term "techne" (τέχνη - art or craft knowledge), and the roots of philosophical views on technology can be found at the very beginnings of Western philosophy. A common theme in the Greek vision of "techne" is that it appears as an imitation of nature (for example, the weaving technique that mimics the spider's weaving). Studies in the philosophy of technology include interest in various topics such as geoengineering, the internet and privacy, technological function and the epistemology of technology, computer ethics, biotechnology and its implications, transcendence in space and technological ethics, how technological progress affects human society and culture, and so on. Technological determinism is based on the idea that the particularities of the technology determine its use and that the role of a progressive society is to adapt to and benefit from technological change [2]. An alternative perspective would be social determinism, which regards society as being responsible for the development and evolution of the technologies [2].
In direct connection with this philosophical aspect is the problem of the efficiency of technological creations such as inventions, innovations (utility models). Because these technical creations represent new and inventive technical solutions to known or new technical problems, aiming at the conversion of technical or non-technical C-causes (eg. natural causes such as the wind energy) in effects E useful for the society (the obtaining of electricity, etc.), it appears the social-technological problem of the efficiency of the triad: Cause-Mean-Effect, (CME), problem that it is generally related to the general evolution of human society (evolution that involves technological progress but is not limited to it), and in particular it is related to the technological evolution of the society and implicitly-to the field of the philosophy of technology. The distinction between a non-technical CME and a technical triad consists in that in the case of a technical CME triad, at least the means by which the cause generates effect(s) is of technical nature. The correspondence in the patent law of the invention of this particular feature of the technical CME triads is the legal provision that a patent may be granted for any invention having as object a product or a process in any technological field, which is new, inventive and susceptible of industrial application and that the discoveries, the scientific theories and mathematical methods, the aesthetic creations, the plans, principles and the methods in the exercise of mental activities, in the field of games or in the field economic activities, are not considered inventions. The Approximating of the Efficiency of a Triad Cause-Mean-Effect The social efficiency of a CME-triad will be given by all three components of it. This efficiency can be approximated by means of a graphical representation with three axis, starting from the general concept of Triad in which the three interdependent characteristics can generate a stable relation, in closed triangle, (S. Baiculescu, [3]), but considering a cyclic and variable evolution, through the following general technical-philosophical considerations: 1. The position on its axis of the representation of the cause C or of the effect E indicates its intensity; The useful effect is considered positive and the harmful effect is considered negative. 2. The position on its axis of the representation of the mean M represents its use value expressed by the amount of maximal negentropy (-S M ) included by the mean M, (usually: material values + labor) and its reliability, p(τ); In technique, the mean M is a technical solution to a technical problem, that is-an invention or an innovation, (utility model); 3. The ratio between the area of the equilateral triangle with the side equal to the intensity E of the effect, A E and the effective area A e of the triangle CME gives the efficiency, ∈ = A E /A e ; in a simplified semi-empiric form we may take: In particular: ∈ ≥ 1 -efficient triad, ∈ < 1 -partially efficient or inefficient triad; In a stationary but relative CME triad, ∈ r , the mean M may be variable compared to a stationary similar triad. 4. The CME triad to which the E-effect increases continuously in intensity or decreases continuously until cancellation, represents an unstable ascending/(decreasing) triad; 5. The CME triad whose evolution ends with a stable cycle (a closed triangle C k M k E k with the same point E) represents a stabilized ascending/decreasing CME triad; 6. 
The CME triad whose evolution is partly ascending and partly descending represents an oscillatory triad; 7. The C K M k E k (k = 1, 2, 3..n) triad of a multi-cyclic CME triad represents the c k (τ k ) cycle of the triad, having the period τ k , (c k (τ k ) = (cme) k ); 8. The efficiency of a multi-cycle CME triad is given by the average efficiency, (sum of the efficiencies of the cycles c k divided by the number of cycles, n c ): 9. The CME triad in which the E effect is a non-technical effect is a non-technical triad and the triad in which the E effect is a technical effect is a technical triad; The CME triad with at least one technical effect and at least one nontechnical effect represents a mixt triad. We may use-in a simplified way, for a specific associated CME triad, the notation: ∆ L = C L M L E L . Figure 1, a, b, ascending un-stabilized CME triad, a) and descent stabilized CME triad, b). Examples of CME Triads A. Examples of non-technical CME triads: A1: The specialization (training) with the help of the computer As it is known, purely mental activities, such as business plans, musical compositions, computer programs, literary Non-technical Creations or Systems works or rules of play, etc., are non-technical activities, excluded from patentability, according to the law of inventions, although they can use technical products such as a computer or a telephone as an auxiliary device, for example. The associated CME triads are also non-technical. The training process uses a non-technical CME triad, usually-ascending-stabilized, ∆ L a , consisting in that a smaller initial volume of information/knowledge (initial cause, C 1 ) is assimilated through human means (teachers, speakers) or/and by technically means (computer, video projector, etc.) which are used for training/specialization (effect, E) in correlation with the previous life and the professional experience, this effect E allowing the accumulation of new information/specialized knowledge (forming the C 2 subcyclic cause), of increased volume (C 2 > C 1 ) which can generate a greater training cycle effect, (E 2 > E 1 ), the triadic cycle being repeated until a final cycle which stabilizes the CME training triad, characteristic to the change of profession or to the retirement of the trained, (to an old age). A2. The developing of a business In the development of a business, the purpose (the effect E) pursued is usually to obtain profit, an effect that allows the development of the company and the business. This business development can also start with a small firm, with 2-4 employees (of start-up or spin-off type) and with a modest technical-material endowment, which together form the) initially mean M 1 (formed as assembly of means). Of course, the M 1 -means must be used intelligently, rationally, according to a business plan that together with an initial investment fund F 1 represents-within the CME triad, the initial cause C 1 . In the context in which the C 1 cause (whose intensity can be appreciated through the investment fund F 1 ) generates-through the M 1 means, an E 1 effect whose intensity can be appreciated through the V 1 income, if V 1 > F 1 , will result that E 1 /C 1 > 1 and the evolution the specific CME triad results in this case as ascending to the cycle c 1 . However, the efficiency of the CME triad also depends on the value of the M-means, which -since it also includes human resources -is proportional to the S 1 salary expenses of the employees, during the time period of the c 1 cycle: ∈ ≈ E 2 /M·C. 
If E 1 /M 1 < 1, (S 1 -wage costs higher than V 1 -income), it results that ∈ 1 ≈ E 1 2 /M 1 C 1 <1, i.e. an inefficient triad for (sub)cycle c 1 . In order to make the triad more efficient, it is therefore necessary that the S -salary expenses be lower than the V-income, resulting in this case a benefit B = V -S which, summed over a given number of cycles c k , will amortize the initial investment F 1 and will bring the company to a profit: P = V T -S T -F 1 . The specific CME triad will be profitable in this case, and if the P profit is maintained at a quasi-constant value, the specific CME triad results as of stabilized ascending type. Otherwise it is of oscillatory type. A3: The Nature-Human-Society (NOS) relation The CME triad (cause-mean-effect) associated to the Nature-Human-Society (NOS) relation explains the harmonic or an-harmonic development of the society by the fact that the Nature, by its natural resources R i (the initial cause, C 1 ), through human individuals (the mean M) contributes to the well-being and the biological, psychological and moral health of the society as a whole, (the effect E pursued). The harmonic development of the society is greater when the potential for rational use of the natural and individual resources (social-useful value of individuals) is greater, case which can ensure an upward evolution of the CME triad associated with the NOS relationship by maintaining the balance of the Nature at adequate values of regeneration of natural resources (physical, vegetable, animal, fish, etc.), up to a stabilized cycle c n given by the fact that the society, through its individuals, ensures the maintaining of the level of natural resources by greening and regeneration, or it can increase this level, for example -by the transformation of some initially arid areas into agricultural areas, by favoring the propagation of useful species, etc. In order to express the efficiency of the associated CME triad: ∈ ≈ E 2 /M·C, it is necessary that the human M-means be expressed by the amount of natural resources M r consumed directly or indirectly by each individual (necessary to maintain their social-useful value) and by their spiritual value M V : M = ΣM r ·M V , for example by their negentropy, -S. The efficiency of the associated CME triad: ∈ k will be greater on a c k (τ k ) cycle of the triad when the ratio (E/M) or/and (E/C) will be higher, so-when more individuals in an agricultural company-for example, (in a farm), produce an E k effect useful for the welfare of the company with a lower consumption of natural resources δR k , the time period T a (T s ) of ascending or relative stable evolution of the associated CME triad depending to the total natural resources R T available to the respective company: This last condition imply also the preservation of the natural environment and of the total natural resources R T . Because the natural resources are realistically declining, it logically follows that for obtain a longer period T a (T s ), the members of the company must reduce the specific consumption of natural resources over a given period of time, characteristic of the restoration of these natural resources R T , in accordance also with the philosophical conclusions regarding the general case of a cyclical CME-triad. 
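To make the simplified efficiency relation concrete, a few lines of code can evaluate ∈_k ≈ E_k²/(M_k·C_k) for several cycles and then take the multi-cycle average, in the spirit of the business example A2 above. The cycle values below are invented purely for illustration and do not come from the paper; income V stands in for the effect E, the invested funds F for the cause C, and salary costs S for the value of the mean M, as the example suggests.

```python
# Illustrative evaluation of the simplified CME-triad efficiency eps ~ E^2/(M*C)
# per cycle and of the multi-cycle average. All numbers are invented examples.

def cycle_efficiency(E, M, C):
    """Simplified semi-empirical efficiency of one CME cycle."""
    return E ** 2 / (M * C)

# (F_k, S_k, V_k) per cycle: invested funds, salary costs, income
cycles = [(100, 140, 110),   # c1: income above funds, but wages above income
          (100,  90, 130),   # c2
          (100,  85, 150)]   # c3

effs = [cycle_efficiency(V, S, F) for F, S, V in cycles]
for k, e in enumerate(effs, start=1):
    label = "efficient" if e >= 1 else "partially efficient or inefficient"
    print(f"cycle c{k}: eps = {e:.2f} ({label})")

print(f"multi-cycle average efficiency: {sum(effs) / len(effs):.2f}")
```

In this toy run the first cycle is inefficient because wages exceed income (E/M < 1), while the later cycles, with lower salary costs and higher income, push the average efficiency above one, matching the qualitative reading of example A2.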
A4: The increase of malpractice in social life. An example of an ascending managerial CME triad is the increase of managerial malpractice (the effect E) generated by a lack of legislative provision (the cause C) through the action of the manager of a political, economic or administrative state institution (the mean M) who, under the cause C and having a low reliability p(τ) (i.e., a low use value M), abusively uses the income of the state institution for personal purposes. If this effect E is not sufficiently punished by a judicial court, it may increase by repetition, with an even lower p(τ), in the next (cme) cycle, generating managerial corruption and dangerous malpractice, in particular the institution's bankruptcy.

A5: The tree. A tree may be considered a biological ascending CME triad in which a quantity q_1 of water with mineral salts, together with the initial seed, represents the initial cause C_1. Through a quantity k_1 of carbon dioxide from the air and solar energy, forming the mean M_1, this cause is converted by the multiplication of vegetal cells into a small tree (the effect E_1), which, together with a new quantity q_2 > q_1 of water with mineral salts, represents the successive cause C_2 > C_1. This cause determines the successive effect E_2 > E_1 of the tree's growth, through a successive quantity k_2 > k_1 of carbon dioxide and solar energy representing the successive mean M_2 > M_1, and so on, until the tree's death.

B. Examples of technical CME triads:

B1: Periodic conversion of the potential energy of a weight G into kinetic energy by a technical mean. A pendulum is an example of a technical mean M of a CME triad. In this case, the value of the weight G and the height h to which it is raised give the intensity of the cause, C(G, h), and the value of the kinetic energy K at the lowest position of the weight G gives the intensity of the effect E(K), which in turn becomes the cause for lifting the weight G by transforming the kinetic energy back into potential energy, the process being repeated. However, as is known, because of losses through air friction and in the bearing of the weight, the kinetic energy K obtained by the free fall of the weight G is slightly less than the potential energy U at the beginning of the fall, so the oscillation of the pendulum decays over a time t that is inversely proportional to the difference between the initial potential energy (cause C) and the resulting kinetic energy (effect E); that is, the ratio A_E/A_e of the CME triangle is smaller than 1 when the cause C is greater than the effect E. To maintain the pendulum's motion, the unstable descending CME triad must be transformed into a stabilized CME triad by periodically supplying the weight G with additional energy equal to the difference between C and E, for example by magnetic attraction during the transformation. This explains why the efficiency of the pendulum is lower when the ratio A_E/A_e of the CME triangle corresponding to a half-period of the oscillation is lower: the value M of the technical means of the CME triad contributes to the efficiency ε = A_E/A_e in the sense that a more expensive source of energy for loss compensation decreases the ratio ε ≈ E²/(M·C), corresponding therefore to a less technically efficient CME triad. (A short numerical sketch of this energy balance follows.)
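As a minimal illustration of example B1, the sketch below tracks the per-half-period ratio E/C = K/U for a damped pendulum and the energy top-up needed to stabilize the triad. The loss fraction per half-period and the weight and height values are assumed, hypothetical parameters, not values from the text.

```python
# Sketch for example B1: energy ratio per half-period of a damped pendulum
# and the periodic top-up needed to stabilize the CME triad.
# All parameter values are assumed for illustration only.

G = 1.0          # weight (kg), hypothetical
g = 9.81         # gravitational acceleration (m/s^2)
h0 = 0.10        # initial lift height (m), hypothetical
loss = 0.03      # fraction of energy lost per half-period (friction + bearing), assumed

U = G * g * h0   # initial potential energy = intensity of the cause C
half_periods = 5

for n in range(1, half_periods + 1):
    K = U * (1.0 - loss)         # kinetic energy at the lowest point = effect E
    ratio = K / U                # E/C < 1 -> descending (un-stabilized) triad
    top_up = U - K               # energy to re-inject (e.g. magnetically) to stabilize
    print(f"half-period {n}: E/C = {ratio:.3f}, top-up needed = {top_up*1000:.1f} mJ")
    U = K                        # without top-up, the effect becomes the next, smaller cause
```

Without the top-up, the ratio E/C stays below 1 and the amplitude decays cycle by cycle; re-injecting exactly the loss per half-period turns the descending triad into a stabilized one, as described above.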
B2: The cascade amplification of an audio signal (microphone amplifier). In this case, the initial cause C_1 is the intensity I_1 of the electrical signal produced by sound conversion in the microphone, which is transformed into an amplified electrical signal I_2 (effect E_1) through a first technical means M_1, an electronic audio amplifier. The amplified signal I_2 becomes, through a new audio amplifier M_2 stronger than M_1 (M_2 > M_1), a secondary cause C_2 for obtaining a new amplified electrical signal I_3, and so on, the finally obtained audio signal I_n being converted into sound by a loudspeaker. The CME triad specific to this electronic audio amplification system is an example of a stabilized CME triad in which each cycle c_k (k = 1, 2, ..., n) gradually increases both the intensity of the cause C_k and the intensity of the effect E_k (as well as the use value of the mean M_k). As in the previous case, the efficiency of the CME triad is given by the average efficiency ε_T resulting from the efficiencies ε_k = A_Ek/A_ek of each cycle c_k, which are inversely proportional to the area of the triangle C_k M_k E_k: for a given effect E_k (the output current I_(k+1) of the amplifier M_k), the efficiency of the corresponding CME triad is higher for a lower value of the cause C_k (the input current I_k), provided the value of the mean M_k is approximately the same. Also, if a given effect E_k is obtained from the same cause C_k but with a more expensive means M_k, the associated CME triad is less efficient as a technical triad. (A small sketch of such a gain chain is given after example B3.)

B3: The production of nuclear fission energy. In the case of nuclear power generation by a fission chain reaction, in which out of the three neutrons produced by the fission of a 235U nucleus (cause C) at least one produces the fission of another 235U nucleus (effect E), the outcome depends on the technical means M, which can be a nuclear reactor or a nuclear bomb. In the first case we have a stabilized CME triad (the reaction is controlled so that the multiplication factor does not exceed unity); in the second case we have an un-stabilized ascending CME triad, in which the cause C (the fission of a 235U nucleus) generates a larger effect (the fission of another two or three 235U nuclei for each previously fissioned nucleus, which in turn generate the fission of 4-6 further 235U nuclei), producing a chain reaction and a nuclear explosion. From the point of view of efficient production of nuclear energy, the triad CME2 (specific to the fission bomb) is more efficient than the triad CME1 (specific to reactor energy production), because overall the same amount of energy, generated from the same quantity of 235U fuel as in a reactor core, is released by a considerably cheaper means (a bomb), so the ratio E²/(C·M) is higher, with the particularity that this energy is released explosively rather than gradually and in a controlled manner. If the controlled use of nuclear energy for conversion into electricity is desired, then relative to this objective, which represents the desired effect E' in this case, the use of a nuclear reactor is more efficient: even though it is much more expensive, it ensures the desired effect E', the efficiency ratio for E' being higher for the CME1 triad than for the CME2 triad, as a consequence of the different desired effect E'.
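A minimal sketch of the cascaded amplification in example B2: each stage multiplies the signal by its gain, so the cause and effect intensities grow from cycle to cycle. The gains and the input signal level are hypothetical illustrative values, not figures from the text.

```python
# Sketch for example B2: a cascade of amplifier stages M_1..M_n.
# Stage k takes cause C_k = I_k and produces effect E_k = I_(k+1) = g_k * I_k.
# Gains and the input level are hypothetical, chosen only for illustration.

I1 = 0.002                 # microphone signal (arbitrary units), hypothetical
gains = [8.0, 12.0, 20.0]  # per-stage gains g_1..g_3, hypothetical

signal = I1
for k, g in enumerate(gains, start=1):
    cause, effect = signal, signal * g
    print(f"stage {k}: C_{k} = {cause:.4f} -> E_{k} = I_{k+1} = {effect:.4f} (gain {g})")
    signal = effect        # the effect of one cycle becomes the cause of the next

print(f"final signal I_{len(gains)+1} = {signal:.4f}, overall gain = {signal / I1:.0f}x")
```

The chain makes the triadic structure explicit: the output of each amplifier (effect E_k) is the input (cause C_(k+1)) of the next, stronger mean M_(k+1), as the text describes.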
In conclusion, the efficiency of the triad is a characteristic relative to the objective for which it is used, identifiable with the pursued effect E. If the pursued objective is obtaining an effect E that can be produced with a technical means M, this efficiency also characterizes the technical efficiency of the technical means specific to the respective CME triad.

B4: The generation of light by an electric bulb. The CME triad associated with the conversion of electrical energy (W_E = t·P_c, the cause C, where P_c is the electric power) into light (the effect E ≅ t·P_l, where P_l is the light power) by an electric bulb (the mean M) is generally more efficient for an LED bulb, M_L, than for a filament bulb, M_F, because the same light power P_l is obtained with a lower electric power consumption: P_cL < P_cF. The associated CME triads Δ_L and Δ_F are stationary (E constant), but the triad Δ_L may be considered a relative triad Δ_Lr with a relative efficiency ε_Lr = t·P_l²/(P_cL·M_L). Generally, for t = 1 h and the same cause intensity C ≅ P_c, the condition ε_L > ε_F follows from the efficiency relation ε ≈ E²/(M·C). However, because an LED bulb is generally considerably more expensive than a filament bulb, its use value M_L > M_F may initially be high enough that, over a short period of use, the CME triad Δ_L is not more efficient than (or is only equal to) the triad Δ_F of the filament bulb, if M_L·P_cL > M_F·P_cF. For this reason, until the difference in cost is amortized through the saving of electric energy, the relative triad Δ_Lr must use a decreasing use value, where k_p is the cost of 1 kWh and t_a is the period until the amortization of the supplementary cost ΔM = M_L0 - M_F. The time t_e at which the triad Δ_L becomes as efficient as Δ_F is given by the relation ΔP_c/P_cL = k_p·ΔP_c·(t_a - t_e)/M_F, hence t_e = t_a - M_F/(P_cL·k_p). From these relations it is observed that the relative triad Δ_Lr becomes more efficient than the triad Δ_F in a shorter time t_e when ΔM/ΔP_c approaches M_F/P_cL (i.e., when M_L0 approaches M_F or when ΔP_c = P_cF - P_cL is larger). In a similar way one may study the efficiency of the triad associated with electric energy generation by a solar photovoltaic panel compared with that associated with a chemo-thermal motor-generator of electric energy. (A numerical sketch of this amortization comparison is given after the tetradic examples below.)

C. Tetradic or pentadic CME triads. The cases with two causes C1, C2 (e.g., wind energy plus solar energy converted into kinetic energy of irrigation water (effect E) by means of an electric pump (mean M)), and/or with two means M1, M2 (e.g., airplane engine plus autopilot, for moving a plane on a predetermined route), and/or with two effects E1, E2 (e.g., the effect E1 of recovering precious metals from electronic waste plus the effect E2 of polluting the environment), philosophically correspond to tetradic or pentadic (or hexadic) triads. Their evolution can be studied like that of a simple CME triad, with the difference that the graphical representation of the triad's evolution uses an axial system in which, instead of a single axis C, M and/or E, two adjacent axes are used (Figure 2: bi-causal CME triad). In the case of a triad with two effects, the triad's efficiency ε_Δ may be analyzed individually (separately for each effect) or globally, by summation of the effects, as in the case of a triad with two causes or two means, taking a harmful effect as negative, which yields a global triad Δ_G.
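The amortization relation of example B4 (t_e = t_a - M_F/(P_cL·k_p)) can be checked with a small numerical sketch. The formula for t_a below (extra purchase cost divided by the hourly energy saving in money terms) is an assumption consistent with the text's description of amortization, since the corresponding relation is not written out; all prices and powers are hypothetical illustrative values.

```python
# Sketch for example B4: when does the LED bulb's triad become as efficient as
# the filament bulb's, once the extra purchase cost is amortized by energy savings?
# All numbers are hypothetical illustrative values.

P_cF = 60.0     # filament bulb electric power (W)
P_cL = 9.0      # LED bulb electric power (W) for roughly the same light output
M_F  = 1.5      # purchase cost of the filament bulb (currency units)
M_L0 = 12.0     # purchase cost of the LED bulb (currency units)
k_p  = 0.00025  # electricity cost per Wh (i.e. 0.25 per kWh)

dP_c = P_cF - P_cL          # power saving per hour of use (W)
dM   = M_L0 - M_F           # supplementary purchase cost of the LED bulb

# Assumed amortization time: hours of use until the energy saving pays back dM.
t_a = dM / (k_p * dP_c)

# Relation reconstructed from the text: t_e = t_a - M_F / (P_cL * k_p)
t_e = t_a - M_F / (P_cL * k_p)

print(f"power saving dP_c = {dP_c:.1f} W, extra cost dM = {dM:.2f}")
print(f"amortization time t_a ~ {t_a:.0f} h of use")
print(f"efficiency-crossover time t_e ~ {t_e:.0f} h (per the reconstructed relation)")
```

With these illustrative figures the crossover time t_e is positive; for a cheaper LED bulb (M_L0 closer to M_F) or a larger power saving ΔP_c, t_e shrinks, in line with the qualitative conclusion in the text.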
The Approximation of the Use Value of M-Means (in Particular, of an Invention or Innovation)

By comparison with different market products, it follows that the production value of a product considered as a mean M for converting a cause C into a socially useful effect E is proportional to its degree of complexity and internal organization, which in systemic terms can be expressed by the maximum entropy of the product (obtained at its total destruction) taken with the minus sign, i.e., by the negentropy comprised in the product (-S_M), and by the reliability of the product, p(τ), which represents the "confidence" we can have that the product will keep functioning over a longer period of time. As is known, reliability theory (the "safety" theory [4]) describes in technology the probability p_f that, after a time τ, a functional system with N components, of which n_1 components have the lifetime (service life) T_1, n_2 components the lifetime T_2, n_i components the lifetime T_i, and so on, still works. This probability represents the reliability of the system (the possibility of "trust" in that system) and is expressed through the "danger of failure" λ_i of the component elements, determined by relation (8), with: n_i the number of elements in subsystem i; T_i the average functional life of subsystem i; Δn_di the number of faults that occur within the time interval τ; n_di the number of defects after which subsystem i becomes inoperative (destroyed [5]); v_di the speed of destruction of subsystem i. It can be shown mathematically [5] that, in a general way, the reliability of a system functioning with N = Σn_i component parts can also be expressed by a function representing the "operating potential" of the system (subsystem), relation (9), in which expression (10) represents the "danger of blocking" (functional destruction) of a system having the functional life duration T_s, the factor c characterizing the influence of the connections between the components of the system, with c = c_1·c_2 ∼ (number of links)⁻¹ for informatic systems. The operating potential of the system, previously defined, has the property that it stands in the Boltzmann relation with the functional negentropy of the system, given by relation (11), in which the maximum negentropy has the value O_M = -S_M and the functional entropy at the moment τ also enters, S_M representing the maximum entropy that the totally disorganized system can have. The use value of a mean M at a given moment τ_0 can be empirically but generally approximated by a relation of the form (13), wherein O_M is the embedded negentropy, Q_S(τ_0) is the operating potential of the M-mean(s) at the moment τ_0, and k_M is a quasi-constant of proportionality whose value is inversely proportional to the value of the utilities necessary for maintaining the reliability of the M-type product, for example the oil, antifreeze liquid, etc., in the case of a car engine considered as the M-mean for converting the chemical energy of a fuel (gasoline, diesel) or the electrical energy of batteries (cause C) into the mechanical energy of moving the car (the effect E). The use value of a physical mean M_t at a given moment τ may then be approximated by combining relations (9) and (13), yielding relation (14). In the case of a relative triad, the values of k_M, K_M can decrease in time, as in example B4. (A schematic numerical sketch of these relations, under simplifying assumptions, follows.)
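Because the bodies of relations (8)-(14) are not reproduced in the text, the sketch below only illustrates the general idea under stated assumptions: a standard series-system exponential reliability model stands in for p(τ), and the use value is taken as M ≈ k_M·O_M·Q_S(τ), following the qualitative description above. All rates, counts, and constants are hypothetical.

```python
import math

# Illustrative stand-in for relations (8)-(14); the exact formulas are not
# reproduced in the text, so a standard series-system exponential reliability
# model is assumed here, with hypothetical parameters throughout.

subsystems = [
    # (n_i components, mean lifetime T_i in hours)
    (4, 20_000.0),
    (2, 50_000.0),
    (1, 10_000.0),
]
c = 0.8          # connection factor c = c1 * c2, assumed
k_M = 1.0        # proportionality quasi-constant, assumed
O_M = 5.0        # embedded negentropy of the product (arbitrary units), assumed

def reliability(tau: float) -> float:
    """Assumed series-system model: p(tau) = exp(-c * sum(n_i / T_i) * tau)."""
    total_rate = sum(n_i / T_i for n_i, T_i in subsystems)
    return math.exp(-c * total_rate * tau)

def use_value(tau: float) -> float:
    """Use value M ~ k_M * O_M * Q_S(tau), with Q_S(tau) stood in by p(tau)."""
    return k_M * O_M * reliability(tau)

for tau in (0.0, 1_000.0, 5_000.0, 20_000.0):
    print(f"tau = {tau:>7.0f} h: p(tau) = {reliability(tau):.3f}, "
          f"use value M ~ {use_value(tau):.2f}")
```

The sketch captures only the qualitative behaviour claimed in the text: as reliability decays over time, so does the use value of the mean, more quickly for systems with many short-lived components or many inter-component links.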
Generally, for a general physical system, the previous relations (13), (14) become more complex because, like some technical systems, many non-technical systems (biophysical, ecological, etc.) are also open functional systems. Man himself, in relation to Nature and Society, represents a subsystem with a certain average and momentary potential for harmonizing the macrosystem (Nature + Society). But, compared with a technical system, a psychological or psycho-social system, or even a technical but informatic system, may decrease its internal entropy in time (increasing its internal organization), and relations (7)-(8), specific mainly to a technical system, do not apply. Relations (9)-(14) may still be applied, but with a more complex expression of the reliability p(τ), in which the proportionality factor c has the form c = c_1·c_2, because it depends not only on the connections between the components of the system (through c_1) but also on the links between the peripheral sensors plus the informational database and the information-processing unit (the microprocessor for an informatic system, or the brain for a psycho-biological system) through c_2; it is known that, for a brain, the number of neuronal links n_l increases through learning (c_2 ∼ (number of links between neurons)⁻¹ = 1/n_l). These conclusions are in accordance with the fact that systems generally have component parts which are in their turn systems (subsystems), thus forming a "holon" [6], a collective unit ("holos" = "whole") that is part of a larger one. The holons of a system interfere with each other and thereby increase or decrease each other's organization (or entropy). If the holons increase each other's organization, we can speak of their harmonization; if they decrease their functional organization, disharmony results. The philosophical considerations about approximating the efficiency of a CME triad and the examples presented for it, although not strictly accurate, can be used in the field of the philosophy of technology, for example in estimating the risk of environmental destruction through the evolution of a technical CME triad (e.g., the risk of chemical pollution from oil or gold extraction technology).

Conclusions

The proposed method of approximating the efficiency of a cause-mean-effect (CME) triad of technical or non-technical nature, considered with cyclical and variable evolution, is based on a graphical representation with three axes, in which the position on its axis of the cause C or of the effect E indicates its intensity, and the position on its axis of the mean M represents its value, expressed by the amount of intrinsic negentropy O_M = -S_M and by its reliability p(τ); the efficiency of the CME triad is approximated, in simplified form, by the product of the ratios E/M and E/C: ε ≈ E²/(M·C). A relevant particular case of analysis of a technical CME triad is that of the relative efficiency ε_L of an LED electric bulb compared with the efficiency ε_F of a filament electric bulb, in which the use value M_L of the compared bulb must be considered variable over an initial period of time for the comparative study, since ε_L is not higher than ε_F from the beginning (over a short period of use).
The proposed method, in the variant of a tetradic triad with two different positive effects E1, E2 and a single cause and a single mean, is also linked, in the philosophy of technology, with the principle of economy by "agglomeration of results" (economy by "doing two different things at one stroke", Kotarbiński, 1965, p. 109, [7]). A technical example in this sense is the result (effect E1) of generating electric energy with increased efficiency by converting wind energy (the cause C) with a wind turbine with a magnetic bearing (the mean M), for which the magnetic bearing also gives a secondary positive result (effect E2): the reduction of the turbine's noise. The associated CME triad is a tetradic oscillatory triad, with a variable effect E1 proportional to the variation of the wind's intensity, whose efficiency may be studied as a relative efficiency, as in example B4, either individually (only for effect E1 or E2) or globally, for the effect E_G = E1 + E2, by expressing the value of the effects in the same measurement unit. The use value of the mean M can be empirically but generally approximated by a relation M = K_M·S_M·p(τ), wherein S_M is the maximal possible intrinsic entropy and K_M is a quasi-constant of proportionality whose value is inversely proportional to the value of the utilities necessary for maintaining the reliability of the M-mean. A special case is represented by systems capable of decreasing their own internal entropy, such as a psychological or psycho-social system, or even a technical but informatic system, which, compared with an ordinary technical system, may increase their internal organization; for these, the expression of the reliability p(τ) is more complex than that of a physical/technical system, depending also on a factor c characterizing the influence of the connections between the components of the system, with c = c_1·c_2 ∼ (number of links)⁻¹ for informatic/neuronal systems. An example of the method's application to a complex system is that of the increase of managerial malpractice in a state institution (political, economic or administrative), caused by a low reliability p(τ) of the institution's manager or by a low technoscientization of the institution. From the presented examples it results that the proposed method also has links with the domain of praxiology [8], [9], with the philosophy of science and with the domain of technoscience [10], in particular with the technoscientization of politics (Callon [11], Cetina [12], Hacking [13]). The presented examples of the method's application to the study of technical, non-technical or mixed systems reveal the possibility of using the method also in the domain of the philosophy of technology, in particular in assessing the risk of society's regression through the degradation of the environment or through the excessive exploitation of natural resources.
The loss of glycocalyx integrity impairs complement factor H binding and contributes to cyclosporine-induced endothelial cell injury Background Calcineurin inhibitors (CNIs) are associated with nephrotoxicity, endothelial cell dysfunction, and thrombotic microangiopathy (TMA). Evolving evidence suggests an important role for complement dysregulation in the pathogenesis of CNI-induced TMA. However, the exact mechanism(s) of CNI-induced TMA remain(s) unknown. Methods Using blood outgrowth endothelial cells (BOECs) from healthy donors, we evaluated the effects of cyclosporine on endothelial cell integrity. Specifically, we determined complement activation (C3c and C9) and regulation (CD46, CD55, CD59, and complement factor H [CFH] deposition) as these occurred on the endothelial cell surface membrane and glycocalyx. Results We found that exposing the endothelium to cyclosporine resulted in a dose- and time-dependent enhancement of complement deposition and cytotoxicity. We, therefore, employed flow cytometry, Western blotting/CFH cofactor assays, and immunofluorescence imaging to determine the expression of complement regulators and the functional activity and localization of CFH. Notably, while cyclosporine led to the upregulation of complement regulators CD46, CD55, and CD59 on the endothelial cell surface, it also diminished the endothelial cell glycocalyx through the shedding of heparan sulfate side chains. The weakened endothelial cell glycocalyx resulted in decreased CFH surface binding and surface cofactor activity. Conclusion Our findings confirm a role for complement in cyclosporine-induced endothelial injury and suggest that decreased glycocalyx density, induced by cyclosporine, is a mechanism that leads to complement alternative pathway dysregulation via decreased CFH surface binding and cofactor activity. This mechanism may apply to other secondary TMAs—in which a role for complement has so far not been recognized—and provide a potential therapeutic target and an important marker for patients on calcineurin inhibitors. Introduction Thrombotic microangiopathies (TMAs) are defined by their common clinical features: microangiopathic hemolytic anemia (MAHA), non-immune thrombocytopenia, and end-organ injury (1)(2)(3). TMAs are systemic conditions with the potential for multi-organ involvement, including the kidneys, the brain, the gastrointestinal tract, the respiratory tract, and the skin. Crucial to the development of TMA is injury to the microvascular endothelium. Injuries to the endothelium post its activation lead to excessive platelet and neutrophil recruitment and eventually to thrombus formation, chronic inflammation, and organ failure (1,4,5). While complement cascades are critical to mounting appropriate immune responses, the regulation of their products is critical to maintaining host cell integrity, notably for the vascular endothelium. Indeed, the loss of complement regulation favors spontaneous complement activation, resulting in endothelial injury and the formation of (micro-)thrombi (5)(6)(7). Complement dysregulation is also increasingly recognized in the pathogenesis of TMAs and is found in patients with various forms of secondary comorbidities (i.e., TMA spectrum) (1,(8)(9)(10)(11). The alternative pathway (AP) of complement is constitutively active (spontaneous tick-over), resulting in a low, but constant, level of circulating C3b in the plasma, which can bind to either host cell or pathogen surfaces. 
Since C3b is free to coat and disrupt surfaces without distinction, there are regulatory mechanisms that tightly protect host cells from complement-mediated injury, including membrane-associated proteins like membrane cofactor protein (MCP/CD46), decay-accelerating factor (DAF/CD55), and protectin (CD59), as well as the secreted protein complement factor H (CFH), which circulates in human plasma at high (200-300 µg/mL) concentrations (8,12,13). The density and localization of these regulatory proteins represent one of the key principles of complement control and are critical to maintaining the integrity of self-surfaces such as the vascular endothelium. Genetic mutations in CD46 or CFH, as well as the expression of anti-CFH autoantibodies, result in excessive complement activation, in particular via the alternative pathway, and increase patient susceptibility to developing TMA via endothelial injury (14-19). A number of additional mutations in complement (modulator) genes, including C3 itself, complement factor B (CFB), factor I (CFI), and thrombomodulin (THBD/CD141), have also been linked to endothelial cell injury and TMAs. There is, however, variable penetrance described in patient families within a pedigree with complement mutations, implicating a contribution from the environment as being necessary to trigger TMA manifestations in a patient who is genetically susceptible ("multiple-hit" hypothesis) (1,8,15). Among the events that precede the onset of TMA, the most relevant are respiratory and gastrointestinal tract infections and pregnancy (16,20). Secondary TMA can also occur post-transplant, when it is associated with antibody-mediated rejection and immunosuppressive medications like calcineurin inhibitors (CNIs) (9,21-24). Calcineurin inhibitors (CNIs) such as cyclosporine and tacrolimus are highly effective immunosuppressive agents, which are widely used to prevent allograft rejection in solid organ and hematopoietic stem cell transplantation and to treat autoimmune disorders. Their use is also associated with adverse effects, such as hypertension, nephrotoxicity, vascular injury, and the development of CNI-induced arteriolopathy, which negatively impact patient and allograft survival (25-32). In addition, CNIs are known to trigger post-transplant TMA (28,29,31,33). A possible cause of these adverse effects, in particular TMA, lies in endothelial injury associated with CNI use, secondary to vasoconstriction-associated ischemia, increased platelet aggregation, and the activation of prothrombotic factors (27). Evolving evidence suggests an important role for complement dysregulation in the pathogenesis of CNI-induced microvascular endothelial cell injury, which is crucial for the development of TMA (34,35). Recently, CNI-mediated endothelial injury, in particular in the glomerular capillaries, has been linked to complement activation in vivo, and a central role of the complement alternative pathway has been identified (34). The exact mechanism by which CNIs induce complement activation, however, remains poorly understood. Because cyclosporine use is associated with vascular injury, development of TMA, and nephrotoxicity, we examined whether cyclosporine exposure leads to complement-mediated endothelial cell injury and investigated the mechanism by which complement dysregulation is induced in an in vitro model utilizing blood outgrowth endothelial cells (BOECs).

Patient samples

BOECs were isolated from the peripheral blood of two healthy adult volunteers.
Normal human serum (NHS) was derived from three healthy adult volunteers. Cyclosporine treatment and complement fixation on endothelial cells BOECs grown to confluence were exposed to cyclosporine 10, 20, 50, or 100 µg/ml in media for up to 24 h. Cyclosporine stock solution (Sandimmune IV, Novartis Pharmaceuticals Canada Inc., Dorval, Detection of complement deposition on endothelial cells C3b and C5b-9 deposition on BOEC surfaces were demonstrated by flow cytometry using a C3c antibody detecting the C3c portion of native C3 and C3b (C3c-FITC conjugated antibody, Abcam, ab4212, 1:50 dilution) and C9 (Complement Technologies Inc, TX, A226, 1:100 dilution). Cells were grown to confluence and exposed to cyclosporine treatment and complement fixation as described. Cells were washed with phosphatebuffered saline (PBS) and incubated with Fixable Viability Dye eFluor780 (eBioscience, San Diego, CA, 1:1,000 dilution reconstituted in PBS) at 4 • C for 30 min. For flow cytometry, cells were harvested by scraping and washed with PBS before use (Supplementary material). Assessment of Weibel-Palade body mobilization and von Willebrand factor release from endothelial cells von Willebrand Factor release from BOECs was detected via immunofluorescence as described previously (36). BOECs treated with media for 24 h, followed by incubation with anti-CD59 blocking antibody for 30 min and 50% NHS/50% SFM for 30 min, were used as positive control and compared to cells kept in media (negative control). Cells were then washed and fixed with 2% paraformaldehyde and permeabilized with 0.2% Triton in PBS, followed by incubation with rabbit anti-VWF (Dako, Carpinteria, CA, A0082, 1:1,000) and goat anti-VE-cadherin (Santa Cruz Biotechnology, Dallas, TX, sc-6458, 1:250) for 4 h. Alexa Fluor 488-and Alexa Fluor 555conjugated species-specific secondary antibodies were used at a dilution of 1:1,000. Nuclei of cells were stained with 0.12 µg/ml Hoechst stain (Thermo Fisher Scientific, Waltham, MA) for 5 min. Characterization of membrane-anchored complement regulators To determine the expression level of the membrane-anchored complement regulators MCP/CD46, decay-accelerating factor (DAF/CD55), and CD59 on BOECs, BOEC lysates were utilized for flow cytometry and Western blotting analysis (Supplementary material). Detection of CFH binding to endothelial cell surfaces The binding of CFH to BOEC surfaces was demonstrated by flow cytometry as described previously (39), using purified CFH (CSL Behring, Marburg, Germany) tagged with Alexa Fluor 488 succinimidyl ester (10 µg/mL, Life Technologies) for 1 h at room temperature before being dialyzed overnight in PBS. Cells exposed to 500 mU/mL neuraminidase (MilliporeSigma; N2876) were used as the positive control. Cells were washed two times with PBS and scraped off. Cells were then incubated with Fixable Viability Dye eFluor780 at 4 • C for 30 min. They were then washed with PBS and resuspended in 100 µL PBS. Each sample was then incubated with 4 µg of Alexa Fluor 488-tagged CFH for 1 min, after which 500-1,000 µL of Attune focusing fluid (Thermo Fisher Scientific, 4449791) was added and assessed by flow cytometry (Supplementary material). For immunofluorescence experiments, cells were cultured to a minimum of 80% confluency on collagenized coverslips and exposed to cyclosporine A as described. Cells exposed to 500 mU/mL Neuraminidase for 1 h and 0.5 U/mL Heparinase III (H8891-5UN, Sigma-Aldrich, St. Louis, MO) for 30 min were used as positive controls. 
Cells were washed and fixed with 4% paraformaldehyde, blocked for 1 h with 3% BSA, and incubated with goat anti-Factor H (1:100, Complement Technology Inc., TX; A237) and mouse anti-heparan sulfate (1:50, US Biological Life Sciences, Salem, MA; H1890) overnight at 4 °C. Goat Alexa Fluor 488 and mouse Alexa Fluor 555 secondary antibodies were used, respectively, at a dilution of 1:200 for 1 h at room temperature. The nuclei of the cells were stained with 0.12 µg/ml Hoechst stain (Thermo Fisher Scientific, Waltham, MA) for 5 min. Confocal microscopy was performed as detailed in the Supplementary material, and total fluorescence intensity was measured using ImageJ software.

CFH surface cofactor activity assay

To determine CFH cofactor activity on BOEC surfaces, cells exposed to 500 mU/mL neuraminidase for 1 h (Millipore Sigma; N2876) were used as the positive control. Cofactor activity of surface-bound CFH was detected as previously described (40). Cells were incubated with 10 µg/ml CFH (CSL Behring, Marburg, Germany) at 37 °C for 1 h, then with 10 µg/ml CFI (EMD Millipore Corp., MA, 341280) and 3.3 µg/ml C3b (EMD Millipore Corp., MA, 204860). The supernatant was collected at baseline and at various subsequent time points (up to 180 min), and the samples were transferred to a reduced sample buffer and separated by 10% SDS-PAGE. The appearance of C3b degradation fragments was detected by Western blotting (Figure 6). Primary goat anti-C3 at a 1:1,000 dilution (Complement Technology Inc., TX, A213) with the corresponding secondary HRP-conjugated antibody at a dilution of 1:5,000 was used for detection.

Assessment of the endothelial cell glycocalyx

For glycocalyx staining, ... (Cambridge, UK, ab23418, 1:100) and peanut agglutinin (PNA, Vector Labs, Ontario, CA, FL-1071-5, 1:200) were used. Cells were cultured to confluence on coverslips and exposed to cyclosporine as described. Cells exposed to 500 mU/mL neuraminidase for 1 h were used as a positive control in WGA and PNA experiments. Cells exposed to 0.5 U/mL Heparinase III (H8891-5UN, Sigma-Aldrich, St. Louis, MO) for 30 min were used as a positive control in heparan sulfate experiments. Cells were incubated with Alexa Fluor 594-conjugated WGA for 5 min on ice and washed twice with ice-cold HBSS, and the coverslips were mounted in a Chamlide magnetic chamber (Life Cell Instrument, Seoul, Korea) and overlaid with media. Confocal microscopy was performed as detailed in the Supplementary material, and total fluorescence intensity was measured using ImageJ software. For experiments using anti-heparan sulfate and PNA, cells were washed and fixed with 2% paraformaldehyde, followed by incubation with mouse anti-heparan sulfate (1:100) and anti-PNA (1:100) for 1 h. Alexa Fluor 488-conjugated species-specific secondary antibodies were used at a dilution of 1:1,000. Nuclei of cells were stained with 0.12 µg/ml Hoechst stain (Thermo Fisher Scientific, Waltham, MA) for 5 min.

Statistics

Figures were generated with GraphPad Prism (Version 6.0c; GraphPad Software, La Jolla, CA) and data are displayed as the mean and standard deviation. Statistical analysis was performed via paired t-test or two-way ANOVA with post-hoc analysis. A p < 0.05 was considered statistically significant. In the figure legends, p-values are presented as follows: *p < 0.05, **p < 0.01, ***p < 0.001, and ****p < 0.0001.

Cyclosporine causes endothelial cell injury and complement deposition

The use of cyclosporine is associated with vascular injury in pathophysiological situations.
We, therefore, tested whether cyclosporine treatment of cultured BOECs caused endothelial cell toxicity using an established lactate dehydrogenase (LDH) assay for lytic cell death. We found that cyclosporine caused cytotoxicity of BOEC cultures in a dose- and time-dependent fashion (Supplementary Figure S1). Specifically, acute (1 h) treatment of BOECs with low concentrations (<50 µg/mL) of cyclosporine did not cause cell lysis, while a 24-h treatment of the cells with cyclosporine above 250 µg/mL led to lysis of nearly the entire culture. Intermediate concentrations of cyclosporine (50 µg/mL) caused ∼60% of the cells to rupture, and lower concentrations of 10 µg/ml did not lead to any detectable LDH release (Supplementary Figure S1). We, therefore, chose to treat BOECs within the range of 10 µg/mL (non-lethal) and 50 µg/mL (∼half-maximal lysis) concentrations of cyclosporine in subsequent experiments. To that end, confluent monolayers of BOECs were treated with these concentrations of cyclosporine in medium containing 10% fetal bovine serum (FBS) for 24 h and subsequently exposed to 50% NHS in serum-free medium (SFM) for 30 min, as established in Supplementary Figure S1. Under these conditions, we found that the treatment of BOECs with 50 µg/ml of cyclosporine caused a significant increase in complement C3 deposition (Figure 1A, MFI: cyclosporine 50 µg/mL 441.1 ± 67.1 vs. control 265.8 ± 50.1, n = 4, p = 0.023). Using lower doses of cyclosporine (10 µg/ml), we determined that the increased deposition of C3c was enhanced in the absence of serum. Factors in the serum prevented C3c deposition on the cyclosporine-treated BOEC cultures: >2.5% FBS prevented C3c deposition, while at <0.5% FBS, significantly increased C3c was detected on the surface of cyclosporine-treated cells (Figure 1B, MFI: cyclosporine 10 µg/mL in serum-free media 622.5 ± 32.72 vs. control 343.1 ± 65.84, n = 6, p < 0.01). Inhibiting the function of CD59, a membrane-anchored complement regulator, is an established means of sensitizing complement fixation on endothelial cells. Blocking CD59 with antibodies has the dual effect of inducing complement mainly via sensitization (classical pathway) but also through complement amplification (alternative pathway) (36, 38, 41-43). We were interested to examine whether cyclosporine had general effects on the membrane topology that impact C3c deposition or whether its effect was via CD59. Using the same flow cytometry approach as in Figure 1B, we found that blocking CD59 indeed led to a large increase in C3c associated with the endothelial cells (Figure 1C). However, cyclosporine treatment further increased C3 deposition ∼2-fold beyond the level achieved by blocking CD59 alone. This effect was also observed for C5b-9, with an even greater fold increase (Figure 1D). Thus, BOECs treated with cyclosporine had a dose-dependent injury concomitant with increased complement deposition that could be enhanced by the removal of serum or complement regulators.

FIGURE 1. Cyclosporine causes complement deposition on endothelial cell surfaces, enhanced by serum starvation and anti-CD59 sensitization. C3c and C9 deposition on BOEC surfaces was detected by flow cytometry; non-viable cells were excluded with Fixable Viability Dye eFluor 780. (A, B) Cyclosporine and serum reduction increased C3c deposition; (C, D) addition of anti-CD59 antibody further enhanced cyclosporine-induced C3c and C9 deposition. Asterisks denote statistical significance as described in the Statistics section.

Cyclosporine induces von Willebrand factor release from endothelial cells

Weibel-Palade bodies (WPBs) are endothelial storage granules containing pro-hemostatic and pro-inflammatory molecules, including VWF, P-selectin, interleukin-8, endothelin-1, and angiopoietin-2 (44-46). As previously demonstrated by us and others, WPBs are exocytosed upon endothelial cell injury and activation to release their contents, which potentiates inflammatory responses, vascular leakage, and leukocyte adhesion (36,45,47).
Given that cyclosporine resulted in endothelial cell injury and complement deposition, we hypothesized that cyclosporine treatment may also lead to the endothelial release of VWF. Using the previously established protocol, we first showed that complement activation indeed caused the release of intracellular VWF (Supplementary Figure S2, positive control using anti-CD59 sensitization) (36). We then found that BOECs treated with cyclosporine 10 µg/mL had less intense staining of intracellular VWF (Figure 2). Taken together, our results showed that cyclosporine induces VWF release from BOECs.

FIGURE 2. Cyclosporine induces VWF release from endothelial cells. Nuclei were stained with Hoechst; images were acquired on an inverted spinning-disk confocal microscope (Olympus, Hamamatsu EM-CCD camera, Yokogawa scan head). Treatment with cyclosporine led to less intracellular VWF and less intense VE-cadherin staining (two-tailed t-test).

Cyclosporine treatment leads to the increased expression of membrane-associated complement regulators

The regulation of the alternative pathway of complement activation is executed by a combination of fluid-phase (CFH and CFI) and membrane-bound regulators (mainly MCP/CD46, DAF/CD55, and CD59) that maintain the balance between complement activation and inhibition (8, 13). Given that cyclosporine caused an increase in complement activation on the surface of BOECs, it was conceivable that cyclosporine decreased the expression of membrane-bound complement regulators. We, therefore, assessed the expression of MCP/CD46, DAF/CD55, and CD59 on the surface of BOECs after their treatment with cyclosporine using flow cytometry, and the total cell expression of these regulators by probing cell lysates with Western blotting. We found that treatment of the BOECs with low concentrations (10 µg/mL) of cyclosporine resulted in increased surface and total cell expression of MCP/CD46, DAF/CD55, and CD59 (Figure 3). Incubation with cyclosporine at higher concentrations (50 µg/mL) resulted in a similar effect (data not shown). Thus, the increased complement deposition on the surface of cyclosporine-treated cells was not the result of a lost expression of membrane-bound complement regulators.
Cyclosporine treatment leads to impaired CFH binding and regulation on endothelial cells

Since enhanced complement deposition induced by cyclosporine occurred in the context of increased expression of membrane-bound complement regulators, we hypothesized that cyclosporine may instead impair CFH-mediated complement regulation. CFH is the central circulating alternative pathway inhibitor, which competitively prevents C3b deposition on cell surfaces, acts as a cofactor to CFI to cleave surface-bound C3b, and accelerates the decay of the C3bBb complex (48-50). To exert these functions, CFH is known to be closely associated with endothelial surfaces via its multiple glycosaminoglycan/sialic acid-binding domains (51-55). To test whether cyclosporine impaired CFH binding, we pretreated BOECs with cyclosporine and then assessed the ability of the cells to secure Alexa Fluor 488-conjugated CFH from the culture medium. The Alexa Fluor 488-labeled CFH was added for 1 min to live cells before their analysis by flow cytometry. We found that incubation of BOECs with cyclosporine at 10 µg/mL for 24 h caused a significant reduction in CFH binding (Figures 4A, B, MFI control 386.3 ± 97.8 vs. cyclosporine 10 µg/mL 78.3 ± 45.8, n = 3, p = 0.0078). A brief (1 h) treatment of the cells with neuraminidase at 500 mU/mL, an enzyme that cleaves terminal sialic acid groups from glycoproteins, was used as a positive control. The functionality of neuraminidase in cleaving sialic acid was confirmed by live imaging with wheat germ agglutinin (WGA; see the section below) and by CFH binding. Removal of sialic acids inhibited CFH binding to the endothelium to nearly the same extent as cyclosporine treatment (Figure 4B). This reduction in CFH binding on cells treated with cyclosporine 10 µg/mL for 24 h was also confirmed by immunofluorescence (Figures 4C-L, MFI control 6.43 ± 0.44 vs. cyclosporine 10 µg/mL 3.03 ± 0.26, n = 3, p < 0.001). Locally concentrating CFH at the membrane of the vascular endothelium is critical for the protection of the membrane from complement deposition. The activity of CFH, once docked to the endothelial surface, can subsequently be measured by assays that determine the degradation of complement. We assessed the functional consequences of the cyclosporine-induced reduction in CFH binding to BOECs by employing a previously established CFH surface cofactor activity assay (40). In this assay, endothelial cell-bound CFH was used as the sole source of CFH. The incubation of C3b with CFI and CFH results in C3b degradation, with the appearance of C3b fragments with molecular weights of 68 kDa (C3b α'1) and 43/46 kDa (C3b α'2), which can be detected via the same Western blotting approach. We first assessed the endogenous cofactor activity of the membrane-bound complement regulator MCP/CD46 in the absence of CFH when exposed to media (control) and various concentrations of cyclosporine (10, 50, and 100 µg/mL).
Degradation products were detectable after 90 min, with no detectable significant differences between cyclosporine concentrations (Figure 5A, Supplementary Figures 3A, C, E, G, I). Pre-incubation of BOECs with CFH resulted in the appearance of C3b degradation products after 15 min (Figure 5B), demonstrating the expected significantly higher cofactor activity of CFH on endothelial surfaces. However, when BOECs were pre-incubated with neuraminidase, followed by incubation with CFH and subsequently with C3b and CFI in the absence of additional CFH, degradation products were detectable only after 60 min. This result was in keeping with a lack of surface CFH in cells devoid of sialic acids (Figure 5C). We then assessed the effect of cyclosporine exposure on the cofactor activity of surface-bound CFH. BOECs exposed to increasing doses of cyclosporine (10, 50, and 100 µg/mL for 24 h) demonstrated a dose-dependent decrease in CFH cofactor activity, as evidenced by the later appearance of C3b degradation products: cyclosporine 10 µg/mL after 45 min, cyclosporine 50 µg/mL after 45 min, and cyclosporine 100 µg/mL after 90 min (Figures 5D-F). Taken together, we found decreased cofactor activity of CFH on BOECs pre-treated with cyclosporine (Figure 5G).

Cyclosporine treatment weakens the endothelial glycocalyx with reduced CFH surface binding

CFH has been reported to bind to endothelial surfaces via its glycosaminoglycan/sialic acid-binding domains (51-55). Since removing sialic acids with neuraminidase ablated CFH binding to the same extent as cyclosporine treatment, we assessed whether cyclosporine exerted its inhibitory effects on CFH binding via remodeling of the glycocalyx. We first stained glycans/polysaccharides containing sialic acid and N-acetyl-D-glucosamine using the lectin wheat germ agglutinin (WGA) conjugated to Alexa Fluor 594. Of note, we imaged the cells live, as fixation resulted in a dramatic decrease in overall fluorescence. To prevent endocytosis of the lectin, incubation with Alexa Fluor 594-WGA was performed in the cold (4 °C). We determined a decrease in Alexa Fluor 594-WGA staining in BOECs treated with neuraminidase at 500 mU/mL for 1 h, under conditions identical to those that inhibited CFH binding (Figures 5C, 6A, B: MFI neuraminidase 18,459 ± 6,154 vs. control 32,525 ± 8,990, p < 0.0001). Treatment with cyclosporine at 10 µg/mL also resulted in less intense staining with Alexa Fluor 594-WGA when compared to control (Figures 6C, D: MFI cyclosporine 10 µg/mL 18,752 ± 6,154 vs. control 32,525 ± 8,990, p < 0.0001). The decrease in the WGA signal in cyclosporine-treated cells was more apparent in the clusters on the apical surface of the endothelial cells and less visible at cell-cell junctions (Figures 6A, C). We further assessed whether cyclosporine had additional effects on the endothelial glycocalyx, specifically on the surface density of heparan sulfates. Heparan sulfates are covalently attached to proteoglycans (e.g., syndecans and glypicans) processed in the Golgi apparatus. These side chains can be detected by immunostaining: while the polysaccharides may not be immunogenic on their own, good antibodies against them in the context of proteoglycans have been generated and made commercially available. We, therefore, immunostained non-permeabilized control or cyclosporine-treated endothelial cells with anti-heparan sulfate antibodies.
When compared to control, treatment with cyclosporine at 10 µg/mL resulted in an ∼60% decrease in the intensity of heparan sulfate per cell (Figures 6E-N, paired, two-tailed t-test). Treatment with cyclosporine 10 µg/mL and heparinase III led to a similar decrease in CFH (Supplementary Figure S4: MFI cyclosporine 10 µg/mL 3.03 ± 0.26 vs. heparinase III 4.11 ± 0.20 vs. control 6.43 ± 0.44, p < 0.0001). Finally, the modifications to the glycocalyx upon cyclosporine treatment could be the result of overactive hydrolases (i.e., glycosidases or proteases) or the result of mistrafficking and altered expression of proteoglycans and glycoproteins. To determine whether surface glycoproteins in cyclosporine-treated cells were devoid of sialic acids, we used a lectin, peanut agglutinin (PNA), that recognizes exposed, terminal galactose sugars. We found that cyclosporine-treated cells did not have cleaved sialic acids from their surface glycoproteins.

FIGURE 4. Cyclosporine reduces CFH binding to BOEC surfaces. Treatment with neuraminidase, which cleaves sialic acid groups from glycoproteins, also reduced CFH binding (paired, two-tailed t-tests). CFH binding was additionally assessed by immunofluorescence; mean fluorescence intensity from three sets of experiments was measured with ImageJ, confirming reduced CFH binding on cyclosporine-treated BOEC surfaces.

FIGURE 5. Cyclosporine causes impaired complement factor H regulation on the surfaces of endothelial cells, detected by a CFH surface cofactor activity assay. BOECs were incubated with C3b and CFI, with or without pre-incubation with CFH, and the appearance of C3b degradation fragments was analyzed by Western blotting. Panels A-F show representative blots for control, CFH-loaded, neuraminidase-treated, and cyclosporine-treated (increasing doses) conditions; panel G gives a graphical summary of the cofactor activity experiments.
For statistical analysis of the cofactor assay, we formulated a ratio of the mean gray value of an α' split-product band to the mean gray value of the α' chain band; an increased ratio indicates that the α' chain was cleaved into its split products, indicative of C3b inactivation. There was a significant reduction in CFH cofactor activity on the surfaces of BOECs treated with cyclosporine or neuraminidase when compared with control (paired, two-tailed t-tests). Taken together, our findings suggest that cyclosporine treatment results in endothelial glycocalyx breakdown via the loss of surface glycoproteins and heparan sulfates, which leads to impaired CFH surface binding.

Discussion

Calcineurin inhibitor use is associated with acute and chronic tubulo-interstitial, arteriolar, and glomerular injury (27, 32). While possible mechanisms of injury relate to vasoconstriction-associated ischemia, increased platelet aggregation, activation of prothrombotic factors, and disruption of vascular endothelial growth factor (VEGF) regulation of angiogenesis (56), evolving evidence also suggests the involvement of the complement system (34). The association between CNI use and the development of TMA in patients (28,30,31), and the observation of complement deposition in areas of endothelial injury in kidney biopsy specimens affected by CNI toxicity, hint at the involvement of complement (57). Animal models of CNI toxicity implicate the complement system and offer explanations of how further complement-mediated injury can be propagated (34,35). However, the exact mechanism by which CNIs induce complement activation is still unknown. Our findings shed light on the pathogenesis of CNI toxicity and specifically identify complement activation on the vascular endothelium as a mechanism. To our knowledge, we are the first to establish an in vitro model utilizing BOECs to study the effect of cyclosporine and complement activation on endothelial cells. We found that cyclosporine treatment causes complement deposition and endothelial cell injury, which results in VWF release from Weibel-Palade bodies. Our findings suggest a role for complement-mediated endothelial cell injury induced by cyclosporine and, for the first time, implicate CFH surface dysregulation in cyclosporine-induced complement activation on endothelial cells. CFH, a plasma protein acting as a cofactor to CFI-mediated cleavage of C3b, must recognize and bind to endothelial cell glycocalyx glycosaminoglycans and terminal sialic acid residues via short consensus repeats (SCRs) 6-8 and 19-20 (48, 51-54). Adapting a previously described flow cytometry protocol for quantifying the binding of CFH and a previously established method of assessing the surface cofactor activity of CFH (39,40), we found that cyclosporine treatment led to decreased CFH binding to endothelial cell surfaces and impaired CFH surface cofactor activity. In these assays, we also treated BOECs with neuraminidase to test whether the absence of sialic acid on the glycocalyx of endothelial cells affected the binding and surface cofactor activity of CFH. The neuraminidase used (derived from Clostridium perfringens) primarily targets sialic acids in the α2,3 (and, to a lesser extent, α2,6 and α2,8) configuration and can cleave terminal sialic acid from O-linked glycans, N-linked glycans, and glycolipids.
Of particular interest, we found that neuraminidase treatment led to a similar impairment of CFH surface binding and cofactor activity, suggesting the possibility that cyclosporine affects CFH binding to endothelial cell surfaces through a reduction of the glycocalyx. Utilizing live-cell imaging of endothelial cells stained with wheat germ agglutinin (WGA), which binds to sialic acid and N-acetylglucosaminyl residues within the endothelial cell glycocalyx, we found that cyclosporine and neuraminidase treatment significantly diminished the endothelial cell glycocalyx. Furthermore, we found that cyclosporine-induced endothelial cell glycocalyx breakdown occurred mainly through the loss of heparan sulfate. Taken together, these findings suggest that cyclosporine treatment leads to the shedding of heparan sulfate from the endothelial cell glycocalyx, impairing CFH recognition of and binding to host endothelial cell surfaces and thereby its surface regulation of the alternative pathway. The inability of CFH to inactivate C3b covalently bound to endothelial cell surfaces results in an uninhibited amplification loop that allows for the full activation of the complement cascade. This mechanism of alternative pathway dysregulation by CFH could potentially be generalized to other forms of TMA in which endothelial cell glycocalyx injury is involved. Contrary to our initial hypothesis, we found that cyclosporine treatment caused increased expression of the surface membrane-bound complement regulators MCP/CD46, DAF/CD55, and CD59, a possible compensatory cellular response to cyclosporine treatment and the resultant impaired CFH regulation of the alternative pathway. MCP/CD46 aids in the inactivation of C3b as a cofactor in the CFI-catalyzed cleavage of C3b, DAF/CD55 accelerates the disintegration of the C3 and C5 convertases, and CD59 prevents the formation of the membrane attack complex (C5b-9) by binding to C8. The failure of CFH to bind to endothelial cell surfaces and exert its function, induced in our model by cyclosporine, leads to an increased C3b load which, when not tightly regulated, is amplified through the formation of C3 convertases and even more C3b, eventually leading to the activation of the terminal pathway. In this context, we speculate that increasing the expression of the other complement regulatory armamentarium would be in the host endothelial cells' best survival interest.

FIGURE 7. Summary of the proposed mechanism: endothelial cells exposed to cyclosporine have decreased glycocalyx density, leading to complement alternative pathway (AP) dysregulation via decreased CFH surface binding and cofactor activity; this mechanism of endothelial cell and glycocalyx injury leading to complement AP dysfunction could potentially apply to other forms of secondary thrombotic microangiopathy (TMA).

When cyclosporine was reconstituted in standard endothelial growth medium, there was increased complement deposition (C3 and C9) with cyclosporine 50 µg/ml or higher. When reconstituted in serum-free media, increased complement deposition occurred with cyclosporine 10 µg/ml, suggesting that serum-starved BOECs were more susceptible to cyclosporine-induced complement deposition.
Incubating cells with an anti-CD59 blocking antibody, an established model to induce complement deposition on endothelial cells (36, 41-43), led to further enhancement of cyclosporine-induced complement deposition on endothelial cells. Given the ∼2-fold increase in surface expression of CD59 after exposure to cyclosporine, the fact that the anti-CD59 is a monoclonal IgG2b antibody, an isotype that activates complement via the classical pathway, and the fact that anti-CD59 inhibits the action of the surface-bound complement regulator CD59, the increased complement deposition on endothelial cells induced by cyclosporine is likely due to anti-CD59 antibody-initiated activation of the classical pathway, exacerbated by a reduced capacity to regulate the amplification propagated via the alternative pathway (36, 58). Within our model, we found an optimal balance of endothelial cell survival and CNI effect with cyclosporine doses between 10 and 100 µg/ml for up to 24 h. In the clinical setting, the therapeutic target trough range for cyclosporine is maintained between 100 and 400 ng/ml but varies depending on the indication for its use, the type of transplant, the use of concomitant immunosuppression, and time post-transplant. Suggested target 2-h post-dose levels could be as high as 2 µg/ml (59). In vitro experimental studies of the cyclosporine effect on various endothelial cell lines used a wide range of drug concentrations, from 0.1 µg/ml to 4000 µg/ml, over varying exposure durations of up to 72 h (25, 34, 60-64). Although the levels of cyclosporine maintained clinically are lower than those used in experimental in vitro studies, they are not directly comparable. This is a limitation of in vitro models of disease, and the differences reflect the varying susceptibility of different endothelial cell lines as well as interspecies differences. The duration of exposure used in in vitro models is also limited to 24-72 h, whereas many patients are on life-long immunosuppression. To our knowledge, we are the first to study the effect of cyclosporine utilizing BOECs. In conclusion, we found that cyclosporine leads to injury of the endothelial cell glycocalyx and breakdown of heparan sulfate that negatively impacts CFH regulation of the alternative pathway of complement via decreased CFH binding to the endothelial cell surface (Figure 7). Enhanced susceptibility to complement-mediated injury secondary to impaired regulation of the alternative pathway might represent a shared mechanism of endothelial injury applicable to various forms of (secondary) TMA, including those caused by toxic agents, mechanical stress, and autoantibodies, which warrants further elucidation. Data availability statement The original contributions presented in the study are included in the article/Supplementary material. Further inquiries can be directed to the corresponding author. Ethics statement The study was approved by the Research Ethics Board of the Hospital for Sick Children (SickKids), Toronto, ON. Signed written informed consent was obtained from all volunteers whose samples were used in the study. The study was performed in keeping with the Declaration of Helsinki. Author contributions CWT designed and coordinated the project, performed experiments, interpreted the results, and wrote the initial and subsequent revised versions of the manuscript. MR designed the project, performed experiments, interpreted the results, and reviewed the manuscript. CO-S performed experiments, interpreted the results, and reviewed the manuscript. 
SF designed experiments, interpreted the results, and reviewed the manuscript. JP, JL, AB-H, VB, and EB performed experiments and reviewed the manuscript. LR interpreted the results and reviewed the manuscript. CL designed and coordinated the project, interpreted the results, and reviewed the manuscript. All authors contributed to the article and approved the submitted version.
8,671.6
2023-02-13T00:00:00.000
[ "Medicine", "Biology" ]
Narcolepsy Diagnosis With Sleep Stage Features Using PSG Recordings Narcolepsy is a sleep disorder affecting millions of people worldwide and causes serious public health problems. It is hard for doctors to diagnose narcolepsy correctly and objectively. Polysomnography (PSG) recordings, a gold standard for sleep monitoring and quality measurement, can provide abundant and objective cues for narcolepsy diagnosis. There have been some studies on automatic narcolepsy diagnosis using PSG recordings. However, the sleep stage information, an important cue for narcolepsy diagnosis, has not been fully utilized. For example, some studies have not considered sleep stage information when diagnosing narcolepsy. Although other studies do consider sleep stage information, the stages are manually scored by experts, which is time-consuming and subjective. Moreover, existing frameworks that use automatically scored sleep stages for narcolepsy diagnosis are designed in a two-phase learning manner, with sleep staging in the first phase and diagnosis in the second, causing cumulative error and degrading performance. To address these challenges, we propose a novel end-to-end framework for automatic narcolepsy diagnosis using PSG recordings. In particular, adopting the idea of multi-task learning, we take sleep staging as our auxiliary task and then combine the sleep stage related features with narcolepsy related features for our primary task of narcolepsy diagnosis. We collected a dataset of PSG recordings from 77 participants and evaluated our framework on it. Both the sleep stage features and the end-to-end design contribute to diagnosis performance. Moreover, we present a comprehensive analysis of the relationship between sleep stages and narcolepsy, the correlation of different channels, the predictive ability of different sensing data, and diagnosis results at the subject level. I. INTRODUCTION Sleep plays a critical role in promoting mental and physical health [1], [2]. Problems with the quality, timing, and amount of sleep severely interfere with normal physical, mental, social, and emotional functioning. Such problems are brought about by sleep disorders, which affect millions of people worldwide and cause serious public health problems [3]. There are about 50 to 70 million people in America suffering from a chronic sleep or wakefulness disorder [4], such as narcolepsy, insomnia, restless legs syndrome, and sleep apnea. Among these disorders, narcolepsy, characterized by excessive daytime sleepiness and brief episodes of involuntary sleep, may severely interfere with work or social commitments in daily life [1]. Patients suffer from sudden onsets of and irresistible urges to sleep. Meanwhile, about 70% of affected patients also experience episodes of sudden loss of muscle strength, known as cataplexy [5]. Moreover, narcolepsy tends to occur among relatively young people, with the peak onset between 15 and 36 years of age [6]. It is extremely harmful to young people's physical and mental health and can even lead to a variety of complications, such as depression, mania, bipolar disorder, and schizophrenia. Given the great harm caused by narcolepsy, it is critical to diagnose it in a timely manner, so as to protect mental and physical health. 
Early diagnosis of narcolepsy is typically based on the presented symptoms. In clinical practice, doctors usually determine subjectively whether one has narcolepsy by asking the patient through direct inquiries or questionnaires. In this way, misdiagnosis may occur, because narcolepsy and other sleep disorders share some similar symptoms and patients may not describe their symptoms accurately and objectively enough. In fact, since people with narcolepsy are often misdiagnosed with other conditions, such as psychiatric disorders or emotional problems, it can take years for someone to get the proper diagnosis [6]. Due to the difficulty in diagnosing narcolepsy, a comprehensive, objective, and high-quality approach is urgently needed to help diagnose narcolepsy. With the development of biomedical engineering and sleep medicine, polysomnography (PSG) in hospitals or sleep centers has become the most effective way to understand the sleep status of subjects. PSG consists of electroencephalogram (EEG), electrooculogram (EOG), electromyography (EMG), and other physiological signals (e.g., electrocardiogram (ECG), nasal pressure, and body position). PSG recordings are typically segmented into epochs of 30-second duration, each of which is manually assigned a sleep stage by an expert or technician. This process of sleep staging follows the rule of the American Academy of Sleep Medicine (AASM) sleep standard [7], which defines five different sleep stages: Wake (W), rapid eye movement (REM), and three types of non-REM sleep (N1, N2, N3). A real example of PSG signals in our dataset is given in Fig. 1. Here, EEG, EOG, and EMG signals in each sleep stage are presented. Given the richness and objectivity of these sensing recordings, PSG has been considered the gold standard for sleep monitoring [8], [9], [10], [11], [12], [13], [14], [15], [16], [17], [18], [19]. Some studies [20], [21], [22], [23] have adopted traditional machine learning methods for automatic sleep staging from PSG. Other studies [11], [14], [24], [25], [26], [27], [28] have proposed deep learning models to predict sleep stages from PSG. PSG thus provides abundant and objective cues for narcolepsy diagnosis. 
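As a concrete illustration of the epoching convention described above, the short sketch below segments a single PSG channel into non-overlapping 30-second epochs. It is a minimal sketch assuming a raw NumPy array sampled at 512 Hz (the rate reported for this dataset); the function name and the synthetic signal are illustrative only, not part of the authors' pipeline.

```python
import numpy as np

def segment_into_epochs(signal: np.ndarray, fs: int = 512, epoch_sec: int = 30) -> np.ndarray:
    """Split a 1-D PSG channel into non-overlapping 30-second epochs.

    signal: raw samples for one channel (e.g., a single EEG derivation)
    fs: sampling rate in Hz (512 Hz in the dataset described above)
    epoch_sec: epoch length in seconds (30 s per the AASM standard)
    Returns an array of shape (num_epochs, fs * epoch_sec); trailing samples
    that do not fill a whole epoch are dropped.
    """
    samples_per_epoch = fs * epoch_sec
    n_epochs = len(signal) // samples_per_epoch
    return signal[: n_epochs * samples_per_epoch].reshape(n_epochs, samples_per_epoch)

# Example: roughly 8 hours of one EEG channel at 512 Hz -> 960 epochs of 15360 samples
eeg = np.random.randn(8 * 3600 * 512)
epochs = segment_into_epochs(eeg)
print(epochs.shape)  # (960, 15360)
```

Each such epoch would then receive one AASM stage label from an expert, which is exactly the unit of annotation the studies below build on.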
There have been some studies using PSG signals for sleep disorder diagnosis [29], [30], [31], including narcolepsy.Some studies [30], [32] extracted hand-crafted features from PSG recordings, and then fed them into traditional machine learning classifiers (e.g., random forest) for identifying narcolepsy.The sequential relationship within each epoch and between epochs is missed, which is important due to the sequential nature of sleep.With the development of deep learning, the deep neural network is used for narcolepsy diagnosis.However, there are still some limitations that the sleep stage information, an important cue for narcolepsy diagnosis, has not been fully utilized.For example, some studies have not considered the sleep stage information to diagnose narcolepsy.The difference of sleep stage label could be potential biomarkers to classify narcolepsy [33], and the PSG signals belonging to different sleep stages have different performance on narcolepsy diagnosis [31].However, some studies ignore such information, directly using PSG recordings for narcolepsy diagnosis [30], [34].Although the sleep stage information is considered for diagnosis by some studies, the sleep stages are manually scored by experts, which is time-consuming and requires incredible amount of human labor.In some previous studies, the sleep stage labels are first manually assigned, and then combined with PSG recordings for narcolepsy diagnosis [31], [33], [35].Besides, although some studies scored sleep stages automatically, the methods for narcolepsy diagnosis are designed in a two-phase learning manner, and then they are combined with PSG signals for diagnosis in the second phase [36].The sleep stages scored in the first phase contain incorrect labels which could not be well optimized in the second phase, causing cumulative error and degrading the performance for the disorder diagnosis. In order to address the limitations mentioned above, we propose a novel end-to-end framework for narcolepsy diagnosis from PSG signals.We automatically score the sleep stages, and then take advantages of them for narcolepsy diagnosis, by adopting the idea of multi-task learning [37].To evaluate the framework, we collected a dataset of PSG recordings in our cooperated hospital, consisting of 50 narcolepsy patients and 27 people without disabilities.For convenience, we will later call narcolepsy patients as "patients", and people without disabilities as "normals".Compared with other approaches, our framework achieves the state-of-the-art performance.Our contributions are as follows: • Considering that PSG recordings are the gold standard for sleep monitoring, we collected a dataset of PSG recordings in the cooperated hospital from 50 narcolepsy and 27 healthy people to analyze the relationship between sleep stages and narcolepsy and evaluate our method. In the future, we will release the dataset. 
• We design a novel end-to-end framework for automatically diagnosing narcolepsy from PSG recordings by adopting the idea of multi-task learning and setting sleep staging as the auxiliary task. Experimental results show that both the sleep stage related features and the end-to-end fashion significantly contribute to the performance of narcolepsy diagnosis.

In the collection procedure, each participant was asked to stay in a special ward in the hospital. Before collection, multiple sensors first had to be placed on each participant's body. The technician put more than 20 wired attachments, including the pulse oximeter, pressure transducer, thermocouple, and electrodes, on different positions of the subject's body (such as the head, eyes, nose, chin, and legs). After that, each subject lay in bed and gradually fell asleep. The wired attachments then collected physiological signals from different parts of the subject's body. The PSG recordings were collected according to the AASM sleep standard [7]. During the collection process, EEG, EOG, ECG, chin EMG, and leg EMG signals were sampled at 512 Hz, which captures fine-grained information in these signals. For each subject, we collected her/his PSG recordings for one whole night, from about 21:00 to 5:00 the next morning, about 8 hours in total. All signals were stored using the standard EDF+ data format with the .edf extension. The recordings were segmented into epochs of 30 seconds, and then each epoch was manually labeled with a sleep stage by sleep experts or technicians according to the AASM standard [7], including Wake, N1, N2, N3, REM, MOVEMENT, and UNKNOWN. To ensure a fair comparison, we initially performed preprocessing on the datasets and subsequently evaluated all the methods using the same prepared datasets. Some signals such as EEG, EOG, chin EMG, and ECG were band-pass filtered and notch filtered. In subsequent experiments, we removed the epochs annotated as MOVEMENT or UNKNOWN. B. Dataset Analysis In order to give a better understanding of our SSND dataset, we analyze it from different perspectives. The statistical results for the REM stage are consistent with the previous discovery that patients with narcolepsy typically have higher REM sleep density than normals [38]. To further analyze the relationship between sleep stage distribution and narcolepsy, we conducted a significance test on the number of epochs in each sleep stage and whether a subject is a patient or a normal, shown in Fig. 2. Here, the "p" value is an indicator of the difference between the patients and the normals for each stage. The "Sig" is an indicator of significance. From Fig. 2 we can see that the p values in the Wake, N1, N2, N3, and REM stages are respectively 0.1545, 0.0609, 0.5783, 0.0078, and 0.0012. "Sig:ns" denotes p ≥ 0.05, indicating that there is no significant difference between patients and normals. "Sig:**" denotes p ≤ 0.01, which indicates that there is a significant difference between patients and normals. Obviously, compared with the other stages, the differences in the N3 and REM stages between patients and normals are more significant (p = 0.0078 in the N3 stage and p = 0.0012 in the REM stage). This result indicates that patients with narcolepsy are more likely to enter the N3 and REM stages than normals. 2) Hypnogram Analysis: To further analyze the relationship between sleep stages and narcolepsy, we compare two examples of hypnograms manually scored by a sleep expert from two whole-night PSG recordings of a patient and a normal in Fig. 3. 
Hypnogram is a graph that represents the stages of sleep as a function of time. Hypnograms are usually obtained by scoring the recordings from EEG, EOG, and EMG. From Fig. 3(a) we can see that transitions of sleep stages happen frequently in a patient with narcolepsy. On the contrary, the stage transitions of the normal subject in Fig. 3(b) are much less frequent. This further indicates that known sleep stages can help diagnose narcolepsy. Therefore, we introduce a sleep staging task as an auxiliary task [39] in our deep learning model for narcolepsy diagnosis, which supports the primary task of narcolepsy diagnosis and improves its performance. 3) Correlation Analysis of Different Channels: In previous work, EEG, EOG, EMG, and ECG have frequently been given higher importance compared to other signals. Here, we investigate the correlation between different modalities by calculating the Pearson correlation coefficient between signals from 13 important channels of EEG, EOG, EMG, and ECG. The heatmaps of the Pearson correlation coefficients are shown in Fig. 4. Firstly, the heatmaps of all the subjects, the patients, and the normals are similar in our dataset, which illustrates that the overall results of the Pearson correlation coefficient on patients and normals are consistent. Then, the values of the Pearson correlation coefficient between the 6 EEG channels are high, especially the value between F4 and C4. This illustrates that single-channel EEG may achieve performance similar to that of the fusion of 6 EEG channels. It is worth noting that the value of the Pearson correlation coefficient between Chin1-Chin2 EMG and Chin3-Chin2 EMG is high, which shows that the two chin EMG channels are similar and single-channel chin EMG may represent the information of both chin EMG channels. A. Problem Formulation Our model is designed in an end-to-end fashion, which processes a sequence of sleep epochs and outputs a narcolepsy prediction together with a sequence of predicted sleep stages. We denote x ∈ R^{n×C} as a sleep epoch, where n is the number of sampling points in a sleep epoch and C is the number of channels. The input sequence of sleep epochs is defined as X = {x_1, x_2, x_3, ..., x_L}, where L is the length of the sequence. For automatic sleep staging, we denote the number of sleep stages as N, and N = 5 (Wake, N1, N2, N3, REM), according to the AASM sleep standard [7]. We define Ŷ = {ŷ_1, ŷ_2, ŷ_3, ..., ŷ_L} as the sequence of sleep stages corresponding to X = {x_1, x_2, x_3, ..., x_L}, where ŷ_i ∈ {0, 1}^N is the one-hot encoding of the ground-truth sleep stage of x_i. For automatic narcolepsy diagnosis, we denote the number of narcolepsy diagnosis classes as M, and M = 2 (patient and normal). ẑ ∈ {0, 1}^M is defined as a one-hot encoding of the ground-truth narcolepsy diagnosis. For a sequence of sleep epochs, the model outputs a single diagnosis. Therefore, our learning task is defined as learning a mapping function F that maps a sequence of sleep epochs X into the corresponding sequence of sleep stages Ŷ and a narcolepsy diagnosis ẑ. 
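To make the problem formulation above concrete, the following sketch builds one training example with the shapes implied by the definitions (L = 20 epochs, N = 5 stages, M = 2 diagnosis classes). It is a hedged illustration: the single-channel setup and the 100 Hz resampled rate are taken from implementation details reported later in the paper, and the variable names are ours, not the authors'.

```python
import torch

L, N_STAGES, M_CLASSES = 20, 5, 2      # sequence length, sleep stages, diagnosis classes
C, FS, EPOCH_SEC = 1, 100, 30          # channels (e.g., F4-M1 only), resampled rate, epoch length
n = FS * EPOCH_SEC                     # sampling points per epoch (3000 after resampling to 100 Hz)

# One training example, as formulated above:
X = torch.randn(L, n, C)                                                       # sequence of L epochs, each in R^{n x C}
Y = torch.nn.functional.one_hot(torch.randint(0, N_STAGES, (L,)), N_STAGES)    # per-epoch one-hot stage labels
z = torch.nn.functional.one_hot(torch.tensor(1), M_CLASSES)                    # sequence-level one-hot diagnosis label

print(X.shape, Y.shape, z.shape)        # torch.Size([20, 3000, 1]) torch.Size([20, 5]) torch.Size([2])
```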
B. Overview In order to diagnose narcolepsy from PSG recordings, we design an end-to-end framework, which captures the sequential relationships within each epoch and between epochs, automatically scores sleep stages, and combines the scored stages with PSG recordings for narcolepsy diagnosis. Specifically, we adopt the idea of multi-task learning and take sleep staging as our auxiliary task, which contributes to the performance improvement of our primary task: narcolepsy diagnosis. In the auxiliary task, we automatically score the sleep stages and simultaneously learn the sleep stage features that are then combined with narcolepsy features extracted from PSG recordings for the primary task. Our deep learning model is illustrated in Fig. 5. The model consists of seven modules: the Epoch Feature Extraction Module, the Sequence Feature Extraction Module, the Sleep Stage Feature Mapping Module, the Narcolepsy Feature Mapping Module, the Epoch-level Sleep Stage Classifier, the Task Feature Fusion Component, and the Sequence-level Narcolepsy Classifier. Existing studies [30], [31] show that local salient wave features from each epoch are also helpful for disease diagnosis, such as narcolepsy. Hence, in our deep learning network, we design an Epoch Feature Extraction Module to extract local features within each epoch. The Epoch Feature Extraction Module consists of convolutional neural network (CNN) layers, Batch Normalization [40], and the GELU [41] activation function. Existing studies on sleep staging from PSG [8], [9], [10], [11] have proved that a CNN is able to capture the local features of significant waveforms. Therefore, we utilize a CNN to extract local features from salient waveforms within each epoch. We feed the sleep sequence X = {x_1, x_2, x_3, ..., x_L} into the Epoch Feature Extraction Module. The process is as follows: X_j = MaxPooling_j(G(BN(Conv_j(X_{j-1})))), followed by X_epoch = AvgPooling(X_J), where X_j is the j-th feature map (X_0 is X), Conv_j is the j-th convolution layer of the Epoch Feature Extraction Module, BN is Batch Normalization, G is the GELU activation function, MaxPooling_j is the j-th max pooling layer, and AvgPooling is an average pooling layer. Finally, the Epoch Feature Extraction Module outputs the epoch features X_epoch = {x_1^epoch, x_2^epoch, ..., x_L^epoch}. In previous work on automatic sleep staging [12], [13], [14], the Transformer or multi-head attention has been used to model the global temporal context and achieves high performance. Inspired by these studies, we use a Transformer Encoder as the Sequence Feature Extraction Module, which can encode global context features through multi-head attention. The Transformer layer, just like the standard Transformer [42], adopts scaled dot-product attention, which is defined as follows: Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V, where the matrices Q, K, and V consist of queries, keys, and values, respectively, and d_k is the dimension of the keys. We feed the epoch features X_epoch into the Sequence Feature Extraction Module. The process is as follows: X_seq = Transformer(X_epoch), where Transformer is the standard Transformer Encoder and X_seq = {x_1^seq, x_2^seq, ..., x_L^seq} are the sequence context features. Fig. 6. Illustration of the multi-task process (here only L = 4 for visual purposes). Here, FC is a fully-connected layer, and + is element-wise addition. 
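The sketch below is one possible PyTorch rendering of the two feature extractors described in this section: a per-epoch CNN with Batch Normalization, GELU, and pooling, followed by a Transformer encoder over the epoch sequence. Layer counts, kernel sizes, and strides are illustrative assumptions; only the overall structure (Conv followed by BN, GELU, and max pooling, average pooling over time, d = 512, and 8 attention heads) follows the text.

```python
import torch
import torch.nn as nn

class EpochFeatureExtractor(nn.Module):
    """CNN over a single 30-s epoch: Conv -> BatchNorm -> GELU -> MaxPool, then average pooling."""
    def __init__(self, in_channels: int = 1, d: int = 512):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(in_channels, 64, kernel_size=50, stride=6), nn.BatchNorm1d(64), nn.GELU(),
            nn.MaxPool1d(4),
            nn.Conv1d(64, d, kernel_size=8, stride=1), nn.BatchNorm1d(d), nn.GELU(),
            nn.MaxPool1d(2),
        )
        self.avg_pool = nn.AdaptiveAvgPool1d(1)    # collapse the time axis to one d-dim vector per epoch

    def forward(self, x):                           # x: (batch * L, C, n)
        return self.avg_pool(self.conv(x)).squeeze(-1)   # (batch * L, d)

class SequenceFeatureExtractor(nn.Module):
    """Transformer encoder over the sequence of epoch features (global context between epochs)."""
    def __init__(self, d: int = 512, n_heads: int = 8, n_layers: int = 2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=d, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, x_epoch):                     # x_epoch: (batch, L, d)
        return self.encoder(x_epoch)                # (batch, L, d)

# Shape check with a batch of 4 sequences of L = 20 single-channel epochs (3000 samples each)
epochs = torch.randn(4 * 20, 1, 3000)
feats = EpochFeatureExtractor()(epochs).view(4, 20, 512)
context = SequenceFeatureExtractor()(feats)
print(context.shape)                                # torch.Size([4, 20, 512])
```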
D. Narcolepsy Diagnosis With Sleep Staging Features As Fig. 2 and Fig. 3 in Section II-B show, there are significant differences in the proportion and transition of sleep stages between patients and normals. Some existing studies [31], [33] have proved that known sleep stage information can improve the performance of narcolepsy diagnosis. Therefore, we try to take advantage of sleep staging for narcolepsy diagnosis. Here, we adopt the idea of multi-task learning, where we take sleep staging as the auxiliary task to automatically extract sleep stage features for narcolepsy diagnosis. 1) Auxiliary Task: Sleep Staging: For the auxiliary task, the staging loss is the cross-entropy between the predicted and ground-truth stages, L_stage = -Σ_i Σ_j ŷ_{i,j} log y_{i,j}, where y_{i,j} ∈ R, the j-th element of y_i, denotes the probability that the i-th epoch is predicted to belong to the j-th sleep stage class, and ŷ_{i,j} ∈ {0, 1}, the j-th element of ŷ_i, denotes whether the i-th epoch actually belongs to the j-th class. 2) Primary Task: Narcolepsy Diagnosis: Our narcolepsy diagnosis process is shown in Fig. 6. Considering that the task of narcolepsy diagnosis is sequence-level, we calculate the average of the sequence context features X_seq before feeding it into an MLP. The process of narcolepsy feature mapping is as follows: x^narcolepsy = MLP((1/L) Σ_i x_i^seq), where x_i^seq is the i-th feature of the sequence context features X_seq, MLP is a multilayer perceptron consisting of two fully-connected layers, and x^narcolepsy ∈ R^{d'} is the narcolepsy feature. To make sleep staging serve as the auxiliary task of narcolepsy diagnosis, we design a Task Feature Fusion Component to fuse the sleep stage features and the narcolepsy feature together. The Task Feature Fusion Component adds the sleep stage features to the narcolepsy feature element-wise, where x_i^stage is the i-th feature of the sleep stage features X_stage and x^fusion ∈ R^{d'} is the fused narcolepsy feature. Then we feed x^fusion into the Sequence-level Narcolepsy Classifier, which consists of a fully-connected layer and a softmax function, to obtain z ∈ R^M. z is the predicted probability over the M narcolepsy classes for the sleep sequence. We use the cross-entropy loss for the diagnosis, L_narcolepsy = -Σ_j ẑ_j log z_j, where z_j ∈ R, the j-th element of z, denotes the probability that the sequence is predicted to belong to the j-th narcolepsy diagnosis class, and ẑ_j ∈ {0, 1}, the j-th element of ẑ, denotes whether the sequence actually belongs to the j-th narcolepsy diagnosis class. E. Joint Training In the training procedure, sleep staging and narcolepsy diagnosis are jointly trained with the same objective function, which consists of two parts, the staging loss and the diagnosis loss, described in Equation (10), where λ is the coefficient balancing the two loss functions for sleep staging and narcolepsy diagnosis. IV. EXPERIMENT A. Performance Measurement and Implementation We use ACC (accuracy) and F_1-score (F_1) to measure model performance. In particular, given that the task of sleep staging is a multi-class classification problem, we replace the F_1 score with the Macro F_1 score. In other words, we calculate the F_1 scores in a class-wise manner and report the mean value to obtain the Macro F_1 score. Inspired by most existing methods on automatic sleep staging, we adopted a subject-wise 6-fold cross-validation policy by dividing the subjects in the dataset into 6 groups. In each fold, five groups were used for training and the remaining one for testing, ensuring that the data from the same subject never appear in the training set and testing set simultaneously. In addition, we ensured that each fold has the same number of subjects and includes both patients and normals. The details of data splitting are shown in Tab. III. 
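A minimal sketch of the multi-task heads and the joint objective described in Sections D and E is given below. The element-wise fusion, the averaging of stage features before fusion, and the exact weighting of the two cross-entropy terms are assumptions consistent with, but not guaranteed to match, Equation (10) and the Task Feature Fusion Component in the paper; d' = 128 and λ = 0.5 follow the reported implementation details.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskHeads(nn.Module):
    """Sleep-stage (auxiliary) and narcolepsy (primary) heads with element-wise feature fusion."""
    def __init__(self, d: int = 512, d_task: int = 128, n_stages: int = 5, m_classes: int = 2):
        super().__init__()
        self.stage_map = nn.Linear(d, d_task)                        # Sleep Stage Feature Mapping Module
        self.narco_map = nn.Sequential(nn.Linear(d, d_task), nn.GELU(), nn.Linear(d_task, d_task))
        self.stage_clf = nn.Linear(d_task, n_stages)                 # Epoch-level Sleep Stage Classifier
        self.narco_clf = nn.Linear(d_task, m_classes)                # Sequence-level Narcolepsy Classifier

    def forward(self, x_seq):                                        # x_seq: (batch, L, d)
        x_stage = self.stage_map(x_seq)                              # (batch, L, d_task)
        stage_logits = self.stage_clf(x_stage)                       # per-epoch stage predictions
        x_narco = self.narco_map(x_seq.mean(dim=1))                  # average over the sequence, then MLP
        x_fusion = x_narco + x_stage.mean(dim=1)                     # element-wise addition of task features
        narco_logits = self.narco_clf(x_fusion)                      # sequence-level diagnosis prediction
        return stage_logits, narco_logits

def joint_loss(stage_logits, stage_labels, narco_logits, narco_labels, lam: float = 0.5):
    """Cross-entropy for both tasks, combined with a coefficient lambda (0.5 in the reported setup)."""
    l_stage = F.cross_entropy(stage_logits.reshape(-1, stage_logits.size(-1)), stage_labels.reshape(-1))
    l_narco = F.cross_entropy(narco_logits, narco_labels)
    return l_narco + lam * l_stage
```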
We implemented our deep learning model based on PyTorch [43]. We evaluated EEG, EOG, ECG, EMG (including chin EMG and leg EMG), and nasal pressure from the PSG recordings with our deep learning model. The model was trained using the Adam optimizer with default settings, and the learning rate was set to 1e-4. The mini-batch size was set to 32 and the dropout [44] rate was set to 0.1. We adopted an early stopping [45] policy in the training process: if the model does not achieve a better performance for ten consecutive epochs, training ends. B. Compared Methods In our experiment, we compared our proposed method with the following approaches on sleep staging and narcolepsy diagnosis. For fair comparison, all the approaches were evaluated on the same dataset and adopted the subject-wise training policy: SVM (Support Vector Machine) [46] uses a Gaussian kernel function for automatic sleep staging and narcolepsy diagnosis. RF (Random Forests) [47] is an ensemble learning method. CNN (Convolutional Neural Network) is used as the feature extractor of raw PSG recordings for automatic sleep staging and narcolepsy diagnosis. CNN + RNN, where CNN is used to extract local features within each epoch and RNN is used to extract context features from an epoch sequence. Transformer is used as the feature extractor of PSG recordings for automatic sleep staging and narcolepsy diagnosis. C. Overall Results We first compared our model with other approaches for sleep staging and narcolepsy diagnosis on single-channel EEG (F4-M1). Previous studies have proved that using EEG achieves good performance [8], [36]. Here, all the approaches were evaluated using EEG signals for sleep staging and narcolepsy diagnosis. As we can see from Tab. IV, our method achieves the best performance. SVM and RF perform the worst, about 16% lower in accuracy than our method (65.85% and 67.21% vs. 81.24% on sleep staging, and 61.66% and 61.78% vs. 78.94% on narcolepsy diagnosis). A. Analysis of Sleep Staging Task To investigate the effectiveness of the auxiliary task of sleep staging, the Task Feature Fusion Component, and the end-to-end manner, we compared our model with the following three methods: Single-Task Method: We set the single-task method as a baseline method, where we ablate the Sleep Stage Feature Mapping Module, the Task Feature Fusion Component, and the Epoch-level Sleep Stage Classifier from our model. No-Fusion Method: We set the no-fusion method as another baseline method, where we only ablate the Task Feature Fusion Component. This model can be used to classify sleep stages and narcolepsy, but the sleep stage features and narcolepsy features are not fused together for narcolepsy diagnosis. Two-Phase Method: In the two-phase method, the sleep staging is automatically scored in the first phase, and the narcolepsy is diagnosed in the second phase. The two tasks are trained separately. For fair comparison, we set the same hyperparameters for these models as for our model. The results of the ablation experiments are shown in Tab. V. 
From Tab.V we can see that single-task method performs the worst, 2.97% lower in accuracy and 2.76% lower in F 1 than our model, on narcolepsy diagnosis (75.97% v.s.78.94% in accurasy and 82.69% v.s.85.46% in F 1 ).It is reasonable that the single-task method without sleep staging task can not well extract features and learn the transition rules about sleep stages, which can help classify narcolepsy.No-fusion method performs close to our model on sleep staging (80.84% v.s.81.24% in accuracy and 75.04% v.s.74.85% in Macro-F 1 ).Obviously, ablating Task Feature Fusion Component has no significant impact on performance of sleep .These indicate that two phase method works well in sleep staging.However, the sleep stages scored in the first phase contain incorrect labels which could not be well optimized in the second phase, causing cumulative error and leading the poor performance for the disorder diagnosis.All the results prove the importance of setting sleep staging as the auxiliary task for narcolepsy diagnosis. B. Analysis of Highly Correlated Channels As shown in Fig. 4, some channels are highly correlated, such as the six EEG channels and the two chin EMG channels.In EEG, F4-M1 and C4-M1 channels are highly correlated.In EMG, the correlation coefficient between two chin EMG is high.Channels that exhibit high correlation with each other can lead to information redundancy.Among them, we could choose only one channel to feed into our deep learning model to achieve a high performance.Here, we tested the model performance when using single channel and using highly correlated channels, respectively.Specifically, we evaluated our model on single channel (F4-M1, C4-M1, F3-M2, C3-M2, O2-M1, O1-M1 in EEG and Chin1-Chin2, Chin3-Chin2, LegL, LegR in EMG), two highly correlated channels (F4-M1 + C4-M1 and Chin1-Chin2 + Chin3-Chin2), all EEG channels and all EMG channels, shown in Tab.VI. For EEG, when only using F4-M1, our model achieves the best performance on narcolepsy diagnosis (78.94% in accuracy and 85.45% in F 1 ).Compared with F4-M1, our model using C4-M1 performs a little worse on sleep staging (81.24% v.s.80.45% in accuracy and 74.85% v.s.74.31% in Macro-F 1 ) and narcolepsy diagnosis (78.94% v.s.77.10% in accuracy and 85.45% v.s.83.48% in F 1 ).In addition, our model using other single-channel EEG performs much worse than F4-M1 Authorized licensed use limited to the terms of the applicable license agreement with IEEE.Restrictions apply. C. Analysis of Multiple Modalities In this experiment, we investigated the predictive abilities of different modalities and their combination for narcolepsy diagnosis, including EEG, EOG, EMG, ECG, Nasal Pressure, EEG+EOG, EEG+EMG, EEG+ECG, and EEG+Nasal Pressure.In single-modailty experiments, the EEG here refers to the single-channel EEG of F4-M1 channel which achieves the best performance in narcolepsy diagnosis.[7], EOG is also an important standard for experts to assign sleep stages.In addition, EMG, ECG and Nasal pressure are not so helpful for sleep staging for which all the accuracy are lower than 60%, but they are relatively useful for narcolepsy diagnosis for which all the accuracy values are higher than 70%. When using the combined modalities, on sleep staging, our model using EEG+EOG performs the best, about D. Subject-Level Case Study In order to give a better understanding of the narcolepsy diagnosis, we selected one patient from our dataset to illustrate her/his hypnogram through the whole night.Here, we present Fig. 7. 
Fig. 7. Case study on one patient. In (c), "Yes" denotes that the sequence is predicted as narcolepsy; "No" denotes that the sequence is not predicted as narcolepsy. We present the ground-truth hypnogram and the hypnogram automatically scored by our model for this patient, shown in Fig. 7(a) and (b), respectively. When our model is given a sequence of 20 epochs as input, it outputs one diagnosis result. In this way, for each subject, there are multiple diagnosis results. Therefore, we can determine the diagnosis result at the subject level. Specifically, for each subject, we take all of her/his diagnosis results into account, and if more than 50% of the results classify the subject as having narcolepsy, we determine that she/he has narcolepsy. In this way, our model achieves 100% accuracy in subject-level narcolepsy diagnosis. This suggests that we can improve the robustness of our model by aggregating multiple sequence-level diagnosis results. E. Limitations We must acknowledge the limitations of the dataset used in this work. First, due to the difficulty in recruiting a large number of patients, the total number of subjects in our dataset was relatively small, 77 in total. All the subjects in our dataset are from China, and the conclusions we obtained are mainly for a Chinese population. Second, the types of sleep disorders in our dataset were limited to narcolepsy. There are many other sleep disorders, such as insomnia, restless legs syndrome, and sleep apnea, that endanger people's health. We could not study these sleep disorders with our dataset. Finally, there are many challenges in classifying narcolepsy into fine-grained categories, including type 1 narcolepsy, type 2 narcolepsy, and unspecified narcolepsy. The labels provided were limited to narcolepsy and normal, without fine-grained categories. In the future, we will continue to study sleep disorders and try to address these challenges. VI. CONCLUSION In the clinic, it is difficult for doctors to diagnose narcolepsy correctly and objectively. In this paper, we address the problem of diagnosing narcolepsy automatically and objectively using PSG signals. We collected a dataset of PSG recordings from 77 participants. We propose a novel end-to-end framework for narcolepsy diagnosis, which embeds the sequential relationship within each epoch and between epochs in PSG signals, automatically scores the sleep stages, and combines the sleep stage related features with narcolepsy features for narcolepsy diagnosis. In particular, we adopt the idea of multi-task learning, where we take sleep staging as the auxiliary task and narcolepsy diagnosis as the primary task. The framework was evaluated on the collected dataset, and the results show that both the sleep stage features and the end-to-end fashion help diagnose narcolepsy. Moreover, we present a comprehensive analysis of the PSG recordings, including the importance of sleep staging for the diagnosis, highly correlated channels, and the predictive ability of different modalities (e.g., EEG, EOG, EMG, and ECG). Fig. 2. Significance test on the number of epochs in each sleep stage for normals and patients. Fig. 3. The hypnogram of one whole-night recording from (a) one patient and (b) one normal. 
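The subject-level rule described above (a subject is diagnosed with narcolepsy when more than 50% of their sequence-level predictions are positive) amounts to a simple majority vote. The sketch below is illustrative only; the subject identifiers and predictions are made up.

```python
from collections import defaultdict

def subject_level_diagnosis(sequence_preds):
    """Aggregate sequence-level predictions (subject_id, 0/1) into per-subject diagnoses by majority vote."""
    votes = defaultdict(list)
    for subject_id, pred in sequence_preds:
        votes[subject_id].append(pred)
    # A subject is diagnosed with narcolepsy if more than 50% of their sequences are predicted positive.
    return {s: int(sum(p) / len(p) > 0.5) for s, p in votes.items()}

# Example: subject "P01" has 3/4 positive sequences -> narcolepsy; "N01" has 1/4 -> normal
preds = [("P01", 1), ("P01", 1), ("P01", 0), ("P01", 1), ("N01", 0), ("N01", 0), ("N01", 1), ("N01", 0)]
print(subject_level_diagnosis(preds))   # {'P01': 1, 'N01': 0}
```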
Here, each epoch feature x_i^epoch ∈ R^d, where d is the feature dimension. 2) Sequence Feature Extraction Module: Transition patterns of sleep stages between epochs play a critical role in sleep staging [7]. Therefore, modeling the relationship between sleep epochs in a sequence is helpful for sleep staging. In addition, for narcolepsy diagnosis, extracting global context features from the sequence of sleep epochs avoids being limited to the local characteristics of the waveform within an epoch. In other words, modeling the sleep sequence can expand the receptive field of the model to learn global characteristics of the waveform, which can improve the performance of narcolepsy diagnosis. Due to the effectiveness of modeling global relationships, we propose a Sequence Feature Extraction Module to extract context features between epochs in a sleep sequence. Meanwhile, we present the narcolepsy diagnosis results obtained by our model, shown in Fig. 7(c). It can be seen from Fig. 7(a) and (b) that the sleep stages of most epochs of this patient are correctly scored by our model, and only a few epochs are misclassified. It is difficult to correctly score sleep stages with rapid sleep transitions; the sequential relationship among such sleep fragments is hard to model. As we can see from Fig. 7(c), our model correctly diagnoses narcolepsy for most sequences. We first design the Epoch Feature Extraction Module to extract the local features within each epoch of the raw PSG signals. Then, the epoch features are input to the Sequence Feature Extraction Module. Next, we design two task-guided feature mapping modules, the Sleep Stage Feature Mapping Module and the Narcolepsy Feature Mapping Module. The Sleep Stage Feature Mapping Module is used to map features for sleep staging, and the Narcolepsy Feature Mapping Module is used to map features for narcolepsy diagnosis. The sequence features are fed into the Sleep Stage Feature Mapping Module and the Narcolepsy Feature Mapping Module to obtain sleep stage features and narcolepsy features, respectively. Then, the sleep stage features are fed into the Epoch-level Sleep Stage Classifier to predict sleep stages and are also fed into the Task Feature Fusion Component with the narcolepsy features to obtain fused narcolepsy features. Finally, the fused narcolepsy features are fed into the Sequence-level Narcolepsy Classifier to diagnose narcolepsy. For automatic sleep staging, extracting features from local salient waveforms within each epoch can help classify sleep stages at the epoch level. In addition, existing studies on sleep disorders support this. Feature Extraction Module. 1) Epoch Feature Extraction Module: Local salient wave features are critical in sleep staging for sleep experts [7]. The mapped task features lie in R^{d'}, where d' is the task-guided feature dimension. After mapping the sequence context features X_seq into sleep stage features X_stage, we feed X_stage into the Epoch-level Sleep Stage Classifier, which consists of a fully-connected layer and a softmax function, to obtain Y = {y 1 , y 2 , y 3 , . . 
., y L }, where y i ∈ R N is the predicted probability in N sleep stage classes of the i-th epoch.We use the cross-entropy (CE) function as sleep staging loss function: epochs, the ends.The Transformer block of Sequence Feature Extraction Module has 8 heads and 512 hidden states.We set the length of sleep epoch sequence as L = 20, feature dimension as d = 512, task-guided feature dimension as d = 128 and the coefficient of two loss functions as λ = 0.5.Before being fed into deep learning model, EEG, EOG, ECG, EMG and Nasal Pressure signals were resampled to 100Hz.We trained the model on the machine with Intel Core i9 10900K CPU and eight NVIDIA RTX 3080 GPUs. 67.21% v.s.81.24% on sleep staging and 61.66% v.s.61.78% v.s.78.94% on narcolepsy diagnosis).It indicates the sequential relationship in EEG signals is important for the diagnosis, which the traditional machine learning methods cannot model yet.CNN performs worse than our method on sleep staging (79.32% v.s.81.24% in accuracy and 72.09% v.s.74.85% in Macro-F 1 ) and narcolepsy diagnosis (72.82% v.s.78.94% in accuracy and 80.72% v.s.85.45% in F 1 ), indicating fully CNN without context features extractor cannot well model sequential relationship between epochs, which helps for the disorder diagnosis.CNN+RNN, where CNN is used as Authorized licensed use limited to the terms of the applicable license agreement with IEEE.Restrictions apply. TABLE V ANALYSIS OF AUXILIARY TASK IN SINGLE-CHANNEL EEG (F4-M1) epoch feature extractor and RNN is used as sequence feature extractor, performs worse than our method on sleep staging (79.91% v.s.81.24% in accuracy and 71.87% v.s.74.95% in Macro-F 1 ) and on narcolepsy diagnosis (74.81% v.s.78.94% in accuracy and 81.81% v.s.85.45% in F 1 ).Transformer, using fully Transformer to capture local and global features from EEG signals, performs worse about 1.7% accuracy and 1.9% F 1 than our method on narcolepsy diagnosis (76.34% v.s.78.94% in accuracy and 83.59% v.s.85.45% in F 1 ).The Transformer cannot well extract local features within each epoch from EEG signals.Compared with other approaches, our deep learning model, using CNN as epoch feature extractor and Transformer as sequence feature extractor, utilizing sleep staging as the auxiliary task, can well model local and global features from EEG signals and make full use of sleep stage information to improve the performance of narcolepsy diagnosis. TABLE VI THE RESULTS OF USING SINGLE CHANNEL AND MULTIPLE CHANNELS staging.However, on narcolepsy diagnosis, no-fusion method performs 1.84% lower in accuracy and 0.9% lower in F 1 than our model (77.10% v.s.78.94% in accuracy and 84.55% v.s.85.45% in F 1 ).It further indicates that the sleep stage features can improve the performance of narcolepsy diagnosis.Two phase method performs close to our model on sleep staging (80.69% v.s.81.24% in accuracy and 74.58% v.s.74.85% in Macro-F 1 ).However, for narcolepsy diagnosis, it performs 1.83% lower in accuracy and 2.89% lower in F 1 than our model (77.11% v.s.78.94% in accuracy and 82.56% v.s.85.45% in F 1 TABLE VII THE RESULTS OF USING SINGLE MODALITIES AND MULTI MODALITIES 15% v.s.80.40% v.s.79.97% in F 1 ).In Tab. 
4, the values of Pearson correlation coefficient between Chin EMG and Leg EMG are low.It indicates that Leg EMG contains different information from Chin EMG, which performs worse on sleep staging but performs better on narcolepsy diagnosis than Chin EMG.It is worth noting that our model in all EMG channels performs better than Chin3-Chin2 on narcolepsy diagnosis (74.18% v.s.70.88% in accuracy and 80.57% v.s.79.97% in F 1 ).It further indicates that Leg EMG can help provide effective features for narcolepsy diagnosis. Table VII shows the performance comparison.As we can see from Tab. VII, when using single modality of EEG, our method achieves the best performance (81.24% in accuracy and 74.85% in Macro-F 1 ) on sleep staging and the best performance (78.94% in accurasy and 85.45% in F 1 ) on narcolepsy diagnosis compared with other single-modality results, indicating that EEG is the most predictive for sleep staging and narcolepsy diagnosis in PSG recordings.Using EOG also has a good performance.The sleep staging results are close to EEG (81.11% v.s.81.24% in accuracy, 74.28% v.s.74.85% in Macro-F 1 ), but the narcolepsy diagnosis results are lower than EEG (76.63% v.s.78.94% in accuracy and 84.06% v.s.85.46% in F 1 ).According to AASM sleep standard
8,295
2023-09-06T00:00:00.000
[ "Psychology", "Computer Science" ]
Optimization, Characterization, and Antibacterial Activity of Copper Nanoparticles Synthesized Using Senna didymobotrya Root Extract The economic burden and high mortality associated with multidrug-resistant bacteria is a major public health concern. Biosynthesized copper nanoparticles (CuNPs) could be a potential alternative to combat bacterial resistance to conventional medicine. This study for the first time aimed at optimizing the synthesis conditions (concentration of copper ions, temperature, and pH) to obtain the smallest size of CuNPs, characterizing and testing the antibacterial efficacy of CuNPs prepared from Senna didymobotrya ( S. didymobotrya ) roots. Extraction was done by the Soxhlet method using methanol as the solvent. Gas chro-matography-mass spectrometry (GC-MS) analysis was performed to identify compounds in S. didymobotrya root extracts. Box–Behnken design was used to obtain optimal synthesis conditions as determined using a particle analyzer. Characterization was done using ultraviolet-visible (UV-Vis), particle size analyzer, X-ray diffraction, zeta potentiometer, and Fourier transform infrared (FT-IR). Bioassay was conducted using the Kirby–Bauer disk diffusion susceptibility test. The major compounds identified by GC-MS in reference to the NIST library were benzoic acid, thymol, N-benzyl-2-phenethylamine, benzaldehyde, vanillin, phenylacetic acid, and benzothiazole. UV-Vis spectrum showed a characteristic peak at 570nm indicating the formation of CuNPs. The optimum synthesis conditions were temperature of 80 ° C, pH 3.0, and copper ion concentration of 0.0125M. The FT-IR spectrum showed absorptions in the range 3500–3400cm − 1 (N-H stretch), 3400–2400cm − 1 (O-H stretch), and 988–830 cm − 1 (C-H bend) and peak at 1612cm − 1 (C � C stretch), and 1271cm − 1 (C-O bend). Cu nanoparticle sizes were 5.55–63.60 nm. The zeta potential value was − 69.4 mV indicating that they were stable. The biosynthesized nanoparticles exhibited significant antimicrobial activity on Escherichia coli and Staphylococcus aureus with the zone of inhibition diameters of 26.00 ± 0.58mm and 30.00 ± 0.58mm compared to amoxicillin clavulanate (standard) with inhibition diameters of 20 ± 0.58mm and 28.00 ± 0.58mm, Introduction Nanotechnology is of great scientific interest due to its wide application in pharmaceutical products, electronics, biotechnology, and medicine [1,2]. Nanoparticles are solid particles with sizes approximately extending from 1 nm to 100 nm in length in at least one dimension [3]. eir application in the field of biotechnology has grown because of their comparable size range scale to biomolecules and their versatile properties that can be controlled using the method used for their biosynthesis [4]. Copper nanoparticle (CuNP) is one of the most common nanoparticles utilized in medicine. ey have been synthesized using both physical and chemical methods [4]. Physical methods experience low time 2,6,4′-trihydroxy-trans-stilbene (a stilbenoid derivative) and 4-(2′-oxymethylene-4′-hydroxyphenyl) chrysophanol (a phenyl anthraquinone) from chloroform/ methanol extract of S. didymobotrya roots. Later, chromatographic separation of the hexane and dichloromethane extracts of S. didymobotrya roots led to the identification of terpenoids (3β-sitosterol and stigmasterol) and anthraquinones (chrysophanol and physcion) [35]. Recently, terpinolene and alpha-pinene were reported as the main antipyretic compounds in dichloromethane extract of S. didymobotrya leaves [27]. 
ough some pharmacological activities and toxicity of different extracts of S. didymobotrya parts have been reported, there is no report on the synthesis of CuNPs and antimicrobial activity of nanoparticles synthesized from this plant. e current study therefore for the first time investigated the synthesis of CuNPs using S. didymobotrya roots extracts and their antimicrobial efficacy against E. coli and Staphylococcus aureus. Since the synthesis of nanoparticles of smaller and uniformly distributed size, crystalline, and good stability requires control of experimental conditions [47,48], synthesis conditions (concentration of copper ions, temperature, and pH) for the CuNPs were optimized. Chemicals and Reagents. Copper (II) sulphate pentahydrate (CuSO 4 ·5H 2 O), anhydrous sodium sulphate, silica gel, and methanol were purchased from Merck Ltd., USA. All the chemicals and reagents were of analytical grade and were used without further purification. E. coli (ATCC 25922), S. aureus (ATCC 25923), Kirby-Bauer disks, amoxicillin clavulanate, and 0.5 McFarland standards were obtained from Cypress Diagnostics, Belgium. Sample Collection and Preparation. S. didymobotrya roots were harvested from plants growing in their natural habitat in West Uyoma sublocation, Siaya County, Kenya (0°15 ′ 8S 34°16 ′ 02.8E). ey were identified and authenticated at the Department of Biological Sciences, Moi University (Kenya), where a voucher specimen (SD 2018/03) was deposited for future reference. e collected roots were washed several times with distilled water to remove dust. ey were dried at room temperature under shade for three weeks. After, they were chopped into small pieces and pulverized using a laboratory mill. e extraction was carried out according to the method described by Kigondu et al. [49] with slight modifications. Weighed 50 g of the root powder was transferred into the Soxhlet apparatus and extracted with 250 mL of methanol for 48 hours. Methanol was used as the solvent of extraction because it was the best solvent of extraction according to trial extractions done using diethyl ether, methanol, and distilled water. e crude extract was concentrated by rotary evaporation at 40°C and transferred to a desiccator containing anhydrous sodium sulphate. e percentage yield of the crude extract was determined as per the following equation [50]: extractive yield value � weight of concentrated extract weight of plant dried powder × 100. (1) Gas Chromatography-Mass Spectrometry Analyses. GC-MS analysis was performed using an Agilent 8890A GC system interfaced with a 5977B mass spectrometer detector fused with a capillary column (30 × 0.25 mm, 0.25 μm). For GC-MS detection, an electron ionization system was operated in electron impact mode with an ionization energy of 70 eV. Helium gas (99.999%) was used as a carrier gas at a constant flow mode of 1.2 ml/min, and an injection volume of 2 μL was employed (a split ratio of 10:1). e injector temperature was maintained at 250°C; the ion-source temperature was 200°C; and the oven temperature was programmed from 60°C (for 1.5 min), with an increase of 20°C/min to 220°C, then 5°C/min to 280°C (4 min), and ending with a 10 min isothermal at 280°C. Mass spectra were taken at 70 eV. e Jet-Clean Ion Source temperature was at 320°C, and MS Quadrupole was at 180°C with a scan interval of 0.5 s. e solvent delay was 0 to 3 min, and the total GC-MS running time was 36 min. 
Identification of the peaks was based on computer matching of the mass spectra with the National Institute of Standards and Technology (NIST 08) library; direct comparison with published data was also utilized. Synthesis, Optimization, and Characterization of CuNPs. The synthesis was carried out at different temperatures (40, 60, and 80°C) and pH values (3, 6.5, and 10). The reaction mixture was centrifuged at 5,000 rpm for 5 min to remove any free biomass residue. The supernatant was again centrifuged at 12,000 rpm for 40 min to obtain pellets. The pellets of CuNPs were resuspended using distilled water. The reduction of Cu ions was measured by a UV-Vis spectrophotometer (Beckman Coulter DU 720, Beckman Coulter Inc., USA) after 4 hours. The fitted second-order model for the mean particle size is given as equation (2), where Conc = concentration and Temp = temperature. Ultraviolet-Visible Spectroscopy. Synthesized CuNPs (300 μL) were diluted with 3 mL of distilled water and scanned on a UV-Vis spectrophotometer (Beckman Coulter DU 720, Beckman Coulter Inc., USA) from 300 to 700 nm at a resolution of 1 nm using distilled water as the blank [52]. Particle Size Analysis. The sizes of the synthesized CuNPs were measured using a particle size analyzer (Microtrac Nanotrac Wave II, SL-PS-25 Rev. H) with a laser diode detector. X-Ray Diffraction Analysis. The synthesized CuNPs were subjected to an X-ray diffraction (XRD) analyzer operated at a voltage of 40 kV and 20 mA with copper Kα radiation in the θ-2θ configuration with a scanning rate of 0.030°/s. The crystallite size (CS) was calculated using the Debye-Scherrer equation as follows [52,53]: CS = Kλ/(β cos θ), where the constant K = 0.94, λ = 1.5406 × 10⁻¹⁰ m, θ is the Bragg angle, and β is the full width at half maximum (FWHM) in radians (β = FWHM × π/180). FT-IR Analysis. FT-IR analysis was performed to identify functional groups bound on the surface of the CuNPs. The specimen and potassium bromide granules were powdered together in a ratio of 1:100 (w/w) and then compressed into pellets. Subsequently, the analysis was performed and measured using an FT-IR spectrophotometer in the range of 400-4,000 cm−1 and with a resolution of 4 cm−1 [52,53]. Antibacterial Activity of the Synthesized CuNPs from the S. didymobotrya Root Extract. The antimicrobial efficacy of the biosynthesized CuNPs was assessed using the Kirby-Bauer disk diffusion susceptibility test protocol [54]. The test microorganisms were chosen according to the National Committee for Clinical Laboratory Standards 2010 protocols [55]. Gram-negative E. coli and Gram-positive S. aureus were tested. Amoxicillin clavulanate impregnated antimicrobial susceptibility testing discs were used as a positive control. All bioassays were done with 30 μL of a solution of CuNPs resuspended in distilled water, S. didymobotrya root extract, and copper sulphate solution, as per the specification of the positive control (amoxicillin clavulanate). After 18 hours of incubation, the zone of inhibition diameter (ZOI) was measured to the nearest millimetre using a ruler and recorded. The susceptibility or resistance of the test organism to each drug tested was determined using the published Clinical and Laboratory Standards Institute (CLSI) criteria. The ZOI was classified as susceptible (S), intermediate (I), or resistant (R) based on the CLSI interpretive criteria [50]. Results and Discussion The percentage yield of S. didymobotrya root powder was 9.94% as calculated using equation (1). 
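For reference, the Debye-Scherrer relation reconstructed above can be evaluated as follows. The FWHM value in the example is purely illustrative (it is not a measured value from this study); K = 0.94 and λ = 1.5406 Å follow the text, and with these example inputs the formula returns a crystallite size of roughly 6 nm, the order of magnitude reported for the biosynthesized CuNPs.

```python
import math

def scherrer_crystallite_size(fwhm_deg: float, two_theta_deg: float,
                              k: float = 0.94, wavelength_nm: float = 0.15406) -> float:
    """Debye-Scherrer crystallite size CS = K * lambda / (beta * cos(theta)).

    fwhm_deg: full width at half maximum of the diffraction peak, in degrees
    two_theta_deg: peak position 2-theta, in degrees
    Returns the crystallite size in nanometres.
    """
    beta = math.radians(fwhm_deg)             # convert FWHM to radians (FWHM * pi / 180)
    theta = math.radians(two_theta_deg / 2)   # Bragg angle
    return k * wavelength_nm / (beta * math.cos(theta))

# Illustrative values only: a peak at 2-theta = 43.30 deg (the Cu (111) reflection) with an assumed 1.5 deg FWHM
print(round(scherrer_crystallite_size(1.5, 43.30), 2), "nm")   # about 5.95 nm
```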
GC-MS Results. The GC-MS analysis of the S. didymobotrya methanolic root extract was conducted to identify the active phytochemicals that might take part in the fabrication of CuNPs. The results indicated that the extract contained mainly fatty acids and some volatile organic compounds. The compounds, along with their retention times, abundances, molecular formulae, and molecular weights, are presented in Table 1. The major compounds identified were benzoic acid, thymol, n-benzyl-2-phenethylamine, benzaldehyde, vanillin, phenylacetic acid, and benzothiazole. There is a paucity of literature on volatile compounds in S. didymobotrya. This study presented the first comprehensive report on the GC-MS analysis of volatile compounds in a S. didymobotrya extract. Previously, Mworia et al. [27] reported the presence of terpinolene and alpha-pinene as the main antipyretic compounds in the dichloromethane extract of S. didymobotrya leaves using GC-MS. None of the foregoing compounds were identified in this study. Interestingly, some compounds identified in the methanolic extract of S. didymobotrya roots in this study have the potential to take part in the formation of nanoparticles. For instance, alizarin (a dihydroxyanthraquinone with two hydroxyl groups on a phenyl ring) possesses a structure similar to compounds proposed to take part in the chelation and reduction of copper ions to CuNPs [2,56]. Synthesis and Characterization of CuNPs. Bioreduction of copper ions to CuNPs on exposure to the methanolic extract of S. didymobotrya roots was monitored by observing the colour change and using UV-visible spectroscopy. There was a gradual colour change from a light orange solution to dark brown, indicating the formation of CuNPs after 4 hours [57][58][59][60][61]. Pretrial runs indicated that no significant changes occurred after 3 hours. Usually, small metal nanoparticles absorb visible electromagnetic waves through the collective oscillation of conduction electrons at the surface, a phenomenon known as the surface plasmon resonance (SPR) effect [62]. Thus, the final dark colour observed could be ascribed to the excitation of surface plasmon vibrations, indicating the formation of CuNPs [10,52]. Copper oxides are thermodynamically more stable than copper sulphates, which leads to the aggregation and oxidation of copper without proper protection [62]. Thus, the addition of the S. didymobotrya root extract might have inhibited the oxidation of copper, thereby acting as a reducing and capping agent for the CuNPs [10]. The UV-Visible spectrum of the methanolic root extract of S. didymobotrya (Figure 1(a)) showed a band at λmax 338 nm (band II). The band at 338 nm (band II) can be due either to an n → π* transition or to a combination of n → π* and π → π* transitions of heteroatoms linked in a double bond. The presence of quercetin, a class of flavonoids, has also been reported as a major constituent of the crude aqueous root extract of S. didymobotrya [63]. The observed transitions are probably related to quercetin involved in the reduction process and formation of CuNPs via π-electron interactions [56,64]. Hence, the extract of S. didymobotrya roots further acted as a reductant and stabilizer agent. The UV-Vis spectrum of the CuNPs (Figure 1(b)) showed changes in the absorbance maxima due to surface SPR, demonstrating the formation of CuNPs [65,66]. The SPR peak, which is a signature of the formation of CuNPs, appears in the visible region [67,68] at 542, 570, 604, 616, 638, 662, and 694 nm with absorbances of 0.064, 0.153, 0.066, 0.064, 0.065, 0.072, and 0.970, respectively (Figure 1). 
According to Mei's theory, the occurrence of a single UVvisible peak in the UV-Visible spectrum of synthesized nanoparticles confirms that they are spherical in shape [58]. Table 2 contains the list of experimental runs and the corresponding responses obtained from the experiments projected by Box-Behnken design. e design optimized parameters that would yield CuNPs with the least average particle size. e experiments were done as per the run order to eliminate experimental bias. e mean particle size of CuNPs was recorded on particle size analyzer (Nanotrac). Design of Experiments and Optimization Analysis. A regression coefficient (R 2 ) of 0.9964 was obtained with a second-order quadratic equation generated for the optimization process. e adequacy of the model was checked using ANOVA. e predictor variables, that is, pH, concentration of copper ions, and temperature, of the mixture were all significant [47]. e value of p ≤ 0.05 indicated that pH (p ≤ 0.001) is the most influencing factor when compared to the concentration of copper ion (p ≤ 0.003) and temperature of the mixture (p ≤ 0.001). Variance inflation factors (VIF) value close to 1 indicates that the predictors are not correlated (Table 3) [69]. e qualities of the fitted models were evaluated based on the coefficients of determination (R 2 ) that was 0.9964. e model explains 99.64% of the variation in the average size data. e adjusted R 2 is 99.00%. R 2 (pred) is 94.31%, which indicates that the model explains 94.31% of the variation in the average size of CuNPs when used for prediction. Journal of Nanotechnology Figure 3 presents an interaction effects plot for mean size for Cu NPs. From the plot, it is seen that there was the interaction of temperature and concentration and the interaction of concentration and pH as shown by lines intersecting at a point, but there was no possible interaction of temperature and pH as indicated by lines being approximately parallel from each other. Effect of pH. e pH of range 3-10 was varied during CuNPs average size optimization process. e study revealed that pH as a parameter strongly influenced the size of CuNPs as shown by Figure 3 of the interaction effects plot for mean particle size. e least average size of the nanoparticles was recorded at a lower pH of 3.0. It was observed that increasing the pH increased the mean size of the nanoparticles. Similar observations have been reported by Honary et al. [70] and Dang et al. [62]. A possible explanation for this observation is that at a pH of 3.0, nanoparticles were experiencing high electrostatic repulsion, hence reducing agglomeration. erefore, at alkaline pH, the nanoparticles were exhibiting lower electrostatic forces hence allowing particle growth. Effect of Copper Ion Concentration. Copper ion concentration (0.0125-0.05 M) was varied for CuNPs average size optimization. e least mean size of nanoparticles was recorded at lower concentrations of copper salt as revealed in Figure 3. is finding agrees with previous findings [70,71] that reported that high salt ion concentrations led to large particle sizes and broad size distribution of synthesized nanoparticles. is could be because a low concentration of salt reduced the probability of coppercopper interactions, hence reducing agglomeration. Effect of Temperature. A temperature of range 40-80°C was controlled for CuNPs mean size optimization. e study showed that an increase in temperature from 40-80°C led to a reduction in the mean size of CuNPs. 
Previous research has reported a similar temperature effect [71]. This could be due to possible agglomeration at lower temperatures (Figure 4). According to Figure 5, the predicted average particle size is 1.7862 nm. Increasing the temperature yields nanoparticles of smaller particle size, and a decrease in salt concentration and pH favours the synthesis of CuNPs with the smallest mean size. These observations agree with those of Dang et al. [62]. Previous studies [72,73] indicated that the pH of the aqueous medium influences the copper reduction reaction in CuNP synthesis. The probable kinetic enhancement is thus conducive to a reduction in crystallite size because of the increased nucleation rate [62]. Particle Size Analysis. Particle size analysis was conducted for thirteen (13) samples of CuNPs prepared at varied conditions of pH of the reaction medium, copper ion concentration, and temperature of the solution. The smallest particles were the CuNPs prepared at 80 °C, pH 3.0, and a copper ion concentration of 0.03125 M (Figure 6). X-Ray Diffraction Results. The XRD peaks were assigned in comparison with the standard powder diffraction card of the Joint Committee on Powder Diffraction Standards (JCPDS card no. 89-2838). The peak positions were consistent with metallic copper of a crystalline nature. The X-ray diffraction spectrum (Figure 7) revealed diffraction peaks at 2θ values of 43.30°, 50.02°, and 73.41° corresponding to the Miller indices (111), (200), and (220), respectively, which represent the face-centred cubic structure of copper [66]. Further, the peak at 30.0° showed that a small amount of copper is oxidized to copper(II) oxide. The average size of the CuNPs as determined using the Debye-Scherrer formula was 6 nm, which is close to the 5.55 nm established by the XRD analysis. The crystal size under 100 nm confirmed the nanocrystalline nature of the biosynthesized CuNPs, which was below 15 nm [57]. Similar results were reported by other researchers from XRD structure analysis of biosynthesized CuNPs [57,58,60]. Figure 4: 3D response surface curves: (a) mean particle size versus concentration and pH, (b) mean particle size versus temperature and concentration, and (c) mean particle size versus temperature and pH. Zeta Potential of the CuNPs. The zeta potential value of the biosynthesized CuNPs was −69.4 mV (Figure 8). This indicated that the surfaces of the biosynthesized CuNPs possessed strong electrostatic repulsion and hence good stability. A recent study [61] indicated that CuNPs of size 82.32 nm had a negative zeta potential of −11.9 mV. Such negative zeta potentials suggest that the charge distribution of the nanoparticles, as well as their sizes, could play a role in promoting or enhancing their biological properties [74]. In other words, a high negative zeta potential translates into strong repulsion between the particles, enhancing their stability [75]. FT-IR Analysis. FT-IR analysis was done to identify the functional groups of the phytochemicals that participated in synthesizing the CuNPs and in their stabilization. The spectrum shown in Figure 9 revealed a broad band in the range 3,400-3,500 cm−1 characteristic of the N-H stretch of amines and amides, and a band in the range 3,400-2,400 cm−1 indicating the presence of O-H of carboxylic acids, alcohols, and phenols. The peak at 1,612.25 cm−1 is assigned to C=C of alkenes and aromatic compounds.
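Before the remaining FT-IR band assignments, a small numeric check of the Scherrer estimate quoted above. Only the 2θ value of 43.30° comes from the text; the X-ray wavelength (Cu Kα), shape factor, and peak width below are assumed illustrative inputs, not the study's measured values.

```python
# Debye-Scherrer estimate D = K * lambda / (beta * cos(theta))
import numpy as np

K = 0.9                    # shape factor (assumed)
wavelength = 0.15406       # Cu K-alpha wavelength in nm (assumed source)
two_theta_deg = 43.30      # (111) reflection reported above
fwhm_deg = 1.5             # peak width in degrees 2-theta (hypothetical value)

theta = np.radians(two_theta_deg / 2.0)
beta = np.radians(fwhm_deg)              # FWHM converted to radians
D = K * wavelength / (beta * np.cos(theta))
print(f"Scherrer crystallite size ~ {D:.1f} nm")
```

With these assumed inputs the estimate comes out near the ~6 nm value reported above, but the result is dominated by whatever peak width was actually measured.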
The presence of aromatic compounds was confirmed by the two peaks at 988.30 cm−1 and 830.60 cm−1, characteristic of the C-H out-of-plane bend of aromatic compounds. A peak at 1,271.13 cm−1 is attributed to the C-O bond of alcohols, carboxylic acids, and esters. The presence of these functional groups indicated the possible involvement of reductive groups on the surfaces of the CuNPs [76]. They are also involved in the capping of the CuNPs, as observed in previous studies that synthesized CuNPs from plant extracts [10,77]. The spectrum indicated new chemical linkages on the surface of the CuNPs, suggesting that S. didymobotrya root extract can bind to CuNPs through the hydroxyl and carbonyl groups of the amino acid residues in the proteins of the extract, thereby acting as a reducing, stabilizing, and dispersing agent for the synthesized copper oxide nanoparticles and preventing agglomeration of the CuNPs [60,61]. Antibacterial Activity of S. didymobotrya Root Extract and Synthesized CuNPs. Figure 10 shows that the CuNPs had an inhibitory effect against E. coli and S. aureus. Pearson's product-moment correlation was performed to evaluate the association between the size of the CuNPs and their zone of inhibition (ZOI) against E. coli and S. aureus (n = 13). The analysis showed that there was a negative correlation between the size of the CuNPs and the zone of inhibition of E. coli (r = −0.74; p ≥ 0.01). Similarly, a negative correlation was observed between the size of the CuNPs and the zone of inhibition of S. aureus (r = −0.74; p ≥ 0.05). The highest zones of inhibition, 30.00 ± 0.58 mm for S. aureus and 26.00 ± 0.58 mm for E. coli, were achieved for the CuNPs with the least mean particle size, synthesized at the optimum conditions of 80 °C, a copper ion concentration of 0.03125 M, and pH 3.0. This indicated that CuNPs of the least mean particle size had a high surface-area-to-volume ratio, hence binding effectively to the microbial membrane and probably altering its permeability, which could have caused growth inhibition. Sathiyavimal et al. [10] reported similar results, in which 100 μL of CuNPs prepared from Sida acuta extract strongly inhibited E. coli with a maximum zone of inhibition of 15 mm, showed lower antibacterial activity against S. aureus, and gave the lowest inhibition diameter of 11 mm against P. vulgaris. Previous authors [66,78,79] have reported similar results, in which E. coli was the most inhibited bacterium when compared with S. aureus and other Gram-positive bacteria. The higher inhibition of Gram-negative bacteria by CuNPs could be partially explained by the facilitated influx of smaller-sized nanoparticles into the cell wall of Gram-negative bacteria, which consists of a unique outer membrane layer and a single peptidoglycan layer, as compared to the cell wall of Gram-positive bacteria with several peptidoglycan layers [80,81]. Furthermore, CuNPs have been speculated to adhere to Gram-negative bacterial cell walls due to electrostatic interaction, or the copper ions facilitate rapid DNA degradation and a reduction of bacterial respiration [82]. In some Gram-negative strains, copper ions alter the conformation and electron transferase activity of the associated reductases, culminating in the inhibition of cytochromes in the membrane [83]. Conclusion Synthesis of CuNPs from S. didymobotrya methanolic root extract had the following optimum synthesis conditions: temperature 80 °C, pH 3.0, and copper ion concentration of 0.0125 M. The mean particle size of CuNPs predicted by the design at the optimum conditions was 1.7862 nm.
UV-Vis analysis showed a characteristic surface plasmon resonance peak at 571 nm, indicating the formation of CuNPs. FT-IR analysis revealed that the nanoparticles were bound by carboxylic acids, amines and amides, phenols, and esters. Particle size analysis conducted using a Nanotrac particle analyzer showed that the synthesized CuNPs were in the particle-size range of 5.55-63.60 nm. X-ray diffraction measurement confirmed the presence of face-centred cubic CuNPs. The measured zeta potential value of the CuNPs was −69.4 mV, indicating that they were stable. In conclusion, the biosynthesized Cu nanoparticles are stable and displayed better antimicrobial activity against E. coli and S. aureus compared to amoxicillin-clavulanate (standard). The study recommends testing the biosynthesized CuNPs against other potentially multidrug-resistant microbes to enable their development into antimicrobial agents. Data Availability The datasets supporting the conclusions of this study are available from the corresponding author upon request. Conflicts of Interest The authors declare that there are no conflicts of interest regarding the publication of this paper.
5,503.4
2021-10-15T00:00:00.000
[ "Materials Science" ]
Visual Object Tracking Robust to Illumination Variation Based on Hyperline Clustering : Color histogram-based trackers have obtained excellent performance in many challenging situations. However, since the appearance of color is sensitive to illumination, they tend to achieve lower accuracy when illumination varies severely throughout a sequence. To overcome this limitation, we propose a novel hyperline-clustering-based discriminant model, an illumination-invariant model that is able to distinguish the object from its surrounding background. Furthermore, we exploit this model and propose an anchor-based scale estimation to cope with shape deformation and scale variation. Numerous experiments on recent online tracking benchmark datasets demonstrate that our approach achieves favorable performance compared with several state-of-the-art tracking algorithms. In particular, our approach achieves higher accuracy than comparative methods in challenging situations involving illumination variation and shape deformation. Introduction Visual object tracking, which aims at estimating the locations of a target object in an image sequence, is an important problem in computer vision.It plays a critical role in many applications, such as visual surveillance, robot navigation, activity recognition, intelligent user interfaces, and sensor networks [1][2][3][4][5].Although significant progress has been made in recent years, it is still a challenging problem to develop a robust tracker for complex scenes due to appearance changes caused by partial occlusions, background clutter, shape deformation, illumination changes, and other variations. For visual tracking, an appearance model based on features is of prime importance for representing and locating the object of interest in each frame.Most state-of-the-art trackers rely on different features such as color [6,7], intensity [8,9], texture [1], Haar features [10,11], and HOG features [12,13].Color features are insensitive to shape variation and robust to object deformation.Numerous effective color-based representation schemes have been proposed for robust visual tracking.One common method is to adopt color statistics as an appearance description.The color histogram is the most commonly used descriptor for representing objects [14].The Distractor-Aware Tracker (DAT) uses the color histograms of distractors to distinguish object and background pixels [7].Another successful method is to transform the color space.The Adaptive Color attributes Tracker (ACT) [6], on the other hand, maps the RGB value of a pixel to a probabilistic 11-dimensional color representation and learns a kernelized classifier to locate the target using the multi-dimensional color feature.
Although these color-based trackers have achieved state-of-the-art results on recent tracking benchmark datasets [15], they fail to cope with scenes where color features vary significantly, particularly under illumination changes.To solve this problem, we propose an illumination-invariant Hyperline Clustering-based Tracker (HCT).The main components of the proposed HCT are shown in Figure 1.We exploit the observation that the color distributions of the same object under different illuminations lie on an identical line [16].Using a hyperline clustering algorithm which is able to identify the direction vectors of all hyperlines, we can track the object throughout a sequence in which the illumination is not consistent.Moreover, to distinguish the object pixels from surrounding background regions, a Bayes classifier is trained to suppress the background hyperlines, reducing the drift problem.Due to the favorable robustness of the proposed approach, it is well suited for illumination-varying scenes such as Singer1, Singer2, Trans, and so on.The contributions of this paper are as follows.First, we present a light yet discriminative object observation model in which the representation of the object is formulated as the directions of hyperlines.Although it relies on direction initialization, this representation is able to distinguish the object of interest from the background and achieves competitive performance on many challenging tracking sequences.Second, a Bayes classifier is trained in advance to identify and suppress the background hyperlines, which improves the tracking robustness.Third, we adopt an anchor-box scale estimation which allows us to cope with large variations of target scale and appearance (e.g., the Trans sequence).Finally, we evaluate our approach on multiple tracking benchmarks, demonstrating favorable results against several state-of-the-art trackers. Notation: A boldface capital letter Y denotes a matrix and a boldface lowercase letter y a vector.‖Y‖ denotes the Euclidean norm of Y.The transpose and complex conjugate are denoted by Y^T and Y*, respectively.The inner product is denoted by ⟨·, ·⟩.The element-wise product is denoted by ⊙.F denotes the Discrete Fourier Transform (DFT) and F^{-1} denotes the Inverse Discrete Fourier Transform (IDFT). Related Work Based on object appearance models, visual tracking approaches can be classified into two families: generative and discriminative approaches.Generative approaches tackle the tracking problem by searching for the image region that is most similar to the target template.Such trackers rely on either templates or subspace models.Comaniciu et al. [17] present a histogram-based generative model with attraction to local maxima to handle appearance changes of the object.Sevilla-Lara et al. [18] present a distribution-field-based generative tracking method.Shen et al. [19] propose a generalized kernel-based mean shift tracker whose template model can be built from a single image and adaptively updated during tracking.In [20], a sparse-representation-based generative model is adopted to locate the target using a sparse linear combination of the templates.Although it is robust to various occlusions, the many time-consuming sparse-representation operations lead to a low frame rate.Additionally, these generative approaches fail to use background information, which is likely to alleviate drift and improve tracking accuracy.
Discriminative approaches typically train a classifier to separate the target object from the background.For example, Zhang et al. [10] train a naive Bayes classifier to locate the target in a compressive projection where the features of the target appearance are efficiently extracted.Liu et al. [21] propose a robust tracking algorithm using a sparse-representation-based voting map and sparsity-constraint-regularized mean shift.Yang et al. [22] present a discriminative appearance model based on superpixels, which is able to distinguish the target from the background with mid-level cues. Recent benchmark evaluations [15,23,24] have demonstrated that Discriminative Correlation Filter (DCF) based visual tracking approaches achieve state-of-the-art results while operating in real time [25].The Circulant Structure with Kernels (CSK) tracker employs a dense sampling strategy while exploiting circulant structure with the Fast Fourier Transform to learn and track the object [26].Its extension, the kernelized correlation filter (KCF) [12], incorporates multi-channel features via a linear kernel, achieving excellent performance while running at more than 100 frames per second.However, the standard KCF is only robust to linear scale changes, which implies inferior performance when the target encounters large scale variations.To address this problem, Danelljan et al. [25] propose Discriminative Scale Space Tracking (DSST), which is capable of learning an explicit scale filter using the target appearance at multiple scales.Despite its competitive performance and efficient implementation, DSST starts to drift from objects that deform non-rigidly.To further improve a tracker's robustness to deformation, both the Distractor-Aware Tracker (DAT) [7] and the Adaptive Color attributes Tracker (ACT) [6] adopt color-based representations that are invariant to significant shape deformation.Sum of Template And Pixel-wise LEarners (Staple) [14] combines a template model to discriminate the object with a color-based model to cope with deformation in a ridge-regression framework, outperforming many sophisticated trackers.However, the color distribution is sensitive to varying illumination.Thus, color-based trackers are likely to drift when the illumination changes significantly throughout a sequence. Hyperline Clustering-Based Tracking The proposed Hyperline Clustering-based Tracking (HCT) is motivated by the observation that the RGB-value distributions of the same color under different illuminations lie on the same line [16].Thus, the representation of the object can be cast as a hyperline clustering problem, as shown in Section 3.1.Furthermore, a Bayes-classifier-based discriminative model, which is capable of separating the target from the background, is proposed in Section 3.2.Section 3.3 describes accurate object localization and model update.Inspired by recent state-of-the-art object detection [27,28], we propose an anchor-box-based scale estimation to achieve accurate tracking, as described in Section 3.4.
Hyperline Clustering Representation Hyperline clustering has been successfully applied in sparse component analysis [29] and image segmentation [16].Given a set of observed data points {y_i}_{i=1}^T, which lie on K hyperlines L(l_k), where l_k is the directional vector of the corresponding hyperline and k = 1, …, K (see Figure 2), K-hyperline clustering (K-HLC) aims to estimate the direction vectors and cluster assignments.Mathematically, K-HLC can be cast as an optimization problem [30] that minimizes the total distance of the observed points to the hyperlines of their assigned clusters, where the indicator function I_{i∈Ω_k} selects the points assigned to the k-th cluster set Ω_k and d(y, l) denotes the distance from y to L(l_k).A robust K-hyperline clustering algorithm was proposed in [30], where it is implemented in a manner similar to K-means clustering by two steps after initialization: the cluster assignment and the cluster centroid update.For the cluster assignment step, the observed data {y_i}_{i=1}^T are assigned to the nearest hyperline.For the second step, the cluster centroid is obtained by eigenvalue decomposition (EVD). In the RGB color model, three primary colors (red, green, and blue) are combined to reproduce a broad array of colors [31]; the RGB vector of a color pixel can thus be represented as a point in this three-dimensional space.To illustrate the ability of hyperlines to represent color, we show a scatter plot of three regions of an image from the Trans sequence (see Figure 3).The foreground and two background regions are represented by yellow, blue, and black arrow lines, respectively.As we can observe, the pixel distributions of the three different regions approximately lie on three different hyperlines.Hence, the representation of color can be treated as a hyperline clustering problem.Moreover, under illumination change, the distribution of the same color still lies on the same hyperline (see the red arrow).Besides, the prior probability can be approximated as P(x ∈ O) ≈ |O|/(|O| + |S|).Then, Equation (6) can be simplified to the form given in Equation (7).Applying model (7), we are able to distinguish object pixels from the background region, as illustrated in Figure 4.The proposed model is capable of eliminating the effect of the background and reducing the risk of drifting.In addition, because model (7) estimates the likelihood using the hyperline directional vectors of the object and background regions, it requires less memory to obtain an accurate estimate than other color-based algorithms.However, it is risky to learn the model directly from the image regions of the first frame alone.To adaptively represent the changing object appearance and capture the object under different illuminations, we develop an update scheme in which the object and surrounding-image hyperlines are updated independently.Because the color distribution discards the spatial position of image pixels, the proposed object hyperline representation is robust to shape deformation.Thus, the object hyperlines are fixed during the tracking process.For the surrounding hyperlines, the update scheme is summarized in Algorithm 1. Algorithm 1: The update scheme of the surrounding hyperlines. Require: surrounding image pixels {y_i}_{i=1}^T, the surrounding hyperlines L_S(l_k), and the number of hyperlines K. Ensure: the updated surrounding hyperlines L_S(l_k). Localization Similar to state-of-the-art trackers adopting the tracking-by-detection principle [6,7,12], we iteratively localize the object in each new frame after initializing the tracker in the first frame.
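Before turning to localization, here is a minimal NumPy sketch of the K-hyperline clustering loop described above, alternating assignment to the nearest line through the origin with an eigenvector-based direction update. The distance formula, stopping rule, and toy data are standard K-hyperline choices assumed for illustration, not code from the paper.

```python
import numpy as np

def k_hyperline(Y, K, n_iter=20, seed=0):
    """Cluster points Y (T x 3 RGB vectors) onto K lines through the origin.
    Returns unit direction vectors L (K x 3) and assignments (length T)."""
    rng = np.random.default_rng(seed)
    T, d = Y.shape
    # Initialize directions from randomly chosen (normalized) data points
    L = Y[rng.choice(T, K, replace=False)].astype(float)
    L /= np.linalg.norm(L, axis=1, keepdims=True) + 1e-12

    for _ in range(n_iter):
        # Assignment: distance of y to line l is ||y - (l.y) l|| for unit l
        proj = Y @ L.T                                   # (T, K) projections l.y
        dist2 = (Y ** 2).sum(axis=1, keepdims=True) - proj ** 2
        assign = dist2.argmin(axis=1)
        # Update: each direction is the leading eigenvector of sum_i y_i y_i^T
        for k in range(K):
            Yk = Y[assign == k]
            if len(Yk) == 0:
                continue
            w, V = np.linalg.eigh(Yk.T @ Yk)             # ascending eigenvalues
            L[k] = V[:, -1]
    return L, assign

# Toy usage with random "pixel" vectors (hypothetical data)
Y = np.abs(np.random.default_rng(1).normal(size=(500, 3)))
L, assign = k_hyperline(Y, K=3)
print(L)
```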
At frame t, we use a trained classifier to predict the object location O_t based on the previous object location O_{t−1}.Instead of utilizing gray-level [19] or HOG [12] representations, we perform correlation filtering directly on the likelihood map.The classifier is trained by finding a function f(x) that minimizes a regularized squared error, where s are the sample likelihood patches (cyclic shifts around the previous location O_{t−1}), g are the regression targets, and λ is a regularization constant.The work of [12] shows that the ridge regression (8) can be solved efficiently in the dual domain, where ϕ is the Hilbert-space mapping induced by the Gaussian kernel κ and K is the kernel matrix whose elements are pairwise kernel evaluations between the cyclic shifts.Since κ is shift invariant, K is circulant and can be computed efficiently using the Fast Fourier Transform (FFT). In the detection step, the base patch z is first cropped from the likelihood map P_t using the previous location O_{t−1}.The candidate patches are cyclic shifts of z.The kernel matrix K^z is calculated by k^z_{ij} = κ(z_i, s_j), and the detection scores ŷ are obtained by correlating the learned coefficients with K^z in the Fourier domain.Finally, the target location in frame t is obtained by maximizing the score ŷ. Anchor Box Based Scale Estimation Motivated by recent advances in anchor boxes for object detection [27,28], as well as visual tracking [32], we present an anchor-box-based scale estimation to predict the resolution of the object.A standard strategy to localize the object at different scales is to perform scale estimation at multiple resolutions [33].To account for the scale change and geometric deformation of the target, a feature pyramid is first extracted from a rectangular likelihood map centered around the target.At each scale of the pyramid, we generate multiple region proposals, called anchors.An anchor is centered at the target location and is associated with an aspect ratio.As shown in Figure 5, we first sample the likelihood map around the previously estimated target location at S different scales.At each scale, we use R aspect ratios, yielding a = S × R anchors over the feature pyramid.However, sampling the feature map at multiple resolutions is computationally demanding.To boost the speed of the tracker, we apply the multiple scales to the anchors instead of to the feature map.The scale s and aspect ratio r of the target in the current likelihood map P_t are obtained by searching for the anchor with the highest vote score, where y denotes the pixel location and A(s, r) denotes the anchor region at scale s and aspect ratio r.This formulation calculates the average likelihood of the anchor region whilst penalizing the maximal region likelihood. Experiments We validate the effectiveness of our Hyperline Clustering based Tracker (HCT) on the recent tracking evaluation benchmark [15].In Section 4.1, we describe the details of the parameters and the machine used in our experiments.Section 4.2 presents the benchmark datasets and evaluation protocols used.Section 4.3 shows a comprehensive comparison of our HCT with state-of-the-art color-based methods. Implementation Details We set the detection region to three times the size of the previous object hypothesis O_{t−1}, and the surrounding region is twice the size of O_t.The number of scales S is set to 12 and the aspect ratios R to 0.8 : 1 : 1.2; thus we have 12 × 3 = 36 anchors per frame.Additionally, we use the regularization parameter λ = 0.001.All algorithms are tested in MATLAB R2015b and run on a Lenovo laptop with an Intel i7 CPU at 3.4 GHz under Windows 7 Professional.
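As a rough illustration of the correlation-filter training and detection steps sketched above, the following is a simplified single-channel, linear-kernel version in NumPy. The paper uses a Gaussian kernel on the likelihood map; the linear kernel, Gaussian target shape, patch size, and λ below are simplifying assumptions, not the authors' implementation.

```python
import numpy as np

def gaussian_target(h, w, sigma=2.0):
    # Desired correlation response: a 2-D Gaussian whose peak is rolled to (0, 0),
    # as required by circular correlation.
    ys, xs = np.mgrid[0:h, 0:w]
    g = np.exp(-((ys - h // 2) ** 2 + (xs - w // 2) ** 2) / (2 * sigma ** 2))
    return np.roll(np.roll(g, -(h // 2), axis=0), -(w // 2), axis=1)

def train(x, lam=1e-3):
    """Linear-kernel ridge regression over all cyclic shifts of patch x."""
    X = np.fft.fft2(x)
    G = np.fft.fft2(gaussian_target(*x.shape))
    kxx = X * np.conj(X)                 # Fourier transform of the autocorrelation
    alpha_hat = G / (kxx + lam)          # dual coefficients in the Fourier domain
    return alpha_hat, X

def detect(z, alpha_hat, X):
    """Correlation response for a new patch z; argmax gives the translation."""
    Z = np.fft.fft2(z)
    kxz = Z * np.conj(X)
    response = np.real(np.fft.ifft2(kxz * alpha_hat))
    dy, dx = np.unravel_index(response.argmax(), response.shape)
    return response, (dy, dx)

# Toy usage: a patch shifted by (3, 5) should be re-localized at that offset.
rng = np.random.default_rng(0)
x = rng.random((64, 64))
alpha_hat, X = train(x)
z = np.roll(np.roll(x, 3, axis=0), 5, axis=1)
_, shift = detect(z, alpha_hat, X)
print("estimated shift:", shift)
```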
Experiment Setup To test the ability of HCT to handle the illumination-change problem, we employ the color sequences posing the illumination-variation challenge from the OTB-100 dataset [15], namely Basketball, Singer1, Singer2, CarScale, Woman, and Trans. These sequences also involve other challenges, such as scale variation, occlusion, deformation, and background clutter. To report the results of HCT, we use two standard evaluation metrics: precision and success plots.The precision plot contains the distance precision over a range of center-location-error thresholds.Given the center location of the tracked object (x_t, y_t) and the ground truth (x_g, y_g), the location error is defined as location error = √((x_t − x_g)² + (y_t − y_g)²). In the success plot, the overlap precision is plotted over a range of overlap thresholds.Given the tracked bounding box r_t and the ground-truth bounding box r_g, the overlap is defined as overlap = |r_t ∩ r_g| / |r_t ∪ r_g|, where ∩ and ∪ represent the intersection and union of the two regions, respectively, and |·| denotes the number of pixels in the region. Comparison with State-of-the-Art We compare the proposed method with the aforementioned state-of-the-art trackers, including three color-based trackers, namely Staple [14], DAT [7], and ACT [6], and three gray-pixel-based correlation-filtering trackers, namely DSST [25], KCF [12], and CSK [26].For all methods, we use the publicly available code and the parameters suggested by the authors. Figure 6a shows the precision plot illustrating the mean location error over the 6 sequences.For clarity, we only report the result of the one-pass evaluation, in which the trackers are initialized in the first frame.From the figure, we can observe that the proposed HCT performs favorably compared to existing trackers.Staple, which has been demonstrated to achieve the best performance in a recent benchmark [24], also outperforms the other trackers in our experiment.However, HCT outperforms the Staple tracker by 16% in average precision. Figure 6b shows the success-rate plots containing the overlap precision.The mean precision scores for each tracker are presented in the legends.Again, our proposed HCT outperforms Staple by 30% and the baseline KCF tracker by 90% in mean success rate.Finally, we analyze the running-time performance of HCT.For hyperline clustering, the most time-consuming calculation is the multilayer initialization [30], which involves many matrix decompositions.However, due to the efficient localization and scale estimation, our pure MATLAB prototype of HCT runs at 15 frames per second.Additionally, since HCT only stores the hyperline vectors of the object and the background, it requires less memory than other trackers.Thus, HCT can be utilized in time-critical applications and on embedded development platforms. Figure 7. Comparison of the proposed approach with state-of-the-art trackers on illumination-change sequences.The results of the distractor-aware tracker (DAT) [7], the adaptive color attributes tracker (ACT) [6], the kernelized correlation filter (KCF) [12], Staple [14], and the proposed approach are represented by green, yellow, blue, magenta, and red, respectively.
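Returning to the evaluation metrics defined in the experiment setup above, a small helper that computes the center location error and bounding-box overlap, plus the thresholded precision and success scores. The box format (x, y, w, h), the 20-pixel and 0.5 thresholds, and the toy boxes are conventional assumptions.

```python
import numpy as np

def center_error(box_t, box_g):
    """Boxes as (x, y, w, h); error is the distance between box centers."""
    cxt, cyt = box_t[0] + box_t[2] / 2, box_t[1] + box_t[3] / 2
    cxg, cyg = box_g[0] + box_g[2] / 2, box_g[1] + box_g[3] / 2
    return np.hypot(cxt - cxg, cyt - cyg)

def overlap(box_t, box_g):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    x1 = max(box_t[0], box_g[0]); y1 = max(box_t[1], box_g[1])
    x2 = min(box_t[0] + box_t[2], box_g[0] + box_g[2])
    y2 = min(box_t[1] + box_t[3], box_g[1] + box_g[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = box_t[2] * box_t[3] + box_g[2] * box_g[3] - inter
    return inter / union if union > 0 else 0.0

# Example: precision at 20 px and success at IoU 0.5 over a toy two-frame sequence
tracked = [(10, 10, 50, 80), (14, 12, 50, 80)]
truth   = [(12, 11, 48, 82), (30, 30, 48, 82)]
errs = np.array([center_error(t, g) for t, g in zip(tracked, truth)])
ious = np.array([overlap(t, g) for t, g in zip(tracked, truth)])
print("precision@20px:", (errs <= 20).mean())
print("success@0.5:  ", (ious >= 0.5).mean())
```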
Conclusions In this work, we investigate the distribution of the RGB values of color pixels under different illuminations and propose a novel Hyperline Clustering based Tracker (HCT).Unlike other color-based trackers that predominantly apply simple color histograms and are sensitive to illumination changes, the proposed HCT directly extracts hyperlines from both the object and its surrounding regions to build the likelihood map, enhancing its robustness to illumination changes.The location of the object is estimated by correlation filtering.Furthermore, we propose an anchor-based scale estimation to deal with the problems of scale variation and shape deformation.Numerous experiments on the Online Tracking Benchmark demonstrate the favorable performance of the proposed HCT compared with several state-of-the-art trackers.An interesting direction for future work is to apply HCT to multi-object tracking [34] and multi-camera tracking [35], which require an illumination-invariant representation of the object. Figure 3. Scatter plot of three different regions, represented by blue, yellow, and black arrows, respectively.The red arrow represents the foreground region under different illumination.The directional vectors of the hyperlines that represent the foreground under different illuminations are approximately the same, as shown by the red lines. 3.2. Discriminative Model To build a discriminative model capable of distinguishing the object from its surroundings, we propose a hyperline-clustering-based Bayes classifier for visual tracking.Let x denote an object pixel in a rectangular object region O and S denote the surrounding region of the object.Additionally, let b^x_{I,k} denote pixel x being assigned to the k-th hyperline of image I. Thus, we formulate the object likelihood at location x as P(x ∈ O | O, S, b) ≈ P(b^x_{O,k} | x ∈ O) P(x ∈ O) / Σ_{Ω∈{O,S}} P(b^x_{Ω,k} | x ∈ Ω) P(x ∈ Ω).   (6) In particular, the likelihood terms are estimated directly from the distance using Equation (3), i.e., P(b^x_{O,ko} | x ∈ O) ≈ d(x, l_ko)/|O| and P(b^x_{S,ks} | x ∈ S) ≈ d(x, l_ks)/|S|, where |·| denotes the cardinality, and ko and ks index the hyperlines of regions O and S, respectively. Figure 4. Exemplary object likelihood map for the discriminative model, illustrating the object region O and the surrounding region S. Figure 5. Visualization of anchor regions based on the feature pyramid.Red rectangles represent anchors at different aspect ratios. Figure 6. Quantitative comparison of the proposed HCT with several state-of-the-art trackers.
Figure 7 shows a qualitative comparison of the proposed approach with existing trackers.In the Basketball sequence, all of the compared trackers perform well.However, only HCT and Staple are able to estimate the size of the object accurately.For the Singer1 sequence, the target undergoes severe illumination change and scale variation.The color-based trackers, such as ACT and DAT, start drifting from the target when the singer suffers a severe illumination change in frame #100.Staple works well due to its combination of HOG and color-based models.HCT is able to estimate the size of the target, which can be attributed to the anchor-based scale estimation.In the Singer2 sequence, ACT and DAT are less effective in handling the illumination change, while both HCT and Staple are able to track the target accurately.For the CarScale and Woman sequences, the targets undergo occlusion and illumination change.HCT and Staple perform well on these sequences, with higher overlap scores than the other methods.The target object in the Trans sequence undergoes shape deformation and severe illumination change.Both the color-based and the gray-based trackers, including Staple, fail to cope with shape deformation and illumination change simultaneously.However, HCT is able to track the object accurately despite the appearance changes caused by deformation and illumination variation.This further confirms the effectiveness of the proposed hyperline clustering discriminative model and anchor-based scale estimation.
4,958.8
2019-01-14T00:00:00.000
[ "Computer Science" ]
Twisted particles as a new tool to study micro-world phenomena and processes Twisted or vortex particles are a powerful new tool to study atomic and molecular processes as well as phenomena that occur at the level of nano-objects. The main feature of such particles is that they carry a non-zero projection of orbital angular momentum along the beam propagation direction. The process of twisted electron scattering from diatomic molecule targets has been studied in this paper for the first time. The Yukawa potential is selected as a model potential. Numerical calculations are carried out for the case of scattering from a hydrogen molecule H2. Introduction Over the last two decades, conceptually new tools, twisted (or vortex) particles, have become available for studying micro-world phenomena.Twisted photons were theoretically predicted [1] and experimentally implemented [2] at the end of the 20th century.In 2007, the properties of twisted electrons were described in detail, the behaviour of twisted electrons in external fields was studied, and a number of experiments to obtain twisted electrons were proposed in [3].That publication gave rise to a series of experimental works on obtaining twisted electron beams.Today, it is possible to produce twisted electron beams with an energy of a few hundred keV and angular momentum projections mħ ~ 100ħ.Moreover, first steps to obtain twisted neutron beams [4] and even twisted matter waves [5] have recently been taken.Twisted particles serve as a powerful research tool in various fields of modern physics.For example, in quantum information theory, twisted photons may be used to transmit data, since the information capacity of twisted states is significantly higher than that of fixed-helicity states, as well as to multiplex data flows.Furthermore, twisted photons may be used to manipulate individual atoms, to carry out fundamental tests of the special and general theories of relativity, and to analyse fundamental processes of the interaction of light with matter.Twisted light in optical traps makes it possible not only to trap a nano-object (e.g., a biological structure or a Bose-Einstein condensate), but also to rotate it.Twisted electrons may be used to study the structural and electronic properties of matter or the magnetic properties of surfaces and solids. Theory and results The state of a twisted particle is described by a wave whose front is helical, with the wave phase related to the azimuthal angle φ (Fig. 1a).The intensity profile of such a wave in the plane perpendicular to its direction of propagation is a set of concentric rings with a central null-intensity spot (vortex) (Fig. 1b).Moreover, the state of a twisted particle in momentum space may be described by a superposition of plane waves with wave vectors k = (k⊥, k_z) lying on the surface of a cone with opening angle θ_k (Fig. 1c).When the opening angle θ_k becomes zero, the twisted wave turns into an ordinary plane wave.
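For reference, such a twisted (Bessel) state can be written in the standard scalar textbook form; the convention and normalization below are an assumption, not necessarily those used in this paper:

$$
\psi_{\varkappa m k_z}(\rho,\varphi,z)\;\propto\; J_m(\varkappa\rho)\,e^{im\varphi}\,e^{ik_z z},
\qquad \varkappa = |\mathbf{k}_\perp| = k\sin\theta_k ,
$$

so the projection of the orbital angular momentum onto the propagation axis is mħ, and in the limit θ_k → 0 the state reduces to a plane wave, consistent with the statement above.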
A series of studies of various collision processes with twisted electrons has been carried out over the last decade.However, in all these studies, the twisted electron beam interacted with either a single counter-propagating beam (Compton scattering [6], electron-electron scattering [7]) or a target consisting of atoms (radiative recombination [8,9], Mott scattering [10]).The process of twisted electron scattering on molecular targets has been studied herein for the first time.It is well known that when colliding with a molecule, high-speed electrons generally interact with the nuclei and inner-shell electrons of the atoms that form the molecule.Here, the molecular centres play a role similar to that of optical slits in light scattering, since they are localized in space and distinctly separated from each other.As a result of the interaction of the electrons with the molecule, strong interference occurs in the differential cross-section.Using twisted electrons instead of the usual plane-wave electrons leads to a number of peculiarities.Three possible scenarios are studied in this paper: (I) scattering from a single molecule whose centre of gravity lies on the beam axis (Fig. 2), (II) scattering from a molecule with one of its atoms lying on the beam axis (Fig. 3), and (III) scattering from a macroscopic target consisting of many molecules (Fig. 4).All calculations have been carried out in the first Born approximation.The amplitude of twisted-wave scattering from two centres is expressed through the amplitude of plane-wave scattering from a single centre, the atomic spacing in the molecule, the distance between the beam axis and the nearest atom, and the transferred momentum q = k′ − k. The calculation results are shown in Figs. 5 to 7. As is clear from Fig. 5, when the molecule centre is on the beam axis (Scenario (I)), there is a strong dependence on the angular momentum projection m, and the maxima and minima of the angular distribution are interchanged. As the energy increases, the number of minima (and maxima) grows.This scenario is a quantum analogue of the classical Young's double-slit experiment.Scenario (II) corresponds to the classical experiment in which one of the slits is closed.The strong dependence on the angular momentum projection m is still observed (Fig. 6).However, the interference disappears, depending on the projection and the opening angle.Scenario (III) corresponds to scattering from a macroscopic target.In this case, the scattering cross-section no longer depends on the angular momentum projection.As one should expect, the contributions of multiple slits are summed incoherently, which results in the disappearance of the interference pattern (Fig. 7). Fig. 1. (a) Twisted wave phase; (b) twisted wave intensity; (c) wave vector k of the twisted wave in momentum space.Twisted waves may be described by what is known as a Bessel wave: a stationary state having a definite energy ω, longitudinal momentum k_z, helicity λ, and orbital angular momentum projection m. Fig. 2. Scenario (I) geometry: the centre of gravity of the molecule lies on the beam axis. Fig. 3. Scenario (II) geometry: one of the atoms in the molecule lies on the beam axis.
Fig. 4. Scenario (III) geometry: macroscopic target. Numerical calculations of angular distributions and cross-sections for the different scenarios have been carried out for the hydrogen molecule H2.This study uses the relativistic system of units ħ = c = m_e = 1.The Yukawa potential is selected as the model potential. Fig. 5. Results of numerical calculations for Scenario (I): angular distribution for scattering of twisted electrons with E = 300 eV from the hydrogen molecule H2. Fig. 6. Results of numerical calculations for Scenario (II): angular distribution for scattering of twisted electrons with E = 300 eV from the hydrogen molecule H2. Fig. 7. Results of numerical calculations for Scenario (III).
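To illustrate the interference structure discussed above, the sketch below evaluates the plane-wave first-Born amplitude for a standard Yukawa-type potential, V(r) = −A·exp(−r/a)/r, together with the two-centre interference factor. The constants, the non-relativistic Born formula, and the assumption that the two centres are separated along the momentum-transfer direction are illustrative only and do not reproduce the paper's twisted-wave calculation.

```python
import numpy as np

# Assumed parameters (atomic-style units): potential strength A, screening length a,
# electron mass m, bond length d, and an arbitrary illustrative momentum k.
A, a, m, d, k = 1.0, 1.0, 1.0, 1.4, 5.0

theta = np.linspace(1e-3, np.pi, 500)
q = 2 * k * np.sin(theta / 2)            # momentum transfer for elastic scattering

# First-Born amplitude of a Yukawa potential: f(q) = 2 m A / (q^2 + 1/a^2)
f_single = 2 * m * A / (q ** 2 + 1.0 / a ** 2)

# Two identical centres separated by d along the momentum-transfer direction:
# |F(q)|^2 = 4 |f(q)|^2 cos^2(q d / 2), a double-slit-like interference factor.
dsigma_two_centre = 4 * f_single ** 2 * np.cos(q * d / 2) ** 2

print("forward cross-section ratio (2 centres / 1 centre):",
      dsigma_two_centre[0] / f_single[0] ** 2)   # -> ~4 in the forward direction
```

In the twisted-wave case this plane-wave building block is integrated over the cone of wave vectors, which is what produces the m-dependent redistribution of maxima and minima described above.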
1,597.4
2018-01-01T00:00:00.000
[ "Physics" ]
Patients’ views on the implementation of artificial intelligence in radiology: development and validation of a standardized questionnaire Objectives The patients’ view on the implementation of artificial intelligence (AI) in radiology is still mainly unexplored territory. The aim of this article is to develop and validate a standardized patient questionnaire on the implementation of AI in radiology. Methods Six domains derived from a previous qualitative study were used to develop a questionnaire, and cognitive interviews were used as pretest method. One hundred fifty-five patients scheduled for CT, MRI, and/or conventional radiography filled out the questionnaire. To find underlying latent variables, we used exploratory factor analysis with principal axis factoring and oblique promax rotation. Internal consistency of the factors was measured with Cronbach’s alpha and composite reliability. Results The exploratory factor analysis revealed five factors on AI in radiology: (1) distrust and accountability (overall, patients were moderately negative on this subject), (2) procedural knowledge (patients generally indicated the need for their active engagement), (3) personal interaction (overall, patients preferred personal interaction), (4) efficiency (overall, patients were ambiguous on this subject), and (5) being informed (overall, scores on these items were not outspoken within this factor). Internal consistency was good for three factors (1, 2, and 3), and acceptable for two (4 and 5). Conclusions This study yielded a viable questionnaire to measure acceptance among patients of the implementation of AI in radiology. Additional data collection with confirmatory factor analysis may provide further refinement of the scale. Key Points • Although AI systems are increasingly developed, not much is known about patients’ views on AI in radiology. • Since it is important that newly developed questionnaires are adequately tested and validated, we did so for a questionnaire measuring patients’ views on AI in radiology, revealing five factors. • Successful implementation of AI in radiology requires assessment of social factors such as subjective norms towards the technology. Introduction Artificial intelligence (AI) is expected to revolutionize the practice of radiology by improving image acquisition, image evaluation, and speed of workflow [1,2]. More and more sophisticated AI systems are being developed for use in clinical practice [1,2]. Importantly, unilateral development of AI systems from the perspective of the radiologist ignores the needs and expectations of patients who are perhaps the most important stakeholders. AI systems may need to fulfill certain preconditions for this technology to be embraced by society [3]. Patient preferences determine the boundaries within which an AI system should function. At present, however, little is known on patients' views on the use of AI in radiology [3]. Implementation of AI in radiology is an example of the much broader concept of consumer health information technology (CHIT). CHIT refers to the use of computers and mobile devices for decision-making and management of health information between healthcare consumers and providers [4]. In order to measure patients' acceptance of CHIT, several questionnaires have been developed [5,6], using Davis' widely accepted technology acceptance model (TAM [7,8]). 
However, since patients are not active users in the setting of AI in radiology, there is a need for a new method to measure technology acceptance when the patient is not actively using the technology but is subjected to it. To the best of our knowledge, there are no validated standardized questionnaires available for mapping patients' views on the implementation of AI in radiology. The aim of this study was therefore to develop and, by means of expert evaluation, qualitative pretests, and factor analysis, validate a standardized patient questionnaire on the implementation of AI in radiology. Materials and methods This prospective study was performed and approved by the local institutional review board of the University Medical Center Groningen (IRB number: 201800873), which is a tertiary care hospital that provides both primary and specialty care to approximately 2.2 million inhabitants in the Netherlands. All patients provided written informed consent. Questionnaire development To develop the questionnaire, we conducted semi-structured qualitative interviews with 20 participants in a previous study (see Haan et al [3]). Based on these interviews [3], six key domains of patients' perspective on the implementation of AI in radiology were identified: proof of technology, procedural knowledge, competence, efficiency, personal interaction, and accountability. In the present study, we use these six domains as a framework for the questionnaire. Within each domain, a minimum of seven items, predominantly 5-point Likert-type agree-disagree scales, were developed. Using the rule of thumb that respondents answer about 4 to 6 items per minute [9], we limited the questionnaire to 48 attitudinal items (in 6 blocks of agree-disagree questions). We also used eight attitudinal items in an item-specific format. Since the response options in this format are content-related, the questions are assumed to require less cognitive processing and have been shown to receive more conscientious responding [10]. In addition, an existing scale with adequate reliability (Cronbach's alpha = 0.89) on orientation towards change [11] was used. We also included five demographic questions (birthdate, gender, education, digital device ownership and use), four yes-no questions on hypothetical situations, a check-all-that-apply question on trust, and one question asking participants to estimate the time range of implementation of AI in radiology practice. In accordance with general recommendations for paper-and-pencil questionnaires [12], we used a darker background to make answer boxes stand out (see Fig. 1). Questionnaire pretesting with cognitive interviews A qualitative pretest of the first version of the questionnaire was done by means of cognitive interviews [13]. The main purpose of these interviews was to ask participants to fill out the questionnaire while thinking aloud; the interviewer probed any cues of uncertainty from respondents. Seven graduate students in communication sciences, all with prior experience in interviewing, conducted a total of 21 interviews, based on a convenience sample of patients scheduled for a CT scan of the chest and abdomen on an outpatient basis. The 21 patients' ages ranged between 35 and 76 years (median, 63 years), and 11 of them were male. The interviews yielded several suggestions for improving the questionnaire.
Firstly, from the cognitive interviews, it appeared that the difference between "agree" and "disagree" may be easily overlooked, and therefore, we added plus and minus signs (Fig. 1). Secondly, we adjusted terminology that was sometimes interpreted too generally or too specifically. In order to make it as clear as possible that the questions are about AI replacing physicians specifically, we used the terms "doctor" and "artificial intelligence" as often as possible in the questionnaire. Thirdly, we used shorter and clearer question wording for some statements by deleting superfluous wording such as "It is the question whether…". Procedure data collection The patients for the quantitative data collection were recruited from December 19, 2018, until March 15, 2019. The patients that were scheduled for CT, MRI, and/or conventional radiography were approached by one of seven students in communication sciences (the same students that also conducted the cognitive interviews). All patients who were in the waiting room of our department for a radiological examination were approached. Based on the cognitive interviews, we estimated that filling out the questionnaire would take about 15 min. We aimed for a sample with a subject-to-item ratio of at least 1:3 [14]. With 48 items in the original pool, this required a sample of at least 141. Sample size determination in exploratory factor analysis (EFA) is difficult, but with strong data, a smaller sample still enables accurate analysis. Following guidelines for "strong data" [14], we verified that none of the variable communalities (the proportion of each variable's variance that can be explained by the factors) was lower than 0.40. We took 0.35 as a minimum factor loading and omitted items with cross-loadings higher than 0.50. Furthermore, we only included factors with more than 3 items. Data analysis Data were analyzed by using IBM SPSS Statistics (Version 24). Exploratory factor analysis (EFA) was used to examine to which extent the items measured constructs related to AI in radiology, and to find underlying latent variables. In EFA, the relation between each item and the underlying factor is expressed in factor loadings, which can be interpreted similarly to standardized regression coefficients. Principal axis factoring was used as the extraction method, since this method does not require multivariate normality. Oblique promax rotation was selected because correlations between factors were (somewhat) expected. The data were suitable for EFA as shown by Bartlett's test of sphericity (p < 0.001) and the Kaiser-Meyer-Olkin measure of sampling adequacy (0.719). The decision on the number of factors was made based on the Kaiser [15] criterion, a parallel analysis [16], and a scree test [17]. Items with low factor loadings were dropped (i.e., loading < 0.35). Cronbach's alpha was used to calculate the internal consistency of items within each factor. In general, a Cronbach's alpha of 0.7 is taken as an indication of good internal consistency. In some cases, an alpha of 0.5 or 0.6 can be acceptable [18]. In order to overcome the disadvantages of Cronbach's alpha (e.g., underestimation of reliability; see Peterson and Kim [18]), we also computed composite reliability in R [19]. This measure is interpreted similarly to Cronbach's alpha.
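A compact Python sketch of the analysis pipeline described above (KMO, Bartlett's test, principal-axis EFA with promax rotation, and Cronbach's alpha). The study used SPSS and R; the third-party factor_analyzer package and the function names below are assumptions about one possible Python equivalent, and the `items` data frame is a placeholder for the 48-item response matrix.

```python
import numpy as np
import pandas as pd
# Third-party package (pip install factor_analyzer); API names assumed from its docs.
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_kmo, calculate_bartlett_sphericity

def cronbach_alpha(df: pd.DataFrame) -> float:
    """Internal consistency of the items (columns) in df."""
    k = df.shape[1]
    item_vars = df.var(axis=0, ddof=1).sum()
    total_var = df.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# `items`: respondents x questionnaire items (placeholder random Likert data here)
items = pd.DataFrame(np.random.default_rng(0).integers(1, 6, size=(155, 48)))

chi2, p = calculate_bartlett_sphericity(items)        # suitability checks
kmo_per_item, kmo_total = calculate_kmo(items)
print(f"Bartlett p = {p:.4f}, KMO = {kmo_total:.3f}")

efa = FactorAnalyzer(n_factors=5, method="principal", rotation="promax")
efa.fit(items)
loadings = pd.DataFrame(efa.loadings_, index=items.columns)

# Alpha for the items loading most strongly on the first factor (indices are placeholders)
top_items = loadings[0].abs().nlargest(8).index
print("Cronbach's alpha (factor 1 items):", round(cronbach_alpha(items[top_items]), 3))
```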
In order to explore the meaningfulness of the factors that emerged from our factor analysis, we computed Pearson correlations with numerical demographic data (age and inclination to change) and performed analysis of variance for categorical demographics (gender and education). Sample The respondents' (N = 155) ages ranged between 18 and 86 years (mean = 55.62, SD = 16.56); 55.6% of the respondents were male. 9.7% were educated at master or PhD level, 21.4% were at bachelor level, 24.0% were at intermediate vocational level, 39.6% had high school level, and 5.2% had completed elementary-school-level education. There were several patients who indicated that they were not able to participate; in the vast majority of cases, this was due to a lack of time (because of parking issues, work, or school-related activities, or because these patients had another scheduled appointment in the hospital). Results of EFA The EFA generated five factors representing the following underlying latent variables: (1) "distrust and accountability of AI in radiology," (2) "procedural knowledge of AI in radiology," (3) "personal interaction with AI in radiology," (4) "efficiency of AI in radiology," and (5) "being informed of AI in radiology." Factors 1, 4, and 5 consist of a combination of items from the initial domains proof of technology, competence, and efficiency that were identified in our previous qualitative study [3]. Factors 2 (procedural knowledge) and 3 (personal interaction) correspond with the same domains as identified in the aforementioned qualitative study [3]. Originally, 17 items loaded on factor 1 "distrust and accountability." Two items were dropped to increase Cronbach's alpha to 0.863. Originally, 6 items loaded on factor 4 "efficiency of AI in radiology," which resulted in a Cronbach's alpha of 0.594. (Fig. 1. Layout of the matrix with agree-disagree statements.) One item was dropped to increase Cronbach's alpha to 0.670. Five items, from the original domains accountability, procedural knowledge, and efficiency, did not load on any factor and were therefore also dropped from the scale. For factor 5, Cronbach's alpha remained just below 0.6. This factor includes items that do not directly assess the direction of attitude towards AI in radiology, and some items loaded negatively, which implies that the items were not all positively correlated with the underlying variable. Moreover, in this case, we considered it better not to delete more items from this scale because an artificial effort to increase alpha above a certain level may harm reliability and validity [20]. In most cases, the composite reliability and Cronbach's alpha were identical, but for factors 3 and 4, the composite reliability score was higher. Table 1 shows all the 39 items that remained for each of the 5 factors of the questionnaire. Table 2 shows the 8 items that were dropped from the questionnaire. We also verified correlations between factors, and concluded that none were strongly inter-correlated (Table 3). Factors 1 and 3 were moderately correlated, which indicates that patients value trust and accountability and personal interaction similarly. Patients' views on AI in radiology The average score for factor 1 "distrust and accountability" was 3.28, which indicates that patients are moderately negative when it comes to their trust in AI taking over the diagnostic interpretation tasks of the radiologist, with regard to accuracy, communication, and confidentiality.
The average score for factor 2 "procedural knowledge" was 4.47, which indicates that patients are engaged in understanding how their imaging examinations are acquired, interpreted, and communicated. Patients also indicated that they appreciate and prefer personal interaction over AI-based communication, with an average score of 4.38 for factor 3 "personal interaction." In addition, patients were rather ambiguous as to whether AI will improve the diagnostic workflow, given the average score of 2.89 for factor 4 "efficiency." Within factor 5 "being informed," scores on several items were not outspoken. For example, within this factor, patients tended to prefer AI systems to look at the entire body instead of specific body parts only (average score of 3.88) and to be informed by AI systems about future diseases they will experience when possible (average score of 3.69). On the other hand, patients indicated that they would feel a lack of emotional support if computers were to provide them with results (average score of 4.21). Table 4 shows associations of the factors with respondents' characteristics. Factors 1 ("distrust and accountability") and 3 ("personal interaction") were significantly associated with inclination to change; the more respondents distrust AI in radiology (factor 1) or the more they appreciate personal interaction, the lower their score on inclination to change (factor 1, r = −0.398, p < 0.01; factor 3, r = −0.179, p < 0.05). Factor 1 was also significantly related to the education level of respondents; the level of trust steadily increased for each higher category of education level (F(4, 4) = 6.99, p < 0.01). Associations of factors with other variables Factor 4 ("efficiency") was weakly negatively associated with age (r = −0.200, p < 0.05), which means that the older the respondents are, the less they think that AI increases efficiency, while factor 2 ("procedural knowledge") was weakly positively associated with age (r = 0.196, p < 0.05). Gender was not significantly associated with any of the factors, nor did gender and education have significant interaction effects. Discussion AI has advanced tremendously over recent years and is expected to cause a new digital revolution in the coming decades [21]. It is anticipated that radiology is one of the fields that will be transformed significantly. Many speculate about the potentially profound changes it will cause in the daily practice of the radiologist [22]. However, there is a lack of debate on how patients would perceive such a transformation. For example, would patients trust a computer algorithm? Would they prefer human interaction over technology? To the best of our knowledge, there are no studies on this topic in the literature. In this study, we documented the development of a standardized questionnaire to measure patients' attitudes towards AI in radiology. The questionnaire was developed on the basis of a previous qualitative study in a collaboration between radiologists and survey methodologists [3] and pretested for clarity and feasibility by means of cognitive interviews. Subsequently, 155 patients scheduled for CT, MRI, and/or conventional radiography on an outpatient basis filled out the questionnaire.
An exploratory factor analysis, which took several rounds in the selection of factors and of items within each factor, revealed five factors: (1) "distrust and accountability of AI in radiology," (2) "procedural knowledge of AI in radiology," (3) "personal interaction with AI in radiology," (4) "efficiency of AI in radiology," and (5) "being informed of AI in radiology." Two of these factors ("procedural knowledge" and "personal interaction") almost exactly corresponded with the domains identified in the qualitative study [3]. For three factors (1, 2, and 3), the internal consistency was good (Cronbach's alpha > 0.8); for one factor (4), it was acceptable (only just below 0.7); and for one factor (5), it was acceptable considering the lower number of items (n = 4) included (Cronbach's alpha just below 0.6). Some items of factor 5 loaded negatively, and although reverse coding easily solves this problem, it may also mean that the items within this factor are multi-dimensional. Factor 1 still included a large number of items. Since including many items will increase respondent burden, it may be worthwhile to reduce the number of items per scale, with preferably no more than 8 items per scale. Thus, additional data collection with confirmatory factor analysis can be recommended to further refine the scale. Nevertheless, overall, the developed questionnaire provides a solid foundation for mapping patients' views on AI in radiology. Our findings with respect to associations between several demographic variables and trust and acceptance of AI are in line with earlier studies on acceptance of CHIT [22]. As Or and Kash [23] concluded in their review of 52 studies examining 94 factors that predict the acceptance of CHIT, successful implementation is only possible when patients accept the technology and, to this end, social factors such as subjective norm (opinions of doctors, family, and friends) need to be addressed. Interestingly, the results of our survey show that patients are generally not overly optimistic about AI systems taking over diagnostic interpretations that are currently performed by radiologists. Patients indicated a general need to be well and completely informed on all aspects of the diagnostic process, both in terms of how and of which of their imaging data are acquired and processed. A strong need of patients to keep human interaction also emerged, particularly when communicating the results of their imaging examinations. These findings indicate that it is important to actively involve patients when developing AI systems for diagnostic, treatment planning, or prognostic purposes, and that patient information and education may be valuable when AI systems with proven value are to enter clinical practice. They also signal the patients' need for the development of ethical and legal frameworks within which AI systems are allowed to operate. Furthermore, the clear need for human interaction and communication also indicates a potential role for radiologists in directly counseling patients about the results of their imaging examinations. Such a shift in practice may particularly be considered when AI takes over more and more tasks that are currently performed by radiologists. Importantly, the findings of our survey only provide a current understanding of patients' views on AI in general radiology.
The developed questionnaire can be used at future time points and in more specific patient groups that undergo specific types of imaging, which will provide valuable information on how to adapt radiological AI systems and their use to the needs of patients. Limitations of our study include the fact that validation was done by means of cognitive interviews and exploratory factor analysis, which may be viewed as subjective. Validation with other criteria, such as comparison with existing scales, was not possible due to the unavailability of such scales. Furthermore, our questionnaire was tested in patients on an outpatient basis, which may not be representative of the entire population of radiology patients. In addition, although we explored the acceptability of purely AI-generated reports with patients, the acceptability of radiologist-written, AI-enhanced reports, which may well be the norm in the future, was not addressed. It should also be mentioned that we did not systematically record the number of patients who were unable or refused to participate, or their reasons. Nevertheless, for the vast majority of patients who did not participate, this was due to a lack of time. In conclusion, our study yielded a viable questionnaire to measure acceptance among patients of the implementation of AI in radiology. Additional data collection may provide further refinement of the scale.
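The internal-consistency and association statistics reported above (Cronbach's alpha per factor, Pearson correlations with inclination to change) can be reproduced on one's own response data along the lines sketched below; the simulated scores, column names, and number of items are purely illustrative and are not taken from the study.

```python
# Minimal sketch: Cronbach's alpha for one factor's items and a Pearson
# correlation between a factor score and inclination to change. All data
# below are simulated for illustration; they are not the study's data.
import numpy as np
from scipy import stats

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, n_items) matrix of Likert scores."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var.sum() / total_var)

rng = np.random.default_rng(0)
n = 155                                           # number of respondents
latent = rng.normal(0.0, 1.0, size=(n, 1))        # shared factor
items = np.clip(np.rint(3 + latent + rng.normal(0, 0.7, (n, 6))), 1, 5)

factor_score = items.mean(axis=1)
inclination = 3.0 - 0.4 * latent[:, 0] + rng.normal(0, 0.8, n)

print(f"Cronbach's alpha: {cronbach_alpha(items):.2f}")
r, p = stats.pearsonr(factor_score, inclination)
print(f"factor vs inclination to change: r = {r:.3f}, p = {p:.4f}")
```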
4,558.4
2019-11-08T00:00:00.000
[ "Medicine", "Computer Science" ]
A comprehensive review on microbial production of 1,2-propanediol: micro-organisms, metabolic pathways, and metabolic engineering 1,2-Propanediol is an important building block used in the manufacture of unsaturated polyester resins, antifreeze, biofuels, nonionic detergents, etc. Commercial production of 1,2-propanediol through microbial biosynthesis is limited by low efficiency, and chemical production of 1,2-propanediol relies on petrochemically derived routes involving wasteful power consumption and high pollution emissions. With the development of various strategies based on metabolic engineering, a series of obstacles are expected to be overcome. This review provides an extensive overview of the progress in the microbial production of 1,2-propanediol, particularly the different micro-organisms used for 1,2-propanediol biosynthesis and the microbial production pathways. In addition, outstanding challenges associated with microbial biosynthesis and feasible metabolic engineering strategies, as well as perspectives on the future microbial production of 1,2-propanediol, are discussed. Background The traditional petrochemical industry needs further reform owing to public concerns over environmental pollution and the shortage of petroleum resources [1,2]. However, because of the wider range of biomass resources, safer manufacturing processes, and lower environmental impact, the bio-based chemical industry is becoming increasingly powerful in the chemical manufacturing arena [3]. Currently, the increasing production of chemicals from biomass via biotechnological routes has captured the attention of researchers, and these chemicals include biofuels (ethanol, butanol) [4,5], pharmaceuticals (vitamins) [6], organic acids (lactic acid and succinic acid) [7,8], diols (1,2-propanediol, 1,3-propanediol) [9,10] and other platform bulk and specialty chemicals. 1,2-Propanediol (1,2-PDO), as a C3 diol, is an important platform chemical with high demand in industry [9]. To date, 1,2-PDO has been widely used in the building material, chemical and pharmaceutical industries as a monomer for producing polyester resins, antifreeze agents, liquid detergents, biofuels, cosmetics, food, etc. [11][12][13][14][15] Annually, more than 1.36 million tons of racemic 1,2-PDO alone is produced to meet global demand; the global market reached approximately $0.373 billion in 2020 and is expected to exceed $0.398 billion by 2026, with a CAGR (Compound Annual Growth Rate) of 1.6%. Currently, the commercial route to 1,2-PDO involves the hydration of fossil fuel-based propylene via chemical methods [16]. The use of fossil resources as the initial raw material in these methods not only pollutes the environment but also results in a racemic mixture [17]. In addition, two stereoisomers exist for 1,2-PDO: R-1,2-PDO and S-1,2-PDO. Compared with racemic products, the pure stereoisomers of this chemical demonstrate greater potential as chiral synthons in the organic synthesis of chiral pharmaceutical products. Nevertheless, the application of these pure stereoisomers is often restricted due to their high price and low availability, except at the laboratory scale [18]. 
For these reasons, special concern has emerged with respect to the production of 1,2-PDO, especially in a pure stereoisomer form from biomass via biological processes. Although the production of 1,2-PDO by a variety of bacteria and yeasts has been achieved for many years, the biological process is still highly challenging due to the lack of naturally efficient synthetic pathways. For example, microbial 1,2-PDO production has not been applied to industrial scale manufacturing because of low yield. However, in recent years, along with the rapid development in metabolic strategies, including modification of natural pathways and design of artificial pathways, the production of a pure stereoisomer of 1,2-PDO from inexpensive substrates via biological routes for industrial applications is currently possible. Herein, recent efforts on strain exploration, process optimization, pathway designation and various metabolic engineering strategies to improve microbial production of 1,2-PDO are summarized. Furthermore, the drawbacks, challenges, and future trends towards economical manufacturing of 1,2-PDO via biotechnological routes are discussed. Discovery of natural micro-organisms with the capability to produce 1,2-propanediol A variety of micro-organisms have been reported to have the capability of producing 1,2-PDO in nature, including bacterial strains of the genera Prevotella [19], Salmonella [20], Klebsiella [20], Clostridium [21][22][23] and Lentilactobacillus [24]; fungal strains of the genera Yamadazyma and Debaryomyces [25]; and several Saccharomyces species [26]. However, for different types of micro-organisms, the substrates and fermentation conditions used and the reaction mechanisms vary greatly. For example, 1,2-PDO production by bacteria requires strictly anaerobic conditions, the direct opposite of that needed for production by fungi. Metabolism of different substrates leads to the stereochemistry of 1,2-PDO produced naturally by bacteria; fucose and rhamnose can generate S-1,2-PDO, but glucose and xylose can generate the R-isomer. Among the reported microbial genera with 1,2-PDO production capacity, most have been confirmed to produce 1,2-PDO mainly through the use of various sugars, such as fucose, rhamnose, and glucose. The formation of 1,2-PDO was first reported as a product of cellulose decomposition in Clostridium thermobutyricum. Thereafter, many bacteria were found to produce S-1,2-PDO from fucose or rhamnose [27]. As shown in Table 1, in an early report, Turner et al. suggested that Bacteroides ruminicola has the capability to utilize l-rhamnose for naturally producing 1,2-PDO. Approximately 0.92 mol of 1,2-PDO was detected per mol of rhamnose, while 0.36 mol acetate, 0.02 mol formate and 0.29 mol succinate were synchronously produced [19]. Subsequently, the fermentation mechanism of fucose and rhamnose in Salmonella typhimurium and Klebsiella pneumonia was investigated [20]. It was found that both of these species excreted 1,2-PDO when grown merely anaerobically with fucose or rhamnose. This phenomenon was explained that a propanediol oxidoreductase critical for the reduction of lactaldehyde to 1,2-PDO was induced in S. typhimurium in an anaerobic environment, and the presence of oxygen possibly prevented the enzymatic activity. During this process, the production of 1,2-PDO was a result of an attempt to regenerate oxidized NAD. Anaerobic conditions seemed to be important for the production of 1,2-PDO when fucose or rhamnose was the sole source of carbon. 
More recently, a thermophilic anaerobe, Clostridium strain AK-1, was found to produce S-1,2-PDO from l-rhamnose. Approximately 22.13 mM 1,2-PDO was produced with a maximum yield of 0.81 mol 1,2-PDO/mol from l-rhamnose [22]. As mentioned above, these studies confirmed the microbial production of S-1,2-PDO from rhamnose or fucose under anaerobic conditions. However, the route of this production is not commercially feasible due to the high cost of the substrate and low level of production [17]. Hence, a search for 1,2-PDO production from inexpensive, readily available sugars, such as glucose, xylose and arabinose, was performed, and then these substrates were developed for 1,2-PDO production. Compared with bacteria utilizing fucose or rhamnose as a source of 1,2-PDO synthesis, Clostridium strains ferment glucose or xylose anaerobically for 1,2.-PDO production. Several studies found that these strains can produce enantiomerically pure R-1,2-PDO during this process [18,21]. Moreover, a methylglyoxal pathway of 1,2-PDO production by Clostridium strains has been proposed, and methylglyoxal synthase (mgsA) has been found to be important for 1,2-PDO production. In Clostridium sphenoides, R-1,2-PDO was found to be produced from glucose only under phosphate limitation, as methylglyoxal synthase activity is strongly inhibited by phosphate [28]. Clostridium thermosaccharolyticum can produce R-1,2-PDO greater than 99% enantiomeric excess from many kinds of sugars, including glucose, xylose, mannose and cellobiose [23]. R-1,2-PDO (9.05 g/L) with the best yield of 0.20 g/g glucose was achieved from 45 g/L glucose after 25 h fermentation at 60 °C and pH 6.0 under a N 2 atmosphere, and d-lactate was the major product of this fermentation process with 11.12 g/L [21]. To our knowledge, this is the highest level of 1,2-PDO produced using natural organisms. Later, in 2001, C. thermosaccharolyticum HG-8 was found to use a wider range of sugars to produce 1,2-PDO than previously reported, including lactose found in cheese whey, and d-glucose, d-galactose, l-arabinose, and d-xylose found in corn and wood byproducts; this strain afforded a maximum 1,2-PDO concentration of 2.8 g/L when hydrolysed whey permeate in yeast extract was used; in addition, it produced 7.9 g/L lactate, 3.9 g/L acetate and 2.1 g/L acetol [29]. These studies showed C. thermosaccharolyticum to be a suitable natural producer for enantiopure R-1,2-PDO due to its thermophilic fermentation properties and ability to utilize various renewable residues. In addition to the abovementioned organisms using various different sugars as substrates for 1,2-PDO production, lactate, an inexpensive and readily available chemical obtained by fermentation, can be applied for 1,2-PDO production by Lactobacillus buchneri and its close relatives [24]. Due to fewer reaction steps and more accessible substrates, the production of useful chemicals from lactic acid via chemical and biotechnological routes represents the green chemistry of the future, compared to their production from various sugars. Elferink et al. found that L. buchneri is capable of converting lactate into equimolar amounts of acetic acid and 1,2-PDO [24]. In this study, acidic and anoxic conditions seemed to be necessary for lactate degradation by L. buchneri. Thus, it was proposed that its anaerobic lactate-degrading capacity needs to be induced by environmental conditions, such as pH and temperature. 
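Yields in the survey above are quoted variously in mol/mol and g/g of substrate. The small helper below converts between the two using standard molar masses, so figures such as 0.20 g/g glucose and 0.81 mol/mol rhamnose can be compared on a common basis; the molar masses are standard values, and the function names are ours.

```python
# Small helper for converting between molar (mol/mol) and mass (g/g) yields
# of 1,2-PDO on different substrates, using standard molar masses (g/mol).
MW = {"1,2-PDO": 76.09, "glucose": 180.16, "rhamnose": 164.16, "lactate": 90.08}

def g_per_g(molar_yield: float, substrate: str) -> float:
    """Convert mol 1,2-PDO per mol substrate into g 1,2-PDO per g substrate."""
    return molar_yield * MW["1,2-PDO"] / MW[substrate]

def mol_per_mol(mass_yield: float, substrate: str) -> float:
    """Convert g 1,2-PDO per g substrate into mol 1,2-PDO per mol substrate."""
    return mass_yield * MW[substrate] / MW["1,2-PDO"]

# e.g. the 0.20 g/g glucose and 0.81 mol/mol rhamnose yields quoted above
print(f"{mol_per_mol(0.20, 'glucose'):.2f} mol 1,2-PDO per mol glucose")
print(f"{g_per_g(0.81, 'rhamnose'):.2f} g 1,2-PDO per g rhamnose")
```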
Besides 1,2-PDO production by bacteria, it is also worth mentioning that there were several reports on aerobic 1,2-PDO production by some yeasts, such as Candida polymorpha and Pichia robertsii, in the late 1960s [25]. A good 1,2-PDO yield of 38% from sugar consumed was obtained, especially in Candida polymorpha. In addition, although a small quantity of 1,2-PDO has been detected in several industrial efforts based on Saccharomyces cerevisiae fermentation, the full metabolic network of 1,2-PDO production in yeast is still unclear [26]. Early phenomena suggested some basic enzymes for 1,2-PDO production are present in yeast [30]. Subsequently, the successful isolation of methylglyoxal synthase from S. cerevisiae clearly supported this hypothesis [31]. This enzyme was proven to be insensitive to phosphate, in contrast to E. coli Biosynthetic pathways for the production of 1,2-propanediol and the enzymes involved Currently, 1,2-PDO is mainly produced through chemical routes using propylene oxide from the petrochemical industry [32]. Although some efforts have been made to use biological methods to synthesize 1,2-PDO from biomass, the titres, yields and productivity of biological methods remain low, and the bioprocess is cost-ineffective. Hence, there is a great need to deeply investigate and understand these biosynthetic pathways with natural organisms to develop bioprocesses for more-efficient industrial production. As mentioned above, bacteria have multiple strategies for producing 1,2-PDO with different substrates. These pathways are summarized below. The metabolic pathways of 1,2-PDO production can be divided into three routes: the deoxyhexose pathway [33], methylglyoxal pathway [23], and lactate pathway [24]. Although these pathways involve different intermediates and enzymes, these bioprocesses are all effective under only anaerobic conditions. Among these pathways, the deoxyhexose pathway is the primary route of S-1,2-PDO production, and the methylglyoxal pathway is the primary route of R-1,2-PDO production, but the chiral status of 1,2-propanediol synthesized via the lactic acid pathway is not clear. Deoxyhexose pathway Because l-fucose and l-rhamnose are known to be catabolized in anaerobic environments, the biosynthetic pathway of 1,2-PDO production from l-fucose and l-rhamnose was first explored and identified in Salmonella, Klebsiella [20], Clostridium [21][22][23] and Prevotella [19]. As shown in Fig. 1, the deoxyhexose pathway consists of several steps. First, the two main deoxyhexoses are converted into l-rhamnose-1-phosphate or l-fucose-1-phosphate in the presence of isomerase and kinase, respectively, which are subsequently broken down by aldolase to l-lactaldehyde and dihydroxyacetone phosphate (DHAP). Then DHAP is converted into pyruvate through a series of reactions. NADH generated in the metabolism is consumed in the 1,2-PDO oxidoreductasecatalyzed reduction of lactaldehyde into S-1,2-PDO. For 1,2-PDO biosynthesis from l-fucose or l-rhamnose, four key enzymes are involved, i.e., l-fucose/rhamnose isomerase (fucI/rhaA) [34,35], l-fuculokinase/ rhamnulokinase (fucK/rhaB) [36,37], l-fuculose-1-phosphate/rhamnulose-1-phosphate aldolase (fucA/rhaD) [38,39] and propanediol oxidoreductase (fucO) [40]. The isomerase, kinase, and aldolase were found to be functional in both aerobic and anaerobic environments, which means that the formation of neither DHAP nor l-lactaldehyde was influenced by the environment [41]. 
Upon the release of DHAP and l-lactaldehyde, DHAP participates in both gluconeogenic and glycolytic processes as an important intermediate in central metabolism. Fig. 1 Metabolic pathways for the production of 1,2-PDO from l-fucose and l-rhamnose (Deoxyhexose pathway). The genes in Fig. 1 are all from E. coli. fucP: l-fucose permease; rhaT: l-rhamnose permease; fucI: l-fucose isomerase; rhaA: l-rhamnose isomerase; fucK: l-fuculokinase; rhaB: l-rhamnulokinase; fucA: l-fuculose-1-phosphate aldolase; rhaD: l-rhamnulose-1-phosphate aldolase; fucO: propanediol oxidoreductase However, the fate of l-lactaldehyde is influenced by the activity of propanediol oxidoreductase [42]. Earlier reports showed that propanediol oxidoreductase exhibits almost 70% post-transcriptional inactivation in the presence of oxygen, while l-lactaldehyde is completely converted to pyruvate through two steps of the oxidation process [41]. Accordingly, S-1,2-PDO is obtained only from l-lactaldehyde by fucO under anaerobic conditions. Nevertheless, although early studies indicated that an anaerobic environment is necessary for the deoxyhexose pathway, recent research revealed that l-lactaldehyde can be reduced into S-1,2-PDO in aerobic environments, and for these cases, the regulatory mechanism represented by the NADH/NAD + ratio and an efficient l-lactaldehyde detoxification process are recognized as possible explanations. In addition, a rather detailed intracellular flux distribution of this pathway has been identified by the stable isotope tracer technique, which is useful for obtaining profound information on the functioning of a metabolic network [43]. Although it has been extensively investigated for years, the deoxyhexose metabolic pathway is not economical for commercialization due to the high price and difficult acquisition of l-fucose and l-rhamnose; these challenges have prompted further study into the reliability of the methylglyoxal pathway. Methylglyoxal pathway In addition to the deoxyhexose pathway, another metabolic pathway based on methylglyoxal has been reported in Clostridium thermosaccharolyticum and Clostridium sphenoides, which have been found to produce 1,2-PDO by fermenting glucose, fructose, mannose, galactose, xylose, arabinose, lactose or cellobiose [23,28]. In this pathway, take the glucose as an example, the substrate is first converted to fructose-1,6-biphosphate, which is cleaved into DHAP and glyceraldehyde 3-phosphate ( Fig. 2). Glyceraldehyde 3-phosphate is then converted to L-lactate or enters the TCA cycle. On the other hand, DHAP is converted into methylglyoxal as a key intermediate by mgsA. The latter is subsequently reduced to 1,2-PDO through acetol or lactaldehyde in the presence of propanediol oxidoreductase/alcohol dehydrogenase (fucO/yqhD) and then glycerol dehydrogenase (gldA) [23]. Methylglyoxal synthase was considered as the key enzyme in the methylglyoxal pathway. A large number of micro-organisms have been reported to possess methylglyoxal synthase activity, including Pseudomonas saccharophila [23], Escherichia coli [44], Proteus vulgaris [45] and several Clostridium [28]. As previously reported, methylglyoxal synthase is strongly inhibited by phosphate in most micro-organisms. In Clostridium sphenoides, R-1,2-PDO was synthesized only via the methylglyoxal pathway when the phosphate concentration was less than 80 μM, which was insufficient to trigger the phosphate-induced inhibitory mechanism. Nevertheless, the results of later studies on methylglyoxal synthase in C. 
thermosaccharolyticum HG-8 indicated that phosphate inhibition did not occur at phosphate concentrations of up to 113 mM [23]. This phenomenon was also observed in S. cerevisiae. Therefore, further research on the phosphate inhibitory mechanism is required for the in-depth application of the methylglyoxal pathway. Compared to the deoxyhexose pathway, the substrates for the methylglyoxal pathway are less expensive, and R-1,2-PDO is mainly produced. Nevertheless, the cytotoxic effect of methylglyoxal, which can suppress protein synthesis, is not negligible [46]. Notably, methylglyoxal suppresses protein synthesis through its interaction with ribosomes. Thus, slow cell growth and metabolic imbalance are important issues regarding 1,2-PDO production through this pathway. Lactate pathway Consequently, the lactate pathway, which is capable of preventing the synthesis of methylglyoxal, is a newly recognized and promising route for 1,2-PDO microbial production. In 2001, Elferink et al. reported the observation that lactic acid was reduced to 1,2-PDO under anaerobic and acidic conditions, and a pathway for the anaerobic degradation of lactate was proposed by the authors (Fig. 3) [24]. In this study, Lactobacillus buchneri and Lactobacillus parabuchneri successfully degraded 1 mol of lactic acid into 0.5 mol of acetic acid and 0.5 mol of 1,2-PDO with the concomitant accumulation of ethanol but without an external electron acceptor. One of the explanations for these outcomes suggests that L. buchneri and L. parabuchneri eliminate excess reducing equivalents by producing 1,2-PDO. In addition, because the degradation of lactic acid is strongly affected by pH, a protective mechanism against a low-pH environment has been proposed. That is, the degradation of lactic acid into 1,2-PDO and acetic acid, which has a higher pKa, is thought to protect against cell destruction due to the overaccumulation of undissociated organic acids in acidic environments. In the proposed pathway, nearly one-half of the lactic acid is first reduced to lactaldehyde, from which 1,2-PDO is then obtained. The other half of the lactic acid is oxidized to acetate to provide the required reducing equivalents in the form of NADH, along with small amounts of ethanol that are excreted over the same time period. The proposed pathway may not produce the toxic intermediate methylglyoxal, and the Lactobacillus strains can also be cultured under acidic conditions, which provides a new example of the synthesis of 1,2-PDO by micro-organisms. However, the biochemical details of the pathway are unknown. Based on retrosynthetic analysis, an artificial pathway for the biosynthesis of 1,2-PDO from glucose via the intermediacy of lactic acid was devised [47]. Lactaldehyde is formed from lactic acid under the combined action of propionate CoA-transferase (pct) and propanal dehydrogenase (pduP) and is then further reduced to 1,2-PDO by lactaldehyde reductase (yahK). By overexpressing these enzymes in E. coli, the highest titre of 1,2-PDO reported to date was achieved (R-1,2-PDO was produced at 17.3 g/L, while the S-isomer was produced at 9.3 g/L), which shows the great potential of this reaction pathway [48]. Metabolic engineering strategies for the enhanced production of 1,2-propanediol Although various micro-organisms with different metabolic pathways are available for the microbial production of 1,2-PDO, there are many factors hindering the 1,2-PDO commercialization process. 
For example, biosynthesis of 1,2-PDO is a reduction process in which many of the reductases depend on the cofactor NAD(P)H; therefore, a process to increase the supply of redox cofactors sufficiently is urgently needed to improve product yield. In addition, in the three abovementioned pathways, 1,2-PDO is not the sole product, regardless of whether sugars or lactic acid are the substrates. This means that a low theoretical yield (< 50%) seems to be unavoidable. In addition, the accumulation of toxic intermediates is an obstacle that must be removed to increase product concentration and yield. Recently, with the advancement of genetic engineering technology, applying systematic metabolic engineering and coenzyme regulation strategies to strengthen 1,2-PDO organisms has provided feasible strategies for solving the aforementioned problems, including the enhancement of major metabolic pathways and substrate utilization range through the introduction of heterologous genes and overexpression of endogenous genes, the redistribution of carbon flux through the knockout of genes encoding by-product pathways and the improvement of the supply of cofactors through cutting off additional NAD(P)H consumption pathways or introducing NAD(P)H generation pathways [49][50][51]. In the following sections, metabolic engineering strategies applied to microbial production of 1,2-PDO will be reviewed. The details are depicted in Table 2. Enhancement of major metabolic pathways and substrate utilization range For 1,2-PDO microbial production through the methylglyoxal pathway, three key enzymes are involved, i.e., mgsA, yqhD and gldA. Many methods to upregulate the expression of these three key enzymes have been implemented to increase the production of 1,2-PDO. A recombinant E. coli strain was constructed by Altaras and associates for synthesizing 1,2-PDO as a fermentation product of glucose, in which the expression of mgsA and gldA were upregulated simultaneously. Under the anaerobic condition of flask fermentation, compared with the original strain, 1,2-PDO production of the recombinant strain increased by 180% [52]. Enhancement of synthetic pathways involves both modification of endogenous pathways, as described in the case study above, and introduction of heterologous metabolic pathways. Niimi and Suzuki reported that introduction of mgsA from E. coli to Corynebacterium glutamicum increased the 1,2-PDO yield 100-fold compared with that produced by wild-type C. glutamicum. Furthermore, after overexpressing mgsA and cgR_2242, one of the genes annotated as AKRs that functions as a methylglyoxal reductase in the synthetic pathway, the production of 1,2-PDO doubled from 12 to 24 mM [53]. Similarly, to produce 1,2-PDO using glycerol as the main carbon source in Saccharomyces cerevisiae, a 1,2-PDO-producing S. cerevisiae was successfully metabolically engineered by combining overexpression of endogenous pathway genes with introduction of heterologous genes. Both glycerol utilization and the growth rate of the engineered strain increased after overexpressing endogenous glycerol dissimilation pathway genes, including glycerol kinase (GUT1) [54], glycerol 3-phosphate dehydrogenase (GUT2) [55], glycerol dehydrogenase (gdh) [56], and a glycerol transporter gene (GUP1) [57]. The redox balance of the strain was further improved by introducing the 1,2-PDO pathway genes mgsA and gldA from E. coli, and a titre of 2.19 g/L 1,2-PDO was obtained [58]. 
In addition, 0.45 g/L 1,2-PDO was produced from galactose after 72 h of batch fermentation by introducing the mgsA gene of E. coli-K12 MG1655 and the dhaD (glycerol dehydrogenase) gene of Citrobacter freundii in S. cerevisiae [59]. Both the overexpression of endogenous genes and the introduction of heterologous genes are widely used in metabolic engineering to improve the biosynthetic efficiency [60]. In addition to enhancing biosynthetic pathways, utilizing less expensive alternative material as a substrate to reduce the cost of fermentation and push the industrialization process of 1,2-PDO biosynthesis is another valuable strategy [61]. Sato et al. took the first step towards extending the substrate spectrum of 1,2-PDO biosynthesis. They successfully accomplished 1,2-PDO direct production from starch by an engineered E. coli expressing heterologous α-amylase and 1,2-PDO synthetic genes; 13 mg/L 1,2-PDO was achieved [62]. This was the first attempt to simplify the upstream saccharification process of 1,2-PDO biosynthesis. To sustain the profitability and efficiency of the conversion process in the pyrolysis of wheat straw, Lange et al. established a 1,2-PDO microbial fermentation system in pyrolysis water [63]. After introducing glycerol dehydrogenase from E. coli in C. glutamicum, a two-phase aerobic/microaerobic fed-batch process was carried out, and 18.3 ± 1.2 mM 1,2-PDO was obtained with pyrolysis water as the substrate. This result achieved the so far highest overall volumetric productivity with 1.4 ± 0.1 mmol 1,2-PDO L −1 h −1 in an engineered microbial strain, which shows the huge prospect of converting the side stream pyrolysis water to other valuable chemicals. It is well known that reducing the amount of greenhouse gas CO 2 emitted by industry into the environment helps mitigate the effects of global warming [64]. In this respect, the production of basic chemicals through direct fermentation of CO 2 is a possible solution. An engineered cyanobacterium S. elongatus PCC 7942 produced ~ 150 mg/L 1,2-PDO, which takes in mgsA and yqhD both from E. coli and a second alcohol dehydrogenase (sADH) from Clostridium beijerinckii [65]. Furthermore, David al et al. achieved an ~ 1 g/L 1,2-PDO yield through optimization of cultivation conditions based on research [66]. Both these studies revealed the potential of engineered cyanobacteria to produce chemicals. In addition, direct utilization of CO 2 dispels any concerns over competition for arable land with food crops, in contrast with biological 1,2-PDO production processes that are based on sugar or glycerol as a substrate. Redistribution of carbon flux Because the 1,2-PDO biosynthetic pathways are complex, the accumulation of byproducts, which mainly include lactate, formate, acetate, succinate, pyruvate and ethanol, inevitably inhibits the production of 1,2-PDO, for example, cell growth and protein expression may be influenced by the accumulation of acetate at harmful levels [67]. With the development of genetic engineering technology in the twenty-first century, the low productivity of 1,2-PDO biosynthesis can be potentially improved using gene editing technology to redirect carbon flux [68]. Surprisingly, a substantial decrease in 1,2-PDO production was obtained with engineered E. coli via the methylglyoxal pathway, along with the accumulation of pyruvate and an increase in other fermentative byproducts; only the major fermentative byproduct pathway was eliminated, such as the acetate and lactate synthesis pathways. 
Although the engineered strain that disrupted acetate-producing pathways (acetate kinase, ackA; pyruvate dehydrogenase, poxB) [69,70] showed lower levels of accumulated acetate (2.97 g/L) than the wild-type strain at 4 g/L, the production of 1,2-PDO was reduced, from 0.25 to 0.17 g/L, under shake flask conditions. Analogously, disruption of lactate-producing pathways (glyoxalase I, gloA; l-lactate dehydrogenase, ldhA) [71,72] in another engineered E. coli strain resulted in a reduction in the 1,2-PDO titre by ~ 41%, compared with the wild-type strain, with an increase in the accumulation of other byproducts at the same time [73,74]. Consistent with most studies, it was observed that deletion of a few key genes did not completely eliminate major byproduct production. These results all indicated that the disruption of only major byproduct pathways alone was insufficient to tap into the carbon flux for the production of 1,2-PDO. Hence, improving the accumulation of DHAP, which is considered the key precursor of the 1,2-PDO synthetic pathway, was speculated to be a potential strategy to improve 1,2-PDO production. Either the disruption of glucose 6-phosphate dehydrogenase (zwf) [75], leading to the activation of the pentose phosphate pathway to increase the accumulation of upstream products, or the disruption of triose phosphate isomerase (tpiA) [76], leading to increased glyceraldehyde 3-phosphate levels to decrease the consumption of DHAP, was successfully applied to improve 1,2-PDO production. Combining these two strategies with the deletion of genes encoding byproducts, the engineered strain generated a lower level of byproducts, with lactate at 0.14 g/L, succinate at 0.22 g/L, formate at 0.33 g/L, acetate at 0.65 g/L, and ethanol at 0.05 g/L, and more 1,2-PDO at 0.38 g/L, after 96 h, compared to 0.25 g/L from the wild-type strain. Furthermore, the galactose permease/glucokinase system (GGS) was substituted for the phosphotransferase system (PTS) to reduce phosphoenolpyruvate (PEP) consumption and carbon flow to mitigate downstream glycolysis in E. coli. The expression PTS-related gene ptsG (fused glucose-specific PTS enzymes) [77] was disrupted, and then, the GGS operon containing the galP (galactose permease) [78] and glk (glucokinase) [79] genes was introduced. As expected, 1,2-PDO was 1.57-fold more concentrated in the mutant than that in the unmodified strain (0.59 ± 0.13 g/L at 74 h) [73]. Although most of the transformations are carried out with E. coli, Saccharomyces cerevisiae is also a good choice. 1,2-PDO production from glycerol was increased 1.5-fold in S. cerevisiae upon deletion of the tpi1 gene encoding glyceraldehyde 3-phosphate, which shifted the carbon flux to the DHAP side [80]. Improving the supply of cofactors The oxidation-reduction reaction of micro-organisms usually requires the participation of specific cofactors [81]. An insufficient supply of cofactors is often a limiting factor affecting product accumulation [82]. Therefore, genetic engineering of redox cofactors has gradually become an important metabolic engineering strategy for optimizing microbial production. In the 1,2-PDO microbial synthetic pathway, the formation of many byproducts is often accompanied by the consumption of NADH, including lactate, ethanol and succinate. Hence, it is necessary to block the synthetic pathways of all these byproducts. 
In addition, the overexpression of formate dehydrogenase, which shows an efficient catalytic ability to regenerate NADH from formate, is another universal strategy to increase NADH availability [83]. By introducing Candida boidinii formate dehydrogenase (fdh1), the titre of 1,2-PDO was increased by 68.57% compared to the engineered strain without an NADH regenerating system. Together with the deletion of zwf encoding glucose 6-phosphate dehydrogenase, tpiA encoding triose phosphate isomerase, ldhA encoding lactate dehydrogenase, gloA encoding glyoxalase I, and adhE encoding alcohol dehydrogenase (ethanol generating pathway) and the development of cell adaptation in low-phosphate formate medium, the engineered strain achieved 5.13 g/L 1,2-PDO production with a high yield of 0.48 g of 1,2-PDO/g of glucose. In addition, it has been previously suggested that the disruption of the ubiquinone biosynthesis pathway of chorismate pyruvate lyase (ubiC) is conducive to conserving intracellular NADH for reduction reactions. The combined effects of ubiC deletion with carbon flux redirection resulted in a titre of 1.2 g/L 1,2-PDO in a shake flask [73,74]. As another commonly employed industrial strain, S. cerevisiae does not provide the same level of cytosolic reducing equivalents to 1,2-PDO production as the native FAD-dependent glycerol catabolic pathway. It has been recently revealed that an NAD + -dependent 'DHA pathway' successfully replaced the native pathway in S. cerevisiae through the heterologous expression of glycerol dehydrogenase from Ogataea parapolymorpha (Opgdh), overexpression of endogenous dihydroxyacetone kinase (DAK1) and deletion of endogenous glycerol kinase (GUT1). These modifications enabled efficient S. cerevisiae delivery of cytosolic NADH during 1,2-PDO microbial production. Applying strategies to increase both metabolic precursor and cofactor supplies, the modified S. cerevisiae strain obtained the highest titre, > 4 g/L 1,2-PDO, in yeast thus far [51]. Interestingly, NADPH-dependent alcohol dehydrogenases exhibited better reduction ability than NADH-dependent dehydrogenases in a recombined cyanobacterium producing 1,2-PDO [65]. The same result was apparent in recombined C. glutamicum, which proved that NADPH-dependent alcohol dehydrogenase is beneficial to anabolic metabolism [13]. Furthermore, developing genetic engineering strategies to improve the provision of NADPH, which has been proven effective in C. glutamicum, may be helpful to increase 1,2-PDO production. These strategies include the following: (a) overexpression of the E. coli pntAB genes encoding a membrane-bound transhydrogenase to leverage the electrochemical proton gradient across the cell membrane to drive the reduction of NADP + upon the oxidation of NADH [84]; (b) construction of phosphoglucose isomerase (PGI) deletion mutants in C. glutamicum for redirecting carbon flux to the pentose phosphate pathway and increase the NADPH level [85]; (c) construction of a new NADPH supply channel by changing the coenzyme specificity of natural NAD + -dependent glyceraldehyde 3-phosphate dehydrogenase to NADP + [86]; and (d) overexpression of the key enzyme NAD + kinase, which converts NADP + into NADPH, to increase the supply of NADPH [87]. All of these strategies may provide new solutions to the problem of an insufficient supply of cofactors in the process of microbial synthesis. 
New directions of metabolic engineering strategies Toxic intermediates often emerge in the process of microbial synthesis, which affects not only the growth of cells but also the synthesis of the product. Hence, preventing exposure of cells to toxic intermediates cannot be overlooked [88]. The synthesis process of 1,2-PDO via the methylglyoxal pathway is often accompanied by the accumulation of toxic intermediates, such as methylglyoxal and lactaldehyde. In this case, reducing the accumulation of toxic intermediates is particularly important for the synthesis of 1,2-PDO. A technology involving scaffold strategy may be a possible solution strategy. Many signal proteins contain modular protein interaction domains that can specifically bind to other domains or short peptides. The scaffold protein with multiple protein interaction domains can interact with enzymes with polypeptide ligand tags to co-locate enzymes in metabolic pathways [89,90]. By changing the number of domains on the protein scaffold, the relative proportions of different enzymes can be controlled. The accumulation of the toxic intermediate HMG-CoA (Hydroxymethylglutaryl-CoA) is an inevitable problem in the production of mevalonate from acetyl-CoA. Dueber et al. used protein scaffold technology to adjust the ratio of HMG-CoA production and consumption enzymes, which increased the production of the target product mevalonate by 10 times [91]. Analogously, Conrado et al. used 1,2-PDO biosynthetic enzymes to express in cells carrying DNA scaffolds containing the corresponding zinc-finger binding domains, increasing the yield of 1,2-PDO by 3.5-fold compared with that of the control without a DNA scaffold [92]. Microcompartments (bacterial microcompartments, BMCs) are another aspect of the transformation strategy of micro-organisms; BMCs can encapsulate pathway enzymes into protein shells to resolve issues of microbial instability and metabolic intermediates [93][94][95]. The Pdu BMC, which is a natural microcompartment, has been applied to the synthetic pathway of 1,2-PDO [96]. Lee et al. reduced the impact of toxic intermediates on cells through microcompartment technology in which the enzymes in the synthetic pathway were wrapped in a protein shell. An artificial microcompartment for synthesizing 1,2-PDO was then constructed by combining the enzymes related to synthesizing 1,2-PDO in E. coli with the N-terminal targeting sequence of the Pdu BMCs. Compared with that of the strain including free enzymes, the 1,2-PDO yield of the strain containing the fusion enzymes was increased by 245% [97]. In recent years, to avoid the production of toxic intermediates, an artificial and methylglyoxal-independent 1,2-PDO synthesis route was proposed and demonstrated. As a metabolic precursor, lactic acid is reduced to lactaldehyde through the joint action of propionate CoA transferase and CoA-dependent lactaldehyde dehydrogenase or the one-step action of carboxylic acid reductase, which is further reduced by alcohol dehydrogenase to 1,2-PDO (Fig. 3). In recombinant E. coli with the l-lactate dehydrogenase encoding gene lldD and the d-lactate dehydrogenase encoding gene dld deleted, 1,2-PDO stereoisomers were produced through the catalysis of a propionate CoA transferase, encoded by pct gene from Megasphaera elsdenii, a CoA-dependent aldehyde dehydrogenase, encoded by pduP from Salmonella enterica, and an alcohol dehydrogenase, encoded by yahK from E. coli. 
The modified strain produced 1.5 g/L R-1,2-PDO and 1.7 g/L S-1,2-PDO from d-lactic acid and l-lactic acid under shake flask conditions, respectively [47]. Furthermore, 17.3 g/L R-1,2-PDO and 9.3 g/L S-1,2-PDO were biosynthesized from glucose under fermentationcontrolled conditions, while 97.5% ee (R) and 99.3% ee (S) of the optical purity was obtained. These are the highest titres of 1,2-PDO microbial synthesis obtained thus far [48]. The same pathway was constructed in E. coli to convert l-lactic acid to S-1,2-PDO with the deletion of genes related to the methylglyoxal bypass pathway. S-1,2-PDO (13.7 mM) was produced from glucose by redistribution of carbon flux and introduction of a cofactor regeneration system [98]. In addition, Kramer et al. synthesized 1,2-PDO from lactic acid by directly overexpressing the carboxylic acid reductase gene MavCAR from Mycobacterium avium and the alcohol dehydrogenase-related gene yahK from E. coli. R-1,2-PDO accumulated at 7.0 mM with a molar yield of 1.0%, while the S-isomer was produced from glucose at 9.6 mM with a molar yield of 1.4% [99]. Compared to the CoA-dependent 1,2-PDO synthesis pathway, this route is simpler and more convenient, and only requires a one-step reaction, from which lactic acid produces lactaldehyde. With the development of more efficient enzymes, novel synthetic pathways and new metabolic engineering strategies, the field of microbial synthesis will inevitably expand with new vitality. Future perspectives As described at the beginning of this review, 1,2-PDO has immense potential in the global market but an immature industrial microbial synthetic process. Current 1,2-PDO biosynthesis routes based on biomaterials, including glycerol, starch and cellulose, and its microbial conversion reaction do not fully correct the limited substrate spectrum or relatively low yields nor produce economic benefits. However, the recent construction of several complete biosynthetic routes in different strains is coming to the forefront, especially for the production of 1,2-PDO stereoisomers from glucose in E. coli, which is instructive to the development of effective 1,2-PDO microbial production. First, novel inexpensive substrates should be developed to reduce costs of 1,2-PDO commercial manufacturing. Using genetic engineering technology, it is possible to utilize inexpensive substrates for 1,2-PDO microbial biotransformation, such as glycerol, cellulose, CO 2 , etc. Second, a series of bioengineering strategies need to be applied to improving the yields and titre of 1,2-PDO. Mining the key high-efficiency enzymes used in the reduction reaction is an effective method to increase the production of 1,2-PDO. Li et al. exploited NADPH-specific secondary alcohol dehydrogenases, which increased the production from 22 to 150 mg/L [65]. Hence, obtaining a highly active enzyme with good thermostability and desired substrate specificity is significant for increasing production. The structural biology analysis of different key rate-limiting enzymes has been helpful in understanding their catalytic mechanism and substrate specificity; hence, enzymes can be directly modified through protein engineering methods such as rational or semirational design to achieve higher yields of target products [100]. 
In addition, screening biological databases with computational techniques to identify rate-limiting enzymes [101] and predicting their functions using BLAST searches are the primary in silico means employed for key enzyme discovery in 1,2-PDO biosynthesis [102]. Moreover, a previous study revealed that methylglyoxal and lactaldehyde have toxic effects on production strains that suppress cell growth. Therefore, solving the problem of toxic intermediates is vital for the improvement of production. Recent research has mostly focused on the reduction of toxic intermediates, while another neglected potential strategy is the enhancement of the tolerance of chassis cells to toxic intermediates [103]. This strategy is conducive to building more robust chassis cells through the combination of adaptive laboratory evolution [104], high-throughput screening [105] and knowledge of the toxicity mechanisms of the relevant intermediates. Furthermore, different chassis micro-organisms have different advantages, and non-model organisms have received extensive attention from researchers because of their specific metabolic networks [106]. With the rapid development and improvement of synthetic biology tools, for example CRISPR/Cas9 [107], gene editing of micro-organisms with different chassis cells, and even non-model organisms, has gradually become possible [108]. Hence, in-depth research on chassis cells will hopefully establish more suitable 1,2-PDO synthetic chassis cells and accelerate the industrialization of 1,2-PDO biosynthesis. Finally, it is necessary to develop the downstream transformation of 1,2-PDO into a variety of high value-added chemicals to further increase its value and broaden its application field. Conclusion At present, 1,2-PDO biosynthesis is commercially infeasible because of its high cost. One of the most important causes of this is the low efficiency of the enzymes involved in the 1,2-PDO microbial synthetic pathways. Hence, it is essential to look for high-efficiency enzymes that are suitable for the production of 1,2-PDO. In addition, although the micro-organisms and biosynthetic pathways for the microbial production of 1,2-PDO are relatively abundant, the primary task is to select strains and pathways suitable for industrial production. In this review, we summarized a variety of micro-organisms and 1,2-PDO biosynthetic pathways that have the potential to be applied to industrialized manufacturing. The conditions under which different strains perform best vary considerably, so it is important to select strains according to the intended process environment. For instance, C. thermosaccharolyticum prefers high-temperature environments, while Lactobacillus buchneri prefers acidic environments. Furthermore, the three known 1,2-PDO biosynthetic pathways also have their pros and cons. Although strains for the deoxyhexose pathway are easy to obtain, the cost of the substrate is relatively high; research on the methylglyoxal pathway is relatively detailed, but the problem of toxic intermediates is difficult to solve; the lactic acid pathway is relatively simple and fast, but its details still need to be further explored. 
We also summarized strategies for enhancing the synthesis of 1,2-PDO, such as the overexpression and introduction of key genes to improve the synthesis efficiency of 1,2-PDO, the knockout of related genes in the byproduct synthesis pathways to reduce the accumulation of byproducts, and the construction of cofactor regeneration systems to supply sufficient NAD(P)H for the series of reduction reactions. With the optimization strategies summarized in this review, it is promising to use micro-organisms to produce 1,2-PDO with relatively high yields. Collectively, further studies on the biological synthesis of 1,2-PDO will contribute to alleviating petroleum shortages. A series of microbial cell factories producing 1,2-PDO have recently been constructed, but efficient microbial synthetic methods for 1,2-PDO are still lacking, and more exploration in synthetic biology is needed. In summary, we can expect that further metabolic engineering strategies will lead to a highly efficient 1,2-PDO production process in recombinant microbes.
9,077.6
2021-11-18T00:00:00.000
[ "Chemistry", "Biology", "Engineering" ]
Heat Transfer to MHD Oscillatory Viscoelastic Flow in a Channel Filled with Porous Medium The combined effect of a transverse magnetic field and radiative heat transfer on the unsteady flow of a conducting, optically thin viscoelastic fluid through a channel filled with a saturated porous medium and having nonuniform wall temperatures has been discussed. It is assumed that the fluid has small electrical conductivity and that the electromagnetic force produced is very small. Closed-form analytical solutions are constructed for the problem. The effects of the radiation and magnetic field parameters on the velocity profile and shear stress for different values of the viscoelastic parameter, in combination with the other flow parameters, are illustrated graphically, and physical aspects of the problem are discussed. Introduction The flow of an electrically conducting fluid has important applications in many branches of engineering science, such as magnetohydrodynamic (MHD) generators, plasma studies, nuclear reactors, geothermal energy extraction, electromagnetic propulsion, and boundary layer control in aerodynamics. In the light of these applications, MHD flow in a channel has been studied by many authors; some of them are Nigam and Singh [1], Soundalgekar and Bhat [2], Vajravelu [3], and Attia and Kotb [4]. A survey of MHD studies in the technological fields can be found in Moreau [5]. The flow of fluids through porous media is an important topic because of the recovery of crude oil from the pores of reservoir rocks; in this case, Darcy's law represents the gross effect. Raptis et al. [6] have analysed the hydromagnetic free convection flow through a porous medium between two parallel plates. Aldoss et al. [7] have studied mixed convection flow from a vertical plate embedded in a porous medium in the presence of a magnetic field. Makinde and Mhone [8] have considered heat transfer to MHD oscillatory flow in a channel filled with porous medium. In this study, an attempt has been made to extend the problem studied by Makinde and Mhone [8] to the case of a viscoelastic fluid characterised by the second-order fluid model. The constitutive equation for the incompressible second-order fluid is of the form σ = −pI + μ_1 A_1 + μ_2 A_2 + μ_3 A_1^2, (1) where σ is the stress tensor, p is the hydrostatic pressure, I is the unit tensor, A_n (n = 1, 2) are the kinematic Rivlin-Ericksen tensors, and μ_1, μ_2, and μ_3 are the material coefficients describing viscosity, elasticity, and cross-viscosity, respectively. The material coefficients μ_1, μ_2, and μ_3 are taken as constants, with μ_1 and μ_3 positive and μ_2 negative (Markovitz and Coleman [9]). Equation (1) was derived by Coleman and Noll [10] from the theory of simple fluids by assuming that the stress is more sensitive to recent deformation than to deformation that occurred in the distant past. Mathematical Formulation of the Problem Consider the flow of a conducting, optically thin fluid in a channel filled with a saturated porous medium under the influence of an externally applied homogeneous magnetic field and radiative heat transfer, as shown in Figure 1. It is assumed that the fluid has small electrical conductivity and that the electromagnetic force produced is very small. The x-axis is taken along the centre of the channel, and the y-axis is taken normal to it. 
Then, assuming a Boussinesq incompressible fluid model, the equations governing the motion are given ∂T ∂t subject to boundary conditions where u is the axial velocity, t is the time, T is the fluid temperature, P is the pressure, g is the gravitational force, q is the radiative heat flux, β is the co-efficient of volume expansion due to temperature, C p is the specific heat at constant pressure, k is the thermal conductivity, K is the porous medium permeability co-efficient, B 0 (= μ e H 0 ) is the electromagnetic induction, μ e is the magnetic permeability, H 0 is the intensity of the magnetic field, σ e is the conductivity of the fluid, ρ is the fluid density, and υ i = μ i /ρ, (i = 1, 2). It is assumed that both walls of temperature T 0 , T w are high enough to induce radiative heat transfer. Following Cogley et al. [11], it is assumed that the fluid is optically thin with a relatively low density and the radiative heat flux is given by where α is the mean radiation absorption co-efficient. The following nondimensional quantities are introduced: where U is the flow mean velocity. The dimensionless governing equations together with the appropriate boundary conditions (neglecting the bars for clarity) can be written as Pe ∂θ ∂t with where Gr, H, N, Pe, Re, Da, S(= 1/Da), and γ = (υ 2 Re)/a 2 are Grashoff number, Hartmann number, Radiation parameter, Péclet number, Reynolds number, Darcy number, porous medium shape factor parameter, and viscoelastic parameter, respectively. Method of Solution In order to solve (7) and (8) for purely oscillatory flow, let where λ is a constant and ω is the frequency of oscillation. Substituting the above expressions into (7) and (8) and using (9), we get The nondimensional shear stress σ at the wall y = 0 is given by The rate of heat transfer across the channel's wall is given as Discussions and Conclusion The purpose of this study is to bring out the effects of the viscoelastic parameter γ on the governing flow with the combination of the other flow parameters. The corresponding results for Newtonian fluid can be deduced from the above results by setting γ = 0, and it is worth mentioning here that these results coincide with that of Makinde and Mhone [8]. We have considered the real parts of the results throughout for numerical validation. The velocity profile u against y is plotted in Figures 2-4 It has also been observed that the temperature field is not significantly affected by the viscoelastic parameter.
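Since the dimensionless governing equations and the closed-form expressions are not reproduced in the text above, the sketch below only illustrates how the purely oscillatory problem can be checked numerically. It assumes that the reduced equations take the form commonly used for this class of problems (cf. Makinde and Mhone), with the viscoelastic parameter γ entering as a factor (1 + iωγ) on the viscous term; the exact coefficients and sign conventions of the paper may differ, so this is an illustrative sketch rather than the paper's solution.

```python
# Illustrative numerical check of the purely oscillatory channel flow.
# Assumed (not taken from the paper) dimensionless forms:
#   theta0'' + (N**2 - 1j*omega*Pe) * theta0 = 0,   theta0(0)=0, theta0(1)=1
#   (1 + 1j*omega*gamma) * u0'' - (S**2 + H**2 + 1j*omega*Re) * u0
#       = -lam - Gr * theta0,                        u0(0)=u0(1)=0
# with u = u0(y) exp(i*omega*t) and theta = theta0(y) exp(i*omega*t).
import numpy as np

Gr, H, N, Pe, Re, S = 1.0, 1.0, 1.0, 1.0, 1.0, 1.0   # illustrative values
gamma, omega, lam = 0.2, 1.0, 1.0

ny = 201
y = np.linspace(0.0, 1.0, ny)
h = y[1] - y[0]

m2 = N**2 - 1j * omega * Pe
m = np.sqrt(m2)
theta0 = np.sin(m * y) / np.sin(m)          # analytic temperature amplitude

# Finite-difference solve for the velocity amplitude u0 on interior nodes
n2 = S**2 + H**2 + 1j * omega * Re
coef = (1.0 + 1j * omega * gamma) / h**2
A = np.zeros((ny - 2, ny - 2), dtype=complex)
np.fill_diagonal(A, -2.0 * coef - n2)
idx = np.arange(ny - 3)
A[idx, idx + 1] = coef                       # superdiagonal
A[idx + 1, idx] = coef                       # subdiagonal
rhs = -lam - Gr * theta0[1:-1]
u0 = np.zeros(ny, dtype=complex)
u0[1:-1] = np.linalg.solve(A, rhs)

t = np.pi / (2 * omega)                      # one instant in the cycle
u = np.real(u0 * np.exp(1j * omega * t))     # physical velocity profile
print("max |u| at t = pi/(2*omega):", np.abs(u).max())
```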
1,369.2
2012-01-29T00:00:00.000
[ "Physics", "Engineering", "Environmental Science" ]
BERTrand—peptide:TCR binding prediction using Bidirectional Encoder Representations from Transformers augmented with random TCR pairing Abstract Motivation The advent of T-cell receptor (TCR) sequencing experiments has allowed for a significant increase in the amount of peptide:TCR binding data available, and a number of machine-learning models have appeared in recent years. High-quality prediction models for a fixed epitope sequence are feasible, provided enough known binding TCR sequences are available. However, their performance drops significantly for previously unseen peptides. Results We prepare a dataset of known peptide:TCR binders and augment it with negative decoys created using healthy donors' T-cell repertoires. We employ deep learning methods commonly applied in Natural Language Processing to train a peptide:TCR binding model with a degree of cross-peptide generalization (0.69 AUROC). We demonstrate that BERTrand outperforms the published methods when evaluated on peptide sequences not used during model training. Availability and implementation The datasets and the code for model training are available at https://github.com/SFGLab/bertrand. Introduction Cytotoxic T-cells play a major role in the adaptive immune response in humans. Intracellular proteins are degraded by the proteasome into peptides. The antigen processing machinery of the cell allows the presentation of peptides on the cell surface using Major Histocompatibility Complexes (MHC). The focus of this work is MHC class I, which typically presents peptides of 8-11 amino acids in length. These peptide-MHC (pMHC) complexes in turn can be recognized and engaged by CD8+ cytotoxic T-cells. Due to negative selection in the thymus and the high degree of diversity of the T-cell receptor (TCR) repertoire, T-cells are capable of recognizing a variety of foreign and mutated epitopes (Rudolph et al. 2006). The binding properties of a TCR toward a given pMHC are regulated by hypervariable Complementarity Determining Regions (CDRs). For the α and β chains of the TCR, three such regions exist. The CDR1 and CDR2 of both the α and β chains mostly interact with the MHC complex, while CDR3 predominantly interacts with the peptide. The CDR3 region is the most variable region of the TCR and is thought to be the major factor in determining the binding preference of the TCR toward its cognate pMHC (La Gruta et al. 2018). While both the α and β chains contribute to the interaction, some studies suggest that the β chain plays a more important role in antigen recognition (Sidhom et al. 2021) and is significantly more prevalent in the data. It should be noted that many researchers, for example Dash et al. (2017) and Montemurro et al. (2021), have demonstrated the importance of the α chain of the TCR in antigen recognition. However, this work relies on machine learning (ML) to infer the interaction between TCRs and pMHC complexes, so it benefits from a high number of observations. In the data we collected, only 18% of the observations have the CDR3α annotation. Thus, we will be focusing solely on the sequence of the CDR3β part of the TCR, which is readily available in multiple datasets. This is the approach also adopted by Weber et al. (2021) and Lu et al. (2021). The MHC is not able to present each and every peptide. Thanks to the large amount of pMHC binding data as well as peptide presentation data from mass spectrometry experiments, researchers have been able to produce high-accuracy models of pMHC binding and presentation. 
However, even if a peptide is presented on the surface of the cell, it is still unlikely to be immunogenic. The study by Parkhurst et al. (2019) of peptide T-cell immunogenicity in 75 cancer patients was able to find only 57 CD8+ positive mutations along with over 8000 non-immunogenic ones; the immunogenic mutations thus account for <1%. One of the current frontiers of computational immunology research is peptide:TCR binding prediction, which represents a key component for understanding T-cell activation. The amount of data for peptide:TCR binding prediction has been growing in recent years, with the popularization of pMHC dextramer production, single-cell TCR sequencing, and TCR barcoding. In this work, we compile a collection of peptide:TCR sequence data from a number of databases and publications into a single curated dataset of known TCR binders with their cognate epitope sequences. We augment the dataset with negative decoy examples generated from reference T-cell repertoires. The growing amount of data on peptide:TCR specificity has allowed for the creation of many computational tools that facilitate peptide:TCR binding prediction. These tools can be broadly categorized into three groups. The first group of methods uses the similarity between TCRs to produce clusters and determine their peptide specificity. GLIPH (Glanville et al. 2017) and TCRdist (Dash et al. 2017) have demonstrated that for a given epitope sequence, TCR binding can be predicted accurately using distance-based methods. The second group of methods involves the training of peptide-specific TCR binding models: DeepTCR (Sidhom et al. 2021), TCRex (Gielis et al. 2019), and TCRGP (Jokinen et al. 2021). These algorithms often work remarkably well for known peptides but are unable to predict binding for unseen peptides. The third group comprises methods that allow prediction for unseen peptides: NetTCR2.0 (Montemurro et al. 2021), ERGO (Springer et al. 2020), pMTnet (Lu et al. 2021), DLpTCR (Xu et al. 2021), TITAN (Weber et al. 2021), and PanPEP (Gao et al. 2023). The goal of this research is to provide immunologists with better tools for in silico TCR therapy design. As most of the peptides in the published data originate from viruses, peptide-centric models from the second group have limited applicability for cancer neoantigens or tumor-associated antigens. The focus of this work is thus the peptide:TCR pairing task, specifically the case when the model has previously seen neither the peptide nor the TCR. We believe that high accuracy for this task would bring the most benefit for potential users. Recent breakthroughs in Natural Language Processing (NLP), such as TAPE (Rao et al. 2019) and DNABERT (Ji et al. 2021), have prompted many researchers to apply Transformer architectures to sequence-based biological problems, such as transcription factor prediction, protein-protein interaction prediction, and binding pocket prediction, to name a few. One useful feature of models from the NLP domain is the ability to process variable-length sequences of symbols from a fixed alphabet. Another important aspect is the ability to benefit from unsupervised pre-training, which is often an option in bioinformatics. Researchers usually pre-train the language model on large sequence databases, such as UniProt. 
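As a concrete illustration of what such pre-training operates on, the sketch below packs a peptide and a CDR3β sequence into a BERT-style token sequence and applies random masking for masked language modeling. The special tokens, the 15% masking rate, and the example sequences are illustrative assumptions, not the exact BERTrand configuration.

```python
# Minimal sketch of BERT-style masked language modeling (MLM) input
# preparation for a peptide:CDR3beta pair. Special tokens, the 15% masking
# rate, and the example sequences are illustrative only.
import numpy as np

AMINO_ACIDS = list("ACDEFGHIKLMNPQRSTVWY")
SPECIAL = ["[PAD]", "[CLS]", "[SEP]", "[MASK]"]
vocab = {tok: i for i, tok in enumerate(SPECIAL + AMINO_ACIDS)}

def encode(peptide: str, cdr3b: str) -> list:
    """Concatenate peptide and CDR3beta as single amino-acid tokens."""
    tokens = ["[CLS]", *peptide, "[SEP]", *cdr3b, "[SEP]"]
    return [vocab[t] for t in tokens]

def mask_for_mlm(ids: list, rate: float = 0.15, rng=np.random.default_rng(0)):
    """Randomly mask amino-acid tokens; return masked ids and MLM labels."""
    ids = np.array(ids)
    labels = np.full_like(ids, -100)             # -100 = ignored by the loss
    maskable = ids >= len(SPECIAL)                # never mask special tokens
    chosen = maskable & (rng.random(len(ids)) < rate)
    labels[chosen] = ids[chosen]
    ids[chosen] = vocab["[MASK]"]
    return ids, labels

ids = encode("GILGFVFTL", "CASSIRSSYEQYF")        # illustrative peptide and CDR3beta
masked, labels = mask_for_mlm(ids)
print(masked, labels, sep="\n")
```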
In this work, we construct a pre-training set for the peptide:TCR model, which comprises a hypothetical human peptide:TCR repertoire, based on peptides from MHC-I mass spectrometry peptide presentation experiments and TCRs from healthy donors. After pre-training, our model is fine-tuned to predict peptide:TCR binding and is shown to outperform the existing methods in the cross-peptide generalization task. The overall flow of the analysis is demonstrated in Fig. 1. In the left part of the figure, we show the process of the NLP model pre-training. Reference TCR sequences from healthy donors are paired randomly with the presented peptides to produce a hypothetical peptide:TCR repertoire, which is then used to perform masked language modeling (MLM) pre-training of the Bidirectional Encoder Representations from Transformers (BERT) neural network. In the right part of the figure, the process of generating negative decoy observations is illustrated. Negative decoys are also based on reference TCRs. An ML model is used to remove outliers, and then the remaining reference TCRs are clustered together with binding TCRs. TCR sequences that are too similar to any binding TCR are removed, and the rest of the reference TCR clusters are randomly paired with peptides from the binding peptide:TCR set. The pre-trained BERT network is then trained to predict peptide:TCR binding.

Figure 1. Flow diagram of the analysis. The left part illustrates the creation of the hypothetical peptide:TCR repertoire and MLM pre-training. The right part shows the process of negative decoy generation and the various filtering steps leading to it. These two paths converge on BERT supervised training for peptide:TCR binding prediction.

2 Materials and methods 2.1 Data 2.1.1 Data curation Our data curation process is shown in Fig. 2. We collected data containing the amino acid sequences of binding peptide:TCR pairs from a number of databases and publications (Table 1). We narrowed down the dataset to human CD8+ T-cells and peptides of 8-11 amino acids in length. We only considered CDR3b observations, as CDR3a annotations are available for only 18% of the data (<6k observations). Over 99% of the CDR3b sequences have a length between 10 and 20 amino acids. Another filtering criterion was the requirement of having specific amino acids in the first and last anchor positions of the CDR3b chains, cysteine (C) and phenylalanine (F), respectively. This way, we compiled 33k unique CDR3b sequences of T-cells binding a total of 401 epitope sequences. To compensate for the lack of negative examples, we generated negative decoy observations in a 3-to-1 ratio, using a dataset of reference T-cell repertoires from healthy donors (Oakes et al. 2017) paired randomly with peptides from the binders dataset. Cluster analysis of the peptides was done using hierarchical clustering with Levenshtein distance and single linkage. It revealed that a number of similar epitope sequences are present, differing by three amino acids at most. For example, the MART-1 human melanoma-related antigen (ELAGIGILTV) was extensively studied with minor modifications: EAAGIGILTV, LLLGIGILV, ELAGIGLTV, AAGIGILTV, and ALGIGILTV. We argue that using such similar observations in the training and validation set may bias any model trained on peptide sequences and could introduce unwanted overfitting (if the TCR repertoires of two similar peptides are also similar), or unwanted underfitting if the repertoires are too different. 
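To make the clustering step concrete, the following is a minimal sketch (not the authors' code) of hierarchical single-linkage clustering of peptides under the Levenshtein distance, cut so that distinct clusters are at least three edits apart; the function names, the cut-off handling and the demo peptides are illustrative assumptions.

```python
"""Minimal sketch of peptide clustering with Levenshtein distance and single linkage."""
from itertools import combinations
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def levenshtein(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def cluster_peptides(peptides, min_separation=3):
    # Condensed pairwise distance vector in the order expected by scipy.
    dists = np.array([levenshtein(a, b) for a, b in combinations(peptides, 2)],
                     dtype=float)
    Z = linkage(dists, method="single")
    # Merging everything closer than `min_separation` leaves clusters whose
    # minimum inter-cluster distance is at least `min_separation`.
    labels = fcluster(Z, t=min_separation - 0.5, criterion="distance")
    return dict(zip(peptides, labels))

if __name__ == "__main__":
    demo = ["ELAGIGILTV", "EAAGIGILTV", "AAGIGILTV", "GILGFVFTL", "NLVPMVATV"]
    print(cluster_peptides(demo))  # MART-1 variants fall into one cluster
```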
Our analysis revealed 261 peptide clusters with a minimum Levenshtein distance of three. Peptide clusters were used as groups during cross-validation. Sources of overfitting Training an ML model on a limited set of data (i.e. positive peptide:TCR pairs for 400 peptides only) and with a lack of negative examples forces us to create negative decoy observations, which may introduce biases that an ML model could easily exploit. Randomly pairing peptides and TCRs to create negative decoys is certainly a viable approach for this problem. TCRs are highly cross-reactive and a single TCR is estimated to bind up to 10^6 different epitope sequences (Mason 1998); hence, a TCR could potentially bind some other peptide. However, TCRs are also highly specific, as the probability of a specific TCR binding a randomly chosen peptide is estimated to be 10^-4 (Frank 2020), which makes the false-negative rate introduced by random pairing acceptable for an ML approach. Besides possible false negative observations produced by random pairing and biases that may arise from experimental conditions, we have identified a handful of potential problems for a training setup with negative decoys (see Table 2). Mismatch pairing Mismatch pairing is an approach used in Montemurro et al. (2021) and Springer et al. (2020) to create negative decoy observations by randomly pairing a peptide with a TCR from a different peptide:TCR pair. The mismatch pairing approach guarantees that the TCR distribution of the negatives will match that of the positives. The number of different TCRs in a human body is estimated to be around 10^8-10^10 (Qi et al. 2014, Lythe et al. 2016), and the number of MHC-I-presented peptides on human cells is around 10^4 (Mester et al. 2011); thus, mismatch pairing largely under-represents both peptides and TCRs. Moreover, due to publication bias, the different immunogenic potential of viral and cancer peptides in humans, the distribution of T-cell clonotypes (Bolkhovskaya et al. 2014) and other factors, the number of unique TCR observations per peptide follows a power-law decay distribution (see Supplementary Fig. S1). In practice, this means that random peptide:TCR pairing will produce negative decoys where over 61% of TCR sequences come from the five most popular peptides in the dataset. The repertoires of these peptides are too homogeneous and a model might learn to identify those as negatives. Mismatch pairing also produces correlated observations; thus, during cross-validation there would be a considerable number of examples in the training set sharing a peptide or TCR sequence with pairs in the test set. These recurring peptides and TCRs may be exploited by the model if not removed. Deep learning models have a tendency to "remember" training data, so correlated observations in the training and test sets should be avoided to ensure test set independence. Reference pairing Our approach to negative decoy generation was designed to overcome some of the aforementioned biases. We collected around 560k TCR CDR3b sequences from the repertoires of three healthy donors from Oakes et al. (2017) and randomly paired them with peptides from the binders dataset. The study by Oakes et al. (2017) was used for reference pairing instead of Dean et al. (2015) due to the improved sequencing protocol that captured the rare clones in the TCR distribution, allowing for a more biologically plausible CDR3b sequence space, and the availability of the T-cell type annotation. 
We used only the CD8+ T-cells from the patients' repertoires, which matches the MHC class I peptides in the binders' dataset. Reference TCRs represent a much larger region of the theoretical TCR distribution compared to binding TCRs, although both TCRs and peptides still remain under-represented. Reference TCRs obtained through a different kind of TCR sequencing experiment might introduce CDR3b sequences that are out-of-distribution relative to the binding TCRs and thus might be easy targets for a neural network. To address this issue, we filtered the sequences using an ML-based approach; see the Outliers filtering section in the Supplementary Material for a detailed description. To avoid correlated observations, we performed TCR clustering analysis of the binding TCR sequences together with reference TCRs: hierarchical clustering was used with Levenshtein distance and complete linkage with a distance cut-off equal to three. Clustering naturally produced three types of clusters: 1) clusters with only TCRs from binding peptide:TCR pairs, 2) mixed clusters with both positive and reference TCRs, and 3) clusters with only reference TCRs. During decoy generation, we rejected reference TCRs from mixed clusters, as they have a much higher probability of being false negatives, because their sequences are very similar to those of the binding TCRs. TCRs from reference-only clusters were randomly paired with a single peptide from the pool of 401 available peptides in the binding dataset in a 3-to-1 ratio to the number of positive observations for a given peptide. In a cluster of reference TCRs, every TCR is paired with the same peptide, which limits the generation of positive and negative observations with the same peptide sequence and very similar TCR sequences. Such observations would in fact be quite useless, as they would have to be filtered from the training set if they appeared in the test set during cross-validation; otherwise, the model would be biased toward predicting the negative class. As the decoy generation is random, the dataset was replicated three times for different seeds. TCR clusters were also used as groups during cross-validation. Model For this problem, we applied the BERT artificial neural network (Devlin et al. 2019) from the "transformers" Python package (Wolf et al. 2019). The model was initially pre-trained to perform an MLM task on a hypothetical TCR repertoire and then fine-tuned for the actual peptide:TCR classification. Model architecture The architecture of BERT is illustrated in Fig. 3. Initially, peptides and TCRs need to be represented as sequences of tokens. The token vocabulary consists of 20 amino acids and 5 additional special tokens: CLS is used to indicate the starting position of the sequence, SEP to indicate the end of a sequence, MASK for MLM, PAD for padding, and UNK for non-standard amino acids. Each peptide and TCR are concatenated into a single sequence of tokens with an additional CLS token at the beginning and two SEP tokens, between the sequences and at the end of the sequence. The sequence IDs indicating the absolute position of each token and the token type IDs, indicating whether a token belongs to the peptide or to the TCR, are generated as well. Token, position and type IDs are embedded and added, creating the sequence of input embeddings for each token. The input is then passed to a BERT network with eight transformer blocks. 
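The tokenization just described can be sketched as follows; this is an illustrative reimplementation, and the specific id assignments, the maximum sequence length and the example peptide/CDR3b pair are assumptions rather than the actual BERTrand tokenizer.

```python
"""Sketch of peptide:TCR tokenization: token ids, token type ids, position ids."""
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
SPECIALS = ["[PAD]", "[UNK]", "[CLS]", "[SEP]", "[MASK]"]
VOCAB = {tok: i for i, tok in enumerate(SPECIALS + list(AMINO_ACIDS))}

def encode(peptide: str, cdr3b: str, max_len: int = 40):
    aa = lambda c: VOCAB.get(c, VOCAB["[UNK]"])  # UNK for non-standard residues
    tokens = ([VOCAB["[CLS]"]] + [aa(c) for c in peptide] + [VOCAB["[SEP]"]]
              + [aa(c) for c in cdr3b] + [VOCAB["[SEP]"]])
    # Token type ids: 0 for the peptide segment (incl. CLS and first SEP), 1 for the TCR segment.
    type_ids = [0] * (len(peptide) + 2) + [1] * (len(cdr3b) + 1)
    pad = max_len - len(tokens)
    assert pad >= 0, "increase max_len"
    tokens += [VOCAB["[PAD]"]] * pad
    type_ids += [0] * pad
    position_ids = list(range(max_len))               # absolute positions
    attention_mask = [1] * (max_len - pad) + [0] * pad
    return {"input_ids": tokens, "token_type_ids": type_ids,
            "position_ids": position_ids, "attention_mask": attention_mask}

if __name__ == "__main__":
    enc = encode("GILGFVFTL", "CASSIRSSYEQYF")
    print(enc["input_ids"][:15], enc["token_type_ids"][:15])
```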
BERT produces an output embedding for each token, which is passed either to a token classification head during pre-training or to a sequence classification head during fine-tuning. Pre-training The peptides used for pre-training come from MHC-I mass spectrometry presentation experiments (Abelin et al. 2017, Di Marco et al. 2017, Faridi et al. 2018, Sarkizova et al. 2020). We randomly paired the MHC-I-presented peptides with reference TCRs and pre-trained a BERT neural network using MLM. Fifteen percent of randomly chosen amino acids in the input sequence were masked and the network was trained to predict the masked amino acid. The weights from this stage were used as a starting point for the supervised training task. The effect of MLM pre-training can be seen in Supplementary Fig. S2. Fine-tuning A separate sequence classification head was trained to classify binding peptide:TCR pairs and negative decoys. The hidden representation of the first token in the sequence was passed into a feed-forward layer combined with a softmax output layer. Focal loss with γ = 3 and α = 0.25 was used. Below is the description of the BERT fine-tuning procedure:

embeddings = BERT(token ids, token type ids, position ids)
embedding_CLS = embeddings_0
y = sequence classification head(embedding_CLS)
supervised loss = focal loss(y, ŷ; γ = 3, α = 0.25).

Figure 3. BERT architecture (bottom to top): first, the peptide and TCR are tokenized. Then, each token is embedded in a 512-dimensional token space. The absolute position and token type (peptide or TCR) are encoded using separate embeddings. All three embeddings are added to form the input. Eight transformer blocks process the input to produce the output for each token. During MLM pre-training, the token classification head (right) is trained to predict the true token for 15% randomly masked tokens. During sequence classification training (left), the sequence classification head takes the output of the CLS token and predicts binding for a peptide:TCR pair.

Benchmarks and evaluation Existing approaches to peptide:TCR binding prediction based on peptide and CDR3b sequences were evaluated alongside our model. The selection criteria for the benchmarks were the following: the ability to perform peptide:TCR binding prediction for unseen peptides, code availability with the possibility of re-training the model on our dataset, and the ability to be trained without CDR3a chain information. Among the peptide:TCR binding methods mentioned earlier, four algorithms fulfill the above criteria, namely NetTCR2.0, ERGO, TITAN, and DLpTCR. NetTCR2.0 was published by Montemurro et al. (2021); it uses a convolutional neural network (CNN) to predict peptide:TCR binding for A*02:01-restricted peptides. ERGO is the algorithm published by Springer et al. (2020), which uses a pre-trained long short-term memory (LSTM) neural network architecture. TITAN from Weber et al. (2021) is a bimodal neural network, which is pre-trained on general protein-ligand interactions and uses an atomic-level SMILES input for the peptide that is combined with a TCR input using cross-attention. DLpTCR from Xu et al. (2021) uses an ensemble of CNNs, LSTMs and fully connected neural networks. We are grateful to the authors of these approaches for providing the full source code for model training. 
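For concreteness, a generic focal-loss implementation with γ = 3 and α = 0.25, matching the two-class softmax head described above, might look as follows; this sketch is not taken from the BERTrand repository, and the convention of applying α to the positive class is an assumption.

```python
"""Generic focal-loss sketch for a two-class softmax classification head."""
import torch
import torch.nn.functional as F

def focal_loss(logits: torch.Tensor, labels: torch.Tensor,
               gamma: float = 3.0, alpha: float = 0.25) -> torch.Tensor:
    # logits: (batch, 2) raw scores from the sequence classification head
    # labels: (batch,) with 1 = binding pair, 0 = negative decoy
    log_probs = F.log_softmax(logits, dim=-1)
    log_pt = log_probs.gather(1, labels.unsqueeze(1)).squeeze(1)  # log-prob of true class
    pt = log_pt.exp()
    alpha_t = torch.where(labels == 1,
                          torch.full_like(pt, alpha),
                          torch.full_like(pt, 1.0 - alpha))
    # Down-weight easy examples by (1 - p_t)^gamma.
    loss = -alpha_t * (1.0 - pt) ** gamma * log_pt
    return loss.mean()

if __name__ == "__main__":
    logits = torch.tensor([[2.0, -1.0], [0.1, 0.3]])
    labels = torch.tensor([0, 1])
    print(focal_loss(logits, labels))
```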
To test the generalization power of our model and the benchmarks, we performed repeated cross-validation grouped by peptide and TCR clusters, to avoid correlated observations in the training and testing sets, which can cause inflated results due to neural networks' ability to remember training examples. For each training episode, the dataset was split into train and test sets. Fourteen peptide clusters and all their associated TCRs were chosen for each test set. Train observations with a TCR belonging to any test set TCR cluster were removed. All models were trained using the same dataset splits to ensure fairness in terms of data availability. The train and test sets were restricted to viral peptides and the final quality of the predictions was measured on an independent set of 76 cancer peptides. This process was repeated 21 times for 3 repetitions of random pairing, resulting in 63 rounds of cross-validation in total. The metric used for model validation was AUROC, which is the most popular metric for this problem (Lu et al. 2021, Weber et al. 2021, Xu et al. 2021, Meysman et al. 2023). It was computed separately for each peptide and then averaged. Using this averaging procedure, we limit the bias which originates from the high number of observations for some peptides and from the differences between peptide repertoires. We believe that such a measure is preferable to AUROC computed across all the observations, as the latter might likely lead to inflated results. BERTrand was trained for 25 epochs. The benchmarks were trained according to the training procedures provided in their corresponding repositories. BERTrand, ERGO, TITAN, and DLpTCR use early stopping on a separate subset of the data; NetTCR2.0, however, uses training loss for early stopping. The specifics of the evaluation procedure are explained in detail in the Evaluation section of the Supplementary Material. Two kinds of baselines were considered during the evaluation: a random baseline (which is equal to 0.5 for AUROC), and a baseline based on prediction using the TCR sequence only. The second baseline was estimated by training BERTrand without the peptide sequence. This is a more realistic baseline that represents the biases in the CDR3b sequences introduced by the negative decoy generation. For more details about the baselines, see the Baseline estimation section in the Supplementary Material. Results and discussion We performed 21 rounds of cross-validation, repeated three times with different random pairings, and we evaluated the average per-peptide AUROC for each model. BERTrand converged to the optimal solution at around five epochs, with further training only introducing overfitting (see Supplementary Fig. S2). The cross-validation results shown in Fig. 4 indicate that our model can achieve better predictive performance than the state-of-the-art models when tested in multiple scenarios. While the average AUROC indicates a model that is clearly better than the baseline, the high variability in AUROC between peptides remains a concern. Out of 76 cancer peptides, BERTrand achieved over 0.58 AUROC for 62 of them. While the model may not be optimally suited to perform prediction tasks on single peptide targets, it is applicable to groups of peptide targets in a practical in silico TCR prioritization scenario. With a large enough set of previously unseen peptide targets, BERTrand can generate predictions that prioritize binding peptide:TCR pairs, yielding an expected AUROC of 0.69. 
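The per-peptide AUROC averaging used above can be sketched as follows; the column names and the toy data are assumptions.

```python
"""Sketch of per-peptide AUROC averaging: the score is computed separately for
each peptide and then averaged, so peptides with many TCRs do not dominate."""
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score

def per_peptide_auroc(df: pd.DataFrame) -> float:
    # df columns (assumed): 'peptide', 'label' (0/1), 'score' (model output)
    aucs = []
    for pep, grp in df.groupby("peptide"):
        if grp["label"].nunique() < 2:
            continue  # AUROC is undefined without both classes for this peptide
        aucs.append(roc_auc_score(grp["label"], grp["score"]))
    return float(np.mean(aucs)) if aucs else float("nan")

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    toy = pd.DataFrame({
        "peptide": ["GILGFVFTL"] * 200 + ["NLVPMVATV"] * 20,  # unbalanced on purpose
        "label": rng.integers(0, 2, 220),
        "score": rng.random(220),
    })
    print(per_peptide_auroc(toy))
```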
We performed additional validation by using a different metric-namely average precision (AP). The AP on the cancer dataset for BERTrand is 0.55, which is better than all four benchmarks. See the Average precision section in the Supplementary Material for these results and the discussion on the metric choice. Table 3 summarizes the effect of the two most computationally intensive steps of the pipeline-NLP pre-training (12 days using 4 NVIDIA A100 GPUs) and outliers filtering (7 days using 128 CPUs). More detailed results on the convergence of BERTrand without pre-training can be found in the NLP pre-training section of the Supplementary Material. We believe that the biggest challenge in peptide:TCR binding prediction is peptide bias due to low diversity of the peptides available. It is obvious that neither the peptide space (i.e. all possible peptides to be bound by TCRs) nor the TCR space (all possible TCRs) is sufficiently explored in the published data. NetTCR2.0 uses CNNs to predict the binding, which are very susceptible to overfitting when an obvious bias, such as a very popular peptide in the training dataset, is present. ERGO overcomes the TCR diversity problem by applying unsupervised TCR pre-training, but the peptide bias remains unaddressed. TITAN also uses a pre-trained model, but it is pre-trained on general protein-ligand interactions, which may be too different from the peptide:TCR distribution. The strategy of pre-training on a biologically plausible joint peptide:TCR distribution might help to address the peptide bias, as we have shown in this work. Language-based models can be successfully applied to the peptide:TCR binding problem outperforming other state-of-the-art methods, as highlighted in the results of our tests. Conclusions Cross-peptide generalization in peptide:TCR binding prediction remains a hard problem. However, results presented in this work demonstrate that peptide:TCR binding is predictable beyond known peptide targets. We believe the biggest obstacle in this field is data availability. Recent advances in single-cell TCR sequencing allowed for producing more experimental data, so we hope this work will encourage future peptide:TCR binding experiments and in the end, allow predictive models to become very useful for researchers. Potential applications of the peptide:TCR prediction model include the design of off-the-shelf TCR-based therapies for cancer (e.g. TCR-engineered T-cell therapies, TCR-mimicking antibodies, and TCR bispecific antibodies), the development of de-immunization strategies for autoimmune diseases, and the selection of optimal candidates for antiviral vaccine design. Even a model with a limited predictive power can already represent a very useful tool for the optimization of in vitro experiments (e.g. reducing the experimental time and costs associated with the typical experimental testing of a large number of putative non-prioritized TCR candidates). An important limitation of this work is the lack of CDR3a due to low availability of those sequences in databases. Although CDR3a sequence has been reported to be important for peptide:TCR binding prediction (Sidhom et al. 2021), it is only available for <18% of observations. BERTrand architecture can be easily adapted to CDR3a-annotated data in the future, when more such data are available. Due to the low MHC diversity in the data, the aspects of the interactions between the MHC and the TCR were also omitted. 
However, we are confident that the growing collective effort we are witnessing in this field will eventually lead to populating databases with large amounts of fully annotated data. We believe this will open new doors for a holistic solution to the pMHC:TCR binding prediction problem.

Figure 4 (caption, partial). (…) Results for the cross-validation test set without outliers filtering demonstrate the potential metrics inflation due to outliers. (C) Results for the independent cancer set. BERTrand demonstrates better performance than the other models. (D) Results for the independent cancer set without outliers filtering are inflated and again demonstrate the importance of outliers filtering. (E) Results for individual peptides in the independent cancer set. Note that the confidence interval of the mean AUROC is wider as it includes high per-peptide variation.

Table 3. Comparison of the results for the independent cancer set without the most computationally intensive steps.
Condition               AUROC   Explanation
Baseline                0.69    All steps in the pipeline
No pre-training         0.62    Poor results due to the large number of weights trained from scratch
No outliers filtering   0.73    Inflated results due to out-of-distribution reference TCRs
6,178.2
2023-06-13T00:00:00.000
[ "Biology", "Computer Science" ]
Demodulation Method for Loran-C at Low SNR Based on Envelope Correlation–Phase Detection Loran-C is the most important backup and supplement system for the global navigation satellite system (GNSS). However, existing Loran-C demodulation methods are easily affected by noise and skywave interference (SWI). Therefore, this article proposes a demodulation method based on Loran-C pulse envelope correlation–phase detection (EC–PD), in which EC has two implementation schemes, namely moving average-cross correlation and matched correlation, to reduce the effects of noise and SWI. The mathematical models of the EC, calculation of the signal-to-noise ratio (SNR) gain, and selection of the EC schemes are given. The simulation results show that compared with an existing method, the proposed method has clear advantages: (1) The demodulation SNR threshold under Gaussian channel is only −2 dB, a reduction of 12.5 dB; (2) The probability of the demodulated SNR threshold, being less than zero under the SWI environment, can reach 0.78, a 26-fold increase. The test results show that the average data availability of the proposed method is 3.3 times higher than that of the existing method. Thus, our demodulation method has higher engineering application value. This will improve the performance of the modern Loran-C system, making it a more reliable backup for the GNSS. Introduction The positioning, navigation, and timing (PNT) system is the key infrastructure in any country considering national economy and security. It provides PNT services for military, commercial, and civil users worldwide [1,2]. One of the high-precision ground-based PNT systems is Loran-C, which has advantages such as long-distance propagation, low frequency, high power, and goodstability [3][4][5][6]. These features make the Loran-C an ideal backup system for the global navigation satellite system (GNSS) in PNT applications [7][8][9][10][11], especially when GNSS signals are rejected or interfered. When using the signal transmitted from a Loran-C system to implement a timing function, the receiver must determine the time deviation (TD) between the local time and the standard time. The TD is composed of the time of arrival (TOA) and broadcast time (BT) of the signal. The TOA refers to the absolute propagation time of the current Loran-C pulse group signal from the Loran-C station to the current position of the receiver. The BT, which is obtained through data demodulation, refers to the time interval between the start time of the current Loran-C pulse group signal and the standard time. Currently, the international standard Loran-C signal system typically uses EUROFIX technology for data dissemination [12,13]. The EUROFIX datalink is implemented by an additional three-level pulse position modulation (PPM) of the Loran-C pulses. The above-mentioned Loran-C signal system is adopted in China's BPL long wave time service system and Changhe 2 navigation system [14,15]. The signal-to-noise ratio (SNR) required for Loran-C data demodulation is higher than that for signal acquisition and detection. Therefore, the demodulation performance determines the timing capability of the Loran-C receiver. In recent years, studies on Loran-C signal receiving methods have mainly focused on ways of enhancing the accuracy of TOA measurements, such as by signal acquisition and detection [16][17][18], skywave identification [19][20][21][22], cycle identification [23][24][25], and additional secondary phase factor correction [26][27][28]. 
However, little attention has been paid to enhancing the demodulation performance of the PPM. The basic method used involves converting the PPM into phase modulation [14,29,30], i.e., the demodulation of the PPM is converted to the detection of the phase of the Loran-C pulse envelope. Based on the basic method, an envelope phase detection-majority decision (EPD-MD) method has been proposed in [14]. This method uses multiple phases of the orthogonal envelope to determine the modulation polarity via majority decision to improve the demodulation performance; however, the performance deteriorates sharply at low SNRs. In [29,30], a demodulation method based on signal matching correlation-pulse position detection (SMC-PPD) has been proposed. Since the matching correlation peak of the Loran-C pulse signal is not sharp, the SMC-PPD method cannot significantly improve the SNR performance. In addition, when there is skywave interference (SWI) in the received signal, the SMC-PPD method cannot detect the position of the Loran-C pulse signal correctly. Therefore, this article proposes an envelope correlation-phase detection (EC-PD) method to demodulate the Loran-C signal at low SNRs. In this method, two EC schemes, namely moving average-cross correlation (MA-CC) and matched correlation (MC), are used to reduce the effects of noise and SWI, thus significantly improving the timing capability and sensitivity of the Loran-C receiver. Basic Principle of PPM The Loran-C signal has been formally defined by the United States Coast Guard (USCG) as a sequence of pulses in the radio frequency (RF) energy range with a central frequency of 100 kHz [31]. The definition of a single Loran-C pulse can be found in [31]. The first pulse in the Loran-C pulse group is called the reference pulse, as shown in Figure 1. The EUROFIX modulation scheme uses the last six pulses of the Loran-C pulse group. These pulses are pulse position modulated by ±1 µs (1 µs advance, a prompt, or 1 µs delay) [13], as shown in Figure 2. To minimize the impact to users, PPM encoding uses 128 of the possible 141 balanced patterns to represent seven bits of data per group repetition interval (GRI). In this article, pulses with modulated data in the Loran-C pulse group are called data pulses in short. Envelope Model of Loran-C Pulse Let A(n) denote the normalized envelope of the reference pulse, as shown in Figure 1, where n = 0, 1, · · · N − 1 represents the sampling time. In this study, the sampling rate of the system is assumed to be 1 MHz (i.e., the sampling interval is 1 µs). The PPM time shifts are equivalent to a phase shift. Since a Loran-C signal has a period of 10 µs, a 1 µs advance is equivalent to a-π/5 radian shift, a 1 µs delay is equivalent to a π/5 radian shift, and no time shift is equivalent to a zero radian shift. Typically, the Loran-C receiver receives a mixed signal containing SWI, groundwave signal, and noise [19][20][21]. The skywave-to-groundwave amplitude ratio is represented by λ, and the delay of the SWI is represented by τ. 
Let m ∈ {1, 2, 3, . . . , 8} denote the index of the m-th Loran-C pulse envelope in the Loran-C pulse group; the mathematical model of the m-th received Loran-C pulse envelope can then be expressed as:

r_m(n) = V_op B(n) e^{i(θ_m + φ_m + ϕ_0)} + w_m(n),
B(n) = e^{iψ_n} sqrt( A²(n) + 2λA(n)A(n − τ) cos(τπ/5) + λ²A²(n − τ) ),

where V_op is the amplitude, B(n) is the Loran-C pulse envelope with the SWI, w_m(n) is the white Gaussian noise with zero mean and variance σ²_w, θ_m is the modulation phase (θ_1 is equal to 0), φ_m is the phase code (0 or π, and φ_1 is equal to 0), and ϕ_0 is the initial phase of the carrier. To reduce the complexity of demodulation, and considering that the energy of a Loran-C pulse signal is mainly concentrated near the envelope peak, we take the duration of the Loran-C pulse signal as 200 µs, i.e., N is equal to 200. According to [20], τ ≥ 35 µs. Therefore, the range of τ discussed in this article is 35-200 µs. In addition, to distinguish the data pulses from the reference pulse, the subscript k is used for the data pulses, where k ∈ {3, 4, . . . , 8}. Description of the PPM Demodulation Method In this study, the EC-PD method is used for low-SNR demodulation of the PPM, in which the EC has two implementation schemes: MA-CC and MC. Before the demodulation, the receiver needs to complete the acquisition of the Loran-C pulse signal, which is used to determine the starting positions of the reference and data pulses. Moreover, it needs to identify the skywave to obtain the estimated values of λ and τ. Figure 3 shows the flow diagram of the demodulation method. The first step in the demodulation process is to store the envelope sampling data (ESD) of the reference and data pulses; the storage depth is N. The stored ESD of the reference and data pulses are represented by the column vectors R_1 and R_k, respectively. In scheme 1, the MA technique is first used to process the stored ESD of the reference and data pulses, and the results are recorded as Y_1 and Y_k, respectively. Subsequently, c_k = Y_1^H Y_k is obtained through the CC between Y_1 and Y_k. In scheme 2, the stored ESD are matched-correlated with the reference envelope A, giving d_1 = A^H R_1 and d_k = A^H R_k. The scheme selector makes a selection between schemes 1 and 2 based on the estimated value of λ. When scheme 1 is selected, the selector outputs c_k; otherwise, it outputs d_1 and d_k. The scheme selection strategy is given in Section 2.3.4. In addition, if scheme 1 is selected, the phases introduced by the skywave and the carrier can be eliminated through the CC, and then a non-coherent demodulation is carried out. However, if scheme 2 is selected, a phase tracking loop is required to eliminate the phases introduced by the skywave and the carrier. The phase detection can be carried out using an inverse tangent function with return values in the interval [−π/2, π/2], which eliminates the phase code given that it has only two values, 0 or π. We record the phase detection result as θ̂_k, which can be expressed as

θ̂_k = atan( imag(z_k) / real(z_k) ),

where z_k is the selector output (c_k in scheme 1 or d_k in scheme 2), atan(·) is the inverse tangent function, and imag(·) and real(·) denote the imaginary and real parts of complex numbers, respectively. The demodulation judgment is then made based on the following rules: 1) if −π/10 ≤ θ̂_k ≤ π/10, the judgment is no time shift; 2) if θ̂_k > π/10, the judgment is a 1 µs delay; 3) if θ̂_k < −π/10, the judgment is a 1 µs advance. Let G denote the SNR gain obtained by the EC, expressed as G = SNR_out − SNR_in, where SNR_out is the selector output SNR, and SNR_in is the RF input SNR. Furthermore, let G_sche1 and G_sche2 denote the SNR gains of schemes 1 and 2, respectively. Evidently, G_sche1 and G_sche2 are not equivalent. 
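A small sketch of the phase-detection and judgment step is given below; it is an illustration of the rules above rather than the authors' receiver code, and the variable name z_k for the selector output is an assumption.

```python
"""Sketch of EC-PD phase detection and three-state PPM judgment."""
import numpy as np

def detect_ppm(z_k: complex) -> int:
    """Return the demodulated time shift in microseconds: -1, 0 or +1."""
    # atan(imag/real) has range (-pi/2, pi/2), which removes the 0/pi phase code.
    theta = np.arctan(z_k.imag / z_k.real)
    if theta > np.pi / 10:
        return +1          # 1 us delay   (modulation phase near +pi/5)
    if theta < -np.pi / 10:
        return -1          # 1 us advance (modulation phase near -pi/5)
    return 0               # no time shift

if __name__ == "__main__":
    # The last test carries the pi phase code; the judgment is unaffected.
    for phase in (-np.pi / 5, 0.0, np.pi / 5, np.pi + np.pi / 5):
        print(round(phase, 3), "->", detect_ppm(np.exp(1j * phase)))
```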
The SNR gain is one of the key parameters used to evaluate the performance of the EC-PD method. Therefore, the calculation and analysis of the SNR gain will be focused upon next. Mathematical Model of the EC This section presents the mathematical models of the EC schemes, laying a foundation for the calculation and analysis of the SNR gain. First, we define an N × N real symmetric moving average matrix, denoted by Q, which is determined by N and R, where R is the moving window radius of the MA; its elements are Q_ij = 1/(2R + 1) for |i − j| ≤ R and 0 otherwise. For example, when N = 5 and R = 2, Q can be expressed as follows:

Q = (1/5) ×
[ 1 1 1 0 0 ]
[ 1 1 1 1 0 ]
[ 1 1 1 1 1 ]
[ 0 1 1 1 1 ]
[ 0 0 1 1 1 ]

In other words, when the stored ESD are processed by the MA, it is equivalent to adding R zeros before and after the stored ESD and then replacing each value with the average of the 2R + 1 points centred on it. Thus, c_k in scheme 1 can be further expressed as:

c_k = V_op² e^{i(θ_k + φ_k)} B^H Q² B + I_k,    (5)

where I_k is the noise introduced by the CC, comprising the signal-noise and noise-noise cross terms (Equation (6)). Evidently, B^H Q² B in Equation (5) does not contain any phase information. Therefore, scheme 1 can realize non-coherent demodulation without an additional carrier phase tracking loop. In scheme 2, d_1 = A^H R_1, and using Equation (2), d_1 can be rewritten in terms of f_ph(λ, τ) and f_am(λ, τ), which represent the phase and amplitude of A^H B, respectively (Equation (7)). According to Equation (7), when there is skywave interference in the received signal, the MC will introduce an interference phase, which affects the detection of the modulation phase. Therefore, in the demodulation process of scheme 2, it is necessary to use the carrier phase tracking loop to ensure that ϕ_0 + f_ph(λ, τ) approaches zero. In this study, we assume ϕ_0 + f_ph(λ, τ) = 0; thus, only the modulation phase θ_k and the phase code φ_k remain in the phase of d_k. SNR Gain From Equation (6), we can easily prove that the noise terms in I_k are independent of each other. Therefore, the mean of I_k is equal to zero, and its variance σ² can be expressed in terms of q, the trace of the matrix Q⁴ (Equation (9)). According to Equations (5) and (9), the output SNR of scheme 1 can be expressed as:

SNR_out = 10 log₁₀( V_op⁴ α² / σ² ),

where α = B^H Q² B and β = B^H Q⁴ B. Since SNR_in = 10 log₁₀( V_op² / σ²_w ), the equation for calculating the SNR gain of scheme 1 can be derived (Equation (11)). This equation indicates that, in the case of low SNR, the SNR gain of scheme 1 can be increased by reducing q. A simulation is carried out to determine the value of R. In this simulation, we set SNR_in = 0 dB. Figure 4 shows the result. As shown, when R = 0 (i.e., the MA is not adopted), G_sche1 is only 12.72 dB, whereas when R = 23, G_sche1 reaches the maximum value of 16.06 dB. When R ≥ 23, q is small enough that the effect of SNR_in on G_sche1 can be ignored. However, with a further increase in R, the loss in signal energy due to the MA becomes evident, thereby reducing G_sche1. To sum up, we set the moving window radius of the MA to 23. According to Equation (7), and from the above analysis, the equation for calculating the SNR gain of scheme 2 can be obtained in the same way (Equation (12)). Selection of EC Schemes The magnitude of the SNR gain is an important basis for selecting the EC schemes. Evidently, when there is no SWI, G_sche2 is at least 3 dB higher than G_sche1. Thus, scheme 2 is the best scheme under a Gaussian channel. However, in the presence of SWI, the selection of the EC schemes is more complicated. In Figure 5, the relationship between the SNR gain and the delay of the SWI is simulated for the case of λ = 1.6 dB. The SNR gain fluctuates with the change in the delay. 
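The moving-average matrix Q and the quantities α = B^H Q² B, β = B^H Q⁴ B and q = tr(Q⁴) defined above can be computed as in the following sketch; the envelope used in the demo is only an approximation of the Loran-C pulse shape, and the code is illustrative rather than the authors' implementation.

```python
"""Sketch of the moving-average matrix Q and the EC quantities alpha, beta, q."""
import numpy as np

def ma_matrix(N: int, R: int) -> np.ndarray:
    # Q_ij = 1/(2R+1) when |i - j| <= R, else 0 (zero-padded moving average).
    idx = np.arange(N)
    return (np.abs(idx[:, None] - idx[None, :]) <= R) / (2 * R + 1)

def ec_quantities(B: np.ndarray, R: int):
    Q = ma_matrix(len(B), R)
    Q2 = Q @ Q
    Q4 = Q2 @ Q2
    alpha = np.real(np.conj(B) @ Q2 @ B)   # B^H Q^2 B
    beta = np.real(np.conj(B) @ Q4 @ B)    # B^H Q^4 B
    q = np.trace(Q4)                       # trace of Q^4
    return alpha, beta, q

if __name__ == "__main__":
    n = np.arange(200)                                   # 1 MHz sampling, 200 us window
    B = (n / 65.0) ** 2 * np.exp(2 - 2 * n / 65.0)       # approximate pulse envelope (peak at 65 us)
    for R in (0, 10, 23, 40):                            # R = 0 means no moving average
        print(R, ec_quantities(B, R))
```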
This is mainly because when the delay is an odd multiple of 5 µs, the coincidence part of the skywave and the groundwave will cancel each other, and the SNR gain will be reduced. On the contrary, when the delay is an even multiple of 5 µs, the coincidence part of the skywave and the groundwave will overlap each other, which is conducive to the SNR gain. In addition, when the coincidence part of the skywave and the groundwave cancel each other, A(n) and B(n) will be seriously mismatched, which will lead to a sharp deterioration in the SNR gain of scheme 2, whereas scheme 1 will not cause mismatching owing to its CC. Therefore, in the extreme case, the deterioration degree of the SNR gain of scheme 1 is significantly less than that of scheme 2. In other words, the robustness of scheme 1 under a CWI environment is significantly better than that of scheme 2. There are two strategies, namely strategy A and strategy B, for selecting the EC schemes. In strategy A, the receiver needs to obtainλ andτ through skywave identification, calculate G sche1 and G sche2 using Equations (11) and (12), respectively, and finally select the EC scheme with a high SNR gain. In strategy B, the receiver comparesλ with the threshold value λ thred ; whenλ ≥ λ thred , scheme 1 is selected; otherwise, scheme 2 is selected. In strategy A, the receiver is required to estimate λ and τ with a very high accuracy; otherwise, it will cause a large deviation between the calculation result of the SNR gain and the real value, thus making the scheme selection invalid. In strategy B, the receiver only needs to estimate λ, which is relatively simple to implement. Therefore, this strategy is recommended for selecting the EC schemes in this study. We present a method to determine λ thred based on the minimum SNR gain, where the minimum SNR gain refers to the minimum value that the SNR gain can reach when the skywave-to-groundwave amplitude ratio is λ. According to Equations (11) and (12), the minimum SNR gain of the two schemes can be obtained by simulation, as shown in Figure 6. The simulation results show that when λ ≥ −2.3 dB, the minimum SNR of scheme 1 is greater than that of scheme 2; otherwise, the minimum SNR of scheme 1 is less than that of scheme 2. Therefore, λ thred is set to −2.3 dB in this study. This method can overcome the problem where the SNR ratio gain deteriorates rapidly in extreme cases, thereby improving the robustness and stability of anti-SWI demodulation. Validation Method We verified the effectiveness of the demodulation method from three aspects. In Section 3.2, the simulation results of the error probability under Gaussian channel for the EC-PD, EPD-MD, and basic methods are given. In Section 3.3, we consider the influence of the SWI on the demodulation to further analyze and compare the demodulation performances of the EC-PD and EPD-MD methods. In Section 3.4, an experimental verification platform set up to receive and demodulate an actual Loran-C signal is presented, and the demodulation performance of the EC-PD method is verified. Figure 7 shows the simulation results of the error probability under Gaussian channel. When the SNR in is greater than 8 dB, the demodulation performance of the EPD-MD method is better than that of the basic method, thus demonstrating the effectiveness of the EPD-MD method at high SNRs. However, with the decrease in the SNR in , its demodulation performance deteriorates sharply. 
The demodulation performance of the EC-PD method is significantly better than that of the EPD-MD and basic methods. For example, when the SNR_in is in the range of −9 to 0 dB, the error probability of the EC-PD method is lower than that of the EPD-MD method by one to four orders of magnitude. In addition, we take the SNR threshold as one of the quality parameters to compare the demodulation performances of the above three methods. The SNR threshold mentioned in this article refers to the minimum SNR_in required to make the error probability no more than 10⁻³, and is recorded as SNR_th. The simulation results show that the SNR_th of the EC-PD method is only −2 dB, which is 12.5 and 19.5 dB lower than that of the EPD-MD and basic methods, respectively. Anti-SWI Performance In this section, the SNR_th is used to simulate and compare the anti-SWI performances of the EC-PD and EPD-MD methods. As demonstrated in Section 2.3.4, the SWI is most unfavorable for demodulation when τ = 35 µs and most favorable when τ = 40 µs. Therefore, in the simulation, τ is assigned a range of 35-40 µs with a resolution of 0.1 µs. Moreover, the SNR_in is assigned a range of −10 to 10 dB with a resolution of 0.1 dB. Figure 8 shows the simulation results. In Figure 8, it can be observed that: (1) The range of SNR_th required for the EPD-MD method is −2 to 19 dB, and the dynamic range is 21 dB; (2) The range of SNR_th required for the EC-PD method is −11 to 7 dB, and the dynamic range is 18 dB. Compared with the EPD-MD method, the robustness (represented by the maximum SNR_th) of the anti-SWI demodulation of the EC-PD method is improved by 14 dB, and the stability (represented by the dynamic range) is improved by 3 dB. Furthermore, based on the above simulation data, we determined the statistical characteristics of the SNR threshold, represented by its probability distribution, as shown in Figure 9. In Figure 9, it can be observed that: (1) The probabilities of SNR_th being less than 0 dB for the EPD-MD and EC-PD methods are 0.03 and 0.78, respectively; (2) The average SNR_th values required for the EPD-MD and EC-PD methods are 7.54 and −2.91 dB, respectively. The EC-PD method can thus still achieve data demodulation at low SNRs under the SWI environment. Experimental Verification We test the data demodulation method with signals transmitted by a real Loran-C system. The received signal is the Loran-C signal (the GRI is 74.30 ms) emitted by the main station (station ID 09) of Shandong Rongcheng. This station belongs to China's Changhe 2 navigation system. The test platform is placed at the National Time Service Center of China, 1227.1 km away from the station. Figure 10 shows the test platform. The following is a description of the test platform: (1). In the RF signal processing, the input Loran-C signal is sampled by an analogue-to-digital converter and filtered by an adaptive notch filter and a finite impulse response band-pass filter, thus obtaining the digital signal. (2). The complex envelope of the Loran-C pulse is obtained through orthogonal down-conversion. (3). The baseband signal processing includes signal acquisition [16], carrier phase tracking, and skywave identification [20]. The signal acquisition step provides the starting positions of the reference and data pulses, and the skywave identification provides the estimated value of λ. (4). The EC-PD and EPD-MD methods are alternately selected for signal demodulation every half an hour. (5). The experimental data are composed of message frames, as shown in Figure 11. 
The serial port outputs one message frame to the PC every second, including $test ID, method ID ("0" refers to the EPD-MD method, and "1" refers to the EC-PD method), experimental period, number of correct message frames in each experimental period, message type, message subtype, station ID, time code 1 (yyyy:mm:dd), time code 2 (hh:mm:ss), precise time information (ms:µs:10ns), broadcasting deviation, and leap second. The correctness of each message frame is examined by Reed-Solomon (RS) decoding and a cyclic redundancy check (CRC). Figure 11. Screenshot of some experimental data. Figure 12 shows the number of correct message frames of the two demodulation methods in each experimental period. As shown, the demodulation performance of the EPD-MD method is similar to that of the EC-PD method in only a few experimental periods, whereas in most experimental periods the demodulation performance of the EC-PD method is significantly better than that of the EPD-MD method. In the next step, we define two variables, namely the maximum data availability η_max and the average data availability η_avg, to compare the effectiveness of the two demodulation methods in detail, where CMF_max is the maximum number of correct message frames in a certain experimental period, EP = 3600 s is the duration of one experimental period, F_t = 30 × GRI is the duration of one message frame, CMF_all is the total number of correct message frames during the entire experimental time, and NP = 120 is the total number of experimental periods. Since the GRI of the received Loran-C signal is 74.30 ms, we have F_t = 2.229 s. From the experimental data: (1) The CMF_max values of the EC-PD and EPD-MD methods are 396 and 247, respectively, and their η_max values are calculated to be 49% and 30.6%, respectively; (2) The CMF_all values of the EC-PD and EPD-MD methods are 10526 and 3190, respectively, obtained by summing up the number of correct message frames in each experimental period; the η_avg values of the EPD-MD and EC-PD methods are 0.9% and 3.3%, respectively. The calculation results show that, compared with the EPD-MD method, the η_max and η_avg values of the EC-PD method are increased by approximately 1.6 and 3.3 times, respectively. Discussion The realization and application of the Loran-C system data link technology can make it possible to build a relatively complete PNT system in combination with the satellite-based PNT system. However, the existing demodulation method used in the Loran-C system cannot effectively suppress noise and SWI. Therefore, with the development of modern Loran-C systems, a more advanced Loran-C signal processing capability is required. In this study, we developed a Loran-C demodulation method for low SNRs based on the EC-PD, where the EC includes two schemes: MA-CC and MC. The mathematical models of the MA-CC and MC, the calculation of the SNR gain, and the selection of the EC schemes based on the skywave identification results were described in detail. The theoretical analysis results showed that the MA-CC is more suitable for scenarios with SWI, whereas the MC is more suitable for scenarios with only noise. Therefore, the combination of the MA-CC and MC can effectively reduce the effects of noise and SWI on the demodulation process. In addition, a simulation was conducted to verify the effectiveness of the demodulation method and analyze its anti-noise and anti-SWI performances. 
The simulation results showed that compared with the existing method in [14], the proposed method has clear advantages: (1) The demodulation SNR threshold under Gaussian channel is only −2 dB, which represents a reduction of 12.5 dB by comparison; (2) The probability of the demodulated SNR threshold being less than zero under the SWI environment can reach 0.78, which is a 26-fold increase by comparison, and the robustness and stability of anti-SWI demodulation are improved by 14 and 3 dB, respectively. Finally, we set up an experimental verification platform that can receive and demodulate an actual Loran-C signal. The test results showed that the average data availability of our demodulation method is 3.3 times higher than that of the method proposed in [14]. Thus, our demodulation method has a higher engineering application value, and has been optimized for the design of new Loran-C timing receivers. This will improve the performance of modern Loran-C systems, making them a more reliable backup for the GNSS. The EC technology proposed in this article has a very low implementation complexity compared with some techniques, such as singular value decomposition [32,33] and wavelet transform [34,35], which involve a lot of complex multiplication operations to improve the SNR. The combination of EC and the above technologies could be an effective way to further improve the Loran-C data demodulation performance in the future.
6,144.6
2020-08-01T00:00:00.000
[ "Computer Science" ]
An investigation of the nature of a Pc5 pulsation event using SuperDARN and magnetometer data Pc5 pulsations are global magnetohydrodynamic events in the magnetosphere. We employed an Automated Pulsation Finder program to identify significant Pc5 pulsation events in SuperDARN radar data. The event presented here was visible in the Goose Bay, Saskatoon and Þykkvibaer high-frequency radars, located in the northern polar region. These observations were coordinated with magnetometers within their field of view. These two instrument types – radars and magnetometers – complement each other. These observations represent a significant fraction of the globe in longitude. Pulsation studies of this nature are rare in the literature. Combining these two instrument types, we investigated the nature of the pulsation and determined its qualitative polarisation characteristics. A complex demodulation technique was employed to determine amplitude and phase relationships between field components observed by the radars and magnetometers, which, in turn, afforded resolution of other characteristics of pulsations, such as wave number and phase velocity. The results are discussed in the context of the magnetohydrodynamic theory of magnetic pulsations, speculating on its generation mechanism. Investigation of this mechanism will be the subject of a future publication. Introduction Ultra-low frequency (ULF) hydromagnetic waves, often referred to as geomagnetic pulsations, have been observed for many years in magnetometer data as well as by very high frequency (VHF) and high-frequency (HF) radars, and are endemic within the magnetosphere. A subset of these waves, pulsations in the Pc5 band (1)(2)(3)(4)(5), are global magnetohydrodynamic events in the magnetosphere. Numerous papers have reported the observation of Pc5 pulsations with power at discrete and stable frequencies, the most commonly occurring being at frequencies of 1.3 mHz, 1.9 mHz, 2.6 mHz and 3.3-3.4 mHz. [1][2][3][4][5] The quantisation of the resonance frequencies is predicted by the cavity mode theory which was first developed by Kivelson and Southwood 1 and later modified into waveguide theory 1,6 to account for the azimuthal propagation of the compressional mode. However, Ziesolleck and McDiarmid 7 showed that the waveguide/cavity mode frequencies do not necessarily represent a unique set of frequencies. In order to understand the stability and reproducibility of these frequencies, many authors have focused on investigating possible excitation mechanisms. 4,[8][9][10][11] A field line resonance is essentially a standing shear Alfvén wave on a magnetic field line between the two conducting ionospheres. 12 Field line resonances can arise from an external influence in the solar wind, such as Kelvin-Helmholtz instability, or from an abrupt change in solar wind dynamic pressure, coherent waves in solar wind, which in turn can excite field lines inside the magnetosphere into compressional oscillation. 13 The field lines on the L-shell with the same characteristic frequency will resonate and large oscillations will be set up over a narrow range of latitudes. Southwood 14 and Chen and Hasegawa 15 predicted that an enhancement in amplitude at the resonance L-shell is accompanied by a phase change of approximately 180°. This behaviour is the result of the fact that L-shells nearer the earth have higher frequencies than the driver and lead it in phase, whereas those that are near magnetopause have lower frequencies than the driver and lag it in phase. 
Walker et al. 16 provided compelling evidence in favour of this theory by using STARE radar data to plot the amplitude and phase of an electric field. Field line resonances generated by all these mechanisms tend to have small azimuthal wave numbers, m. The generation mechanism remains in debate. All the mechanisms mentioned above have been shown to be present. Another type of pulsation in the Pc5 band can be generated by drift-bounce resonance with energetic ring current particles. These are often compressional in nature and tend to have larger values of m. The drift-bounce resonance occurs when a particle drifts in one bounce period by an integral number of azimuthal wavelengths, Doppler shifted by the azimuthal phase velocity of the wave. The high-energy particles which constitute the bulk of the ring current may be responsible for exciting compressional Pc5 through drift-bounce resonance. 17,18 The resonant particle has a drift path which oscillates between L-shells at the same characteristic frequency as the oscillations in perpendicular, parallel and total particle kinetic energy. The drift-bounce resonant interaction of the energetic ring current particle with Alfvén waves has been proposed as a possible excitation mechanism. 19 Magnetometers and HF radars are synergistic instrument types. Magnetometers have high temporal resolution whilst HF radars have good spatial resolution. Furthermore, ionospheric conditions not conducive to observing pulsations in radar data are favourable for observations in magnetometer data, and vice versa. In this study, we used three SuperDARN HF radars together with ground-based magnetometers to study a Pc5 pulsation event. SuperDARN is an international project of which the South African radar at SANAE, Antarctica forms a part. Our emphasis in this study was the determination of the physical character of the Pc5 pulsation, using HF radar and magnetometer ground-based instruments. An event was identified by using the pulsation finder to find a significant pulsation in one HF radar and then proceeding to search for a similar resonance in other radars and magnetometers at similar magnetic latitudes. Instrumentation We focused on a Pc5 pulsation event occurring between 20:00 and 22:00 UT on 6 October 2006 observed by the Saskatoon, Goose Bay, and Þykkvibaer HF radars. These three radars are part of the SuperDARN international network of HF radars that monitor ionospheric plasma convection over an extensive area of mid-and high latitudes in both hemispheres. The HF radars of the system each operate in a frequency range of 8-20 MHz, and use an electronically phased antenna array to sweep the beam through successive positions with azimuthal separation of 3.24°. In full scan mode, a radar runs through a 16-beam scan with a dwell time of between 3 s and 7 s (depending on the radar), which gives a full 16-beam scan that covers 52° in azimuth once every 1 min or 2 min, respectively. For each beam, the backscatter power, line-ofsight Doppler velocity, and spectral width are gated in up to 75 cells. These are 45 km long in standard operation extending from an initial range of 180 km. The spatial coverage of an HF radar is up to 2000 × 2000 km 2 . HF radars operate by utilising coherent scatter from fieldaligned irregularities of electron density in the E-and F-regions of the ionosphere. They 'see' the electric field perturbation associated with the pulsation and not the magnetic field. 
This distinction has implications for which kind of pulsations (compressional or Alfvén) are seen by the different beams. Their spatial coverage makes them ideal tools for resolving pulsation resonance structures. The ULF oscillations in these ionospheric regions are observable in the line-of-sight Doppler velocity, with the magnitude of the measured flow oscillations being dependent upon the direction of the flow oscillations relative to the beam direction. Pc5 oscillations in the F-region are therefore visible as alternating bands of negative and positive Doppler velocities. The F-region has previously been shown to be associated with Pc5 field line resonance. 2,20,21 The magnetometer data were obtained from the Greenland Magneto meter Array 22 , the Canadian Array for Realtime Investigations of Magnetic Activity (CARISMA) 23 and the International Monitor for Auroral Geomagnetic Effects (IMAGE) 24 . The Greenland, CARISMA and IMAGE arrays cover polar cap, cusp and auroral regions. The large latitudinal coverage allows features such as the phase change across resonance and the amplitude peak of the wave to be observed. Each station is equipped with a three-component ring core fluxgate magnetometer, which records the geographical (X, Y and Z) coordinates, which are rotated into geomagnetic (H, D and Z) coordinates before analysis. The data are sampled at a rate of 20 s, 1 s and 10 s, respectively. These magnetometer arrays give exceptionally high quality data and they all have a good temporal resolution. High-frequency radars We analysed in detail a Pc5 pulsation event, which was observed between 20:00 and 22:00 UT on 6 October 2006. The investigation started by using an Automated Pulsation Finder program. 25 The Automated Pulsation Finder highlighted a significant peak in beam 10 and range gate 32 of the Goose Bay radar. Groundscatter was not excluded from the analysis as it has been shown that pulsations may be visible in such scatter. 26 Although the pulsation was initially detected in beam 10, we chose to analyse only those cells that were aligned along a specific magnetic latitude. Each record of data corresponding to a beam number and range gate to be processed by the Automated Pulsation Finder had an uneven sampling period, because of periods of no backscatter, and the original data record was not always suitable for pulsation analysis. The temporal resolution of the data used in the pulsation finder was 1 s. This higher time resolution had no impact on the frequencies of interest. The results of the Automated Pulsation Finder are shown in Figure 1a, which shows the line-of-sight Doppler velocity recorded at Goose Bay, filtered in the Pc5 band, and the corresponding power spectrum. The red line shows the level at which peaks in the spectrum are significant. The term 'significance detector' is used to describe a routine to process a set of data and then return those points that are either a significant part of a data group or not part of a group. The routine is as follows: • Clean the data by removing outliers by removing echoes with a spectral width greater than 150 m/s; the remaining velocity values should fill more than 80% of the 2-h cell record. • Interpolate the missing points using cubic spline interpolation. • If there is a peak in the whole or part of the spectrum, calculate the mean and standard deviation of all the amplitude values within the 1-5 mHz frequency. • Set the significance level at mean plus three standard deviations of the data. 
If any amplitude values in the 1-5-mHz range fall above the significance level, then the frequency that corresponds to the amplitude is considered significant. In our data, the detector recorded a significant resonance in the 2.2-2.5-mHz frequency band. For the other radars, we could not plot the latitude profile of amplitude and phase because there were not enough cells that were magnetically aligned or close to each other. Figure 3 shows a map of the locations of the three SuperDARN radars -Saskatoon, Goose Bay and Þykkvibaer -and the magnetometer stations of the CARISMA, Greenland and IMAGE arrays. These radars have a good spatial resolution over a large range of magnetic latitude and longitude. Once we identified the Pc5 ULF pulsation event in one radar (Goose Bay), we then proceeded to analyse all the beams and range gates of the Saskatoon and Þykkvibaer radars which were located within the same range of magnetic latitudes (i.e. within the solid red lines). The beams are numbered from west to east in the field of view of the radar. Figure 4 shows the time series of Doppler velocities for different beams and range gates at the magnetic latitudes of the Goose Bay radar that are of interest. The time series was analysed to obtain power spectra that are similar to the one observed using the Automated Pulsation Finder, as shown in Figure 5. The beam and range gate plots presented here are magnetic latitude aligned and showed peaks in the 2.2-2.5-mHz frequency band. The data used in these plots had a time resolution of 120 s, which limits the pulsation that can be observed to an upper frequency of 4.17 mHz. The investigation of the Pc5 pulsation event observed by the Automated Pulsation Finder was extended to other radars within the same magnetic latitudes. Velocity data from 20:00 to 22:00 UT from these two radars were passed through a Fourier analysis to obtain the power spectra. The results from the Saskatoon and Þykkvibaer radars are shown in Figures 6 and 7. Magnetometers Magnetometer stations that lie within the red lines shown in Figure 3 were chosen for more detailed analysis. The following magnetometer stations are located within or near the field of view of one of the three radars: Contwoyto (Saskatoon); Sukkertoppen/Maniitsok and Nuuk/Godthaab (Goose Bay); and Hornsund (Þykkvibaer). The Contwoyto station is in the same magnetic latitude range as the other stations but not in the field of view of the mentioned radars. Some other magnetometer stations within the field of view did not have data, while others did not have a significant peak in the 2.2-2.5-mHz band. Those magnetometer stations were not included in the study. The shading highlights peaks in the 2.2-2.5-mHz frequency band, which confirms the Pc5 pulsation event observed by these SuperDARN radars, as shown in Figure 8. The event observed using the Automated Pulsation Finder appears in other radars and magnetometer chains as a field line resonance, with an H-component amplitude peak and associated phase change across the resonant magnetic latitude. During this interval, the CARISMA, Greenland and IMAGE magnetometer networks were located in the magnetosphere in the evening and nightside sectors in local time: 16:00-18:00, 17:00-19:00 and 21:00-23:00, respectively. This allowed us to investigate the spatial behaviour of the wave. Field line resonances are toroidal mode waves with magnetospheric magnetic perturbations in the azimuthal direction.
Because of the rotational effect of the ionosphere, the large ionospheric perturbations on the ground are observed in the magnetic H-component. 27 Similarly, poloidal mode waves are characterised by magnetic perturbations in the radial direction, which translates to a large perturbation observed in the magnetic D-component in ground magnetometer data. However, no ULF wave is purely Alfvén in nature; there will always be an accompanying compressional mode component. When identifying field line resonance signatures, one must take into account both the amplitude of the H- or D-component and the associated phase change. Figure 8 shows the corresponding spectral power for the four magnetometer stations selected from CARISMA (Contwoyto), Greenland (Sukkertoppen/Maniitsok and Nuuk/Godthaab) and IMAGE (Hornsund) for the H-component only. The geographical and geomagnetic coordinates of these magnetometer stations are shown in Table 1. These magnetometer stations lie within a narrow magnetic latitude but cover an extensive range (nearly 180°) of longitude. Complex demodulation To analyse the instantaneous characteristics of the signal, we applied a complex demodulation technique to determine the analytic signal. 28 This allowed the examination of the variation with time of the instantaneous amplitude and phase of a selected frequency band. In the analysis of the resonance frequency band 2.2-2.5 mHz, the analytic signal was calculated from different beams and range gates of the radars in which a given field line resonance was maximum, as shown in Figure 9 for Goose Bay. Similar analyses of the Saskatoon and Þykkvibaer radars were performed, but are not shown here due to space constraints. The phase of the resonance was measured across the field of view at constant magnetic latitude, yielding the phase versus longitude relations shown in Figure 10. The azimuthal wave number is then m = Δφ/Δλ, where Δφ is the phase difference of the H-component between two stations and Δλ is their geomagnetic longitude difference. The coordinates for the relevant stations are provided in Table 1. Positive m-values represent waves with westward phase propagation while negative m-values represent waves with eastward phase propagation. An m-value of ~+11 represents a wave with westward phase propagation. The resonance frequency observed at the Goose Bay radar using the Automated Pulsation Finder also appears in the other radars, but not as the dominant peak; other beams and range gates in all three radars show significant peaks. The magnetometer stations that are within the field of view of the radars, and those at the magnetic latitudes of the radars, showed some peaks. These peaks show characteristic features of the field line resonances as resolved by the spatial resolution of the radars, as shown in Figure 2. The region of resonance is clearly visible as a narrow peak in amplitude at 73.7° magnetic latitude with the standard phase change with increasing latitude. Once again, the m number increases from west to east across the field of view; this finding is not surprising because a constant m-value would assume cylindrical symmetry. There is no reason to assume the wave would have this property. The azimuthal wave number m observed from the magnetometer stations is smaller than that observed from the SuperDARN radars. The difference could arise because, when the ionospheric distribution of Hall currents has a finite scale size in longitude, the azimuthal wave numbers measured by ground-based magnetometers differ from the values in the ionosphere. 29
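To make the phase-versus-longitude procedure concrete, the following minimal Python sketch extracts the instantaneous phase of two band-passed H-component records with a Hilbert transform and converts their phase difference into an azimuthal wave number m = Δφ/Δλ. The station records, cadence, band edges and the synthetic test values are placeholders for illustration, not data from this event, and the sign convention (positive m for westward propagation) should be checked against the longitude convention in use.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def pc5_bandpass(x, fs, f_lo=2.2e-3, f_hi=2.5e-3, order=2):
    """Zero-phase band-pass to the resonance band (frequencies in Hz)."""
    sos = butter(order, [f_lo, f_hi], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

def instantaneous_phase(x):
    """Unwrapped instantaneous phase (rad) of the analytic signal."""
    return np.unwrap(np.angle(hilbert(x)))

def azimuthal_wave_number(h1, h2, lon1_deg, lon2_deg):
    """m = (phase difference)/(longitude difference), both taken as station 1 minus station 2.
    With this convention a positive m corresponds to westward phase propagation."""
    dphi_deg = np.degrees(np.mean(instantaneous_phase(h1) - instantaneous_phase(h2)))
    return dphi_deg / (lon1_deg - lon2_deg)

# Synthetic check with placeholder values (1-s cadence, 2-h window, f0 in the 2.2-2.5 mHz band):
fs = 1.0
t = np.arange(0.0, 7200.0, 1.0 / fs)
f0, m_true, dlon = 2.3e-3, 11.0, 5.0
h_a = np.sin(2 * np.pi * f0 * t)                                # station at 0 deg longitude
h_b = np.sin(2 * np.pi * f0 * t + np.radians(m_true * dlon))    # station 5 deg to the east
print(azimuthal_wave_number(h_a, h_b, 0.0, dlon))               # ~ +11
```

Real radar or magnetometer records would first be cleaned, interpolated onto a regular grid and passed through pc5_bandpass before the phase is extracted.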
The azimuthal wave number is calculated using the phase versus longitude relation. Conclusions The Pc5 pulsation event investigated was a field line resonance. Analysis of the Goose Bay radar beam most aligned with the magnetic meridian demonstrated an amplitude peak over a narrow latitude range and an associated phase change across the resonance (Figure 2). Field line resonances tend to have lower azimuthal wave number (m) values and thus often have external sources. In addition, pulsations excited by external sources have phase velocities that are anti-sunward. However, in this instance, the field line resonance is sunward propagating with a relatively high azimuthal wave number. This may be more consistent with a drift resonance source, although a drift resonance generation mechanism is more commonly associated with a compressional oscillation. It has been shown that drift resonance can be associated with
4,034.8
2015-03-31T00:00:00.000
[ "Physics" ]
On a toroidal method to solve the sessile-drop oscillation problem Abstract We present a fully analytical solution for the natural oscillation of an inviscid sessile drop with small Bond number (surface tension dominates gravity) and a fixed contact line on a flat horizontal plate. The governing equations are expressed in terms of the toroidal coordinate system which yields solutions involving hypergeometric functions. Resonant frequencies are identified for zonal, sectoral and tesseral vibration modes. The predictions show excellent agreement with experimental data reported in the literature, particularly for flatter drops (lower θ c , but not so low as to incur significant viscous dissipation) and higher modes of vibration. While the free drop is generally assumed to be spherical, a sessile drop takes the form of a spherical cap when surface tension dominates gravity (i.e. √(γ/(ρg)) ≫ c, where γ is the surface tension, ρ is the density, g is the gravitational acceleration and c is the contact radius of the drop). To find natural frequencies of the latter, analytical models in the literature either converted the geometry to a simplified form (replacing the planar substrate by a spherical one, Strani & Sabetta 1984) or developed a solution using spherical coordinates (Bostwick & Steen 2014). Although the former approach leads to a highly simplified physical model, the latter requires hybrid analytical-numerical schemes: neither is therefore both suitably accurate and accessible for use by an experimentalist. Steen, Chang & Bostwick (2016) have presented a detailed account of the underlying physics and mechanics of this problem. The contact angle θ c (and shape) of the drop is established at static equilibrium by balancing the liquid-gas, liquid-solid and solid-gas interfacial tensions. The drop stability is determined by the behaviour of the contact line (CL), via its speed u CL . Stick-slip behaviour of the CL (Shaikeea et al. 2017) gives rise to hysteresis, which is captured using a CL model. In the 'Hocking condition' presented by Davis (1980), contact-angle deviations are expressed in the form Δθ c ∝ u CL , with a constant of proportionality Λ which quantifies the CL resistance. This phenomenological parameter characterises the CL mobility; Λ = 0 corresponds to a fully mobile CL and Λ = ∞ to a pinned CL. In the current work, the toroidal framework imposes the pinned CL condition on the problem (see § 2.4). We present here an analytical solution to this long-standing problem by using a toroidal coordinate system. The liquid-vapour and liquid-solid boundaries of a spherical cap, ∂D f and ∂D s (cf. figure 1a), correspond to a pair of β-coordinate curves in this system, where the boundary conditions can be directly expressed without any geometric conversions or complex computations. Solving the hydrodynamic equations in this framework requires the use of hypergeometric functions, which ultimately yield a fully analytical solution in the form of (2.18). The importance of choosing this framework to solve the sessile-drop evaporation problem was first demonstrated by Popov (2005) and we believe this is the first time it has been extended to the oscillating sessile drop. Bostwick & Steen (2014) (hereafter referred to as Bo-St) presented a hybrid analytical-numerical model which solves the same problem and employs inverse operators to find the solution. Theirs is the most comprehensive investigation of the sessile-drop oscillation problem to date.
They considered different types of vibration mode shapes, namely zonal, sectoral and tesseral, which were subsequently validated experimentally by Chang et al. (2015). In the current work, the resonant frequencies for the mode shapes discussed by Bo-St are calculated and compared with the experimental data reported in the literature. The purpose of this work is to show that our model, based on toroidal coordinates, yields fully analytical solutions for the case of an inviscid drop with fixed CL in the shape of a spherical cap. Stating the hydrodynamic equations with boundary conditions, we perform an eigenmode analysis to find the solution ( § 2). This model is then used to identify resonant frequencies for zonal, sectoral and tesseral vibration modes ( § 3). Its predictions are compared with experimental data reported in the literature. Future work and possible extensions of this model are discussed in § 4. Sessile-drop geometry The liquid-vapour interface of a sessile drop with contact angle θ c ∈ (0, π) can be expressed in toroidal coordinates as r = r(α, β, ϕ), where α varies along the surface ∂D f , β ∈ [0, π] is the angle subtended by the foci F 1 , F 2 on ∂D f and ϕ ∈ [0, 2π] varies in the azimuthal direction. (Figure 1 caption: (a) three-dimensional schematic of the toroidal coordinate system r = r(α, β, ϕ) overlaid on a sessile drop, based on Li, Kar & Kumar (2019); (b) diametral section of the drop with toroidal gridlines embedded into it, with β constant on a red circle and α constant on a purple circle; defining expressions for α and β are also displayed.) The equilibrium (base) state Γ of the drop can be defined as the coordinate surface β = β 0 (2.1). A small perturbation η (α, ϕ, t) on Γ (with the CL being fixed) leads to a competition between the drop's inertia and capillarity, and the resulting motion is oscillatory in nature (cf. figure 1b). These disturbances are often expressed in terms of dimensionless variables, where c is the drop contact radius and h α , h β and h ϕ are the scale factors of the toroidal system. Here the prime notation indicates that the variables are dimensional. In the following text, the prime notation will be dropped from dimensionless variables, except for the density ρ, surface tension γ and contact radius c of the drop. The scale factor quantifies the change in position of a point on changing one of its coordinates, so a Δα change in α (keeping the other coordinates constant) corresponds to a change in distance along the α direction of h α Δα (cf. figure 1a). Equations and boundary conditions The flow is assumed to be incompressible and irrotational. The velocity field v is described as v = ∇ψ, where the velocity potential ψ satisfies Laplace's equation in the drop domain D. The problem is closed by the no-penetration condition at the substrate ∂D s and a free-surface kinematic boundary condition at the interface ∂D f , where the normal velocity is set equal to the time derivative of the perturbation. For an inviscid fluid, applying linear wave theory, the pressure field is described by the momentum equation (2.6). A small disturbance η to the equilibrium surface Γ causes a deviation from the initially spherical shape, which is described by the modified Laplace equation, in which k 1 , k 2 are the principal curvatures and Δ T is the Laplace-Beltrami operator; definitions are given in Appendix A. In subsequent sections, we have replaced the term cosh α − cos β with b(α, β) while simplifying the terms involving the scale factors h α , h β and h ϕ .
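The geometric claim that the β = β 0 coordinate surface is a spherical cap with θ c = π − β 0 can be checked numerically. The sketch below assumes the standard toroidal mapping with the focal ring placed at the contact line (radius c); the paper's own defining expressions for α and β appear only in its Figure 1, so this mapping is stated here as an assumption.

```python
import numpy as np

def toroidal_to_cartesian(alpha, beta, phi, c=1.0):
    """Standard toroidal mapping with focal-ring radius c (taken equal to the
    contact radius, which is the usual convention for the sessile drop)."""
    b = np.cosh(alpha) - np.cos(beta)          # the factor written as b(alpha, beta) in the text
    x = c * np.sinh(alpha) * np.cos(phi) / b
    y = c * np.sinh(alpha) * np.sin(phi) / b
    z = c * np.sin(beta) / b
    return x, y, z

def scale_factors(alpha, beta, c=1.0):
    """h_alpha = h_beta = c/b and h_phi = c sinh(alpha)/b for this mapping."""
    b = np.cosh(alpha) - np.cos(beta)
    return c / b, c / b, c * np.sinh(alpha) / b

# Check that the coordinate surface beta = beta0 is a spherical cap with
# contact angle theta_c = pi - beta0 (contact line at rho = c, z = 0).
theta_c = np.radians(70.0)          # an arbitrary illustrative contact angle
beta0 = np.pi - theta_c
alpha = np.linspace(0.05, 8.0, 400)
x, y, z = toroidal_to_cartesian(alpha, beta0, 0.0)
center_z = 1.0 / np.tan(beta0)      # expected sphere centre for c = 1
radius = 1.0 / np.sin(beta0)        # expected sphere radius for c = 1
residual = np.sqrt(x**2 + y**2 + (z - center_z)**2) - radius
print(np.max(np.abs(residual)))     # ~ 1e-15: the surface is indeed a spherical cap
```

The residual printed at the end is at machine precision, confirming that fixing β reproduces the spherical-cap interface used throughout the analysis.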
Curvatures and Laplace-Beltrami operator for toroidal coordinates The first and second fundamental forms of a surface allow the calculation of the curvatures and the Laplace-Beltrami operator, respectively, for a parametric surface x(u 1 , u 2 ). The coefficients of the first fundamental form are given by the metric tensor (Kreyszig 1959). The derivation of the principal curvatures and the Laplace-Beltrami operator from the coefficients E, F and G is given in Appendix A. An important consequence of (2.14) is that y = 0 at the CL, which arises because P → 0 as α → ∞, and so use of the toroidal coordinate system imposes the fixed CL condition on the problem. The mobility of the CL is defined as 1/Λ by Bo-St, which is zero for an immobile CL and infinite for a fully mobile CL. Only the immobile CL case, 1/Λ = 0, is considered in the current work. Substituting the above equation into (2.10d) gives, at β = β 0 , an expression that can be rearranged as (2.16), where I and II denote its two constituent terms. The term I is equivalent to (ν² − 1/4)P (see Lebedev 1965, p. 224). In fact, an analogous simplification is performed while deriving an expression for the eigenfrequencies of a free spherical drop in Rayleigh's derivation (see Landau & Lifshitz 1987, p. 246). Further simplification of the right-hand side of (2.16) gives the eigenvalue relation (2.18) for −λ², involving sin²β 0 , b(α, β 0 ) and the combination τ² + 1/4, where a single or double subscript α on a function denotes a single or double derivative of the function with respect to α. The expressions for T α , T αα , P and P α (which fall under the class of hypergeometric functions) are given in Appendix B. Results The variation of dimensionless frequency λ with contact angle θ c = π − β 0 is determined by solving (2.18). Previous studies such as Bo-St classified the vibrational modes as zonal (l = 0), sectoral (τ = l) and tesseral (l ≠ 0, τ ≠ l). Results are presented for each type of mode in turn. 3.1. Zonal (l = 0) modes When the disturbance of the interface is axisymmetric, the mode shapes are termed zonal. For a sessile drop of fixed contact radius c, increasing the contact angle θ c increases the volume of the drop (inertia) and thus decreases the frequency λ (cf. figure 2a). There is excellent agreement between the model and the data of Chang et al. (2015) in this figure, particularly at higher mode numbers. For instance, for τ = 10 and θ c = 40°, our model closely matches the measured frequency. Further comparisons of predicted zonal mode frequencies with experimental measurements are shown for the data sets reported by Chang et al. (2013) and Mettu & Chaudhury (2012) in figures 2(b) and 2(c), respectively. In the former, the experimental values lie within the range of theoretical frequencies calculated for the range of contact angles θ c involved. For the higher modes, τ = 8 and 10, the frequencies lie at the upper end of the theoretical span, where there were a limited number of data points as these require larger droplets (≳5 μL; see Mettu & Chaudhury 2012, figure 4a), whereas lower modes (τ = 2, 4, 6) were experimentally accessible for droplets with smaller volume, ≤5 μL. In addition, there is a small increase in slope for τ = 10 (compared to the slope at τ = 6); this feature is also present in figure 2(a) for θ c ≈ 65°. In figure 2(c), the width of the predicted frequency band is small and lies at the lower end of the range of observed frequencies. A possible explanation for this mismatch is that the model neglects contributions from viscous effects.
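The λ versus θ c curves discussed above are obtained by finding, for each contact angle and wavenumber pair [τ, l], the roots of the eigenvalue relation (2.18). That relation is not reproduced in this extract, so the sketch below assumes a user-supplied callable dispersion_residual(lam, theta_c, tau, l) returning the residual of (2.18); only the generic bracketing and root-refinement loop is shown.

```python
import numpy as np
from scipy.optimize import brentq

def mode_frequency(dispersion_residual, theta_c, tau, l,
                   lam_min=0.05, lam_max=50.0, n_scan=2000):
    """Lowest root of (2.18) in lambda for a given contact angle and mode pair [tau, l].

    dispersion_residual : callable (lam, theta_c, tau, l) -> float, the residual of (2.18)
                          (hypothetical here; supply your own implementation).
    Scans a frequency grid for a sign change, then refines it with Brent's method.
    """
    grid = np.linspace(lam_min, lam_max, n_scan)
    vals = np.array([dispersion_residual(g, theta_c, tau, l) for g in grid])
    sign_change = np.where(np.sign(vals[:-1]) * np.sign(vals[1:]) < 0)[0]
    if sign_change.size == 0:
        raise ValueError("no root bracketed; widen the scan range")
    i = sign_change[0]
    return brentq(dispersion_residual, grid[i], grid[i + 1], args=(theta_c, tau, l))

# Usage sketch: frequency versus contact angle for the zonal [8, 0] mode
# (dispersion_residual is assumed to implement (2.18)):
# for theta_c in np.radians(np.arange(30, 151, 5)):
#     lam = mode_frequency(dispersion_residual, theta_c, tau=8, l=0)
```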
Chang et al. (2013) reported that the bandwidth of predicted frequencies increased when viscous contributions were added (noting that the dimensional frequency is plotted here). Chang et al. (2015) subsequently showed that the viscous contribution is characterised by the Ohnesorge number, Oh = μ/√(ρcγ), and even at a small value of Oh = 0.003 for water (instead of Oh = 0 for the inviscid case) the resonant peak changed from an infinite to a finite value and, thus, increased the bandwidth of predicted frequencies (see Chang et al. 2015, p. 446). The effect of viscosity on a drop undergoing oscillations of arbitrary amplitude has been discussed both for free drops and sessile/pendant drops (see Basaran 1992 and Basaran 1997). For the latter case, it has been reported that as the viscosity increases, the resonant frequency also increases, so that excluding viscous effects can lead to predicted frequencies lying at the lower end of the observed spectrum. (Figure 3 caption: effect of contact angle on dimensionless frequency for sectoral modes, τ = l, for (a) [5,5], (b) [7,7] and (c) [9,9]; solid loci show the solutions to (2.18), dashed loci are the results presented by Bostwick & Steen (2014), and symbols indicate experimental data reported by Chang et al. (2015).) (Figure 4 caption, fragment: ... Bostwick & Steen (2014); symbols show experimental data reported by Chang et al. (2015); the shaded region in (d) represents the range of frequencies calculated using VPF theory by Chang et al. (2015) for water, with substrate forcing and viscosity included.) 3.2. Sectoral (τ = l, l ≠ 0) modes A non-axisymmetric mode with wavenumber pair [τ, l] has l longitudinal intersections and (τ − l)/2 latitudinal intersections (or τ − l nodes on the interface) with the undisturbed interface Γ (see Bostwick & Steen 2014, p. 19). A sectoral mode, with τ = l, is a special case where there are only longitudinal intersections. Figure 3 compares the experimental frequencies reported by Chang et al. (2015) with our model and the Bo-St model. There is good agreement with our model for τ = 9. For τ = 5 and 7, the two models bracket the data. 3.3. Tesseral (τ ≠ l, l ≠ 0) modes A tesseral mode shape with wavenumber pair [τ, l] has non-zero longitudinal and latitudinal intersections because τ ≠ l. Figure 4 compares the results for our model and the Bo-St model in a similar fashion to the sectoral mode. For the τ = 9 cases, our model agrees with the experimental data quite well for all θ c values investigated. For θ c ≤ 65°, the Bo-St model does not capture the observed trend; at θ c = 50° and l = 7, it overpredicts by a factor of 1.17, whereas our model underpredicts slightly, by a factor of 1.05. For τ = 7, there is good agreement between the experimental data and both models as θ c decreases from 140° to 70°, below which our model continues to perform well and Bo-St starts to diverge. For τ = 5, our model captures the frequencies at low θ c whereas the Bo-St model works well at higher values. It should be noted that our model cannot predict the frequencies for small contact angles, because in this case a larger fraction of the sessile-drop volume lies within the solid-liquid boundary layer and drop-substrate interactions then cause damping of oscillations. The range of contact angles suggested to avoid boundary layer viscous dissipation effects, discussed in Sharp (2012), is 30°-150°. For the lowest mode, [5,3], there is a slight overprediction for larger contact angles. This can be attributed to the assumption of a pinned CL in the current work.
If the CL is, instead, assumed to be mobile and not pinned, the slope of the frequency-versus-contact-angle curve will decrease at larger contact angles (see Bostwick & Steen 2014, figures 10 and 11). This represents a limitation of the current model in that mobile CL behaviour is not readily incorporated in the toroidal coordinate approach. Discussion and conclusions The superior performance of our model for lower θ c and higher modes is probably the result of using toroidal coordinates, which fit the sessile drop naturally. An interesting physical insight from this work is that the slope of the λ versus θ c curve decreases as θ c decreases, with the curve almost reaching a plateau. This is also suggested by experiments. The physical models present in the literature incorporate bulk dissipation and CL (Davis) dissipation to account for this plateau. On the one hand, the current toroidal model, while established on zero viscosity and fixed CL assumptions, can still predict this plateau with good success. On the other hand, incorporating more dissipation terms will improve this model and bring more understanding of observations such as mode mixing and mode competition. Note that the strength of this inviscid theory coupled with an appropriate coordinate system points to the importance of choosing a framework which maps the complicated geometries of physics problems perfectly, as previously done in Fokas & Nachbin (2012) and Richardson (1992). There are exceptions, e.g. figures 3(a) and 4(a), and we here consider whether the mismatch between the predictions of the model and the experimental data could arise from the assumptions made in our model. The larger error incurred by the Bo-St model can be attributed to the approach used to enforce the no-penetration condition. Earlier works on constrained drops (e.g. Ramalingam, Ramkrishna & Basaran 2012 and Prosperetti 2012) essentially used the Lagrange multiplier method to enforce the no-penetration condition at the pinning circle, and these methods permitted a discontinuity in the interface shape at the contact point. The Bo-St method does not allow a discontinuity at the pinning sites, which leads to overprediction of the frequency (see Bostwick & Steen 2015, p. 558). The model considers the sessile drop on a substrate as a mass-spring system, where viscous effects and substrate-drop interactions are neglected. These assumptions were also made in the Bo-St model and were relaxed in their later work, for example where they studied damping for viscous drops (with fixed CL) undergoing substrate-forced oscillation. To extend our model along the lines discussed in Chang et al. (2015), viscous contributions could be incorporated by adding a damping term, iλ C[y], to the right-hand side of (2.15), where C is the dissipation operator and Oh = μ/(ρcγ)^{1/2} is the Ohnesorge number. Substrate-drop interactions can be modelled using two main assumptions: (i) constant contact radius; and (ii) modelling the substrate forcing via the bulk pressure in the drop in the form of Faraday oscillations. With regard to (i), it should be noted that the effect of contact-angle hysteresis on the modes cannot be incorporated using the toroidal framework presented here because it requires the incorporation of a dynamic CL condition (Bostwick & Steen 2014). With (ii), the substrate contribution is incorporated by adding a term F 0 e^{iλt} to the right-hand side of (2.15), where λ is the frequency of substrate forcing (not the natural frequency) and F 0 its amplitude.
Chang et al. (2015) used these assumptions and incorporated the aforementioned effects in their viscous potential flow (VPF) theory. The envelope of solutions which they obtained is shown as a shaded region in figure 4(d) and it spans Bo-St and our model (both inviscid). It is, thus, expected that the addition of viscous and substrate contributions to our model will modify (2.18) and increase the bandwidth of predicted frequencies. This is the subject of ongoing work, where the aim is to identify the contributions of viscous damping and substrate forcing, and thereby establish when significant differences will arise from the inviscid model. The description of an oscillating drop presented here is not suitable for cases where the drop shape is influenced by gravity, which is quantified by Bo (ratio of gravity to surface tension). As the volume of the drop increases, the drop shape changes from that of a truncated sphere (Bo = 0), towards being ellipsoidal (0 < Bo < 5) until it forms a flat puddle (Bo > 5), with uniform depth except near the edges (Lubarda & Talke 2011). We believe that it should be possible to model a flattened drop using confocal ellipsoidal coordinates system in the 0.5 < Bo < 5 regime. Finding resonant frequencies for a flattened drop will allow us to extend our Bo = 0 theory to 0 < Bo < 5, and compare the results with the theory for flattened drops presented by Noblin, Buguin & Brochard-Wyart (2004) where the drop is modelled as a liquid bath and the resonant frequency is that associated with a standing wave on its interface. This work introduces, for the first time, an analytical solution to the sessile-drop oscillation problem. The superiority of this model lies in the fact that its predictions work well for lower contact angles (<75 • ) compared with the existing models. It also predicts a decrease in slope as θ c decreases, which is consistent with experiments. The behaviour at lower contact angles (<30 • ) remains to be experimentally validated (and physically understood) for all types of modes. To summarise, our model provides a concise solution to the sessile-drop oscillation problem which opens a new window to researchers interested in this and related problems. A clear next step could be to test the θ c < 30 • regime experimentally, model a drop being vibrated on an inclined plane (see Brunet, Eggers & Deegan 2007) and extend the model to larger drops by including the effects of gravity.
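As a rough numerical illustration of the regimes mentioned above, the Bond and Ohnesorge numbers can be evaluated for a millimetre-scale water drop; the property values and contact radius below are generic assumptions rather than parameters taken from the cited experiments.

```python
import math

# Illustrative property values for water at room temperature (assumptions):
rho   = 998.0      # density, kg/m^3
gamma = 0.072      # surface tension, N/m
mu    = 1.0e-3     # dynamic viscosity, Pa s
g     = 9.81       # gravitational acceleration, m/s^2
c     = 1.5e-3     # contact radius, m (a typical millimetre-scale drop)

Bo = rho * g * c**2 / gamma           # gravity relative to surface tension
Oh = mu / math.sqrt(rho * c * gamma)  # viscous relative to inertial-capillary effects

print(f"Bo = {Bo:.2f}")   # ~0.3: close to the surface-tension-dominated (Bo -> 0) regime
print(f"Oh = {Oh:.4f}")   # ~0.003: of the order of the small value quoted for water above
```

With these values Bo is well below unity, so the spherical-cap (small-Bond-number) assumption is reasonable, and Oh is small enough that the inviscid treatment is a sensible first approximation.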
4,744.6
2021-06-02T00:00:00.000
[ "Physics" ]
Application of Inverse Mapping for Automated Determination of Normalized Indices Useful for Land Surface Classification Precise surface classification is essential for glacial health monitoring, where normalized indices have traditionally been used. These indices are created empirically for a specific sensor. The transferability of these indices to other sensors can be affected by differences in spectral and spatial resolution. Thus, it is essential to evaluate the transferability of an index before applying it to a new sensor to ensure accuracy and reliability. However, as the number of satellites, sensors, and observation bands increases, there is a need for automated methods for determining application-specific normalized indices. In this article, we propose using all the bands of multispectral optical sensors to generate multiple normalized indices and determining application-specific indices using inverse mapping. We use these normalized indices for pixel-by-pixel surface classification using neural networks. First, we employ all the bands for generating normalized indices and then eliminate low-spatial-resolution bands to evaluate classification performance by using only high-spatial-resolution indices. We apply this method to a glacial region and observe 81.98% and 84.81% overall accuracy compared to the ground truth data for the two classifications, respectively. We then apply inverse mapping dynamics to the classification results to determine prominent indices useful for glacier classification. The results show that although some of the determined indices are not traditional indices, they are still useful for classification due to the relative differences between various land types. The proposed method has the potential to automate normalized index determination, thereby eliminating the need for empirical band assessment methods and making the index development process more efficient. I. INTRODUCTION M OUNTAIN glaciers are highly sensitive indicators of climate change [1], [2]. Thus, a detailed and accurate classification of the size of a glacier is an important step in order to understand their overall health and other applications such as water resource management and early prevention of glacial lake outburst floods (GLOFs) [3], [4], [5]. The rugged terrain and remote locations of glacierized regions limit the applicability of traditional in situ observation methods [6], [7]. However, the recent increase in the availability of multispectral optical remote sensing data with high resolution spatially and temporally allows frequent coverage of glacial regions, even in rugged terrain [6]. In most studies focusing on optical sensor data, index-based methods are used, wherein normalized indices are calculated from various spectral bands of optical sensor data and used to analyze changes. By determining the normalized difference of the spectral bands, it is possible to highlight specific characteristics and analyze trends and patterns that may not be apparent when looking at the raw data alone. Traditionally, normalized indices are obtained by observing the reflectance of the bands, determining target-specific high and low reflectance bands, and calculating their normalized differences. Over the past three decades, several normalized indices have been suggested for different applications, and some of these indices are widely used for glacial applications. 
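All of these indices share the same normalized-difference template. The short sketch below illustrates the computation on per-pixel reflectance arrays; the specific band pairings follow the common definitions discussed just below and are listed here only as an example.

```python
import numpy as np

def normalized_difference(band_a, band_b, eps=1e-12):
    """Per-pixel normalized difference (rho_a - rho_b)/(rho_a + rho_b)."""
    band_a = band_a.astype(float)
    band_b = band_b.astype(float)
    return (band_a - band_b) / (band_a + band_b + eps)  # eps guards against division by zero

# Assuming green, red, nir and swir are reflectance arrays of identical shape:
# ndsi  = normalized_difference(green, swir)   # snow index (green vs. shortwave infrared)
# ndvi  = normalized_difference(nir, red)      # vegetation index
# ndwi  = normalized_difference(green, nir)    # water index
# mndwi = normalized_difference(green, swir)   # modified water index (same bands as NDSI)
```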
Among the commonly used indices is the Normalized Difference Snow Index (NDSI) [8], which was initially developed for Moderate Resolution Imaging Spectroradiometer to distinguish between snow and ice by utilizing the green and the shortwave infrared band. The Normalized Difference Vegetation Index [9] is also utilized often as it can indicate the presence of vegetation on the glacier surface. In addition, for glacial lake monitoring, the Normalized Difference Water Index (NDWI) [10] is often used and is useful to identify the presence of water in glacial lakes and to monitor changes in lake extent over time. In addition to NDWI, the Modified Normalized Difference Water Index (MNDWI) [11], which uses the same spectral bands as NDSI, has been widely used in glacial lake monitoring. Glacierized terrains are spectrally complex regions and comprise different land covers with similar spectral characteristics [12]. The spectral similarity between glaciers and the surrounding material makes many of the conventional methods ineffective for identifying glaciers [13]. To address this issue, various new normalized difference indices have been proposed by combining different bands to enhance the distinctions between glaciers and the spectrally similar surrounding terrain [14]. A recent work [15] proposed methods for distinguishing between snow cover glaciers (SCGs) and water bodies. They introduced two indices, namely, the NDWI with no SCG information for extracting lake water and the NDSI with no water information to highlight SCG and to suppress the influence of lake water. The newly developed indices were proven successful in distinguishing between two land types that exhibited similar spectral features. However, their reliance on specific thresholds poses limitations to their application beyond the study area. In order to apply these indices in new areas, it is necessary to determine new thresholds, which in turn requires the user's judgment. Moreover, in the context of water body extraction, it has been observed that indices such as NDWI and MNDWI were initially proposed and tested on a specific sensor, and the accuracy of their results may differ when applied to different sensors. Given that the Sentinel-2 sensor shares similar spectral and spatial characteristics with the Landsat series, it is expected that water indices developed on Landsat could presumably be applied to Sentinel-2 data [16]. However, despite the efficacy of these indices in land-type extraction of their respective sensors, a slight change in the spectral range of the corresponding band of another sensor could alter the results [17]. Moreover, the transferability of indices could also be influenced by the difference in spatial resolution between two sensors [18]. It, thus, becomes imperative to carefully evaluate the transferability of an index to a new sensor before applying it to ensure the accuracy and reliability of the results. Thus, as the number of sensors and observation bands increases, it is essential to develop automated methods for the determination of application-specific indices. In the last few decades, researchers have developed various automated and semiautomated methods to classify different types of remote sensing satellite imagery [19], [20], [21], and recently, a lot of researchers have also focused on making explainable networks to understand the effect of the inputs on the classification results [22], [23]. 
In the realm of neural network explainability for remote sensing applications, inverse mapping has proven effective in giving reliable and convincing results [24]. In this article, we propose the use of all the combinations of raw band data of the Sentinel-2 optical sensor for generating normalized indices. A fully connected feedforward neural network is employed for the pixel-by-pixel classification of a glacial region, and the efficacy of the indices for the classification task is evaluated. In addition to the classification, inverse mapping is used to determine the most significant indices useful for characterizing a particular land type. This article first focuses on the proposed method in Section II, where we elaborate on the normalized index generation for land surface classification. In addition, prominent normalized index identification by using inverse mapping is introduced. Section III presents the experiments and results of the application of the proposed method on the dataset for the Imja glacier region. Section IV provides a detailed discussion on the results. Finally, Section V concludes this article. A. Normalized Index Generation In general, a normalized index is calculated as B i : B j = (ρ band i − ρ band j )/(ρ band i + ρ band j ) (1), where B i and B j denote the spectral bands i and j, respectively; ρ band i denotes the reflectance of the corresponding band i, while ρ band j denotes that of band j. In this experiment, a normalized index generator is modeled as follows. The input to the normalized index generator is N spectral bands of a particular sensor. It generates I = C(N, 2) combinations of the N spectral bands in the form of (1). Consequently, the I combinations are channeled as inputs to the neural network. For land surface classification, each input terminal of the network is fed pixel values of a specific normalized index, as detailed in the following subsection. B. Land Surface Classification In this experiment, a feedforward neural network is employed for generating the pixel-by-pixel classification map. In this study, our aim is to achieve high-resolution classification focusing on the reflectance values of individual pixels. Unlike convolutional neural networks, which diminish data resolution through pooling, the fully connected neural network introduced in this experiment directly handles the physical values instead of identifying texture or shape characteristics from the images, and so ensures that the spatial resolution remains uncompromised. As shown in Fig. 1, the I normalized indices generated by the normalized index generator are used as inputs. This feedforward neural network is a fully connected single-hidden-layer neural network consisting of an input terminal layer, a single hidden layer, and an output neuron layer. Considering x = [x 1 x 2 . . . x I ]^T ([·]^T: transpose) and y = [y 1 . . . y J ]^T to be the input and hidden signals, respectively, b 1 and b 2 to be the biases for the hidden and output layers, respectively, and W 1 and W 2 to be the input-hidden weights and the hidden-output weights obtained after a learning phase, respectively, the values of z, where z = [z 1 z 2 . . . z K ]^T, at the output layer of the neural network can be calculated as z = f(W 2 f(W 1 x + b 1 ) + b 2 ) (2). This network employs the modified logarithmic activation function f proposed in [25], working componentwise, as defined in (3). C. Normalized Index Identification Inverse mapping is a dynamics that determines the most prominent contributing input of a forward processing neural network by accessing the signal flow in the network.
To gain insight into the inverse mapping process, it is beneficial to examine how a neural network identifies a winning neuron. This can be achieved by focusing on the role of the weights in the forward processing network. As depicted in Fig. 1, the input signals progress in a forward direction through the W 1 and W 2 matrices. The respective weight values determine the "conductivity" of the neural connections for these signals, resulting in a set of outputs. Then, we can consider a reciprocal signal flow by taking the transposed matrices W 1 ^T and W 2 ^T for the inverse signal propagation. No additional learning or weight optimization is necessary for this inverse mapping process [25]. Fig. 2 shows the inverse mapping process. In this process, the value obtained at the winning output node of the forward processing network is fed to the same node as the input of the inverse mapping, with all other nodes fed zeros as inputs. This approach effectively suppresses any potential influence that nonprominent classes might have. The network identifies the most significant features by tracking the backward signal flow from the winning node [25]. The inverse mapping is governed by the corresponding relation built from the transposed weight matrices, where z = [0 0 . . . z k . . . 0]^T is the modified output vector of the forward processing network fed as the input to the inverse mapping network, x is the output of the inverse mapping network, and f^−1 is the inverse of the activation function used in the forward network. In this experiment, the inverse mapping identifies which of the input normalized indices are decisive and influential to the decision making of the neural network. The analysis is done pixel by pixel and class by class, wherein the output classes are correlated with the relevant input features. The inverse mapping results are displayed by using box-whisker plots. The results are displayed for all those pixels of a specific class about which the neural network is confident, in the selected area of interest (AOI). The proximity of the individual box plot to either the −1 or +1 level represents high significance of the corresponding input feature [25]. The proximity of a feature to +1 (positively prominent feature) implies that the said feature positively influences the decision making of the feedforward neural network. On the other hand, the proximity of a feature to −1 (negatively prominent feature) implies that the said feature oppositely influences the decision-making process. A. Study Area Imja glacier is situated in the Khumbu region of eastern Nepal (27°54′17″ N, 86°55′31″ E). Imja glacier has two branch tributary glaciers: the Lhotse Shar in the northeast and the Ambulapcha glacier in the south [26]. The meltwater from the Imja and Lhotse glaciers creates the Imja Tsho glacial lake, which is considered to be at high risk for GLOFs. Due to its close proximity to the Mt. Everest base-camp trekking route, the Imja glacier has been the focus of comprehensive research using both on-site and remote sensing techniques, making it one of the most extensively studied glaciers [27], [28], [29], [30]. Fig. 3 shows the AOI for this study. The AOI also includes parts of the Lhotse glacier and the Ambulapcha Tsho lake, a circular basin. Unlike the Imja Tsho, this lake has no visible watershed and the water discharges via springs [31]. Despite extensive research on Imja Tsho lake, there is a scarcity of literature regarding Ambulapcha Tsho lake. Given its close proximity to a glacier that is at risk, it is imperative to conduct an in-depth study of Ambulapcha Tsho.
Since the lake is not fed by a glacier, hereafter we consider it as a freshwater lake basin. In addition, the topographic complexity, such as the presence of steep slopes in the AOI, introduces the effect of topographic shadows. Some of these shadow-contaminated regions are indicated in Fig. 3 by red outlines. B. Dataset The AOI dataset was obtained from the Sentinel-2 multispectral mission of the European Space Agency. Glacier inspections are usually carried out toward the end of the ablation season, as imagery obtained during this period can be useful for determining the end-of-summer snow-line altitude [6]. Accordingly, the acquisition of the dataset was done on October 14, 2021, which was toward the end of the ablation season. The Sentinel-2 dataset was obtained as Level 1C images for all the bands, and the Sen2cor algorithm was applied to produce Level 2A images of all the bands, with each pixel having a spatial size of 10 m × 10 m. This ensures that each pixel of the corresponding bands represents the same geographical location when collocation is performed. Fig. 4 shows the ground truth map of the AOI. The AOI consists of 769 × 973 = 748 237 pixels. We classified each of the pixels in the AOI into five different land types: 1) snow; 2) glacial debris; 3) rock; 4) glacial lake; and 5) freshwater lake. C. Ground Truth The ground truth data were mapped pixel by pixel manually by careful visual inspection of Sentinel-2 and Google Earth Pro imagery. Despite the meticulous manual mapping efforts, the possibility of errors cannot be completely ruled out, especially in regions that are subject to shadows or located on steep slopes. D. Feedforward Network for Land Surface Classification As mentioned in Section II-B, the normalized index generator is used to generate I combinations of N spectral bands. The normalized indices generated from the spectral bands are used in two configurations: 1) all spectral bands; and 2) only high-spatial-resolution spectral bands (10 and 20 m). The architecture of the neural network consists of three layers: an input layer with I nodes, a single hidden layer with 64 neurons, and an output layer with five neurons. Each neuron in the output layer represents one of the five distinct land cover classes. As shown in Fig. 1, the input layer of the neural network is fed with pixel-by-pixel values of the generated normalized indices. At the output layer, the network classifies each pixel as one of the five land cover types: snow, glacial debris, rock, glacial lake, and freshwater lake. Both feedforward neural networks are trained and validated using a total of 2500 ground truth pixels (enclosed in red boxes in Fig. 4) of their respective input datasets. The training pixels were carefully chosen considering both the ground truth data and visual inspection of the RGB composite image of the region. The pixel selection involved a detailed examination of the composite image to identify suitable pixels, while the ground truth data ensured the accuracy of the labeling. The training-validation dataset is partitioned into training and validation sets in an 80:20 ratio. 1) Feedforward Network for Land Surface Classification Using All Spectral Bands: The raw Sentinel-2 data contain 13 bands, as shown in Table I. The normalized indices are generated by utilizing all 13 spectral bands, resulting in I = 13 C 2 = 66 normalized indices in total. The feedforward neural network's input layer consists of 66 nodes, and each node is fed with a normalized index's pixel value.
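To make the input pipeline concrete, the sketch below builds every pairwise normalized index from a band stack and pushes one pixel through a single-hidden-layer network of the kind described above. It is a shape-level illustration only: the weights are random stand-ins rather than trained values, and a logistic function is used in place of the modified logarithmic activation of [25], whose exact form is not reproduced in this extract.

```python
import numpy as np
from itertools import combinations

def normalized_indices(bands, eps=1e-12):
    """bands: array of shape (N, H, W) of per-band reflectance.
    Returns an (I, H, W) stack with I = N*(N-1)/2 pairwise normalized indices."""
    pairs = list(combinations(range(bands.shape[0]), 2))
    return np.stack([(bands[i] - bands[j]) / (bands[i] + bands[j] + eps) for i, j in pairs])

def forward(x, w1, b1, w2, b2, act=lambda s: 1.0 / (1.0 + np.exp(-s))):
    """Single-hidden-layer forward pass z = act(W2 act(W1 x + b1) + b2)."""
    return act(w2 @ act(w1 @ x + b1) + b2)

# Toy example: 6 bands on a 4 x 4 tile -> C(6, 2) = 15 indices, 5 output classes.
rng = np.random.default_rng(0)
bands = rng.random((6, 4, 4))
x_all = normalized_indices(bands)                 # shape (15, 4, 4)
n_in, n_hidden, n_out = x_all.shape[0], 64, 5
w1, b1 = rng.normal(size=(n_hidden, n_in)), np.zeros(n_hidden)
w2, b2 = rng.normal(size=(n_out, n_hidden)), np.zeros(n_out)
pixel = x_all[:, 0, 0]                            # the index vector of a single pixel
print(int(np.argmax(forward(pixel, w1, b1, w2, b2))))  # class label assigned to that pixel
```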
The hyperparameters of the network are empirically selected through a rigorous process of trial and error as follows. The learning rate is set at 10^−5 and the network is trained for 2500 epochs. Fig. 5(a) displays the learning curves of the network. At the end of the training process, the final training loss and validation loss are 0.0703 and 0.0737, respectively. For the second configuration, the feedforward neural network is employed for generating the classification map by using the 45 high-spatial-resolution normalized indices generated by the normalized index generator. The hyperparameters of this network are also chosen empirically: the learning rate is set at 10^−5 and the network is trained for 2500 epochs. Fig. 5(b) depicts the achieved training loss and validation loss, which are 0.2057 and 0.2017, respectively. Fig. 6(a) and (b) show the classification maps generated by the neural networks with all-resolution normalized indices and high-spatial-resolution normalized indices, respectively. A visual comparison of the two classification maps shows that both maps achieve a good classification. However, it is worth noting that Fig. 6(a) exhibits a relatively coarse classification when compared to Fig. 6(b), which can be attributed to the inclusion of indices containing 60-m resolution bands. E. Classification Results The classification results indicate that both networks are successful in accurately classifying snow-covered areas and distinguishing between different types of water bodies, including supraglacial ponds on glacial debris. In addition, both networks are effective in differentiating between the spectrally similar glacial debris and rock land types. In addition, the results demonstrate the effectiveness of both networks in classifying areas under shadow, which has been a long-standing challenge in the field of optical remote sensing. However, Fig. 6(b) shows that some misclassification still remains in regions located on mountain slopes. F. Comparison With Ground Truth Data The classification maps presented in Fig. 6(a) and (b) are compared pixel by pixel to the ground truth data depicted in Fig. 4. The classifications have also been quantitatively evaluated by using recall scores (RSs). Each individual RS has been calculated as RS = [True positives / (True positives + False negatives)] × 100. Fig. 7(a) shows the confusion matrix for the classification using all the normalized indices. The classification results exhibit an overall accuracy of 81.98%. Fig. 7(b) shows the confusion matrix for the classification using high-spatial-resolution normalized indices only. The classification results exhibit an overall accuracy of 84.81%. In both confusion matrices, the most prominent misclassification arises from the incorrect identification of rock as glacial debris. This misidentification can be attributed to the similarity in spectral signatures or to limitations in the mapping of the ground truth data for these two land types, particularly in regions affected by shadows and steep slopes. G. Normalized Index Identification Using Inverse Mapping The classification results presented in Section III-E are utilized to identify the prominent inputs via inverse mapping. As mentioned in Section II-C, the inverse mapping results are displayed by using box-whisker plots, and the proximity of the individual box plot to either the −1 or +1 level represents the significance of the corresponding input feature.
If an index is close to the +1 level, it indicates a positively prominent index, while if an index is close to −1, it is considered a negatively prominent index. In addition, for ease of interpretation of the inverse mapping results, normalized index maps of the input features identified as prominent have also been provided. By analyzing the index maps, we identify two types of normalized indices: 1) absolute indices that independently denote a target region with a −1 or +1 value and 2) relative indices that rely on the relative differences between the land types. 1) Inverse Mapping for Classification Using All 66 Normalized Indices: Fig. 8 depicts the results of inverse mapping for this classification. a) Snow and glacial lake: Fig. 8(a) and (d) show that B 5 : B 9 is a prominent index for the detection of snow and glacial lake pixels and is calculated by (7). Based on the proximity of the index to the −1 and +1 levels, it can be noted that for the determination of the snow pixels, B 5 : B 9 is a negatively prominent index. However, for the determination of the glacial lake pixels, it is a positively prominent index. The corresponding index map for B 5 : B 9 is shown in Fig. 9(a). Here, the glacial lake pixels are indicated by a +1 value, while the snow pixels are determined by using their relative difference from the glacial lake pixels. Thus, B 5 : B 9 is considered to be a relative index. b) Debris and rock: Fig. 8(b) and (c) indicate that debris pixels and rock pixels also have a complementary relationship. The most significant index for both is the B 1 : B 3 index, which is calculated as in (8). B 1 : B 3 positively affects the neural network for the classification of debris pixels and negatively affects the neural network's classification of rock pixels. The index map for B 1 : B 3 is shown in Fig. 9(b), where the rock regions are indicated by a −1 value, while the debris regions are indicated by a value closer to zero. Since this index is determined by the relative differences between two classes, it is classified as a relative index. c) Freshwater lake: From Fig. 8(e), the most prominent indices for freshwater lake pixels are determined to be B 6 : B 11 and B 6 : B 12 , given by (9) and (10), respectively. The corresponding index maps for B 6 : B 11 and B 6 : B 12 are shown in Fig. 9(c) and (d), respectively, where the freshwater lake pixels are indicated by a −1 value. The index maps clearly distinguish between the two water bodies. In these index maps, since the target area is indicated by a −1 value, both indices are considered to be negatively prominent absolute indices. 2) Inverse Mapping for Classification Using 45 High-Spatial-Resolution Normalized Indices: The inverse mapping results using only high-spatial-resolution indices are given in Fig. 10, and Table III summarizes the indices that are found prominent. a) Snow and glacial lake: The corresponding results are shown in Fig. 10. b) Debris and rock: From Fig. 10(b) and (c), B 2 : B 8A is found to be a positive index for glacial debris and a negative index for rock pixels. B 2 : B 8A is calculated in (15). The corresponding map is shown in Fig. 12. The rock regions are denoted by a value of −1 in the maps, whereas the debris regions are represented by values closer to zero. Since the results of the inverse mapping of the debris class exhibit weakly positive values, only B 2 : B 8A is considered as a prominent normalized index and other normalized indices have been excluded from the analysis.
c) Freshwater lake: Similar to the results shown in Section III-G1c, the inverse mapping for the high-spatial-resolution freshwater lake classification also indicates the prominence of the B 6 : B 11 and B 6 : B 12 indices. In addition, B 3 : B 6 , B 3 : B 8 , B 4 : B 6 , and B 5 : B 6 are also found to be prominent. The corresponding index maps for these indices are shown in Fig. 13. Based on the inverse mapping results and the index maps, it is evident that B 6 : B 11 and B 6 : B 12 are negatively prominent indices, while B 3 : B 6 , B 3 : B 8 , B 4 : B 6 , and B 5 : B 6 are positively prominent indices. Since these indices clearly exhibit a value of +1 or −1 for the target region, they can be classified as absolute indices. IV. DISCUSSION This article focuses on determining normalized indices that are useful for identifying and classifying glacial facies. A normalized index generator and a fully connected feedforward neural network have been utilized to accomplish this goal. In addition, inverse mapping dynamics was applied to the classification results to determine prominent indices useful for glacier classification. It was found that the indices were successful in classifying the glacier region, and the inverse mapping gave reliable results for the determination of new indices. It was found that even though some of the determined indices are not traditional absolute normalized indices, they can still determine the classes by utilizing the relative differences between the various land types. From the results of this study, it was found that, for a negatively prominent absolute index, the target-region pixels in its index map have a value < 0. Additional experiments showed that a negatively prominent index B i : B j implies that its complementary index B j : B i is also a significant index, and the pixels in the target area in its index map have a value > 0. In addition, we were able to successfully identify numerous impactful indices for the classification of glacial facies. It is noteworthy that some of these indices were already being utilized before their significance was identified in this study. For instance, the B 3 : B 8 index/Freshwater Index-4 (FWI 4 ), which was significantly impactful for the high-resolution classification of the freshwater lake, is calculated using the same bands as NDWI [10]. Similarly, the B 2 : B 4 index/Snow and glacial lake index-2 (SGI 2 ), determined to be a significant index for snow and glacial lake classification, employed the same formula as NDWI 1 [32], [33], [34]. Furthermore, our findings supported a recent study [17] that proposed the Sentinel-2 water index (SWI) by taking into account the influence of the vegetation-sensitive red-edge bands of Sentinel-2. SWI was calculated by normalizing the VRE 1 band with the SWIR 2 band. (Table III: prominent indices determined from the inverse mapping of the classification obtained by using only the 45 high-spatial-resolution normalized indices.) In our study, we introduced the B 6 : B 11 index/Freshwater index-1 (FWI 1 ), which exhibited similarities to SWI as it involved normalizing the VRE 2 band with the SWIR 2 band. The identification of these relevant preexisting indices supports the generalization ability of the proposed method. Inverse mapping had previously shown efficacy in determining important features related to earthquake disaster assessment [35]. The effectiveness of inverse mapping in the current application is demonstrated by the fact that the indices identified by our method have already been adopted in practical applications.
This highlights the potential of our method to be used as a valuable tool for identifying and developing new indices that could be of great value for classification tasks in various domains. By leveraging the relationships between different spectral bands, our method can effectively capture the unique characteristics of different features, making it a powerful tool for extracting information from remotely sensed data. Moreover, in order to further evaluate the impact of normalized indices on classification, additional experiments were conducted by using only raw Sentinel-2 band data as input. These experiments showed that the accurate classification of shadowed areas was not possible without the use of normalized indices. This is because the presence of shadows can cause significant variations in the reflectance values of different bands, leading to inaccurate classification results. By incorporating normalized indices that account for these variations, the effects of shadows on classification accuracy can be mitigated. This highlights the importance of using normalized indices in classification, especially in areas with shadows or other topographically challenging conditions that can affect the accuracy of classification results. Our proposed method in this study has the potential to be extended to determine additional indices beyond those examined in this study. These could include indices based on methods such as simple band ratios or double differences, which have been shown to be effective in various applications [36], [37]. Therefore, the ability of our proposed method to identify effective indices, such as simple band ratios or double differences, could have broader applications in remote sensing and environmental monitoring. V. CONCLUSION In this article, we proposed the utilization of all the combinations of Sentinel-2 indices for land classification and the determination of useful indices. The effectiveness of this approach was demonstrated through its application in the classification of a glacial region, where indices prominent for glacial classification were identified by using inverse mapping. In addition, the method effectively classifies shadowed regions, which has been a long-standing challenge in optical remote sensing. Moreover, some of the indices determined to be effective by the proposed method are already being used in practical applications. Given the anticipated growth in the number of sensors and observation bands in the near future, this method has the potential to become an effective method for identifying application-specific indices.
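For readers who wish to experiment with the inverse-mapping step of Section II-C, a minimal sketch is given below. It zeroes all outputs except the winner and propagates the remaining value back through the transposed weight matrices with an inverted activation; tanh and arctanh are used as placeholders for the modified logarithmic activation of [25] and its inverse, and the exact composition of operations in the paper's own formulation may differ.

```python
import numpy as np

def act(s):
    """Placeholder activation (tanh) standing in for the modified logarithmic activation of [25]."""
    return np.tanh(s)

def act_inv(z, eps=1e-6):
    """Placeholder inverse activation f^-1 (arctanh), clipped for numerical safety."""
    return np.arctanh(np.clip(z, -1.0 + eps, 1.0 - eps))

def inverse_map(z_out, w1, w2, winner):
    """Back-propagate only the winning output through the transposed network:
    x_back = f^-1( W1^T f^-1( W2^T z_tilde ) ), with z_tilde zero except at the winner.
    Entries of x_back close to +1 or -1 flag positively or negatively prominent inputs."""
    z_tilde = np.zeros_like(z_out)
    z_tilde[winner] = z_out[winner]
    y_back = act_inv(w2.T @ z_tilde)
    return act_inv(w1.T @ y_back)

# Usage with any trained single-hidden-layer network (biases omitted for brevity):
# z = act(w2 @ act(w1 @ pixel)); winner = int(np.argmax(z))
# relevance = inverse_map(z, w1, w2, winner)   # one relevance value per input index
```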
6,788.6
2023-01-01T00:00:00.000
[ "Environmental Science", "Mathematics" ]
Dielectric Susceptibility of Liquid Water: Microscopic Insights from Coherent and Incoherent Neutron Scattering A. Arbe, P. Malo de Molina, F. Alvarez, B. Frick, and J. Colmenero Centro de Física de Materiales (CFM) (CSIC–UPV/EHU)—Materials Physics Center (MPC), Paseo Manuel de Lardizabal 5, 20018 San Sebastián, Spain Departamento de Física de Materiales (UPV/EHU), Apartado 1072, 20080 San Sebastián, Spain Institut Laue-Langevin, 71 Avenue des Martyrs, CS 20156, F-38042 Grenoble Cedex 9, France Donostia International Physics Center, Paseo Manuel de Lardizabal 4, 20018 San Sebastián, Spain (Received 16 April 2016; published 24 October 2016) Water dynamics has paramount importance in many areas of research and industrial applications. One of the main techniques used from the early times to investigate water dynamics is dielectric spectroscopy (DS) [1]. Thanks to the development of terahertz (THz) techniques [2][3][4] it was recently possible to fill the gap between dipolar relaxation and intermolecular stretching vibrations at ≈5 THz, and to have a full picture of the dielectric permittivity ε*(ν) of liquid water in a broad frequency range. This is displayed in Fig. 1(a), which includes data from different sources at 298 K [2,5]. The main contribution to the imaginary part of ε*(ν), ε″(ν), is the well-known Debye peak centered at ν_max ≈ 20 GHz, which corresponds to a single exponential decay of the sample polarization with a characteristic time τ_D = (2πν_max)^(−1) ≈ 8.3 ps. This peak, which is also present in other hydrogen-bond (HB) liquids at different frequencies, is associated with the collective relaxation of the dipole moment M(t) = Σ_i μ_i(t), with μ_i(t) the dipole of the ith water molecule. Figure 1(a) also shows that this peak is strongly suppressed in the susceptibility χ*(ν) measured by light scattering (LS) [6,7]. In addition to this main contribution, other low-amplitude processes have recently been invoked to describe the high-frequency part of the spectrum [2,5,6,8]. In Fig. 1(a) we have reproduced the most recently proposed description [5], which includes two additional Debye-like processes with characteristic times τ_2 ≈ 1 ps and τ_3 ≈ 0.18 ps. Despite the evident need for two additional contributions to the main Debye peak to properly describe ε″(ν), the situation is still confused. The values of τ_2 and τ_3 are rather scattered (see Ref. [5] for a recent compilation) and strongly depend on the model function used for process 3 [5]. Moreover, the interpretation of the molecular motions involved in the different processes is very unclear, mainly because DS is a "macroscopic" technique, which follows the total dipole moment M(t) without spatial resolution. The relevant frequency range for the dielectric response of water can also be covered by neutron scattering (NS), a technique delivering microscopic information with space and time resolution. NS has advantages for identifying processes in the THz range, avoiding interferences from the peak at 5 THz, which is barely visible by NS. More importantly, by measuring D2O samples NS reveals the dynamic structure factor S(Q, ν), with Q the wave vector, i.e., it allows following the actual structural relaxation [9]. However, apart from a few exceptions [10,11], most of the NS studies of water dynamics, from the paper of Teixeira et al. [12], have been focused on incoherent scattering from protonated samples (see Ref.
[13] for a critical discussion of the works carried out). Although the synergetic combination of NS and DS has proven to be a powerful tool in different but likely related problems such as, for instance, polymer melt dynamics [14], this methodology has never been explored for water dynamics. With these ideas in mind, we have considered incoherent and coherent NS in a wide Q range covering the first maximum of the static structure factor S(Q) (Q_max ≈ 2 Å⁻¹) and the so-called intermediate Q range (0.3 Å⁻¹ ≲ Q ≲ Q_max). The NS data were analyzed in terms of the corresponding susceptibility χ*_Q(ν). Its imaginary part can be calculated as χ″_Q(ν) ∝ S(Q, −ν)/n(ν) from the scattering function corresponding to 'system energy loss', with n(ν) = (e^(hν/kT) − 1)^(−1) the Bose occupation factor (k: Boltzmann constant) (see the Supplemental Material [15]). This less conventional analysis of NS data allows distinguishing better the different processes involved in S(Q, ν) and a more direct comparison with spectroscopy data. The NS experiments were carried out at 298 K on H2O and D2O samples for incoherent and coherent scattering, respectively, by the time-of-flight instrument IN5 [16] at the ILL. Diffraction measurements with polarization analysis [17] were also performed at 298 K on the D7 (ILL) instrument [18]. See the Supplemental Material for experimental details [15]. The χ″_Q(ν) obtained for the incoherent and coherent case and two Q values are shown in Figs. 1(b) and 1(c), respectively. A first qualitative inspection of χ″_Q(ν) (see Figs. S2 and S3 for other Q values) suggests the presence of three different processes. The one dominating at low frequencies shows dispersion in Q, indicating diffusive behavior. In the other extreme of the spectra, the relevant process shows a Q-independent and rather high characteristic frequency (≈THz), suggesting an inelastic vibrational origin. We note that the vibrational density of states of liquid water measured by NS has a low-frequency main peak centered at ≈2 THz [19], which was identified with bending fluctuations of O-O-O units in the water-molecule network. The presence of a third, intermediate process is more evident in the low-Q coherent data. This process seems to be also roughly Q independent, suggesting some kind of localized process. Based on these qualitative arguments, to fit the data we have first considered the addition of a vibrational and a relaxational contribution. In the time domain this general expression reads F(Q, t) = [1 − C(Q)]F_V(Q, t) + C(Q)F_R(Q, t). (1) F(Q, t) represents either the intermediate incoherent scattering function for H nuclei, S_inc,H(Q, t), or the normalized dynamic structure factor S(Q, t)/S(Q); these functions are related through Fourier transformation with those measured on the protonated and deuterated samples, respectively. For the relaxational contribution we have assumed the convolution of two independent processes: a diffusive contribution F_d(Q, t) and a local (restricted in space) contribution F_l(Q, t). In the time domain this convolution reduces to a simple product: F_R(Q, t) = F_d(Q, t)F_l(Q, t). We note that a similar procedure was previously used to describe both NS [20] and DS data [21] of a qualitatively similar problem: the merging of the α relaxation and the local β process in glass-forming polymers. The same scheme has also been applied to describe MD-simulation data of water [13,22]. Here we assume that F_d(Q, t) = e^(−t/τ_d), with τ_d(Q) a diffusive time. For F_l(Q, t) we take F_l(Q, t) = A(Q) + [1 − A(Q)]e^(−t/τ_l),
where τ_l is a Q-independent relaxation time. Then, the relaxation contribution becomes F_R(Q, t) = [1 − A(Q)]e^(−t/τ_ld) + A(Q)e^(−t/τ_d), where the first term, with the effective local time τ_ld ≡ (1/τ_l + 1/τ_d)^(−1), means the local process modified by the presence of the diffusive process, and A(Q) is the relative amplitude of the pure diffusive process. According to Eq. (1) with this F_R(Q, t), χ″_Q(ν) has three contributions: a diffusive one, an effective local one, and the vibrational contribution χ″_Q,V(ν). To represent the latter, we have assumed a damped resonance, where ν_0 is the resonance frequency and k_0 its damping coefficient. For incoherent scattering, χ″_Q,inc(ν) = Σ_α χ″^α_Q,inc(ν), and for coherent scattering the corresponding sum is weighted by S(Q); ⟨u²_V⟩ + ⟨u²_l⟩ would then mean the MSA of the total nondiffusive process. We note that, although A(Q) may be regarded as the EISF of the local process [13], the available data do not allow going beyond an effective DWF interpretation. The coherent amplitudes display a more complex Q dependence involving some modulation with S(Q). The local component is highly visible in the intermediate Q regime [1 − A(Q ≈ 1 Å⁻¹) ≈ 0.32], as was predicted in the above-mentioned scenario for the αβ merging [20,21]. Figure 2(b) shows τ_d(Q), τ_l, and the vibrational time τ_V. This time is τ_V = 0.16 ps for D2O and τ_V = 0.12 ps for H2O. In the low-Q range where S(Q) is almost flat, τ_d(Q) obtained either from coherent or from incoherent scattering is the same, within the uncertainties. τ_d(Q) from incoherent scattering deviates from the purely diffusive behavior at high Q values, where it approaches τ_l. The collective τ_d(Q) exhibits, as expected, some kind of "de Gennes narrowing" [23] in the vicinity of Q_max. We note that in the glass-forming community the α relaxation is identified with the structural relaxation leading to the decay of S(Q, t) at the intermolecular distances, i.e., at Q_max. Therefore, τ_α is the average relaxation time of the relaxation contribution to S(Q_max, t)/S(Q_max). According to F_R(Q, t), τ_α = [1 − A(Q_max)]τ_ld(Q_max) + A(Q_max)τ_d(Q_max), where all the parameters correspond to coherent scattering. Taking A(Q_max) = 0.77 [see Fig. 2(a)], τ_α can be evaluated from this expression. Then, τ_α has contributions from both local (through τ_ld) and diffusive processes, although it seems to be dominated by τ_d, at least at 298 K. Comparing now the time scales identified by NS with those reported in the DS studies [5], we observe that (i) τ_D = 8.37 ps coincides with τ_d(Q) at Q* ≈ 0.7 Å⁻¹, and (ii) τ_l and τ_V are in the range usually reported for the additional high-frequency processes of ε″(ν). Then, we have tried to fit the DS spectrum by the same model used for the neutron susceptibility at Q* ≈ 0.7 Å⁻¹. We have fixed the two time scales involved [τ_D = τ_d(Q* = 0.7 Å⁻¹); τ_l = 1.3 ps] and the vibrational contribution of H2O. Thereby, the only free fitting parameters were the two amplitude factors C and A. As in Ref. [5], the fitting was restricted to ν ≤ 1 THz to minimize the influence of the peak at ≈5 THz not included in the model. Figure 1(a) shows the perfect description of the DS spectrum in the considered frequency range. Moreover, the subtraction of the fitting curve from the experimental data at ν > 2 THz (shown in the inset) can be well described by the expression and the parameter values given by Yada et al.
[3] for the intermolecular stretching vibrational peak. These are remarkable results taking into account that 4 out of 6 fitting parameters were already fixed. The values obtained, A = 0.98 and C = 0.98, translate into relative amplitudes to the DS spectrum (96.04% for the Debye peak, 1.96% for the effective local process, and 2% for the vibrational contribution) that are in the range of those previously reported [5]. Figure 1(a) also shows the three contributions of our model. Our effective local process (ld) and our vibrational contribution almost coincide with the processes called 2 and 3 in Ref. [5]. This agreement allows the univocal identification of these DS contributions; in particular, the vibrational nature of process 3 [19,24,25]. As expected [19], the relative contribution of this process for NS is larger than for DS. On the other hand, the above-introduced Q*, which provides a link between molecular diffusion and dipolar relaxation, can be expressed (see SM [26]) as Q* = [Dτ_D]^(−1/2), where D is the diffusion coefficient. With the values of D(T) [13] and τ_D(T) [8,30], Q* ≈ 0.7 Å⁻¹, independent of temperature in the range 270-330 K (see SM [26]). With some approximations Q* can also be expressed as Q* ≈ [(2/3)a²G_K/J_K]^(−1/2), i.e., in terms of a "single-molecule" magnitude (the effective radius a) and a factor G_K/J_K measuring the strength of many-body effects on dipolar relaxation (G_K is the Kirkwood static parameter and J_K the Kirkwood dynamical coupling [31]). If we use Q* = 0.7 Å⁻¹ and reported values [32,33] for a (∼1.3-1.44 Å), the above expression delivers G_K/J_K ∼ 1.5-2, in the range usually reported [34,35]. To get information about the atomic displacements at the time scales of the different processes, we have calculated the H mean squared displacement (MSD) ⟨r²_H(t)⟩ from S_inc,H(Q, t), by assuming the Gaussian approximation: ⟨r²_H(t)⟩ = −6 ln[S_inc,H(Q, t)]/Q². The results obtained from different Q values are shown in Fig. 3(a). Within the uncertainties, they lead to the same ⟨r²_H(t)⟩ for t ≥ 1 ps, supporting the approximation in this range. This figure also includes the MSD and the non-Gaussian parameter α_2(t) = 3⟨r⁴(t)⟩/(5⟨r²(t)⟩²) − 1 corresponding to H and O atoms, calculated from the MD simulations carried out by us and described in the Supplemental Material [36]. On the time scale of the Debye peak, ⟨r²(τ_D = 8.37 ps)⟩ ≈ 11.3 Å² for both atomic species. Thus, the collective dipolar relaxation can only take place when the atoms move, on average, large distances ξ_D = √⟨r²(τ_D)⟩ ≈ 3.4 Å, of the order of the intermolecular distance 2π/Q_max. Large atomic displacements of ≈3.3 Å were proposed in Refs. [38,39], the so-called "tetrahedral displacement mechanism", for explaining the Debye peak. Although our results prove the involvement of such large atomic displacements in the Debye peak, they cannot be identified with a characteristic hopping length as proposed for such a mechanism (see the Supplemental Material [40]). The different dynamic regimes displayed in Fig. 3(a) are highlighted in Fig.
3(b), where we have represented the effective power exponent y for H and O atoms, defined as y = d[log⟨r²(t)⟩]/d[log t], as a function of the mean displacement of H atoms, ξ_H = √⟨r²_H(t)⟩. We note that y = 2 corresponds to ballistic motion and y = 1 to pure diffusion. A deep minimum in y(t) would mean a spatial localization or delocalization process. Hydrogen atoms show a well-defined deep minimum at ξ_H^V = √⟨r²_H(τ_V)⟩ ≈ 0.5 Å. This first "cage" is vibrational, and the decaging would likely involve HB breaking. In fact, the critical time separating "fluctuation and breaking" of the HB network has been estimated as τ_c ≈ 0.3 ps [44,45], which roughly corresponds to the end of this caging [see Fig. 3(a)]. Figure 3(b) also shows that this vibrational caging for H atoms is hardly reflected for O atoms. The second cage corresponds to mean displacements in the range of the local processes, where ξ_H^l ≈ ξ_O^l. This cage, which is visible for both H and O atoms, is less defined, likely due to the convolution of local and diffusive processes. Delocalization from this smooth cage leads to pure diffusive behavior, which for O atoms is established at t ≳ τ_D (ξ_O ≳ ξ_D). In fact, the maximum of α_2^O(t), usually marking the crossover to diffusive behavior [46], takes place at ≈τ_l. Since the total reorientation of M(t) (collective Debye peak) requires large O displacements (≈3.3 Å), it is expected that the motions inside this cage (ξ_O^l ≈ 1.4 Å) only contribute to hindered rotations of M(t), which translate into the low-amplitude dipolar relaxation observed in this short-time, high-frequency range. In conclusion, we have achieved a unified description of NS and DS susceptibilities of liquid water, which (i) allows a microscopic interpretation of the different processes; (ii) identifies the molecular motions involved in the DS spectra; (iii) clarifies the nature of the actual structural relaxation time, τ_α; and provides a link between diffusion and collective dipolar relaxation through Q*. This description also opens a new way of approaching the dynamics of water under different conditions (supercooled, confined, etc.) and that of other H-bonded liquids. FIG. 1. Imaginary part of water susceptibility at 298 K. (a) DS (closed [5] and open [2] circles), Raman [6] (diamonds), and LS [7] (squares; T = 293 K) results. Lines are fitting curves of DS results from Ref. [5] and respective components: dashed-dotted lines correspond to the fit proposed in Ref. [5] with processes 1 (main Debye), 2, and 3; solid line to the fit obtained in this work with process d for Q* = 0.7 Å⁻¹ and processes ld and V (dashed lines). Inset: difference between the DS results and our fit, and model resonance given in Ref. [3] (line). (b) Incoherent NS results. (c) Coherent NS results. In (b) and (c), black solid lines are fits with the three components (red, diffusive; green, effective local; blue, vibrational) to the data at Q = 0.7 Å⁻¹ (circles, dashed lines) and 2.0 Å⁻¹ ≈ Q_max (squares, dotted lines). FIG. 3.
(a) MSD experimentally obtained for H atoms (different symbols for different Q values in the range 0.19 ≤ Q ≤ 2.0 Å⁻¹) and calculated from the simulations for H (solid line) and O atoms (dashed-dotted line). The computed α_2(t) are shown as dashed (H atoms) and dotted (O atoms) lines. (b) Effective power exponent y for H (solid line) and O atoms (dashed-dotted line) and mean displacement of O atoms ξ_O (dashed line) as functions of the mean displacement of H atoms ξ_H. Gray dotted line: ξ_O = ξ_H law.
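As a small numerical companion to the preceding analysis, the sketch below evaluates the Bose-factor conversion used to turn S(Q, ν) into a susceptibility and checks two of the quoted numbers: the Debye time implied by ν_max ≈ 20 GHz and the crossover wave vector Q* = [Dτ_D]^(−1/2). The self-diffusion coefficient D ≈ 2.3 × 10⁻⁹ m²/s is an assumed textbook value for water at 298 K, not a number taken from this article.

```python
import numpy as np

# Physical constants
h = 6.626e-34   # Planck constant (J s)
kB = 1.381e-23  # Boltzmann constant (J/K)
T = 298.0       # temperature (K)

# 1) Bose occupation factor used to convert S(Q, nu) into a susceptibility,
#    chi''_Q(nu) ~ S(Q, -nu) / n(nu), with n(nu) = (exp(h*nu/kT) - 1)^-1.
def bose_factor(nu_hz):
    return 1.0 / np.expm1(h * nu_hz / (kB * T))

# 2) Debye time from the peak frequency of the dielectric loss: tau_D = (2*pi*nu_max)^-1.
nu_max = 20e9                        # ~20 GHz
tau_D = 1.0 / (2 * np.pi * nu_max)   # ~8 ps
print(f"tau_D ≈ {tau_D * 1e12:.1f} ps")

# 3) Consistency check of Q* = (D * tau_D)^(-1/2) with an assumed literature value of D
#    and the paper's tau_D = 8.37 ps.
D = 2.3e-9                           # m^2/s, assumed self-diffusion coefficient at 298 K
Q_star = (D * 8.37e-12) ** -0.5      # in 1/m
print(f"Q* ≈ {Q_star * 1e-10:.2f} 1/Angstrom")  # ~0.7 A^-1, matching the quoted value
```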
4,107.6
2016-10-24T00:00:00.000
[ "Physics" ]
Magnetization control by angular momentum transfer from surface acoustic wave to ferromagnetic spin moments Interconversion between electron spin and other forms of angular momentum is useful for spin-based information processing. Well-studied examples of this are the conversion of photon angular momentum and rotation into ferromagnetic moment. Recently, several theoretical studies have suggested that the circular vibration of atoms work as phonon angular momentum; however, conversion between phonon angular momentum and spin-moment has yet to be demonstrated. Here, we demonstrate that the phonon angular momentum of surface acoustic wave can control the magnetization of a ferromagnetic Ni film by means of the phononic-to-electronic conversion of angular momentum in a Ni/LiNbO3 hybrid device. The result clearly shows that the phonon angular momentum is useful for increasing the functionality of spintronic devices. A ngular momentum is conserved when a system has rotational symmetry. While this law is, strictly speaking, broken in crystals, approximate conservation remains valid in the microscale range 1 . For example, when a spinpolarized electrical current is injected into a microscale ferromagnet, the spin angular momentum of the conduction electrons is transferred to ferromagnetic localized moment (Fig. 1a). This mechanism is used to control magnetic storage in magnetoresistive random access memories 2 . The angular momentum of a rigid body rotation 3 and photon 4,5 can be also used to control the magnetization. One might wonder whether it is possible to control the magnetization via the angular momentum transfer from phonons [6][7][8][9] (Fig. 1b). The phonon angular momentum is activated by the breaking of time-reversal or spatial-inversion symmetry 6,[10][11][12][13][14] . In timereversal symmetry-broken ferromagnets, the polarization of a transverse acoustic wave (low-energy phonon mode) is observed to rotate while propagating along the magnetization direction 15 , which indicates an eigenstate with circular polarization. Similar circularly polarized phonons are also observed in spatial inversion symmetry-broken chiral materials, corresponding to the phonon version of natural activity 16 . The phonon angular momentum is also emergent on the surface of a substance. A Rayleigh-type surface acoustic wave (SAW) has elliptically polarized displacement 17 , which indicates that the phonon angular momentum is finite (Fig. 1c). The angular momentum is parallel to the vector product of the SAW wave vector k and surface normal vector, and it shows a sign change when k is reversed 18 . Here, we use the SAW current to demonstrate the conversion from phonon angular momentum to magnetization. Results Nonreciprocal propagation of surface acoustic wave. Figure 2a shows the SAW device used in this work. This device is composed of a piezoelectric LiNbO 3 substrate, two interdigital transducers (IDTs) and a ferromagnetic Ni film [19][20][21][22] . The xyz-coordinate system is defined as shown in the right panel. To understand the coupling between the SAW and ferromagnetism, we demonstrate how the magnetization direction of the Ni film affects the SAW propagation. Figure 2b, c, e, f shows the SAW transmission in magnetic fields nearly parallel to the x-axis. The magnetic field angle ϕ is slightly tilted (ϕ = 2 ∘ ) to z direction from the x-axis in Fig. 2b, c whereas the tilted direction is reversed in Fig. 2e, f. 
The magnetic field increased from −400 to 400 mT (decreased from 400 to −400 mT) during the measurements shown in Fig. 2c, f (Fig. 2b, e). Before discussing the SAW transmission, let us explain the variation of magnetization in the magnetic fields. The insets illustrate the expected magnetization direction in the magnetic field sweep. Considering the shape anisotropy of the Ni film, the easy and hard axes are the z-and y-axis, respectively. In this case, the magnetization variation is very sensitive to the tilt direction and the sign of the magnetic field variation. In decreasing the magnetic field at ϕ = 2 ∘ (Fig. 2b), the tilt angle of the magnetization θ is positive, and the magnitude increases. At zero magnetic field, the magnetization points along the +z direction. When the magnetic field changes its sign, the magnetic state with negative θ is more energetically stable. Therefore, the sign of θ is abruptly reversed at some negative field, which is denoted as θ flop. In an increasing field (Fig. 2c), the magnetization points to the −z direction at zero magnetic field and θ flop shows up at a positive magnetic field. For the ϕ = −2 ∘ measurements (Fig. 2e, f), the magnetization shows similar variation but the sign of θ and the magnetization direction at zero field are opposite. Next, we discuss the SAW transmission. We measured the transmission intensity from IDT1 to IDT2 T +k (H) and that from IDT2 to IDT1 T −k (H) at various magnetic fields H (see Supplementary Information for precise definitions of T +k (H) and T −k (H)). For all the measurements, T +k (H) and T −k (H) showed a broad dip around ± 90 mT. This was ascribed to the ferromagnetic resonance (FMR) excitation by the acoustic wave via magnetoelastic coupling 19,20 . The discontinuous changes at −70 mT in Fig. 2b, e and those at +70 mT in Fig. 2c, f were caused by the θ flops mentioned above. Importantly, the intensity of acoustically excited FMR depends on the propagation direction of the SAW. We plot the difference of transmittance T NR (H) = T +k (H) − T −k (H) at ϕ = +2 ∘ and −2 ∘ in Fig. 2d and g, respectively. T NR (H) was independent of the magnetic field sweep direction except for the region around the θ flop fields, but it showed the opposite sign when either the sign of the field or ϕ was reversed. This phenomenon is denoted as nonreciprocal SAW propagation induced by the simultaneous breaking of timereversal and spatial inversion symmetries 21,22 . In this case, the ferromagnetism and surface break the time reversal and spatial inversion symmetries, respectively. The nonreciprocity originates microscopically from the different polarizations of the +k and −k modes. As mentioned above, the SAW has elliptical polarization, and the rotational direction is reversed by the reversal of k. On the other hand, FMR can be excited only by an effective field with right-handed circular polarization. These effects were the origin of the difference in the acoustic FMR intensity between +k and −k SAWs. Conversely, the ratio of nonreciprocity to absorption reflects the ellipticity of the SAW polarization. Numerical demonstration of magnetization control. Now we discuss the inverse effect of the nonreciprocal propagation. Intuitively, the inverse effect would be control of the time reversal symmetry or magnetization by using the spatial inversion symmetry-broken surface state and the unidirectional SAW flux. 
To demonstrate this, we consider the magnetization variation in the field-decreasing process with SAW flux after applying a strong magnetic field along the x-axis (Fig. 3a). In this case, the magnetization points along the x-axis at first, and then it is tilted in either the +z or −z direction due to the shape anisotropy. The SAW flux along the +x or −x direction is expected to control whether the magnetization is tilted to the +z or −z direction. To confirm this conjecture we have performed a numerical simulation. The magnetization should vary following the Landau-Lifshitz-Gilbert (LLG) equation, in which m = (m_x, m_y, m_z), γ, α, and M_s are a uniform magnetization vector, the gyromagnetic ratio, the Gilbert damping, and the saturation magnetization, respectively. The effective field entering the equation includes the external magnetic field (the Zeeman term), the magnetic anisotropy F_a, and a magnetoelastic contribution F_me. For simplicity, we assume uniaxial magnetic anisotropy F_a = −K m_z² (K is a constant) and the magnetoelastic coupling energy for a polycrystal given in refs. 20,23, where b is the magnetoelastic coupling constant, and m_i and e_ij are the ith component of the magnetization and the strain tensor, respectively. For a Rayleigh-type SAW propagating along the x direction, the non-vanishing components of the strain tensor are e_xx, e_xy, and e_yy. By introducing the spin moment S = (S_x, S_y, S_z) = −m/(gμ_B), S_± = S_x ± iS_y, and e_x± = e_xx ± 2i e_xy, F_me can be reduced to a form written in terms of S_± and e_x±; here, g and μ_B are the g-factor and the Bohr magneton, respectively. This formula clearly shows the angular momentum transfer from the phononic to the magnetic system (see Supplementary Information for details). Figure 3b shows the calculated magnetic field variation of m_z under a SAW current. We assumed the x and y components of the SAW-induced dynamical displacement to be purely circular, u_x(t) = u_0 cos(kx − ωt) and u_y(t) = −sgn(k) u_0 sin(kx − ωt), and neglected the decay along the y direction for simplicity. The sign of the rotational motion depends on the sign of the wave vector k. Then the relevant strains are expressed as e_xx(t) = ∂u_x/∂x = −k u_0 sin(kx − ωt) and e_xy(t) = (1/2)(∂u_x/∂y + ∂u_y/∂x) = −(|k|u_0/2) cos(kx − ωt). The other parameters used for the numerical calculation are shown in Methods. At t = 0, a strong magnetic field is applied along the x-axis, and we assumed m = (M_s, 0, 0). Then the magnetic field is slowly decreased as H_0 exp(−t/t_0). m_z evolves below the anisotropy field 2M_sK. Importantly, the m_z direction depends on whether the SAW polarization is clockwise (CW) or counter-clockwise (CCW), which correspond to SAWs propagating along the +x and −x directions, respectively. This demonstrates deterministic control of the magnetization by selecting the SAW direction. The mechanism involved seems to be the damping torque, which is dependent on the direction of polarization rotation. Circular polarization induces rotational motion of the magnetization in the xy plane. The damping torque due to the rotational motion, τ_z = (m × dm/dt)_z, is parallel or antiparallel to the z-axis, and the sign depends on the direction of rotation. Note that these results are independent of the phase of the SAW (see Supplementary Information) and the effect is thus different from precessional switching 24. Experimental demonstration of magnetization control. Next, we describe the corresponding experimental demonstration of magnetization control by using the phonon angular momentum.
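Before moving on to the experimental part, the sketch below illustrates the kind of LLG integration described in the preceding paragraphs: a unit magnetization is relaxed in a decaying in-plane field while a circularly rotating SAW strain enters through a magnetoelastic field. Only α = 0.064 and M_s = 370 kA/m follow values quoted in Methods; the anisotropy constant, magnetoelastic coupling, strain amplitude, field decay, the explicit form of the magnetoelastic field, and the simple Euler integrator are illustrative assumptions of ours, so whether this toy model reproduces the switching of Fig. 3b depends on the constants chosen.

```python
import numpy as np

# alpha and Ms follow the values quoted in Methods; every other constant is an assumption.
gamma = 1.76e11            # gyromagnetic ratio (rad s^-1 T^-1)
alpha = 0.064              # Gilbert damping
Ms    = 370e3              # saturation magnetization (A/m)
K     = 4e3                # uniaxial anisotropy constant along z (J/m^3), assumed
b_me  = 1e7                # magnetoelastic coupling constant (J/m^3), assumed
mu0   = 4e-7 * np.pi
omega = 2 * np.pi * 2.9e9  # SAW angular frequency (2.9 GHz)
e0    = 2e-4               # SAW strain amplitude, assumed
H0, t0 = 0.4 / mu0, 20e-9  # ~400 mT initial field decaying as H0*exp(-t/t0), t0 assumed

def h_eff(m, t, sign_k):
    """Effective field (A/m): decaying Zeeman field along x, uniaxial anisotropy along z,
    and a magnetoelastic field from the circularly rotating SAW strain evaluated at x = 0."""
    exx = -e0 * np.sin(-omega * t)
    exy = -sign_k * 0.5 * e0 * np.cos(-omega * t)   # sign flips with the SAW direction
    h_zee = np.array([H0 * np.exp(-t / t0), 0.0, 0.0])
    h_ani = np.array([0.0, 0.0, 2.0 * K * m[2] / (mu0 * Ms)])
    # H_me = -(1/(mu0*Ms)) dF_me/dm with the assumed F_me = b_me*(exx*mx^2 + 2*exy*mx*my)
    h_me = -(b_me / (mu0 * Ms)) * np.array([2 * exx * m[0] + 2 * exy * m[1],
                                            2 * exy * m[0],
                                            0.0])
    return h_zee + h_ani + h_me

def llg_rhs(m, t, sign_k):
    """Explicit LLG: dm/dt = -gamma/(1+alpha^2) [m x B + alpha m x (m x B)], B = mu0*H_eff."""
    B = mu0 * h_eff(m, t, sign_k)
    mxB = np.cross(m, B)
    return -gamma / (1.0 + alpha**2) * (mxB + alpha * np.cross(m, mxB))

def run(sign_k, dt=1e-12, t_end=120e-9):
    m = np.array([1.0, 0.0, 0.0])          # start saturated along +x
    for step in range(int(t_end / dt)):
        m = m + dt * llg_rhs(m, step * dt, sign_k)
        m /= np.linalg.norm(m)             # keep the magnetization length fixed
    return m

for sign_k in (+1, -1):
    direction = "+x" if sign_k > 0 else "-x"
    print(f"SAW along {direction}: final m_z = {run(sign_k)[2]:+.2f}")
```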
We first applied the magnetic field as large as 400 mT along the xaxis and set the SAW current along either the +x or −x direction (from IDT1 to IDT2 or IDT2 to IDT1), of which the excitation power and frequency were 25 dBm and 2.906 GHz, respectively. To be precise, the magnetic field seemed to be slightly tilted in the +z direction but the angle between the magnetic field and the x-axis was less than 0. 5 ∘ (see Supplementary Information). Then we decreased the magnetic field to zero at a rate of 0.01 T s −1 . Hereafter, we call the sequence of these operations "poling procedure". To detect the magnetization direction after the poling procedure, we used the magnetoresistance. Figure 3c shows the magnetoresistance R(H) measured in magnetic fields parallel to the electric current in the Ni film. It shows a butterfly-shaped hysteresis loop. Since the magnetoresistance is dependent on the magnitude of m z , the magnetization curve with a finite coercive force can be inferred as shown in Fig. 3d. The magnetization should be saturated in the high field region where R(H) is constant. The decreases of magnetoresistance before the discontinuous jump in Fig. 3c corresponds to the gradual decrease of |m z | before the flip. One can distinguish the magnetization state at zero magnetic field (m|| +z or m|| −z) by measuring the magnetoresistance along the z-axis. When the resistance decreases with increasing magnetic field from H = 0 and discontinuously increases at a certain magnetic field, the magnetization should have pointed in the −z direction at zero field. On the other hand, when it increases continuously without any discontinuity, the magnetization direction was opposite. If the magnetic field is decreased from H = 0, the field dependences of resistance for the magnetic states of m|| +z and m|| −z should be reversed. To probe the magnetization in this way, we rotated the device by 90 ∘ around the y-axis after the poling procedure and measured the field dependence of the resistance along the z-axis while increasing or decreasing the field from 0 mT. Note that the positive magnetic field points the +z direction after the rotation. Figure 3e and f shows the magnetic field dependence of ΔR(H) = R(H) − R(0) after the field poling with SAW currents along the +x and −x directions, respectively. In the case of the SAW current along the +x direction, the resistance decreased at first with increasing the field from 0 mT and showed a discontinuous increase around 10 mT whereas it increased with decreasing magnetic field from 0 mT almost continuously. Conversely, after the poling with the SAW current along the −x direction, the resistance increased (decreased) with increasing (decreasing) magnetic field from 0 mT. A discontinuous increase of resistance was observed only when the magnetic field was decreased. These results demonstrate that the SAW currents along the +x and −x direction aligned the magnetization along the −z and +z directions, respectively. Thus, control of the magnetization by means of the SAW current was realized. The SAW direction dependence of the controlled magnetization cannot be explained by the effects of some static strains or other trivial effects, but it is naturally explained by the effects of phonon angular momentum transfer. The magnetization direction was opposite to the phonon angular momentum direction, which is consistent with the numerical simulation shown in Fig. 3b. 
While the slight tilting magnetic field induces the energy imbalance between m|| +z and m|| −z states, the angular momentum transfer from SAW could reversely control the magnetization overcoming the energy imbalance. For more details about the input power and angle ϕ dependence, see the Supplementary Information. To confirm the experimental demonstration of magnetization control, we also probed the magnetization after the poling procedure by using nonreciprocal SAW transmission T NR (H). Figure 3g and h shows T NR (H) in increasing and decreasing the magnetic field almost parallel to the x-axis, respectively. Precisely speaking, the angle of the magnetic field ϕ seems to have slightly deviated from zero to the positive side (ϕ < 0. 5 ∘ ) because the magnetic field dependence of T NR (H) was similar to the case of ϕ = 2 ∘ . The magnetic hysteresis became larger than that at ϕ = 2 ∘ , and the discontinuous sign change overlapped with the dip or peak. One can probe the magnetization at zero magnetic field by using the magnetic field dependence of T NR (H). If T NR (H) shows a simple dip as the magnetic field is increased from zero, the magnetization direction at 0 mT was parallel to +z. On the other hand, if T NR (H) shows a peak followed by a discontinuous sign change, the magnetization pointed in the −z direction at zero field. Figures 3i and j show T NR (H) after the poling with SAW currents along the +x and −x directions, respectively. As shown in Fig. 3i, T NR (H) showed a peak and a sign change around 80 mT after poling with the SAW current along the +x direction. Therefore, the magnetization pointed in the −z direction at H = 0. On the other hand, T NR (H) after poling with a SAW current along the −x direction showed a simple dip, indicating that the magnetization pointed along the +x direction at H = 0. These results are consistent with the magnetoresistance measurements. The same measurement was also performed at a small negative angle, and the same result was found (see Supplementary Information). Discussion We have demonstrated magnetization control by the angular momentum transfer from a SAW to ferromagnetic spin moments. However, it should be noted that the volume fraction of the controlled magnetization seems to be less than 100 % in the experiments. The magnitude of the resistance discontinuities in Fig. 3e and f is nearly 40 % of that in Fig. 3c. The magnetic field variations shown in Fig. 3i and j is weaker than those in Fig. 3g and h. The volume fraction of the controlled ferromagnetic domain seems to be several tens of percent. A number of related experimental and theoretical studies have already been reported. While the interconversion between the mechanical rotation and spin moment is known as the Einstein-de Haas effect and the Barnet effect 3,25 , the present result demonstrates the conversion from the angular momentum of a microscopic phonon excitation. More recently, Kobayashi et al. reported the generation of alternating spin current from a SAW 26 . This phenomenon is qualitatively different from the present result, which originates from the time-independent angular momentum in a SAW. The concept of phonon angular momentum has recently been theoretically developed in the spintronics field 6,6-14 . The present results experimentally demonstrate an important functionality of angular momentum, the conversion to ferromagnetic spin moments, which shows the validity of phonon angular momentum. 
If the effective magnetic field produced by the SAW becomes large enough, even SAW application alone, without the help of a magnetic field, can control the magnetization. This might be achieved by optimization of the device structure and material. Recent literature reported that similar optimization makes the closely related phenomenon of nonreciprocal SAW propagation gigantic 22,27. The magnetization manipulation by SAW seems useful for transferring the information carried by the SAW or microwave signal to magnetic storage. In this sense, it is expected to provide a bridge between telecommunication technology and spintronics, because SAW devices are indispensable in contemporary telecommunications technology. Methods Device fabrication. The SAW device in this work was fabricated by electron beam lithography. The device substrate was Y-cut LiNbO3 and the SAW propagation direction was along the Z-axis of the substrate. Both the IDTs and electrodes were made of Ti (5 nm) and Au (20 nm). One IDT had 200 pairs of 100 μm fingers, and the distance between the two IDTs was 500 μm. The finger width and spacing of the IDTs were both 300 nm. The corresponding wavelength and frequency were 1.2 μm and 2.9 GHz, respectively. A Ni film was sputtered between the two IDTs on the LiNbO3 substrate and was connected to six electrodes for resistance measurement. The thickness, width, and length of the Ni film were 30 nm, 10 μm, and 175 μm, respectively. After the Ni film was sputtered, the device was kept at 200 °C for 30 min to eliminate strain in the Ni film arising from the sputtering process 28. Measurements of SAW transmission and resistance. All of the measurements in this work were done at 100 K. The SAW transmission was measured with a vector network analyzer. The microwave power was 10 dBm in Fig. 2b-g and 3g and h, and was −10 dBm in Fig. 3i and j. The magnetoresistance was measured using a lock-in amplifier with a frequency of 11.15 Hz. Numerical simulations. The LLG equation was numerically solved by using Mathematica. We set realistic values for the coefficients used in the calculation. The saturation magnetization and Gilbert damping coefficient were M_s = 370 kA/m (ref. 23) and α = 0.064 (ref. 29), respectively. The magnetic anisotropy constant was likewise set to a realistic value. Data availability. The data that support the findings of this study are available from the corresponding author upon reasonable request. Code availability. The code that supports the findings of this study is available from the corresponding author upon reasonable request.
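As a quick consistency check on the IDT geometry quoted in Methods, the stated wavelength and frequency imply a SAW phase velocity v = fλ; the few lines below simply evaluate that product (the resulting ≈3.5 km/s figure is our own arithmetic, not a value stated in the article).

```python
# SAW phase velocity implied by the IDT design: v = f * lambda.
frequency = 2.9e9     # Hz, from Methods
wavelength = 1.2e-6   # m, from Methods (IDT period)
velocity = frequency * wavelength
print(f"implied SAW velocity: {velocity:.0f} m/s")  # ~3480 m/s, typical of Rayleigh SAWs on LiNbO3
```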
4,589.2
2020-07-07T00:00:00.000
[ "Physics" ]
Enriched Environment Attenuates Pyroptosis to Improve Functional Recovery After Cerebral Ischemia/Reperfusion Injury Enriched environment (EE) is a complex containing social, cognitive, and motor stimuli. Exposure to EE can promote functional recovery after ischemia/reperfusion (I/R) injury. However, the underlying mechanisms remained unclear. Pyroptosis has recently been identified and demonstrated a significant role in ischemic stroke. The purpose of this study was to explore the effect of EE on neuronal pyroptosis after cerebral I/R injury. In the current study, middle cerebral artery occlusion/reperfusion (MCAO/R) was applied to establish the cerebral I/R injury model. Behavior tests including the modified Neurological Severity Scores (mNSS) and the Morris Water Maze (MWM) were performed. The infarct volume was evaluated by Nissl staining. To evaluate the levels of pyroptosis-related proteins, the levels of GSDMD-N and nod-like receptor protein 1/3 (NLRP1/3) inflammasome-related proteins were examined. The mRNA levels of IL-1β and IL-18 were detected by Quantitative Real-Time PCR (qPCR). The secretion levels of IL-1β and IL-18 were analyzed by ELISA. Also, the expression of p65 and p-p65 were detected. The results showed that EE treatment improved functional recovery, reduced infarct volume, attenuated neuronal pyroptosis after cerebral I/R injury. EE treatment also suppressed the activities of NLRP1/NLRP3 inflammasomes. These may be affected by inhabiting the NF-κB p65 signaling pathway. Our findings suggested that neuronal pyroptosis was probably the neuroprotective mechanism that EE treatment rescued neurological deficits after I/R injury. INTRODUCTION Stroke is a disease with the highest mortality and disability rates in the world. Ischemic stroke is responsible for the majority of strokes (Campbell et al., 2019;Stinear et al., 2020). Despite the increasing improvement in the treatment of stroke, many survivors remain with residual functional deficits (Winstein et al., 2016). Therefore, the need for effective stroke rehabilitation is essential for the patients to deal with life challenges after stroke. Enriched environment (EE) is a complex containing social, cognitive, and motor stimuli (Kempermann, 2019). EE provides laboratory animals with greater living space, further sensory stimulation, more possibilities for social interaction, and increasing opportunities for learning than the standard environment (Dahlqvist et al., 2003). The benefits of EE in neurological diseases have been extensive studied (Begenisic et al., 2015;Jungling et al., 2017;Song et al., 2017;Shin et al., 2018). In the EE, the ability of learning and memory has been significantly improved while the anxiety behavior was reduced (Benaroya-Milshtein et al., 2004). Meanwhile, exposure to EE enhanced the experience-dependent plasticity of the brain and promoted the recovery of cognitive and motor functions after ischemia/reperfusion (I/R) injury (Livingston-Thomas et al., 2016). EE improved cognitive function via neurogenesis and angiogenesis by regulating the activation of PI3K/AKT/GSK-3/β-catenin signaling pathways and intrinsic axon guidance molecules following I/R (Zhan et al., 2020). Moreover, EE mediated neurogenesis by inhibiting the production and secretion of IL-17A from astrocyte via NF-κB/p65 after I/R injury (Zhang et al., 2018). EE also facilitated cognitive recovery through remodeling bilateral synaptic after ischemic stroke . 
However, there has been little research on the relationship between EE-mediated ischemic stroke recovery and cell death. Evidence has shown that EE reduced spontaneous apoptotic cell death in the rat hippocampus (Young et al., 1999). A recent study has demonstrated that enriched environmentinduced neuronal autophagy boosted the post-stroke recovery of neurological function (Deng et al., 2021). Our previous studies have illustrated that EE reduced neuronal apoptosis conducting to the superior recovery after I/R injury . However, the mechanisms by which EE attenuated cell death following stroke remained unclear. Pyroptosis is a type of lytic cell death that features cell swelling, rapid rupture of the plasma membrane, and release of proinflammatory intracellular contents as a result of cleaving pore-forming proteins gasdermin D (GSDMD) following by activation of inflammasomes (Shi et al., 2017). Inflammasomes are large multimolecular complexes formed of a cytosolic sensor (nucleotide-binding domain and leucine-rich-repeat-containing [NLR] Pyrin domain containing NLRP1 and NLRP3), an adaptor protein (apoptosis-associated speck-like protein containing a CARD [ASC]), and an effector caspase pro-caspase-1 (Rathinam and Fitzgerald, 2016). After ischemic stroke attacks, the expression of inflammasomes is abundant in the brain (Abulafia et al., 2009). Pro-caspase-1 is activated through NLRP1 and NLRP3 signal cleaving GSDMD into the N-terminal gasdermin-N domain and the C-terminal gasdermin-C domain (Shi et al., 2015;Wang et al., 2020b). Then the pore-forming GSDMD-N domain causes membrane lysis inducing pyroptosis (Ding et al., 2016;Sborgi et al., 2016). Also, activated caspase-1 mediates the maturation of Interleukin-1β (IL-1β) and which are released into the extracellular environment subsequently (Brough and Rothwell, 2007). In recent years, increasing evidence indicated that inflammasome-mediated pyroptosis following ischemic stroke performed a crucial role in the course of functional recovery (Xu et al., 2019;Li et al., 2020). In addition, the activation of inflammasome was considered an essential step for neuroinflammation in subsequent brain injury (Walsh et al., 2014). Increasing expression of NLRP1 and NLRP3 inflammasome has been confirmed in neurons, microglia, and astrocytes (Barrington et al., 2017). Particularly, NLRP1 and NLRP3 inflammasome-mediated neuronal pyroptosis performed an increasingly crucial role in the course of ischemic stroke (Yang-Wei Fann et al., 2013;Ito et al., 2015). As a widely studied inflammation-associated transcriptional element, NF-κB regulated numerous genes and signaling pathways associated with inflammation (Afonina et al., 2017). Furthermore, emerging evidence suggested that the elevated expression of NLRP1 and NLRP3 inflammasome proteins could be modulated by the NF-κB signaling pathway in ischemic stroke (Gross et al., 2011;Fann et al., 2018). Although the accumulated evidence has shown that pyroptosis was involved in ischemic stroke injury, the relationship between neuronal pyroptosis and EE-mediated functional recovery following ischemic stroke was still unknown. Since EE was neuroprotective and pyroptosis was involved in the progress of ischemic stroke. We formulated the hypothesis that post-stroke neurological outcomes could be improved by EE treatment to attenuate neuronal pyroptosis. In the current study, we investigated pyroptosis-related protein expression levels in the penumbra in a rat I/R injury model. 
Additionally, EE decreased the expression levels of NLRP1, NLRP3, and GSDMD-N. We firstly demonstrated that EE rescued neurological deficits after I/R injury involving the suppressing of neuronal pyroptosis. Generally, our findings indicated EE as a promising therapeutic method for ischemic stroke-mediated inflammasome activity. Animals Male Sprague-Dawley rats (6-7 weeks old, 200-220 g) from Beijing Vital River Laboratory Animal Technology Company were kept in individually ventilated cages (temperature: 20 ± 1 • C, relative humidity: 55 ± 5%, lighting period: 8:00 ∼ 20:00) with free access to water and rat feed. Upon arrival, all rats had a 3-day acclimation before receiving the operation. Following acclimation, the animals were numbered and randomly divided into various groups: the sham + standard condition group (SSC), the sham + enriched environment group (SEE), the ischemia/reperfusion + standard condition group (ISC), and the ischemia/reperfusion + enriched environment group (IEE). The schematic representation of the experimental timeline and the setting of EE were shown in Figure 1. All animal experimental procedures were approved according to the Animal Care and Use Committee of Wuhan University. All efforts were made to minimize the mortality of animals and their suffering. Middle Cerebral Artery Occlusion and Reperfusion Following adapting, male rats were subjected to transient Middle Cerebral Artery Occlusion and Reperfusion (MCAO/R) injury as previously described (Longa et al., 1989). All experimental animals were anesthetized by isoflurane through a face mask (inducing concentration: 4%, maintaining concentration: 2%, respectively, in 2:1 N 2 O:O 2 ). An approximately 2 cm incision was made in the middle of the neck. The common carotid artery (CCA), internal carotid artery (ICA), and external carotid artery (ECA) were meticulously separated. And a 5-0 silk thread was used to ligate the left ECA. Then ligating the CCA with 5-0 silk thread, and clamping it at the bifurcation of the ICA with a blood vessel clip. The CCA was cut, and a monofilament nylon filament (Cinontech) was gently inserted into the ICA to approximately 18-20 mm distal to the carotid artery bifurcation. Then the left MCA was occluded. After 90 min, the filament was carefully removed to initiate reperfusion. All surgery procedures except insertion of the nylon filament were performed on rats in the sham-operation group. After recovering from anesthesia, all rats were assessed by a five-point neurological deficit score in a blinded fashion (Longa et al., 1989). Rats with scores of 1-3 points were included in this study, while the rats with scores of 0 or 4 were excluded from the study. N = 18/group in this study. All the experimental procedures in vivo were approved by The Animal Care and Use Committee of Wuhan University. Housing Conditions Twenty four hour after MCAO/R, the rats were returned to their respective housing conditions. The rats of the SSC, ISC groups were kept in the standard conditions (SC) while the rats in the SEE, IEE groups were kept in the enriched environment. The details of SC and EE were as follows: Standard Conditions The rats were kept in individually ventilated cages (length: 44 cm, width: 32 cm, height: 20 cm) with bedding for animals inside. Three rats were kept in one cage. 
Enriched Environment The animals are placed in a stainless-steel net cage (length: 75 cm, width: 90 cm, height: 50 cm) which contained ladders, platforms, swings, colorful balls, different-shaped wooden blocks, plastic tunnels, and a running wheel for sensorimotor stimulations. And 6-10 rats were grouping housed in the EE for social stimulations. The type and location of the items in the cage were changed three times a week to ensure novelty and exploration ( Figure 1B). For spatial learning and memory testing, Morris Water Maze (MWM) test was performed on days 21-26 following I/R in a blinded situation (Morris, 1984). A round black platform (9 cm diameter, 30 cm height) was concealed in a pool (150 cm diameter, 60 cm deep, water temperature: 20 ± 1 • C). On day 1-5, rats were dropped into the water from four different quadrants in turn while the position of the platform was fixed. The mean of escaping tendency to the platform in the four trials was recorded. The rat was required to stay on the platform for 15 s when reaching the platform within 60 s. The rat was guided to the platform for 15 s when reaching the platform exceeding 60 s. On day 6, the rats underwent the probe trial that allowed them to swim freely for 60 s without the platform. Swimming trajectories and the average times to reach the submerged platform were captured using an Animal Video Tracking Analysis System (n = 12/group) (Anilab Scientific Instruments Co., Ltd., China). Nissl Staining After being fixed with 4% paraformaldehyde, tissues were embedded in paraffin cut into seriatim 4-µm-thick coronal sections with adjacent sections separated by 400 µm. The sections were placed in xylene, xylene, xylene, 100, 95, and 80% ethanol for 5 min each and rinsed under running water for 5 min. Then the sections were treated with Cresyl Violet Solution (Servicebio, China) for 3min. After washing in running water and drying thoroughly, the sections were coverslipped with neutral resin. The stained sections were scanned and measured with the ImageJ software. The total infarct volume was calculated by the formula as previously described (n = 6/group) . Western Blotting Protein samples were harvested from penumbra. Tissues were ground separately in RIPA buffer comprising protease and phosphatase inhibitors (cocktails and PMSF from Aspen) for 30 min at 4 • C. A BCA kit (Aspen) was used to detect the total protein concentration of each sample. Proteins were processed by SDS-PAGE (10-12.5%) and electro-blotted onto a PVDF membrane. And the membrane was then incubated in blocking buffer (5% skim milk) for 1 h at room temperature and incubated with primary antibodies including GSDMD (Abclonal), NLRP1, NLRP3, Caspase-1 (Novus), IL-1β,IL-18 (R&D), total p-65 (Proteintech),phosphorylated p-65 (Abclonal), GAPDH (Proteintech) overnight at 4 • C. After washing three times, the membrane was incubated in secondary antibody for 1 h at 24 • C. The proteins were scanned with a Bio-Rad system. ImageJ software was used to quantify protein levels which were normalized to GAPDH (n = 6/group). Immunofluorescence Assays Brain paraffin sections (4 µm) were hydrated and Tris/EDTA buffer performed heat-mediated antigen retrieval for 20 min. The sections blocked with 5% BSA for 1 h were incubated with Neun (Proteintech) along with primary antibodies Caspase-1 (Novus) overnight at 4 • C and subsequently incubated in fluorescent secondary antibodies (Proteintech) for 1 h at 24 • C. DAPI (Antgene) was used for nuclei staining. 
Images were taken with an Olympus BX53 microscope (Olympus). Positive cells were counted using ImageJ software (n = 6/group). Immunohistochemistry Brain paraffin sections (4 µm) were hydrated and Tris/EDTA buffer performed heat-mediated antigen retrieval for 20 min. The sections were then processed with 3% H 2 O 2 for 10 min. The sections blocked with 5% BSA for 1 h were incubated with primary antibodies GSDMD (Abclonal), phosphorylated p65 (Abclonal) overnight at 4 • C, and then incubated in HRPlabeled secondary antibodies (Proteintech). DAB (Servicebio) was utilized for dyeing while hematoxylin was used for nuclei staining. Images were acquired using the Olympus BX53 microscope (Olympus). The distribution and intensity of GSDMD and p-p65 staining was described by a semiquantitative score in a blinded fashion (0-negative, 1-weak, 2-moderate, 3strong, and 4-strong and widely distributed) (n = 6/group) (Xu et al., 2020). Enzyme-Linked Immunosorbent Assay (ELISA) Rat (n = 3/group) penumbra tissues were separated and homogenized with PBS. After being centrifuged at 5000 rpm for 10 min at 4 • C, the supernatants were collected. The secretion levels of inflammatory cytokines (IL-1β and IL-18) were analyzed by ELISA (Elabscience). Following the instructions on the ELISA kit, the optical density (OD) at 450 nm was measured by an enzyme-labeled instrument (PerkinElmer Singapore Pte. Ltd). Quantitative Real-Time PCR Rat (n = 3/group) penumbra tissues were separated and homogenized with Trizol reagent (Invitrogen, United States). The PrimeScript RT Reagent Kit (RR047A, Takara, Japan) was used for the reverse transcription of RNA. According to the manufacturers' protocol, we performed qPCR to detect the mRNA levels using SYBR Premix Ex Taq II (RR820A, Takara) in a 2.1 Real-Time PCR System (Bio-Rad, United States). The relative Ct method was adopted to compare the data and GAPDH was set as internal control. The primer sequences were listed as follows: IL Statistical Analysis SPSS 23.0 software and GraphPad Prism 8.0 were used for data analysis. Analysis of mNSS was implemented by a nonparametric Kruskal-Wallis test. Analysis of escape latency in the MWM test was implemented by two-way repeated-measures ANOVA followed by Tukey's post hoc test. And differences between groups were compared by two-tailed Student's t-test and one-way ANOVA followed by Tukey's post hoc test. All experimental data are expressed as mean ± standard deviation (SD). Statistical significance was determined as p < 0.05. RESULTS Enriched Environment Improved Long-Term Neurobehavioral Function After Ischemia/Reperfusion Injury Ischemia/reperfusion injury caused marked Behavioral dysfunction (Yirmiya and Goshen, 2011). To determine whether EE treatment improved functional recovery after I/R injury, a series of behavioral tests were performed. To evaluate the neurological function, mNSS was assessed on day 3, 7, 14, 21 after I/R injury. Rats had persistent sensorimotor defects after MCAO/R operation and EE treatment could effectively reverse the defects (Figure 2A; p < 0.001). To assess long-term spatial learning and memory functions, MWM was assessed on day 21-26 after I/R injury. In the spatial learning phase, the escape latency of rats in all groups decreased as the training days progressed. And MCAO/R rats spent more time reaching the platform compared with sham-operated rats on days 1-5 of training. 
However, rats housed in EE demonstrated the superior performance of shorter escape latency than rats housed in standard conditions following I/R injury ( Figure 2B; p < 0.001). Probe trials proceeded 24 h after the final spatial learning trial. Rats of the IEE group spent more time in the correct quadrant and revealed more crossovers compared to rats of the ISC group (Figures 2C-E; p < 0.001 and p < 0.01). To sum up, EE treatment improved long-term neurobehavioral function after I/R injury. Enriched Environment Decreased Ischemic Infarction and Inhibited Pyroptosis After Ischemia/Reperfusion Injury As the improvement in functional outcome could be attributed to a reduction in brain damage, Nissl staining was performed to confirm the effects of EE on infarct volume after I/R injury. The schematic diagram of the ischemic border was shown in Figure 3A. EE treatment significantly reduced infarct volume in comparison with the ISC group (Figures 3B,D; p < 0.001). No lesion was found in SSC and SEE groups. Reduced poststroke ischemic infarction has been reported to be associated with inhibition of pyroptosis (Ye et al., 2020). Next, we explored whether EE could rescue ischemia-induced pyroptosis. GSDMD was downstream of pyroptosis and GSDMD-N fragments transferred to the plasma membrane to form pores that caused lytic cell death and the secretion of mature IL-1β and mature IL-18 (Kovacs and Miao, 2017). The expression levels of GSDMD-N from the ischemic border were detected. Western blot results demonstrated enhanced expression levels of GSDMD-N in the ISC group. And the expression levels of GSDMD-N were apparently reduced in the IEE group in comparison with the ISC group (Figures 3C,E; p < 0.001). Furthermore, we performed immunohistochemistry GSDMD and found that the IEE group expressed an obviously lower level of GSDMD in comparison with the ISC group (Figures 3F,G; p < 0.01). Collectively, all of these data suggested that ischemic infarction and pyroptosis after I/R injury were negatively regulated by EE treatment. Enriched Environment Suppressed the Activities of NLRP1/NLRP3 Inflammasomes To investigate how EE influenced neuronal pyroptosis after ischemic stroke, the activation of inflammasomes which was regarded as the upstream signal in the early stage of pyroptosis was detected. As neuronal pyroptosis might be mediated by the activation of NLRP1 and NLRP3 inflammasomes in the course of ischemic stroke, western blot was performed to explore whether EE inhibited pyroptosis by suppressing the activities of NLRP1/NLRP3 inflammasomes. The expression levels of NLRP1 and NLRP3 inflammasome proteins, mature IL-1β and mature IL-18 in the ischemic border of MCAO/R rats were measured. It was obvious that the expression levels of NLRP1 and NLRP3 were increased following I/R in comparison with sham controls while EE treatment decreased the expression levels of NLRP1 and NLRP3 in comparison with the ISC group (Figures 4A-C; p < 0.05 and p < 0.01). The elevated levels of cleaved caspases-1, mature IL-1β, and mature IL-18 indicated the activation of NLRP1/NLRP3 inflammasomes. I/R increased the expression levels of cleaved caspases-1, mature IL-1β, and mature IL-18 in comparison with sham controls while rats in the IEE group expressed a lower level (Figures 4D-G; p < 0.01, p < 0.001, and p < 0.001). To compare the expression levels of inflammatory factors, ELISAs were performed to detect inflammatory cytokines IL-1β and IL-18. 
We found that the expression levels of IL-1β and IL-18 were significantly increased following I/R in comparison with sham controls, while EE treatment decreased the expression levels of IL-1β and IL-18 in comparison with the ISC group (Figures 5A,B; p < 0.01 and p < 0.01). The IL-1β and IL-18 mRNA expression levels were further examined by qPCR. The mRNA levels of IL-1β and IL-18 were significantly reduced following I/R when the rats were housed in EE (Figures 5C,D; p < 0.01 and p < 0.05). In addition, immunofluorescence analysis of the penumbra showed a lower level of caspase-1 in the IEE group compared with the ISC group (Figures 4H,I; p < 0.001). Notably, caspase-1 was highly colocalized with NeuN+ neurons, indicating inflammasome activity in neurons. These results demonstrated that EE inhibited neuronal pyroptosis by suppressing the activities of NLRP1/NLRP3 inflammasomes.

Enriched Environment Inhibited p65 Phosphorylation After Ischemia/Reperfusion Injury
As NF-κB has been reported to mediate pyroptosis by regulating the transcription of NLRP genes and thereby their downstream substrates (Fann et al., 2018), we next explored how EE treatment inhibited pyroptosis by evaluating NF-κB signaling pathway proteins. First, western blotting was used to detect p65 and p-p65 expression levels in the penumbra following I/R injury. The results showed that p65 levels were not influenced by EE treatment (p > 0.05). However, p-p65 was reduced when rats were housed in EE after I/R injury (Figures 6A-C; p < 0.01). Furthermore, immunohistochemistry of the penumbra showed that p-p65 expression was significantly down-regulated in the IEE group in comparison with the ISC group (Figures 6D,E; p < 0.05). As p65 phosphorylation indicates activation of the p65 NF-κB signal (Pradère et al., 2016), these results illustrate that EE can inhibit p65 NF-κB signal activation after cerebral I/R injury. Taken together, these data revealed that EE treatment inhibited neuronal pyroptosis by attenuating the expression of NLRP1/NLRP3 inflammasomes following cerebral I/R injury, an effect that may be mediated by inhibition of the NF-κB p65 signaling pathway.

DISCUSSION
Despite the high rate of disability associated with stroke worldwide, effective therapeutic options remain limited (Virani et al., 2020). Residual dysfunction after stroke greatly affects survivors' quality of life (Stinear et al., 2020). The search for methods that promote recovery from stroke-induced functional deficits should therefore not be overlooked. Abundant evidence shows that EE can effectively promote functional recovery after ischemic stroke (Chen J.Y. et al., 2017; Kubota et al., 2018; Lin et al., 2021, p. 1). However, owing to the complexity of translating EE into clinical practice, EE has remained largely at the laboratory stage (Lang et al., 2015). Therefore, exploring the potential mechanism underlying the role of EE in promoting functional recovery may provide precise targets for recovery after ischemic stroke and expedite its clinical application. Our previous study demonstrated the connection between the neuroprotective effects of EE and neuronal cell death. A variety of pathological stimuli such as heart attacks, obesity, or cancer can trigger pyroptosis (Bergsbaken et al., 2009). Moreover, pyroptosis is closely related to central nervous system diseases (Fricker et al., 2018).
In models of multiple sclerosis, pyroptosis inhibition preserved axons in spinal cord lesions (McKenzie et al., 2018). In models of Alzheimer's disease, inhibition of pyroptosis relieved behavioral deficits. Previous research showed that neuronal pyroptosis affects the prognosis after ischemic stroke, suggesting that anti-pyroptotic treatment is an effective strategy for functional recovery following I/R injury (Lu et al., 2021). However, there was insufficient evidence as to whether pyroptosis is essential for EE-mediated ischemic stroke recovery and, if so, how EE influences neuronal pyroptosis during this pathological process. A key finding of our research was that EE attenuated pyroptosis and improved functional recovery after cerebral ischemia/reperfusion injury. The schematic mechanism is shown in Figure 7.

FIGURE 4 | Enriched environment suppressed the activities of NLRP1/NLRP3 inflammasomes of MCAO/R rats. (A-G) Western blots and quantification of NLRP1/NLRP3 inflammasome-related proteins including NLRP1, NLRP3, cleaved caspase-1, mature IL-1β, and IL-18 in peri-infarct tissues. n = 6. (H,I) Double immunostaining of NeuN and caspase-1 revealed good co-localization of these two markers. Statistical analysis of the positive rate is shown. Treatment with EE reduced caspase-1-positive neurons in the ischemic penumbra. Scale bars, 50 µm. n = 6. Data are expressed as mean ± SD. **p < 0.01, ***p < 0.001 vs. SSC group; #p < 0.05, ##p < 0.01, ###p < 0.001 vs. ISC group.

Being a novel type of cell death, pyroptosis mainly features pore formation in the plasma membrane, rapid plasma membrane rupture, and the release of intracellular inflammatory substances (Kuang et al., 2017). In the present study, we showed compelling evidence that the expression level of GSDMD-N, the major pore-forming executioner of pyroptosis, increased in the MCAO/R group versus the sham-operated group, and this alteration was counteracted in the EE treatment group. Then, to investigate how EE influences neuronal pyroptosis after ischemic stroke, we detected the activation of inflammasomes, which is regarded as the upstream signal in the early stage of pyroptosis. Inflammasomes are multimolecular complexes that recognize diverse inflammation-inducing stimuli and mediate the maturation of critical proinflammatory cytokines in the process of pyroptosis (Strowig et al., 2012; Xue et al., 2019).

(Figure 5 legend, continued) Quantitative analysis of IL-1β and IL-18 mRNA levels in penumbra. n = 3. Data are expressed as mean ± SD. *p < 0.05, **p < 0.01, ***p < 0.001 vs. SSC group; #p < 0.05, ##p < 0.01, ###p < 0.001 vs. ISC group.

It is worth noting that NLRP1 and NLRP3 inflammasomes have been reported to be involved in ischemic stroke (Fann et al., 2014; Yang et al., 2014). However, it was not known whether EE treatment modulates NLRP1/NLRP3 inflammasome activation. The present research provides compelling evidence that EE treatment significantly modulated the activation of NLRP1/NLRP3 inflammasomes. The expression levels of NLRP1, NLRP3, cleaved caspase-1, and the inflammatory cytokines mature IL-1β and mature IL-18 were downregulated by EE treatment compared with standard conditions after I/R injury. Immunofluorescence double staining further showed that these proteins were expressed mainly in neurons. In summary, we found that EE treatment inhibited neuronal pyroptosis by affecting the activation of inflammasomes and thereby improved functional recovery after I/R injury.
Next, we investigated the potential molecular mechanism of EE-reduced NLRP1 and NLRP3 inflammasome expression and activation in neurons. The activation of NLRP1 and NLRP3 inflammasomes in the brain following I/R injury may be induced by pattern recognition receptors (PRRs) including toll-like receptors (TLRs), the receptor for advanced glycation end products (RAGE), and the IL-1 receptor 1 (IL-1R1) (Gelderblom et al., 2015). PRRs recognize different pathological stimuli, including endogenous damage-associated molecular patterns (DAMPs) released from damaged cells in the stroke core such as high mobility group box 1 protein (HMGB1), heat shock proteins, and peroxiredoxin family proteins (Wang et al., 2020a). DAMP-activated PRRs further activate the intracellular NF-κB signaling pathway, resulting in pyroptosis and the release of inflammatory factors (Schroder and Tschopp, 2010). As a transcription factor, NF-κB plays a crucial role in cell death and inflammation (Kondylis et al., 2017; Liu et al., 2017). NF-κB resides in the cytosol, and its activation facilitates nuclear translocation and DNA binding (Liu et al., 2017). Phosphorylation of p65 indicates the activation and functional status of the NF-κB signaling pathway (Pradère et al., 2016). Previous studies have indicated that activation of the NF-κB signaling pathway, which can be triggered by reactive oxygen species (ROS), hypoxia, and several inflammatory mediators, occurs in neurons following I/R injury (Ridder and Schwaninger, 2009; Liu et al., 2019; He et al., 2020). The role of the NF-κB signaling pathway in regulating pyroptosis has been extensively studied in various diseases. Evidence has confirmed that NF-κB can regulate the transcription of NLRP genes by binding to their promoter regions and thereby regulate downstream substrates (Liu et al., 2017; Matias et al., 2019). Activation of the NF-κB signaling pathway is essential for the up-regulation of NLRP3 protein synthesis (Afonina et al., 2017). It has been demonstrated that the elevated expression level of IL-1β is induced by activation of the NF-κB signaling pathway in ischemic damage (Zhou et al., 2012). Additionally, studies have confirmed that EE treatment is beneficial for recovery from central nervous system diseases through inhibition of the NF-κB signaling pathway (Wu et al., 2016; Li et al., 2018). Moreover, Zhang et al. (2018) reported that EE mediated neurogenesis and functional recovery by inhibiting the NF-κB/IL-17A signaling pathway in astrocytes after ischemic stroke. In the present study, EE decreased the phosphorylation of p65 and the expression of NLRP1 and NLRP3 inflammasome proteins induced by I/R injury. This is supported by the finding that NF-κB signaling promotes NLRP1 and NLRP3 inflammasome activation in neurons following I/R injury (Fann et al., 2018). In brief, our findings suggest that the anti-pyroptotic effect of EE after ischemic stroke is associated with inhibition of the NF-κB p65 signaling pathway and reduced expression levels of NLRP1 and NLRP3 inflammasome-related proteins. A limitation of this study is that the upstream regulator of p65 phosphorylation remains to be explored. As the activation of NLRP1 and NLRP3 inflammasomes may be induced by PRRs, which recognize DAMPs released from dying neural cells and stimulate NF-κB translocation during the I/R process, the sources of the danger signals that promote the inflammatory response remain to be further investigated (Dong et al., 2018).
Downregulation of the HMGB1/TLR4/NF-κB pathway has been associated with inhibition of pyroptosis. HMGB1 activates the NLRP3 inflammasome via the NF-κB signaling pathway in acute glaucoma (Chi et al., 2015). Activation of the TLR4/NF-κB signaling pathway can modulate NLRP3 inflammasome activation in inflammatory bowel disease and induce GSDMD-mediated pyroptosis in tubular cells in diabetic kidney disease. Moreover, SYK, whose expression is downregulated by the activation of miRNA-27a, can stimulate the NF-κB signaling pathway and facilitate NLRP3-mediated pyroptosis (Li et al., 2021). A previous study has shown that EE can regulate the expression of HMGB1 and mediate post-stroke angiogenesis (Chen J.Y. et al., 2017). EE is also associated with growth factors (epithelial growth factor, hepatocyte growth factor) and signaling pathways (STAT3, JNK, ERK1/2, NF-κB) expressed in the gastrocnemius muscle (Le Guennec et al., 2020). However, it remains to be explored whether EE regulates the NF-κB signaling pathway by regulating the expression of these growth factors or DAMPs. It is essential to identify the upstream signaling pathway to reveal the underlying mechanism of the EE-mediated effect on NF-κB activation; our future research will focus on this problem. In addition, to confirm the exact effect of EE on the NF-κB signaling pathway, an NF-κB agonist should be included; our future work will address this. Moreover, NF-κB pathway activation results in pyroptosis and the release of inflammatory factors. In turn, these inflammatory factors may act by activating NF-κB signaling, which keeps the cells in an activated cyclic state (Vallabhapurapu and Karin, 2009).

FIGURE 6 | Enriched environment inhibited p65 phosphorylation after I/R injury. (A-C) Western blots and quantification illustrating increases in the activation of NF-κB (p-p65) in the penumbra. (D) Representative IHC staining images for p-p65 in the penumbra. Scale bars, 50 µm. (E) IHC score of p-p65 in penumbra. n = 6. Data are expressed as mean ± SD. **p < 0.01, ***p < 0.001 vs. SSC group; #p < 0.05, ##p < 0.01, ###p < 0.001 vs. ISC group.

FIGURE 7 | Schematic mechanism of EE treatment regulating post-ischemic pyroptosis. The NF-κB signaling pathway is activated following I/R injury, which stimulates the nucleus to induce transcription of NLRP1 and NLRP3 proteins to form the NLRP1 and NLRP3 inflammasomes. Pro-caspase-1 is activated through NLRP1 and NLRP3 signaling, cleaving GSDMD into GSDMD-N and GSDMD-C. The pore-forming GSDMD-N domain then causes membrane lysis, inducing pyroptosis. Cleaved caspase-1 also mediates the maturation of IL-1β and IL-18, which are released into the extracellular environment. EE attenuates pyroptosis, resulting in amelioration of ischemic stroke outcomes.

There is increasing evidence that many cell death pathways, including apoptosis, autophagy, and pyroptosis, are simultaneously present in the ischemic core and penumbral area; they are fine-tuned and have either beneficial, deleterious, or dual roles in the progression of post-stroke brain damage (Şekerdag et al., 2018). However, research on the effect of EE on different types of cell death following ischemic stroke is extremely limited. In our previous studies, EE exerted beneficial effects by inhibiting apoptosis of neurons following I/R.
EE treatment increased the level of the anti-apoptotic protein Bcl-2 while decreasing the levels of the pro-apoptotic proteins Bax, cytochrome c, and caspase-3 in the penumbra after cerebral I/R injury. Caspases, the main drivers of apoptosis, are also involved in the crosstalk between apoptosis and autophagy. Activated caspases can inhibit autophagy by degrading autophagy proteins such as beclin-1, Atg5, and Atg7 (Wu et al., 2014). A recent study also demonstrated that EE promoted autophagy by increasing the expression of beclin-1 and enhancing the lysosomal activities of lysosomal-associated membrane protein 1, cathepsin B, and cathepsin D, which eventually boosted neurological function recovery following ischemic stroke (Deng et al., 2021). Similar crosstalk among EE-regulated cell death pathways after stroke can also be found between autophagy and pyroptosis. The role of NLRP1/3 inflammasomes and pyroptosis in stroke pathology was well defined in previous studies (Yang-Wei Fann et al., 2013; Barrington et al., 2017). Recently, autophagy has been linked to the regulation of the inflammatory response. In Beclin1+/− cells, the levels of NLRP3 and cleaved caspase-1 were increased and the number of cells with inflammasomes was elevated, indicating that autophagy inhibits inflammasome activation through NLRP3 degradation (Houtman et al., 2019, p. 1). Meanwhile, NLRs have been shown to increase the synthesis of autophagy-related proteins and assist in the localization of autophagy proteins (Deretic, 2012). In the present study, we found that EE inhibited neuronal pyroptosis by suppressing the activities of NLRP1/NLRP3 inflammasomes after I/R injury. The cell death pathways after I/R injury overlap at several steps of their cascades and share common features. Continuing to explore the effect of EE on different types of cell death, and how these mechanisms work individually or in concert, may bring us a step closer to a promising approach for stroke treatment. Moreover, which cell types undergo pyroptosis under the influence of EE after I/R injury remains unclear. Evidence shows that pyroptosis occurring in neurons, astrocytes, and microglia is involved in the pathological processes of diseases other than ischemic stroke (Jamilloux et al., 2013; Alfonso-Loeches et al., 2014; Tan et al., 2014). In this study, we demonstrated the effect of EE on neuronal pyroptosis after I/R injury. However, it is important to note that EE might not only influence neuronal pyroptosis but also affect pyroptosis or inflammatory reactions in additional cell types, including astrocytes and microglia. Our future work will focus on revealing the pleiotropic roles of EE-inhibited pyroptosis in other cell types.

CONCLUSION
Our findings show that EE treatment has a promising cerebroprotective effect against I/R injury. EE treatment promoted functional recovery after I/R injury, involving inhibition of pyroptosis through suppression of the activities of NLRP1/NLRP3 inflammasomes. The beneficial effect of EE may result from the inhibition of NF-κB p65 phosphorylation. Therapeutic application of EE after I/R injury, together with the potential therapeutic targets identified here, is therefore a promising strategy for stroke recovery.

DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author/s.
ETHICS STATEMENT The animal study was reviewed and approved by the experimental animal Ethics Committee of Wuhan University (WP2020-08052).
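The Quantitative Real-Time PCR subsection above reports relative quantification of IL-1β and IL-18 against the GAPDH internal control by the relative Ct method. As a minimal sketch, and assuming this refers to the usual 2^-ΔΔCt formulation, the snippet below computes fold changes for a target gene; the Ct values and group labels are illustrative placeholders, not data from this study.

```python
# Minimal sketch of a relative Ct (2^-ddCt) calculation with GAPDH as internal
# control.  All Ct values below are hypothetical placeholders for illustration.
import numpy as np

def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Fold change of a target gene versus a control group by 2^-ddCt."""
    d_ct = np.asarray(ct_target) - np.asarray(ct_ref)                 # normalize to GAPDH
    d_ct_ctrl = np.mean(np.asarray(ct_target_ctrl) - np.asarray(ct_ref_ctrl))
    return 2.0 ** (-(d_ct - d_ct_ctrl))                               # calibrate to control group

# Hypothetical triplicates (n = 3/group), e.g. IL-1beta in MCAO/R (ISC) vs. sham (SSC) rats
il1b_isc, gapdh_isc = [24.1, 24.4, 23.9], [18.0, 18.2, 17.9]
il1b_ssc, gapdh_ssc = [26.8, 27.0, 26.7], [18.1, 18.0, 18.2]

fold = relative_expression(il1b_isc, gapdh_isc, il1b_ssc, gapdh_ssc)
print("IL-1beta fold change vs. sham, per replicate:", np.round(fold, 2))
```

The resulting fold changes can then be fed into the group comparisons named in the Statistical Analysis subsection (Kruskal-Wallis, ANOVA, t-tests) using standard statistical packages.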
Reactive Distillation: Modeling, Simulation, and Optimization Chemical process industries deal with production which further utilizes reaction followed by separation of the reaction mixtures. Reactive distillation is a new technique of combination of both reaction and separation in a single unit beneficial for equilibrium-limited reactions and also cost-effective. This makes it a highly complex process because many parameters involved in both reaction and separation are interactive in nature. In this chapter, modeling, simulation, and optimization of reactive distillation are presented. Methyl acetate production via reactive distillation is chosen as a case study. The results are compared for both experimental and simulation studies. The synthesis of methyl acetate was carried out in a packed RDC by catalytic esterification using acetic acid and methanol as reactants in a pilot-scale experimental setup. A strong acidic ion exchange catalyst, Amberlyst-15, was used to enhance the rate of heterogeneous esterification reaction. The result obtained was observed with change in various variables including the reflux ratio (RR), distillate-to-feed (D/F) ratio, and bottom-to-feed (B/F) ratio with respect to product composition. The optimization and sensitivity analysis was carried out using Aspen Plus process simulation software. Reactive distillation (RD) Chemical engineering deals with the conversion of raw material into products via a chemical unit process or unit operations. Manufacturing of various chemicals like esters, ethers, cumene, petroleum processing unit, etc. required a reactor followed by separator such as a distillation unit to separate the required product from other constituents on the basis of relative volatility [1]. There are various constraints on this type of processing like more space required for the installation of the unit, higher cost, more energy input requirement, and reduced selectivity. Specifically the conversion limits for reversible reactions are difficult to overcome toward highest purity of product because once the equilibrium is achieved in the system, no more reactant will be converted into products. In view of all these constraints, reactive distillation emerged as a novel technique of process intensification in which reaction and separation of product take place simultaneously in a single column [2]. In the case of reactive distillation, total capital cost is reduced due to two combined process steps held in the single unit. This kind of integration is also beneficial in reducing pump cost and other instrumentation cost. The saving in total energy cost is due to exothermic nature of many chemical reactions which in turn are beneficial in providing heat for separation of components simultaneously [3][4][5][6][7][8][9][10]. The schematic diagram of reactive distillation column is shown in Figure 1. Industrial application of reactive distillation Reactive distillation, which uses heterogeneous catalysts known as catalytic distillation, was firstly considered for RD [11], but it then remained uninvestigated and lacked research interests until the 1980s. However in 1980, with the advent of reactive distillation technology, Eastman Company tentatively carried out synthesis of high-purity methyl acetate. Later on RD was categorized as hybrid and nonhybrid columns [12,13]. Hybrid RD is used to describe columns, which have separate reactive and separation sections, while the reaction takes place in the whole non-hybrid RD column. 
After the success story of Eastman Company, several European countries and universities joined forces to work on a development strategy for reactive distillation process under the umbrella of Brite Euram project. Sulzer Chemtech has developed special structured catalytic packing for reactive distillation columns [14]. RD is an important method for many chemical syntheses which require recovery of chemicals such as recovery of acetic acid. RD uses cation-exchange resin for many liquid-phase homogeneous catalyst reactions such as butyl acetate synthesis and helps in separating catalyst during downstream processing. The investigation of many such reactions is reported [15][16][17]. Transesterification for synthesis and characterization of biodiesel from different raw material such as palm oil, mustard oil, etc. has been proposed but still not commercialized using various homogeneous and heterogeneous catalysts. However, hydrodesulfurization of light oil fractions has been carried out commercially for diesel deep hydrodesulfurization. CDTECH, the major commercial process technology provider, licensed up to now over 200 commercial-scale processes. Sulzer reports the commercial application of reactive distillation as synthesis of ethyl, methyl, and butyl acetate, hydrolysis of methyl acetate, synthesis of methylal, removal of methanol from formaldehyde, and formation of fatty acid esters. Commercial reactive distillation application with Katapak licensed from Sulzer is tabulated in Table 1. Industrial perspective of reactive distillation Reactive distillation (RD) is a hybrid combination of reaction and separation in a single vessel. The first patent for this process route was out in the 1920s, but little was carried out till 1980 by the Eastman Company who synthesized methyl acetate for the first time using this technique. The following reactions have shown potential for reactive distillation: Esterification In esterification reaction, alcohol and acid react to form an ester. Esters are chemical compounds having pleasant fruity odor. The main application of esters is in the synthesis of artificial flavor and essence and solvent for oil, gum, fat, and resins. They are also used as plasticizers. Esterification is the oldest reaction carried out in a reactive distillation column. For example, in conventional methyl acetate production, the yield of methyl acetate is low because of low boiling azeotrope formation. This constraint is removed in RD and almost pure methyl acetate can be collected. Fatty acid esters are natural chemicals used, among other things in cosmetics; plastics and surfactants were also reported to be synthesized in reactive distillation. Transesterification Transesterification reaction in general can be represented as the reaction between triglyceride and alcohol to produce alkyl esters and glycerol. The best example is a synthesis of biodiesel using transesterification. Commercially, no industrial unit has been reported on synthesis of biodiesel in RD, but the literature shows that pilot-scale synthesis is possible. This process occurs by reacting the vegetable oil with alcohol in the presence of an alkaline or acidic catalyst. Heterogeneous catalysts are more effective from an economical point of view for biodiesel production. Sometimes transesterification can be a beneficial alternative to hydrolysis as it does not involve formation of water, and moreover, it brings out the value added through formation of another ester. 
Etherification Etherification refers to the synthesis of ethers from alcohol and acid. Ethers are an indispensable part of the fuel industry as, like the properties of alcohol, ether also enhances the octane value of fuel when added in appropriate proportion. Several model reactions via RD such as MTBE, ETBE, and TAME have been studied since last two decades. These fuel oxygenates are formed by reaction of isobutylene with alcohol to give ether and water. However, another alternative is to react tert-amyl alcohol (TAA) with corresponding lower alcohol such as methanol or ethanol. Alkylation Transfer of alkyl group from one molecule to another is known as alkylation. Cumene and ethyl benzene are some examples which are synthesized using alkylation process. In this process alkanes, which are a part of paraffin compounds, are reacted with an aromatic compound which results in production of a high-quality fuel substitutes like cumene. These compounds are added to gasoline as a blend to improve its octane number, reduce the engine problems like gum deposits on oxidation, etc. High aviation fuel blends are produced using an alkylation process whose octane number is denoted by a performance number having a value of greater than 100. The catalytic alkylation method uses aluminum chloride and hydrochloric acid as catalyst to initiate the reaction between benzene and propylene. Aldol condensation In an aldol condensation, an enolate ion reacts with a carbonyl compound to form a β-hydroxyaldehyde or β-hydroxyketone, followed by a dehydration to give a conjugated enone. By using reactive distillation (RD), one can improve the selectivity toward the intermediate or final product depending on the type of catalyst used and by continuously removing the desired product from the reaction zone. Dehydration Dehydration reaction simply means removal of water. This process is employed generally for glycerol to obtain acetol. This reaction is usually carried into the presence of various metallic catalysts like alumina, magnesium, ruthenium, nickel, platinum, palladium, copper, Raney nickel, etc. Single-stage and two-stage reactive distillation techniques are being employed, and special care is being taken to regenerate these catalysts as they are classified as precious and non-precious catalysts. Acetylation Various processes thereby produce a by-product which is of other important industrial use. Like in the case of biodiesel manufacturing using methanol, we get a secondary by-product called glycerol. It is a very good raw material for the process called acetylating as in this process, especially when carried out in reactive distillation column, it is reported that about 99% conversion of glycerol into triacetin is observed. This triacetin acts as an additive in compression engine fuels and reduced the knocking in the engine. Isomerization Isomerization is a process in which one molecule is transformed into another molecule which has exactly the same atom, but they have different arrangements. A-isophorone and b-isophorone in spite of being isomers can be very well separated by reactive distillation as there is a large difference in their volatilities. Oligomerization Oligomerization is a chemical process that converts monomers to macromolecular complexes through a finite degree of polymerization. Oligomer esters and acid were hydrolyzed using RD technology, and the results were consistent with industrial literature. Product purity Product purity is an ultimate customer requirement. 
If these are not fulfilled or low-quality product is supplied to the customer, the expectation of the customer will not be fulfilled. For this reason, quality parameters need to be defined. These parameters are differing in different cases. For example, few quality indexes like physical and chemical characteristics of the product, medicinal effects, toxicity, and shelf life are required to be given in the case of pharmaceutical products. Quality indexes such as taste, nutritional properties, texture, etc. are important in the case of food products. Similarly for products from chemical processes, final composition or product purity as quality index is required. Importance of product purity in chemical engineering Synthesis of various chemicals usually is carried out in a reactor which may or may not be followed by separator. Either the case may be choice of design variable is very important. The market value of overhead product or the bottom product relies on its purity. Also the need of any further treatment for enhancing the purity relies on the initial product composition. In view of this, the degree of freedom for the column should be zero; that means the number of variables should be the same or equal to the number of equations involved in modeling. For example for a distillation column, if a designer specifies reflux ratio or boil up ratio and a distillate rate, then there will be corresponding unique set of distillate and bottom composition with respect to a fixed feed flow rate. Product purity in reactive distillation Variability in the product purity is due to various factors including variable flow rate, reboiler heat duty, reflux rate, and temperature inside the column. These parameters can be controlled using various control techniques to meet final product specification requirement as per the market demand both for large market and small market. Various control techniques are available which can be suitably applied to get continuous controlled final product composition. Detailed process knowledge helps in control of such a nonlinear process. The control performance also affects plant processing rates and utility usage. Process control engineering helps in designing control loop system which helps in the control of multivariable system and the systems involved multiple inputs and multiple outputs. 5.1 Steps to achieve quality specifications Fixing product specifications A specification is the minimum requirement according to which a producer or service provider makes and delivers the product and service to the customer. Deciding on the method of manufacture Design and implementation of method of manufacture in actual plant condition permit to make product in the quickest and easiest way of manufacturing. These also require preparing manufacturing instructions, sequence of operations, and other procedures. Providing the necessary machines, plant, tooling, and other equipment Everything that is required for manufacture must be selected, taking care that all the elements are capable of achieving the standard of quality demanded. 
Benefits
Benefits of reactive distillation include:
• Increased speed of operation
• Lower costs: reduced equipment use, reduced energy use, and easier handling
• Less waste and fewer by-products
• Improved product quality: less opportunity for degradation because of the lower heat requirement

Modeling of heterogeneous catalyzed packed RDC
Modeling of an RD column builds on the basic concept of a distillation column, with the reaction carried out in a reactive zone located between the rectifying and stripping zones [18][19][20][21]. The model can therefore be represented by balances written for the different zones of the reactive distillation column. Non-equilibrium modeling was carried out for the heterogeneous catalyzed packed RDC using a first-principles approach. The schematic view of the heterogeneous packed RDC is shown in Figure 2. The basic assumptions for this model are as follows:

7.1 Component material balance
Figure 3 shows the flow of vapor and liquid over a plate/tray. For a reaction in which two reactants produce two products, the component material balance for the various sections of the column can be written as follows (a minimal numerical sketch of the reactive-tray balance is given after the Nomenclature at the end of this chapter):

1. Rectifying and stripping trays:
d(x_{n,j} M_n)/dt = L_{n+1} x_{n+1,j} + V_{n-1} y_{n-1,j} - L_n x_{n,j} - V_n y_{n,j}

2. Reactive trays:
d(x_{n,j} M_n)/dt = L_{n+1} x_{n+1,j} + V_{n-1} y_{n-1,j} - L_n x_{n,j} - V_n y_{n,j} + R_{n,j}

3. Feed trays:
d(x_{n,j} M_n)/dt = L_{n+1} x_{n+1,j} + V_{n-1} y_{n-1,j} - L_n x_{n,j} - V_n y_{n,j} + R_{n,j} + F_n z_{n,j}

4. The net reaction rate for component j on tray n in the reactive zone is given by

6. Column base

7. Due to the exothermic reaction, the heat of reaction vaporizes some liquid in the reactive section. Therefore, the vapor rate increases up through the reactive trays, and the liquid rate decreases down through the reactive trays.

8. Vapor phase: V_{p-1} is the vapor entering plate p, y_{p-1,i} is the mole fraction of component i in that vapor, and P_V is vapor added to the column but leaving through the condenser (hence the negative sign); V_p is the vapor leaving plate p, and n_{ipv} is the gain of species i due to transport, i.e., the mass transfer rate. It is given by Eq. (10), where N_{ip} is the molar flux of species i at a particular point in the two-phase dispersion.

Liquid phase: L_{p+1} is the liquid entering plate p, x_{p+1,i} is the mole fraction of component i in that liquid, P_{Lp} is liquid added to the column, L_p is the liquid leaving plate p, and n_{ipl} is the loss of species i due to transport, i.e., the mass transfer rate. It is given by
n_{ipl} = \int N_{ipl} \, dp   (12)
where N_{ipl} is the molar flux of species i at a particular point in the two-phase dispersion. Since there is no accumulation at the phase interface, the vapor-side and liquid-side transfer terms balance; M_t is the accumulation due to mass transfer.

Pilot-scale experimental results
The experimental synthesis of methyl acetate by esterification was performed in the pilot-scale heterogeneous catalytic packed RDC shown in Figure 4. The characteristics of the packed RDC are given in Table 2 and the temperature data in Table 3. From the observations we conclude that the temperature of the reactive zone, from stage 3 to stage 6, lies between 50 and 70°C, which is an ideal condition for the catalytic esterification reaction producing methyl acetate. The temperature of the stripping zone lies between 50 and 59°C, and that of the rectifying section between 30 and 45°C. We set the reboiler temperature at 70°C, which is close to the boiling point of methanol; however, it varies as the reaction proceeds. The composition of methyl acetate obtained experimentally is 96%.
The pressure at the top stage varies between 108 and 163 mmHg, and that of the reboiler varies between 249 and 300 mmHg. It is evident that the product composition continuously increases with time, and as soon as the concentration of the reactants decreases, the product composition also decreases. For a continuous process, a continuous supply of reactants is required to maintain the product composition. The variation of composition with time is shown in Figure 5 (experimental results of methyl acetate synthesis).

Chemical process industries today face increasingly stringent environmental regulations and global competition in product pricing and quality. One of the most important engineering tools for addressing these issues is optimization. Modifications in plant design and operating procedures have been implemented to reduce costs and meet constraints, with an emphasis on improving efficiency and increasing profitability. Optimal operating conditions can be implemented via increased automation at the process, plant, and company levels, often called computer-integrated manufacturing. Computers and associated software make the necessary computations feasible and cost-effective [22][23][24][25].

Steady-state simulation and optimization
Steady-state simulation of methyl acetate esterification was carried out using the Aspen Plus simulator. The RadFrac module, the NRTL property method, and other operating conditions such as feed condition, feed location, operating pressure, column configuration (including the number of stages and the reaction stages), type of condenser, type of reboiler, and feed flow rates of the components were specified in the Aspen Plus environment. The specifications and other results are included in Table 4, and the simulation flowsheet is shown in Figure 6. The product purity attains its highest value at the top stage. The composition profile of the column is shown in Figure 7: the maximum composition of the product methyl acetate obtained is 95.4%. The amounts of methanol and acetic acid are much lower at the top of the column, indicating essentially complete consumption of the reactants and formation of the product. The temperature profile of the column is shown in Figure 8. The temperature of the reactive section is clearly higher than that of the other sections because of the exothermic nature of the esterification reaction, and the reboiler temperature is higher than the condenser temperature: the condenser temperature of 57.4°C is lower than the reboiler temperature of 62.7°C. The temperature of the reactive zone lies above 61.3°C, again reflecting the exothermic esterification reaction. The maximum condenser temperature during the experiment was 58°C, while the condenser temperature obtained from Aspen Plus was 57.4°C, which shows good agreement between the experimental and simulation results.

Sensitivity analysis of methyl acetate RDC
Reactive distillation exhibits multiple steady-state conditions during operation; this is known as multiplicity of the process. There are two types of multiplicity: input multiplicity and output multiplicity. Input multiplicity is the condition in which the column gives the same output for different sets of process conditions. In this work, we studied input multiplicity, in which the same output is obtained for different input conditions. To analyze this situation, we performed sensitivity analysis in the Aspen Plus simulator.
For the sensitivity analysis, we first examined the molar flow of methyl acetate as a function of reboiler heat duty, whose lower and upper bounds were fixed at 1 and 3 kW, respectively. For the second case, we calculated the mass fraction of methyl acetate while setting the molar flow of acetic acid in the feed in the range of 0.01-0.08 L/min. In the third case, we calculated the distillate flow rate by varying the feed flow rate in the range of 0.01-0.08 L/min to obtain the distillate-to-feed (D/F) ratio. Similarly, we also calculated the bottom-to-feed (B/F) ratio. The resulting curves are shown in Figures 9 and 10, respectively. As shown in Figure 9, the flow rate of methyl acetate increases as the heat duty increases, with a maximum flow rate of 0.927 lbmol/hr at a heat duty of 6820 Btu/hr. Similarly, Figure 10 shows the variation of the acetic acid flow rate with respect to the mole fraction of the product methyl acetate; the maximum product fraction of 95.2% is observed at a flow rate of 0.0872 cuft/hr. The effects of changes in the distillate-to-feed (D/F) ratio and the bottom-to-feed (B/F) ratio on composition were also examined. The optimized distillate-to-feed (D/F) ratio was 0.6275 and the optimized bottom-to-feed (B/F) ratio was 0.4238 for maximum product purity.

Optimization of methyl acetate RDC
The model analysis tool in Aspen Plus facilitates optimization of the reactive distillation column. In this analysis we defined the mass fraction of methyl acetate as the objective, on the basis of the standard volumetric flow rate of acetic acid, to obtain the minimum product composition that can be achieved at the top of the column. Heat duty was defined as a constraint with fixed lower and upper limits of 1 and 3 kW, respectively. After the optimization, we obtained 26.99% as the minimum composition of methyl acetate and 2 kW as the required optimized heat duty. The summary of the optimization and sensitivity results obtained from the Aspen Plus simulation is included in Table 5. The optimized value of the reboiler heat duty was 2 kW, and the optimized reflux ratio was 4.69. These values are close to the experimental values, which again shows good agreement between the experimental and simulation studies. The optimized flow rate of methyl acetate obtained using the reboiler heat duty as the manipulated variable is 0.093 lbmol/hr, and the optimized product fraction obtained using the standard volumetric flow rate of acetic acid is 0.96. The sensitivity result curve for the optimized flow rate and composition of methyl acetate is shown in Figure 11, and the sensitivity result curve for the variation in column temperature with reflux flow is shown in Figure 12.

Conclusion
This chapter gives details of reactive distillation as an effective unit for various synthesis and manufacturing processes. The detailed case study addressed the production of methyl acetate from methanol and acetic acid in a pilot-plant reactive distillation column. The operating conditions were a feed temperature of 50°C, a column pressure of 1 atmosphere, a feed rate of 0.03 L/min, and an initial reboiler temperature of 70°C. The experiment yielded high-purity methyl acetate; we succeeded in obtaining 95% purity. The experimentation was then followed by simulations in order to compare the results. The Aspen Plus simulation gives a methyl acetate purity of 91.1%. This was followed by validation of the results using sensitivity and optimization analyses.
The optimized value of the reflux ratio was 4.69 and the required reboiler duty 2 kW. The sensitivity analysis registered a distillate-to-feed (D/F) ratio of 0.6275 and a bottom-to-feed (B/F) ratio of 0.4235 for maximum product purity. These encouraging results establish good agreement between the experimental and simulation studies.

Nomenclature
ν_j: stoichiometric coefficient
R_{n,j}: reaction rate on the nth stage
M_n: liquid holdup on the nth stage
k_{Fn}: forward reaction rate on the nth stage
k_{Bn}: backward reaction rate on the nth stage
x_{n,j}: liquid composition on the nth stage
V_n: flow rate of vapor on the nth stage
L_n: flow rate of liquid on the nth stage
λ: heat of reaction
ΔH_v: net heat of vaporization
NT: total number of stages
D: distillate flow rate
B: bottoms flow rate
y_{n,j}: vapor composition on the nth stage
RR: reflux ratio
F_n: feed flow rate on the nth stage
z_{n,j}: feed composition on the nth stage
P^S_j: pure-component vapor pressure
T_n: temperature on the nth stage
P: total pressure
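As referenced in the component material balance section (7.1), the sketch below numerically integrates the reactive-tray balance for a single tray of the methyl acetate column under strongly simplifying assumptions: constant molar flows and holdup, a constant-relative-volatility model for the vapor leaving the tray, and a pseudo-homogeneous reversible rate law. Every numerical value (flows, holdup, volatilities, rate constants, inlet compositions) is an illustrative stand-in, not the pilot-plant or Aspen Plus data reported in this chapter.

```python
# Rough numerical illustration of the reactive-tray component balance
#   d(x_{n,j} M_n)/dt = L x_{n+1,j} + V y_{n-1,j} - L x_{n,j} - V y_{n,j} + R_{n,j}
# for one tray of the methyl acetate column.  All parameter values are assumed.
import numpy as np
from scipy.integrate import solve_ivp

# Component order: acetic acid, methanol, methyl acetate, water
ALPHA = np.array([1.0, 2.7, 5.0, 1.9])        # assumed constant relative volatilities
NU = np.array([-1.0, -1.0, 1.0, 1.0])         # stoichiometry of AcOH + MeOH <-> MeOAc + H2O
KF, KB = 0.8, 0.2                             # assumed forward/backward rate constants (1/min)
L_IN, M_HOLD = 2.0, 1.0                       # liquid flow (mol/min) and tray holdup (mol)

x_above = np.array([0.45, 0.45, 0.05, 0.05])  # liquid composition entering from the tray above
y_below = np.array([0.05, 0.40, 0.35, 0.20])  # vapor composition entering from the tray below

def vapor_leaving(x):
    """Vapor in equilibrium with the tray liquid (constant relative volatility)."""
    return ALPHA * x / np.sum(ALPHA * x)

def tray_balance(t, x, V):
    """Right-hand side of the component material balance on one reactive tray."""
    y = vapor_leaving(x)
    r = KF * x[0] * x[1] - KB * x[2] * x[3]   # net esterification rate per mole of holdup
    return (L_IN * x_above + V * y_below - L_IN * x - V * y + NU * r * M_HOLD) / M_HOLD

x0 = np.full(4, 0.25)
# Crude analogue of the heat-duty sensitivity study: sweep the vapor (boilup) rate
# and report the resulting steady methyl acetate fraction on this tray.
for V in (1.0, 2.0, 3.0):
    x_ss = solve_ivp(tray_balance, (0.0, 300.0), x0, args=(V,)).y[:, -1]
    print(f"V = {V:.1f} mol/min -> tray x_MeOAc = {x_ss[2]:.3f}")
```

A full column model simply stacks NT such balances, together with condenser and reboiler holdups and an energy balance, which is essentially what a rigorous steady-state block such as RadFrac solves.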
Polarization of Diploid Daughter Cells Directed by Spatial Cues and GTP Hydrolysis of Cdc42 in Budding Yeast Cell polarization occurs along a single axis that is generally determined by a spatial cue. Cells of the budding yeast exhibit a characteristic pattern of budding, which depends on cell-type-specific cortical markers, reflecting a genetic programming for the site of cell polarization. The Cdc42 GTPase plays a key role in cell polarization in various cell types. Although previous studies in budding yeast suggested positive feedback loops whereby Cdc42 becomes polarized, these mechanisms do not include spatial cues, neglecting the normal patterns of budding. Here we combine live-cell imaging and mathematical modeling to understand how diploid daughter cells establish polarity preferentially at the pole distal to the previous division site. Live-cell imaging shows that daughter cells of diploids exhibit dynamic polarization of Cdc42-GTP, which localizes to the bud tip until the M phase, to the division site at cytokinesis, and then to the distal pole in the next G1 phase. The strong bias toward distal budding of daughter cells requires the distal-pole tag Bud8 and Rga1, a GTPase activating protein for Cdc42, which inhibits budding at the cytokinesis site. Unexpectedly, we also find that over 50% of daughter cells lacking Rga1 exhibit persistent Cdc42-GTP polarization at the bud tip and the distal pole, revealing an additional role of Rga1 in spatiotemporal regulation of Cdc42 and thus in the pattern of polarized growth. Mathematical modeling indeed reveals robust Cdc42-GTP clustering at the distal pole in diploid daughter cells despite random perturbation of the landmark cues. Moreover, modeling predicts different dynamics of Cdc42-GTP polarization when the landmark level and the initial level of Cdc42-GTP at the division site are perturbed by noise added in the model. Introduction Cell polarization is essential for a variety of cellular processes and functions. Cdc42 is highly conserved from yeast to humans and plays a central role in polarity establishment [1,2]. The budding yeast Saccharomyces cerevisiae provides a unique model to study the development of cell polarity owing to its pronounced cell polarization during growth and its experimental tractability. During vegetative growth, yeast cells choose a specific bud site depending on their cell type, which determines the axis of polarized cell growth. Haploid a and a cells bud in the axial pattern, in which both mother and daughter cells select a new bud site adjacent to their immediately preceding division site. In contrast, a/a cells (normal diploids) bud in the bipolar pattern, in which daughter cells usually bud at the pole distal to the previous division site (distal pole) and mother cells can choose a new bud site near the proximal pole (birth pole) or the distal pole (see Fig. 1A) [3,4,5,6]. These different budding patterns occur in response to cell-type-specific markers. The Rsr1 GTPase module, which is composed of Rsr1/Bud1, its GTPase activating protein (GAP) Bud2, and its GDP-GTP exchange factor (GEF) Bud5 [3,7,8,9,10,11], links the spatial cues to the polarity establishment machinery including Cdc42. Cdc42 thus becomes polarized at the predetermined cortical site to trigger bud growth (see review [2] and references therein). How do a/a cells select a bud site either at the distal or proximal pole? Previous studies uncovered a large number of genes affecting the bipolar budding pattern [12,13,14,15]. 
These studies also indicate a close link between the cell cycle progression and the bipolar budding pattern [14,16]. The bipolar pattern is dependent on transmembrane proteins including Bud8, Bud9, Rax1 and Rax2 [12,17,18,19,20]. Bud8 localizes to the distal pole of a newly born cell whereas Bud9 localizes to the bud side of the mother-bud neck (which becomes the proximal pole of a daughter cell) just before cytokinesis [20]. These localization patterns of Bud8 and Bud9 are consistent with their roles as putative distal and proximal pole markers, respectively. Rax1 and Rax2 localize to the tip of growing buds and to the mother-bud necks, and their presence at the division site is persistent throughout multiple generations [17,18,19]. Despite these interesting localization patterns of the putative bipolar landmarks, the mechanism by which the bipolar pattern is established remains largely unknown. One of the key questions is why daughter cells of a/a diploids choose predominantly the distal pole for their first budding despite the presence of Bud8 and Bud9 marking each pole. The complexity of the budding patterns led us to take a minimalist approach to address the question by combining mathematical modeling and live-cell imaging. Recent studies in budding yeast have uncovered mechanisms by which Cdc42 becomes polarized in the absence of spatial cues via a process called 'symmetry breaking'. Two positive feedback mechanisms of symmetry breaking have been suggested -one involving the actin cytoskeleton and the other relying on a Cdc42 signaling network including the scaffold protein Bem1 and the Cdc42 GEF Cdc24 [21,22,23,24]. Endocytosisand GDI (Guanosine nucleotide dissociation inhibitor)-mediated recycling of Cdc42 and a negative feedback loop confer robust initiation of cell polarization [25,26,27,28]. Using a stochastic mathematical model, an intrinsic stochastic mechanism involving linear positive feedback alone was shown to be sufficient to account for the spontaneous establishment of a single polarization site [29]. A Turing-type mechanism involving short-range excitation and long-range inhibition has also been proposed to explain the self-organized emergence of polarity [30,31]. These models capture several features of cell polarization and provide a mechanistic insight into spontaneous polarization in the absence of spatial cues. However, some aspects of these mechanisms and their physiological relevance are still unclear and controversial [31,32,33]. More importantly, it had been unclear whether and how the spatial cues are recognized and amplified through these feedback mechanisms. Here, we used computational modeling and live-cell imaging to explain cell polarization in diploid daughter cells. Because wild-type yeast cells undergo polarization in response to the celltype-specific spatial cues, we considered these cues to understand distinct budding patterns. We report that both spatial landmarks and GTP hydrolysis of Cdc42 by Rga1 control the robust Cdc42-GTP polarization in diploid daughter cells. A Mathematical Model of Cdc42 Polarization in Diploid Daughter Cells Diploid a/a cells exhibit a strong bias toward the distal pole during their first and second bud-site selection [4,12,20] (Fig. 1A). 
To examine this preferential distal-pole budding event in daughter cells of diploids more closely, we monitored localization of Cdc42-GTP every 2 min in wild-type diploid cells expressing Gic2-PBD-RFP (tdTomato fused to the p21-binding domain of Gic2) as a reporter for Cdc42-GTP [34] and GFP fused to Cdc3, a component of septins, as a marker for the timing and site of cytokinesis. As expected, Gic2-PBD-RFP localized to the periphery of a growing bud until the end of the M phase, to the motherbud neck (which becomes the proximal pole of daughter cells) during cytokinesis, and then to the distal pole in the daughter cells in the next G1 phase (100%, n = 15 movies) ( Fig. 1B; Movie S1). While Cdc42 becomes enriched at the mother-bud neck at the division site [34,35], the Gic2-PBD-RFP signal at the proximal pole was relatively weak presumably due to rapid hydrolysis of Cdc42-GTP by its GAP(s), consistent with a previous finding in haploids [34]. Nonetheless, our imaging was able to capture the daughter cells at an intermediate stage that exhibited Cdc42-GTP localization at both proximal and distal poles (see a cell marked with an arrowhead in Fig. 1B). The dynamics of Cdc42-GTP polarization is thus consistent with the distal-pole budding of diploid daughter cells. Why do daughter cells of diploids exhibit such dynamics of Cdc42-GTP despite the presence of spatial cues at both poles? Since our current knowledge of the bipolar landmark(s) does not provide a clear explanation for this time-evolved polarization of Cdc42-GTP in diploid daughter cells, we turn to mathematical modeling. We took into consideration several previous experimental observations and previous models for symmetry breaking. We assumed that the distal and proximal poles compete for Cdc42 or its effectors and regulators ( Fig. 2A). Our model was built upon the positive feedback mechanism involving the Bem1 complex originally proposed by Goryachev and Pokhilko [30] and Lew and colleagues [24]. Importantly, our model included the Cdc42 GAPs to account for the weak Gic2-PBD-RFP localization at the division site and the spatial cues at both poles. Specifically, several space-dependent rate parameters are included in our model as schematically shown in Fig. 2A. The establishment of Cdc42 polarization relied on the activation from its GDP-to GTP-bound state, which presumably depends on the pre-localized landmark signal and the Bem1-mediated feedback. This feedback was implemented in the activation rate of Cdc42 from the GDP-to the GTP-bound states (denoted by F), which depends on the levels of landmark cue (denoted by [cue]) and Cdc42-GTP, under the assumption that Bem1 is conserved (see Materials and Methods). The inactivation rate (k d ) of Cdc42 from the GTP-to the GDP-bound states was space-dependent because it was assumed to vary with the level of the Cdc42 GAPs (which localize to the division site [34,36,37,38]). The recruitment rate (k R ) of Cdc42 from the cytoplasm to the membrane represents the association rate of cytoplasmic Cdc42-GDP with the membrane. The rate k R depends on the level of spatial cues because Rsr1 is likely to interact with Cdc42 to enhance its recruitment to the membrane in response to the landmark [39]. The landmark and the Rsr1 module were considered together as an upstream input to represent the spatial cue that triggers the initial localization of Cdc42. Thus k R was positively correlated with the level of the landmark signal in our simulations (see details in Materials and Methods). 
The parameters used in our simulations are listed in Tables 1 and 2. Our model involved two reaction-diffusion equations to describe the spatial dynamics of Cdc42-GTP and Cdc42-GDP (Eq. [1][2] in Materials and Methods) on a cross section of the cell membrane with a diameter of 4 µm. In this model, the spatially distributed landmark [cue] was assumed to be a function of the membrane periphery, which was parameterized by the angle x along the circle (0° ≤ x ≤ 360°) from the distal pole (Fig. 2B, a). The function [cue] thus took maximal values locally at the proximal and distal poles (Fig. 2B, b) to represent the localized landmark at these poles. Our model also involved the following reactions: lateral membrane diffusion of Cdc42-GTP and Cdc42-GDP, activation of Cdc42 to the GTP-bound state and its inactivation, recruitment of Cdc42 from the cytoplasm to the membrane and its reverse reaction, and GDI-mediated extraction of Cdc42-GDP into the cytoplasm (see Materials and Methods). Our simulations started with a homogeneous level of Cdc42-GDP at initial time t = 0 and with Cdc42-GTP localized at the proximal pole of the cell, since Cdc42 is polarized to the division site (Fig. 2C). Fluctuations in the initial levels of these species due to the naturally noisy background led to Cdc42-GTP clustering initially at both poles, which coexisted for a period of time. The Cdc42-GTP cluster at the proximal pole (180°) was gradually destabilized due to GTP hydrolysis by Cdc42 GAP(s) at the previous budding site, resulting in Cdc42 polarization at the distal pole. Indeed, the Cdc42-GTP cluster was consistently formed at the distal pole with any set of parameters within the ranges shown in Table 1 (Fig. 2C, a-d), suggesting that the outcomes of competition are relatively insensitive to the concentration of spatial cues at each pole. Our modeling thus explains robust distal-pole budding of a/a daughter cells despite the competition between two poles for recruiting the Bem1 complexes and Cdc42-GTP.

Table 2. Specific parameters used for simulations.

(Figure 3 legend, fragment) ... (HPY2246), bem2Δ (HPY2384), and bem3Δ (HPY2426). The mean percentage ± SD of each budding pattern is shown from three or four independent countings of wild type (n = 106), rga1Δ (n = 144), rga2Δ (n = 56), bem2Δ (n = 108), and bem3Δ (n = 53). Statistical significance was determined by Student's t-test between proximal-pole buddings in wild type and rga1Δ or bem2Δ (marked with asterisks): *p < 10^-5 (rga1Δ) and **p = 0.02 (bem2Δ). B. The position of the first bud relative to the birth scar in diploid daughter cells. Cells were double stained with Calcofluor white and WGA-FITC as described in [44] from wild type (YEF473), rga1Δ (YEF1233), bud8Δ (YHH415), and rga1Δ bud8Δ (HPY2385).

Deletion of RGA1 Affects the Distal-pole Budding in Daughter Cells of a/a Diploids
Because our modeling suggested that the Cdc42 GTP hydrolysis rate at the division site contributes to robust distal-pole budding in a/a daughter cells, we wondered which Cdc42 GAP(s) play a role in this process. All predicted Cdc42 GAPs localize to the mother-bud neck at cytokinesis [34,36,37,38]. We thus scored the position of the first bud of newly born daughter cells of diploid wild type and mutants deleted for a Cdc42 GAP such as Rga1, Rga2, Bem2, or Bem3. As expected, daughter cells rarely budded at the proximal pole in wild-type cells (3.4 ± 2.3%, n = 106).
In contrast, a significant number of daughter cells of an a/a rga1Δ homozygous diploid strain budded at the proximal pole (29.3 ± 1.9%, n = 144; see daughter cells marked with arrows in Fig. 3A), which is statistically significant (p < 10^-5). Deletions of RGA2 or BEM3 did not result in proximal-pole budding in daughter cells (0%, n = 56 and 53, respectively). While a bem2 deletion resulted in slightly increased proximal-pole budding (6.4 ± 2.8%, n = 108), the difference between wild type and bem2Δ does not appear to be statistically significant (p = 0.22) (Fig. 3A). It is less clear whether Bem2, which is known as a GAP for Rho1, also functions as a GAP for Cdc42 in vivo [37,40,41,42]. Taken together, these results suggest that among the Cdc42 GAPs, Rga1 is uniquely required for the preferential distal-pole budding of a/a daughter cells. We thus focused on Rga1 in subsequent studies.

(Figure 4 legend) Localization pattern of Gic2-PBD-RFP (red) prior to, during, and after cytokinesis (Cdc3-GFP in green) is summarized from time-lapse imaging of wild type (n = 15), rga1Δ (n = 19), bud8Δ (n = 7), and rga1Δ bud8Δ (n = 8). The proximal-pole localization pattern (marked with 2*) of rga1Δ or rga1Δ bud8Δ daughter cells is different from those seen in wild type and bud8Δ cells (see text for details). doi:10.1371/journal.pone.0056665.g004

Because Rga1 is uniquely required for preventing budding at the division site [34], we wondered whether the diploid rga1Δ daughter cells that failed to bud at the distal pole also budded at the division site. Unlike mother cells, which have bud scars (chitinous scar tissue located at the division site), daughter cells have a much less conspicuous birth scar (which has little or no chitin) at the division site [43]. To examine more closely the position of the first bud in daughter cells relative to the birth scar, we stained cells with Calcofluor, which stains bud scars as well as the base of a bud, and FITC-labeled wheat germ agglutinin (WGA-FITC), which stains both bud scars and birth scars [44]. As expected, almost all wild-type daughter cells formed a bud opposite to the birth scar (which is marked with an arrow in Fig. 3B). In contrast, all of the rga1Δ daughter cells that failed to bud at the distal pole indeed budded within the birth scar (n = 65; note: this number includes some mother cells of rga1Δ because mother cells that repeatedly budded within the birth scar could not be easily distinguished from daughter cells). As expected, almost all bud8Δ daughter cells budded at the proximal pole, but the position of the bud in bud8Δ was adjacent to, rather than within, the birth scar (97.4%, n = 39) (Fig. 3B). A small number of daughter cells of the diploid wild type (3.5%, n = 56) and the bem2Δ mutant (6.3%, n = 63) also budded at the proximal pole, but these buds rarely appeared within the birth scar (data not shown). Interestingly, almost all rga1Δ bud8Δ cells also budded within the birth scar (99.2%, n = 137; this count is also likely to include some mother cells owing to deletion of RGA1, see above). Taken together, these observations suggest that the reduced distal-pole budding in diploid rga1Δ daughter cells results from increased Cdc42-GTP at the division site, consistent with a previous report [34].

Polarization of Cdc42-GTP in Diploid Daughter Cells Lacking RGA1
Although some diploid rga1Δ daughters budded within the birth scar, the majority of them (~70%) still showed a strong preference for distal-pole budding.
To gain insight into this cellular behavior, we monitored the localization of Cdc42-GTP (using Gic2-PBD-RFP) in diploid rga1Δ cells every 2 min. Gic2-PBD-RFP localized to the periphery of a growing bud in an rga1Δ mutant as in wild type until cytokinesis. During cytokinesis and in the next G1 phase, however, three different patterns of Gic2-PBD-RFP localization were observed in rga1Δ daughter cells (n = 19 movies; the three patterns are summarized in Fig. 4B). Both the first and third patterns of Cdc42-GTP localization were expected to lead to distal-pole budding in a/α rga1Δ daughter cells. The localization patterns of Gic2-PBD-RFP are thus consistent with the observed budding patterns of the rga1Δ daughter cells (see Fig. 3). While an increase of Cdc42-GTP at the proximal pole was expected given the lack of Cdc42 GAP activity at the division site in the rga1Δ mutant [34], it seemed counterintuitive that a significant percentage of rga1Δ daughter cells exhibited Cdc42-GTP polarization persistently at the distal pole. One caveat is that our imaging was not fast enough to capture transient localization to the proximal pole in the third pattern (Fig. 4B). Nonetheless, these observations indicate that the dynamics of Cdc42-GTP in rga1Δ cells is different from that in wild type. Rga1 might thus have a unique role in Cdc42 polarization in diploid cells in addition to its role in clearing Cdc42-GTP at the division site (see below). Bud8 is Necessary for Polarization of Cdc42-GTP to the Distal Pole in Diploid rga1Δ Daughter Cells Since Bud8 functions as a distal-pole marker important for the normal bipolar budding pattern [20], we wondered whether the persistent distal-pole localization of Cdc42-GTP in the rga1Δ daughter cells is dependent on Bud8. Alternatively, Cdc42-GTP might be polarized to the distal pole independently of Bud8, as seen in the distal-pole budding of the rsr1 mutant during haploid invasive growth [45]. To distinguish these possibilities, we examined Gic2-PBD-RFP localization in cells lacking both RGA1 and BUD8 by time-lapse microscopy. While Gic2-PBD-RFP still localized to the periphery of growing buds prior to cytokinesis in the rga1Δ bud8Δ cells, it always localized to the proximal pole during cytokinesis and remained at the proximal pole in the rga1Δ bud8Δ cells (100%, n = 8 movies) (Fig. 5, top panel; Movie S4). This observation indicates that Bud8 functions as a spatial cue for the enrichment of Cdc42-GTP at the distal pole of the rga1Δ daughter cells as in wild-type cells. Interestingly, Gic2-PBD-RFP localized to a site within the old Cdc3 ring (i.e., within the birth scar) in the rga1Δ bud8Δ daughter cells. In contrast, Gic2-PBD-RFP localized to the division site at cytokinesis but subsequently to a site adjacent to the old Cdc3 ring in bud8Δ cells (100%, n = 7 movies) (Fig. 5, bottom panel; Movie S5). These Cdc42-GTP polarization patterns are thus consistent with the first bud positions in daughter cells of these mutants (see Fig. 3B). Why was the persistent enrichment of Cdc42-GTP at the distal pole observed only in rga1Δ daughter cells (see Fig. 4B)? How might Rga1 control Cdc42-GTP polarization? At the early phase of the cell cycle, most growth is targeted to the tip of the bud in budding yeast. This 'apical' growth is switched to 'isotropic' growth in the G2 phase, during which growth is distributed diffusely within the bud, and then cells are repolarized at the site of cytokinesis [46]. It has been suggested that apical growth and
repolarization during cytokinesis are critical for establishing spatial cues at the distal and proximal poles, respectively, and thus for the subsequent positioning of the division plane in diploid cells [14]. The rga1Δ cells have an elongated bud morphology [47,48,49,50], suggesting a delay in the transition from apical to isotropic growth. We thus speculated that the prolonged apical growth of the rga1Δ mutant might result in more efficient delivery of a distal-pole marker such as Bud8 to the distal pole. To test this idea, we examined Bud8 localization. Bud8 localized to the bud tip of growing buds and to the distal pole of wild-type daughter cells after division, as previously reported [20]. A significant percentage of large-budded cells also exhibited Bud8-GFP localization at both bud tips and the bud side of the mother-bud neck, although the latter was often weaker [19,20,51]. Interestingly, more large-budded rga1Δ cells exhibited Bud8-GFP localization to the bud tip (46.2 ± 1.5%, n = 165) compared to wild type (34.5 ± 0.1%, n = 174) (Fig. 6), and this difference appeared to be statistically significant (p = 0.006). Bud8-GFP often appeared to be confined to the extreme bud tip in these rga1Δ cells (Fig. 6). A minor difference in Bud8 localization was also observed in unbudded cells of rga1Δ compared to wild type (data not shown). These observations are thus consistent with the idea that Bud8 is more efficiently targeted to the bud tip (which becomes the distal pole of daughter cells) in rga1Δ cells, perhaps due to longer apical growth. However, it is unclear whether this different pattern of Bud8 localization solely accounts for the persistent Cdc42-GTP polarization to the distal pole of rga1Δ cells. Indeed, we observed robust Cdc42-GTP polarization at the bud tip in large-budded cells of the bud8Δ rga1Δ mutant until cytokinesis (and even in bud8Δ cells, although Gic2-PBD-RFP appeared more broadly at the periphery of the buds in these cells) (see Fig. 5), suggesting that this Cdc42-GTP polarization prior to cytokinesis is independent of Bud8. Rga1 might also affect the targeting of Bud9 to the proximal pole or of a component of the polarisome such as Spa2 or Ste20 at the bud tip [2,14], which might affect Cdc42-GTP polarization prior to cytokinesis via a feedback mechanism (see below). Further investigation is necessary to understand the underlying mechanism involved in polarized growth and selection of a growth site in diploids. [Figure 7 legend (beginning truncated): ... as in Fig. 2 (wild type), but the GTP hydrolysis rate of Cdc42 in the rga1Δ mutant is assumed to be about the same along the perimeter; see parameters in Table 2. Ab-Ac. Spatiotemporal dynamics of Cdc42-GTP leading to budding at (b) the proximal pole or (c) the distal pole in rga1Δ daughter cells; the horizontal axis represents the time window from 0 to 10 min, and the 2D steady-state distribution of Cdc42-GTP is displayed to the right of each simulation. B. Spatiotemporal dynamics of Cdc42-GTP in diploid bud8Δ (top) and bud9Δ (bottom) mutants; the horizontal axis represents the time window from 0 to 20 min, with the 2D steady-state distribution shown to the right of each simulation. Note: Cdc42-GTP became polarized at a site adjacent to the center of the proximal pole in bud8Δ (see 20 min time point), unlike in rga1Δ (see Fig. 7A, b). doi:10.1371/journal.pone.0056665.g007]
Modeling Predicts Different Dynamics of Cdc42 Polarization Depending on the Levels of Spatial Cues as well as the GTP Hydrolysis of Cdc42 We then asked whether our mathematical modeling could account for these different types of Cdc42-GTP dynamics in the absence of a Cdc42 GAP or of the spatial cues. In the absence of Rga1, the GTP hydrolysis rate k_d would be spatially uniform; i.e., Cdc42 activity would no longer be inhibited at the proximal pole (Fig. 7A, a). We thus expected that Cdc42 would be able to form a cluster at both the proximal and distal poles marked by the landmarks and that its subsequent dynamics would be determined by the initial level of Cdc42-GTP at the division site and by the landmark cues, which are subject to random perturbation. Our simulation showed that rga1Δ daughter cells could indeed bud at both poles. First, a high initial localization of Cdc42-GTP inhibited the formation of a cluster at the distal pole, so that only one cluster, at the proximal pole, formed during the entire process, leading to proximal-pole budding (Fig. 7A, b). Second, if the initial localization of Cdc42-GTP to the division site was reduced and the level of the distal-pole landmark was increased, the Cdc42-GTP cluster eventually formed at the distal pole (Fig. 7A, c). Interestingly, the time window during which Cdc42-GTP localized to both the proximal and distal poles changed depending on the initial level of Cdc42-GTP at the division site and the strength of the landmark cues. Within a parameter range in which the strength of the landmark cue at the proximal pole is slightly less than that at the distal pole (see Table 2), Cdc42-GTP localization coexisted at both poles for a substantial time window (Fig. 7Ac, top). However, when the ratio of the strength of the landmark cue at the distal pole to that at the proximal pole increases beyond that range, Cdc42-GTP localization to the proximal pole was barely detectable (Fig. 7Ac, bottom), and this scenario would account for the persistent distal-pole localization of Cdc42-GTP observed in over 50% of the rga1Δ daughter cells (see above). Our simulations thus predict that a relatively higher landmark at the distal pole, or a lower landmark at the proximal pole, in the absence of Rga1 might result in persistent distal-pole budding. [Table 3 (partial): Yeast strains used in this study. YEF473*, a/α his3-Δ200/his3-Δ200 leu2-Δ1/leu2-Δ1 lys2-801/lys2-801 trp1-Δ63/trp1-Δ63 ura3-52/ura3-52 [54]; YEF1233*, a/α rga1Δ::HIS3/rga1Δ::HIS3 [34]; YHH415*, a/α bud8-Δ1::TRP1/bud8-Δ1::TRP1.] These different patterns may thus arise from natural variations in the efficiency of delivery of these cues to the poles; in other words, the level of the landmark cue or the initial level of Cdc42-GTP in our model may be subject to substantial perturbation, so that the parameters could fall in various ranges. While the exact mechanism remains unknown, a negative feedback loop involving Rga1 might be involved to buffer the level of Cdc42-GTP and thus to stop the polarity cluster from growing too large, as recently suggested by Howell et al. [28]. Next, we asked whether our modeling could recapitulate the behavior of bud8Δ and bud9Δ mutants, which bud exclusively at the proximal and distal poles, respectively [12]. We used similar parameters except that the landmark cue [cue](x) is high only at either the proximal or the distal pole in bud8Δ or bud9Δ, respectively. Our simulations indeed indicated that Cdc42-GTP polarized to the proximal pole in a bud8Δ mutant (Fig. 7B, top) and to the distal pole in a bud9Δ mutant (Fig. 7B, bottom).
It is noteworthy that Cdc42-GTP polarization eventually developed at a site adjacent to the division site in bud8Δ, unlike in rga1Δ, consistent with the bud position in the daughter cells of these mutants (see Figs. 3B & 5). Taken together, our computational modeling indicated different dynamics of Cdc42-GTP polarization when the levels of the landmark and of Cdc42-GTP were perturbed by noise in the model. In summary, our mathematical modeling with a limited number of parameters predicted the dynamics of Cdc42-GTP polarization, which accounts for robust distal-pole budding in diploid daughter cells. Live-cell imaging indicates that distal-pole budding was dependent on Bud8 and on GTP hydrolysis of Cdc42 by Rga1. While further investigation is necessary to fully understand the underlying mechanism, this study suggests that a Cdc42 GAP, and not only the distal- and proximal-pole markers, affects the dynamics of Cdc42 polarization, contributing to the selection of a growth site in diploid daughter cells. A Mathematical Model of Cdc42 Polarization in Response to the Landmark Cues in Diploid Daughter Cells The dynamics of Cdc42-GTP and Cdc42-GDP on the cell membrane, with their particle densities denoted by [C42T] and [C42D], respectively, can be described by the reaction-diffusion equations ∂[C42T]/∂t = D_m ∇²[C42T] + F([cue],[C42T])·[C42D] − k_d(x)·[C42T] [1] and ∂[C42D]/∂t = D_m ∇²[C42D] − F([cue],[C42T])·[C42D] + k_d(x)·[C42T] + k_R([cue])·(1 − [C42T]_avg − [C42D]_avg) − k_off·[C42D] [2]. The terms D_m ∇²[C42D] and D_m ∇²[C42T] represent the surface lateral diffusion of Cdc42-GDP and Cdc42-GTP on the cell membrane, with ∇² being the surface diffusion Laplacian operator and D_m the diffusion rate. The level of the landmark cues ([cue]) is a function of the angle x, which parameterizes the membrane periphery (0° ≤ x ≤ 360°) from the distal pole: [cue](x) = [C_0 + C_1·G_180(x) + C_2·G_0(x)]·(1 + 0.2·δ_c(x,t)), where G_180(x) and G_0(x) are Gaussian-shaped profiles of the form exp(−((x − x_0)/15)²) (wrapped to respect the periodicity of x) centered at the proximal pole (x_0 = 180°) and at the distal pole (x_0 = 0°/360°), respectively, and δ_c(x,t) is a random variable drawn from the standard normal distribution to model the fluctuations from the natural background. We remark here that, in the absence of random fluctuation, [cue](x) is a function with basal level C_0 and two peaks, with maximal levels C_1 and C_2, at the proximal and distal poles, respectively. Other choices of the functional form with the same property will lead to similar results. The Bem1-mediated feedback, implemented by the activation rate F, takes the form F([cue],[C42T]) = k_on·([cue] + [C42T]²/K²) / (1 + (1/|M|)·∫_M ([cue] + [C42T]²/K²) dx) [3]. In equation [3], |M| denotes the total area of the membrane surface and the integral is taken over the cell membrane M, while the denominator represents the conservation of the total amount of Bem1 complex. We assume that the dynamics of the Bem1 complex is much faster than that of Cdc42. We obtain the particle density of the Bem1 complex at every time t by considering the quasi-steady-state solution for the particle density of the Bem1 complex, which is equal to the term multiplying k_on in equation [3]. The detailed derivation can be found in the next section. In equations [1] and [2], the parameter k_d represents the inactivation rate of Cdc42 from the GTP- to the GDP-bound state, which is space-dependent because it varies with the level of the Cdc42 GAPs. We define it to be of the following form, with a higher level at the proximal pole (at 180°): k_d(x) = (k_dH − k_dL)·G_180(x) + k_dL. The parameters k_dH and k_dL are the maximal and minimal inactivation rates. Fig. 2Bb shows the spatial distribution of [cue] and the GTP hydrolysis rate in wild-type a/α daughter cells, in which k_dH is assumed to be much larger than k_dL. In Fig. 7A, k_dH is taken to be equal to k_dL, and therefore k_d appears constant.
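To make the spatial ingredients of the model concrete, the short Python sketch below implements the landmark-cue profile, the GAP-dependent inactivation rate and the Bem1-feedback activation rate described above on a discretized periodic membrane. It is a minimal illustration written for this summary, not the authors' code: the Gaussian widths, the default parameter values not quoted in the text, and the helper names (wrapped_gauss, cue_profile, kd_profile, activation_rate) are assumptions.

import numpy as np

# Minimal sketch (not the authors' code): spatial profiles on a 1D periodic
# membrane parameterized by the angle x in degrees, following Eqs. [1]-[3].
# Gaussian widths and default values are illustrative assumptions.

x = np.linspace(0.0, 360.0, 360, endpoint=False)   # membrane angle grid (degrees)

def wrapped_gauss(x, center, width=15.0):
    """Gaussian bump on the periodic domain [0, 360)."""
    d = np.minimum(np.abs(x - center), 360.0 - np.abs(x - center))
    return np.exp(-(d / width) ** 2)

def cue_profile(x, C0=0.1, C1=0.2, C2=0.25, noise=0.0, rng=None):
    """Landmark cue: basal level C0 plus peaks C1 (proximal, 180 deg) and C2 (distal, 0 deg)."""
    base = C0 + C1 * wrapped_gauss(x, 180.0) + C2 * wrapped_gauss(x, 0.0)
    if noise and rng is not None:
        base = base * (1.0 + noise * rng.standard_normal(x.size))
    return base

def kd_profile(x, kdH=2.0, kdL=1.0):
    """GAP-dependent inactivation rate, peaked at the proximal pole; kdH = kdL mimics rga1-delta."""
    return (kdH - kdL) * wrapped_gauss(x, 180.0) + kdL

def activation_rate(cue, c42t, k_on=0.1, K=1.0):
    """Bem1-mediated feedback F of Eq. [3]; the mean over x plays the role of the membrane integral."""
    num = cue + (c42t / K) ** 2
    return k_on * num / (1.0 + num.mean())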
In equation [2], [C42T]_avg and [C42D]_avg respectively represent the average amounts of [C42T] and [C42D] over the membrane, that is, the integrals of [C42T] and [C42D] over the cell membrane divided by the cell surface area. Thus, the recruitment of Cdc42 from the cytoplasm to the membrane is modeled by k_R([cue])·(1 − [C42T]_avg − [C42D]_avg), where k_R([cue]) is the landmark-signal-dependent coefficient and (1 − [C42T]_avg − [C42D]_avg) stands for the fraction of cytoplasmic Cdc42. We remark here that, to ensure that (1 − [C42T]_avg − [C42D]_avg) lies between 0 and 1 so as to represent a fraction, the initial value of [C42T]_avg + [C42D]_avg needs to be less than 1, which is true with the initial conditions and the associated parameters used in our simulations. Here we also assume that Cdc42 is uniformly distributed throughout the cytoplasm because cytoplasmic Cdc42 diffuses fast enough to reach a homogeneous state. In our simulations, we define the spatial-cue-dependent parameter k_R([cue]) to be a function positively correlated with the function [cue], so that it has a spatial profile similar to that of the landmark cue. We choose to use the Michaelis-Menten form with power 1, k_R([cue]) = k_Rec·[cue]/(K_R + [cue]); however, other functional forms of k_R, if properly scaled, can also produce the same results. The parameter k_off stands for the rate at which membrane-bound Cdc42-GDP is extracted into the cytoplasm. This extraction of Cdc42-GDP away from the membrane is GDI-mediated, thus counteracting the recruitment of Cdc42. For the initial values of our simulations, we assume that initially Cdc42-GDP is constant and Cdc42-GTP is localized at the proximal pole of the cell, both of them with a 20% perturbation from their basal levels. The initial values of [C42D] and [C42T] are defined as follows: [C42D](x,0) = A_0·(1 + 0.2·δ_a(x)) and [C42T](x,0) = [A_2 + (A_1 − A_2)·G_180(x)]·(1 + 0.2·δ_b(x)), where δ_a(x) and δ_b(x) are random variables from a uniform distribution between 0 and 1, A_0 is the basal level of Cdc42-GDP, and A_1 and A_2 are the basal maximal and minimal levels of Cdc42-GTP. All the above parameters are listed in Tables 1 and 2. Derivation of Equation [3] Let C(x,t) denote the particle density of the Bem1 complex on the cell membrane, which is governed by ∂C/∂t = a·([C42T]² + Q)·(1 − Ĉ) − b·C [4], where a and b are constant parameters; a·([C42T]² + Q) is the recruitment rate of Cdc24 from the cytoplasm to the membrane, which depends on the particle density of Cdc42; Q is the spatial function representing the level of stimulation by the landmark cue, and aQ represents the basal recruitment rate controlled by the landmark cue; (1 − Ĉ) is the fraction of cytoplasmic Cdc24; bC is the dissociation rate of C from the membrane to the cytoplasm; and Ĉ = ∫_M C dx / |M| represents the average value of C over the membrane. The dynamics of the Bem1 complex is much faster than that of Cdc42. In the Cdc42 system, we obtain the particle density of the Bem1 complex at every time t by solving for the quasi-steady-state solution of equation [4]. By setting the right-hand side of [4] to zero, the steady-state equation of [4] can be written as follows: C = (a/b)·([C42T]² + Q)·(1 − Ĉ) [5]. By taking the average value of the right-hand side of [5] over the membrane (integrating over the membrane and then dividing by the area of the membrane), we have Ĉ = (a/b)·(1 − Ĉ)·(1/|M|)·∫_M ([C42T]² + Q) dx, which leads to 1 − Ĉ = 1 / (1 + (a/b)·(1/|M|)·∫_M ([C42T]² + Q) dx) [6]. By substituting [6] into [5], we can obtain C in terms of [C42T]: C = (a/b)·([C42T]² + Q) / (1 + (a/b)·(1/|M|)·∫_M ([C42T]² + Q) dx). By defining [cue] = aQ/b and K = sqrt(b/a), C can be rewritten in the form
C = ([cue] + [C42T]²/K²) / (1 + (1/|M|)·∫_M ([cue] + [C42T]²/K²) dx). If we assume that the activation rate of Cdc42 is proportional to the quasi-steady-state solution for the particle density of the Bem1 complex and define [cue] = aQ/b, the form of the activation rate of Cdc42 will be F([cue],[C42T]) = k_on·([cue] + [C42T]²/K²) / (1 + (1/|M|)·∫_M ([cue] + [C42T]²/K²) dx), as in equation [3]. Parameter Estimation For simplicity, we considered a diploid daughter cell as a 4 μm-diameter circle, since daughter cells are generally smaller than mother cells, which are typically 5 × 6 μm ellipsoids. For a yeast cell of radius R ≈ 2 μm, the membrane diffusion coefficient of Cdc42 is estimated to be D_m ≈ 0.001·(2πR)² min⁻¹ ≈ 0.15 μm² min⁻¹ [29]. According to [29], we estimate the off-rate from membrane to cytoplasm to be k_off ≈ 9 min⁻¹. Yeast cell polarization is mainly achieved by Bem1-mediated positive feedback, and the landmark cue serves only as an initial tracker for polarization, so the level of the landmark cue should be small compared with the feedback strength. Here we took the basal level of the landmark cue to be C_0 = 0.1 and C_1, C_2 = 0.15-0.25, which were small compared with the feedback strength we observed in simulations. The recruitment rate was estimated to be k_R([cue]) ≈ 10 min⁻¹ [29]. According to the definition of k_R([cue]), we took k_Rec = 20 min⁻¹ and K_R = 0.1. For the activation rate coefficient of Cdc42, we took k_on = 0.1 min⁻¹ [30]. We assumed that the number of Cdc42-GTP molecules on the membrane is much smaller than the total number of Cdc42 molecules, and thus we took the inactivation rate coefficient k_d of Cdc42 to be between 10·k_on = 1 min⁻¹ and 20·k_on = 2 min⁻¹. Numerical Method for Simulations The simulations used a second-order central difference approximation for the diffusion terms, and the temporal discretization was carried out using a fourth-order Adams-Moulton predictor-corrector method. FORTRAN 77 was used for the simulations shown in Figures 2 and 7, and plots were generated using MATLAB 7. Strains, Plasmids and Genetic Methods Standard methods of yeast genetics, DNA manipulation, and growth conditions were used [52,53] unless indicated otherwise. Plasmids YIp211-GIC2-PBD-1.5tdTomato and YIp128-CDC3-GFP (kindly provided by E. Bi, University of Pennsylvania) were used to construct strains expressing Gic2-PBD-RFP and Cdc3-GFP, respectively, as previously described [34]. Plasmids pRS314-HO and YCp50-HO (from the Park lab collection), which carry the HO gene, were used to generate a/α diploids. See Table 3 for a list of strains used in this study. Determination of the Budding Pattern and Localization of Bud8 To determine budding patterns, cells were spotted on a YPD plate after a brief sonication and then the position of each bud was monitored under a dissecting microscope at 25 °C. For time-lapse imaging by DIC microscopy, cells were grown similarly, spotted on a slab of YPD medium containing 1% agarose, and then imaged using a Nikon E800 microscope (Nikon, Tokyo, Japan) fitted with a 100× oil-immersion objective (NA = 1.30), a Hamamatsu ORCA-2 CCD camera (Hamamatsu Photonics, Bridgewater, NJ) and Slidebook software (Intelligent Imaging Innovations, Denver, CO) at 25 °C. Localization of Bud8 was examined as previously described [19] using YEpGFP-BUD8F [16]. 3D Time-lapse Microscopy To visualize GFP- and RFP-fusion proteins, a slab of SC-Ura was prepared as above using exponentially growing cells in SC-Ura media.
Images were captured at 23-24 °C every 2 min using a spinning disk confocal microscope (UltraView ERS, Perkin Elmer Life and Analytical Sciences, Waltham, MA) equipped with a 100×/1.4 NA objective lens (Nikon, Melville, NY), a 488-nm solid-state laser and a 568-nm argon ion laser, and a cooled charge-coupled device camera (ORCA-AG, Hamamatsu, Bridgewater, NJ). Maximum intensity projections of Z-sections (spaced at 0.4-0.5 μm) were generated using UltraView ERS software. All time-point images are shown in Movies S1, S2, S3, S4 and S5, and selected time-point images are shown in Figs. 1, 4, and 5.
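For readers who wish to reproduce the qualitative behavior of the model, the following Python sketch integrates Eqs. [1] and [2] on a periodic 1D grid. It is a simplified stand-in for the original FORTRAN 77/Adams-Moulton implementation: it uses a forward Euler time step with second-order central differences, the Gaussian cue and k_d profiles reconstructed above, and illustrative values for any parameter not quoted in the Parameter Estimation section; the time-dependent noise on the cue is omitted for clarity.

import numpy as np

# Simplified integration of Eqs. [1]-[2] (not the authors' FORTRAN 77 code).
# Forward Euler in time, second-order central differences in space, periodic domain.
# Stated parameters follow the Parameter Estimation section where available
# (D_m ~ 0.15 um^2/min, k_off ~ 9/min, k_on ~ 0.1/min, k_Rec = 20/min, K_R = 0.1,
# C0 = 0.1, C1/C2 ~ 0.15-0.25, k_d between 1 and 2/min); the rest are assumptions.

N = 360                       # grid points around the membrane cross-section
R = 2.0                       # cell radius, um
dx = 2 * np.pi * R / N        # arc-length spacing, um
x_deg = np.arange(N) * 360.0 / N
dt = 0.001                    # time step, min
T_end = 20.0                  # total simulated time, min
rng = np.random.default_rng(0)

def bump(center_deg, width_deg=15.0):
    d = np.minimum(np.abs(x_deg - center_deg), 360.0 - np.abs(x_deg - center_deg))
    return np.exp(-(d / width_deg) ** 2)

cue = 0.1 + 0.2 * bump(180.0) + 0.25 * bump(0.0)             # landmark cue (static part)
k_d = (2.0 - 1.0) * bump(180.0) + 1.0                        # GAP-dependent inactivation
k_R = 20.0 * cue / (0.1 + cue)                               # Michaelis-Menten recruitment
D_m, k_off, k_on, K = 0.15, 9.0, 0.1, 1.0

def laplacian(u):
    return (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx ** 2

# Initial conditions: uniform Cdc42-GDP, Cdc42-GTP enriched at the proximal pole,
# each with a 20% random perturbation (the basal levels A0, A1, A2 are illustrative).
c42d = 0.2 * (1 + 0.2 * rng.random(N))
c42t = (0.01 + 0.09 * bump(180.0)) * (1 + 0.2 * rng.random(N))

for step in range(int(T_end / dt)):
    feed = cue + (c42t / K) ** 2
    F = k_on * feed / (1.0 + feed.mean())                    # Bem1 feedback, Eq. [3]
    recruit = k_R * (1.0 - c42t.mean() - c42d.mean())        # recruitment from cytoplasm
    dT = D_m * laplacian(c42t) + F * c42d - k_d * c42t
    dD = D_m * laplacian(c42d) - F * c42d + k_d * c42t + recruit - k_off * c42d
    c42t += dt * dT
    c42d += dt * dD

print("Cdc42-GTP peak angle (deg):", x_deg[np.argmax(c42t)])

Varying k_dH relative to k_dL (for example, setting them equal to mimic rga1Δ) and the relative cue strengths allows one to explore the competition between the two poles discussed above.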
8,650.6
2013-02-20T00:00:00.000
[ "Biology", "Mathematics" ]
Stimuli-Responsive Photonic Crystals : Recently, tunable photonic crystals (PhCs) have received great research interest, thanks to the wide range of applications in which they can be employed, such as light emission and sensing, among others. In addition, the versatility and ease of fabrication of PhCs allow for the integration of a large range of responsive elements that, in turn, can permit active tuning of PhC optical properties upon application of external stimuli, e.g., physical, chemical or even biological triggers. In this work, we summarize the most employed theoretical tools used for the design of optical properties of responsive PhCs and the most used fabrication techniques. Furthermore, we collect the most relevant results related to this field, with particular emphasis on electrochromic devices. Introduction Photonic crystals (PhCs) represent versatile building blocks in optics, although they are mostly used as passive optical elements. In these systems, the periodic arrangement of materials with different refractive indices gives rise to the so-called photonic band gap (PBG) and, thus, to structural coloration [1,2]. In this context, however, anything that can interfere with either the periodicity or the refractive index contrast can be translated into an active modulation of the PBG. This in fact enables the application of PhCs as active optical elements. From conceptualization to fabrication, the simplest photonic crystal is represented by a multilayer that alternates materials with different refractive indices [3,4]. From an application point of view, multilayer photonic crystals, also known as distributed Bragg reflectors (DBRs) or Bragg stacks (BSs), have been used as resonators for distributed feedback lasers [5,6], smart dielectric layers for light-emitting transistors [7] and light-induced tunable filters [8], among others. In this paper, we will review the latest advancements in the field of responsive 1D PhCs, with particular attention to electro-responsive systems. In particular, this paper will discuss both the theoretical methods that are commonly used to predict the optical properties of 1D PhCs, as well as the most employed fabrication techniques. In addition, we will describe the different approaches utilized to achieve active tuning of the photonic band gap. Theoretical Background First, we consider a multilayer of materials deposited on a substrate on one side (with refractive index n_s) and in contact with air (with refractive index n_0 = 1.000277 ≈ 1) on the other side. The transfer matrix method is exhaustively explained in the literature, such as in reference [9]. The transfer matrix for the kth layer is given by [9]: M_k = [[cos φ_k, −(i/p_k)·sin φ_k], [−i·p_k·sin φ_k, cos φ_k]] (1), with φ_k = (2π/λ)·n_k·d_k·cos α_k being the light phase variation on passing through the kth layer, n_k the refractive index, d_k the thickness of the layer and cos α_k the parameter that takes into account the light beam propagating through the layer with refractive index n_k, related to the angle of incidence ϑ_0 of the light on the structure (as displayed in Figure 1a) through Snell's law, n_0·sin ϑ_0 = n_k·sin α_k. [Figure 1: (a) Sketch of the one-dimensional photonic crystal (ϑ_0 is the angle of incidence of light). (b, c) Simulation of the angle-dependent transmission spectrum of a five-bilayer TiO2-SiO2 nanoparticle-based photonic crystal for a transverse electric (TE) wave (b) and a transverse magnetic (TM) wave (c); for both nanoparticle layers, the filling factor is 0.7.]
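As a concrete illustration of Eq. (1), the short Python sketch below builds the characteristic matrix of each layer and cascades the matrices to estimate the transmission of a stack at normal incidence (where p_k reduces to n_k). It is a generic transfer-matrix example written for this summary rather than code from the cited references, and the layer indices and thicknesses are placeholder values.

import numpy as np

# Minimal transfer-matrix sketch for a 1D multilayer at normal incidence.
# Placeholder indices/thicknesses; p_k = n_k * cos(alpha_k) = n_k at normal incidence.

def layer_matrix(n, d, lam):
    """Characteristic matrix M_k of Eq. (1) for one layer (normal incidence)."""
    phi = 2 * np.pi * n * d / lam
    p = n
    return np.array([[np.cos(phi), -1j * np.sin(phi) / p],
                     [-1j * p * np.sin(phi), np.cos(phi)]])

def transmittance(layers, lam, n0=1.0, ns=1.45):
    """Transmittance of the stack between air (n0) and a substrate (ns)."""
    M = np.eye(2, dtype=complex)
    for n, d in layers:
        M = M @ layer_matrix(n, d, lam)
    m11, m12 = M[0]
    m21, m22 = M[1]
    t = 2 * n0 / ((m11 + m12 * ns) * n0 + (m21 + m22 * ns))
    return (ns / n0) * abs(t) ** 2

# Example: five high/low-index bilayers designed as a quarter-wave stack at 550 nm.
n_hi, n_lo, lam0 = 2.1, 1.45, 550e-9
stack = [(n_hi, lam0 / (4 * n_hi)), (n_lo, lam0 / (4 * n_lo))] * 5
for lam in np.linspace(400e-9, 800e-9, 5):
    print(f"{lam*1e9:.0f} nm  T = {transmittance(stack, lam):.3f}")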
Finally, to calculate the light transmission of the multilayer photonic crystal, we can use the total transfer matrix of the stack, M = M_1·M_2···M_N, whose elements m_ij give the transmission coefficient t = 2·p_0 / [(m_11 + m_12·p_s)·p_0 + (m_21 + m_22·p_s)] and hence the transmittance T = (p_s/p_0)·|t|², where the subscripts 0 and s refer to the incidence medium (air) and the substrate, respectively. If, in the spectral range of interest, the refractive index does not show a significant wavelength dependence, its value can essentially be considered constant. Otherwise, we should consider the refractive index as a function of the wavelength. The refractive indices of many materials are reported in the literature [10] and can be expressed with a Sellmeier equation of the form n²(λ) = 1 + Σ_i B_i·λ²/(λ² − C_i); for example, the three-term Sellmeier coefficients for silicon dioxide, which is widely employed for the fabrication of one-dimensional photonic crystals, are given in [11]. For porous materials, we can determine the refractive index by employing the Maxwell-Garnett effective medium approximation [12,13]: (ε_eff − ε_air)/(ε_eff + 2·ε_air) = f·(ε_material − ε_air)/(ε_material + 2·ε_air), where ε_material is the dielectric constant of the material, ε_air is the dielectric constant of air and f is the filling factor (the effective refractive index then follows as n_eff = sqrt(ε_eff)). In Figure 1, we show the simulation of the angle-dependent transmission of a 1D photonic crystal consisting of the alternation of five bilayers of TiO2-SiO2 nanoparticles (filling factor = 0.7). The wavelength-dependent refractive index of TiO2 is given by [14]. From these simulations, it is also possible to determine the electric field at the kth interface [15]. Fabrication Techniques Photonic crystals can be fabricated by following two main approaches: (1) top-down methods; and (2) bottom-up methods. In particular, bottom-up techniques can be more conveniently used on the laboratory scale, while top-down approaches rely on the use of microfabrication methods permitting the development of microstructures with selected size and shape from bulk materials [16,17]. Besides these advantages, both of them also show some disadvantages. Bottom-up methods usually suffer from a relatively low throughput, whereas top-down techniques require substantial initial investment in terms of money and person hours for dedicated setups. For these reasons, it is thus essential to select the most suitable approach according to the desired goal. Self-assembly techniques are surely the most used bottom-up methods, combining building blocks such as nanoscale structures (e.g., nanoparticles) or block copolymers. These techniques are particularly suitable for the fabrication of responsive photonic crystals as, in this way, one can combine different unitary structures and materials to integrate different functionalities in a single photonic device. A list of the most used techniques is reported in Table 1. Active Tuning of the Photonic Band Gap Any stimulus that can modify either the periodicity or the refractive index contrast (or both) of the PhC can lead to a shift of the PBG, according to the Bragg-Snell law, m·λ_max = 2·d·(n_eff² − sin²θ)^(1/2), where λ_max is the wavelength of the maximum reflection (photonic band gap) peak, d is the lattice constant, m is the order of diffraction, n_eff is the effective refractive index and θ is the angle of incidence of the light with respect to the PhC [53]. It is noteworthy that the effective refractive index can be determined with different approaches [54]. A large number of external stimuli that are able to modulate the PBG are reported in the literature.
Main examples include chemical, thermal, magnetic, biological, mechanical, light and electrical stimuli. Table 2 summarizes the main stimuli and tuning mechanisms mentioned in this article. Chemical Stimuli One of the most used methods to tune the PBG via chemical means relies on the interaction between a soft structure (e.g., a hydrogel) and a given chemical species. For instance, the interaction with ions can generate swelling or shrinkage of the soft structure, which in turn changes the geometrical features of the PhC and leads to a shift of the photonic band gap [45,61,96-98]. Furthermore, this approach can be used for the detection of H+ ions, and thus for building pH sensors [45,60,62,99,100]. Another class of chemically tuned PhCs consists of the integration of porous materials in the PhC, in which the refractive index modulation is given by the infiltration of vapors [30,55,56,101] or solvents [20,29,57-59,102-104]. Wang et al. fabricated 1D photonic crystals alternating films of poly(methyl methacrylate-co-hydroxyethyl methacrylate-co-ethylene glycol dimethacrylate) (PMMA-co-PHEMA-co-PEGDMA) and titania nanoparticles by spin-coating, whose structural color drastically changes when immersed in different solvents (Figure 2a,b) [32]. Thermal and Magnetic Stimuli Photonic crystals fabricated using thermoresponsive materials like polymers or colloidal dispersions can be easily tuned through the application of a temperature gradient [48,62,63,105,106]. Chunfang et al. fabricated a SiO2 PhC and infiltrated the pores with a thermo-sensitive poly(N-isopropylacrylamide) (PNIPAM) hydrogel. The thermal variation generates a blue shift of the photonic band gap and exhibits a reversible response in the range from 24 °C to 31 °C [64]. Magnetically responsive PhCs are usually fabricated by integrating magnetic nanoparticles in the structure [16]. Herein, an external magnetic field interacts with the active material and changes its optical properties, orienting it according to the direction of the field [65-69,107-109]. Ge et al. synthesized polyacrylate-capped superparamagnetic magnetite (Fe3O4) colloidal nanocrystal clusters (CNCs) with sizes from 30 to 180 nm. These clusters self-assembled into colloidal photonic crystals in solution. In Figure 2c,d, the change in the optical response is visible as the magnetic field is varied by controlling the distance between the sample and an NdFeB magnet [110]. Biological Stimuli Photonic crystals can be easily functionalized with appropriate recognition groups that allow for the detection of specific biomolecules. Due to the change in color, colorimetric detection is quick and easy [16,111]. These sensors change their optical properties not only when in contact with classical biomolecules like sugars, creatinine or glucose [40,112,113], but also larger ones such as DNA [42,43,70,71] and proteins [72-74,114]. Recently, silver has been integrated inside 1D PhCs in order to exploit its antibacterial properties and detect the presence of bacteria [27,75,115-117]. Paternò et al. fabricated a hybrid plasmonic-photonic device by applying a silver layer on top of a TiO2/SiO2 PhC. At the Ag/bacteria interface, there is a generation of polarization charges due to a "biodoping" mechanism. This triggers a change in the PBG of the sensor when exposed to Escherichia coli [27].
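All of the tuning routes above ultimately act on the lattice constant d or on the effective index n_eff entering the Bragg-Snell law. The following Python sketch, with purely illustrative parameter values rather than data from the cited studies, estimates how the band-gap peak moves when the pores of a nanoparticle multilayer are infiltrated by a solvent, combining the pore medium and the particles through the Maxwell-Garnett expression quoted earlier.

import numpy as np

# Illustrative estimate of the band gap shift from the Bragg-Snell law, with the
# pore medium folded in through the Maxwell-Garnett effective medium approximation.
# All parameter values are placeholders, not measurements from the cited works.

def mg_eff_eps(eps_incl, eps_host, f):
    """Effective permittivity: inclusions (filling factor f) embedded in a host medium."""
    A = f * (eps_incl - eps_host) / (eps_incl + 2 * eps_host)
    return eps_host * (1 + 2 * A) / (1 - A)

def bragg_peak_nm(d_nm, n_eff, theta_deg=0.0, m=1):
    """Bragg-Snell law: m * lambda_max = 2 d sqrt(n_eff^2 - sin^2 theta)."""
    s = np.sin(np.radians(theta_deg))
    return 2 * d_nm * np.sqrt(n_eff**2 - s**2) / m

f = 0.7                      # nanoparticle filling factor of each porous layer
d_hi, d_lo = 60.0, 90.0      # layer thicknesses in nm (placeholders)
for pore_n, label in [(1.0, "pores filled with air"), (1.36, "pores filled with solvent")]:
    n_hi = np.sqrt(mg_eff_eps(2.4**2, pore_n**2, f))    # TiO2-like particles
    n_lo = np.sqrt(mg_eff_eps(1.45**2, pore_n**2, f))   # SiO2-like particles
    # crude thickness-weighted effective index of one bilayer (a common simplification)
    n_eff = (n_hi * d_hi + n_lo * d_lo) / (d_hi + d_lo)
    print(f"{label}: lambda_max ~ {bragg_peak_nm(d_hi + d_lo, n_eff):.0f} nm")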
Mechanical Stimuli Mechanically tuned photonic crystals exploit the elastic properties of the constituent materials. In general, they are usually composed of an elastomeric matrix [17,39,44,47,51] that actively responds to mechanical stimuli. In this case, the mechanical force deforms the polymer, and this changes the periodicity of the lattice, thus changing the optical response [16]. Karrock et al. fabricated a linear grating with a 400 nm period made of a nanostructured polydimethylsiloxane membrane by nanoimprint replication. Subsequently, a high-refractive-index TiO2 nanoparticle layer was spin-coated on top, showing a guided-mode resonance. The elastomeric behavior of the membrane allows for a 20% elongation when subjected to stretching, varying the resonance peak position by up to ≈80 nm. In Figure 2e, it is possible to see the change of structural colors brought about by the mechanical deformation [31]. Light Stimuli Incorporating a photosensitive material [75,118-120] or a dye [77,121] inside a photonic crystal structure allows for tuning of its optical properties upon exposure to light stimuli. Light can change the refractive index or lead to a modification of the structural properties. For instance, PhCs can be used as a casing for liquid crystals, which can then be stimulated by light [50,51,76]. Paternò et al. fabricated an optically switchable SiO2/ITO 1D photonic crystal. Through a UV-light photodoping process, it is possible to tune the indium tin oxide (ITO) plasmonic response in the near-infrared range and translate the effect to the visible range, switching the optical properties of the device [8]. Electric Stimuli Electrically tunable photonic crystals represent an incredible opportunity for technological applications, ranging from colorful displays to sensitive claddings and electrochromic windows. Their electrotunability can be triggered in three ways: (1) reorientation of infiltrated liquid crystals; (2) an electrochemical process; and (3) electrophoretic forces in crystalline colloidal arrays. Liquid crystals (LCs) are a class of materials combining the properties of solid crystals and fluids. According to their alignment axis orientation, LCs assume a different distribution inside the liquid, switching from a randomly distributed phase (nematic) to an oriented phase (smectic or chiral). These materials possess different refractive indices along different directions, so by changing their phase it is possible to tune their optical properties. Thus, it is possible to tune their dielectric constant by applying an external electric field.
LCs are usually infiltrated inside a porous structure [78-82,122-124]. Criante et al. fabricated a porous silicon dioxide/zirconium dioxide 1D photonic crystal infiltrated with a nematic liquid crystal (Figure 3a). The device is tuned by applying an external electric field, thus changing the LC alignment and producing a blue shift of the peak of 8 nm at 8 V (Figure 3b) [83]. Electrochemically tuned photonic crystals consist of an electrochemical cell immersed in a liquid electrolyte. By applying an electric field, it is possible to activate an electrolytic process, which promotes an oxidation-reduction effect or an acid-base exchange. The stimulus produces an electrostatic repulsion; thus, the original structure undergoes a destabilization due to a localized charge variation. The consequent reorganization of the structure generates a geometric variation of the sample, which, in accordance with the Bragg-Snell law, causes a shift of the peak [46,84-86,125,126]. In the case of polymers, it is possible to incorporate an electro-responsive material inside the main chain, generating a swelling of the matrix under the application of the field [48,87,127-129]. Xiao et al. fabricated a WO3-based electrochromic PhC by a facile, reproducible, one-step, room-temperature glancing-angle electron-beam evaporation (GLAD) process. By changing the deposition angle, it is possible to obtain layers with different porosities corresponding to different refractive indices. The PhC is then immersed in a 1 M LiClO4 solution in propylene carbonate and subjected to an external electric field of −1.1 V (vs. Ag/AgCl). This leads to an electrochromic effect, as reported in Figure 3c. A gradual decrease of the reflectance and a shift of the reflection peak are attributed to colored LixWO3, which decreases the optical thickness (reduces the refractive index) and increases the light absorption (Figure 3d,e). When an anodic potential is applied (+1.1 V), the process is completely reversed [26]. In recent years, the integration of plasmonic nanoparticles in PhCs has attracted the interest of the scientific community. In these systems, quantized carrier oscillations generate localized surface plasmon resonances (LSPRs) that span over a wide range of wavelengths, depending mostly on the charge carrier density and the surrounding refractive environment. For instance, in heavily doped metal oxide nanocrystals the LSPR lies in the infrared (IR) region [130-132], as the charge carrier density is significantly lower compared with bulk materials (10^21 cm−3 and 10^23 cm−3, respectively). This allows for easy manipulation of this parameter and, hence, of the dielectric function upon application of an external electrochemical bias. In particular, by applying an electric field it is possible to induce capacitive depletion or accumulation and, consequently, a modulation of the optical properties [65,132,133]. For these reasons, photonic crystals fabricated with plasmonic materials [27,33,65,88,89,134] have emerged in the last decade. Heo et al. exploited these materials to manufacture 1D photonic crystals composed of alternating layers of WO3−x and indium tin oxide (ITO) nanocrystals. In this case, the selected materials show a very similar refractive index in the discharged state (2.19 for ITO and 2.1 for WO3−x in the bulk), while charging leads to a strong modification of the WO3−x refractive index, thus causing a change of the refractive index contrast (Figure 4a-c).
Interestingly, the same procedure can be used to deposit the photonic crystals on an ITO-coated flexible polyethylene terephthalate substrate [90]. Another approach relies on the immersion of electrotunable PhCs in liquid electrolytes, with the aim of increasing the migration of electrons and ions. Despite the relatively high electrotunability achieved in such devices, the electrolyte can lead to a degradation of the samples and restrict the possibilities for their application [26,90,135]. To address this problem, all-solid-state devices have been developed over the past decade in order to minimize such a detrimental effect [8,49,136]. For instance, we have recently proposed a 1D electrolyte-free photonic crystal, combining indium tin oxide nanoparticles with TiO2 nanoparticles on top of a fluorine-doped tin oxide substrate acting as an electrode. The structure was contacted with a top fluorine-doped tin oxide (FTO) substrate and clipped with a paper binder to ensure mechanical stability. By applying a bias to this circuit, charges accumulate at the doped semiconductor/TiO2 interface, leading to an increase of the charge carrier density and an increase of the plasma frequency, according to the equation ω_p = (N·e²/(ε_0·m*))^(1/2), where N is the carrier density, e is the electron charge, ε_0 is the dielectric constant under vacuum and m* is the effective mass [33].
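To give a feel for the magnitudes involved in this plasma-frequency tuning, the short Python sketch below evaluates ω_p for a few carrier densities typical of doped metal oxides. The densities and the effective mass are illustrative placeholders rather than values measured for the device described above.

import numpy as np

# Illustrative plasma-frequency estimate, omega_p = sqrt(N e^2 / (eps0 m*)).
# Carrier densities and effective mass are placeholder values for a doped metal oxide.

e = 1.602176634e-19        # elementary charge, C
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m
m_e = 9.1093837015e-31     # electron mass, kg
m_star = 0.4 * m_e         # assumed effective mass

def plasma_wavelength_nm(N_cm3):
    N = N_cm3 * 1e6                                   # cm^-3 -> m^-3
    omega_p = np.sqrt(N * e**2 / (eps0 * m_star))
    return 2 * np.pi * 2.99792458e8 / omega_p * 1e9   # corresponding free-space wavelength

for N in (5e20, 1e21, 2e21):                          # before/after charge accumulation
    print(f"N = {N:.1e} cm^-3  ->  plasma wavelength ~ {plasma_wavelength_nm(N):.0f} nm")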
Tunability of photonic crystals can also be achieved by means of electrophoretic forces, which arise when an external electric field is applied to a highly concentrated colloidal system. In this way, the particles are in dynamic equilibrium between the packing force and the electrostatic repulsive force, leading to a specific interparticle distance and, thus, to a specific optical signal. The applied field generates an electrophoretic force between the particles, which are forced to reorganize into a more stable structure. Finally, this lattice modification translates into a shift of the photonic band gap [41,92-95,137]. For instance, Chen et al. exploited this mechanism to fabricate an electric-field-assisted multicolor printing process (Figure 4d,e) based on electrically tunable and photocurable colloidal photonic crystals [91]. Conclusions In this review, we have summarized some notable examples of tunable and stimuli-responsive 1D photonic crystals, with particular emphasis on electrotunable devices. These systems, which can usually be fabricated using easy and low-cost processes, permit the conversion of an external stimulus into an easily recognizable optical response. Given these properties, they have attracted increasing attention from both the scientific community and industry, as they can be employed in a wide range of applications, such as displays, sensing and lighting. Conflicts of Interest: The authors declare no conflict of interest.
4,676
2021-02-27T00:00:00.000
[ "Materials Science", "Physics" ]
Effect of the Surface Charge on the Adsorption Capacity of Chromium(VI) of Iron Oxide Magnetic Nanoparticles Prepared by Microwave-Assisted Synthesis : Solid phase extraction using magnetic nanoparticles has represented a leap forward in terms of the improvement of water quality, preventing the contamination of industrial effluents from discharge in a more efficient and affordable way. In the present work, superparamagnetic iron oxide nanoparticles (MNP) with different surface charges are tested as nanosorbents for the removal of chromium(VI) in aqueous solution. Uniform magnetic nanoparticles (~12 nm) were synthesized by a microwave polyol-mediated method, and tetraethyl orthosilicate (TEOS) and (3-aminopropyl) triethoxysilane (APTES) were grafted onto their surface, providing a variation in the surface charge. The adsorptive process of chromium was evaluated as a function of the pH, the initial concentration of chromium and the contact time. Kinetic studies were best described by a pseudo-second-order model in all cases. TEOS@MNP barely removed the chromium from the media, while non-grafted particles and APTES@TEOS@MNP followed the Langmuir model, with maximum adsorption capacities of 15 and 35 mg Cr/g, respectively. The chromium adsorption capacities abruptly increased when the surface became positively charged, as the species coexisting at the experimental pH are negatively charged. Furthermore, these particles have proven to be highly efficient in water remediation due to their 100% reusability after more than six consecutive adsorption/desorption cycles. Introduction Many developed countries have decided to strengthen their environmental policies to minimize water pollution by regulating industrial activities regarding the discharge of hazardous chemicals, including heavy metals, as wastewater into the environment [1,2]. Heavy metals are considered persistent contaminants, and they cannot be easily degraded into harmless products [3]. Among others, Pb, As, Cd, Cu, Zn, Ni and Cr are the most hazardous. Chromium, in particular in its two stable oxidation states Cr(III) and Cr(VI), is one of the substances that poses a significant potential threat to human health due to its known toxicity in human exposure [4]. Common drinking water can be considered toxic when it contains more than 0.05 mg/L of Cr(VI), because this chromium state is found to be highly soluble and toxic. The chromates HCrO4− and Cr2O72− have been discharged over the years by many industrial activities in the fields of petroleum refining, electroplating, metal coating and batteries, among others [1,5,6]. In a common wastewater treatment process, the removal of these kinds of compounds takes place through chemical and physical treatments using conventional methods such as coagulation and flocculation, membrane separation, oxidation, adsorption and ionic exchange [7,8]. Specifically, for chromium removal, different techniques are already applied at a large scale, such as bioremediation, reduction by electrochemical and biological methods and adsorption using nanosorbents such as carbon-based materials [9,10]. The last method, solid phase extraction, seems to be a very effective and affordable water treatment technique, and there have been numerous studies applied to different heavy metals [11-14]. The selection of the adsorbent is crucial when maximizing efficiencies in the removal process.
It is very important that the material used as an adsorbent presents high adsorption capacities and allows a non-complex separation from the aqueous media [7,15]. In this sense, iron oxide magnetic nanoparticles (MNPs) take advantage of their easy separation by means of a magnet, and their reduced size provides a high specific surface area [16,17]. Magnetite nanoparticles with sizes below 20 nm also present superparamagnetic behavior, a reversible magnetic behavior that diminishes magnetic interactions and therefore aggregation, ensuring the easy reuse of the particles [18]. In the present work, uniform MNPs were prepared by microwave-assisted synthesis in polyol media [19]. One of the main advantages of this approach is that microwave radiation allows a simple and controlled source of selective heating by ionic conduction and dipolar polarization that takes place at the same time throughout the whole reaction volume [20]. This is a highly reproducible method that has shown an increase in reaction yields and also an impressive reduction in the synthesis time compared to other conventional methods [21]. Magnetic nanoparticle sizes were tuned within the superparamagnetic limit (~15 nm) by adjusting the experimental conditions. The prepared MNP were functionalized with silica-based compounds (tetraethyl orthosilicate and (3-aminopropyl) triethoxysilane) to adjust the material surface charge from negative to positive for the removal of Cr(VI) in aqueous solution, as shown in Scheme 1.
Magnetic Nanosorbent Preparation The synthesis of the iron oxide magnetic nanoparticles was carried out using a Monowave 300 microwave oven (Anton Paar GmbH, Austria) working at 2.45 GHz and equipped with a built-in magnetic stirrer, a temperature controller using an internal fiber-optics probe, an infrared sensor for the surface temperature and a pressure meter. A mixture containing 0.3 g of iron(II) acetate, 18.3 mL of DEG and 0.7 mL of distilled water was placed into the microwave reactor and stirred at 600 rpm, while the temperature was increased at a rate of 3.75 °C/min up to 170 °C. The mixture was left for 2 h at that temperature. Finally, the obtained product was collected and washed several times with ethanol by centrifugation at a relative centrifugal force (RCF) of 8000 for 15 min, and it was suspended in 2-propanol for further functionalization and in water for the sorption process. The MNP were grafted with a layer of silica (TEOS@MNP) by using the Stöber process, in which a mixture of 100 mg of MNP, 200 mL of 2-propanol and 100 mL of distilled water was sonicated for 15 min at 20 °C. Then, 20 mL of ammonium hydroxide was added to the mixture and, while sonicating, TEOS was added dropwise and the mixture was left under sonication for another 15 min. The sample was then collected by centrifugation at 8000 RCF for 45 min, washed several times with ethanol and then suspended in 2-propanol for the APTES functionalization and in water for the adsorption process. A second grafting with APTES was performed over the TEOS@MNP, where an aliquot of the dispersion containing 50 mg of the material was added to 20 mL of 2-propanol and then sonicated for 5 min. Afterwards, while sonicating, 0.5 mL of APTES was added dropwise and the mixture was left sonicating for 1 h. The final sample was washed with ethanol and collected by centrifugation at 8000 RCF for 15 min. Characterization The material crystalline structure was analyzed by X-ray diffraction (XRD) with a Bruker D8 Advance diffractometer with a graphite monochromator using CuKα radiation (λ = 1.5406 Å), between 10° and 70° in 2θ. The crystal size was calculated by using Rietveld refinement [22]. On the other hand, the particle size and morphology were determined by transmission electron microscopy (TEM) using a JEOL JEM 1010 microscope (Pleasanton, CA, USA) operated at 100 keV, and the mean particle size was obtained by measuring the largest internal width of at least 200 particles. For TEM observation, samples were prepared by diluting the suspension and placing one drop of it on an amorphous carbon-coated copper grid. The colloidal properties of the MNP were studied by dynamic light scattering in a Malvern Instruments Zetasizer Nano SZ (Malvern, UK) equipped with a solid-state He-Ne laser (λ = 633 nm). The hydrodynamic particle size of the samples was obtained at pH 2.5 in a standard cuvette and with a refractive index of 2.42. The hydrodynamic size was evaluated as the mean value of the distribution by number.
Also, zeta potential measurements were performed to determine the nanoparticle surface charge as a function of pH at room temperature by varying the pH of the suspensions between 2 and 12, using HNO3 and KOH and using 10⁻² M KNO3 as the electrolyte. A vibrating sample magnetometer (MagLab VSM, Oxford Instruments, High Wycombe, UK) was used to measure the magnetic properties of the MNP before and after grafting, where the samples were accurately weighed and pressed into a sample holder. The hysteresis loops of the powder samples were measured at 290 K up to 3000 kA/m, and the magnetic saturation of the material was obtained by extrapolating the high-field part of the magnetization curve to 1/H = 0. Fourier transform infrared (FTIR) spectra were recorded to confirm the presence of the silica shell and the APTES grafting on the magnetic nanoparticle surface. For this, dried powder samples were diluted in KBr at 2% w/w, pressed into pellets and measured in a Bruker IFS 66VS apparatus (Billerica, MA, USA) in the range of 400-4000 cm⁻¹. Kinetic Measurements Kinetic experiments were performed at a pH of 2.5 and an initial Cr(VI) concentration of 20 mg Cr/L by varying the adsorption time (5, 15, 30, 45, 60, 90, 120 and 1440 min). After each adsorption experiment, the magnetic nanosorbent was separated by using a 60 × 30 mm magnet with a field at the surface of 320 kA/m. The residual Cr(VI) concentration in the supernatant was measured by ICP-OES. The experimental data were analyzed with three different kinetic models: pseudo-first-order (PFO), pseudo-second-order (PSO) and Elovich. The PFO model is considered to describe well the initial stage of adsorption, as well as systems at long adsorption times that are almost at equilibrium. The non-linear form of the PFO rate equation is given by Equation (1) [23]: q_t = q_e·(1 − exp(−k_1·t)) (1), where t is the contact time in min, k_1 is the first-order adsorption rate constant in min⁻¹, q_e is the equilibrium adsorption capacity in mg/g, and q_t is the adsorption capacity at contact time t in mg/g. The PSO model is used to explain processes ruled by surface adsorption, as well as most environmental processes [24]. This model also indicates that the adsorption is due to physicochemical interactions between the adsorbate and the adsorbent [25]. The second-order adsorption rate constant k_2, in g/(mg·min), was obtained from the non-linear equation described in Equation (2) [26]: q_t = k_2·q_e²·t / (1 + k_2·q_e·t) (2). In contrast, the Elovich model fits well adsorption processes far from equilibrium, with a mechanism of chemisorption over long periods of time, neglecting the desorption process. The non-linear form of the Elovich rate equation is given by Equation (3) [23]: q_t = (1/β)·ln(1 + α·β·t) (3), where β is a desorption constant related to the extent of surface coverage and the activation energy for chemisorption, and α is the initial adsorption rate in mg/(g·min).
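As an illustration of how such uptake curves can be analyzed, the Python sketch below fits the non-linear PFO, PSO and Elovich models of Equations (1)-(3) to a synthetic data set with SciPy's curve_fit. It is a generic example prepared for this summary; the data points and initial guesses are invented and do not reproduce the study's measurements.

import numpy as np
from scipy.optimize import curve_fit

# Generic fit of the PFO, PSO and Elovich kinetic models (Eqs. 1-3).
# The "measured" points below are synthetic placeholders, not the study's data.

t = np.array([5, 15, 30, 45, 60, 90, 120, 1440], dtype=float)      # min
q = np.array([4.1, 9.8, 15.6, 19.3, 21.5, 23.9, 24.8, 26.0])       # mg/g (synthetic)

def pfo(t, qe, k1):
    return qe * (1 - np.exp(-k1 * t))

def pso(t, qe, k2):
    return k2 * qe**2 * t / (1 + k2 * qe * t)

def elovich(t, alpha, beta):
    return (1.0 / beta) * np.log(1 + alpha * beta * t)

for name, model, p0 in [("PFO", pfo, (25, 0.05)),
                        ("PSO", pso, (28, 0.002)),
                        ("Elovich", elovich, (1.0, 0.2))]:
    popt, _ = curve_fit(model, t, q, p0=p0, maxfev=10000)
    resid = q - model(t, *popt)
    print(f"{name}: parameters = {np.round(popt, 4)}, SSE = {np.sum(resid**2):.2f}")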
The residual pollutant concentration in the supernatant (C_e , mg/L) in the aqueous phase was determined, and the adsorption capacity was calculated by Equation (4):
q_e = (C_0 − C_e) V / m (4)
where q_e is the equilibrium adsorption capacity, in mg Cr /g; C_0 is the initial Cr(VI) concentration, in mg/L; C_e is the equilibrium concentration, in mg/L; m is the dry weight of adsorbent, in g; and V is the volume of Cr(VI) solution, in L. The percentage of removal was obtained by Equation (5):
Removal (%) = 100 (C_0 − C_e) / C_0 (5)
The effect of the pH was analyzed in a pH range between 2 and 6 at a Cr(VI) initial concentration of 40 mg Cr /L. The effect of initial concentration was studied by using 0, 10, 20, 30, 40, 50, 60, 70, 80, 90 and 100 mg Cr /L in experiments at the optimum pH (2.5). The obtained experimental data were fitted to three different isotherm models: Langmuir, Freundlich and Temkin (Table 1). In each model, the dependence of the equilibrium adsorption capacity, q_e (mg/g), is established as a function of the pollutant equilibrium concentration, C_e (mg/L). The maximum adsorption capacity q_m (mg/g) was obtained by the Langmuir equilibrium model.
Table 1. Isotherm model linearized equations and plots.
Isotherm | Linearized Equation | Plot
Langmuir | C_e/q_e = 1/(b_0 q_m) + C_e/q_m | C_e/q_e vs. C_e
Freundlich | ln q_e = ln K_f + (1/n) ln C_e | ln q_e vs. ln C_e
Temkin | q_e = (RT/b_T) ln K_T + (RT/b_T) ln C_e | q_e vs. ln C_e
The Langmuir isotherm model is considered to fit well processes in which the adsorbent presents a homogeneous surface and the adsorption proceeds with monolayer coverage. In contrast, the Freundlich model describes adsorption without a saturation limit and systems with heterogeneous surfaces, while the Temkin model considers that the heat of adsorption (b_T , J/mol) decreases linearly [28]. Finally, the reusability of the nanosorbent after the Cr(VI) removal process was determined by analyzing the adsorption capacity over seven successive sorption/desorption cycles. For this, 20 mg of the material was mechanically mixed at 60 rpm for 2 h with 40 mL of a 100 mg/L Cr(VI) solution at pH 2.5. The Cr(VI)-loaded material was collected by magnetic harvesting after the sorption process and washed several times with distilled water. Then, 5 mL of 0.01 M NaOH was added and mixed for 1 h at 20 °C. Finally, the nanosorbent was dried in an inox-coated oven at 50 °C overnight, and the procedure was repeated for each successive cycle. Characterization The MNP were synthesized by a microwave-assisted method that presents greater efficiency in comparison with other approaches such as thermal decomposition with conventional heating. One of the most noticeable differences between these methods is that the microwave produces internal homogeneous heating that promotes nucleation in the whole vessel at the same time, reducing the growth of the nuclei generated and consequently yielding uniform particles of smaller sizes [21]. The MNP TEM images are shown in Figure 1a, and the particle size distribution in Figure 1e. The particle size distribution was evaluated and adjusted to a log-normal distribution, from which a mean particle size of 12.2 (±1.5) nm was obtained. Moreover, the XRD pattern of the MNP was obtained and is displayed in Figure 1d, in which it can be seen that there was no extra reflection of other iron oxide phases such as hematite or iron hydroxides.
The depicted reflections fit well with the space group (Fd-3m:227) typically assigned to cubic spinel structures. The crystallite size was calculated to be 12.8 (±0.3) nm by Rietveld refinement. It is worthwhile to mention that microwave-assisted synthesis increases the reproducibility of the sample. Figure 1b,c show the TEM images of the TEOS@MNP and APTES@TEOS@MNP, respectively. It can be seen that a smooth silica layer has been placed over the MNP agglomerates in both cases, indicating that the grafting was performed correctly. Figure 1f shows the hydrodynamic size distribution of the grafted and non-grafted particles. The mean sizes for the number distribution obtained for the MNP, TEOS@MNP and APTES@TEOS@MNP were 172.2(0.2) nm, 278.0(0.2) nm and 491.5(0.2) nm, respectively, indicating that the grafting enlarges the hydrodynamic particle size. Also, in the case of the MNP, the presence of a small fraction of bigger particles can be observed, suggesting that, without the silica layer, the particles form large aggregates. To ascertain the presence of both graftings, FTIR analysis was performed and is shown in Figure 1g. It can be observed that the sample APTES@TEOS@MNP presents a band at approximately 1080 cm −1 which corresponds to the Si-O bond on the nanoparticle surface due to the silica grafting [28]. Also, the presence of coordinated -OH groups on the surface of the particles or water molecules with the unsaturated surface Fe atoms can be attributed to the bands at 3443 cm −1 and 1600 cm −1 of O-H stretching vibration and O-H deformation vibration (bending modes), respectively [29]. The band at 468 cm −1 in both samples could be due to the Fe-O bonds of magnetite or maghemite [30]. The N-H bending and stretching bands of the terminal primary amine group of APTES cannot be seen as they overlapped with the 3443 cm −1 band; however, the grafting can be confirmed by the 2852 and 2932 cm −1 bands of the C-H bond stretching vibration that correspond to the propyl group [31]. Further confirmation of the presence of the APTES was obtained by the increase of surface charge of the particles and the displacement of the isoelectric point, as explained in the following section (Section 3.2). Figure 2 shows the magnetization curves of the nanosorbents at room temperature. As can be observed, the remanence magnetization and the coercive field were nearly zero for all samples, indicating a superparamagnetic behavior at room temperature.
The saturation magnetization values decrease after each grafting process from 81 to 62 and 58 Am 2 /kg for the MNP, TEOS@MNP and APTES@TEOS@MNP, respectively, due to the addition of a non-magnetic material; i.e., around 23% in weight of TEOS plus 5% of APTES. Bare particles present a saturation magnetization close to the bulk value for magnetic iron oxides. Effect of pH The effect of pH in the adsorption process of Cr(VI) was evaluated by using the non-grafted MNP in batch experimentation, in which the pH was varied between 2 and 6. It can be seen from Figure 3a that, after the adsorption process at different pH values, the obtained q e values and Cr(VI) removal decrease with the increasing pH, which agrees with other studies on Cr(VI) adsorption with iron oxide nanoparticles [32,33]. This behavior can be attributed to the surface charge of the particles, as can be seen in Figure 3b. At low pH values, the MNP present a positive charge (+25.9 mV at pH 2.5) that increases the affinity of the nanosorbent for the anionic chromates (Cr 2 O 7 2− and HCrO 4 − ) in the aqueous solution. It should be taken into account that TEOS@MNP and APTES@TEOS@MNP samples are also positively charged at that pH (+1.8 and +34.2 mV, respectively), which can also be used to confirm the FTIR results regarding the success in the coating process. On the contrary, at high pH values, the MNP are negatively charged, causing electrostatic repulsion for Cr(VI) anions [34]. Therefore, for further adsorption analyses and for the sake of comparison, a pH of 2.5 was fixed for the following experiments. The isoelectric point or point of zero charge of APTES@TEOS@MNP (pH 10) confirms the success of the coating process in comparison with the bare nanoparticles (pH 6.5). This huge difference is attributed to the presence of amino groups on the surface of the functionalized nanoparticles.
Kinetics and Isotherm Models The adsorption kinetics were analyzed for the three nanosorbents (MNP, TEOS@MNP and APTES@TEOS@MNP), and the non-linear fitting of the experimental data and kinetic models as a function of time are presented in Figure 4a. As can be seen, the adsorption equilibrium is reached after 1 h, and the experimental data best fitted the PSO model (R 2 = 0.98(0.02)), indicating that the rate-limiting step is the surface adsorption and that chemisorption is the most likely mechanism of adsorption [35]. Also, given the surface charge, the adsorption mechanism may also benefit from ion exchange between the Cr(VI) anions and the adsorbent surface; the TEOS@MNP barely removed Cr(VI) from the aqueous solution due to its low surface charge. In contrast, the experimental data obtained by analyzing the effect of the initial concentration of Cr(VI) were compared to the previously mentioned isotherm models, and the corresponding non-linear fittings are included in Figure 4b. As can be seen, there was a better fitting with the Langmuir isotherm model in all cases (R 2 ≈ 0.98(0.02)), meaning that it can be assumed that the process follows monolayer adsorption with specific active sites for each Cr(VI) molecule. Furthermore, when using the APTES@TEOS@MNP, the maximum adsorption capacity obtained was higher (35 mg/g) than that obtained with the non-grafted MNP (15 mg/g), supporting the fact that the affinity of the nanosorbent increases with the surface charge. Table 2 summarizes the kinetic and isotherm parameters obtained with the experimental data.
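For readers who want to reproduce this kind of analysis, a minimal SciPy sketch of the non-linear PSO and Langmuir fits is given below; the numerical arrays are illustrative placeholders rather than the measured data, and the parameter names follow Equation (2) and the Langmuir model of Table 1.

```python
# Minimal sketch: non-linear fitting of the PSO kinetic model and the Langmuir
# isotherm with SciPy. The (t, qt) and (Ce, qe) arrays are illustrative
# placeholders, not the values measured in this work.
import numpy as np
from scipy.optimize import curve_fit

def pso(t, k2, qe):
    # pseudo-second-order: qt = k2*qe^2*t / (1 + k2*qe*t)
    return k2 * qe**2 * t / (1.0 + k2 * qe * t)

def langmuir(ce, qm, b0):
    # Langmuir isotherm: qe = qm*b0*Ce / (1 + b0*Ce)
    return qm * b0 * ce / (1.0 + b0 * ce)

t = np.array([5, 15, 30, 45, 60, 90, 120, 1440], dtype=float)   # min
qt = np.array([5, 9, 12, 13, 14, 14.5, 14.8, 15], dtype=float)  # mg/g (illustrative)
ce = np.array([2, 8, 20, 35, 55, 75], dtype=float)              # mg/L (illustrative)
qe = np.array([8, 18, 26, 30, 33, 34], dtype=float)             # mg/g (illustrative)

(k2, qe_fit), _ = curve_fit(pso, t, qt, p0=[0.01, qt.max()])
(qm, b0), _ = curve_fit(langmuir, ce, qe, p0=[qe.max(), 0.1])
print(f"PSO:      k2 = {k2:.4f} g/(mg*min), qe = {qe_fit:.2f} mg/g")
print(f"Langmuir: qm = {qm:.2f} mg/g, b0 = {b0:.3f} L/mg")
```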
Figure 5 shows the efficiency of the Cr(VI) adsorption with MNP and APTES@TEOS@MNP after seven adsorption/desorption cycles. These tests showed that the Cr(VI)-loaded nanosorbent requires just a small volume of a basic solution to efficiently desorb the chromate molecules due to the change of the particles' surface charge with pH. In both cases, only a slight decrease of efficiency was observed (≈98 ± 5%), proving that the nanosorbents are reusable and highly efficient in this adsorption process. Even though the maximum adsorption capacities obtained for MNP and APTES@TEOS@MNP were not as good as for other functionalized magnetic nanoparticles (Table 3), this material can be used several times and achieve greater chromium removals. Also, after seven cycles, there was no change in the morphology of the nanosorbents, as demonstrated by TEM images, and there was no magnetic loss observed (95% recovery). In a previous work [15], we obtained lower q m values (12 mg/g) with bare nanoparticles synthesized via an electrochemical method with a mean particle size of 21 nm. In the present work, we have improved the adsorption efficiency by reducing the nanoparticle size to 12 nm, obtaining a q m value of 15 mg/g for bare NPs.
Table 3 presents a comparison of the adsorption capacity of Cr(VI) obtained by using different magnetic nanosorbents with characteristics similar to the one used in this work. As can be seen, for Cr(VI) removal it is important to ensure a positive surface charge of the adsorbent at the working pH to reinforce the attraction forces with the negatively charged chromates. Additionally, the maximum adsorption capacity of Cr(VI) (q m = 35 mg/g) makes our APTES@TEOS@MNP a competitive adsorbent compared to other materials. Only the chitosan-coated Fe 3 O 4 nanocomposite shows a higher q m (81.5 mg/g), which is probably due to the chelating effect of the chitosan with the chromates, enhancing the adsorption efficiency, but this may hinder its re-use, unlike the material presented in this research, which can be used for several adsorption cycles, reaching higher removal percentages and lowering the process cost. In general, we have observed that the adsorption capacity increases with increasing nanoparticle surface charge, as in the case from +1.8 mV (q m = 0 mg/g for TEOS@MNP) to +34.2 mV (q m = 35 mg/g for APTES@TEOS@MNP). Conclusions Superparamagnetic nanosorbents based on iron oxide nanoparticles of 12.2 (±1.5) nm in diameter and coated by silica (30% in weight) were developed and optimized for the removal of Cr(VI). It was observed that the surface charge is an important parameter determining the adsorption capacity, reaching a maximum of 35 mg of Cr(VI) per g of nanosorbent at the maximum positive surface charge. In this work, microwave polyol-mediated synthesis was chosen for the efficient and reproducible preparation of uniform magnetic cores with sizes below 15 nm to maintain superparamagnetic behavior, and the surface charge was varied from negative to positive by successively grafting tetraethyl orthosilicate and (3-aminopropyl)triethoxysilane. Chromium surface adsorption seems to be the rate-limiting step, and adsorption increases with increasing positive surface charge of the nanoparticles. Finally, the particles showed high reusability efficiencies (around 100%) after seven Cr(VI) sorption/desorption cycles. The easy separation and regeneration of these magnetic nanosorbents from aqueous solutions, and their high adsorption capacity for Cr(VI) in comparison to others, suggest that these nanoparticles can be efficiently used for the decontamination of Cr(VI)-containing wastewater, such as that discharged by the electroplating industry.
7,539
2019-11-13T00:00:00.000
[ "Environmental Science", "Chemistry", "Materials Science" ]
Flower stages, germination and viability of pollen grains of Annona squamosa L. in tropical conditions For the successful application of artificial pollination techniques, aspects of floral biology should receive special attention, especially studies on the viability of pollen grains. In this sense, two experiments were carried out, aiming to determine the floral stages: floral opening (anthesis), female, male and senescence stages of A. squamosa L. flowers under tropical climate conditions, and to evaluate the germination and viability of pollen grains submitted to different storage conditions. In the first experiment, observations and data collection began when the flowers were still closed. Readings were taken every two hours for 46 hours until all the flowers reached the senescence stage. For the second, the experimental design was completely randomized, in a 2 x 7 factorial scheme, with 2 storage conditions (ambient at 27 ± 2°C and refrigerated at 5 ± 2°C) and 7 pollen grain conditioning times (0, 4, 8, 12, 16, 20 and 24 hours), with 4 repetitions for the analysis of pollen grain germination and 3 repetitions for the analysis of pollen grain viability, each repetition consisting of one slide. The variables evaluated were the percentage of viable and non-viable pollen grains and the percentage of germinated and non-germinated pollen grains. Under tropical climatic conditions, the flower anthesis (female stage) of the sugar apple begins at 00:00 h, extending until 12:00 h on the same day. The flowers reach a functionally staminate stage (male stage) from 6 a.m. of the next day. Pollen grains stored at room conditions (27 ± 2°C) remain viable for up to 24 hours after collection, showing that storage in a cold environment at low temperatures (5 ± 2°C) is not necessary. Sugar apple pollen grains have a germination percentage of 51.25% when stored under ambient conditions (27 ± 2°C) for up to 4 hours after collection. Introduction The sugar apple tree (Annona squamosa L.) is a species native to tropical America, particularly to the Antilles, and can be cultivated in tropical and subtropical areas (Liu et al., 2015). It is one of the most important species of the genus Annona, due to the commercial value of its fruits and their taste, much appreciated by consumers (Zucareli, Ferreira, Silvério, & Amaro, 2008). In addition to dietary properties, Annonaceae have medicinal and pharmaceutical properties (Madhu, Brainard, Raj, Swapn, & Rao, 2012; Manvi, Nanjwade, & Shing, 2011) and nutritional properties such as vitamins A, B, C, E, K1, antioxidants, polyunsaturated fatty acids and essential minerals, in addition to a pleasant aroma and flavor (Liu et al., 2015; Liu, Yuan, & Jing, 2013), as well as potential as an insecticide (Seffrin, Shikano, Akhtar, & Isman, 2010). A. squamosa L. is strongly affected by climatic conditions, and depending on the seasonal variations of each biome, an advance or delay in the crop cycle may occur, affecting the viability of the floral set and the quantity and quality of fruits. According to Khalate, Supe, and Doke (2018), inefficient pollination is associated with high temperature (30°C) and low relative humidity (30%), while lower temperature (25°C) and high relative humidity (80%) enable efficient pollination. Rodrigues et al. (2016), studying different temperatures for the in vitro pollen grain germination of A.
squamosa, found that the temperature of 25 ± 1°C provided the best in vitro germination, with 48.13%, and as the temperature increased, the germination percentage decreased significantly. George and Nissen (1988) note that high temperatures adversely affect flower bud production, suggesting that tropical conditions are not conducive to high sugar apple yields. Therefore, the more severe the climatic conditions at the time of floral development and anthesis (high temperature, low relative humidity and high vapor pressure deficit), the greater the likelihood that flowers will undergo floral abscission, as well as stigma desiccation before anther dehiscence, noting that the flower exhibits protogynous dichogamy. Thus, strategies that allow the extension of the floral stages, such as the flower's receptive period and the viability of pollen grains, are necessary to obtain satisfactory A. squamosa production in regions with tropical and subtropical climatic conditions. Pollen viability is one of the factors that have a direct influence on fertilization success, and it is recommended to test pollen viability before use (Einhardt, Correa, & Raseira, 2006). The success of in vitro germination depends on several factors such as plant species, plant nutritional status, time of year and time of collection, photoperiod, air temperature, collection method, incubation period and the presence of micro- and macronutrients in the culture medium (Soares et al., 2008), as well as adjustments of the culture medium composition for each species (Chagas, Pio, Chagas, Pasqual, & Bettiol Neto, 2010; Sinimbú Neto, Martins, & Barbosa, 2011). In vitro and in vivo pollen germination allows the analysis of pollen tube emission capacity and a correlation of this rate with pollen grain viability. However, the determination of pollen grain viability with dyes is a common practice in cytogenetics. Colorimetric methods use specific chemical dyes, such as lugol and acetic carmine, that react with cellular components present in mature pollen grains. Some dyes are more commonly used than others, such as tetrazolium salt, Lugol's solution, Alexander's solution and acetic carmine (Einhardt et al., 2006). Studies on pollen grains are the basis for understanding reproductive biology and are important for breeding, conservation of plant genetic resources and hybrid seed production (Nascimento, Torres, & Lima, 2003). However, despite research initiated a few decades ago, information is scarce for tropical climate conditions worldwide, especially in the Northern Amazon, Brazil. Therefore, the aim of this study was to determine the flowering stages, and to evaluate the germination and viability of sugar apple pollen grains subjected to different storage conditions, under the tropical climate conditions of the Northern Amazon of Brazil. During the experimental period, average temperature and relative humidity were recorded, as shown in Figure 1. For the experiment, 100 completely closed flower buds were selected, in a completely randomized design with 4 replications and under homogeneous conditions, and were marked with fluorescent labels two hours before the beginning of the observation. Observation and data collection began at 4 p.m., while the flowers were still closed. Thereafter, readings were taken every two hours for 46 hours, until all the flowers reached the senescence stage. The evaluations were performed in February 2015.
Data collected during this period consisted of annotations of the morphological changes of each flower during the observation period. Thus, it was possible to accurately determine the periods when the flowers reached the female stage, the male stage and senescence. The second experiment was carried out at Embrapa Roraima's Tissue Culture and Plant Pathology Laboratory. In order to study the viability and in vitro germination of pollen grains, 100 randomly selected flowers were collected at 6 p.m., in homogeneous conditions at the beginning of the female stage (petals slightly apart). The flowers were taken to the Tissue Culture laboratory in single-layer plastic trays at room temperature (around 27 ± 2°C). The next day, at 5:30 a.m., when the flowers were in the male stage, the pollen grains were separated, divided into equal parts and placed in plastic containers, which were closed and identified as 'refrigerated pollen' and 'ambient pollen'. Thereafter, the bottle labeled 'ambient pollen' was stored under ambient conditions at around 27 ± 2°C, while the 'refrigerated pollen' was kept refrigerated at a temperature of 5 ± 2°C. Immediately after separation and identification of the conservation treatments, in vitro germination and viability tests were carried out, with the first evaluation (0 hours of conditioning) performed at 8 a.m., shortly after pollen collection. The other evaluations were performed every 4 hours after the pollen grains were packed (4, 8, 12, 16, 20 and 24 hours), both for pollen kept at room temperature and for pollen stored under refrigeration. In order to evaluate the viability of pollen grains, the technique described by Linsley & Cazier (1963) was adapted, including staining of pollen grains arranged on a glass slide with 1% cotton blue. The inoculated plate was surface washed with 15 drops of pure water to detach the pollen grains. Shortly thereafter, a drop of the pollen-containing solution and a drop of lactophenol cotton blue were mixed on a slide. Then, 1 drop of this mixture was pipetted into a Neubauer chamber and visualized under the microscope. Four observation slides were mounted for each inoculated plate, facilitating visualization for the germinated pollen count. The method used to test pollen viability was an adaptation of the tetrazolium technique (Dafni, 1993). Pollen samples were collected every 4 hours from the different storage treatments for slide preparation with 2,3,5-triphenyl tetrazolium chloride (TTC) at a concentration of 1%. Pollen grains were distributed on 3 slides per treatment and one drop of dye was added to each slide. The slides were left for four hours to check staining. Pollen grains that stained at least 80% red were considered viable. For the in vitro germination test, the experimental design was completely randomized in a 2 x 7 factorial scheme, with 2 storage conditions (ambient at 27 ± 2°C and refrigerated at 5 ± 2°C) and 7 different times of conditioning of pollen grains (0, 4, 8, 12, 16, 20 and 24 hours). The experiment consisted of 4 repetitions, each repetition consisting of a slide prepared from each Petri dish. The pollen grains of each evaluated slide were observed under the optical microscope, using the 10x and 40x objectives. For the viability tests, the first evaluation was performed (0 hours of packaging) at 8 a.m., shortly after pollen collection.
The experimental design was completely randomized in a 2 x 7 factorial scheme, with 2 storage conditions (ambient at 27 ± 2°C and refrigerated at 5 ± 2°C) and 7 different times of pollen grain conditioning (0, 4, 8, 12, 16, 20 and 24 hours). The experiment consisted of 3 repetitions, and each repetition consisted of an inoculated slide. Pollen grains from each slide were observed under the optical microscope using 10x and 40x objectives. The variables evaluated were subjected to analysis of variance, and the effects of quantitative treatments were submitted to polynomial regression. The analyses were performed with the R software (R Core Team, 2018). Results and discussion Determination of floral opening and female, male and senescence stages Flower bud marking was started at 16:00 hours (h) on the first day of evaluation. The visual observation of the flowers was made every 2 h, for a total of 46 h of evaluation. During the study, there was heterogeneity in the behavior of sugar apple flowers, with flowers in the female stage and flowers in the male stage in the morning. The phases of the sugar apple [closed flower bud, anthesis (female stage), male stage, senescent blossom] observed during the evaluations can be seen in Figure 2. It was observed that anthesis began at 00:00 h (2nd assessment day) in 53% of the marked flowers, extending until 12:00 h of the same day, when 100% of the flowers reached anthesis (Figure 3). This result is similar to that obtained for flowers of araticum (Annona crassiflora Mart.), in which anthesis is gradual, begins in the early hours of the day and may extend until the early hours of the following day (Almeida-Júnior et al., 2018). The transition from the female stage to the male stage occurred between 6:00 and 12:00 in the morning of the following day (2nd assessment day). In this interval, the vast majority of flowers reached the male stage. However, a small portion of the flowers in this sample extended until 8:00 a.m. of the next day (3rd assessment day) (Figure 3). This transition from the female to the male stage is characterized by the full opening of the flowers and anther dehiscence (Ribeiro, São José, Rebouças, & Amaral, 2008). In flowers of the family Annonaceae, the male stage of protogynous species occurs between 3 and 6 a.m., with the beginning of pollen release (Carvalho & Webber, 2000). The female reproductive structure of A. squamosa matures first and can remain receptive for up to twenty-four hours (Gazit, Galon, & Podoler, 1982). It was observed that the number of flowers in the female stage increased from 00:00 in the morning, and the maximum percentage of female flowers was observed at 6:00 a.m., probably the most appropriate time to perform pollination, as there is a decrease after this time. The maximum percentage of flowers in the male stage was observed at 10:00 a.m. This information allows us to estimate that artificial pollination can be performed in the morning, between 02:00 and 06:00 a.m. in tropical and subtropical regions, when climatic factors cause less damage during pollination, and may be extended until 10:00 a.m., although with fewer female flowers. However, the viability period of the pollen grains must be taken into account, because viable pollen for artificial pollination is only available after 06:00 a.m.
Viability of pollen grains The evaluation of pollen grain viability over storage time showed a decreasing quadratic effect for pollen stored under refrigeration (5 ± 2°C) from 12 h of storage onwards. However, pollen stored under ambient conditions (27 ± 2°C) remained viable up to 24 h after storage. These results indicate that refrigerated storage is not necessary to maintain pollen viability over the studied period (Figure 4). Solving the second-degree equation indicates that pollen grains may retain 99.35% viability over a period of approximately 28 h of storage under ambient conditions (27 ± 2°C). Despite the high initial percentage of viable pollen grains stored in a refrigerated environment (5 ± 2°C), reaching 100% viability after 8 hours of storage, this was not considered an adequate storage environment, indicating that refrigerated pollen should be used within 8 hours of storage, as it loses its viability after this period. Changes in environmental conditions strongly affect the viability of pollen grains in vitro, and small decreases in relative humidity and increases in temperature may cause significant reductions (Alves Rodrigues, Nietsche, Mercadante-Simões, Toledo Pereira, & Ribeiro, 2018). Pre-anthesis temperatures may also be involved in pollen vigor reduction, impacting the accumulation of starch in the maturation phase and the metabolism of the reserves, since, most of the time, pollen development occurs autotrophically (Baker & Baker, 1979). In a study on the viability of araticum pollen (Annona crassiflora Mart.) using the acetic carmine test under ambient conditions, Cavalcante, Naves, Franceschinelli, and Silva (2009) observed a percentage of 93.83%, considered high. Mendes, Costa, Nietsche, Oliveira, and Pereira (2012) observed that pollen grain viability in seeded and seedless accessions was 38.5 and 52.5%, respectively. These results are similar to those found in the present study, where viability above 95% was verified for pollen stored under ambient conditions (27 ± 2°C) using the 2,3,5-triphenyl tetrazolium chloride (TTC) solution (Figure 5). According to Nascimento, Gomes, Batista, and Freitas (2012), pollen grains with low viability generally result in low fruit set. Therefore, determining the viability of pollen grains is important, as it contributes to the practice of artificial pollination, increasing productivity and quality of production. Pollen viability can be determined by a number of techniques, the most used being in vitro germination (Soares et al., 2008). Pollen viability estimates are one of the most significant instruments in the qualitative evaluation of the materials to be used in crosses (Wondracek-Lüdke, Custodio, Simpson, & Valls, 2015). Figure 5 shows viable sugar apple pollen grains subjected to the staining test using 2,3,5-triphenyl tetrazolium chloride (TTC). Germination of pollen grains Significant differences were observed for germination percentage at the 1% level (p < 0.01). The sugar apple pollen grains stored under ambient (27 ± 2°C) and refrigerated (5 ± 2°C) conditions presented excellent germination rates at the first evaluation, i.e. storage time 0, with 65.75 and 50.25%, respectively. In the first 4 hours of storage, the germination percentage decreased, although still with satisfactory values of 51.25 and 45%, followed by a significant decrease in the later hours.
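As an illustration of the polynomial regression applied to these percentages over storage time, a minimal NumPy sketch of a second-degree fit is given below; the values are hypothetical placeholders, not the study data, so the fitted optimum will not match the results reported above.

```python
# Minimal sketch: second-degree polynomial regression of a pollen percentage
# (viability or germination, %) against storage time (h). The values below are
# hypothetical placeholders, not the data collected in this study.
import numpy as np

hours = np.array([0, 4, 8, 12, 16, 20, 24], dtype=float)
percentage = np.array([97, 98, 99, 99, 98, 97, 95], dtype=float)  # hypothetical %

coeffs = np.polyfit(hours, percentage, deg=2)     # [a, b, c] of a*t^2 + b*t + c
poly = np.poly1d(coeffs)
t_opt = -coeffs[1] / (2 * coeffs[0])              # vertex of the fitted parabola
print(f"fitted model:\n{poly}")
print(f"estimated optimum: {poly(t_opt):.2f}% at {t_opt:.1f} h of storage")
```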
Pollen tube germination data (Figure 6) show a significant drop when compared to the viability data (Figure 5). Results similar to those reported by Nietsche, Pereira, Oliveira, Dias, and Reis (2009) and Pereira, Crane, Montas, Nietsche, and Vendrame (2014), who found that the use of pollen shortly after collection gives satisfactory results, were also obtained in the present work. These results confirm that newly collected pollen presents better viability characteristics, resulting in high fruit set rates through artificial pollination. On the other hand, Baker and Baker (1979) point out that variations in pre-anthesis temperature alter vigor and affect the starches accumulated during maturation and reserve metabolism. This was confirmed by Lora, Herrero and Hormaza (2012), who reported that starch decomposition and germination of A. cherimola pollen grains were affected by pre-anthesis temperatures, with starch loss occurring before anthesis in flowers stored at 25°C, whereas there was no effect for storage at 15°C. Similarly, Matsuda, Higuchi and Ogata (2016) observed that germination decreased below 14°C, with a more noticeable restrictive effect on the pollen tube below 6°C and pollen wilting at 4°C; the same effects occurred above 27°C, and only pre-anthesis temperatures in the range of 20-22°C ensured increased pollen grain germination. Mendes et al. (2012) obtained similar results for 'Brazilian seedless' sugar apple pollen grain germination in vitro, reaching 52.5% at a controlled temperature of 25 ± 1°C after storage for 6 hours. The success of in vitro germination of pollen grains depends on several endogenous and exogenous factors such as plant nutritional status, time and method of pollen grain collection, photoperiod, environmental variations, incubation period and composition of the culture medium (Alves Rodrigues et al., 2018; Chagas et al., 2010; Ramos, Pasqual, Salles, Chagas, & Pio, 2008; Soares et al., 2008; Souza, Souza, Silva, Barbosa, & Araújo, 2014). It is also important to emphasize that the culture medium is specific for each species (Dafni, 1993), so it is a component that must be rigorously studied for successful in vitro germination. The storage of pollen grains in a refrigerated environment may or may not provide good conditions for germination. Bettiol Neto, Del Nero, Kavati and Pinto-Maglio (2009), in in vitro germination tests and field pollination of cherimoya, sugar apple and atemoya, found that pollen samples collected in the humid period and stored in the refrigerator showed the best germination rates under temperate conditions. In the present study with sugar apple under tropical climate conditions, pollen grains under refrigerated storage (5 ± 2°C) presented a germination percentage below 45% after 4 hours of storage, decreasing drastically thereafter, which is not considered satisfactory (Figure 6). Despite the existence of publications in which colorimetric tests are used as a pollen vigor parameter (Cabral, Rossi, Klein, Vieira, & Giustina, 2013; Hister & Tedesco, 2016; Nunes, Bustamante, Techio, & Mittelmann, 2012), colorimetry should not be used as the sole indication of pollen viability because it can only point to the presence of cellular content, which does not necessarily imply the formation of the pollen tube and subsequent fertilization.
In vitro tests of pollen tube germination are required to prove this viability more reliably. Figure 7 shows the in vitro germination test of sugar apple pollen grains in the present experiment. Conclusion The anthesis of the flowers (female stage) of the sugar apple begins at 00:00 h, extending until 12:00 h of the same day. The flowers of the sugar apple reach a functionally staminate stage (male stage) from 6 a.m. under tropical climate conditions. Pollen grains stored under ambient conditions (27 ± 2°C) remain viable for up to 24 hours after collection, showing that storage in a cold environment at low temperatures (5 ± 2°C) is not necessary. The sugar apple pollen grains have a germination percentage of 51.25% when stored under ambient conditions (27 ± 2°C) for up to 4 hours after collection.
4,660.6
2021-01-01T00:00:00.000
[ "Environmental Science", "Biology" ]
Fiber Bragg grating fabrication by femtosecond laser radiation The paper presents the results of fiber Bragg grating fabrication by femtosecond laser radiation using point-by-point and line-by-line inscription methods. The approach makes it possible to fabricate fiber Bragg gratings of the second and higher diffraction orders, which can be used as sensitive elements of fiber-optic sensors. Introduction In the last decade, the field of research related to the formation of optical structures in the bulk of transparent dielectrics, including optical fibers, using femtosecond laser radiation has been very promising. The action of laser radiation leads to a local modification of the refractive index of the exposed regions; thus, ordered refractive index structures are formed. These structures are fiber Bragg gratings (FBGs), which are widely used in various fiber-optic devices: as sensitive elements in sensors [1,2], as spectral filters in fiber lasers [3,4], etc. Fiber Bragg grating fabrication by femtosecond laser radiation The FBGs were recorded using an Yb:KGW femtosecond laser system operating at a wavelength of 1030 nm, with a pulse width of 280 fs and a pulse repetition rate of 10 kHz. The laser radiation was focused using a Mitutoyo Plan Apo NIR high-aperture microscope objective (100x, NA = 0.7). For moving the fiber during inscription, a high-precision two-dimensional positioning system Aerotech ANT130-110-XY Ultra and a motorized linear stage Standa 8MT167-25LS (Z-axis) were used. A standard single-mode fiber Corning SMF-28e+ (core diameter 8.2 μm) was used for FBG inscription; the FBGs were inscribed by the line-by-line and point-by-point methods at a pulse energy of 150 nJ through the polymer jacket. The period of the FBGs was chosen taking into account the second, third and fourth diffraction orders, based on the linear dimensions of the inscribed structures. Inscription of the FBGs was done by translating the fiber relative to the focused laser beam. For precise flatness adjustment of the sample position, a tilt corrector system (Standa 8MKVDOM) was used. To overcome the fabrication limitation imposed by the intrinsic fiber geometry, the optical fiber was placed between a slide and a cover glass, and the space between them was filled with an index-matching immersion liquid with a refractive index close to that of the optical fiber. Glycerin was used as the immersion liquid [5]. Micrographs of a fourth-order FBG (period Λ = 2.14 μm, central Bragg wavelength 1550 nm) inscribed using the point-by-point fabrication method by femtosecond laser radiation are shown in Fig. 1. Fig. 1. Micrographs of a fourth-order FBG inscribed using the point-by-point fabrication method: a - top view; b - orthogonal view. Fig. 1 shows that the modification region completely intersects the core of the optical fiber and does not deviate from its initial position along the Z axis. An FBG fabricated using the point-by-point inscription method is a set of individual cylinders. In the case of the line-by-line inscription method, the FBG is formed sequentially in the form of strokes; this method is less demanding with respect to system configuration. The main disadvantage of this method is the rather low recording speed, due to the fact that it is necessary to modify large-sized areas. Using the line-by-line inscription method, the second (Λ = 1.07 µm), third (Λ = 1.605 µm) and fourth (Λ = 2.14 µm) diffraction order FBGs were inscribed. Reflection spectra of the inscribed FBGs were measured using an optical spectrum interrogator (OSI) module NI PXIe-4844.
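As a quick cross-check of the chosen periods, the sketch below evaluates the m-th order Bragg condition m·λB = 2·neff·Λ for a target Bragg wavelength of 1550 nm; the effective index neff = 1.447 is an assumed value for SMF-28e+ near 1550 nm, since the paper does not state the value it used.

```python
# Minimal sketch: grating period for higher diffraction orders from the Bragg
# condition m * lambda_B = 2 * n_eff * Lambda. n_eff is an assumed effective
# index for SMF-28e+ near 1550 nm (not stated in the paper).
lambda_bragg = 1550e-9   # target Bragg wavelength, m
n_eff = 1.447            # assumed effective refractive index

for m in (2, 3, 4):
    period = m * lambda_bragg / (2 * n_eff)
    print(f"order {m}: period = {period * 1e6:.3f} um")
# -> roughly 1.07, 1.61 and 2.14 um, consistent with the inscribed FBG periods
```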
Conclusion The paper presents the results of the fabrication of second- and higher-diffraction-order FBGs by femtosecond laser radiation using the point-by-point and line-by-line inscription methods. The inscribed FBGs can be used as sensitive elements of fiber-optic sensors.
805
2019-01-01T00:00:00.000
[ "Physics" ]
A Novel Hybrid Fuel Consumption Prediction Model for Ocean-Going Container Ships Based on Sensor Data : Accurate, reliable, and real-time prediction of ship fuel consumption is the basis and premise of the development of fuel optimization; however, ship fuel consumption data mainly come from noon reports, and many current modeling methods have been based on a single model; therefore, they have low accuracy and robustness. In this study, we propose a novel hybrid fuel consumption prediction model based on sensor data collected from an ocean-going container ship. First, a data processing method is proposed to clean the collected data. Secondly, the Bayesian optimization method of hyperparameters is used to reasonably set the hyperparameter values of the model. Finally, a hybrid fuel consumption prediction model is established by integrating the extremely randomized tree (ET), random forest (RF), Xgboost (XGB) and multiple linear regression (MLR) methods. The experimental results show that data cleaning, the size of the dataset, marine environmental factors, and hyperparameter optimization can all affect the accuracy of the model, and the proposed hybrid model provides better predictive performance (higher accuracy) and greater robustness (smaller standard deviation) as compared with a single model. The proposed hybrid model should play a significant role in ship fuel consumption real-time monitoring, fault diagnosis, energy saving and emission reduction, etc. Introduction The maritime transportation industry has played a significant role in the cargo industry as a whole since the development of international trade [1,2], and it also has an important impact on the development of the national economy [3]. The total volume of international seaborne trade has grown significantly over recent years [4]. In addition, container shipping is important for global seaborne trade, and the quantity of cargo transported by container shipping has been increasing over the past decades [5]. The increased volume of maritime transport consumes a huge amount of fuel, and as the price of fuel continues to increase, shipping companies face tremendous cost pressure. In fact, fuel costs for tankers and container ships have been estimated to account for 58% and 78% of the total operating costs, respectively [6]. Another side effect of the significant volume of maritime transportation is an increase in ship-induced greenhouse gas emissions. As a consequence, global warming and various air pollution issues have surfaced. Worldwide estimated carbon emissions from ships, in 2012, were approximately 938 million tons, representing 2.6% of global total carbon emissions. If no effective control measures are taken, this figure is expected to rise by 50% to 250% by 2050 [7]. The literature also shows that the greenhouse gases and air pollutants emitted by ships mainly include SO 2 , NO x , CO 2 , PM 2.5 and PM 10 [8], and technical research aimed at reducing these emissions, such as seawater desulphurization, is also being carried out [9][10][11]. The increase in greenhouse gases and environmental pollution has resulted in the International Maritime Organization (IMO), the member states, and related organizations taking various measures to improve the energy efficiency of ships. In 2009, the IMO issued the Guidelines for Voluntary Use of the Energy Efficiency Operational Indicator (EEOI), which applies to all ships and is used to measure the energy efficiency level of operational ships.
In addition, the Energy Efficiency Design Index (EEDI) and the Ship Energy Efficiency Management Plan (SEEMP) were launched by the IMO, in 2011, for new ships and all ships, respectively. In 2015, the Marine Environment Protection Committee (MEPC) formulated a three-step plan focused on ship energy savings and emission reduction based on fuel consumption data, i.e., gathering the data, analyzing the data, and optimizing decision support. In the same year, China also issued a document entitled "Code for Smart Ships", which integrates the collection, analysis, assessment, and decision support of ship fuel consumption data as part of smart energy efficiency. In 2019, the Norwegian government partnered with the IMO to establish the GreenVoyage-2050 project, which aims to transform the shipping industry towards a lower-carbon future. The main purpose of all the above measures is to improve energy efficiency and to minimize greenhouse gas emissions from international shipping, and a prerequisite for the aforementioned objective is the development of an accurate and robust ship energy efficiency prediction model. Therefore, our main task is to establish a real-time prediction model with high accuracy and robustness based on the collected ship fuel consumption data, which will be the basis and premise of fuel management and the optimization of ship fuel efficiency. The remainder of this paper is organized as follows. Section 2 reviews the existing studies on ship fuel consumption prediction. Ship fuel consumption data collection and processing are described in Section 3. The methodology, hyperparameter optimization, and error metrics are outlined in Section 4. Section 5 discusses the experimental results and, finally, the conclusion and future work are presented in Section 6. Literature Review An accurate and robust ship fuel consumption prediction model plays a significant role in the optimization of ship fuel consumption. Currently, there are three main types of ship fuel consumption prediction models, namely the physics-based model, the simulation-based model, and the data-driven model. Physics-Based Ship Fuel Consumption Prediction Model In the physics-based ship fuel consumption prediction model, the ship's resistance is calculated through an empirical formula. This is followed by calculating the ship's fuel consumption based on the principle of equal resistance and thrust, combined with the relationship between thrust and the ship's fuel consumption rate. The earliest and most classic documentation of the physics-based ship fuel consumption prediction model was published by Holtrop and Mennen in 1982 [12]; however, the ship's resistance was calculated in calm water, without considering the marine environment. The model was improved by Kwon, who took marine environmental factors into account [13]. Subsequent studies on the physics-based ship fuel consumption prediction model have been based on the above-mentioned studies [14][15][16][17][18]. The advantages of the physics-based model are that the calculations are relatively simple and the principle of the model is easy to understand; however, it is difficult to accurately depict the impact of environmental factors on ship fuel consumption using an empirical formula; therefore, physics-based prediction approaches are usually less accurate.
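To make the chain of calculation behind such physics-based models concrete, the sketch below walks through a deliberately simplified resistance, effective power, brake power and fuel-rate estimate. All coefficients (resistance coefficient, wetted surface, propulsive efficiency, specific fuel oil consumption) are illustrative assumptions, and the calculation does not reproduce the Holtrop and Mennen or Kwon procedures.

```python
# Schematic sketch of a physics-based fuel estimate: resistance -> effective
# power -> brake power -> fuel rate. All coefficients are illustrative
# assumptions; they do not reproduce the Holtrop-Mennen or Kwon methods.
RHO = 1025.0        # seawater density, kg/m^3
CT = 2.5e-3         # assumed total resistance coefficient (dimensionless)
WETTED_AREA = 9500  # assumed wetted surface, m^2
ETA = 0.65          # assumed overall propulsive efficiency
SFOC = 180.0        # assumed specific fuel oil consumption, g/kWh

def daily_fuel_tonnes(speed_kn: float) -> float:
    v = speed_kn * 0.5144                               # knots -> m/s
    resistance = 0.5 * RHO * CT * WETTED_AREA * v**2    # N
    p_effective = resistance * v / 1000.0               # kW
    p_brake = p_effective / ETA                         # kW
    return p_brake * SFOC * 24 / 1e6                    # tonnes per day

for kn in (16, 18, 20):
    print(f"{kn} kn -> {daily_fuel_tonnes(kn):.1f} t/day")
```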
Simulation-Based Ship Fuel Consumption Prediction Model The widely adopted simulation-based prediction model for ship fuel consumption uses computational fluid dynamics (CFD), an emerging interdisciplinary field of hydromechanics and computer science [19][20][21]. CFD is used to approximate the integral and differential terms of the fluid dynamics governing equations into discrete algebraic forms, turning them into systems of algebraic equations. Then, these discrete systems of algebraic equations are solved using computer software in order to obtain numerical solutions at discrete time/space points. The simulation-based prediction model produces accurate results for ships sailing in calm water; however, the accuracy of a simulation-based prediction model for ships in actual sea conditions is still arguable, since it remains difficult to depict the impact of environmental factors on ship fuel consumption. In addition, CFD simulations take a relatively long time, making it difficult to satisfy the demand for real-time prediction. Data-Driven Ship Fuel Consumption Prediction Model The data-driven ship fuel consumption prediction model was developed by using data mining, deep learning, ensemble learning, and other methods. This approach is becoming increasingly popular in this field of research since a large number of noon-report data and sensor data are collected and made available. From the empirical formula, the ship fuel consumption has a cubic relationship with engine speed, whereas the engine speed is related linearly to the voyage speed. Hence, the ship fuel consumption can be related directly to voyage speed. Through this relationship, the ship fuel consumption model can be established using statistical methods that combine the relationship between fuel consumption and voyage speed. Then, the collected data are used to fit the model parameters in order to make the model more realistic. Yao et al. [22] fitted the daily fuel consumption (y) and speed (v) of container ships and obtained the following relationship: y = k_1 v³ + k_2. Le et al. [23] collected the noon-report data from more than 100 container ships and classified the ships into five types according to their sizes. Finally, ship speed, sailing time and total fuel consumption were linearly fitted. Bocchetti et al. [24] conducted experiments on oil tanker fuel consumption data and obtained a sixth-power relationship. Bialystocki and Konovessis [25] used the collected data from the noon reports and took environmental factors into consideration. Finally, the daily fuel consumption and speed were fitted to obtain a quadratic relationship. The least absolute shrinkage and selection operator (LASSO) and ridge regression [26,27] techniques were also used to model ship fuel consumption; compared with the traditional linear regression technique, the prediction performance of LASSO and ridge regression was better due to their ability to compress features and remove collinear features. Furthermore, the low accuracy of the linear regression technique is due to the high-dimensional and nonlinear nature of ship fuel consumption data, which makes it difficult to fit their intrinsic relationships. Therefore, nonlinear models have been gradually applied to ship fuel consumption modeling and have obtained better prediction results [28][29][30]. With the continuous development of machine learning, the use of new technologies for developing ship fuel consumption prediction models is becoming increasingly well researched.
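As a simple illustration of the statistical fitting surveyed above, the following sketch fits the cubic relation y = k_1 v³ + k_2 reported by Yao et al. [22] with SciPy; the speed and fuel values are illustrative placeholders, not the data analyzed in this study.

```python
# Minimal sketch: fitting the daily-fuel-vs-speed relation y = k1*v**3 + k2.
# The (speed, fuel) pairs are illustrative placeholders, not this study's data.
import numpy as np
from scipy.optimize import curve_fit

def cubic_law(v, k1, k2):
    return k1 * v**3 + k2

speed = np.array([14, 16, 18, 19, 20, 21], dtype=float)     # kn
fuel = np.array([55, 75, 100, 115, 130, 150], dtype=float)  # t/day (illustrative)

(k1, k2), _ = curve_fit(cubic_law, speed, fuel)
print(f"y = {k1:.4f} * v^3 + {k2:.2f}")
```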
The strong nonlinear fitting ability of artificial neural networks (ANNs) enables them to be widely used for models with high accuracy. It has been reported that ship fuel consumption models using ANNs have produced good prediction results based on noon-report data [23,[31][32][33][34], sensor data [29,[35][36][37][38][39]] and automatic identification system (AIS) data [40]. Another machine learning approach, known as ensemble learning, has been emerging and is gradually being applied in ship fuel consumption models [41,42]. Experimental results have revealed that the accuracy of ensemble learning methods for predicting ship fuel consumption is superior compared with other algorithms. From the above literature review of data-driven methods, there is not one algorithm that is applicable to all research datasets. Different algorithms can be more appropriate because they perform better on particular research datasets [43]. In general, statistical methods are suitable for small datasets, whereas deep learning and ensemble learning perform better on large datasets. Research Gap and Contributions In the process of developing a ship fuel consumption model, there are two major components that determine the performance of the model. One component is the quality of the fuel consumption data; there are two main types of ship fuel consumption data, namely noon-report data and sensor data. Noon-report data are filled in by the crew once a day at noon, and therefore it is difficult to use these data for real-time monitoring of ship performance. Sensor data are collected by many sensors, with high-frequency acquisition, and therefore these data meet the requirements of real-time monitoring of ship fuel consumption. The second component of a model is the method of ship fuel consumption modeling. From the literature review, it seems that many current modeling methods have been based on a single model rather than multiple models, although multiple models have been proven to be effective in other fields [44,45]. The main contributions of this study are the following: (1) A precise and high-frequency ship fuel consumption dataset is obtained via multiple sensors in order to provide substantial high-quality data for model development. (2) A novel hybrid fuel consumption prediction model based on multiple models is proposed. Data Collection The ship fuel consumption data were obtained from a container ship from 14 September 2017 to 25 September 2018. The container ship information is shown in Table 1. The ship fuel consumption data records consist of information on characteristics such as data acquisition time, fuel consumption, Global Positioning System (GPS) speed, trim, mean draft, current speed and direction, wind speed and direction, and wave direction and height, and are shown in Table 2. The fuel consumption was collected by an installed onboard flow meter sensor, and each data record value is the volume of heavy fuel consumed by the ship's main engine within a 15-min period multiplied by the density of the heavy fuel. The GPS speed was collected by an installed onboard GPS sensor. Mean draft (mean value of fore draft and aft draft) and trim (fore draft minus aft draft) were obtained by the installed onboard echosounder sensor. We also acquired wind speed, wind direction, wave height, wave direction, current speed, and current direction through the onboard radarsonde sensor, wave gauge sensor and current meter sensor, respectively.
Since the collection frequencies of the onboard GPS, echosounder, radarsonde, current meter, and wave gauge sensors vary from a few seconds to a few minutes, in order to be consistent with the collection frequency of fuel consumption, the values of GPS speed, mean draft, trim, wind speed, wind direction, wave height, wave direction, current speed, and current direction are the mean values within each 15-min window. For the convenience of subsequent research, the fuel consumption value within each 15-min window was converted into daily fuel consumption, E, as follows: E = 96 E_r, where E_r is the ship fuel consumption in each 15-min window (there are 96 such windows per day). Data Processing Data processing is an important step and a prerequisite for developing a ship fuel consumption model, because there are inevitably some errors in the raw data collection process due to data transmission delay, deviation, and/or interruption, etc. [37]. The errors include null data, noisy data, anomaly data, etc. The following steps were performed to delete the errors in the raw data. (1) Some of the characteristics in the fuel consumption data contained null values; these records were deleted to ensure the integrity of the data records. (2) Values that fell outside the physically plausible range were considered noisy data and deleted. For example, values for the direction of wind, wave, or current beyond 0-360°, mean draft over 20 m, trim over 5 m in absolute value, wind speed over 30 m/s, wave height over 10 m, and current speed over 2 kn were all considered noisy data, and those records were deleted. After the processing of null data and noisy data was completed, 9371 ship fuel consumption data records remained, as shown in Figure 1a. (3) Unlike null data and noisy data, which are relatively easy to find, anomaly data can only be found with the help of existing research and domain knowledge. The process of deleting anomaly data is as follows [46]. Step 1. Delete any ship fuel consumption data records with ship GPS speed V < 10 kn or ship GPS speed V > 30 kn. Step 2. Calculate the ratio k of the daily fuel consumption values of any two data records; when k exceeds the threshold criterion of [46], add 1 to the outlier scores of the i-th and j-th data records; traverse all data records and count the total score of each data record. Step 3. Sort the outlier scores of the data records in descending order and delete the top 20% of the data records. After deleting anomaly data, the final cleaned data consisted of 7493 reliable ship fuel consumption data records, as shown in Figure 1b. It can be observed from Figure 1 that after deleting anomaly data, the distribution of ship fuel consumption data became more regular. Furthermore, the goodness of fit of the data to the GPS speed curve increased from 0.7773 to 0.9179, which indicates that data processing can effectively improve the performance of the model. Data Overview The distribution of the processed fuel consumption data is shown in Figure 2. Figure 2a shows the distribution of fuel consumption and GPS speed; we can see that, most of the time, the fuel consumption value is approximately 100-130 t and the GPS speed value is approximately 18-20 kn, which corresponds to the customary speed range of container ships. The distribution of mean draft and trim is shown in Figure 2b. Overall Framework The main focus of this study included the following three objectives: fuel consumption data collection, data processing, and data analysis (fuel consumption modeling).
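As a minimal sketch of the data processing steps described above (null removal, the physical-range filters, the 10-30 kn speed window of the anomaly procedure, and the conversion E = 96 E_r), the following Python snippet is illustrative; the DataFrame column names are assumed, and the pairwise ratio-based outlier scoring of [46] is only indicated, not implemented.

import pandas as pd

def clean_records(df: pd.DataFrame) -> pd.DataFrame:
    df = df.dropna()                                  # (1) delete null data
    in_range = (                                      # (2) delete noisy data outside physical ranges
        df["wind_dir_deg"].between(0, 360)
        & df["wave_dir_deg"].between(0, 360)
        & df["current_dir_deg"].between(0, 360)
        & (df["mean_draft_m"] <= 20)
        & (df["trim_m"].abs() <= 5)
        & (df["wind_speed_ms"] <= 30)
        & (df["wave_height_m"] <= 10)
        & (df["current_speed_kn"] <= 2)
        & df["gps_speed_kn"].between(10, 30)          # (3) anomaly Step 1: speed window
    )
    df = df[in_range].copy()
    df["daily_fuel_t"] = 96 * df["fuel_mass_t"]       # E = 96 * E_r (96 windows per day)
    # Steps 2-3 of the anomaly scoring (pairwise ratio test of [46]) would follow here.
    return df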
Then, the model was applied (fuel consumption optimization), which was the ultimate goal of the study, i.e., to improve energy efficiency, reduce emissions, and protect the marine environment. The details are shown in Figure 3. The main steps of the study are as follows: Step 1. The fuel consumption data of an ocean-going container ship were obtained by different sensors, including fuel consumption, GPS speed, mean draft, etc., for a total of 10 related features. The daily fuel consumption was the output variable, and the remaining variables were used as input variables. Step 2. Data preprocessing was performed, including data transformation and the deletion of null, noisy, and anomaly data, to obtain a high-quality fuel consumption dataset. Step 3. The fuel consumption dataset was divided into training and testing sets according to a certain ratio. Step 4. The single models and the proposed hybrid model were trained on the training set. Step 5. A Bayesian optimization method for hyperparameters was used to enhance model performance. Step 6. The model performance was evaluated using error metrics. Step 7. The best model could be applied to ship route, speed, and trim optimization in the future. XGB The XGBoost (XGB) algorithm was first proposed by Chen and Guestrin [47] and has a wide range of applications in various fields due to its high accuracy, regularization, support for parallel operations, and automatic processing of missing values [48,49]. The XGB algorithm is solved through the following steps [47]: ŷ_i = ∑_{k=1}^{K} f_k(x_i), (1) f_k ∈ F = {f(x) = w_{q(x)}}, (2) where ŷ_i is the predicted value of the i-th sample, K is the total number of trees, x_i is the feature vector, F is the set of trees, and f is the structure of a tree. The objective function is Obj = ∑_i l(y_i, ŷ_i) + ∑_k Ω(f_k), (3) where l(y_i, ŷ_i) is the loss function and Ω(f_k) is the regularization term, given by Ω(f) = γT + (1/2) λ ∑_{j=1}^{T} w_j², (4) where T is the number of leaf nodes, w_j is the weight of leaf node j, and γ and λ are weight penalties. In the process of objective function minimization, each newly added function f_t(x_i) should minimize the loss function; the t-th round objective function of Equation (3) can be converted to Obj^(t) = ∑_i l(y_i, ŷ_i^(t−1) + f_t(x_i)) + Ω(f_t). (5) The following Equation can be obtained by using a second-order Taylor expansion to approximate the loss function: Obj^(t) ≈ ∑_i [g_i f_t(x_i) + (1/2) h_i f_t(x_i)²] + Ω(f_t), (6) where I_j = {i | q(x_i) = j} is the set of samples assigned to the j-th leaf node, and g_i and h_i are the first and second derivatives of the loss function, respectively. Let G_j = ∑_{i∈I_j} g_i and H_j = ∑_{i∈I_j} h_i; substituting them into Equation (6), Equation (7) is obtained as follows: Obj^(t) = ∑_{j=1}^{T} [G_j w_j + (1/2)(H_j + λ) w_j²] + γT. (7) The optimal leaf weight is obtained by calculating the partial derivative with respect to w_j: w_j* = −G_j / (H_j + λ). (8) Substituting Equation (8) into Equation (7), Equation (9) is obtained as follows: Obj* = −(1/2) ∑_{j=1}^{T} G_j² / (H_j + λ) + γT. (9) The greedy algorithm is used to enumerate the feasible split points to split the subtree, so that the model obtains a higher gain and a smaller objective function. The calculation Equation is as follows: Gain = (1/2)[G_L²/(H_L + λ) + G_R²/(H_R + λ) − (G_L + G_R)²/(H_L + H_R + λ)] − γ, (10) where G_L²/(H_L + λ) is the gain generated by the left sub-tree after the split, G_R²/(H_R + λ) is the gain generated by the right sub-tree after the split, and (G_L + G_R)²/(H_L + H_R + λ) is the gain without sub-tree splitting. RF and ET Random forest (RF) was proposed by Breiman and was developed based on the bagging technique [50]. The final result of the RF is averaged from the results of many independent decision trees. The calculation Equation is as follows [50]: ŷ = (1/m) ∑_{i=1}^{m} f_i(x), (11) where m is the total number of trees and f_i() is the prediction result of the i-th tree. The extremely randomized tree (ET) is a variant of RF [51]. The principle is similar to that of RF, where the only differences are the following: (1) RF uses bootstrap random sampling to select the training set for each of the decision trees, whereas ET generally does not use bootstrap random sampling.
(2) After selecting the split feature, RF selects an optimal feature value as the split point, the same as a traditional decision tree, whereas ET randomly selects a feature value to split on. MLR Multiple linear regression (MLR) is a statistical analysis method used to determine the interdependent quantitative relationship between two or more variables. Assuming the input variable is X = (x_1, · · · , x_D), its expression is the following [42,52]: ŷ = w_0 + ∑_{d=1}^{D} w_d x_d, (12) where the weight vector w can be estimated using the least squares (LS) approach as follows [53]: ŵ = (X^T X)^{−1} X^T y, (13) where X is the design matrix of the training samples and y is the vector of observed outputs. The Hybrid Fuel Consumption Prediction Model The proposed hybrid fuel consumption prediction model was developed on the basis of the stacking method [54,55]. By fusing multiple algorithms into the hybrid model, the advantages of each algorithm are fully utilized to improve the robustness of the model and enhance its generalization ability. The hybrid model improves generalization by combining a set of single models, rather than selecting the best one among them. The proposed hybrid model is a hierarchical model integration framework, and its structure is shown in Figure 4. There are two layers of models in the hybrid model framework. The first layer is composed of multiple base models (ET, RF, and XGB in this study); the original training set (X_train) is used to train the base models and to generate a new training set (S_train), combined with K-fold cross-validation. Subsequently, the new test set (S_test) is generated by using the trained base models to predict the original test set (X_test). The second layer is a meta model (MLR in this study); the training set (S_train) is used to train the MLR model, and the trained model is then used to predict the test set (S_test) and obtain the final prediction result. The calculation equation is as follows: ŷ = h(f_1(x), f_2(x), . . . , f_n(x)), (14) where f_i() is the i-th base learner and h() is the meta learner. The advantages of a hybrid model are better generalization ability, the ability to adapt to more complex tasks, the ability to fit nonlinear relationships, and greater robustness; however, the disadvantages of the hybrid model include difficulty in determining the values of the hyperparameters and complex calculations. Hyperparameters Optimization and Cross-Validation The hyperparameter values of a model are determined before training, not obtained through training. Therefore, it is necessary to have a set of optimized hyperparameter values in order to improve the prediction performance of the model. Since it is relatively challenging to determine hyperparameter values, it is important to choose a reasonable hyperparameter optimization method. There are three main methods for hyperparameter optimization, i.e., grid search, random grid search, and Bayesian optimization [56]. In grid search, a comprehensive search over all enumerated possibilities is performed. As a result, the optimal hyperparameter values are obtained over all combinations, at the cost of a longer runtime. In random grid search, a certain number of random searches are performed over all possible combinations of hyperparameters; therefore, random grid search has the shortest runtime, but there is a low possibility of obtaining the optimal combination of hyperparameters. The Bayesian optimization method lies between the previous two: it runs faster than grid search and can still obtain a better hyperparameter combination than random grid search.
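A minimal sketch of the two-layer hybrid (stacking) model described above is given below: ET, RF, and XGB act as base models, the out-of-fold predictions form the new training set S_train, and MLR is the meta model. The hyperparameter values are placeholders, and the inputs are assumed to be NumPy arrays.

import numpy as np
from sklearn.ensemble import ExtraTreesRegressor, RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold
from xgboost import XGBRegressor

def fit_predict_hybrid(X_train, y_train, X_test, k=5):
    base_models = [ExtraTreesRegressor(n_estimators=200, random_state=0),
                   RandomForestRegressor(n_estimators=200, random_state=0),
                   XGBRegressor(n_estimators=200, random_state=0)]
    kf = KFold(n_splits=k, shuffle=True, random_state=0)
    S_train = np.zeros((len(X_train), len(base_models)))  # out-of-fold predictions
    S_test = np.zeros((len(X_test), len(base_models)))
    for j, model in enumerate(base_models):
        for tr_idx, val_idx in kf.split(X_train):
            model.fit(X_train[tr_idx], y_train[tr_idx])
            S_train[val_idx, j] = model.predict(X_train[val_idx])
        model.fit(X_train, y_train)                        # refit on the full training set
        S_test[:, j] = model.predict(X_test)
    meta = LinearRegression().fit(S_train, y_train)        # MLR meta model, Equation (14)
    return meta.predict(S_test)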
To obtain more reasonable hyperparameter values, K-fold cross-validation is usually combined with hyperparameter tuning to jointly train the model. K-fold cross-validation divides the dataset into K parts of equal size, takes one part as the validation set and the other K−1 parts as the training set, and repeats the experiment K times. Error Metric To evaluate the performance of the ship fuel consumption prediction model, four performance metrics were constructed in this study: R², mean square error (MSE), mean absolute error (MAE), and running time (T), defined as follows [29,41]: R² = 1 − ∑_{i=1}^{n}(y_i − ŷ_i)² / ∑_{i=1}^{n}(y_i − ȳ)², (15) MSE = (1/n) ∑_{i=1}^{n}(y_i − ŷ_i)², (16) MAE = (1/n) ∑_{i=1}^{n}|y_i − ŷ_i|, (17) T = t_end − t_start, (18) where y_i is the true value of ship fuel consumption, ŷ_i is the predicted value, ȳ is the average value, and n is the number of data samples. t_start and t_end are the start and end times of model operation, respectively. It can be observed from Equation (15) that a larger value of the performance index R² indicates better model performance. Conversely, better model performance is indicated by smaller values of MSE, MAE, and T in Equations (16)-(18). Results and Discussion All experiments were conducted using Python 3.5 running on a 64-bit Windows 10 operating system with an Intel Core i5-7200 CPU and 12.0 GB of memory. To verify the superiority of the proposed hybrid model, it was validated against reference models developed using MLR, SVM, ANN, ET, RF, and XGB. Because the characteristic values of the ship fuel consumption data span different ranges, the performance of some models (MLR, SVM, ANN) is affected. Therefore, the data need to be standardized before being used in these models. The equation for data standardization is as follows [29]: x* = (x − µ) / σ, (19) where µ and σ are the mean and standard deviation of each characteristic, respectively. The ship fuel consumption dataset was divided into a training set and a test set according to the ratio 0.8:0.2. The model was trained using the training set, and the trained model was used to predict the test set in order to obtain the prediction result. To ensure that the training and test sets were the same for each model, the random division state (random_state) of the data was fixed and set to the same value. Simultaneously, in order to reproduce the experimental results, the random_state of each model was also fixed. Since the training set and the test set were divided according to random_state, the result obtained under a single random_state does not indicate whether the model is suitable; therefore, it is necessary to test under different random_state values and take the average as the measure of model performance. The following experiments were conducted on the influence of data volume, environmental factors, and hyperparameters on the model; in the first two experiments, the models were not hyperparameter-tuned. At the same time, the performances of different models before and after hyperparameter optimization were also compared. All the experimental results are the average of five experiments. The Impact of Data Volume on Model Performance The most significant difference between noon-report data and sensor data is the data acquisition frequency. The noon-report data acquisition frequency is once a day, while the sensor data are recorded every 15 min; therefore, the amount of sensor data is 96 times the amount of noon-report data over the same duration. Therefore, we can indirectly compare the effect of fuel consumption modeling based on sensor data versus noon-report data from the impact of the dataset size on model performance.
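The error metrics of Equations (15)-(17) and the standardization of Equation (19) can be computed directly, for example as in the short Python sketch below (the timing of Equation (18) is illustrated with the standard time module); variable names are illustrative.

import time
import numpy as np

def standardize(X):
    # Equation (19): z-score standardization per characteristic.
    return (X - X.mean(axis=0)) / X.std(axis=0)

def evaluate(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return {"R2": 1 - ss_res / ss_tot,                 # Equation (15)
            "MSE": np.mean((y_true - y_pred) ** 2),    # Equation (16)
            "MAE": np.mean(np.abs(y_true - y_pred))}   # Equation (17)

t_start = time.time()
# ... model training and prediction would run here ...
T = time.time() - t_start                              # Equation (18): runtime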
The data volume is set to 100, 200, 400, 800, 1600, 3200, 6400, and 7493. A total of eight different data volumes and 400 (10 × 8 × 5 = 400) experiments are required. The R², MSE, MAE, and T values of the models under different data volumes are shown in Figure 5a-d. It can be observed from Figure 5a-c that for data volumes less than 1000, the R² values of all models increase rapidly with an increase in the data volume. It takes almost three years for noon reports to accumulate a dataset of 1000 records, while sensor data only require about 10 days. This clearly demonstrates that sensor data are more suitable for fuel consumption prediction modeling than noon-report data. For data volumes between 1000 and 3000, the R² values increase slightly with an increase in the data volume, and for data volumes above 3000, the R² values of all models are basically constant. The runtime of a model, T, generally increases with the data volume; as the data volume continues to increase, the model runtime, T, also continues to increase, as shown in Figure 5d. The findings from this experiment can be used as a reference for the selection of a model for real-time and online incremental modeling of ship fuel consumption. The Impact of Marine Environmental Factors on Model Performance To study the impact of marine environmental factors such as wind, wave, and current on ship fuel consumption, four different ship fuel consumption datasets were designed, i.e., Set 1, Set 2, Set 3, and Set 4. Set 1 covers all environmental factors, whereas Set 2, Set 3, and Set 4 exclude the wind, wave, and current factors, respectively. A total of 200 (10 × 4 × 5 = 200) tests were conducted. The results of the proposed model and the reference models on the four datasets are shown in Tables 3 and 4, respectively. As shown in Table 3, the R² values of the proposed model for Set 1, Set 2, Set 3, and Set 4 are 0.9932, 0.9927, 0.9924, and 0.9932, respectively. This shows that the R² value decreases by 0.0005, 0.0008, and 0.0000 when the wind, wave, and current factors are missing from the dataset, respectively. The MSE value of the proposed model increases by 0.6493, 0.9637, and 0.0753 when the dataset lacks the wind, wave, and current factors, respectively. Following a similar trend, the MAE value of the proposed model increases by 0.0957, 0.1149, and 0.0083, respectively. From the above analysis, it can be observed that among the three environmental factors, the wave factor has the greatest impact on the model, followed by the wind and current factors. Since the number of features is reduced, the model runtime, T, is also slightly reduced, as shown in the last column of Table 3. The experimental results of the reference models in Table 4 show trends similar to those in Table 3 in terms of the importance of the wind, wave, and current factors to the model. Table 4. Effects of wind, wave and current factors on the results of the reference models. In order to further verify the findings of the above experiments, ET and XGB are used to calculate the importance of the different features to the ship fuel consumption model. As shown in Figure 6a,b, the sum of the importance values for GPS speed, trim, and mean draft reached 0.9617 for ET and 0.9385 for XGB. This indicates that these three factors play a leading role in ship fuel consumption modeling.
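The feature importance values discussed above can be read directly from fitted tree-based models; the short sketch below shows one way to tabulate them (the feature names are assumed for illustration).

import pandas as pd

FEATURES = ["gps_speed", "trim", "mean_draft", "wind_speed", "wind_dir",
            "wave_height", "wave_dir", "current_speed", "current_dir"]

def importance_table(fitted_model):
    # Works for ExtraTreesRegressor, RandomForestRegressor, and XGBRegressor.
    return (pd.Series(fitted_model.feature_importances_, index=FEATURES)
              .sort_values(ascending=False))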
According to the literature [33], the fuel consumption of a ship is approximately cubic, or of even higher order, in relation to the GPS speed, and proportional to the two-thirds power of the draft. The importance values for the environmental factors wind, wave, and current are 0.0135, 0.0208, and 0.0040, respectively, for the ET model. In the XGB model, the importance value for the wind factor is 0.0221, for the wave factor 0.0258, and for the current factor 0.0136. Wind resistance is quadratic with wind speed, and wave resistance is quadratic with wave height and the ship's hydrostatic speed, so both waves and wind lead to increased fuel consumption of the ship [17]. From this analysis, we verified that the importance value of the wave factor is the most significant among the environmental factors, followed by wind and current. This indicates that wind and waves are the more important factors because they reduce the ship's propeller propulsion efficiency, thereby affecting the ship's daily fuel consumption. Current had little effect on the ship's propulsion efficiency; therefore, it had a smaller impact on the ship's daily fuel consumption. The Influence of Hyperparameters on Model Performance A challenging but important step in the modeling process is to obtain reasonable values of the hyperparameters. As previously mentioned in Section 4.3, the Bayesian optimization and five-fold cross-validation methods were chosen to obtain the optimized hyperparameter values for each model. Table 5 outlines the hyperparameters that need to be optimized; it can be observed that ET, RF, XGB, and ANN have more hyperparameters than SVM, and MLR has no hyperparameter. Additionally, there is no separate hyperparameter optimization for the proposed model, as shown in Table 5, because the proposed model consists of several single models, and therefore the hyperparameter values of those single models are also the hyperparameter values of the proposed model. In order to verify the effect of the hyperparameter optimization, the performances of the models are compared before and after hyperparameter optimization, as shown in Figure 7. Figure 7a shows the R² value, where the blue dotted line and red line represent the results before and after hyperparameter optimization, respectively. It can be observed that the red line is almost always on the periphery, which indicates that the R² values of the models after hyperparameter optimization are increased; in other words, the model performance is improved. Figure 7b,c show the MSE and MAE values, respectively. The red lines are always located in the inner circle, which indicates that the MSE and MAE values after hyperparameter optimization have been reduced, which also shows that the performance of the models is improved. Figure 7d shows the model runtime, T, before and after hyperparameter tuning. It can be observed that after hyperparameter tuning, the runtime, T, increases for all models. Table 5. Hyper-parameters that need to be optimized. Performance Analysis of Different Models Experiments were conducted to find out the impact of different data volumes, different environmental factors, and hyperparameter optimization. The results revealed that increasing the data volume, including the environmental factors, and optimizing the model hyperparameters all improved the performance of the model to some extent.
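The hyperparameter tuning described above can be reproduced, for example, with Bayesian optimization and five-fold cross-validation as sketched below using scikit-optimize's BayesSearchCV; the paper does not name its implementation, and the search ranges shown are illustrative only.

from skopt import BayesSearchCV
from xgboost import XGBRegressor

search = BayesSearchCV(
    estimator=XGBRegressor(objective="reg:squarederror"),
    search_spaces={"n_estimators": (100, 1000),
                   "max_depth": (3, 10),
                   "learning_rate": (0.01, 0.3, "log-uniform"),
                   "reg_lambda": (0.1, 10.0, "log-uniform")},
    n_iter=30, cv=5, scoring="neg_mean_squared_error", random_state=0)
# search.fit(X_train, y_train); search.best_params_ then holds the tuned values.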
In order to determine the most suitable model for ship fuel consumption prediction, Set 1 was chosen as the data source and Bayesian optimization of the hyperparameters was performed for each model. Then, the mean and standard deviation (Std) of the error metrics were used to compare the performance of the models, as shown in Table 6. Among all single models, ET is the best model, with the highest value of R² (mean R² = 0.9938) and the lowest values of MSE (mean MSE = 7.3496) and MAE (mean MAE = 1.6752). The ANN is one of the most widely used models for ship fuel consumption modeling; however, its predictive performance is lower than that of the ensemble learning methods (ET, RF, and XGB). MLR has the worst predictive performance, because MLR is a typical linear regression model, whereas the ship fuel consumption data present a nonlinear relationship. By comparing the ET model to the proposed model, it can be observed, as shown in Table 6, that the two models have similar mean accuracy values: their R² values are the same, the proposed model has a lower MSE value (mean = 7.3446), and the ET model has a lower MAE value (mean = 1.6752). In terms of Std, the proposed model has lower MSE and MAE Std values than the ET model, i.e., the former has MSE and MAE Std values of 0.2982 and 0.0354, whereas the latter has MSE and MAE Std values of 0.3025 and 0.0371. This indicates that the robustness and stability of the proposed model are better than those of the ET model. The model runtimes, T, for all models are less than 80 s, which is significantly faster than the 15-min ship fuel consumption collection time interval. This indicates that all of the models meet the requirements of real-time ship fuel consumption prediction. The proposed model exhibited good performance in ship fuel consumption prediction, and therefore it can provide a reference for modeling other ship fuel consumption data sources in the future. Conclusions In this study, a hybrid method was implemented to develop a ship fuel consumption prediction model based on collected real-time ship sensor data. The research conclusions are mainly reflected in the following three aspects. First, the proposed data processing method can effectively improve the quality of ship fuel consumption data. The R² value of the data, when fitted to the GPS speed curve, increased from 0.7773 to 0.9179 after data cleaning was conducted. Second, increasing the data volume, including maritime environmental factors, and optimizing the hyperparameters were also found to contribute to the prediction accuracy of the model. Third, the experimental results revealed that the proposed model produced the best prediction results, followed by ET, XGB, RF, ANN, SVM, and MLR. By comparing the proposed model with the best single model (ET), the accuracies (mean R² = 0.9938) of the two methods were similar, but the standard deviation values (MSE Std = 0.2982 and MAE Std = 0.0354) of the proposed model were lower, which indicates that the proposed model is more robust and stable than ET. The runtimes of all models were shorter than the ship fuel consumption collection time interval (15 min), thus meeting the requirements of real-time ship fuel consumption prediction. The proposed hybrid model can accurately predict fuel consumption in real time, and it can also be applied for ship fuel consumption monitoring and fault diagnosis.
Shipping companies and related maritime organizations are concerned about how to achieve energy conservation and emission reduction from ships; therefore, in the future, to achieve these goals, the proposed hybrid model could be applied to optimize ship fuel consumption through, for example, speed optimization, trim optimization, and route optimization. At the same time, this study had several limitations. The research object in this study is a single container ship, and it is difficult to discover universal laws from one ship's fuel consumption data; therefore, fuel consumption data for a fleet of container ships need to be collected in the future. In addition, all results and conclusions were obtained only at the data-driven level, and therefore we plan to focus on the impact of ship hydrodynamics on fuel consumption in the future. Data Availability Statement: The data presented in this study are available on request from the corresponding author. The data are not publicly available due to confidentiality.
8,821.4
2021-04-01T00:00:00.000
[ "Engineering", "Computer Science" ]
Neurologic Complications of Varicella-Zoster Virus Infection Varicella-zoster virus (VZV) causes a diverse spectrum of neurologic complications: aseptic meningitis, encephalitis, cerebral infarction associated with granulomatous vasculitis, myelitis, and cranial polyneuropathy. These VZV-associated central nervous system (CNS) diseases usually result from reactivation of latent infection in immunosuppressive conditions, such as old age, diabetes mellitus, cancer, human immunodeficiency virus (HIV) infection, and the use of immunosuppressive drugs. However, they also occur in immunocompetent subjects. Since VZV antigen or DNA is often detected in the cerebrospinal fluid of these patients, it is thought that reactivated VZV reaches the central nervous system by direct spread from latently infected sensory ganglia. Analysis of cerebrospinal fluid by PCR is important for the diagnosis of VZV-associated CNS diseases, particularly in the absence of exanthema/herpes zoster. Clinicians should be aware of the neurologic complications of VZV infection, because early acyclovir therapy is necessary for these disorders. Introduction The clinical manifestations of varicella-zoster virus (VZV) infections of the central nervous system (CNS) include aseptic meningitis, encephalitis, cerebral infarction associated with granulomatous vasculitis, myelitis, and multiple cranial neuropathies (Figure 1) [1][2][3][4]. In these patients, viral antigens or DNA are often detected in the cerebrospinal fluid (CSF) or the sites of pathology. Thus, these neurological disorders reflect reactivation of latent VZV in the trigeminal ganglia and dorsal root ganglia, with subsequent spread of the infection into the CNS [1]. In addition, the incidence of CNS complications caused by VZV is likely higher in elderly individuals; in those with underlying diseases, such as malignant tumors and HIV; and in those who are immunosuppressed due to the use of steroids or immunosuppressive drugs [5][6][7][8]. However, VZV can also affect healthy individuals; therefore, these CNS VZV infections may be suspected even in patients without underlying diseases. Among CNS infections caused by VZV, diseases other than meningitis are rare; nevertheless, clinicians should be aware of the various clinical features of CNS infections caused by VZV in order to start early and accurate antiviral treatment. Aseptic meningitis Meningitis is inflammation of the pia mater and the arachnoid that cover the surface of the brain. Its clinical signs include fever, headache, nausea, vomiting, and meningeal irritation symptoms, such as nuchal rigidity and Kernig's sign. Furthermore, jolt accentuation and neck flexion tests are often positive. However, these are common symptoms and findings of meningitis regardless of the cause. The CSF examination shows mononuclear cell-dominant pleocytosis and elevated protein levels with normal glucose levels. Patients with meningitis in whom bacteria are not detected via the CSF test are generally diagnosed as having aseptic meningitis. Most cases of aseptic meningitis involve viral meningitis. The most common virus that causes viral meningitis is enterovirus. In adults, enterovirus is followed by herpes simplex virus type 2 (HSV-2) and VZV [10], and VZV infection accounts for 8% of the total meningitis cases [11]. VZV meningitis can sometimes cause cranial polyneuropathy or dysuria due to sacral radiculopathy; the latter is known as Elsberg syndrome.
Meningitis caused by VZV is also frequently observed among healthy young individuals. Such a condition generally has a good prognosis and rarely causes any sequelae. Elsberg syndrome Elsberg syndrome is caused by bilateral sacral radiculopathy, which is characterized by urinary retention, sensory disturbance, and neuralgia of the perineum and lower limbs. Although Elsberg syndrome was originally characterized by urinary retention due to sacral radiculopathy associated with genital herpes, it is now defined as aseptic meningitis-associated sacral radiculopathy. HSV, particularly HSV-2, is the most common causative virus, followed by VZV [12,13]. When urinary retention occurs, urethral catheterization is required. However, this condition resolves as the meningitis improves. Case 1: Elsberg syndrome A 32-year-old man was admitted to our hospital because of high fever, headache, nausea, acute urinary retention, and dysesthesia in a lumbosacral dermatome distribution. There were no motor symptoms and no rash. CSF analysis showed 249 leukocytes/mm3, 70 mg/dl protein, and positivity for VZV DNA by PCR. Gadolinium-enhanced MRI revealed meningeal lesions of the conus medullaris and swollen radicular fibers in the upper lumbar spinal canal. Treatment with acyclovir and dexamethasone for 2 weeks led to complete resolution of the meningitis and urinary retention. Encephalitis and cerebral infarction associated with granulomatous vasculitis The symptoms of encephalitis include acute disturbance of consciousness, headache, fever, and convulsions. Neurological findings of encephalitis include meningeal irritation symptoms, such as nuchal rigidity; however, patients with encephalitis sometimes present with motor paralysis and sensory disturbance due to parenchymal brain damage. Among the pathogens that cause viral encephalitis, VZV is the second most common cause following HSV, accounting for 5% of the total encephalitis cases [14]. According to a recent analysis using PCR, the risk of VZV encephalitis increases in elderly individuals, those with herpes zoster ophthalmicus, and those with disseminated herpes zoster, and this result indicates that the incidence of VZV encephalitis might have increased [15]. The clinical manifestations of VZV encephalitis include meningoencephalitis and vasculopathy [16]. The meningoencephalitis form shows no detectable lesions on MRI. In contrast, the vasculopathy form is characterized by non-specific ischemia, hemorrhagic lesions, and multiple white matter lesions on MRI [16]. Pathological studies suggested that VZV encephalitis develops on the basis of vasculopathy in the large and small vessels. Therefore, MRI typically demonstrates ischemic or hemorrhagic infarction in both gray and white matter, particularly at gray-white matter junctions, as characteristic imaging findings of VZV encephalitis [16]. In VZV encephalitis, lesions in the temporal lobe and limbic system, which are often observed in patients with herpes simplex encephalitis, are rare. Moreover, hemorrhagic lesions and necrosis, which are characteristic of herpes simplex encephalitis, are not commonly observed. Because VZV DNA is generally detected in the CSF of adult patients with VZV encephalitis, direct viral invasion of the CNS is believed to be the pathology of VZV encephalitis. In contrast, in varicella encephalitis in children who develop acute cerebellar ataxia associated with varicella infection, VZV is not detected in the CSF.
Therefore, a secondary immunological allergic mechanism is considered to be the pathology of varicella encephalitis. Cerebral infarction caused by granulomatous vasculitis is a complication of herpes zoster infection [1,17]. A typical patient presents with herpes zoster ophthalmicus, followed by postherpetic contralateral hemiplegia, and develops cerebral infarction between the eighth day and the sixth month after herpes zoster infection (average of 7 weeks) [18,19]. Patients with cerebral infarction often present with stenosis or obstruction of the anterior cerebral artery or middle cerebral artery. Because VZV DNA and antigens are detected in the walls of cerebral arteries, this evidence supports an anatomic pathway for transaxonal spread of VZV after reactivation from the trigeminal ganglia as a mechanism of intracerebral VZV vasculopathy [20][21][22]. The incidence of stroke increases during the 6 months after the onset of herpes zoster infection [23], and VZV vaccine and antiviral drug therapy may help reduce the risk of stroke after herpes zoster infection [24]. Cerebral infarction can also develop after varicella infection in children [25]. Although it is rare, it occurs within 6 months after varicella infection, and a mechanism similar to that of cerebral infarction after herpes zoster is considered [25]. In these conditions, VZV, which establishes latent infection in the trigeminal ganglion after varicella infection, reactivates and directly invades the vessels of the CNS. Case 2: meningoencephalitis The patient was a 77-year-old woman who was admitted to our hospital due to convulsions and impaired consciousness. She presented with a Glasgow Coma Scale score of E1V1M4, and nuchal rigidity was observed. The convulsions were treated with intravenous injection (IV) of diazepam and intramuscular injection of phenobarbital. However, the patient had high fever after admission to the hospital. CSF examination showed an increased cell count, an elevated protein level (125.0 mg/dl), and positivity for VZV DNA, and she was then diagnosed with VZV infection. The patient was treated with acyclovir and dexamethasone, and she regained consciousness and was able to talk on the second day of hospitalization. On the seventh day, she recovered with lucid consciousness and without sequelae (Figure 2). Her MRI showed no abnormal lesions in the brain parenchyma, and she was diagnosed with meningoencephalitis. Case 3: cerebral infarction associated with granulomatous vasculitis The patient was a 76-year-old man who developed infarction in the right medial hypothalamus 34 days after the onset of right ophthalmic herpes zoster. He further developed an infarction in the right occipital lobe 73 days after the onset of the herpes zoster infection. Although the MRI obtained while the patient presented with the herpes zoster rash did not show any abnormal findings, the MRI performed 73 days later showed severe stenosis of the posterior communicating artery. Case 4: cerebral infarction associated with granulomatous vasculitis The patient was a 52-year-old woman with systemic lupus erythematosus (SLE) who exhibited altered levels of consciousness during immunotherapy for SLE. The CSF test showed pleocytosis, an elevated protein level, and positivity for VZV DNA, and the patient was then diagnosed with VZV meningoencephalitis. Brain MRI showed cerebral infarction in the left cerebral white matter, and MR angiography showed stenosis of the left middle cerebral artery.
Cranial polyneuropathy Cranial nerve palsy can sometimes develop in patients with herpes zoster of the face or neck regions. Facial nerve palsy accompanying herpes zoster infection is known as Ramsay Hunt syndrome, and these patients often exhibit cranial polyneuropathy [26,27]. Lower cranial polyneuropathy causes dysphagia, dysarthria, and hoarseness. Furthermore, absent elevation or constriction of the soft palate on the affected side, tongue deviation, and muscular weakness of the sternocleidomastoid and trapezius muscles may be observed due to unilateral glossopharyngeal, vagus, accessory, and hypoglossal nerve palsies. Cranial polyneuropathy is often accompanied by meningitis, and CSF examination shows pleocytosis and elevated protein levels. In most cases, brain MRI shows no abnormalities. However, contrast MRI sometimes shows enhancement of the affected cranial nerves. As a mechanism of this condition, reactivation of VZV from the geniculate ganglion could result in an inflammatory process, circulatory disturbance, or edema involving the cranial nerves [28]. Case 5: lower cranial polyneuropathy A 64-year-old woman developed acute paralysis of the IX, X, XI, and XII nerves on the left side after experiencing pain in the left ear and throat. CSF examination revealed lymphocytic pleocytosis and elevated protein levels. VZV DNA was detected by PCR using CSF. She was diagnosed with cranial polyneuropathy due to VZV reactivation. After the oral administration of an antiviral agent and a steroid, all signs and symptoms dramatically improved. Notably, there was no evidence of cutaneous or mucosal rash during the entire course of the disease. VZV reactivation should be included in the differential diagnosis of multiple cranial nerve palsies, particularly with pain and even without rash. Case 6: lower cranial polyneuropathy The patient was a 66-year-old man who presented with dysphagia and hoarseness 2 days after the onset of pain extending from the left occipital region to the shoulder. At an otorhinolaryngology clinic, recurrent laryngeal nerve paralysis was observed, and lesions of herpes zoster were noted on the left side of the neck. Left glossopharyngeal, vagal, accessory, and hypoglossal nerve palsies were observed on neurological examination. CSF examination showed an increased cell count and positivity for VZV DNA, and the patient was diagnosed with multiple lower cranial polyneuropathy. Myelitis VZV myelitis is a rare clinical manifestation. However, Brown-Séquard syndrome and transverse myelopathy may occur as complications of herpes zoster infection [6,7,[29][30][31]. Previous reports revealed that myelitis occurred in elderly or immunocompromised patients, such as those with HIV infection, and this condition often had severe sequelae such as motor paralysis [6,7]. MRI shows low or equal signal intensity on T1-weighted images and high signal intensity on T2-weighted images, and sometimes contrast enhancement of the spinal cord lesions can be observed. In addition, MRI enhancement may be observed not only in the lesions of the spinal cord but also in the meninges around the spinal cord as well as in the dorsal root nerve, and these findings indicate myeloradiculitis. Myelitis is often characterized by myelopathy at a level consistent with the spinal segment affected by herpes zoster. As a pathogenesis of this condition, reactivated VZV in the dorsal root ganglion directly invades the spinal cord from the dorsal root nerve, resulting in myelitis [6,7].
Moreover, spinal cord lesions may be caused by vasculopathy, such as damage to the anterior spinal artery due to vasculitis, similar to cerebral infarction after herpes zoster infection, and this may be considered another mechanism. Case 7: myelitis The patient was a 60-year-old man with right lower extremity paralysis and sensory disturbance of the right trunk and lower extremity who was diagnosed with VZV myelitis based on CSF examination. MRI of the spinal cord showed a high-signal lesion in the right posterior funiculus at the thoracic vertebral levels of Th6 and Th7, which suggests that VZV directly invaded the spinal cord from the dorsal root. Case 8: myelitis An 87-year-old woman developed weakness of the right lower limb 2 days after developing herpes zoster lesions on the right side of the chest. Neurological examination revealed a spastic palsy of the right lower limb and loss of pain and temperature sensation on the left side to the T6 level. However, vibration and position senses were not impaired on either side. Thus, the patient presented with incomplete Brown-Séquard syndrome. Spinal T2-weighted MRI images showed a high-intensity lesion in the right side of the spinal cord, except at the posterior funiculus, at the Th2 level. CSF analysis showed a leukocyte count of 109/mm3 and a protein level of 79 mg/dl, as well as negative VZV PCR, elevated anti-VZV IgM and IgG titers, and an increased IgG index. Although she was treated with a combination of acyclovir and steroid pulse therapy, the weakness of her right lower limb did not improve. In this case, because the posterior funiculus, which is supplied by the posterior spinal artery, was not involved, the incomplete Brown-Séquard syndrome may have been caused by spinal cord infarction due to VZV vasculitis of the anterior spinal artery. Postherpetic neuralgia Although most cases of acute herpes zoster are self-limited, about 10-15% of patients with herpes zoster will develop postherpetic neuralgia (PHN) [32], particularly older adults [33]. Immunosuppressed patients have a higher incidence of PHN. PHN refers to pain persisting for months to years after the resolution of the rash. Sensory symptoms can include pain, numbness, dysesthesias, and allodynia (pain provoked by normally innocuous stimuli, such as light touch) in the affected dermatome. These symptoms may be severe enough to restrict sleep, appetite, or daily activities. The diagnosis of PHN is clear-cut and can be made if these sensory symptoms, including pain, persist beyond 4 months in the same distribution as a preceding episode of acute herpes zoster [34]. Gabapentin, pregabalin, tricyclic antidepressants, and opioids are generally the first-line drugs for the treatment of PHN [35][36][37]. Vaccines are also available for the prevention of acute zoster and PHN [38,39]. Diagnosis For the diagnosis of CNS infection caused by VZV, the detection of VZV DNA by PCR using CSF is necessary [40][41][42]. However, a negative VZV DNA result does not rule out VZV infection; in particular, PCR performed after the initiation of antiviral treatment is likely to be negative. Thus, testing should be conducted using CSF collected before antiviral treatment. When measuring anti-VZV antibodies, a significant increase of the anti-VZV antibody titer in CSF over the course of the illness or findings suggesting the production of intrathecal antibody [serum/CSF antibody ratio ≤ 20, or antibody titer index = (CSF antibody/serum antibody)/(CSF albumin/serum albumin) ≥ 2] should be confirmed.
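As a purely hypothetical illustration of the intrathecal antibody criterion above (the figures are invented, not taken from a patient): if the anti-VZV antibody titer is 1:8 in CSF and 1:64 in serum, and the albumin concentration is 30 mg/dl in CSF and 4000 mg/dl in serum, the serum/CSF antibody ratio is 64/8 = 8 (≤ 20), and the antibody titer index is (8/64)/(30/4000) = 0.125/0.0075 ≈ 16.7 (≥ 2); both values would therefore suggest intrathecal antibody production.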
Therapy Antiviral therapy with intravenous acyclovir (10 mg/kg intravenously every 8 hours) should be initiated as soon as the diagnosis is considered [43]. Although the prognosis of meningitis is good, encephalitis and myelitis often result in sequelae, and a delay in the initiation of acyclovir treatment leads to a poor prognosis. Therefore, if CNS infection caused by VZV is suspected based on clinical symptoms as well as CSF examination and imaging findings, the administration of acyclovir must be immediately initiated. According to a recent study from the UK, causative pathogens were identified in 42% of acute encephalitis cases, of which HSV or VZV was identified in one-fourth of the cases. Thus, the administration of acyclovir should be immediately initiated if encephalitis is clinically suspected, and acyclovir should then be administered for 2 weeks in an immunocompetent host and for 3 weeks in an immunosuppressed host if encephalitis caused by HSV or VZV is confirmed [5]. In CNS infection caused by VZV, the standard administration period is similar. There is no evidence showing a therapeutic effect of adjunctive corticosteroid use. However, corticosteroids suppress the inflammatory response accompanied by cytotoxicity due to the host immune response to viral infection, and in cases of encephalitis/vasculitis, myelitis, and cranial polyneuropathy, the adjunctive administration of dexamethasone or steroid pulse therapy with acyclovir is recommended. Conclusions VZV causes a diverse spectrum of neurologic complications: aseptic meningitis, encephalitis, cerebral infarction associated with granulomatous vasculitis, myelitis, and cranial polyneuropathy. Clinicians should be aware of the neurologic complications of VZV, because early acyclovir therapy is necessary for these disorders.
3,943.8
2019-01-17T00:00:00.000
[ "Medicine", "Biology" ]
Phycobiliproteins Ameliorate Gonadal Toxicity in Male Mice Treated with Cyclophosphamide Cyclophosphamide (CP)—which is used to treat autoimmune diseases and cancer—is related to gonadotoxicity attributed to oxidative stress. As phycobiliproteins (PBPs) are strong antioxidants that are unexplored as protective agents against male gonadotoxicity, our work aimed to investigate the effects of PBP crude extract on testicular damage and sperm parameter alterations caused by CP in mice. Three doses of PBP (50, 100, and 200 mg/kg) were tested in the experimental groups (n = 8 per group), administered concomitantly with 100 mg/kg CP. After 42 days receiving PBP daily and CP weekly, body and relative testicular weights, serum testosterone levels, testicular lipoperoxidation and antioxidant enzyme activity levels, and testicular histology and sperm parameter alterations were assessed. The results showed that PBP crude extract at 200 mg/kg prevented serum testosterone reduction, body weight loss, lipoperoxidation and enzyme activity increments, and sperm parameter alterations and partially ameliorated relative testicular weight reductions and histological damage in CP-treated mice. In conclusion, we showed that PBP crude extract (200 mg/kg) mitigated oxidative damage in the testes and ameliorated alterations in sperm parameters in mice treated with CP (100 mg/kg); therefore, PBP extract could be considered as a potential protective agent against CP toxicity. Introduction Cyclophosphamide (CP) is a prodrug with alkylating activity that is widely used in chemotherapy, autoimmune disorders, and organ transplantation [1]. Despite its pharmacological benefits, CP has been associated with gonadotoxicity [2][3][4]. CP deteriorates sperm quality and atrophies testicular tissue in both humans and mice [5,6]. It also reduces superoxide dismutase (SOD) and glutathione peroxidase (GPX) activities in rat testes [7], and these reductions correlate with impaired sperm quality in infertile patients [8,9]. CP metabolism generates active molecules such as phosphoramide mustard and acrolein [10], which are associated with its therapeutic effect and toxicity, respectively [11]. Acrolein, a strong electrophile, creates oxidative stress conditions, modifies antioxidant enzymes [3], and reduces fertility in CP-treated patients [12]. As sperm cryopreservation, which is inaccessible to the majority of the population, is recommended before receiving chemotherapy [13,14], supplementation with phytocompounds and natural antioxidants has been proposed because they restore antioxidant balance and enhance male fertility [15]. C-phycocyanin (C-PC), allophycocyanin (APC), and phycoerythrin (PE), known as phycobiliproteins (PBPs) [16], are water-soluble fluorescent pigments found in cryptomonads. Animals The CD-1 male mice (25-30 g, 6-7 weeks old) from the Bioterium of Universidad Autónoma del Estado de Hidalgo (Pachuca de Soto, Mexico) were housed in cages with access to Lab-Diet Rodent 5001 food (Fort Worth, TX, USA) and water ad libitum at constant temperature (24 ± 2 °C) and relative humidity (50 ± 10%) and with 12 h light/dark cycles. These conditions were maintained during and after the experimental procedures. The investigation protocol was previously approved by the Bioethical Committee of the National School of Biological Sciences of the National Polytechnic Institute (protocol code: ZOO-016-2020; 21 December 2020), and procedures with animals were performed in accordance with Mexican Official Standard NOM-062-ZOO-2001.
Experimental Design Cyclophosphamide was dissolved in physiological solution (0.9% NaCl) and administered intraperitoneally at 100 mg/kg as described by Elangovan et al. [25] and Lu et al. [7], while PBPs were dissolved in phosphate-buffered saline and administered intragastrically at 50, 100, and 200 mg/kg-doubling and halving the 100 mg/kg dose reported by Castro-García et al. [22]. After an acclimatization week, treated male mice received daily i.g. doses of 50, 100, or 200 mg/kg PBP for 42 days concomitantly with weekly i.p. doses of 100 mg/kg CP (Figure 1). Control groups were included, whereby the vehicle control group received i.g. 0.9% NaCl daily for 42 days, the CP group received weekly i.p. doses of 100 mg/kg CP for 5 weeks, and the PBP group received daily i.g. doses of 200 mg/kg PBP for 42 days; thus, the experimental groups were defined as control, CP, PBP, PBP50 + CP, PBP100 + CP, and PBP200 + CP. At the end of treatment (day 42), the procedures described in Sections 2.5-2.9 were performed. Serum Testosterone, Body Weight, and Relative Testicular Weight Blood was collected by retro-orbital bleeding on day 42, and serum samples were separated to quantify testosterone levels using an ELISA kit. Mice were weighed and euthanized by cervical dislocation to immediately dissect the testes and epididymis and to determine the relative testicular weight.
Lipoperoxidation and SOD and GPX Activity in the Testes The left testicle was homogenized in physiological solution by sonication for 40 s (output 40%, 20 kHz; Ultrasonic Homogenizer VP-050N, Taitec Corp., Koshigaya City, Saitama, Japan). Testicle homogenates were centrifuged at 10,000 rpm (1619 Rotor, Universal 320 Centrifuge, Hettich®, Tuttlingen, Germany) for 10 min at 4 °C, and the supernatants were collected to evaluate lipoperoxidation and the enzymatic activity levels of SOD and GPX. Before determination, the total protein concentrations in the supernatants were calculated by Bradford assay. Lipoperoxidation was determined by malondialdehyde (MDA) identification using thiobarbituric acid as proposed by Buege and Aust [26]. SOD and GPX activity levels were evaluated using RANSOD and RANSEL kits, respectively, following the manufacturers' instructions. Histomorphometry Morphology changes in seminiferous tubules were determined in five different fields from each testicle section. The total seminiferous tubule area (TSTA) and lumen tubule area (LTA) were measured with Image-Pro Plus® software (Media Cybernetics Inc., Rockville, MD, USA). Next, the seminiferous tubule area (STA) was estimated as the difference between the TSTA and LTA (Figure 2). Sperm Parameter Assessment Epididymis spermatozoa were collected via flushing with M-16 medium (100 mM NaCl, 25 mM NaHCO3, 5.5 mM glucose, 2.6 mM KCl, 1.56 mM Na2HPO4, 0.5 mM sodium pyruvate, 1.8 mM CaCl2, 0.5 mM MgCl2, 20 mM sodium lactate, 100 IU/mg penicillin, and 100 µg/mL streptomycin at pH 7.2 and 37 °C). The progressive motility, sperm count, and cell viability were evaluated as sperm parameters according to World Health Organization guidelines [29]. Progressive motility was determined on a glass slide, sperm were counted in a hemocytometer, and sperm viability was evaluated by eosin-nigrosin exclusion assay; a phase contrast microscope (Carl Zeiss Microscopy Co., Oberkochen, Germany) was used for all determinations. Statistical Analysis The results were analyzed by one-way ANOVA followed by the Holm-Sidak multiple comparison test. Data were represented as means ± standard error of the mean (SEM), and p < 0.05 was considered statistically significant. The PBP, PBP50 + CP, PBP100 + CP, and PBP200 + CP groups were compared with the control and CP groups. Analysis was performed and graphs were produced in SigmaPlot v12.0 software. All samples were evaluated in triplicate. PBP Determination The PBP concentrations, purity index values, and extraction yields are indicated in Table 1. C-PC, APC, and PE, respectively, corresponded to 69.04, 30.34, and 0.62% of the PBPs extracted from SP, with C-PC being 2.3- and 111.5-fold more abundant than APC and PE, respectively. Additionally, the purity index and extraction yield of C-PC were higher than the purity index values and extraction yields of APC and PE.
Sperm Parameter Assessment Epididymis spermatozoa were collected via flushing with M-16 medium (100 mM NaCl, 25 mM NaHCO 3 , 5.5 mM glucose, 2.6 mM KCl, 1.56 mM Na 2 HPO 4 , 0.5 mM sodium pyruvate, 1.8 mM CaCl 2 , 0.5 mM, MgCl 2 , 20 mM sodium lactate, 100 IU/mg penicillin, and 100 µg/mL streptomycin at pH 7.2 and 37 • C). The progressive motility, sperm count, and cell viability were evaluated as sperm parameters according to World Health Organization guidelines [29]. Progressive motility was determined on a glass slide, sperm were counted in a hemocytometer, and sperm viability was evaluated by eosin-nigrosin exclusion assay; a phase contrast microscope (Carl Zeiss Microscopy Co., Oberkochen, Germany) was used for all determinations. Statistical Analysis The results were analyzed by one-way ANOVA followed by the Holm-Sidak multiple comparison test. Data were represented as means ± standard error of the mean (SEM), and p < 0.05 was considered statistically significant. The PBP, PBP50 + CP, PBP100 + CP, and PBP200 + CP groups were compared with the control and CP groups. Analysis was performed and graphs were produced in SigmaPlot v12.0 software. All samples were evaluated in triplicate. PBP Determination The PBP concentrations, purity index values, and extraction yields are indicated in Table 1. C-PC, APC, and PE, respectively, corresponded to 69.04, 30.34, and 0.62% of the PBPs extracted from SP, with C-PC being 2.3-and 111.5-fold more abundant than APC and PE, respectively. Additionally, the purity index and extraction yield of C-PC were higher than the purity index values and extraction yields of APC and PE. Figure 3 shows the results obtained from the testosterone serum determination. PBP at 100 and 200 mg/kg prevented the reductions in testosterone levels observed in the CP group. Although the PBP50 + CP group showed testosterone levels statistically lower than control, PBP50 + CP testosterone levels were 7.9% higher than the CP group. The body weight results (Figure 4a) showed that PBP at 50, 100, and 200 mg/kg prevented weight loss caused by CP, while the relative testicular weight data (Figure 4b) indicated that all groups receiving CP decreased their relative testicular weight compared to control; Serum Testosterone, Body Weight, and Relative Testicular Weight Nutrients 2021, 13, 2616 5 of 13 however, the statistical analysis showed that the PBP200 + CP group presented relative testicular weights higher than those of the CP group, suggesting that the PBP 200 mg/kg dose ameliorated testicular weight reduction. Figure 3 shows the results obtained from the testosterone serum determination. PBP at 100 and 200 mg/kg prevented the reductions in testosterone levels observed in the CP group. Although the PBP50 + CP group showed testosterone levels statistically lower than control, PBP50 + CP testosterone levels were 7.9% higher than the CP group. The body weight results (Figure 4a) showed that PBP at 50, 100, and 200 mg/kg prevented weight loss caused by CP, while the relative testicular weight data (Figure 4b) indicated that all groups receiving CP decreased their relative testicular weight compared to control; however, the statistical analysis showed that the PBP200 + CP group presented relative testicular weights higher than those of the CP group, suggesting that the PBP 200 mg/kg dose ameliorated testicular weight reduction. Lipoperoxidation and SOD and GPX Activity in the Testes The MDA levels, determined as lipoperoxidation indicators, are shown in Figure 5. 
Figure 4. Effects of PBPs on body weight and relative testicular weight of CP-treated mice: (a) body weights measured at the end of treatment, (b) relative testicular weights. Means ± SEM analyzed by one-way ANOVA followed by the Holm-Sidak test. * Indicates p < 0.05 vs. control group; # indicates p < 0.05 vs. CP group.
Lipoperoxidation and SOD and GPX Activity in the Testes
The MDA levels, determined as lipoperoxidation indicators, are shown in Figure 5. MDA in the CP group was fourfold greater than the control, while the PBP50 + CP, PBP100 + CP, and PBP200 + CP groups maintained MDA at levels similar to the control group, suggesting a protective effect of PBP (50-200 mg/kg) against the testicular lipoperoxidation induced by CP. On the other hand, as presented in Figure 6, the enzymatic activity of SOD and GPX increased in the CP and PBP50 + CP groups compared with the control, while the PBP100 + CP and PBP200 + CP groups showed SOD and GPX activity levels similar to those of the control group.
Figure 7 shows representative micrographs from the histological observations. The control and PBP groups presented intact seminiferous tubules in which spermatogonia, spermatocytes, and spermatids were identified. Similarly, the PBP50 + CP, PBP100 + CP, and PBP200 + CP groups preserved germinal cells from the basal membrane to the tubule lumen, while the CP group showed small tubules and large spaces in both the tubule lumen and the interstitium, indicating possible depletion of cells, especially mature spermatids and Leydig cells. As interstitial spaces were also observed in the PBP50 + CP micrographs, the testicular histology indicated that PBPs at the 50 mg/kg dose partially ameliorated the damage caused by CP, while the 100 and 200 mg/kg doses improved testicle histology.
Figure 7. Effects of PBPs on the testicular histology of CP-treated mice: (a) the control group presented spermatogonia (green arrows), spermatocytes (black arrows), spermatids (red arrows), and mature spermatids (purple arrows) in seminiferous tubules, and Leydig cells in the interstitial space (blue circles); (b) the CP group showed small seminiferous tubules, large interstitial spaces (yellow squares), and several spermatids; (c) PBP, (d) PBP50 + CP, (e) PBP100 + CP, and (f) PBP200 + CP groups presented preserved germinal cells; however, large interstitial spaces were identified in PBP50 + CP (d). Boxes in the upper right corner are magnifications (2×) of specific field areas, as indicated by dashed lines. H&E-stained sections observed at 10×.
Histomorphometry
The TSTA, LTA, and STA are shown in Table 2. Although large spaces in the seminal lumen were observed only in the CP group micrograph (Figure 7b), the LTA diminished in all groups compared with the control, while the TSTA was reduced only in the CP group. STA, which represented the seminiferous tubule cellular structure (Figure 2), was decreased in the CP group compared with the control, confirming the tubule reduction seen in the CP micrograph (Figure 7b). The PBP50 + CP, PBP100 + CP, and PBP200 + CP groups presented STA values lower than the control but greater than the CP group, indicating partial improvement of the testicular CP damage.
Interestingly, the TSTA and STA levels in the PBP group (without CP) were 12.8 and 21.9% higher than the control, respectively, suggesting that PBP consumption (200 mg/kg) contributed to an increase in germinal cells.
Sperm Parameters
The progressive motility, sperm count, and sperm viability, evaluated as parameters of sperm quality, are shown in Figure 8. All three evaluated parameters were reduced in the CP group compared with the control. PBP50 + CP, PBP100 + CP, and PBP200 + CP maintained cell viability; PBP100 + CP and PBP200 + CP preserved progressive motility; and PBP200 + CP sustained the sperm count at levels similar to the control. The data suggested that PBPs at 200 mg/kg were required to recover the sperm parameter alterations caused by CP administration in mice.
Discussion
Although CP is used to treat autoimmune diseases and cancer, reproductive organ toxicity has been shown in patients and experimental models [30][31][32]. As CP toxicity is attributed to oxidative stress [10], and PBPs demonstrate strong antioxidant properties [33], we explored the effects of PBP crude extract on the testicular damage caused by multiple CP doses in mice. In the PBP crude extract obtained from SP, we identified a higher C-PC percentage (69.04%) than APC (30.34%) and PE (0.62%) (Table 1). C-PC, which is related to high antioxidant capacity [16], was reported in PBP crude extracts from SP by Rodríguez-Sánchez et al. at a lower percentage (C-PC, 47%) [21] and by Walter et al.
at a reduced concentration (C-PC, 0.237 mg/mL) [34] as compared with our results; however, the C-PC purity index reported by Walter (0.8) correlates with our C-PC purity index result (0.7), with both index values considered to be of food grade (purity index ≥ 0.7) [35]. PBP doses (50, 100, and 200 mg/kg) were selected according to the results observed by Castro-García et al. [22] in a rat preeclampsia model, while the duration of CP treatment was determined considering the spermatogenesis cycle [36]. Our results are in agreement with the CP effects previously reported by Elangovan et al. [25], Lu et al. [7], and Iqubal et al. [37], including reductions in serum testosterone, body weight, and relative testis weight; sperm parameter alterations; increases in lipoperoxidation and enzyme activity (SOD and GPX); and testicular damage observed by histology. The PBP crude extract mitigated certain CP effects, depending on the PBP dose administered. In particular, PBP at 200 mg/kg prevented changes in serum testosterone, body weight, lipoperoxidation, enzyme activity, and the evaluated sperm parameters, while partially ameliorating relative testicular weight loss and histological damage. Body weight loss is attributed to appetite disruption caused by sensory taste cell modifications after CP administration in mice [38]. Protein consumption is recommended to avoid weight loss and malnutrition in patients receiving chemotherapy [39,40]; thus, the body weight recuperation observed in CP-treated mice at PBP doses of 50, 100, and 200 mg/kg was probably an effect of the protein content in the PBP crude extract. The antioxidant effects of PBPs have been previously demonstrated [17,41], which was confirmed in our results, whereby increments in lipoperoxidation and in SOD and GPX enzymatic activity were prevented with PBP doses of 100 and 200 mg/kg. As SOD and GPX activity, as well as lipoperoxidation [42], are proportional to oxidative stress conditions [43][44][45], and because the antioxidant effects of PBP are related to the capacity of C-PC to trap reactive molecules such as acrolein [46], the oxidative stress generated as a consequence of acrolein interactions was possibly reduced by the presence of C-PC in the PBP crude extract; however, the PBP crude extract at 200 mg/kg, the highest dose tested in our study, only partially ameliorated the relative testis weight reduction and the testicular damage observed by histology. Decrements in relative testicular weight were probably related to STA reductions and decreases in Leydig cells and spermatids in the CP group. Despite our histological results revealing that spermatogonia were preserved after multiple CP doses, as previously described by Drumond et al. [47], the interstitial spaces observed in the CP and PBP50 + CP groups suggest Leydig cell loss. Gu et al. [48] related this to the autophagy process of Leydig cells that is induced by CP. As Leydig cells maintain testosterone secretion [49], the observed testosterone reductions confirm Leydig cell loss. Although testosterone levels and interstitial spaces improved in the PBP 100 and 200 mg/kg groups, suggesting that Leydig cells were recovered, STA values and relative testicular weights were lower than the control, suggesting that the cell populations remained deteriorated. Interestingly, the PBP (200 mg/kg)-treated group without CP presented higher STA values than the control, probably because the antioxidant activity of PBPs reduced the oxidant molecules required for regular apoptosis processes [50].
On the other hand, sperm parameters such as progressive motility, sperm count, and cell viability were completely recovered at the PBP dose of 200 mg/kg. The effects of the PBP crude extract on sperm parameter alterations in CP-treated mice are attributable to the antioxidant activities of PBPs, because the damage observed in sperm is directly related to oxidative stress conditions. For example, decreased sperm motility is a result of flagellum defects caused by oxidative damage to the tubulin [51], reduced sperm count is related to spermatogenesis inhibition or apoptosis induction by oxidant molecules, and reduced cell viability is associated with apoptosis triggered by oxidative stress [52]. The NADPH oxidase system (NOX), embedded in the plasma membrane, catalyzes O2•− production through the oxidation of NADPH. NADPH oxidase isoform 5 (NOX5) is a potential candidate for the ROS-generating system in spermatozoa [53,54]. NOX5 can be directly activated by PKCα independent of an increase in intracellular calcium [55]. Acrolein, which is likely the agent responsible for the toxicity of CP to spermatozoa, is known to activate PKCα [56]; hence, overactivation of NOX5 may be largely responsible for the pro-oxidative impact of CP on spermatozoa. Intracellular free bilirubin generated by heme oxygenase is known to inhibit various isoforms of NADPH oxidase at low nanomolar concentrations [57], although its specific effect on the NOX5 form has not been reported. Phycocyanobilin, a compound present in the chromophores of phycocyanin and allophycocyanin [17], is converted within cells to phycocyanorubin, which is nearly identical in structure to bilirubin [58]; this may explain why phycocyanobilin has been found to mimic the NADPH oxidase-inhibitory effect of biliverdin or bilirubin in cell cultures. Our results, therefore, suggest that bilirubin and phycocyanobilin can function as inhibitors of NOX5, thereby protecting spermatozoa from acrolein. This possibility could be easily tested in vitro. As a crude PBP extract was studied in the present work, the beneficial effects observed against CP gonadotoxicity in male mice could be increased by other phytochemicals with antioxidant properties, such as the polyphenols [59] and polysaccharides [60] previously identified in SP. Although our results suggest that PBPs ameliorate CP damage, this study was limited to evaluating healthy mice; hence, the effects of PBPs must be assessed in an animal model of cancer or autoimmune disease to exclude an inactivation of the therapeutic effects of CP.
Conclusions
Phycobiliprotein crude extract at 200 mg/kg ameliorated the gonadotoxicity caused by CP in male mice. Alterations in serum testosterone, body weight, lipoperoxidation, enzyme activity, and sperm parameters were completely prevented, while relative testicular weight loss and histological damage were attenuated; however, additional CP function tests are recommended.
6,587
2021-07-29T00:00:00.000
[ "Biology" ]
Investigation of femtosecond collisional ionization rates in a solid-density aluminium plasma The rate at which atoms and ions within a plasma are further ionized by collisions with the free electrons is a fundamental parameter that dictates the dynamics of plasma systems at intermediate and high densities. While collision rates are well known experimentally in a few dilute systems, similar measurements for nonideal plasmas at densities approaching or exceeding those of solids remain elusive. Here we describe a spectroscopic method to study collision rates in solid-density aluminium plasmas created and diagnosed using the Linac Coherent Light Source free-electron X-ray laser, tuned to specific interaction pathways around the absorption edges of ionic charge states. We estimate the rate of collisional ionization in solid-density aluminium plasmas at temperatures ~30 eV to be several times higher than that predicted by standard semiempirical models. The electrons in a plasma can further ionize the ions when the two collide. Vinko et al. now study this ultrafast process in an unconventional plasma with a density similar to that of a solid, and show that the rate is several times higher than that predicted by standard theoretical models. A knowledge of the rate at which electrons are removed from, or recombine with, atoms and ions in a plasma is of fundamental importance in the understanding and prediction of plasma formation and dynamics. Changes in the charge state can occur via many processes, for example, interactions with photons, charge exchange, Auger decay and, of interest here, collisional ionization with unbound electrons within the system. For weakly coupled systems, where the thermal energy of the charged components greatly exceeds their mean energy of Coulomb interaction, the plasma can be treated as close to 'ideal', and it is commonly assumed when calculating collisional ionization rates that the collisions can be treated as binary, that is, that collective many-body effects play a negligible role in determining the dynamics of the interaction. Under such an assumption, cross-sections can be calculated via binary scattering theory, and by integrating these over the electron density and temperature distribution, the appropriate collision rates can be obtained. For many configurations these rates can be calculated theoretically within the relativistic distorted-wave approximation framework, which yields accurate results for high-temperature, highly ionized plasmas. This method is currently implemented in the widely used HULLAC 1 and FAC 2 atomic codes, but can be further refined for a range of increasingly complex plasma conditions (see ref. 3, and references therein). However, given the very large number of possible different ionization configurations, and/or the need for a comprehensive treatment of further effects, such as inner-shell excitation and autoionization, which can quickly become computationally cumbersome, empirical or semiempirical formulae, such as those in refs 4-7, are popular for use within collisional-radiative modelling codes (for example, FLYCHK 8 and SCFLY 9, CRETIN 10, CRModel 11 and ABAKO 12). Owing to the foundational link between rates and dynamics, collisional ionization cross-sections for isolated atoms and ions have been thoroughly investigated experimentally over the past half century using a variety of techniques such as the crossed-beam method 13, or directly via plasma spectroscopy of well-defined low-density systems 14,15.
The rate of collisional ionization follows directly from the cross-section once the number and energy of the colliders are known, via the temperature-dependent electron distribution function. In contrast, our understanding of the rate of collisional ionization in dense, strongly coupled plasmas, where the binary-collision model breaks down, is woefully inadequate. Such plasmas are prevalent throughout the Universe within stellar environments; for example, the conditions half-way to the centre of the Sun are equivalent in density to that of a typical solid, but with temperatures of order 100 eV. A number of theoretical approaches have been taken in an attempt to model collisions within such systems. Indeed, it is predicted that many-body effects such as plasma screening and ionization potential depression (IPD) can already have a significant effect on these rates at electron densities of 10²¹ cm⁻³ (refs 16,17), increasing them several times over and above those that would be predicted by a more classical binary approach. As experiments using intense, short-pulse lasers to heat solid targets, as well as inertial confinement fusion investigations, routinely deal with nonequilibrium plasmas at even higher electron densities, in the range of 10²³-10²⁵ cm⁻³ (refs 18,19), an understanding of how collisional ionization processes are affected in a dense plasma environment is of great practical importance 20,21. While a considerable amount of theoretical work has been carried out on calculating collisional ionization rates in dense plasmas, there is a dearth of experimental data in this regime. In part, this is because of the difficulty of producing highly ionized, dense and hot plasmas under well-known conditions, but also because the measurement of collision rates cannot generally be isolated from other competing transitions and recombinations, such that a good understanding of all the processes in the plasma system is required to investigate any single one. Moreover, for most conditions of interest in dense plasmas far from equilibrium, Auger recombination, electron collisional ionization and other electronic processes occur on femtosecond timescales, severely limiting the feasibility of direct, time-resolved investigations. It is in the above context that we describe here an experimental method to investigate the collisional ionization rates under warm- and hot-dense plasma conditions with well-defined electron densities and temperatures. We show that the commonly used semiempirical models 4,5, implemented in many plasma-kinetics codes, underestimate the rate of collisional ionization by a considerable margin, a trend predicted by the theoretical investigations cited above 16,17,21. As we will describe, our method relies on heating a solid target with the Linac Coherent Light Source (LCLS) X-ray free-electron laser (FEL). We exploit the short duration of the X-ray pulse to ensure that the plasma we create and study is heated isochorically to well-defined conditions, and the tunability of the LCLS photon energy to allow us, via X-ray-driven emission spectroscopy, to investigate specific collisional ionization rates within the system.
Results
Experiment. We use the LCLS FEL 22 to create a hot and dense Al plasma on ultrashort timescales using X-ray irradiation of 1-µm-thick foil targets at peak intensities reaching 10¹⁷ W cm⁻².
The X-ray wavelength is tuned over a range of values around the K-edges of the various ionization stages of Al, so that the predominant absorption pathway for the X-rays is K-shell photoionization, leading to the creation of a core hole in the 1s state. This K-shell hole subsequently recombines within a few femtoseconds, mainly via Auger decay from the L-shell, so that the energy of the X-ray pulse is efficiently retained, and the system is rapidly heated to peak temperatures approaching 200 eV, at solid density, within the duration of the pulse. The experiment, and the properties of the plasma created, have been described in detail elsewhere [23][24][25]. The main experimental diagnostic is X-ray emission spectroscopy in the Al Kα region of the spectrum in the range 1,460-1,600 eV, obtained via an ADP (101) crystal spectrometer with a spectral resolution of ~1.3 eV. Since the system is driven to temperatures not high enough to thermally ionize the K-shell, the Kα emission is driven solely by the X-ray beam, and the spectra carry information on the system exclusively within the duration of the X-ray pulse, which is ~80 fs. Because of these short timescales, the ions do not have time to move more than a fraction of the lattice spacing, and the heating of the sample is isochoric with a well-known ion density throughout the entire time of emission. The electrons, on the other hand, equilibrate very quickly, and collisional ionization was observed to play an important role in their thermalization within the duration of the pulse 23. The experimentally measured spectra for a range of X-ray irradiation photon energies are shown in Fig. 1. The various emission lines correspond to different charge states of an Al ion containing one hole in the K-shell and an additional number of holes in the L-shell. The first line in Fig. 1, denoted by IV, is emitted from a system containing initially a single K-shell hole and a fully filled L-shell. The three M-shell electrons are considered to be pressure-ionized in metallic Al. Since emission from line IV can only be observed if a 1s electron is photoionized from an ion with charge state 3+, the onset of this emission can be used to determine the K-shell ionization energy of the said charge state. Similarly, emission from V is determined by the K-edge of charge state 4+, and so on, for all possible occupations of the L-shell. Therefore, by observing the intensity of an emission line as a function of the photon energy of the X-ray pump, the positions of the K-edges can be extracted from the data for a range of specific ionic charge states, a result that has recently led to the reporting of the first direct measurement of the IPD in a dense plasma 25.
Simulations. As the experimental results are collected over the duration of the X-ray pulse, which also heats and ionizes the sample, the analysis remains reliant on the accurate modelling of the evolution of the plasma system under the intense X-ray irradiation. Therefore, we simulate the experiment using the SCFLY collisional-radiative code 8,9, which has proved capable of modelling the non-local thermodynamic equilibrium evolution of X-ray FEL-irradiated samples [23][24][25][26][27]. The basic parameters of the simulation have been reported previously [23][24][25]. The X-ray pulse is modelled via a Gaussian time distribution with a full-width at half-maximum pulse length of 80 fs.
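As a small illustration of the pulse model just described, the sketch below builds a normalized Gaussian temporal profile with an 80 fs FWHM and discretizes it into time steps, in the spirit of the 50-step treatment mentioned in the following paragraph; the grid and normalization are illustrative assumptions, not the code actually used in the analysis.

```python
import numpy as np

# Sketch: Gaussian temporal profile for the X-ray pump with an 80 fs FWHM,
# discretized over a 200 fs window (the paper quotes 50 time steps).
fwhm_fs = 80.0
sigma_fs = fwhm_fs / (2.0 * np.sqrt(2.0 * np.log(2.0)))

t_fs = np.linspace(-100.0, 100.0, 50)           # peak of the pump placed at t = 0
profile = np.exp(-0.5 * (t_fs / sigma_fs) ** 2)

# Normalize so the discrete sum carries the full pulse energy (arbitrary units).
weights = profile / profile.sum()
print(f"sigma = {sigma_fs:.1f} fs, largest per-step weight = {weights.max():.3f}")
```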
The simulations of the Kα peaks away from resonance are seen to not depend on the specific time structure of the pulse, and varying the pulse length down to 60 fs, or the shape to that of a square pulse, yields identical results to those reported, provided the total energy contained in the pulse is left unchanged. The time structure of the evolution of the system is simulated over a period of 200 fs in 50 time steps, and the total spectrum is obtained by integrating the emission from all steps. In the experiment, the sample was irradiated over a range of different intensities determined by the spatial focusing profile of the X-ray beam. This distribution on target of the X-ray intensity was measured experimentally and is used to construct a two-dimensional (2D) intensity map of the surface of the emitting region of the sample 28,29. In the calculations, the total emitting region is simulated by binning the target into 30 intensity bins, spanning over five orders of magnitude in intensity, each of which constitutes an independent atomic kinetics simulation. The spectra from these individual simulations are then weighted by the volume of the region irradiated at that intensity, and added together to simulate the emission observed experimentally on the detector. Such simulated emission spectra, pumped at photon energies of 1,590 and 1,630 eV, are shown as a function of time in Fig. 2. Here the peak of the X-ray pump is set to t = 0 (negative times are earlier). The comparison between the time-integrated calculated spectrum and the experimental measurement is given at the top of Fig. 2, and illustrates an excellent agreement between the two, provided the higher collision rates are used in the calculation, as will be discussed shortly. We note that, while the FEL pump and the simple X-ray photoabsorption interaction channel enable us to accurately simulate the plasma evolution, the emission spectrum remains that of a sample which is both inhomogeneously spatially heated and also evolving in time.
(Figure 2 caption) The Gaussian FEL pulse has a full-width at half-maximum of 80 fs and its peak is placed at t = 0. In a, the system is shown to be pumped at 1,590 eV, for which lines IV and V can be pumped directly, while lines VI and above emit because of collisional processes. In b, the system is pumped at 1,630 eV and lines VII and above emit because of collisional ionization. The relatively weak feature just below 1,560 eV is Kβ emission.
The opacity of the emitting region is modelled via the escape factor formalism, with a line-of-sight target thickness of 1 µm. The escape factor formalism was shown to be an adequate way to model the opacity in these conditions, provided the X-ray pumping occurs above the cold Al K-edge (1,550 eV); more detailed radiation transfer calculations are in turn required to model the K-L resonance transitions 27. The bandwidth of the X-ray pulse is 0.4%, determined experimentally 23. The calculations presented use the modified version of the Ecker-Kröll model (mEK) 30 for the IPD, which has been previously observed to be the most accurate model both for describing the evolution of the physical system and for reproducing the observed spectra 25. The collision rates used are those in ref. 5.
Collisional ionization. The experiment is sensitive to collisional processes. To show this, let us consider the emission line VI, photo-pumped at an X-ray photon energy of 1,590 eV.
The atomic configuration that produces emission line VI is K¹L⁶. In the experiment, K-shell vacancies can only be created via photoionization, since the temperature of the system heated by the LCLS is never sufficient for the thermal ionization of the K-shell of Al to be significant. Given that the K-edge of configuration K²L⁶ is at 1,610 eV (see Fig. 1), the configuration K¹L⁶ cannot be created via photoionization from this ground state with X-ray photons at 1,590 eV. Hence, one might expect to observe no emission from line VI while the X-ray pump remains below 1,610 eV. As we see from Fig. 2a, however, a non-negligible amount of emission from line VI is produced at this pump wavelength. In contrast, the emission from line V is determined by the photoionization of configuration K²L⁷, which has a K-edge at 1,575 eV, and is energetically allowed. This results in a significant production of ions in the K¹L⁷ configuration, which in turn gives rise to the strong emission from line V. We now note that, although the electron temperature is insufficient for K-shell ionization, it is large enough for significant L-shell collisional ionization to take place. This provides an alternative channel for the creation of the line VI emitting configuration K¹L⁶ from the photoionization of K²L⁷, rather than from the energetically forbidden K²L⁶, via a two-step process:
K²L⁷ (photoionization) → K¹L⁷ (L-shell collisional ionization) → K¹L⁶.    (1)
Clearly, this channel opens once the first step (photoionization) is possible, so that the intensity of emission line VI depends not only on the K-edge of charge state 5+ (K²L⁶) but also on the K-edge of the lower charge state 4+ (K²L⁷). Since multiple L-shell collisions can occur, albeit with decreasing probability, the intensity of a specific emission line will generally depend on the K-edge of all lower charge states. The ionic configuration K¹L⁷ is short-lived, with a lifetime determined primarily by the Auger recombination into K²L⁵, which takes place within about 2 fs. The amount of K¹L⁶ created in these conditions via the process described in equation (1) is then given through the competition between collisional and Auger processes, both of which compete to destroy state K¹L⁷. Importantly, this means that, provided the emission intensities are a good map of the charge state populations they correspond to, and provided we can neglect higher-order collisional effects, the ratio of neighbouring emission lines pumped at an X-ray photon energy that lies between the K-edges of the two states can yield directly the ratio between the collisional ionization rate and the rate of Auger decay for a specific charge state. Since the Auger rates are an atomic property of the ion, such a measurement provides a straightforward method to extract collision rates in a dense, strongly coupled plasma. We plot in Fig. 3 the intensity of the emission lines IV through VIII as a function of the pump X-ray photon energy. The emission is normalized to the main Kα line and is offset for clarity. While each of these charge states has an observable K-edge determined by the minimum photon energy necessary to photoionize a 1s electron in an Al ion of given atomic configuration, we further observe that higher charge states display multiple such edges, that is, thresholds for the emission intensity at photon energies well below that of the K-edge.
These higher-order edges appear at the photon energies of the K-edges of lower charge states, and are driven by collisional ionization of the L-shell electrons, via the chain process described in equation (1). For example, the experimental emission from line VII clearly shows three distinct thresholds at the K-edge energies of charge states 4+, 5+ and 6+.
Ionization potential depression. An accurate modelling of the evolution of the system, as well as that of collisional ionization processes, requires the positions of the K-edges to be calculated correctly for a range of charge states and plasma conditions. This is because the K-edges need to be known for the spectroscopic model here described to be applicable, but more importantly, because the IPD determines the energy required to ionize bound electrons, and also affects the collision rates via determining the free-electron temperature and density distribution. As previously reported in ref. 25, the mEK IPD model, which we also employ here, is capable of reproducing the measured experimental K-edges with good accuracy. More precisely, we see from Fig. 3 that for lines IV and V the IPD is overestimated by at most a few eV (the position of the edge in the simulations is lower than that of the experiment), while for lines VI and above the IPD is underestimated by at most 5-10 eV. We note that other commonly used IPD models, such as that of Stewart and Pyatt 31 or those based on the ion-sphere model 32, are less successful in this regard; further calculations have been conducted, the results of which are in better agreement with the trend of both the mEK model and the experimental edges as depicted in Fig. 3 (refs 33-35).
Discussion
The effect of collisional ionization can best be observed from the step-like features of the emission intensity of the various emission lines, as shown in Fig. 3. In particular, the ratio between intensities of neighbouring lines near absorption edges is overwhelmingly dominated by collisional ionization within the time window of Auger decay. A higher emission intensity above the K-edge of a certain charge state in the calculations compared with the experiment is, therefore, indicative of too few collisions taking place (insufficient depopulation of the photo-pumped charge state), while for the below-edge case the same is true if the simulations yield a lower emission intensity than in the experiment (insufficient collisional population of a charge state that cannot be photo-pumped directly). An interesting consequence of this is that observing the emission above and below the absorption edge of a single line can be sufficient to estimate the rate of collisions taking place, and, in particular, whether or not this process is accurately reproduced by the theoretical models used in collisional-radiative calculations. This effect is clearly shown by the different calculations presented in Fig. 3. We deduce from Fig. 3 that the standard model for collisional ionization yields rates that are consistently too small to explain the observed spectral emission for all the emission lines investigated (V-VIII). To evaluate the amount by which this process is underestimated, we have performed calculations for a range of scaled collision rates, where all the collisional ionization rates, as well as their inverse processes (or, equivalently, cross-sections), are multiplied by a factor of 0.2, 1, 3 or 5. All other parameters in the simulations are held constant. The results are shown by the various simulation lines in Fig.
3, from which we see that the collision rates are underestimated by as much as a factor of 3 or more over the range of conditions investigated. We note that the changes in the collisional rates have no influence on the positions of the Al K-edges. Despite the spectrum being emitted from a plasma which is inhomogeneously heated, both spatially and in time, we find that the individual lines that emit because of the collisional ionization of an Al ion with a K-shell hole do so in surprisingly well-defined plasma conditions. The reason for this lies in the high collisionality of the system, which is very quickly driven towards LTE. To see this, we plot in Fig. 4a the distribution of emission from the various intensities from within the focal spot, which contribute to the total emission from line VI when the sample is pumped at 1,590 eV, a process described in detail in the previous section. Unsurprisingly, low intensities from the fringes of the focal spot contribute little to the emission of the line, as an insufficient number of ions in the emitting charge states are created. Similarly, the highest intensities reached at the focus are also seen to contribute negligibly. Here copious amounts of the right charge states can be created; however, the fraction of the sample that interacts with the peak of the X-ray pulse is also very small, so that this emission is strongly overshadowed by the emission at intermediate intensities, which are representative of a much larger portion of the emitting plasma. This is why the intensity weighting distribution, shown in Fig. 4, is seen to be peaked at intermediate intensities. For each value of the intensity we plot the free-electron temperature and density of the plasma in the conditions of peak emission of the line of interest during the pulse. While the various X-ray intensities heat regions of the sample to very different final temperatures and ionizations, we observe that the peak of the emission tends to always occur in very similar conditions, irrespective of the intensity. Hence, the parts of the sample that interact with high X-ray intensities will emit collisional lines VI (VII), shown in Fig. 4a,b, at earlier times, while those at low intensities will emit later, towards the end of the pulse. By weighting the electron temperatures and densities over all intensities via this distribution, we can extract the range of conditions in which the line was emitted across the whole plasma, that is, where the predominant amount of collisional ionization took place. For an X-ray pump photon energy of 1,590 eV, we find T = (27 ± 4) eV and n_e = (2.5 ± 0.2) × 10²³ cm⁻³ for line VI. Similarly, the emission from line VII, when the plasma is pumped at 1,630 eV (driven by the collisional process K¹L⁶ + collision → K¹L⁵ + e⁻), occurs at T = (35 ± 4) eV and n_e = (2.9 ± 0.2) × 10²³ cm⁻³. Our collisional-radiative calculations assume instant thermalization of the Auger and photoelectrons to provide the instantaneous free-electron temperature and density. In reality, however, although they thermalize quickly, the Auger electrons, and to a much lesser extent the photoelectrons, will be nonthermal for a short period of time, during which their effect on the collisional ionization process will be different to that of the thermal free electrons.
A large relative population of energetic Auger electrons could then skew the interpretation of the experimental results, provided their collisional cross-section was significantly different to that of the cooler, thermal electrons, because collisional ionization from the Auger electrons is not included separately in our calculations. To estimate the size of this effect, we have separated the Auger electrons from the thermal electrons in the simulation and thus relaxed the assumption of instant thermalization, so that now, in order to thermalize, an Auger electron must either collisionally ionize an ion in the system or collide with the thermal electrons. This ionization is tracked alongside that of the thermal electrons and is plotted in Fig. 5 as a function of time during the FEL pulse, at a photon energy of 1,630 eV. Importantly, we find that the population of each charge state peaks at a time when the collisional ionization due to the thermal electrons is over an order of magnitude larger than that due to the Auger electrons. Hence, we conclude that the instant thermalization assumption is overall valid in these conditions, and that the error introduced in the evaluation of the collision rates is not significant. Recently, more advanced calculations of the relaxation dynamics of the free-electron distribution in an FEL-driven system using a Fokker-Planck approach were reported in ref. 36. At the highest densities at which they performed their calculations, of 10²² atoms cm⁻³, the authors report a persistent presence of nonthermal Auger and photoelectrons during the entire duration of the pulse. However, because of the high density of the system the populations of these high-energy electrons were seen to be very small compared with that of the thermal electron distribution, for all but the very beginning of the pulse. This observation is consistent with our simpler calculations. Moreover, our results are for solid-density Al, which has a six-times higher atomic density, for which thermalization times should be even shorter. Perhaps surprisingly, the effect of nonthermal electrons in the distribution was also seen to initially lower the observed rates of impact ionization 36. The reason for this is that in the instant thermalization approximation, all the absorbed energy that does not go into ionization contributes directly to the temperature, whereas for nonthermal distributions some of that energy is syphoned off to the high-energy electrons, leading to a cooler thermal electron component. Hence, were these effects significant, one would expect to overestimate the experimental collisional ionization rates in the calculation rather than underestimate them, as is the case for the results reported here. In conclusion, we have described a spectroscopic method to extract femtosecond collisional ionization rates in dense plasmas using the unique characteristics of an X-ray FEL, namely its high intensity, narrow bandwidth and wavelength tunability. By comparing emission intensities of lines produced from neighbouring charge states, one of which is photo-pumped above its K-edge, the other below, the collisional ionization rate can be clocked to the Auger decay rate of a core hole, providing, in principle, a plasma-model-independent reference window for the collisional event.
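To make the "clocking" idea concrete, the sketch below turns a ratio of neighbouring line intensities into a collisional ionization rate using the ~2 fs Auger lifetime quoted earlier; treating the line ratio as directly equal to the rate ratio is an idealization (it ignores opacity and higher-order collisions, which is why the paper relies on full collisional-radiative modelling), and the numerical ratio used here is purely illustrative.

```python
# Sketch: estimating a collisional ionization rate by "clocking" it against Auger decay.
# Idealization: intensity of the collisionally fed line divided by its photo-pumped
# neighbour ~ (collisional ionization rate) / (Auger decay rate).

tau_auger_fs = 2.0                      # Auger lifetime of the K-shell core hole (~2 fs, from the text)
rate_auger_per_fs = 1.0 / tau_auger_fs  # ~0.5 fs^-1

line_ratio = 0.3                        # illustrative I(line VI) / I(line V), not a measured value

rate_collisional_per_fs = line_ratio * rate_auger_per_fs
print(f"inferred collisional ionization rate ~ {rate_collisional_per_fs:.2f} fs^-1 "
      f"({rate_collisional_per_fs * 1e15:.1e} s^-1)")
```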
Recent experimental emission measurements on solid-density Al plasmas at LCLS show that the emission from charge states pumped below their respective K-edge is systematically underestimated by our collisional-radiative calculations in the regions where the measurements are sensitive to collisional effects. In the present measurements, opacity effects prevent us from extracting the collisional rate directly from the emission spectrum; therefore, we have conducted a full non-local thermodynamic equilibrium spectral calculation, including opacity via the escape factor formalism. We observe that the measurement is consistent with the collisional ionization rate used in the modelling being too small, and we are able to match the experimental results extremely well, with the exception of small discrepancies in the modelling of the IPD on the order of 5-10 eV, provided we assume the collisional ionization rates are larger by a factor between three and five. While nonthermal collisional effects, primarily driven by energetic Auger electrons, are an important contributing factor to the collisionality of the system at early times in the plasma evolution, our current calculations indicate that their effect should be small compared with the much larger thermal electron distribution at the times when most of the relevant emission intensities are produced. Regardless, we wish to stress that more detailed investigations, aimed at better quantifying the effect of nonthermal electron distributions on transition rates on ultrashort timescales, are certainly necessary in the context of studying collisional dynamics on X-ray FELs. Finally, we calculate that the plasma conditions in which the spectrum is sensitive to collisional effects have very well-defined electron temperatures and densities, despite the time- and space-integrated nature of the experiment. This is primarily because of the strong collisionality driving the system rapidly towards LTE, so that charge state populations, and hence peak emissions, are closely linked to the plasma temperature. Hence, this technique affords, for the first time, the possibility to measure charge-state-specific collisional ionization rates in well-defined conditions in dense, strongly coupled plasmas.
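As a sketch of the intensity-weighted averaging used above to assign well-defined conditions to each collisionally fed line, the snippet below weights per-bin electron temperatures and densities by an emission-contribution distribution over focal-spot intensities, in the spirit of the 30-bin treatment described earlier; all of the bin values are illustrative placeholders, not the simulation output.

```python
import numpy as np

# Sketch: weighting per-intensity-bin plasma conditions by how much each bin
# contributes to a given emission line. Placeholder numbers for illustration only.

emission_weight = np.array([0.05, 0.20, 0.40, 0.25, 0.10])  # contribution of each intensity bin
T_eV  = np.array([18.0, 24.0, 28.0, 31.0, 35.0])            # electron temperature at peak emission
ne_cc = np.array([2.2e23, 2.4e23, 2.5e23, 2.6e23, 2.8e23])  # electron density at peak emission

w = emission_weight / emission_weight.sum()
T_mean  = np.sum(w * T_eV)
T_std   = np.sqrt(np.sum(w * (T_eV - T_mean) ** 2))
ne_mean = np.sum(w * ne_cc)

print(f"weighted T_e = {T_mean:.0f} +/- {T_std:.0f} eV")
print(f"weighted n_e = {ne_mean:.2e} cm^-3")
```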
6,664
2015-03-03T00:00:00.000
[ "Physics" ]
EGCG-S Impacts Oxidative Stress and Infection of Enterovirus 69 in Lung Cells Enteroviruses are responsible for emerging diseases which cause diverse symptoms and may result in neurological complications. An antiviral with multiple mechanisms of action can help prevent enterovirus mediated disease despite differences in the pathogenesis between enteroviruses, including the recently identified enterovirus 69 (EV-69) for which pathogenesis is not well understood. This study investigated the efficacy of epigallocatechin-3-gallate stearate (EGCG-S), a modified form of the antioxidant green tea catechin ep-igallocatechin-3-gallate (EGCG), in inhibiting EV-69 infection of lung fibroblast cells in vitro. Treatment with EGCG-S resulted in moderate protection from EV-69 mediated cytotoxicity as demonstrated by increased metabolic activity as well as maintenance of cell morphology and mitochondrial function. These effects were correlated with reduced hydrogen peroxide production in infected cells following EGCG-S treatment with concentrations less than 100 µM, suggesting a role for inhibition of EV-69 mediated oxidative stress. This study provides insight into characteristics of EV-69 infection as well as the efficacy of EGCG-S mediated inhibition of EV-69 infection. Introduction Enteroviruses comprise a large and diverse genus of non-enveloped RNA viruses in the Picornaviridae family. Members of this genus are grouped into fifteen species all of which are associated with a wide variety of diseases due to their tropism encompassing gastrointestinal, respiratory, and neuronal cells [1]. This allows enterovirus infections affecting the gastrointestinal or respiratory tract to also cause neuropathogenesis and fatal conditions such as meningitis [2] [3]. The rapid mutation rate of enteroviruses allows for emergence of strains capable of causing severe disease and even within the same species of enteroviruses pathogenesis can greatly vary. The Human enterovirus D species, for example, includes enterovirus 68 (EV-68) which causes respiratory disease and acute flaccid paralysis while another serotype, EV-70, is primarily associated with acute hemorrhagic conjunctivitis [4] [5]. These serotypes are two of four enteroviruses most recently isolated, along with EV-69 and EV-71. EV-71 causes hand, foot, and mouth disease [6]. Both EV-68 and EV-70 along with EV-71, have been associated with severe neurological disease [7] [8] [9]. Much about the pathogenesis of EV-69, however, is still not known since its first reported isolation in the U.S. in 1959 [10]. This type was originally associated with respiratory illness similar to EV-68 [11] but in recent years has been isolated worldwide from patients presenting with acute flaccid paralysis and encephalitis [12]- [18]. While some common features of the enterovirus infectious cycle likely apply to EV-69, there is a lack of knowledge on the tissue tropism of this enterovirus serotype. This and other emerging enteroviruses may continue to cause outbreaks and disease associated mortality, necessitating novel antivirals that can alleviate infection regardless of the enterovirus type. Such antivirals would thus ideally be able to simultaneously target multiple stages of the infectious cycle to account for differences, such as different host receptor utilization, between enteroviruses. Here, we investigate the modulation of oxidative stress and the therapeutic properties of a compound derived from the Camellia sinensis plant, known as epigallocatechin-3-gallate (EGCG). 
EGCG is a green tea polyphenol which has been applied to numerous infectious disease models for the inhibition of microbial growth and viral infectivity [19] [20] [21]. The antiviral effect of EGCG has been reported for both enveloped and non-enveloped viruses, including the enteroviruses Coxsackie B3 and EV-71 [21]-[26]. Additionally, it has been demonstrated to be well tolerated in vivo and has even been utilized for alleviating neurotoxicity associated with viral proteins in mice [20] [27]. While the mechanism behind EGCG-mediated antiviral effects has often been proposed to be interference with virus attachment to host cells [28], some studies have found that it is the antioxidant property and ability of EGCG to modulate cellular redox that is associated with inhibition of infection [26] [29] [30] [31]. The ability of EGCG to inhibit viral infection in various ways shows promise for the development of an antiviral agent that can be applicable to emerging viruses for which pathogenesis is still unknown. Furthermore, modifications have been made to the structure of EGCG to enhance its stability and bioavailability [23] [32]. Such modifications include palmitoylation of EGCG (pEGCG), which resulted in more effective inhibition of HSV-1 adsorption and infection in Vero cells than did EGCG [23]. Furthermore, addition of stearic acid to EGCG to make epigallocatechin-3-gallate-stearate (EGCG-S) is another modification that enhanced stability and was also shown to inhibit HSV-1 infection in A549 cells without causing cytotoxicity at up to 75 μM, similar to EGCG [33]. In this study, we investigate the cytoprotective effects of EGCG-S on infected cells and evaluate the efficacy of EGCG-S in inhibiting infection of EV-69 in MRC-5 cells and A549 cells. MRC-5 lung fibroblast and A549 lung epithelial cells are appropriate models for in vitro study because EV-69 is associated with atypical respiratory illness.
Materials and Methods
Cell Culture Maintenance
The adherent human lung epithelial A549 cells (CCL-185) and the fetal lung fibroblast IRR-MRC-5 cells (ATCC 55-X) (American Type Culture Collection (ATCC), Manassas, VA) were maintained in T25 flasks at 37 °C in a 5% CO2 incubator. The A549 cell line and MRC-5 cell line were propagated in F12K and MEM media (Gibco, ThermoFisher Scientific), respectively, both of which were supplemented with 10% Fetal Bovine Serum and 1% gentamicin.
Virus Cytotoxicity Study of Treatment of A549 and MRC-5 Cells with EGCG-S
Enteroviruses are able to infect a variety of cells, which include cells of the respiratory tract, and can be found in respiratory secretions [34] [35]. To understand the potential effect of EGCG-S treatment on EV-69 infection in cells of the respiratory tract, A549 lung epithelial cells and MRC-5 lung fibroblast cells, which were previously demonstrated to be highly susceptible to enterovirus infection, were used [36]. Cytotoxicity to EGCG-S was assayed at a concentration range of 25-100 μM. Since EGCG-S was dissolved in DMSO, the effect of DMSO alone on cells was evaluated. Microscopy analysis showed that cells were not affected by the DMSO vehicle control even when treated at concentrations greater than 0.5% (the highest final concentration of DMSO used in the study). This correlated well with a previously reported study [37]. The cytotoxicity of EGCG-S on cells is dose dependent. EGCG-S was safely applied to cultured A549 cells [33] and cultured Vero cells (unpublished data) up to 75 μM. The results indicated that EGCG-S is non-cytotoxic to A549 and MRC-5 cells.
Proliferation Assay of A549 and MRC-5 Cells Infected with EV-69 and EGCG-S-Treated EV-69
The ability of EGCG-S to protect against EV-69-mediated cytotoxicity was further investigated by measuring viability with respect to metabolically active cells as well as ATP levels as an indicator of normal mitochondrial function. The percent inhibition of infection was determined based on these viabilities as described in the Methods section. An increased viability of cells with treated EV-69 as compared to the untreated EV-69 was observed (Figure 2(a)). The highest inhibition of infection observed was 47% after 75 µM treatment for A549 cells (Figure 2(b)). The percent inhibition of infection was overall lower for MRC-5 cells at the same EGCG-S concentrations tested for A549 cells, at most being 24.5% after 50 µM treatment (Figure 2(b)). Altogether, this demonstrates that EGCG-S treatment has some efficacy in inhibiting EV-69 infection but highlights the differences in efficacy of EGCG-S in vitro that can be expected between different cell types.
(Figure 2 caption) The % inhibition calculated based on these viabilities was lower than 50% for both cell lines (b). Data are presented as mean ± SEM. Statistical analysis was performed for the viability assay using a one-way ANOVA with Dunnett's post-hoc test comparing results to the untreated EV-69-infected control; *p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0.0001, ns = non-significant.
Viral ToxGlo ATP Detection Assay
To gain insight into the metabolic activity of the MRC-5 cells, for which EGCG-S-mediated inhibition of infection was not prominent, we also assayed ATP levels as an indicator of mitochondrial function and cell viability. Treatment with EGCG-S alone did not negatively impact ATP production in these cells, as measured through the ToxGlo ATP detection assay (Figure 3(a)). Treatment of MRC-5 with EGCG-S did not influence ATP production; therefore, treatment with EGCG-S had no negative impact on cell viability. There is no statistical difference between untreated MRC-5 cells and cells treated with EGCG-S up to 100 µM (Figure 3(a)).
(Figure 3 caption) Data are presented as mean ± SD from one experiment (5 replicates); *p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0.0001, ns = non-significant.
However, the limited efficacy of EGCG-S in inhibiting infection was further observed when examining the ATP levels of infected MRC-5 cells. Intracellular ATP for these cells was increased with EGCG-S treatment but was still substantially lower than the uninfected control (Figure 3(b)). This suggests that EV-69 infection still greatly compromised MRC-5 cells, and EGCG-S treatment only moderately protects MRC-5 cells from the mitochondrial dysfunction as well as the loss of metabolic activity induced by EV-69 infection.
ROS Detection Assay
Treatment with EGCG-S has thus far been demonstrated to reduce mitochondrial dysfunction and overall cytotoxicity caused by EV-69 infection. These effects are known to occur in enterovirus infection as a result of oxidative stress [38] [39] [40]. While MRC-5 cells are highly susceptible to EV-69-mediated infection, EGCG-S treatment moderately increased cell viability in comparison to untreated controls (Figure 2(a) and Figure 2(b)). EGCG has been reported to modulate cellular redox, which is associated with inhibition of infection [26] [29] [30] [31]. Therefore, this study investigated the antioxidant potential of EGCG-S in inhibiting EV-69 infection.
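Before turning to the ROS measurements, a brief note on the percent inhibition figures quoted above: the defining formula lives in the Methods section and is not reproduced here, so the sketch below shows only one plausible way such a value could be computed from viability readings. Both the formula and the example numbers are assumptions for illustration, not the paper's actual calculation or data.

```python
# Sketch: one plausible "% inhibition of infection" calculation from viability readings.
# Assumption: viability of cells infected with EGCG-S-treated virus is scaled between
# the untreated-virus control (0% inhibition) and the uninfected control (100%).

def percent_inhibition(v_treated: float, v_infected: float, v_uninfected: float) -> float:
    return 100.0 * (v_treated - v_infected) / (v_uninfected - v_infected)

# Illustrative viabilities (% of uninfected control), not data from the study:
print(f"{percent_inhibition(v_treated=60.0, v_infected=25.0, v_uninfected=100.0):.1f} % inhibition")
```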
To assess whether the protective effects of EGCG-S observed may be due to its antioxidant potential, the total intracellular reactive oxygen species (ROS) levels in infected cells were measured.
Microscopic Observation of EV-69 Infected MRC-5 Cells and EGCG-S Treated Infected MRC-5 Cells
Infection with EV-69 causes a decrease in viability, characterized by more than 90% of cells exhibiting rounding cytopathic effects or by the presence of cellular debris as a result of prominent cell lysis at 48 hrs post infection (hpi). These cytopathic effects (CPE) were investigated in MRC-5 cells infected with untreated EV-69 or with EV-69 that was pretreated for 1 hr with EGCG-S (25 to 100 µM) before infecting cells. As a result of EGCG-S treatment of EV-69, more cell proliferation was observed for MRC-5 cells, particularly at 50 and 75 µM, than for the untreated EV-69 infected control (Figure 5). Cytotoxicity became apparent at 100 µM for MRC-5 cells, however, as demonstrated by increased rounding and debris relative to cells infected with untreated EV-69 (Figure 5). These morphological changes may be a result of both viral infection and toxicity of EGCG-S. Thus, microscopic observation indicated that EGCG-S reduces EV-69-induced cytopathic effects. Altogether, this supports that EGCG-S can prevent EV-69-mediated damage to cells, but at concentrations lower than 100 μM.
Discussion
The benefits of EGCG-S treatment in viral infections with greatly differing clinical manifestations have been extensively investigated. In this study, we assess the antiviral effect of EGCG-S on EV-69, an enterovirus that we observed to cause productive infection in respiratory epithelial and fibroblast cells. Following treatment with EGCG-S, MRC-5 cells had reduced EV-69-mediated cytotoxicity in comparison to the untreated virus control, albeit with a low percentage of inhibition of infection. In contrast, other studies reported EGCG-S-mediated inhibition of infection to be much higher for HSV-1-infected Vero cells and A549 cells [23] [33]. This difference in efficacy of EGCG-S observed in our study may be attributed to structural differences in the viruses treated, as this modified form of EGCG is postulated to be more efficacious than unmodified EGCG due to having better affinity for the viral envelope, and EV-69 is a non-enveloped virus [23]. Furthermore, it is possible that the modes of action of EGCG-S against cells infected with RNA versus DNA viruses also differ. Moreover, we observed that levels of hydrogen peroxide were reduced in MRC-5 cells infected with EGCG-S-treated virus at concentrations lower than 100 µM, suggesting that EV-69-mediated oxidative stress was inhibited. Determining the causes of the moderate antiviral activity observed in this study requires an understanding of both EV-69 pathogenesis and the full range of mechanisms by which EGCG-S inhibits viral infections. The proposed antiviral mechanism(s) of EGCG have differed among studies. Researchers demonstrated that the effect is likely due to EGCG competing with sialic acid- or heparan sulfate-containing host cell receptors for viral attachment. This finding is supported by the reduced antiviral efficacy against poliovirus, which does not depend on sialic acid- or heparan sulfate-containing receptors for entry [21]. However, a different study attributed EGCG-mediated inhibition of EV-71 replication to a reduction of EV-71-induced oxidative stress [26]. Our study demonstrated that hydrogen peroxide was reduced with EGCG-S treatment.
EV-69 was treated with EGCG-S prior to infection, which would allow attachment of EGCG-S to the virus. Thus, it is possible that inhibition of EV-69 infection may have occurred through both modulation of cellular redox and prevention of virus attachment to cells. Subsequent research in our laboratory found that EGCG-S inhibited attachment and penetration of HSV-2 in cultured Vero cells (unpublished data) [47]. Our study suggests that EGCG-S reduced the oxidative damage caused by EV-69 infection and increased the viability of infected cells, thus potentially reducing viral replication. While EGCG was shown to reduce viral replication in previous studies, contrasting effects on the inflammatory response associated with viral infections were reported. For influenza A infection, antiviral effects were associated with reduced lung inflammation, in agreement with studies reporting the overall anti-inflammatory effect of EGCG [48] [49] [50]. On the other hand, for Coxsackie B3 infection, EGCG treatment was found to greatly inhibit viral replication in heart tissues of mice and the associated myocarditis, but did not decrease pro-inflammatory cytokine levels [25]. Dampening of the pro-inflammatory cytokine response may be beneficial for viral diseases that cause severe symptoms as a result of a cytokine storm, such as influenza A virus or the newly emerged SARS-CoV-2 [51], but inhibition of inflammation in other viral infections may lead to prolonged disease or persistent infection, because an adequate immune response is necessary for clearance of infection. The role of EGCG-S in inhibiting infection will thus require closer examination of the antiviral immune response, including the production of type I interferons (IFNs), in untreated versus treated infected cells. Examination of the cytokine milieu, as well as antigen-presenting cell chemotaxis and targeting of EGCG-S-treated infected cells, would help elucidate whether EGCG-S treatment facilitates clearance of enterovirus infection or enables persistent infection due to interference with the antiviral response. Furthermore, as enterovirus infections may cause severe disease progression to neurological complications, the ability of EGCG-S to serve as an antiviral in early versus later stages of infection will need to be assessed. Future studies investigating the application of EGCG-S 24 or 48 hrs following infection can help characterize the efficacy of EGCG-S in inhibiting EV-69 infection at different stages, as well as further distinguish the role of EGCG-S in virus attachment. Thus far, our data suggest that EV-69-mediated cytotoxicity can be reduced with EGCG-S application in the early stages of infection and merit further investigation into maximizing efficacy, alone or in combinatorial strategies. If applicable to enteroviral infections in vivo, EGCG-S could contribute to the prevention of disease progression and spread in areas in which enterovirus outbreaks occur.
3,568.6
2021-05-28T00:00:00.000
[ "Medicine", "Environmental Science", "Biology" ]
Mechanical and Assembly Units of Viral Capsids Identified via Quasi-Rigid Domain Decomposition Key steps in a viral life-cycle, such as self-assembly of a protective protein container or in some cases also subsequent maturation events, are governed by the interplay of physico-chemical mechanisms involving various spatial and temporal scales. These salient aspects of a viral life cycle are hence well described and rationalised from a mesoscopic perspective. Accordingly, various experimental and computational efforts have been directed towards identifying the fundamental building blocks that are instrumental for the mechanical response, or constitute the assembly units, of a few specific viral shells. Motivated by these earlier studies we introduce and apply a general and efficient computational scheme for identifying the stable domains of a given viral capsid. The method is based on elastic network models and quasi-rigid domain decomposition. It is first applied to a heterogeneous set of well-characterized viruses (CCMV, MS2, STNV, STMV) for which the known mechanical or assembly domains are correctly identified. The validated method is next applied to other viral particles such as L-A, Pariacoto and polyoma viruses, whose fundamental functional domains are still unknown or debated and for which we formulate verifiable predictions. The numerical code implementing the domain decomposition strategy is made freely available. Introduction The genomic material of many viruses is encapsidated inside icosahedral protein shells with diameters in the 20-100 nm range. The number of structurally inequivalent protein units that tessellate these capsids is usually very small [1,2]. This, in turn, is reflected in the limited repertoire of viable capsid shapes with icosahedral symmetry [3]. Understanding the organization of viral capsids at levels that are intermediate between the single protein units and the fully assembled, infectious particles is crucial to elucidate key aspects of the viral life cycle. These include the molecular basis of capsid conformational changes, such as swelling or maturation events [4], as well as the assemby/disassembly of virion particles [5][6][7][8]. Both these processes, in fact, are best characterised and rationalised in terms of the typically multimeric protein units [9] that behave as approximately rigid units in the capsid's conformational mechanics or act as basic assembly/disassembly units. These approaches have proved extremely valuable to gain insight into various mechanisms controlling the physico-chemical behaviour of few specific viruses [10][11][12][14][15][16][22][23][24]. For instance, nano-indentation experiments, where viral particles are subject to mechanical stress and fatigue by atomic force microscopy, have singled out the mechanical building blocks of viral capsids and elucidated the mechanisms of genome uncoating [25]. However, the systematic application of these techniques has been hindered either by the difficulty of transferring the methodologies across different virus types or by their severe experimental/computational demands. As a step towards developing a general scheme for identifying functional and structural units in viral shells, here we introduce and apply a novel and efficient computational strategy that can single out capsid domains that, according to various criteria, are expected to be mechanically stable. The method consists of a decomposition of the capsid into quasi-rigid units based on a suitable analysis of its internal dynamics. 
In accord with the mesoscopic spirit of the approach, the sought internal dynamics can be efficiently obtained from elastic network approaches, in place of computationally-demanding molecular dynamics simulations. The variational decomposition strategy is applied to several viruses covering a wide range of sizes and capsid classes, from T = 1 to pT = 7. For validation purposes, the set includes several well-characterised instances: the cowpea chlorotic mottle virus (CCMV), the MS2 virus, the satellite tobacco necrosis virus (SNTV) and satellite tobacco mosaic virus (STMV). The units obtained from the decomposition are in excellent agreement with known basic blocks of the assembly/disassembly process or of the structural transitions. These successful comparisons give confidence in the viability of the strategy for identifying putative functional units of viral capsids. This suggests that the method could be profitably used for interpreting viral assembly, disassembly and genome uncoating experiments or as a predictive tool. Towards this latter goal, we conclude the present study by formulating predictions for a number of viruses whose capsid structure is available but whose functional units are still unknown, or debated. This prediction set includes the L-A (pT = 2), Pariacoto (T = 3) and polyoma viruses (pT = 7). The decomposition algorithm, which is formulated in a general and hence transferable way, is made freely available for academic use at the link: http://people.sissa.it/,michelet/vircapdomains. Results/Discussion The main objective of this study is to investigate whether, and if so how, a suitable analysis of quasi-rigid domains of fullyassembled viral shells can identify the functional units of a capsid. With this term we refer to those protein domains that are either: (i) the basic, undeformable building blocks (capsomeres) that can be used to describe the structural transitions of a capsid or (ii) its fundamental assembly/disassembly blocks. Although, for brevity, these two unit types are collectively referred to as ''functional units'', their clear distinction must be borne in mind [9,15,16,26]. The quasi-rigid decomposition approach is motivated by the observation that the large-scale internal dynamics of proteins, or protein assemblies, is often well-described by the relative rigid-like motion (rotations and translations) of a limited number of subdomains [27][28][29][30][31][32][33][34]. Based on this observation and building on the successful multiscale or coarse-grained simulations of viral shells modeled as assemblies of rigid tiles [15,[35][36][37], one can expect that protein capsids can be viably decomposed into quasi-rigid domains. Because of their intrinsic mechanical stability, these quasi-rigid protein units are expected to be functionally relevant. We accordingly performed quasi-rigid domain decompositions of several viral capsids for which the atomic structural data is publicly available [38], namely CCMV, MS2, STNV, STMV, as well as L-A, Pariacoto and polyoma virus. The whole set covers various capsid geometries, namely T = 1, pT = 2, T = 3, and pT = 7, and spans a wide range of sizes, from the 60 proteins of STMV (with a total of 8820 amino acids) to the 360 ones of polyoma virus (totaling 129060 amino acids). The decomposition algorithm is detailed in the Methods section and is briefly outlined here in order to convey the salient methodological steps, with their advantages and limitations. 
Our analysis, which follows the approach of [33,34,39], involves the three main steps summarised in the flow chart of Fig. 1 and briefly discussed hereafter. 1. Calculation of structural fluctuations via ENM. The first step consists of characterizing a capsid's internal dynamics using an elastic network model (ENM). As detailed in the Methods section, these models are based on a quadratic approximation of the free energy landscape which, by construction, has its minimum in correspondence of the reference crystal structure of the molecule [28,[40][41][42][43][44][45]. The viability of these models to capture the large-scale, low-energy structural fluctuations of equilibrated proteins and protein complexes has been demonstrated in several contexts by successful comparison with experimental data [22,46,47] and atomistic molecular dynamics simulations. The latter include instances where ENMs were applied to viral capsids [7,8]. In fact, because of the major challenges posed by studying even small viral particles using atomistic molecular dynamics simulations [15,16], several studies have previously relied on the use of ENMs to characterise the internal dynamics of several capsids [5][6][7][8][48]. It is important to recall that in all cases, ENMs were applied to the empty protein shells. Notice that the latter may not necessarily be stable on their own in vivo [22,47] (or in silico when realistic force fields are used [15,16,[49][50][51]). Yet, their consideration in ENM contexts appears justifiable because the stability of the empty capsid is guaranteed by construction and hence can effectively make up for the stabilizing interactions of coat-proteins and packaged nucleic acids (typically non-resolved in available crystal structures). 2. Exploration of capsid subdivisions into putative quasi-rigid domains. Second, the ENM-based structural fluctuations are analysed to identify the putative quasi-rigid domains. Specifically, the capsid is subdivided into nonoverlapping groups of amino acids whose internal pairwise distances have negligible fluctuations compared to the overall capsid motion. Because the optimal, "innate" number of quasi-rigid units is not known a priori, we consider all possible capsid subdivisions into Q = 2, 3, ... domains. For each value of Q, the possible amino acid partitions into Q distinct groups are explored and the one which minimizes the intra-group geometric strain is identified. We note that the exploration of the combinatorial space of the possible amino acid grouping is done stochastically in a completely unsupervised manner. In particular, the groups are not constrained a priori to be uninterrupted in sequence or compact in space, nor to coincide with entire proteins. As detailed in the Methods section, the quasi-rigid character of the returned subdivision can be assessed by considering the relative weight of the two independent contributions to the overall capsid motion coming from: (i) the rigid-like relative movement of the putative quasi-rigid domains, which consists of relative rotations and translations, and (ii) the internal structural fluctuations of the groups. Clearly, for genuine quasi-rigid decompositions the rigid-like movements of the domains ought to capture a substantial fraction of the overall capsid motion. 3. Selection of an optimal subdivision into basic mechanical units. Finally, several order parameters are examined to identify the most plausible subdivision of the capsid into mechanically-stable units. In principle, the optimal subdivision could be identified by examining how the internal strain of the putative quasi-rigid domains decreases with Q. However, because such decrease is usually gradual, it is more appropriate to identify the natural quasi-rigid partition by considering a few general properties that can more sensitively discriminate between functionally viable and non-viable subdivisions. Arguably, a minimal set of desiderata for the optimal, basic mechanical units is that: (i) they should preserve the structural integrity of proteins (or protein domains), (ii) it should be possible to group them into only few structurally inequivalent types, (iii) they cannot be further partitioned into smaller units that meet the two previous criteria. Accordingly, among the strain-minimizing subdivisions for varying number of domains Q we shall pick the one which best satisfies criteria (i) and (ii), and has the smallest units, i.e. the largest Q.
Author Summary. The genetic material of viruses is packaged inside capsids constituted from a few tens to thousands of proteins. The latter can organize in multimers that serve as fundamental blocks for the viral shell assembly or that control the capsid conformational transitions and response to mechanical stress. In this work, we introduce and apply a computational scheme that identifies the fundamental protein blocks from the structural fluctuations of the capsids in thermal equilibrium. These can be derived from phenomenological elastic network models with minimal computational expenditure. Accordingly, the basic functional protein units of a capsid can be obtained from the sole input of the capsid crystal structure. The method is applied to a heterogeneous set of viruses of various size and geometries. These include well-characterised instances for validation purposes, as well as debated ones for which predictions are formulated.
CCMV. CCMV is an icosahedral RNA plant virus whose capsid is constituted of 180 chemically identical protein subunits assembled in the shape of a truncated icosahedron with T = 3 geometry. The protein units adopt three different, quasi-equivalent conformations, conventionally denoted as A, B and C [26,53]. As shown in Fig. 2, the A proteins are organised in groups of five around the five-fold symmetry axes, whereas the B and C proteins cluster alternately in groups of six around the three-fold axes. The pentamers and hexamers are stabilised by the interactions between the N-terminal arms of the constituent subunits. These intracapsomere interactions are complemented by inter-capsomere ones resulting from the mutual interlocking of the C-terminal arms and the β-barrel of neighbouring protein pairs in different capsomeres [26]. According to various experiments these dimers correspond to the capsid assembly blocks for the virion [26,54]. In the fully-assembled shell the dimeric units involve A/B and C/C pairs in a 2:1 ratio. It should be noted that for A/B and C/C dimers the relative positioning of the subunits (specifically their canting angle) is different. Indeed, the subunit interlocking provides a flexible hinge that, in response to suitable environmental conditions, allows the virion to expand [59]. This fact aptly clarifies that the assembly/disassembly units are not necessarily expected to have sufficient rigidity to become the fundamental mechanically-stable units in the assembled capsid [62]. Indeed, for CCMV various studies consensually indicate that these mechanical units correspond to the pentameric and hexameric capsomeres [26,46,53,54].
This conclusion can be drawn by considering the details of both the expansion process and the capsid's response to nano-indentation. In fact, during the expansion produced by the hinge-motion of the dimers, the pentameric and hexameric capsomeres rotate about their axis maintaining an internal quasi-rigid character [61]. In accord with this result, recent coarse-grained simulations of CCMV nanoindentation have demonstrated that mechanical failure occurs along the seams that bridge hexamers and pentamers, which remain largely undeformed by the application of mechanical stress [12,14]. The above-mentioned phenomenology provides a clear context for benchmarking the proposed strategy for identifying mechanical units in viral capsids. Specifically, for CCMV it ought to return hexamers and pentamers, and not the dimers, as the primary quasi-rigid blocks. We started by characterising the internal dynamics of CCMV by computing its collective low-energy modes of structural fluctuations and used the data to partition the capsid into a number of putative quasi-rigid units, Q, ranging from 2 up to 180 (the latter corresponding to the number of capsid proteins). The value of Q corresponding to the most plausible subdivision into functional units was found by assessing their compliance with the aforementioned desiderata: the preservation of protein structural integrity and the small number of structurally-inequivalent domain types. To this purpose we computed and analysed the order parameters shown in Fig. 2. We start by discussing box B, which reports the profile of the protein integrity order parameter as a function of the number of imposed quasi-rigid domains, Q. The integrity parameter is evaluated by first computing for each protein the largest percentage of its amino acids that are assigned to the same quasi-rigid block and next averaging this fraction over all proteins. Accordingly, an integrity score of 0.8 implies that, on average, 80% of the amino acids of any protein are in the same quasi-rigid block. We point out that measuring the integrity score at the level of entire proteins is appropriate for CCMV (and the other considered viruses too) because of the structural compactness of its constituent proteins. When the latter comprise two or more structural domains, the score can be straightforwardly generalised to capture the integrity of these subdomains. One such example is given by the subdivision of the Hepatitis E virus-like particle discussed in Fig. S1. It is seen from Fig. 2 that there exists only one prominent peak of protein integrity (90%) corresponding to a subdivision into Q~32 domains. The genuine quasi-rigid character of the domains is confirmed by the fact that about 85% of the capsid's mean square fluctuation results from the relative rigid-like motion of the domains, see Fig. S2. Furthermore, throughout the considered range of subdivisions, 15ƒQƒ180, the strain-minimizing partition into Q~32 domains is the only one yielding a limited number of inequivalent domains and can be readily singled out by visual inspection. Specifically, it involves only two distinct domain types, while a minimum of 5 to a maximum of 23 different types is found for all other values of Q §15. Lower values of Q, which correspond to subdivisions into very few macrodomains, are more obviously associated to both high integrity scores and few different domain types, see Fig. S3. 
The combined inspection of the integrity score and the domain types therefore provides a clearcut and non-ambiguous indication of the ''innate'' character of the CCMV capsid subdivision into 32 quasi-rigid domains which in turn can be grouped into only two Each left (A) box shows the capsid structure and its asymmetric structural unit (with distinct quasi-equivalent proteins highlighted in different colors). The middle (B) box shows the order parameters used to identify and characterize the optimal quasi-rigid subdivision. The latter is marked by the red dropline. The corresponding partition into basic mechanical units is represented in the rightmost (C) box. The yellow line marks the boundary between the mechanical units which, for both capsids, come in two different types and are colored in shades of blue and red, respectively. The relationship between the mechanical units and the structurally-inequivalent proteins is illustrated at the bottom of box C. doi:10.1371/journal.pcbi.1003331.g002 structurally inequivalent types. The corresponding subdivision is shown in box C of Fig. 2, with the two domain types colored in shades of blue and red, respectively. The inspection of the subdivision shows that one domain type corresponds to pentameric units and the other to hexameric ones. There are 12 and 20 domains of each type, respectively. By considering the detailed structural representation of the two domain types, shown at the bottom of box C in Fig. 2, it is readily seen that they are, practically, an exact match of the hexameric and pentameric capsomeres described before, the only difference being that the interlocked C-terminus is assigned to the ''host'' dimeric subunit and not to the parent one. The swapping of C-termini across the hexameric and pentameric units yields an integrity score smaller than 100%. A further relevant parameter to consider for assessing the functional role of the subdivision is the degree of domain interlocking. The corresponding profile is shown in the bottom graph of box B in Fig. 2 and portrays the average number of a protein's terminal amino acids assigned to a quasi-rigid domain which is not the one containing the protein core. This parameter is monitored because several viruses, including CCMV, are assembled from protein dimers stabilised by the mutual interlocking of their termini which reach inside the partner protein core. The incidence of such interlockings across different quasi-rigid domains provides valuable clues regarding the relationship between the mechanically stable domains and capsid assembly/disassembly. In particular, the absence of cross-domain interlocking ought to be a good indicator that the mechanical domains are viable assembly/disassembly units too. The opposite should hold in case a significant amount of cross-domain interlocking is observed. It should, nevertheless, be borne in mind that cross-domain interlocking can arise after the assembly process. For the case of CCMV, we observe that the degree of interdomain interlocking for Q~32 is non-negligible and, indeed, it reflects the above mentioned dimeric swapping of the C-termini between protein subunits. From the previous considerations, this fact indicates that the quasi-rigid hexamers and pentamers do not have the correct level of internal structural independence to be viable candidates for assembly or disassembly blocks. This conclusion is indeed correct given the known role of dimers with linked domains as assembly units. 
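For concreteness, the sketch below shows how the three order parameters used in this analysis (protein integrity, the number of inequivalent domain types, and terminal interlocking) could be computed from a per-residue domain assignment. The 20-residue terminal window and the roughly 3% size tolerance follow the Methods described later in the text; the data layout, tolerance handling, and names are illustrative assumptions.

```python
# Sketch: order parameters for a candidate decomposition.
# `domain_of` maps residue index -> domain label (a dict); `protein_residues[p]`
# lists the residue indices of protein p ordered from N- to C-terminus.
from collections import Counter

def integrity_score(domain_of, protein_residues):
    """Average, over proteins, of the largest fraction of a protein's residues
    assigned to a single quasi-rigid domain."""
    scores = []
    for residues in protein_residues:
        counts = Counter(domain_of[r] for r in residues)
        scores.append(max(counts.values()) / len(residues))
    return sum(scores) / len(scores)

def n_domain_types(domain_of, tol=0.03):
    """Group domains into types by size, within ~3% (approximation of the
    'same size within ca. 3%' rule stated in the Methods)."""
    sizes = sorted(Counter(domain_of.values()).values())
    types = []
    for s in sizes:
        if types and abs(s - types[-1][-1]) <= tol * types[-1][-1]:
            types[-1].append(s)
        else:
            types.append([s])
    return len(types)

def interlocking(domain_of, protein_residues, term_len=20):
    """Average number of terminal residues assigned to a domain other than the
    protein's core domain; N- and C-termini are averaged separately and the
    larger of the two averages is reported."""
    n_sum, c_sum = 0, 0
    for residues in protein_residues:
        core = Counter(domain_of[r] for r in residues).most_common(1)[0][0]
        n_sum += sum(domain_of[r] != core for r in residues[:term_len])
        c_sum += sum(domain_of[r] != core for r in residues[-term_len:])
    n_proteins = len(protein_residues)
    return max(n_sum / n_proteins, c_sum / n_proteins)
```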
In conclusion, the emerging quasi-rigid domain subdivision matches correctly the units identified by previous experimental and numerical studies. Bacteriophage MS2. We next consider the MS2 virus, which is constituted by 180 chemically-identical coat proteins with a total of 23220 amino acids [63,64]. As for CCMV, the protein units come in three structurally-inequivalent types (conformers), labelled A, B and C in box A of Fig. 2, which form interlocked A/B and C/ C dimers and are assembled in a T = 3 capsid geometry. However, the arrangement of these units is different: the asymmetric A/B dimer occurs in two groups of 5 around the 6 five-fold axes, and the symmetric C/C dimers are positioned on both ends of the 15 two-fold axes. The results of the MS2 quasi-rigid domain subdivisions are illustrated in the upper panel of Fig. 2. The protein integrity profile shows one prominent peak corresponding to the subdivision into Q~90 quasi-rigid blocks, whose relative rigid-like motions suffice to capture about 95% of the capsid's mean square fluctuations, see Fig. S2. These quasi-rigid units come in only two inequivalent types, as illustrated in box B. Detailed inspection of the subdivision reveals that these two types occur precisely in a 2:1 ratio and correspond to the C/C and A/B dimers, which are colored in shades of blue and red, respectively, in box C. As before, this match of the mechanical domains and structural dimers must be understood with the proviso that protein integrity cannot be fully respected. In fact, amino acids at the boundary of quasi-rigid dimer domains are not necessarily assigned to their sequence-wise nominal dimer. As a result, although the whole A/B and C/C dimers would comprise exactly the same number of amino acids, the two types of quasi-rigid units are structurally diverse enough to be distinguishable by size, see box C in Fig. 2. It is worth recalling that the MS2 capsid is in the same T = 3 class as CCMV. Hence their very different number and types of fundamental quasi-rigid units point to the important role played by specific capsid proteins in shaping the properties and behaviour of viral capsids that are not large enough to be dealt with by continuum approaches [9]. One further major difference between the MS2 and CCMV optimal subdivisions is that the 90 units have a practically negligible degree of interlocking. Indeed, the interlocking profile has a minimum for Q~90. This indicates that the small quasi-rigid units are structurally self-contained dimers. They are therefore viable candidates for being not only the fundamental mechanical blocks of the fully-assembled capsid but can be expected to be structurally-stable even in isolation and hence are also good candidates for being the assembly or disassembly units of the capsid. Indeed, this has been confirmed by isotope pulse-chase experiments [56]. In these experiments, protein subunits of dimers in complex with RNA are labelled differently from those in RNA-free dimers (via different isotopes) and both species are mixed. The fact that no dimers with differently labelled subunits are detected in solution or as part of any of the assembly intermediates suggests that the dimers do not fall apart into individual subunits and that hence the dimer is indeed the unit of assembly. We emphasize that this a priori conclusion has necessarily a tentative character. 
In fact, because the method is based on the properties of fully-assembled protein shells, it cannot account for the interaction of coat proteins and genomic material during the assembly process. Such interaction can be crucial to aid the fast and correct assembly in vivo [36,37,55,56,[65][66][67]. However, building on the fact that spontaneous in vitro assembly does occur in the absence of the genome, it appears plausible to consider noninterlocked quasi-rigid units as putative assembly units. These considerations are fully supported by the successful comparison with experimental data for MS2. In fact, it has been established that the capsid is assembled from the A/B and C/C dimeric units [56], and the assembly pathways have been characterized in detail both experimentally and theoretically [10,11]. In addition, the key role of the dimeric protein-protein interactions for capsid stability has been indicated by thermal and pressure denaturation experiments [68]. In summary, the MS2 findings reinforce the CCMV indications that the innate functional units identified with the quasi-rigid domain decomposition correspond to those established experimentally. STNV. The satellite tobacco necrosis virus has been one of the first to be determined at high resolution [69,70]. With a diameter of only 17 nm, this T = 1 RNA plant virus is one of the smallest known. The capsid is composed of 60 chemically and structurally identical coat proteins. Each of these consists of 195 amino acids and their N-terminal arms are positively charged [52,58], a common feature in many plant viruses. In the fully-assembled, genome-loaded capsids (which are extremely stable [15,58]) the Ntermini interact with RNA loops, achieving charge neutrality. This interaction has been argued to favour an extended and ordered conformation of the N-termini, which in turn aids the formation of trimeric capsomere units [52,57,71]. The quasi-rigid domain decomposition, whose results are reported in Fig. 3, returns an optimal subdivision for Q~20 mechanical domains. Their relative rigid motion accounts for more than 60% of the capsid's structural fluctuations, see Fig. S2. These domains correspond to trimeric units that are monodisperse in size and do not have interlocked termini. This outcome is consistent with the assembly mechanism discussed above, which involves 20 trimers as basic assembly units [52]. A noteworthy implication is that the fundamental units of the assembly process, in which the RNA is known to play a major role, can be correctly identified through the quasi-rigid domain decomposition of the empty capsid. In this regard, it must be borne in mind that elastic network models guarantee by construction the stability of the model capsid for structural fluctuations of the crystal structure. Therefore, as remarked earlier, ENM approaches can make up for the missing stabilizing interaction of capsid proteins and the packaged nucleic acid. At the same time, the finding indicates that the mechanical stability of the individual (non interlocked) assembly units is still discernible in the internal dynamics of the fully-assembled capsid. This remarkable property shows a posteriori that even in cases where protein-nucleic acids interplay is important, the quasi-rigid domain analysis of the pure protein shell can still give valuable clues about the assembly process. STMV. 
It is interesting to compare the above analysis with the one for another plant virus, the satellite tobacco mosaic virus (STMV), which presents several similarities with the STNV [15,58] including the T = 1 arrangement of the 60 identical coat proteins (with a total of 8820 amino acids) [72]. Because of its relatively small size, STMV represents an ideal and natural reference for numerical investigations [15,16]. To the present day, it remains the only virus for which all-atom molecular dynamics simulations have been performed on the fully-assembled capsid, both in the presence and in the absence of the genome [16]. This study as well as coarse-grained simulations [15] provide considerable insight into the internal dynamics of the capsid, its structural stability and resistance to nanoindentation. The consensus indication of these investigations is that the basic mechanical units are trimers of coat proteins. While this represents a further point of contact with STNV, it should be noted that the similarity of their assembly processes is still disputed. In fact, it is not yet understood whether assembly proceeds as a condensation of a protein-RNA complex [73] or if the collapse of the RNA into a globular state precedes and favours the formation of trimeric and pentameric units [16]. The results of the quasi-rigid domain decomposition of STMV are provided in the bottom panel of Fig. 3. The profiles shown in box B provide a clear indication that the basic rigid units correspond to monodispersed (identical) trimers; this partitioning of the capsid suffices to capture as much as 85% of the capsid's structural fluctuations, see Fig. S2. This result is fully consistent with the previously mentioned computational studies of STMV's structural stability, and also parallels the results of the related STNV case. However, at variance with STNV, the analysis of the interlocking profiles shows that, at low values of Q, the trimers present a significant degree of interlocking originating from the interdigitating N-terminal arms of dimers that straddle domain boundaries. This difference from STNV is not surprising, given the lack of amino-acid homology or immunological cross-reactivity between STMV and STNV [74]. As previously discussed for CCMV and unlike STNV, the significant interlocking prevents from concluding that the trimers are plausible building blocks for the assembly of STMV. As a matter of fact, McPherson et al. [74] suggest that the building blocks may be dimers that contact the genomic RNA at the particle 2-fold axes. This open issue could possibly be settled by establishing whether termini interlocking occurs before or after assembly. This information, which is at the heart of the ongoing debate on the STMV assembly process, is clearly beyond reach of the present approach which is based only on the fully assembled capsid. Predictions We now turn to discuss three viruses for which the basic, mechanically stable functional units are not conclusively known. The following viruses are considered, chosen in order of increasing complexity of the capsid type (T-numbers): the L-A (pT = 2), Pariacoto (T = 3) and polyoma (pT = 7) viruses. We recall that the pT = 2 and pT = 7 cases refer to non-standard Caspar-Klug geometries. L-A virus. The L-A virus is a double-stranded RNA (dsRNA) yeast virus whose capsid is composed of 120 chemically-identical coat proteins with a total of 78120 amino acids. The proteins are assigned to two types, A and B, based on their inequivalent positions, see box A in Fig. 4. 
Similar to several other dsRNA viruses, the A/B dimers are arranged in a T = 1 icosahedral capsid. This virus is classified as pT = 2 to account for the fact that dimers occupy the positions of monomers in a T = 1 structure [75,76]. Stable empty capsids are observed in vitro and it has been suggested that A/B dimers are the basic assembly building blocks [75]. By inspection one readily recognizes that the A/B asymmetric unit tiling the capsid can be defined in two inequivalent asymmetric ways (see Fig. 4). Because the two alternative pairings have a comparable buried surface area, it is not clear a priori which dimer type could be the basic assembly block. As we discuss hereafter, the quasi-rigid domain analysis can provide valuable insights into this open problem. From the analysis of the plots in box B of in Fig. 4 it emerges very clearly that the optimal subdivision is attained for Q~60 identical quasi-rigid domains. Their relative rigid motion captures about 80% of the capsid's mean square fluctuations, see Fig. S2. Because of the high integrity score of this subdivision and the bipartite A/B capsid tiling, it follows that these basic mechanicallystable units necessarily correspond to A/B dimers which, furthermore, are negligibly interlocked. This result is therefore fully consistent with the experimental indication of A/B dimers being the basic assembly units. The notable point is that the quasi-rigid domain analysis discriminates very clearly between the two inequivalent asymmetric A/B dimers shown in box A, arguably because of their different networks of intra-and inter-dimer interactions. In fact, upon repeating the quasi-rigid domain partitioning into Q~60 domains, one invariably observes that the strain-minimizing subdivision is the one shown in box C of Fig. 4. Given the robustness of this subdivision we predict that the A/B dimer shown in box C is the basic assembly unit of the L-A virus. Pariacoto virus. The Pariacoto insect virus belongs to the nodaviridae family and has a T = 3 capsid [77,78] constituted by 180 chemically identical coat proteins occupying three quasiequivalent positions. As shown in Fig. 4 the A units cluster around the five-fold axes while the B and C units are found at the threefold axes. The capsid consists of 62760 amino acids in total. While the C-terminal arm of each A protein is located in a channel formed by the A, B, C monomers at the quasi-3-fold axes, the N-terminal arms of the A proteins are involved in an extensive interaction with the encapsidated single-stranded RNA [77]. The inspection of the profiles in box B of Fig. 4 indicates that the optimal subdivision into mechanically-stable units is obtained for Q~60, whose rigid-like motion accounts for about 90% of the capsid's structural fluctuations, see Fig. S2. This partition corresponds to monodispersed, identical trimers, see box C. The other prominent peak for the much smaller number of Q~12 subdivisions corresponds to multiples (pentamers) of these trimeric units. The trimeric units correspond precisely to the A, B, C complexes and their minimal degree of interlocking is suggestive of their role as basic assembly units for the Pariacoto virus capsid. The identification of a trimer of proteins as the first stage of assembly is also consistent with the theoretical work by Reddy [79], which is based on calculations of the buried surface area of the coat proteins. Polyoma virus. We conclude the analysis with the discussion of the murine polyoma virus. 
This non-enveloped DNA virus has an icosahedral capsid with a pT = 7 (non Caspar-Klug) geometry [80]. The shell consists of 360 copies of the main coat protein (VP1) with a total of 129060 amino acids, the largest capsid considered here. The asymmetric structural unit involves six identical coat proteins which are organised into pentameric clusters with structurally inequivalent bonding environments [81], see Fig. 4. The peak structure of the integrity score profile indicates that the optimal subdivision involves Q~72 rigid domains. Their rigidlike motion accounts for about 90% of the capsid's structural fluctuations, see Fig. S2. As illustrated in box B of Fig. 4, these correspond to pentamers. More precisely, two inequivalent types of pentamers are recognized by our approach. The pentameric units shown in box C are therefore expected to be the stable mechanical units for the capsid (though not the assembly ones because of the significant amount of interlocking). This conclusion is reinforced by the analysis of the suboptimal subdivision into Q~12 domains. These larger domains correspond to five-fold symmetric units made of a central pentamer surrounded by five further pentamers and hence give additional support to the capsid's flexibility at pentamer-pentamer boundaries. This prediction could be verified by e.g. using molecular dynamics simulations to analyse the response of the capsid to nano-indentation. Summary and conclusions Identifying the fundamental, and typically multimeric, protein units that control the mechanical response of viral capsids or its assembly and disassembly is important both for rationalizing and for modeling key steps of viral life cycles [15]. Here we introduced and applied a novel computational strategy that, to our knowledge, represents the first attempt to develop a general and efficient method for identifying the basic, mechanically stable protein units starting from the sole input of the fullyassembled protein capsid. The method relies on the characterization of the internal dynamics of the capsid by means of elastic network models and uses it to optimally decompose the protein shell into blocks that have the characteristics expected for genuine capsid functional units, such as mechanical stability (quasi-rigidity), structural integrity of the constitutive proteins, or small numbers of inequivalent block types etc. The viability of the scheme was first assessed and validated by considering a set of four viruses (CCMV, MS2, STNV, STMV) for which the fundamental functional units are known. In all cases, the results of the optimal decomposition scheme were fully consistent with available experimental or numerical results for the known mechanical and/or assembly protein units. We next turned to a further set of three viruses, namely polyoma, Pariacoto and L-A virus, whose functional units are debated or not known, and for which we formulate verifiable predictions. The positive validation of the method and its affordable computational cost (the first hundred ENM modes of the internal dynamics of capsids of about 60000 amino acids can be obtained in *2 hours on a single Intel Xeon 2.40 GHz processor) demonstrate that simple structure-based strategies can provide considerable information on the basic functional units. In particular, they not only aid the understanding of various viral processes but can also guide the development of their multiscale modelling. We envisage two natural extensions of this first study. 
On the one hand it would be important to explore the possibility to include, even approximately, the interaction of the coat proteins with the packaged genome. This would be an apt complement of previous studies which considered the viability of ENM characterizations of empty capsid shells as proxies for the genome-loaded virion particles. On the other hand, it would be most interesting to extend considerations systematically to larger and more complex capsid geometries in order to understand how the functional units change as one goes from small- or medium-sized capsids (where the discrete protein nature of the capsid is visible) to larger structures that are well approximated by continuum theory [9].
Methods
Calculation of the structural fluctuations via ENM
Proteins and protein assemblies in thermal equilibrium can sustain structural fluctuations of appreciable amplitude. A large body of experimental and numerical evidence has indicated that the principal fluctuation modes, those of lowest energy, have a collective character. This means that the structural deformations associated to these modes entail the concerted displacements of groups of several amino acids. As was first shown by Tirion [41], the collective character of the modes justifies the use of simplified, coarse-grained models (rather than atomistically-accurate ones) for calculating the principal modes of fluctuation of a protein around its reference, native structure. A commonly used framework for such coarse-grained calculations is provided by elastic network models. The latter rely on a quadratic approximation of the near-native protein free energy,
$$F(\{\delta \vec{r}_i\}) = \frac{1}{2} \sum_{i,j=1}^{N} \delta \vec{r}_i \cdot \mathbf{M}_{ij}\, \delta \vec{r}_j , \qquad (1)$$
where $N$ is the number of amino acids, $\delta \vec{r}_i$ is the vector displacement from the native position of the $i$-th main chain (backbone) centroid (typically the $C_\alpha$ atom) and $\mathbf{M}$ is the effective symmetric interaction matrix of linear size $3N$. Within the quadratic approximation of Eq. (1), the principal modes of structural fluctuations can be calculated exactly with minimal computational expenditure, and they correspond to the eigenvectors of M having the lowest non-zero eigenvalues. In the following we shall indicate by $\lambda_1, \lambda_2, \lambda_3, \ldots$ the nonzero eigenvalues ranked according to increasing magnitude (they are all positive) and with $\vec{v}_1, \vec{v}_2, \vec{v}_3, \ldots$ the corresponding orthonormal eigenvectors. It can be shown that $\lambda_l$ corresponds to the total mean square structural fluctuation projected on the $l$-th mode, $\lambda_l \propto \langle (\sum_i \delta \vec{r}_i \cdot \vec{v}_{l,i})^2 \rangle$, where $\langle \cdot \rangle$ denotes the canonical equilibrium average and $\vec{v}_{l,i}$ is the displacement of the $i$-th amino acid projected on the $l$-th mode. In this study, we shall resort to the beta-Gaussian network model [45] to compute the matrix M and its eigenvalues and eigenvectors. The model, which is implemented in a freely-available numerical code [45], was previously successfully validated against extensive molecular dynamics simulations of various proteins and protein complexes. At variance with most elastic network approaches it uses not one, but two interaction centers per amino acid: one for the main chain, the other for the side-chain (omitted for glycine). As customary, the centroids' interaction range was set equal to 7.5 Å. Because the side-chain degrees of freedom are integrated out analytically, the linear size of the matrix M is still equal to 3N, as in single-centroid schemes.
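To make the construction above concrete, here is a minimal sketch of a single-centroid anisotropic network model built from Cα coordinates. It is a simplified stand-in for the two-centroid beta-Gaussian model of ref. [45] (which is not reproduced here); the 7.5 Å interaction range follows the text, while the unit spring constant, the function names, and the toy coordinates are illustrative assumptions.

```python
# Sketch: single-centroid anisotropic elastic network model (ANM) from
# C-alpha coordinates, with the 7.5 A cutoff quoted in the text.
import numpy as np

def anm_hessian(coords, cutoff=7.5, gamma=1.0):
    """Dense 3N x 3N ANM Hessian; adequate for small test systems.
    For full capsids (N ~ 1e4-1e5) one would store the matrix sparsely and use
    shift-invert Arnoldi (e.g. scipy.sparse.linalg.eigsh / ARPACK) instead."""
    n = len(coords)
    hess = np.zeros((3 * n, 3 * n))
    for i in range(n):
        for j in range(i + 1, n):
            d = coords[j] - coords[i]
            r2 = d @ d
            if r2 > cutoff ** 2:
                continue
            block = -gamma * np.outer(d, d) / r2      # 3x3 off-diagonal block
            hess[3*i:3*i+3, 3*j:3*j+3] = block
            hess[3*j:3*j+3, 3*i:3*i+3] = block
            hess[3*i:3*i+3, 3*i:3*i+3] -= block       # accumulate diagonal blocks
            hess[3*j:3*j+3, 3*j:3*j+3] -= block
    return hess

def lowest_modes(hess, n_modes=10, zero_tol=1e-8):
    """Lowest non-zero eigenvalues/eigenvectors; the rigid-body modes of the
    whole shell (near-zero eigenvalues) are discarded."""
    evals, evecs = np.linalg.eigh(hess)
    keep = evals > zero_tol
    return evals[keep][:n_modes], evecs[:, keep][:, :n_modes]

# Toy usage with random points standing in for C-alpha positions.
rng = np.random.default_rng(0)
ca = rng.uniform(0.0, 30.0, size=(200, 3))
evals, evecs = lowest_modes(anm_hessian(ca), n_modes=5)
print(evals)
```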
The computational burden associated with the memory storage and diagonalization of the M matrices for the capsids (N is in the $10^4$-$10^5$ range) was limited by taking advantage of the sparse character of M and calculating its lowest-energy eigenvectors using the shift-inverse Arnoldi method, as implemented in the Arpack routines [82]. These algorithmic techniques (which could be further aided by symmetry considerations [83]) sufficed to compute the relevant low-energy modes of all capsids, except for L-A, using less than 24 GB of RAM and a single 2.4 GHz Intel processor. The modes calculation is the slowest computational step in the whole decomposition procedure for larger viruses (for instance, it took about 3 hours for the L-A case). For the polyoma capsid alone which, at N = 129060, is the largest entry in our set, we found it necessary to adopt a coarser ENM description. Specifically, we used one centroid per two amino acids by retaining only one for every other $C_\alpha$. The interaction range was rescaled accordingly and set equal to 15 Å. Consistent with established results for the case of globular proteins [84], this coarse-graining procedure has no effect on the optimal quasi-rigid domain decomposition of smaller capsids. This is illustrated in Fig. S4 for the STMV capsid which, being the smallest considered here, is expected to be the most susceptible to the coarse-graining level. This validation and the considerations of [84] provide a justification for the use of the coarse-grained description for the polyoma capsid.
Exploration of capsid subdivisions into putative quasi-rigid domains
The subdivision of viral capsids into quasi-rigid domains is based on the PiSQRD strategy introduced in refs. [33,34]. The approach relies on the notion that for a genuine rigid body the modulus of the distance of any two points remains constant as the body is moved in space. Accordingly, one can quantify the viability of a tentative capsid subdivision into Q putative quasi-rigid domains by comparing amino acids' pairwise distance fluctuations within each domain with those across domains. For good subdivisions, the former should be much smaller than the latter, see sketch in Fig. 5. To turn this observation into a quantitative scheme amenable to numerical implementation, we consider the geometric strain, $f_{ij}$, for a given pair of amino acids, $i$ and $j$. Using the same notation introduced after equation (1) for the principal modes of structural fluctuations, $\vec{v}$, and their associated amplitudes $\lambda$,
$$f_{ij} = \sum_{l=1}^{n} \lambda_l \left[ (\vec{v}_{l,i} - \vec{v}_{l,j}) \cdot \hat{d}_{ij} \right]^2 ,$$
where $\vec{d}_{ij}$ is the reference, native distance vector of the $i$-th and $j$-th amino acid ($\hat{d}_{ij}$ its unit vector), and $n$ is the number of retained principal modes. n is chosen by retaining all the modes with energy lower than the fifth non-zero mode of a single coat protein, thus ensuring a sufficient level of detail while minimizing the computational effort and discarding the mostly irrelevant high-frequency details. Accordingly, the internal strain of the $k$-th domain $D_k$ is defined as
$$F_{D_k} = \sum_{i,j \in D_k} f_{ij} ,$$
where the sum runs over all the pairs belonging to that domain, and the overall strain is therefore
$$F = \sum_{k=1}^{Q} F_{D_k} .$$
Based on previous considerations, the desired subdivision is the amino acid partitioning into Q groups that minimizes the overall strain F. Notice that the minimization of F needs to be performed separately for all possible values of Q, that is from 2 up to the number of protein units forming the capsid (or even larger values in case the mechanical domains involve protein structural subunits).
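The strain bookkeeping lends itself to a compact implementation. The sketch below evaluates the pair strain and the overall strain of a candidate partition using the expression reconstructed above; it assumes modes and amplitudes such as those produced by the previous sketch (for Hessian modes, the amplitudes would be proportional to the inverse eigenvalues), and the O(N²) pair loop is kept deliberately naive. All names are illustrative.

```python
# Sketch: geometric strain of a candidate quasi-rigid partition.
# `evecs` has shape (3N, n) with one column per retained mode; `amps` holds the
# associated amplitudes lambda_l (for Hessian modes, amps ~ 1/eigenvalue).
import numpy as np

def pair_strain(i, j, coords, amps, evecs):
    """f_ij: mode-weighted fluctuation of the i-j distance, projected on the
    native inter-residue direction."""
    d = coords[j] - coords[i]
    d_hat = d / np.linalg.norm(d)
    vi = evecs[3 * i:3 * i + 3, :]          # per-mode displacement of residue i
    vj = evecs[3 * j:3 * j + 3, :]
    proj = (vi - vj).T @ d_hat              # relative displacement along d_hat
    return float(np.sum(amps * proj ** 2))

def total_strain(labels, coords, amps, evecs):
    """Overall strain F: sum of f_ij over all residue pairs sharing a domain."""
    n = len(coords)
    total = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            if labels[i] == labels[j]:
                total += pair_strain(i, j, coords, amps, evecs)
    return total

# Illustrative usage with the objects from the previous sketch:
# amps = 1.0 / evals
# F = total_strain(labels, ca, amps, evecs)
```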
In fact, the "correct" optimal number of quasi-rigid domains is not known a priori and needs to be found based on physical considerations, see the next subsection. For each explored value of Q, the minimization of F over the amino acids' assignments is performed by a greedy algorithm starting from a random labelling. At each step of the algorithm a randomly-picked amino acid is reassigned to a randomly-chosen domain. The new assignment is accepted if it leads to a decrease of F and rejected otherwise. The scheme is repeated until the algorithm is unable to further improve the solution, i.e. the count of systematically rejected moves is comparable with the total number of amino acids. To reduce the impact of getting trapped in local minima of F (whose landscape roughness increases with Q) the greedy minimization scheme is iterated if the distribution of the domain strain $F_{D_k}$, $k = 1, \ldots, Q$, is highly heterogeneous (which could be a sign of a very asymmetric solution). Specifically, we first compute the average, $\mu$, and standard deviation, $\sigma$, of the domains' strain and check if one or more residuals $R_k = |F_{D_k} - \mu|$ is larger than $3\sigma$. If so, then the two domains with smallest strain are joined while the one with the largest strain is split in two. This amino acid reassignment clearly preserves the total number of domains, Q. The greedy minimization of F is repeated and the procedure is iterated until one of the following holds: (i) convergence to a minimum which features a sufficiently homogeneous energy distribution, or (ii) the splitting/joining move is unable to improve the solution. It is important to note that no a priori information about the capsid's parsing into single proteins is used to identify the domains. Indeed, in principle mechanical domains can cut through proteins, for example when a rather loose loop tightly binds to a different block. The comparison between the mechanical and the protein boundaries is done a posteriori, providing information on the reliability of the subdivision itself. The Q-dependence of the minimized geometric strain for all considered capsids is shown in Fig. S5.
A posteriori assessment of the quasi-rigid character of a subdivision
Besides calculating the geometric strain, the genuine quasi-rigid character of a given decomposition into Q domains is more intuitively assessed by computing the fraction of overall capsid motion that can be ascribed to the relative rigid-like movements of the domains (i.e. by neglecting intra-domain fluctuations as if the domains were strictly rigid). This quantity is calculated by considering that each normalised mode, $\vec{v}_l$, can be decomposed as a sum of two contributions: one consisting of pure rigid rotations and translations of the domains, $\vec{v}_l^{\,rb}$, and one describing intra-domain fluctuations, $\Delta\vec{v}_l$, i.e. $\vec{v}_l = \vec{v}_l^{\,rb} + \Delta\vec{v}_l$. Because these two components are orthogonal [33] one has that $\|\vec{v}_l\|^2 = \|\vec{v}_l^{\,rb}\|^2 + \|\Delta\vec{v}_l\|^2 = 1$. The fraction of the capsid's mean square structural fluctuations that can be ascribed to the relative rigid displacement of the domains is accordingly
$$f_{rb} = \frac{\sum_{l=1}^{n} \lambda_l \, \|\vec{v}_l^{\,rb}\|^2}{\sum_{l=1}^{n} \lambda_l} .$$
The profile of the fraction of motion captured by the domain decomposition of all considered capsids is shown in Fig. S2.
Selection of optimal subdivision into basic mechanical units
The algorithm for the subdivision into Q domains was applied to the viral capsid several times, varying Q between 2 and the total number of proteins in the capsid.
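A minimal sketch of the greedy reassignment loop just described, assuming a strain function like the one in the previous sketch. The stopping threshold and the random seed are arbitrary choices, and the split/join escape move is omitted for brevity.

```python
# Sketch: greedy minimization of the overall strain F for a fixed Q.
# `strain_fn(labels)` returns F for a candidate labelling, e.g. via the
# total_strain() sketch above; recomputing F from scratch at every move is
# slow but keeps the example short. The split/join move is not included.
import random

def greedy_partition(n_residues, Q, strain_fn, seed=0):
    rng = random.Random(seed)
    labels = [rng.randrange(Q) for _ in range(n_residues)]   # random start
    best = strain_fn(labels)
    rejected = 0
    while rejected < n_residues:          # stop once rejections pile up
        i = rng.randrange(n_residues)
        new_dom = rng.randrange(Q)
        if new_dom == labels[i]:
            continue
        old_dom = labels[i]
        labels[i] = new_dom
        trial = strain_fn(labels)
        if trial < best:                  # accept only strict improvements
            best, rejected = trial, 0
        else:                             # otherwise restore and count a reject
            labels[i] = old_dom
            rejected += 1
    return labels, best

# Hypothetical usage, scanning several values of Q:
# for Q in range(2, 33):
#     labels, F = greedy_partition(len(ca), Q,
#                                  lambda lab: total_strain(lab, ca, amps, evecs))
```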
After establishing the quasi-rigid character of the putative subdivisions by monitoring the strain and the above-mentioned fraction of captured motion, the identification of the optimal value(s) of Q corresponding to a subdivision into viable, basic mechanical units was performed by monitoring two physical quantities: protein integrity and the number of inequivalent capsomere types. They respectively account for the compatibility of the subdivision with the natural elementary units represented by the single proteins and for the structural similarity of the tiles, which results in a low number of different tiles. Given a subdivision into domains, an integrity parameter was defined for each protein. For a general subdivision, the amino acids of a protein can be assigned to a number of different domains. However, a good subdivision should preserve the integrity of the protein, i.e. almost the whole protein should belong to a single domain. We thus defined the integrity score for a protein as the largest fraction of its amino acids assigned to a single domain. This quantity was then averaged for all the proteins, providing a score for the capsid subdivision. We also computed the number of similar tiles identified by our subdivision by size inspection. Specifically, we defined the size of the i th domain as the number of amino-acids belonging to the domain itself; we then assigned domains to a tile type if their size is the same within ca. 3% of the average size. Viable subdivisions into basic mechanical units were identified by maxima in the integrity score corresponding to a small number of tile types. Interlocking between capsomeres To detect possible intertwinings between quasi-rigid units (e.g. due to swapped tails or subdomains of the parent proteins) we computed the interlocking parameter. Specifically, we considered separately the two termini of each protein in the capsid, namely the first and last twenty amino acids, and counted the number of amino acids assigned to a rigid domain different from the dominant one (i.e. the domain to which most of the protein's amino acids belong). This calculation returned the number of interlocked amino acids for each terminus of each protein in the capsid. The numbers relative to the N and C terminals were averaged separately, and the largest of the two averages was taken as a measure of the interlocking of the quasi-rigid domains. In other words, if a quasi-rigid domain subdivision has interlocking number equal to 10, it means that on average one protein has 10 terminal residues assigned to a different domain than its core. Clearly, this also implies that the other terminus has less than 10 interlocked amino acids. Figure S1 Decomposition into basic mechanical units of the HEV virus-like particle. As is shown in box A, each of the 60 coat proteins features three distinct structural subdomains, named S (a coat domain which composes the envelope for the genetic material), P1 (which forms a protrusion around the three-fold axis) and P2 (which forms spikes on the two-fold axis). The optimal subdivision, corresponding to Q~50 domains (coming in two distinct types) is identified by the peak in the integrity score calculated at the protein level and at subdomain level, see the black and blue curves, respectively, in box B. The fact that the peak of the subdomain integrity is much more prominent than for entire proteins indicates that the basic mechanical domains involve structural subunits from different proteins. 
This is clearly visible in box C, which shows that one domain type corresponds to the spike (formed by the P2 subunits of two neighbouring coat proteins) while the other is a trimer involving the S and P1 subunits of three neighbouring coat proteins. (TIF)

Figure S2 Fraction of overall capsid motion (mean square structural fluctuations) that can be ascribed to the pure rigid-like movements of the Q quasi-rigid domains. For each value of Q we considered the domain subdivision which minimizes the geometric strain. Panels a-h refer respectively to: CCMV, MS2, STNV, STMV, L-A virus, Pariacoto virus, polyoma virus and HEV. (TIF)

Figure S3 Suboptimal decompositions of CCMV. Panel A shows a close-up of the CCMV profiles for the integrity score and number of tile types for subdivisions from Q = 2 up to 30 quasi-rigid domains. Panel B illustrates non-optimal quasi-rigid decompositions of CCMV. The subdivisions correspond to partitions into very few domains, as indicated by the Q label. For each of these subdivisions the number of different tile types is large and ranges from 3 to 4. For simplicity we therefore used a different color for each domain rather than a different color for each domain type as in the figures in the main text. (TIF)

Figure S4 Structural coarse-graining and robustness of quasi-rigid domain decompositions. Optimal subdivision of the STMV capsid into 20 quasi-rigid domains obtained by using the coarse-grained ENM where only every other Cα atom is retained, see Methods. The profiles of various order parameters for the subdivision are shown in box A. The resulting coarse-grained subdivision is shown in box B and is practically indistinguishable from the one given in Fig. 3 where all Cα atoms were retained. (TIF)

Figure S5 Q-dependence of the minimized geometric strain. Panels a-h refer respectively to: CCMV, MS2, STNV, STMV, L-A virus, Pariacoto virus, polyoma virus and HEV. Notice that at the value of Q corresponding to the optimal subdivision (highlighted by the red band) there is usually a kink. The latter signals the change of slope of the strain curves when the "innate" number of subdivisions is crossed. (TIF)
11,799.2
2013-11-01T00:00:00.000
[ "Materials Science", "Physics" ]
Programmable Session Layer MULTI-Connectivity

Our devices can use a wide range of communication technologies such as multiple cellular technologies (4G/5G), WiFi, and also Ethernet. At the same time, applications have a choice of a wide range of transport protocols such as QUIC and TCP that can be fine-tuned and optimized according to their needs. However, in spite of these advances, offering seamless multiconnectivity to applications continues to be a hard problem. The key factors that continue to be a roadblock towards achieving seamless multiconnectivity include a) applications cannot specify the communication technologies to be used by their flows, and b) the traditional definition of a connection endpoint was not designed to support mobile nodes. In this paper we discuss the key challenges that make this problem hard. We also present MULTI, a session layer approach that can be leveraged to address some of the key sub-problems of this problem. For instance, we observe that MULTI incurred a small overhead (less than 5% decrease in throughput) when using TCP compared to the native asyncio Python library.

I. INTRODUCTION

Our mobile devices, including laptops and smart phones, can use multiple different communication technologies and multiple transport protocols. However, utilizing these technologies and protocols to their full potential continues to be difficult, and off-the-shelf mobile devices still continue to use a single communication technology for data transfer. For instance, Android devices give WiFi a higher priority than cellular for different reasons such as monetary cost, bandwidth, or latency [1]. Some of the key reasons that stop us from efficiently utilizing these technologies and protocols are as follows. 1. Applications cannot easily specify the communication technology that should be used for their flows. Different applications have different requirements for their connectivity. Some may require high bandwidth, while others require low latency. Devices typically allow using only a single communication technology for data transfer, and applications have almost no ability to specify their preferred choice of communication technology. For instance, our smart phones typically give WiFi a higher priority than cellular [1]. On Linux devices, the priority of communication interfaces is typically device-wide, and applications require specific capabilities or superuser privileges to specify application-specific interface priorities [2]. Similarly, versions of Android may allow restricting applications to either mobile networks or WiFi through system settings or through a firewall application [3], [4]. 2. The traditional definition of a connection endpoint was not designed to support mobile nodes. A UDP or TCP transport connection is defined by a five tuple: source IP address, source transport port, destination IP address, destination transport port, and transport protocol. These five values are present in the packets using the connection, and the four values other than the transport protocol may be modified by different in-network functions in the path between the source and the destination. Consider a packet originating from a mobile device to a remote server. For example, network address translators can modify the source IP address and the source port number, while reverse proxies may modify the destination IP address and port numbers [5], [6].
Clearly, the definition of a transport connection is not truly end-to-end because the same connection may be defined by a different five tuple at the source and the destination [7], [8]. Over the years, multiple different solutions have been proposed to address these issues (see §II). However, Internet ossification makes it difficult to use the proposed protocols in the wild [9]. Regardless, what is common to all of them is that they either tackle a specific use case or require special infrastructure and software. For instance, the shortcomings of using the five tuple as a connection endpoint require mechanisms that are agnostic to changes at the underlying transport and network layers. QUIC and its extension Multipath QUIC (MPQUIC) exemplify such mechanisms [1], [10], [12]. They are designed from the beginning to be transport protocols over UDP that are agnostic to the source address and port by using a Connection ID inside the packets. The Connection ID allows the QUIC server to associate packets with the same Connection ID to an established connection regardless of the source of the packets. Similarly, Multipath TCP (MPTCP) [13] uses special TCP headers to carry a Connection ID that allows end hosts to map packets with a different five tuple to the same connection. Another example is Mosh [14], which addresses this issue with its State Synchronizing Protocol (SSP); SSP can be broadly categorized as a session layer protocol that uses the services of transport layers. In this article, we first discuss some of the key challenges that continue to make seamless multiconnectivity hard. We then present an example of a programmable session layer solution, MULTI, that is aimed at addressing some key sub-problems of this problem. Among the key goals of MULTI is the ability to allow applications to specify their preference of protocols, and also to suggest configurations for the protocols. MULTI acts as a shim at the session layer that also allows multiplexing and aggregating data over multiple connections. Using multiple, potentially different transports allows MULTI to achieve connectivity over different networks with different policies. As establishing connectivity and multiplexing data over connections is performed at the session layer, MULTI can be deployed rapidly and allows fast deployment of new features, including more advanced schedulers that determine how, for example, link aggregation is handled. Our key contributions are as follows. 1) Our solution, MULTI, attempts to work around Internet ossification and provides an umbrella which incorporates past approaches. MULTI combines the best of previous works in multiconnectivity and builds on their insights. 2) We detail the characteristics of MULTI which allow applications to simultaneously use multiple transport protocols and interfaces with different characteristics. Specifically, MULTI allows applications to request connections with certain characteristics, including configurations for the transport and Internet protocol, the interfaces and interface configurations, etc., and this in turn enables MULTI to multiplex data over multiple transport layer connections. 3) We provide an open source proof-of-concept implementation of MULTI for evaluation purposes. We believe that our prototype of MULTI can be easily extended to include features such as custom schedulers for multiplexing data over multiple connections and also support for new transport protocols.
MULTI builds on the insights of Mobile IP [15], QUIC, and MPTCP. We believe that it is the next step in the series of works that have been aimed at offering mobility and multiconnectivity. Specifically, Mobile IP allows the interfaces and the networks to change, but the IP address assigned to the device stays fixed. Using QUIC allows applications running on mobile devices to remain connected when the used IP address and interface change by using a Connection ID, while MPTCP allows applications to multiplex data streams over multiple interfaces. In contrast, MULTI allows applications to multiplex a data stream over multiple transport protocols, each of which can use different IP addresses and interfaces. We implement MULTI using state-of-the-art asyncio primitives of Python [16], and the asyncio QUIC library [17]. We show that it can support more than one transport protocol, and more than one link layer technology. We also show that it can achieve a throughput and latency that is comparable to the protocol implementations it uses. For instance, we observe that MULTI incurred a small overhead (less than 5% decrease in throughput) compared to the baseline asyncio when using TCP.

II. BACKGROUND

There have been many different multiconnectivity solutions developed over the years. These solutions operate at different levels of the network stack, such as the network, transport, and application layers. Some of the solutions also operate in multiple layers, as they may use capabilities of a layer above or below the actual layer where they operate. The solutions can also be roughly divided into those that do multihoming [15], multipath [12], [13], and those which do both to various degrees [18]. In this section, we briefly describe several of the solutions and categorize them based on their characteristics. We also discuss why multiconnectivity is still hard in the current Internet.

A. NETWORK LAYER AND BELOW

In this section, we go through several multiconnectivity methods that belong to the network layer. These solutions operate under the transport layer and aim to provide either multihoming or multipathing. Table 1 shows the main differences of the network layer protocols discussed below.

1) HIP

Host Identity Protocol (HIP) is a technology that separates the endpoint identifier and locator roles of an IP address, creating a new namespace that allows mobility and multihoming [19], [21]. HIP is implemented as a layer between the transport layer and the IP layer, creating a new Host Identity Layer between them. For identifying hosts, HIP uses cryptographic Host Identity Tags, which are exchanged between hosts and a rendezvous server. Every time a host moves into a new location, i.e. its IP address changes, the host updates the rendezvous server with its new IP address. When two hosts need to communicate with each other, they first contact the rendezvous server through, for example, the DNS system. The server responds with the current location of the destination host, allowing the hosts to communicate. While HIP allows mobility and multihoming, there are drawbacks. First, both hosts need to support HIP in their operating system. Second, there need to be special rendezvous servers in the Internet with static locations. These require resources to run, and are susceptible to attacks.

2) MOBILE IP

Mobile IP is another way to allow multihoming and mobility [15]. In Mobile IP, each host has its own Home Network. Within the Home Network, the host has a permanent IP address.
When the host is away from the home network, it receives a so-called Care-of Address from the foreign network it is attached to at that time. The host then registers the Care-of Address with a Home Agent inside the Home Network. This allows the Home Agent to tunnel traffic destined to the host's permanent address to the host's current Care-of Address, and allows applications to remain unchanged. As with other systems, Mobile IP does have its drawbacks. The main drawback is the need for the Home Network, which needs hosting resources. Since traffic is relayed through the Home Network, it also causes extra latency, and depending on how the Home Network is connected to the Internet, possible bandwidth caps.

3) SDN-BASED MULTIHOMING

On-device virtual switches managed by Software-Defined Network (SDN) controllers can also be used to offer multiconnectivity. Meghna [20] is one such example of network-driven multihoming. Meghna uses an SDN switch on the host device that is connected through different interfaces to an SDN-capable network. The host switch, or more precisely the traffic managed by the host switch, is controlled by a network SDN controller. The host switch bridges all network interfaces together, and exposes a single interface to the applications. The traffic is then forwarded to the network using the interface selected by the controller. The limitation of this approach is that it requires a home network. If the device is roaming away from the network, Meghna establishes a VPN connection to the home network, and the traffic is first forwarded to the home network and then beyond. This causes overheads as the traffic has to take a longer, non-optimal route. On the other hand, Meghna allows the applications to retain the same IP address independent of their physical network location, while the main drawback is the requirement of the home network.

B. TRANSPORT LAYER

In this section, we discuss several transport layer technologies. What sets them apart from the lower network layer is that they aim to provide their capabilities either by extending existing transport protocols such as TCP with multiconnectivity features, or by providing new transports that have been designed with multiconnectivity in mind. The differences between the discussed protocols are shown in Table 2 for reference, and the details are discussed below.

1) MULTIPATH TCP

Multipath TCP (MPTCP) [13] extends the regular TCP protocol. It was developed to add multipath support to TCP, i.e., to allow the simultaneous usage of all available network addresses for a TCP flow by creating sub-flows over the addresses. MPTCP can aggregate all available links for bandwidth, perform seamless migration, and choose the lowest-latency link for interactive usage. MPTCP is implemented by special TCP options. An MPTCP host announces that it supports MPTCP on opening a TCP connection. If the destination is also MPTCP capable, they negotiate the connection, announce their other addresses, and initiate sub-flow establishment over the other addresses. If a sub-flow becomes invalid, for example due to roaming between networks, the host announces that the particular sub-flow is invalid and should be removed. Allowing MPTCP to use all possible interfaces increases the available bandwidth, but can incur extra monetary costs over cellular links. In addition, using simple path managers can allow the traffic to traverse networks that are undesirable for various reasons such as security.
However, MPTCP supports more advanced path managers that can be used to achieve the desired interface usage based on existing policies or user input.

2) QUIC

QUIC [1], [10], [23] is a transport protocol built over UDP to facilitate better performance for HTTPS. QUIC is designed to multiplex multiple data streams into a single QUIC connection, as many web pages contain multiple small elements, and opening a new connection for each of them is expensive. Unlike TCP and MPTCP, QUIC is implemented in userspace, and this allows easier deployment as it does not require modifications to the OS kernel. QUIC supports mobility by including a Connection ID in the QUIC headers. This Connection ID allows QUIC to resume connections that would otherwise be broken by IP address and port changes, for example due to switching from WiFi to a cellular network. Although QUIC is designed for HTTPS, applications can use it as a transport protocol to exchange other data streams.

3) MPQUIC

Even though regular QUIC is not dependent on the classical 5-tuple describing connections due to the Connection ID, QUIC does not support multipath connections. Currently, there is a proposal to extend QUIC with multipath capabilities, known as Multipath QUIC (MPQUIC) [12]. MPQUIC is an extension to QUIC, and its operation is similar to MPTCP. When a QUIC connection is established, the peers of the connection exchange their IP addresses, and check if they can exchange flows between themselves over the new connections. Similarly to MPTCP, MPQUIC can tolerate the loss of connections, and use the rest of the flows as before.

4) STREAM CONTROL TRANSMISSION PROTOCOL

Stream Control Transmission Protocol (SCTP) is a protocol designed for reliably transferring messages between endpoints [22]. Unlike TCP, which carries a continuous stream of data, SCTP transfers discrete messages. The messages are encapsulated in chunks that are transferred inside SCTP packets. SCTP provides congestion control and supports multihoming. Both endpoints of an SCTP connection can have multiple IP addresses, and connectivity between them is probed during the connection establishment. If any of the available connections fail, the rest of the connections can still be used. While SCTP was published in 2000, it has been plagued by the lack of support in middleboxes and a lack of awareness. As such, while SCTP would provide many of the features desired in multiconnectivity, it cannot be relied on as the only option.

C. APPLICATION LAYER

Here we discuss two approaches for multiconnectivity in the application layer. Namely, these methods do not rely on underlying layers, but embed relevant information in the application data to allow hosts to move.

1) MOSH

Mobile Shell (MOSH) [14] is a remote terminal application that can handle roaming between networks. It serves as an example of a session layer approach to offer multiconnectivity. MOSH is primarily aimed at addressing the problems of SSH, including the lack of support for roaming and for device sleep. These problems largely stem from the way SSH connects to the server and transfers data. The connection is tied to the 5-tuple describing the connection, and the data is transferred as a continuous data stream, i.e. all bytes need to be transferred and shown in order. MOSH does not transfer data in a stream; instead it synchronizes objects. Consequently, a MOSH user sees the latest visible terminal data instead of having to go through all the backlog of the terminal.
MOSH operates over UDP, which allows datagrams to be sent over a UDP socket regardless of the current IP address. Like QUIC, MOSH also uses connection identifiers embedded in the UDP datagrams with its State Synchronizing Protocol (SSP). This allows the MOSH server to associate datagrams from different clients with specific sessions, allowing MOSH to achieve roaming support. SSP can be broadly categorized as a session layer protocol that uses the services of transport layers. However, as MOSH is only a remote terminal application, it cannot be used as a transport protocol.

2) BUFFERING

Another way to handle multiconnectivity is buffering data. This approach is applicable to streaming video or similar content from the network, where there is no real-time component to the data, i.e. the stream is predetermined and does not have changing elements [24]. In this approach, the video streaming application uses cookies or similar to carry the Connection ID and buffers the received data before showing it to the user. If the network connectivity changes, the application still has buffered video to show while it tries to reconnect to the service. At best, the user does not even realize the network has changed as long as the reconnect happens before the buffered video runs out. This approach is not applicable to use cases where the content changes or there is a real-time component, such as gaming in the extreme case or even browsing web pages. In these cases, the user expects to receive the data as fast as possible with minimal buffering.

D. TAPS

The earlier examples of multiconnectivity protocols discussed above are single technologies that solve multiconnectivity in their own niche. However, they are not able to provide a holistic approach to multiconnectivity. The Internet Engineering Task Force (IETF) Transport Services (TAPS) working group (WG) is working on defining an architecture for exposing a Transport Services API to application developers [25]. The goal of the Transport Services API is to allow application developers easier access to transport protocol services such as multiple IP addresses, multipathing, and providing multiple application streams. Traditionally the socket API provides access to different transport protocols such as TCP and UDP. However, different protocols have different methods for accessing them and are not used consistently. In some cases, conceptually similar protocols, for example TCP and Transport Layer Security (TLS), which both provide reliable data streaming services, use different calls to access the send and receive services of the protocols. Similarly, different protocols use different terminologies for the same concepts such as connection, flow, and messages. These create a burden for application developers to learn the differences of each transport protocol, including the calls to be used and the terminology. This has caused a stagnation in which protocols are actively used in the Internet, namely TCP and UDP. The TAPS WG has also identified a set of services that different transport protocols offer, including services that can be handled automatically by the operating system, and services that require interaction from the applications. This identification has allowed TAPS to specify a minimal set of required services that the Transport Services API needs to provide to the application. The Transport Services API defines the mechanism for applications to create network connections and perform data transfer.
The API is an asynchronous, event-driven system that uses messages to transfer data. Each call is designed to be asynchronous, i.e. the calls do not block the application.

E. NEAT

NEAT is a framework providing a platform- and protocol-independent transport API [26]. It is a userspace framework that allows application developers to use different transport protocols with minimal interaction from the application side. NEAT also allows applications to specify different options and requests to the operating system on what transports should be used and how they should be configured. As such, NEAT can be considered a prototype implementation of TAPS. The NEAT framework consists of the NEAT user module, which includes the NEAT API that applications use to request a connection. The NEAT module's policy manager and other components handle gathering candidate connections based on policies and cached information. The NEAT framework performs Happy Eyeballs connection candidate gathering for each available or requested protocol [27]. For example, if the transport policies advocate SCTP or TCP, NEAT performs connectivity checks to determine what IP address (IPv4 or IPv6) and which protocols work. This allows NEAT to discard non-working protocols and cache working protocols for future use. The NEAT framework provides the best transport solution based on the request the application makes to NEAT, the policies defined by the system and its administrators, and the transports the network offers. This allows NEAT to provide the best single transport solution to the application. The goals of TAPS and NEAT are very similar to the goals of MULTI, which are discussed in §III-A. The main difference between NEAT and MULTI is that while NEAT aims to provide the best single transport solution such as MPTCP or QUIC, MULTI aims to provide multiple transports that can be used simultaneously.

F. WHY IS MULTICONNECTIVITY STILL HARD?

The solutions described in this section try to solve multiconnectivity in different ways at different layers. Some of them do it at the network layer, some at the session layer, some at the application layer, and some mix several layers. Unfortunately, each of them has its own niche and requirements for the hosts, servers, networks, and middleboxes.

1) FIVE TUPLE CONNECTION ENDPOINTS DO NOT SUPPORT MOBILITY

The transport and network protocols used today were primarily designed to offer connectivity to devices that were either placed in static locations or did not move from one network to another. The end-to-end principle [28] is also largely violated with the introduction of load balancers, network address translators (NAT), and in-network middleboxes that modify the packet headers and payloads [9]. As a consequence, the five-tuple definition of a connection endpoint is no longer valid in a large number of networks.

2) MIDDLEBOXES MAKE IT DIFFICULT

Middleboxes and firewalls also ossify the protocols used because they tend to drop packets from protocols that are not well-known. For instance, firewalls blocking UDP datagrams make it difficult to introduce new UDP-based protocols or provide extensions to existing UDP-based protocols such as QUIC [1], [10], [12]. Stateful middleboxes that maintain the state of the connection also hinder multiconnectivity. When a device changes its location, the 5-tuple becomes obsolete and causes the connection to break unless the middlebox can somehow tie the old and the new location together. Furthermore, encrypting the payloads of the packets aggravates this problem.
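To make the five-tuple limitation concrete, the sketch below (illustrative Python, not taken from any of the cited protocols) contrasts a session lookup keyed by the five tuple, which silently breaks when a client's address changes, with a lookup keyed by a connection identifier carried inside the packet, which is the mechanism used in spirit by QUIC, MPTCP, and MOSH's SSP.

```python
from dataclasses import dataclass, field
from typing import Dict, Tuple

# (src IP, src port, dst IP, dst port, transport protocol)
FiveTuple = Tuple[str, int, str, int, str]

@dataclass
class Session:
    conn_id: str
    state: dict = field(default_factory=dict)

# Lookup keyed by the five tuple: a NAT rebinding or a WiFi-to-cellular
# switch changes the key, so an arriving packet no longer maps to its session.
sessions_by_tuple: Dict[FiveTuple, Session] = {}

# Lookup keyed by a connection identifier carried inside the packet:
# the session is found regardless of the packet's current source address.
sessions_by_conn_id: Dict[str, Session] = {}

def handle_packet(five_tuple: FiveTuple, conn_id: str, payload: bytes) -> Session:
    # The Connection ID survives mobility; the five tuple does not.
    session = sessions_by_conn_id.get(conn_id)
    if session is None:
        session = Session(conn_id)
        sessions_by_conn_id[conn_id] = session
    # The (possibly new) five tuple is only a hint about where to send replies.
    sessions_by_tuple[five_tuple] = session
    return session
```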
3) DEVICES MAKE MULTICONNECTIVITY HARD

Another factor that makes multiconnectivity hard is the devices themselves. The internal routing of a device is driven by predefined rules, such as giving an Ethernet interface a higher priority than WiFi, which in turn has a higher priority than a cellular interface, even though the application could reach the destination using any of them. These rules do not take into account the current network state. For example, the WiFi interface of the device could be connected to a network with a very slow uplink, while the cellular network could have a much faster uplink. A user (or application) cannot easily specify what communication interfaces to use, apart from manually turning an interface off and forcing the device to use the other interfaces. One approach to perform system-level traffic steering is to set routing table rules. However, these are system-wide changes that are hard to make on a per-application basis, and they can break connectivity completely by accident. Furthermore, applications require super-user privileges to perform these changes. To summarise, currently there is no single solution that would solve the issues of multiconnectivity. Each solution handles a specific use case or area, requiring specific infrastructure and software support.

III. OUR SOLUTION: MULTI

We design MULTI to complement the end-to-end principle of system design, in which applications draw a modular boundary around the communication subsystem and define an interface between it and the rest of the application [28]. In the previous section, we highlighted that TAPS and NEAT exemplify the design of such an interface. In this section we detail how MULTI builds on the insights of TAPS and NEAT to multiplex data over multiple transport layer connections, potentially using different protocols simultaneously. MULTI enables application developers to support and also adapt to the protocols in the link layer, network layer, and transport layer that are available on the networks to which devices using MULTI are connected. Specifically, MULTI acts as an umbrella for different solutions that have been developed over the years and enables applications to make requests and provide hints and intents on their expectations and demands. In §III-A, we discuss our goals, followed by the architecture design in §III-B. Finally, in §III-C we discuss the implementation details of our proof-of-concept version of MULTI.

A. GOALS

MULTI is designed to achieve the following goals.

1) END-TO-END EXCHANGE OF DATA STREAMS

Our aim is to allow applications to exchange streams of data where the order in which the data arrives matters; currently we do not focus on datagrams where data can arrive out of order. We envision MULTI to be used by an end-user device for exchanging streams of data with a remote host. We believe that if our solution can support the exchange of data streams, it can easily be modified to support datagrams and also short messages. Currently, we limit MULTI to transport protocols that have native support for exchanging data streams. For instance, MULTI can use UDP-based transport protocols such as QUIC that have native support for exchanging streams of data, but it cannot use UDP directly.

2) SUPPORT DIFFERENT TRANSPORT, NETWORK, AND LINK-LAYER PROTOCOLS

We aim to offer applications the flexibility to choose from a wide range of transport, network, and link-layer protocols.
We assume that devices that use MULTI will be able to avail themselves of the services of different networks, each of which supports a different set of network and transport protocols. Inspired by TAPS and NEAT, our goal is to enable applications to use the best protocols for each situation depending on the requirements, the current policies, and the capabilities of the networks available for the data exchange.

3) SIMULTANEOUSLY USE MULTIPLE TRANSPORT PROTOCOLS

Simultaneously using multiple different transport protocols allows MULTI to achieve connectivity over multiple heterogeneous networks. Networks use middleboxes [29], and Honda et al. [9] highlight their impact on the ossification of the Internet. As a consequence, the set of transport protocols supported by the networks to which a device may connect cannot be known a priori. For instance, some networks may restrict or disallow the usage of protocols such as MPTCP due to its special TCP headers, or have policies against UDP flows that in turn affect QUIC. Unlike TAPS and NEAT, which attempt to identify which transport protocols may be supported by the networks and select one of the supported transport protocols, we believe that applications can benefit from the ability to multiplex data streams over multiple transport layer connections. Using multiple different protocols over the networks allows MULTI to achieve connectivity when a single protocol could fail. We believe that the idea of aggregating across multiple transport protocols makes MULTI a novel approach to offer seamless multiconnectivity.

4) ALLOW APPLICATIONS TO a) SPECIFY THE PREFERENCE OF PROTOCOLS, AND b) SUGGEST CONFIGURATIONS FOR THE PROTOCOLS

Along with allowing applications to be agnostic to the underlying protocols, we also aim to support fine-grained control of the protocols. This is essential to ensure that flexibility does not come at the cost of reduced control over the underlying protocols used. Preference of protocols is vital because some protocols may be optimal for the application, but at the same time applications would prefer to fall back to alternative protocols if the optimal ones are not supported by the networks. For instance, an application might prefer QUIC over TCP when neither Multipath QUIC nor Multipath TCP is supported by the networks.

5) SEAMLESSLY REACT TO NETWORK CHANGES

One of our aims is to support seamless multiconnectivity. In modern networks, devices regularly move between different networks, usually between WiFi and mobile networks. When the devices roam between these networks, the IP addresses of the network interfaces change or the connectivity is broken. To allow MULTI to seamlessly react to these changes, MULTI needs to be able to detect network changes, and either automatically switch to available connections or resume connections when network connectivity has been re-established.

6) USERSPACE IMPLEMENTATION

Inspired by QUIC, MULTI is designed to be implemented in user space. Furthermore, unlike TAPS and NEAT, we assume that MULTI will be supported by both endpoints. We make this choice because userspace implementations allow for faster deployment. In the rest of this section we detail our approach to achieve these goals.

B. ARCHITECTURE

As shown in Figure 1, MULTI allows applications to exchange bi-directional data streams. It takes as input a bi-directional stream of data, and multiplexes it over multiple transport, network, and link layer protocols.
MULTI bundles multiple solutions such as QUIC and MPTCP under one roof because it aims to harness the strengths of previous attempts to offer seamless multiconnectivity. It is also designed to be implemented in user space to allow easy deployment. In the following paragraphs we detail how MULTI allows applications to multiplex a data stream over multiple transport protocols.

1) SESSION LAYER ENDPOINTS

MULTI requires both the source and destination of a data stream to support it. It is implemented above the transport layer of the protocol stack, specifically at the session layer. A MULTI session is uniquely identified by an application using a Session ID. This Session ID can be either specified by the application or generated at run time during the session initiation.

2) INITIATING A SESSION

A host (initiator) can initiate a MULTI session by opening a transport layer connection with a remote host. After the transport layer connection is established, the initiator provides the Session ID to uniquely identify a MULTI session. Once a session is created, MULTI uses this Session ID when opening subsequent transport layer connections that correspond to this session. After a session is initiated, the two endpoints exchange data using MULTI segments.

3) MULTI SEGMENTS

As shown in Figure 1, MULTI segments are encapsulated within the payload of the transport-layer protocols. Consequently, the MULTI segments may be encrypted when using transport protocols such as QUIC that encrypt the payload. Each MULTI segment includes a header whose fields are presented in Table 3. We would like to point out that these fields were chosen only to demonstrate the benefits of multiplexing transport-layer connections, and have not been optimized to improve MULTI's performance. Furthermore, because MULTI is implemented in userspace, we envision that these fields can be updated, modified, and optimized by the applications that use MULTI. To allow potentially different versions of MULTI to establish connections with each other, we include a version field in the header. This field allows newer versions of MULTI to be compatible with older versions. Although a version field in the header can expose vulnerabilities, as has happened for instance with TLS [30], this can be mitigated by negotiating suitable non-vulnerable versions. A key field in the header is the Session ID. As previously mentioned, the same Session ID is used across the transport-layer connections of a given session. MULTI segments can either be control plane segments or contain data. Control plane segments include keep-alive messages and a special segment to explicitly close the session by closing all the open transport-layer connections of that session. The data plane segments include the segments that encapsulate the data, and the acknowledgments indicating that a segment was successfully received.

4) MULTI CONTROL PLANE

Before opening a MULTI session, the application needs to provide the configuration that MULTI can use for transporting the data to the remote host. We provide an example configuration in Figure 2. The design is motivated by URLSession [31], and it is designed to be verbose enough to meet our goals. The first entry is the configuration_priority, i.e., the suggested order for the transport-layer connections. Each connection is defined by a triple: the transport-layer protocol, the IP-layer protocol, and the interfaces that can be used for the connection.
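Figure 2 is not reproduced here, so the following is a minimal sketch of what such a configuration could look like; the field names, values, and dictionary layout are illustrative assumptions rather than MULTI's actual format, and only the overall structure (a prioritized list of transport/IP/interface triples plus per-protocol and per-link settings) follows the description above and below.

```python
# Hypothetical MULTI session configuration, in the spirit of the Figure 2
# description. All keys and values are illustrative.
example_config = {
    "connection_priority": [
        # (transport protocol, IP-layer protocol, allowed interfaces)
        {"transport": "MPTCP", "ip": "IPv6", "interfaces": ["eth0", "wlan0"]},
        {"transport": "QUIC",  "ip": "IPv6", "interfaces": ["wlan0", "wwan0"]},
        {"transport": "TCP",   "ip": "IPv4", "interfaces": ["eth0"]},
    ],
    "multi_config": {
        "connect": "sequential",       # or "parallel"
        "scheduler": "round_robin",    # how data is spread over open connections
    },
    "protocol_config": {
        "MPTCP": {"scheduler": "round_robin"},
        "QUIC":  {"idle_timeout_s": 5, "certificate": "quic_cert.pem"},
        "TCP":   {"nodelay": True, "idle_timeout_s": 10, "certificate": "tcp_cert.pem"},
    },
    "link_config": {
        "wlan0": {"power_save": False},   # request the OS to disable WiFi power savings
    },
}
```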
Applications using MULTI can use the connection priority to explicitly specify their preference among the protocols of the protocol stack. In the example shown in Figure 2, the first entry for connection_priority specifies that MULTI can use MPTCP, and this MPTCP connection can use IPv6 and the Ethernet and WiFi interfaces; this allows the application to specify that it is optimized for MPTCP, but that it might prefer QUIC over TCP when Multipath TCP is not supported by the network. MULTI opens connections similarly to Happy Eyeballs (HE) [27]. In HE, multiple connection establishment attempts are launched simultaneously over the available transport interfaces and protocols; when a connection is available, it is given to the application while the rest of the finished connections are cached for future use. Similar to HE, the suggested order for connections, along with the fields of multi_config, allows MULTI to determine how the connections are created and used. For instance, the connect field of multi_config shows that the connections are configured to be opened sequentially. Connections can be opened sequentially or in parallel. Opening connections in parallel is useful for minimising the latency before the first data byte is transferred. As soon as the first connection is open, MULTI can use it for data transfer, and when the rest of the connection establishment attempts finish, they are added to the list of usable connections. When there is more than one connection open, MULTI can start using them as specified in the configuration. In contrast, opening connections sequentially can increase the latency before the first data byte is transferred, but can have other benefits. The main benefit of sequential opening is restricting the load induced on both the device and the destination system. As connection establishment attempts arrive sequentially, the destination system is capable of handling more unrelated connection attempts simultaneously. For the host, the sequential opening may even allow better power management, as network interfaces are woken up only when needed, and not all simultaneously. The main cause of latency when opening the connections sequentially is failed connection attempts. If a connection establishment fails, detecting it can add extra time before moving on to the next connection on the list. The scheduler field of multi_config specifies how the data is multiplexed over all the open connections. In the example presented, the application specifies that it would like MULTI to use the Round Robin scheduler to distribute the load evenly across the opened connections. The scheduler operates in user space, and this opens avenues for the creation of application-specific schedulers. For instance, an application can specify lowest latency to indicate that the transport connection with the lowest latency is to be selected, or it can specify that it wants to use the top n connections with the lowest latency, or the top n connections with the best throughput. Furthermore, along with opening the connections sequentially or in parallel, MULTI can be easily modified to support staggered connection attempts. However, MULTI is restricted to what the network can offer. For example, if an application requires a low latency connection, MULTI can only offer what the available networks can provide. In this case, MULTI can be extended to use the connection with the lowest latency and share the measured latency across all connections with the application.
The application can then decide whether to continue with the available latency or terminate the session. The subsequent fields in the configuration allow the application to specify how the MULTI layer should set up and use the connections. This allows MULTI to configure the transport, network, and link-layer protocols. In the example shown in Figure 2, MPTCP is configured to use the Round Robin scheduler, QUIC is configured to have an idle timeout of 5 seconds, the Nagle algorithm is disabled for TCP, and TCP's idle timeout is increased to 10 seconds. Furthermore, the SSL certificates to be used by QUIC and TCP are also specified. Along with configuring the transport protocols, MULTI can also request the host operating system to configure the link layer protocols. In this example, MULTI can request the host OS to disable the WiFi power savings.

5) MULTI DATA PLANE

After providing the configuration, the application can open a MULTI session to a remote host. On opening a session, the application is provided with two handles: one for transmitting data, and the other for receiving data. As shown in Figure 1, MULTI buffers the data sent by the application in the session layer. In this layer, MULTI uses the specified scheduler to multiplex the data transfer over the opened connections. MULTI also splits the data into segments and adds the MULTI header to each segment. As shown in Table 3, the header includes the sequence number for ordering the data. Note that MULTI relies on the underlying transport protocols to perform congestion control and to ensure the reliable transfer of the segments. Similar to data transmission, data received over these connections is ordered according to the sequence number and buffered until the application reads from the buffers. All fields of the MULTI control plane segments and data plane segments, including the MULTI headers, are encapsulated in the payloads of the transport protocol segments. This implies that these payloads may be encrypted if the transport protocol encrypts its payload. For instance, MULTI segments, including the MULTI headers, encapsulated in QUIC payloads are encrypted. We discuss the implications of encrypting the MULTI header in §V.

C. IMPLEMENTATION

We implement our MULTI prototype in Python, and we have made our code and the scripts used for evaluating its performance publicly available. Our prototype currently supports TCP and QUIC as the transport-layer protocols, and is also designed to support MPTCP. We use the native asyncio implementation for TCP, and the aioquic library for QUIC [17]. We use the asyncio libraries because they are designed to allow developers to build networking applications such as web servers. MULTI requires the ability to a) simultaneously open and use multiple transport layer connections, and b) explicitly provide socket options that reflect the requirements specified by the user. The default asyncio library for TCP and UDP offers these capabilities; however, they are currently not available in the aioquic library. We therefore added these features to the aioquic library in the following manner. First, we made the aioquic client awaitable. This enables the MULTI library to simultaneously open multiple connections and use them concurrently. Second, the current version of aioquic does not allow setting socket options. In its default state, aioquic uses the OS routing to send UDP datagrams over any of the available interfaces.
We therefore modified aioquic to allow binding a socket to a specific interface to restrict the UDP datagrams to that interface. Note that by doing so we have violated the sans-I/O design principle [32] governing the design of the asyncio protocol libraries for TCP and QUIC. The sans-I/O principle mandates that the library code does not perform network I/O; however, we violate it because we want MULTI to offer the flexibility to specify the interfaces to be used. In order to offer this flexibility, MULTI must bind the sockets it uses to the interfaces specified in its configuration. From the application point of view, when a session has been created, MULTI exposes a stream reader and a stream writer to the application. The application can use these handles as it would use normal sockets to send and receive data. Under the hood, MULTI multiplexes the sending and receiving of data over the available transport connections.

IV. EVALUATION

In this section we present the results of experiments to evaluate our MULTI prototype.

A. GOALS

The goal of our evaluation is to showcase both the strengths and weaknesses of the MULTI approach. Specifically, we aim to identify avenues to improve MULTI and highlight some of the factors that continue to make seamless connectivity hard, including the overheads caused by a session layer approach and the effect of the packet scheduler when multiplexing multiple connections. In our evaluation, we first focus on the throughput when transferring large amounts of data (256 MB), the duration to transfer a small amount of data (25 kB), and the time to open a MULTI session. We then show how MULTI behaves when the test device moves between networks, either aggregating or switching active connections. For this test, the client device initiates a download over a cellular connection and then joins a WiFi network. The details of our methodology for the above tests are presented in §IV-C.

B. EVALUATION TESTBED

For our evaluation, we use our testbed presented in Figure 3, and the hardware and software detailed in Table 4. Our client laptop can exchange data with our server via WiFi (802.11ac), a 1 Gbps Ethernet link, and a 5G mobile phone using USB tethering. Note that all the Ethernet links, including the link between the WiFi Access Point (AP) and our server, can operate at a bit rate of at least 1 Gbps. All link layer technologies, namely Ethernet, WiFi, and 5G, can use both IPv4 and IPv6 to reach the server. However, while the server can reach the laptop over all three technologies using IPv6, the server can reach the laptop only through WiFi and Ethernet when using IPv4. This is due to two levels of NAT between the laptop and the server over the 5G link: the ISP has an IPv4 NAT between the phone and the Internet, and the USB tethering at the phone adds another NAT. Due to this, our baseline iPerf tests (described in §IV-C) are only run over IPv6 from the server to the laptop. The average round trip time (RTT) between the server and client over 5G, Ethernet and WiFi was 46 ms, 0.250 ms and 1.8 ms respectively. These RTT measurements were conducted using ping. Furthermore, we use a dedicated management network (not shown in the figure) to facilitate remote management, and also for all other background traffic traversing our laptop and server.

C. TEST DESCRIPTIONS

To evaluate our MULTI prototype, we first use three different test scenarios to establish the baseline performance of our testbed and also the performance of MULTI.
Then, we use three tests to evaluate the current performance of our MULTI prototype, followed by two tests to demonstrate how MULTI handles roaming between networks. In each test, the laptop acts as a client that connects to the test server. In each of the figures, the values shown as Rx (receiving) and Tx (transmitting) are from the laptop's point of view.

1) BASELINE PERFORMANCE

We measure the baseline performance for our testbed with the following three tests. 1. iPerf. We use iPerf to measure the achievable throughput when using our Ethernet and WiFi links for transferring data over TCP and UDP over IPv6. iPerf currently does not support QUIC, so we use UDP to give a rough estimate of the throughput that can be achieved when using UDP-based transport protocols such as QUIC. 2. Base. For this test, we measure a) the time required to open a connection, b) the throughput for transferring 256 MB of data, and c) the total duration to transfer 25 kB of data, including the time to open the connections, when using the asyncio Python libraries for TCP and QUIC. We run the tests over all links using TCP and QUIC over IPv6 in both directions. 3. Multi. We repeat the previous test using MULTI to quantify the bandwidth overheads incurred when it is used. Similar to the previous test, we configure MULTI to use only one transport-layer protocol (TCP or QUIC), IPv6, and one link layer interface.

2) MULTI PERFORMANCE

We consider MULTI with three connections; specifically, we open a transport-layer connection on each interface. As with the baseline performance, we measure a) the time to open a connection, b) the duration to transfer 25 kB of data, and c) the throughput when transferring 256 MB of data.

a: TIME TO OPEN A CONNECTION

A key component of MULTI is the module that allows it to open multiple connections for the data transfer. However, the time taken to open each of the connections can vary significantly due to link characteristics and also the configuration settings. For instance, MULTI can currently be configured to (i) open all connections sequentially and begin the data transfer after all the connections are open, or (ii) try to open all connections in parallel, and begin the data transfer after one of them is successfully opened. For this test, we consider option (i), i.e., sequential opening, because it is representative of the worst case scenario; the baseline measurements for MULTI are representative of option (ii), i.e., parallel opening.

b: DURATION TO TRANSFER 25 kB

Multipath protocols such as MPTCP can be inefficient when transferring small amounts of data, as the time to open multiple connections can be larger than the time to transfer the data [33]. In this test, we emulate this scenario. Specifically, we perform the data transfer after all the connections have been established, and we use the Round Robin scheduler to distribute data evenly across the links. We use segment sizes of 16 kB, and allow the MULTI segments to be fragmented by the transport-layer protocols. For instance, the default aioquic implementation fragments each MULTI segment into segments of 1280 bytes.

c: THROUGHPUT WHEN TRANSFERRING 256 MB

In this test, we evaluate the performance of MULTI when using the Round Robin scheduler to distribute the load across the communication links. Although the Round Robin scheduler is not the most efficient scheduler to maximize the throughput, we use it to showcase the multiplexing capabilities of MULTI.
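To illustrate the kind of Round Robin scheduler used in these tests, the following is a minimal asyncio-flavoured sketch; the class and method names are ours, and the framing, MULTI headers, acknowledgments, and error handling present in the actual prototype are simplified away.

```python
import asyncio
from itertools import cycle
from typing import Sequence

class RoundRobinScheduler:
    """Toy round-robin multiplexer over already-open asyncio streams.

    `writers` is a list of asyncio.StreamWriter objects, one per open
    transport-layer connection. Real MULTI segments would also carry a
    header (version, Session ID, sequence number); here we only show how
    the payload is spread evenly across the connections.
    """

    def __init__(self, writers: Sequence[asyncio.StreamWriter],
                 segment_size: int = 16 * 1024):
        self._writers = cycle(writers)       # rotate over the connections
        self._segment_size = segment_size

    async def send(self, data: bytes) -> None:
        # Split the application stream into fixed-size segments and hand
        # each segment to the next connection in round-robin order.
        for offset in range(0, len(data), self._segment_size):
            segment = data[offset:offset + self._segment_size]
            writer = next(self._writers)
            writer.write(segment)
            await writer.drain()             # respect per-connection backpressure
```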
3) MOVING BETWEEN NETWORKS

One of the main goals of MULTI is to support mobility. In the previous tests, the networks the test laptop has been connected to have been static. However, when devices move between networks, the active connections are affected, which can cause degraded service. Here, we emulate the laptop moving from one network to another, i.e. either switching between active interfaces or aggregating multiple interfaces when they become available. Aggregating multiple connections over multiple network interfaces should increase the total bandwidth available to MULTI. Switching between networks can either cause total connection loss, or packet loss and momentary drops in bandwidth. We emulate network switching with make-before-break, i.e. the new connection has been established before the old connection is lost. In our tests, the network switch is triggered manually after 15 seconds. This emulates the case where the user has configured MULTI to use WiFi when available and cellular as a backup when WiFi is not available. In real networks, the changes happen when the devices move from one place to another, and network changes can be very abrupt. Our current MULTI prototype does not yet listen to network events; however, it has hooks that can be extended to support switching between active connections. For each test we report the values measured at our client laptop. Furthermore, we measure the values observed when the client is transmitting the data to the server (Tx), and when it is receiving the data from the server (Rx). Notable here is the cellular connection, whose bandwidth depends on whether the laptop is sending or receiving, because sending data over the cellular connection we used is slower than receiving data over the same connection.

1) BASELINE PERFORMANCE

We present the results of our baseline measurements in Figure 4 and Figure 5. In Figure 4, we observe that iPerf can reach roughly 940 Mbps over Ethernet in both directions, and 200 Mbps over WiFi. Over the 5G link, iPerf can reach 190 Mbps download and 35 Mbps upload in our testbed. The asymmetry between download and upload is due to the cellular network technologies. These results provide us with the maximum achievable throughput without the overheads caused by MULTI or other protocols in our testbed. In Figure 5(a) we observe that the time to establish a connection is smaller when using Ethernet compared to WiFi, and by a large margin the most time is needed when using 5G. We believe that this is because of the larger RTTs for WiFi and 5G, and also due to radio characteristics of the wireless links that can cause the first packet to incur a larger latency than the subsequent packets [34]. Furthermore, when using the base asyncio library we observe that the time to open a TCP connection (2.09 ms over Ethernet, 5.57 ms over WiFi) is smaller than that for a QUIC connection (31.53 ms over Ethernet, 34.32 ms over WiFi, and 50.2 ms over 5G) because QUIC also performs a TLS handshake during connection establishment. In Figure 5(b), we observe that MULTI and the baseline TCP and QUIC libraries require a similar amount of time to transmit 25 kB; the only notable difference is when MULTI receives data over QUIC. This is most likely a consequence of the fragmentation caused by the MULTI segment header. The additional bytes required by the MULTI header increase the number of IP packets, which in turn increases the number of asynchronous events and calls to the SSL libraries in the QUIC implementation.
This increase in events coupled with the slower laptop CPU results in the additional time. In Figure 5(c), we observe that using the asyncio libraries (Base) results in a significant decrease in the throughput compared to iPerf, and this decrease is significantly larger for QUIC; for instance, we were able to achieve only 844 Mbps Tx (10.2% decrease) and 724 Mbps Rx (22.9% decrease) when using TCP over Ethernet, and 48 Mbps Tx (94.8% decrease) and 78 Mbps Rx (91.7% decrease) when using QUIC over Ethernet. We also observe a similar decrease in throughput for WiFi and 5G. We believe that the difference in the Tx and Rx throughput is because of the slower laptop CPU coupled with the increased CPU load incurred when reading data at the client; a write results in writing to an asyncio stream buffer, while a read results in asyncio events for reading the requested amount of data from the socket buffers. We also observe that MULTI incurred a small overhead (less than 5% decrease in throughput) when using TCP over Ethernet and WiFi compared to the baseline asyncio, but it incurred a high overhead (up to 20% decrease in throughput) when using QUIC over Ethernet and WiFi. When using 5G, we observe that the performance of MULTI degrades considerably when receiving data over TCP. This is primarily due to the increased number of asyncio events created by MULTI to process the incoming TCP segments and the longer latency over the 5G link (46 ms compared to less than 2 ms over WiFi or Ethernet). For WiFi and Ethernet, the network latency is small enough for the asyncio event loop to batch the events together; however, this batching does not happen because of the delays, and the variance in the delays, in the 5G network we used. We are currently exploring approaches to address this shortcoming.

2) MULTI: THREE CONNECTIONS USED TO DEMONSTRATE ITS MULTIPLEXING CAPABILITIES

In this test, we measure the performance of MULTI using three different link-layer transports, namely Ethernet, WiFi, and 5G. We use these tests to demonstrate the multiplexing capabilities of MULTI and present its performance when using multiple available link-layer and transport-layer protocols. We expect to see poor performance because of the inherent weakness of the basic Round Robin scheduler with highly asymmetric links. In Figure 6(a) we observe that the time to open three connections (for combinations of TCP and QUIC) is longer than the time observed to open a single connection in Figure 5(a). The time to open three connections is the smallest when all the connections use TCP, because QUIC requires the TLS handshakes; we plan to investigate the impact of using TLS over TCP in our future work. In Figure 6(b) we observe that using multiple links increases the duration to transfer 25 kB of data. This is in line with other works that have compared the effect of multipath protocols for small and large file transfers [12], [33]. Furthermore, sending the data is faster than receiving because of the slower laptop CPU coupled with the WiFi and 5G power savings. Figure 6(c) highlights the benefits and shortcomings of a Round Robin scheduler: it can alleviate the load on the slower links, but the achieved throughput is significantly smaller compared to when using only the link with the best performance. Note that our testbed is biased towards the 1 Gbps Ethernet. For instance, the performance for T, T, T, i.e.
For instance, the performance for T, T, T, i.e. all TCP, is best as expected, although due to the nature of the Round Robin scheduler the speed is not as good as with a single TCP over Ethernet connection. However, the throughput is roughly twice that of TCP over WiFi or 5G, showing the benefit of aggregation. MULTI's scheduler can be tuned to account for the costs of using each link and the desired performance. For instance, if the faster link is expensive, MULTI can be tuned to minimize the cost while achieving the desired throughput.
3) MOVING BETWEEN NETWORKS
In Figure 7(a) and Figure 7(b), we present the results of MULTI aggregating bandwidth over a 5G and a WiFi network when downloading data from the Internet. In these tests, MULTI first uses the cellular network to establish a connection and start transferring data. At 15 seconds, the WiFi network becomes available, and MULTI aggregates both interfaces for additional bandwidth. In the optimal case, MULTI should reach speeds similar to those in Figure 5(c), i.e. if we aggregate TCP over WiFi and cellular, the best download speed MULTI should reach is around 350 to 400 Mbps. However, we do not achieve these speeds because our prototype is currently single-threaded, and there are overheads in handling multiple streams simultaneously in single-threaded applications. In Figure 7(a) we show MULTI transferring data over TCP. In the beginning, MULTI reaches a bandwidth of around 100 Mbps over 5G. At the 15 second mark, MULTI aggregates the WiFi network into the existing connection. After aggregation, MULTI gains additional bandwidth and reaches a maximum bandwidth of 340 Mbps. This bandwidth decreases to roughly 300 Mbps after a short time, most likely due to full receive buffers at the client laptop. We also observe some abnormal throughput in the WiFi connection. We believe that this could be due to various factors, including a) buffers filling up in our AP or the test laptop, b) background traffic in our networks, and c) background activity on our laptop. In Figure 7(b) we show results when MULTI only uses QUIC to transfer data. At the 15 second mark, we add another QUIC connection and start aggregating the connections. Here MULTI reaches its peak performance of 75 Mbps. We also observe a small drop in 5G bandwidth during the experiment. We believe this is caused by the interaction of two independent QUIC connections, the higher latency of tens of milliseconds over 5G, and Python's asyncio event loop. In Figure 8, we show the performance of MULTI when we use a mix of TCP and QUIC connections over 5G and WiFi. In Figure 8(a), we use TCP over 5G and QUIC over WiFi. When we aggregate QUIC over WiFi with TCP over 5G, we observe a drop in the throughput over our 5G link. While QUIC over WiFi reaches a throughput similar to that in Figure 7(b), the throughput of TCP over 5G drops to 50 Mbps. As above, we believe this drop to be due to the client handling the QUIC stream and the difference in bitrate and latency between the 5G and the WiFi connections. These lead to the client receiving more MULTI segments encapsulated in QUIC frames over WiFi, which take longer to process than plain MULTI segments over TCP. Similarly, Figure 8(b) shows the case where we initiate the connection with QUIC over 5G and aggregate it with TCP over WiFi. As earlier, QUIC over 5G rises to a bandwidth of around 30 Mbps. However, when the TCP over WiFi connection is added, we again observe the drop in QUIC over 5G due to the above-mentioned reasons. The throughput of the WiFi connection also becomes unstable at around 23 seconds, most likely due to the behaviors previously mentioned. In Figure 9, we show how MULTI behaves when we manually switch between transport links. Here we use make-before-break when switching the transports, i.e. the connection over the new link is established before the old link is broken. In Figure 9(a), we show how MULTI behaves when we switch TCP over 5G to WiFi and vice versa. MULTI achieves the same performance as before when a single interface is used. Notable here is the TCP window size adjustment. In the beginning, the TCP window size slowly increases to reach the maximum bandwidth over 5G. However, when we perform the switch back to 5G from WiFi, the window size converges much more rapidly. Figure 9(b) presents the results of a similar experiment with both transports using QUIC. Here MULTI reaches the same speeds as above when using single transports. When the switch between networks happens, we see overlap between connections due to buffers not being empty. As we use make-before-break in our testbed, this allows MULTI to empty buffers instead of losing data.
FIGURE 7. (a) and (b) show how MULTI behaves when moving from a 5G network into the range of a WiFi network, using TCP or QUIC for both connections. In (a), after MULTI aggregates 5G and WiFi, it almost doubles the available bandwidth. In (b), MULTI gains additional bandwidth after aggregation with QUIC; however, QUIC's congestion window increase is slower over 5G due to the longer latency, and QUIC over 5G loses bandwidth, possibly due to the heterogeneous links and the MULTI implementation.
FIGURE 8. In (a), after MULTI aggregates TCP over 5G and QUIC over WiFi, the overall bandwidth increases but TCP over 5G drops, most likely due to queuing at the client. In (b), MULTI gains additional bandwidth after aggregation; the variance in WiFi is most likely due to buffering at the AP or at the client.
FIGURE 9. (a) and (b) show how MULTI behaves when switching between the 5G and WiFi networks; a switch is triggered every 15 seconds. (a) shows the behavior when both connections use TCP, and (b) the behavior with QUIC, where QUIC's congestion window increase over 5G takes longer than over WiFi.
The results in Figure 7, Figure 8, and Figure 9 demonstrate how MULTI can be used when devices switch between networks. A caveat of our evaluation is that the make-before-break approach we used may not be possible without help from the network. However, when moving into WiFi range and switching to it from 5G, make-before-break should always be possible as long as the 5G (or slower) coverage is available. When moving from WiFi to 5G, the WiFi connection loss can be more abrupt; however, if MULTI uses hooks into the underlying network interfaces, it could detect impending connection loss from the WiFi signal strength and trigger the switch automatically.
V. DISCUSSION AND FUTURE WORK
Our evaluation shows that MULTI can offer programmable multiconnectivity over multiple interfaces and transports. Although our current prototype highlights the strengths of MULTI's architecture, it also shows the weaknesses of the prototype. There are many avenues to optimize its performance; for instance, we were unable to use uvloop [35] because it currently does not support binding UDP sockets to specific interfaces. Currently our solution's network view is local to the devices on which MULTI is running.
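For context, the sketch below shows the kind of per-interface binding the prototype relies on: a UDP transport is created with a local address taken from a specific interface, which is the operation reported above as unsupported when the default event loop is replaced with uvloop. The protocol class and the addresses are placeholders for illustration.

import asyncio

class DatagramSink(asyncio.DatagramProtocol):
    """Placeholder protocol; a real deployment would hand the datagrams
    to a user-space QUIC implementation."""
    def datagram_received(self, data, addr):
        print(f"received {len(data)} bytes from {addr}")

async def open_bound_udp(local_ip: str, remote: tuple[str, int]):
    # Binding the socket to an address owned by a specific interface
    # (e.g. the WiFi or 5G address) forces the datagrams onto that link.
    loop = asyncio.get_running_loop()
    transport, protocol = await loop.create_datagram_endpoint(
        DatagramSink,
        local_addr=(local_ip, 0),   # port 0 lets the OS pick a free port
        remote_addr=remote,
    )
    return transport, protocol

# Switching to uvloop would normally be a one-liner before asyncio.run():
# import uvloop; uvloop.install()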
We envision a larger system, where different network controllers are available with which MULTI can exchange information to assess the available capacities of the networks. By working together with these controllers, and adding new capabilities to MULTI, we envision that programmable multiconnectivity can be achieved.
A. COUPLED CONGESTION CONTROL
Transport protocols such as TCP and QUIC implement their own congestion control mechanisms. These mechanisms work on a single-connection basis, i.e. each connection has its own view of congestion. However, when multipath transport protocols such as MPTCP are used, there is a need for coupled congestion control mechanisms, as each path will have different congestion characteristics [36]. The current version of MULTI does not support coupled congestion control. However, coupled congestion control can be implemented in user space, as demonstrated by Multipath QUIC (MPQUIC). Specifically, MPQUIC uses the Opportunistic Linked Increase Algorithm (OLIA), which we are planning to implement to allow MULTI to handle congestion better [37]. However, some of the challenges we envision include the impact of having a congestion control algorithm in user space when using transport protocols such as TCP, whose congestion control algorithms are implemented in the OS kernel.
B. NETWORKING APIs
MULTI exposes two handles: one for reading data that arrives from the network, and the other for sending data to the remote peer. Internally, MULTI uses the socket file descriptors via the asyncio Python library. MULTI's APIs are inspired by URLSession [31] and the APIs recommended by the IETF TAPS architecture [25]. Specifically, MULTI's API complements these efforts, which are aimed at hiding the semantics of the socket API from application developers. We acknowledge that the current APIs in our prototype are not comprehensive because they were primarily designed to implement our prototype of MULTI.
C. END-TO-END SUPPORT
Our current MULTI prototype requires both endpoints to support MULTI. Currently, the application developer has to know that the destination is also using MULTI. In the future, we envision MULTI being able to determine whether the destination is MULTI-capable through methods similar to TAPS, MPTCP, and Happy Eyeballs. This will allow MULTI to be properly agnostic to the underlying networks. Similarly, the work on pluginizing QUIC is relevant to MULTI [38]. Pluginized QUIC allows endpoints to exchange protocol plugins on a per-connection basis, thus extending the features that either endpoint supports. Protocol plugins would allow MULTI to use new user-space protocols that were not available when the application was created.
D. MIDDLEBOXES
Different multiconnectivity protocols have issues with middleboxes because many middlebox implementations do not behave well with protocols they do not support [9], [12], [38]. As such, a single multiconnectivity protocol may not be able to use all available networks. For example, if a device is connected to two networks, one of which supports MPTCP while the other does not understand the MPTCP TCP options and discards MPTCP packets, then MPTCP is restricted to a single path, while MULTI can use both networks because it can use more than one protocol simultaneously. At the same time, encrypted communication makes traversing middleboxes even more difficult, as we discuss below.
E. ENCRYPTED COMMUNICATIONS
Encrypted communications have become the norm. For example, QUIC has been designed to use encryption from its inception. QUIC encrypts most of the packet payload while leaving enough of the headers unencrypted for middleboxes to detect the QUIC protocol [10]. The goal of MULTI is similar: encrypt the data of the flows when requested. However, this causes MULTI to encrypt data twice when using underlying protocols, such as QUIC, that already encrypt data. This is problematic when dealing with load balancers, as the balancers see QUIC packets that contain encrypted MULTI frames. This can cause different flows of MULTI to reach different servers. This requires MULTI to decide which connection to use based on the requested connection priorities and to mark the connection to the other server as invalid. While not fatal for MULTI, this is not optimal and requires further study.
F. HEAD-OF-LINE BLOCKING
Head-of-Line blocking is an issue when dealing with network connections that stream buffered data over heterogeneous networks. In MULTI, we can employ methods similar to those of MPQUIC to avoid Head-of-Line blocking [12]. MPQUIC uses the same packet scheduler as MPTCP uses in the Linux kernel, with a few modifications. These include methods such as specific WINDOW_UPDATE frames to estimate the latency and the bandwidth of the different paths. It then uses the results of these estimates to adjust its packet scheduler to avoid Head-of-Line blocking. However, this approach will not work when using TCP as the transport for exchanging streams, and MULTI can be configured to avoid using TCP when used by applications that are sensitive to Head-of-Line blocking.
G. COSTS OF SCHEDULING AND PAYLOAD INCREASES
Using multiple transports simultaneously carries its own costs. As with any multipath protocol, scheduling traffic over multiple paths inherently incurs additional latency [12]. This latency can be mitigated with scheduler designs, where more complex schedulers can take the differences between transports into account and optimize the scheduling decisions. Similarly, using a session layer protocol increases the packet sizes, as the headers need to carry enough information for the endpoints to associate packets with the connections they belong to. This is true for other multipath protocols regardless of the layer at which they operate. For example, MPTCP carries connection IDs and other information as TCP options [13]. As with the scheduler design, the headers can be designed to carry only as much information as is required for the protocol to operate. To achieve this, QUIC uses different sets of headers for connection establishment and data transfer [10].
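To illustrate the per-segment overhead discussed above, the sketch below prepends a small session-layer header to each payload. The field layout (connection ID, stream offset, payload length) is an assumption made for the example and is not the actual MULTI wire format.

import struct

# Assumed example layout: 4-byte connection ID, 8-byte stream offset,
# 2-byte payload length, i.e. 14 bytes of overhead per segment.
HEADER_FMT = "!IQH"
HEADER_LEN = struct.calcsize(HEADER_FMT)

def encode_segment(conn_id: int, offset: int, payload: bytes) -> bytes:
    return struct.pack(HEADER_FMT, conn_id, offset, len(payload)) + payload

def decode_segment(segment: bytes) -> tuple[int, int, bytes]:
    conn_id, offset, length = struct.unpack(HEADER_FMT, segment[:HEADER_LEN])
    return conn_id, offset, segment[HEADER_LEN:HEADER_LEN + length]

With a 1400-byte payload such a header adds only about 1% in bytes, but every additional IP packet it causes also adds an event-loop wakeup, which matches the explanation given for the QUIC overhead observed in Section IV.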
H. CAVEATS OF MULTI
The MULTI prototype used in our evaluation is not perfect. Instead, its main purpose is to highlight both the problems of the current state of multiconnectivity and the insights that can be gleaned from the proof-of-concept implementation. As discussed in the earlier sections, none of the existing multiconnectivity protocols and solutions work in all scenarios. Some of them come close: for example, QUIC is already in use and its popularity is ever increasing, and MPTCP has been deployed on some mobile phones. TAPS and NEAT also show how systems can move from being restricted to a single protocol to a protocol-agnostic system. MULTI, on the other hand, tries to combine multiple protocols into a unified connection over multiple interfaces. It is not restricted by the protocols themselves, but it is restricted in other ways, as discussed in this section. There are several large open issues with MULTI, namely congestion control and Head-of-Line blocking. Our MULTI prototype is based on asyncio, which has its own performance issues. Optimizations such as uvloop address some of these issues, but they are not a complete replacement for the native asyncio Python libraries. For instance, we could not use uvloop because it does not support binding UDP sockets to specific interfaces. Our prototype is therefore a proof of concept and is not designed to optimize I/O throughput.
VI. CONCLUDING REMARKS
Multiconnectivity, be it multihoming or multipath, has been studied for decades [18]. In this paper, we discuss solutions that operate on different layers of the network stack and highlight some key issues applications may face when using them. For example, MPTCP is a good replacement for TCP, but its deployment is hindered by Internet ossification and middleboxes. Existing solutions do not offer applications and users control over the set of chosen communication interfaces. Insights from the seminal work of Bahl et al. [39] and the recent works on TAPS motivate us to believe that this level of control will be vital and useful in next-generation networks and end-user applications. We therefore created our solution, MULTI, which allows applications to specify their requirements and can be extended to request the network to fulfill them. It is agnostic to the underlying network and transport protocols. It draws from QUIC and MOSH on how to handle connectivity and roaming in the session layer, i.e. closing a link will not break a connection as long as there is an alternative route. While MULTI can currently exchange a single data stream inside multiple connections, we plan to extend it to multiplex multiple streams. We also envision a system where networks can provide information to MULTI. For example, MULTI could use MAMS [40] to negotiate the required QoS/QoE with the networks when selecting the set of communication interfaces and protocols to use. Although our prototype highlights the strengths of MULTI's architecture, there are many avenues to optimize its performance; for instance, we were unable to use uvloop [35] to replace the default Python event loop as it currently does not support binding UDP sockets to specific interfaces. This shows up as poor performance when using QUIC. Although our MULTI prototype is not optimized, it still highlights the gains that can be achieved with programmable multiconnectivity.
APPENDIX A ARTIFACTS
The MULTI prototype and results are available at: https://version.helsinki.fi/multiconnectivity/multiconnscratch
APPENDIX B RESULTS WHEN USING IPv4
In this section, we present some of our experimental results to highlight that MULTI is agnostic to the underlying IP protocol. These results use the same test environment as the results presented in §IV, with a few differences. The test environment does not include a 5G connection because of the double NAT issue discussed in §IV; as such, there are only two transport interfaces for MULTI to choose from in these IPv4 experiments. In Figure 10, we observe that iPerf and MULTI can reach speeds and bandwidths with IPv4 similar to those presented in Figure 4 over IPv6. Figure 11 shows the results of MULTI using two connections over different combinations of interfaces and protocols. In Figure 11(a) we present the time to open both connections.
The connection establishment time is the smallest when both connections use TCP, and the largest when both connections use QUIC because of the TLS handshakes performed by QUIC. As expected, the time to open two TCP connections is twice the amount observed in Figure 10(b); we observe similar behavior for QUIC. In Figure 11(b) we present the time to transfer 25 kB; we observe that MULTI performs significantly better when receiving data over two QUIC connections. However, we believe this to be an artifact of our proof-of-concept implementation rather than representative behavior. In Figure 11(c), we show the benefits and shortcomings of using the round-robin scheduler. While this scheduler can be used to alleviate the load on the WiFi link, the achieved throughput is significantly smaller compared to when using only the link with the best performance (the Ethernet link shown for MULTI in Figure 10(c)). For instance, the performance for Eth-TCP, WiFi-TCP is best as expected, although due to the nature of the Round Robin scheduler the speed is not as good as with a single TCP over Ethernet connection. However, the throughput is roughly twice that of TCP over WiFi, showing the benefit of aggregation.
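Since the appendix only swaps the address family, nothing in the connection setup needs to change; below is a hedged sketch of an IP-version-agnostic TCP connect using only the standard library. The endpoint is a placeholder, and Python 3.8 or later is assumed for the Happy-Eyeballs option.

import asyncio

async def connect_any_family(host: str, port: int):
    # asyncio resolves both AAAA and A records; with a Happy-Eyeballs delay
    # set, IPv6 and IPv4 attempts are raced instead of tried strictly in
    # sequence, so the same code works over either IP version.
    reader, writer = await asyncio.open_connection(
        host, port, happy_eyeballs_delay=0.25
    )
    return reader, writer

# Placeholder usage:
# asyncio.run(connect_any_family("example.org", 443))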
Music Audio Rhythm Recognition Based on Recurrent Neural Network Music rhythm detection and tracking is an important part of music understanding system and music visualization system. Based on the important position of rhythm in music expression and the wide range of multimedia applications, rhythm extraction has become an important hotspot in computer music analysis. In the field of audio recognition research, deep learning can automatically learn the features of audio and extract the rhythm of music. This paper takes music audio rhythm recognition as the main research object and carries out a series of researches with deep learning GRU neural network as the main technical support. A residual network is introduced into the GRU network model, and it is found that when the residual network is at 50 layers, the model has the highest accuracy for audio rhythm extraction. After adjusting the model parameters through experiments, this paper concludes that the average recognition accuracy of the ResNet_50-GRU model for recognizing the rhythm of the music audio in the MSD, AudioSet, and FMA data sets is 92.5%. Introduction Music is an acoustic sign, the most abstract of all human art genres, and its ideas and emotions have evolved over the centuries. The main elements of music are rhythm, melody, harmony, and sound. With the proper research and application of computer technology in the multimedia field, the multimedia process has developed rapidly. However, as the most important form of multimedia audio data, musicians perfectly combine the basic elements of music through computers, presenting a rich emotional world. Deep learning [1] is a multilevel neural network that can recognize and simulate nonlinear mapping in specific situations and has achieved very successful results in the fields of image recognition and machine translation. Deep learning can be used as a classification tool in speech recognition research, but deeper network structures can enhance learning capabilities, and undersupervised learning, deep learning can automatically learn audio features. Rhythm usually means the phenomenon of changes in the strength and weakness of music. Without rhythm, music loses its ability to express creativity. With the importance of rhythm in music performance and the expansion of multimedia applications, rhythm recognition has become an important focus of intelligent music analysis and has a wide range of applications in computer multimedia and other fields. At the same time, with the development of computer technology, communication, electronic technology, and multimedia, researchers will focus on the field of rhythm design using logical processes to control music production and try to establish the use of artificial intelligence for music creation. The proposed models or algorithms for music rhythm extraction have long been a hot topic in computer analysis, and they are constantly being improved. The following are the main breakthroughs in audio rhythm recognition using a recurrent neural network: First, this paper uses a GRU neural network to build a short-term music audio rhythm extraction and recognition model. Second, in the experiment, the residual network is introduced into the GRU network model, and it is verified that the residual network can improve the model's recognition accuracy to some extent. Third, this paper examines the impact of various activation functions on the GRU model's recognition accuracy. 
Related Work Holden proposed a real-time character control mechanism based on phase function neural network. In this network structure, the weights are calculated by a circular function using the phase as input. As the stage progresses, the system takes the previous state of the character, the geometry of the scene as input user controls and automatically generates high-quality motion to achieve the desired user controls. A new alternating update clique convolutional neural network structure (CliqueNet) can obtain deeper network, which improves the utilization of network features. To maximize the transfer of semantic information [2], Wu introduced the clique of CliqueNet. He proposed a new fully convolutional network based on the encoder-decoder structure, called CyclicNet, an alternating update network for semantic segmentation. In addition, long-hop connections and shorthop connections are added to the network to avoid vanishing gradients [2]. Granero-Molina proposed a recurrent neural network (RNN) for solving linear programming problems, which has good natural convergence and fast convergence. In order to achieve the optimal accuracy and computational complexity, an algorithm is also proposed. It presents the MATLAB-Simulink modeling and simulation verification of this recurrent neural network. The modeling and simulation results verify the theoretical analysis and effectiveness of the regression neural network for solving linear programming problems. The application of RNN to music recognition demonstrates the performance of recurrent neural networks [3]. Jin proposed an improved finitetime zero-convergence modified neural network (FTCZNN) to solve time-varying complex linear matrix formulas (TVLCME) online. To converge the error matrix to zero, the new FTCZNN uses a new design formula. The new FTCZNN converges to the TVLCM theoretical solution in finite time, according to theoretical analysis. He also developed CZNN to solve the same TVLCM for comparison. The new FTCZNN has better convergence performance than the fast convergence CZNN [4]. To obtain the frequency spectrum, Ma uses a short-time Fourier transform of the music signal. The autocorrelation properties of the endpoint intensity curves were used to extract pulse code modulation (PCM) values. He proposed a multipath search and cluster analysis-based rhythm detection algorithm. That is, he proposed a multipath detection and tracking algorithm based on the clustering algorithm and incorporating the idea of multipath tracking. It eliminates the drawback that clustering algorithms necessitate the use of digital interface tools (MIDI) to assist the input in achieving the desired result. The algorithm uses a PCM signal as an input because it is more practical [5]. Based on the fusion of visual and acoustic features, Nanni et al. proposed a new and effective audio automatic classification method. They assess and compare sound's acoustic characteristics. They then combined these characteristics into an ensemble that improved classification accuracy over more advanced methods. A different support vector machine (SVM) [6] is trained for each feature descriptor. Music Audio Rhythm Recognition Based on Recurrent Neural Network 3.1. Recurrent Neural Network (RNN). RNN is a type of neural network that is very effective for data with sequence properties. 
It can mine time series information and semantic information in data and make use of this ability of RNN to make deep learning models make breakthroughs in solving problems in NLP fields such as speech recognition, language model, machine translation, and time series analysis [7]. The difference between RNN and fully connected neural network is that it can combine the content of the data before and after to train the model and introduces the time weight matrix, which can accurately identify the content in the context of special scenes. The RNN structure diagram is shown in Figure 1. V and U are the parameter matrix from the hidden layer to the output layer and the parameter matrix from the input layer to the hidden layer, respectively. The training of RNN is similar to the training of traditional ANN, and the error back-propagation is used, but the parameters W, U, and V are shared. In the gradient descent algorithm, the output of each step is not only affected by the current step network but also depends on the network state of the previous steps [8]. (1) Forward propagation RNN forward propagation is similar to the perceptron model with only one hidden layer. Assuming a sequence X of length T, the number of RNN network input layer units is A, and the number of hidden layer and output layer units is B and C. Iteratively calculate Formulas (1) and (2) from time t = 1 until the entire input sequence is completed. Among them, x t a represents the value of the input unit a at time t, and s t b , and q t h represent the value collected by the hidden unit b of the neural network at time t and the value calculated by the planning function, respectively. (2) Output layer The output vector of the neural network is given by the output layer activation, and the input value of each output layer c of the neural network is the sum of q t h . Wireless Communications and Mobile Computing The number of output layer units and the selection of the activation function should be determined according to the specific application scenario. When the number of classifications is large, the Softmax function can be used as the activation function to obtain the probability value of the classification result. The target probability can be expressed as (3) Backward propagation The loss function for a given RNN model is Model parameters are derived using the BPTT algorithm. BPTT is similar to standard back-propagation, BPTT contains repeated chain rules. The difference is that for a recurrent neural network, the activation function of the hidden layer from the loss function will not only affect the output layer but also affect the next moment of the hidden layer. The Formula is The complete sequence of ϱ is iteratively calculated from time T using Formula (8), and the final derivation result of the completed parameters is 3.2.1. GRU Neural Network. GRU is an improved version of RNN, which can well capture the nonlinear relationship between sequence data and alleviate the phenomenon of gradient disappearance. The improvement mainly includes two aspects. One is that targets at different positions in the sequence have different effects on the state of the current hidden layer, and the earlier the effect, the smaller the effect. That is, each previous state weights the current influence by distance, and the farther the distance, the smaller the weight. Second, when an error occurs, the error may be caused by one or several data, so only the corresponding content weight should be updated. 
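The gated update summarized here, and detailed through Formulas (10)-(13) below, can be sketched in a few lines of NumPy. The weight shapes and the sigmoid/tanh choices follow the standard GRU formulation; biases are omitted, and this is an illustration rather than the paper's implementation.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, W_z, U_z, W_r, U_r, W_h, U_h):
    """One GRU time step for input x_t and previous hidden state h_prev."""
    z_t = sigmoid(W_z @ x_t + U_z @ h_prev)              # update gate
    r_t = sigmoid(W_r @ x_t + U_r @ h_prev)              # reset gate
    h_cand = np.tanh(W_h @ x_t + U_h @ (r_t * h_prev))   # candidate state
    # The update gate mixes the previous state with the candidate state;
    # a small z_t keeps the output close to the previous hidden state.
    return (1.0 - z_t) * h_prev + z_t * h_cand

Applying this step across the frames of a log-Mel spectrogram yields the sequence representation that the rhythm classifier operates on.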
GRU adds a gate control unit on the basis of the standard RNN, which can control the flow of information at different time steps in the network. The structure of GRU is shown in Figure 2 [9][10][11]. In the GRU network, all parameters are trained through the back-propagation algorithm. The working principle of GRU is that it has two gate control units, the update gate $z_t$ and the reset gate $r_t$. The update gate is used to balance the proportion of historical memory information and the input information at the current time, and the reset gate determines how much of the hidden-layer state information from the previous moment is forgotten. The smaller the update gate value, the more the model output is inclined towards the state of the previous hidden layer, and the smaller the reset gate value, the less historical information is introduced [12][13][14]. Both values depend on the hidden layer state $h_{t-1}$ at the previous time step in the network and the input $x_t$ at the current time step, as shown in Formulas (10) and (11). Then, through the reset gate, the candidate vector added to the hidden state of the current time step is calculated, as shown in Formula (12). Finally, with the update gate value as the weight, the candidate vector and the hidden layer state of the previous time step are mixed to obtain the output of the GRU network at time step t. The main reason why the GRU network can slow down the gradient disappearance phenomenon is that the gating unit acts as a "short circuit" mechanism. Through the control of the parameters of the gating unit, the previous memory is selectively retained without being erased, and the gradient is not easily attenuated when it propagates backward along the time axis, which greatly slows down the disappearance of the gradient. LSTM Neural Network. LSTM is a long- and short-term memory model, which was proposed in 1997 and is also a special recurrent neural network structure. The key structure of LSTM is a unit state. This unit state has the ability to automatically add or delete information. The way to achieve these functions is to use a threshold to filter the information output by this memory unit. It includes an input threshold layer, a forget threshold layer, and an output threshold layer. These thresholds can be self-looping parameters and can be changed according to the context. The invariance of the information flow process is guaranteed by the combination of multiple gate structures, and it is through the mutual cooperation of these gate structures that the cell state can be well controlled and protected in LSTM [15][16][17]. The schematic diagram of its structure is shown in Figure 3. Music Audio Rhythm Recognition Method 3.3.1. MFCC Feature Extraction. MFCC is the most commonly used feature for audio recognition problems, mainly because MFCC can represent how the human ear processes audio. The Mel filter bank can relatively accurately describe the filtering effect of the cochlea. The Mel frequency is used in the Mel filter bank to describe the audio features, and the mapping relationship between it and the normal Hertz frequency can be described by Formula (14):
$f_{\text{Mel}} = 2595 \log_{10}\left(1 + \frac{f_{\text{Hz}}}{700}\right) \quad (14)$
The MFCC parameter calculation method is shown in Figure 4. The main steps are preprocessing, FFT, Mel filtering, and discrete cosine transform. Pre-emphasis mainly enhances the high frequencies of the audio and also highlights the formants.
The calculation process mainly passes the digitally sampled signal, that is, the input signal, through a high-pass filter. The formula in the frequency domain is
$H(z) = 1 - \mu z^{-1},$
where $\mu$ is the pre-emphasis coefficient. Framing is the process of dividing the original audio into multiple short audio frames in audio signal preprocessing. The main reason is that the amplitude of the original audio changes drastically over its whole duration, and by framing the audio signal can be considered approximately stationary over a short period of time. In general, the size of a frame is between 20 milliseconds and 40 milliseconds. If the frame length is set too small, it will result in too few sample points, which is not convenient for analysis. If the frame length is set too large, the signal will not be stable enough. In order to ensure the continuity of adjacent frames, adjacent frames will have a certain overlap, which also avoids the loss of critical point information. After framing, each audio frame needs to be windowed to facilitate the subsequent Fourier analysis of the audio. The advantage of windowing is that it can not only avoid the Gibbs effect but also make the global signal analysis more continuous. It can also make the original audio signal exhibit some characteristics of periodic functions and reduce the size of the side lobes and spectral leakage after the FFT. However, windowing will cause the loss of signal energy at both ends of the audio frame. In order to avoid missing important audio information, there is usually a partial overlap between adjacent frames. Windowing is performed on each frame of the signal during calculation. There are many optional window functions, such as rectangular, Hamming, and Gaussian windows. By using a windowing technique, the impact of performing the FFT over non-integer cycles can be minimized [18]. The windowed frame is
$y_n(n) = z(n)\,y(n),$
where $y(n)$ is the time-domain signal, $z(n)$ is the window function, and $y_n(n)$ is obtained by truncating the time-domain signal with the window function. In order to obtain the frequency domain information of the signal, a fast Fourier transform is performed. The modulus of the calculated signal spectrum is squared, and the result is the power spectrum of the signal. The Mel filter bank is composed of triangular bandpass filters. The triangular bandpass filter can have different forms; for example, the filter waveform can be of equal height, it can change exponentially, or it can be in the form of an inverse filter. Taking f(r) to represent the center frequency of the rth triangular bandpass filter, the frequency response of the filter bank is a set of triangular responses centered at the frequencies f(r). The discrete cosine transform actually reduces the dimensionality of the data, which is a kind of lossy compression. It is mainly aimed at tasks that do not require high compression accuracy. The discrete cosine transform can concentrate the signal energy. N is the dimension of the MFCC features to be extracted. 3.3.2. Description of Frequency Measurement Algorithm. In order to accurately measure the frequency of the output signal, the frequency measurement algorithm measures the signal period and converts it to a frequency. The period measurement uses an accumulation method that records the number of pulses N and the times A and B of the first and last pulses within 200 ms. The measurement cycle process is as follows: First, the number of pulses is initialized, N = 0.
Secondly, record the count value A of the counter when the first pulse comes, and N = 1 at the same time. Then, record the count value B of the counter when the next pulse comes, and N = N + 1, then calculate the time difference, if the result is less than 200 ms, return to the previous step, stop counting until the difference is greater than or equal to 200 ms, and then calculate the period [19,20]. Using the evaluation algorithm, the measurement data are shown in Table 1. The measured period is converted to obtain the corresponding signal frequency. The performance of the frequency measurement algorithm is shown in Figure 5, and the relative error is used as the performance index. The relative error calculation method is As can be seen from the Figure 5, the frequency measurement algorithm takes the relative error as the performance index, and the calculated maximum relative error is 2.05%, which is about 1 ms. Therefore, for low-frequency signals, the error is negligible. GRU-Based Audio Rhythm Extraction Model. The music signal will have a sudden change in energy at the general rhythm point if the music energy waveform is observed. The energy waveform of a piece of music is shown in Figure 6, and the energy difference between the rhythm and nonrhythm points can be clearly seen. As a result, the basic idea of audio rhythm extraction is that the maximum value of the sampling point frequency is compared as the rhythm point during a period of music audio. Then, by analyzing the time interval between the rhythm points, the interfering rhythm points are removed, and the maximum value detected in each second is used as the rhythm output point. This paper uses GRU network for rhythm recognition of music audio. The specific structure is shown in Figure 7, which mainly includes audio data input, logarithmic Mel spectrogram feature extraction, feature learning and training of feature vectors by GRU network, Dropout layer, activa-tion function for final recognition, and output rhythm recognition results. In the feature extraction step, logarithmic Mel spectrogram and CQT spectrogram are selected as system features. The logarithmic Mel spectrogram is based on MFCC extraction, and the step after DCT is removed in the MFCC extraction step. Then, two steps are added, namely, obtaining the energy spectrum and performing logarithmic operations on the energy spectrum. The choice of activation function will be compared in the experimental part to compare the recognition accuracy of models using different activation functions. Music Audio Rhythm Recognition Experiment Based on GUR Recurrent Neural Network 4.1. Experimental Data and Settings. In the experiment of music audio rhythm recognition method based on GUR recurrent neural network, this paper chooses to use MATLAB for simulation. This paper selects three public music audio data sets, namely MSD (Million Song Dataset), FMA (Free Music Archive), and AudioSet. Taking MSD as an example to introduce the music audio data set, MSD is similar to a resource integration platform, which collects data from 7 well-known and authoritative music communities such as SecondHandSongs dataset and Last.fm dataset. In addition to the original data of major music websites, MSD also conducted necessary analysis and extraction on them. AudioSet is an extended ontology of 632 audio event classes and a collection of 2,084,320 human-labeled 10second sound clips extracted from YouTube videos. FMA is a data set for music analysis, 1000 GB in size. 
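The log-Mel front end described above (MFCC extraction with the DCT step removed and a logarithm applied to the Mel energies) can be reproduced with a standard audio library. The sketch below assumes librosa; the 25 ms frame length, 10 ms frame shift, and 64 Mel bands match the preprocessing described next, while the file path and sampling rate are placeholders.

import librosa
import numpy as np

def log_mel_spectrogram(path: str, sr: int = 22050) -> np.ndarray:
    """Log-Mel features: framing + FFT + Mel filtering + log, without the DCT."""
    y, sr = librosa.load(path, sr=sr)
    mel = librosa.feature.melspectrogram(
        y=y,
        sr=sr,
        n_fft=int(0.025 * sr),       # 25 ms frame length
        hop_length=int(0.010 * sr),  # 10 ms frame shift
        n_mels=64,                   # 64 Mel bands
    )
    # The logarithm of the Mel energies gives the log-Mel spectrogram.
    return librosa.power_to_db(mel).T   # shape: (n_frames, 64) for the GRU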
Before conducting the experiment, each data set should be preprocessed, a certain number of sample data should be selected, divided into training set and test set according to the ratio of 3: 1, and the music audio duration of the selected samples should be unified. It also performs Mel filtering to obtain a Mel spectrogram of the audio in the data Table 2. After the frame length and frame shift of the preprocessing are fixed, the size of the audio duration determines the value of the first dimension in the two-dimensional matrix, and the Mel filter fixes the value of the second dimension. Taking the MSD data set as an example, the size of the Mel spectrogram is 498 * 64, and 498 represents the number of audio frames divided by 8 seconds of audio with a frame length of 25 milliseconds and a frame shift of 10 milliseconds. 64 represents the energy information obtained by the Mel filter for each frame after FFT and other operations. The Experimental Results of Audio Rhythm Recognition by Recurrent Neural Network. The first step in the experiment is to look at the recognition accuracy of traditional RNN neural networks, GRU neural networks, and LSTM neural networks, as well as the effect of the Lag Window size, or the size of the sliding window used for training and prediction, on the error value. The relative error of the model's recognition of audio rhythm points determines the accuracy value. The FMA sample data set is the data set used in this step. Figure 8 depicts the experimental results. Based on the experimental results, it can be concluded that the Lag Window value corresponding to each RNN's minimum error is not consistent, and that determining a general Lag Window parameter that is suitable for most RNN models is difficult. And the neural network rhythm recognition model's relative error value SMAPE has a trend of decreasing and then increasing, indicating that the size of the Lag Window and the recognition accuracy of the cyclic neural network music audio rhythm recognition model are not completely correlated. The main reason is that if the selected Lag Window value is larger, the range of identification data of each node is enlarged. That is to say, a part of data that is far away from the time to be recognized will be introduced, which will bring redundant information or noise, which will lead to loss of recognition accuracy. In addition, the increase in the dimension of model input data will also reduce the recognition efficiency of the model. Then, if the selected Lag Window value is small, the recognition model does not recognize enough information, and it is impossible to learn the peak change pattern within the sequence, and it is difficult to make a relatively accurate recognition. This also shows that the Lag Window, whether large or small, will have an adverse effect on the prediction results, that is, each prediction model is very sensitive to this parameter. Then, for the three cyclic neural networks used in the comparative experiments in this paper, within a certain range, the error mean of the GRU and LSTM models is smaller than that of the RNN model. Compared with GRU and LSTM, the mean error of GRU will be relatively small. The second step of the experiment is to introduce a residual network (Res Net) in the GRU music audio rhythm recognition model. The idea of ResNet is to assume that we involve a network layer and there is an optimized network layer, so often the deep network we design has many network layers as redundant layers. 
Then, we hope that these redundant layers can complete the identity mapping to ensure that the input and output through the identity layer are exactly the same. The specific layers are identity layers, which will be judged by yourself during network training. As the depth of the network increases, the model may degenerate. That is to say, if there are redundant network layers in the neural network model, the recognition accuracy may be lower than that of the model with fewer layers, so it is necessary for the model to train the redundant layers to be identical in the process of training the network model. Mapping layers, that is, data passing through these layers does not change the input and output values. In this step of the experiment, the rhythm recognition accuracy experiments were carried out on the MSD, AudioSet, and FMA data sets, respectively. The layers of the residual network were 20, 50, 80, and 110 layers, respectively. The experimental results are shown in Figure 9. It can be seen that in the experiments on the three types of audio data sets, the rhythm recognition accuracy is the highest when the residual network is 50 layers, and when the residual network layer is more than 50 layers, the increase in the number of residual network layers reduces the recognition accuracy. This may be because as the number of network layers deepens, a certain overfitting problem occurs, so the training accuracy cannot be improved, and the training time is also increasing. In summary, this paper chooses to introduce a residual network structure with 50 layers. The experiment next studies the relationship between the choice of activation function in the ResNet_50-GRU model and the model's rhythm recognition accuracy. The activation function is to introduce nonlinear factors into neurons, so that the neural network can approximate any nonlinear function arbitrarily, so that the neural network can be applied to many nonlinear models. The activation functions selected for comparative experiments are Softmax function, RELU function, and Tanh function, and experiments are also performed on different audio data sets. The experimental index is the recognition accuracy of audio rhythm. The training set experiment is carried out first, and then, the test set experiment is carried out. The experi-mental results are shown in Figure 10. It can be seen from the experimental results that the recognition accuracy of the ResNet_50-GRU recurrent neural network recognition model of the Softmax activation function is higher than that of the RELU activation function and the Tanh activation function model. This may be because the distribution of audio tempo over time may be closer to a discrete probability distribution, and the Softmax function is essentially a discrete probability distribution corresponding to multiclassification tasks. Taking the output recognition results of the test set as an example, compared with the model without the activation function, the recognition accuracy of the ResNet_50-GRU recurrent neural network recognition model with Softmax activation function on the MSD, AudioSet, and FMA audio data sets has been improved by 5.4%, 3.3% and 7.2%. The recognition accuracy on the FMA under the three data sets is relatively low, which may be due to the fact that the peak time of the audio signal of the experimental sample selected in this data set is relatively close, and the discrimination between the rhythm points of the audio is relatively low. 
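A model of the kind compared in these experiments can be sketched with Keras. The layer sizes, the number of residual blocks, and the per-frame two-class softmax output are illustrative assumptions; the paper's exact ResNet_50-GRU configuration is not reproduced here.

import tensorflow as tf
from tensorflow.keras import layers, models

def residual_block(x, filters: int):
    """A simplified residual block; ResNet-50 stacks many such blocks."""
    shortcut = x
    y = layers.Conv1D(filters, 3, padding="same", activation="relu")(x)
    y = layers.Conv1D(filters, 3, padding="same")(y)
    if shortcut.shape[-1] != filters:
        shortcut = layers.Conv1D(filters, 1, padding="same")(shortcut)
    return layers.Activation("relu")(layers.Add()([shortcut, y]))

def build_resnet_gru(n_frames: int = 498, n_mels: int = 64) -> tf.keras.Model:
    inputs = layers.Input(shape=(n_frames, n_mels))    # log-Mel spectrogram
    x = residual_block(inputs, 64)
    x = residual_block(x, 64)
    x = layers.GRU(128, return_sequences=True)(x)      # temporal modelling
    x = layers.Dropout(0.3)(x)
    # Per-frame rhythm / non-rhythm decision via a softmax over two classes.
    outputs = layers.Dense(2, activation="softmax")(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

The softmax output corresponds to the best-performing configuration reported above; swapping it for other activations is how the comparison across activation functions can be reproduced.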
When evaluating the recognition system, in addition to the statistical recognition accuracy, a confusion matrix can also be used for an intuitive description. The confusion matrix, also known as the error matrix, is a visualization tool that can reflect the accuracy of recognition from different perspectives. For the ResNet_50-GRU model whose activation function is the Softmax function, rhythm extraction and recognition tests are performed on a fast-three music piece. The energy envelope, frequency sampling points, and actual rhythm points of its audio are shown in Figure 11. The rhythm points in this segment of audio are represented by A-D; the identified rhythm points are represented by 1 and the non-rhythm points by 0. The specific confusion matrix results are shown in Table 3. The table shows that the majority of the rhythm points in the audio can be accurately identified, but the model does not recognize the fourth rhythm point in this audio segment as a rhythm point. The reason could be that the peak value of this rhythm point is not prominent enough to be distinguished from its surroundings. Discussion The main research direction of this paper is the rhythm recognition of music audio based on recurrent neural networks. The main technical and theoretical supports are the recurrent neural network algorithm and audio rhythm extraction theory. The audio rhythm recognition method mainly includes the preprocessing and feature extraction of the audio data, and the recurrent neural network is used to learn the extraction rules of the audio rhythm and then perform adaptive analysis on the audio. This paper firstly reviews the related technical principles. The introduction of recurrent neural networks mainly covers the basic principles of the traditional RNN. This paper also introduces the variant models of the traditional RNN, the GRU neural network and the LSTM neural network. Both GRU and LSTM are sequence processing models based on gate control units. In the introduction of the principles of the music audio rhythm recognition method, the main content is elaborated and analyzed around MFCC feature extraction and the frequency measurement algorithm, and based on this theory, an audio rhythm extraction model based on the GRU neural network is proposed. The article's subsequent experimental section is divided into two subsections. The first subsection provides an overview of the experimental data and basic settings, as well as information on the public music audio data sets used in this study and the preprocessing process. The analysis of the specific experimental results, which is divided into four stages, is the second subsection. The GRU model chosen in this paper is compared to the traditional RNN and LSTM models in the first stage. The results show that GRU has a higher rhythm recognition accuracy than the other two, and that the influence of the size of the Lag Window on the model recognition accuracy is not purely linear. The influence of the number of residual network layers on the GRU model is investigated in the second stage. After conducting tests, it was discovered that the model's recognition accuracy is highest when the number of residual network layers is 50. The third stage of experiments studies the relationship between different activation functions and the GRU audio rhythm recognition accuracy.
The experimental results show that under the Softmax function, the model's ability to analyze and identify each data set is better than the other two activation functions. The fourth stage experiment is a specific verification experiment. It identifies an audio frequency of a fast three music rhythm and gives the signal envelope and experimental matrix of the audio frequency. After analyzing the results in this paper, the model parameters are adjusted, and finally the recognition accuracy of the ResNet_50-GRU model for all sample data sets is calculated. Conclusions Recurrent neural networks are very effective for meaningful processing of data with sequential characteristics and have made major breakthroughs in problems in the field of NLP such as language modeling and speech recognition. This paper is a research on the recognition of music audio rhythm based on recurrent neural network. This paper summarizes the RNN and its variant neural network model, conducts experimental analysis on the audio data set with a certain sample size, and has made certain research progress, but there are still many shortcomings. For example, in the experiment of the influence of Lag Window size on model recognition accuracy, due to the limitation of sample size, an optimal Lag Window size value has not been researched. And the number of residual network layers selected later may only show the best effect in the model of this paper, because there are many influencing factors, and it does not have good universality. Data Availability The data used to support the findings of this study are included within the article. Conflicts of Interest The author does not have any possible conflicts of interest.
Renewable energy consumption and economic growth in Argentina. A multivariate co-integration analysis This paper applied the ARDL bounds test approach and the VECM test technique to examine the long run relationship and direction of causality between renewable energy consumption and economic growth in Argentina. Quarterly time series data was employed in this study covering a period between 1990 and 2014. Trade openness, capital and employment were included in the study to form a multivariate framework. The results established that there is a long run relationship between the variables. The VECM test technique confirmed a unidirectional causality flowing from economic growth to renewable energy consumption. This implies that energy conservation policies may not harm the economic growth. The study, therefore, suggest that an appropriate and effective energy policy should be implemented in the long run. INTRODUCTION Climate change and global warming have been a major concern worldwide and has attracted much attention of the energy economists and environmentalists. Many studies have investigated the causal relationship between energy consumption, carbon dioxide emissions and economic growth in trying to come up with important energy policies. It has been observed that energy consumption is a driver for economic growth and also economic growth stimulates energy consumption (Khobai and Le Roux, 2017). Ozturk and Acaravci (2010) established that higher levels of carbon dioxide emissions are accounted for by the increase in energy consumption. This implies that since high levels of economic growth require high levels of energy consumption, this is responsible for high levels of carbon dioxide emissions. This notion has attracted much attention where energy economist aimed to come up with energy policies that will enhance economic growth while at the same time reducing the emissions of carbon dioxide. Most studies that argued that economic growth should not be sustained at the expense of the environment focused on the policies that would pursue an energy mix that includes clean and renewable energy. This led to studies investigating the linkage between economic growth and other sources of energy such as renewable energy (Sebri and Ben-Salha, 2014). A vast majority of the studies that aimed to examine this relationship established mixed results. Some studies revealed a unidirectional causality flowing from renewable energy consumption to economic growth (Khobai and Le Roux, 2017;Apergis and Payne, 2011); whereas other studies confirmed a unidirectional causality running from economic growth to renewable energy consumption (Ocal and Aslan, 2013;Ziramba, 2013). Most studies affirmed a bidirectional relationship between renewable energy consumption and economic growth (Sebri and Ben-Salha, 2014;Apergis and Payne, 2014;Sadorsky, 2009;Apergis and Payne, 2010). This led to the current study examining the causal relationship between renewable energy consumption and economic growth in Argentina. The choice of Argentina is motivated by the fact that the country's domestic oil industry is a major driver to export growth. The majority of energy export from Argentina is accounted for by crude oil (US Department of Energy, 2003). It was also established that since 1990, the total energy usage in Argentina has increased by more than 40% and this accounted mostly by natural gas (46%) followed by oil (38.4%) (US Department of Energy, 2003). 
Energy economics has supported the fact that burning of fossil fuels such as oil are the major causes of carbon dioxide emission. In this accord, this serves to empirically investigate the causal linkage between renewable energy consumption and economic growth in Argentina covering the period between 1990 and 2014. The main objective of this study is to determine whether the implementation of environmentally friendly policies on economic growth in Argentina will have a positive or negative effect on the country's economic growth and development. This is achieved by applying the Autoregressive Distributed Lag (ARDL) bounds testing approach to establish whether there is a long run relationship among the variables and the usage of the Vector Error Correction Model (VECM) technique to determine the direction of causality between the variables. The findings of this study will assist in determining whether the energy conservation policies have a positive or negative effect on growth by affecting energy consumption The remainder of the study is structured as follows: Section two discusses the review of the literature between renewable energy consumption and economic growth. Section three outlines the methodology, data sources and the model specification. The empirical results are presented in section four followed by the conclusion and policy recommendations in section five. LITERATURE REVIEW The literature review shows that the linkages between economic growth and renewable energy consumption can be broadly classified into two research clusters. Firstly, the empirical work focuses on the relationship between economic growth and energy consumption using the co-integration approaches and the Grangercausality techniques. Secondly, analyses focus on economic growth and other disaggregated renewable energy (such as hydroelectricity) consumption nexus. Nevertheless, for Argentina, a limited number of studies are available. Numerous studies have investigated the relationship between economic growth and renewable energy consumption using the cointegration techniques and Granger-causality frameworks; these studies include those by Apergis and Payne (2011) Khobai and Le Roux (2017), Inglesi-Lotz (2013); Apergis and Payne (2010) and Sadorsky (2009). Ivanovski et al. (2020) contributed the most recent studies on renewable energy consumption -economic growth nexus. Employing non-parametric model, the results suggested that non-renewable energy consumption has a positive impact on economic growth across OECD nations. It further portrayed that both renewable and non-renewable energy consumption enhance economic growth in non-OECD countries. Shahbaz et al. (2020) examined the impact of renewable energy consumption and economic growth for 38 countries. The findings from dynamic ordinary least squares (DOLS) and Fully modified ordinary least squares (FOMLS) confirm the existence of a long run relationship between renewable energy consumption and economic growth. Specifically, renewable energy consumption has a positive impact on economic growth for 55% of the sample countries. Haseeb et al. (2018) investigated the renewable energy consumption -economic growth nexus. This Malaysian study revealed that renewable energy have a positive and significant effect on economic well-being both in the short and long run. Can and Korkmaz (2018) focused on Bulgaria to in investigating the relationship between renewable energy consumption and economic growth. 
The finding from the ARDL model showed no existence of a long run relationship but Toda-Yamamoto causality results posited that renewable energy consumption and renewable electricity output causes economic growth. Khobai and Le Roux (2017) established that there is a positive long run relationship between renewable energy consumption and economic growth in South Africa. The study employed the ARDL model and the VECM technique covering the period from 1990 to 2014 for South Africa. The VECM model validated a unidirectional causality flowing from renewable energy consumption to economic growth. Another research that focused on a single country was conducted by Ocal and Aslan (2013). The study purposed to examine the relationship between renewable energy consumption and economic growth in Turkey for the period 1990-2010. The results from the ARDL bounds testing approach validated a presence of a negative relationship between renewable energy consumption and economic growth in Turkey. The Granger-causality test by Toda-Yamamoto evidenced a unidirectional causality flowing from economic growth to renewable energy consumption. Sebri and Ben-Salha (2014) established the same results of a long run positive relationship between renewable energy consumption and economic growth but for a bigger group, Brics countries. The study employed the ARDL bounds testing approach and the VECM technique for the period between 1970 and 2010. The VECM model results confirmed bidirectional causality flowing between renewable energy consumption and economic growth. Another study focused on a larger group was undertaken by Apergis and Payne (2014) who explored the relationship between renewable energy consumption and economic growth for seven Central American countries. The study validated that there is existence of a long run positive relationship between renewable energy consumption and economic growth. Apergis and Payne (2012) also affirmed a long run relationship between economic growth and renewable energy consumption for 80 countries. Moreover, the study detected bidirectional causality flowing between renewable consumption and economic growth. One of the current studies was done by Ozcan and Ozturk (2019) to examine the relationship between renewable energy consumption and economic growth in 17 emerging countries and only established a growth hypothesis for Poland and a neutral hypothesis for the remaining 16 emerging countries. Liu and Liang (2019) served to investigate the relationship between energy consumption, biodiversity and economic growth for China and five countries (Cambodia, Laos, Myanmar, Thailand and Vietnam). The ARDL model results posits that the fossil fuels have more effect on economic growth than renewable energy as such renewable energy is an alternative for fossil fuels. In exploring West Africa, Maji and Sulaiman (2019) established that renewable energy consumption has an adverse effect on economic growth. This could be attributed to the fact that in West Africa, wood biomass is mostly used as the source of renewable energy. Sadorsky (2009) carried a study for eighteen emerging countries and established that renewable energy consumption per capita and real income per capita have a long run relationship. Using a panel error correction model over the period 1994-2003, it was confirmed that renewable energy consumption and economic growth Granger-cause each other for the eighteen emerging countries. 
Apergis and Payne (2010) studied the causal relationship between economic growth and renewable energy consumption for a panel of 20 OECD countries. Covering the period between 1985 and 2005, the study established that there is a long run relationship between economic growth and renewable energy consumption. The Granger-causality tests suggested bidirectional causality flowing between renewable energy consumption and economic growth. Another OECD study that investigated the causal relationship between renewable energy consumption and economic welfare was conducted by Inglesi-Lotz (2013). Employing panel cointegration techniques, Inglesi-Lotz affirmed that renewable energy consumption has a positive and significant impact on economic welfare. Apergis and Payne (2011) studied six Central American countries in investigating the relationship between renewable energy consumption and economic growth. Using annual data for the period between 1980 and 2004, this study established the presence of a long run relationship between economic growth and renewable energy consumption. It was also found that energy consumption Granger-causes economic growth both in the short run and the long run. Tagcu, Ozturk and Aslan investigated the causal relationship between economic growth and renewable energy consumption using the ARDL bounds testing approach and the recently developed Granger-causality test by Hatemi-J (2012). Their findings validated the existence of a long run relationship between economic growth and renewable energy consumption. The Hatemi-J causality test suggested bidirectional causality flowing between economic growth and renewable energy consumption. Instead of aggregated renewable energy consumption, Ziramba (2013) focused on the linkages between hydroelectricity consumption and economic growth for Algeria, Egypt and South Africa. Using data for the period 1980-2009, this study established a unidirectional causality flowing from economic growth to hydroelectricity consumption in South Africa, a feedback hypothesis in Algeria and a neutrality hypothesis in Egypt. Model Specification Based on the economic growth literature, the hypothesized model specification is as follows: GDP_t = f(RE_t, TR_t, EM_t, K_t) (3.1). All the series are expressed in log-linear form, and equation 3.1 now becomes: LGDP_t = β0 + β1 LRE_t + β2 LTR_t + β3 LEM_t + β4 LK_t + ε_t (3.2), where LGDP is the natural logarithm of economic growth, measured by real GDP per capita. LRE represents the natural logarithm of renewable energy consumption. LTR denotes the natural logarithm of trade openness (the sum of imports and exports of goods and services). LEM represents the natural logarithm of employment and LK is the natural logarithm of capital formation. Data Collection In tracing the linkages between economic growth and renewable energy consumption in Argentina, the study employs quarterly data covering the period from 1990 to 2018. In doing so, gross domestic product (GDP) per capita at 2010 constant prices is used as an indicator of economic growth. Trade openness is the sum of exports and imports. Commercial, agricultural and manufacturing employment is used as a proxy for employment. Capital is measured as gross capital formation (constant 2010 US$). The data for gross domestic product, capital and employment were extracted from the World Development Indicators (WDI) published by the World Bank (WB, 2016). The data for trade openness were sourced from the United Nations Conference on Trade and Development (UNCTAD). The data for renewable energy consumption were obtained from the International Energy Agency (IEA).
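As a rough illustration of how such a log-linear quarterly dataset is typically assembled before estimation, the following Python sketch loads a hypothetical file and applies the logarithmic transformation; the file name and column names are placeholders and are not the study's actual data:

import numpy as np
import pandas as pd

# Hypothetical input: a quarterly CSV with one column per series, indexed by date.
raw = pd.read_csv("argentina_quarterly.csv", index_col="date", parse_dates=True)

series = raw[["gdp_per_capita", "renewable_energy", "trade_openness",
              "employment", "capital_formation"]]

# Express all variables in natural logarithms, as in the log-linear specification.
logs = np.log(series).rename(columns={
    "gdp_per_capita": "LGDP",
    "renewable_energy": "LRE",
    "trade_openness": "LTR",
    "employment": "LEM",
    "capital_formation": "LK",
})
print(logs.describe())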
Unit Root The first step in examining the long run relationship between the variables is to test whether the variables are stationary or non-stationary. To examine the non-stationarity property of the series, both in levels and in first differences, the Augmented Dickey Fuller (ADF) test has been employed. This test is a modification of the Dickey Fuller (DF) test in which lagged values of the dependent variable are added to the estimated equation, which takes the standard form: Δy_t = α_0 + α_1 t + γ y_{t-1} + Σ_{i=1}^{k} δ_i Δy_{t-i} + ε_t. The Phillips and Perron (PP) test is also employed in the empirical analysis. This is on account that the ADF test does not consider cases of heteroscedasticity and non-normality, which are frequently encountered in raw economic time series data. The PP test also retains power when the time series of interest exhibits serial correlation and structural breaks. The PP test is based on the same first-order autoregressive form, Δy_t = α_0 + γ y_{t-1} + ε_t, with a non-parametric correction of the test statistic for serial correlation and heteroscedasticity. The ADF and Phillips-Perron tests have been criticised for their low power when variables are stationary but with a root close to the nonstationary boundary (Brooks, 2014). Elliot et al. (1996) argue that the Dickey Fuller Generalised Least Squares (DF-GLS) test has more power in the presence of an unknown mean or trend compared with the ADF and Phillips-Perron tests. On this account, the DF-GLS test is also employed in this study to test for stationarity among the variables. Co-integration Test In order to investigate the linkage between economic growth and renewable energy consumption in Argentina, the study applies the ARDL bounds testing approach to co-integration developed by Pesaran et al. (2001). This model has become popular in recent studies. In simple form, the ARDL approach involves estimating conditional error correction models of the following type, with each variable in turn taken as the dependent variable (shown here for LGDP): ΔLGDP_t = α_0 + Σ_{i=1}^{p} b_i ΔLGDP_{t-i} + Σ_{i=0}^{q} c_i ΔLRE_{t-i} + Σ_{i=0}^{q} d_i ΔLTR_{t-i} + Σ_{i=0}^{q} e_i ΔLEM_{t-i} + Σ_{i=0}^{q} f_i ΔLK_{t-i} + α_GDP LGDP_{t-1} + α_RE LRE_{t-1} + α_TR LTR_{t-1} + α_EM LEM_{t-1} + α_K LK_{t-1} + ε_1t, where LGDP_t is the natural logarithm of gross domestic product, LRE_t is the natural logarithm of renewable energy consumption, LTR_t is the natural logarithm of trade openness, LEM_t is the natural logarithm of employment, LK_t denotes the natural logarithm of capital formation, and t and Δ represent the time period and the first difference operator, respectively. It is assumed that the residuals (ε_1t, ε_2t, ε_3t, ε_4t, ε_5t) are normally distributed white noise. The existence of a long run relationship between the variables is determined by an F-test (Wald test) on the joint significance of the coefficients of the one-period lagged levels of the variables. The null hypothesis of no co-integration among the variables, H_0: α_GDP = α_RE = α_TR = α_EM = α_K = 0, is tested against the alternative hypothesis H_1 that these coefficients are not jointly equal to zero. As a result, if the calculated F-statistic exceeds the upper critical bound value, then H_0 is rejected and the results conclude in favour of co-integration. On the contrary, H_0 cannot be rejected if the F-statistic falls below the lower critical bound value. Finally, if the F-statistic falls within the two bounds, the co-integration test is inconclusive. If a long run relationship between the variables is established, the next step is to investigate the long run and short run relationships among the variables of interest. To estimate the long run relationship among the variables based on the ARDL approach, the following levels equation is built up: LGDP_t = α_0 + Σ_{i=1}^{p} b_i LGDP_{t-i} + Σ_{i=0}^{q} c_i LRE_{t-i} + Σ_{i=0}^{q} d_i LTR_{t-i} + Σ_{i=0}^{q} e_i LEM_{t-i} + Σ_{i=0}^{q} f_i LK_{t-i} + ε_t.
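A minimal sketch of how the unit root tests and the bounds-test F-statistic described above could be computed with the statsmodels library is given below; the lag structure is simplified to a single lag of the differences for brevity, and the variable names follow the previous sketch rather than the paper's actual estimation code:

import pandas as pd
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller

# `logs` is the log-level DataFrame from the previous sketch (LGDP, LRE, LTR, LEM, LK).
# 1) ADF unit root tests in levels and in first differences.
for col in logs.columns:
    _, p_level, *_ = adfuller(logs[col].dropna(), autolag="AIC")
    _, p_diff, *_ = adfuller(logs[col].diff().dropna(), autolag="AIC")
    print(f"{col}: level p-value {p_level:.3f}, first-difference p-value {p_diff:.3f}")

# 2) Bounds-test F-statistic from the conditional error correction regression
#    with LGDP as the dependent variable.
d = logs.diff()
exog = pd.concat([logs.shift(1), d.shift(1).add_prefix("d1_")], axis=1)
data = pd.concat([d["LGDP"].rename("dLGDP"), exog], axis=1).dropna()
ols = sm.OLS(data["dLGDP"], sm.add_constant(data.drop(columns="dLGDP"))).fit()

# Joint Wald test that all lagged-level coefficients are zero (no co-integration);
# the F-statistic is then compared with the Pesaran et al. (2001) critical bounds.
restriction = ", ".join(f"{name} = 0" for name in logs.columns)
print(ols.f_test(restriction))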
Furthermore, in order to investigate the short run dynamics of the ARDL model and to recheck the presence of co-integration established in the ARDL model, the study estimates the error correction model, which is developed as follows: ΔLGDP_t = α_0 + Σ_{i=1}^{p} b_i ΔLGDP_{t-i} + Σ_{i=0}^{q} c_i ΔLRE_{t-i} + Σ_{i=0}^{q} d_i ΔLTR_{t-i} + Σ_{i=0}^{q} e_i ΔLEM_{t-i} + Σ_{i=0}^{q} f_i ΔLK_{t-i} + λ ECM_{t-1} + ε_t. If the coefficient of the ECM term in this equation is negative and significant, there is a long run relationship among the variables; the coefficient also denotes the speed of adjustment to equilibrium. Finally, to determine the reliability of the ARDL results, the study checks the ARDL model for serial correlation, functional form, normality and heteroscedasticity. In addition, the stability of the parameters is tested using the Cumulative Sum of Recursive Residuals (CUSUM). Granger-causality After examining the long run relationship between the variables, Granger-causality is applied to find the direction of causality among the variables. If the results detect the existence of a long run relationship, the Vector Error Correction Model is used to estimate the direction of causality. The VECM is used to determine the long run and short run relationships between the variables and can detect the sources of causation. The VECM comprises equations (3.10)-(3.14), one for each variable; in each equation, the dependent variable is explained by its own lags, the lags of the other variables and the error correction term. In these equations, Δ represents the difference operator, α_i is the constant term and ECT refers to the error correction term derived from the long run cointegrating relationships. The short run causal relationships are captured through the coefficients of the lagged differences of the independent variables, and their joint significance is determined using a standard Wald test. The long run causal relationships are based on the error correction terms. The t-statistic is employed to test the significance of the speed of adjustment in the ECT terms. If the coefficient of the error correction term is negative and significant, then there is evidence of a long run causal relationship. Unit Root Tests The first step taken in the study was to determine whether the variables are stationary or not. This was examined using the Augmented Dickey Fuller, Phillips-Perron and Dickey Fuller Generalised Least Squares unit root tests for the five variables. The results are presented in Table 1. Table 1 shows that we fail to reject the null hypothesis of non-stationarity at levels for all the variables, but at first difference the null hypothesis is rejected. This means that all the variables are non-stationary at levels but are found to be stationary when differenced once. Hence they are integrated of order one, I(1). Co-Integration Having established the order of integration of the variables, the next step is to determine whether there is a long run relationship among them. But before investigating the existence of a long run relationship between economic growth, renewable energy consumption, trade openness, employment and capital, it is necessary to determine the optimal lag length. The Akaike information criterion and the Schwarz criterion are employed to find the optimal lag length, and the results are illustrated in Table 2. The long run relationship was examined using the ARDL bounds tests and the results are presented in Table 3. The results suggest that there is a long run relationship among the variables when economic growth is used as the dependent variable. This is because the F-statistic for economic growth (28.8) is greater than the upper critical bound value of 4.797 at the 1% level of significance.
This means that when economic growth is used as the dependent variable, there is evidence of a long run relationship between the variables. Similar results were obtained when renewable energy consumption, trade openness, capital and employment are each used as dependent variables. This is because the F-statistics of trade openness (8.86), capital (23.63) and employment (22.77) are greater than the upper critical bound value of 4.797 at the 1% level of significance, while the F-statistic of renewable energy consumption (3.73) is greater than the upper critical bound value of 3.72 at the 5% level of significance. Therefore, we conclude that there is a long run relationship between economic growth, renewable energy consumption, trade openness, employment and capital in Argentina. Table 4 presents the estimated coefficients of the long run relationship. Based on the findings in Table 4, the long run economic growth model can be written as follows: LGDP_t = 1.78 + 0.17LRE - 0.01LTR + 0.95LEM + 0.30LK. The estimated coefficients suggest that renewable energy consumption, employment and capital have a statistically significant positive impact on economic growth, which is in line with the theoretical argument that renewable energy consumption, employment and capital boost economic growth. More specifically, the long run elasticity of renewable energy consumption is 0.17, which implies that a 1% increase in renewable energy consumption leads to about a 0.17% rise in economic growth, all else being equal. These results are in line with the findings of Khobai and Le Roux (2017), Payne (2011) and Sadorsky (2009). Similarly, the elasticity of employment suggests that a 1% increase in employment results in a 0.95% increase in economic growth on average, all else held constant. The long run elasticity of capital is 0.30, which implies that a 1% rise in capital leads to approximately a 0.30% increase in economic growth. These results coincide with the findings of Adebola (2011). However, trade openness has an insignificant impact on economic growth. Table 5 presents the short run results. The results suggest that renewable energy consumption has a positive and significant impact on economic growth. Specifically, a 1% increase in renewable energy consumption leads to a 0.09% increase in economic growth in the short run. These results confirm Sebri and Ben-Salha's (2014) findings. Moreover, the findings posit that employment and capital have a positive and significant effect on economic growth. Based on the results illustrated in Table 5, the estimated coefficient of the ECM_{t-1} term is −0.64. Since the error correction term is negative and significant, the results support the existence of a long run relationship among the variables. The results indicate that a departure from the long-term growth path due to a shock is corrected by 64% each quarter. The diagnostic test results are illustrated in Table 6. It was validated that the error terms of the short run models are free of heteroscedasticity, have no serial correlation and are normally distributed. It was also found that the Durbin-Watson statistic is greater than the R2, which implies that the short run models are not spurious. The stability of the long run parameters was tested using the cumulative sum of recursive residuals (CUSUM). The results are illustrated in Figure 1. The results fail to reject the null hypothesis at the 5% level of significance because the plot of the test falls within the critical limits.
Therefore, it can be realised that our selected ARDL model is stable. Granger Causality After confirming the presence of a long run relationship between the variables, the VECM Granger-causality approach is used to examine the direction of causality between economic growth, renewable energy consumption, trade openness, capital and employment. The system of the vector error correction model uses all the series endogenously. This system allows the predicted variables to explain itself both by its own lags and lags of forcing variables as well as the error correction term and by residual term. The short run and long run Granger causality results are reported in Table 7. The reported values in parentheses are the p-values of the test. The findings indicate that there is a long run causality flowing from economic growth, trade openness, capital and employment to renewable energy consumption. This is because the error correction term (−0.36) is negative and significant when renewable energy consumption was used as the dependent variable. The results suggest that there is an existence of a conservation hypothesis which indicates that renewable energy consumption has less or no impact on economic growth in the long run. As a result, a fall in renewable energy consumption will lead to a minor or no impact on economic growth. These results are consistent to Ziramba (2013) and Ocal and Aslan (2013). Furthermore, it was observed that there is a long run causality flowing from economic growth, renewable energy consumption, trade openness and employment to capital. The short run results validated a causality flowing from capital to economic growth. Another short run causality was established flowing from economic growth to trade openness. Lastly, it was discovered that economic growth Granger-causes capital in the short run. CONCLUSION This paper investigated the causal relationship between renewable energy consumption and economic growth in Argentina for the period 1990-2014. Despite numerous studies which were conducted on this notion, there is still no consensus as to whether renewable energy consumption drives economic growth or whether it is economic growth that stimulates renewable energy consumption. Unlike some of the previous studies done on this subject, the current study employed the recently developed ARDL bounds testing approach to co-integration and the Vector Error Correction Model Granger-causality to determine this relationship. To the best of the author's knowledge, this might be the first study of its kind to investigate the causal relationship between renewable energy consumption and economic growth in Argentina using this modern time-series techniques. The empirical results established that there is a long run relationship between economic growth, renewable energy consumption, trade openness, capital and employment in Argentina. The VECM test technique confirmed a unidirectional causality flowing from economic growth, trade openness, capital and employment to renewable energy consumption. More specifically, economic growth Granger-causes renewable energy consumption. This implies that economic growth drives renewable energy consumption but not the other way around. In this case, implementation of the energy conservation policies will have a minor or no effect at all on economic growth. Therefore, the study recommends that the energy conservation policies should be applied to curb unnecessary waste of energy in Argentina.
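For readers who wish to reproduce a VECM-based causality analysis of the kind reported above, the following Python sketch illustrates one possible workflow with statsmodels; the lag length, deterministic term and cointegration rank are illustrative choices rather than the paper's exact specification, and method names may differ slightly across library versions:

from statsmodels.tsa.vector_ar.vecm import VECM, select_coint_rank, select_order

# `logs` is the log-level DataFrame built in the earlier sketches.
lag_order = select_order(logs, maxlags=4, deterministic="ci")
k = lag_order.aic
rank = select_coint_rank(logs, det_order=0, k_ar_diff=k).rank

res = VECM(logs, k_ar_diff=k, coint_rank=rank, deterministic="ci").fit()

# alpha holds the error-correction (speed-of-adjustment) loadings: a negative and
# significant loading in an equation signals long run causality towards that variable.
print(res.alpha)

# Short run (Granger-type) causality towards renewable energy consumption.
print(res.test_granger_causality(caused="LRE",
                                 causing=["LGDP", "LTR", "LEM", "LK"]).summary())

As a side note on magnitude, an adjustment coefficient of −0.64 per quarter (Table 5) implies that roughly log(0.5)/log(0.36) ≈ 0.7 quarters are needed for half of a deviation from the long run path to dissipate, which is consistent with the fast adjustment described above.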
5,873
2018-03-09T00:00:00.000
[ "Economics", "Environmental Science" ]
Significant Improvement of Thermal Conductivity of Polyamide 6/Boron Nitride Composites by Adding a Small Amount of Stearic Acid This study investigates the effect of adding stearic acid (SA) on the thermal conductivity of polyamide 6 (PA6)/boron nitride (BN) composites. The composites were prepared by melt blending, and the mass ratio of PA6 to BN was fixed at 50:50. The results show that when the SA content is less than 5 phr, some SA is distributed at the interface between BN sheets and PA6, which improves the interface adhesion of the two phases. This improves the force transfer from the matrix to BN sheets, promoting the exfoliation and dispersion of BN sheets. However, when the SA content was greater than 5 phr, SA tends to aggregate and form separate domains rather than being dispersed at the interface between PA6 and BN. Additionally, the well-dispersed BN sheets act as a heterogeneous nucleation agent, significantly improving the crystallinity of the PA6 matrix. The combination of good interface adhesion, excellent orientation, and high crystallinity of the matrix leads to efficient phonon propagation, resulting in a significant improvement in the thermal conductivity of the composite. The highest thermal conductivity of the composite is achieved when the SA content is 5 phr, which is 3.59 W m−1 K−1. The utilization of a composite material consisting of 5phr SA as the thermal interface material displays the highest thermal conductivity, and the composite also demonstrates satisfactory mechanical properties. This study proposes a promising strategy for the preparation of composites with high thermal conductivity. Introduction Thermal conductivity composites have garnered significant interest in modern electronic devices, LED packaging, and other related fields [1]. However, the polymer matrix's low thermal conductivity (<0.5 W m −1 K −1 ) has limited its potential applications. One feasible solution is adding high thermal conductivity fillers to the polymer. The resulting thermal conductivity composites exhibit high thermal conductivity, low dielectric loss, light weight, and ease of processing. BN is a two-dimensional material that has attracted attention for its high in-plane thermal conductivity of 600 W m −1 K −1 [15]. This property makes it a promising candidate for use as a filler in composites for thermal management, as it can effectively transfer heat while maintaining exceptional chemical and thermal stability at high temperatures [16]. Additionally, BN's excellent electrical insulation properties, characterized by low dielectric loss and high resistivity, make it an ideal candidate for composites that require both heat conduction and insulation [17,18]. Phonon transmission is the primary heat conduction mechanism in these composites. To significantly enhance the thermal conductivity of these composites, it is necessary to improve the dispersion of BN within the matrix and strengthen the interface adhesion between BN and the matrix. BN's unique combination of thermal and electrical properties makes it a promising material for a wide range of applications in the fields of electronics, aerospace, and energy. However, due to BN's high chemical inertness, meeting the aforementioned requirements is a challenge. To address this, Li et al. [19] utilized solution blending to prepare polyvinylidene fluoride/boron nitride nanosheet (PVDF/BNNS) composites. By subjecting PVDF to appropriate water-bath heating, it improved the dispersion of BNNS and enhanced the fluidity of PVDF. 
The resulting composite, with a loading of 3.8 wt% BNNS, exhibited a thermal conductivity of 0.49 W m −1 K −1 , which was 3.5 times that of a pure PVDF film. Similarly, Yang et al. [20] developed natural rubber/BN (NR/BN) composites by modifying BN with poly (dopamine) (PDA) and γ-methacryloxypropyl trimethoxy silane to improve the interface bonding between BN and NR. With 51.3 wt% modified BN fillers, the composite displayed a thermal conductivity of 0.39 W m −1 K −1 , which was 3.9 times that of pure NR. Our group [11] prepared isocyanate-functionalized BN (f-BN), dispersed it in caprolactam (CL) solution, and prepared PA6/BN composites through in situ polymerization. Since PA6 was in-situ-grafted on BN sheets during polymerization, BN and PA6 present good interface bonding. The thermal conductivity of PA6/f-BN composites at 5 wt% of f-BN loading was 66% higher than that of pure PA6. Furthermore, due to the in-plane thermal conductivity of BN being 20 times higher than its through-plane thermal conductivity [21], constructing a three-dimensional BN (3D-BN) network has been proven to significantly enhance the thermal conductivity of polymer/BN composites by enabling phonon transfer along the BN plane. Various techniques have been utilized to fabricate 3D-BN structures, including organic/inorganic templating [22][23][24][25], foaming [26,27], 3D printing [28,29], electric alignment [30,31], magnetic alignment [32,33], and ice templating [16,[34][35][36]. For instance, Khakbaz et al. [28] successfully produced thermoplastic polyurethane/BN (PU/BN) composites through 3D printing, resulting in a 74% increase in thermal conductivity at a loading of 20 wt% BN compared with unmodified PU. Han et al. [30] prepared silicone rubber/BN composites using the electric-field-assisted curing technique. Under the AC electric field (50 Hz) of 11.0 kV/mm, the thermal conductivity with 20 vol% loading of BN was enhanced 250% higher than that of the composite prepared without an electric field. In our group's work [12], we fabricated 3D-BN scaffolds using the ice-templating method and impregnated them with caprolactone monomers, then polymerized them via microwave-assisted techniques to produce polycaprolactone/3D-BN (PCL/3D-BN) composites. The maximum thermal conductivity achieved was 1.42 W m −1 K −1 at 25.6 wt% BN loading, which was 7.1 times higher than that of pure PCL. The method mentioned above can dramatically promote the thermal conductivity of the composites. However, direct melt blending is usually the preferred process for the largescale preparation of thermally conductive materials [37][38][39][40][41][42][43]. Zhang et al. [37] prepared acrylonitrile butadiene styrene copolymer/BN (ABS/BN) composites by melt blending. During the preparation, a small amount of a hyperbranched polymer was added to reduce the viscosity of ABS. With the loading of 60 wt% BN, the thermal conductivity of the composite increased to 1.12 W m −1 K −1 . Wang et al. [38] fabricated PA6/BN composites by the same method. As the BN content was 50 wt%, the thermal conductivity of the composite could reach 0.93 W m −1 K −1 . Generally, when a thermally conductive composite was prepared by the melt-blending method, a large amount of BN fillers was usually added to significantly enhance the thermal conductivity. 
In most of the reported work using other methods, although much more BN filler was applied in the melt-blending process, the thermal conductivity of the composite was still less than 2.0 W m −1 K −1 , which was not enough for some applications. In this work, PA6/BN composites were prepared by melt blending. By adding a small amount of stearic acid (SA), the thermal conductivity of the composite was dramatically promoted, and the influence of SA on the improvement of the thermal conductivity was investigated. Characterization The morphologies of the composites were observed by field emission scanning electron microscopy (FESEM, Nova NanoSEM 450, FEI, USA) at an accelerating voltage of 5 kV. All samples were frozen in liquid nitrogen and then fractured to obtain flat surfaces. The cross sections were sprayed with gold, and the thickness was about 5 nm. The samples were subjected to X-ray diffraction (XRD) analysis on an X-ray diffractometer (D8 Advance, Bruker, Germany) employing a Cu-Kα radiation source. XRD data were collected from 10° to 80° at a scanning rate of 3°/min. A differential scanning calorimeter (DSC; 200F3, Netzsch, Germany) characterized the melting and crystallization behaviors of the composites. First, the sample was heated from 50 °C to 250 °C at 50 °C/min, kept at 250 °C for 3 min to eliminate the thermal history, then cooled down to 50 °C at 10 °C/min, and finally heated to 250 °C again at 10 °C/min. The cooling and secondary heating were recorded for thermal performance analysis. The crystallinity (X c ) of PA6 in the composites was calculated according to the following equation: X c = ΔH m / (ω × ΔH 0 m ) × 100%, where ΔH m is the melting enthalpy of the sample, ω is the weight fraction of PA6 in the composite, and ΔH 0 m is the theoretical melting enthalpy of 100% crystalline PA6, 190 J g −1 . The thermal conductivity of the composite was measured at 25 °C with an LFA 467 Nanoflash (Netzsch, Germany). The sample was a circular sheet with a diameter of 12.7 mm and a thickness of 0.9 mm. In addition, the sample was fixed between the bottom of an LED lamp and a radiator as a thermal interface material (TIM) to promote the heat dissipation of the LED lamp, and the surface temperature of the LED lamp was recorded by an infrared thermal imager (Fotric 365C, Fotric, Dallas, TX, USA).
The tensile properties of the samples were tested by a universal tensile tester (AGS-X, Shenzhen, China), and the tensile dumbbell-shaped samples were prepared by a Haake microinjection molding machine (Thermo Fisher, Waltham, MA, USA) according to ISO 527-2-5A. The sample was stretched at a rate of 5 mm/min. Morphology As shown in the FESEM image of the composite in Figure 1, BN fillers in the composite without SA present aggregation, thicker lamellae, a smooth surface, and an obvious interface with the PA6 matrix, indicating that the interfacial adhesion between BN and PA6 is poor. With an increase in the SA content, BN fillers exhibit thinner and more even dispersion in the matrix, indicating that the fillers are exfoliated and the interfacial adhesion between the two phases becomes better. However, when the amount of SA is more than 5 phr, the interfacial adhesion of the composite is worse than that of the PA6/BN/SA5 composite. SA in the composite can be etched by hot ethanol. SA has some affinity with PA6 due to its polarity and structural similarity. SA molecules are typically composed of long-chain fatty acid molecules, which possess carboxylic acid groups that give SA a certain degree of polarity. Similarly, PA6 molecules also contain carboxylic acid and amide functional groups, which allow SA to interact with PA6 molecules through van der Waals forces and affinity interactions. These interactions help SA to disperse and dissolve in PA6. However, the affinity between SA and PA6 is not particularly strong. When the SA content is high, some SA molecules will be excluded to the interface between PA6 and BN, leading to the improvement of the compatibility between PA6 and BN. Therefore, SA has often been used in PA6 composites to promote the compatibility between PA6 and fillers [44]. In this work, when the SA content is less than 3 phr, it is mainly distributed in the PA6 phase. When its content is at 3 phr, 4 phr, or 5 phr, due to the limited compatibility between SA and PA6, some SA is distributed at the interface between PA6 and BN, thus improving the interfacial adhesion between PA6 and BN. During the mixing process, the shear force acting on the matrix can be better transferred to the fillers, thus promoting the exfoliation and dispersion of BN fillers. However, when the content of SA is greater than 5 phr, the intermolecular interactions between SA molecules become stronger, eventually causing them to aggregate and form separate domains. As a result, the interface between PA6 and BN in the PA6/BN/SA6 composite becomes visible again. In addition, it should be noted that in some of the samples depicted in Figure 1, the BN sheets were aligned in the direction of thermal conduction, likely due to the shear force experienced during injection. To determine the orientation of the BN sheets in each sample, XRD analysis was performed. The XRD patterns for the samples can be found in Figure 3.
It was observed that PA6 exhibited strong α-crystalline diffraction peaks at 2θ = 20.9° and 24.0°. As shown in Figure 3b, as the SA content increases, the α-crystalline diffraction peak of PA6 in the composite material slightly strengthens. However, in order to investigate the orientation of BN, the focus was placed on the diffraction peaks of BN within the composites. The sharp diffraction peaks at 2θ = 26.8° and 41.6° correspond to the (002) and (100) crystal planes of BN, respectively [45]. The ratio of the intensity of the (002) and (100) planes, which is recorded in Table 1, can be used to reflect the orientation of BN fillers in the samples. For the composites with an SA content of 3 phr, 4 phr, or 5 phr, the value of I 002 /I 100 is significantly smaller compared with that for the other composites. This is consistent with the results observed by SEM. Due to the excellent dispersion of BN sheets in the composites of PA6/BN/SA3, PA6/BN/SA4, and PA6/BN/SA5, more BN sheets are oriented in the shear field. Melting and Crystallization Behavior The thermal conducting mechanism of PA6/BN composites is mainly phonon transmission. The transmission efficiency of phonons in the crystalline region is higher than that in the amorphous region. Therefore, the crystallinity of the polymer matrix in the composite has an important impact on phonon transmission. Figure 4 shows the DSC curves of cooling and secondary heating of the composites, and the relevant data are listed in Table 2. With the addition of SA, the melting temperature (T m ) and crystallization temperature (T c ) of PA6 in the composite decrease, which is mainly due to the plasticizing effect of SA in the PA6 matrix. Notably, the crystallinity of PA6 significantly increases with the increase in the SA content, which is consistent with the analysis of the XRD patterns. It is because the addition of SA promotes the exfoliation and dispersion of BN sheets, which increases the number of heterogeneous nucleation points in the composite, thus improving the crystallization of PA6. When the SA content is 5 phr, the crystallinity of PA6 in the composite is the highest, reaching 40.0%, which is 39.9% higher than that of pure PA6. As aforementioned, the dispersion of BN fillers decreases when the SA content is further increased, so the crystallinity of PA6 in the corresponding composite slightly decreases.
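The crystallinity values discussed above follow directly from the DSC relation given in the Characterization section; a small helper of the following form reproduces the calculation (the 38 J/g enthalpy in the example is a back-calculated placeholder consistent with the reported 40.0% value, not a measured number):

def crystallinity(delta_h_m, pa6_weight_fraction, delta_h_0m=190.0):
    """PA6 crystallinity in % from the DSC melting enthalpy (J/g)."""
    return delta_h_m / (pa6_weight_fraction * delta_h_0m) * 100.0

# Example: a 50:50 PA6/BN composite (weight fraction 0.5) with a melting enthalpy
# of 38 J/g gives Xc = 38 / (0.5 * 190) * 100 = 40 %, matching the SA5 value.
print(f"Xc = {crystallinity(38.0, 0.5):.1f} %")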
Thermal Conductivity The thermal conductivities of the composites are shown in Figure 5a. When BN is added, the thermal conductivity of the composite is significantly higher than that of pure PA6, 0.3 W m −1 K −1 . With the increase in the SA content, the thermal conductivity of the composite increases first and then decreases. The thermal conductivity of the PA6/BN/SA5 composite is the highest, 3.59 W m −1 K −1 , which is 12 times and 2.6 times that of pure PA6 and the PA6/BN composite, respectively. It is consistent with the previous analysis. When the content of SA is 3 phr, 4 phr, or 5 phr, SA is mainly dispersed at the interface between BN sheets and the matrix, which improves their compatibility and reduces the scattering of phonons at the interface. Moreover, BN sheets in these composites exhibit excellent orientation, making it easy to construct thermally conductive paths. When the SA content is 5 phr, the crystallinity of the PA6 matrix in the PA6/BN/SA5 composite is the highest, which also contributes to the propagation of phonons in the matrix. Figure 5b exhibits the thermal conductivities of the composites prepared by the melt-blending method reported in the literature [7,37-43]. The data show that the thermal conductivity of the PA6/BN/SA5 composite is higher than that of the other composites, demonstrating that adding a small amount of SA to PA6/BN composites is an effective strategy to fabricate composites with high thermal conductivity. Transient Temperature Responses The samples were used as the TIM to detect their heat dissipation effect on the LED lamp and to investigate their thermally conductive properties (Figures 6 and 7).
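For orientation, the sketch below shows how a laser-flash measurement is commonly converted to thermal conductivity and reproduces the enhancement factors quoted above; the density and specific heat arguments are generic placeholders, and the PA6/BN value of 1.36 W m −1 K −1 is implied by the 2.23 W m −1 K −1 difference reported in the Conclusions:

def thermal_conductivity(diffusivity_mm2_s, density_g_cm3, cp_j_g_k):
    """k = alpha * rho * cp; with these units the product is directly in W m^-1 K^-1."""
    return diffusivity_mm2_s * density_g_cm3 * cp_j_g_k

k_pa6, k_pa6_bn, k_pa6_bn_sa5 = 0.30, 1.36, 3.59  # W m^-1 K^-1, values from the text
print(f"vs pure PA6: x{k_pa6_bn_sa5 / k_pa6:.1f}")    # about 12 times
print(f"vs PA6/BN:   x{k_pa6_bn_sa5 / k_pa6_bn:.1f}")  # about 2.6 times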
Mechanical Property The addition of inorganic fillers will have some effect on the mechanical properties of the composites. The tensile strength and Young's modulus of pure PA6 were 66 MPa and 2500 MPa, respectively. The tensile properties of the composites are demonstrated in Table 3. The tensile strength and Young's modulus of the PA6/BN composite are 61.5 MPa and 2151 MPa, respectively, which are 6.8% and 14.0% lower than those of pure PA6. This suggests that the addition of a large amount of an inorganic filler decreases the strength and stiffness of PA6. As the amount of SA in the PA6/BN/SA composites increased, both the tensile strength and Young's modulus showed a declining trend. At 5 phr, the composite exhibited a modest decrease in mechanical properties, with the tensile strength and Young's modulus dropping to 42.7 MPa and 1597 MPa, respectively. However, at 6 phr, the composite showed a significant decrease in mechanical properties, with the tensile strength and Young's modulus decreasing by 32% and 34%, respectively, compared with the PA6/BN/SA5 composite. This decrease can be attributed to the formation of separate phase domains of SA, which increases stress concentration and makes the composite more susceptible to fracture when subjected to external forces. Conclusions Adding thermally conductive fillers to polymers is an effective way to enhance their thermal conductivity. Among the various preparation methods for thermally conductive composites, melt blending is the most convenient and practical method for large-scale production. In this study, PA6/BN composites were prepared using the melt-blending technique. To investigate the effect of small amounts of SA on the thermal conductivity of the composites, SA was added to the composite material. When the content of SA is within a certain range, some SA can be uniformly distributed at the interface between BN sheets and the matrix. This not only enhances the compatibility of the materials but also promotes the exfoliation and dispersion of BN sheets. Furthermore, BN sheets are more likely to be oriented during the injection process. When the SA content is between 3 phr and 5 phr, the combined effect of better interface bonding, excellent orientation, and higher crystallinity of the matrix promotes the efficiency of phonon propagation. This leads to an increase in the thermal conductivity of the composite material. The highest thermal conductivity, 3.59 W m −1 K −1 , was achieved by the PA6/BN/SA5 composite, which is 2.23 W m −1 K −1 higher than that of the PA6/BN composite. The resulting composites were used as TIMs in heat dissipation experiments for LED lamps. The surface temperature of the LED lamp with the PA6/BN/SA5 composite as the TIM was the lowest. Although the tensile properties of the composite material decreased with the increase in the SA content, the decrease for the PA6/BN/SA5 composite was not significant. The tensile strength and Young's modulus of this composite were 42.7 MPa and 1597 MPa, respectively, which are suitable for various applications. This study provides a promising strategy for the preparation of composites with high thermal conductivity.
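The percentage changes quoted in this section can be checked directly from the reported values; in the short sketch below, the PA6/BN/SA6 figures are back-calculated from the stated 32% and 34% decreases and are therefore implied, not reported, values:

pa6 = {"strength_mpa": 66.0, "modulus_mpa": 2500.0}
pa6_bn = {"strength_mpa": 61.5, "modulus_mpa": 2151.0}
pa6_bn_sa5 = {"strength_mpa": 42.7, "modulus_mpa": 1597.0}

drop_strength = (pa6["strength_mpa"] - pa6_bn["strength_mpa"]) / pa6["strength_mpa"] * 100  # ~6.8 %
drop_modulus = (pa6["modulus_mpa"] - pa6_bn["modulus_mpa"]) / pa6["modulus_mpa"] * 100      # ~14.0 %
sa6_strength = pa6_bn_sa5["strength_mpa"] * (1 - 0.32)   # ~29 MPa (implied)
sa6_modulus = pa6_bn_sa5["modulus_mpa"] * (1 - 0.34)     # ~1054 MPa (implied)
print(round(drop_strength, 1), round(drop_modulus, 1), round(sa6_strength), round(sa6_modulus))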
5,404
2023-04-01T00:00:00.000
[ "Materials Science" ]
The 2D MHD Systems with Vertical Dissipation and Vertical Magnetic Diffusion In this paper, we study the global regularity of the classical solution of the 2D incompressible magnetohydrodynamic equations with vertical dissipation and vertical magnetic dissipation. We show that the second components (u_2, b_2) of any solution have a global L^{2r}-bound for any 1 ≤ r < ∞, and the bound does not grow faster than r log r as r increases. Introduction The generalized MHD system is ∂_t u + (u·∇)u + ν Λ^{2α} u + ∇p = (b·∇)b, ∂_t b + (u·∇)b + κ Λ^{2β} b = (b·∇)u, ∇·u = ∇·b = 0, (1) where ν, κ, α, β > 0, Λ = (−Δ)^{1/2}, u denotes the velocity field and b denotes the magnetic field. The magnetohydrodynamic (MHD) systems [1] control the dynamics of velocity and magnetic fields in conductive fluids such as plasma and reflect the basic laws of physical conservation. In recent years, the global regularity problem for the MHD equations with partial dissipation has attracted considerable interest. For example, for the n-dimensional MHD Equation (1), it has been proved that the solution has global regularity when the coefficients satisfy α ≥ 1/2 + n/4, β > 0, α + β ≥ 1 + n/2 [2]. Wu [3] also proved that, in the case ν = 0, β > 1, the system has a global smooth solution provided the direction of the magnetic field remains sufficiently smooth. Cao, Regmi and Wu [4] proved that, for the 2D MHD equations with horizontal dissipation and horizontal magnetic diffusion, the horizontal components of any solution have a global bound. The global regularity of the classical solution of the MHD equations with magnetic diffusion and mixed partial dissipation was established by Wu [5]. In [6], the global existence and uniqueness of the smooth solution of the 2D micropolar fluid flow with zero angular viscosity was proved. Other related articles can be seen in [7] [8] [9], etc. In this paper, we study the 2D MHD system with vertical dissipation and vertical magnetic diffusion, namely ∂_t u + (u·∇)u + ∇p = ν ∂_yy u + (b·∇)b, ∂_t b + (u·∇)b = κ ∂_yy b + (b·∇)u, ∇·u = ∇·b = 0. (2) In this case, we only obtain the global L^{2r}-bound of the solution in the y-direction, and the global regularity problem for the solution in all directions has not been achieved. In the following, let w^± = u ± b; this will provide us with convenience. Adding and subtracting the two equations of (2) yields the symmetric system ∂_t w^± + (w^∓·∇)w^± + ∇p = ((ν+κ)/2) ∂_yy w^± + ((ν−κ)/2) ∂_yy w^∓, ∇·w^± = 0. (3) The new system (3) couples two vector fields, which makes the calculations more involved; therefore, we use a fractional-derivative triple product estimate [4] to overcome this difficulty. This paper takes Cao and Wu's recent study of the two-dimensional partially dissipative Boussinesq equations [8] as an example to discuss the influence of the Lebesgue norms of the known vertical components (u_2, b_2) on global regularity. In Section 4, we obtain the main result, Theorem 3, which proves that (u_2, b_2) has a global Lebesgue bound. In fact, in Section 2 we obtain Theorem 1, which concerns the Lebesgue bound of the solution of Equation (2) in the y-direction. Theorems 1 and 3 both give bounds that depend on r, but the bound in Theorem 3 grows more slowly as r increases. The rest of this article is divided into four parts. In Section 2, we prove the global bound for the y-direction components in the Lebesgue spaces, where the bound depends on the index r. In Section 3, we show the global bound for the first components (u_1, b_1) and for the pressure. In Section 4, we prove that the solution of (2) in the y-direction has a global Lebesgue bound. In Section 5, we prove a conditional global regularity result. A Global Bound in the Lebesgue Spaces In this section, we prove that the y-direction components of the classical solution of (2) are globally bounded in the L^{2r} norm.
The boundedness obtained here depends on the index r. We have the following theorem (Theorem 1). Here we omit the proof of Lemma 1 and now begin to prove Theorem 1; the proof combines the basic estimates above with Gronwall's inequality. Global Bounds for the Pressure In this section, we show that the first components (u_1, b_1) of the solution have a global Lebesgue bound for r = 2 or r = 3, and we establish that the pressure has a global bound. The results can be stated as follows (Theorem 2), where 1 < q ≤ 3 and s ∈ (0, 1), and C is a constant depending on T and the initial data. Here we use the two calculus inequalities of the following lemma (Lemma 2 [4]). Proof. We use the symmetric system (3) to prove the case r = 2 in Theorem 2. We take the inner product of the first equation of (3) with a suitable test function; using ∇·w^+ = 0 and integrating by parts, the pressure term is estimated, and according to (7) and Young's inequality, together with Lemma 2, the remaining terms are bounded. The same can be proved by Hölder's inequality and (6). The inequality (9) is then proved by taking the divergence of the first two equations in (3). An Improved Global Lebesgue Bound From the conclusions of Sections 2 and 3, we have the main theorem of this paper. Theorem 3. Assume the initial data are as above and let (u, b) be the corresponding solution of (2); then, for any 2 < r < ∞, the stated global bound holds. Before proving Theorem 3, we first describe the lemmas that will be used (Lemma 3 [4] and Lemma 4), where ρ and γ are given constants and, for any 2 ≤ q < ∞, the constant C is bounded uniformly as s → 1⁻. For further estimation, we split the relevant term into two parts and bound one of them by Lemma 4. Moreover, for any 0 ≤ β ≤ 1, the condition in (28) holds with a constant C_0 independent of s, and C remains bounded uniformly. Using (32), (35) and (37) to simplify this index, we obtain the claimed bound. Conditional Global Regularity This section estimates the global boundedness of the vertical component u_2. We divide the proof of the theorem into two parts. Let (u, b) be the corresponding solution of (2). Proof. Taking the inner product of the first equation in (3)
1,513.4
2019-04-08T00:00:00.000
[ "Mathematics" ]
Characterization and Weathering of the Building Materials of Sanctuaries in the Archaeological Site of Dion , Greece The sanctuaries of Demeter and Asklepios are part of the Dion archaeological site that sits among the eastern foothills of Mount Olympus. The main building materials are limestones and conglomerates. Sandstones, marbles, and ceramic plinths were also used. The materials consist mainly of calcite and/or dolomite, whereas the deteriorated surfaces contain also secondary and recrystallized calcite and dolomite, gypsum, various inorganic compounds, fluoroapatite, microorganisms and other organic compounds. Cracks and holes were observed in various parts of the stones. The influence of specific weathering agents and factors to the behavior of the materials was examined. The particular environmental conditions in Dion combine increased moisture and rain fall, insolation and great temperature differences, abundance of intensive surface and underground water bodies in the surrounding area, an area full of plants and trees, therefore, they can cause extensive chemical, biological and mechanical decay of the monuments. The following physical characteristics of the building materials have been studied: bulk density, open porosity, pore size distribution, water absorption and desorption, capillary absorption and desorption. The chemical composition of bulk precipitation, surface and underground water was investigated. The salts presence and crystallization was examined. The influence of the water presence to the behavior of the materials was examined by in situ IR thermometer measurements. Temperature values increased from the lower to the upper parts of the building stones and they significantly depend on the orientation of the walls. The results indicate the existence of water in the bulk of the materials due to capillary penetration. The existence of water in the bulk of the materials due to capillary penetration, the cycles of wet-dry conditions, correlated with the intensive surface and underground water presence in the whole surrounding area, lead to partial dissolution-recrystallization of the carbonate material and loss of the structural cohesion and the surface stability. 
Introduction Deterioration of historical monuments is the result of chemical reactions of polluted air, soil and water with the stone building materials.The crystallization and hydration of weathering products result in their expansion causing the degradation of dolomite, limestone, marble, sandstone and other building materials.In most cases the stone surfaces are gradually covered by salts and black crusts containing calcium, magnesium, sodium, potassium sulphates, nitrates and other constituents.Also the water can easily penetrate and remain into the building stone materials, resulting in a destructive influence due to the absorption and evaporation of the moisture that affects their volume and causes cracks leading to the deterioration of the structure [1] .Under these conditions, the stone surfaces disintegrate into powder and the building materials gradually lose their mechanical strength and their artistic form [2][3][4][5][6] .In the case of marbles the main mechanism of deterioration is the sulphation of their surfaces, leading to the formation of gypsum layers on the stone surface, due to the solid state diffusion of Ca 2+ [7][8][9][10][11][12][13] .Various destructive or non-destructive methods are used for the study of the weathering of the building stone materials of the monuments, being part of their conservation [14][15][16].The aim of the present work is the study of the effect of the environmental factors and the deterioration problems of stone monuments of Demeter and Asklepios sanctuaries in Dion archaeological site (Figure 1), one of the most important religious centers of ancient Greeks in central Macedonia.In earlier works [17][18][19][20] it was found that the main building materials of the monuments are limestones and conglomerates.Sandstones, marbles and ceramic plinths were also used.The materials consist mainly of calcite and/or dolomite.The surfaces of the building materials are partially covered by the weathering products of the primary minerals such as secondary carbonate (calcite-dolomite) precipitated from water solutions, recrystallized calcite and dolomite and in some cases gypsum.The presence of crusts of various Materials of Sanctuaries in the Archaeological Site of Dion, Greece inorganic/organic compounds, such as illite, kaolinite, sericite, rutile, Fe-oxides, Mn-oxides, fluoroapatite, fragments of fossils, is related to various sediments that covered the primary materials.No significant amounts of salts were found on the surface or inside the pore of the materials.The purpose of this investigation is the analysis of the environmental conditions in the area of the archaeological site, the examination of their contribution to the deterioration of the building materials and the study of the influence of the water presence to the behavior of the materials by in situ IR thermometer measurements and laboratory measurements of their physical characteristics. 
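The physical characteristics referred to above (bulk density, open porosity, capillary absorption) are normally derived from simple gravimetric measurements. The sketch below illustrates the usual definitions; the numbers are placeholders and not measurements from the Dion samples:

import numpy as np

def open_porosity(m_dry, m_sat, m_hydro):
    """Open porosity in %: pore volume over bulk volume (Archimedes weighings, g)."""
    return (m_sat - m_dry) / (m_sat - m_hydro) * 100.0

def bulk_density(m_dry, m_sat, m_hydro, rho_water=1.0):
    """Bulk density in g/cm^3 from the same three weighings."""
    return m_dry / (m_sat - m_hydro) * rho_water

def capillary_coefficient(mass_gain_g, time_s, area_cm2):
    """Capillary absorption coefficient: slope of (mass gain / area) vs sqrt(time)."""
    slope, _ = np.polyfit(np.sqrt(time_s), mass_gain_g / area_cm2, 1)
    return slope  # g cm^-2 s^-0.5

print(open_porosity(m_dry=250.0, m_sat=265.0, m_hydro=150.0))  # about 13 % (example)
print(bulk_density(m_dry=250.0, m_sat=265.0, m_hydro=150.0))   # about 2.17 g/cm^3 (example)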
Materials And Methods A series of samples of the various building materials were collected from different locations of both monuments, Asklepios and Demeter.The accurate sampling sites were previously mentioned and presented [17] .The in situ measurements were focused in two monuments, Asklepios Temple, Altar in Demeter sanctuary (Figure 1).The mineralogical study of thin sections of the samples was carried out by optical microscopy using a Leitz Laborlux 11 POL S microscope.Scanning electron microscopy (SEM) was used to study the surface of samples.The SEM experiments were carried out with a JEOL, JSM-840 A scanning microscope, connected with an Energy Dispenser Spectrometer -EDS -(LINK, AN 10/55S).The physical properties of the materials were studied according standard methods [21] . Twelve samples of bulk precipitation were collected on a monthly basis (December 2010 to November 2011) using a bulk precipitation collector located in the archaeological area for a period of one year.Three samples of surface waters were also collected from Vaphyras river and two rillets, all passing from the archaeological area.Upon receipt in the Laboratory, precipitation and surface water samples were filtered through 0.45 μm pore diameter cellulose membranes to remove particles.Chemical analysis for the determination of the chloride, nitrate and sulphate ions was carried out by Ion Chromatography. Two series of IR thermometer in situ measurements, in conditions of sunny or wet weather, were carried out by a portable infrared laser thermometer (Center 358, Infrared thermometer, Range:-18 o C~ 315 o C).The question was to determine the high of the capillary water at the base of building stones, at the contact with the soil, given that the aquifer is very high, quite near to the foundation level of the monument.The idea was to use an infrared thermometer, because the inside temperature of the wet part of a stone is different than the next dry part, of the same stone, for the same time and weather conditions.The environment temperature during the measurements was ~ 28 ο C (sunny conditions) or ~ 9 ο C (wet conditions).In this study, infrared thermometer measurements were used in the assessment of moisture in porous stones.Due to the difference between the thermal diffusivities of moist and the dry stones, IR thermometer measurements are capable of showing qualitative variations in respiration behaviour (i.e.moisture impact), appearing as surface temperature fluctuations [22][23] . Results and Discussion The results of the mineralogical analysis of the deteriorated surfaces and inside the pores in the bulk of the materials are shown in Table 1 and Figures 2-3. From these results it is evident that the surfaces of the building materials are partially covered by the weathering products of the primary minerals such as secondary carbonate (calcite-dolomite) precipitated from water solutions, recrystallized calcite and dolomite and in some cases gypsum.An intense presence of lichens and bryophyte is observed.The presence of crusts of various inorganic/organic compounds, such as illite, kaolinite, sericite, rutile, chromite, Fe-oxides, Mn-oxides, fluoroapatite, fragments of fossils, is related to various sediments that covered the primary materials. Table 1. Mineralogical composition of the deteriorated surfaces of the building materials of Asklepios and Demeter sanctuaries. 
Table 1 (column headings): primary minerals; secondary minerals (sediment products); secondary minerals (deterioration products). Entries include calcite (CaCO3) and aragonite (CaCO3). The results of the study of the physical properties and characteristic pores of the materials are shown in Tables 2 and 3 and Figure 4. These results show that there are great differences in the values of open porosity, water absorption and capillary absorption between the various building materials. Despite this, it is observed that for all materials the values of capillary absorption are close to the corresponding values of total water absorption, indicating that capillary absorption is sufficient for the materials to reach moisture saturation conditions. It is also shown that a significant amount of the capillary-absorbed water remains in the material after desorption under environmental conditions. In the specific conditions of the archaeological area, a permanent, intensive presence of surface and underground waters throughout the year and high temperatures in the dry periods of summer are observed, leading to repeated wet-dry cycles in the materials. From these results and observations, in correlation with the observed main weathering products (secondary and recrystallized calcite and dolomite), it follows that the main deterioration problem of the materials is the presence of moisture due to capillary action. The wet-dry cycles lead to partial dissolution-recrystallization of the carbonate material and to loss of structural cohesion and surface stability. The results of the chemical analysis of bulk precipitation and surface water for major anions are shown in Figure 5. In all surface water samples, ionic concentrations followed the order nitrates > sulphates > chlorides, while the highest values were found in the Vaphyras river. In all samples, ionic concentrations were within the range of values found in the river systems of Macedonia, northern Greece [24][25]. All bulk precipitation samples exhibited pH values in the near-neutral to slightly alkaline range (6.5-7.5), suggesting neutralization of rainwater by alkaline reagents, such as gaseous ammonia and calcareous dust particles. Expectedly, bulk precipitation samplers, which are continuously open, also sample gases and particles deposited on the collection surface. With the exception of the May and June samples, which exhibited extremely high sulphate content, sulphate concentrations ranged between 4.1 and 16 mg L-1, in agreement with the range of values found in wet-only precipitation samples in Thessaloniki (2.5-30 mg L-1) [26][27]. Nitrate concentrations were highest in April and May (13 and 17 mg L-1, respectively), but in most months they were below 4.4 mg L-1, similarly to previous data. Finally, chlorides exhibited somewhat elevated concentrations (2.4-39 mg L-1), with the highest values in May and June, suggesting possible transport of marine aerosol.
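As a concrete illustration of the comparison described above, the short Python sketch below tabulates the ratio of capillary to total water absorption and the fraction of capillary water retained after desorption. The sample names and values are invented placeholders, not the measured data of Tables 2 and 3; the script only shows how such ratios could be screened.

```python
# Illustrative sketch: compare capillary vs. total water absorption for building stones.
# The sample names and values below are placeholders, NOT the measured data of Tables 2-3.

samples = {
    # name: (open porosity %, total water absorption wt%, capillary absorption wt%, retained after desorption wt%)
    "limestone_A": (18.0, 7.5, 7.1, 2.4),
    "conglomerate_B": (12.0, 5.0, 4.6, 1.5),
    "sandstone_C": (22.0, 9.8, 9.2, 3.1),
}

print(f"{'sample':>16} {'capillary/total':>16} {'retained/capillary':>19}")
for name, (porosity, w_total, w_cap, w_ret) in samples.items():
    sat_ratio = w_cap / w_total   # close to 1 => capillary uptake alone reaches saturation
    retention = w_ret / w_cap     # fraction of capillary water kept after desorption
    print(f"{name:>16} {sat_ratio:16.2f} {retention:19.2f}")
```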
From these results it is evident that there are no significant amounts of ions such as chlorides, nitrates or sulphates (except for a two-month period in the rain water samples). This observation is in accordance with the noted absence of crystallized salts on the surface or inside the pores of the materials (only limited gypsum was observed). Since a moist porous material presents emittance variations, moisture detection in porous stones by means of IR thermometer measurements is feasible. IR thermometry monitors the water movement in porous materials and detects its impact by recording temperature variations on the stones' surfaces. The presence of moisture (lower temperatures), which arises as a result of the capillary movement of water, causes deterioration of the building material. In such cases the optical properties are altered, and the density, specific heat capacity and thermal conductivity are also affected, so that any temperature changes are much slower in a moist area, because the energy required to raise the temperature of a moist area is much greater than for an area unaffected by water. In all cases of IR thermometer in situ measurements, the recorded temperatures on the side surfaces of the walls increase with the distance from the ground. The temperature differences depend mainly on the environmental conditions (sunny or wet), and also on the kind of material and the orientation of the wall, being greater in sunny and smaller in wet conditions. The IR thermometer measurements, correlated with the water and capillary absorption and desorption results (Table 2) and with the permanent, intensive presence of surface and underground waters, indicate that the main deteriorating factor of the materials is moisture penetration due to capillary action. In sunny conditions, moisture penetrates into the materials only by capillary absorption (greater temperature differences, Figures 6, 7), while in wet conditions rain water and environmental humidity also contribute to the total moisture absorption (smaller temperature differences, Figures 8, 9).

Conclusions The combination of laboratory experiments and in situ IR thermometer measurements leads to reliable conclusions about the deterioration problems of the materials. The surfaces of the building materials are partially covered by the weathering products of the primary minerals, such as secondary calcite and dolomite precipitated from water solutions, and recrystallized calcite and dolomite. Limited presence of crystallized salts on the surface or inside the pores of the materials is observed. No significant amounts of ions such as chlorides, nitrates or sulphates are observed in the rain and surface waters. The main weathering factor of the materials is moisture penetration due to capillary action. In sunny conditions, moisture penetrates into the materials only by capillary absorption, while in wet conditions rain water and environmental humidity also contribute to the total moisture absorption. The existence of water in the bulk of the materials due to capillary penetration, correlated with an intensive surface and underground water presence in the whole surrounding area, leads to loss of the structural cohesion and surface stability of the building materials. Figure 1. General view of the sanctuaries of a) Asklepios, b) Demeter.
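One way to make the qualitative IR criterion above operational is sketched below: given a profile of surface temperature versus height, the wet zone is taken as the region lying more than a chosen offset below the warmest (dry) reading. The heights, temperatures and the 1 °C threshold are invented for illustration and are not measurements from Figures 6-9.

```python
# Hypothetical sketch: locate the capillary-rise front from an IR temperature profile.
# Heights (m above ground) and temperatures (deg C) are invented for illustration only.

heights = [0.1, 0.3, 0.5, 0.7, 0.9, 1.1, 1.3]
temps   = [19.8, 20.1, 20.6, 22.9, 24.0, 24.2, 24.3]   # moist base stays cooler than the dry upper part

dry_reference = max(temps)   # take the warmest (highest, driest) reading as reference
threshold = 1.0              # deg C below the dry reference still counted as "moist"

wet_heights = [h for h, t in zip(heights, temps) if dry_reference - t > threshold]
capillary_front = max(wet_heights) if wet_heights else 0.0
print(f"estimated height of capillary moisture: ~{capillary_front:.1f} m above ground")
```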
Figure 6. IR thermometer measurements, sunny conditions, Asklepios temple: a) north side, b) east side, c) south side, d) west side. Figure 8. IR thermometer measurements, wet conditions, Asklepios temple: a) north side, b) east side, c) south side, d) west side. Figure 9. IR thermometer measurements, wet conditions, Demeter sanctuary, Altar: a) north side, b) east side, c) south side, d) west side.
3,137.4
2015-09-30T00:00:00.000
[ "Materials Science", "Geology" ]
Thermal scaling laws of the optical Bragg acceleration structure The temperature distribution and heat flow in the planar optical Bragg acceleration structure, fed by a train of high-power laser pulses, are analyzed. Dynamic analysis of a high-repetition-rate train of pulses indicates that the stationary solution is an excellent approximation for the regime of interest. Analytic expressions for the temperature and heat distributions across the acceleration structure are developed. Assuming an accelerating gradient of 1 GV/m and a loss factor similar to that existing in communication optical fibers (1 dB/km, tan δ ≈ 10⁻¹¹), the temperature increase is less than 1 K and the heat flow is of the order of 1 W/cm², which is 3 orders of magnitude lower than the known technological limit for heat dissipation. Obviously, using materials with a significantly higher loss tangent may lead to unacceptable temperatures and temperature gradients as well as confinement difficulties and phase mismatch. I. INTRODUCTION Motivated by the availability of solid-state lasers with increasing wall-plug-to-light efficiencies, optical acceleration of charged particles is a subject of recent interest. Acceleration is facilitated by laser light rather than by microwave radiation, and accordingly, the acceleration structure must be made of dielectric materials, as these have lower loss and are less susceptible to breakdown compared to their metallic counterparts. An example of an open optical structure is the LEAP [1] crossed-laser-beam experiment, where the interaction between the crossed laser beams and the particles is limited by slits to satisfy the Lawson-Woodward theorem [2,3]. Another example is the traveling-wave acceleration structure, where a laser pulse is guided in a dielectric structure with a vacuum tunnel bored in its center. This concept can be implemented by a two-dimensional photonic band-gap structure [4], and recently it was suggested [5] to use Bragg reflection waveguides [6][7][8], designed specifically for the speed-of-light mode. In Ref. [5], it was demonstrated that optical Bragg acceleration structures, either planar or cylindrical, having typical transverse dimensions of a few microns, exhibit high performance as acceleration structures and, therefore, seem to be promising candidates for future optical accelerators. In this study, we focus on the planar optical Bragg acceleration structure illustrated in Fig. 1. The laser light is guided in a vacuum core of width 2D_int, so that the wave propagates along the z axis, and no variations are assumed along the y axis (∂/∂y = 0). The core is surrounded by dielectric layers with alternating permittivity, each having a width equal to the transverse quarter-wavelength [λ/(4√(ε_r − 1))], with the exception of the innermost layer. This first layer is a matching layer whose width is determined so that the structure supports the speed-of-light TM mode required for the acceleration process [5].
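For orientation, the quarter-wave layer width λ/(4√(ε_r − 1)) quoted above can be evaluated directly. The sketch below does so at λ0 = 1 μm for the permittivities used later in the paper (silica ≈ 2.1, zirconia ≈ 4); the silicon value of ~12 is our assumption rather than a figure given in the text.

```python
import math

# Quarter-wave layer widths for the Bragg structure: d = lambda0 / (4 * sqrt(eps_r - 1)).
# eps values for silica (2.1) and zirconia (4) follow the text; silicon ~12 at 1 um is our assumption.
lambda0 = 1.0e-6  # m, radiation wavelength used in the paper

for name, eps in [("silica", 2.1), ("zirconia", 4.0), ("silicon (assumed)", 12.0)]:
    d = lambda0 / (4.0 * math.sqrt(eps - 1.0))
    print(f"{name:>18}: quarter-wave layer width = {d*1e6:.3f} um")
```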
Transition from operation at radiation wavelengths of a few centimeters to a few microns requires examining a wide spectrum of phenomena which are insignificant in the former regime.For example, structures that operate at a wavelength of a few centimeters are machined today with an accuracy of microns.In the future, it will not be possible to maintain a difference of 4 -5 orders of magnitude between the operating wavelength and the achievable tolerance, since this would entail engineering of a surface at the atomic level.As a result, the size of irregularities may be of the same order of magnitude as the microbunches, and they may generate wake fields [9,10] that, in turn, may alter the dynamics of electrons.Fortunately, the electromagnetic properties of materials at wavelengths that are significantly smaller than 0:1 m do not differ dramatically from these of the vacuum.Consequently, a reduction on the sensitivity to manufacturing tolerances may be expected. Another aspect that may have a critical impact on the performance of a dielectric acceleration structure and will be investigated here is the heat dissipation and the temperature increase associated with electromagnetic power loss.As a central component of a future optical accelerator, the acceleration structure ought to withstand the manifes-tations of the most important constraint imposed by the machine specifications, namely, the luminosity.Being a measure of the number of colliding particles per second at the interaction point, the luminosity sets the lower limit to the energy level the particles are exposed to.Together with the constraint of single mode operation, they determine the minimal electromagnetic energy density in the acceleration tunnel and its close vicinity.While, obviously, the average number of particles per second is at least as in a machine designed to operate at microwave wavelengths, in an optical acceleration structure, the volume where most of the electromagnetic energy is confined is reduced by several orders of magnitude.Consequently, the potential damage due to heat flow or temperature increase may become a significant obstacle, and it is our goal in this study to determine the main scaling laws of these processes. Before examining the heat dissipation in an acceleration structure, it is instructive to briefly review previous work on related topics.Attention to the thermal effects induced by the overheating of laser structures grew along with the progress made in high-power laser systems more than 20 years ago.Thermal considerations gradually became a limiting factor on the laser performance.Studies by Eggleston et al. [11] and Kane et al. [12] analyzed phenomena such as thermally induced stress, thermo-optical modification of the refractive index, birefringence, and thermal ''lensing'' in a slab geometry laser.The thermal stress and the refractive index modification [13,14], along with the birefringence [15,16] and the thermal lensing [17,18], were further investigated.Evidently, the damage caused by the thermal effects can be controlled by proper cooling.A number of practical cooling setups for laser systems were experimentally tested [19,20].Diamond films proved to be particularly useful in cooling schemes, being an excellent heat conducting material [21,22]. 
Two traditional theoretical models describe the cooling process.In the first case, the so-called Newton's law of heat transfer, the amount of heat extracted from a cooled surface is proportional to the temperature difference between the surface and the coolant.The proportionality factor h is the heat transfer coefficient, which is a measure of the cooling efficiency.Xie et al. [23][24][25] studied the factors that affect the heat transfer coefficient and its relation to the thermal effects inside the laser structure.The second theoretical model describing the cooling process is the heat sink, according to which the temperature at the outer surface is identical to the temperature of the adjacent heat sink.This is, in fact, the first model in the limit h ! 1. Analysis of thermal effects requires establishing the temperature distribution inside the structure.Koechner [26] was among the first to derive single pulse and multipulse solutions for an infinitely long and pumped uniformly in the longitudinal direction laser rod.A more realistic analysis, considering single and multipulse tem-perature distributions in a finite rod with an arbitrary distribution in the longitudinal direction of the pumping energy, is given in Refs.[27,28].The steady-state solution for a finite laser slab was developed [29], and the steadystate problem of a laser rod subject to a cylindrically symmetric and longitudinally homogeneous pumping was solved [16].The steady-state heat distribution caused by longitudinally inhomogeneous pumping was recently studied as well [30]. All studies mentioned above consider single-layered structures.Heat conduction in multilayered structures has also been investigated.For example, the thermal profile of a multilayered structure heated by a scanning laser or by an electron beam [31][32][33][34] has been determined, both being common processes in fields such as magneto-optical media, electron beam lithography, and ion implantation.A similar problem was solved in order to describe the photothermal deflection of a laser beam passing above heated multilayered media [35].In recent years, there has been an increasing interest in the heat transfer in multilayered structures [36 -39].In Ref. [36], a solution of the heat conduction equation for a two-layered cylindrical structure heated by a short laser pulse was introduced.The transient [37] and the steady-state [38,39] 3-dimensional heat conduction in multilayered structures was studied.Further extensive analysis on the transient heat transfer in multilayered structures was also performed recently [40 -43]. In the present study, we aim to determine the scaling laws regarding heat flow and temperature increase in the planar optical Bragg acceleration structure.We focus on the temperature developed due to the propagating laser, rather than including the wakefield as a source of heat dissipation and consequently obtaining a temperature rise.The part of the wakefield that propagates at the fundamental speed-of-light mode is therefore tacitly included in the estimation given here, whereas the high frequency content of the wake is not expected to significantly change the general picture.In the next section, we examine the thermal energy flow in the structure illustrated in Fig. 
1 when a train of laser pulses in injected into it.Each pulse is assumed to have an electromagnetic power profile which is based on an analytic estimate.Assuming similar thermal properties to the various layers, it is demonstrated that, for the set of parameters of interest, the deviation of the peak temperature or heat flow from the time-averaged value is extremely small.Analytic expressions for the average temperature and heat flow are developed.These are the important quantities that provide an excellent description of the thermal process since the latter occurs on a much longer time scale comparing to the electromagnetic processes. In Sec.III, we establish the time-averaged temperature and heat flow in the Bragg structure and develop corresponding analytic expressions.In Sec.IV, we discuss two possible configurations using the tools developed in VADIM KARAGODSKY et al. Phys.Rev. ST Accel.Beams 9, 051301 ( Sec. III.In the last section, we briefly examine the thermal stress developed in the dielectric layers of the Bragg acceleration structure. II. APPROXIMATE DYNAMIC ANALYSIS Consider the planar optical Bragg acceleration structure shown in Fig. 1.The structure may support an accelerating gradient of the order of E A ' 1 GV=m at a typical radiation wavelength of 0 ' 1 m.We assume that a train of laser pulses is injected into the acceleration structure, and it is the goal of this study to establish the temperature dynamics within the structure.For simplicity, it is assumed in this section that the dielectric layers have identical thermal characteristics (single layer), and the dissipated power across the device is approximated by its general behavior as given in Ref. [5].Consequently, we may develop an analytic result, which is an approximation of the exact solution.In the following section, the different thermal characteristics of the layers and the exact behavior of the electromagnetic field are accounted for. The power propagates at the speed-of-light mode, having a transverse profile X 0 x, which is determined by the detailed geometric and electrical characteristics of the structure as well as the vacuum wavelength.For the planar optical Bragg acceleration structure, the transverse profile may be approximated by [5] where x c 0 =4" 1 ÿ 1 ÿ1=2 " 2 ÿ 1 ÿ1=2 jln" 1 " 1 ÿ 1 ÿ1=2 =" 2 " 2 ÿ 1 ÿ1=2 j ÿ1 ; " 1 is the permittivity of the layer adjacent to the core, and the remainder of the layers are alternating with materials " 2 and " 1 .Each pulse has a time duration T p which is of the order of picoseconds, and they are separated by intervals of T rr that, in turn, may vary from microseconds to nanoseconds.The pulses travel at a group velocity V gr , and have a temporal profile of T 0 t, so that for the simple case of a rectangular pulse, the temporal profile of the power density may be represented as a Fourier series T 0 t P f expj2t=T rr , where f T p =T rr sincT p =T rr and sinc sin=.The temperature developed inside the structure is due to the dissipated electromagnetic power given by where tan is the material's loss tangent.Consequently, the dissipated power may be written in the form P loss P 0 X 0 xT 0 t ÿ z=V gr .Based on Ref. [5], the field in the vacuum tunnel, having a time dependence of e j!t , is of the form E z E A expÿj !c z, and accordingly, the peak value of the dissipated power may be approximated by note that the expressions in Eqs. ( 2) and (3) tacitly assume time average over one period of the radiation field. 
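The rectangular pulse train and its Fourier coefficients f_μ = (T_p/T_rr) sinc(μT_p/T_rr) can be checked numerically. The sketch below uses a picosecond pulse at a 1 GHz repetition rate and a finite truncation order, both illustrative choices, and highlights that the dc coefficient equals the duty factor T_p/T_rr that governs the slow thermal response.

```python
import numpy as np

# Sketch: Fourier-series representation of the rectangular pulse train feeding the structure,
# T0(t) = sum_mu f_mu exp(j 2 pi mu t / Trr) with f_mu = (Tp/Trr) sinc(mu Tp/Trr).
# np.sinc(x) = sin(pi x)/(pi x), i.e. the normalized sinc assumed here.
Tp, Trr = 1.0e-12, 1.0e-9       # ps-scale pulse, 1 GHz repetition rate (values discussed in the text)

mu = np.arange(-5000, 5001)      # truncation order (illustrative); more terms sharpen the pulse edges
f_mu = (Tp / Trr) * np.sinc(mu * Tp / Trr)

# dc term = duty factor Tp/Trr: the only component that matters for the slow thermal diffusion.
print(f"f_0 = {f_mu[mu == 0][0]:.3e}, duty factor Tp/Trr = {Tp/Trr:.3e}")

# Reconstruct one pulse; the truncated sum shows small ripples relative to the ideal rectangle.
t = np.linspace(-2 * Tp, 2 * Tp, 401)
T0 = np.real(np.exp(2j * np.pi * np.outer(t, mu) / Trr) @ f_mu)
print(f"partial-sum value at pulse centre: {T0[len(t)//2]:.3f} (ideal rectangle height: 1)")
```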
All the characteristic time parameters of the electromagnetic problem are orders of magnitude shorter than that corresponding to the diffusion process.The latter controls the temperature variation T throughout the system described by where D is the thermal diffusion coefficient, and T is the thermal conductivity.It is assumed that at the boundary of the outermost layer x D ext the structure is in thermodynamic equilibrium with a ''heat sink'' which maintains a constant temperature at that boundary, i.e., Tx D ext 0. In addition, there is no heat flow from the dielectric to the vacuum core, and thus we ignore radiative heat transfer and assume @T=@xx D int 0. Subject to the above conditions, the diffusion equation may be solved to obtain The transverse heat flow may be derived from the temperature by Q ÿ T<EMAIL_ADDRESS>quantities of interest are readily evaluated: the average change in temperature T AV over the repetition rate period of the pulse T rr at the vacuumdielectric interface x D int , and second, the average heat flow Q AV at the external layer x D ext .These two quantities were evaluated to read where the right-hand side of the top equation was evaluated assuming that D ext ÿ D int x c =2, a condition that may be dictated in part by the requirement for mechanical strength of the structure.Assuming D ext ÿ D int x c =2, the thermal transconductance has a very simple form, Q AV =T AV ' T =D ext ÿ D int , which is virtually independent of the electromagnetic characteristics of the structure x c .In order to investigate the above results, we shall consider a silica-zirconia structure " 1 2:1; " 2 4 with a set of parameters given in Table I. Figure 2 illustrates the two quantities Q AV and T AV as a function of the core halfwidth D int .The curves are computed assuming that the field on axis remains constant E A 1 GV=m, so that the dissipated power P 0 of Eq. ( 6) changes with D int according to Eq. ( 3).Both quantities are linearly dependent on the material loss tangent and their value at D int 0:2 m is T AV 1:94 10 8 tan [K], and Q AV 3:2 10 9 tan [W=cm 2 ].Evidently, for the range of core half-widths shown in Fig. 2, if tan < 10 ÿ8 , the temperature rise in the structure is less than 15 K, and the average heat flow is about 250 W=cm 2 .While the average quantities T AV ; Q AV do not seem to pose a difficulty provided that tan 10 ÿ8 , it still Parameters used for the simulations shown in Figs.2-4. Structure parameters Pulse and thermal parameters remains to determine whether the peak temperature or heat flow do not impose any stringent constraints.For this purpose, the expression in Eq. ( 5) is plotted in Fig. 3, showing a snapshot of the temperature at the core boundary as a function of the longitudinal coordinate, for a structure having the parameters listed in Table I and D int 0:6 m. The decay of the temperature behind each pulse is linear, as the diffusion time for this structure is of the order of 10 ÿ4 s, thus much larger than T rr and T p , and the maximal relative change in temperature, maxjT ÿ T AV =T AV j < 10 ÿ4 , is extremely small.This last quantity is depicted in Fig. 
4 for different values of pulse periodicity T rr , while the remainder of the parameters are as given in Table I.For a higher repetition rate (shorter T rr ), the excitation is closer to a cw signal, and therefore the temperature fluctuations are smaller.Even for a repetition rate of 10 MHz T rr 10 ÿ7 s), the relative change in temperature is less than 1%.Clearly, for most practical purposes the deviation from the average values is miniscule.Therefore, in what follows, only the dc component v 0 in Eq. ( 5) will be considered. III. STEADY-STATE SOLUTION In the previous section, it was established, based on a simplified model, that at the high-repetition rate regime, the resulting temperature is to a very good approximation the solution of the steady-state problem, in which a cw signal is injected into the waveguide.In this section, the steady-state solution @=@t 0 of the diffusion equation [Eq.( 4)], which becomes Poisson's equation, is developed for the exact configuration.Evidently, the steady-state solution of the heat dissipation problem is independent of the longitudinal coordinate @=@z 0.Moreover, for developing a realistic solution, a convective cooling mechanism is taken into consideration. A. Exact steady-state solution At this point, we are in a good position to establish the impact of the detailed geometry which facilitates the electromagnetic field confinement (see Fig. 1) on the steadystate solution.In each dielectric layer, the longitudinal electric field is of the form [5] and, assuming that the field is confined, the amplitudes may be derived by imposing the boundary conditions, beginning from the vacuum core, where the field profile is known; k n !=c " r;n ÿ 1 p is the transverse wave number in the nth layer.With that in mind, the explicit form of the average dissipated power density is In each layer, Poisson's equation with the time-averaged dissipated power density term 1 D T P loss T p T rr must be solved, so that the change in the temperature in the nth layer satisfies At the transition from one layer to another, both the temperature change and the heat flow ought to be continuous.Special attention is required at the vacuum-dielectric interface and at the outermost layer.The former is left as in the dynamic analysis case, namely, For the external layer, we shall generalize the idealized assumption of a heat sink T SS;N j xD ext 0 to include the impact of a convective cooling process; therefore h being the convective heat transfer coefficient, and values of 3 to 20 W=cm 2 K are considered in what follows.Values of this order were demonstrated to be achievable in cooling schemes of microelectronic devices [44 -47]. B. Approximate steady-state analytic solution In addition to the exact solution presented above, it is instructive to develop an approximate analytic expression relying on Eq. ( 1) and solving Eq. ( 9) subject to the boundary conditions in Eqs.(10) and (11).This is similar to the one layer model discussed in Sec.II, but only the dc term [ 0 in Eq. ( 5)] is taken here, and, in addition, the presence of the various layers is accounted for effectively.For this purpose, we shall assume a uniform thermal resistance, which is the serial resistance of the two layers, namely, we may define the effective thermal conductivity T;eff as and, consequently, which for h ! 1 are identical to those of a uniform layer discussed above [Eq.( 6)].The term 1 ÿ x c =2ÿ T;eff =h D ext ÿD int in Eq. 
( 13) expresses the temperature rise above the value dictated by the heat sink idealization, caused by a suboptimal convective coolant. The approximate solution may be further improved by using P 0 instead of P 0 , where is a form factor given by P exact x being the exact dissipated power profile according to Eq. (8).By this, we require that the total dissipated power across the structure is identical in the approximate and the exact expressions, and since this is the source of the temperature rise, a better approximation is obtained. IV. ANALYSIS OF TWO CONFIGURATIONS Silica-zirconia.-In this section, we examine the temperature distribution and heat flow as obtained by the two solution methods introduced in the previous section.Figure 5 illustrates the steady-state temperature variation across the silica-zirconia acceleration structure for a set of parameters listed in Table II.The exact solution (Sec.III A) is represented by square markers, and the approximate solution (Sec.III B) is represented by a dashed line -the two are virtually indistinguishable.For the simulation purposes of the exact solution, it was assumed that the loss tangent of each material is the one corresponding to a conventional optical fiber made of that material, and having an attenuation of dB=km 1. Attenuation coefficients of this order and less were, in fact, measured in silica fibers [48,49].For convenience, we illustrate also the exact dissipated electromagnetic power profile, as evaluated based on Ref. [5], and the exponential decay approximation as given in Eq. ( 1).Each quantity is normalized by its maximal approximate value; the relevant normalization constants are listed in Table II.The maximum values are given in terms of the loss tangent of the first material (silica) denoted by tan 1 .The accuracy of the approximate solution of the temperature is better than 0.5% relative to the temperature maximal value, and the accuracy of the approximate heat-flux solution, shown in Fig. 6, is better than 2% relative to its maximal value. Three values of the convective heat transfer coefficient h were examined, showing the role of the cooling mechanism in reducing the maximal temperature (see Table II).For the unnormalized curves, the effect of a different h is to add an offset to the entire curve according to Eq. ( 12).However, the heat-flux profile, which is depicted in Fig. 6, is independent of h, as revealed by the approximate analytic expression of Eq. ( 12).Moreover, the heat flow is independent of the thermal conductivities, since according to Eq. ( 9), in conjunction with Q SS ÿ T @T AV =@x, we may readily conclude that the heat flow satisfies the equation @Q SS =@x T p =T rr P loss x, subject to the vacuum-0.II.For each value of h, the dashed line is the approximate curve, and the exact solution is indicated by square markers.The dissipated power profiles, approximate (dashed line) and exact (solid line), are also plotted.All quantities are normalized by their maximal approximate value. showing that the heat flow indeed depends only on the dissipated power profile.This is expected since, in the steady-state solution, no energy is accumulated in time at any point in space, implying that all the dissipated power must leave the system. 
Assuming that the values of the loss tangent are as given in Table II (tan δ ≈ 10⁻¹¹), and even for the repetition rate examined of 1 GHz, the temperature rise in the silica-zirconia structure is less than 1 K, and the heat flow across the external surface is a fraction of 1 W/cm². For comparison, the acceptable technological limiting value of passive heat extraction is of the order of 1500 W/cm². Therefore, in the framework of this operation regime, the system works orders of magnitude below the limit of heat extraction and well within the range of feasible temperature stabilization. Clearly, if the materials exhibit losses higher by a few orders of magnitude, the temperature rise and the heat flow would attain unacceptable values according to the linear relation presented in the last four rows of Table II. Silica-silicon.-A structure having a much better confinement and, consequently, a higher interaction impedance (see Ref. [5]) is a silica-silicon acceleration structure, having a transverse decay parameter of x_c = 0.55 μm, rather than the x_c = 2.68 μm of the silica-zirconia structure. It follows that, for the silica-zirconia structure, more layers are necessary for a confinement similar to that of the silica-silicon structure. The temperature profile for this structure is given in Fig. 7, and the heat-flux profile is depicted in Fig. 8, while the simulation parameters are presented in Table II. Since the transverse decay of the electromagnetic power is much stronger than that of the silica-zirconia structure, the approximate exponential decay describes the power fluctuations less accurately, and consequently, the form factor [see Eq. (14)] is smaller. As a result, the accuracy of the approximate solutions is also lower; the accuracy of the analytic estimate of the temperature is better than 1% relative to the temperature maximal value, while the accuracy of the approximate heat-flux solution is better than 10% relative to its maximal value. Having assumed throughout the simulations that the accelerating longitudinal field remains constant, it follows that in the silica-silicon structure less power is dissipated, and, consequently, the temperature and the heat flow reach lower values, as can be seen in Table II. This clear preference is further supported by the high thermal conductivity of the silicon.
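Because both the average temperature rise and the average heat flow scale linearly with tan δ, the numbers quoted above are easy to re-evaluate for other loss levels. The sketch below uses the D_int = 0.2 μm prefactors given in the dynamic analysis (T_AV = 1.94×10⁸ tan δ K, Q_AV ≈ 3.2×10⁹ tan δ W/cm²) together with the 1500 W/cm² extraction limit; it is a scaling estimate, not a substitute for the exact layered solution.

```python
# Sketch: linear scaling of average temperature rise and heat flow with the loss tangent,
# using the D_int = 0.2 um prefactors quoted in the text for the silica-zirconia structure.
T_AV_PREFACTOR = 1.94e8         # K per unit tan(delta)
Q_AV_PREFACTOR = 3.2e9          # W/cm^2 per unit tan(delta)
HEAT_EXTRACTION_LIMIT = 1500.0  # W/cm^2, technological limit for passive heat extraction (from the text)

for tan_delta in (1e-11, 1e-8, 1e-6):
    T_av = T_AV_PREFACTOR * tan_delta
    Q_av = Q_AV_PREFACTOR * tan_delta
    ok = "below" if Q_av < HEAT_EXTRACTION_LIMIT else "ABOVE"
    print(f"tan(delta)={tan_delta:.0e}: T_AV ~ {T_av:.2e} K, Q_AV ~ {Q_av:.2e} W/cm^2 ({ok} extraction limit)")
```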
Two comments are in place before concluding this section.First, it is important to emphasize that no attempt has been made here to optimize materials but rather to demonstrate the main physical scaling laws.For example, a loss parameter of the order of tan 10 ÿ11 has been reported for silica.Yet, it is not obvious that in the process of making the Bragg structure this parameter is not altered.However, this should be the focus of an experimental study which is beyond the scope of the present one.Second, it was tacitly assumed that the thermal properties of the bulk material are valid even at an infinitesimal distance from the interface with another material.Deviation from the bulk values of the thermal properties may occur due to misalignments of the two lattices and due to the fabrication process of the dielectric layers.Since, if it occurs, this process takes place in an infinitesimally small volume, we may assume that its relative weight is negligible.A change in the loss tangent at the interfaces may be effectively taken into account by redefining the form factor in Eq. ( 14). V. THERMALLY INDUCED STRESS When the temperature distribution across the structure is inhomogeneous, the structure becomes subject to thermal stress.A theoretical foundation of the stress profile in single-layered laser rod and slab geometries has been extensively studied [11,14,18,24].It has been established [11] that, for the temperature distribution of a crosssectional dependence, T Tx, the stress profile is given by where, E [Pa] is the elastic modulus, [K ÿ1 ] is the thermal expansion coefficient, v is Poisson's ratio, and D int dxTx represents the space-averaged temperature.The maximal temperature rise above average which can be tolerated by the material without fracturing is known as the thermal shock resistance and is given by with f [Pa] representing the flexure strength, i.e., the maximum stress tolerable by the material.The thermomechanical parameters of fused silica and silicon [50] are listed in Table III.It is evident from the values listed in Table III that the thermal stress does not pose a severe limitation factor on the system performance with this set of materials.For example, the maximum temperature increase allowed based on thermal stress considerations is more than 3 orders of magnitude larger than the anticipated temperature increase. When dealing with multilayered structures, a possible stress induced limitation factor may occur due to the mis- match of the thermomechanical properties on both sides of the interface between the inner layers, which may create cracks.Relaxation of the thermal mismatch induced stress [51] and designing crack-free Bragg mirrors [52,53] were addressed in recent years.In the case of a Bragg acceleration structure, the temperature where these effects become significant is prohibitively high from the perspective of radiation confinement and phase control, due to changes in the dielectric coefficients, since the change of the refractive index with the temperature, @n=@T, is of the order of 10 ÿ4 -10 ÿ5 K ÿ1 [54,55]. VI. 
CONCLUSION In conclusion, we have analyzed the temperature and heat flow across the planar optical Bragg acceleration structure. An approximate dynamic analysis showed that, for the range of parameters of interest, it is sufficient to consider the steady-state problem, rather than account for high-repetition-rate pulses. Approximate analytic expressions for the steady-state temperature and heat variations were given [Eq. (12)]. These expressions were tested against the exact solutions and their accuracy was found to be better than 1% for the temperature, and better than 10% for the heat flux, relative to their respective maximal values. With these approximate solutions, it is possible to estimate, accounting only effectively for the dielectric layers, the maximal change in temperature at the internal boundary and the heat flow at the external boundary. Assuming an accelerating gradient of 1 GV/m and a low-loss material similar to that existing in communication optical fibers (1 dB/km, corresponding to a loss tangent of the order of tan δ ≈ 10⁻¹¹), the temperature increase is less than 1 K, and the heat flow is of the order of 1 W/cm². This heat flow is 3 orders of magnitude lower than the known technological limit, implying that a proper choice of materials may eliminate thermal considerations from the list of obstacles for the operation of an optical accelerator. In this regard, the wavelength, which we have taken arbitrarily to be 1 μm, is another important parameter that should be optimized in order to achieve the lowest dissipation possible in the materials involved. FIG. 2. The average change in temperature T_AV and the average heat flow Q_AV as a function of D_int, for a silica-zirconia structure with the parameters given in Table I. Both quantities are normalized by their value at D_int = 0.2 μm, T_AV(D_int = 0.2 μm) = 1.94×10⁸ tan δ [K] and Q_AV(D_int = 0.2 μm) = 3.21×10⁹ tan δ [W/cm²]. FIG. 4. The relative change in temperature max|T − T_AV|/T_AV as a function of the pulse separation T_rr. The structure parameters excluding T_rr are given in Table I. FIG. 5. Temperature profile for different values of h in the silica-zirconia structure, the parameters of which are given in Table II. For each value of h, the dashed line is the approximate curve, and the exact solution is indicated by square markers. The dissipated power profiles, approximate (dashed line) and exact (solid line), are also plotted. All quantities are normalized by their maximal approximate value. FIG. 6. Heat-flux profile in the silica-zirconia structure, the parameters of which are given in Table II. The dashed line is the approximate curve, and the exact solution is indicated by square markers. The dissipated power profiles, approximate (dashed line) and exact (solid line), are also plotted. The two quantities are normalized by their maximal approximate value. TABLE II. Parameters used for the simulations presented in Figs. 5-8, and maximum values of the temperature and heat flux. FIG. 7. Temperature profile for different values of h in the silica-silicon structure, the parameters of which are given in Table II. For each value of h, the dashed line is the approximate curve, and the exact solution is indicated by square markers. The dissipated power profiles, approximate (dashed line) and exact (solid line), are also plotted. All quantities are normalized by their maximal approximate value. TABLE III.
Thermomechanical properties of fused silica and silicon [50]. FIG. 8. Heat-flux profile in the silica-silicon structure, the parameters of which are given in Table II. The dashed line is the approximate curve, and the exact solution is indicated by square markers. The dissipated power profiles, approximate (dashed line) and exact (solid line), are also plotted. The two quantities are normalized by their maximal approximate value.
7,315
2006-05-23T00:00:00.000
[ "Physics" ]
Ion-acoustic rogue waves in double pair plasma having non-extensive particles The modulational instability (MI) of ion-acoustic (IA) waves (IAWs) and associated IA rogue waves (IARWs) in double pair plasma containing non-extensive electrons, iso-thermal positrons, negatively and positively charged ions have been governed by the standard nonlinear Schr\"{o}dinger equation (NLSE). It has been figured out from the numerical study of NLSE that the plasma system holds modulationally stable (unstable) region in which the dispersive and nonlinear coefficients of the NLSE have the opposite (same) signs. It is also found that the fundamental features of IAWs (viz., MI criteria, amplitude and width of the IARWs, etc.) are rigorously organized by the plasma parameters such as mass, charge state, and number density of the plasma components. The existing outcomes of our present study should be helpful for understanding the nonlinear features of IAWs (viz., MI and IARWs) in both laboratory and space plasmas. Introduction Double pair plasma (DPP) is characterised as fully ionized gas having electrons, positron as well as positive and negative ions, and is believed to exist in astrophysical environments such as Van Allen radiation belt and near the polar cap of fast rotation neutron stars [1], solar atmosphere [2], D-region (H + , O − 2 ) and F-region (H + , H − ) of the earths's ionosphere [3], upper region of Titan's atmosphere [4] and also in laboratory environments [5,6,7,8,9,10]. A number of authors [11,12,13] studied ion-acoustic (IA) waves (IAWs) and associated nonlinear electrostatic structures namely, solitons, shocks, rogue waves, and double layers in the DPP. Maxwellian distribution function is one of the most widely used velocity distribution functions of particles to describe the dynamics of the iso-thermal particles. But it has been observed that the characteristics of majority of particles in the space [14] and laboratory plasma environments [15] are departed from the Maxwellian distribution. So, to narrate the non-Maxwellian particles, Renyi [16] first recognized the modification of Maxwellian distribution, and finally, Tsallis [17] generalized the non-extensive q-distribution. It is noted that the index q in the non-extensive q-distribution characterizes the degree of non-extensivity of the particles [18]. Shalini et al. [19] studied IAWs in non-extensive plasma having two-temperature electrons, and observed that the width of the first and second order IA rogue waves (IARWs) associated with IAWs decreases with increasing the value of q but the amplitude of the first and second-order IARWs associated with IAWs is remain constant. Tribeche et al. [20] investigated electrostatic solitary waves in presence of the non-extensive electrons, and found that the amplitude of the potential increases with non-extensive parameter. Hafez and Talukder [21] examined the propagation of the nonlinear electrostatic waves in a three-component nonextensive plasma having inertialess non-extensive electrons and positrons, and inertial ions, and reported that the amplitude of the soliton increases with increasing temperature of the nonextensive electron. The investigation of the modulational instability (MI) [22,23,24,25,26,27] and associated nonlinear features of wave is one of the most important research areas for plasma physicists. It is noteworthy that the MI of the wave is considered to be the primary reason for the formation of massive and gigantic rogue waves (RWs) [28]. 
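For readers unfamiliar with the object being studied, the sketch below evaluates the first-order rational (Peregrine-type) solution of the NLSE that is commonly used in rogue-wave studies of this kind. Since the paper's own expression is not reproduced here, this particular functional form and the P, Q values are assumptions; the solution exists only in the modulationally unstable regime P/Q > 0.

```python
import numpy as np

# Hedged sketch: first-order rational (Peregrine-type) rogue-wave solution of the NLSE,
# i dPhi/dtau + P d2Phi/dxi2 + Q |Phi|^2 Phi = 0, in the form commonly used in this literature:
#   Phi(xi,tau) = sqrt(2P/Q) * [ 4(1 + 4i P tau) / (1 + 4 xi^2 + 16 P^2 tau^2) - 1 ] * exp(2i P tau).
# This exact form is our assumption, and P, Q are placeholder values with P/Q > 0.

P, Q = 0.4, 0.9   # illustrative dispersion / nonlinearity coefficients

def rogue(xi, tau):
    envelope = 4.0 * (1.0 + 4.0j * P * tau) / (1.0 + 4.0 * xi**2 + 16.0 * P**2 * tau**2) - 1.0
    return np.sqrt(2.0 * P / Q) * envelope * np.exp(2.0j * P * tau)

peak = abs(rogue(0.0, 0.0))
background = abs(rogue(-5.0, 0.0))
print(f"peak |Phi| at (0,0): {peak:.3f}  (= 3*sqrt(2P/Q) = {3*np.sqrt(2*P/Q):.3f})")
print(f"|Phi| far from the centre: {background:.3f}  (~ background sqrt(2P/Q) = {np.sqrt(2*P/Q):.3f})")
```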
Rogue wave, which is the rational solution of the standard nonlinear Schrödinger equation (NLSE) [28,29,30,31], is a short-lived phenomenon which emerges from nowhere and disappears without a trace [29]. A number of authors have investigated the MI of IAWs by considering the non-extensive particles [32,33,34]. Bains et al. [32] studied the MI of IAWs in presence of non-extensive electrons, and demonstrated that the critical wave number (k c ) at which the instability sets in increases with the increase in the value of q (q > 0). Bouzit et al. [33] investigated the stability conditions of IAWs in presence of non-extensive non-thermal electrons. Eslami et al. [34] investigated the MI of IAWs in electron-positron-ion plasma having non-extensive electrons and positrons, and observed that the k c decreases with q for q < 0 while increases with q for q > 0. To the best knowledge of the authors, no attempt has been made to investigate the MI of the IAWs and associated IARWs in a four-component plasma containing inertial positively and negatively charged ions, and inertialess non-extensive electrons, and iso-thermal positrons. Therefore, it is a rational fascination to examine the influence of non-extensive electrons and iso-thermal positrons on the MI of IAWs and associated IARWs in a four-component DPP. The manuscript is organized as the following pattern: The model equations are presented in Sec. 2. The derivation of the NLSE is shown in Sec. 3. The stability of IAWs is provided in Sec. 4. The IARWs is demonstrated in Sec. 5. Finally, a conclusion is given in Sec. 6. Model Equations We consider the propagation of IAWs in a collisionless, fully ionized, unmagnetized plasma system consisting of warm negative ions, symbolized by n −i (charge q −i = −Z −i e; mass m −i ), warm positive ions, denoted by n +i (charge q +i = Z +i e; mass m +i ), non-extensive q-distributed electrons, identified by n e (charge q e = −e; mass m e ), and iso-thermal positrons, expressed by n p (charge q p = +e; mass m p ); where Z −i (Z +i ) is the charge state of the negatively (positively) charged ion, and e being the magnitude of the charge of the electron. The charge neutrality condition of our present model can be written as n p0 + Z +i n +i0 = n e0 + Z −i n −i0 . Now, the normalized equations can be given in the following form where n −i , n +i , n e , and n p are normalized by n −i0 , n +i0 , n e0 , and n p0 , respectively; u −i and u +i indicate the negatively and positively charged ion fluid, respectively, normalized by the IA wave speed C −i = (Z −i k B T e /m −i ) 1/2 (with k B being the Boltzmann constant and T e being the temperature of the electron); φ denoted as the electrostatic wave potential, normalized by k B T e /e; the time and space variables are, respectively normalized by where P −i0 (T −i ) being the equilibrium pressure (temperature) of the negatively charged ion, and P +i = P +i0 (N +i /n +i0 ) γ with P +i0 = n +i0 k B T +i ; where P +i0 (T +i ) being the equilibrium pressure (temperature) of the positively charged ion, respectively, and γ = (N + 2)/N (where N recognized as the degree of freedom and for one-dimensional case N = 1, so γ = 3). Other parameters can be defined as , and λ 5 = Z +i n +i0 /Z −i n −i0 . 
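The quasi-neutrality condition and the density ratio λ5 introduced above can be checked with a few lines of code; the charge states and densities in the sketch are arbitrary example values chosen only to show the bookkeeping.

```python
# Sketch: check the quasi-neutrality condition n_p0 + Z_pi*n_pi0 = n_e0 + Z_ni*n_ni0 and
# evaluate lambda_5 = Z_pi*n_pi0 / (Z_ni*n_ni0) for an illustrative parameter set.
# The charge states and densities below are arbitrary example values, not the paper's.

Z_pi, Z_ni = 1, 1          # charge states of positive / negative ions
n_pi0, n_ni0 = 0.6, 0.5    # equilibrium ion densities (arbitrary normalized units)
n_e0 = 0.3                 # equilibrium electron density

n_p0 = n_e0 + Z_ni * n_ni0 - Z_pi * n_pi0   # positron density fixed by quasi-neutrality
assert n_p0 > 0, "chosen densities must leave a positive positron density"

lambda_5 = (Z_pi * n_pi0) / (Z_ni * n_ni0)
print(f"n_p0 = {n_p0:.2f} (from neutrality), lambda_5 = {lambda_5:.2f}")
```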
Now, the number densities of the nonextensive q−distributed [17,35] electron and iso-thermally distributed [36,37] positron can be represented by the following normalized equations where λ 6 = T e /T p (with T p being the temperature of the isothermally distributed positron and T e > T p ). The parameter q, generally known as entropic index which quantifies the degree of non-extensivity. It is noteworthy that when q = 1, the entropy reduces to standard Maxwell-Boltzmann distribution. On the other hand, in the limits q > 0 (q < 0), the entropy shows subextensivity (super-extensivity). Now, by substituting Eqs. (6) and (7) into Eq. (5) and expanding up to third order in φ, we can draw up as where It is noted that the terms containing M 1 , M 2 , and M 3 in Eq. (8) are due to the contribution of the non-extensive q-distributed electrons and iso-thermal positrons. Derivation of the NLSE To study the MI of the IAWs, first we want to construct the NLSE by employing the reductive perturbation method. In that case, the stretched co-ordinates can be written in the following fashion [35,38,39,40,41,42] where v g is the group speed and ǫ is a small parameter. After that the dependent variables can be represented as [35,38,39,40,41,42] Π( where , and k (ω) is real variables representing the carrier wave number (frequency). The derivative operators can be showed as Now, by substituting Eqs. shortened equations can be presented as where S = λ 1 k 2 − ω 2 and A = ω 2 − λ 3 k 2 . These equations provide the dispersion relation of IAWs in the following form where I = (λ 1 k 2 + λ 3 k 2 + λ 1 M 1 + λ 3 M 1 + λ 2 λ 5 + 1), U = (k 2 + M 1 )/k 2 , and J = k 2 (λ 1 λ 3 k 2 + λ 1 λ 3 M 1 + λ 3 + λ 1 λ 2 λ 5 ). However, to obtain the real and positive values of ω, the conditions I 2 > 4U J must be maintained. It is noted that the positive and negative signs in Eq. (18) resembled to the fast (ω f ) and slow (ω s ) IA modes, respectively. The second order (m = 2 with l = 1) equations and with the compatibility condition, we can be written the group speed of the IAWs where L = λ 2 λ 3 λ 5 k 2 S 2 + λ 2 λ 5 ω 2 S 2 + λ 2 λ 5 AS 2 . Now, the coefficient of ǫ (when m = 2 with l = 2) yield the second-order harmonic amplitudes are found to be proportional to |φ (1) where Next, consider the image for m = 3 with l = 0 and m = 2 with l = 0, which margined the zeroth harmonic modes. In such a way we can get the following results where , , Lastly, the third-order harmonic modes (m = 3 with l = 1), with the assistance of Eqs. (14)−(29), represent a complete set of equations, which can be transformed to the NLSE: where Φ = φ (1) 1 for simplicity. In Eq. (30), P is the dispersion co-efficient, which can be written as and Q is the nonlinear co-efficient, which can be written as where It may be noted here that both P and Q are directly depend on different parameters namely λ 1 , λ 2 , λ 3 , λ 4 , λ 5 , λ 6 , q, and are indirectly depend on mass, number density, temperature, and charge state of the different plasma components. Stability of IAWs The propagation of IAWs is modulationaly stable when P and Q have opposite sign (i.e., P/Q < 0), and is modulationally unstable when both P and Q have same sign (i.e., P/Q > 0). The effect of electron number density on the stability condition of IAWs can be understood by plotting P/Q with k for different values of λ 4 in Fig. 3. 
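From the coefficients I, U and J and the stated requirement I² > 4UJ, Eq. (18) reads as a biquadratic in ω, Uω⁴ − Iω² + J = 0, whose two positive roots give the fast and slow modes; that reading, and the parameter values below, are assumptions made for illustration.

```python
import numpy as np

# Hedged sketch: fast and slow IA modes from a biquadratic dispersion relation
#   U*w^4 - I*w^2 + J = 0  =>  w^2 = (I +/- sqrt(I^2 - 4*U*J)) / (2*U).
# This quartic-in-omega form is our reading of Eq. (18); the lambda_i and M_1 values are invented.

l1, l2, l3, l5 = 0.8, 0.5, 0.3, 1.2   # placeholder lambda_1, lambda_2, lambda_3, lambda_5
M1 = 0.6                               # placeholder contribution of electrons/positrons

def modes(k):
    I = l1*k**2 + l3*k**2 + l1*M1 + l3*M1 + l2*l5 + 1.0
    U = (k**2 + M1) / k**2
    J = k**2 * (l1*l3*k**2 + l1*l3*M1 + l3 + l1*l2*l5)
    disc = I**2 - 4.0*U*J
    if disc < 0:
        return None, None              # no real propagating modes for this k
    w_fast = np.sqrt((I + np.sqrt(disc)) / (2.0*U))
    w_slow = np.sqrt((I - np.sqrt(disc)) / (2.0*U))
    return w_fast, w_slow

for k in (0.5, 1.0, 2.0):
    wf, ws = modes(k)
    if wf is None:
        print(f"k={k}: no real modes (I^2 < 4UJ)")
    else:
        print(f"k={k:>3}: omega_fast={wf:.3f}, omega_slow={ws:.3f}")
```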
It is easy to demonstrate from this figure that the stable (unstable) parametric regime of IAWs increases (decreases) with the increase in the value of the equilibrium electron number density. The impact of sub-extensivity and super-extensivity of electrons on the stability condition of IAWs can be seen in Figs. 4 and 5, respectively. It is obvious from these two figures that the sub-extensive property of the electrons allows the IAWs to be stable for large wave number while the super-extensive property of the electrons allows the IAWs to be stable for small wave number. It is worthy to mention that the first-order rogue wave solution of the NLSE indicates that a considerable amount of IAWs energy is condensed into a very small domain in DPP. The effect of the number density and charge state of both positively and negatively charged ions on the amplitude and width of the IARWs can be observed from Fig. 6. It is noted that an increase in the number density of positive (negative) ions tend to enhance (decrease) both the amplitude and width of the IARWs in the modulationally unstable parametric regime (P/Q > 0) for a constant value of positive and negative ions charge state. The physics of this result is that an increase in the value of positive (negative) ion number density tend to increase (decrease) the nonlinearity as well as amplitude and width of the IARWs. The nature of IARWs may also be affected by the electron and positron temperature which can be observed in Fig. 7. This figure reveals that an increase in the value of the electron (positron) temperature would make the amplitude and width of the IARWs associated with IAWs smaller (taller). The physics behind this result is that the nonlinearity of the plasma medium as well as height and width of the IARWs decreases (increases) with electron (positron) temperature. Conclusion We have scrutinized the MI of IAWs and associated IARWs in a four-component DPP having inertial positive and negative ions and inertialess non-extensive electrons and iso-thermal positrons by deriving a standard NLSE. It is noted that all of the plasma components in a DPP medium play a vigorous role in the stability criteria of the IAWs. However, the essence of our findings can be summarized as follows:
3,099.6
2020-12-17T00:00:00.000
[ "Physics" ]
On the rms-radius of the proton We study the world data on elastic electron-proton scattering in order to determine the proton charge rms-radius. After accounting for the Coulomb distortion and using a parameterization that allows to deal properly with the higher moments we find a radius of 0.895+-0.018 fm, which is significantly larger than the radii used in the past. Introduction. The root-mean-square (rms) radius of the proton is a quantity of great interest for an understanding of the proton; it describes the most important integral property concerning its size. Accurate knowledge of the rms-radius of the charge distribution is needed for the interpretation of high-precision measurements of transitions in hydrogen atoms, studied in connection with measurements of fundamental constants [1]; these measurements recently have made great progress, and are now limited by the accuracy with which the proton radius is known [2]. The radius is also needed for the planned measurements of muonic X-ray transitions [3]; these experiments can only scan a narrow frequency range, which must be chosen according to the best value of the rms-radius presently known. The proton rms-radius in the past in general has been determined from elastic electron-proton scattering. The usual approach has been to employ the most accurate cross sections at low momentum transfer q, perform an experimental separation of longitudinal (L, charge) and transverse (T, magnetic) contributions. The resulting charge data as a function of q 2 are then fit with an appropriate function to get the rms-radius , i.e. the q 2 = 0 slope of the form factor 1 . Alternative approaches have included theory-motivated fits such as given by the Vector Dominance Model (VDM) in combination with dispersion relations. Past results. The initial electron scattering experiments on the proton were performed some 40 years ago by the Hofstadter group at Stanford [4,5]. This data, mainly at medium q and not low q, was fitted using multi-pole form factors. From the parameters of the fit an rms-radius could be calculated. The resulting value of 0.81f m, which is still quoted in the literature, should have long been superseded by values coming from more precise data at lower q which are indeed sensitive to the rms-radius. In the seventies, accurate low-q data, mainly measured at the Mainz electron accelerator, became available [6]- [9]. After an L/T-separation, the data were usually fitted with a polynomial expansion of the form factor G e (q) = 1 − q 2 r 2 /6 + q 4 r 4 /120 − ... (1) and, in general, a floating normalization of the individual data sets in order to produce the lowest χ 2 . The most prominent result was probably the one obtained by Simon et al. [8], r rms = 0.862 ± 0.012 f m. Occasionally, fits with 2-or 4-pole expressions [21] were performed, and significantly bigger values, i.e. 0.88 ± 0.02f m and 0.92 ± 0.02 f m were found as compared to values determined at very low q [18]. The difference was partly understood [21] as a consequence of different treatments of the r 4 term. In parallel, fits based on dispersion relations and the VDM [22,23] were performed by several groups. These fits included much more theory input, and were constrained by the need to fit all four nucleon form factors. The most recent value resulting from such fits is the one of Mergell et al., 0.847 ± 0.009f m. The average, 0.854 ±0.012 f m, of this radius and the one of Simon et al is quoted as the "best" value in the compilation of Mohr and Taylor [24]. 
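The sensitivity to the r⁴ term mentioned above can be made quantitative with the expansion of eq. (1); the sketch below compares the q² and q⁴ contributions for a dipole-like proton, using the relation ⟨r⁴⟩ = 2.5⟨r²⟩² quoted later in the text and an example radius of 0.88 fm.

```python
# Sketch: size of the q^2 and q^4 terms in the form-factor expansion
#   G_e(q) = 1 - q^2 <r^2>/6 + q^4 <r^4>/120 - ...
# for a dipole-like proton, where <r^4> = 2.5 <r^2>^2 (relation quoted in the text).
# The radius value below is just an example input.

r2 = 0.88**2        # <r^2> in fm^2, example rms radius of 0.88 fm
r4 = 2.5 * r2**2    # dipole relation

print(f"{'q (1/fm)':>9} {'q^2 term':>10} {'q^4 term':>10} {'q^4/q^2':>9}")
for q in (0.5, 1.0, 1.5, 2.0):
    t2 = q**2 * r2 / 6.0
    t4 = q**4 * r4 / 120.0
    print(f"{q:9.1f} {t2:10.4f} {t4:10.4f} {t4/t2:9.3f}")
```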
Recent studies have provided additional insight: even for a system as light as the proton, Coulomb distortion of the electron waves needs to be accounted for [25,26]. This Coulomb distortion was shown to solve a long standing puzzle with the deuteron rms-radius, and Rosenfelder demonstrated [27] that it also increases the proton rms-radius. Using a restricted set of data and the above mentioned polynomial expansion he showed that the radius increases by about 0.01 f m when accounting for Coulomb distortion. Model-independent radii? In general, the groups studying the proton data have tried to extract a rms-radius that is model-independent. This is possible when using as in eq.(1) the expansion of G e (q) in terms of the moments r 2 , r 4 ,.. . At very low q, one could hope that the q 4 r 4 -term is small, such that the r 2 -term can be determined without using a specific model for G e (q). This is true in principle, but very hard in practice. At small q also the q 2 r 2 /6-term is small, and it is difficult to determine it accurately from the experimental form factors which are proportional to 1−q 2 r 2 /6+... . Small systematic errors in the normalization of the cross sections have a strong influence on the small q 2 r 2 /6-term. When "eliminating" problems with the normalization of the data by floating them much of the sensitivity to the rms-radius gets lost and the norm-determining (implicit) extrapolation to q = 0 becomes very sensitive to small q-dependent systematic errors in the data (which are always ignored). In practice, one therefore has to include data at not-so-low q which are also sensitive to the higher moments. The problem with theses moments is particularly detrimental for the proton. The proton has approximately an exponential charge density (or, more accurately speaking, a form factor of the dipole shape, G e (q) = (1+q 2 0.055f m 2 ) −2 , the Fourier transform of which gives an exponential). For such a density (form factor) the higher moments are increasing with order, i.e. r 4 = 2.5 r 2 2 , r 6 =11.6 r 2 3 etc, hence giving a large contribution to G(q). The consequence: there is no q-region where the r 2 term dominates the finite size effect to >98% and the finite size effect is sufficiently big compared to experimental errors to allow a, say, 2% determination of the rms-radius. There is also no region of q where the r 4 moment can be determined accurately without getting into difficulty with the r 6 term. Towards higher q, the polynomial expansion is seriously restricted by the convergence radius of ∼ 1.4f m −1 . This situation is illustrated in fig.1 which shows the contribution of the various q n terms to the finite size effect. This problematic situation with the higher moments is at the origin of the difficulties of determining a modelindependent proton rms-radius. Continued-fraction expansion. Continued Fraction (CF) expansions are a subclass of Padé approximants which have initially been introduced to solve the "problem of moments", i.e. to find a function f (z) specified by its moments z n [28] and to accelerate the convergence of poorly converging series [29]. The radius of convergence of the CF expansion is much larger than the one of the polynomial expansion, although within the convergence radius of the latter it agrees exactly with it. The moments of interest are directly linked to the coefficients b 1 , b 2 , ..b N i.e. the coefficients of q 2 , q 4 ,... are given by b 2 . . 
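A minimal sketch of a continued-fraction parameterization follows; the specific nesting G(q²) = 1/(1 + b₁q²/(1 + b₂q²/(1 + ...))) is one common convention rather than necessarily the exact form used in the fits, and the b_i values are invented, so the script only illustrates how b₁ fixes the q² slope and hence the rms radius (⟨r²⟩ = 6b₁ in this convention).

```python
import numpy as np

# Hedged sketch of a continued-fraction (CF) form-factor parameterization,
#   G(q^2) = 1 / (1 + b1 q^2 / (1 + b2 q^2 / (1 + ...))),
# a common convention assumed here; the b_i values below are invented for illustration.

def G_cf(q2, b):
    """Evaluate the CF bottom-up for coefficients b = [b1, b2, ..., bN]."""
    frac = 0.0
    for bn in reversed(b):
        frac = bn * q2 / (1.0 + frac)
    return 1.0 / (1.0 + frac)

b = [0.1303, 0.234, 0.4]   # fm^2-type coefficients, invented for illustration

# Small-q check: G ~ 1 - b1*q^2 + ..., so <r^2> = 6*b1 for this convention.
q2 = 1e-4
slope = (1.0 - G_cf(q2, b)) / q2
print(f"numerical -dG/dq2 at q2->0: {slope:.4f}  vs  b1 = {b[0]:.4f}")
print(f"implied rms radius: {np.sqrt(6.0*slope):.3f} fm  (6*b1 convention)")
```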
An important advantage, already exploited in fits of the deuteron form factor [30], is the fact that the parameters b₁, b₂ for exponential-type densities are well decoupled. This is a consequence of the fact that the CF is the natural parameterization for form factors resulting from exchange poles at q² < 0, the physical mechanism exploited in the VDM. Tests of the CF expansion. In order to study the dependence introduced by the usage of the CF expansion with a given number N of terms and given q_max, we have used pseudo-data. These cross sections were generated using parameterized expressions for the form factors (dipole form, or the dispersion-relation parameterization of Hoehler et al. [22]). The pseudo-data were generated at the energies and angles of the experimental data, with the error bars of the experimental data. In the fits, the pseudo-data were used either as calculated from the parameterization, or with random fluctuations calculated from the experimental error bars superimposed. Fits of these pseudo-data were performed with the CF expansion with a variable number N of terms, and with variable q_max of the points fitted. We have studied the scatter of the resulting fitted ⟨r²⟩ values, and their deviation from the known radius used in the generation of the pseudo-data. In these tests, we have been rather generous in accepting fits, i.e. by including fits with χ² ≤ 1.2 χ²_min. When using the region 1 fm⁻¹ < q_max < 5 fm⁻¹ and 2 to 5 terms in the CF expansion, we find a scatter of the fitted rms-radii of ±0.010 fm around the true (input) values. This scatter we take as representative of the uncertainty due to the choice of N and q_max; it covers the statistical error (which for pseudo- and real data is the same by construction) as well. Analysis of world data. In order to determine the proton rms-radius we use the world cross sections [4]-[20] for q < 4 fm⁻¹. The most precise data relevant for the radius determination have been measured at Mainz [6]-[9]. These data are absolute, that is, they have small systematic uncertainties in the absolute normalization. This type of data is the most useful one for a determination of the rms-radius. We use for our fits the primary cross sections. When parameterizing both G_E(q) and G_M(q) with the CF expansion and fitting G_E and G_M simultaneously to the cross sections, the L/T-separation is automatically performed, with superior quality as compared to the standard approach of separating L and T for each individual experiment. The Coulomb corrections are calculated in second-order Born approximation according to [26] using an exponential charge density. These corrections are applied to the cross-section data, such that the subsequent fit can be performed in PWIA as has been done in the past. In the fits we use all data with their standard random uncertainties. The error matrix is used to compute the random uncertainty of derived quantities. In order to evaluate the effect of the systematic uncertainties (normalization uncertainties), the individual data sets are changed by their quoted uncertainties, refitted, and the resulting changes added quadratically. In the fits one finds experimental data sets (for instance the 40-year-old Stanford data) that have much too large a χ²; these points, however, do not inappropriately influence the final result, so we have not increased their error bars just to get a good-looking χ². We also find small discrepancies in the overall normalization of some data sets (e.g. the data set of ref.
[9] seems ∼1% high). We have chosen to keep the norm at the experimental value, and not to float the data. For such precision experiments more than half the effort has gone into the determination of the overall normalization; ignoring this effort by floating the norm (or greatly mitigating its influence by treating the normalization as just one further data point) does not do justice to the experiments and leads to the loss of much information. Again, the effects upon the rms-radius of the observed "discrepancies" have been found to be small and are covered by the quoted uncertainty. As a check we have also used the polynomial expansion, with q_max = 1.2 fm⁻¹ and the q⁴ coefficient taken from a fit that explains the higher-q data. We find the same rms-radius as with the CF fit, but a larger uncertainty and a higher sensitivity to the q_max employed. The quality of the fits is quite good. We show in Fig. 2 the ratio of experimental cross sections and fit for the CF parameterization with 5 CF coefficients. The χ² is 512 for 310 data points. The resulting rms-radius is 0.895 fm. The uncertainty due to N, q_max, and statistics is ±0.010 fm, the systematic uncertainty ±0.013 fm. This yields as the final result for the charge radius of the proton r_rms = 0.895 ± 0.018 fm. This radius is significantly larger than the values generally cited in the literature. It agrees with the most accurate value derived from atomic transitions [2], 0.890 ± 0.014 fm. Differences from previous determinations. It may be interesting to understand why previous analyses gave smaller radii. Simon et al. [8] (r_rms = 0.862 fm) used the polynomial expansion up to q⁴ and q_max = 1.2 fm⁻¹, but found an ⟨r⁴⟩ moment that was a factor of ten smaller than given by fits that explain the proton data to higher q; this difference comes from very small systematic problems in the data which we have not further explored. When repeating their fit with the ⟨r⁴⟩ moment given by a fit that explains the data to larger q, e.g. the one from the CF fit, one finds a radius that agrees with the one we find. The fits based on dispersion relations and the VDM are strongly constrained by theory and the need to fit all four nucleon form factors. When looking at the ratio of experimental and VDM cross sections with the resolution employed in Fig. 2, the systematic deviations of the fits [22,23] from the data at low q are immediately obvious. Rosenfelder [27] (r_rms = 0.880 fm), whose primary interest was the exploration of the effect of Coulomb distortion, also used the polynomial expansion, with the ⟨r⁴⟩ term taken from a low-q fit quoted in the literature. When correcting his value for a better ⟨r⁴⟩ value from a good fit to the higher-q data, and accounting for differences in the data set, one arrives at the value of the proton rms-radius we find. Conclusions. From an analysis of the world data on e-p scattering we determine the proton rms-radius and find a value that is significantly larger than previous values. The change is understood as a consequence of treating the higher moments ⟨rⁿ⟩ properly.
3,333.4
2003-10-09T00:00:00.000
[ "Physics" ]
Non-Coding RNAs and SARS-Related Coronaviruses The emergence of SARS-CoV-2 in 2019 has caused a major health and economic crisis around the globe. Gaining knowledge about its attributes and interactions with human host cells is crucial. Non-coding RNAs (ncRNAs) are involved in the host cells’ innate antiviral immune response. In RNA interference, microRNAs (miRNAs) may bind to complementary sequences of the viral RNA strand, forming an miRNA-induced silencing complex, which destroys the viral RNA, thereby inhibiting viral protein expression. There are several targets for human miRNAs on SARS-CoV-2’s RNA, most of which are in the 5’ and 3’ untranslated regions. Mutations of the viral genome causing the creation or loss of miRNA binding sites may have crucial effects on SARS-CoV-2 pathogenicity. In addition to mediating immunity, the ncRNA landscape of host cells further influences their susceptibility to virus infection, as certain miRNAs are essential in the regulation of cellular receptors that are necessary for virus invasion. Conversely, virus infection also changes the host ncRNA expression patterns, possibly augmenting conditions for viral replication and dissemination. Hence, ncRNAs typically upregulated in SARS-CoV-2 infection could be useful biomarkers for disease progression and severity. Understanding these mechanisms could provide further insight into the pathogenesis and possible treatment options against COVID-19. Introduction Following the outbreak of the coronavirus disease 2019 (COVID-19) pandemic in 2019, there has been a lot of research concerning the attributes of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), its pathogenic mechanisms, and potential treatments against COVID-19. SARS-CoV-2 is a single-stranded, positive RNA virus that is closely related to SARS-CoV, the causative agent of SARS, which already led to a pandemic in 2003 [1,2]. The RNA of both viruses is around 30 kb long and consists of 14 open reading frames (ORFs) that encode for a total of four structural and 16 non-structural proteins (NSPs), as well as a capped leader sequence (5' untranslated region (UTR)) and a polyadenylated terminus (3' UTR) [3][4][5]. SARS-CoV-2 and SARS-CoV both use one of their structural proteins, the spike (S) protein, to mediate cell invasion. Human angiotensin-converting enzyme 2 (ACE2) serves as a receptor for the S protein [4,6], while type II transmembrane serine protease (TMPRSS2), which is responsible for the cleavage and activation of the S protein, comprises a necessary co-receptor for the completion of the infection process [7]. The S protein sequences of the two viruses are 76% identical [8], with SARS-CoV-2's S protein gene having additional nucleotides that form a furin-like cleavage site, which is considered to be responsible for the higher infectivity of the virus compared to other similar coronaviruses [2,9]. Upon infection, the most common symptoms that can be observed in COVID-19 include fever and respiratory failure, although gastrointestinal and neurological symptoms have also been observed [10]. SARS-CoV-2 mainly targets the lungs, where, in severe cases, infection results in acute respiratory distress syndrome (ARDS). ARDS leads to diffuse alveoli damage (DAD) and often correlates with an excessive release of pro-inflammatory cytokines, called a "cytokine storm" [11]. 
The aim of this review was to give a comprehensive overview of the current knowledge about the involvement of non-coding RNAs (ncRNAs), in particular, micro RNAs (miRNAs), small interfering RNAs (siRNAs), and long non-coding RNAs (lncRNAs), in the pathogenesis of SARS-CoV-2 and the antiviral immune defense mechanisms of the host. To date, treatment options for SARS-CoV-2 infection remain scarce [12]. Considering SARS-CoV-2 as an RNA virus, the present work also aimed to shed light on possible therapeutic applications of ncRNAs in antiviral therapies and to encourage further research in this field. Methods This paper was based solely on literature research using the "PubMed" database. The platform was searched using various combinations of the terms "non-coding RNA," "ncRNA," "SARS," and "Covid" and a filter for free full texts was applied. Studies in which ncRNAs were only used as research tools or where SARS only served as an example in topics regarding other viruses were not included, as this review was particularly focused on the role of non-coding RNAs in the interactions between human cells and SARS-related coronaviruses. RNA Interference RNA interference (RNAi) mediated by siRNAs or miRNAs is an important immunity mechanism in plants and invertebrates. In mammals, however, the dominating antiviral innate immune response is the interferon (IFN)-mediated response, and the existence of an additional RNAi response in mammals has only been verified recently [13,14]. While siRNA-mediated RNA interference is only present in the embryonic stem cells of mammals [15], RNAi mediated by miRNAs, which bind directly to the viral genome or to messenger RNAs (mRNAs) of viral genes and thereby limit viral replication and mitigate pathogenicity, can also be observed in differentiated adult host cells [16,17]. Antiviral Interactions between Host miRNAs and Viral RNA In general, miRNA-virus interactions can result in two different situations: the repression of viral translation, which inhibits viral replication, or stabilization of the viral RNA, which, in contrast, enhances viral replication [16,18]. In RNAi, the former is the case, as miRNAs acting in the miRNA-induced silencing complex (miRISC) bind to viral mRNA and, if there is complete complementarity, induce mRNA decay. If the pairing between the viral mRNA and host miRNA is imperfect, the mRNA will not be degraded but translation will still be inhibited, resulting in the ability of one single miRNA to target multiple mRNAs [19,20]. miRNA binding sites in viral RNAs are typically located in the 5' and 3' UTRs [3,16,21] but have also been found in coding regions, e.g., in the genome of influenza A [22] and enterovirus 71 [23]. Since positive-strand RNA virus genomes are structurally identical to mRNAs, they might be regulated by miRNAs in a similar way [16] (Figure 1). Hosseini et al. [24] recently identified seven targets of miRNA in the genome of SARS-CoV-2. Originally, there were ten targets, but three of them were lost because of conserved mutations. Among the human miRNAs that are able to bind to SARS-CoV-2 encoded transcripts, thereby mediating immunity, are miR-574-5p, miR-214, miR-17, miR-98, miR-223, and miR-148a [24]. However, computational predictions of miRNA binding sites should be viewed with caution, as results often fail to be verified experimentally [26][27][28].
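At their core, computational target predictions of the kind cited here rest on scanning the viral sequence for stretches complementary to an miRNA's seed region and then filtering the hits. The toy sketch below shows only that first, purely sequence-based step; both sequences are invented placeholders (not real miR-574-5p or SARS-CoV-2 sequence), and real predictors add conservation, site accessibility, and thermodynamic scoring on top.

```python
# Toy seed-match scan: find sites in a (made-up) viral RNA fragment that are
# perfectly complementary to the seed (nucleotides 2-8) of an miRNA.
COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def seed_sites(mirna, target_rna, seed=(1, 8)):
    """Return 0-based positions in target_rna that match the reverse
    complement of the miRNA seed region (seed given as a half-open slice)."""
    seed_seq = mirna[seed[0]:seed[1]]
    probe = "".join(COMPLEMENT[b] for b in reversed(seed_seq))
    hits, start = [], target_rna.find(probe)
    while start != -1:
        hits.append(start)
        start = target_rna.find(probe, start + 1)
    return hits

# Illustrative placeholder sequences only.
mirna = "UGAGUGUGUGUGUGUGAGUGUGU"
viral_fragment = "AUUACACACACUCAGGGAAUUCACACACACUCAUUU"
print(seed_sites(mirna, viral_fragment))   # positions of perfect seed matches
```

The high false-positive rate discussed next is precisely what is left over after such a naive scan, which is why the cited studies layer additional filters on top of it.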
This high false-positive rate of miRNA target prediction has been described frequently, yet it can be narrowed down using certain approaches, such as multi-targeting, integration of existing experimental evidence, or the use of algorithms designed to refine the results of miRNA target searches [27,29]. It may also be beneficial to consider conditions specific to the research question (e.g., excluding potential targets of all miRNAs that are not expressed in cells prone to SARS-CoV-2 infection [24]). Since the studies by Hosseini et al. [24] and Fulzele et al. [25] have taken such measures to narrow down their results, their findings still appear promising and should be taken into account in further research on COVID-19 pathogenesis and potential RNAi therapeutics. Evasion of miRNA-Mediated RNAi in the Untranslated Regions The UTRs at both ends of the viral RNA serve as control elements in its replication, transcription, and translation [30], and might influence the evasion of host RNA decay. The 5' and 3' UTRs include binding sites for host miRNAs and RNA-binding proteins (RBPs). While the binding of host-derived miRNA to a viral RNA usually results in degradation of the RNA, certain RBPs binding to transcripts of the 5' UTR, such as CUG-binding protein (CUG-BP) and trans-active response DNA binding protein (TARDBP), increase the translation of viral proteins [3]. For example, in a SARS-CoV-2 variant, the binding site for CUG-BP turns into a TARDBP binding site via a change of "C" to "U" at position 241, resulting in increased infectivity of the virus and virus-related mortality [31]. Mukherjee et al. [3] determined more of these proviral interactions between RBPs and host miRNAs in sequence variations of SARS-CoV-2's UTRs, identifying different possibilities of how mRNA-stabilizing RBPs can prevent miRNA-mediated RNA decay. Some binding sites for RBPs overlap with binding sites for host miRNA, preventing RISC binding and RISC-induced RNA decay.
This is the case with miR-34b-5p and RBMS3 [3]. It is also possible that miRISC and an RBP compete for the same binding site, where the RBP might outperform the miRNA, as seen with miR-3664-5p and SRSF5. Furthermore, miRNA binding might be prevented by a specific nucleotide variation in an overlapping binding site, while RBP binding remains possible; an example of this scenario is seen in miR-9-5p and HNRNPA1 [3]. Mutations in the UTR sequence can therefore have an effect on viral fitness by changing or creating new miRNA or RBP binding sites; for example, the creation of a new host miRNA binding site in a UTR is expected to enhance RNAi and weaken viral replication [3]. Mutations in the Viral Genome Mutations in the viral genome have a critical effect on the pathogenicity and susceptibility of the virus to the antiviral immune response, e.g., by changing the RNA secondary structure or creating new binding sites for host miRNAs [24]. Since binding sites for host miRNAs are expected to decrease viral fitness by making the virus more vulnerable to RNA interference, it is obvious that mutations leading to the creation of such sites will only withstand selective pressure if they do not cause a real disadvantage to viral replication and dissemination [16]. To give an example, a certain SARS-CoV-2 variant with a binding site for the human miR-4701-3p is only prone to miRNA-mediated RNA decay in limited amounts due to insufficient miR-4701-3p expression in lung tissue [3]. It is true for all mutations that, in order to persist, they have to offer an advantage to the virus or at least not cause a disadvantage (with the exception of mutations located in highly conserved regions) [16]. Another example of this is the loss of miRNA binding sites through mutations, as seen in SARS-CoV-2 in the target of miR-197-5p, which was located in the NSP3 sequence. Due to the loss of this binding site, the miRNA is not able to bind and degrade viral transcripts anymore, sparing the virus from the miRNA-mediated immune defense. miR-197-5p is usually overexpressed in cardiovascular patients, who have a higher risk of mortality due to SARS-CoV-2 infection [24]. The mutation rate in RNA viruses is generally high because of the RNA-dependent RNA polymerase's inadequate proofreading activity [16]. Nevertheless, the mutation rate of SARS-CoV-2 is reduced by the 3'-5' exonuclease nsp14 in the RDRP complex, which also helps the virus to defend itself against the host's base editor [24]. Influence of the Host miRNA Expression on Viral Pathogenicity The susceptibility of a cell to virus infection is not only determined by its surface proteins but also by its miRNA expression pattern [16]. The most important proteins for cell invasion by SARS-CoV-2, comprising ACE2, TMPRSS2, and possibly disintegrin and metalloproteinase domain 17 (ADAM17) and furin, are all regulated by miRNAs. While TMPRSS2, ADAM17, and furin are co-receptors needed to complete the infection process [1], ACE2 serves as the receptor for SARS-CoV-2's spike protein, enabling the virus to enter the host cell upon interaction with the S protein [6]. Regulation of Receptor Expression Lysine-specific demethylase 5B (JARID1B), which is encoded by the KDM5B gene, is responsible for the downregulation of several miRNAs targeting ACE2 and TMPRSS2, to the extent where, in the majority of human cells, ACE2 and TMPRSS2 are not expressed without the presence of JARID1B. Human respiratory epithelium cells show especially high expression levels of all three proteins [32].
The miRNAs directed against ACE2 and TMPRSS2 that are suppressed by JARID1B include hsa-let-7e/hsa-miR-125a [33] and hsa-miR-141/hsa-miR-200 [34]. Other miRNAs targeting TMPRSS2 include let-7a-g/i and miR-98-5p [35]. Let-7a-g/i, besides suppressing TMPRSS2 expression, also has an effect on immunity by regulating cytokine expression [1]. Let-7a-g/i is located in the intragenic region of a gene regulated by estradiol and is therefore upregulated after estrogen activation [35]. In contrast, all miRNAs of the let-7 family have been shown to be downregulated by androgens [36], providing one possible explanation for the gender disparities in the severity of COVID-19, which usually affects men more seriously than women [37]. miR-98-5p is another estrogen-responsive miRNA [38] that represses not only TMPRSS2, but also IL-6 expression [1]. The let-7 miRNA family can be bound by the lncRNA H19, resulting in the decreased availability of let-7 in the cell, making it more vulnerable to SARS-CoV-2 infection [1]. H19 is overexpressed in cancer cells [39], leading to the conclusion that these cells may be highly susceptible to virus infection [1]. Along with let-7, miR-145 and miR-222, which are directed against ADAM17 [1], are also suppressed in lung cancer cells [40], possibly leading to higher expression rates of TMPRSS2 and ADAM17, which again would make the cells more susceptible to virus infection [1]. While miR-222 is estrogen-dependent, just like the miRNAs targeting TMPRSS2 [35], miR-145 is upregulated by vitamin D [41], which might explain the correlation between vitamin D deficiency and the severe progression of COVID-19 [42]. Other Ways miRNAs Influence Susceptibility to Virus Infection miRNAs play an important role in the secretion of the airway surface liquid (ASL) by regulating the ion channels and transporters responsible for the para-and transcellular movement of water and electrolytes [44]. The ASL covers the surface of epithelial cells in the respiratory tract, where one of the main functions is the protection of the host from inhaled pathogens, such as SARS-related coronaviruses [44][45][46]. Apart from airway surface liquid homeostasis, miRNAs have various other impacts on the immune defense of the respiratory tract [44]. Low expression levels of miRNAs targeting SARS-CoV-2, as seen in elderly patients, correlate with a higher risk of severe disease progression and mortality for COVID-19 [25]. Virus-Induced Alterations in the Transcriptome of the Host Cell During virus infection, the transcriptome of a host cell, including miRNA and lncRNA expression patterns, is changed due to the initiation of the innate immune response in the infected cell. In addition to these host cell-induced developments, the virus may alter expression levels of host miRNAs by binding and destroying these molecules, potentially leading to an augmentation of cellular conditions for virus replication and dissemination [16]. Furthermore, the virus is able to synthesize miRNAs that interfere with cellular pathways by itself, enhancing its own pathogenicity and downregulating the host cell's immune response [11]. Virus invasion also results in the alteration of siRNA expression profiles in the host cell and the generation of so-called "virus-activated siRNAs" (va-siRNAs), some of which might act as antivirals, while others might have proviral effects [47]. Virus-Induced Changes of miRNA Expression A study conducted by Mallick et al. 
[48] in 2009 evaluated the miRNA landscape in human bronchoalveolar stem cells (BASCs) during SARS-CoV infection, showing the upregulation of miR-17*, miR-574-5p, and miR-214, which repress virus replication and contribute to immune evasion until a successful transmission of the virus has taken place, as well as the downregulation of miR-223 and miR-98, which serves the regulation of BASC differentiation, activation of proinflammatory cytokines, and ACE2 suppression [48]. miRNAs can be used as biomarkers for the diagnosis of certain infectious diseases, e.g., miR-519c-3p serves as a biomarker to distinguish community-acquired pneumonia from chronic obstructive pulmonary disease exacerbations [49]. Guterres et al. [50] suggest that miRNAs may also be used as biomarkers for the determination of the disease progression of COVID-19. Suitable miRNAs could be directed at molecules that are responsible for the downregulation of inflammatory cytokines and chemokines since an increase in the expression levels of those miRNAs would result in enhanced production of proinflammatory cytokines during SARS-CoV-2 infection [50]. The Role of lncRNA in the Cellular Response to Virus Infection A transcriptome analysis of murine SARS-CoV-infected cells in 2010 by Peng et al. [51] uncovered around 500 annotated lncRNAs. Since the expressed lncRNAs were associated with type I interferon receptor and signal transducer and activator of transcription 1 (STAT1) and most were similarly regulated in the examined cells after infection with influenza virus and interferon treatment, it seemed likely to the authors that lncRNAs are involved in the regulation of the innate antiviral immune response of host cells [51]. In addition to these findings, another study by Josset et al. from 2014 [52] provided evidence for the co-expression of most virus infection-associated lncRNAs with genes involved in the lung homeostasis and immune response. lncRNA Expression in SARS-CoV-2-Infected Cells In an effort to identify cellular pathways during SARS-CoV-2 infection, Vishnubalaji et al. [53] evaluated transcriptome data from normal human bronchial epithelial (NHBE) cells infected with SARS-CoV-2. An upregulation was observed in IFN-responsive gene targets leading to activation of the innate immune response. Interestingly, the NHBE cells showed overexpression of the lncRNA metastasis-associated lung adenocarcinoma transcript 1 (MALAT1), which is also known to be overexpressed in multiple neoplastic diseases, as well as inflammatory processes after lung transplants [53]. Since it has been shown that the silencing of MALAT1 mitigates inflammatory injuries after lung transplants by the inhibition of neutrophil chemotaxis [54], it might also lead to a reduction in the prevalence of cytokine storms in SARS-CoV-2 patients. Furthermore, the authors suggest that host-derived lncRNAs in SARS-CoV-2 infected cells, such as MALAT1 and nuclear-enriched autosomal transcript 1 (NEAT1), could potentially be used as biomarkers for infection [53]. Expression of Viral miRNAs Mirroring Human miRNAs As mentioned above, viruses may express miRNAs themselves, affecting pathogenicity via the downregulation of the host cell's immune response or the creation of a proviral intracellular environment. Arisan et al. [11] identified seven sequences in the genome of SARS-CoV-2 that are completely identical to human miRNAs and evaluated the impact of the molecular pathways linked to these miRNAs on the pathogenesis of COVID-19 [11]. 
Their findings were that more than half of the seven miRNAs (including miR-8066, miR-3934-3p, miR-1307-3p, miR-1468-5p, and miR-3691-3p) were associated with the TGF-β signaling pathway [11]. TGF-β is a cytokine responsible for lung development and alveolarization, as well as the homeostasis and extracellular matrix composition of lung tissue. Additionally, it affects the immunity, survival, migration, and apoptosis of host cells [55]. Almost as many of the miRNAs (miR-8066, miR-5197, and miR-3934-3p) were involved in mucin-type O-glycan synthesis. The O-and N-glycosylation patterns of SARS-CoV-2's S protein, where the latter of which is influenced by miR-8066 [11], are important for viral entry into the cell [56]. Further relevant effects of miR-8066 are related to the induction of the cytokine storm that may be observed in severely ill patients with COVID-19; the miRNA not only targets genes responsible for cytokine regulation [11] but its sequence also includes a core motif correlated with an increased probability of TLR-8 (toll-like receptor 8) expression via NF-κB, which leads to cytokine synthesis [57]. miR-3934-3p is also associated with the biosynthesis of heparan sulfate [11], which, as part of proteoglycans, serves as a binding site for SARS-CoV-2 on the host cell during the early attachment phase of virus invasion [58]. Additionally, miR-3934-3p is linked to vitamin assimilation [11], which might be interesting since vitamin D deficiency is associated with the increased severity of COVID-19 progression [42]. Other pathways related to the identified miRNAs included the cytochrome P450-mediated metabolism of xenobiotics, morphine addiction, semaphorin signaling, pulmonary hypertension, and cardiac fibrosis [11]. RNA Interference Using Artificial siRNAs Although siRNA-mediated RNAi is a mechanism that is only present in plants and invertebrates, it may be induced therapeutically in humans, as well. For this purpose, a pre-synthesized siRNA is administered to the cell, where it binds specifically to complementary sequences as part of the RISC, which leads to post-transcriptional gene silencing [59]. Because siRNAs are only 21 to 23 bp in length, they are not able to activate the innate IFN immune response in the host cell, which may only be induced by dsRNAs larger than 30 bp [60]. Nevertheless, it is necessary to use siRNAs only in small doses, as high concentrations of siRNA have been observed to lead to undesirable effects, such as the induction of genes related to stress and programmed cell death [61]. Additionally, even though siRNAs are highly specific in general, there may be a certain amount of unintended gene silencing, where siRNAs may deter partially complementary mRNAs from being translated. This is problematic, as there will be partially complementary sequences in the human genome for most siRNAs [62]. Yet, in a study by Tang et al. [63] investigating anti-SARS-CoV siRNAs in rhesus macaques in 2008 via intranasal administration using a carrier, no siRNA-induced toxicity was observed. Instead, the authors proved the siRNA's antiviral activity after administration resulted in fever relief and milder diffuse alveoli damage (DAD) [63]. In the last few years, siRNA therapeutics for various diseases have advanced to phase 3 of clinical trials [64]. One of them, namely, ONPATTRO TM (patisiran), was approved for the treatment of patients with hereditary transthyretin-mediated amyloidosis in the USA and Europe in 2018 [65]. 
Another siRNA therapy that is under investigation is aimed at the treatment of the hepatitis B virus (HBV) and showed a strong reduction of the targeted HBV S antigen in human trials [66]. There have also been studies on siRNA therapeutics against SARS-CoV in the years following the initial SARS outbreak in 2002 ( [67][68][69][70][71][72][73][74][75][76][77][78][79][80][81]; see below) and the progression of this research, which could potentially lead to the development of RNAi drugs against the novel SARS-CoV-2, is strongly endorsed. Administration of siRNAs There are two ways in which the desired effect of RNAi in the targeted cell may be provoked: on the one hand, the pre-synthesized siRNA may be transfected directly into the cell using a suitable carrier, while on the other hand, plasmid vectors encoding for shRNA, which will be processed to siRNA intracellularly, may be used. Both methods offer different advantages: while the administration of pre-made siRNA via viral vectors is highly efficient compared to the transfection of plasmid DNA, the latter option leads to gene-silencing for months after successful vector delivery. The transfection of siRNA, on the other hand, only induces a silencing effect for a few days because siRNA is degraded steadily after administration [59]. One approach to improve the serum half-life of artificial siRNA, while simultaneously reducing unspecific off-target effects, is the modification of siRNA with a locked nucleic acid (LNA), which is a synthetic nucleotide analog [82]. For the safe and efficient delivery of siRNA, carriers, such as lipid nanoparticles (LNPs), can be used, which protect the siRNA from enzymal degradation during administration and deliver it selectively to the targeted tissue [83,84]. LNPs may be administered intranasally for the treatment of diseases affecting the lung, such as COVID-19, and have been suggested as carriers for siRNA targeting SARS-CoV-2 by Itani et al. in a recent review [85]. Interestingly, cationic liposomes have been shown to have a greater bioavailability following intranasal administration than anionic ones, which is due to their electrostatic interaction with the negatively charged mucosa of the respiratory tract [86]. There are also several other ways to deliver RNAi therapeutics to the targeted tissue, including the use of exosomes ( [87]; see below), natural or synthetic polymers, dendrimers, gold, magnetic iron oxide and silicia nanoparticles, quantum dots (QDs), carbon nanotubes (CNTs), or an N-acetylgalactosamine conjugated siRNA system (GalNAc-siRNA) [88][89][90]. Targeted Sequences in SARS-CoV Ever since the outbreak of SARS-CoV in 2003, there has been a lot of research concerning siRNA-mediated RNA interference in SARS-CoV-infected cells. In most of these studies, the targeted sequences were those of the four structural proteins of SARS-CoV. These four major proteins comprise the N, M, S, and E proteins [67]. The nucleocapsid, or N protein, apart from forming the long helical nucleocapsid of the virus, also plays a role in RNA synthesis [67]. Effects of the N protein include the induction of the apoptotic pathway, upregulation of proinflammatory cytokine production, and inhibition of the antiviral response of the innate immune system. Additionally, it enhances the production of IFNβ [68], which is primarily increased by the M protein [69]. 
It has been reported that the targeting of the N gene in SARS-CoV has led not only to the reduced expression of the N gene but also to decreased IFNβ production [68]. The M (membrane) glycoprotein, apart from increasing IFNβ synthesis [68], is also essential for virus budding and assembly [70] and is highly abundant in infected cells [45]. In a study by Wang et al. from 2010 [69], two highly conserved regions in the RNA sequence encoding for the M protein were targeted by artificial siRNAs, which led to decreased expression of the M gene. The authors found out that the 5' half of the M gene of SARS-CoV was seemingly more susceptible to spontaneous mutations than the 3' half, which made them choose to target sequences in the latter [69]. The next important structural protein encoded by SARS-CoV is the S glycoprotein located on the viral capsule, which is responsible for the invasion of host cells [67]. The spike protein consists of two subunits: S1 binds to ACE2 on the host cell membrane and S2 mediates fusion between the cell membranes of the virus and the cell [71]. The S protein serves as an antigen for the specific antibody and T cell response of the host cell [72]. siRNAs directed against SARS-CoV's S protein have been shown to successfully suppress the expression of the S protein, as well as the replication of the virus in infected cells [73,74]. siRNA duplexes targeting ORF1b in addition to the S protein have also been investigated in monkeys and have been shown to inhibit virus replication, mitigate SARS symptoms, and protect the lungs from harm [75]. Another study by Wu et al. [76] also proved the antiviral effect of siRNAs directed against the S protein and the 3' untranslated region of SARS-CoV. Lastly, the envelope, or E protein, is responsible for virus assembly and has been successfully targeted by siRNAs at two different sites by Meng et al. [77]. The same study also investigated different siRNAs directed against the gene encoding the RNA-dependent RNA polymerase (RDRP), where only two out of four siRNAs lead to a decrease in RDRP expression [77]. There have also been studies investigating siRNAs directed at two different structural genes, demonstrating that these siRNAs had even better antiviral effects than siRNAs only targeting one gene [78] and showing that the concentration of the siRNA duplexes correlated with antiviral activity [79]. Another study by Li et al. [80] evaluated siRNA targeting the leader sequence of SARS-CoV, reporting a much stronger inhibitory effect on virus replication than siRNA targeting the S protein gene. Furthermore, it is possible to target subgenomic RNA translated from one of SARS-CoV's 14 open reading frames with siRNA, as shown by Akerstrom et al. [81], who tested siRNA directed at sgRNAs 2, 3, and 7, resulting in lower viral reproduction. Moreover, Chen et al. [91] demonstrated that the transfection of siRNA-targeting ORF8a to a SARS-CoV infected cell led to a decrease of greater than 50% in the replication of the virus. In contrast, siRNA directed at ORF3a did not lead to a decrease in replication after administration to an infected cell; however, it significantly suppressed the virus release. This was found out by Lu et al. [92] in an effort to identify the function of the ORF3a ion channel using siRNAs for gene silencing. Attributes of Potential RNAi Targets in SARS-CoV-2 As of September 2020, no siRNAs targeted at sequences of SARS-CoV-2 have been tested yet. 
The characteristics of a potential target, however, remain clear: sequences targeted by siRNAs cannot be longer than 21 to 25 nucleotides, and they should not be similar to sequences in the human genome, since this could lead to unintended silencing of the host genes [93]. Furthermore, it is advisable to target only highly conserved regions of the viral RNA that have a low susceptibility to spontaneous mutations because if the virus acquires a mutation in the target site, the transcripts of the sequence in question may not be degraded by the RISC anymore, as there would be a lack of complementarity to the associated siRNA. The risk of RISC dysfunction due to virus mutation may also be lowered by using two or more different siRNAs simultaneously such that even if one complementary sequence mutates, the effect of the RISC will still be observed on the other [59]. Target sites may encode for proteins essential for viral replication, e.g., the RNA-dependent RNA polymerase, but also certain proteins encoded by the host cell DNA, which are adopted by the virus for its own reproduction [67]. Moreover, siRNA may also be directed against host genes that are necessary for viral entry to the cell [59]. For example, a study by Lu et al. from 2008 [94] used siRNA-targeting ACE2 mRNA, which led to the silencing of ACE2 expression and consequently reduced SARS-CoV infection in the transfected Vero E6 cells. Viral Suppression Strategies of RNAi Viral RNAi-suppressor proteins prevent the degradation of viral RNA by inhibiting the generation of siRNA and the RISC assembly of existing siRNA [13]. According to Karjee et al. [95], one of the viral proteins acting as RNA silencing suppressors in SARS-CoV is derived from ORF7a. It is a transmembrane protein that is localized mainly in the ER and Golgi of the host cell, where the viral genome is replicated [47]. The reason for the localization of the 7a protein is that in replication, dsRNA is generated, which is the main trigger of (natural) siRNA production [96]. Another important RNAi suppressor protein is SARS-CoV's structural nucleocapsid (N) protein [13]. Because of the homology of the two viruses, it is likely that the 7a and N protein also act as RNAi suppressors in SARS-CoV-2, which would certainly be an interesting and potentially promising direction for future research. The downregulation of these proteins might be achieved using artificial siRNAs or CRISPR-Cas13a [47]. miRNA-Related Approaches miRNAs could potentially be used therapeutically in gene therapy vectors, vaccines, or as antiviral drugs [16]. To give an example, Ivashchenko et al. [97] suggest the use of artificial complete complementary miRNA (cc-miR) that is able to bind to the gRNA of SARS-CoV-2. The cc-miR, which is coated by vesicles, could be administered specifically to the lung via inhalation, or introduced to the blood, which would lead to antiviral effects in every tissue the virus is able to enter. The authors designed a cc-miR based on miR-5197-3p, which interacts effectively with the gRNA of SARS-CoV-2 but also has binding sites in human genes. To prevent off-target effects, the artificial cc-miR was designed to have only low complementarity to these human target sites [97]. Regarding the significance of miRNAs for vaccines, Hosseini et al. [24] also suggest that the inclusion of binding sites for host miRNA into the viral genome could be a means to debilitate the live viruses used for active vaccination. 
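The target-selection criteria spelled out at the start of this passage (sites of roughly 21-25 nt, drawn from conserved regions, dissimilar to host sequences, ideally several sites at once) map naturally onto a simple filtering step. The sketch below is a toy version of that idea; all sequences are invented placeholders, the checks are exact substring matches, and a real pipeline would align candidates against the complete human transcriptome (e.g. with BLAST) and score thermodynamic properties as well.

```python
# Simplified siRNA target filter following the criteria discussed above:
# 21-nt candidates are kept only if they are conserved across the supplied
# viral variants and do not occur verbatim in any host transcript.
def candidate_targets(viral_variants, host_transcripts, length=21):
    reference = viral_variants[0]
    candidates = []
    for i in range(len(reference) - length + 1):
        site = reference[i:i + length]
        conserved = all(site in variant for variant in viral_variants[1:])
        off_target = any(site in tx for tx in host_transcripts)
        if conserved and not off_target:
            candidates.append((i, site))
    return candidates

# All sequences below are invented placeholders, not real viral or human RNA.
viral_variants = [
    "AUGGCUACGUACGAUCGUAGCUAGCUAGGCUAUCGAUCGUAGC",
    "AUGGCUACGUACGAUCGUAGCUAGCUAGGCUAUCGAUCGAAGC",
]
host_transcripts = ["GGGAUGGCUACGUACGAUCGUAGCUAGCCC"]

for pos, seq in candidate_targets(viral_variants, host_transcripts):
    print(pos, seq)
```

In this toy run, only sites shared by both (invented) variants and absent from the (invented) host transcript survive, which is the logic, if not the scale, of the selection criteria above.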
Another way in which synthetic miRNAs could be used as a potential therapy or vaccine against SARS-CoV-2 is proposed by Kreis et al. [98] and is related to the adaption of a natural antiviral mechanism in the human placenta: trophoblasts secrete exosomes comprising miRNAs of the C19MC (chromosome 19 miRNA cluster) in order to transfer their antiviral effects to other placental cells, as well as maternal and fetal cells. These miRNAs include miR517-3p, miR516b-5p, and miR512-3p, which all have an inhibitory effect on both RNA and DNA viruses and induce autophagy of cytoplasmic viruses in infected cells [98]. Serum-derived exosomes have several advantages over other sRNA delivery systems: they hold the potential to specifically target certain cell types and, since they are secreted endogenously, they are less likely to elicit undesired immune responses in the host [87]. This was also proven in a study by Zhang et al. [99], who successfully delivered miRNA and siRNA-packed exosomes to alveolar macrophages via intratracheal administration in a murine model. The predicted effects of the sRNAs were observed in the macrophages, but no anti-exosome immune response was provoked. To achieve the delivery of exosomes to epithelial and other lung cells, a method preventing their uptake by macrophages is needed [99]. Furthermore, Chow et al. [100] state that the expression rates of certain miRNAs targeting SARS-CoV-2 are very low in lung epithelia, which makes these tissues especially vulnerable to infection, and suggest that by therapeutically increasing the abundance of those miRNAs in respiratory epithelial cells, the antiviral defense mechanisms of the cells may be enhanced. Conclusions Non-coding RNAs are involved in various and complex mechanisms in SARS-CoV and SARS-CoV-2 infection, many of which have yet to be fully understood. During infection, the viruses change the host's miRNAome to augment cellular conditions for their own replication and assembly, while encoding for miRNAs that interfere with cellular pathways themselves. On the other hand, non-coding RNAs, including miRNAs, play an essential role in antiviral immunity by regulating the expression of cellular receptors for virus invasion and forming RISCs with proteins that may degrade, and therefore, silence viral RNA. Mutations in the viral genome affecting miRNA binding sites may enhance pathogenicity by enabling the virus to evade RNA interference. Inducing siRNA-mediated RNAi by transferring artificial siRNAs complementary to viral RNA sequences to infected cells is one promising approach for curing SARS-CoV-2 infections using non-coding RNAs. Further development of this idea requires more research on possible targets of siRNA in the viral genome. Other antiviral treatments that utilize miRNAs, e.g., as gene therapy vectors or in vaccines, have also been suggested.
8,054.2
2020-12-01T00:00:00.000
[ "Medicine", "Biology" ]
The Component Graph of the Uniform Spanning Forest: Transitions in Dimensions $9,10,11,\ldots$ We prove that the uniform spanning forests of $\mathbb{Z}^d$ and $\mathbb{Z}^{\ell}$ have qualitatively different connectivity properties whenever $\ell>d \geq 4$. In particular, we consider the graph formed by contracting each tree of the uniform spanning forest down to a single vertex, which we call the component graph. We introduce the notion of ubiquitous subgraphs and show that the set of ubiquitous subgraphs of the component graph changes whenever the dimension changes and is above $8$. To separate dimensions $5,6,7,$ and $8$, we prove a similar result concerning ubiquitous subhypergraphs in the component hypergraph. Our result sharpens a theorem of Benjamini, Kesten, Peres, and Schramm, who proved that the diameter of the component graph increases by one every time the dimension increases by four. Introduction The uniform spanning forests of an infinite, connected, locally finite graph G are defined to be distributional limits of uniform spanning trees of large finite subgraphs of G. These limits can be taken with either free or wired boundary conditions, yielding the free uniform spanning forest (FUSF) and wired uniform spanning forest (WUSF) respectively. Although they are defined as limits of trees, the USFs are not necessarily connected. Indeed, Pemantle [21] proved that the FUSF and WUSF of Z^d coincide for all d (so that we can refer to both simply as the USF of Z^d), and are a single tree almost surely (a.s.) if and only if d ≤ 4. A complete characterization of the connectivity of the WUSF was given by Benjamini, Lyons, Peres, and Schramm [3], who proved that the WUSF of a graph is connected if and only if two independent random walks on G intersect infinitely often a.s. Extending Pemantle's result, Benjamini, Kesten, Peres, and Schramm [2] (henceforth referred to as BKPS) discovered the following surprising theorem. Theorem (BKPS [2]). Let F be a sample of the USF of Z^d. For each x, y ∈ Z^d, let N(x, y) be the minimal number of edges that are not in F used by a path from x to y in Z^d. Then max_{x,y} N(x, y) = ⌈(d − 4)/4⌉ almost surely. In particular, this theorem shows that every two trees in the uniform spanning forest of Z^d are adjacent almost surely if and only if d ≤ 8. Similar results have since been obtained for other models [22,24,4,18,23]. The purpose of this paper is to show that, once d ≥ 5, the uniform spanning forest undergoes qualitative changes to its connectivity every time the dimension increases, rather than just every four dimensions. In order to formulate such a theorem, we introduce the component graph of the uniform spanning forest. Let G be a graph and let ω be a subgraph of G. The component graph C_1(ω) of ω is defined to be the simple graph that has the connected components of ω as its vertices, and has an edge between two connected components k_1 and k_2 of ω if and only if there exists an edge e of G that has one endpoint in k_1 and the other endpoint in k_2. More generally, for each r ≥ 1, we define the distance-r component graph C_r(ω) to be the graph which has the components of ω as its vertices, and has an edge between two components k_1 and k_2 of ω if and only if there is a path in G from k_1 to k_2 that has length at most r. (Figure 1: three trees with boundary that can be used to distinguish the component graphs of the uniform spanning forest in dimensions 9, 10, 11, and 12; boundary vertices are white, interior vertices are black.)
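For a finite graph, the component graph just defined can be computed directly; the sketch below uses networkx (a tooling choice of ours, not anything from the paper) and brute-force shortest-path queries, and is only meant to make the definition concrete on small examples.

```python
import networkx as nx
from itertools import combinations

def component_graph(G, omega_edges, r=1):
    """Distance-r component graph C_r(omega) of the subgraph omega of G.
    Vertices are the components of omega (as frozensets of vertices of G);
    two components are joined if some path in G of length <= r connects them.
    Illustrative sketch only; G and omega must be finite here."""
    omega = nx.Graph()
    omega.add_nodes_from(G.nodes)
    omega.add_edges_from(omega_edges)
    comps = [frozenset(c) for c in nx.connected_components(omega)]

    dist = dict(nx.all_pairs_shortest_path_length(G, cutoff=r))
    C = nx.Graph()
    C.add_nodes_from(comps)
    for k1, k2 in combinations(comps, 2):
        if any(v in dist.get(u, {}) for u in k1 for v in k2):
            C.add_edge(k1, k2)
    return C

# Toy example: omega is a perfect matching of a 4-cycle, giving two components
# that are adjacent in C_1.
G = nx.cycle_graph(4)
C = component_graph(G, omega_edges=[(0, 1), (2, 3)], r=1)
print(C.number_of_nodes(), C.number_of_edges())   # 2 1
```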
When formulated in terms of the component graph, the result of BKPS states that the diameter of C_1(F) is almost surely ⌈(d − 4)/4⌉ for every d ≥ 1. In particular, it implies that C_1(F) is almost surely a single point for all 1 ≤ d ≤ 4 (as follows from Pemantle's theorem), and is almost surely a complete graph on a countably infinite number of vertices for all 5 ≤ d ≤ 8. We now introduce the notion of ubiquitous subgraphs. We define a graph with boundary H = (∂V, V°, E) = (∂V(H), V°(H), E(H)) to be a graph H = (V, E) whose vertex set V is partitioned into two disjoint sets, V = ∂V ∪ V°, which we call the boundary and interior vertices of H, such that ∂V ≠ ∅. Given a graph G, a graph with boundary H, and a collection of distinct vertices (x_u)_{u∈∂V} of G indexed by the boundary vertices of H, we say that H is present at (x_u)_{u∈∂V} if there exists a collection of vertices (x_u)_{u∈V°} of G indexed by the interior vertices of H such that x_u ∼ x_v or x_u = x_v for every u ∼ v in H. (Note that, in this definition, we do not require that x_u and x_v are not adjacent in G if u and v are not adjacent in H.) We say that H is faithfully present at (x_u)_{u∈∂V} if there exists a collection of distinct vertices (x_u)_{u∈V°} of G, disjoint from (x_u)_{u∈∂V}, indexed by the interior vertices of H such that x_u ∼ x_v for every u ∼ v in H. In figures, we will use the convention that boundary vertices are white and interior vertices are black. We say that H is ubiquitous in G if it is present at every collection of distinct vertices (x_u)_{u∈∂V} in G, and that H is faithfully ubiquitous in G if it is faithfully present at every collection of distinct vertices (x_u)_{u∈∂V} in G. For example, if H is a path of length n with the endpoints of the path as its boundary, then H is ubiquitous in a graph G if and only if G has diameter less than or equal to n. The same graph is faithfully ubiquitous in G if and only if every two vertices of G can be connected by a simple path of length exactly n. If H is a star with k leaves set to be in the boundary and the central vertex set to be in the interior, then H is ubiquitous in a graph G if and only if every k vertices of G share a common neighbour, and in this case H is also faithfully ubiquitous. The main result of this paper is the following theorem. We say that a transitive graph G is d-dimensional if there exist positive constants c and C such that c n^d ≤ |B(x, n)| ≤ C n^d for every vertex x of G and every n ≥ 1, where B(x, n) denotes the graph-distance ball of radius n around x in G. The WUSF and FUSF of any d-dimensional transitive graph coincide [3], and we speak simply of the USF of G. Note that the geometry of a d-dimensional transitive graph may be very different from that of Z^d. (Working at this level of generality does not add any substantial complications to the proof, however.) Theorem 1.1. Let G_1 and G_2 be transitive graphs of dimension d_1 and d_2 respectively, and let F_1 and F_2 be uniform spanning forests of G_1 and G_2 respectively. Then the following claims hold for every r_1, r_2 ≥ 1: (1) (Universality and monotonicity.) If d_1 ≥ d_2 ≥ 9, then every finite graph with boundary that is ubiquitous in C_{r_1}(F_1) is also ubiquitous in C_{r_2}(F_2) almost surely. (2) (Distinguishability of different dimensions.) If d_1 > d_2 ≥ 9, then there exists a finite graph with boundary H such that H is almost surely ubiquitous in C_{r_2}(F_2) but not in C_{r_1}(F_1). Moreover, the same result holds with 'ubiquitous' replaced by 'faithfully ubiquitous'.
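Presence and faithful presence are finite existence statements, so for very small graphs they can be checked by exhaustive search; the sketch below (exponential-time, with networkx again as an incidental tooling choice) does exactly that for faithful presence.

```python
import networkx as nx
from itertools import product

def faithfully_present(G, H_edges, boundary_map, interior):
    """Brute-force test of faithful presence: try to place the interior
    vertices of H at distinct vertices of G, disjoint from the boundary
    placement, so that every edge of H maps to an edge of G.
    H_edges: edges of H over labels; boundary_map: boundary label -> vertex
    of G; interior: list of interior labels.  Exponential-time sketch."""
    used = set(boundary_map.values())
    for choice in product(G.nodes, repeat=len(interior)):
        if len(set(choice)) != len(choice) or used & set(choice):
            continue
        placement = dict(boundary_map)
        placement.update(zip(interior, choice))
        if all(G.has_edge(placement[u], placement[v]) for u, v in H_edges):
            return True
    return False

# Toy check: a star with three boundary leaves and one interior centre is
# faithfully present at any three distinct vertices of a complete graph.
G = nx.complete_graph(5)
H_edges = [("a", "c"), ("b", "c"), ("d", "c")]      # c is the interior centre
print(faithfully_present(G, H_edges, {"a": 0, "b": 1, "d": 2}, ["c"]))  # True
```

The example reproduces the star observation above: with three boundary leaves and an interior centre, faithful presence at three distinct vertices of a complete graph only needs one common neighbour.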
In order to prove item (2) of Theorem 1.1, it will suffice to consider the case that H is a tree. In this case, the following theorem allows us to calculate the dimensions for which H is ubiquitous in the component graph of the uniform spanning forest. The corresponding result for general H is given in Theorem 1.4. Examples of trees that can be used to distinguish between different dimensions using Theorem 1.2 are given in Figures 1 and 2. Theorem 1.2. Let G be a d-dimensional transitive graph for some d > 8, let F be a uniform spanning forest of G, let r ≥ 1, and let T be a finite tree with boundary. Then T is almost surely ubiquitous in C_r(F) if and only if T is almost surely faithfully ubiquitous in C_r(F), if and only if (d − 8)|E(S)| ≤ (d − 4)|V°(S)| for every subtree S of T that contains at least one edge (equivalently, |E(S)|/|V°(S)| ≤ (d − 4)/(d − 8) whenever V°(S) ≠ ∅). Note that (d − 4)/(d − 8) is a decreasing function of d for d > 8. The theorem of BKPS follows as a special case of Theorem 1.2 by taking T to be a path. Figure 2 gives an example of a family of trees that can be used to deduce item (2) of Theorem 1.1 from Theorem 1.2. See Figure 3 for another example application. The next theorem shows that uniform spanning forests in different dimensions between 5 and 8 also have qualitatively different connectivity properties. The result is more naturally stated in terms of ubiquitous subhypergraphs in the component hypergraph of the USF; see the following section for definitions and Figure 4 for an illustration of the relevant hypergraphs. Theorem 1.3 (Distinguishing dimensions 5, 6, 7, and 8). Let G be a d-dimensional transitive graph and let F be a uniform spanning forest of G. The following hold almost surely. (1) If d = 5, then there exists a constant r_0 such that for every five trees of F, there exists a ball of radius r_0 in G that is intersected by each of the five trees. On the other hand, if d ≥ 6, then for every r ≥ 1, there exists a set of four trees in F such that there does not exist a ball of radius r in G intersecting all four trees. (2) If d = 5 or 6, then there exists a constant r_0 such that for every three trees of F, there exists a ball of radius r_0 in G that is intersected by each of the three trees. On the other hand, if d ≥ 7, then for every r ≥ 1, there exists a set of three trees in F such that there does not exist a ball of radius r in G intersecting all three trees. (3) If d = 5, 6, or 7, then there exists a constant r_0 such that for every r ≥ r_0, every set of three pairs of trees of F has the following property: There exist three trees T_1, T_2, T_3 in F such that T_i and the i-th pair of trees all intersect some ball B_i of radius r in G for each i = 1, 2, 3, and the trees T_1, T_2, T_3 all intersect some ball B_0 of radius r in G. On the other hand, if d ≥ 8, then for every r ≥ 1 there exists a set of three pairs of trees of F that does not have this property. 1.1 Ubiquity of general graphs and hypergraphs in the component graph. In this section, we extend Theorem 1.2 to the case that H is not a tree. In order to formulate this extension, it is convenient to consider the even more general setting in which H is a hypergraph with boundary. Indeed, it is a surprising feature of the resulting theory that one is forced to consider hypergraphs even if one is interested only in graphs. We define a hypergraph H = (V, E, ⊥) to be a triple consisting of a set of vertices V, a set of edges E, and a binary relation ⊥ ⊆ V × E such that the set {v ∈ V : (v, e) ∈ ⊥} is nonempty for every e ∈ E. We write v ⊥ e or e ⊥ v and say that v is incident to e if (v, e) ∈ ⊥.
Note that this definition is somewhat nonstandard, as it allows multiple edges with the same set of incident vertices. We say that a hypergraph is simple if it does not contain two distinct edges whose sets of incident vertices are equal. Every graph is also a hypergraph. A hypergraph with boundary H = (∂V, V°, E, ⊥) is defined to be a hypergraph H = (V, E, ⊥) together with a partition of V into disjoint subsets, V = ∂V ∪ V°, the boundary and interior vertices of H, such that ∂V ≠ ∅. The degree of a vertex in a hypergraph is the number of edges that are incident to it, and the degree of an edge in a hypergraph is the number of vertices it is incident to. To lighten notation, we will often write simply H = (∂V, V°, E) for a hypergraph with boundary, leaving the incidence relation ⊥ implicit. If H = (∂V, V°, E, ⊥) is a hypergraph with boundary, a subhypergraph (with boundary) of H is defined to be a hypergraph with boundary of the form H' = (∂V', V'°, E', ⊥') whose vertices and edges are subsets of those of H and whose incidence relation is the restriction of ⊥. We say that a hypergraph with boundary H' = (∂V', V'°, E', ⊥') is a quotient of a hypergraph with boundary H = (∂V, V°, E, ⊥) if there exists a surjective function φ_V : V → V' mapping ∂V bijectively onto ∂V' and a bijective function φ_E : E → E' such that φ_V maps the set of vertices incident to e onto the set of vertices incident to φ_E(e) for every e ∈ E. Similarly, we say that H' is a coarsening of H (and call H a refinement of H') if there exists a bijection φ_V : V → V' mapping ∂V bijectively onto ∂V' and a surjection φ_E : E → E' such that the set of vertices incident to an edge of H' is the image under φ_V of the union of the sets of vertices incident to its preimages under φ_E. Theorem 1.4. Let G be a d-dimensional transitive graph for some d > 4, let F be the uniform spanning forest of G, let H be a finite simple graph with boundary, and let r ≥ 1. Then H is faithfully ubiquitous in C_r(F) almost surely if and only if H has a coarsening all of whose subhypergraphs are d-buoyant. Moreover, H is ubiquitous in C_r(F) if and only if it has a quotient that is faithfully ubiquitous in C_r(F) almost surely. The terminology used here arises from the following analogy: we imagine that from each vertex-edge pair (v, e) of H with v ⊥ e we hang a weight exerting a downward force of (d − 4), while from each edge and each interior vertex of H we attach a balloon exerting an upward force of d or (d − 4) respectively. The net force is the apparent weight, so that the apparent weight of H is (d − 4) Σ_{e∈E} deg(e) − d |E| − (d − 4) |V°|. The hypergraph is buoyant (i.e., floats) if the apparent weight is non-positive. Theorem 1.4 is best understood as a special case of a more general theorem concerning the component hypergraph. Given a subset ω of a graph G and r ≥ 1, we define the component hypergraph C^hyp_r(ω) to be the simple hypergraph that has the components of ω as vertices, and where a finite set of components W is an edge of C^hyp_r(ω) if and only if there exists a set of diameter r in G that intersects every component of ω in the set W. Presence, faithful presence, ubiquity and faithful ubiquity of a hypergraph with boundary H in a hypergraph G are defined similarly to the graph case. For example, we say that a finite hypergraph with boundary H = (∂V, V°, E) is faithfully present at (x_u)_{u∈∂V} in G if there exists a collection of distinct vertices (x_u)_{u∈V°} of G, disjoint from (x_u)_{u∈∂V}, indexed by the interior vertices of H such that for each e ∈ E there exists an edge f of G that is incident to all of the vertices in the set {x_v : v ⊥ e}. Given a d-dimensional graph G and M ≥ 1, we let R_G(M) be minimal such that there exists a set of vertices in G of diameter R_G(M) that intersects M distinct components of the uniform spanning forest of G with positive probability.
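The apparent-weight bookkeeping above becomes a one-line computation once a hypergraph with boundary is written down explicitly. The sketch below assumes only the formula implied by the balloon analogy ((d − 4) per incidence downward, d per edge and (d − 4) per interior vertex upward); the dict-based encoding of the hypergraph is ours.

```python
def apparent_weight(d, edges, interior_vertices):
    """Apparent weight of a hypergraph with boundary, following the balloon
    analogy in the text: each vertex-edge incidence weighs (d - 4), each edge
    carries a balloon of buoyancy d, and each interior vertex a balloon of
    buoyancy (d - 4).  Non-positive weight means d-buoyant.
    edges: dict edge-name -> set of incident vertex names."""
    incidences = sum(len(vs) for vs in edges.values())
    return (d - 4) * incidences - d * len(edges) - (d - 4) * len(interior_vertices)

# A path of length 2 viewed as a graph with boundary: two boundary endpoints,
# one interior vertex, and two ordinary (degree-2) edges.
edges = {"e1": {"x", "m"}, "e2": {"m", "y"}}
interior = {"m"}
for d in (9, 10, 11, 12, 13):
    w = apparent_weight(d, edges, interior)
    print(d, w, "buoyant" if w <= 0 else "not buoyant")
```

For this particular example the apparent weight works out to d − 12, so the path is d-buoyant exactly when d ≤ 12, matching the BKPS statement quoted earlier that the component graph has diameter ⌈(d − 4)/4⌉ ≤ 2 precisely up to dimension 12.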
Given a hypergraph with boundary H, we let R G (H) = R G (max e∈E deg(e)). immediately by applying Theorem 1.5 to the hypergraphs pictured in Figure 4. The min max problem arising in (1.1) and (1.2) is studied in Section 2.7. Organisation In Section 2, we give background on uniform spanning forests, establish notation, and prove some simple preliminaries that will be used throughout the rest of the paper. In Section 3, we outline some of the key steps in the proof of the main theorems; this section is optional if the reader prefers to go straight to the fully detailed proofs. Section 4 is the computational heart of the paper, where the quantitative estimates needed for the proof of the main theorems are established. In Section 5, we deduce the main theorems from the estimates of Section 4 together with the multicomponent indistinguishability theorem of [12], which is used as a zero-one law. This section is quite short, most the work having already been done in Section 4. We conclude with some open problems and remarks in Section 6. 2 Background, definitions, and preliminaries 2.1 Basic notation Let G be a d-dimensional transitive graph with vertex set V, and let F be the uniform spanning forest of G. For each set W ⊆ V, we write write F (W ) for the event that the vertices of W are all in the same component of F. Let r ≥ 1 and let H = (∂V, V • , E) be a finite hypergraph with boundary. We definê We write , , and for inequalities or equalities that hold up to a positive multiplicative constant depending only on some fixed data that will be clear from the context, usually G, H, and r, and write , and ≈ for inequalities or equalities that hold up to an additive constant depending only on the same data. In particular a b if and only if log 2 a ≈ log 2 b. We sometimes write exp 2 (a) to mean 2 a . For each two vertices x and y of G, we write xy = d G (x, y) + 1, where d G is the graph metric on G. For each vertex x of G and ∞ ≥ N > n ≥ 0, we define the dyadic shell Λ x (n, N ) := y ∈ V : 2 n ≤ xy ≤ 2 N . If x = (x u ) u∈∂V is a collection of vertices in G, we choose one such point x 0 arbitrarily and set Λ x (n, N ) = Λ x 0 (n, N ) for every N > n ≥ 0. Since G is d-dimensional, we have that for all n ≥ 0 and N ≥ n + 1. The upper bound is immediate, while the lower bound follows because Λ x (n, N ) contains both some point y with x 0 y = 2 N −1 + 2 N −2 and the ball of radius 2 N −2 around this point y. Uniform spanning forests Given a finite connected graph G, we define UST G to be the uniform probability measure on the set of spanning trees of G, that is, connected subgraphs of G that contain every vertex of G and do not contain any cycles. Now suppose that G = (V, E) is an infinite, connected, locally finite graph, and let (V i ) i≥1 be an exhaustion of V by finite sets, that is, an increasing sequence of finite, connected subsets of V such that i≥1 V i = V . For each i ≥ 1, let G i be the subgraph of G induced 1 by V i , and let G * i be the graph formed from G by contracting V \ V i down to a single vertex and deleting all of the self-loops that are created by this contraction. The free and wired uniform spanning forest (FUSF and WUSF) measures of G, denoted FUSF G and WUSF G , are defined to be the weak limits of the uniform spanning tree measures of G i and G * i respectively. 
That is, for every finite set S ⊂ E, Both limits were proven to exist by Pemantle [21] (although the WUSF was not considered explicitly until the work of Häggström [9]), and do not depend on the choice of exhaustion. Benjamini, Lyons, Peres, and Schramm [3] proved that the WUSF and FUSF of G coincide if and only if G does not admit harmonic functions of finite Dirichlet energy, from which they deduced that the WUSF and FUSF coincide on any amenable transitive graph. In particular, it follows that the WUSF and FUSF coincide for every transitive d-dimensional graph, and in this context we refer to both the FUSF and WUSF measures on G as simply the uniform spanning forest measure, USF G , on G. We say that a random spanning forest of G is a uniform spanning forest of G if it has law USF G . Wilson's Algorithm Wilson's algorithm [29] is a way of generating the uniform spanning tree of a finite graph by joining together loop-erased random walks. It was extended to generate the wired uniform spanning forests of infinite, transient graphs by Benjamini, Lyons, Peres, and Schramm [3]. Recall that, given a path γ = (γ n ) n≥0 in a graph G that is either finite or visits each vertex of G at most finitely often, the loop-erasure of γ is defined by deleting loops from γ chronologically as they are created. The loop-erasure of a simple random walk path is known as loop-erased random walk and was first studied by Lawler [17]. Formally, we define the loop-erasure of γ to be LE(γ) = (γ τ i ) i≥0 , where τ i is defined recursively by setting τ 0 = 0 and (If G is not simple, then we also keep track of which edges are used by LE(γ).) Let G be an infinite, connected, transient, locally finite graph. Wilson's algorithm rooted at infinity allows us to sample the wired uniform spanning forest of G as follows. Let (v i ) i≥1 be an enumeration of the vertices of G. Let F 0 = ∅, and define a sequence of random subforests (F i ) i≥0 of G as recursively follows. n=0 . Finally, let F = i≥1 F i . This is Wilson's algorithm rooted at infinity: the resulting random forest F is a wired uniform spanning forest of G. The main connectivity estimate Let K be a finite set of vertices of G. Following [2], we define the spread of K, denoted K , to be Note that the tree τ being minimized over in the definition of K need not be a subgraph of G. If we enumerate the vertices of K as x 1 , . . . , x n , then we have the simple estimate [2, Lemma 2.6] where the implied constant depends on the cardinality of K. In practice we will always use (2.2), rather than the definition, to estimate the spread. The main tool in our analysis of the USF is the following estimate of BKPS. Recall that F (K) is the event that every vertex of K is in the same component of the uniform spanning forest F. Theorem 2.1 (BKPS [2]). Let G be a d-dimensional transitive graph with d > 4, let F be the uniform spanning forest of G, and let K be a finite set of vertices of G. Then there exists a constant C = C(G, |K|) such that BKPS proved the theorem in the case G = Z d . The general case follows from the same proof by applying the heat kernel estimates of Hebisch and Saloff-Coste [10] (see Theorem 4.18), as stated in [2,Remark 6.12]. These heat kernel estimates imply in particular that the Greens function estimate holds for every d-dimensional transitive graph G with d > 2 and every pair u, v ∈ V. Proposition 2.2. 
Let G be a d-dimensional transitive graph, let F be the uniform spanning forest of G, and let K i be a collection of finite sets of vertices of G indexed by some finite set I. Then there exists a constant C = C(G, |I|, {|K i | : i ∈ I}) such that Proof. We may assume that I = {1, . . . , k} for some k ≥ 1. Given a collection of independent random walks X 1 , . . . , X n , let A(X 1 , . . . , X n ) be the indicator of the event that the forest generated by running the first n steps of Wilson's algorithm using the walks X 1 , . . . , X n , in that order, is connected. Thus, given a finite set K ⊂ V, we have P(F (K)) = P A X 1 , . . . , X |K| = 1 where X 1 , . . . , X |K| are independent random walks started at the vertices of K. Now suppose that (K i ) i∈I is a collection of finite sets, and suppose we generate a sample F of the USF, starting with independent random walks X 1,1 , . . . , X 1,|K 1 | , X 2,1 , . . . , X k,|K k | , where X i,j starts from the jth element of K i . Then we observe that and hence that The claim now follows from Theorem 2.1. It is also possible to prove (2.6) using the negative association property of the USF, see e.g. [8]. x v } are edges of G and there exist three distinct trees of F each containing one of the sets Witnesses Let H be a finite hypergraph with boundary, let r ≥ 1, and let x = (x v ) v∈∂V be a collection of vertices in G. We say that H is r-faithfully present at x if it is faithfully present at the components of x in C hyp r (F). We define r-presence of H at x similarly. Let E • be the set of pairs (e, v), where e ∈ E is an edge of H and v ⊥ e is a vertex of H incident to e. We say that ξ = (ξ (e,v) ) (e,v)∈E• ∈ V E• is a witness for the r-faithful presence of H at x if the following conditions hold: (1) For every e ∈ E and every u, v ⊥ e we have that ξ (e,v) ξ (e,u) ≤ r − 1. See Figure 7 for an illustrated example. We write W (x, ξ) = W H r (x, ξ) for the event that ξ is a witness for the r-faithful presence of H at x. Thus, on the event that all the vertices of x are in distinct components of F, H is r-faithfully present at x if and only if W H r (x, ξ) occurs for some ξ ∈ V E• , and is present at x if and only if W H r (x, ξ) occurs for some quotient H of H and some ξ ∈ V E•(H ) . We say that H is r-robustly faithfully present at x = (x v ) v∈V if there is an infinite collection {ξ i = (ξ i (e,v) ) (e,v)∈E• : i ≥ 1} such that ξ i is a witnesses for the r-faithful presence of H at x for every i, and ξ j (e,v) = ξ j (e ,v ) for every i > j ≥ 1 and (e, v), (e , v ) ∈ E • . Often, x, r and H will be fixed. In this case we will speak simply of 'faithful presence' to mean 'r-faithful presence', 'robustly faithfully present' to mean 'r-robustly faithfully present', 'witnesses' to mean 'witnesses for the r-faithful presence of H at x', and so on. It will be useful to define the following sets in which witnesses must live. For every (x v ) v∈∂V , n ≥ 0 and N > n, let Indistinguishability of tuples of trees In this section we provide background on the notion of indistinguishability theorems, including the indistinguishability theorem of [12] which will play a major role in the proofs of our main theorems. Indistinguishability theorems tell us that, roughly speaking, 'all infinite components look alike'. The first such theorem was proven in the context of Bernoulli percolation by Lyons and Schramm [20]. 
Indistinguishability of components in uniform spanning forests was conjectured by Benjamini, Lyons, Peres, and Schramm [3] and proven by Hutchcroft and Nachmias [13]. (Partial progress was made independently at the same time by Timár [27].) All of the results just mentioned apply to individual components. In this paper, we will instead apply the indistinguishability theorem of [12], which yields a form of indistinguishability for multiple components in the uniform spanning forest. We will use this theorem as a zero-one law that allows us to pass from an estimate showing that certain events occur with positive probability to knowing that these events must occur with probability one. We now give the definitions required to state this theorem. Let G = (V, E) be a graph, and let k ≥ 1. We define Ω k (G) = {0, 1} E × V k , which we equip with its product σ-algebra and think of as the set of subgraphs of G rooted at an ordered k-tuple of vertices. A measurable set A ⊆ Ω k (G) is said to be a k-component property if That is, A is a k-component property if it is stable under replacing the root vertices with other root vertices from within the same components. Given a k-component property A , we say that a k-tuple of components (K 1 , . . . , K k ) of a configuration ω ∈ {0, 1} E has property A if (ω, (u i ) k i=1 ) ∈ A whenever u 1 , . . . , u k are vertices of G such that u i ∈ K i for every Given a vertex v of G and a configuration ω ∈ {0, 1} E , let K ω (v) denote the connected component of ω containing v. We say that a k-component property A is a tail k-component property if where denotes the symmetric difference. In other words, tail multicomponent properties are stable under finite modifications to ω that result in finite modifications to each of the components of interest K ω (v 1 ), . . . , K ω (v k ). Theorem 2.3 ([12] ). Let G be a d-dimensional transitive graph with d > 4 and with vertex set V, and let F be the uniform spanning forest of G. Then for each k ≥ 1 and each tail k-component property A ⊆ Ω k (G), either every k-tuple of distinct connected components of F has property A almost surely or no k-tuple of distinct connected components of F has property A almost surely. We say that A is a multicomponent property if it is a k-component property for some k ≥ 1. For our purposes, the key example of a tail multicomponent property is the property that some finite hypergraph with boundary H is r-robustly faithfully present at (x v ) v∈∂V . Applying Theorem 2.3, we will deduce that if H is r-robustly faithfully present at some (x v ) v∈∂V with positive probability then it must be almost surely r-robustly faithfully present at every (x v ) v∈∂V for which the vertices {x v } v∈∂V are all in distinct components of F. Optimal Coarsenings In this section we study the min max problem appearing in Theorems 1.4 and 1.5, proving the following. We call a coarsening H = H/ of H d-optimal if is d-optimal. We say that a subhyper- Lemma 2.5. Let H be a finite hypergraph with boundary, let d ∈ R, and let H/ be a d-optimal coarsening of H. Then H / is a d-optimal coarsening of H for every full subhypergraph H of H subordinate to . It follows that where the equality on the second line follows from Lemma 2.5. Taking where the second equality follows from (2.9). The the final line of this display is clearly less than or equal to the first line, so that all the lines must be equal, completing the proof. Remark. 
Lemma 2.6 yields a brute force algorithm for computing the value of the relevant max min problem that is exponentially faster than the trivial brute force algorithm, although still taking superexponential time in the number of edges of H. Sketch of the proof In this section we give a detail-free overview of the most important components of the proof. This section is completely optional; all the arguments and definitions mentioned here will be repeated in full detail later on. Non-ubiquity in high dimensions Let G be a d-dimensional transitive graph, let H be a finite hypergraph with boundary, and let F be the uniform spanning forest of G. We wish to show that if every coarsening of H has a subhypergraph that is not d-buoyant, then H is not faithfully ubiquitous in C hyp r (F) for any r ≥ 1 a.s. By Lemma 2.4, this condition is equivalent to there existing a subhypergraph of H none of whose coarsenings are d-buoyant. If H is faithfully ubiquitous then so are all of its subhypergraphs, and so it suffices to consider the case that H does not have any d-buoyant coarsenings, i.e., thatη d (H) > 0. To show that H is not faithfully ubiquitous, it would suffice to show that if the vertices x = (x v ) v∈∂V are far apart from each other, then the expected total number of witnesses for the faithful presence of H at x is small. As it happens, we are not able to control the total number of witnesses without making further assumptions on H. Nevertheless, the most important step in our argument is to show that if x is contained in Λ x (0, n − 1), then the expected number of witnesses in Λ x (n, n + 1) is exponentially small as a function of n. Once we have done this, we will control the expected number of witnesses that occur 'at the same scale' as x by a similar argument. We are not finished at this point, of course, since we have not ruled out the existence of witnesses that are spread out across multiple scales. However, given the single-scale estimates, we are able to handle multi-scale witnesses of this form via an inductive argument on the size of H (Lemmas 4.7-4.9), which allows us to reduce from the multi-scale setting to the single-scale setting. Let us briefly discuss how the single-scale estimate is attained. Write Ξ = Ξ x (n, n + 1). Proposition 2.2 implies that the expected number of witnesses in Λ x (n, n + 1) is at most a constant multiple of ξ∈Ξ u∈∂V To control this sum, we split it as follows. Let L be the set of symmetric functions : E 2 → {0, . . . , n} such that (e, e) = 0 for every e ∈ E. For each ∈ L, let Ξ = ξ ∈ Ξ : 2 (e,e ) ≤ ξ e ξ e ≤ 2 (e,e )+2 for all e, e ∈ E , so that Ξ = ∈L Ξ . The advantage of this decomposition is that W is approximately constant on each set Ξ : On the other hand, by considering the number of choices we have for ξ e i at each step given our previous choices, it follows that whereˆ is the largest ultrametric on E that is dominated by . (Ξ could be much smaller than this of course -it could even be empty.) We deduce that We have that log 2 |L| = E 2 log 2 (n + 1), which will be negligible compared with the rest of the expression in the case thatη d (H) > 0. From here, the problem is to identify the ∈ L achieving the maximum above. We will argue, by invoking a general lemma ( Choosing such a coarsening optimally, it is not hard to deduce that giving the desired exponential decay. Ubiquity in low dimensions We now sketch the proof of ubiquity in low dimensions. 
Here we will only discuss the case that d/(d − 4) is not an integer; the case that d/(d − 4) is an integer raises several additional technical complications, see Section 4.2.1. Let G be a d-dimensional transitive graph with d ∈ {7} ∪ {9, 10, . . .}, let H be a finite hypergraph with boundary, and let F be the uniform spanning forest of G. Recall the definition of R G (H) from Section 1.1. Working in the opposite direction to the previous subsection, we wish to prove that if H has a coarsening all of whose subhypergraphs are d-buoyant, then H is faithfully ubiquitous in the component hypergraph C hyp r (F) for every r ≥ R G (H) a.s. We say that H is r-robustly faithfully present at x = (x v ) v∈V if there are infinitely many disjoint witnesses for the faithful presence of H at x. The event that H is r-robustly faithfully present at x is a tail |∂V |-component property. Thus, by Theorem 2.3, it suffices to prove that there exists an x such that, with positive probability, the points of x are all in different components of F and H is R G (H)-robustly faithfully present at x. Let us suppose for now that every subhypergraph of H is d-buoyant (i.e., that we do not have to pass to a coarsening for this to be true). To prove that H has a positive probability of being robustly faithfully present at some x, we perform a first and second moment analysis on the number of witnesses in dyadic shells. Suppose that x is contained in Λ x (0, n − 1). Since we are now interested in existence rather than nonexistence, we can make things easier for ourselves by considering only ξ that are both contained in a dyadic shell Λ x (n, n + 1), and such that ξ (e,u) ξ (e ,u ) ≥ 2 n−C 1 whenever e ≠ e , for some appropriately chosen constant C 1 . Furthermore, for each e ∈ E the points {ξ (e,u) : u ⊥ e} must be sufficiently well separated that there are no local obstructions to ξ being a witness; this is where we need that r ≥ R G (H). Call such a ξ good, and denote the set of good ξ by Ω x (n). We then argue that for good ξ, the probability that ξ is a witness is comparable to where ξ e is chosen arbitrarily from {ξ (e,u) : u ⊥ e} for each e, and hence that the expected number of witnesses in Ω x (n) is comparable to 2 −η d (H)n . In other words, we have that the upper bound on the probability that ξ is a witness provided by Proposition 2.2 is comparable to the true probability when ξ is good. Our proof of this estimate appears in Section 4.3; unfortunately it is quite long. Taking this lower bound on trust for now, the rest of the analysis proceeds similarly to that sketched in Section 3.1, and is in fact somewhat simpler thanks to our restriction to good configurations. The bound implies that the expected number of good witnesses in Λ x (n, n + 1) is comparable to exp 2 (−η d (H)n). Estimating the second moment is equivalent to estimating the expected number of pairs ξ, ζ such that ξ and ζ are both good witnesses. Observe that if ξ and ζ are both good witnesses then the following hold: (1) For each v ∈ V , there is at most one v ∈ V such that ξ (e,v) and ζ (e ,v ) are in the same component of F for some (and hence every) e ⊥ v and e ⊥ v . (2) For each e ∈ E, there is at most one e ∈ E such that ξ e ζ e ≤ 2 n−C 1 −1 . To account for the degrees of freedom given by (1), we define Φ to be the set of functions (Here and elsewhere, we use as a dummy symbol so that we can encode partial bijections by functions.)
For each φ ∈ Φ, we define W φ (ξ, ζ) to be the event that ξ and ζ are both witnesses, and that ξ (e,v) and ζ (e ,v ) are in the same component of F if and only if e = φ(e). Thus, to control the expected number of pairs of good witnesses, it suffices to control Next, to account for the degrees of freedom given by (2), we define Ψ to be the set of functions ψ : E → E ∪ { } such that the preimage ψ −1 (e) has at most one element for each e ∈ E. We can easily upper bound the volume Using this together with Proposition 2.2, it is straightforward to calculate that We now come to some case analysis. Observe that for every ψ ∈ Ψ and e ∈ E, we have that Since d/(d − 4) is not an integer, the middle case cannot occur and we obtain that From here, our task is to show that the expression on the right hand side is maximized when φ ≡ and ψ ≡ , in which case it is equal to −2dη d (H)n. To do this, we identify optimal choices of φ and ψ with subhypergraphs of H, and use the assumption that every subhypergraph of H is d-buoyant. This should be compared to how, in the proof of non-ubiquity sketched in the previous subsection, we identified optimal choices of with coarsenings of H. Once we have this, since there are only a constant number of choices for φ and ψ, we deduce that the second moment of the number of good witnesses is comparable to the square of the first moment. Thus, it follows from the Cauchy-Schwarz inequality that the probability of there being a good witness in each sufficiently large dyadic shell is bounded from below by some ε > 0, and we deduce from Fatou's lemma that there are good witnesses in infinitely many dyadic shells with probability at least ε. This completes the proof that robust faithful presence occurs with positive probability. It remains to remove the simplifying assumption we placed on H, i.e., to allow ourselves to pass to a coarsening of H all of whose subhypergraphs are d-buoyant before proving faithful ubiquity. To do this, we introduce the notion of constellations of witnesses. These are larger collections of points, defined in such a way that every constellation of witnesses for H contains a witness for each refinement of H. In the actual, fully detailed proof we will work with constellations from the beginning. This does not add many complications. Non-ubiquity in high dimensions The goal of this section is to prove the following. Let H = (∂V, V • , E) be a finite hypergraph with boundary such that E ≠ ∅, and let r ≥ 1. Recall that W (x, ξ) is defined to be the event that ξ is a witness for the faithful presence of H at x. For each N > n, we define and so that, if we choose a vertex u(e) ⊥ e arbitrarily for each e ∈ E and set (ξ e ) e∈E = (ξ (e,u(e)) ) e∈E , it follows from Proposition 2.2 that for every x, n, and N . To avoid trivialities, in the case that H does not have any edges we define W H x (n, N ) = 1 for every x ∈ V ∂V and N > n. In order to prove Proposition 4.1, it will suffice to show that if H has a subhypergraph with boundary that does not have any d-buoyant coarsenings, then for every ε > 0 there exists a collection of vertices (x u ) u∈∂V such that all the vertices x u are in different components of F with probability at least 1/2 (which, by Theorem 2.1, will be the case if the vertices are all far away from each other), but P(H is faithfully present at In order to prove this, we seek to obtain upper bounds on the quantity W H x (n, N ). We begin by considering the case of a single distant scale.
That is, the case that |N − n| is a constant and all the points of x are contained in Λ x (0, n − 1). Recall thatη d (H) is defined to be min{η d (H ) : H is a coarsening of H}. It will be useful for applications in Section 4.3 to prove a more general result. A graph G is said to be d-Ahlfors regular if there exists a positive constant c such that c −1 r d ≤ |B(x, r)| ≤ cr d for every r ≥ 1 and every x ∈ V (in which case we say G is d-Ahlfors regular with constant c). Given α > 0 and a finite hypergraph with boundary H, we define H is a coarsening of H}. Given a graph G, a finite hypergraph with boundary H = (∂V, V • , E), and points (x v ) v∈∂V , (ξ e ) e∈E we also define and, for each N > n, Note that η d = η d,2 and W H x = W H,2 x , so that Lemma 4.2 follows as a special case of the following lemma. Before proving this lemma, we will require a quick detour to analyze a relevant optimization problem. Optimization on the ultrametric polytope which is a closed convex subset of R A 2 . We consider U A to be the set of all ultrametrics on A with distances bounded by 1. We write P(A 2 ) for the set of subsets of A 2 . Lemma 4.4. Let A be a finite non-empty set, and let F : where K < ∞, c 1 , . . . , c K ∈ R, and W 1 , . . . , W K ∈ P(A 2 ). Then the maximum of F on U A is obtained by an ultrametric for which all distances are either zero or one. That is, Proof. We prove the claim by induction on |A|. The case |A| = 1 is trivial. Suppose that the claim holds for all sets with cardinality less than that of A. We may assume that (a, a) / It is easily verified that for every x ∈ R A 2 , every λ ≥ 0, and every α ∈ R. Suppose y ∈ U A is such that F (y) = max x∈U A F (x). We may assume that F (y) > F (1) and that F (y) > F (0) = 0, since otherwise the claim is trivial. Let m = min{y a,b : a, b ∈ A, a = b}, which is less than one by assumption. We have that and so we must have m = 0 since y maximizes F . Define an equivalence relation on A by letting a and b be related if and only if y a,b = 0. We writeâ for the equivalence class of b under . Let C be the set of equivalence classes of , and let φ : for every x ∈ U n . For each 1 ≤ k ≤ K, letŴ k be the set of pairsâ,b ∈ C such that (a, b) ∈ W k for some a in the equivalence classâ and b in the equivalence classb. Let We have thatF = F • φ, and, since y maximized F , we deduce that, by the induction hypothesis, completing the proof. We will also require the following generalisation of Lemma 4.4. For each finite collection of disjoint finite sets {A i } i∈I with union A = i∈I A i , we define x a,b = 1 for every distinct i, j ∈ I and every a ∈ A i and b ∈ A j .}. Lemma 4.5. Let {A i } i∈I be a finite collection of disjoint, finite, non-empty sets with union A = i∈I A i , and let F : where K < ∞, c 1 , . . . , c K ∈ R, and W 1 , . . . , W K ∈ P(A 2 ). Then the maximum of F on U A is obtained by an ultrametric for which all distances are either zero or one. That is, Proof. We prove the claim by fixing the index set I and inducting on |A|. The case |A| = |I| is trivial. Suppose that the claim holds for all collections of finite disjoint sets indexed by I with total cardinality less than that of A. We may assume that (i, i) / ∈ W k for every 1 ≤ k ≤ K and i ∈ A, since if (i, i) ∈ W k for some 1 ≤ k ≤ K then the term c k min{x i,j : (i, j) ∈ W k } is identically zero on U A . 
Furthermore, we may assume that W k contains more than one element of at least one of the sets A i for each 1 ≤ k ≤ K, since otherwise the term c k min{x i,j : (i, j) ∈ W k } is equal to the constant c k on U {A i } i∈I . We write 1 and i for the vectors and i a,b = 1(a = b, and a, b ∈ A i for some i ∈ I). It is easily verified that The rest of the proof is similar to that of Lemma 4.4. Back to the uniform spanning forest We now return to the proofs of Proposition 4.1 and Lemma 4.3. Proof of Lemma 4.3. In this proof, implicit constants will be functions of c , H, α, d and m. The case that E = ∅ is trivial (by the assumption that d ≥ 2α), so we may assume that |E| ≥ 1. By considering the number of choices we have for ξ e i at each step given our previous choices, it follows that min ˆ (e i , e j ) : j < i . Now, for every ξ ∈ Ξ , we have that Thus, from (4.5) and (4.2) we have that Let Q : L → R be defined to be the expression on the right hand side of (4.3). We clearly have that Q(ˆ ) ≥ Q( ) for every ∈ L, and so there exists ∈ L maximizing Q such that is an ultrametric. It follows from Lemma 4.4 (applied to the normalized ultrametric /n) that there exists ∈ L maximizing Q such that is an ultrametric and every value of is in {0, n}. Fix one such , and define an equivalence relation on E by letting e e if and only if (e, e ) = 0, which is an equivalence relation since is an ultrametric. Observe that, for every 2 ≤ i ≤ |E|, Since |L| ≤ (n + 1) |E| 2 , we deduce that as claimed. Next, we consider the case that the points x v are roughly equally spaced and we are summing over points ξ that are on the same scale as the spacing of the x v . Proof. We may assume that E = ∅, the case E = ∅ being trivial. For notational convenience, we will write ξ v = x v , and consider v ⊥ v for every vertex v ∈ ∂V . Write Ξ = Ξ x (0, n + m 2 ), and observe that for each ξ ∈ Ξ and e ∈ E there exists at most one v ∈ ∂V for which log 2 ξ e ξ v < n − m 1 − 1. To account for these degrees of freedom, we define Φ to be the set of functions φ : For each φ ∈ Φ, let L φ be the set of symmetric functions : (E ∪ ∂V ) 2 → {0, . . . , n} such that (e, e) = 0 for every e ∈ E ∪ ∂V and (e, e ) = n for every e, e ∈ E ∪ ∂V such that φ(e) = φ(e ). For each φ ∈ Φ and ∈ L φ , let Ξ φ, = ξ ∈ Ξ : (e, e ) − m 1 − 1 ≤ log 2 ξ e ξ e ≤ (e, e ) + m 2 + 1 for every e, e ∈ E ∪ ∂V , and observe that Ξ = φ∈Φ ∈L φ Ξ φ, . Now, for each φ ∈ Φ and ∈ L φ , letˆ be the largest ultrametric on E ∪ ∂V that is dominated by . Observe thatˆ ∈ L φ , and that, as in the previous lemma, we have that for every e, e ∈ E ∪ ∂V . Let e 1 , . . . , e |E| be an enumeration of E, and let e 0 , e −1 , . . . , e −|∂V |+1 be an enumeration of ∂V . As in the proof of the previous lemma, we have the volume estimate min{ˆ (e i , e j ) : j < i} (4.5) Now, for every ξ ∈ Ξ φ, , we have that, similarly to the previous proof, min{ (e i , e j ) : j < i, e j ⊥ u}. (Recall that we are considering u ⊥ u for each u ∈ ∂V .) Thus, we have Let Q : L φ → R be defined to be the expression on the right hand side of (4.6). Similarly to the previous proof but applying Lemma Since d > 4 and each equivalence class of can contain at most one vertex of v, we see that Q increases if we remove a vertex v ∈ ∂V from its equivalence class. Since was chosen to maximize Q, we deduce that the equivalence class of v under is a singleton for every v ∈ ∂V . Thus, there exists an ultrametric ∈ L φ maximizing Q such that (e, e ) ∈ {0, n} for every e, e ∈ E and (e, v) = n for every e ∈ E and v ∈ ∂V . 
Letting be the equivalence relation on E (rather than E ∪ ∂V ) corresponding to such an optimal , we have for every x = (x u ) u∈∂V ∈ V ∂V and every N such that x u x v ≤ 2 N −1 for all u, v ∈ ∂V . Note that when |E| ≥ 1 we must consider the term E = ∅ when taking the maximum in this lemma, which gives −η d (H)N + |E| 2 log 2 N . Proof. The claim is trivial in the case E = ∅, so suppose that |E| ≥ 1. For each E E and every 1 ≤ m ≤ |E| + 1, let Observe that if ξ ∈ Ξ then, by the Pigeonhole Principle, there must exist 1 ≤ m ≤ |E| + 2 such that ξ e is not in Λ for any e ∈ E, and we deduce that Thus, to prove the lemma it suffices to show that For each ξ ∈ Ξ E ,m , let ξ = (ξ e ) e∈E = (ξ e ) e∈E and ξ = (ξ e ) e∈E = (ξ e ) e∈E . Then the above displays imply that for every ξ ∈ Ξ E ,m . Thus, summing over ξ ∈ (Λ x (0, N + m − 1)) E and ξ ∈ (Λ x (N + m, N + |E| + 2)) E , we obtain that where the second inequality follows from Lemma 4.2. To deduce (4.8) from (4.9), it suffices to show that We now use Lemma 4.7 and Lemma 4.6 to perform an inductive analysis of W. Although we are mostly interested in the non-buoyant case, we begin by controlling the buoyant case. Proof. We induct on the number of edges in H. The claim is trivial when E = ∅. Suppose that |E| ≥ 1 and that the claim holds for all finite hypergraphs with boundary that have fewer edges than H. By assumption,η d (H ) ≤ 0 for all subhypergraphs H of H. Thus, it follows from the induction hypothesis that log 2 W H x (0, N + |E| + 2) −η d (H ) N + (|E ∪ ∂V | 2 + 1) log 2 N for each proper subhypergraph H of H, and hence that (Note that the implicit constants depending on H from the induction hypothesis are bounded by a constant depending on H since H has only finitely many subhypergraphs.) Observe that whenever E E we have that and so we deduce that for every proper subhypergraph H of H. Thus, we have that for all N ≥ n, where we applied Lemma 4.7 in the second inequality. Summing from n to N we deduce that Using Lemma 4.6 to control the term W H x (0, n) completes the induction. We are now ready to perform a similar induction for the non-buoyant case. Note that in this case the induction hypothesis concerns probabilities rather than expectations. This is necessary because the expectations can grow as N → ∞ for the wrong reasons if H has a buoyant coarsening but has a subhypergraph that does not have a buoyant coarsening (e.g. the tree in Figure 3). for all x = (x u ) u∈∂V ∈ V ∂V such that 2 n−m ≤ x u x v ≤ 2 n−1 for all u, v ∈ ∂V . Proof. We induct on the number of edges in H. For the base case, suppose that H has a single edge. In this case we must have that η d (H) > 0, and we deduce from Lemmas 4.2 and 4.6 that Thus, it suffices to consider the case thatη d (H) > 0 but thatη d (H ) ≤ 0 for every proper subhypergraph H of H. In this case, we apply Lemma 4.7 to deduce that Lemma 4.8 then yields that Finally, combining this with Lemma 4.6 yields that, sinceη d (H) > 0, and the claim follows from Markov's inequality. Proof of Proposition 4.1. Let H be a finite hypergraph with boundary that has a subhypergraph that does not have any d-buoyant coarsenings, so that in particular H has at least one edge. Lemma 4.9 and Proposition 2.2 imply that for every ε > 0, there exists x = (x v ) v∈∂V such that each of the points x v are in different components of F with probability at least 1−ε, but H has probability at most ε to be faithfully present at x in the component hypergraph C hyp r (F). 
It follows that H is not faithfully ubiquitous in the component hypergraph C hyp r (F) a.s. Now suppose that H is a hypergraph with boundary such that every quotient H of H such that R G (H ) ≤ r has a subhypergraph that does not have any d-buoyant coarsenings. Note that if H is a quotient of H such that R G (H ) > r then H is not faithfully present anywhere in G a.s. This follows immediately from the definition of R G (H ). On the other hand, Lemma 4.9 and Proposition 2.2 imply that for every ε > 0, there exists x = (x v ) v∈∂V such that the points x v are all in different components of F with probability at least 1 − ε, but, for each quotient H of H with R G (H ) ≤ r, the hypergraph H has probability at most ε/|{quotients of H}| to be faithfully present at x in the component hypergraph C hyp r (F), since H must have a subhypergraph none of whose coarsenings are d-buoyant by assumption. It follows by a union bound that H has probability at most ε to be present in C hyp r (F) at this x. It follows as above that H is not ubiquitous in the component hypergraph C hyp r (F) a.s. Positive probability of robust faithful presence in low dimensions Recall that if G is a d-dimensional transitive graph, H = (∂V, V • , E) is a finite hypergraph with boundary, r ≥ 1, and (x v ) v∈∂V is a collection of points in G, then we say that H is r-robustly faithfully present at x = (x v ) v∈V if there is an infinite collection {ξ i = (ξ i (e,v) ) (e,v)∈E• : i ≥ 1} such that ξ i is a witness for the r-faithful presence of H at x for every i, and ξ i (e,v) ≠ ξ j (e ,v ) for every i, j ≥ 1 and (e, v), (e , v ) ∈ E • such that i ≠ j. As in the introduction, for each M ≥ 1 we let R G (M ) be minimal such that it is possible for a set of diameter R G (M ) to intersect M distinct components of the uniform spanning forest of G, and let R G (H) = R G (max e∈E deg(e)). We say that a set W ⊂ V is well-separated if the vertices of W are all in different components of the uniform spanning forest F with positive probability. Lemma 4.10. Let G be a d-dimensional transitive graph with d > 4, and let F be the uniform spanning forest of G. Then a finite set W ⊂ V is well-separated if and only if, when we start a collection of independent simple random walks {X v : v ∈ W } at the vertices of W , the event that {X u i : i ≥ 0} ∩ {X v i : i ≥ 0} = ∅ for every distinct u, v ∈ W has positive probability. Proof. We will be brief since the statement is intuitively obvious from Wilson's algorithm and the details are somewhat tedious. The 'if' implication follows trivially from Wilson's algorithm. To see the reverse implication, suppose that W is well-separated and consider the paths almost surely on the event that the vertices of W are all in different components of F. Let i ≥ 1 and consider the collection of simple random walks Y v,i started at Γ v i and conditionally independent of each other and of F given (Γ v i ) v∈W , and let Ỹ v,i be the random path formed by concatenating (Γ v j ) i j=1 with Y v,i . It follows from (4.11) and Markov's inequality that where we recall that F (W ) is the event that all the vertices of W are in different components of F. In particular, it follows that the probability appearing on the left hand side of (4.12) is positive for some i 0 ≥ 0. The result now follows since the walks {X v : v ∈ W } have a positive probability of following the paths Γ v for their first i 0 steps, and on this event their conditional distribution coincides with that of {Ỹ v,i 0 : v ∈ W }.
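Both the proof above and the arguments in the remainder of this section repeatedly generate F by Wilson's algorithm. For readers who wish to experiment numerically, the following minimal sketch (our own code and names, for a finite connected graph only) implements chronological loop-erasure and Wilson's algorithm as recalled in the Wilson's algorithm subsection above; in the rooted-at-infinity version used in the proofs, the walks on a transient graph are loop-erased and attached without a fixed root.

```python
import random

def loop_erase(path):
    """Chronological loop-erasure: whenever the path revisits a vertex, the loop
    created since the first visit to that vertex is erased, as in the definition
    of LE(gamma)."""
    erased, last_index = [], {}
    for v in path:
        if v in last_index:
            erased = erased[: last_index[v] + 1]
            last_index = {u: i for i, u in enumerate(erased)}
        else:
            erased.append(v)
            last_index[v] = len(erased) - 1
    return erased

def wilson_ust(adj, root):
    """Sample a uniform spanning tree of a finite connected graph, given as an
    adjacency dictionary, by Wilson's algorithm: loop-erased random walks are
    attached to the growing tree one at a time."""
    in_tree, parent = {root}, {}
    for start in adj:
        if start in in_tree:
            continue
        walk = [start]
        while walk[-1] not in in_tree:           # walk until the current tree is hit
            walk.append(random.choice(list(adj[walk[-1]])))
        branch = loop_erase(walk)                # attach the loop-erasure of the walk
        for u, v in zip(branch, branch[1:]):
            parent[u] = v
            in_tree.add(u)
    return parent                                # each non-root vertex points toward the root

if __name__ == "__main__":
    adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2]}
    print(wilson_ust(adj, root=0))
```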
The goal of this subsection is to prove criteria for robust faithful presence to occur with positive probability. We begin with the case that d/(d − 4) is not an integer (i.e., d ∉ {5, 6, 8}), which is technically simpler. The corresponding proposition for d = 5, 6, 8 is given in Proposition 4.15. Proposition 4.11. Let G be a d-dimensional transitive graph such that d/(d − 4) is not an integer, and let F be the uniform spanning forest of G. Let H be a finite hypergraph with boundary with at least one edge, and suppose that H has a coarsening all of whose subhypergraphs are d-buoyant. Then for every r ≥ R G (H) and every well-separated collection of points (x v ) v∈∂V in V, there is a positive probability that the vertices x u are all in different components of F and that H is robustly faithfully present at x in C hyp r (F). The proof of Proposition 4.11 will employ the notion of constellations. The reason we work with constellations is that a constellation of witnesses for the presence of H (defined below) necessarily contains a witness for every refinement of H. This allows us to pass to a coarsening and work in the setting that every subhypergraph of H is d-buoyant. We call a set of vertices y = (y (B,b) ) of G indexed by P • (A) an A-constellation. Given an A-constellation y, we define A r (y) to be the event that y (B,b) and y (B ,b ) are connected in F if and only if b = b , and in this case they are connected by a path in F with diameter at most r. We say that an A-constellation y in G is r-good if it satisfies the following conditions. Let H = (∂V, V • , E) be a finite hypergraph with boundary with at least one edge, and let r = r(max e deg(e)) be as in Lemma 4.12. We write P • (e) = P • ({v ∈ V : v ⊥ e}) for each e ∈ E. For each ξ = (ξ e ) e∈E ∈ V E and each e ∈ E, we let (ξ (e,B,v) ) (B,v)∈P•(e) be an r-good e-constellation contained in the ball of radius r about ξ e , whose existence is guaranteed by Lemma 4.12. For each x = (x v ) v∈∂V and ξ = (ξ e ) e∈E , we define W̃ (x, ξ) to be the event that the following conditions hold: For each n ≥ 0, let Ω x (n) be the set Ω x (n) = (ξ e ) e∈E ∈ Λ x (n, n + 1) E : ξ e ξ e ≥ 2 n−C 1 for all distinct e, e ∈ E , where C 1 = C 1 (E) is chosen so that log 2 |Ω x (n)| ≈ nd|E| for all n sufficiently large and all x. It is easy to see that such a constant exists using the d-dimensionality of G. For each n ≥ 0 we define S̃ x (n) to be the random variable S̃ x (n) := ξ∈Ωx(n) 1(W̃ (x, ξ)), so that every refinement H of H is R G (H )-faithfully present at x on the event that S̃ x (n) is positive for some n ≥ 0, and every refinement H of H is R G (H )-robustly faithfully present at x on the event that S̃ x (n) is positive for infinitely many n ≥ 0. The following lemma lower bounds the first moment of S̃ x (n). Lemma 4.13. Let G be a d-dimensional transitive graph with d > 4. Let H be a finite hypergraph with boundary with at least one edge, let ε > 0, and suppose that x = (x v ) v∈∂V is such that x u x v ≤ 2 n−1 for all u, v ∈ ∂V and satisfies are a collection of independent simple random walks started at (x v ) v∈∂V . Then there exist constants c = c(G, H, ε) and n 0 = n 0 (G, H, ε) such that if n ≥ n 0 then for every ξ ∈ Ω x (n) and hence that The proofs of Lemma 4.12 and Lemma 4.13 are unfortunately rather technical, and are deferred to Section 4.3. For the rest of this section, we will take these lemmas as given, and use them to prove Proposition 4.11. The key remaining step is to upper bound the second moment of the random variable S̃ x (n). for all x = (x u ) u∈∂V ∈ (V) ∂V and all n such that x u x v ≤ 2 n−1 for all u, v ∈ ∂V . Proof.
Observe that if ξ, ζ ∈ Ω x (n) are such that the eventsW (x, ξ) andW (x, ζ) both occur, then the following hold: (1) For each v ∈ V , there is at most one v ∈ V such that ξ (e,A,v) and ζ (e ,A ,v ) are in the same component of F for some (and hence every) e, e ∈ E and (A, v) ∈ P • (e), (A , v ) ∈ P • (e ). (2) For each e ∈ E, there is at most one e such that ξ e ζ e ≤ 2 n−C 1 −1 . As a bookkeeping tool to account for the first of these degrees of freedom, we define Φ be the set of functions φ : (Here and elsewhere, we use as a dummy symbol so that we can encode partial bijections by functions.) For each φ ∈ Φ, and ξ, ζ ∈ V, define the eventW φ (ζ, ξ) to be the event that both the eventW (x, ξ) ∩W (x, ζ) occurs, and that for any two distinct vertices u, v ∈ V • the components of F containing {ξ (e,A,u) : e ∈ E, (A, u) ∈ P • (e)} and {ζ (e,A,v) : e ∈ E, (A, v) ∈ P • (e)} coincide if and only if v = φ(u). Thus, we have that and hence that It follows from Proposition 2.2 that We define R φ (ξ, ζ) to be the expression on the right hand side of (4.13), so that We now account for the second of the two degrees of freedom above. Let Ψ be the set of functions ψ : E → E ∪ { } such that the preimage ψ −1 (e) has at most one element for every e ∈ E. For each ψ ∈ Ψ and k = (k e ) e∈E ∈ {0, . . . , n} E , let 2 n−ke ≤ ζ e ξ ψ(e) ≤ 2 n−ke+2 for all e ∈ E such that ψ(e) = , and ζ e ξ e ≥ 2 n−C 1 −2 for all e, e ∈ E such that e = ψ(e) , where C 1 is the constant from the definition of Ω x (n), and observe that (4.14) For each ξ, ζ ∈ Ω x (n) and e ∈ E, there is at most one e ∈ E such that ζ e ξ e ≤ 2 n−C 1 −2 , and it follows that where the union is taken over ψ ∈ Ψ and k ∈ {0, . . . , n} E . Now, for any ξ, ζ ∈ Ω ψ,k and u ∈ V • with φ(u) = , we have that Meanwhile, we have that Thus, using the volume estimate (4.14), we have that Observe that for every ψ ∈ Ψ and e ∈ E, we have that Thus, summing over k, we see that for every ψ ∈ Ψ and φ ∈ Φ we have that Since d/(d − 4) is not an integer, the last term is zero, so that if we define Q : Φ × Ψ → R by (4.16) then we have that Thus, since |Φ × Ψ| does not depend on n, we have that and so it suffices to prove that Q(φ, ψ) ≤ 0 for every (φ, ψ) ∈ Φ × Ψ. To prove this, first observe that we can bound Let H be the subhypergraph of H with boundary vertices given by the boundary vertices of H, edges given by the set of edges of H that have |{u ⊥ e : φ(u) = }| > d/(d − 4), and interior vertices given by the set of interior vertices u of H for which φ(u) = and φ(u) ⊥ e for some e ∈ E . Then we can rewritẽ where the second inequality follows by the assumption that every subhypergraph of H is d-buoyant. This completes the proof. Proof of Proposition 4.11. Suppose that the finite hypergraph with boundary H has a doptimal coarsening all of whose subhypergraphs are d-buoyant. Then the lower bound on the square of the first moment ofS H x (n) provided by Lemma 4.13 and the upper bound on the second moment ofS H x (n) provided by Lemma 4.14 coincide, so that the Cauchy-Schwarz inequality implies that for every n such that x u x v ≤ 2 n−1 for every u, v ∈ ∂V . It follows from Fatou's lemma that P S H x (n) > 0 for infinitely many n ≥ lim sup so that H is robustly faithfully present at x with positive probability as claimed. The cases d = 5, 6, 8. We now treat the cases in which d/(d − 4) is an integer. This requires somewhat more care owing to the possible presence of the logarithmic term in (4.15). 
Indeed, we will only treat certain special 'building block' hypergraphs directly via the second moment method. We will later build other hypergraphs out of these special hypergraphs in order to to prove the main theorems. Let H = (∂V, V • , E) be a finite hypergraph with boundary. We say that a subhypergraph H = (∂V , V • , E ) of H is bordered if ∂V = ∂V and every vertex v ∈ V \ V is incident to at most one edge in E . For example, every full subhypergraph containing every boundary vertex is bordered. We say that a subhypergraph of H is proper if it is not equal to H and non-trivial if it has at least one edge. We say that H is d-basic if it does not have any edges of degree less than or equal to d/(d − 4) and does not contain any proper, non-trivial bordered subhypergraphs H with η d (H ) = 0. (1) H is a refinement of a hypergraph with boundary that has exactly one edge, the unique edge contains exactly d/(d−4) boundary vertices, and every interior vertex is incident to the unique edge. or (2) H has a d-basic coarsening with more than one edge, all of whose subhypergraphs are d-buoyant. Then for every r ≥ R G (H) and every well-separated collection of points (x v ) v∈∂V in V there is a positive probability that the vertices x u are all in different components of F and that H is robustly faithfully present at x. The proof of Proposition 4.15 will apply the following lemma, which is the analogue of Lemma 4.14 in this context. (2) If H is d-basic, then there exists a constant c = c(G, H) such that for all x = (x u ) u∈∂V ∈ (V) ∂V and all n such that x u x v ≤ 2 n−1 for all u, v ∈ ∂V . Proof. Note that in both cases we have that every subhypergraph of H is d-buoyant. We use the notation of the proof of Proposition 4.11. As in equation (4.15) of that proof, we have that where Q(φ, ψ) is defined as in (4.16). Moreover, the same argument used in that proof shows that Q(φ, ψ) ≤ 0 for every (φ, ψ) ∈ Φ × Ψ. In case (1) of the lemma, in which H has a single edge, we immediately obtain the desired bound since η d (H) = 0 and the coefficient of the log 2 n term is either 0 or 1. Now suppose that H is d-basic. Let L(φ, ψ) be the coefficient of log 2 n in (4.18). Note that H cannot have an edge whose intersection with ∂V has (d − 4)/d elements or more, since otherwise the subhypergraph H of H with that single edge and with no internal vertices is proper, bordered, and has η d (H ) ≥ 0. Thus, we have that if φ 0 is defined by φ 0 (v) = for every v ∈ V • then Let Isom ⊆ Φ × Ψ be the set of all (φ, ψ) such that φ(u) ⊥ ψ(e) for every e ∈ E and v ⊥ e. Since H is d-basic we have that if (φ, ψ) ∈ Isom then We claim that Q(φ, ψ) ≤ −(d − 4) unless either φ = φ 0 or (φ, ψ) ∈ Isom. Once proven this will conclude the proof, since we will then have that for every (φ, ψ) ∈ Φ × Ψ, from which we can conclude by summing over Φ × Ψ as done previously. We first prove that We claim that if φ is such that η d (H ) = 0 then H is bordered, and consequently is either equal to H or does not have any edges by our assumptions on H. To see this, suppose for contradiction that H is not bordered, so that there exists a vertex v ∈ V • \ V • that is incident to more than one edge of H . Let H be the subhypergraph of It remains to show that if φ(v) = for every v ∈ V then Q(φ 1 , ψ) ≤ −(d − 4) unless (φ, ψ) ∈ Isom. Since every edge of H has degree strictly larger than d/(d − 4), we have that for every e ∈ E and every (φ, ψ) ∈ Φ × Ψ such that |{u ⊥ e : φ(u) ⊥ ψ(e)}| < deg(e). 
It follows easily from this and the definition of Q(φ, ψ) that if φ has φ(v) = for every v ∈ V , then Since η d (H) ≤ 0 by assumption, it follows that Q(φ, ψ) ≤ −(d − 4) unless (φ, ψ) ∈ Isom. This concludes the proof. Lemma 4.14 (together with Lemma 4.13) is already sufficient to yield case (2) of Proposition 4.15. To handle case (1), we will require the following additional estimate. Proof. Let Φ andW φ (ξ, ζ) be defined as in the proof of Lemma 4.14. For every ξ ∈ Ω x (n) and ζ ∈ Ω x (n + m), we have that all distances relevant to our calculations are on the order of either 2 n or 2 n+m . That is, log 2 ξ e ξ e , log 2 ξ e x v ≈ n and log 2 ξ e ζ e , log 2 ζ e ζ e , log 2 ζ e x v ≈ n + m for all e, e ∈ E and v ∈ ∂V . Thus, using (4.13), can estimate which is maximized when φ(v) = for all v ∈ V • . Now, since we deduce that as claimed. for every n such that x u x v ≤ 2 n−1 for every u, v ∈ ∂V , from which it follows by Cauchy-Schwarz that for every n such that x u x v ≤ 2 n−1 for every u, v ∈ ∂V . The proof can now be concluded as in the proof of Proposition 4.11. Proof of Lemmas 4.12 and 4.13 In this section we prove Lemma 4.12 and Lemma 4.13. We begin with some background on random walk estimates. Given a graph G and a vertex u of G, we write P u for the law of the random walk on G started at u. Let G be a graph, and let p n (x, y) be the probability that a random walk on G started at x is at y at time n. Given positive constants c and c , we say that G satisfies (c, c )-Gaussian heat kernel estimates if c |B(x, for every n ≥ 0 and every pair of vertices x, y in G with d(x, y) ≤ n. We say that G satisfies Gaussian heat kernel estimates if it satisfies (c, c )-Gaussian Heat Kernel Estimates for some positive constants c and c . Hebisch and Saloff-Coste proved their result only for Cayley graphs, but the general case can be proven by similar methods 2 , see e.g. [30,Corollary 14.5 and Theorem 14.19]. Now, recall that two graphs G = (V, E) and G = (V , E ) are said to be (α, β)-rough isometric if there exists a function φ : V → V such that the following conditions hold. (1) φ roughly preserves distances: The estimate holds for all x, y ∈ V . (2) φ is roughly surjective: For every x ∈ V , there exists y ∈ V such that d (x, φ(y)) ≤ β. The following stability theorem for Gaussian heat kernel estimates follows from the work of Delmotte [5]; see also [15,Theorem 3.3.5]. Theorem 4.19. Let G and G be (α, β)-roughly isometric graphs for some positive α, β, and suppose that the degrees of G and G are bounded by M < ∞ and that G satisfies (c, c )-Gaussian heat kernel estimates for some positive c, c . Then there existc =c(α, β, M, c, c ) andc =c (α, β, M, c, c ) such that G satisfies (c,c )-Gaussian heat kernel estimates. Recall that a function h : V → R defined on the vertex set of a graph is said to be for every vertex v ∈ A, where the sum is taken with appropriate multiplicities if there are multiple edges between u and v. The graph G is said to satisfy an elliptic Harnack inequality if for every α > 1, there exist a constant c(α) ≥ 1 such that for every two vertices u and v of G and every positive function h that is harmonic on the set in which case we say that G satisfies an elliptic Harnack inequality with constants c(α). The following theorem also follows from the work of Delmotte [5], and was implicit in the earlier work of e.g. Fabes and Stroock [6]; see also [15,Theorem 3.3.5]. 
Note that these references all concern the parabolic Harnack inequality, which is stronger than the elliptic Harnack inequality. We remark that the elliptic Harnack inequality has recently been shown to be stable under rough isometries in the breakthrough work of Barlow and Murugan [1]. Recall that a graph is said to be d-Ahlfors regular if there exists a positive constant c such that c −1 r d ≤ |B(x, r)| ≤ cr d for every r ≥ 1 and every x ∈ V (in which case we say G is d-Ahlfors regular with constant c). Ahlfors regularity is clearly preserved by rough isometry, in the sense that if G and G are (α, β)-rough isometric graphs for some positive α, β, and G is d-Ahlfors regular with constant c, then there exists a constant c = c (α, β, c) such that G is d-Ahlfors regular with constant c . Observe that if the graph G is d-Ahlfors regular for some d > 2 and satisfies a Gaussian heat kernel estimate, then summing the estimate (4.19) yields that for every vertex v, and that for all vertices u and v of G. We now turn to the proofs of Lemma 4.12 and Lemma 4.13. The key to both proofs is the following lemma. Lemma 4.21. Let G be a d-Ahlfors regular graph with constant c 0 for some d > 4, let F be the uniform spanning forest of G, and suppose that G satisfies (c −1 0 , c 0 )-Gaussian heat kernel estimates. Let K 1 , . . . , K N be a collection of finite, disjoint sets of vertices, and let K} be a collection of independent simple random walks started from the vertices of K. If then there exist constants c = c(G, H, ε, |K|, c 0 ) and C = C(G, H, ε, |K|, c 0 ) such that On the other hand, it follows easily from the Greens function estimate (2.4) that if r is sufficiently large (depending on |A| and ε) then and we deduce that for such r . Applying Lemma 4.21, we deduce that P(A Cr (ξ)) ≥ c for some C = C(G, |A|, ε, r ) and c = c(G, |A|, ε). It follows that (ξ (B,b) ) (B,b)∈P•(A) is an r-good A constellation for some r = r(|A|) sufficiently large. Proof of Lemma 4.13 given Lemma 4.21. Let G be a d-dimensional transitive graph for some d > 4. Let x = (x v ) v∈∂V be such that x u x v ≤ 2 n−1 for every u, v ∈ ∂V , let ξ = (ξ e ) e∈E ∈ Ω x (n), and let r = r(H) and (ξ (e,A,v) ) e∈E,(A,v)∈P•(e) be as in Section 4.2. For each edge e of H, write A e (ξ) for the event A r ((ξ (e,A,v) ) (A,v)∈P•(e) ), which has probability at least 1/r by definition of the r-good constellation (ξ (e,A,v) ) (A,v)∈P•(e) . Since the number of subtrees of a ball of radius r in G is bounded by a constant, it follows that there exists a constant ε = ε(G, H) and a collection of disjoint subtrees (T (e,v) (ξ)) (e,v)∈E• of G such that the tree T (e,v) (ξ) has diameter at most r and contains each of the vertices ξ (e,A,v) with (A, v) ∈ P • (e) for every (e, v) ∈ E • , and the estimate holds for every e ∈ E. Fix one such collection (T (e,v) (ξ)) (e,v)∈E• for every ξ ∈ Ω x (n), and for each e ∈ E let B e (ξ) be the event that T (e,v) (ξ) is contained in F for every v ∈ E. Let B(ξ) = e∈E B e (ξ). Considering generating F using Wilson's algorithm, starting with random walks {X (e,A,v) : e ∈ E, (A, v) ∈ P • (e)} such that X (e,A,v) 0 = ξ (e,A,v) for every e ∈ E and (A, v) ∈ P • (e), we observe that A,v) and X (e ,A ,v ) intersect for some distinct e, e ∈ E and some (A, v) ∈ P • (e), (A , v ) ∈ P • (e ) (4.23) and hence that for all n sufficiently large and ξ ∈ Ω x (n). Let G ξ be the graph obtained by contracting the tree T (e,v) (ξ) down to a single vertex for each (e, v) ∈ E • . The spatial Markov property of the USF (see e.g. 
[14, Section 2.2.1]) implies that the law of F given the event B(ξ) is equal to the law of the union of (e,v)∈E• T (e,v) (ξ) with the uniform spanning forest of G ξ . Observe that G ξ and G are rough isometric, with constants depending only on G and H, and that G ξ has degrees bounded by a constant depending only on G and H. Thus, it follows from Theorems 4.18-4.20 that G ξ is d-Ahlfors regular, satisfies Gaussian heat kernel estimates, and satisfies an elliptic Harnack inequality, each with constants depending only on H and G. We now start working towards the proof of Lemma 4.21. We begin with the following simple estimate. for every c ≥ C, every vertex x, every n ≥ 1, and every u, w ∈ Λ x (n + c, n + 2c). Proof. The upper bound follows immediately from (4.20). We now prove the lower bound. For every c ≥ 1 and every u, w ∈ Λ x (n + c, ∞), we have that Thus, we have that P u (hit w and Λ x (0, n)) ≤ P u (hit Λ x (0, n) after hitting w) + P u (hit w after hitting Λ x (0, n)) where the second term is bounded by conditioning on the location at which the walk hits Λ x (0, n) and then using the strong Markov property. By the triangle inequality, we must have that at least one of ux or wx is greater than 1 2 uw . This yields the bound On the other hand, if u, w ∈ Λ x (n + c, n + 2c) then conditioning on the location at which the walk hits Λ x (n + 3c, ∞) yields that The claim now follows easily. Proof of Lemma 4.13. For each 1 ≤ i ≤ N , let x i be chosen arbitrarily from the set K i . Let (X x ) x∈K be a collection of independent random walks on G, where X x is started at x for each x ∈ K, and write X i = X x i . Let K i = K i \ {x i } for each 1 ≤ i ≤ N and let K = N i=1 K i . In this proof, implicit constants will be functions of |K|, N, c 0 , and d. We take n such that 2 n−1 ≤ diam(K) ≤ 2 n . Let c 1 , c 2 , c 3 be constants to be determined. For each y = (y x ) x∈K ∈ (Λ(n + c 1 , n + c 3 )) K , let Y y be the event Y y = {X x 2 2(n+c 2 ) = y x for each x ∈ K}. Let C (c 2 ) be the event that none of the walks X x intersect each other before time 2 2(n+c 2 ) , so that P(C (c 2 )) ≥ ε for every c 2 ≥ 0 by assumption. For each x ∈ K, let D x (c 1 , c 3 ) be the event that X x 2 2(n+c 2 ) is in Λ(n + c 1 , n + c 3 ) and that X x m ∈ Λ(n, ∞) for all m ≥ 2 2(n+c 2 ) , and let D(c 1 , c 3 ) = D x (c 1 , c 3 ). It follows by an easy application of the Gaussian heat kernel estimates that we can choose c 2 = c 2 (G, N, ε) and c 3 = c 3 (G, N, ε) sufficiently large that for every y = (y x ) x∈K ∈ (Λ(n+c 1 , n+c 3 )) K , and in particular so that P(C (c 2 )∩D(c 1 , c 3 )) ≥ ε. We fix some such sufficiently large c 1 , c 2 , and c 3 , and also assume that c 1 is larger than the constant from Lemma 4.22. We write C = C (c 2 ), D x = D x (c 1 , c 3 ), and D = D(c 1 , c 3 ). For each 1 ≤ i ≤ N and x ∈ K i , we define I x to be the event that the walk X x hits the set L i good = LE(X i ) m : LE(X i ) m ∈ Λ(n + 2c 3 , n + 4c 3 ), LE(X i ) m ∈ Λ(0, n + 6c 3 ) for all 0 ≤ m ≤ m before hitting Λ(n + 6c 3 , ∞), and let I = x∈K I x . 
For each x and x in K, we define E x,x to be the event that the walks X x and X x intersect, and let These events have been defined so that, if we sample F using Wilson's algorithm, beginning with the walks {X v : v ∈ V } (in any order) and then the walks {X x : x ∈ K} (in any order), we have that if and only if i = j, and each two points in K i are connected by a path in F of diameter at most 2 6c Thus, it suffices to prove that We break this estimate up into the following two lemmas: one lower bounding the probability of the good event C ∩ D ∩ I , and the other upper bounding the probability of the bad event C ∩ D ∩ I ∩ E . The proof uses techniques from [19] and the proof of [2,Theorem 4.2]. Proof of Lemma 4.23. Fix x ∈ K , and let 1 ≤ i ≤ N be such that x ∈ K i . Write Y = X i and Z = X x . Let L = (L(k)) k≥0 be the loop-erasure of (Y k ) k≥0 and, for each m ≥ 0, let L m = (L m (k)) qm k=0 be the loop-erasure of (Y k ) m k=0 . Define τ (m) = inf{0 ≤ r ≤ q m : L m (r) = Y k for some k ≥ m} and τ (m, ) = inf{0 ≤ r ≤ q m : L m (r) = Z k for some k ≥ }. The definition of τ (m) ensures that L m (k) = L(k) for all k ≤ τ (m). We define the indicator random variables Observe that I x ⊆ J m, = 1 for some m, ≥ 2 2(n+c 2 ) . Moreover, for every m, ≥ 2 2(n+c 2 ) and every y ∈ (Λ(n + c 1 , n + c 3 )) K , the walks Y k k≥m and Z k k≥ have the same distribution conditional on the event Thus, we deduce that whenever the event being conditioned on has positive probability, and therefore that · P yx ( hit w before Λ(n + 6c 3 , ∞) | do not hit Λ(0, n)) On the other hand, we have that Meanwhile, decomposing E[I 2 | Y y ] according to the location of the intersections and applying the Gaussian heat kernel estimates yields that where the two different terms come from whether Y and Z hit the points of intersection in the same order or not. With the possible exception of wz , all the distances involved in this expression are comparable to 2 n . Thus, we obtain that For each w ∈ V, considering the contributions of dyadic shells centred at w yields that, since d > 4, and we deduce that Thus, the Cauchy-Schwarz inequality implies that as claimed. We next use the elliptic Harnack inequality to pass from an estimate on I x to an estimate on I . Let X be the σ-algebra generated by the random walks (X i ) N i=1 . Observe that for each x ∈ K we have good before Λ(0, n + 6c 3 ), never leave Λ(n, ∞) P yx never leave Λ(n, ∞) P yx hit L i good before Λ(0, n + 6c 3 ), never leave Λ(n, ∞) . The right hand side of the second line is a positive harmonic function of y x on Λ(n + c 1 , n + c 3 +1), and so the elliptic Harnack inequality implies that for every y, y ∈ (Λ(n+c 1 , n+c 3 )) K and every x ∈ K , we have that . Furthermore, if y is obtained from y by swapping y x and y x for some 1 ≤ i ≤ N and x, x ∈ K i , then clearly Therefore, it follows that Since the events I x are conditionally independent given the σ-algebra X and the event C ∩ D ∩ Y y , we deduce that Now, the random variables P(I x i | X , C ∩ D ∩ Y y ) |K i | are independent conditional on the event C ∩ D ∩ Y y , and so we have that as claimed, where the second line follows from Jensen's inequality. Finally, it remains to show that the probability of getting unwanted intersections in addition to those that we do want is of lower order than the probability of just getting the intersections that we want. Lemma 4.25. We have that Proof. For each w ∈ V and x, x ∈ K, let E x,x (w) be the event that X x and X x both hit w. 
Let ζ = (ζ x ) x∈K and let σ = (σ i ) N i=1 be such that σ v is a bijection from {1, . . . , |K i |} to K i for each 1 ≤ i ≤ N . We define R σ (ζ) to be the event that for each 1 ≤ i ≤ N the walk X i passes through the points {ζ x : x ∈ K i } in the order given by σ and that for each x ∈ K the walk X x hits the point ζ x . We also define so that P(R σ (ζ)) R σ (ζ) for every ζ ∈ V K . Let Λ ζ = Λ(n + c 1 , n + c 1 + c 2 ) K , Λ w,1 = Λ(n, n + c 2 + 1), Λ w,2 = Λ(n + c 2 + 1, ∞), and Λ w = Λ w,1 ∪ Λ w,2 . (Note that these sets are not functions of ζ or w, but rather are the sets from which ζ and w will be drawn.) We also define To be the set of pairs of points at least one of which must have their associated pair of random walks intersect in order for the event E to occur. Define the random variables M σ,0 , M σ,1 , and M σ,2 to be Observe that σ (M σ,1 + M σ,2 ) ≥ 1 on the event C ∩ B ∩ I ∩ E , and so to prove Lemma 4.25 it suffices to prove that log 2 E M σ,1 + M σ,2 − (d − 4)|K | + 2 n + 2 log 2 n (4.26) for every σ. We will require the following estimate. Lemma 4.26. The estimate holds for every (x, x ) ∈ O, every ζ ∈ Λ ζ , every w ∈ Λ w , and every collection σ Proof. Unfortunately, this proof requires a straightforward but tedious case analysis. We will give details for the simplest case, in which both x, x ∈ K . A similar proof applies in the cases that one or both of x or x is not in K , but there are a larger amount of subcases to consider according to when the intersection takes place. In the case that x, x ∈ K , let E −,− (ζ, w), E −,+ (ζ, w), E +,− (ζ, w) and E +,+ (ζ, w) be the events defined as follows: The event R σ (ζ) occurs, and X x and X x both hit w before they hit ζ x and ζ x respectively. The event R σ (ζ) occurs, X x hits w before hitting ζ x , and X x hits w after hitting ζ x . The event R σ (ζ) occurs, X x hits w after hitting ζ x , and X x hits w before hitting ζ x . E +,+ (ζ, w): The event R σ (ζ) occurs, and X x and X x both hit w after they hit ζ x and ζ x respectively. We have the estimates and . In all cases, a bound of the desired form follows since wx ζ x x and wx ζ x x for every x, x ∈ K , ζ ∈ Λ ζ , and w ∈ Λ w , and we conclude by summing these four bounds. Our aim now is to prove eq. (4.26) by an appeal to Lemma 4.3. To do this, we will encode the combinatorics of the potential ways that the walks can intersect via hypergraphs. To this end, let H σ be the finite hypergraph with boundary that has vertex set See Figure 8 for an illustration. Note that the isomorphism class of H σ does not depend on σ. (n, n + c 1 + c 2 ). We claim that η d,2 (H σ ) ≥ η d,2 (H σ ) + 2 (4.28) for any coarsening H σ of H σ , so that and hence that log 2 E[M σ,0 ] −(d − 4)|K | n + |K | 2 log 2 (n) (4.29) by Lemma 4.3. Indeed, suppose that H σ / is a proper coarsening of H σ corresponding to some equivalence relation on E(H σ ), and that the edge corresponding to x = σ i (j) ∈ K is maximal in its equivalence class in the sense that there does not exist σ i (j ) in the equivalence class of σ i (j) with j > j. Clearly such a maximal x must exist in every equivalence class. Moreover, for such a maximal x = σ i (j) there can be at most one edge of H σ that it shares a vertex with and is also in its class, namely the edge corresponding to σ i (j − 1). Thus, if x is maximal and its equivalence class is not a singleton, let H σ / be the coarsening corresponding to the equivalence relation obtained from by removing x from its equivalence class. 
Then we have that ∆(H σ / ) ≤ ∆(H σ / ) + 1 and that |E(H σ / )| = |E(H σ / )| + 1, so that 30) and the claim follows by inducting on the number of edges in non-singleton equivalence classes. To obtain a bound on the expectation of M σ,2 , considering the contribution of each shell Λ(m, m + 1) yields the estimate for every ζ ∈ Λ ζ , and it follows from Lemma 4.26 and (4.29) that (4.31) It remains to bound the expectation of M σ,1 . For each two distinct x, x ∈ K , let H σ (x, x ) be the hypergraph with boundary obtained from H σ by adding a single vertex, , and adding this vertex to the two edges corresponding to x and x respectively. These hypergraphs are defined in such a way that, by Lemma 4.26, for every two distinct x, x ∈ K. First observe that coarsenings of H σ and of H σ (x, x ) both correspond to equivalence relations on K. Let be an equivalence relation on K, and let H σ (x, x ) and H σ be the corresponding coarsenings. Clearly |E(H σ (x, x ))| = |E(H σ )| and |V • (H σ (x, x ))| = |V • (H σ )| + 1. If x and x are related under , then we have that ∆(H σ (x, x )) = ∆(H σ ) + 1, while if x and x are not related under , then we have that ∆(H σ (x, x )) = ∆(H σ ) + 2. We deduce that If x x then H σ must be a proper coarsening of H σ , and we deduce from (4.28) that the inequality η d,2 (H σ (x, x )) ≥ η d,2 (H σ ) + 2 holds for every coarsening H σ (x, x ) of H σ (x, x ), yielding the claimed inequality (4.32). Using (4.32), we deduce from Lemma 4.3 that (4.33) Combining (4.31) and (4.33) yields the claimed estimate (4.26), completing the proof. Completion of the proof of Lemma 4.21. Since the upper bound given by Lemma 4.25 is of lower order than the lower bound given by Lemma 4.24, it follows that there exists n 0 = n 0 (|K|, N, d, c 1 , c 2 ) such that if n ≥ n 0 , and hence that for sufficiently large n as claimed. We now complete the proof of Theorem 1.5. We begin with the simpler case in which d/ (d−4) is not an integer. Proof of Theorem 1.5 for d / ∈ {5, 6, 8}. We begin by analyzing faithful ubiquity. Let G be a d-dimensional transitive graph, and let H be a finite hypergraph with boundary. If H has a subhypergraph none of whose coarsenings are d-buoyant, then Proposition 4.1 implies that H is not faithfully ubiquitous in C hyp r (F) almost surely for any r ≥ 1. Otherwise, by Lemma 2.4, H has a coarsening all of whose subhypergraphs are d-buoyant. 4) is not an integer, then it follows from Proposition 4.11 that there exist vertices (x v ) v∈∂V in G such that with positive probability, the vertices x v are in different components of F and H is R G (H)-robustly faithfully present at (x v ) v∈V . The set − 4), every set of n trees of F are contained in an edge of C hyp r (F) for every r ≥ R G (n) almost surely. Let H be a finite hypergraph with boundary all of whose subhypergraphs are d-buoyant. Suppose that |E(H)| ≥ 2 and that the claim has been established for all hypergraphs with fewer edges than H. If H is d-basic then we are already done, so assume not. Then at least one of the following must occur: (1) H has an edge of degree less than or equal to d/(d − 4). Thus, we deduce from the induction hypotheses that every refinement H of either H 1 or H 2 is faithfully ubiquitous in C hyp r (F) almost surely for every r ≥ R G (H) ≥ max{R G (H 1 ), R G (H 2 )}. It is easily verified that this implies that every refinement H of H is faithfully ubiquitous in C hyp r (F) for every r ≥ R G (H ) almost surely. Proof of Theorem 1.2. We begin by proving the claim about faithful ubiquity. 
Applying Theorem 1.4 and Lemma 2.4, and since every subgraph of a tree is a forest, it suffices to prove that if T is a finite forest with boundary then η d (T ) ≥ η d (T ) whenever d ≥ 4 and T is a coarsening of T , so that, in particular, Indeed, suppose that T = T / is a proper coarsening of a finite forest with boundary T . Since T is a finite forest, the subgraph of T spanned by each equivalence class of is also a finite forest, and therefore must contain a leaf. Choose a non-singleton equivalence class of and an edge e of this equivalence relation that is incident to a leaf of the spanned forest. Thus, e has the property that one of the endpoints of e is not incident to any other edge in e's equivalence class. Let be the equivalence relation obtained from by removing e from its equivalence class and placing it in a singleton class by itself. Then we have that |E(T / )| = |E(T / )| + 1 and ∆(T / ) ≤ ∆(T / ) + 1 so that Thus, it follows by induction on the number of edges of T in non-singleton equivalence classes that η d (T / ) ≥ η d (T ) for every coarsening T / of T as claimed. This establishes the claim about faithful ubiquity. We now turn to ubiquity. Let G be a d-dimensional transitive graph for some d > 8, let r ≥ 1, and let F be the uniform spanning forest of G. Let T be a finite tree with boundary that is not faithfully ubiquitous in C r (F), and let T be a subgraph of T such that We easily deduce that η d (S) ≥ η d (T ) > 0, and consequently that S is not faithfully ubiquitous in C r (F) almost surely. On the other hand, since S is a subgraph of H, we have that if H is faithfully ubiquitous in C r (F) almost surely then S is also. Since the quotient H was arbitrary, it follows from Theorem 1.4 that T is ubiquitous in C r (F) if and only if it is faithfully ubiquitous in C r (F) almost surely, completing the proof. for each d ≥ 9. We will use the family of trees pictured in Figure 2. Write d = 4 + 5k + where 0 ≤ < 5 and let T d be the tree that has one vertex of degree five connected to paths of length k + 1 and 5 − paths of length k. T d has five leaves, which we declare to be in its boundary, and declare all the other vertices to be in its interior. Clearly any subgraph T d of T d maximizing |V • (T d )|/|E(T d )| must be induced by a union of geodesics joining the boundary vertices, and it is easily verified that, amongst these subgraphs, it is the full graph The proof of Theorem 1.5 also yields the following result. If G is a d-dimensional transitive graph, F is the uniform spanning forest of G, H = (∂V, V • , E) is a finite hypergraph with boundary, and r ≥ 1, then the following hold almost surely: (1) If H is faithfully ubiquitous in C hyp r (F), then for every collection (x u ) u∈∂V of distinct vertices of C hyp r (F), there exists a collection (x i u ) u∈V• of distinct vertices of C hyp r (F) for each i ≥ 1 such that {x i u : u ∈ V • , u ⊥ e} ∪ {x u : u ∈ ∂V, u ⊥ e} is an edge of C hyp r (F) for every i ≥ 1 and every e ∈ E, {x i u : u ∈ V • } is disjoint from {x u : u ∈ ∂V } for every i ≥ 1, and {x i u : u ∈ V • } and {x j u : u ∈ V • } are disjoint whenever i > j ≥ 1. 
(2) If H is not faithfully ubiquitous in C hyp r (F), then for every collection (x u ) u∈∂V of distinct vertices of C hyp r (F) there exists a finite set of vertices A of C hyp r (F) such that {x u : u ∈ V • } intersects A whenever (x u ) u∈V• is a collection of distinct vertices of C hyp r (F) disjoint from (x u ) u∈∂V with the property that {x i u : u ∈ V • , u ⊥ e} ∪ {x u : u ∈ ∂V, u ⊥ e} is an edge of C hyp r (F) for every e ∈ E. Indeed, item (2) is an immediate consequence of Theorem 2.3. This has the following interesting consequence. For each d > 8, it follows from Theorem 1.2 that the star with (d − 4)/(d − 8) boundary leaves and one internal vertex is not faithfully ubiquitous in the component graph of the uniform spanning forest of Z d . Thus, we deduce from item (2), above, that if d > 8 then for every collection of (d − 4)/(d − 8) distinct vertices of the component graph, there is almost surely some finite M depending on the collection such that any clique containing the collection has size at most M . In particular, we conclude that the component graph of the uniform spanning forest of Z d does not contain an infinite clique whenever d > 8 a.s. In contrast, we note that the component graph of the uniform spanning forest of Z d does contain arbitrarily large cliques almost surely whenever d ≥ 5. (This follows as a special case of Theorem 1.4 as in Figure 5, but is also very easy to prove directly.) Further questions about the component graph of the USF. It is natural to wonder whether Theorem 1.4 determines the component graph up to isomorphism. It turns out that this is not the case. Indeed, observe that faithful ubiquity of a finite graph with boundary H can be expressed as a first order sentence in the language of graphs: for all (x v ) v∈∂V there exists (x v ) v∈V• such that x u ∼ x v for every u, v ∈ V such that u ∼ v. Ubiquity of H can be expressed similarly. However, even if we knew the almost-sure truth value of every first order sentence in the language of graphs, this still would not suffice to determine the graph up to isomorphism. Indeed, recall that a graph G = (V, E) is quasik-transitive if the action of its automorphism group on V k has only finitely many orbits. The model-theoretic Ryll-Nardzewski Theorem [11,Theorem 7.3.1] implies that a countably infinite graph is determined up to isomorphism by its first order theory if and only if it is oligomorphic, i.e., quasi-k-transitive for every k ≥ 1. By considering sizes of cliques as in Section 6.1, it follows from the discussion in that section that the component graph of the uniform spanning forest of Z d is a.s. not quasi-(d − 4)/(d − 8) -transitive when d > 8, and hence is a.s. not oligomorphic when d > 8. We conjecture that in fact the component graph has very little symmetry indeed. Conjecture 6.1. Let G be a d-dimensional transitive graph for some d > 8, and let r ≥ 1. Then C r (F) has no non-trivial automorphisms almost surely. Moreover, there does not exist a deterministic graph G such that C r (F) is isomorphic to G with positive probability. Although we do not believe the component graphs of the USF on different transitive graphs of the same dimension to be isomorphic, it seems nevertheless that most properties of the component graph should be determined by the dimension. 
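Faithful ubiquity, as noted above, is a first-order property of the component hypergraph; written out, the sentence looks as follows. This is only a sketch in LaTeX: the explicit distinctness clause is our addition, and the informal statement in the text records only the adjacency requirements.

```latex
% One way to write ``$H$ is faithfully ubiquitous'' as a first-order sentence
% in the language of graphs, for $H = (\partial V, V_\bullet, E)$.
% The distinctness clause is our addition; the informal statement in the text
% records only the adjacency requirements.
\forall (x_v)_{v \in \partial V} \;\; \exists (x_v)_{v \in V_\bullet} \;
  \Bigl( \bigwedge_{\substack{u, v \in V \\ u \neq v}} x_u \neq x_v \Bigr)
  \;\wedge\;
  \Bigl( \bigwedge_{\substack{u, v \in V \\ u \sim_H v}} x_u \sim x_v \Bigr)
```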
One way of formalizing such a statement would be to axiomatize entire the almost-sure first order theory of the component graph of the uniform spanning forest and show that this first order theory is the same for different transitive graphs of the same dimension. We expect that Theorem 1.4, or a slightly stronger variation of it, should play an important role in this axiomatization. See [25] for the development of such a theory in the mean-field setting of Erdős-Rényi graphs. In particular, we believe the following. Conjecture 6.2. Let G 1 and G 2 be d-dimensional transitive graphs, let r 1 , r 2 ≥ 1, and let F 1 and F 2 be the uniform spanning forests of G 1 and G 2 respectively. Then the component graphs C r 1 (F 1 ) and C r 2 (F 2 ) are elementarily equivalent almost surely. That is, they satisfy the same set of first order sentences in the language of graphs almost surely. Component graphs of other models and other graphs. It would be interesting to study ubiquitous subgraphs in component graphs derived from other models on Z d . The most tractable of these is likely to be the interlacement process [26,24,22], for which some related results have been proven by Lacoin and Tykesson [16]. Here the component graph is defined by considering two trajectories to be adjacent if and only if they intersect. The picture should be quite different to ours since the connection probabilities for more than two points are no longer given by a power of the spread. A much more straightforward extension of our results would be to consider uniform spanning forests generated by long-range random walks on Z d . Similarly, one could consider uniform spanning forests on non-transitive, possibly fractal, graphs that are Ahlfors-regular and satisfy sub-Gaussian heat kernel estimates of some order β ≥ 2 (see e.g. [15,Chapter 3]). The beginnings of this analysis are already present implicitly in Lemma 4.3.
27,564.6
2017-02-19T00:00:00.000
[ "Mathematics" ]
The Pancreatic Islet Regulome Browser The pancreatic islet is a highly specialized tissue embedded in the exocrine pancreas whose primary function is that of controlling glucose homeostasis. Thus, understanding the transcriptional control of islet-cell may help to puzzle out the pathogenesis of glucose metabolism disorders. Integrative computational analyses of transcriptomic and epigenomic data allows predicting genomic coordinates of putative regulatory elements across the genome and, decipher tissue-specific functions of the non-coding genome. We herein present the Islet Regulome Browser, a tool that allows fast access and exploration of pancreatic islet epigenomic and transcriptomic data produced by different labs worldwide. The Islet Regulome Browser is now accessible on the internet or may be installed locally. It allows uploading custom tracks as well as providing interactive access to a wealth of information including Genome-Wide Association Studies (GWAS) variants, different classes of regulatory elements, together with enhancer clusters, stretch-enhancers and transcription factor binding sites in pancreatic progenitors and adult human pancreatic islets. Integration and visualization of such data may allow a deeper understanding of the regulatory networks driving tissue-specific transcription and guide the identification of regulatory variants. We believe that such tool will facilitate the access to pancreatic islet public genomic datasets providing a major boost to functional genomics studies in glucose metabolism related traits including diabetes. INTRODUCTION During the last decade, the advent of high-throughput "-omics" technologies, has greatly promoted advances in the study of human diseases at the genomic, transcriptomic, and epigenomic levels. Sequence databases and software analysis tools are now crucial tools for molecular biologist to understand the molecular mechanisms underlying tissue-specific functions. Nevertheless, the systematic acquisition of large bioinformatic datasets has created a tremendous gap between available data and their biological interpretation. Frameworks to access processed and integrated genomic datasets may assist, computational and non-computational scientists, to bridge this gap and provide understanding and biological interpretations to the regulatory and transcriptional complexity of the genome. In this context genome browsers are key tools in the accomplishment of this task. The UCSC Genome Browser (Speir et al., 2016), ENSEMBL (Yates et al., 2016) and NCBI's Sequence Viewer (Wolfsberg, 2011), for example, provide to the research community a wealth of integrated information and represent nowadays essential instruments to assist the interpretation of genomic data. The pancreatic islets of Langerhans constitute an endocrine tissue embedded in the exocrine pancreas and represent the sole source of insulin in the human body. Pancreatic islets play a crucial role in maintaining normal glucose homeostasis, and islet-cell dysfunction and/or reduction in islet-cell mass are key elements in the development of diabetes mellitus. For these reasons, understanding the regulatory networks controlling the tissue-specific expression of pancreatic islets, is key to shed light on the molecular mechanisms underlying diabetes. 
Large consortia such as ENCODE (Dunham et al., 2012) and the Epigenome Roadmap (Bernstein et al., 2010) provided extensive epigenetics maps allowing annotation of the noncoding regions of the human genome for a large amount of cell lines and tissues including several relevant to diabetes such as adipose tissue and skeletal muscle, while other less accessible primary tissues such as the endocrine pancreas were not prioritized in these studies. For their central role in diabetes pathogenesis, different laboratories embarked in profiling the transcriptomic and epigenetic landscape of human pancreatic islet-cells (Bhandare et al., 2010;Gaulton et al., 2010;Stitzel et al., 2010;Parker et al., 2013;Dayeh et al., 2014;Pasquali et al., 2014) in an ongoing effort to shed light on the pancreatic islets tissuespecific gene regulation. Free access to such data represents an invaluable opportunity for the research community to dissect the molecular mechanisms of glucose metabolism diseases (Ashcroft and Rorsman, 2012). Nevertheless, these datasets are deposited in different repositories, often in bulky raw format files, thus of difficult immediate access especially to non-bioinformatic users. Here we present the Islet Regulome Browser, an intuitive web tool providing access to interactive exploration of a wealth of pancreatic islet genomic data allowing the visualization of different classes of regulatory elements and transcription factor binding sites obtained from experiments performed by different labs worldwide. The Islet Regulome Browser is addressed to molecular biologists, human geneticist and clinicians with or without bioinformatics skills. MATERIALS AND METHODS The overall structure of the Islet Regulome Browser is illustrated in Figure 1. The Islet Regulome Browser internal structure is composed of three main components: (a) the database, which is saved in binary format as RData objects and tabix indexed files, (b) the code for computing the graphic image, written in R (Rizzo 1 ), and (c) the interface and the framework for the web service, written in Python (http://www.python.org). The Islet Regulome Browser is compatible with all the most popular web browsers and operative systems. It can be explored via web at http://www.isletregulome. com or can be installed in a workstation or laptop through the Python package management system with the command pip install regulome_web. The source code is available under the MIT license at https://bitbucket.org/batterio/regulome_web. Web Interface and Plot Generation The code for running the Islet Regulome Browser is composed of two main blocks that interact with each other: a Python framework that creates the web interface and retrieves the user input, and the R code which generates the plot and the tables. On the server side the Islet Regulome Browser is managed by the Flask framework (http://flask.pocoo.org/). The web interface allows users to generate plots and tables by querying for a gene name or for a specific genomic region. In addition, users can customize their analyses by choosing which datasets to use. The interactivity of the web application is achieved by using Brython (http://brython.info/), a Python 3 implementation for clientside web programming. The options selected by the user are forwarded to the R script that generates both the plot and the result tables (Figure 1). The plot is generated by an R script (R version 3.3.1) that takes as input the user specified features, such as the genomic location and the datasets to use. 
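To make the hand-off between the Python web layer and the R plotting script more concrete, the following is a minimal hypothetical sketch, not the actual Islet Regulome Browser source: the route, parameter names, and default track string are assumptions, and only the general pattern (Flask receives the query, a cached image is reused when the same query is repeated, otherwise Rscript renders a PDF that ImageMagick converts to PNG) follows the description above.

```python
# Hypothetical sketch of the Flask -> R hand-off described above. The route,
# parameter names, and file names are illustrative assumptions.
import hashlib
import subprocess
from pathlib import Path

from flask import Flask, request, send_file

app = Flask(__name__)
CACHE_DIR = Path("cache")
CACHE_DIR.mkdir(exist_ok=True)

@app.route("/plot")
def plot():
    # User-specified features forwarded to the R plotting script.
    params = [
        request.args["build"],   # e.g. hg19
        request.args["chrom"],
        request.args["start"],
        request.args["end"],
        request.args.get("tracks", "chromatin_maps,snps"),
    ]
    # Identical queries reuse the cached image instead of re-rendering it.
    key = hashlib.md5("|".join(params).encode()).hexdigest()
    png = CACHE_DIR / f"{key}.png"
    if not png.exists():
        pdf = CACHE_DIR / f"{key}.pdf"
        # The R script draws the figure from the indexed database as a PDF...
        subprocess.run(["Rscript", "plot_IRB_main.R", *params, str(pdf)], check=True)
        # ...which ImageMagick then converts to a PNG for display in the page.
        subprocess.run(["convert", "-density", "150", str(pdf), str(png)], check=True)
    return send_file(str(png), mimetype="image/png")
```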
Several Bioconductor packages have been used to read the database and render the final plot: Rsamtools (Morgan et al., 2016), rtracklayer (Lawrence et al., 2009), and Sushi (Phanstiel et al., 2014). A plot may also be generated via command line, using the code as a stand-alone script. The plots are converted from PDF to PNG format by the ImageMagick converter tool (http://www.imagemagick.org/) and cached, along with the produced text tables. This allows to rapidly load a plot, instead of generating a new one, in case the same query is repeated. The cache is not used when the users upload their own data. Code Structure and Development We deposited the Islet Regulome Browser code in a publicly accessible Bitbucket repository (https://bitbucket.org/batterio/ regulome_web). Even though the web application can be explored at http://www.isletregulome.com, we created a Python package to easily install the Islet Regulome Browser on a personal computer. The recommended way to install the package is by using the Python package management system (pip install regulome_web). The main requirement for the web application is Python (version 3.5 or above), R (version 3.3.1 and above), and ImageMagik (http://www.imagemagick.org). Other Python related dependencies are listed in the "requirement.txt" file, however, by using the Python package management system all the libraries are automatically installed. Once installed, the Islet Regulome Browser can be executed with the command regulome_web. The program has two sub-commands: init and start. regulome_web init will create several folders following a structure required by the program, and a configuration file that needs to be modified by the user. The sub-command regulome_web start runs the Islet Regulome Browser web server, locally accessible at the url localhost:5000. The R code to render the plot contains two main scripts: (1) plot_IRB_main.R, which is the script that needs to be executed to call all other scripts and to draw each part of the plot. (2), plot_IRB_config.R contains all configuration variables, including the path to the database. The R script are integrated in the web application but they can also be used via command prompt as a stand-alone program. Database Central to the system is the database, which stores the genomic annotations, chromatin tracks, genome-wide association study (GWAS) variants and transcription factor binding sites that may be visualized by the browser. The publicly available data that can be currently visualized by the Islet Regulome Browser consists of transcription factor binding sites obtained from ChIP-seq experiments in adult human pancreatic islets (PDX1, FOXA2, NKX2.2, NKX6.1, and MAFB) (Pasquali et al., 2014) and pancreatic progenitors (PDX1, FOXA2, ONECUT1, HNF1B, and TEAD1) (Cebola et al., 2015); open chromatin classes and chromatin states in adult pancreatic islets (Parker et al., 2013;Pasquali et al., 2014), enhancer predictions in pancreatic progenitors (Cebola et al., 2015); enhancer clusters and stretched enhancers in adult pancreatic islets (Parker et al., 2013;Pasquali et al., 2014); open chromatin profiles of α-and β-cells FACS purified form adult human pancreatic islets (Ackermann et al., 2016); expression data obtained from RNA-seq experiments including coding (Morán et al., 2012) and non-coding RNA in adult pancreatic islets (Akerman et al., in press), and datasets for genome wide association studies for type 2 diabetes, DIAGRAM (Cho et al., 2012) and fasting glycemia, MAGIC (Scott et al., 2012). 
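Because the tracks above are stored as tabix-indexed files, only the records overlapping the queried window need to be read at plot time. The browser itself does this from R (Rsamtools/rtracklayer); the following Python sketch with pysam illustrates the same idea only, and the file name and column layout are hypothetical.

```python
# Sketch of querying a tabix-indexed, BED-like track for a genomic window.
# Illustrative only: the Islet Regulome Browser performs this step in R,
# and the file name and column layout used here are assumptions.
import pysam

def fetch_track(track_path, chrom, start, end):
    """Return (chrom, start, end, name) tuples overlapping the query window."""
    tbx = pysam.TabixFile(track_path)
    records = []
    for line in tbx.fetch(chrom, start, end):
        fields = line.rstrip("\n").split("\t")
        name = fields[3] if len(fields) > 3 else "."
        records.append((fields[0], int(fields[1]), int(fields[2]), name))
    tbx.close()
    return records

# Example (coordinates are illustrative):
# elements = fetch_track("islet_chromatin_states.bed.gz", "chr13", 28_400_000, 28_600_000)
```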
While the above description summarizes the data currently available, the Islet Regulome Browser is a dynamic project. We periodically revise the database and the literature with the aim of providing the most updated and relevant datasets to the pancreatic islet community. We will ensure the future maintenance the Islet Regulome Browser and will interact with other members of the pancreatic islets community to collect their feedback and improve the user interaction with browser. For each dataset visualized in the browser we provide, in the "Data Source" page, full reference of publication as well as links to the repositories where the raw data was deposited for bulk download. RESULTS The Islet Regulome Browser (http://www.isletregulome.com) provides interactive access to a wealth of information, allowing the visualization of GWAS variants, different classes of regulatory elements, together with enhancer clusters, stretch-enhancers and transcription factor binding sites in pancreatic progenitors and adult human pancreatic islets. Integration and visualization of such data may help in the interpretation of the regulatory networks driving tissue-specific transcription and guide the identification of regulatory variants. From the initial page (Figure 2) a plot can be generated by selecting a valid gene name or an absolute chromosomal location by specifying the genomic coordinates (chromosome, start, and end). The available human builds are: hg18, hg19 (default), and hg38. The plot can be extended at both sides of the gene/location by selecting a range that by default is 50 Kb. To limit the computational load on the server, on the web applications, plots can span a maximum 5 Mb of genomic space and a minimum of 10 bp. These restrictions can be changed in a local installation of the Islet Regulome Browser. Four major track types can be loaded to obtain the desired plot. (1) Tracks named "chromatin maps" refer to genomic maps of regions that may be involved in gene transcription regulation. Such publicly available maps were inferred from experimental datasets such as open chromatin and histone modification profiles, performed in adult human pancreatic islets and pancreatic progenitors. (2) "transcription factors" tracks are maps of transcription factors binding sites obtained from Chip-seq experiments performed in human adult pancreatic islets and pancreatic progenitors. (3) "SNPs" tracks include GWAS variants datasets associated to type 2 diabetes and fasting glycemia. (4) An optional "chromatin profile" track can be loaded to visualize open chromatin profiles obtained from ATAC-Seq experiments performed in FACS purified alpha and beta cells (Figure 2). Variants or chromatin maps tracks can be uploaded by the user for temporary display from the home page, "Advanced options" section. The file size of the uploaded file should not exceed 50 Mb. If a file contains a header, this should start with the "#" symbol. A "variant file" should consist of three or four tab-delimited fields. Mandatory fields are those of chromosome, position, and p-value. The files can also contain an optional fourth field with the reference number of the variant, additional columns will be ignored. A "chromatin map file" has a typical BED file format and should be composed of 3 tab-delimited fields: chromosome, start, and end, additional columns will be ignored. The fields with positional information should only contain integer values while the p-values should be numerical values. 
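The format rules for user-uploaded variant files lend themselves to a simple pre-upload sanity check. The following sketch mirrors the rules just described (tab-delimited fields, optional "#" header line, chromosome/position/p-value plus an optional variant identifier, extra columns ignored, 50 Mb size limit); it is an illustration, not the browser's own validation code.

```python
# Sketch of a sanity check for an uploaded "variant file", following the
# format rules described above. Not the browser's own validation code.
import os

def parse_variant_file(path, max_bytes=50 * 1024 * 1024):
    if os.path.getsize(path) > max_bytes:
        raise ValueError("file exceeds the 50 Mb upload limit")
    variants = []
    with open(path) as handle:
        for line_no, line in enumerate(handle, start=1):
            if line.startswith("#") or not line.strip():
                continue  # header or blank line
            fields = line.rstrip("\n").split("\t")
            if len(fields) < 3:
                raise ValueError(f"line {line_no}: expected at least 3 tab-delimited fields")
            chrom = fields[0]
            position = int(fields[1])    # positional fields must be integers
            p_value = float(fields[2])   # p-values must be numeric
            rsid = fields[3] if len(fields) > 3 else None  # optional reference number
            variants.append((chrom, position, p_value, rsid))  # extra columns ignored
    return variants
```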
Upon data upload, a "Share uploaded files" option may be selected. This will provide a link that can be copy-pasted to a browser address bar in order to reproduce the Islet Regulome Browser session in use, including the uploaded data. Such link may be shared with other users in order to share data on the Islet Regulome Browser. Data uploaded by the user will be available for 1 month. Plot Description For any given gene or genomic region selected by the user, a plot is generated (Figure 3). The plot illustrates the regulatory regions, transcription factors binding sites and GWAS variants in which the sequence of the base genome is represented on the horizontal axis. In the upper part of the plot a red line on the chromosome ideogram reflects the portion of the chromosome displayed. Each dot represents a genomic variant, being the color intensity of the dot proportional to -Log p-value of association, as indicated on the side of the plot. A black box in the central part of the plot contains vertical colored bands depicting different chromatin states, open chromatin classes or regulatory elements as described in the legend above the plot. Black lines connecting the circles (each representing a different transcription factor) to the black box, point to the genomic location of each transcription factor binding site. The color intensity of such lines is proportional to the number of co-bound transcription factors. Annotated genes are depicted as horizontal gray lines at the bottom of the plot, with transcriptional orientation indicated by arrows. Boxes along the line correspond to positions of coding exons. Islet-specific genes are shown in dark gray. Plotting Versatility Graphical outputs are highly dynamic, being rendered on the fly. The user can zoom in and out at different resolutions as well as slide left or right 25, 50, and 75% of the length of the plot. The "Data displayed" panel, selectable from top left corner of the plot page, allows reviewing all the settings used to make the plot including genomic coordinates, genome build and all the features selected. Retrieve Results Graphical representations and text tables are available for download ( Figure 4A). The plot can be downloaded as PNG (Portable Network Graphics) or as PDF (Adobe Portable Document) format by clicking on the download icon above the plot. The difference between the two formats is that the latter uses vector graphics that is more suitable for high resolution publication figures while PNG compresses the image to a bitmap. A button above the plot provides a link to a UCSC browser (Speir et al., 2016) session containing all the data currently available in the Islet Regulome Browser for classic UCSC visualization. For this purpose bigwig files were generated from BAM files obtained by aligning the raw data using Bowtie2 (Langmead and Salzberg, 2012) (default parameters). Three tables related to the selected locus can be downloaded from the " Table" panel, selectable from the top left corner of the plot page. One table contains the regulatory regions, open chromatin classes or chromatin states selected for display along with the transcription factors whose binding sites overlap them ( Figure 4B). A second table lists the variants contained in the selected locus along with their p-value of association ( Figure 4C). Finally a third table includes reference ID and expression level of the different transcript isoforms overlapping the selected locus ( Figure 4D). 
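In the plot described above, the color intensity of each variant dot is proportional to the -log p-value of association. A minimal sketch of such a mapping is shown below; the saturation cap used for full intensity is an assumption, as the browser's exact scaling is not specified here.

```python
# Map association p-values to a 0-1 color intensity proportional to -log10(p).
# The saturation cap (here 8, i.e. p = 1e-8) is an illustrative assumption.
import math

def intensity(p_value, cap=8.0):
    neglog = -math.log10(max(p_value, 1e-300))  # guard against p = 0
    return min(neglog / cap, 1.0)

# intensity(0.05) ~ 0.16, intensity(5e-8) ~ 0.91, intensity(1e-12) -> 1.0
```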
A link named "Data displayed" at the top left corner of the plot page redirects the user to the "Data Source" used to create the displayed plot, including the publication reference, the date of publication, and links to the databases where the raw data are deposited.

FIGURE 3 | Plots generated by the Islet Regulome Browser. (A) Illustration of the different sections of the plot; see the text for a detailed description. The left panel shows an illustrative example of a fasting glycemia-associated locus (proximal to the PCSK1 gene), depicting highly associated SNPs mapping to active regulatory elements (dashed box). The right panel illustrates the adult islet regulatory landscape at the locus transcribing the β-cell-specific transcription factor NKX6.1. The locus is characterized by a large enhancer cluster upstream of the gene annotation. (B) Example of a fasting glycemia-associated locus in proximity to the DGKB gene. As in the previous example, the integration of GWAS data with regulatory elements and transcription factor binding sites allows pinpointing associated variants that map directly to active enhancer sites (dashed box). (C) Islet Regulome Browser view of the pancreatic progenitor regulatory landscape at chromosome 17q12. The locus is characterized by a high density of active enhancer elements bound by multiple transcription factors in proximity to the gene encoding HNF1B, a transcription factor involved in pancreatic development and homeostasis.

DISCUSSION With the advent of high-throughput sequencing technologies, we are witnessing an exponential production of data relevant to many fields of research, including pancreatic islet regulatory genomics. Scientists now face new challenges as research efforts shift from data acquisition to data processing and knowledge extraction. The role of the Islet Regulome Browser is to provide the pancreatic islet community with fast access to processed genomic data obtained from experiments performed on the endocrine pancreatic tissue. Although publicly available, such data are usually deposited in bulky unprocessed formats and are therefore difficult to access for non-bioinformatics laboratories. Much of the scientific effort in the pancreatic islet field is nowadays dedicated to understanding the functions of the non-coding genome in diabetes, in an effort to translate GWAS signals of association into molecular mechanisms. Compared to preliminary meeting communications (Ramos et al., 2016), the Islet Regulome Browser now allows the visualization of different classes of regulatory elements and transcription factor binding sites obtained from experiments performed by different labs worldwide. The original view of the data provided by the Islet Regulome Browser makes it easy to integrate GWAS raw files with epigenomic and transcriptomic datasets. The user can thus visualize the whole spectrum of variants with different p-values of association and contrast them with non-coding regulatory elements and transcription factor binding sites in a simple way. We believe that this level of data integration is novel compared to other available genome browsers and can assist researchers in prioritizing diabetes-associated variants and in boosting their functional validation. The Islet Regulome Browser is not intended to compete with other genome browser tools, but rather to integrate data of specific interest to a relatively small scientific community with genomic annotations and epigenetic features obtained from other tissues.
To this end, we provide the data available in the Islet Regulome Browser processed and organized into UCSC Genome Browser sessions, as well as direct links to the raw fastq files. The Islet Regulome Browser is an intuitive interface to explore pancreatic islet genomic datasets. Publicly available experimental datasets such as open chromatin assays, transcription factor binding assays, or GWAS variants are readily visualized at loci of interest and provided in the form of summary tables, facilitating the selection of candidate loci to be considered in experimental settings. We believe that such a tool will facilitate access to pancreatic islet public genomic datasets, providing a major boost to functional genomics studies of glucose metabolism-related traits, including diabetes. The Islet Regulome Browser is freely accessible at http://www.isletregulome.com. AUTHOR CONTRIBUTIONS LP and LM conceived the project. LM designed and implemented the interface, the web page, and the R code with contributions from MR. LP wrote the paper with contributions from LM and MR. All the authors read and approved it.
4,631.2
2017-02-14T00:00:00.000
[ "Biology", "Computer Science", "Medicine" ]
A novel device to assess the oxygen saturation and congestion status of the gastric conduit in thoracic esophagectomy Background In thoracic esophagectomy, anastomotic leakage is one of the most important surgical complications. Indocyanine green (ICG) is the most widely used method to assess tissue blood flow; however, this technique has been pointed out to have disadvantages such as difficulty in evaluating the degree of congestion, lack of objectivity in evaluating the degree of staining, and bias easily caused by ICG injection, camera distance, and other factors. Evaluating tissue oxygen saturation (StO2) overcomes these disadvantages and can be performed easily and repeatedly. It is also possible to measure objective values including the degree of congestion. We evaluate novel imaging technology to assess tissue oxygen saturation (StO2) in the gastric conduit during thoracic esophagectomy. Methods Fifty patients were enrolled, with seven excluded due to intraoperative findings, leaving 43 for analysis. These patients underwent thoracic esophagectomy for esophageal cancer. The device was used intraoperatively to evaluate tissue oxygen saturation (StO2) and total hemoglobin index (T-HbI), which guided the optimal site for gastric tube anastomosis. The efficacies of StO2 and T-HbI in relation to short-term outcomes were analyzed. Results StO2, indicating blood supply to the gastric tube, remained stable beyond the right gastroepiploic artery (RGEA) end but significantly decreased distally to the demarcation line (p <  0.05). T-HbI, indicative of congestion, significantly decreased past the RGEA (p <  0.05). Three patients experienced anastomotic leakage. These patients exhibited significantly lower StO2 (p <  0.01) and higher T-HbI (p <  0.01) at both the RGEA end and the demarcation line. Furthermore, the anastomotic site, usually within 3 cm of the RGEA’s anorectal side, also showed significantly lower StO2 (p <  0.01) and higher T-HbI (p <  0.01) in patients with anastomotic leakage. Conclusions The novel device provides real-time, objective evaluations of blood flow and congestion in the gastric tube. It proves useful for safer reconstruction during thoracic esophagectomy, particularly by identifying optimal anastomosis sites and predicting potential anastomotic leakage. Supplementary Information The online version contains supplementary material available at 10.1186/s12893-023-02303-0. Background Gastrointestinal anastomosis is of the utmost importance in gastrointestinal surgery, as anastomotic leakage prolongs the postoperative hospital stay and increases healthcare costs.Furthermore, anastomotic leakage can also negatively affect the long-term quality of life and prognosis [1,2].In thoracic esophagectomy, the risk of anastomotic leakage is reduced by properly assessing the blood flow to the reconstructed organ, namely the gastrointestinal tract, to determine the optimal anastomotic site.One method that is widely used to assess the blood flow to determine the optimal anastomotic site is indocyanine green (ICG) imaging [3,4].Recent clinical studies report that ICG imaging of the stomach reduces the incidence of anastomotic leakage [5,6].However, administration of ICG may cause allergic reactions.Furthermore, ICG imaging analysis of blood perfusion is qualitative, not quantitative. 
A new device has recently been developed to assess blood perfusion in the tissue in real time by analyzing the tissue oxygen saturation (StO2) and congestion [7,8].This device quantitatively measures perfusion and stasis without the administration of fluorescent or contrast agents and can measure the entire gastrointestinal tract in real time during anastomosis procedures, suggesting that its clinical use may minimize the occurrence of anastomotic leakage.The purpose of this study was to evaluate the efficacy of this novel blood flow assessment device to assess gastrointestinal tract perfusion for anastomotic integrity in patients undergoing thoracic esophagectomy. Study design This single-center prospective cohort study was performed between April 2020 and January 2021 at the National Cancer Center Hospital East in Japan.The study flowchart is shown in Fig. 1.Patients undergoing thoracic esophagectomy for esophageal cancer who underwent intraoperative evaluation of the StO2 and total hemoglobin index (T-HbI) immediately after construction of the gastric conduit were analyzed. Participants were selected from 50 esophageal cancer patients who visited our esophageal cancer clinic and indicated their willingness to participate in the study.Informed consent was obtained by providing patients with a written description of the study's purpose, protocol, and risks and benefits and all patients provided written informed consent. The study period is 10 months, from January 2020 to April 2021.The study was approved by the Ethics Committee (approval number #2018-248). Participants were esophageal cancer patients who had received a reconstructed stomach tube and had stomach tube blood flow assessment performed using the Toccare device.The device provides a noninvasive method for quantitative blood flow assessment. For the evaluation, the device was used to quantitatively measure the participant's stomach tube blood flow.Measurements were taken at specific anatomical points during surgery.The evaluation included analysis of blood flow parameters based on data obtained from the device. Statistical methods and appropriate descriptive statistics were used to analyze the data.The primary endpoints were the efficacy, safety, and quantitative value of the Toccare device in assessing gastric tube blood flow.Secondary endpoints considered were postoperative complications and results related to stomach tube reconstruction. Study results were processed according to the principles of anonymization and confidentiality and do not contain personally identifiable information.Training and quality control of data collection and analysis protocols were also implemented to minimize study limitations and bias. Fig. 
1 Flow chart of the patients' recruitment.Between April 2020 to January 2021, fifty patients who met the eligibility criteria for radical thoracic esophageal cancer surgery were enrolled.Of these, one case with intraoperative evidence of cervical esophageal invasion, four cases in which safe anastomosis with linear staple was judged to be difficult based on surgical findings, and two cases in which the cancer was incompletely resected were excluded as ineligible.Forty-three patients were included in the final analysis Patients Fifty patients who underwent thoracic esophagectomy for esophageal cancer at the National Cancer Center Hospital East in Japan were investigated.Preoperative diagnoses were based on imaging studies, namely upper gastrointestinal studies, endoscopic examination, and conventional computed tomography.Histological evaluation of endoscopic biopsy specimens was performed for all patients.The preoperative tumor stage, histopathological findings, surgical procedures performed, and outcomes were recorded. The inclusion criteria for this study were (1) aged over 20 years; (2) diagnosed with malignant thoracic esophageal cancer; (3) a pretreatment clinical disease stage of cT1-4aN0-3; (4) a European Cooperative Oncology Group performance status of 0-1; and (5) an assumption that reconstruction by stomach tube with cervical anastomosis using the mechanical linear stapling technique was possible during surgery.The exclusion criteria were (1) cervical and abdominal esophageal cancer; (2) any history of definitive chemoradiation treatment of the esophagus; (3) cT4b tumors; and (4) anastomosis via a method other than the mechanical linear stapling technique. Written informed consent was obtained from all patients.The study was approved by the Committee for ethics of the National Cancer Center (Japan) (approval number #2018-332).Also, this study confirms to the provisions of the Declaration of Helsinki (as revised in Tokyo 2004). Novel device to evaluate the oxygen saturation and total hemoglobin index The novel Toccare ™ device (Astem Corporation, Kawasaki, Japan) was used to analyze the StO2 and T-HbI (Fig. 2).The Toccare ™ provides the regional StO2 per unit of volume of targeted biological tissue.It also provides a T-HbI value that indicates regional tissue congestion levels by calculating the hemoglobin value per unit volume of tissue.Briefly, the Toccare ™ consists of two LED lights and their receptors (Fig. 2b).The regional StO2 and T-HbI are displayed simultaneously on the separate monitors in real time (Fig. 2c).The Toccare ™ derives the StO2 images from the differences in the absorption coefficient in the visible light region between oxy-and deoxyhemoglobin using a small number of wavelengths (Fig. 3).The details of this Toccare ™ device are described elsewhere [9]. Anastomotic procedure In all cases, thoracic esophagectomy was performed under the direction of the regular attending surgeon.For transthoracic esophagectomy, subtotal resection of the esophagus was performed with three-field regional lymph node dissection, regardless of tumor stage [10].For thoracoscopic esophagectomy, we preserved the azygos arch and the right bronchial artery [10].The laparoscopic approach was principally used for the abdominal portion of the operation, except in patients with bulky lymph node metastases or a history of laparotomy [10].The esophagus was usually reconstructed with a gastric tube via the retrosternal route [10]. 
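The device introduced above derives StO2 from the different absorption of oxy- and deoxyhemoglobin at a small number of visible wavelengths, and T-HbI from the hemoglobin content per unit tissue volume. As a simplified illustration of that principle only, and not the Toccare algorithm (which additionally handles scattering and is described elsewhere [9]), a two-wavelength Beer-Lambert sketch follows; the wavelengths and extinction coefficients are left as user-supplied inputs because their actual values are not given here.

```python
# Simplified two-wavelength Beer-Lambert sketch of how StO2 and a total
# hemoglobin index could be derived from tissue absorbances. Illustrates the
# principle described in the text only; NOT the Toccare algorithm. The
# extinction coefficients must be supplied by the user.
import numpy as np

def sto2_and_thbi(absorbance, eps_hbo2, eps_hb):
    """
    absorbance : length-2 array of measured absorbances at the two wavelengths
    eps_hbo2   : length-2 array of oxyhemoglobin extinction coefficients
    eps_hb     : length-2 array of deoxyhemoglobin extinction coefficients
    Returns (StO2 as a fraction, total hemoglobin in the units implied by eps).
    """
    # Beer-Lambert at each wavelength: A(l) = eps_HbO2(l)*C_HbO2 + eps_Hb(l)*C_Hb
    E = np.column_stack([eps_hbo2, eps_hb])          # 2x2 coefficient matrix
    c_hbo2, c_hb = np.linalg.solve(E, np.asarray(absorbance, dtype=float))
    total_hb = c_hbo2 + c_hb                         # analogous to a T-HbI
    sto2 = c_hbo2 / total_hb if total_hb > 0 else float("nan")
    return sto2, total_hb
```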
The conduit was designed as a narrow gastric tube using the curved-shape stapler (Endo GIA ™ Radial Reload with Tri-Staple ™ ; Medtronic, Minneapolis, MN, USA) in the first stapling.The right gastric artery is ligated in all cases before the first stapling.A linear stapler (Endo GIA ™ with Tri-Staple ™ ; Medtronic) was then used.The anastomosis was performed using the modified Collard technique with a 45-mm linear stapler posteriorly and 45-mm and 60-mm linear staplers anteriorly.The detail of the procedure was described previously [10]. Anesthesia and intraoperative management during thoracic esophagectomy Routine monitoring was initiated upon arrival in the operating room, namely electrocardiography, noninvasive blood pressure monitoring, pulse oximetry, and capnography.Anesthesia was then induced with 1.5-2.5 mg/ kg propofol, 1-2 μg/kg fentanyl, and 0.1 mg/kg vecuronium, and was maintained with 3% end-tidal sevoflurane in oxygen until tracheal intubation.After intubation, anesthesia was maintained with 2% end-tidal sevoflurane at 40% oxygen (air/oxygen mixture at 4 L/min) supplemented with doses of fentanyl and vecuronium. After the operation, the endotracheal tube was removed.Once the patient achieved a modified Aldrete score of > 9, they were transported from the operating room to the intensive care unit (ICU), as described previously [10]. Quantitative evaluation of regional tissue oxygen saturation and congestion The evaluation procedures were recorded on video and the stored still images were used to quantitatively evaluate the StO2 and T-HbI using software developed by the Astem Corporation.The regions of interest were three areas on the gastric tube (stomach tube tip zone, anastomosis zone, stomach angle zone) and two reference lines (demarcation line, line at the end of the right gastroepiploic vessel).The stomach angle zone, the height where the gastric angle was located on the exact the point of the original stomach was marked on the greater curvature side, and that area was used as the measurement point.The anastomosis zone was measured at the anastomotic position of the temporally elevated stomach tube (in most cases, it was located between the end of the RGEA and the demarcation line).The stomach tube tip zone was literally the tip of the stomach tube.The StO2 and congestion (i.e., the T-HbI) were calculated for each region of interest (Fig. 4).In addition, the distance from the end of the right gastroepiploic vessel to the anastomotic site of the gastric tube was recorded and the StO2 of the region was calculated.The average StO2 and T-HbI of each patient were compared among the regions of interest. Definitions of surgical complications Surgical complications were evaluated using the Clavien-Dindo classification system [11].Complications classified as grade 2 and higher were defined as surgical complications.Surgical site infection was defined according to the Surgical Wound Infection Task Force 1 guidelines and included infections at the incision site or within the organs/spaces manipulated during surgery.Respiratory infection was defined as the presence of new or progressive infiltrates on chest radiography plus at least two of the following signs: temperature > 38 °C, purulent sputum, white blood cell count > 1 × 10 4 /mm 3 or < 4 × 10 3 / mm 3 , and signs of inflammation on auscultation, as described previously [10]. 
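The per-patient values compared below are means of StO2 and T-HbI over each region of interest marked on the gastric tube (stomach tube tip zone, anastomosis zone, stomach angle zone, and the two reference lines). A minimal sketch of that averaging step is given below; it assumes the analysis software exports per-pixel arrays and that the regions are delineated as boolean masks, neither of which is stated in the text.

```python
# Sketch: mean StO2 / T-HbI per region of interest from per-pixel maps.
# Assumes (not stated in the text) that the analysis software exports 2-D
# numpy arrays and binary ROI masks, e.g. {"anastomosis zone": mask, ...}.
import numpy as np

def roi_means(sto2_map, thbi_map, roi_masks):
    """roi_masks: dict mapping ROI name to a boolean mask of the same shape."""
    summary = {}
    for name, mask in roi_masks.items():
        summary[name] = {
            "StO2_mean": float(np.mean(sto2_map[mask])),
            "T-HbI_mean": float(np.mean(thbi_map[mask])),
        }
    return summary
```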
Perioperative management The same postoperative clinical management pathway (CMP) was used for all patients, regardless of the type of abdominal approach.All patients received enteral nutrition through a nasal feeding tube until the start of oral intake on postoperative day (POD) 6.Briefly, fluid Fig. 3 Signal of the wavelength of light to be analyzed and the characteristics of the device.This device can obtain images from the surface of the organ as well as the body surface, acquiring the signals from the processor unit.The unit calculates a StO 2 value by using consecutive two different wavelength illuminations balance was achieved through a peripheral line, with additional enteral feeding on POD 1. Enteral nutrition was discontinued after the absence of anastomotic leakage was confirmed on POD 6. Perioperative management was performed by the same clinical staff in the same ICU (POD 1 and 2) and ward (POD 3 and later).The endotracheal tube was removed from all patients in the operating room or immediately upon arrival in the ICU.On POD 6, a radiographic contrast agent swallow examination was performed to evaluate the anastomosis and any passage problems.If this examination showed no leakage or obstruction, the nasogastric tube was removed and oral intake was initiated in accordance with the CMP.In the absence of any complications, the patient was enrolled in the postoperative rehabilitation program and discharged on POD 12-20, as described previously [10]. Any abnormal clinical findings after surgery, such as hypoxia, leukocytosis, or abnormal pleural drainage, were investigated using computed tomography and/ or other radiographic examinations to diagnose and optimally manage the abnormality as soon as possible, as described previously [10]. Statistical analysis Statistical analyses were performed using R software (R Foundation, Vienna, Austria).Intergroup differences were compared using the chi-squared test and the Mann-Whitney U-test.P < 0.05 was considered to indicate a significant difference. Results A total of 50 consecutive patients were enrolled in the present study.After enrollment, seven patients were deemed ineligible and excluded because of intraoperative findings; one excluded patient had tumor invasion into the cervical esophagus discovered intraoperatively, four patients underwent anastomosis by other methods because the linear stapling technique was judged to be unsafe, and two patients had incomplete resection of the cancer (Fig. 1).The StO2 and T-HbI values were successfully acquired without problems and analyzed intraoperatively in a real-time manner in all 43 included patients.There were no intraoperative incidents during esophagectomy.The patients' demographic characteristics are shown in Table 1.The cohort undergoing esophageal cancer surgery had no unusual characteristics in terms of average patient age, sex, body size, or anesthesia risk.Table 2 shows the operative data.Most esophagectomies were performed using the thoracoscopic and laparoscopic approaches.All patients underwent three-field lymph node dissection and received a narrow gastric conduit.The most frequent reconstruction route was retrosternal.Mechanical anastomosis using the modified Collard technique was performed in all 43 patients. A representative video of an intraoperative evaluation of the gastric tube is shown in Supplemental Video 1. The regional StO2 values for each region of interest on the reconstructed organ, namely the stomach tube, are shown in Fig. 
The StO2 was generally well preserved in the stomach angle zone, which was supplied by the right gastroepiploic artery (Fig. 4). The StO2 values in the anastomotic zone were sufficient even after the direct right gastroepiploic arterial blood supply was lost, and did not differ from the StO2 values in the stomach angle zone. However, the StO2 decreased markedly around the tip of the gastric tube in the area beyond the demarcation line (Figs. 4, 5).

(Table 2 excerpt: lymph node dissection — two fields, 2; three fields, 41. Route of reconstruction — retrosternal, 39; posterior mediastinal, 4. Type of anastomotic procedure — mechanical linear, 43; others, 0.)

Fig. 5 Mean value of tissue oxygen saturation at three zones of the gastric tube. Tissue oxygen saturation gradually decreased from the caudal side to the cranial side. No statistically significant differences in tissue oxygen saturation were found between the stomach angle zone and the anastomotic zone. However, there was a significant decrease in tissue oxygen saturation in the stomach tube tip zone compared with the anastomotic zone. These results suggest that oxygen saturation is maintained to some extent by intramural blood flow in the stomach in the region beyond the end of the right gastroepiploic artery, but that it is difficult to maintain oxygenation beyond the demarcation line, resulting in a decrease in oxygen saturation.

Details of the StO2 and T-HbI within the anastomotic zone, the most important region, are given below. Figure 6 shows the comparison of the mean T-HbI values at each site of the gastric tube. In principle, the T-HbI is an indicator of tissue congestion. The T-HbI value was lowest at the stomach angle zone and then increased gradually and significantly toward the stomach tube tip zone.

Postoperative outcomes are shown in Table 3. There were no in-hospital deaths, and 86.0% of patients successfully completed the CMP. The median postoperative hospital stay was 16 days. The incidences of anastomotic leakage, anastomotic stricture, postoperative pneumonia, and postoperative vocal cord paralysis were 6.9%, 9.3%, 16.2%, and 25.5%, respectively.

During the reconstruction process in esophagectomy, the line of anastomosis was selected based on the blood perfusion as determined by the StO2 and T-HbI as well as tension considerations. Most anastomosis sites were within 3 cm of the end of the right gastroepiploic artery. The relationships between the distance from the site of anastomosis to the end of the right gastroepiploic artery and the StO2 and T-HbI values for each patient are shown in Figs. 7 and 8. Figure 7 shows the relationship between the location of the actual anastomosis site of the stomach tube and the StO2 value in the 43 patients. In most cases, the anastomosis was performed within 3 cm (oral or anal) of the line at the end of the right gastroepiploic artery, and in most cases the anastomosis was positioned cephalad to that line. Anastomotic leakage was not observed in cases in which the StO2 value at the right gastroepiploic artery line remained above 50%, even if the anastomosis was performed at the tip of the gastric tube. In contrast, the StO2 value at the right gastroepiploic artery line was less than 50% in all three patients with anastomotic leakage.
Figure 8 shows the relationship between the location of the actual anastomosis of the stomach tube and the T-HbI values of the 43 patients. Anastomotic leakage was not observed in most patients in whom congestion remained limited, with a T-HbI value at the line at the end of the right gastroepiploic artery of 25 × 10⁻² or less, even if the anastomosis was performed at the tip of the stomach tube. In contrast, the T-HbI value at the end of the right gastroepiploic artery was higher than 25 × 10⁻² in all three patients with anastomotic leakage.

Table 4 shows the mean StO2 values at the line at the end of the right gastroepiploic artery, the demarcation line, and the anastomosis site in patients grouped according to the presence or absence of postoperative anastomotic leakage. The mean StO2 was significantly lower in the group with anastomotic leakage than in the group without leakage at the line at the end of the right gastroepiploic artery (P < 0.01), the demarcation line (P < 0.01), and the anastomosis site (P < 0.01). Moreover, the mean T-HbI values were significantly higher in the group with anastomotic leakage than in the group without leakage at the line at the end of the right gastroepiploic artery (P < 0.01), the demarcation line (P < 0.01), and the anastomosis site (P < 0.01) (Table 5).

Supplemental Table 1 shows the StO2 and T-HbI values at the two height lines of the gastric tube and the height line of the anastomosis. The regional StO2 significantly decreased from the end of the right gastroepiploic artery through the anastomotic zone to the demarcation line (P < 0.01). The mean T-HbI significantly increased from the end of the right gastroepiploic artery through the anastomotic zone to the demarcation line (P < 0.01); even within the anastomotic zone, there was a clear increase in congestion toward the tip of the stomach tube.

Discussion
Anastomotic leakage after esophageal cancer surgery has a significant impact on not only the short-term outcomes but also the medium-term quality of life [12,13]. It has recently been reported that perioperative anastomotic leakage also affects the long-term oncologic prognosis [14,15]. Failure of the anastomosis is caused by a variety of risk factors. Among these multiple risk factors, some of the most important are the preoperative nutritional status and the presence of pre-existing metabolic diseases, as well as the pre- and perioperative management strategies, such as preoperative rehabilitation [16]. Another important patient-specific factor is the presence of a small stomach, which makes it difficult to create a long enough gastric tube [17]. However, one of the most direct and important factors affecting anastomotic failure is the blood flow status of the stomach tube and the determination of its optimal anastomotic site [18,19].
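The group comparisons summarized in Tables 4 and 5 correspond to the intergroup tests described in the Methods (presumably the Mann-Whitney U-test, run in R). The hedged Python sketch below reproduces the same kind of comparison; the values and sample sizes are invented for demonstration and are not the study data.

```python
from scipy.stats import mannwhitneyu

# Illustrative (not actual) per-patient mean StO2 (%) at the line at the end of the
# right gastroepiploic artery, split by postoperative anastomotic leakage status.
sto2_no_leakage = [62.1, 58.4, 66.0, 71.3, 55.9, 60.7, 64.2, 68.8]
sto2_leakage = [44.5, 47.2, 41.8]

# Two-sided Mann-Whitney U-test, as used for intergroup comparisons in the study.
stat, p_value = mannwhitneyu(sto2_no_leakage, sto2_leakage, alternative="two-sided")
print(f"U = {stat:.1f}, P = {p_value:.4f}")  # P < 0.05 would be considered significant
```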
The ICG method is currently the most widely used method for assessing the blood flow status of the stomach tube and determining the optimal site of anastomosis. A recent systematic review has shown that the use of the ICG method contributes to a reduction in the incidence of anastomotic leakage [20], and there is unanimous support for its usefulness among surgeons. However, the ICG method is a visually intuitive evaluation method that is difficult to evaluate objectively [21]. The ICG method has several inadequacies, including: (1) various biases in evaluation, (2) difficulty in establishing objective indices, (3) difficulty in evaluating the congestive status, although the inflowing blood flow can be assessed, (4) difficulty in performing and re-evaluating the method repeatedly, and (5) the need for caution in patients with a history of contrast agent allergy [22,23]. In particular, measurement bias has a significant influence on the results; it is conventionally noted that although closer proximity allows for darker staining, a difference of a few centimeters in the distance between the camera and the target organ can significantly change the results [24].

The novel device used in the present study may eliminate the various shortcomings of the ICG method listed above. The device provides numerical values as objective indicators and enables repeated measurements. In addition, ischemia can be measured based on the StO2, and the congestive state can be simultaneously evaluated in real time based on the T-HbI. Furthermore, there are no safety concerns related to the injection of a reagent into the body. This novel device has the potential to evaluate organ blood flow not only in other types of gastrointestinal cancer surgery but also in certain types of reconstructive plastic surgery. Evaluation of the stomach tube using this device showed that the StO2 was maintained toward the tip of the stomach tube, beyond the site receiving direct blood flow from the right gastroepiploic artery, whereas the StO2 was markedly decreased distal to the demarcation line. The present results show that if the StO2 is above 50% and the T-HbI is below 25 × 10⁻², the site may be judged as safe for the anastomotic position.

Based on the results of this study, if the gastric tube can only be raised to a position that does not meet the criteria of an StO2 of 50% or more and a T-HbI of 25 × 10⁻² or less, the following measures can be considered, although anastomotic leakage does not always occur in this situation. First, in order to further secure the elevation, Kocher mobilization of the duodenum and detachment of adhesions at the base of the gastric tube should be performed to the maximum extent possible, and efforts should be made to bring the elevation as close as possible to the optimal target anastomotic site. Second, to minimize the risk of anastomotic leakage, additional measures such as omentoplasty and the use of an automatic anastomosis device with reinforcement should be added. Third, even in the event of anastomotic leakage, filling the dead space with remnant omental tissue and positioning the drain appropriately can minimize the risk of a serious condition.
The present study has several limitations. First, it was a prospective study conducted in a single institution. A multicenter study is required to show the efficacy of this novel device for analyzing the StO2 and T-HbI and to demonstrate its superiority over ICG imaging. Second, the thoracic and abdominal surgical devices changed slightly over the course of the study period. However, this change was caused by the introduction of the robotic approach, while the anesthesia and perioperative patient management remained consistent, which is a strength of the study. Third, the surgical approaches were not uniformly fixed; several patients underwent laparotomy because they had a history of laparotomy or had bulky lymph node metastases in the abdominal area. However, the incidence of anastomotic leakage did not differ according to the anastomosis procedure.

Conclusions
In conclusion, the novel device that measures the regional StO2 and T-HbI provides real-time intraoperative measurements of the StO2 and the supposed tissue congestion level, and it is useful in determining the optimal site for anastomosis to minimize the risk of anastomotic leakage in patients undergoing thoracic esophagectomy for esophageal cancer. Future large-scale randomized controlled studies are warranted to confirm our findings and demonstrate the superiority of this novel device over ICG imaging.

Fig. 2 Overall view of the oxygen saturation and tissue congestion evaluation device. a Appearance of the display of this device. The upper panel displays tissue oxygen saturation, and the lower panel displays the T-Hb index as a congestion index. By placing the detector in close contact with the target organ, these two parameters are immediately analyzed and displayed. These values can be measured repeatedly in real time without the need for reagents. b Overview of the internal structure of the device's detector. Tissue oxygen saturation is quantified from the values obtained by projecting light onto the target tissue from two different photodiodes. c Representative images of an intraoperative real-time evaluation of oxygen saturation and T-HbI of the gastric tube obtained during esophagectomy

Fig. 4 Schematic representation of the sites and boundaries of the stomach tube for tissue oxygen saturation assessment: two important boundaries in the reconstructed gastrointestinal tract (the demarcation line and the line at the end of the right gastroepiploic vessel) and the three regions (stomach angle zone, anastomosis zone, stomach tube tip zone) divided by these lines

Fig. 6 Mean value of the T-Hb index at three zones of the gastric tube. T-Hb index values gradually increased from the caudal side to the cranial side. Statistically significant differences in T-Hb index values were found between the stomach angle zone and the anastomotic zone (p < 0.05). A statistically significant difference in the T-Hb index was also observed between the anastomotic zone and the stomach tube tip zone (p < 0.05). These results indicate that the progression of congestion toward the tip of the gastric tube is gradual and objective

Fig. 7 Relationship between the anastomotic site and the tissue oxygen saturation of the stomach tube. The actual anastomotic position in the final gastric tube and the distribution of tissue oxygen saturation (%) at the terminal line of the RGEA are shown as oral (+) and anal (−) from the terminal line of the RGEA. Tissue oxygen saturation generally ranged from 50 to 70% in most cases. The actual anastomotic position varied. In cases with anastomotic leakage, the anastomotic position was on the oral side of the terminal line of the RGEA. Tissue oxygen saturation in the anastomotic leakage cases was below 50% in all cases. (Cases with suture failure are indicated by red dots)

Fig. 8 Relationship between the anastomotic site and the T-Hb index of the stomach tube

Table 2 Patients' operative data
Table 3 Postoperative outcomes
Table 4 Mean tissue oxygen saturation at two gastric tube points and the point of anastomosis in patients who did and did not experience postoperative anastomotic leakage
Table 5 Mean tissue total Hb index at two gastric tube points and the point of anastomosis in patients who did and did not experience postoperative anastomotic leakage
6,351.8
2024-01-08T00:00:00.000
[ "Medicine", "Engineering" ]
Hidden but revealed: After years of genetic studies, behavioural monitoring combined with genomics uncover new insight into the population dynamics of Atlantic cod in Icelandic waters

Abstract
Stock structure is of paramount importance for sustainable management of exploited resources. In that context, genetic markers have been used for more than two decades to resolve spatial structure of marine exploited resources and to fully fathom stock dynamics and interactions (i.e. gene flow). While genetic markers such as allozymes and RFLP dominated the debate in the early era of genetics, technology advances have provided scientists with new tools every decade to better assess stock discrimination and interactions (i.e. gene flow). Here, we provide a review of genetic studies performed to understand stock structure of Atlantic cod in Icelandic waters, from the early allozyme approaches to the genomic work currently carried out. We further highlight the importance of the generation of a chromosome-anchored genome assembly together with whole-genome population data, which drastically changed our perception of the possible management units to consider. After nearly 60 years of genetic investigation of Atlantic cod structure in Icelandic waters, genetic (and later genomic) data combined with behavioural monitoring using Data Storage Tags shifted the attention from geographical population structures to behavioural ecotypes. This review also demonstrates the need for future research to further disentangle the impact of these ecotypes (and gene flow among them) on the population structure of Atlantic cod in Icelandic waters. It also highlights the importance of whole-genome data to unravel unexpected within-species diversity related to chromosomal inversions and associated supergenes, which are important to consider for future development of sustainable management programmes of the species within the North Atlantic.

KEYWORDS: behavioural ecotypes, Gadus morhua, genetics/genomics, Iceland, management perspective, stock structure

| INTRODUCTION
The Atlantic cod (Gadus morhua L.) has been one of the most important commercial species in the North Atlantic for more than 1000 years, with evidence of cod trading during the Viking age.
Atlantic cod has been intensively exploited for the last 100 years, which led to the drastic collapse of several stocks in many regions owing to overexploitation (see Christensen et al., 2003 for a review). This was also the case in Icelandic waters where the spawning stock biomass (SSB) of Atlantic cod decreased from 1 million tonnes in the 1950s to <200,000 tonnes in the 1980s (ICES, 2019). Such a decrease is likely a result of the rapid changes in the exploitation capacities such as increase in boat size, engine power and more efficient fishing gear. Since the early 1990s, SSB increased gradually and reached 600,000 tonnes in 2018 (MFRI, 2018). Concurrently to the observed decrease in SSB from the 1950s to 1980s, significant changes were observed in the life history of Icelandic cod, indicating potential fisheries-induced effects. Age truncation and consecutive changes in size distribution (Schopka, 1994) as well as maturity at younger ages and smaller sizes were reported (Marteinsdóttir & Begg, 2002). In Icelandic waters, the dynamic migration pattern of cod from spawning to feeding grounds was described as early as the 1700s (Magnússon, 1785) and was later confirmed by extensive tagging experiments conducted over eight decades in the 20th century. The first indication of multiple stocks of Atlantic cod in Icelandic waters was obtained from tagging experiments as early as the 1900s using Petersen tags (Saemundsson, 1913;Schmidt, 1907), followed by successive tagging experiments in the years from 1948 to 1986 (see Jónsson, 1996). These results, spanning over a period of 200 years, were crucial for the understanding of cod dynamics in Icelandic waters and provided the first evidence for both spawning site fidelity and homing behaviour. The studies showed that postspawning cod, tagged at spawning grounds from the southwest (SW) region, undertook long-distance migrations to one of the two main feeding regions located either (i) northwest (NW) or (ii) northeast (NE) of Iceland, respectively (Jónsson, 1996;Pálsson & Thorsteinsson, 2003; see Figure 1). Conversely, postspawning cod tagged in the northern regions were shown to be more sedentary than their SW counterpart (Jónsson, 1996) and were rarely recaptured at the main feeding grounds. The common understanding was that the majority of the Icelandic cod stock originated from one main spawning ground in the south/southwestern region (Jónsson, 1996), where the migration pattern from these spawning grounds to feeding grounds NW and NE was assumed coupled to homing behaviour. It was also known that in addition to the main spawning ground in south/southwest, spawning occurred in numerous small spawning aggregations along the coast from the NW to the southeast (SE). However, the contribution to the SSB from these smaller spawning aggregations was relatively small compared with the contribution from the main spawning ground in south/southwest (Figure 1; Jónsson, 1996;Sólmundsson et al., 2015). Over the years, numerous studies confirmed that features of the Icelandic cod life portfolio, such as the importance of biologically relevant diversity within the species, were distinctly variable. 
Pronounced differences were observed between north and south of Iceland regarding age at maturity, growth rate, eggs and larval drift, otolith chemistry and many other parameters (Begg & Marteinsdottir, 2003; Brickman et al., 2007; Grabowski et al., 2011; Jónsdóttir et al., 2006a, 2006b, 2007, 2008; Marteinsdóttir & Begg, 2002; Marteinsdóttir, Guðmunðsdóttir, et al., 2000; Marteinsdóttir, Gunnarsson, et al., 2000; Pálsson & Björnsson, 2011; Pardoe et al., 2008; Pétursdóttir et al., 2006; Righton et al., 2010; Thorsteinsson et al., 2012). As such, biological evidence on the presence of several Icelandic cod populations that exhibit different life-history traits has accumulated for nearly two decades. In the same time period, few comprehensive genetic studies were performed to assess the importance of gene flow and connectivity among the Icelandic cod populations. Here, we review genetic studies performed on Atlantic cod in Icelandic waters, summarizing them from the earliest haemoglobin studies performed in the 1960s to recent genomic investigations. We focus specifically on studies investigating genetic population structure, and we describe how the advancement of genomic tools has changed our perception of biological units in the last six decades. Based on new evidence from whole-genome sequencing, combined with Data Storage Tags (DSTs), the behavioural ecotypes (coastal and frontal ecotypes) might play a crucial role in the maintenance of the described spatial genetic structure in Icelandic waters. These potential new management units revealed in Icelandic waters represent parallel evolutionary adaptive cod lineages that were recently found across the species range (Matschiner et al., 2022) and represent crucial information that should be taken into consideration for the development of sustainable management programmes for Atlantic cod stocks throughout its distribution range.

FIGURE 1 Atlantic cod migration dynamic in Icelandic waters. Spawning grounds are indicated with orange areas, while feeding ground locations are indicated in green colour. The feeding migration from the southern spawning grounds is indicated by red arrows, while black arrows indicate migration from the northern spawning grounds to more localized feeding grounds.

| THE GENETIC PIONEERS: STUDIES OF HAEMOGLOBIN POLYMORPHISM
Genetic studies of stock structure in Icelandic waters started as early as the 1960s, when Sick (1965) revealed the presence of limited genetic variation at the haemoglobin HbI locus. Several years later, Jamieson and Birley (1989) were the first to describe a clear difference between cod from the NE and the SW of Iceland, owing to a frequency shift of the HbI-1 allele from 0.61 in the NE to 0.09-0.32 in the SW. Although very little was known about the haemoglobin variants in the early 1980s, these findings have now been confirmed, and two amino-acid replacements, Met55β1Val and Lys62β1Ala, located at crucial positions in the α1β1 subunit interface and the haem pocket, respectively, were discovered (Andersen et al., 2009). The described HbI variants seemed to affect the oxygen-binding properties differently in various cod populations and were assumed to reflect adaptation to local environmental conditions (Andersen et al., 2009).
Further, evidence of the multi-copy nature and potential adaptive significance of haemoglobin has accumulated and questioned the functionality of the different haemoglobin variants (Baalsrud et al., 2017;Barlow et al., 2017;Borza et al., 2009;Knight, 2017). By doing these early investigations, Jamieson and Birley (1989) nevertheless presented the first evidence of local adaptation in a species with large population sizes and potentially high gene flow. As such, they were the pioneers in a long series of studies investigating the potential population structure of Atlantic cod in this region. However, these results remained unnoticed for many years and were not supported by the first sequence variation studies of mitochondrial DNA (mtDNA; Árnason & Rand, 1992;. By investigating mtDNA sequence variation in Iceland and Greenland, these studies reported a high degree of variation and suggested a lack of differentiation of haplotype frequencies within the studied regions, that is, a lack of genetic structure. Subsequently, the Icelandic cod stock was thought to be composed of a single management unit for many years (Schopka, 1994). | THE PANTOPHYS IN (PAN I LO CUS) ER A One of the most popular genetic markers for Atlantic cod in the Northeast Atlantic was the synaptophysin locus described by Fevolden and Pogson (1997), later called the pantophysin locus (Pan I). The popularity of the Pan I locus lasted more than a decade and was probably linked to the fact that this gene contains two major alleles, Pan I A and Pan I B , and to the ease of obtaining genotypes. The two alleles Pan I A and Pan I B differ by six fixed nonsynonymous DNA substitutions, clustering in the first intravesicular loop (IV1 domain) of the protein (Pogson & Mesa, 2004). The function of this locus is still poorly understood, but it has been suggested to be a potential candidate gene under selection (Pogson & Mesa, 2004). The gene codes for an integral membrane protein expressed in cytoplasmic transport vesicles (Brooks et al., 2000;Windoffer et al., 1999). In the following years, multiple studies revealed differences in Pan I allele frequencies across the North Atlantic Ocean and potential driving forces behind the selection at this locus were investigated (Case et al., 2005;Karlsson & Mork, 2003;Pampoulie et al., 2006;Pogson, 2001;Sarvas & Fevolden, 2005a;Skarstein et al., 2007;Stenvik et al., 2006). In Icelandic cod, differences in Pan I allele frequencies were observed at relatively small geographical scales in the SW region and were shown to be temporally stable over a period of 2 years (Imsland et al., 2004;Jónsdóttir et al., 1999Jónsdóttir et al., , 2001. However, 10 more years passed before information on Pan I allele frequency variation across the entire spawning regions of Atlantic cod in Icelandic waters were available (Pampoulie et al., 2006). By collecting and genotyping more than 2500 spawning cod at 22 different locations around Iceland, using the Pan I locus and nine microsatellite loci, Pampoulie et al. (2006) were the first to demonstrate that Icelandic cod populations were not panmictic, but consisted of at least two genetically differentiated spawning components, the NE and SW. They observed a distinct Pan I allele shift between these two regions with a higher frequency of the Pan I B allele in the southwestern spawning ground compared with the northern region. The observed difference in Pan I B allele frequencies was also supported by differentiation at microsatellite loci and by tagging experiments. 
Nevertheless, the level of differentiation observed with the Pan I locus was 80-fold higher than the one observed for microsatellite loci, a result interpreted as evidence for potential local adaptation. Moreover, Pampoulie et al. (2006) confirmed the increase in Pan I B allele frequency linked to the depth in which cod occur and which had been previously observed in the SW region (Jónsdóttir et al., 1999). The concept of an inshore versus offshore cod population within the region emerged. While these results were questioned by a subsequent study (Eiríksson & Árnason, 2013), the debate was mainly centred around the acceptance of the conclusion drawn about a NE-SW structure in the Icelandic cod and about the consideration of selective processes (adaptation) in conservation and management practices. Today it is generally accepted that natural selection (local adaptation) processes are important to identify population structure and should be used within a management context to conserve within-species diversity (Nielsen et al., 2009). Although the mechanisms behind the observed allele frequencies at the Pan I locus are still not completely understood, a majority of the subsequent genetic/genomic studies confirm the presence of unique NE versus SW units as well as a distinction linked to the depth cod occurs as proposed by Pampoulie et al. (2006). The Pan I locus remains one of the most used genetic markers to study population structure in Atlantic cod, and it is now clear that the locus is located within a large inverted genomic region at linkage group 1 (LG1), known to discriminate between the iconic migratory Northeast Arctic cod (NEAC) and the stationary Norwegian coastal cod (NCC; Berg et al., 2016Berg et al., , 2017. This chromosomal inversion contains hundreds of genes, each of which might play an important role in driving local adaptation, and therefore renders the discussion about the adaptive role of the Pan I locus speculative. It is highly likely that it is this collection of linked genes, defined as a supergene inherited in a Mendelian manner, rather than a single gene which is driving the adaptive process of cod to local environmental conditions. | S ING LE-NUCLEOTIDE P OLYMORPHIS MS (S NP MARKER S) In the early 2000s, the advent of new genetic technology introduced novel genetic markers that enabled further investigations of genetic structure in Icelandic cod. As such, the development of next-generation sequencing and hence of thousands of SNPs made a significant impact in many nonmodel organisms such as cod (Bonanomi et al., 2015(Bonanomi et al., , 2016O'Leary et al., 2006;Therkildsen et al., 2013;Wirgin et al., 2007). By utilizing these novel techniques, several studies suggested that ecological divergence (in the presence of gene flow) was pronounced and affected specific genomic regions, so-called 'genomic islands of divergence' (Bradbury et al., 2013;Hemmer-Hansen et al., 2013) (previously referred as heterogeneous genomic divergence; see Nosil et al., 2009), whereas the remaining parts of the genome were homogenized by gene flow. Several SNP-based studies that were not focusing exclusively on the genetic structure of cod in Icelandic waters confirmed the discrimination between the NE and SW breeding grounds (Bonanomi et al., 2015;Therkildsen et al., 2013). Altogether, the novel SNP-based studies confirmed the presence of two distinct Atlantic cod populations in Icelandic waters where differentiation was mainly driven by selective processes at key genomic regions. 
Using 1152 validated transcriptome-derived SNPs, Therkildsen et al. (2013) confirmed the previously described inshore versus offshore cod population differences in the SW region of Iceland (Jónsdóttir et al., 1999;Pampoulie et al., 2006). | DATA S TOR AG E TAG S PROVIDE CRITIC AL NE W INFORMATI ON While geneticists were focussing on potential spatial genetic structure of Atlantic cod in Icelandic waters, marine biologists were trying to better understand the role of this species in the food web and its potential migration routes. Two pioneers drastically changed the perception of stock structure and questions related to conservation and management of the Icelandic cod and consequently also the path of genetic investigation in Icelandic waters for the following 20 years. In the early 2000s, Pálsson and Thorsteinsson (2003) conducted DSTs experiments on a spawning ground at the southwestern coast of Iceland from 1996 to 1999. DSTs are biologging devices, which are introduced in the abdominal cavity of individual fish where they record depth and temperature with high accuracy at a constant time interval. The results of the DSTs experiment were quite surprising. Among individuals tagged within the same inshore spawning location, some individuals appeared to stay all year in shallow coastal waters (<200 m), characterized by seasonal trend in temperature (abbreviated 'coastal cod'), while other made feeding migrations to deeper waters where they foraged in thermal fronts (abbreviated 'frontal cod'; Figure 2). Additionally, the use of DSTs provided evidence of spawning skippers (e.g. mature cod which do not reproduce during one spawning season, see Jónsdóttir et al., 2014), as well as evidence of the fact that: (a) the migration timing in successive years was close to being synchronous, suggesting that the onset of migration was consistent, (b) the use of a tidal model suggested that different behavioural types were undertaking feeding migration in groups or shoals, and (c) the stability of behaviour from year to year suggested that the behavioural strategies were related to food availability or genetic differences (Thorsteinsson et al., 2012). Similar distinguishable migration patterns were described as early as the 1930s in Norwegian waters, where two distinct Atlantic cod ecotypes were described, the NEAC and the NCC (Rollefsen, 1934). NEAC is known to exhibit long-distance migrations from feeding areas located in the Barents Sea and the Svalbard region (Bergstad et al., 1987) to its spawning grounds along the Norwegian coast where the main spawning areas are located around the Lofoten Islands (Bergstad et al., 1987;Brander, 1994). Such a migratory pattern is similar to what is observed for the frontal Icelandic cod. The NCC are more stationary and usually resident in more shallow and warmer waters along the Norwegian coastline including the Lofoten Islands (Rollefsen, 1954) and in numerous fjords in which they usually spawn (Jakobsen, 1987). Hence, the NCC display a migration pattern that is similar to what is observed in the coastal Icelandic cod. | B EHAVIOUR AL ECOT YPE S E XHIB IT G ENE TI C D IFFEREN CE S AT THE PANTOPHYS IN AND RHODOPS IN G ENE S The first attempt to understand the genetic background of coastal and frontal behavioural ecotypes in Icelandic waters utilized the most common genetic marker for Atlantic cod at the time, the Pan I locus. Based on data collected from 69 DSTs-recaptured individuals, Pampoulie et al. 
(2008) showed that 97% of Pan I AA genotypes exhibited a typical coastal behaviour, while 88% of Pan I BB genotypes exhibited a frontal behaviour. The heterozygous Pan I AB individuals exhibited either coastal or frontal behaviours in roughly equal (50:50) proportions, which implied that using the Pan I locus alone was not sufficient to accurately assign individual cod to behavioural ecotypes in this region. Further analyses considering geographical partitioning of behavioural ecotypes, using a higher number of recaptured individuals (n = 172), demonstrated that almost no Pan I BB genotypes were captured in the north and that the relationship between the Pan I genotypes and the behavioural ecotypes varied among regions (Figure 3). While most of the Pan I AA genotypes were of the coastal ecotype in the west and southeast of Iceland, 23% of those recaptured in the northeast exhibited a frontal behaviour (Figure 3). The same pattern was observed for the Pan I BB genotypes, of which only 67% exhibited a frontal behaviour in the southeast compared with 89% in the southwest. The Pan I AB genotypes also exhibited a higher percentage of frontal behaviour in the southeast than in any other region. Interestingly, similar associations between behavioural ecotypes and Pan I allele frequencies were also demonstrated among NEAC and NCC in Norwegian waters (Nordeide, 1998; Sarvas & Fevolden, 2005a, 2005b; Skarstein et al., 2007). Since these ecotypes occur at different depths during most of the year, they are clearly exposed to different light conditions. In the rhodopsin pigment, which is involved in light detection, several amino acid substitutions have been shown to affect the spectral sensitivity in several teleost species (Yokoyama et al., 2008). Numerous studies provide evidence of the importance of protein modifications of rhodopsin in marine vertebrates, resulting in local adaptation to various light environments and ultimately to species diversification (Ebert & Andrew, 2009; Shum et al., 2014; Sivasundar & Palumbi, 2010). Consequently, Pampoulie et al. (2015) focussed on the polymorphism in the RH1 opsin gene, using 148 individuals tagged with DSTs and recaptured, and observed 18 variable sites within the RH1 opsin gene and two in the 3′-untranslated region (3′-UTR). However, only two of these polymorphic sites had high minor allele frequencies (MAFs) that markedly differed between behavioural ecotypes (one synonymous SNP at site 459 [AA153] and one nonsynonymous at site 1295 in the 3′-UTR). For nonmodel organisms such as Atlantic cod, the genomic era offered ample opportunity to better understand genomic features such as the observed Pan I locus and RH1 opsin gene variation among the behavioural ecotypes. Both these genes were shown to be located within the large chromosomal inversion on LG1 and, as mentioned above, found to be involved in behavioural ecotype divergence in the North Atlantic (Berg et al., 2017).

FIGURE 2 Typical coastal (upper) and frontal (lower) Data Storage Tag profiles. Depth is depicted in black, temperature in light grey.

| ENTERING THE GENOMIC ERA AND AN EVALUATION OF POTENTIAL MANAGEMENT UNITS
The Atlantic cod was one of the first nonmodel organisms for which a chromosome-anchored draft genome assembly was available (Star et al., 2011). However, it took another 5-6 years before the first whole-genome study in Icelandic waters was performed.
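Before turning to that study, it is worth noting how an association like the Pan I genotype-ecotype pattern reported above (97% of Pan I AA fish behaving as coastal, 88% of Pan I BB as frontal, heterozygotes split roughly evenly) can be tested statistically. The sketch below runs a contingency-table chi-squared test on invented counts; the numbers are illustrative only and are not the recaptured-fish data.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Invented genotype-by-ecotype counts (rows: Pan I AA, AB, BB; columns: coastal, frontal).
observed = np.array([
    [29, 1],   # AA: almost all coastal
    [14, 14],  # AB: roughly even split
    [1, 10],   # BB: mostly frontal
])

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, P = {p:.4g}")
# A small P-value indicates that behavioural ecotype is not independent of Pan I genotype.
```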
This first whole-genome study was performed on samples collected on a large geographical scale to understand genome-wide patterns of divergence among the behavioural ecotypes of Atlantic cod (Berg et al., 2017). The population-based sequencing efforts in Atlantic cod identified genome-wide patterns of divergence, mainly linked to four large chromosomal inversions, shedding light on processes of local adaptation in spatially structured populations across the North Atlantic Ocean (Berg et al., 2017). It was shown that three of these genomic regions, on LG1, LG2 and LG7, clearly discriminated the migratory NEAC from the nonmigratory NCC as well as the coastal and frontal ecotypes found in Icelandic waters and characterized by DST profiles (Berg et al., 2016, 2017; Figure 4). The chromosomal inversions, or supergenes, which span several Mb and contain hundreds of genes, are likely maintained by selection processes that, owing to low recombination between the inversion variants, impact the entire genomic region(s). Hence, they facilitate coevolution of genes underlying complex traits of behavioural ecotypes (Berg et al., 2017), such as the Pan I locus and the RH1 gene, both located within the large chromosomal inversion on LG1 (as mentioned above). Moreover, the genomic data also indicated that the migratory ecotype NEAC originated from the stationary ancestral ecotype NCC. The derived inversion variant found on LG1, the main inversion linked to the behavioural difference between the NEAC and NCC (Hemmer-Hansen et al., 2013; Kirubakaran et al., 2016), is found in high frequency (0.50) in NEAC, whereas the ancestral inversion variant is most frequently observed (almost fixed: 0.93) in the NCC and other cod populations (Berg et al., 2017; Matschiner et al., 2022). However, a clear separation (i.e. genetic differentiation) between NEAC and NCC is also found within the inversions on LG2 and LG7 (Berg et al., 2017). These inversions also seem to vary more in frequency in relation to environmental conditions (Barth et al., 2019; Berg et al., 2015; Kirubakaran et al., 2020). Thus, the separation seen here on LG2 and LG7 between NEAC and NCC could be due to the fact that NEAC experiences more extreme and colder environmental conditions. For the Icelandic coastal and frontal behavioural ecotypes, however, the allele frequency differences showed a higher degree of complexity. Even if there is seemingly a differentiation between the coastal and frontal behavioural ecotypes in terms of the inversion found on LG1 (see Figure 4), the genetic differentiation is less pronounced than for the Norwegian counterparts (Berg et al., 2017). This is mainly due to a less clear separation in the inversion frequencies found between the two ecotypes, with most of the frontal cod being either heterozygous (0.69) or homozygous (0.21) for the derived inversion variant, whereas the coastal cod displayed a higher frequency of the ancestral variant (0.59), and some heterozygote individuals (0.33) were also detected (Berg et al., 2017). For the two other inversions (on LG2 and LG7), the separation between Icelandic coastal and frontal behavioural ecotypes was not that obvious at all (see Figure 4 and supp. material of Berg et al., 2017).

FIGURE 3 Proportion of the different Pan I genotypes among the coastal and frontal ecotypes within geographical regions during spawning time (data analysed for this review, n = 172).
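As a back-of-the-envelope illustration of how the genotype proportions quoted above translate into inversion-variant frequencies, the short sketch below applies the standard allele-counting formula (frequency of a variant = homozygote proportion + half the heterozygote proportion). The coastal derived-homozygote share is inferred under the assumption that the three genotype classes sum to one; this is our illustration, not a calculation reported in the reviewed studies.

```python
def variant_frequency(hom_variant: float, het: float) -> float:
    # Standard allele counting: each homozygote carries two copies, each heterozygote one.
    return hom_variant + 0.5 * het

# Frontal ecotype: 0.21 homozygous and 0.69 heterozygous for the derived LG1 variant.
frontal = variant_frequency(0.21, 0.69)             # ~0.56

# Coastal ecotype: 0.59 homozygous ancestral and 0.33 heterozygous, so the
# derived-homozygote share is assumed to be the remainder (1 - 0.59 - 0.33 = 0.08).
coastal = variant_frequency(1 - 0.59 - 0.33, 0.33)  # ~0.25

print(f"derived-variant frequency: frontal ~ {frontal:.2f}, coastal ~ {coastal:.2f}")
```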
For these two inversions, both the frontal and coastal behavioural ecotypes appeared largely similar. These results indicate quite strongly that the two behavioural ecotypes found in Icelandic waters have most likely derived from NEAC (Berg et al., 2017). This is supported by the lower genetic differentiation observed between coastal and frontal behavioural ecotypes compared with the differentiation detected between the two ecotypes in Norwegian waters, as shown by the majority of outlier loci (Figure 4). These observations could be linked to a higher complexity of behavioural differentiation in Icelandic waters. Genomic diversity within species is as important as diversity of species for ecosystem function (Hoban et al., 2022), and the presence of genomic regions of divergence among behavioural ecotypes of Atlantic cod highlights the importance of full genome data for biodiversity conservation and management. Thus, further research is warranted to fully pinpoint the genomic signatures underlying behavioural ecotypes, as well as how these behavioural differences and migration patterns (via gene flow as well as genetic drift) impact the population structure of Atlantic cod in Icelandic waters. Such information is of high value for the future development of sustainable management programmes for these important fish stocks. As mentioned above, in other geographical regions differentiation in supergene frequencies has been shown to be correlated with various environmental characteristics such as seawater temperature (Barney et al., 2017) and salinity, and to promote ecological stasis and persistence over millennia despite the fisheries-induced decline in populations (Sodeland et al., 2022).

FIGURE 4 The majority of detected outlier loci within the Atlantic cod genome are clustered within linkage groups (LGs) 1, 2, 7 and 12 for the migratory and nonmigratory cod including the NEAC/NCC complex (a) and the Icelandic coastal and frontal behavioural ecotypes (b) described using DST data (reanalysis of data from Berg et al., 2016, 2017). Cod drawing was provided by Jón Baldur Hlíðberg©. DSTs, Data Storage Tags; NCC, Norwegian coastal cod; NEAC, Northeast Arctic cod.

| CONCLUSION AND PERSPECTIVES
One of the premises of integrating scientific results into management plans and conservation practices is to fully fathom diversity within the distribution range of a harvested species. Several distinct approaches have been used for decades, and the latest consensus has been that multidisciplinary approaches should be developed, and it is therefore premature to draw conclusions on the dynamics of the cod stock in this region. The primary remaining question to resolve is to understand the role of the inverted variants in the maintenance of geographical versus ecotype divergence, both during spawning time (population differentiation) and during the rest of the cod life cycle (contribution to nursery, juvenile and feeding aggregations). Once this crucial question is resolved, the temporal stability of the observed structure and the effect of fisheries can be investigated further. To conclude, the advancement of genome sequencing technologies in the last decades has drastically redirected genomic investigations in nonmodel organisms, such as the Icelandic cod behavioural ecotypes.
The recent use of reference genomes of coastal versus frontal Icelandic cod and of stationary versus migratory individuals of cod across the North Atlantic has confirmed the presence of supergenes under natural selection, which have shaped the architecture of local adaptation of the species in Icelandic waters for the last 30,000 years and in the North Atlantic for the last 0.4-1.66 million years (Matschiner et al., 2022). Finally, this review also demonstrates the importance of reference genomes to detect the presence of unexpected within-species diversity related to large inverted variant(s). Large chromosomal inversions have now been successfully identified in a multitude of species and have all been shown to be related to within-species diversity reflecting local adaptive processes (Akopyan et al., 2022; Ayala et al., 2013; Berg et al., 2015; Huang et al., 2020; Koch et al., 2021; Sanchez-Donoso et al., 2022; Twyford & Friedman, 2015), and they are therefore becoming relevant for management and for conservation genomics (Formenti et al., 2022).

ACKNOWLEDGMENTS
The research reviewed in the present manuscript was supported …

CONFLICT OF INTEREST
The authors declare no competing interests.

DATA AVAILABILITY STATEMENT
Data reviewed in the present article were published in previous manuscripts. If not open-access, the data can be requested from the first author of this article.
6,928.6
2022-08-22T00:00:00.000
[ "Biology" ]
Structural Transformation versus Environmental Quality: The Experience of the Low-income Countries in Sub-Saharan Africa

Structural transformation has been recognized as a critical mechanism for improving living standards in developing countries in Africa. However, growing evidence indicates that such change is associated with considerable damage to environmental quality and hence challenges sustainable development. The present study investigates industrialization's influence on environmental quality for 20 low-income countries in Sub-Saharan Africa during 1980-2018, based on multi-country panel data. We employed two measurements of environmental quality, namely CO2 and nitrous oxide emissions. Likewise, the study applied the Fully Modified OLS and the Dynamic OLS as the most modern and suitable techniques for this kind of panel data analysis. Overall, the FMOLS and DOLS results show that industrialization has an insignificant influence on environmental quality. The results also show that these countries' population size is the main driver of changes in environmental quality. This finding implies that these countries should continue their current efforts to promote the industrial sector without undue concern that doing so will undermine sustainable development.

INTRODUCTION
Since the beginning of the new millennium, the figures show that African economies have been growing at a somewhat speedy rate (UNCTAD, 2012). The achieved growth was reflected in improvements in several areas such as trade, FDI, and progress in physical infrastructure (African Union's Agenda 2063, 2015; African Development Bank, 2015; African Transformation Report, 2014; UNCTAD, 2012; IMF, 2013). Unfortunately, evidence suggests that the present trend of growth is neither inclusive nor sustainable. Several interrelated factors have been identified as primary sources of this failure. However, bypassing industrialization, a major stage in the structural change and development process, is recognized as a critical explanation (UNCTAD, 2012; Opoku and Boachie, 2020). Theoretically, structural change is said to occur, as described by Kuznets (1966) and others, through the gradual movement of labour and other resources from agriculture towards industry and services.

Nonetheless, by shifting the economy's structure toward the industrial sector, structural transformation is a double-edged sword. It is well recognized that structural change is an essential precondition for improving living standards and generating sustained growth. However, it is not sufficient to achieve sustainable development, because such change is likely to impose high costs on ecological systems. The experience of other countries shows that the transformation from an agriculture-based economy to an industrial one is associated with considerable destruction of the environment (Fischer-Kowalski and Haberl, 2007). That is to say, despite the importance of structural change and industrialization for job creation and poverty alleviation, they might also create undesirable consequences for the quality of the environment and hence for sustainable development. In this respect, Stern (2009) argued that due to climate change disasters and rising temperatures, achieving sustainable development is challenging. Given the significance of industrialization in accomplishing sustainable development goals, on the one hand, and its potential negative impact on such goals, on the other hand, it is imperative to explore the consequences of industrialization for the environmental quality of developing countries in Africa.
Although numerous empirical studies have tried to explain the critical factors determining the ecological systems of groups of African countries, industrialization's potential and explicit role in explaining this phenomenon has been ignored (as will be discussed in the next section). To the best of our knowledge, only two studies, by Lin et al. (2016) and Opoku and Boachie (2020), addressed this matter straightforwardly for a group of African economies. The present study utilized the panel cointegration technique for 20 low-income economies in Sub-Saharan Africa (SSA) over the period 1980-2018 to explore the influence of the industrialization process on environmental quality. More specifically, this study's main objectives are, first, to analyze the EKC's validity in low-income countries in Sub-Saharan Africa using an extended version of the IPAT framework. The secondary objectives comprise identifying the key factors that affect the quality of the environment in Africa by utilizing appropriate techniques such as panel cointegration, Fully Modified OLS (FMOLS), and Dynamic OLS (DOLS). This article aims to document the experience of low-income countries in SSA with this matter, and it adds to the existing literature in three significant ways. Firstly, since so far only two studies have accounted for the role of industrialization in explaining environmental quality in Africa, the present study will add a new contribution to the field and open the door for further studies. Second, instead of dealing with African countries as a homogeneous group, as in Lin et al. (2016) and Opoku and Boachie (2020), the present study limits the analysis to the low-income countries of the continent. As per the World Bank (2020) classification, the 53 economies in the continent are classified into 23 low-income, 21 lower-middle-income, 6 upper-middle-income, and 3 high-income countries. It is well recognized that the structure of the economy and the level of development vary across countries. Thus, as UNCTAD (2012) suggested, the challenge of attaining sustainable development is different in economies at varying stages of development. Thirdly, the current study applies the most modern and suitable long-run panel techniques in the field of panel cointegration, as offered by Pedroni (1999). For robustness checking, the current study utilized two indicators of environmental quality, namely CO2 and nitrous oxide emissions, and two analysis techniques, FMOLS and DOLS. Besides, we also consider the influence of trade and FDI within the environmental quality-industrialization nexus. This study's outcomes are essential for these countries' policymakers in their current efforts to achieve, in a simultaneous way, structural transformation and social and environmental sustainability. In the subsequent section, related empirical literature is summarized. The data, estimation technique, and methodology procedures are displayed in Section 3. The obtained results are highlighted and discussed in Section 4. The final section includes the conclusion of the study in addition to policy implications and recommendations.

LITERATURE REVIEW
Following the influential work of Grossman and Krueger (1991), empirical analyses of the influence of various human actions and behavior on the environment's quality have grown extensively. However, most of these studies focused on developed countries' experiences and ignored those of emerging economies.
Despite these growing studies, the relationship between growth in per capita GDP and environmental pollution remains complicated. Indeed, the EKC suggests some demonstrative instruments for shedding light on the interrelationship between economic activities and their environmental quality consequence. The EKC indicates that in the first stage of development, the per capita income increases will be associated with deterioration in the environment at an increasing rate. However, over time and once the economy moves to a relatively high development level, there will be a gradual improvement in the environment. Grossman (1995) interpreted the inverted 'U-shaped' form in the EKC hypothesis through the three effects, which are scale, composition, and technology influences. The scale consequence denotes that there will be a massive demand for all resources in general and natural resources, particularly at the beginning of the development process journey. The direct and indirect utilization of natural resources will be converted into the production of various manufactured products. At this stage, the economy is expected to witness a considerable amount of industrial waste that creates significant damage to the environment. Second, to sustain and boost per capita GDP growth, policymakers neglect the deterioration in environmental quality. The whole ecological degradation begins to spread with a rise in the production process (per capita GDP growth). However, with continuous increases in the per capita income, the industrial component of an economy starts experiencing a transformation, and thus, the composition of an economy begins altering. However, once the economy reaches a specific level of per capita income during this stage, the public and policymakers' attention will shift towards a clean environment. Therefore, the emerging industrial sector has to adopt more friendlyenvironment tools and equipment in the production process. This is once the industries sectors begin to integrate technologies for expanding energy efficiency, and thus less and less damage to the environment will occur. The growing empirical results regarding the growth-environment nexus have yielded mixed results. Besides, most of these studies are focused on advanced economies; thus, their outcomes are not consistent and untrustworthy with poor developing countries (Carson, 2010;Stern, 2003). Likewise, even the few empirical studies related to Africa derived mixed outcomes, which creates a challenge for leaders since it will manifest dissimilar policy consequences. The inconsistency of the findings was attributed to various factors including, model specifications(linear, quadratic, and Cubic), environment measurement, the additional explanatory variables that included, and the method of estimation employed, which depends on the structure of the data (time series/panel, cross-section). Likewise, the mixed outcomes were attributed to geographic location and the chosen period of the study. According to Wagner (2008), numerous critical econometric drawbacks have been neglected in previous studies related to the environmental Kuznets curve. Recently, Katz (2015) analyzed the correlation between freshwater use and income growth, and he discovered that the finding is substantially dependent on selecting datasets and employed econometric methods. This is why, even for similar economies or panels of economies, the obtained results are mixed (Shahbaz and Sinha, 2020). 
In the present study, since the previous empirical work on this matter is voluminous, we limit the review to the empirical studies on the EKC that focused on the African continent only. More specifically, in reviewing previous studies in Africa, we divided these studies into two groups: single-country-oriented analyses and research oriented toward groups of countries. Second, we review empirical studies, regardless of the location of the country or countries covered, that incorporated, in an implicit way, industrialization as one of the critical explanations for environmental quality. This work is classified into two groups; the first group comprises studies using several versions of the decomposition techniques. The second group contains studies that incorporated a proxy for industrialization in a linear, quadratic, or cubic form. Due to the unavailability of sufficient time-series data for most African countries, most of the studies, as mentioned earlier, use cross-section or panel data. However, recently, with the relative improvement in data collection, some single-country studies have started to emerge. For instance, Kivyiro and Arminen (2014) examine the validity of the EKC hypotheses for 6 Sub-Saharan countries during 1971-2010 using the quadratic specification. The findings show that while an inverted U-shape is verified in three economies, no evidence of the EKC hypotheses is revealed in the remaining three countries. Moreover, Shahbaz et al. (2015) explore the validity of the EKC hypotheses for 13 African countries by applying the EKC's quadratic specification. The results of the Johansen cointegration method show mixed findings across these countries. Namely, the EKC shape is confirmed as inverted U, U-shaped, monotonically increasing, and no EKC in some countries. Regarding cross-country studies, Farhani and Shahbaz (2014) inspect the validity of the EKC hypotheses for 10 MENA economies during 1980-2009 using the quadratic specification. The results of both FMOLS and DOLS detected the existence of an inverted U-shape. Concerning preceding empirical studies that addressed industrialization's influence on environmental quality, as we mentioned previously, we classified these studies into two groups. The first group comprises studies that used several versions of the decomposition techniques. The second group contains studies that incorporated a proxy for industrialization in a linear, quadratic, or cubic specification of the EKC. Several forms of the decomposition technique were employed in most of these studies. For instance, Akbostancı et al. (2008) tried to identify the sources of the CO2 emissions of the Turkish manufacturing sector during 1995-2001. The Log Mean Divisia Index (LMDI) method was utilized to decompose the variations in the CO2 emissions of the manufacturing industry into five elements: changes in activity, activity structure, sectoral energy intensity, sectoral energy mix, and emission factors. The results demonstrated that the chief sources of the variation in CO2 emissions were total industrial activity and energy intensity. Likewise, in Tunc et al. (2009), carbon emissions were once more decomposed, this time into four categories: the energy structure, energy intensity, economic structure, and economic output effects. The results detected that the chief factor driving carbon emissions was economic output, and the industry sector was the top contributor to carbon emissions.
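To make the decomposition idea concrete, the sketch below implements a generic additive LMDI-I decomposition of a change in emissions into the contributions of individual factors. It is an illustrative, simplified (single-aggregate, three-factor) version of the approach used in the reviewed studies, and the example data are invented.

```python
import math

def log_mean(a: float, b: float) -> float:
    """Logarithmic mean L(a, b) used as the LMDI weight."""
    return a if math.isclose(a, b) else (a - b) / (math.log(a) - math.log(b))

def lmdi_additive(factors_0: dict, factors_t: dict) -> dict:
    """Additive LMDI-I: split C_t - C_0 into one contribution per factor,
    where emissions are the product of the factors, C = x1 * x2 * ... * xn."""
    c0 = math.prod(factors_0.values())
    ct = math.prod(factors_t.values())
    weight = log_mean(ct, c0)
    return {name: weight * math.log(factors_t[name] / factors_0[name]) for name in factors_0}

# Invented example: emissions = activity * energy intensity * carbon intensity.
base = {"activity": 100.0, "energy_intensity": 0.50, "carbon_intensity": 2.0}
now = {"activity": 140.0, "energy_intensity": 0.45, "carbon_intensity": 1.9}
effects = lmdi_additive(base, now)
print(effects)                # contribution of each factor to the change in emissions
print(sum(effects.values()))  # equals C_t - C_0 (here 119.7 - 100.0 = 19.7)
```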
Concerning the second group of the studies, few studies incorporated a proxy for industrialization in linear, quadratic, or cubic specification. For instance, Xu and Lin (2015) examine automation and urbanization's role in explaining CO2 emissions for provincial panel data in China from 1990 to 2011. An inverted U-shaped nonlinear relationship has been confirmed between industrialization and CO2 emissions. Besides, Lin et al. (2016) utilize the STIRPAT framework and panel cointegration for five African economies from 1980 to 2011. The authors decompose growth into agricultural-based growth and industrial-based growth. The FMOLS technique's results failed to identify any significant relationship between CO2 emissions and agriculturalbased development or industrial-based growth. Also, Dogan and Inglesi-Lotz (2020) tried to inspect the economic structure's impact on seven European countries' environmental quality from 1980 to 2014. The FMOLS results show the U-shaped relationship between industrialization and growth in these countries. Likewise, Ha Le (2020) examined the impact of several factors on greenhouse gas emissions for a sample of 16 economies in South and East Asia during 1995-2012. The author employs four types of emission: GHG, CO2, CH4, and N2O, and utilizes two estimations; Prais-Winsten regression with Panel corrected standard error (PCSE) and Feasible General OLS (FGOLS). The results show that the influence of industrialization on the environment depends on environmental measurement. More specifically, while industrialization activities tend to harm the CO2, its effect on the remaining three environmental measures is favorable. Likewise, Opokua and Boachieb (2020) examined industrialization's environmental impact in 36 selected African economies during 1980-2014. Using various measures for the environment quality, the Pooled Mean Group (PMG) technique indicates the insignificant impact of industrialization on the environment depend on utilized measurement for the environment. Namely, the results show that manufacturing has a statistically negligible consequence on all pollution measurements except for nitrous oxide emissions that appear adversely affected by industrialization. From the reviewed literature, it is clear that there is a lack of consensus over the relevance of the EKC to the continent in general and the impact of the structural transformation. Most importantly, the previous studies' review confirms the lack of sufficient empirical research that accounted for industrialization's expected role in explaining the critical determinants of the environmental quality for the developing countries in Africa. As we said previously, the challenge of accomplishing sustainable development is different in countries at varying development levels. Model, Variables, and Data This section aims to illustrate the model, data, and framework utilized to build the empirical analysis of industrialization's environmental quality impact. To display the theoretical links among manufacturing, income per capita, and environmental quality, we firstly specified the quality of the environmental (EQ) as a function of industrialization (IND) and real per capita GDP (Y)and its square (Y 2 ) as shown in the general form below: Equation 1 demonstrates the fundamental role of economic growth in affecting the environment's status; thus, the EKC was combined into our investigation. 
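The display equation referred to above as Equation 1 appears to have been lost in extraction. The LaTeX block below is a minimal reconstruction of the general form described in the text, together with a log-quadratic estimating equation of the kind the later discussion (EQ2, with population, FDI, and trade added as controls) suggests; the exact functional form, subscripts, and variable labels are assumptions rather than the authors' original notation.

```latex
% Eq. (1): general functional form implied by the surrounding text
EQ_{it} = f\!\left(IND_{it},\; Y_{it},\; Y_{it}^{2}\right)

% A log-quadratic estimating form consistent with the later discussion of EQ2
% (population, FDI and trade added as controls); the notation is assumed:
\ln EQ_{it} = \alpha_{i} + \beta_{1}\ln IND_{it} + \beta_{2}\ln Y_{it}
            + \beta_{3}\left(\ln Y_{it}\right)^{2} + \beta_{4}\ln POP_{it}
            + \beta_{5}\, FDI_{it} + \beta_{6}\, TRADE_{it} + \varepsilon_{it}
```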
Selecting an appropriate measure of environmental quality was crucial, since it is central to the interpretation of this study. The ecological consequences of industrialization can take the form of various types of pollution. Following previous studies, we employ two measures of environmental quality: CO2 emissions and nitrous oxide emissions. Two considerations motivate this choice. First, although a wide range of environmental indicators exists, for the poor countries of Africa and over the study period data are available only for these two variables. Second, using more than one indicator adds robustness to the analysis. Following recent empirical research on environmental quality, we add to Equation 1 three further explanatory variables that may affect ecological quality either directly or indirectly through their impact on industrialization. The variable with a direct contribution is population growth, as hypothesized in the IPAT framework (Rosa and Dietz, 2012; Chertow, 2000). The two variables with an indirect contribution are foreign direct investment, as hypothesized in the pollution haven and halo effect hypotheses (Copeland, 2005; Eskeland and Harrison, 2003; Temurshoev, 2006), and trade, as postulated in the Porter hypothesis (Porter and Van Der Linde, 1995; Ren et al., 2014; Seker et al., 2015; Zhang and Zhou, 2016; Sapkota and Bastola, 2017). Variable definitions and data sources are given in Table A2 in the appendix. The correlation matrix in Table 1 shows a relatively high correlation between the variable of interest, industry, and each pollution measure (LCO2 and LNIT). This observation is only suggestive, however, since correlation is not causation. Estimation Approaches This section explains the steps implemented to reach the study's objective. As in previous empirical work with panel data, we first examine the statistical features of the data by conducting cross-sectional dependence tests. In the second step, whose design depends on the first step's outcome, we perform unit root tests, followed by panel cointegration tests. In the final step, if a long-run relationship between the variables is identified, we carry out the long-run analysis using the FMOLS and the DOLS estimators. According to Shahbaz et al. (2017), unobserved common shocks that become an essential component of the error terms (ET) lead to cross-sectional dependence in cross-country data. Ignoring this issue may produce unreliable standard errors for the estimated coefficients (Driscoll and Kraay, 2001). Following previous work in this field, and for robustness, we implement four different cross-sectional dependence tests. Once the cross-sectional dependence tests are performed, the next step is to examine the order of integration of the variables via panel unit root tests. Since several unit root tests are available, the choice depends mainly on the outcome of the first step (i.e., cross-sectional dependence). If the unit root results show that none of the variables is integrated of order two, I(2), we move to the third step, the panel cointegration tests. If these tests provide evidence of cointegration between the selected variables, we move to the final step and perform the principal analysis to obtain our key results.
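As an illustration of the first step in this pipeline (and not necessarily the exact tests the authors ran), the sketch below computes the Pesaran (2004) CD statistic for a balanced panel; under the null of cross-sectional independence the statistic is asymptotically standard normal. The data layout and variable names are hypothetical.

```python
import numpy as np
import pandas as pd
from scipy.stats import norm

def pesaran_cd(panel: pd.DataFrame):
    """Pesaran (2004) CD test on a balanced panel.

    `panel` is a (T x N) DataFrame: rows are years, columns are countries,
    values are the series being tested (e.g. log CO2 emissions or model
    residuals). Returns the CD statistic and its two-sided p-value.
    """
    T, N = panel.shape
    corr = panel.corr().to_numpy()            # N x N pairwise correlations
    iu = np.triu_indices(N, k=1)              # upper triangle, i < j
    cd = np.sqrt(2.0 * T / (N * (N - 1))) * corr[iu].sum()
    pval = 2.0 * (1.0 - norm.cdf(abs(cd)))
    return cd, pval

# Hypothetical usage: `lco2` indexed by year with one column per country.
# cd_stat, p = pesaran_cd(lco2)
# print(f"CD = {cd_stat:.2f}, p = {p:.3f}")
```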
Conventional panel data estimators such as random effects, fixed effects, and GMM may yield ambiguous and unreliable coefficients when applied to cointegrated panel data (Awad, 2019; Shahbaz et al., 2017). In addition, EQ2 may suffer from an endogeneity problem arising from either omitted variables or reverse causality. On the one hand, some relevant control variables may have been left out of EQ2; our findings would then be biased if the omitted variables are correlated with the industrialization variable. On the other hand, environmental quality may itself influence industrialization, reflecting reverse causality. To overcome these problems, EQ2 has been estimated with two techniques: Fully Modified Ordinary Least Squares (FMOLS) and Dynamic Ordinary Least Squares (DOLS). Both techniques were developed by Pedroni (2000, 2001) and are commonly used in the literature. It is well recognized that the panel FMOLS and DOLS estimators reduce endogeneity and autocorrelation between the independent variables and the error terms (ET), and thus produce efficient results. For this reason, we follow the panel FMOLS and DOLS methods, whose basic procedures are given in Eqs. (4) and (5), where A/EQ refer to the explanatory/dependent variable in Eq. (3). The empirical evidence is conflicting as to whether the FMOLS or the DOLS method should be preferred (Harris and Sollis, 2003). On the one hand, the FMOLS method overcomes the autocorrelation issue by default, but it is non-parametric. On the other hand, although the DOLS method is a parametric test, its weakness lies in the loss of degrees of freedom caused by the leads and lags (Maeso-Fernandez et al., 2006). RESULTS AND DISCUSSION Table 2 reports the results of the cross-sectional independence tests. The results detect the existence of cross-sectional dependence for each selected variable. We continue by carrying out panel unit root tests that take this cross-sectional dependence into account. The LLC statistic of Levin et al. (2002) and the CADF statistic of Pesaran (2007) are the two tests that consider such dependency (Awad, 2019). The results of these tests are reported in Table 3 and indicate that all the variables are I(1). This finding implies that the emissions measures, industrialization, economic growth, population, trade, and FDI share a unique order of integration in each panel. Therefore, for each panel, we inspected the cointegration relationship between the variables. The Pedroni (1999, 2004) panel cointegration tests are displayed in Table 4. The results suggest that, out of the seven Pedroni tests, five statistics confirm the existence of cointegration in each specification. However, as Pedroni (1999) proposed, the Panel ADF and Group ADF statistics are the leading statistics for small samples; in other words, when the results are mixed, as in our case, the Panel ADF and Group ADF statistics can serve as the benchmark. Consequently, based on the Panel ADF and Group ADF results, we conclude that a long-run relationship is confirmed for each specification. Table 5 reports the long-run coefficients estimated with the FMOLS and DOLS approaches. Prior to discussing the findings, we checked for a possible multicollinearity problem between the variables in each model. Tables A3 and A4 in the appendix show the Variance Inflation Factor (VIF) test implemented for each specification. The results show the absence of such a problem in our analysis 2.
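The multicollinearity check reported in Tables A3 and A4 can be reproduced with variance inflation factors. A minimal sketch using statsmodels follows; the regressor column names are assumptions made only for illustration.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

def vif_table(X: pd.DataFrame) -> pd.Series:
    """Variance inflation factor for each regressor (a constant is added).
    Values above roughly 10 are the usual warning sign of multicollinearity."""
    Xc = sm.add_constant(X)
    vifs = {col: variance_inflation_factor(Xc.to_numpy(), i)
            for i, col in enumerate(Xc.columns) if col != "const"}
    return pd.Series(vifs, name="VIF")

# Hypothetical usage with the regressors of EQ2 (column names assumed):
# print(vif_table(df[["lind", "ly", "ly_sq", "lpop", "fdi", "trade"]]))
```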
Now we move forward and look at the FMOLS and DOLS outcomes. 2 We tested for potential collinearity problems amongst the regressors using the Coefficient Variance Decomposition (CVD) test; the results, which are not reported here, show no collinearity problem in our estimates. The results of both FMOLS and DOLS are identical, which indicates the robustness of our analysis. They tell us that our primary variable of interest, industry, has a statistically insignificant impact on both measures of emitted pollutants. The negligible effect of industrialization on the environment could be due to the region's low level of industrial activity. Indeed, aggregate data on industry value added in Sub-Saharan Africa show a decreasing trend over time. For instance, while Sub-Saharan Africa (SSA) and South Asia (SA) had the same growth rate of industry value added in 2000 (10%), by 2017 SA registered a growth rate of 24% whereas SSA remained below 10%. As mentioned previously, unlike the experience of other regions, in Africa the economy jumps directly from agriculture to informal economic activities in the service sector. According to Opoku and Yan (2019), the industrial sector's contribution to Africa's growth is either low or non-existent. Likewise, according to Gui-Diby and Renard (2015), industrialization has not yet taken place in Africa. Similarly, the Africa Growth Initiative (2016) has explained that Africa's industrial improvement and drive have lagged for more than 40 years, and according to Zamfir (2016), Africa's share in global manufacturing is tiny. This study's outcome is, to some extent, consistent with previous studies that addressed this matter in Africa, namely the work by Lin et al. (2016) and Opoku and Boachie (2020). Lin et al. (2016) use the same estimation technique (FMOLS) and arrive at the same conclusion on the insignificant impact of industrialization on environmental quality for five African countries. Our finding is also consistent with the result of Opoku and Boachie (2020) when CO2 is utilized, but differs when environmental quality is proxied by nitrous oxide emissions. Concerning the impact of per capita GDP and its quadratic term, the results show that while per capita GDP is negative and statistically significant, its quadratic term is positive and statistically significant. This suggests the presence of a "U"-shaped relationship between the two environmental measures and income in the low-income economies of Africa. Following Hasanov et al. (2019), and to confirm that the results are consistent with reality, we calculated the turning point using the average results of both estimation methods. The estimated turning point is approximately equal to 5.5. This value is lower than the average income of the countries in our study (Table 6). The finding implies that, for the poor countries of Africa, the growth process is expected to continue generating more damage to the environment as long as per capita income remains below the computed turning point. However, once this group of countries moves beyond that threshold, the growth process will generate less damage to the environment. Our findings are thus consistent with some of the studies reviewed previously within the African context and contradict others.
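The turning point quoted above follows from simple arithmetic on the estimated income coefficients of a log-quadratic specification. The sketch below shows the calculation; the coefficient values are placeholders chosen only to reproduce a turning point of 5.5 and are not the paper's estimates.

```python
import numpy as np

def ekc_turning_point(beta_y: float, beta_y2: float) -> float:
    """Turning point of a quadratic-in-log-income specification:
    d(lnEQ)/d(lnY) = beta_y + 2*beta_y2*lnY = 0  =>  lnY* = -beta_y / (2*beta_y2).
    Returns lnY*; exponentiate to express it in per capita GDP units."""
    return -beta_y / (2.0 * beta_y2)

# Placeholder coefficients (illustrative only, not the paper's estimates):
beta_y, beta_y2 = -2.2, 0.2
ln_y_star = ekc_turning_point(beta_y, beta_y2)      # 2.2 / 0.4 = 5.5
print(f"turning point: lnY* = {ln_y_star:.2f}, "
      f"Y* approx. {np.exp(ln_y_star):.0f} (constant 2010 US$)")
```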
The results indicate that the population is a leading and significant driver for the selected countries' emissions. As proposed by the STIRPAT framework, population growth is a considerable factor driving environmental problems comprising climate change. The increase in the population can cause damage to the environment in several ways. The pressure on the limited land resources will force the society to either destroy imperative forest resources or overexploitation arable land. Likewise, natural resources and climate are expected to be affected negatively due to population growth that will reflect more production and consumption. Numerous analyses have been conducted on the potential influences of the population on the environment ( CONCLUSION AND POLICY IMPLICATIONS The leaders in Africa have implemented several types of strategies to improve living standards and achieve sustainable growth. Although most of these countries witnessed and, to some extent, positive growth in per capita GDP, the poverty rate and the unemployment rate started to increase and expand. This led to a significant shift in the policymakers' mindset in the continent to implement a new strategy to allocate resources toward a more inclusive growth pattern. The structural transformation of the economy from agriculture, a and raw material-based economy, to a more industrialized economy, has been recognized as an essential tool in this strategy. However, evidence and the experience of the other countries show that industrialization is associated with environmental damage. Thus, it seems that there is a trade-off between automation and ecological quality. The present study employed panel data techniques to investigate the potential impact of industrialization on the environment quality for 19 developing countries in Sub-Saharan Africa during 1980-2018. The present study employed two indicators of environmental quality as well as the method of estimation. More specifically, for environmental quality, the current studies used CO2 and nitrous dioxide emissions. Besides, the FMOLS, as well as the DOLS, was utilized in the analysis. The results seem to bring good news for the developing countries in Africa since no significant impact for the industrialization of the environment quality has been detected. This finding implies that current observed efforts in the industrialization process should continue without considering it has a potentially adverse impact on these countries' environment. The environmental issue should be handled through topics related to population behavior. This study's results are considerable and provide imperative policy implications for the countries inspected in the panels and regional economic blocks, and environmental organizations. Our results also crucial for future studies, as it is expected that our research may open additional research directions. Other studies are still required for in-depth analysis and investigation for this matter. Future studies may, for example, with the Africa context, compare the outcome of the industrialization on the environment between this group of countries (low income) and other groups such as the middle-income group. Likewise, future studies may address the same issue by looking for low-income countries' experiences in different regions. Similarly, further studies may employ an alternative proxy for industrialization or add more explanatory variables or another specification. 
Carbon dioxide emissions are those stemming from the burning of fossil fuels and the manufacture of cement. They include carbon dioxide produced during consumption of solid, liquid, and gas fuels and gas flaring Nitrous oxide emissions (thousand metric tons of CO 2 equivalent) Nitrous oxide emissions are emissions from agricultural biomass burning, industrial activities, and livestock management Industry, value added (constant 2010 US$) Industry corresponds to ISIC divisions 10-45 and includes manufacturing (ISIC divisions 15-37). It comprises value added in mining, manufacturing (also reported as a separate subgroup), construction, electricity, water, and gas. Value added is the net output of a sector after adding up all outputs and subtracting intermediate inputs. APPENDIXES It is calculated without making deductions for depreciation of fabricated assets or depletion and degradation of natural resources. The origin of value added is determined by the International Standard Industrial Classification (ISIC), revision 3. Data are in constant 2010 U.S. dollars Population, total The total population is based on the de facto definition of population, which counts all residents regardless of legal status or citizenship GDP per capita (constant 2010 US$) GDP per capita is gross domestic product divided by midyear population. GDP is the sum of gross value added by all resident producers in the economy plus any product taxes and minus any subsidies not included in the value of the products. Foreign direct investment, net inflows (% of GDP) Foreign direct investment are the net inflows of investment to acquire a lasting management interest (10 percent or more of voting stock) in an enterprise operating in an economy other than that of the investor. It is the sum of equity capital, reinvestment of earnings, other long-term capital, and short-term capital, as shown in the balance of payments. This series shows net inflows (new investment inflows less disinvestment) in the reporting economy from foreign investors and is divided by GDP Trade (% of GDP) Trade is the sum of exports and imports of goods and services measured as a share of gross domestic product
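For readers who want to rebuild a comparable panel, the appendix variables correspond to standard World Development Indicators series. The mapping below is an assumption (the indicator codes should be verified against the WDI catalogue), and the preparation function simply applies the log transforms implied by the LCO2/LNIT notation used in the text.

```python
import numpy as np
import pandas as pd

# Assumed WDI indicator codes for the appendix variables (verify before use):
WDI_CODES = {
    "co2":   "EN.ATM.CO2E.KT",        # CO2 emissions (kt)
    "nit":   "EN.ATM.NOXE.KT.CE",     # nitrous oxide emissions (kt CO2 equivalent)
    "ind":   "NV.IND.TOTL.KD",        # industry value added, constant 2010 US$
    "pop":   "SP.POP.TOTL",           # total population
    "gdppc": "NY.GDP.PCAP.KD",        # GDP per capita, constant 2010 US$
    "fdi":   "BX.KLT.DINV.WD.GD.ZS",  # FDI net inflows, % of GDP
    "trade": "NE.TRD.GNFS.ZS",        # trade, % of GDP
}

def prepare_panel(raw: pd.DataFrame) -> pd.DataFrame:
    """raw: long panel with columns country, year and the variables above.
    Adds the log transforms and the squared log-income term used in EQ2."""
    df = raw.copy()
    for var in ["co2", "nit", "ind", "pop", "gdppc"]:
        df[f"l{var}"] = np.log(df[var])
    df["lgdppc_sq"] = df["lgdppc"] ** 2
    return df.set_index(["country", "year"]).sort_index()
```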
6,916
2021-11-05T00:00:00.000
[ "Environmental Science", "Economics" ]
A characterization of thermal structure and conditions for overshooting of tropical and extratropical cyclones with GPS radio occultation Introduction Tropical cyclones (TCs) are destructive events that every year cause many deaths, injuries and damage to human property and landscape. They are the natural catastrophes that account for major economic losses in several countries including the USA (Pielke et al., 2003; Emanuel, 2005). So far, studies on TCs are neither able to clearly detect trends in the frequency and intensity of these phenomena nor able to understand what impact climate change could have on them (Landsea et al., 2006; Emanuel et al., 2008; Emanuel, 2013; Kunkel et al., 2013). However, it is predicted that major economic losses due to TCs may double in the future (Mendelsohn et al., 2013). TCs hit whatever they find along their path without any distinction between poor and rich countries. Recently, the landfall of hurricane Sandy was considered one of the most destructive events in USA east coast history (Halverson and Rabenhorst, 2013), while typhoon Haiyan created a devastating tragedy in the Philippines (Chiu, 2013). We are presently able to predict the track of TCs (100-200 km error) with good accuracy within 12 to 24 h (Goerss, 2000; Roy and Kovordányi, 2012), but we are still far from forecasting the intensity of the storm (Emanuel, 1999; De Maria et al., 2005; Lin et al., 2013) and understanding its development (Montgomery et al., 2012). Satellite measurements have drastically improved TC forecast (e.g., Dvorak, 1975) and monitoring accuracy (Brueske and Velden, 2003; Demuth et al., 2004; Velden et al., 2006) by using different remote-sensing instruments on meteorological and research satellites. Further progress was made in the last decade by the global positioning system (GPS) radio occultation (RO) technique (e.g., Huang et al., 2005). Wong and Emanuel (2007), Luo et al. (2008) and Vergados et al. (2013) demonstrated that there is a connection between cloud top height and cloud top temperature and the intensity of the storm. Biondi et al. (2012, 2013) showed a correlation between the cloud top altitude and the storm's thermal structure. Knowledge of the thermal structure gives important information on the cloud top height, and this entails a better understanding of atmospheric circulation and troposphere-stratosphere transport, which are still poorly understood (Danielsen, 1993; Folkins and Martin, 2005).
The measurement of atmospheric parameters (such as temperature) with high vertical resolution and accuracy at the tropopause level is difficult especially during severe weather events (e.g., TCs).Polar-orbiting satellites in low-Earth orbit do not provide suitable temporal and spatial (vertical and horizontal) resolution to study mesoscale weather phenomena.Geostationary satellites have excellent horizontal and temporal resolution for this purpose, but lack precise vertical discrimination, and offer little information about the tropical or subtropical tropopause.Ground-based measurements are too sparse and often not reliable in the upper troposphere and lower stratosphere (UTLS). Many studies have been conducted to determine the altitude of the storm cloud top height using satellite instruments and different techniques (Knibbe et al., 2000;Koelemeijer et al., 2002;Poole et al., 2002;Platnick et al., 2003;Minnis et al., 2008;Chang et al., 2010;Biondi et al., 2013), but the results depend strongly on the physical retrieval method and on the satellite data used (Sherwood et al., 2004), with errors ranging from about 400 m (Biondi et al., 2013) for a selected number of cases to 3 km (Chang et al., 2010).Some other studies have analyzed the UTLS during TCs using limb sounding measurements such as Atmospheric Infrared Sounder (AIRS) and Microwave Limb Sounder (MLS) with a vertical resolution of 2 to 3 km (Ray and Rosenlof, 2007). The GPS RO technique (Kursinski et al., 1997;Anthes, 2011;Steiner et al., 2011) allows for the estimation of atmospheric temperature in remote areas and during extreme weather events with global coverage and high vertical resolution and accuracy (Steiner et al., 2013), avoiding temperature smoothing issues in the UTLS (given by microwave and infrared radiometers) and improving the poor temporal and spatial coverage given by satellite lidars, radars and balloon soundings. The objective of this study is to analyze the thermal structure of TCs by using RO measurements for different storm intensities and different ocean basins where TCs develop.We aim to show that the RO measurements are well suited for studying severe storms and for evaluating the storms' contribution to the atmospheric circulation (Pommereau and Held, 2007;Corti et al., 2008;Romps and Kuang, 2009). In Sect. 2 we describe the data sets used, in Sect. 3 we give a description of the methodology, and in Sect. 4 we describe the results obtained analyzing all the RO profiles co-located with TCs.In the final section, we report the conclusions highlighting the possible future developments and applications. Tropical cyclones' best tracks We have downloaded the TCs' best tracks from the International Best Track Archive for Climate Stewardship (IB-TrACS; http://www.ncdc.noaa.gov/ibtracs/)(Knapp et al., 2010) in Network Common Data Form (netCDF) format.IB-TrACS is a complete archive containing information about TCs all around the world combining the data acquired by several agencies responsible for different ocean basins.For all the TCs the most important characteristics are reported, including the following: TC name, date and time of acquisition (every 3 or 6 h depending on the agency); latitude and longitude of the TC center; source (agency data provider); wind speed (averaged over 1 or 10 min depending on the agency); and pressure. 
GPS radio occultation temperature We have used the GPS RO product level 2 (L2) (including refractivity, temperature and pressure) processed by the Wegener Center for Climate and Global Change (WEGC) through the new occultation processing system (OPS) version 5.6 based on University Corporation for Atmospheric Research (UCAR) version 2010.2640orbit and excess phase data (Schwaerz et al., 2013).The WEGC OPSv5.6 is based on a geometrics optics retrieval combined with a wave optics retrieval in the lower and middle troposphere.A bending angle optimization is performed at high altitudes with colocated short-range forecast profiles of the European Centre for Medium-Range Weather Forecasts (ECMWF). The vertical resolution ranges from about 100 m in the lower troposphere to about 1 km in the stratosphere (Gorbunov et al., 2004;Kursinski et al., 1997).The horizontal resolution is about 1.5 km across ray and ranges from about 60 to about 300 km along ray (Melbourne et al., 1994;Kursinski et al., 1997). Physical temperature is retrieved based on an optimal estimation retrieval with co-located ECMWF short-term forecast profiles as background data (the latter contribute relevant information in the middle and lower troposphere).For the present study of TCs we use the OPSv5.6 physical temperature profiles. From this OPSv5.6archive, we use data from the following missions: the Satélite de Aplicaciones Scientíficas C (SAC-C) from 2001 to 2011 (Hajj et al., 2004), the Challenging Minisatellite Payload (CHAMP) from 2001 to 2008 (Wickert et al., 2001), the Gravity Recovery And Climate Experiment A (GRACE-A) from 2007 to 2012 (Beyerle et al., 2005), the Constellation Observing System for Meteorology, Ionosphere and Climate (COSMIC) from 2006 to 2012 (Anthes et al., 2008).In order to have a suitable mean reference field available against which anomalies can be defined, we have created a GPS RO temperature reference climatology averaging all the GPS RO profiles collected in the period 2001 to 2012 from the different missions to monthly means for a 5 • × 5 • horizontal resolution.The climatology is finally provided at a vertical sampling grid of 100 m and at a horizontal grid sampled at 1 • × 1 • in longitude and latitude and it will be denoted in the following sections as T clim . Tropopause altitudes Tropopause altitudes were computed from individual RO temperature profiles (Rieckh et al., 2014), using the lapse rate definition of the World Meteorological Organization (WMO, 1957).This definition allows for finding multiple tropopauses, which was a requirement for this study.The WMO states that 1. "The first tropopause is defined as the lowest level at which the lapse rate decreases to 2 • C km −1 or less, provided also the average lapse rate between this level and all higher levels within 2 km does not exceed 2 • C km −1 ." 2. "If above the first tropopause the average lapse rate between any level and all higher levels within 1 km exceeds 3 • C km −1 , then a second tropopause is defined by the same criterion as under (1).This tropopause may be within or above the 1 km layer." An example of tropopause altitudes as a function of latitude is shown in Fig. 1 with about 60 000 cases in January 2007 (a) and in July 2007 (b): the tropopause has a seasonal variability with higher altitudes during the Northern Hemisphere winter (Rieckh et al., 2014). 
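The WMO lapse-rate definition quoted above translates into a simple scan over each RO temperature profile. The following sketch finds the first (lapse-rate) tropopause on a profile given as altitude and temperature arrays; it is an illustrative implementation, not the exact algorithm of Rieckh et al. (2014), and the search bounds are assumptions.

```python
import numpy as np

def first_tropopause(z_km, t_k, search=(5.0, 22.0)):
    """WMO (1957) lapse-rate tropopause from a single temperature profile.

    z_km : numpy array of altitudes in km (ascending), t_k : temperatures in K.
    Returns the lowest altitude where the lapse rate -dT/dz drops to 2 K/km
    or less and the mean lapse rate to every level within the 2 km above it
    also stays at or below 2 K/km; returns None if no level qualifies.
    """
    zmin, zmax = search
    for k in range(len(z_km) - 1):
        if not (zmin <= z_km[k] <= zmax):
            continue
        lapse = -(t_k[k + 1] - t_k[k]) / (z_km[k + 1] - z_km[k])
        if lapse > 2.0:
            continue
        above = (z_km > z_km[k]) & (z_km <= z_km[k] + 2.0)
        if not above.any():
            continue
        mean_lapse = -(t_k[above] - t_k[k]) / (z_km[above] - z_km[k])
        if np.all(mean_lapse <= 2.0):
            return float(z_km[k])
    return None
```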
We finally computed monthly mean tropopause altitudes based on the individual tropopause altitudes for each month and for zonal means of 10-degree width in latitude.Cloud top altitudes were then compared to mean tropopause altitudes (± standard deviation) for the detection of possible overshootings into the stratosphere. Tropical cyclone cloud top height The mean GPS RO latitude and longitude tangent points were co-located with the TCs' center coordinates in a time window of 6 h and a space window of 600 km (Table 1), leading to more than 20 000 collocation cases.The RO profiles were also sub-selected, for checking the sensitivity to selection criteria, in a shorter time window (of 3 and 1 h) and in a smaller space window of 300 km.The results were found consistent with the larger data set (6 h and 600 km), which we finally used in this study, allowing for a larger number of samples for more robust statistics.We investigated different ocean basins as shown in Fig. 2: North Atlantic Ocean (NA), eastern Pacific Ocean (EP), western Pacific Ocean (WP), South Pacific Ocean (SP), northern Indian Ocean (NI) and southern Indian Ocean (SI).For any ocean basin the profiles were classified (Table 2) using a common storm intensity scale (tropical depression (TD), tropical storm (TS), TC categories 1-5) given by the Saffir-Simpson Hurricane wind scale. Due to the large dimensions of a TC and its relatively slow horizontal movement, it is possible that the same RO profile is selected more than once with different temporal and spatial distances from the TC center.In these cases we have included only the co-located RO profile with the shortest delay. For any ocean basin and for each storm category, we have sampled the RO profiles around the storm center as shown in Fig. 3, where we show the distribution of GPS RO profiles within 6 and 3 h around the center of tropical storms in the North Atlantic Ocean basin.In Fig. 4 we show the distribution of the same profiles along the real tracks in latitude and longitude. For each ocean basin and each storm category we computed the temperature anomaly (T anomaly ) of any single RO profile comparing the temperature during the storm (T storm ) with the local monthly mean climatology (T clim ) as defined in Sect.2.2 (i.e., in the respective 1 • × 1 • bin): We finally averaged all the profiles in the same ocean basin for each storm category to be able to compare the thermal structure characterizing the basin itself.In all the ocean basins, the TCs often move from the tropics to extratropical areas (especially in the North Atlantic Ocean and the western Pacific Ocean).We categorized the profiles as "tropical" between 20 • S and 20 • N and as "extratropical" beyond 20 • latitude as shown in Table 2, for highlighting the different thermal structures with the variation of latitude. For monitoring possible overshooting conditions during a storm, we computed the height of the lowest anomaly minimum (H coldest ) between 10 and 22 km in altitude for each T anomaly profile (Biondi et al., 2013), the monthly mean tropopause altitude (H mm_trop ) of the respective month and area (Sect.3.1), and the corresponding standard deviation of the monthly mean tropopause altitude (H mstd_trop ).We used the multi-annual standard deviation estimate for each month of the year here (e.g., October 2001 to 2012 data for October; sensitivity testing showed that using standard deviation estimates for individual months leads to essentially the same results). 
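A minimal sketch of the co-location step described above (6 h and 600 km windows, keeping for each profile the match with the shortest time delay), together with the anomaly definition T_anomaly = T_storm − T_clim, is given below. The column names, the cross-join strategy, and the handling of the 1° × 1° climatology bin are assumptions made for illustration.

```python
import numpy as np
import pandas as pd

R_EARTH_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two points given in degrees."""
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dphi = p2 - p1
    dlam = np.radians(lon2 - lon1)
    a = np.sin(dphi / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dlam / 2) ** 2
    return 2 * R_EARTH_KM * np.arcsin(np.sqrt(a))

def colocate(ro: pd.DataFrame, track: pd.DataFrame,
             max_hours=6.0, max_km=600.0) -> pd.DataFrame:
    """Match RO tangent points (profile_id, time, lat, lon) with TC best-track
    points (time, lat, lon), keeping the track point with the shortest delay."""
    pairs = ro.merge(track, how="cross", suffixes=("_ro", "_tc"))
    dt = (pairs["time_ro"] - pairs["time_tc"]).abs().dt.total_seconds() / 3600.0
    dist = haversine_km(pairs["lat_ro"], pairs["lon_ro"],
                        pairs["lat_tc"], pairs["lon_tc"])
    ok = pairs[(dt <= max_hours) & (dist <= max_km)].assign(delay_h=dt)
    return ok.sort_values("delay_h").groupby("profile_id", as_index=False).first()

# Temperature anomaly of a co-located profile against the 2001-2012 monthly
# climatology of its 1 deg x 1 deg bin, on the same vertical grid:
# t_anomaly = t_storm - t_clim
```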
For robustness, we used two different references for detecting the possible overshooting conditions:

H_coldest > H_mm_trop + H_mstd_trop (2)

H_coldest > H_mm_trop + H_mstd_trop + 1.0 km (3)

where 1.0 km is the uncertainty for TC cloud top altitude detection using GPS RO as estimated by Biondi et al. (2013) from analysis with co-located lidar data. The uncertainty occurs mainly due to the finite resolution of RO data (see Sect. 2.2) and also due to co-location uncertainty, whereas the RO geopotential height and hence altitude allocation error is only about 10 m in the troposphere and around the tropopause. According to Eq. (2), a profile is considered to be indicative of possible overshooting when the lowest anomaly minimum (the cloud top) overpasses the tropopause monthly mean altitude plus its standard deviation. Equation (3) defines an even more robust condition, where H_coldest is considered to be indicative of possible overshooting when the lowest anomaly minimum (the cloud top) overpasses the tropopause monthly mean altitude plus its standard deviation plus the 1 km uncertainty margin. We have used these two different thresholds, one less and one more conservative, for detecting possible overshooting because there is still large uncertainty in the atmospheric physics community regarding overshooting detection. Equation (2) should already be accurate enough due to the temperature accuracy of GPS RO, but with Eq. (3) we also want to take into account the uncertainty of the technique used for detecting the TC cloud top altitude (Biondi et al., 2013). Since there is not enough independent reference data available for validating the results at this point, we report both results and do not advocate a more exact definition based on our current knowledge. Thermal structure The temperature anomaly during TCs usually shows a tropospheric warming and a sharp inversion just below the cloud top, with a cooling corresponding to the cloud top altitude (Biondi et al., 2012, 2013). With reference to these results, we assume that the storm cloud top altitude corresponds to H_coldest. However, we note possible uncertainties regarding the cooling signature, which may also be due to the presence of a large-scale dynamical response to latent heating below the cold anomaly (Randel et al., 2003; Holloway and Neelin, 2007) or to gravity waves originating from the TC (Tsuda et al., 2000; Kiladis et al., 2001; Kim and Alexander, 2015). As shown in Fig. 5, this behavior is in general similar for TCs in the tropical and extratropical areas, but in the extratropical area the amplitudes of tropospheric warming and cloud top cooling are amplified. In Fig. 5 we show as an example the 84 RO profiles (69 extratropical and 15 tropical) of a TC category 2 in the North Atlantic Ocean basin; the temperature anomaly profiles at the storm's location are computed relative to the monthly mean temperature climatology (2001 to 2012) for the respective location (1° × 1° bin). The same feature is evident in all the other ocean basins for all the categories (not shown). The mean temperature anomaly for tropical profiles (yellow line) reaches a maximum of about 2.5 K at about 10 km in altitude and a minimum of about −2.5 K near 16 km in altitude. The mean temperature anomaly for extratropical profiles (light-blue line) shows the same features but more pronounced, with a maximum of about 6 K and a minimum of about −4 K. Figure 6 shows mean temperature anomaly profiles for the western Pacific Ocean basin and the South Pacific Ocean basin, respectively, for all storm categories as representative of the two hemispheres. Overall, during a TC the troposphere is warmer than the climatological mean and the cloud top is colder. In the Northern Hemisphere, above the altitude H_coldest there is a warming in the stratosphere, which is not as evident in the Southern Hemisphere.
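Returning to the overshooting criteria, Eqs. (2) and (3) above reduce to two threshold checks once H_coldest and the monthly mean tropopause statistics are available. A minimal sketch follows, with variable names assumed.

```python
def overshoot_flags(h_coldest_km: float, h_mm_trop_km: float,
                    h_mstd_trop_km: float, margin_km: float = 1.0):
    """Possible-overshooting criteria of Eqs. (2) and (3).

    Eq. (2): H_coldest > H_mm_trop + H_mstd_trop
    Eq. (3): H_coldest > H_mm_trop + H_mstd_trop + 1.0 km (cloud-top uncertainty)
    """
    eq2 = h_coldest_km > h_mm_trop_km + h_mstd_trop_km
    eq3 = h_coldest_km > h_mm_trop_km + h_mstd_trop_km + margin_km
    return eq2, eq3

# Example: cloud top at 17.6 km, tropopause 16.8 km with 0.5 km standard deviation
# -> Eq. (2) flags possible overshooting, the stricter Eq. (3) does not.
print(overshoot_flags(17.6, 16.8, 0.5))   # (True, False)
```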
Figure 7 gives an overview of the minimum temperature anomaly vs. altitude of the coldest point for all ocean basins and storm categories.TDs and TSs usually reach the coldest point at lower altitudes (4 basins out of 6) and the TCs in categories 4 and 5 reach the coldest point at higher altitudes.No relevant differences can be highlighted for the storm cat- egories 1, 2 and 3.The coldest anomalies are found in the South Pacific for all storm categories: between −8 and −6 K for TCs, between −6 and −5 K for TSs and about −4 K for TDs.For this area, the H coldest also is at higher altitudes (between 17.4 and 17.9 km) than in any other basin (Table 2).Temperature anomalies over the southern Indian Ocean are also usually colder than in the other ocean basins (except South Pacific), with higher H coldest .In the Southern Hemisphere the storms reach higher altitudes than in the Northern Hemisphere (Table 2) and they also have colder cloud tops. Another feature characteristic of storms is the double tropopause (Danielsen, 1993;Corti et al., 2008;Biondi et al., 2012Biondi et al., , 2013;;Davis et al., 2014), which is visible in Fig. 6b for TC category 5 (dotted lines).This is also apparent for all the TC categories in the northern Indian Ocean basin (not shown), since the small number of cases does not smooth the double variation such as it happens for the other ocean basins and categories. Tropopause uplift and possible overshooting The overshooting due to convective systems and TCs is an important topic for understanding the atmospheric circulation and the climate changes (Pommereau and Held, 2007;Romps and Kuang, 2009), but it is still debated due to the difficulties of measuring atmospheric parameters during severe events.Using the definition of possible overshooting conditions given by Eqs.(2) (3), we compared any single anomaly temperature profile with the corresponding zonal monthly mean tropopause altitude, computed for latitude bands with 10-degree width, obtaining the results reported in Tables 3 and 4. Table 3 reports the details for each ocean basin and each storm category distinguishing between events in the tropical or extratropical area.Table 4 gives a summary. As already described in Sect.3.2 and following the findings of Biondi et al. (2012Biondi et al. ( , 2013)), we assume that the lowest temperature anomaly minimum corresponds to the TC cloud top altitude and the cyclone's strong convection causes the local tropopause uplift.According to this theory the TC creates a double tropopause where the primary tropopause is due to the presence of the TC's cloud top and the secondary tropopause is the former tropopause which is pushed up by the convection (Biondi et al., 2012(Biondi et al., , 2013)). In Table 3 it is evident that the number of possible overshootings obtained by using Eq. ( 3) is much lower (about one third) than the number obtained by using Eq. ( 2), as should be expected from the former threshold criterion being more conservative.However, the distribution of the possible overshootings over the ocean basins is the same (not shown), but with a reduced number of cases from Eq. ( 3), so the same considerations done hereafter for Table 3 and Fig. 8 are also valid for possible overshootings computed with Eq. ( 3). Figure 8 shows the distribution map of co-locations between GPS RO and TC tracks for different intensities.Figure 9 shows the distribution map of possible overshootings detected using Eq. 
( 2).The area with the highest overshooting probability from strong cyclones is found to be the western Pacific Ocean.Our results are consistent with the overshooting patterns reported by Romps and Kuang (2009) with only a small difference in the eastern Pacific Ocean basin, where we do not see too many overshooting conditions.The comparison between Figs. 8 and 9 highlights the presence of strong cyclones in all ocean basins including the North Atlantic Ocean and eastern Pacific Ocean basins, but the occurrence of possible overshootings is much lower in these basins than in the western Pacific Ocean and southern Indian Ocean basins.The results show that in general conditions for possible overshootings into the stratosphere are found more often in the tropics (26.8 %) than in the extratropics (13.5 %).In the Southern Hemisphere, possible overshootings are more frequent (38.9 % of tropical cases and 25 % of extratropical cases) than in the Northern Hemisphere (20.2 % of tropical cases and 9.9 % of extratropical cases).The possible overshootings mostly come from tropical cases with high intensity storms.The lowest percentage of possible overshooting conditions is detected in the eastern Pacific Ocean area (6.3 % for tropical cases and just 6.6 % of extratropical cases).The highest percentage is detected in the South Pacific Ocean area with 40.9 % of tropical cases and 48.4 % of extratropical cases.It is also high in the Indian Ocean with 34.5 and 38.3 % in the northern and southern tropics, respectively, and 46.6 and 40.1 % in the northern and southern extratropics, respectively (in this case the number of colocations is very small).We do not give any detail on the statistics by intensity, since the number of cases for higher intensities (i.e., categories 3 to 5) is too small. The monthly mean tropopause altitudes in the tropics between 20 • S and 20 • N ranges between 16 and 17.5 km altitude depending on the season.In the extratropics between 20 and 30 • latitude it is about 1 km lower, and exhibits higher Table 3.Total number of RO profiles (no.occ) co-located with storms of different intensities, selected by ocean basin.Columns denoted "overshoot" give the number of possible overshootings in percent and the mean altitude difference between the storm cloud top and the corresponding monthly mean tropopause computed with Eqs.(2) / (3).Acronyms are the same as in Table 2; see that caption for explanation.[km] occ variability.Between 30 and 40 • latitude, the tropopause altitude ranges from 11 to 15 km (Fig. 1). Figure 10 shows the difference between the cloud top altitude and the corresponding monthly mean tropopause (also reported in Table 3).The highest percentage of cases with differences larger than 3 km is detected for extratropical cases in the southern Indian Ocean basin.In general, in the North Atlantic Ocean and eastern Pacific Ocean basins the cloud top altitudes do not overpass the tropopause by more than a few hundred meters (green dots in Fig. 10). Figure 11 shows, in a statistical summary view, an example of possible overshooting detection results in the western Pacific Ocean basin for TSs at different latitudes (0-20 • ; 20-30 • ; 30-40 • ), as reported in Table 3.The magenta stars in the shaded area, according to Eqs. ( 2) and (3), denote the storm cloud top altitudes not overshooting into the stratosphere, the magenta stars in the white area account for possible overshooting according to Eq. ( 2), and the green stars account for possible overshooting according to Eq. 
(3).The distribution over the year shows that storms occur from April to December over the western Pacific Ocean at 0 to 20 • N and mainly from July to October at 20 to 40 • N. Overshootings are found in each investigated latitude zone when storms occur.Hardly any overshootings are found from July to September in the tropics (0 to 20 • N). Conclusions The thermal structure of TCs in different ocean basins and the conditions for possible overshooting of cloud tops into the stratosphere, were investigated based on GPS RO measurements.The results indicate that the effects of TCs on the UTLS should be studied in connection to the ocean basin where they develop, since their thermal structure is clearly connected to the basin.In particular, basins in the Northern and Southern hemispheres show a different thermal structure: In the Southern Hemisphere, storms reach higher altitudes and the cloud top is colder than in the Northern Hemisphere.The temperature anomaly above the cloud top becomes positive in Northern Hemisphere ocean basins while it stays negative up to about 25 km in altitude in the Southern Hemisphere ocean basins.The reason of this warming above the storm cloud top in the Northern Hemisphere is not clear yet and is a topic of further investigations. A double tropopause characterizes a storm (Biondi et al., 2012(Biondi et al., , 2013)), which is evident in all the ocean basins for all storm intensities (not shown) and can be definitely defined as a feature reflecting the high-altitude convection dynamics.Comparisons between the monthly mean tropopause altitude and the storm cloud top altitude indicate a significant fraction of possible overshootings.Results show that the possible overshootings will overpass the climatological tropopause more deeply at extratropical latitudes (Table 3), where the tropopause is lower, but there is no clear tendency connected to specific basins. While the co-locations between GPS ROs and TC tracks for all the intensities are well distributed in all the ocean basins, conditions for overshooting occur more frequently in the Southern Hemisphere and in the northern Indian Ocean basin.However, the number of possible overshootings for high intensities (i.e., TC categories 1 to 5) is higher in the western Pacific Ocean basin.In this area, conditions for over- shooting are found for a percentage of 30 to 50 % of the cyclones, especially within tropical latitudes. We have demonstrated that the GPS RO technique is very well suited for monitoring and understanding the TCs' thermal structure and its contribution to the atmospheric circulation through possible overshootings into the stratosphere.With the actual RO missions we are not able to fully monitor all TCs with high temporal resolution.Currently, the number of RO profiles is decreasing due to the degradation of Formosat-3/COSMIC.In the near future several new missions are planned (e.g., COSMIC-2, MetOp-C, PAZ and GEROS), and with the support of new Global Navigation Satellite System (GNSS) constellations (e.g., the European Galileo) and the availability of the Russian Global'naya Navigatsionnaya Sputnikovaya Sistema (GLONASS), we may be able to adequately monitor all TCs. 
To date the number of GPS ROs is about 2500 per day, but with the new mission COSMIC-2, for example, the coverage will increase to more than 10 000 per day, and the density of profiles in the tropics will be higher due to the lower inclination of six of the twelve planned COSMIC-2 satellites. This will definitely be an advantage for the study of TCs.
Figure 1. Exemplary tropopause altitude distribution vs. latitude during the Northern Hemisphere winter (a) and summer (b).
Figure 2. Illustration of TC tracks (background from Wikipedia) for ocean basins: North Atlantic Ocean (red), eastern Pacific Ocean (magenta), western Pacific Ocean (green), South Pacific Ocean (cyan), northern Indian Ocean basin (blue) and southern Indian Ocean basin (white).
Figure 3. Exemplary distribution of GPS RO profiles within 6 h (red circles) and 3 h (blue dots) around the center of a tropical storm in the North Atlantic Ocean, within a spatial window of 600 km from the center.
Figure 4. Exemplary distribution of GPS RO profiles in a time window of 6 h and spatial window of 600 km along 135 tropical storm tracks in the North Atlantic Ocean basin.
Figure 5. RO temperature anomaly profiles during TC category 2 in the North Atlantic Ocean basin. In red the tropical profiles, in blue the extratropical profiles, in yellow the mean anomaly of tropical profiles, in light-blue the mean anomaly of extratropical profiles, in black the mean of all the profiles and dashed black the mean plus/minus the standard deviation.
Figure 6. Mean temperature anomalies for different storm categories shown for: (a) western Pacific Ocean and (b) South Pacific Ocean. Numbers in brackets denote the numbers of observations.
Figure 7. Temperature anomaly vs. altitude of the coldest point for different ocean basins and different storm intensities. The colors denote different basins. The circle size denotes different intensities and increases with intensity, from the smallest to the biggest, in the following order: TD-TS-Cat1-Cat2-Cat3-Cat4-Cat5. The numbers represent the case number used for the analyses.
Figure 8. Distribution map of GPS RO co-located with storms of different categories: tropical depression (yellow), tropical storm (green), tropical cyclone categories 1 and 2 (red) and tropical cyclone categories 3 to 5 (magenta).
Figure 10. Distribution of the difference between the cloud top altitude and the tropopause altitude for all the GPS RO profiles co-located with TC best tracks.
Table 1. Number of RO profiles co-located with TCs within increasing distance from the center of the TC.
Table 2. Mean altitude (in km) of the lowest coldest point of temperature anomaly profiles for different ocean basins and different storm intensities. The Southern Hemisphere ocean basins are marked in italic. NA is North Atlantic Ocean; WP is western Pacific Ocean; EP is eastern Pacific Ocean; SP is South Pacific Ocean; NI is northern Indian Ocean; SI is southern Indian Ocean. TD is tropical depression; TS is tropical storm; Cat1 to Cat5 are tropical cyclone categories 1 to 5.
Table 4. Summary of Table 3, reporting the percentage of tropical and extratropical cases binned into ocean basins, hemispheres, and tropics/extratropics. The column "Percentage" reports the percentage of possible overshootings computed with Eq. (2) and, within brackets, the percentage computed with Eq. (3).
6,892.2
2014-11-26T00:00:00.000
[ "Environmental Science", "Physics" ]
Towards a Semantic Administrative Shell for Industry 4.0 Components In the engineering and manufacturing domain, there is currently an atmosphere of departure to a new era of digitized production. In different regions, initiatives in these directions are known under different names, such as industrie du futur in France, industrial internet in the US or Industrie 4.0 in Germany. While the vision of digitizing production and manufacturing gained much traction lately, it is still relatively unclear how this vision can actually be implemented with concrete standards and technologies. Within the German Industry 4.0 initiative, the concept of an Administrative Shell was devised to respond to these requirements. The Administrative Shell is planned to provide a digital representation of all information being available about and from an object which can be a hardware system or a software platform. In this paper, we present an approach to develop such a digital re presentation based on semantic knowledge representation formalisms such as RDF, RDF Schema and OWL. We present our concept of a Semantic I4.0 Component which addresses the communication and comprehension challenges in Industry 4.0 scenarios using semantic technologies. Our approach is illustrated with a concrete example showing its benefits in a real-world use case. I. INTRODUCTION The dynamic of today's world imposes new challenges to the enterprises. The globalization, the ubiquitous presence of the internet and the development of hardware systems are some of the technological improvements that provoke changes everywhere. In the engineering and manufacturing domain, there is currently an atmosphere of departure to a new era of digitized production. In different regions, initiatives in these directions are known under different names, such as industrie du futur in France, industrial internet in the US or Industrie 4.0 in Germany. Industry 4.0 (I4.0) is a term coined in Germany to refer to the fourth industrial revolution. This is understood as the application of concepts such as Internet of Things (IoS), Cyber-physical Systems (CPS), the Internet of Services (IoS) and data-driven architectures in the real industry. With approximately the similar meaning, in North America, the term Industrial Internet has been created. This term is very similar to I4.0, but the application is broader than industrial production. Other areas are included, for instance, smart electrical grids [1]. With the goal to develop the Industry 4.0 vision, CPS are of paramount importance. CPS integrate physical and software processes [2]. In order to do so, they use various types of available data, digital communication facilities, and services [3]. While the vision of digitizing production and manufacturing gained much traction lately, it is still relatively unclear how this vision can be actually be implemented with concrete standards and technologies. The physical network connection problem is meanwhile largely solved using technologies such as Profibus/Profinet [4] and OPC-UA [5]. However, the much more challenging problem is to make smart industrial devices able to communicate and understand each other as a prerequisite for cooperation scenarios. To address this problem, we need techniques and standards for representing and exchanging information, data and knowledge between devices participating in manufacturing and production processes. 
Such standards must be flexible to accommodate new features, usage scenarios, cover multiple domains, device categories, and bridge organizational boundaries. Most importantly, they must be able to evolve seamlessly over time to facilitate the swift realization of new features and scenarios as they become apparent. Within the Industry 4.0 initiative, the concept of an Administrative Shell was devised to respond to these requirements. The Administrative Shell is planned to provide a digital representation of all information (and services) being available about and from a physical manufacturing component. In this article, we present an approach to develop such a digital representation based on semantic knowledge representation formalisms such as RDF, RDF-Schema and OWL. The advantages of such an RDF-based approach are: overview of background information and descriptions of the relevant terminology for our approach is provided in section II. A comprehensive list of challenges aggregated from the current state of the art and our ongoing work on is presented in section III. In section IV, we present Semantic I4.0 Component which is our approach to addressing the challenges using semantic web technologies. In addition, a concrete example is given in section V which shows the benefits of our approach in real world use case. We provide an overview about related work in section VI. The conclusion and an outlook to future work are presented in section VII. II. BACKGROUND This section describes several concepts and terminology that are relevant for our approach. A. RAMI Model Several German institutions and associations worked in close cooperation to define a reference model for Industry 4.0. The result is the Reference Architecture Model for Industry 4.0 (RAMI 4.0), that describes fundamental aspects of the Industry 4.0 [6]. It illustrates the connection between IT, manufacturers/plants and product life cycle through a threedimensional space. Each dimension shows a particular part of these worlds divided into different layers as depicted in Figure 1. Left vertical axis represents IT perspective which is comprised of various layers such as business, functional, information, etc. These layers corresponds to the IT way of thinking where complex projects are decomposed into smaller manageable parts. In the left hand horizontal axis is displayed the product life cycle where Type and Instance are distinguished as two main concepts. The model allows the representation of the data gathered during the entire life cycle. Along with the right hand horizontal axis the location of the functionalities and responsibilities are given in the hierarchical organization. The model broadens the hierarchical levels of IEC 62264 1 by adding the Product or a workpiece level at the bottom, and the Connected World goes beyond the boundaries of the individual factory at the top. In addition, the reference architecture model allows the description and implementation of highly flexible concepts. This leverages the transition process of current manufacturing systems to Industry 4.0 by providing an easy step by step migration environment. B. Industry 4.0 Component A component is a basic concept in Industry 4.0. As defined in [6] an I4.0 component constitutes a specific case of a CPS. It is used as a model for representing the properties of CPS, for instance, real objects in a production environment connected with virtual objects and processes. 
An I4.0 component can be a production system, an individual machine, or an assembly inside a machine. It is comprised of two foundational elements: the object and the Administrative Shell. Every object or entity that is surrounded by an Administrative Shell is described as an I4.0 component. Figure 2 shows an example of such a component. Additionally, these objects have at least the capability of passive communication. As a result, a flexible framework for data description and provisioning is established. In the following, these elements are presented in detail. C. Object In [6], the term object is used to refer to each individual physical or non-physical part. An object can be an entire machine, an automation component or a software platform. From a time perspective, it can be a legacy system or a new system developed with modern techniques and technologies. The industry should be able to integrate and benefit from these objects, independent of their type and of the time they originate from. D. Administrative Shell The Administrative Shell is used to store all important data of an object, which can be hardware or software. It creates benefits for all participants in networked manufacturing. Consequently, a consistent way of managing data, along with various functions and services for data manipulation and publication, is provided. Some of the benefits are presented in detail below [6]: a) Management of Data: The Administrative Shell provides mechanisms to manage the large amount of data and information generated by manufacturers or participants. For instance, it stores and manages the information related to configuration, maintenance or connectivity with other devices. b) Functions: Different functions, such as configuration, operation, maintenance and complex algorithms for business logic, are provided by the Administrative Shell. These functions facilitate the interactivity between the I4.0 component and other actors, including human users. c) Services: Although the information of a component is stored only once, it can be used beyond the boundaries of the component, within enterprise networks and in the cloud as well. The advantage is that the information can be provided via IT services to any user and in any use case. III. CHALLENGES 1) Interoperability (Ch1): Enterprises need to maintain a huge number of legacy systems with their corresponding existing data. Commonly, this data is in different formats (e.g., plain text, DBMS, XML, etc.). The new data and new formats have to coexist with the old ones. 2) Global unique identification (Ch2): Enabling intercommunication among I4.0 components and the environment over the Internet is a big challenge. In addition to this, there should be a linking mechanism between the I4.0 components and the generated information [7]. Therefore, addressing this challenge is of paramount importance in order to realize the vision of I4.0. 3) Data availability (Ch3): Another challenge is the availability of the data beyond the boundaries of the manufacturers and across different hierarchy levels. This challenge becomes even harder when various policy rules from manufacturers are applied. I4.0 components will communicate with each other and interact with the environment by exchanging the data generated from different sensors, and will react to events by triggering actions with the aim of controlling the physical world [8]. Therefore, sharing the generated data between participants [9] is a key factor in Industry 4.0. 
4) Standardization compliance (Ch4): The standardization process is an important step toward the realization of I4.0. Several standards dealing with different layers of the enterprise already exist; for instance, AutomationML [10], Profibus [11] and OPC-UA [5], [12] are just some examples of such standards. The core idea of all this effort is to provide a detailed description of the components in the manufacturing process. The production process constantly generates different components, and the standards need to reflect this dynamically. As a result, the standards grow in size and number, making the interoperability between them a problem to solve. 5) Integration (Ch5): A highly dynamic environment is one of the key obstacles to establishing the vision of I4.0. The complexity of the horizontal and vertical integration of I4.0 components increases drastically with the fluctuating number of participants. Self-sensing, self-configuration and self-integration are some of the concepts used to describe the autonomous interaction of a component with its environment in networked manufacturing. Following the principles of Reconfigurable Manufacturing Systems (RMS), adding, removing, replacing or rearranging components must not affect the production process [13]. Thus, developing a consistent data model is a crucial factor that facilitates the integration of I4.0 components in changing environments. 6) Multilinguality (Ch6): In order to achieve a wide range of applicability to different cultures and communities [14], I4.0 should be able to support localization (and internationalization) of the generated information. This will decrease the learning curve and allow easier and faster adoption of Industry 4.0 in real production environments. IV. SEMANTIC I4.0 COMPONENT A. Addressing the challenges with the semantic approach It is widely accepted that semantic technologies play a crucial role in the management of things, devices and services [15], [16]. Moreover, [6] recognizes as a requirement that I4.0 components and their contents should follow a common semantic model. Therefore, we propose a semantic approach to address the challenges presented in Section III. This approach is developed using fine-grained principles from the Semantic Web and Linked Data. Figure 3 depicts our proposal to add a semantic layer to the Administrative Shell. 1) Interoperability: To meet the interoperability demand, RDF and Linked Data have proven to be a successful way to integrate different types of data [17], [18], [19]. Embedded in the semantic layer of the Administrative Shell, we propose RDF as a middle layer to support the interoperability between the data of legacy systems and the data generated by the I4.0 component. We aim to establish RDF as a lingua franca for data interoperability in the I4.0 landscape. 2) Global unique identification: Identifying each I4.0 component by a globally unique identifier ensures entity disambiguation and retrievability [20]. According to the Linked Data principles [21], HTTP URIs should be used for naming things. Following this principle, we propose that each I4.0 component should be identified by an HTTP URI. By doing so, a decentralized, holistic and extensible globally unique identification scheme for I4.0 components is established. As a consequence, we obtain dereferenceable I4.0 components which are able to self-locate and communicate with each other. Listing 1 presents our proposal for identifying the I4.0 components. 
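To make this identification scheme more concrete, the following is a minimal sketch (not the paper's Listing 1) of how HTTP-URI-based identification of an I4.0 component could be expressed with the Python rdflib library; the namespaces, the component name and the literal values are illustrative assumptions.

```python
# A minimal sketch of HTTP-URI-based identification of an I4.0 component
# using rdflib. Namespaces and identifiers below are illustrative assumptions.
from rdflib import Graph, Namespace, Literal, RDF, RDFS
from rdflib.namespace import DCTERMS

I40C = Namespace("http://example.org/i40c/")          # hypothetical vocabulary namespace
COMP = Namespace("http://example.org/components/")    # hypothetical component namespace

g = Graph()
g.bind("i40c", I40C)
g.bind("dcterms", DCTERMS)

# A dereferenceable HTTP URI identifies the component globally and unambiguously.
motor = COMP["MotorController1"]
g.add((motor, RDF.type, I40C.Component))
g.add((motor, DCTERMS.identifier, Literal("CMMP-AS-C2-3A-M3")))
g.add((motor, RDFS.label, Literal("Motor controller", lang="en")))

print(g.serialize(format="turtle"))
```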
Listing 1 also shows that the identification capabilities can be extended by various existing vocabularies that provide adequate means. This example uses the term identifier from the Dublin Core vocabulary (http://dublincore.org/documents/dcmi-terms/#H1) to achieve a reference to the resource which is unambiguous within a given context. A fragment of the accompanying class definition reads:
#Class Definition
i40c:Actuator rdfs:subClassOf i40c:Component ;
    rdfs:comment "Actuator is ..." ;
    rdfs:label "Actuator" .
3) Data availability: The benefits of employing RDF as the standard for the representation of the data are twofold. Firstly, various data serialization formats can easily be generated and transmitted over the network. Secondly, using SPARQL, the W3C Recommendation for an RDF query language, it is possible to make the data available through a standard interface. An RDF representation of the data can be created on the fly, even if the data are stored in relational databases or other data formats [22]. By doing so, our approach enables data sharing between legacy systems and other participants in networked manufacturing as well. 4) Standardization compliance: Following the idea of employing RDF as a lingua franca for data integration, we propose to translate existing standards into RDF vocabularies and SKOS thesauri. The interoperability between standards can thus be managed through the integration of the respective vocabularies. In addition, these vocabularies are also connected with the Administrative Shell data (cf. Figure 3). As an example, we created an RDF vocabulary for the IEC 61360 Common Data Dictionary (IEC CDD). IEC CDD is a common repository of concepts for all electrotechnical domains based on the methodology and the information model of IEC 61360. It provides a widely accepted terminology and definitions based on accepted sources such as IEC standards as well as international and industry standards. It contains four major concepts: Component, Material, Feature and Geometry. A Component describes an industrial product which serves a specific function, which in a given context is considered not to be decomposable or physically divisible, and which is intended for use in a higher-order assembled product. A Component is represented by an Object in the RAMI model and, when it is surrounded by an Administrative Shell, forms an I4.0 component. Figure 4 depicts part of the hierarchy of the iec:Component class. 5) Integration: Running on a completely unified and consistent data model facilitates the integration of I4.0 components. Newly added components need a shorter time for the integration process. Other peers become aware of a new peer, and of how to communicate with it, simply by synchronizing with the latest version of the vocabulary. The vocabulary contains all the information necessary for interaction and data exchange between peers in networked manufacturing. 6) Multilinguality: Since various communities across the world will interact with I4.0 components, it is very important that they receive terms in their own languages. Semantic Web technologies enable the implementation of multilinguality in a very straightforward manner, and this remains valid even for newly introduced languages or concepts. Listing 2 depicts an example of our approach to multilinguality: an actuator is modeled with adequate translations of rdfs:label and rdfs:comment into the respective languages, English and German. 
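As a complementary illustration of the multilinguality and data availability aspects, the following hedged sketch shows how language-tagged labels can be attached to a component and then retrieved through SPARQL using rdflib; the namespace, resource names and label texts are assumptions for illustration and do not reproduce the paper's listings.

```python
# A minimal sketch combining multilingual labels (cf. Listing 2) with
# SPARQL-based access (Ch3/Ch6); namespaces and values are illustrative.
from rdflib import Graph, Namespace, Literal, RDF, RDFS

I40C = Namespace("http://example.org/i40c/")   # hypothetical namespace

g = Graph()
g.bind("i40c", I40C)

actuator = I40C["Actuator1"]
g.add((actuator, RDF.type, I40C.Actuator))
# Language-tagged literals provide the multilinguality described in the text.
g.add((actuator, RDFS.label, Literal("Actuator", lang="en")))
g.add((actuator, RDFS.label, Literal("Aktor", lang="de")))
g.add((actuator, RDFS.comment, Literal("Converts a control signal into motion.", lang="en")))
g.add((actuator, RDFS.comment, Literal("Wandelt ein Steuersignal in Bewegung um.", lang="de")))

# Data availability: the same graph can be queried through a standard interface.
query = """
SELECT ?label WHERE {
  ?c a i40c:Actuator ;
     rdfs:label ?label .
  FILTER (lang(?label) = "de")
}
"""
for row in g.query(query, initNs={"i40c": I40C, "rdfs": RDFS}):
    print(row.label)   # -> "Aktor"
```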
B. The Administrative Shell Vocabulary In order to provide a semantic layer for the Administrative Shell, we have developed an RDFS vocabulary, which is depicted in Figure 5. The i40c:Object class encapsulates the I4.0 object and important data related to it (e.g., identification, image, technical data). In addition, information regarding the phases of the object and the entire life cycle is managed. Given that an I4.0 object can be part of other components, we used the isPartOf Content Ontology Design pattern to capture this characteristic. We argue that, by using a lightweight RDFS vocabulary, an important step towards realizing the I4.0 vision in real-world applications is taken. V. USE CASE The vision of Industry 4.0 is centered around the concept of decentralized production and smart objects that participate in production with autonomy and decision-making capabilities. To accomplish this goal, object metadata, data, and relations with other objects need to be semantically described within the Administrative Shell. By doing so, the information provided by one object can be understood and exploited by other smart objects in the production chain. To illustrate the applicability of our approach, we detail in this section a use case where the Semantic Administrative Shell is used to describe an I4.0 component and some of its basic relations. Listing 3 shows the semantic representation of the Administrative Shell for the Motor controller CMMP-AS-C2-3A-M3 object (a product of Festo AG). For brevity, we describe here only the most relevant parts. The example contains four instances of the respective types. AdminShell1 surrounds Object1 and relates it to the majority of the concepts in the domain, such as Platform1 in this case. Also, Object1 has its technical data defined in the resource TechnicalData1 (cf. Figure 6). One of the main advantages of the Semantic Administrative Shell is the uniform data representation according to the RDFS model, which enables efficient integration and querying of the data contained in the shell. In order to illustrate data retrieval, we have designed simple SPARQL queries. For example, it is relevant to know the technical characteristics of a component in its various phases (e.g., single-phase, three-phase). In the following query (cf. Listing 4), we construct an RDF graph that contains a description of the technical features of a single-phase object. Another example is retrieving information about the platform of an I4.0 component during the maintenance cycle. The platform entities refer to functional library elements, which are specific to a certain automation system. The query modeled in Listing 5 obtains details of the platform, such as the name, the version and the URL of the software that supports the I4.0 object. The above use case shows how the Semantic Administrative Shell provides a more flexible data model. This semantic representation helps to overcome the challenges that I4.0 is facing. VI. RELATED WORK Currently, there are some efforts discussing the need to bring more semantics and data-driven approaches to I4.0. [27] presents guidelines aiming to help in choosing the level of semantic formalization for representing different types of I4.0 projects. The crucial role of semantic technologies for mass customization is discussed in [16]. This work recognizes semantic technologies as a glue to connect smart products, data and services. Obitko [28] describes the application of semantics to I4.0 from the Big Data perspective. 
The features of Big Data, as well as an ontology for sensor data, are presented. Table I provides a comparison of our approach with related I4.0 component description approaches. The Electronic Device Description Language (EDDL) is a language for describing information related to digital components [23], [29]. To date, EDDL is available for a huge number of devices that are currently utilized in the process industry. EDDL is a text-based description file of the field device and its properties, which describes the data and how they should be displayed. The Object Memory Model (OMM) is an XML-based format which allows for modeling information about individual physical elements [25]. In this work, the memory of the elements is partitioned to include different types of data regarding the identification, name, etc. The idea of this approach was to bring a semantic layer to the physical components, but it still suffers from the intrinsic limitations of XML. However, it is envisioned that elements in the OMM (so-called blocks) may contain RDF and OWL payload data. Extending the concept of OMM, Domeman [30] is a framework for the representation, management, and utilization of digital object memories. The idea of bringing semantic descriptions to physical elements by combining OMM with a server realization has been pursued by [31]. Nevertheless, this work focuses on the identification of the elements and still relies on the above-mentioned limitations of the OMM format. The Physical Markup Language (PML) is a common language for describing physical objects, processes and environments [24]. The goal of PML is to use these descriptions for remote monitoring and control of the physical environment. Janzen [26] defines smart products as the connection of physical products and information goods, which allows the embedding of digital product information into physical products. In this approach, the Smart Product Description Object (SPDO) is presented. SPDO is a data model built on top of the DOLCE ontology for describing smart products. [32] defines an approach for distinguishing the local and global data structures stored in Active Digital Object Memories (ADOMe), smart labels with memory and processing capabilities. According to the author, this can be realized by storing the data in a unified structured format. VII. CONCLUSION AND FUTURE WORK In this paper, we have described an approach for semantically representing information about smart I4.0 devices with an Administrative Shell. The approach is based on structuring the information using an extensible and lightweight vocabulary aimed at capturing all relevant information. Compared to prior approaches, the RDF-based Semantic Administrative Shell has a number of advantages. The URI/IRI-based identification scheme provides a unified way to identify all types of relevant entities: physical objects, abstract concepts, properties, concrete raw and derived data, etc. Existing standards (such as eClass, IEC device characteristics or AutomationML) can be more easily integrated and referenced. Information about and from different objects can be easily integrated (since a basic integration can be achieved by merging sets of triples). 
Accessing the information in a unified way is established by using SPARQL as the query language. We see this work as a first step in a larger research and development agenda aimed at equipping manufacturing equipment with semantics-based means for communication and data exchange. In the medium to long term, we aim to bring more intelligence to the edge of production facilities, thus promoting self-organization and resilience. In future work, we plan to refine and expand the Administrative Shell vocabulary in order to provide support for a wide range of device types. We intend to study the interaction between various devices equipped with Administrative Shells and to research application scenarios such as predictive maintenance. Another interesting avenue of research is how Semantic Administrative Shells can be generated and populated from existing information systems and data sources available at the manufacturers. ACKNOWLEDGMENTS This work is supported by the German Ministry for Education and Research funded project LUCID and by the European Commission under H2020 for the project BigDataEurope (GA 644564).
Controlled drug release of levofloxacin from poly(acrylamide) hydrogel Hydrogels are 3D polymer networks capable of absorbing and releasing water or biological fluids. They are stimuli-responsive materials, which can show rapid volume changes in response to small changes in environmental parameters such as ionic strength, pH, and temperature. In this work, we performed a synthesis of poly(acrylamide) hydrogel and tested it for the controlled release of levofloxacin hemihydrate as a model drug. We used sodium metabisulfite and potassium persulphate as free radical initiators to prepare the hydrogel, with methylenebisacrylamide as a crosslinker. Characterization of the hydrogel was performed by TGA, SEM, and FT-IR. Swelling and drug release studies were performed in pH 1.2 and 7.4 solutions, identical to the gastrointestinal fluids, at 37 °C (human body temperature) to examine possible site-specific drug delivery. A UV-Visible spectrophotometer was used to measure the concentration of released drug. The results exhibited pH- and temperature-dependent drug release. The amount of drug released was found to be 17% and 99% at the acidic and alkaline pH of 1.2 and 7.4, respectively, after 6 hours. Keywords: hydrogel, free radical polymerization, pH, levofloxacin, drug delivery. INTRODUCTION Polymers are chains of continuously repeating monomer units. The general biomedical uses of polymers are in drug delivery systems, pharmaceutical adhesives, coating materials, and emulsifying agents for dosage forms in site-specific and controlled drug delivery systems. Polymer molecules are linear or branched or may be crosslinked. The chemical response of polymers depends on the monomer units present in the polymer chain. Homopolymers have identical monomeric units, whereas copolymers are formed from more than one monomer (Kamboj and Verma, 2015; Florence and Attwood, 1998; Moghimi and Hunter, 2000). The first polymer gel was prepared in 1949 by Katchalsky. This gel, formed from a network of water-soluble polyelectrolytes, responds to the surrounding solution by swelling or contracting (Katchalsky and Gillis, 1949). In the 1950s, the medical applications and importance of hydrogels were revealed, and soft contact lenses were manufactured using 2-hydroxyethyl methacrylate gel. Smart hydrogels, such as temperature-sensitive hydrogels, remained a focus of research until the mid-1980s (Kim et al., 2013). 
In drug delivery systems, pH-sensitive hydrogels are among the best materials for releasing a drug at the target site of the body (Hamidi et al., 2008; Huynh et al., 2011). The poly(acrylamide) hydrogel polymer backbone contains functional groups, such as amines, that are sensitive to charge and can either release or accept protons in aqueous media. The electrostatic repulsion in the polymer backbone network promotes swelling and then water diffusion. These hydrogels are highly sensitive to slight changes in environmental factors and are therefore called smart hydrogels (Wan et al., 2016; Wu et al., 2016; Katchalsky and Gillis, 1949). Figure 1 shows its molecular structure. Levofloxacin is a fluoroquinolone antibacterial drug, the active L-isomer of ofloxacin (Hurst et al., 2002; Chen et al., 2003). Levofloxacin is used to treat diseases caused by gram-negative and gram-positive bacteria, such as keratitis, bacterial conjunctivitis, and other eye infections, by inhibiting the topoisomerase IV and DNA gyrase enzymes. These enzymes are important for DNA replication, recombination, transcription, and repair (Noel, 2009; Hooper, 1999). In the present work, we evaluate the drug release from poly(acrylamide) hydrogel at different temperatures and in solutions of different pH. The crosslinker methylenebisacrylamide was used to control the network characteristics, and levofloxacin hemihydrate was used as the model drug for the release studies. Hydrogel synthesis Poly(acrylamide) hydrogel was synthesized by a free radical mechanism. First, the redox initiators potassium persulphate (45 mg) and sodium metabisulfite (32 mg) were transferred into a vial containing 10 ml of deionized water. Acrylamide (600 mg) was then added and the mixture was stirred for 10 minutes at room temperature, after which the crosslinker methylenebisacrylamide (6 mg) was added. This mixture was kept in a water bath until the gel was formed. The synthesized gel was washed with water to remove unreacted components and then dried at 50 °C in an oven for 24 hours. FT-IR analysis Spectra of levofloxacin and levofloxacin-loaded poly(acrylamide) hydrogel were recorded using an FT-IR spectrometer (Shimadzu ATR) in the range of 400 to 4000 cm−1 to determine their intermolecular interactions and structure. TGA analysis The thermal stability of the poly(acrylamide) hydrogel was determined using a thermogravimetric analyzer (Perkin Elmer STA 600) at a heating rate of 20 °C per minute. Morphological examination The morphology of the poly(acrylamide) hydrogel structures was determined using SEM (scanning electron microscopy). Hydrogel composites were cut to expose their structure and imaged in a scanning electron microscope (Zeiss LS15). Swelling study The swelling behavior of the synthesized hydrogels was determined using dry samples in acidic buffer of pH 1.2 and phosphate buffer of pH 7.4. The pre-weighed hydrogel samples were immersed in the solutions at 37 °C for swelling. At periodic intervals, the swollen samples were taken out of the solution, the excess droplets on the surface of the hydrogel were removed by wiping with filter paper, and the samples were weighed. The swelling ratio of the hydrogel was determined from Equation (1), which, with W_a and W_b denoting the dry and swollen gel weights, respectively, takes the standard form: swelling ratio (%) = 100 × (W_b − W_a) / W_a. Similarly, the swelling ratio was observed at 27 °C in pH 7.4 solution at the same time intervals. 
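As a small worked illustration of Equation (1), the following sketch computes the percentage swelling ratio from a dry and a swollen weight; the weights are illustrative values, not measurements from this study.

```python
# A minimal sketch of the swelling-ratio calculation from Equation (1);
# the example weights are illustrative, not measured values from the study.
def swelling_ratio(dry_weight_g: float, swollen_weight_g: float) -> float:
    """Percentage swelling ratio: 100 * (W_b - W_a) / W_a,
    where W_a is the dry gel weight and W_b the swollen gel weight."""
    return 100.0 * (swollen_weight_g - dry_weight_g) / dry_weight_g

# Example: a 0.10 g dry hydrogel that swells to 1.45 g in pH 7.4 buffer.
print(f"Swelling ratio: {swelling_ratio(0.10, 1.45):.0f} %")   # -> 1350 %
```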
Preparation of calibration curves To construct a calibration curve, a 1000 mg/l stock solution of levofloxacin was prepared using water as the solvent; then 2, 4, 6, 8, and 10 mg/l solutions were prepared by dilution of the stock solution. Using a UV-9000A spectrophotometer (Shanghai Metash), the solutions were scanned between 200 and 400 nm and the absorption maximum was recorded to construct the calibration curve. Drug loading and drug release studies Levofloxacin hemihydrate was selected as the model drug. Drug loading was conducted with a 1 mg/ml solution using water as the solvent: 0.1 g of dry hydrogel was placed in 100 ml of levofloxacin solution. The loaded hydrogel was dried and its weight was recorded. The in vitro release study was conducted by placing the drug-loaded hydrogel in 100 ml of acidic buffer (pH 1.2) or phosphate buffer (pH 7.4) solution at 37 °C; 1 ml samples of the dissolution medium were withdrawn at regular time intervals (30 minutes) under stirring and replaced with fresh solution to maintain a constant volume of dissolution medium. Using a UV-Visible spectrophotometer, the solutions were scanned between 200 and 400 nm with suitable dilution and the absorbance at λmax was recorded. The percentage of released levofloxacin was calculated and the corresponding drug release graph was plotted. Similarly, the effect of temperature on drug release was studied at 27 and 37 °C in pH 7.4 solution. RESULTS AND DISCUSSION Poly(acrylamide) hydrogel was synthesized by a radical polymerization method and its swelling behavior was studied. Moreover, drug loading and drug release were performed using levofloxacin as a model drug, and the effects of pH, temperature, and time on drug release were also studied. TGA analysis The thermogram of the hydrogel is shown in Figure 4. The first stage of weight loss, attributed to the loss of moisture present in the hydrogel, was observed at 169 °C with a mass loss of 3.4%; degradation then occurred at 179 °C with a weight loss of 9.6%, and the maximum weight loss occurred at 381 °C with a mass loss of 24.4% due to cleavage of the polymer chains in the hydrogel (Ebrahimi and Salavaty, 2018). Morphological examination The surface morphology of the synthesized hydrogel was studied by SEM. The micrographs in Figure 5(a) and Figure 5(b) reveal that the surface is uniform and smooth in nature (Chen et al., 2009; Aouada et al., 2009). Swelling study The swelling study of the synthesized hydrogel was performed in buffer solutions. The swelling of poly(acrylamide) hydrogels in pH 1.2 and 7.4 solutions, similar to gastrointestinal fluids, at 37 °C is shown in Figure 6. From these results, a higher swelling rate was observed at pH 7.4 compared to the pH 1.2 solution. In the acidic medium of pH 1.2, ammonium groups (NH3+) are formed by protonation, but the presence of chloride (Cl−) counterions drastically decreases the swelling (Wu et al., 2001; Pourjavadi and Mahdavinia, 2006). However, at pH 7.4, the (-CONH2) and (-CONH-) groups are deprotonated and the presence of sodium (Na+) ions in the solution produces a high osmotic swelling pressure; hence, the hydrogel shows maximum swelling. Similarly, the swelling was observed at temperatures of 27 and 37 °C (Figure 7) to assess temperature sensitivity. The results show that as the temperature increases, the swelling ratio also increases. Drug selection The selection of the drug for loading and release is important because the drug should not react with the hydrogel or the solvents. This helps to avoid a shift of λmax. 
Levofloxacin is a suitable model drug because no change in λmax was observed over time when the solutions were scanned with a UV-Visible spectrophotometer. Drug release with pH We conclude that drug release depends on the pH of the solution, because the swelling ratio is higher in pH 7.4 than in the pH 1.2 solution. The drug release from the hydrogel into solution depends on swelling, and controlled release of levofloxacin was observed for up to 6 hours (Figure 11). Drug release with temperature The effect of temperature on drug release was evaluated at 27 and 37 °C over time (Figure 12). As the temperature increases, drug release also increases, and the maximum drug release was observed at 37 °C. With increasing temperature, the flexibility of the hydrogel network also increases; hence, more buffer solution enters the hydrogel and more drug is released. Kinetic model of drug release The kinetics of drug release were studied using various mathematical models, and the obtained results are given in Table 1. This analysis helps to identify the kinetic model that best describes the in vitro drug release data in terms of relevant parameters. From the obtained results, the best correlation coefficient (R2 = 0.996) was found for zero-order kinetics; hence, the synthesized poly(acrylamide) hydrogel follows the zero-order kinetic model. CONCLUSION Poly(acrylamide) hydrogel crosslinked with methylenebisacrylamide was synthesized and its swelling and drug release properties were studied. The swelling of the synthesized hydrogel was examined in pH 1.2 and pH 7.4 solutions at 37 °C, and levofloxacin release studies were carried out under the same conditions. The amount of drug released from the hydrogel was higher at the alkaline pH of 7.4 than in the acidic pH 1.2 solution, because at pH 7.4 the amide groups are deprotonated and the sodium (Na+) ions present in the solution produce a high osmotic swelling pressure; hence, the hydrogel swells more and releases more drug. The effects of temperature and pH on drug release were also studied, and the hydrogel follows a zero-order kinetic model; hence, these hydrogels can be used in controlled drug release and biomedical applications due to their good swelling properties.
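To complement the kinetic analysis summarized above (Table 1), the following is a minimal sketch, with synthetic data rather than the study's measurements, of how cumulative-release data can be fitted to simple release models and compared by their correlation coefficients.

```python
# A minimal sketch (synthetic data, not the study's measurements) of fitting
# cumulative-release data to zero-order and Higuchi models and comparing R^2.
import numpy as np

t = np.array([0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 5.0, 6.0])               # time, hours
release = np.array([9.0, 17.5, 26.0, 34.0, 50.0, 66.0, 83.0, 99.0])  # cumulative %, synthetic

def r_squared(y, y_pred):
    ss_res = np.sum((y - y_pred) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Zero-order model: Q = k0 * t (a simple linear fit is used here for brevity).
coeff_zero = np.polyfit(t, release, 1)
r2_zero = r_squared(release, np.polyval(coeff_zero, t))

# Higuchi model: Q = kH * sqrt(t).
coeff_hig = np.polyfit(np.sqrt(t), release, 1)
r2_higuchi = r_squared(release, np.polyval(coeff_hig, np.sqrt(t)))

print(f"Zero-order R^2: {r2_zero:.3f}, Higuchi R^2: {r2_higuchi:.3f}")
```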
Influence of Dispersed Phase Content on the Mechanical Properties of Electroless Nanocomposite Ni-P/Si3N4 and Hybrid Ni-P/Si3N4/Graphite Layers Deposited on the AW-7075 Alloy The article presents the results of mechanical testing of Ni-P/Si3N4 nanocomposite and hybrid Ni-P/Si3N4/graphite coatings deposited on AW-7075 aluminum alloy using the chemical reduction method. In terms of mechanical testing, microhardness was measured, and surface roughness and adhesion of the coatings to the aluminum substrate were determined using the “scratch test” method. The surface morphology of the deposited layers was also analyzed using light microscopy and scanning electron microscopy. Samples made of AW-7075 aluminum alloy with electroless deposited Ni-P/Si3N4 nanocomposite, Ni-P/graphite composite and hybrid Ni-P/Si3N4/graphite coatings with different content of dispersed phases were tested, and also, for comparison purposes, the Ni-P layer that constituted the matrix of the tested materials. Reinforcing phases in the form of silicon nitride nanoparticles and graphite particles were used in the layers. The purpose of the research was a thorough characterization of the coating materials used on aluminum alloys in terms of mechanical properties. Graphite is considered in this paper as it enables the reduction of the coefficient of friction through its lubricating properties. Unfortunately, graphite is difficult to use in selected layers as the only dispersion phase, because it has much lower hardness than the Ni-P coating. For this reason, a layer with a single dispersion phase in the form of graphite will be characterized by worse mechanical properties. It is necessary to add particles or nanoparticles with hardness higher than the base Ni-P coating, e.g., Si3N4, which improve the mechanical properties of the coating. The presented analyses of the results of the conducted research complement the previous studies on selected properties of nanocomposite layers with an amorphous structure and supplement the knowledge regarding their suitability for application to aluminum machine parts. Introduction Aluminum alloys of various series are currently very popular in the engineering industry as materials for many structures, depending on the strength and corrosion resistance requirements.The greatest demand for aluminum alloys is noticed in shipbuilding, aviation, the automotive industry, etc.For many decades, the 5xxx series alloys have been chosen by the shipbuilding industry for the construction of offshore vessels due to their corrosion resistance and the 7xxx series for strength reasons.In the area of aerospace, 7xxx series alloys are also among the most popular construction materials.The biggest advantage of aluminum alloys, compared to steel, is the possibility of reducing the weight of the entire structure.At the moment, the use of aluminum alloys to manufacture moving machine parts is also noticeable, e.g., gear wheels made of 7075 T6 anodically oxidized alloy, e.g., in motorcycle drives.On the other hand, every moving part of a machine is subject to abrasive wear, which results in increased expectations regarding the strength and abrasion resistance of the surface layers of aluminum alloy components.Therefore, it is required to continue and extend mechanical research in order to accurately determine the suitability of new materials in the area of mechanical engineering, including an attempt to optimize their chemical compositions and microstructures depending on the expected conditions of use. 
The test results prove that the surface treatment of AW-7075 aluminum alloys by depositing nickel layers using chemical reduction significantly increases the hardness of the surface layer, which translates into increased durability of the product, e.g., due to the reduction of the coefficient of friction, as was proved in [1].Generally, nickel in the electroless process is deposited from an aqueous solution containing nickel salts, a reducing agent, and a substance that regulates the pH of the solution and the reaction rate.The surface of the object in such a solution acts as the catalyst.The deposition reaction in a bath containing NiSO4 and NaH2PO2 can be represented, in simplified form, as the reduction of nickel ions by hypophosphite: Ni2+ + 2H2PO2− + 2H2O → Ni + 2H2PO3− + 2H+ + H2. In addition, based on the work [2,3], it can be seen that the incorporation of particles of the dispersed ceramic phase Si3N4 into the Ni-P and Ni-B coating material causes a significant change in the morphology and degree of surface development.For the Ni-P and Ni-B layers obtained by the electroless method, the morphology is particularly characteristic due to the distinctive structure compared to the layers obtained by other methods. Many studies prove and indicate [4-17] that electroless nickel and boron layers can be successfully applied to a wide variety of materials and are among the best solutions to increase the resistance of the surface layer to abrasive wear and scuffing.In addition to their high hardness and good anti-wear properties, such layers also have good anti-corrosion and adhesion properties, which are particularly important in the areological system.Furthermore, to improve the properties of the coating materials, the chemical composition of Ni-P or Ni-B coatings can be modified by incorporating reinforcing phases into their structure, which can be hard or even super hard materials, i.e., materials with a hardness greater than 33 GPa, e.g., Si3N4 or SiC.Materials such as Si3N4 or SiC are currently used, e.g., as anti-wear coatings for single- and multilayer cutting blades.Apart from that, they are the subject of research in the field of material and surface engineering aimed at finding new applications for materials used in machine construction [18-33].Other examples are the use of additives in the form of graphite particles or nanocomposites to increase the lubricity and durability of oils [27,28].Silicon nitride, as a dispersed phase, is a good research material for the formation, modification and configuration of electroless composite and nanocomposite layers, especially in combination with another additive, e.g., graphite.Adhesion tests and the results of structural tests show that the Ni-P/Si3N4 nanocomposite coating can be deposited directly on steel, iron, plastics and aluminum alloys.Such layers deposited on construction materials make it possible to significantly increase the hardness and the resistance of the surface to abrasive wear compared to the base materials, which has been confirmed in tests, e.g., using the pin-on-disc method.Moreover, different dispersed phases can be successfully used in one coating to precisely combine and complement their individual properties, which, in turn, makes it possible to obtain a synergy effect.An example is the incorporation of hard silicon nitride particles into the layer to increase the hardness of the coating, which increases its resistance to abrasive wear [16], and graphite particles to obtain 
self-lubricating properties at the same time, which also contribute to reducing the coefficient of friction [7,28]. Such a configuration in the form of a hybrid layer is advantageous, especially under conditions of periodically limited lubrication. The purpose of this solution is to increase the durability and reliability of manufactured parts, in terms of tribology among other things. Unfortunately, there are some limitations that do not allow an arbitrary composition of the composite coating, mainly for technological reasons at the layer deposition stage. In this study, to increase the hardness of the surface of aluminum parts, a nanocomposite Ni-P/Si3N4 coating (Figure 1) and hybrid Ni-P/Si3N4/graphite coatings and, for comparison reasons, the Ni-P coating were deposited using the method of chemical reduction. Particular emphasis was placed on the two most important indicators of the suitability of the tested coating materials: microhardness and adhesion of the layers to the aluminum substrate. In composite and nanocomposite coatings, the main load is carried by the matrix. Dispersion particles oppose the movement of dislocations, which in turn causes the strengthening of the coating material. On this basis, it is assumed that the degree of matrix strengthening is proportional to the ability of the particles to oppose the movement of dislocations. Graphite, unlike hard silicon nitride (Si3N4 is a super hard material), has other very necessary and useful properties that help reduce the wear of machine parts, which is why graphite lubricants are often used in gears. However, in this case, graphite was added as the second, minor dispersion phase mainly in order to achieve self-lubricating properties of the coating, which improves the tribological properties and, directly and indirectly, the adhesion of the layer to the substrate. Another aspect that decided about undertaking this topic is the technological aspect, i.e., the difficulties associated with the incorporation of a limited amount of this type of particle (graphite, PTFE or MoS2) into the matrix of electroless Ni-P coatings. A novelty is the attempt to achieve a synergy effect by selecting the optimal content of Si3N4 nanoparticles (therefore, several different contents of this dispersion phase were tested) with a minimum content of graphite particles in the Ni-P coating, which is to help in obtaining a hybrid coating composition characterized by both high hardness and adhesion (due to the use of Si3N4) and increased resistance to abrasive wear (due to the graphite particle content), also in conditions of short-term dry friction with an insufficient presence of a lubricant. However, before the tribological tests, basic tests were performed, i.e., surface morphology and topography, microhardness and layer adhesion. Materials and Methods Laboratory tests covered the AW-7075 aluminum alloy as the substrate material for the following electroless deposited coatings: Ni-P, Ni-P/Si3N4 (Figure 2), Ni-P/graphite and Ni-P/Si3N4/graphite. The chemical composition of aluminum alloy AW-7075 is presented in Table 1. The technological quality of the aluminum substrate and the deposited coatings, i.e., morphology, microhardness, chemical composition and roughness, as well as the quality of the transition layer, i.e., the adhesion of the layers to the substrate, were tested in the areological system under consideration. The dimensions of the AW-7075 alloy samples were as follows: diameter D = 50 mm and thickness g = 7 mm. Before the deposition of the Ni-P layers, the surfaces of the samples were degreased in an organic solvent, etched in an alkaline solution and galvanized in a multi-component solution. For the formation of Ni-P layers by chemical reduction, a multi-component bath was prepared with the following composition: NiSO4, reducer (NaH2PO2) and buffer (C2H3NaO2) to stabilize the reaction at pH 4.3-4.6. The details of the composition and concentrations are given in Tables 2 and 3. The bath temperature during the deposition process was 363 K. The thickness of the coatings was the same, 10 µm ± 2 µm, which was obtained by an appropriately selected deposition time (60 min) in the plating baths. The actual thickness of the layers was also verified by microscopic examination of the cross-sections of the samples, i.e., after cutting them and preparing metallographic specimens, as shown in Figure 3. 
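As a small worked illustration of the deposition parameters quoted above, the following sketch relates the reported coating thickness and deposition time to a nominal deposition rate; it is back-of-the-envelope arithmetic, not a value reported by the authors.

```python
# A minimal arithmetic sketch (not from the paper) relating coating thickness,
# deposition time and nominal deposition rate for the bath described above.
target_thickness_um = 10.0      # nominal coating thickness reported in the study
deposition_time_min = 60.0      # deposition time reported in the study

rate_um_per_min = target_thickness_um / deposition_time_min
print(f"Nominal deposition rate: {rate_um_per_min:.3f} um/min")   # ~0.167 um/min

# With the +/- 2 um tolerance, the corresponding time window at this rate:
for thickness in (8.0, 12.0):
    print(f"{thickness} um would take about {thickness / rate_um_per_min:.0f} min")
```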
Surface Morphology and Roughness The surface layers of the samples were characterized by qualitative surface image assessment, i.e., morphology tests were performed using a Keyence VHX 5000 optical microscope and a JEOL JSM-7800F scanning electron microscope. The surface morphology tests were carried out by SEM with the following parameters: 10 mm working distance and 15 keV. In addition, the structure of the alloy layer was analyzed by X-ray diffraction using a MiniFlex II Rigaku device. The surface roughness parameters were examined with an optical profilometer (Alicona IF-Portable RL) and the results are presented in Table 2. The tests were performed to characterize the surface layers of the deposited materials qualitatively and quantitatively. Profilometry analyses covered a square surface with dimensions of 1.014 × 1.014 mm in the central part of the sample. Microhardness Testing Microhardness tests were performed for the AW-7075 alloy and the Ni-P, Ni-P/Si3N4 and Ni-P/Si3N4/graphite layers deposited on the AW-7075 alloy. The tests were done by the Vickers method using a semi-automatic FM 800 microhardness tester (indenter load: 250 mN, load duration: 10 s) according to PN-EN ISO 6507-1:2018-05 [35]. The averaged measurement results are presented in Table 3. For the substrate material and the deposited layers, 4 measurements were taken for each sample. Adhesion Testing Adhesion tests of the coatings deposited on the AW-7075 alloy were performed using the scratch test method according to PN-EN ISO 20502:2016-05 [36]. The Revetest device by CSEM with a Rockwell indenter was used for the tests, with a progressively increasing load from 1 to 100 N and a constant indenter displacement speed of 10 mm/min. The length of each scratch was 10 mm. Two scratches were made on each sample, and the acoustic emission signal, friction force, friction coefficient and normal force were recorded simultaneously during the measurements. Detailed analyses of the course of scratch formation and of adhesive and cohesive cracks were carried out using the Keyence VHX 5000 microscope. Results All samples made of the AW-7075 aluminum alloy had a chemical composition according to the data in Table 1. The results of the measurements are presented in the order of the performed tests, i.e., morphology, roughness, microhardness, and adhesion. Layers Characteristics The results of the morphology of the Ni-P, Ni-P/Si3N4, Ni-P/graphite and Ni-P/Si3N4/graphite layers with a thickness of 10 µm are presented in Figures 3-6, while the surface topography of the layers and the roughness parameters are shown in Figures 7-9 and Table 4. 
The structure of the coating depends on the type of layer and its chemical composition [3].The material of the layers formed by chemical reduction was a solid solution of phosphorus in Ni-P nickel-containing 10 wt.% P. The image showing a cross-sectional view of the sample where the coating was deposited on the aluminum substrate, together with the actual layer thickness is presented in Figure 1.The images of the surface of the layers obtained with the use of a scanning electron microscope are shown in Figure 4.The structure of the surfaces of the tested nanocomposite layers was similar, due to the dispersed phase content.The same correlation was observed for the hybrid coatings.The coatings were characterized by a compact, homogeneous structure and even deposition.The coatings on all surfaces of the samples were of a constant thickness of 10 µm.No defects were observed in the layer materials in the form of cracks, inclusions or localized delamination.[(a)-Ni-P/Si3N4/graphite ((0.5 + 0.5) g/dm 3 ), (b)-Ni-P/Si3N4/graphite ((1 + 0.5) g/dm 3 ), (c)-Ni-P/Si3N4/graphite ((2 + 0.5) g/dm 3 ), (d)-Ni-P/Si3N4/graphite ((5 + 0.5) g/dm 3 ), (e)-Ni-P/graphite (0.5 g/dm 3 )]. In the presented studies of profiles and surface roughness, two-and three-dimensional analysis was used.The parameters that are most often used in industrial conditions were selected, while the analyses and descriptions of the roughness results mainly included elements related to the peaks and valleys of given profiles.[(a)-Ni-P/Si 3 N 4 /graphite ((0.5 + 0.5) g/dm 3 ), (b)-Ni-P/Si 3 N 4 /graphite ((1 + 0.5) g/dm 3 ), (c)-Ni-P/Si 3 N 4 /graphite ((2 + 0.5) g/dm 3 ), (d)-Ni-P/Si 3 N 4 /graphite ((5 + 0.5) g/dm 3 ), (e)-Ni-P/graphite (0.5 g/dm 3 )].Analyses of surface topography parameters were performed for samples with deposited nanocomposite and hybrid layers, as well as, for comparison purposes, for a sample with a Ni-P coating without a dispersed phase and a sample of AW-7075 aluminum alloy with no deposited layer.The thickness of all analyzed coatings was 10 µm (Table 4).An unambiguous determination of the influence of the composition of the dispersed phase on the values of the parameters Sq and Sa is not possible.There is a noticeable tendency for the values of those parameters to increase when small amounts of the reinforcing phase components are added, i.e., 0.5 g of graphite and 0.5 g and 1 g of Si3N4 nanoparticles (Figure 7).Taking Sq = 0.889 (Ni-P) as the reference value.Adding graphite (Ni-P/graphite (0.5 g) resulted in a significant increase in this value to 4.721.In the case of samples from the Ni-P/Si3N4 group (0.5; 1; 2; 5 g), after an initial significant increase (Sq = 4.971 for 0.5 g), along with the increase in the additive content, the value of the parameter decreased to the level of Sq = 0.602 for 2 g and Sq = 0.655 for 5 g of Si3N4 nanoparticles (Figure 6).Adding graphite (0.5 g) to the Ni-P/Si3N4 layer results in an increase in the value of Sq, where, in comparison with the Ni-P/Si3N4 layer, the following was observed for all analyzed coatings-except for one coating: Ni-P/Si3N4/graphite ((5 + 0.5) g).The analysis of the parameters S10z and Sz indicates an increase in the dispersion of the values of the valleys and peaks for coatings containing 0.5 g of graphite or 0.5 g of Si3N4.Increasing the content of graphite and Si3N4 nanoparticles in the composite and hybrid coatings stabilizes the value of Sz and S10z at a level similar to that of the Ni-P coating.In the case of the formation of coatings where a 
symmetric surface is required, the parameter Ssk (surface asymmetry coefficient) is very important.Significant differences are noticeable for this parameter.When analyzing the data, it was observed that the smallest asymmetry values were obtained for Ni-P/Si3N4 (2 g) (Ssk = 0.156) and Ni-P (Ssk = 1.389) samples.The Ni-P/Si3N4 (1 g) and Ni-P/Si3N4/ graphite ((5 + 0.5) g) samples are 3 ), (b)-Ni-P/Si3N4/graphite ((1 + 0.5) g/dm 3 ), (c)-Ni-P/Si3N4/graphite ((2 + 0.5) g/dm 3 ), (d)-Ni-P/Si3N4/graphite ((5 + 0.5) g/dm 3 )]. An unambiguous determination of the influence of the composition of the dispersed phase on the values of the parameters Sq and Sa is not possible.There is a noticeable tendency for the values of those parameters to increase when small amounts of the reinforcing phase components are added, i.e., 0.5 g of graphite and 0.5 g and 1 g of Si3N4 nanoparticles (Figure 7).Taking Sq = 0.889 (Ni-P) as the reference value.Adding graphite (Ni-P/graphite (0.5 g) resulted in a significant increase in this value to 4.721.In the case of samples from the Ni-P/Si3N4 group (0.5; 1; 2; 5 g), after an initial significant increase (Sq = 4.971 for 0.5 g), along with the increase in the additive content, the value of the parameter decreased to the level of Sq = 0.602 for 2 g and Sq = 0.655 for 5 g of Si3N4 nanoparticles (Figure 6).Adding graphite (0.5 g) to the Ni-P/Si3N4 layer results in an increase in the value of Sq, where, in comparison with the Ni-P/Si3N4 layer, the following was observed for all analyzed coatings-except for one coating: Ni-P/Si3N4/graphite ((5 + 0.5) g).The analysis of the parameters S10z and Sz indicates an increase in the dispersion of the values of the valleys and peaks for coatings containing 0.5 g of graphite or 0.5 g of Si3N4.Increasing the content of graphite and Si3N4 nanoparticles in the composite and hybrid coatings stabilizes the value of Sz and S10z at a level similar to that of the Ni-P coating.In the case of the formation of coatings where a symmetric surface is required, the parameter Ssk (surface asymmetry coefficient) is very important.Significant differences are noticeable for this parameter.When analyzing the data, it was observed that the smallest asymmetry values were obtained for Ni-P/Si3N4 (2 g) (Ssk = 0.156) and Ni-P (Ssk = 1.389) samples.The Ni-P/Si3N4 (1 g) and Ni-P/Si3N4/ graphite ((5 + 0.5) g) samples are The last analyzed parameter was the surface inclination factor Ssq, which enables the detection of surface defects.In the case of aluminum alloy (Ssq = 2.877), the distribution of the profile ordinates is close to a normal distribution, while for the other samples, the distribution is slender, which means the presence of high peaks and deep valleys.Lower values were observed for Ni-P/graphite (0.5 g), Ni-P/Si3N4 (2 g), Ni-P/Si3N4/graphite ((0.5 + 0.5) g), Ni-P/Si3N4/graphite ((1 + 0.5) g) (up to Ssq = 10), while higher for Ni-P/Si3N4 (1 g), Ni-P/Si3N4 (5 g) and Ni-P/Si3N4/graphite ((5 + 0.5) g) (above Ssq = 146).The change in the distribution and intensity of the peaks can be observed in all graphics presenting the surface topography (Figures 7-9).The preliminary results of the microscopic examination (Figures 4-6) of the sample cross-sections confirmed the assumed thickness of the deposited coatings, which was 10 µm (Figure 3a).X-ray diffraction studies showed that the Ni-P material, on the basis of which the nanocomposite and hybrid coatings were created, has a mixed amorphousnanocrystalline structure (Figure 3b).The largest peak indicates the 
amorphous phase, while the smaller peaks in the diffraction pattern correspond to the crystalline phase.

In the presented studies of profiles and surface roughness, two- and three-dimensional analysis was used. The parameters that are most often used in industrial conditions were selected, while the analyses and descriptions of the roughness results mainly included elements related to the peaks and valleys of the given profiles. Analyses of surface topography parameters were performed for samples with deposited nanocomposite and hybrid layers, as well as, for comparison purposes, for a sample with a Ni-P coating without a dispersed phase and a sample of AW-7075 aluminum alloy with no deposited layer. The thickness of all analyzed coatings was 10 µm (Table 4).

An unambiguous determination of the influence of the composition of the dispersed phase on the values of the parameters Sq and Sa is not possible. There is a noticeable tendency for the values of those parameters to increase when small amounts of the reinforcing phase components are added, i.e., 0.5 g of graphite and 0.5 g or 1 g of Si3N4 nanoparticles (Figure 7). Taking Sq = 0.889 (Ni-P) as the reference value, adding graphite (Ni-P/graphite (0.5 g)) resulted in a significant increase in this value to 4.721. In the case of samples from the Ni-P/Si3N4 group (0.5; 1; 2; 5 g), after an initial significant increase (Sq = 4.971 for 0.5 g), the value of the parameter decreased with increasing additive content, down to Sq = 0.602 for 2 g and Sq = 0.655 for 5 g of Si3N4 nanoparticles (Figure 6). Adding graphite (0.5 g) to the Ni-P/Si3N4 layer increased the value of Sq relative to the corresponding Ni-P/Si3N4 layer; this was observed for all analyzed coatings except one: Ni-P/Si3N4/graphite ((5 + 0.5) g). The analysis of the parameters S10z and Sz indicates an increase in the dispersion of the values of the valleys and peaks for coatings containing 0.5 g of graphite or 0.5 g of Si3N4. Increasing the content of graphite and Si3N4 nanoparticles in the composite and hybrid coatings stabilizes the values of Sz and S10z at a level similar to that of the Ni-P coating.
In the case of the formation of coatings where a symmetric surface is required, the parameter Ssk (surface asymmetry coefficient) is very important. Significant differences are noticeable for this parameter. When analyzing the data, it was observed that the smallest asymmetry values were obtained for the Ni-P/Si3N4 (2 g) (Ssk = 0.156) and Ni-P (Ssk = 1.389) samples. The Ni-P/Si3N4 (1 g) and Ni-P/Si3N4/graphite ((5 + 0.5) g) samples are characterized by the greatest asymmetry. Positive skewness was observed for all coatings, in contrast to the non-coated aluminum alloy (Ssk = −0.114). A significant increase in the parameters was observed for the Ni-P/graphite (0.5 g) coating, although in this case the Ssk value decreases, causing roughness deviations to dominate to a greater extent than in the case of the Ni-P coating. After taking into account the microhardness and adhesion of the coatings, lower Ssk values may indicate the potential use of selected layers in sliding connections. Adding Si3N4 (0.5 g) to the Ni-P coating results in surface structure properties similar to those of Ni-P/graphite (0.5 g). Increasing the amount of Si3N4 decreases the values of all parameters except for Ssk, which increases significantly. The increasing deviation of those values indicates an uneven distribution of maximum heights. For Ni-P/graphite (0.5 g), Ni-P/Si3N4/graphite ((0.5 + 0.5) g) and Ni-P/Si3N4/graphite ((0.5 + 1) g), similar parameter values were recorded. The effect of increasing the Si3N4 content in the Ni-P/Si3N4/graphite composition on the reduction of surface roughness, with a clear increase in the Sku value, is noticeable (Figure 7).

The last analyzed parameter was the surface inclination factor Ssq, which enables the detection of surface defects. In the case of the aluminum alloy (Ssq = 2.877), the distribution of the profile ordinates is close to a normal distribution, while for the other samples the distribution is slender, which means the presence of high peaks and deep valleys. Lower values were observed for Ni-P/graphite (0.5 g), Ni-P/Si3N4 (2 g), Ni-P/Si3N4/graphite ((0.5 + 0.5) g) and Ni-P/Si3N4/graphite ((1 + 0.5) g) (up to Ssq = 10), while higher values were observed for Ni-P/Si3N4 (1 g), Ni-P/Si3N4 (5 g) and Ni-P/Si3N4/graphite ((5 + 0.5) g) (above Ssq = 146). The change in the distribution and intensity of the peaks can be observed in all graphics presenting the surface topography (Figures 7-9).

Based on the measurement results, nanocomposite and hybrid coatings show similar values of the roughness parameters, but these values are highly dependent on the quantitative composition of the dispersed phase. The dominance of graphite in the chemical composition of the coating material, both in the composite and hybrid coatings, negatively affects the surface topography properties, although the exact extent to which this determines the tribological suitability cannot be determined at this stage.

In general, the results of the conducted tests indicate that the presence of Si3N4 in the dispersed phase composition, above 2 g, guarantees the best surface topography.
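For reference, the areal parameters discussed above follow standard height-map definitions (Sq and Sa as the root-mean-square and arithmetic mean of the height deviations, Ssk and Sku as their skewness and kurtosis). The short sketch below is only an illustration of those definitions on a synthetic height map; the array values and the peak-seeding step are hypothetical and are not measurement data from this study.

```python
import numpy as np

def areal_roughness(heights):
    """ISO 25178-style areal parameters from a 2-D height map (heights in µm)."""
    z = heights - heights.mean()            # deviations from the mean plane
    sq = np.sqrt((z ** 2).mean())           # Sq: root-mean-square height
    sa = np.abs(z).mean()                   # Sa: arithmetic mean height
    ssk = (z ** 3).mean() / sq ** 3         # Ssk: skewness (surface asymmetry)
    sku = (z ** 4).mean() / sq ** 4         # Sku: kurtosis (slenderness of the distribution)
    sz = z.max() - z.min()                  # Sz: maximum peak-to-valley height
    return {"Sq": sq, "Sa": sa, "Ssk": ssk, "Sku": sku, "Sz": sz}

# Hypothetical height map: baseline roughness plus a few sparse tall peaks,
# which drives Ssk positive (as observed for the coated samples).
rng = np.random.default_rng(0)
surface = rng.normal(0.0, 0.9, size=(512, 512))
surface[rng.random(surface.shape) < 0.001] += 8.0
print(areal_roughness(surface))
```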
Surface roughness primarily affects fatigue life and contact problems [37]. Irregularities on the coating surface can lead to increased friction, especially when interacting with another rough surface. When a material is miniaturized, the effect of surface roughness on its mechanical properties increases, because rough surfaces act as stress concentration centers, thereby reducing the endurance limit. Surface roughness can therefore impact the fatigue life of coatings: stress concentrations tend to occur at the peaks and valleys of rough surfaces, leading to localized stress intensities and potential initiation points for cracks.

Layer Microhardness

The Vickers microhardness test was performed for the aluminum alloy, nickel, nanocomposite and hybrid layers; the results are presented in Table 5. The Ni-P, Ni-P/Si3N4 and Ni-P/Si3N4/graphite coatings showed several times higher hardness compared to the AW-7075 alloy. As the dispersed phase content of the tested coatings changed, their hardness also changed. The Vickers microhardness values for the group of nanocomposite layers and, separately, for the group of hybrid layers are similar; however, detailed analyses of the results indicate the predominance of the Ni-P/Si3N4 (2 g) layer among the nanocomposite coatings and of the Ni-P/Si3N4/graphite ((2 + 0.5) g) coating among the hybrid layers, for which the highest microhardness values were observed. Based on the results of the microscopic analyses of the areas where the indentations were made, and on the results of microhardness tests of the coatings in the cross-sections of the metallographic specimens, the tested layers were not pierced by the Vickers indenter during microhardness tests of the sample surfaces, and no influence of the substrate on the final results was observed. It was found that there was a general increase in the microhardness of the surface layer with the introduction of dispersed phases, compared to nickel layers without any reinforcing phase. However, the highest values were observed for coatings deposited in a galvanic bath with a silicon nitride content of 2 g, both for nanocomposite and hybrid layers (with a clear indication of the hybrid ones). Further increases in the dispersed phase content have a slightly negative effect on the mechanical properties of the whole layer. To sum up, the Ni-P/Si3N4/graphite ((2 + 0.5) g) layer has the best mechanical properties.

Microhardness can synthetically reflect the elasticity, plasticity and strength of materials. Moreover, microhardness tests provide information about the local surface properties of a material, rather than its bulk properties. The obtained values can be correlated with the yield strength or hardness of the material, giving insight into its resistance to deformation or penetration. The microhardness of coatings can influence the coefficient of friction between the coating surface and the opposing surface. High microhardness of a coating typically leads to a lower coefficient of friction, resulting in less energy lost to friction. Hard coatings can exhibit greater resistance to wear, which is particularly important in applications involving high mechanical loads and intense friction. Coatings with appropriate microhardness can create a smooth and uniform surface, promoting better lubricant distribution and reducing friction [38][39][40].
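As a point of reference for the values reported in Table 5, Vickers microhardness is conventionally obtained from the applied load and the mean diagonal of the residual indentation through the standard relation HV = 1.8544·F/d². The snippet below is a minimal sketch of that relation; the load and diagonal values are hypothetical examples, not measurements from this work.

```python
def vickers_hardness(load_kgf, mean_diagonal_mm):
    """Standard Vickers relation: HV = 1.8544 * F / d^2 (F in kgf, d in mm)."""
    return 1.8544 * load_kgf / mean_diagonal_mm ** 2

# Hypothetical indentation: 100 gf (0.1 kgf) load and a 19 µm mean diagonal.
print(f"HV ≈ {vickers_hardness(0.1, 0.019):.0f}")   # ≈ 514 HV0.1
```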
Layer Adhesion

Changes in the parameters of the scratch process, such as the normal force, friction force, friction coefficient and acoustic emission along the scratch during the scratch test, are presented in Figures 11 and 12. An example of the sample surface, with the SEM image of the area from where the layer was removed, is shown in Figure 13, and general images of the scratches are shown in Figure 14. The Ni-P/Si3N4 nanocomposite layers obtained in baths with a Si3N4 phase content of 1 and 2 g/dm³ were characterized by greater adhesion to the substrate compared to the Ni-P and Ni-P/graphite layers, as well as the Ni-P/Si3N4 layers obtained in baths with phase contents of 0.5 and 5 g/dm³. The Ni-P/Si3N4 nanocomposite layers with a Si3N4 phase content of 5 g/dm³ in the bath showed the weakest adhesion to the substrate of all the tested layers. In turn, the Ni-P/Si3N4/graphite hybrid layers showed the greatest adhesion to the substrate, which was clear and noticeable based on the critical load results and on the loads at which the coatings were completely removed (Table 6), as well as on the accompanying graphs (Figures 11 and 12), where the acoustic emission signal is most stable. Detailed descriptions of the scratch tests for all types of layers are presented in Tables 7-16. It was observed that the addition of graphite to the nanocomposite coating contributes to an increase in adhesion of 20 to 100% compared to the Ni-P/Si3N4 layers. However, the addition of graphite as the only dispersed phase in the Ni-P coating negatively affects the mechanical properties and reduces layer adhesion by 12.5% in relation to the Ni-P coating. It was noticed that the Ni-P/graphite layer was characterized by the lowest adhesion among all tested coatings. However, by combining two different dispersed phases in the form of Si3N4 and graphite (hybrid coatings), a synergy effect was obtained in terms of all mechanical properties, which was fully noticeable in the results of the adhesion tests. No delamination of the coatings was observed in any of the scratch tests. During each test, clear acoustic signals were recorded, which, together with detailed microscopic observations, made it possible to accurately determine the behavior of the layer materials, as well as their strength and resistance to cracking. Based on the performed analyses, Table 4 presents the critical loads Lc1 and Lc2 relating to damage in the form of cohesive and adhesive cracks, respectively, for each tested layer. Additionally, the area
of complete removal of the layers was determined and taken into account. To sum up, based on the conducted comparative analysis of the obtained adhesion test results, it was shown that the highest critical loads Lc2 can be transferred by the Ni-P/Si3N4/graphite ((5 + 0.5) g) hybrid layer.

Conclusions

Nickel Ni-P, nanocomposite Ni-P/Si3N4 and hybrid Ni-P/Si3N4/graphite layers of an amorphous structure deposited by chemical reduction on the AW-7075 aluminum alloy are compact and are also characterized by good adhesion to the substrate material. Nanocomposite and hybrid layers show a greater degree of surface development compared to the Ni-P layer. The incorporation of dispersed phases in the form of Si3N4 powder with nanometric particle sizes, as well as graphite, in the Ni-P matrix contributes to the improvement of the mechanical properties of the layers. Nanocomposite and hybrid coatings are characterized by higher hardness than layers without a dispersed phase, i.e., Ni-P. The presence of Si3N4 nanoparticles together with graphite has a positive effect on the mechanical properties and adhesion to the substrate of the tested layers; however, the presence of graphite alone in the coating has a negative effect on adhesion, and both adhesion and microhardness are noticeably lower, even compared to the basic comparative Ni-P coating.

Furthermore, despite the fact that the most favorable results of the profilometric tests were obtained for coatings with a Si3N4 content of more than 2 g, it must be stressed that the profilometric tests characterized the technological quality (a priori), i.e., the state of the samples after the completion of their production. Only when the functional quality (a posteriori), i.e., the state of the product during its exploitation after tribological tests, is taken into account will it be possible to determine the real impact of individual surface topographies on operational suitability.

In general, on the basis of this work, four elementary points referring to the test results and providing the basis for further measurements on electroless hybrid coatings containing silicon nitride and graphite were distinguished:
• Enrichment of the Ni-P electroless coating with an amorphous structure with Si3N4 nanoparticles and graphite particles helps to improve the basic mechanical properties of the layer.
• The chemical reduction method used for the deposition of Ni-P/Si3N4/graphite hybrid layers allows for the incorporation of any content of Si3N4 nanoparticles and a limited content of graphite particles.
• The combination of two dispersion phases increases the adhesion of the layer to the substrate made of AW-7075 aluminum alloy compared to a coating with one dispersion phase.
• An excessive content of silicon nitride nanoparticles results in a decrease in the microhardness of the layers.

The results of the presented research provide a good basis for modifying the properties of components made of aluminum alloys by a surface treatment consisting of applying nanocomposite and hybrid alloy layers by chemical reduction, which makes it possible to increase the functionality of finished products.

Figure 1. Schematic diagram of incorporation of Si3N4 nanoparticles into Ni-P matrix.
Figure 3. The elementary research results, containing a microscopic image of the Ni-P/Si3N4 (5 g/dm³) cross-section at 50× magnification (a) and diffraction patterns of the Ni-P alloy coating (b).
Figure 13. Example SEM images of the scratch in four stages on the same Ni-P layer after the test [(a) the 4th (last) step of the test, (b) the 3rd step, (c) the 2nd step, (d) the 1st step].
Table 2. Component concentrations of the multi-constituent substance for galvanizing.
Table 3. Component concentrations of the nickel deposition bath.
Table 4. Results of surface topography measurements of nickel, nanocomposite and hybrid coatings.
Table 7. Description of the scratch test for the Ni-P layer.
Table 8. Description of the scratch test for the Ni-P/graphite (0.5 g/dm³) layer.
9,746.8
2023-09-01T00:00:00.000
[ "Materials Science" ]
Automated diagnosis of plus disease in retinopathy of prematurity using quantification of vessels characteristics The condition known as Plus disease is distinguished by atypical alterations in the retinal vasculature of neonates born prematurely. It has been demonstrated that the diagnosis of Plus disease is subjective and qualitative in nature. The utilization of quantitative methods and computer-based image analysis to enhance the objectivity of Plus disease diagnosis has been extensively established in the literature. This study presents the development of a computer-based image analysis method aimed at automatically distinguishing Plus images from non-Plus images. The proposed methodology conducts a quantitative analysis of the vascular characteristics linked to Plus disease, thereby aiding physicians in making informed judgments. A collection of 76 posterior retinal images from a diverse group of infants who underwent screening for Retinopathy of Prematurity (ROP) was obtained. A reference standard diagnosis was established as the majority of the labeling performed by three experts in ROP during two separate sessions. The process of segmenting retinal vessels was carried out using a semi-automatic methodology. Computer algorithms were developed to compute the tortuosity, dilation, and density of vessels in various retinal regions as potential discriminative characteristics. A classifier was provided with a set of selected features in order to distinguish between Plus images and non-Plus images. This study included 76 infants (49 [64.5%] boys) with mean birth weight of 1305 ± 427 g and mean gestational age of 29.3 ± 3 weeks. The average level of agreement among experts for the diagnosis of plus disease was found to be 79% with a standard deviation of 5.3%. In terms of intra-expert agreement, the average was 85% with a standard deviation of 3%. Furthermore, the average tortuosity of the five most tortuous vessels was significantly higher in Plus images compared to non-Plus images (p ≤ 0.0001). The curvature values based on points were found to be significantly higher in Plus images compared to non-Plus images (p ≤ 0.0001). The maximum diameter of vessels within a region extending 5-disc diameters away from the border of the optic disc (referred to as 5DD) exhibited a statistically significant increase in Plus images compared to non-Plus images (p ≤ 0.0001). The density of vessels in Plus images was found to be significantly higher compared to non-Plus images (p ≤ 0.0001). The classifier's accuracy in distinguishing between Plus and non-Plus images, as determined through tenfold cross-validation, was found to be 0.86 ± 0.01. This accuracy was observed to be higher than the diagnostic accuracy of one out of three experts when compared to the reference standard. The implemented algorithm in the current study demonstrated a commendable level of accuracy in detecting Plus disease in cases of retinopathy of prematurity, exhibiting comparable performance to that of expert diagnoses. By engaging in an objective analysis of the characteristics of vessels, there exists the possibility of conducting a quantitative assessment of the disease progression's features. The utilization of this automated system has the potential to enhance physicians' ability to diagnose Plus disease, thereby offering valuable contributions to the management of ROP through the integration of traditional ophthalmoscopy and image-based telemedicine methodologies. 
Screening and treatment

Multiple studies [6][7][8] have demonstrated that ROP, specifically when characterized by Plus disease, can be successfully managed through laser photocoagulation 6,7 or intravitreal injection of bevacizumab 8 when diagnosed promptly. Wide-field retinal imaging, such as the RetCam 9, offers the potential to conduct tele-ROP screening in conjunction with a reading center 10,11. This approach has enhanced both the availability of ROP screening and the impartiality of diagnostic procedures. However, the clinical diagnosis of ROP continues to be subjective, resulting in significant variability and inconsistency in diagnosis. This inconsistency has been observed even among experts in ROP, as documented in previous studies 12,13.

Several research groups have investigated the development of artificial intelligence and computer-based image analysis techniques for the purpose of improving the objectivity of, and automating, the diagnosis of ROP with Plus disease 11,[14][15][16][17][18][19][20][21][22][23]. Nevertheless, to our knowledge, little work has been done on quantitative analysis of the image features that are the focus of clinicians during diagnosis and treatment. This study aimed to develop a computer-assisted system for detecting Plus disease by characterization of vascular features, with the intention of enhancing diagnostic accuracy and enabling quantitative monitoring of treatment progress.

Ethics

Five ROP experts conducted the collection of the images, and three of them participated in the labeling of the fundus images. The present study adhered to the principles outlined in the Declaration of Helsinki and received approval from the institutional review board of Tehran University of Medical Sciences. Written informed consent was obtained from the parents of all patients involved in the study, granting permission for imaging and participation in the research. Furthermore, prior to the viewing of images, we took the precautionary measure of removing all sensitive information pertaining to the patients. This was done to guarantee the preservation of their anonymity and confidentiality.
Subjects and reference standard diagnosis

We compiled a database of 76 wide-angle posterior retinal images, each of which relates to a different preterm infant, acquired during routine clinical care. Participants in the study were 27 female (35.5%) and 49 male (64.5%) infants with an average birth weight of 1305 g (SD = 427 g) and an average gestational age of 29.3 weeks (SD = 3 weeks). The infants were examined in a participating neonatal care unit at Farabi Hospital in Tehran, Iran, between January 1, 2019 and December 30, 2020. The criteria for ROP screening were established based on a published guideline; these criteria include a birth weight (BW) below 2000 g, a gestational age (GA) of 32 weeks or less, or a determination of ROP risk by the pediatrician or neonatologist 24. We excluded all examinations performed on eyes that had undergone prior treatment for ROP, as well as infants with relevant medical conditions, such as respiratory distress syndrome, bronchopulmonary dysplasia or any systemic or ocular disease that could affect the retinal examination. All images were captured using a wide-angle imaging device (RetCam; Clarity Medical Systems, Pleasanton, CA). The images have a resolution of 1200 by 1600 pixels. Three ROP expert observers independently categorized the selected images as "Plus" or "non-Plus". A reference standard diagnosis (i.e., Plus or non-Plus) was defined for each image as the diagnosis provided by the majority of the three experts. In order to assess inter-expert and intra-expert reliability, each of the three experts submitted their labeling twice over the course of 10 days. The final dataset for this study contained 76 images, consisting of 37 non-Plus images and 39 Plus images based on the reference standard.

Image selection and preprocessing

The challenges related to capturing retinal images of infants stem from inadequate patient cooperation and their limited attention span during the image acquisition process. These factors contribute to the production of low-quality images characterized by artifacts such as focusing issues, contrast deficiencies, motion blurring, and uneven illumination. Regions with low contrast and/or out-of-focus areas that lack distinct vessel boundaries have a detrimental impact on the segmentation of vessels and the precision of subsequent analysis. In order to address the concerns regarding image quality, an expert systematically eliminated low-quality images from the dataset. To obtain a more accurate vessel segmentation, the selected images underwent image enhancement to address uneven illumination and to enhance the contrast between the vessels and the background retina.
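A minimal sketch of the kind of enhancement described above, flattening uneven illumination and boosting vessel-to-background contrast on the green channel, is given below. Background subtraction followed by CLAHE is used here as one plausible implementation; the file name, blur scale and CLAHE settings are illustrative assumptions rather than the study's exact preprocessing pipeline.

```python
import cv2
import numpy as np

def enhance_fundus(path_to_image):
    """Correct uneven illumination and boost vessel contrast (illustrative sketch)."""
    bgr = cv2.imread(path_to_image)
    green = bgr[:, :, 1].astype(np.float32)        # green channel: best vessel contrast

    # Estimate the slowly varying illumination with a heavy blur and remove it
    background = cv2.GaussianBlur(green, (0, 0), sigmaX=40)
    flat = np.clip(green - background + green.mean(), 0, 255).astype(np.uint8)

    # Local contrast enhancement (CLAHE)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(flat)

# Hypothetical usage; "rop_001.png" is a placeholder file name.
cv2.imwrite("rop_001_enhanced.png", enhance_fundus("rop_001.png"))
```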
This study focuses on the development of algorithms for the characterization of two main vascular image features related to Plus disease, namely the tortuosity and dilation of vessels. The performance of these algorithms in distinguishing between subjects with Plus disease and those without Plus disease was evaluated. Additionally, we compared the density of vessels in the retinal images to determine whether there are statistically significant differences between the two groups of subjects with respect to this particular vessel characteristic. The performance of our algorithms was evaluated through cross-validation against a reference standard. The reference standard was determined by aggregating the diagnoses provided by a panel of three experts in ROP. In brief, the objectives of our study encompass two main aspects. Firstly, we aim to evaluate the variability in diagnosing Plus disease among different experts as well as within individual experts. Secondly, we seek to develop a computer-assisted system for detecting Plus disease, with the intention of enhancing diagnostic accuracy and enabling quantitative monitoring of treatment progress.

Vessels segmentation

Accurate segmentation of vessels is essential for quantifying their characteristics, such as tortuosity, through extracting vessel masks. The majority of existing studies have relied on the manual creation of vessel masks for either the entire retinal image or specific vessel segments. This process involves the use of graphical editing software, such as Photoshop, which is both time-consuming and heavily reliant on the operator's skills and expertise.

In this research, the authors employed a combination of the top-hat transform technique proposed by Sharafi et al. 25,26 for retinal vessel segmentation, along with a method introduced by Strisciuglio et al. 27 that utilizes a set of B-COSFIRE filters specifically designed for vessel detection. By integrating these two approaches, the researchers generated a vessel mask for each ROP image in an automated manner. In light of the aforementioned artifacts observed in ROP images and the imperative need for precise segmentation of vessels for subsequent analysis, we devised a graphical user interface (GUI) aimed at rectifying the inaccuracies present in the masks generated through automated segmentation (Supplementary File 1). The segmentation of the optic discs was also performed using the mentioned GUI. Our team is currently finalizing the development of a comprehensive approach that combines various methods into a vessel segmentation pipeline for ROP images; this pipeline aims to achieve both high accuracy and full automation. The segmentation of vessels in ROP images was thoroughly reviewed by a ROP expert (EKP) in this study.

Quantifying vessels' tortuosity

Arterial tortuosity plays a crucial role in the international classification system for ROP, as it is utilized in the diagnosis of Plus disease 28. There are several studies in the literature that quantify the tortuosity of vessels as a primary characteristic for distinguishing between Plus and non-Plus 17,19,22,29.
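The study's own segmentation, described in the Vessels segmentation subsection above, combines a top-hat transform with B-COSFIRE filtering followed by manual correction in a GUI; that pipeline is not reproduced here. As a rough, assumed stand-in, the sketch below shows how a crude binary vessel mask could be extracted from the enhanced green channel with a morphological black-hat transform, Otsu thresholding and small-component removal. All parameter values are illustrative.

```python
import cv2
import numpy as np

def rough_vessel_mask(enhanced_green):
    """Very rough vessel mask via a morphological black-hat transform (sketch only)."""
    # Vessels appear darker than the surrounding retina, so the black-hat
    # (closing minus original) highlights thin dark structures.
    se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
    blackhat = cv2.morphologyEx(enhanced_green, cv2.MORPH_BLACKHAT, se)

    # Otsu threshold on the black-hat response, then remove small specks
    _, mask = cv2.threshold(blackhat, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    cleaned = np.zeros_like(mask)
    for i in range(1, n):                                  # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= 100:              # keep components >= 100 px
            cleaned[labels == i] = 255
    return cleaned

# Hypothetical usage on the enhanced image from the previous sketch.
mask = rough_vessel_mask(cv2.imread("rop_001_enhanced.png", cv2.IMREAD_GRAYSCALE))
cv2.imwrite("rop_001_mask.png", mask)
```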
The utilization of distance-based metrics to quantify the tortuosity of vessels simply involves computing the path length traversed by a vessel segment or curve and dividing it by the distance between the two endpoints of the segment. This methodology is alternatively referred to as the arc-over-chord ratio, commonly denoted as the tortuosity index. Nevertheless, the existing body of literature presents alternative approaches for calculating tortuosity that rely on curvature, which yield more precise results. A recently published study demonstrates that curvature-based methods exhibit greater consistency and intuitiveness compared to the tortuosity index. Additionally, it highlights an inherent limitation of metrics like the tortuosity index, which fail to account for the entirety of the geometry 30.

To achieve the desired level of accuracy, we employed the squared-derivative-curvature method 31, which has been widely utilized in previous studies to calculate the tortuosity of retinal vessels. The tortuosity is estimated by evaluating the integral of the square of the derivative of curvature, divided by the length of the arc of the vessel segment (Eq. 1):

τ = (1/L_c) ∫ (dκ/ds)² ds   (1)

In this particular instance, the tortuosity of a perfectly linear segment of a vessel is approximately 0. Here, κ is the curvature of the vessel segment. In order to determine the curvature of a vessel segment, the centerline of the vessel was obtained by applying morphological thinning to the vessel segment (skeletonization). This process involved iteratively removing pixels from the vessel's boundary. This operation maintains the topological properties and does not alter the fundamental structure of the vessel segment. Following the process of skeletonization, any spur edges that may have been present as small branches alongside the skeletonized vessel segment were removed, so that the vessel segment had only two endpoints.

Curvature, denoted as κ, refers to the degree to which a curve deviates from a straight line. Curvature is related to the tangent vector at each point on a curve, as will be discussed in more detail. From a geometric standpoint, the curvature κ can be precisely characterized as the rate at which the tangent vector to a curve changes its orientation. In order to determine κ, it is necessary to first represent the skeletonized vessel segment by a continuous curve. To accomplish this, we approximated the skeletonized vessel segment S in our digital image using a cubic spline (a continuously differentiable function defined by piecewise polynomials). If S is represented by centerline pixels located at coordinates (x1, y1), (x2, y2), …, (xn, yn), the spline is an interpolation over those coordinates. Nevertheless, when employing all the coordinates in S for the interpolation, the resulting spline is excessively noisy and lacks the smoothness required to accurately represent a vessel segment. Consequently, it was necessary to downsample the initial coordinates and approximate a smoother spline function over the sampled data points.

Assuming that a particle is moving along the spline γ, both x and y are considered functions of a time variable t, and the spline γ is described as γ(t) = (x(t), y(t)), where γ(t) represents the position of the particle at time t. The curvature κ at point t, i.e., κ(t), can be estimated by Eq.
2, where primes refer to derivatives d/dt with respect to the parameter t:

κ(t) = (x′(t) y″(t) − y′(t) x″(t)) / (x′(t)² + y′(t)²)^(3/2)   (2)

L_c, the curve length, is calculated by the formula given in Eq. 3:

L_c = ∫ √(x′(t)² + y′(t)²) dt   (3)

Figure 1 illustrates the sequential procedures employed for the computation of tortuosity of an arteriole segment within a retinal image obtained from a patient exhibiting Plus disease. The curvature values along the vessel segment are visually represented through a color-coding scheme, where regions with high curvature are depicted in red and regions with low curvature are depicted in blue. The first and second derivatives of the final spline at selected points are also shown as red and yellow arrows, respectively.

Vessel dilation

As previously stated, the expansion of blood vessels, known as vascular dilation, is a characteristic feature of Plus disease. To determine the diameter of each vessel segment, we employed the methodology outlined by Sharafi et al. 25. A random selection was made of one third of the centerline pixels within the vessel segment. For each selected pixel, the shortest line connecting it to the border of the vessel segment was determined and considered as the diameter of the vessel segment at that particular pixel. The diameter of the entire vessel segment was determined by taking the average of the diameter values at the selected centerline pixels. The diameter index for a specific zone was defined as the maximum diameter among all vessel segments within that zone, assuming that the venule has the highest diameter in that zone. Vessel dilation measurements were performed for the entire field of view and at a distance of 5 disc diameters away from the optic disc, as documented in previous studies 17,20.

Vessel density

Since ROP is a proliferative retinal vascular disease and Plus disease is characterized by vascular tortuosity and dilation, it is hypothesized that the vessel density in Plus disease is greater than that of a healthy retina. The density of retinal vessels has been extensively discussed in the academic literature as a potential indicator of ROP [32][33][34][35][36], particularly in relation to Plus disease 35. Therefore, in our study, we investigated the density of vessels as a potential indicator of Plus disease, in addition to evaluating tortuosity and dilation. The calculation of vessel density involved determining the ratio of the number of pixels located on the vessels of each image to the number of pixels located on the remaining portion of the entire vascularized region. The vascularized region was obtained through the application of morphological image dilations to the vessel mask.

Inter-expert and intra-expert variability and the experts' agreement

Cohen's kappa statistics were computed to evaluate the level of agreement between experts and the consistency of grades assigned by a single expert across two separate sessions. Tables 1 and 2 present the levels of inter-expert and intra-expert variability, respectively. Table 3 presents the concordance between individual experts and the reference standard. The average level of agreement among experts was found to be 79% with a standard deviation of 5.3%. In terms of intra-expert agreement, the average was 85% with a standard deviation of 3%.
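To make the curvature-based measures above concrete, the sketch below parameterizes a single (already skeletonized and downsampled) centerline by chord length, fits cubic splines x(t) and y(t), evaluates the curvature of Eq. (2) and the arc length of Eq. (3), and forms a squared-derivative-of-curvature tortuosity normalized by arc length in the spirit of Eq. (1). The synthetic centerline and sampling choices are assumptions for illustration, not the study's implementation.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def _trapz(y, x):
    """Simple trapezoidal integration (kept local to avoid NumPy version issues)."""
    return float(np.sum((y[:-1] + y[1:]) * np.diff(x)) / 2.0)

def curvature_tortuosity(xs, ys, n_samples=400):
    """Curvature-based tortuosity of one centerline (illustrative implementation)."""
    pts = np.column_stack([xs, ys]).astype(float)
    # Parameterize by cumulative chord length and fit cubic splines x(t), y(t)
    t = np.concatenate([[0.0], np.cumsum(np.linalg.norm(np.diff(pts, axis=0), axis=1))])
    sx, sy = CubicSpline(t, pts[:, 0]), CubicSpline(t, pts[:, 1])

    tt = np.linspace(t[0], t[-1], n_samples)
    x1, y1 = sx(tt, 1), sy(tt, 1)                 # first derivatives (tangent)
    x2, y2 = sx(tt, 2), sy(tt, 2)                 # second derivatives (acceleration)

    speed = np.hypot(x1, y1)
    kappa = (x1 * y2 - y1 * x2) / speed ** 3      # signed curvature kappa(t), cf. Eq. (2)
    arc_len = _trapz(speed, tt)                   # curve length L_c, cf. Eq. (3)

    dkappa_ds = np.gradient(kappa, tt) / speed    # d(kappa)/ds via the chain rule
    # Integral of (d kappa / ds)^2 ds, normalized by L_c, in the spirit of Eq. (1)
    tortuosity = _trapz(dkappa_ds ** 2 * speed, tt) / arc_len
    return kappa, arc_len, tortuosity

# Hypothetical, already-skeletonized and downsampled centerline of one vessel segment
s = np.linspace(0.0, 200.0, 25)
xs, ys = s, 8.0 * np.sin(s / 15.0)
kappa, L, tau = curvature_tortuosity(xs, ys)
print(f"arc length = {L:.1f} px, tortuosity = {tau:.3e}")
```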
Feature extraction and features impact

In accordance with the preceding section, a total of 10 features were initially extracted to delineate the attributes of tortuosity, dilation, and vessel density. In order to prioritize more discerning characteristics, we employed the Neighborhood Component Analysis (NCA) technique for feature selection 37. Ultimately, we identified and selected five features with greater weights. The selected features and their attributes are presented in Table 4. To represent some sample images of our dataset and the corresponding feature values calculated for each of the samples, Fig. 2 shows two samples of Plus images and two samples of non-Plus images with their associated feature values. Figure 3 presents a comparison between Plus and non-Plus images with respect to the selected features outlined in Table 4. Figure 3a presents a comparison between Plus and non-Plus images in terms of the average tortuosity values observed in the five vessel segments that exhibit the highest levels of tortuosity within the entire image (F1). There was a significant difference observed between Plus images and non-Plus images for this measure (p ≤ 0.0001). Figure 3b presents a comparison between Plus and non-Plus images in terms of the average tortuosity of their vessel segments within a region located 5 disc diameters away from the optic disc border, referred to as 5DD (F2). The tortuosity measure of Plus images exhibited a statistically significant difference compared to the non-Plus images (p ≤ 0.0001). Figure 3c presents a comparative analysis between Plus images and non-Plus images in relation to feature F3, specifically the average of the top 1% curvature values. There was a significant difference observed between Plus images and non-Plus images (p ≤ 0.0001). Figure 3d presents a comparison between Plus and non-Plus images in terms of the maximum vessel diameter observed within the 5DD region (F4). The diameter measurements of Plus images were significantly greater than those of non-Plus images (p ≤ 0.0001). Figure 3e presents a comparison of vessel density between Plus and non-Plus images (F5).

Table 4. List of the selected features, their descriptions, the region of the image that each feature was calculated for, and the method used to calculate each feature.
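The Plus versus non-Plus differences reported above for the selected features (p ≤ 0.0001) can, in principle, be checked with a simple two-sample test on each feature column. The sketch below uses a Mann-Whitney U test as one reasonable choice for skewed distributions; the feature matrix, group sizes and values are hypothetical stand-ins, not the study's data.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical feature matrix: rows = images, columns = F1..F5; labels: 1 = Plus, 0 = non-Plus
rng = np.random.default_rng(1)
labels = np.array([1] * 39 + [0] * 37)
features = np.vstack([
    rng.normal(1.0 + 0.8 * labels, 0.5),    # F1: mean tortuosity of the 5 most tortuous vessels
    rng.normal(0.8 + 0.6 * labels, 0.4),    # F2: mean tortuosity within the 5DD region
    rng.normal(0.5 + 0.7 * labels, 0.3),    # F3: mean of the top 1% curvature values
    rng.normal(6.0 + 1.5 * labels, 1.0),    # F4: maximum vessel diameter in 5DD (px)
    rng.normal(0.10 + 0.03 * labels, 0.02)  # F5: vessel density
]).T

for j, name in enumerate(["F1", "F2", "F3", "F4", "F5"]):
    plus, non_plus = features[labels == 1, j], features[labels == 0, j]
    stat, p = mannwhitneyu(plus, non_plus, alternative="two-sided")
    print(f"{name}: U = {stat:.0f}, p = {p:.2e}")
```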
Image classification

Since three out of the five selected features (i.e., F1, F2, and F3) were in the category of vessel tortuosity and were likely to be correlated, we attempted further dimension reduction by applying Principal Component Analysis (PCA) to the selected features. We obtained the best accuracy with the first, second, and third components, using them to train an SVM classifier. A radial basis function kernel and a regularization parameter of 1 were used for the SVM classification. The performance of our classifier was assessed using tenfold cross-validation. We also compared the accuracy of our classifier with the accuracy of the diagnoses provided by the experts against the reference standard. Table 5 displays the diagnostic accuracy of each expert and of the proposed method in distinguishing Plus images from non-Plus images. According to the table, the proposed method achieved an accuracy of 0.86 ± 0.01, placing it third among the accuracy values of the expert-provided diagnoses. The accuracy values for our classifier were obtained through repeated execution of tenfold cross-validation. Figure 4 illustrates the data points of Plus and non-Plus images in a three-dimensional space, utilizing the standardized values of the first, second, and third principal components of the chosen features. Figure 5 illustrates an overall view of the proposed method and a schematic of the outputs at each of the stages.

Discussion

Numerous research groups have undertaken investigations into the advancement of computer-based image analysis techniques for the purpose of automating the diagnosis of ROP with Plus disease 11,[14][15][16][17][18][38][39][40][41][42].

In order to establish an automated system for the detection of ROP, it is necessary to analyze retinal fundus images and accurately characterize the distinctive features associated with ROP. The identification of ROP, and more specifically the presence of Plus disease, can be conceptualized as a classification task within the domain of machine learning. Conventional algorithms employ handcrafted (HC) features, such as vessel dilation and tortuosity, to analyze retinal fundus images and distinguish between Plus disease and pre-Plus/non-Plus conditions 17,19,20,43,44. In contrast, Deep Convolutional Neural Networks (DCNNs) possess the ability to learn image features from the provided inputs in order to effectively classify labels. DCNNs have been used effectively in the automated identification of various ocular diseases, including diabetic retinopathy, glaucoma, age-related macular degeneration, and retinopathy of prematurity; these findings have been reported in multiple studies 14,18,21,[45][46][47][48]. DCNNs often require a substantial quantity of high-quality training samples that are well balanced across different labels. However, obtaining such samples in the context of ROP can be challenging due to limited patient cooperation and inadequate attention span, particularly in children. Additionally, the features learned by DCNNs lack transparency and interpretability. Therefore, the utilization of HC features in the detection of Plus disease has proven to be effective, both as a standalone method 17,22,29 and as a means of providing supplementary information for DCNNs in image classification tasks 16,23.

In addition to the diagnosis of Plus disease, a significant challenge encountered by experts in the field of ROP pertains to the assessment of the efficacy
of clinical interventions implemented during patient treatments [49][50][51]. The quantification of vascular characteristics during therapeutic procedures can provide clinicians with a more precise means of monitoring the effects of their interventions on the disease.

Accurate vessel segmentation is crucial for quantifying vessel characteristics, such as tortuosity, by extracting vessel masks. Most studies have manually created masks for vessels in retinal images, either for the entire image or for specific segments of vessels. This process utilizes graphical editing software, such as Photoshop, which requires significant time and relies heavily on the operator's proficiency and expertise. In the current study, we utilized a combination of a thresholding technique for retinal vessel segmentation with a method that utilizes a set of B-COSFIRE filters designed for vessel detection 27. By integrating these two approaches, we achieved automated generation of a vessel mask for each ROP fundus image. We built a GUI to correct inadequacies of the automated segmentation masks caused by ROP image artifacts, given the need for proper vessel segmentation for subsequent analysis. Accurate and automated vessel segmentation in ROP fundus images can also serve as a dataset for a comprehensive vessel segmentation method using deep learning in our future studies.

This study aimed to develop algorithms for characterizing two primary vascular image features associated with Plus disease: vessel tortuosity and dilation. The algorithms were assessed for their ability to differentiate between subjects with Plus disease and those without Plus disease. It was demonstrated that the mean tortuosity of the five most tortuous vessels exhibited a statistically significant increase in Plus images when compared to non-Plus images (p ≤ 0.0001). The analysis revealed that the curvature values derived from points exhibited a statistically significant increase in Plus images as compared to non-Plus images (p ≤ 0.0001). A statistically significant increase in the greatest diameter of vessels within a zone extending 5 disc diameters away from the edge of the optic disc (referred to as 5DD) was observed in Plus images compared to non-Plus images (p ≤ 0.0001). In addition to vessel tortuosity and dilation, our method validates the finding by Ataer et al. 26 that the point-based curvature values are substantially greater in Plus images than in non-Plus images.

Additionally, we conducted a comparative analysis of vessel density in retinal images to evaluate any statistically significant differences between the two subject groups in relation to this particular characteristic of blood vessels. The density of vessels in Plus images was significantly higher than in non-Plus images (p ≤ 0.0001).

The literature extensively discusses the density of retinal vessels as a potential indicator of ROP, particularly Plus disease. The relationship between vessel density and Plus disease remains unclear, as there is limited research on this specific aspect. Additionally, the development of Plus disease is influenced by multiple factors, including gestational age, birth weight, and the overall health of the infant. The study findings revealed a significant difference in vessel density between Plus and non-Plus images (p ≤ 0.0001). This finding supports the results of a study conducted by Mao et al.
35, which demonstrated that patients diagnosed with Plus and pre-Plus exhibited significantly higher vessel density compared to the normal group. They also demonstrated a proportional decrease in vessel density at 7, 14, and 30 days post-treatment. An increase in vessel density was also observed in a study conducted on a mouse model of oxygen-induced retinopathy (OIR) 32. However, our findings contradict other studies that did not observe a significant increase in vessel density between the ROP group and the normal group 33,34.

We trained an SVM classifier using the extracted features to distinguish between Plus and non-Plus images. We evaluated the performance of our algorithms using cross-validation and a reference standard. The reference standard was determined through the integration of the diagnoses given by a panel consisting of three experts in ROP. The classifier's accuracy in distinguishing between Plus and non-Plus images, as determined through tenfold cross-validation, was 0.86 ± 0.01. The observed accuracy was higher than that of one of the three experts in comparison to the reference standard.

There is considerable variation in the classification of Plus disease by experts in ROP. This variation arises from differences in the thresholds used by experts to determine the extent of vascular abnormality necessary to diagnose Plus and pre-Plus disease. This finding has significant implications for ROP studies, instruction, and patient care. It indicates that a continuous ROP Plus disease severity score could provide a more accurate representation of expert ROP clinicians' assessments and potentially improve the standardization of classification in the future 52.

The Imaging and Informatics in ROP (i-ROP) deep learning (DL) algorithm is a well-known AI tool used to measure vascular changes in the posterior pole in cases of ROP. Multiple studies on ROP screening have shown that the i-ROP algorithm can effectively detect Plus disease, similarly to human ROP experts. This suggests that the algorithm has the potential to identify cases of ROP reactivation that require retreatment. The i-ROP DL system's output can be converted into a vascular severity score (VSS) that represents the range of Plus disease. This score has proven to be valuable for primary ROP screening, tracking disease progression, and evaluating treatment response 15,16,49,50,[53][54][55][56]. The study demonstrated the utility of VSS in both primary and secondary ROP screening, including screening for ROP reactivation following anti-VEGF treatment 51.
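A rough sketch of the classification stage described in the Image classification section (standardized features, PCA reduced to three components, an RBF-kernel SVM with C = 1, and tenfold cross-validation) is shown below. The feature matrix and labels are synthetic placeholders, so the printed accuracy will not match the reported 0.86 ± 0.01.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Hypothetical stand-in for the 76-image, 5-feature matrix described in the text.
rng = np.random.default_rng(1)
labels = np.array([1] * 39 + [0] * 37)                     # 39 Plus, 37 non-Plus
features = rng.normal(0.0, 1.0, size=(76, 5)) + labels[:, None] * 0.9

clf = make_pipeline(
    StandardScaler(),               # standardize the selected features
    PCA(n_components=3),            # keep the first three principal components
    SVC(kernel="rbf", C=1.0),       # RBF-kernel SVM with regularization parameter C = 1
)

scores = cross_val_score(clf, features, labels, cv=10, scoring="accuracy")
print(f"10-fold CV accuracy: {scores.mean():.2f} ± {scores.std():.2f}")
```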
Infants diagnosed with ROP in developing countries tend to have higher birth weights and older gestational ages compared to those in developed countries. As a result, broader screening guidelines have been implemented in low- and middle-income countries, leading to a larger population at risk for ROP in these regions 57. The rise in screening burden poses a particular challenge due to the relatively lower number of ophthalmologists per capita in comparison to higher-income nations. The current study aimed to develop a computer-assisted system for detecting Plus disease in Iran, as a low-income country, with the goal of improving diagnostic accuracy and facilitating quantitative monitoring of treatment progress. This study quantified the characteristics of vessels in ROP images, focusing on tortuosity, dilation, and density. These measurements were used for primary Plus disease screening and could be useful for future studies on screening for ROP reactivation after anti-VEGF treatment.

This study has a number of limitations. Firstly, the dataset used in this study had a limited number of ROP fundus images, which may have influenced the performance of the model. Secondly, the fundus images were collected from a single clinical site with consistent device settings and population characteristics, which could have reduced the diversity of the data and affected the algorithm's ability to generalize to other populations. Lastly, while a cross-validation method was employed to enhance generalizability, further validation using independent images is necessary for future research. Future studies should aim to obtain larger datasets of ROP images in order to validate and optimize our system within the clinical setting. Additionally, it may be necessary to conduct further testing and optimization of the sensitivity metric in order to minimize the occurrence of false-negative results. Additional studies are required to validate this automated system and enhance its practicality for real-world clinical applications by incorporating datasets from multiple clinical centers and larger patient cohorts.

In conclusion, the algorithm used in this study showed high accuracy in detecting Plus disease in cases of retinopathy of prematurity, performing similarly to expert diagnoses. By objectively analyzing vessel characteristics, it is possible to quantitatively assess the features of disease progression. The automated system has the potential to improve physicians' ability to diagnose Plus disease, making valuable contributions to the management of ROP by integrating traditional ophthalmoscopy and image-based telemedicine methods.

Figure 1. (a) Part of a retinal image and a single arteriole segment depicted on it. (b) The curvature values along the vessel segment are visually represented through a color-coding scheme, where regions with high curvature are depicted in red and regions with low curvature are depicted in blue. (c) Centerline pixels, splines, and first and second derivatives at selected points: '.' shows the segment's centerline pixels; 'o' shows the points after downsampling; Black lines depict the first spline fit that passes through all the original pixels; Color-coded curve: spline fit after down-sampling and the curvature values along the spline represented through a color-coding scheme; Red arrows illustrate the first derivative of the spline at the selected points i.e.
tangent vectors; Yellow arrows: second derivatives of the spline at the selected points i.e. acceleration vectors.Color bar shows the color map of the curvature values. Figure 3 . Figure 3.Comparison of extracted feature values between Plus and non-Plus images (values are standardized).(a) Mean tortuosity of the five vessel segments with the greatest tortuosity (F1).(b) The mean tortuosity of all vessel segments within a region extending 5 diameters from the OD border (5DD) (F2).(c) The mean of the highest 1% of curvature values (F3).(d) The maximum diameter of the vessel in the 5DD region (F4).(e) Vessel density determined in the retina's vascular region (F5).Blue circles represent outlier data values. Table 2 . Intra-expert agreements in two different sessions. Table 3 . Agreement for individual experts with the reference standard. Table 5 . Comparison of the accuracy measures of the expert-provided diagnosis and the proposed method's prediction against the reference standard.
7,085.4
2024-03-16T00:00:00.000
[ "Medicine", "Computer Science" ]
FDI in Tourism Sector and Economic Growth in Sumatra Utara Globalization and neo liberal policies such as liberalization and privatization have generated a significant growth for FDI and considered an important source for capital and foreign currency, capable of spurring economic growth in developing countries. One sector that received particular attention, due to its significant contributions towards economic development, especially in Indonesia, is tourism. Tourism investments in Indonesia are mainly focused on the development of fully-integrated resort sites that help boost the construction of tourist facilities such as hotels and the development of the surrounding environment through social and cultural aspects. The total contribution of travel and tourism to GDP was IDR736.3 billion or 8.9% of GDP in 2012. Foreign direct tourism investments grew by 210% between 2011 and 2012, or at an annual compound average growth rate of 38% between 2006 and 2012. While the implications are at national level, not much could be gathered on the local perspectives. This paper intends to explore the implication of FDI in tourism sector towards economic growth in one of tourism attraction provinces in Indonesia—Sumatra Utara. Specifically, which economic factors contributed towards FDI inflows and their impacts on economic growth in Sumatra Utara. INTRODUCTION Recently, it has been established that tourism has become one of the most significant export sectors in many developing countries and it not only increases foreign exchange income, but also creates employment opportunities, stimulates the growth of the tourism industry triggers overall economic growth (Samimi, Sadeghi, & Sadeghi, 2013). Tourism is one of the world's largest industries accounting for over one-third of total global service trade (Endo, 2006). Tourism industry agglomerates many separate activities that come together in the production and consumption of tourism (UNCTAD, 2008). Foreign Direct Investment (FDI) is one of the routes through which developing countries can carry out tourism, but the dynamics of FDI in this dynamic sector, and its implications, have been relatively little studied. There is very little empirical information about the extent of tourism-related FDI in the global economy or its overall impact (UNCTAD, 2007). Foreign Direct Investment (FDI) in tourism would help developing countries to mitigate the effect of adverse development gap between developed and developing countries (UNCTAD, 2007). The economic impacts of tourism can be measured in many terms, such as output, income, employment, value added, taxes, etc. The magnitude of the relative impacts depends upon the relative magnitude of the direct and the derived effects. The magnitude of direct effects can be decomposed into four factors: tourist intensity, the level of daily consumption for the type of overnight stay, the composition of tourist activity by the type of overnight stay, and employment content of tourist related activity/the opportunity of jobs seekers (Zhang, Madsen, & Jensen-Butler, 2007). Tourism investment in Indonesia grew by more than 210% from 2011 to 2012. The growth in tourism investment is aligned with the country's positive economic growth. Additionally, the Indonesian government has been instrumental in streamlining investment procedures and promoting investment opportunities and potential of Indonesia within the region, resulting in a favourable investment environment as shown by figure 1. 
North Sumatra Regional Economic Performance

The economic performance of North Sumatra Province from 2010 to 2012 (see Table 2) increased slightly. The Hotel and Restaurant sector contributed to this performance. In 2010, Hotel and Restaurant contributed 11% of the total regional income, which in that year was US$2,158.6 million. In 2011, the sector again contributed 11%, although total income increased to US$2,306.6 million. In 2012, Hotel and Restaurant contributed 10.8%, a slight decrease from the previous year, as shown in Table 2. The purpose of this research is to investigate the significance of Foreign Direct Investment in the tourism sector and its implications for economic growth in North Sumatra.

Literature Review

The most widely known approach was advocated by Solow (1956), where he attempts to explain how an economy will grow, given its technology and the market behavior of its consumers. Based on the Solow model, the following econometric specification is used to explore factors contributing to long-run economic growth:

Y_it = β0 + β1·X1_it + β2·X2_it + β3·X3_it + β4·X4_it + β5·X5_it + δ_t + μ_i + ε_it

where Y is the per capita GDP growth rate in country i and year t, X1 is the capital accumulation rate in country i in year t, X2 is the population growth rate in country i in year t, X3 is the share of research and development expenditure in country i's GDP in year t, X4 is the primary completion rate in country i in year t, X5 is the share of imports and exports in country i's GDP in year t, δ_t and μ_i are time and country fixed effects capturing the unobserved effects across time and countries respectively, and ε_it is a random error.

Many studies have examined the effect of inward FDI and imports on firm innovation, such as those of Zimmermann (1987), Veugelers and Houte (1990), Scherer and Huh (1992), Bertschek (1995), Co (2000), and Lofts and Loundes (2000). These studies find that inward FDI and imports can enhance competition and accelerate the process of innovation in the local manufacturing industry. However, only a few studies discuss the influences of outward FDI and exports on innovative activities.

A research study in Cuba found that tourism plays a significant role in economic growth. Official statistics report that at the end of 2000 there existed 29 joint ventures in tourism with a total capital of US$1,089 million, 26 of which were hotel chains managing 15,600 rooms. In the same year, 17 international hotel chains were reported to have management and marketing contracts with Cuban counterparts. In China, FDI inflows to the tourism sector promoted the growth of incoming tourism and consumption. Foreign investors have brought their established or potential tourist sources to the Chinese market, increasing inward tourism and promoting the development of China's tourism economy. Through cooperation with foreign tourism companies, domestic ones can draw upon experience and methodology in building marketing networks and managerial practices so as to improve the overall level of China's tourist enterprises and facilitate their internationalization process (Kyrkilis & Pantelidis, 2003). By the year 2020, China will become the world's number one tourist destination with 130 million annual arrivals. A study by the Tourism Council of the South Pacific (1992) showed that $1,000 of tourism expenditure in Fiji generated an output of $3,541 in the overall economy and a total of $336 in public sector revenue, i.e., 33.6%. This figure is on par with manufacturing and ahead of agriculture (32%) and mining (19%).
The industry has emerged as an attractive development option with the capacity to generate significant foreign exchange earnings and incomes for the local population. It creates employment, provides revenue for government by way of direct and indirect taxes, improves infrastructure, and encourages entrepreneurial activities. It also stimulates economic development through the so-called multiplier effect. A study conducted in Australia revealed that nation states use the capacity of the national bureaucracy as a key adaptive mechanism to aid domestic accommodation of globalizing pressures while retaining state autonomy; this holds true at least in the case of Australia's experience with FDI from 1968 to 2004. Thus, while successive Australian governments have sought to adapt to greater internationalizing pressures, particularly those generated by international economic actors such as multinational enterprises and by the internationalization of markets, the Australian state has retained sufficient capacity to respond to such globalizing pressure in support of its own strategic and political objectives. This study shows that, while there are changes in how states act and behave in responding to globalizing pressures, a fundamental role of the state continues to be that of regulating cross-border flows such as FDI (Sadleir, 2007). FDI in tourism could help developing countries mitigate the adverse development gap between developed and developing countries (UNCTAD, 2007). Studies of the relationship between tourism activity and FDI have begun to appear, but they are still scarce. Chen (2010) analysed the influence of foreign direct investment within China's tourism industry, considering the imbalance of the development process across coastal and inland regions from 1978 to 2008. The results show that the impacts of FDI on the tourism industry in the coastal regions are greater than those inland. Therefore, the coastal regions have experienced rapid economic and tourism development because of the inflow of FDI and political preferences. Selvanathan, Selvanathan, and Viswanathan (2012) investigated the causal link between FDI and the tourism industry in India under a VAR framework, employing quarterly statistics from 1995 to 2007. The results indicate a one-way causal link from FDI to tourism arrivals, attributing the rapid growth in international tourism arrivals to the attraction of further FDI into the Indian economy during the last decade. METHODS As previously discussed, the purpose of this research is to investigate the significance of Foreign Direct Investment in the tourism sector for economic growth in North Sumatra. The empirical analysis used secondary FDI and GDP data series for 2010-2014, compiled in Microsoft Excel. The statistical analysis was performed using the Statistical Program for Social Sciences (SPSS) version 21 and included descriptive analysis and factor analysis. RESULTS AND DISCUSSION The purpose of the normality test is to check whether the dependent and independent variables are normally distributed. Figure 2 shows the Normal P-P plot of the regression with GDP as the dependent variable. The points are patterned diagonally upwards around the normal line; therefore, this regression model fulfils the normality assumption. The regression model takes the form GDP = a + b × FDI + e, where a is the constant (intercept), b is the regression coefficient associated with X1 (FDI), and e is an error term.
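For readers who want to reproduce the normality check described above without SPSS, the sketch below draws a normal probability plot and runs a Shapiro-Wilk test on the regression residuals, one common way to operationalize the normality assumption. It is illustrative only: the data values are hypothetical, and scipy/matplotlib are assumed as the tooling rather than anything used in the study.

```python
# Illustrative normality check for regression residuals (hypothetical data).
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

fdi = np.array([1.2, 1.8, 2.5, 3.1, 3.9])        # hypothetical FDI series, 2010-2014
gdp = np.array([3.05, 3.06, 3.06, 3.07, 3.07])   # hypothetical GDP series

# Fit GDP = a + b*FDI by ordinary least squares and compute residuals.
slope, intercept, r_value, p_value, std_err = stats.linregress(fdi, gdp)
residuals = gdp - (intercept + slope * fdi)

# Normal probability (Q-Q) plot, analogous in purpose to SPSS's Normal P-P plot.
stats.probplot(residuals, dist="norm", plot=plt)
plt.title("Normal probability plot of residuals")
plt.show()

# Shapiro-Wilk test: a p-value above 0.05 is consistent with normality.
print(stats.shapiro(residuals))
```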
The coefficient of determination is used to assess the contribution of the independent variable FDI (X) to the dependent variable GDP (Y). The higher the Adjusted R2, the better the regression model, as the independent variables are better able to explain the dependent variable. From Table 3, the Adjusted R Square is 0.912, meaning that 91.2% of the variation in GDP (Y) can be explained by the independent variable FDI (X), while the remaining 100% − 91.2% = 8.8% is attributable to other independent variables not included in this research; in other words, FDI explains GDP fairly well. The closer the R-square is to 1, the more of the variation in the dependent variable is explained by the observed independent variables. The F-test (ANOVA) is conducted to analyze the impact of the independent variables on the dependent variable, GDP. To reject the null hypothesis, the F-statistic must exceed the F-table (critical) value. Aside from the F-value, the significance (ρ) value also plays an important role: the null hypothesis may be rejected only when ρ < 0.05. Table 4 shows a significance value (ρ) of 0.007, which is lower than the critical value of 0.05. Therefore, there is a significant linear relationship between FDI as the independent variable and GDP as the dependent variable in the model, and the independent variable influences the dependent variable significantly rather than by chance. This indicates a significant, well-fitting regression model, and thus the independent variable FDI (X) can act as a significant predictor of the dependent variable (GDP). Source: Output of SPSS. Column "B" in Table 5 gives the coefficient of each independent variable in the regression of FDI (X) on the dependent variable (GDP), which forms the regression equation. The columns "t" and "Sig." represent the significance of each variable and its impact on the dependent variable (GDP). Note that the significance value (ρ) of each independent variable is below 0.05, implying that FDI had a significant impact on GDP. From column "B" in Table 5, the regression equation is: GDP = 3.043 + 0.007 × FDI. The constant of the equation is 3.043, which means that when the independent variable (X) is zero, the value of GDP will be equal to the constant (3.043). Since the significance result is ρ = 0.007 < 0.05, H0 is rejected, suggesting that FDI does significantly influence GDP. Factors and Determinants of FDI Flows to Indonesia Strategic Location and Raw Materials According to BKPM (2013), with a land area of 2 million km2, a sea area of about 7.9 million km2 (four times greater than the land), and more than 17,508 islands, Indonesia lies at the intersection of the Pacific and Indian Oceans along the Malacca Strait. Over half of all international shipping passes through Indonesian waters, making the country a gateway to the ASEAN market. Population and Worker Availability Indonesia has a competitive advantage in terms of both workers and consumers: it is the 4th most populous country in the world, with 240 million people, 53% of the population living in cities and producing 74% of GDP, and 55 million skilled workers. This generates higher buying power and the availability of lower-cost human resources, which attract investors to Indonesia (Kementerian Pariwisata dan Ekonomi Kreatif, 2014).
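The regression output interpreted above (Adjusted R², the ANOVA F-test, and the coefficient table) can be reproduced outside SPSS with a few lines of code. The sketch below fits the same single-predictor model GDP = a + b × FDI on hypothetical placeholder data; the numbers and variable names are illustrative only and are not the study's series.

```python
# Minimal OLS sketch mirroring the SPSS output discussed above (Tables 3-5).
# The data values are hypothetical placeholders, not the study's data.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "FDI": [120.0, 150.0, 210.0, 260.0, 330.0],   # hypothetical FDI inflows
    "GDP": [3.9, 4.1, 4.5, 4.8, 5.3],             # hypothetical GDP figures
})

fit = smf.ols("GDP ~ FDI", data=df).fit()

print(fit.rsquared_adj)            # counterpart of "Adj R Square" in Table 3
print(fit.fvalue, fit.f_pvalue)    # counterpart of the ANOVA F-test in Table 4
print(fit.params)                  # column "B": constant and FDI coefficient
print(fit.pvalues)                 # column "Sig.": per-coefficient significance
```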
Growing Middle Class and Market Demand The middle-income population (per capita expenditure of $2-20 per day) has growing demands for better services and products; car sales in 2012 totalled 1,116,230 units, an increase of 25% from the previous year. Analysts and industry players noted that low borrowing costs coupled with rising purchasing power encouraged customers to buy cars. National cement consumption reached 54.9 million tons in 2012, growing 14.3% from 48 million tons in 2011 (BKPM, 2013). In the tourism sector, the number of visitors coming to Indonesia rose to 8,002,035 in 2012, an increase of 5% from the previous year (BKPM, 2013). Foreign equity participation (51% of shares owned by the foreign investor) is allowed for the following businesses: 1- and 2-star hotels, non-classified hotels, motel and lodging services, home-stays, catering, spas, amusement centers, bars, cafés, and singing rooms/karaoke (Kementerian Pariwisata dan Ekonomi Kreatif, 2014). According to BKPM (2013), several tourism investment opportunities can be established, such as cruise tourism; meetings, incentives, conventions, and exhibitions/events; nature-based tourism and ecotourism; culture- and history-based tourism; shopping and culinary tourism; wellness and medical tourism; and recreational sports. Economic Growth Indonesia's growth hit 6.2% in 2012, making it the 16th largest economy in the world, with 45 million members of the consuming class and a $0.5 trillion market opportunity in consumer services, agriculture and fisheries, resources, and education (Kementerian Pariwisata dan Ekonomi Kreatif, 2014). The IMF projects that Indonesia will be among the top three fastest-growing economies among the G20 countries (International Monetary Fund, 2012). The Japan Credit Rating Agency (2012) stated that the key factors supporting its decision to affirm Indonesia's sovereign rating are: (1) the country's sustainable economic growth outlook underpinned by solid domestic demand, (2) a low public debt burden brought about by prudent fiscal management, and (3) resilience to external shocks reinforced by its accumulated foreign exchange reserves. Fitch Ratings in 2002 found that the key factors supporting its decision to affirm Indonesia's sovereign credit rating are relatively high economic growth that is resilient to deteriorating global conditions, a high investment rate, low and declining public debt ratios, and a strong overall macroeconomic policy framework. The World Economic Forum in 2013 reported that Indonesia ranks 38th of the 148 countries surveyed and remains one of the best-performing countries within the developing Asia region, behind Malaysia, China, and Thailand, yet ahead of the Philippines, Vietnam, and all South Asian nations. One hundred percent of capital shares can be owned by a foreign investor for the following businesses: 3- to 5-star rated classified hotels, tourism resorts, golf courses and driving ranges, and convention and exhibition facilities. CONCLUSION Foreign Direct Investment in the tourism sector does significantly influence economic growth in North Sumatra. The significance value (ρ) of each independent variable is below 0.05, implying that FDI had a significant impact on GDP. Several factors determine FDI flows to Indonesia, including location, availability of workers and raw materials, population and market distribution, and economic growth performance.
Construction of a T7 phage display nanobody library for bio-panning and identification of chicken dendritic cell-specific binding nanobodies Dendritic cells (DCs) are the antigen-presenting cells that initiate and direct adaptive immune responses, and thus are critically important in vaccine design. Although DC-targeting vaccines have attracted attention, relevant studies in chickens are rare. A high-diversity T7 phage display nanobody library was constructed for bio-panning against intact chicken bone marrow DCs to find DC-specific binding nanobodies. After three rounds of screening, 46 unique-sequence phage clones were identified from 125 randomly selected phage clones. Several DC-binding phage clones were selected using the specificity assay. Phage-54, -74, -16 and -121 bound not only to chicken DCs but also to duck and goose DCs. In vitro, confocal microscopy demonstrated that phage-54 and phage-74 efficiently adsorbed onto DCs within 15 min compared to T7-wt. The pull-down assay, however, did not detect any of the previously reported chicken DC proteins that could have interacted with the nanobodies displayed on phage-54 and phage-74. Nonetheless, specified pathogen-free chickens immunized with phage-54 and phage-74 displayed higher levels of anti-p10 antibody than those given T7-wt, indicating enhanced antibody production by nanobody-mediated DC targeting. Therefore, this study identified two avian (chicken, duck and goose) DC-specific binding nanobodies, which may be used for the development of DC-targeting vaccines. Results Construction of a high diversity nanobody library. The cDNA was prepared by reverse transcription (RT) of total RNA extracted from alpaca peripheral blood lymphocytes. The VHH genes were amplified by stepwise PCR using the synthesized cDNA as a template to produce amplicons of about 450 bp. The primary VHH library was generated by cloning the VHH gene repertoire into the T7 select 415-1b vector, followed by in vitro packaging. Subsequently, the primary library was amplified by the liquid lysate method. The titers of the primary and amplified libraries were 2.73 × 10^9 PFU/mL and 1.65 × 10^11 PFU/mL, respectively. Diversity analysis of the primary library was carried out by PCR detection of 20 random phage clones (Supplementary Fig. S1-A). Sequence analysis indicated the typical nanobody structure with framework regions (FR) and complementarity determining regions (CDR). In addition, differences in the amino acid sequences of the CDRs indicated a high-diversity library (Supplementary Fig. S2). Nanobodies displayed on the T7 phage surface were detected by SDS-PAGE and Western blot. Bio-panning and characterization of DC-specific binding nanobodies. To screen for nanobodies that specifically bind DCs, three rounds of phage display bio-panning were performed, and bone marrow cells were added for depletion to reduce the possibility of non-specific binding. As shown in Fig. 3A, the ratio of phage recovery increased in each round, which was considered evidence of effective screening. However, when the next-generation sequencing data were assessed, the number of phage clones with unique sequences among the recovered phage decreased in each round (Fig. 3B), indicating an accumulation of specific DC-binding phages during the bio-panning process. After three rounds of screening, 125 phage clones were randomly selected for sequencing, and 46 phage clones with unique CDR sequences were identified (Supplementary Fig. S3).
To further characterize the selected phage clones, two properties were determined: the specificity, i.e., the ability of a phage probe to associate with its target due to the presence of a specific nanobody displayed on the surface, and the selectivity, i.e., the ability of a phage probe to discriminate its cognate target from a mixture of targets. The specificity of the 46 unique-sequence phage clones was evaluated and is summarized in Fig. 3C. Twelve phage clones with strong affinity and specificity (phage recovery higher than 0.3%) towards chicken DCs were identified. The intact amino acid sequences of the VHHs displayed on phages-16, -54, -74 and -121 were analyzed in Fig. 4A, and the amino acid composition of the CDRs differed between these four clones. Further, the selectivity of the four phage clones was evaluated (Fig. 4B-E). Phages-54, -74, -16 and -121 bound not only to chicken DCs but also to duck and goose DCs; however, they barely bound to bone marrow cells, chicken embryo fibroblasts (CEF), duck embryo fibroblasts (DEF) or goose embryo fibroblasts (GEF). These results suggest that these four phages may have great specificity for binding to DCs. For further verification, nanobody binding to chicken bone marrow DCs was assessed by fluorescence microscopy (Fig. 5). No green fluorescence signal was evident in Fig. 5K, which indicated that the T7-wt barely bound to the DCs. In contrast, phage-54 and phage-74 adsorbed to the DC surface efficiently, as manifested by the strong green fluorescence spots covering the DC surface (Fig. 5C and G). In addition, candidate DC proteins that could interact with VHH-54 and VHH-74 were analyzed by pull-down and HPLC-MS assays (Table 1). The fusion proteins GST-VHH-54 and GST-VHH-74 were expressed and purified (Supplementary Fig. S4) for interaction with the DC lysate. HPLC-MS results revealed several intracellular proteins such as proteases and a couple of uncharacterized proteins; unfortunately, almost no surface proteins were discovered (Table 1). The voltage-dependent anion-selective channel protein, which has been confirmed on the plasma membrane, most likely interacts with VHH-74. DC targeting induced high-level antibody. The purified T7-wt, phage-54 and phage-74 particles were detected by Western blot, and single bands of the fusion proteins p10B-VHH-54 and p10B-VHH-74 were obtained (Fig. 6A), which indicated correct display of the nanobody on the T7 phage surface. To verify that DC-targeting nanobodies act to promote antigen presentation and antibody formation, groups of SPF chickens were subcutaneously injected with phage-54 and phage-74, with T7-wt as a control. The level of specific antibody against the T7 phage capsid was determined using ELISA. The T7 phage capsid was expressed in the E. coli system, purified (Supplementary Fig. S5) and used to coat ELISA plates to establish a detection method. Chickens immunized with phage-54 and phage-74 developed higher levels of anti-capsid antibody than chickens administered the T7-wt control (Fig. 6B). DC-targeting phages were able to stimulate a more rapid and efficient immune response, thus indicating the potential application of the selected nanobody in antigen delivery. Discussion In recent years, the use of DCs to improve the immunogenicity of an antigen has been a key strategy in the field of vaccine development 28,29 .
These strategies can increase the number of antigens targeting DCs, and much progress has been made in human DC-targeting vaccines 30,31 . One of the common approaches to elicit a strong and long-lasting humoral and cellular immune response is the design of DC-targeting vaccines 32,33 . When researching and developing poultry vaccines, practical factors should be taken into consideration, including production cost and wide-scale administration; consequently, DC-targeting vaccines have become a desirable choice. Hence, we propose to discover chicken DC-specific binding ligands that could be used as antigen carriers to develop DC-targeting vaccines. Antibodies can specifically target receptors that are expressed on the surface of DCs. This concept has been utilized either by conjugating antigens to mAbs directed against DC surface molecules, or by genetic engineering in which the antigen is fused to different antibody fragments specific for DC receptors 34 . Nanobodies, unique antigen-binding fragments derived from camelid heavy-chain antibodies, have excellent properties including thermal and chemical stability, weak immunogenicity and high affinity 35 . Thus, a T7 phage display nanobody library was constructed by inserting the alpaca VHH antibody gene downstream of the T7 phage p10B gene, so that nanobodies could be displayed on the surface of the T7 phage. Compared to other phage display systems, the T7 select phage display system is easy to use and has the capacity to display peptides of about 50 to 1200 amino acids 36 . In this study, the high-copy-number vector T7 select 415-1b was used to clone the VHH gene, and the E. coli BLT5403 bacterial host was used to supply the extra p10A protein to facilitate infectious recombinant phage rescue. Twenty plaques randomly screened by PCR showed that the VHH gene was successfully inserted into the phage genome (Supplementary Fig. S1-A), and sequencing data revealed that the framework regions and complementarity determining regions of these VHH clones displayed substantial differences in amino acid sequences (Supplementary Fig. S2). A small percentage of stop codons were detected within the VHH genes, such as in VHH17 (Supplementary Fig. S2), which led to truncated VHH protein bands being detected in the Western blot (Supplementary Fig. S1-B). In general, considering the high diversity of the VHH library and the efficient expression of VHH on the phage surface, this T7 phage display library should be adequate for bio-panning needs. T7-wt phage devoid of nanobody display was used as a negative control for each assay. Phage recovery was calculated as the ratio of recovered phage to input phage as follows: phage recovery (%) = (output phage/input phage) × 100. Antigen bound to an antibody and targeted to a DC receptor for internalization can undergo accelerated antigen processing and presentation 37 . The surface of DCs contains many pattern recognition receptors (PRRs). To date, four families of PRRs have been identified, including Toll-like receptors (TLRs), nucleotide-binding oligomerization domain-like receptors (NLRs), retinoic acid-inducible gene I-like helicase receptors (RLRs) and C-type lectin receptors (CLRs) 38,39 . Choosing the receptor to be targeted is a great challenge in the design of an antibody-based DC-targeting vaccine.
Although these receptors combine specifically with their corresponding natural ligands, it is important to explore new ligands that bind the reported receptors or other, as yet unknown, receptors. For these reasons, the constructed nanobody library was used to screen intact chicken DCs, with the expectation of discovering DC-specific binding nanobodies. Although there was obvious enrichment after three rounds of screening (Fig. 3A), a decrease in phages with unique sequences was observed (Fig. 3B). Repeated phage clones were identified by sequence analysis (Supplementary Fig. S3), and this high frequency of repeated phage clones points to the success and enrichment of the screening process. As a result, twelve DC-specific binding phage clones were obtained (Fig. 3C). Unexpectedly, four of the selected phage clones bound not only to chicken DCs but also to duck and goose DCs (Fig. 4B-E). These results indicate that chicken, duck and goose may share common receptors that were recognized by these selected phages. Currently, there is little consensus as to which receptor elicits more robust MHC I or MHC II antigen presentation 40 . Effective antigen presentation results from the antigen being trafficked to subcellular compartments for processing; however, individual DC receptors differ widely in their expression levels, internalization speeds, and downstream intracellular trafficking pathways. In any event, the efficient combination of antigen and DCs is the first step in antigen processing. The interaction between the selected phages and DCs was studied by confocal laser microscopy, which revealed that phage-54 and phage-74 could efficiently combine with chicken DCs within 15 min (Fig. 5). Further, nanobody-binding proteins on DCs were obtained by pull-down assay and identified by mass spectrometry (Table 1). Unfortunately, none of the previously reported receptors was discovered during the mass spectrometry analysis. However, one protein suspected to have a role in antigen processing, the voltage-dependent anion-selective channel protein (VDAC), was discovered. Lisanti et al. 41 have previously reported the presence of VDAC1 in a catalogue of proteins identified in caveolae. Caveolae are domains of the plasma membrane that have specific functions in trafficking between the plasma membrane and the rest of the cell 42 . Thus, it may be presumed that the specific binding of phage-74 to the VDAC of DCs made engulfment easier and, further, sped up processing and presentation of the antigen. This supports the more rapid and higher-level antibody response against p10B elicited by the DC-targeting nanobodies of phage-54 and phage-74 compared to that induced by T7-wt (Fig. 6B). Conclusion In this study, a high-diversity T7 phage display nanobody library was constructed that could be used for bio-panning of chicken DC-specific binding nanobodies. The results indicated that the nanobodies displayed on phage-54 and phage-74 bind efficiently not only to chicken DCs but also to duck and goose DCs. Although the exact nanobody-recognized receptor requires further elucidation, the highly efficient, high-affinity binding of the nanobody to DCs promotes the immune response. Therefore, the nanobodies displayed on phage-54 and phage-74 are DC-targeting ligands that merit further study and application. Methods Ethics declarations.
Animals were maintained and euthanized as per the protocol approved by the Institutional Animal Care and Use Committee (IACUC) of the Jiangsu Academy of Agriculture Sciences (SYXK 2017-2022). All experiments were conducted in accordance with the relevant guidelines and regulations of the IACUC and the Institutional Biosafety Committee at the Jiangsu Academy of Agriculture Sciences. This study is reported in accordance with the ARRIVE guidelines (https://arriveguidelines.org). Construction of T7 phage display nanobody library. A T7 phage display nanobody library was constructed as previously described 43 . The nanobody was displayed as an extension of the coat protein due to an in-frame insertion of the alpaca VHH gene in the p10 gene encoding the coat protein of T7 phage, resulting in the display of fewer than 450 guest nanobodies on the surface of each phage particle. Briefly, anticoagulated blood samples were collected from six non-immunized young alpacas (three female and three male) and lymphocytes were isolated using the Ficoll separation method 44 . Total RNA was extracted using the MiniBEST Universal RNA Extraction Kit (TaKaRa) and first-strand cDNA was then generated using the PrimeScript 1st Strand cDNA Synthesis Kit (TaKaRa). The VHH gene was amplified by the stepwise PCR method 45 , and all the primers used are presented in Table S1. The VHH gene products were digested with the restriction enzymes EcoRI and HindIII and ligated to the T7 select 415-1b EcoRI/HindIII vector arms (Merck, Germany) to rescue the primary phage display library. This primary T7-VHH library was amplified using the liquid lysate amplification method according to the manufacturer's instructions. (Table 1. List of nanobody-binding proteins identified by LC-MS. Matches-sig: proteins that were significantly higher than the spectrum match threshold; Sequences-sig: peptides that were significantly higher than the threshold; pI: calculated isoelectric point; emPAI: exponentially modified protein abundance index.) The titers of the primary and amplified libraries were determined by phage-plaque assay using the Escherichia coli BLT5403 host 46 . Nanobodies displayed on T7 phage particles were detected by SDS-PAGE and Western blot as previously reported 47 . Isolation and validation of chicken bone marrow DCs. Marrow collected from the femurs and tibias of three-week-old specified pathogen-free (SPF) chicks was washed three times with sterile phosphate-buffered saline (PBS), gently loaded onto an equal volume of Histopaque-119 (Sigma, Germany), then centrifuged at 250 × g for 30 min. Cells at the interface were collected and washed twice with PBS as previously described 48 . Aliquots of 2 × 10^6 cells/mL were used to seed 6-well plates containing Roswell Park Memorial Institute (RPMI) 1640 medium supplemented with 1 U/mL penicillin and streptomycin, 10% fetal bovine serum (FBS, Gibco, USA), 30 ng/mL recombinant human granulocyte macrophage colony-stimulating factor (GM-CSF) and 25 ng/mL interleukin-4 (IL-4) (Peprotech), and incubated at 37 °C and 5% CO2 for 6 d. Fifty percent of the medium was replaced with complete medium every two days. The cytomorphology of the DCs was observed under a light microscope during the cell differentiation process. CD11c, CD86 and MHCII expressed on the surface of DCs on the 6th day were analyzed using fluorescence-activated cell sorting (FACS) (BD FACSCalibur, FACS101) at Taizhou People's Hospital. Duck and goose bone marrow DCs were prepared similarly.
Bio-panning of DC-specific binding phages. The T7-VHH library, displaying alpaca nanobodies at the C-terminus of the p10B protein, was used to screen for DC-specific binding nanobodies 10 . Depletion selection: monocytes isolated from bone marrow were resuspended in RPMI 1640 medium supplemented with 10% FBS and the density adjusted to 1 × 10^7 cells/mL. An aliquot of the T7-VHH library containing ~10^10 PFU phages was diluted in blocking buffer (RPMI 1640 medium supplemented with 10% FBS + 0.5% bovine serum albumin), transferred to 6-well plates and left for 1 h at room temperature to deplete the library of phages that adsorb to the plastic. Unbound phages were removed and transferred to another 6-well plate containing monocytes, then incubated for one hour at room temperature to deplete monocyte-binding phages. First round: after centrifugation (250 × g for 5 min), the supernatants of the monocyte plate were transferred to a 6-well plate containing DCs and incubated at room temperature for 45 min. The plate was then centrifuged at 250 × g for 5 min, the supernatants were removed, and the cells were resuspended in wash buffer (RPMI 1640 medium containing 1% FBS and 0.05% Tween-20). Washes were collected and saved for titering the phage. The wash operation was repeated five times. Thereafter, the DC-bound phages were eluted by lysing the cells with CHAPS lysis buffer (2.5% w/v CHAPS [3-((3-cholamidopropyl)dimethylammonio)-1-propanesulfonate] in RPMI 1640 medium) and evaluated by the phage-plaque assay 46 . The phages were amplified in E. coli BL21 for the next round of bio-panning. The second and third rounds were carried out according to the procedures described above, except that the incubation time was reduced to 30 min and 15 min, respectively. The VHH genes in the total eluted phage from each round of selection were amplified, and the PCR products were sent to Genepioneer Biotechnologies Co., Ltd. (Nanjing, China) for next-generation sequencing. Individual phage plaques in the eluate of the third round of selection were randomly selected for VHH gene amplification 36 , and the PCR products were sequenced by Genscript Biotechnology Co., Ltd. (Nanjing, China). Specificity and selectivity assay. Individual phage clones identified by DNA sequencing were propagated and purified 26 for use in cell-association assays. In the specificity assay, phage particles (~10^6 PFU/well) were incubated with DCs, chicken bone marrow cells and serum-treated control wells in a 96-well cell culture plate at room temperature for 15 min. Following several washes, cell- or serum-associated phages were collected by treating each well with CHAPS lysis buffer and titering in E. coli BLT5403 cells. The most promising phage binders, i.e., those phages that demonstrated increased binding to DCs rather than bone marrow cells and serum components, were tested further for their ability to discriminate between different targets (selectivity assay) using a panel of different cells from chicken, duck and goose. Dendritic cells and bone marrow cells of chicken, duck and goose were prepared as previously described. Chicken embryo fibroblasts (CEF), duck embryo fibroblasts (DEF) and goose embryo fibroblasts (GEF) were prepared according to the method of Zhai et al. 49 . Cells were seeded into 96-well culture plates (~10^4 cells/well) in a 37 °C cell culture incubator with 5% CO2 for 1 h. Then, each phage clone (~10^6 PFU/well) was incubated with the cells for 15 min at room temperature.
Wells were washed eight times with washing buffer by centrifugation of the plate at 250 × g for 5 min, and unbound phages in the supernatant were carefully removed. To collect cell-associated phages, 25 μL of CHAPS lysis buffer was added to each well and incubated for 10 min on a shaker with gentle rocking. Aliquots of 175 μL of overnight-cultured E. coli BLT5403 host cells were added to each well and incubated for 3 min at room temperature. The final mixture was spread on LB agar plates and incubated for 3-4 h in a 37 °C incubator. The phage recovery was calculated as the percent ratio of output plaque-forming units to input plaque-forming units. All selectivity and specificity cell-association assays were performed in triplicate, with data reported as the mean ± standard deviation. Subcellular localization assay. Interactions of selected phage clones with DCs were analyzed as described previously 50 . Dendritic cells at day 6 were harvested and seeded on a cell slide, then incubated in a 37 °C incubator with 5% CO2 until the cells were ~70% confluent. Next, cells were incubated with 1.0 × 10^9 PFU of an isolated phage clone in serum-free RPMI 1640 culture medium for 15 min at 37 °C. Cells were washed five times with PBS and fixed with 4% paraformaldehyde for 20 min at room temperature. After an additional three washes, cells were permeabilized with 0.1% Triton X-100 for 10 min and blocked with 1% bovine serum albumin for 30 min at room temperature. Cells were treated with a 1:2000 dilution of DyLight anti-T7 tag antibody (ab117595, Abcam) in blocking buffer for 1 h at room temperature. Cells were washed with PBS containing 0.05% Tween-20 (PBST) and treated with a 1:1000 dilution of Phalloidin-iFluor 594 conjugate (abs42235791, Absin) for 1 h at room temperature in the dark. After washing, cover slips were applied to the slides with VECTASHIELD mounting medium with DAPI. Slides were visualized with a ZEISS confocal microscope (ZEISS LSM 880) at the Testing & Analysis Center at Yangzhou University. The VHH genes from phage-54 (VHH-54) and phage-74 (VHH-74) were fused with the glutathione S-transferase (GST) gene in the pGEX-4T-1 vector for fusion protein expression. The purified fusion proteins and immature chicken DCs (day 6) were sent to ZoonBio Biotechnology (Nanjing, China) for in vitro pull-down and mass spectrometry assays. Immunogenicity of T7 phage displaying DC-binding nanobodies. The T7-wt (T7 select 415-1b), phage-54 and phage-74 were propagated in E. coli BLT5403. Briefly, 50 mL of LB medium was inoculated with 500 μL of an overnight culture of E. coli BL21 and incubated with shaking (200 rpm, 2.5 h) at 37 °C to reach an OD600 of 1.0. The E. coli BLT5403 host was then infected with the phage particles at a multiplicity of infection (MOI) of 0.001 and kept shaking at 37 °C for more than 3 h until complete cell lysis was observed. DNase I and RNase A (Takara, China) were added 30 min before harvesting the progeny phages. Phage particles were recovered by the PEG-NaCl precipitation method and extracted with 0.1% Triton X-114 to remove endotoxin 26 . The purified T7-wt, phage-54 and phage-74 were detected by Western blot, then the phage titer was adjusted to 10^11 PFU/mL and the phages were inactivated with 0.1% (v/v) β-propiolactone. The aqueous phage was then mixed with Montanide ISA 206 (Seppic, France) oil adjuvant at a ratio of 54:46 (v/v) to form a water-in-oil-in-water (W/O/W) emulsion.
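The recovery calculation used throughout the panning, specificity and selectivity assays (output PFU divided by input PFU, times 100) is simple enough to script. The helper below is a hypothetical illustration, not code from the study; the example titers are made up.

```python
# Hypothetical helper for the phage recovery formula used in the panning assays:
# recovery (%) = (output PFU / input PFU) * 100.
def phage_recovery_percent(output_pfu: float, input_pfu: float) -> float:
    if input_pfu <= 0:
        raise ValueError("input PFU must be positive")
    return output_pfu / input_pfu * 100.0

# Example with made-up numbers: 3.2e3 PFU recovered from a 1e6 PFU input gives
# 0.32% recovery, which would clear the 0.3% specificity cut-off quoted above.
print(phage_recovery_percent(3.2e3, 1.0e6))  # -> 0.32
```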
Thirty SPF chickens were divided into three groups and injected subcutaneously in the neck with 0.2 mL of the T7-wt, phage-54 or phage-74 emulsion, respectively. Blood samples were collected on days 0, 14, 21 and 28, and anti-capsid antibody levels against T7 phage were measured using ELISA. The capsid protein of T7 phage (p10B) was expressed from the pET-28a-p10B vector, and the purified protein (100 ng/mL) was used for coating to establish an indirect ELISA. A more detailed description of the ELISA protocol used is provided in the supplementary material. All chickens were maintained and euthanized as per the protocol approved by the Institutional Animal Care and Use Committee, and experiments were conducted following the guidelines of the Institutional Biosafety Committee at the Jiangsu Academy of Agriculture Sciences. Data availability Supplementary information accompanies this paper.
Dual Differentiation-Exogenous Mesenchymal Stem Cell Therapy for Traumatic Spinal Cord Injury Repair in a Murine Hemisection Model Mesenchymal stem cell (MSC) transplantation has shown tremendous promise as a therapy for the repair of various tissues of the musculoskeletal, vascular, and central nervous systems. Based on this success, recent research in this field has focused on complex tissue damage, such as that which occurs from traumatic spinal cord injury (TSCI). The critical event for successful exogenous MSC therapy is the migration of MSCs to the injury site, which allows for their anti-inflammatory and morphogenic effects on fracture healing, neuronal regeneration, and functional recovery. Thus, there is a need for a cost-effective in vivo model that can faithfully recapitulate the salient features of the injury, therapy, and recovery. To address this, we review the recent advances in exogenous MSC therapy for TSCI and traumatic vertebral fracture repair and the existing challenges regarding their translational applications. We also describe a novel murine model designed to take advantage of the multidisciplinary collaborations between musculoskeletal and neuroscience researchers that are needed to establish an efficacious MSC therapy for TSCI. Introduction With almost 12,000 new spinal cord injuries (SCI) occurring every year in the United States alone, nearly half a million chronic SCI patients suffer the long-term consequences of this devastating injury. Since the major disabilities from SCI are neurological deficits, neural regeneration remains the priority. Consequently, other aspects of SCI, such as vertebral fracture reconstruction, receive less attention. Thus, one major limitation in this field that has contributed to the lack of progress has been the absence of multidisciplinary cooperation between neuroscientists working towards nerve regeneration and orthopaedic investigators working with mesenchymal stem cells (MSCs) for bone repair [1]. One of the most challenging aspects of treating injuries to the spinal cord is the multitude of problems that need to be addressed to restore normal function. These include neural cell death, limited axon regeneration, inflammation and scar formation, disruption of the neurovascular supply, and loss of structural support from the surrounding vertebrae. Thus, any therapeutic approach aimed at SCI tissue regeneration requires a coordinated approach in which neural repair is accompanied by fracture repair and revascularization of newly formed tissues [2]. Several types of cell transplants have been proposed for SCI and fracture repair, including stem cells and their differentiated progeny, with the purpose of directly replacing lost neurons, oligodendrocytes, and osteoblasts, respectively. MSCs have shown great potential to enhance osteogenesis and chondrogenesis for spinal fusion repair. Furthermore, transplanted MSCs have the ability to differentiate into osteoblasts in the presence of specific bioactive factors, such as stromal cell-derived factor-1/CXCR4, nutrients, and extracellular matrix in the MSC/hydroxyapatite/type I collagen hybrid graft [1,3-5]. However, controversy remains in the field over the extent of exogenous MSC contribution to neuronal regeneration, despite evidence from animal models and human specimen data showing the potential for neuronal differentiation [6-12]. Thus, the development of a cost-effective animal model to definitively answer this question is warranted.
TSCI Murine Models for Cell-Based Therapy The fundamental events of SCI can be divided into four main stages: the immediate, acute, intermediate, and chronic phases [13]. To evaluate final neurological outcomes, a reproducible TSCI model is essential, one whose outcome can be either improved or worsened by the intervention of interest [14,15]. For small animals, such as mice and cats, the most widely accepted models include epidural balloon compression [14,16], weight-drop contusion injury [17,18], modified aneurysm clip crush [19,20], and hemisection with removal of a critical-size defect or hemicontusion [21]. Hemisection Model of Unilateral Injury. Although hemisection of the spinal cord is not a clinically relevant model, our interests in this field are focused on understanding the effects of transplanted MSCs on simultaneous angiogenesis, osteogenesis, neuronal survival, axonal growth, and remyelination following TSCI. Thus, in addition to producing a highly reproducible injury and host response to TSCI, the hemisection model provides a clear injury boundary for radiological and histological outcomes to assess transplanted MSC proliferation and neuronal differentiation. To this end, we have developed a novel hemisection-unilateral TSCI model in mice (Figure 1). The major advantage of this model is that it allows researchers to deliver synthetic biomaterials, with or without exogenous MSCs, locally to counteract secondary damage after SCI. These transferred MSCs are known to mediate healing by orchestrating a favorable environment for parenchymal cell survival and by stimulating cell bridges within the traumatic centromedullary cavity. Following a laminectomy, the surgical procedure involves longitudinal exposure of the dura mater; a spinal cord hemisection is then made at the appropriate spinal cord level, followed by the removal of a 2-3 mm hemicord segment along the midline using microscissors. After cell transplantation, the dura, muscle, and fascia are sutured separately using methods that have been previously described [22,23]. Modified Aneurysm Clip Crush. Compared to other TSCI murine models, the modified aneurysm clip can mimic an initial impact plus persisting compression. With graded, clinically relevant compression that resembles clinical injury in the sparing of white matter tracts, this model can provide information about surviving tracts and residual motor function. However, it suffers from an approximately 10% mortality rate during the injury procedure, especially during laminectomy, due to excessive blood loss and the incidence of anesthetic sensitivities. A longitudinal incision is made on the midline of the back to expose the superficial muscle layers, and the muscle attached to the vertebrae is then bluntly dissected. A laminectomy is performed on the target vertebrae, and part of the pedicles is removed with a pair of microscissors. An extradural path between the spinal cord and the vertebral body is created to pass the lower blade of the modified aneurysm clip underneath the spinal cord and hook it onto its upper blade to produce ventral and dorsal compression [19,20,24]. Weight-Drop Contusion Technique. Fifty percent of human spinal cord injuries contain some spared white matter tissue, which contains uninjured axonal projections.
Friedenstein and colleagues investigated electrophysiological and morphological data from 85 patients and 27 adult rats and demonstrated that the weight-drop contusion model in the rat can serve as an adequate animal model for evaluating the effects of new treatment strategies for TSCI [25]. To produce the model, T10 laminectomies were performed. While the vertebral column was stabilized with Adson forceps, the impactor probe was positioned 2-4 mm above the spinal cord. An impact force of 150 kilodynes was delivered to the exposed spinal cord through the intact dura with an Infinite Horizons impactor to create a contusion injury of moderate severity [26,27]. Epidural Balloon Compression Injury. To produce precisely graded, submaximal SCI, Vanický and colleagues modified the saline-filled Fogarty catheter technique from subdural to epidural compression, allowing the gradient of injury to be customized [14,28]. Briefly, a 2-French Fogarty arterial embolectomy catheter (Baxter Healthcare Corporation, Irvine, CA) is inserted into the epidural space at the T10 level and advanced rostrally for two metameric levels before being inflated with 15 μL of distilled water and left in place for 5 minutes. The balloon is then deflated and carefully removed, and the skin and muscle are carefully closed in two layers. Histological cross-sections of the spinal cord have shown that damage to white and gray matter correlates significantly with the degree of compression. Current Advances in MSC-Based Therapies for TSCI and Fracture Repair and the Frontier of MSC Dual Differentiation Since MSCs were first isolated by Friedenstein and colleagues in 1968 [25], plastic-adherent bone marrow-derived MSCs have typically been characterized by their cell surface markers: positive for Stro-1, CD29, CD73, CD90, CD105, CD166, and CD44, and negative for CD34, CD45, CD14, CD11b, CD19, CD79a, and HLA-DR [29]. The fate of these MSCs is known to be limited over serial passages due to the lack of alternative lengthening of telomeres (ALT), which results in telomeric DNA shortening at each cell division and eventually senescence [30-32]. However, prior to the 10th passage, exogenous MSCs retain their stemness and proliferative capacity to facilitate bone repair in conditions such as fracture nonunion, osteogenesis imperfecta, and hypophosphatasia [29,33-38]. Figure 1: A murine laminectomy and hemisection model of TSCI. Development of a murine laminectomy and hemisection model of TSCI was achieved using protocols approved by the University of Rochester Committee for Animal Resources (IACUC). After the animal is anesthetized, a laminectomy is performed to remove the thoracic vertebra 11 lamina (a), then the dura is opened to expose the spinal cord (b), and, finally, a hemisection lesion is made to generate a 2 mm defect in the right half of the spinal cord (c). Postoperative dorsal view (d) and lateral view (e) of micro-CT scans of the spine; 5x (f) and 20x (g) micrographs of H&E-stained histology sections are presented to illustrate the vertebral bone and spinal cord defects generated in this model, respectively. Another important property of MSCs is that they can terminally differentiate into multiple lineages including osteoblasts, chondrocytes, myoblasts, fibroblasts, adipocytes, and oligodendrocytes [39-46]. We and others have shown definitive MSC-mediated osteogenesis in murine models of fracture and structural allograft healing. Rashidi et al.
compared MSCs with three nonosteogenic cell lines, HEK293, HeLa, and NTera, and found that MSCs are uniquely capable of depositing mineral through a mechanism independent of established dexamethasone or bone morphogenetic protein signaling [47]. In contrast, experimental evidence formally demonstrating MSC neuronal differentiation remains controversial, in part because MSCs are derived from the mesoderm, while neurons are derived from the ectoderm. However, in support of the MSC-neuron differentiation theory, there are numerous publications showing that neuronal marker expression in MSCs can be induced following stimulation with epidermal growth factor (EGF) and basic fibroblast growth factor (bFGF) [48-51]. Deng and colleagues even reported that MSCs significantly increase expression of the astrocyte-specific glial fibrillary acidic protein spontaneously in the absence of cytoplasmic cyclic AMP, a specialized neuronal induction reagent [51]. Collectively, this evidence indicates that MSCs have dual differentiation capability. For clinical transplantation, the ideal administration mode is intravenous injection or intraoperative implantation of an MSC-preseeded biomaterial scaffold. Clinical studies evaluating the efficacy of exogenous MSC therapy for bone repair have shown significant improvement of bone mineral density and linear bone growth in patients [34-38]. In contrast, the efficacy of MSC-mediated neuronal recovery remains to be formally evaluated by functional assessments and histological confirmation. Thus, experiments in the murine model described here should be able to answer these important questions in the future. Ethical Approval The murine spinal cord injury model, euthanasia, perfusion, and micro-CT scanning were performed in accordance with NIH guidelines for animal use and were approved by the University of Rochester Committee for Animal Resources (IACUC). Authors' Contribution This work was conducted in parts: murine spinal cord injury model surgery (Hai Liu), perfusion and tissue collection (Hai Liu, Chao Xie), data collection and primary paper writing (Hai Liu, Edward M. Schwarz, Chao Xie), and revision (Edward M. Schwarz, Chao Xie).
Molecular Characterization of a Human DNA Kinase* Human polydeoxyribonucleotide kinase is an enzyme that has the capacity to phosphorylate DNA at 5′-hydroxyl termini and dephosphorylate 3′-phosphate termini and, therefore, can be considered a putative DNA repair enzyme. The enzyme was purified from HeLa cells. Amino acid sequence was obtained for several tryptic fragments by mass spectrometry. The sequences were matched through the dbEST data base with an incomplete human cDNA clone, which was used as a probe to retrieve the 5′-end of the cDNA sequence from a separate cDNA library. The complete cDNA, which codes for a 521-amino acid protein (57.1 kDa), was expressed in Escherichia coli, and the recombinant protein was shown to possess the kinase and phosphatase activities. Comparison with other sequenced proteins identified a P-loop motif, indicative of an ATP-binding domain, and a second motif associated with several different phosphatases. There is reasonable sequence similarity to putative open reading frames in the genomes of Caenorhabditis elegans and Schizosaccharomyces pombe, but similarity to bacteriophage T4 polynucleotide kinase is limited to the kinase and phosphatase domains noted above. Northern hybridization revealed a major transcript of approximately 2.3 kilobases and a minor transcript of approximately 7 kilobases. Pancreas, heart, and kidney appear to have higher levels of mRNA than brain, lung, or liver. Confocal microscopy of human A549 cells indicated that the kinase resides predominantly in the nucleus. The gene encoding the enzyme was mapped to chromosome band 19q13.4. Transient DNA strand breaks and short gaps are frequently observed in cellular DNA. Many arise during regular cellular activity such as DNA replication, recombination, or differentiation. Others occur as a consequence of exposure to endogenous or exogenous DNA damaging agents. Repair of these strand interruptions is usually mediated by DNA ligases and polymerases. Both of these classes of enzymes require 3′-hydroxyl DNA termini, and the DNA ligases also require 5′-phosphate termini. However, the termini generated by nucleases, such as DNase II, and many produced by ionizing radiation bear 3′-phosphate and 5′-hydroxyl groups (1-4), and therefore must be processed before they can be acted upon by DNA ligases or polymerases. One enzyme that possesses the capacity to both phosphorylate 5′-hydroxyl termini and dephosphorylate 3′-phosphate termini is polynucleotide kinase (PNK). 1 The PNK from T4 phage has found widespread application in molecular biology, especially for radiolabeling DNA and oligonucleotides (5). It can act on DNA and RNA and even phosphorylate nucleoside 3′-monophosphates. However, the main cellular function of the T4 enzyme is not to repair DNA, but rather to counter the action of a phage endoribonuclease that cleaves tRNA (6). Eukaryotic PNKs fall into two categories depending on whether their preferred substrate is DNA or RNA (7). While both can phosphorylate 5′-termini, only the former have an associated 3′-phosphatase activity (8-12). Mammalian DNA kinases have been purified from a variety of sources including rat liver and testes and calf thymus (8-19). The isolated enzymes share similar properties with regard to the kinase activity, including an acidic pH optimum (5.5-6.0) (8-18), and the minimum size of oligonucleotide that can be phosphorylated is in the range of 8-12 nucleotides (11,15).
The only significant discrepancy has been the molecular mass assigned to the polypeptides. Earlier reports regarding the PNK purified from rat organs indicated that the protein may be an 80-kDa homodimer composed of 40-kDa polypeptides (9,10,18), but PNK activity in tissue extracts detected on activity gels migrated as a 60-kDa polypeptide (20). Estimates for the size of calf thymus PNK have ranged from 56 to 70 kDa (16,17). We and others have recently purified the DNA kinases from calf thymus and rat liver to near homogeneity, making use of a broad spectrum of proteolysis inhibitors (11,12). The major protein band migrated as a 60-kDa peptide on polyacrylamide gels, but a minor band was observed at 40 kDa in the rat liver preparation. At present, the cellular function of mammalian DNA kinases has not been elucidated. Clearly, one possibility is participation in the repair of strand breaks induced by DNA damaging agents, such as ionizing radiation or topoisomerase inhibitors (21,22). We have shown that, unlike T4 phage PNK, calf thymus PNK is able to efficiently phosphorylate the 5Ј-OH terminus at a nick and a one-nucleotide gap in a doublestranded DNA substrate (11). Furthermore, an in vitro system consisting of purified mammalian PNK, DNA polymerase ␤, and DNA ligase I was able to effect the complete repair of nicks and short gaps bounded by 3Ј-phosphate and 5Ј-OH termini (21). Alternatively, PNK could participate in a more regular function. For example, it has been observed that a proportion of Okazaki fragments have 5Ј-OH termini (23), which would have to be phosphorylated prior to ligation. As part of our ongoing study to address the question of the role of eukaryotic PNKs, this paper describes the molecular cloning, sequencing, cellular localization, and chromosomal mapping of human PNK. EXPERIMENTAL PROCEDURES Phosphorylation Assay-The DNA substrate containing 5Ј-OH termini was prepared by digestion of calf thymus DNA with micrococcal nuclease as described by Richardson (24). Each 5Ј-phosphorylation reaction mixture (20 l), containing 10 g of DNA substrate, 3 Ci of [␥-32 P]ATP (3000 Ci/mmol, Amersham Pharmacia Biotech), 500 nM unlabeled ATP, 80 mM succinic acid, pH 5.5, 10 mM MgCl 2 , 1 mM dithiothreitol, 1 mM EGTA, 2 g of bovine serum albumin, and protein fraction (typically 4 l), was incubated for 20 min at 37°C. The reaction was stopped and the DNA precipitated by addition of 200 l of 20% trichloroacetic acid and 100 l of 250 M sodium pyrophosphate containing 50 g of bovine serum albumin. Following centrifugation at 10,000 ϫ g for 10 min, the pellets were resuspended in 80 l of 0.1 M NaOH and reprecipitated by addition of 400 l of 10% trichloroacetic acid. This wash step was repeated once more before the radioactivity of the pellet was determined. As a control for kinase specificity (i.e. DNA versus protein), parallel reactions were carried out in the absence of the DNA substrate. 3Ј-Phosphatase Assay-The 3Ј-dephosphorylation of a 21-mer oligonucleotide (p21p) catalyzed by recombinant human PNK in Escherichia coli cell extracts was assayed by gel electrophoresis as described previously (21). 
Partial Purification of Polydeoxyribonucleotide Kinase from HeLa Cells-A pellet of frozen HeLa S3 cells (3 ϫ 10 10 , approximately 50 ml packed cell volumes) was thawed in 200 ml of hypotonic buffer (10 mM Tris-HCl, pH 7.5, 2 mM MgCl 2 , 5 mM dithiothreitol, and 0.5 mM EDTA) containing a mixture of protease inhibitors (25 g/ml N ⑀ -p-tosyl-L-lysine chloromethyl ketone, 5 g/ml chymotrypsin, 1 g/ml aprotinin, 0.5 g/ml leupeptin, 0.5 g/ml pepstatin, and 1 mM ␣-toluenesulfonyl fluoride) and held for 20 min at 0°C before disruption in a Dounce glass homogenizer (15 strokes). Nuclei were collected by low speed centrifugation, and a protein extract was prepared in the presence of 0.3 M KCl as described previously (25). Sequential chromatography of the extract on a phosphocellulose P11 column (Whatman, Clifton, NJ) and an Ultrogel AcA34 gel filtration column (Sepracor/IBF, Marlborough, MA), and ammonium sulfate precipitation steps were carried out as described by Robins and Lindahl (26), except that the elution buffer for the first column contained 0.6 M KCl and the elution buffer for the second column contained 0.5 M NaCl. The active fractions in the second peak from the gel filtration column were pooled (63 mg of protein in a total volume of 54 ml), dialyzed against buffer A (50 mM Tris-HCl, pH 7.5, 100 mM NaCl, 1 mM dithiothreitol, 1 mM potassium phosphate, and 10% glycerol). The specific activity at this stage of purification was approximately 0.06 units/mg of protein, where one unit of enzyme is the amount required to incorporate 1 nmol of phosphate from ATP into micrococcal nuclease-treated DNA in 30 min at 37°C (27). The pooled material was loaded onto a column (2.5 ϫ 5.0 cm) of Bio-Gel HT hydroxyapatite (Bio-Rad) pre-equilibrated with buffer A. The column was washed with five volumes of buffer A before eluting bound protein with a 200-ml linear gradient of 50 -500 mM potassium phosphate in buffer A collecting in 5-ml fractions. The active fractions, 28 -33, were pooled and dialyzed against buffer B (10 mM potassium phosphate, pH 6.8, 4 mM 2-mercaptoethanol, and 10% glycerol) containing 50 mM KCl. The material was loaded onto a 1-ml HiTrap SP column (Amersham Pharmacia Biotech), washed with 10 column volumes of buffer B and eluted with a 30-ml linear gradient of 50 -600 mM KCl in 30 1-ml fractions. A peak of kinase activity eluted at fractions 10 -12. The contents of fraction 11 were dialyzed against buffer C (50 mM Tris-HCl, HeLa cells 8 [LI][LI]YPE[LI]PR HeLa cells a Leucine and isoleucine [LI] cannot be distinguished by low energy collision-activated dissociation as they are isomers. pH 7.5, 1 mM dithiothreitol, 1 mM potassium phosphate, and 10% glycerol) containing 50 mM NaCl, and loaded onto a Mono S PC 1.6/5 column attached to a SMART micropurification chromatography system (Amersham Pharmacia Biotech). Protein was eluted with a 2-ml linear salt gradient of 50 -450 mM NaCl at a flow rate of 100 l/min in 20 100-l fractions. After assaying the fractions for DNA kinase activity, a small quantity of each fraction was examined by SDS-PAGE to determine which polypeptide correlated with activity. The remaining contents of the fraction with the peak of kinase activity (fraction 12) was further fractionated by gel electrophoresis and electroblotted onto polyvinylidene difluoride membrane. Amino Acid Sequencing-The electroblotted HeLa protein was stained with sulforhodamine B (0.05% w/v in 30% v/v aqueous metha-nol, 0.1% v/v acetic acid) using a rapid-staining protocol (28). 
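The unit definition quoted above (1 nmol of phosphate transferred from ATP into micrococcal nuclease-treated DNA in 30 min at 37 °C) lends itself to a small calculation helper. The sketch below is purely illustrative and is not part of the original work; the function names and example numbers are hypothetical, and the scaling to a 30-min basis assumes incorporation is linear over the assay time.

```python
# Hypothetical helper: convert measured phosphate incorporation into enzyme
# units and specific activity, per the unit definition quoted above
# (1 unit = 1 nmol phosphate incorporated in 30 min at 37 C).
def enzyme_units(nmol_phosphate: float, assay_minutes: float) -> float:
    # Assumes incorporation is linear with time over the assay.
    return nmol_phosphate * (30.0 / assay_minutes)

def specific_activity(units: float, mg_protein: float) -> float:
    return units / mg_protein

# Example with made-up numbers: 0.9 nmol incorporated in a 20-min assay by a
# fraction containing 0.25 mg protein.
u = enzyme_units(0.9, 20.0)           # 1.35 units
print(u, specific_activity(u, 0.25))  # 1.35 units, 5.4 units/mg
```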
The dried, stained protein was then digested in situ on the polyvinylidene difluoride membrane with trypsin (Roche Molecular Biochemicals, modified) for 18 h at 30°C and the peptides extracted with 1:1 v/v formic acid/ethanol (29). Aliquots were sampled and directly analyzed by matrix-assisted laser desorption ionization (MALDI) time-of-flight mass spectrometry using a LaserMat 2000 mass spectrometer (Thermo Bioanalysis, UK) (30). Additional aliquots were quantitatively esterified using 1% v/v thionyl chloride in methanol and also analyzed by MALDI to provide acidic residue composition (31). Native and esterified peptide masses were then screened against the MOWSE peptide mass fingerprint data base (32). The remaining digested peptides (>90% of total digest) were then reacted with N-succinimidyl-2-morpholine acetate (SMA) in order to enhance b-ion abundance and facilitate sequence analysis by tandem mass spectrometry (33). [Displaced figure legend (sequence annotation): Amino acid residues 170-176 are predicted to be associated with the phosphatase activity; residues 301-304 may be a nuclear localization signal; residues 372-380 constitute a P-loop ATP-binding domain; residues 402-464 may represent a DNA binding domain. Nucleotide sequences indicating the initiation and stop codons, the polyadenylation signal, and two 5′-UTR sequences with homology to complementary sequences in the 5′-UTR of DNase II are underlined.] Dried peptide fractions were treated with 7 μl of freshly prepared, ice-cold 1% w/v N-succinimidyl-2-morpholine acetate in 1.0 M HEPES (pH 7.8 with NaOH) containing 2% v/v acetonitrile. Following reaction for 20 min on ice, the reaction was terminated by the addition of 1 μl of heptafluorobutyric acid and diluted with an equal volume of water. The solution was then injected in three 5-μl aliquots onto a capillary reverse-phase column (300 μm × 15 cm) packed with POROS R2/H material (Perseptive Biosystems, MA) equilibrated with 2% v/v methanol, 0.05% v/v trifluoroacetic acid running at 3 μl/min. The adsorbed peptides were washed isocratically with 15% v/v methanol, 0.05% v/v trifluoroacetic acid for 30 min at 3 μl/min to elute the excess reagent and HEPES buffer. Derivatized peptides were eluted with a single step gradient to 75% v/v methanol, 0.1% v/v formic acid and collected in two 3-μl fractions. The derivatized peptides were then sequenced by low energy collision-activated dissociation using a Finnigan MAT TSQ7000 tandem triple quadrupole mass spectrometer and a Finnigan MAT LCQ ion-trap mass spectrometer, both instruments fitted with nanoelectrospray sources (34,35). Collision-activated dissociation was typically performed with collisional offset voltages between −18 and −30 V. Two tryptic peptides from previously purified calf thymus PNK (11) were sequenced by the Harvard Microchemistry Facility (Cambridge, MA) using either an ABI 477A protein sequencer (Applied Biosystems, Foster City, CA) or an HP G1000A (Hewlett Packard, Palo Alto, CA). Confirmation of sequence was obtained by MALDI time-of-flight mass spectrometry on a LaserMat 2000 mass spectrometer. Isolation and Sequencing of Polynucleotide Kinase cDNA-DNA sequences derived from the peptide sequences were used to screen the dbEST data base (NIH). A cDNA clone from infant brain (clone number 32798 inserted in lafmid BA) was identified and obtained from the I.M.A.G.E. Consortium.
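The MOWSE screening step above reduces to comparing measured MALDI peptide masses against theoretical tryptic-digest masses of candidate sequences within a mass tolerance. A minimal Python sketch of that matching logic follows; the monoisotopic residue masses are standard values, but the candidate sequence and the "measured" masses are invented for illustration and are not the peptides reported in this work.

# Toy peptide-mass-fingerprint matcher: digest a candidate protein sequence
# with trypsin in silico (cleave after K/R, not before P), compute monoisotopic
# peptide masses, and report matches to a list of measured MALDI masses.
# The candidate sequence and the "measured" masses below are illustrative only.

import re

# Monoisotopic residue masses (Da); water is added once per peptide.
RESIDUE = {
    "G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276, "V": 99.06841,
    "T": 101.04768, "C": 103.00919, "L": 113.08406, "I": 113.08406,
    "N": 114.04293, "D": 115.02694, "Q": 128.05858, "K": 128.09496,
    "E": 129.04259, "M": 131.04049, "H": 137.05891, "F": 147.06841,
    "R": 156.10111, "Y": 163.06333, "W": 186.07931,
}
WATER = 18.01056

def tryptic_peptides(seq):
    """Cleave C-terminal to K or R, except when followed by P."""
    return [p for p in re.split(r"(?<=[KR])(?!P)", seq) if p]

def peptide_mass(peptide):
    return sum(RESIDUE[aa] for aa in peptide) + WATER

def fingerprint_matches(measured_masses, seq, tol_da=0.5):
    theoretical = [(p, peptide_mass(p)) for p in tryptic_peptides(seq)]
    return [(m, p, t) for m in measured_masses
            for p, t in theoretical if abs(m - t) <= tol_da]

if __name__ == "__main__":
    candidate = "MGEVEPPGKLLYPELPRAKDVTRHGK"      # invented sequence fragment
    measured = [942.4, 999.6, 489.3]             # invented MALDI masses (Da)
    for m, pep, theo in fingerprint_matches(measured, candidate):
        print(f"measured {m:.1f} Da ~ {pep} (theoretical {theo:.1f} Da)")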
The cDNA insert (1548 bp) was fully sequenced, using an automated ABI Prism 377 DNA analysis system (Applied Biosystems), and confirmed the presence of the poly(A) tail and a large open reading frame, but no clearly identifiable start codon. A 609-bp probe, prepared by digestion of clone 32798 with HindIII and PstI (New England Biolabs, Beverley, MA), was subsequently used to screen a λgt11 HeLa cell 5′-STRETCH PLUS cDNA library (CLONTECH, Palo Alto, CA) by a standard protocol (36). Ten positive clones were isolated, none of which contained a poly(A) tail. The largest insert (1.5 kilobase pairs) was amplified by PCR using the forward and reverse primers with Pfu DNA polymerase (Stratagene, La Jolla, CA), and then sequenced. Putative full-length cDNA was reconstituted as follows: (i) the PCR-amplified product was digested with SacI and shrimp alkaline phosphatase (Amersham Pharmacia Biotech), and the larger fragment (1.1 kilobase pairs) isolated by agarose gel electrophoresis, (ii) the DNA of clone 32798 was digested with SacI, (iii) the DNA molecules were ligated using phage T4 DNA ligase (Amersham Pharmacia Biotech), and (iv) the ligation product was digested with EcoRI. Expression of PNK cDNA in E. coli-The cDNA was amplified by PCR using Pfu DNA polymerase and primers with tails that provided cleavage sites for NdeI (5′-TTTGAATTCCCATATGGGCGAGGTGGAGCCCCCGGGC-3′) and BamHI (5′-CGCGGATCCTCAGCCCTCGGAGAACTGGCAG-3′) and then subcloned into the expression plasmid pET-16b (Novagen Inc., Madison, WI). The new plasmid (pPNK-His), which codes for a His-tagged derivative of PNK, was transfected into host E. coli bacterial strain BLR(DE3) (Novagen). The bacteria were grown at 37°C to an OD600 of 0.6 in 100 ml of LB medium containing 50 μg/ml ampicillin and 12.5 μg/ml tetracycline. Zinc chloride was then added to the medium to a final concentration of 0.015 mM, and PNK expression was induced at 30°C for 3 h by addition of 0.4 mM (final concentration) isopropyl-1-thio-β-D-galactopyranoside (Sigma). After harvesting the cells by centrifugation at 5000 × g at 4°C for 5 min, they were resuspended in 10 ml of extraction buffer (50 mM Tris-HCl, pH 7.5, 0.015 mM ZnCl2, 6 mM mercaptoethanol). Lysozyme was added to a final concentration of 100 μg/ml together with Triton X-100 (final concentration, 0.1%), and, after incubation at 30°C for 15 min, the bacteria were disrupted by sonication. The soluble fraction was separated from the insoluble fraction by centrifugation at 12,000 × g for 15 min at 4°C. The insoluble fraction was resuspended in 1 ml of extraction buffer. Northern (RNA) Hybridization Analysis-The 609-bp HindIII/PstI fragment used to screen the HeLa cDNA library was also used to probe a human multiple tissue Northern blot (CLONTECH) containing 2 μg (per lane) of polyadenylated RNA isolated from eight different human tissues. Hybridization was performed at 68°C for 1 h under conditions described by the manufacturer. As a control for the amounts of mRNA in each lane, the membrane was reprobed with a sequence of β-actin cDNA provided by CLONTECH. Antibodies and Confocal Microscopy-A synthetic peptide antigen was prepared commercially (SSPEQ, Quebec) from the first 17 amino acids of peptide sequence 1 (Table I). [Displaced figure legend (Fig. 4b): DNA kinase activity in the soluble fraction recovered from E. coli transfected with pET-16b and pPNK-His; the DNA phosphorylation assay was carried out with (+) and without (−) DNA as described under "Experimental Procedures" using 10 μg of cell extract.]
[Displaced figure legend (Fig. 4c): 3′-phosphatase activity in the insoluble fraction recovered from E. coli transfected with pET-16b and pPNK-His; the extracts (10 μg of total protein) were tested for their capacity to remove the 3′-phosphate from a 5′-32P-labeled 21-mer oligonucleotide (4 pmol) in 20 min as described previously (21).] A549 cells were grown as a monolayer on glass microscope slides to 80% confluence. Following rinsing in PBS, the cells were fixed in 95% ethanol at −20°C for 15 min. The slides were allowed to dry, and were incubated for 1 h at room temperature with 1% skim milk powder in PBS to minimize nonspecific binding of the immunoreagents. Following extensive PBS rinsing, the slides were incubated overnight at 4°C in the rabbit polyclonal antiserum (diluted 1/30 in PBS), in a humidified atmosphere. The cells were then rinsed extensively with PBS, and rhodamine-conjugated goat anti-rabbit IgG (H+L, Cappel Laboratories, Durham, NC) was applied at a dilution of 1/30 in PBS for a 1-h incubation at 37°C in a water-saturated atmosphere. The unbound fluorescent antibody was removed by extensive washing in PBS, and the slides were covered with coverslips for confocal microscopy using PBS/glycerol, 1:1 as a mounting medium. The instrumentation and the procedures for the confocal laser scanning microscopy have been described previously (38). Fluorescence in Situ Chromosomal Hybridization-Fluorescence in situ hybridization was performed as described previously (39). Human metaphase cells were prepared from phytohemagglutinin-stimulated peripheral blood lymphocytes. Biotin-labeled probes were prepared by nick translation using Bio-16-dUTP (Enzo Diagnostics, Farmingdale, NY). One PNK probe was clone 32798 (including the plasmid vector). A second probe, which provided a 440-bp sequence stretching from the 5′-untranslated region into the 5′-end of the translated sequence, was generated by PCR amplification of the HeLa cDNA clone using the λgt11 forward primer and a reverse primer, 5′-GTGGAGGCCATTGACCAAATA-3′. The two clones were labeled and co-hybridized to the chromosome preparations. Hybridization was detected with fluorescein-conjugated avidin (Vector Laboratories, Burlingame, CA), and chromosomes were identified by staining with 4,6-diamidino-2-phenylindole-dihydrochloride. RESULTS Partial Purification and Peptide Sequencing of Human PNK-Fractions of a crude extract of HeLa cells that was passed down an AcA 34 Ultragel size exclusion column in the presence of 0.5 M NaCl were shown to contain DNA kinase activity (Fig. 1). Two peaks of activity were apparent, the first migrating with the bulk of the higher molecular weight protein, which may suggest that PNK is bound in a complex to other proteins, and the second eluting with proteins in the range of 40-100 kDa. Initial steps in the purification were carried out by conventional chromatography using gel filtration, hydroxyapatite, and cation exchange media. For the final step, the protein was applied on a SMART system precision column and eluted in 20 100-μl fractions with a 2-ml salt gradient (50-450 mM NaCl). The kinase assay revealed a peak of activity centering on fraction 12 (Fig. 2A). Correlation of the intensities of the protein bands in fractions 10-14 (Fig. 2B) with kinase activity
strongly suggested that the ~60-kDa band (topmost of the three major bands in fraction 12, marked by an arrow) was responsible for the PNK activity. Accordingly, this band was chosen for amino acid sequencing. [Displaced figure legends: FIG. 5. Confocal microscopy of PNK in human A549 cells. a, cells incubated with preimmune rabbit serum; there is faint cytoplasmic background labeling, but a lack of staining in the nucleus. b, cells incubated with rabbit polyclonal antibodies raised against a PNK-derived peptide (see "Experimental Procedures"); pronounced nuclear staining is evident. FIG. 6. PNK mRNA levels in human tissues. The polyadenylated RNA isolated from several tissues (CLONTECH) was probed with a 32P-labeled 609-bp (HindIII-PstI) fragment from clone 32798 (containing nucleotides 205-809 of the PNK sequence). A cDNA probe to β-actin was used as a control for mRNA content. FIG. 7. In situ hybridization of biotin-labeled PNK probes to human metaphase cells from phytohemagglutinin-stimulated peripheral blood lymphocytes. The chromosome 19 homologues are identified with arrows; specific labeling was observed at 19q13.4. The inset shows partial karyotypes of two chromosome 19 homologues illustrating specific labeling at 19q13.4 (arrowheads). Images were obtained using a Zeiss Axiophot microscope coupled to a cooled charge-coupled device camera. Separate images of 4,6-diamidino-2-phenylindole-dihydrochloride-stained chromosomes and the hybridization signal were merged using image analysis software (NU200 and Image 1.57).] Initial screening of the peptide mass fingerprint against the MOWSE protein sequence data base (approximately 210,000 entries) revealed no significant matches. Six tryptic peptides were then sequenced de novo by low energy collision-activated dissociation using both triple quadrupole and ion-trap mass spectrometry (peptides 3-8, Table I). The use of the SMA reagent permitted full sequences to be obtained for each peptide from a single collision spectrum. This was particularly important when sequencing by collision-activated dissociation using the ion-trap, where the SMA reagent typically yielded complete y-ion coverage. In addition, two peptide sequences (peptides 1 and 2, Table I) from our previously purified preparations of calf thymus PNK (11) were obtained by conventional amino acid sequencing. Isolation and Sequencing of Human PNK cDNA-Screening of the dbEST data base with peptides 1-6 revealed several human and murine cDNA clones with DNA sequences coding directly for the peptide sequences or with minor variations. Of these clones, clone 32798 (from the I.M.A.G.E. Consortium) contained the largest insert (1548 bp) of human DNA. Sequencing of clone 32798 (bases 208-1636, Fig. 3) showed the presence of a poly(A) tail, a consensus -AATAAA- polyadenylation signal (bases 1589-1594) and a large open reading frame, with a TGA stop codon at bases 1564-1566, containing in-frame sequences coding for peptides 2-5 and 8. The first valine in peptide 2 of the calf thymus enzyme is replaced by an isoleucine in human PNK. The insert was not long enough to account for the size of the protein determined by SDS-PAGE analysis, so a probe was prepared from the 5′-end of the insert of clone 32798 and used to screen a λgt11 HeLa cell cDNA library. Although 10 positives were obtained from this library, none contained a poly(A) tail. The largest insert (bases −90 to 1317; Fig. 3), however, extended the open reading frame to a putative start codon (bases 1-3) and included in-frame sequences coding for peptides 1, 6, and 7. Again, there is a minor difference between the bovine and human sequences of peptide 1.
The 1100 bp of sequence in common between clone 32798 and the HeLa clone are absolutely identical. The complete open reading frame codes for a 521-amino acid protein with a predicted molecular mass of 57,102 Da. Bacterial Expression of Human PNK-After splicing together the full sequence shown in Fig. 3, the translated region was subcloned into an expression vector (pET-16b), to generate the plasmid pPNK-His, and expressed in a strain of E. coli as a His-tagged protein. After sonication of the bacteria to release protein, a strong band at ~60 kDa was observed in both the soluble and insoluble fractions (Fig. 4a). The 60-kDa band is the major protein in the insoluble fraction. Both the soluble and insoluble fractions showed clearly detectable levels of DNA kinase activity. (Data for the soluble fraction are shown in Fig. 4b. This assay is relatively straightforward because E. coli itself does not possess a DNA kinase activity that would interfere with it.) 3′-Phosphatase activity was measured by monitoring the removal of a 3′-phosphate group from a synthetic oligonucleotide (21). Fig. 4c shows the conversion of 5′-labeled p21p to p21 by protein from the bacteria harboring pPNK-His. In this case we made use of the protein in the inclusion bodies because nonspecific E. coli phosphatases were present in the soluble fraction. Expression of PNK in Human Tissues-Northern blot analysis of RNA isolated from a number of human tissues (Fig. 5) indicated a major transcript of approximately 2.3 kilobases, although in some tissues a second less abundant but considerably larger (7.5 kilobases) transcript was observed. There were also notable differences in the levels of mRNA expression in the tissues examined; in particular, pancreas and, to a lesser extent, heart appeared to have elevated levels of the message. Cellular Localization of PNK-Rabbit polyclonal antibodies were raised against a synthetic peptide composed of the first 17 amino acids in peptide 1 (Table I). These antibodies were used to visualize PNK in human A549 lung carcinoma cells by fluorescence confocal microscopy. The results, shown in Fig. 6, indicate that the protein accumulates in the cell nuclei. Chromosomal Location of Human PNK-To localize the PNK gene, we performed fluorescence in situ hybridization of biotin-labeled PNK cDNA probes to normal human metaphase chromosomes. Co-hybridization of two probes, EST clone 32798 and a 440-bp sequence stretching from the 5′-untranslated region into the 5′-end of the translated sequence, resulted in specific labeling only of chromosome 19 (Fig. 7). Specific labeling of 19q13.3-13.4 was observed on four (2 cells), three (9 cells), two (9 cells), or one (5 cells) of the chromosome 19 homologues; similar labeling was observed when the two cDNA probes were hybridized individually. These results suggest that the PNK gene is localized to chromosome 19, band q13.4. DISCUSSION Gel electrophoresis of the partially purified DNA kinase activity from HeLa cells indicated that the human enzyme is a ~60-kDa polypeptide, and thus has a molecular mass similar to that of the proteins isolated from rat liver and calf thymus (11,12,20). The molecular mass of the 521-amino acid polypeptide encoded by the sequenced cDNA is 57,102 Da. The computer program used to calculate this mass (ExPASy, Swiss Institute of Bioinformatics) also predicted a pI for the protein of 8.7, which is very close to the pI measured for the rat liver and calf thymus proteins of 8.6 and 8.5, respectively (10,11). Expression of the protein in E. coli, and the demonstration of its kinase and phosphatase activities (Fig. 4), confirmed that we had indeed cloned the cDNA for human PNK.
The amino acid sequence (Fig. 3) indicates that this is a novel protein. However, as shown in Fig. 8, predicted proteins of Caenorhabditis elegans and Schizosaccharomyces pombe have ~30% similarity to human PNK. There are several large blocks of high similarity. Two in particular are probably associated with the two known activities of PNK. Other proteins possess these consensus sequences. The first, residues 372-380, conforms to the sequence pattern (A/G)-X4-G-K-(S/T) of the P-loop consensus sequence (40), which is an ATP/GTP binding domain found in many kinases, including nucleoside kinases such as adenylate and uridylate kinases (Table II). The second, residues 170-176, is a sequence found in several phosphatases (Table II), including carbohydrate-phosphate phosphatases like phosphoglycolate phosphatase and glycerol-3-phosphatase. This might be anticipated for a DNA phosphatase, i.e., an enzyme that dephosphorylates a deoxyribose phosphate. The homologous sequence in T4 PNK is part of a motif predicted by Jilani et al. (12) to be associated with the phosphatase activity of the T4 enzyme. Their conclusion was partly based on the observation that the first Asp in the homologous sequence of human phosphomannomutase 1 (Table II) has recently been shown to form an acyl-phosphate when incubated with its substrate (41). It is reasonable to suggest that the homologous Asp in human PNK may be a phosphate acceptor in the course of the enzyme's phosphatase activity. Both the P-loop and the phosphatase consensus sequences are found in phage T4 polynucleotide kinase and a putative PNK/RNA ligase from the nuclear polyhedrosis virus. There is, however, little other homology between the human kinase and the phage T4 PNK. Two other potential functional regions were identified. The short motif RKKK, residues 301-304, could be a nuclear localization signal belonging to a class of such signals consisting of four or more Arg or Lys residues or any combination of the two amino acids. That PNK has such a motif is consistent with the results of the confocal microscopy (Fig. 6), indicating the nuclear localization of the protein. The large domain between residues 402 and 464 (Fig. 3) is suggested by the program MacPattern to be a DNA binding domain. It is a relatively cysteine-rich sequence, and these amino acids may be involved in structuring such a domain. Within the 5′-untranslated region (5′-UTR) of PNK are two closely spaced sequences (nucleotides −89 to −79 and −77 to −69) that have a high level of homology to sequences in the complementary strand of the 5′-UTR of DNase II (42) (Table II). This is of interest because DNase II, which has recently been implicated in apoptosis and cell differentiation (43-46), generates DNA strand breaks with 3′-phosphate and 5′-OH termini. If these nicks are repaired, it would presumably require the action of a polynucleotide kinase and, therefore, the regulation of the two enzymes may be coordinated. The Northern blot (Fig. 5) suggested high expression of the protein in human pancreas. However, it must be borne in mind that this tissue is isolated from cadavers up to 3 h post-mortem. It is possible that the high level of PNK expression in pancreas reflects the high level of DNA degradation (presumably by nucleases such as DNase II) seen in post-mortem pancreatic tissue (47).
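The two short motifs singled out above, the P-loop pattern (A/G)-X4-G-K-(S/T) and a poly-basic nuclear localization signal of four or more Lys/Arg residues, translate directly into regular-expression scans. A minimal Python sketch follows; the sequence fragment used here is invented for illustration and is not the published PNK sequence.

# Scan a protein sequence for the P-loop consensus (A/G)-X4-G-K-(S/T)
# and for poly-basic stretches of >= 4 K/R residues (candidate NLS).
# The sequence below is an invented fragment, not the published human PNK.

import re

P_LOOP = re.compile(r"[AG].{4}GK[ST]")    # (A/G)-X4-G-K-(S/T)
POLY_BASIC = re.compile(r"[KR]{4,}")      # four or more Lys/Arg in a row

def scan_motifs(sequence):
    hits = []
    for name, pattern in (("P-loop", P_LOOP), ("poly-basic NLS", POLY_BASIC)):
        for m in pattern.finditer(sequence):
            # Report 1-based residue coordinates, as in the text.
            hits.append((name, m.start() + 1, m.end(), m.group()))
    return hits

if __name__ == "__main__":
    toy_seq = "MSTDDARKKKLEQAGPSGTGKSTFACDEFGHIK"   # invented fragment
    for name, start, end, match in scan_motifs(toy_seq):
        print(f"{name}: residues {start}-{end} ({match})")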
We have mapped the location of the gene for PNK to chromosome 19q13.4. Among the other genes that have been mapped to this locus are DNA polymerase δ, the apoptosis regulator gene BAX, protein kinase C, and several zinc finger proteins. The DNA repair enzymes, DNA ligase I and ERCC1, are located at 19q13.3 and DNase II at 19q13.2. Note Added in Proof-The accompanying article by Jilani et al. (48) describes the independent identification and characterization of the same human DNA kinase, which they term PNKP.
6,823.6
1999-08-20T00:00:00.000
[ "Biology" ]
Synergistic antitumor activity of oncolytic reovirus and chemotherapeutic agents in non-small cell lung cancer cells Background Reovirus type 3 Dearing strain (ReoT3D) has an inherent propensity to preferentially infect and destroy cancer cells. The oncolytic activity of ReoT3D as a single agent has been demonstrated in vitro and in vivo against various cancers, including colon, pancreatic, ovarian and breast cancers. Its human safety and potential efficacy are currently being investigated in early clinical trials. In this study, we investigated the in vitro combination effects of ReoT3D and chemotherapeutic agents against human non-small cell lung cancer (NSCLC). Results ReoT3D alone exerted significant cytolytic activity in 7 of 9 NSCLC cell lines examined, with the 50% effective dose, defined as the initial virus dose to achieve 50% cell killing after 48 hours of infection, ranging from 1.46 ± 0.12 ~2.68 ± 0.25 (mean ± SD) log10 pfu/cell. Chou-Talalay analysis of the combination of ReoT3D with cisplatin, gemcitabine, or vinblastine demonstrated strong synergistic effects on cell killing, but only in cell lines that were sensitive to these compounds. In contrast, the combination of ReoT3D and paclitaxel was invariably synergistic in all cell lines tested, regardless of their levels of sensitivity to either agent. Treatment of NSCLC cell lines with the ReoT3D-paclitaxel combination resulted in increased poly (ADP-ribose) polymerase cleavage and caspase activity compared to single therapy, indicating enhanced apoptosis induction in dually treated NSCLC cells. NSCLC cells treated with the ReoT3D-paclitaxel combination showed increased proportions of mitotic and apoptotic cells, and a more pronounced level of caspase-3 activation was demonstrated in mitotically arrested cells. Conclusion These data suggest that the oncolytic activity of ReoT3D can be potentiated by taxanes and other chemotherapeutic agents, and that the ReoT3D-taxane combination most effectively achieves synergy through accelerated apoptosis triggered by prolonged mitotic arrest. Background The use of viral vectors for cancer gene therapy has been vigorously explored for the last two decades. The overall goal of the strategy is to promote cancer cell death through various means, such as tumor suppressor gene replacement, oncogene inactivation, suicide gene delivery, drug sensitization or enhancement of anticancer immunity. The extensive research efforts to develop tumor cell death-inducing viral vectors have reignited the interest in oncolytic viruses in recent years as a promising group of viral therapeutics that can directly induce tumor cell lysis through viral replication. The latest multidisciplinary research in cancer genomics and proteomics further provides an opportunity to discern various molecular pathways specifically upregulated (or dysregulated) in cancers that can be exploited as part of viral replication and destruction machinery. Indeed, many replication-competent oncolytic viruses currently in development are recombinant viruses engineered to become reliant on such cancer-specific molecules and signaling pathways for viral entry and replication, thus rendering cancer cells more selectively susceptible to virus-mediated oncolysis [1]. 
Unlike chemical entity-based anticancer agents, these viruses can propagate in susceptible tumor cells, re-target, infect, and destroy remaining cancer cells within the primary tumor or in the metastases, repeating the cycle until viral spread is halted by the host antiviral response or by mechanical barriers such as loss of vasculature and necrotic tissues. Mammalian reoviruses are ubiquitous, non-enveloped dsRNA viruses, normally associated with relatively benign pathology in humans. The Dearing strain of reovirus serotype 3 (ReoT3D) is a non-engineered wild type reoviral strain and belongs to a growing number of the new generation of oncolytic viruses because of its innate ability to preferentially kill transformed cells [2,3]. The oncolytic potency of ReoT3D has been extensively demonstrated against various cancers in vitro and in vivo, including colon, pancreatic, ovarian and breast cancers, as well as malignant gliomas and lymphoid malignancies [4][5][6][7][8][9]. The safety, feasibility and potential efficacy of ReoT3D cancer therapy are currently being investigated in phase I/II clinical trials [10]. As with other emerging therapeutics for cancer, the combined regimen of ReoT3D and conventional chemotherapeutic agents is expected to play a significant role in future clinical applications. However, it is currently unknown whether conventional chemotherapeutic agents can augment or interfere with the oncolytic effect of ReoT3D. In this study, we evaluated the oncolytic activity of ReoT3D in non-small cell lung cancer (NSCLC), and explored the therapeutic feasibility of ReoT3D-chemotherapeutic combination regimens against NSCLC. Oncolytic activity of ReoT3D and progeny virion production in NSCLC cell lines We first examined the in vitro susceptibility of human NSCLC cell lines to ReoT3D, as reoviral oncolytic activity had not been extensively studied in human lung cancer cells. Nine NSCLC cell lines (NCI-H460, A549/ATCC, HOP-62, NCIH322M, NCI-H226, EKVX, NCI-H23, NCI-H522, and HOP-92) included in the NCI-60 tumor cell line panel (Developmental Therapeutics Program, DTP, NCI-Frederick, Frederick, MD) were incubated with serially diluted ReoT3D for cytopathic effect (CPE) determination. Within 48 hours post-infection, ReoT3D induced significant cell death in seven of nine NSCLC cell lines in a dose-dependent manner ( Figure 1). ReoT3D 50 percent effective dose (ED50), defined as the initial virus dose (multiplicity of infection, MOI, expressed in plaque forming units per cell, pfu/cell) that resulted in 50% cell viability at 48 hours post-inoculation as compared to untreated controls, ranged from 1.46 ± 0.12 to 2.68 ± 0.25 (mean ± SD from 3 separate experiments) log 10 pfu/cell in the sensitive cell lines (Table 1). In contrast, NCI-H226 and NCI-H322M were relatively resistant to ReoT3D in this shortterm incubation assay, as indicated by the significantly lower levels of cell death even at the highest inoculum dose compared to those seen in the sensitive cell lines (P < 0.0001) ( Figure 1). While ReoT3D has been shown to induce CPE in murine cells with an activated Ras pathway [11,12], the presence of Ras-activating gene mutations [13] or activated Ras was not necessarily associated with lower ReoT3D ED50 in the NSCLC cell lines tested in our study (Table 1 and Figure 2). 
In vitro combination effects of ReoT3D and chemotherapeutic agents against NSCLC cells Next, we investigated the combination effects of ReoT3D and chemotherapeutic agents in four ReoT3D-susceptible (NCI-H460, HOP-92, NCIH23, and EKVX) and two relatively resistant (NCI-H226 and NCI-H322M) cell lines using the Chou-Talalay method. Growth inhibitory effects of chemotherapeutic agents, paclitaxel, cisplatin, gemcitabine, and vinblastine, were first determined in each cell line by the XTT cytotoxicity assay and expressed as the drug concentration that inhibited cell growth by 50% compared to untreated control (IC50) ( Table 2). Of the 6 cell lines, NCI-H322M and EKVX showed high levels of resistance to multiple compounds, consistent with the previous report that characterized them as multi-drug resistant against a variety of anticancer agents [14] (Table 2). When NCI-H460 cells were incubated with increasing doses of ReoT3D in the absence or presence of paclitaxel, whose concentration was serially increased to maintain a constant ratio to the ReoT3D dosage (the constant ratio combination design) [15], the synergistic anticancer effect of the two agents was clearly demonstrated by a leftward shift of the dose response curve (Figure 3a) as well as isobologram and combination index (CI) analyses [16,17] ( Figure 3b and 3c, respectively). The combination effects of ReoT3D and other chemotherapeutic agents were similarly examined in all 6 cell lines using at least two different dosage ratios, and CI values were obtained for each combination regimen (Table 2). In many instances, the combination of ReoT3D with different chemotherapeutic agents was found to exert moderate to strong synergistic effects at both combination ratios as shown by the CI that were consistently smaller than 1.0 at ED50, ED75 and ED90 (Table 2). Interestingly, with the exception of ReoT3D-paclitaxel combination, antagonistic effects (CI > 1.0) of ReoT3D-chemotherapeutic combinations were typically observed in NSCLC cell lines that exhibited highlevel resistance to the test compounds with IC50 often exceeding the highest test concentration, 100 μM (Table 2). In contrast, the combination of ReoT3D and paclitaxel was consistently synergistic in all the NSCLC cell lines tested, regardless of the level of paclitaxel sensitivity ( Table 2). The relative resistance of NSCLC cell lines to ReoT3D did not appear to negatively influence the outcomes, as the synergistic effect was clearly seen in NCI-H226 for all the combination regimens examined ( Table 2). These data demonstrated that while the oncolytic activity of ReoT3D could be generally potentiated by the chemotherapeutic agents that alone were cytotoxic to the tested cell lines, paclitaxel appeared to exert unique effects on the cell-death process induced by ReoT3D, even in the cells with reduced sensitivity to the compound. Progeny virion production from NSCLC cells treated with the combination of ReoT3D and chemotherapeutic agents Next, we asked whether the synergistic oncolytic effects of ReoT3D-chemotherapeutic combination regimens reflected increases in virally induced lytic cell death, driven by a burst of progeny virion production from the cells. 
To capture early changes in viral production levels, the amounts of infectious virions released into the culture supernatants in the first 24 hours were compared among NSCLC cell lines infected with ReoT3D in the absence or presence of different chemotherapeutic agents. [Displaced figure title: Oncolytic effect of reovirus type 3 Dearing strain (ReoT3D) in non-small cell lung cancer (NSCLC) cells.] Notably, the addition of paclitaxel to ReoT3D (MOI = 20) was found to increase the level of progeny virion production as compared to virus alone in all 4 cell lines tested, NCI-H460, NCI-H23, EKVX and HOP-92 (P = 0.001, 0.03, 0.003 and 0.0001, respectively) (Figure 4a). However, such augmentation of virion production was not unique to paclitaxel nor associated with synergy, as the ReoT3D-vinblastine combination also significantly increased the level of progeny virions released from EKVX and NCI-H322M (P < 0.01) (Figure 4b), where the combination was not associated with a synergistic effect (mean CI > 1.0). In contrast to paclitaxel and vinblastine, the addition of gemcitabine was not associated with an increased reoviral progeny virion production in NSCLC cells (Figure 4b). These data suggested that the synergistic effects of ReoT3D-chemotherapeutic combinations did not directly result from lytic necrosis induced by rapid increases in virion release, but probably stemmed from certain changes in programmed cell death pathways. PARP cleavage in NSCLC cells treated with ReoT3D-chemotherapeutic combination regimens Reovirus type 3 Abney (ReoT3A) has been shown to induce apoptotic cell death in various cancer cell lines, including in 2 NSCLC cell lines, NCI-H157 and A549 [18]. To further investigate the mechanistic basis for the synergistic activity of ReoT3D-chemotherapeutic combinations in NSCLC cells, we examined the proteolytic cleavage of poly (ADP-ribose) polymerase (PARP) in NCI-H460 treated with 20 MOI of ReoT3D alone or in combination with gemcitabine, paclitaxel or vinblastine for 24 to 30 hours. Western blot analysis of cell lysates demonstrated the cleaved 89-kDa PARP fragment in ReoT3D-treated NCI-H460, whereas treatment with gemcitabine (1 μM), paclitaxel (1 μM) or vinblastine (0.1 μM) alone resulted in virtually undetectable PARP cleavage in this cell line (Figure 5). Cells treated with the combination of ReoT3D and paclitaxel showed significantly higher levels of PARP cleavage than ReoT3D alone at 24 and 30 hours post-treatment, accompanied by a gradual decrease in the amount of full-length PARP (Figure 5). Increased PARP cleavage was also observed with the combination of ReoT3D and other chemotherapeutic agents at 24 or 30 hours post-treatment, although to a lesser extent than the ReoT3D-paclitaxel combination (Figure 5). These data suggested that the synergistic cell death induced by the ReoT3D-paclitaxel combination was mediated at least in part by the enhanced apoptotic process in NCI-H460. Activation of caspases in NSCLC cells treated with ReoT3D alone or in combination with paclitaxel To explore whether the enhancement of apoptosis played a role in the synergistic activity of the ReoT3D-paclitaxel combination in other NSCLC cell lines, we examined the level of caspase activity in NCI-H460, NCI-H23, EKVX and NCI-H322M treated with increasing doses of either ReoT3D or paclitaxel alone or both in combination, using the constant ratio combination design [15], the same dosing scheme adopted for the Chou-Talalay CI analyses (see above).
In each of 4 cell lines, treatment with ReoT3D alone led to a significant increase in caspase activity in a dose-dependent manner, although the response appeared less robust in NCI-H322M, where higher ReoT3D MOIs were required to achieve similar magnitudes of caspase activation above baseline than in the 3 other cell lines (Figure 6a). The increases in caspase activity were associated with dose-dependent decreases in cell viability as assessed by intracellular ATP content in ReoT3D-treated cells (Figure 6a). [Displaced Figure 2 legend: Levels of GTP-bound activated Ras protein in 9 NSCLC cell lines analyzed by Western blot, using anti-pan-Ras antibody. Note, high levels of activated Ras were observed with A549/ATCC (K-ras G12S) [13], HOP-62 (K-ras G12C) [13], and NCI-H23 (K-ras G12C) [13], followed by NCI-H460 (K-ras Q61H) [13], EKVX and HOP-92. The levels of activated Ras were significantly lower not only with ReoT3D-resistant NCI-H322M and NCI-H226, but also with the ReoT3D-sensitive cell line NCI-H522. Also see Table 1 for comparison of Ras-activating gene mutation status and ReoT3D sensitivity of each cell line.] Paclitaxel alone had virtually no effects on caspase activation or ATP content during the 24-hr exposure, except in NCI-H23, which showed moderate increases in caspase activity upon treatment with the compound at ≥ 10 nM (Figure 6a). When ReoT3D was combined with paclitaxel, the levels of caspase activation appeared more pronounced as compared to ReoT3D single therapy, as seen by a leftward shift of the dose response curves (Figure 6a). Similar results were obtained from 3 separate experiments for all 4 cell lines. In order to evaluate whether the extent of caspase activation induced by the ReoT3D-paclitaxel combination was significantly different from ReoT3D single treatment in these cell lines, the caspase activity dose-response curves were fitted by non-linear regression using GraphPad Prism (GraphPad Software Inc., San Diego, CA) (Figure 6b). The best-fit values of a variable, LogEC50, obtained by the Prism analysis from 3 independent experiments were then compared between the two treatment regimens, ReoT3D alone vs. ReoT3D-paclitaxel combination, using a paired t-test. The analysis demonstrated that the activation of caspases was significantly enhanced with the ReoT3D-paclitaxel combination therapy as compared to ReoT3D alone in NCI-H460, NCI-H23, EKVX and NCI-H322M (P = 0.004, 0.03, 0.04, and 0.02, respectively). These data suggested that enhanced apoptotic cell death most likely constituted the synergistic cell killing induced by the combination of ReoT3D and paclitaxel in NSCLC cells. [Displaced figure title: Synergistic activity of ReoT3D and paclitaxel combination in NCI-H460 cells.] Dynamic effects of ReoT3D and paclitaxel on cell cycle progression and caspase activation ReoT3D infection has been shown to induce cell cycle arrest at G1/S and G2/M [19,20]. Antimicrotubule agents, including taxanes and vinca alkaloids, activate the spindle-assembly checkpoint and induce mitotic arrest at the metaphase and anaphase transition, which ultimately leads to cell death by apoptosis [21]. A certain proportion of cells exposed to these antimitotic agents may undergo an aberrant mitotic exit without cytokinesis, forming multinucleated cells in interphase [21].
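The statistical comparison described above, best-fit LogEC50 values from three independent experiments compared between ReoT3D alone and the ReoT3D-paclitaxel combination by a paired t-test, is straightforward to reproduce. A hedged SciPy sketch follows; the LogEC50 values are invented placeholders, not the study's fitted values.

# Paired t-test on best-fit LogEC50 values from matched experiments,
# mirroring the comparison described above (ReoT3D alone vs. the
# ReoT3D-paclitaxel combination). The numbers are invented placeholders.

from scipy import stats

# One LogEC50 per independent experiment, paired by experiment.
logec50_reo_alone = [1.85, 1.92, 1.78]
logec50_reo_plus_paclitaxel = [1.41, 1.55, 1.38]

t_stat, p_value = stats.ttest_rel(logec50_reo_alone, logec50_reo_plus_paclitaxel)
print(f"paired t = {t_stat:.2f}, two-tailed P = {p_value:.3f}")
# A lower LogEC50 with the combination indicates caspase activation at lower
# doses, i.e. enhanced apoptosis relative to ReoT3D alone.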
To gain mechanistic insight into the enhanced apoptotic cell death induced by the ReoT3D-paclitaxel combination, we investigated the effects of ReoT3D and paclitaxel on cell cycle and caspase-3 activation in NSCLC cells using flow cytometry (Figure 7b). Paclitaxel treatment invariably increased the proportion of post-G1 cells in these cell lines, although the percentages of cells positive for activated caspase-3 were significantly smaller than in NCI-H23. The ReoT3D-paclitaxel combination consistently led to substantial increases in cells expressing active caspase-3, including in paclitaxel-resistant EKVX and NCI-H322M cells, and these increases appeared more prominent in post-G1 phase (Figure 7b). Indeed, statistical analysis demonstrated that the proportions of active caspase-3-positive cells were significantly different among the treatment groups in the post-G1 cell population (P < 0.005, one-way ANOVA), but not in the population with DNA content of 2N or less (sub-G1 and G1) (P = 0.05). In this subpopulation of post-G1 cells, the proportions of active caspase-3-expressing cells were significantly increased with the ReoT3D-paclitaxel combination treatment as compared to single therapy (P = 0.02 for both between-group comparisons, ReoT3D alone vs. ReoT3D+paclitaxel 0.1 μM, and paclitaxel 0.1 μM alone vs. ReoT3D+paclitaxel 0.1 μM; P = 0.02 and < 0.04 for ReoT3D alone vs. ReoT3D+paclitaxel 1 μM, and paclitaxel 1 μM alone vs. ReoT3D+paclitaxel 1 μM, respectively). These data suggested that the combination of ReoT3D and paclitaxel enhanced apoptotic cell death, which was linked to cell cycle perturbation in NSCLC cells. Ultrastructural analysis of NSCLC cells treated with ReoT3D and paclitaxel To further examine the relationship between cell cycle perturbation and apoptotic cell death induced by ReoT3D and paclitaxel, we examined the morphological changes of NCI-H23 cells treated with either ReoT3D or paclitaxel alone or in combination, using electron microscopy (EM). After overnight (~20 hours) treatment, the cells were directly fixed in situ and processed for the analysis. The EM study did not include dead cells that had detached before fixation. One hundred cells were surveyed within one 60-nm thin section for each treatment group to document significant changes as compared to untreated control (Table 3). When NCI-H23 cells were exposed to 1 μM paclitaxel overnight, an increased number of cells were found to be enlarged and multinucleated (Figure 8b), as verified by the survey (Table 3). [Displaced figure legend fragment: (a) Fold increases (mean ± SD) in infectious progeny virion production.] These multinucleated cells, detected as post-G1 cells with ≥ 4N DNA content by flow cytometry (Figure 7a and 7b), represented the cells that had exited mitosis without cytokinesis. The cells infected with ReoT3D (MOI = 20) contained numerous viral particles, mostly in viral inclusion bodies that appeared globular in shape, and an increased number of mitochondria (Figure 8c and 8d). The combination of ReoT3D and paclitaxel resulted in notable increases in mitotic cells (Figure 8e), and apoptotic cells characterized by condensed and fragmented chromatin as well as cytoplasmic shrinkage and vacuolation (Figure 8f) (Table 3).
Of note, although the number of viral particle-harboring cells that could be detected in the EM specimen was decreased with the ReoT3D-paclitaxel combination compared to ReoT3D alone (Table 3), the level of infectious progeny virion in the corresponding supernatant was higher than that of cells treated with ReoT3D alone (data not shown), indicating that the addition of paclitaxel enhanced reoviral production as discussed earlier. These data from flow cytometric and electron microscopic analyses strongly suggested that treatment with the combination of ReoT3D and paclitaxel caused prolonged mitotic arrest, which triggered accelerated apoptosis, resulting in synergistic cell killing in dually treated NSCLC cells. Discussion Lung cancer is the leading cause of cancer mortality in both men and women in the United States [23] and all cancer deaths worldwide [24]. The most common form of lung cancer is NSCLC that includes squamous cell carcinoma, adenocarcinoma, and large cell carcinoma. Despite the tremendous efforts and progress in lung cancer research, treatment outcomes for non-localized NSCLC remain poor [25]. New treatment strategies are urgently needed to improve survival for advanced NSCLC patients. In the current study, we uncovered a potent oncolytic activity of ReoT3D against a panel of human NSCLC cell lines, in particular, NSCLC cell lines of adenocarcinoma or large cell carcinoma origin. The susceptibility of cancer cells to ReoT3D-mediated cytolysis has been attributed to increased Ras activity [11,12]. However, we did not observe any significant association between the ReoT3Dpermissibility and the presence of Ras-activating gene mutations or activated Ras in human NSCLC cells. The lack of association has also been reported by others in human colon cancers [26]. It is possible that in addition to the activation status of Ras-associated pathways [11,12], there are other molecular determinants of ReoT3D-sensitivity, such as the cell surface density of putative ReoT3D receptors/coreceptors [27][28][29] and intracellular virion uncoating processes [30,31], all of which can affect ReoT3D infection efficiency. The combination effects of herpesvirus or adenovirusbased oncolytic viral vectors and chemotherapeutic agents have previously been evaluated against different human cancers [32][33][34][35][36][37]. Synergistic activity was reported in the majority of these studies. However, combination regimens selected for the previous studies were mostly limited in scope in terms of dose range and the number of chemotherapeutic agents investigated. In the current study, we demonstrated that the oncolytic activity of ReoT3D against NSCLC cells could be significantly potentiated by a number of chemotherapeutic agents used in the treatment of NSCLC, including paclitaxel, cisplatin, gemcitabine and vinblastine. Combination analysis based on the Chou-Talalay's method [16,17] clearly showed significant levels of synergy between ReoT3D and each chemotherapeutic agent tested. Interestingly, we found that the drug sensitivity of each NSCLC cell line was an important determinant for the in vitro synergistic effect of ReoT3Dchemotherapeutic combination regimens, with the exception of ReoT3D-paclitaxel combination. It is conceivable that certain molecular changes conferring drug resistance can antagonize the process of ReoT3D-mediated cell killing. 
Our data, therefore, caution against the use of chemotherapeutic agents combined with ReoT3D for the treatment of NSCLC that have developed resistance to the agents. [Displaced figure title: Western blot analysis of poly (ADP-ribose) polymerase (PARP) cleavage.] In contrast, the level of ReoT3D sensitivity did not appear to compromise the combination effects in NSCLC cells. Rather, the addition of chemotherapeutic agents may help accelerate the ReoT3D-induced cell death process, which is otherwise slowed in NSCLC cells with low susceptibility to ReoT3D infection. The most intriguing finding from our study was the synergistic effect of the ReoT3D-paclitaxel combination consistently observed in all the NSCLC cell lines examined, regardless of the level of sensitivity to the compound. Because previous studies of oncolytic virus-chemotherapeutic combinations, in particular with paclitaxel, did not address the impact of drug resistance on the combination effects, we cannot ascertain whether our finding is unique to reovirus-containing combination therapy. Mammalian reoviruses are known to exploit microtubules for the formation of viral replication complexes (inclusion bodies) [38]. Based on our initial findings that the addition of paclitaxel to ReoT3D significantly increased the level of progeny virion production from all the NSCLC cell lines tested, we speculated that microtubule-stabilizing paclitaxel might have enhanced reoviral replication, resulting in a more efficient and synergistic oncolytic effect. [Displaced Figure 6 legend: (a) Levels of caspase activity and ATP content in NCI-H460, NCI-H23, EKVX and NCI-H322M cells treated with increasing doses of either ReoT3D or paclitaxel alone, or both in combination for 24 hours, using the constant ratio combination design [15]. This design format was employed to evaluate the synergistic activity of ReoT3D-chemotherapeutic combination regimens (see text). Shown are the mean ± SD in relative fluorescence units (RFU) and relative luminescence units (RLU) for caspase activity (closed triangle) and ATP content (open circle), respectively, for ReoT3D alone (green), ReoT3D-paclitaxel combination (red) and paclitaxel alone (blue). The data shown are representative of 3 separate experiments. (b) Dose-response curves of caspase activation induced by treatment with ReoT3D alone or the ReoT3D-paclitaxel combination for 24 hours, fitted by non-linear regression using GraphPad Prism (GraphPad Software, Inc.). Shown as an example are non-linear regression analyses for NCI-H460 and EKVX. See text for statistical analysis on the best-fit values.] However, the increased progeny virion production was not necessarily a unique outcome of the ReoT3D-paclitaxel combination, but was also observed with the ReoT3D-vinblastine combination in vinblastine-resistant NCI-H322M cells, where the combination of ReoT3D and vinblastine was found to be strongly antagonistic. Moreover, the addition of gemcitabine to ReoT3D treatment was not associated with an increased progeny virion production, regardless of the combination effects (synergy or antagonism) attained. These data suggested that the synergistic effect of ReoT3D and chemotherapeutic agents was not the direct result of enhanced lytic cell death, but more likely the manifestation of accelerated programmed cell death, which was triggered before virion assembly and release. Reovirus has been shown to induce apoptotic cell death in a variety of cell types, including cancer cells [18,39].
Indeed, increased caspase activity and apoptotic cleavage of PARP were readily detectable in ReoT3D-treated NSCLC cells within 24 hours in the current study, as has been shown in ReoT3A-exposed cancer cells [18]. The combination treatment with ReoT3D and paclitaxel led to more robust caspase activation than ReoT3D alone, with a significant leftward shift of the dose response curve in both paclitaxel-sensitive and -resistant NSCLC cell lines, suggesting that the enhanced apoptosis most likely constituted the synergistic cell killing by the combination, regardless of the level of paclitaxel sensitivity. To gain more insight into the mechanistic basis of accelerated apoptosis associated with the ReoT3D-paclitaxel combination, we examined the effects of ReoT3D and paclitaxel on caspase-3 activation in relation to cell cycle progression. [Displaced Figure 8 legend: Compared to untreated control (a), an increased number of paclitaxel-treated NCI-H23 cells were found to be enlarged and multinucleated (b), whereas the cells infected with ReoT3D (c and d) contained numerous viral particles, mostly in viral inclusion bodies, which appeared globular in shape (black arrows). In addition, ReoT3D-infected cells appeared to contain an increased number of mitochondria (white arrows). When the cells were exposed to the combination of ReoT3D and paclitaxel, there were increased numbers of mitotic cells (e) as well as apoptotic cells characterized by condensed chromatin, cytoplasmic shrinkage and vacuolation (f). The results from the survey of 100 cells for each treatment group are summarized in Table 3. Scale bars: 2 μm for (a), (b), (c), (e) and (f), and 500 nm for (d).] While taxanes and other antimicrotubule agents are known to activate the spindle checkpoint and induce mitotic arrest [21], ReoT3D infection has been shown to effect cell cycle arrest at G1/S and G2/M [19,20]. Because arrests in cell cycle progression induced by anticancer agents are commonly followed by apoptosis [40], we hypothesized that these two agents, with differing effects on cell cycle progression, may have synergistically activated apoptotic pathways in dually treated NSCLC cells. We found that the proportion of cells expressing activated caspase-3 was significantly increased by the ReoT3D-paclitaxel combination as compared to either ReoT3D or paclitaxel single treatment in each NSCLC cell line tested. Interestingly, the activation of caspase-3 was found to be more prominent in the post-G1 cell population with ≥ 4N DNA content. Escape from mitotic arrest induced by spindle poisons such as taxanes and other antimicrotubule agents is commonly observed in cancer cells with an impaired (weakened) mitotic checkpoint [21,41]. These cells that prematurely exit mitosis without proper cell division form large multinucleated cells with DNA content of 4N or greater, as demonstrated by EM in our study. After such mitotic slippage, some cells may undergo p53-dependent apoptosis, while others survive through senescence or continuing cell cycle (endoreduplication) [42], depending on the functions of p53, MAP kinase pathways, and p21-activated kinase [43,44]. In the current study, we found that NSCLC cells could efficiently escape from mitotic arrest induced by paclitaxel at 0.1-1 μM and survive at least for the first 24 hours of exposure, with the exception of NCI-H23 cells, which were more prone to apoptosis upon paclitaxel exposure than the 3 other NSCLC cell lines examined.
While paclitaxel-treated NCIH460 cells ultimately underwent dramatic cell death after 48 hours, EKVX and NCI-H322M demonstrated considerable levels of resistance to paclitaxel-induced CPE. Nonetheless, the combination of ReoT3D and paclitaxel consistently accelerated the apoptotic process in post-G1 cells, including in paclitaxelresistant EKVX and NCI-H322M cells. This enhanced apoptosis appeared to have resulted from prolonged mitotic arrest, as corroborated by the EM data demonstrating that treatment with the ReoT3D-paclitaxel combination resulted in increased numbers of both mitotically arrested and apoptotic cells while decreasing the number of multinucleated cells as compared to paclitaxel alone. The molecular mechanism of prolonged mitotic arrest induced by the ReoT3D-paclitaxel combination has yet to be elucidated. It is possible that ReoT3D infection may enhance the mitotic checkpoint activity in cancer cells with weakened mitotic checkpoint, for example, by upregulating the expression of mitotic checkpoint proteins (such as Mad1, Mad2, BubR1/Mad3, Bub1 and Bub3) [45], Cdk1 and/or cyclin B [45], or suppressing the anaphase-promoting complex/cyclosome activity [45]. Such ReoT3D-induced alterations in the mitotic regulatory network may reinforce the mitosis-arresting signal of taxanes, leading to prolonged mitotic arrest and apoptosis. Better understanding of the molecular consequences of ReoT3D infection on cell cycle checkpoint function and apoptotic signaling pathways in cancer cells will greatly enhance our ability to design rational combination therapies with proapoptotic oncolytic agent, ReoT3D, and various classes of anticancer agents. Concurrently, it is also of high importance to investigate potential consequences of ReoT3D-chemotherapeutic combinations on normal tissues in order to identify undesirable combination regimens that are associated not only with synergistic oncolytic activity, but also with enhanced toxicity in humans. Conclusion The current study showed that the oncolytic activity of ReoT3D could be most effectively potentiated by taxanes through accelerated apoptosis, regardless of the level of taxane-sensitivity. Recently, preliminary findings from ongoing phase I clinical trials evaluating the safety of ReoT3D-taxane combination in patients with chemotherapy-refractory advanced tumors have demonstrated objective anticancer response in some patients without serious side effects [46,47], corroborating that ReoT3D-taxane combination regimens can achieve a durable oncolytic effect in clinical settings and thus should be further explored as a novel treatment modality for NSCLC. Cell lines, virus and chemotherapeutic agents Human NSCLC cell lines included in the NCI-60 cell line panel [48]: Determination of virus-and drug-induced cell death ReoT3D-or drug-induced CPE was assessed by the XTT assay as described previously [49]. ED50 for ReoT3D was defined as the initial virus dose (MOI expressed in pfu/ cell) that resulted in 50% cell viability at 48 hours postinoculation as compared to untreated controls. Sensitivity of NSCLC cell lines to chemotherapeutic agents was expressed as the drug concentration that inhibited cell growth by 50% compared to untreated controls (IC50). Both ED50 and IC50 were calculated by using the software GraphPad Prism (GraphPad Software, Inc.). 
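ED50 and IC50 estimates of the kind computed here in GraphPad Prism come from fitting a sigmoidal dose-response (four-parameter logistic) curve to the viability data. A minimal SciPy sketch of that fit follows; the dose grid and viability values are synthetic placeholders, not data from this study.

# Fit a four-parameter logistic dose-response curve to viability data and
# report the midpoint (ED50/IC50). Doses and viabilities below are synthetic.

import numpy as np
from scipy.optimize import curve_fit

def four_pl(log_dose, bottom, top, log_ec50, hill):
    """Sigmoidal dose-response with variable slope (as fitted by Prism)."""
    return bottom + (top - bottom) / (1.0 + 10 ** ((log_ec50 - log_dose) * hill))

# Synthetic example: log10 dose (e.g. log10 pfu/cell) vs. % viability.
log_dose = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
viability = np.array([98.0, 95.0, 85.0, 60.0, 30.0, 12.0, 6.0])

params, _ = curve_fit(
    four_pl, log_dose, viability,
    p0=[5.0, 100.0, 1.5, -1.0],      # bottom, top, logEC50, Hill slope
    maxfev=10_000,
)
bottom, top, log_ec50, hill = params
print(f"log10 ED50 = {log_ec50:.2f}  (ED50 = {10 ** log_ec50:.1f} in dose units)")
print(f"Hill slope = {hill:.2f}, plateaus = {bottom:.1f}% / {top:.1f}%")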
Cell viability was also assessed by intracellular ATP content (ATPLite™, PerkinElmer, Waltham, MA), when the level of caspases activity was evaluated in selected NSCLC cell lines treated with ReoT3D, paclitaxel or both agents in combination (see below). Evaluation of in vitro combination effects by the Chou-Talalay method The combined effects of ReoT3D and each chemotherapeutic agent on cell survival were analyzed using the software CalcuSyn (Biosoft, Ferguson, MO), which applies the median-effect equation of Chou [16] and the CI equation of Chou and Talalay [17]. NCI-H460, HOP-92, NCI-H23, EKVX, NCI-H226 and NCI-H322M plated in 96-well microplates as above were exposed in triplicate to a serial dilution of each agent or both in combination using the constant ratio combination design [15] for 48 hours, followed by the XTT assay for cell viability determination. Calculated CIs were used to ascertain the presence of strong synergism (CI < 0.3), moderate synergism (0.3 < CI < 0.9), additive effect (CI = 1), antagonism (CI > 1) and strong antagonism (CI > 3.3) [15] between ReoT3D and chemotherapeutic agents. Reovirus plaque assay Levels of infectious progeny virions in culture supernatants produced from ReoT3D-infected NSCLC cells were evaluated by plaque assay as previously described [50]. Infectious reovirus titers were expressed as pfu/mL of the original sample. Determination of activated Ras levels in NSCLC cells Basal levels of activated Ras in 9 NSCLC cell lines were determined by Ras activation assay Biochem Kit (Cytoskeleton, Inc., Denver, CO) according to the manufacturer's instructions. Briefly, cell lysates were prepared from NSCLC cells using the kit lysis buffer, and aliquots of 2 mg protein were incubated with 20 μL of Raf1-RBD beads at 4°C for 1 hr, followed by centrifugation to pellet the Raf1-RBD beads. The pelleted beads were washed with wash buffer twice, and resuspended in 10 μL Laemmli sample buffer. The samples were separated by 12% Trisglycine SDS-PAGE, and subjected to Western blot analysis with anti-Ras antibody (Cell Signaling Technology, Inc., Danvers, MA). Determination of caspase activity and ATP content by microplate-based assays The levels of caspase activity and ATP content in NSCLC cells treated with ReoT3D alone or in combination with paclitaxel for 24 hours were evaluated by microplatebased assays, using a fluorometric pan-caspase assay (Homogeneous Caspases Assay; Roche Applied Science, Indianapolis, IN) and luminescence ATP detection assay (ATPLite™; PerkinElmer), respectively. Flow cytometric analysis of DNA content and caspase-3 activation Adherent and non-adherent NSCLC cells harvested as above (see Western blot analysis) were fixed and permeabilized with BD Cytofix/Cytoperm™ solution (BD Bio-sciences, San Jose, CA) and stained with FITC-conjugated anti-active caspase-3 antibody (BD Biosciences), followed by incubation with PI/RNase staining buffer (BD Biosciences) for DNA content determination. The proportion of cells expressing active caspase-3 and DNA content were analyzed using a FACScan™ flow cytometer (BD Biosciences) as previously described [51]. Transmission electron microscopy NSCLC cells cultured in 6-well plates were processed in situ for electron microscopic analysis as previously described [52]. Thin sections (60 nm) were examined with a Hitachi H7000 transmission electron microscope. Statistical analysis Results are reported as mean ± SD unless otherwise indicated. 
Statistical significance of differences was determined by one-way ANOVA or Student's t-test as appropriate. Differences were considered statistically significant when P < 0.05 (two-tailed).
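For reference, the core arithmetic behind the Chou-Talalay combination-index analysis used above can be sketched as follows. Each single-agent dose-effect data set is fitted to Chou's median-effect equation, and the CI at a chosen effect level is the sum of dose ratios. All numbers are invented placeholders, and this is a simplified stand-in for the CalcuSyn analysis, not a reproduction of it.

```python
import numpy as np

def fit_median_effect(doses, fa):
    """Fit Chou's median-effect equation: log(fa/fu) = m*log(D) - m*log(Dm)."""
    x = np.log10(doses)
    y = np.log10(fa / (1.0 - fa))
    m, b = np.polyfit(x, y, 1)          # slope m, intercept b = -m*log10(Dm)
    dm = 10 ** (-b / m)                 # median-effect dose Dm
    return m, dm

def dose_for_effect(m, dm, fa):
    """Single-agent dose needed to reach effect level fa."""
    return dm * (fa / (1.0 - fa)) ** (1.0 / m)

# Hypothetical single-agent data: fraction affected (fa) at each dose
virus_moi = np.array([0.1, 0.3, 1.0, 3.0, 10.0])
virus_fa  = np.array([0.10, 0.25, 0.50, 0.72, 0.90])
drug_nm   = np.array([1.0, 3.0, 10.0, 30.0, 100.0])
drug_fa   = np.array([0.08, 0.20, 0.45, 0.70, 0.88])

m1, dm1 = fit_median_effect(virus_moi, virus_fa)
m2, dm2 = fit_median_effect(drug_nm, drug_fa)

# Suppose a constant-ratio combination of 1 MOI + 10 nM produced fa = 0.75
d1, d2, fa_combo = 1.0, 10.0, 0.75
ci = d1 / dose_for_effect(m1, dm1, fa_combo) + d2 / dose_for_effect(m2, dm2, fa_combo)
print(f"CI at fa={fa_combo}: {ci:.2f}  (<1 synergy, =1 additive, >1 antagonism)")
```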
Implementation of Universal Control on a Decoherence-Free Qubit We demonstrate storage and manipulation of one qubit encoded into a decoherence-free subspace (DFS) of two nuclear spins using liquid state nuclear magnetic resonance (NMR) techniques. The DFS is spanned by states that are unaffected by arbitrary collective phase noise. Encoding and decoding procedures reversibly map an arbitrary qubit state from a single data spin to the DFS and back. The implementation demonstrates the robustness of the DFS memory against engineered dephasing with arbitrary strength as well as a substantial increase in the amount of quantum information retained, relative to an un-encoded qubit, under both engineered and natural noise processes. In addition, a universal set of logical manipulations over the encoded qubit is also realized. Although intrinsic limitations prevent maintaining full noise tolerance during quantum gates, we show how the use of dynamical control methods at the encoded level can ensure that computation is protected with finite distance. We demonstrate noise-tolerant control over a DFS qubit in the presence of engineered phase noise significantly stronger than observed from natural noise sources. I. INTRODUCTION The ability to effectively protect the coherence properties of a quantum information processing (QIP) device against the detrimental effects of environmental interactions is a prerequisite for realizing any potential gain of quantum computation and quantum information theory [1]. Approaches based on noiseless (or "decoherence-free" [2]) coding offer a promising venue for meeting the challenge of noise-tolerant QIP. The theory of decoherence-free subspaces (DFSs) has been the focus of intensive development particularly by Zanardi, Lidar, and coworkers [3][4][5][6][7][8][9]. Recently, the DFS idea has been incorporated within the more general approach based on noiseless subsystems (NSs) [10][11][12][13], which recover DFSs and their benefits as special instances. The primary motivation behind "passive" noise control strategies relying on either DFSs or NSs is to take advantage of specific symmetries occurring in the noise process to single out subspaces or subsystems of the physical information processor that are inaccessible to noise. Once information is appropriately encoded into such noiseless structures, robust storage is ensured without requiring further active correction -as long as the underlying symmetry dominates. These features, together with their stability against symmetry-perturbing errors [5,6,8] and the consequent potential for concatenation with quantum error-correcting codes [7], make noiseless codes natural candidates as robust quantum memories. To date, experimental implementations include studies of DF states in quantum optical systems [14] , and one-bit quantum memories based on both a DFS of two trapped ions [15] and a NS of three nuclear spins [16]. Achieving robust quantum information storage represents only a first, though indispensable, step toward the goal of reliable QIP. An important advance in this direction came from the identification of universality schemes, which in principle enable DFSs (or NSs) to support universal encoded quantum computation in a way that remains fully protected against noise. Both existential [11,17] and constructive results [18] have been established. 
While the latter are especially appealing for a class of proposed quantum computing architectures governed by Heisenberg exchange interactions [19], implementations of these schemes remain difficult due to the stringent symmetry and tunability requirements on the control Hamiltonians. Here, we take a first experimental step towards encoded quantum computation by demonstrating universal control over a one-bit DF quantum register of two nuclear spins. A novel key ingredient we use to implement encoded quantum gates is the combination of robust control design with the use of dynamical decoupling methods [20][21][22][23] directly on encoded degrees of freedom. Our results suggest that this may serve as a useful strategy for practically coping with the constraints required for DF computation. The paper is organized as follows. In Sect. II we review the collective decoherence model that is relevant to the work, along with the prescriptions from the DFS theory for both protected storage and manipulation of quantum information in a two-qubit system. In Sect. III, we outline our proposed approach to noise-tolerant control of DFS encoded qubits based on concatenating encoded decoupling methods with robust control design. The general principles are developed starting from the physical NMR setting relevant to the experiment. Sect. IV contains an account of the control techniques used in the experiment and the reliability measures adopted to quantify the accuracy of the implementation. In particular, a notion of gate entanglement fidelity, generalizing Schumacher's definition to allow for a desired unitary evolution on the quantum data, is proposed and related to other fidelity metrics relevant to QIP. The experimental results demonstrating protected storage and universal protected quantum logic are presented and discussed in Sect. V and VI, respectively. A. Collective decoherence For a system S composed of n qubits, a purely decohering, collective interaction arises when the qubits couple symmetrically to a single environment E and no exchange of energy takes place between S and E. Physically, this model accounts for relaxation due to fully correlated fluctuations of the energy levels of each qubit - a situation that is approached if the qubits are close enough relative to the correlation length of the environmental coupling and the latter commutes with the natural Hamiltonian. Although not always applicable, this decoherence model has practical significance for QIP. In particular, collective dephasing was shown to play a major role in quantum devices based on trapped ions [15]. In NMR systems, dephasing caused by fully correlated fluctuations of the local magnetic field provides the dominant relaxation mechanism of quantum coherences between identical species in sufficiently small, rigid molecules [24]. If $H = H_S \otimes \mathbb{1}_E + \mathbb{1}_S \otimes H_E + H_{SE}$ represents the Hamiltonian for the joint system plus environment, collective phase damping corresponds to an interaction Hamiltonian of the form $H_{SE} = J_z \otimes B_E$ (1), where $J_z = \sum_j \sigma_z^j$ is the collective spin operator along z and $B_E$ is an operator acting on the environment. Under this coupling, the decay rates of the coherences grow with the square of the coherence order (Eqs. (2)), leading to the full suppression of single and double quantum coherences. However, no matter how strong, collective decoherence produces no decay of the zero-quantum subspace spanned by {|01⟩, |10⟩}. Zero-quantum coherences and their properties have long been appreciated in NMR, with important applications in both high-resolution spectroscopy in inhomogeneous magnetic fields and contrast enhancement in magnetic imaging [27,28].
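A minimal numerical illustration of this point (not part of the original paper): averaging a two-spin state over random collective phase kicks generated by J_z suppresses its single- and double-quantum coherences by factors that grow with the square of the coherence order, while the zero-quantum coherence between |01⟩ and |10⟩ is left untouched.

```python
import numpy as np

# Collective spin operator J_z = sigma_z^1 + sigma_z^2 in the basis |00>, |01>, |10>, |11>
jz = np.diag([2.0, 0.0, 0.0, -2.0])

# Start from an equal superposition so every coherence order is populated
psi = np.ones(4) / 2.0
rho0 = np.outer(psi, psi.conj())

# Average over random collective phase kicks exp(-i*phi*Jz/2), phi ~ N(0, sigma^2)
rng = np.random.default_rng(0)
sigma = 1.5
rho = np.zeros((4, 4), dtype=complex)
for _ in range(20000):
    phi = rng.normal(0.0, sigma)
    u = np.diag(np.exp(-1j * phi * np.diag(jz) / 2.0))
    rho += u @ rho0 @ u.conj().T
rho /= 20000

# A coherence of order m picks up exp(-i*m*phi), so it decays as exp(-m^2 * sigma^2 / 2)
print("zero-quantum   |01><10| :", abs(rho[1, 2]))   # ~0.25, untouched (m = 0)
print("single-quantum |00><01| :", abs(rho[0, 1]))   # suppressed by exp(-sigma^2/2)
print("double-quantum |00><11| :", abs(rho[0, 3]))   # suppressed by exp(-2*sigma^2)
print("predicted factors       :", np.exp(-sigma**2 / 2) / 4, np.exp(-2 * sigma**2) / 4)
```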
Within NMR QIP, zero-quantum coherences are revisited in view of their natural potential to encode protected quantum information. B. Decoherence-free encodings In the DFS approach, the first step is ensuring that the quantum data to be protected is encoded into a DFS. Mathematically, a DFS is a subspace of the system's state space spanned by a set of degenerate eigenvectors of all the error generators appearing in H_SE. For global dephasing on n qubits as in (1), let H^(j_z) be the eigenspace corresponding to the eigenvalue j_z of J_z, j_z = n, n − 2, ..., −n + 2, −n. Then states in H^(j_z) remain invariant under the environmental coupling, which also implies a degenerate action of each error operator on the subspace, for some coefficients f_a = f_a(j_z) fulfilling $\sum_a |f_a|^2 = 1$. If the system is initialized in a state ρ_in = |ψ_L⟩⟨ψ_L| ∈ H^(j_z), then the evolution remains unitary within each H^(j_z). Thus, each H^(j_z) is a DFS under collective decoherence. The amount of quantum information that a given DFS is able to protect is determined by its dimension n_jz - which is simply the degeneracy of the corresponding j_z-eigenvalue. In particular, for n even, the largest DFS is supported by the zero-quantum subspace H^(0), with $n_0 = n!/[(n/2)!]^2$. For n = 2 spins, H^(0) is doubly degenerate, hence it provides the smallest DFS capable of protecting one qubit against collective decoherence. The robustness of this two-spin zero-quantum subspace under A_z has been explicitly derived above. In terms of basis states, our DFS qubit is defined by the encoding $|\psi_L\rangle = c_0|01\rangle + c_1|10\rangle$ (5), for arbitrary complex coefficients c_0, c_1. It is worth stressing that the existence of a DFS is tied, at the physical level, to the occurrence of symmetries in the noise process. The way the latter reflect into the state space of the system both determines the possibility of invariant states as in (3)-(4) and the associated degeneracies. For any interaction which is diagonal in the computational basis, the underlying "axial" symmetry ensures that the individual σ^j_z are conserved quantum numbers. However, it is only under the additional permutation symmetry characterizing collective interactions that the degenerate conserved quantum number J_z arises - signaling the presence of a protected structure. The emergence of degenerate degrees of freedom preserved under the noise remains the key ingredient for more general DFSs [3,5,18] and, in still more elaborated forms, NSs as well [10,12,16]. Taking advantage of the existing symmetries translates into major gains toward achieving noise-protected QIP. For instance, the simple 2-bit encoding (5) preserves a qubit against collective dephasing of arbitrary strength, to be contrasted with independent phase errors - where protection can be achieved only with finite distance using a quantum error-correcting code. C. Decoherence-free manipulations Once protected storage of quantum information is obtained, the next step is to ensure that universal quantum gates are implemented without ever leaving the DFS. Because the states spanning a DFS are characterized by precise symmetry properties, symmetries are likewise crucial in determining the control operations to be applied for effecting DF quantum logic. Clearly, the allowed gates must map DFS states to DFS states. However, as any physical gate takes a finite time to execute, invoking unitary manipulations that preserve the DFS at the conclusion of the gate is not sufficient.
To guarantee that the system remains within the DFS during the entire gating time requires the stronger condition that gates are generated by Hamiltonians that themselves respect the symmetry. Thus, the general problem requires identifying a universal set of Hamiltonians which satisfy the correct symmetry constraints and involve at most two-body interactions [18]. In our case, because a single DFS qubit is involved, this universal set of control Hamiltonians is composed of two observables generating an encoded u(2) Lie algebra, exponentiation then giving the whole group U(2) of encoded one-qubit transformations. A necessary and sufficient condition for an Hamiltonian A to preserve the zero-quantum DFS can be obtained by demanding that 00|A(c 0 |01 + c 1 |10 ) = 0, 11|A(c 0 |01 + c 1 |10 ) = 0 for arbitrary c 0 , c 1 . This leads to the following matrix form for A with respect to the computational basis: for real coefficients a j , j = 1, . . . , 4, and possibly complex b, c. In particular, this includes all the hermitian operators belonging to the so-called commutant of the error algebra [10,18], A ′ z = {X : [X, J z ] = 0}, which collects all operators commuting with the noise. Because every operator in A ′ z can be represented as a linear combination of the identity, the one-bit operators σ j z , and the two-bit couplings σ 1 z σ 2 z , σ 1 · σ 2 = σ 1 x σ 2 x + σ 1 y σ 2 y + σ 1 z σ 2 z (Heisenberg coupling), Hamiltonians in A ′ z have c = 0 -implying that all the DFSs are in fact preserved. An additional constraint (so-called independence [18]) can be imposed on Hamiltonians in A ′ z by also requiring a 1 = a 4 = 0, in which case A has zero entries outside the selected zero-quantum DFS. With respect to the DF encoding (5), a choice of operators that act as independent, encoded σ z , σ x on H L is given by where the notation = L means equality upon restriction to H L and σ L y The independence property is useful to allow parallel encoded manipulations on different DFSs. However, when only a single DFS is in use, requiring independence or even preservation of all DFSs has no advantages, and allowing for the most general Hamiltonian as in (6) may in fact increase the options available for implementation. For instance, an alternative choice for encoded z and x observables is and In principle, based on standard universality results [1], it is possible to generate any encoded unitary transformation by appropriately alternating evolutions under two Hamiltonians with the correct symmetry. For instance, this is certainly true if one can turn on/off a pair of Hamiltonians in A ′ z such as, say, σ 1 z and the exchange interaction E 12 = (σ 1 · σ 2 + 1 1)/2 -for i[E 12 , σ 1 z ]/2 gives an encoded σ L y and then −i[E 12 , σ L y ]/2 gives σ L z as in (7). Once encoded single-qubit manipulations are available, then universal encoded computation over DFS qubits requires the additional ability of implementing a non-trivial encoded gate between two logical qubits. For instance, a controlled-rotation gate could be constructed from a logical phase coupling of the form σ Lj z σ L j ′ z , which is supported by the natural couplings of many NMR and NMR-like Hamiltonians. We focus here on the first step of this program i.e., to obtain reliable single-qubit DF manipulations compatible with the constraints that QIP implementations unavoidably face in terms of both the form and the tunability of the available control Hamiltonians. A. 
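As a quick numerical check of the preservation condition stated above, the sketch below verifies that the encoded observables quoted later in Sect. VI, σ_z^L = −σ_z² and σ_x^L = (σ_x¹σ_x² + σ_y¹σ_y²)/2, annihilate the ⟨00| and ⟨11| components of any encoded state and act as ordinary Pauli operators on the code space. This is a verification aid, not part of the original analysis.

```python
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
kron = np.kron

# Encoded operators quoted in the experimental section (Sect. VI)
szL = -kron(I2, sz)                                  # sigma_z^L = -sigma_z on spin 2
sxL = (kron(sx, sx) + kron(sy, sy)) / 2.0            # sigma_x^L = flip-flop coupling

# Computational basis |00>, |01>, |10>, |11>; logical basis |0_L>=|01>, |1_L>=|10>
e00, e01, e10, e11 = np.eye(4, dtype=complex)

def dfs_preserving(A, c0=0.3 + 0.4j, c1=0.8 - 0.1j):
    """Check <00|A|psi_L> = <11|A|psi_L> = 0 for a generic encoded state."""
    psi = c0 * e01 + c1 * e10
    return np.allclose([e00 @ A @ psi, e11 @ A @ psi], 0.0)

print("sigma_z^L preserves the DFS:", dfs_preserving(szL))
print("sigma_x^L preserves the DFS:", dfs_preserving(sxL))

# Restricted to the code space they act as the ordinary Pauli z and x matrices
P = np.vstack([e01, e10])            # projects 4-dim vectors onto (|0_L>, |1_L>) amplitudes
print("sigma_z^L on code space:\n", np.real(P @ szL @ P.conj().T))
print("sigma_x^L on code space:\n", np.real(P @ sxL @ P.conj().T))
```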
Two-spin NMR QIP as a case study The total system Hamiltonian we consider, H S , is the sum of a time-independent internal Hamiltonian, H int , and a timedependent external Hamiltonian, H ext . The internal Hamiltonian, composed of spin-field and spin-spin interactions, is [21] where ν 1 , ν 2 , and J are the chemical shifts and the coupling constant, respectively. The external Hamiltonian, describing the interaction between the spins and an applied RF field has the form [21,29] H ext = k=1,2 the transmitter's angular frequency ω RF , the initial phase φ, and the power ω being tunable over an appropriate parameter range. The implementation of an arbitrary unitary gate is accomplished by modulating H int via an external control sequence. While sequences can be optimized numerically [29], average Hamiltonian theory (AHT) [30] provides a systematic method for describing any unitary propagator U (T ) resulting from the evolution under the time-varying Hamiltonian H S = H int + H ext in terms of an effective Hamiltonian H applied over the same time interval: where T is, as usual, the Dyson time-ordering symbol. In particular, AHT underlies the design of coherent refocusing and decoupling methods, which are able to effectively turn on/off selected contributions to the average propagator over some time interval. These methods have been recently revisited within the QIP context in [22,23,11]. We recall that the basic idea is to subject the system to a cyclic train of pulses P = {P j } M j=1 , Π M j=1 P j = 1 1 which, in the simplest setting, are assumed to be infinitely short and equally spaced by ∆t > 0. The net controlled evolution over the period T = M ∆t can then be expressed as where the "toggling-frame" Hamiltonians H k are determined as [24]. In the limit of sufficiently rapid control, H simply approaches [24,22] By appropriately designing the pulse sequence P, undesired contributions to H can be effectively turned off. For instance, a train P 1 of equally spaced, simultaneous π x pulses on both spins (π 1 x π 2 x pulses) averages out any phase evolution due to the σ j z terms in (9). Similarly, the σ 1 x σ 2 x + σ 1 y σ 2 y coupling can also be averaged to zero by a pulse sequence P 2 consisting of repeated, equally spaced π 1 x π 2 y pulses. B. Universal gates via encoded dynamical control While AHT represents a powerful tool for designing logic gates over physical, un-encoded degrees of freedom, a direct application on DF encoded qubits does not automatically result in DF manipulations. Even though ensuring that H has the general form (6) (for instance, H ∈ A ′ z ) leaves the system in a DF state, there is no guarantee that the control path has remained within the DFS at all intermediate times -possibly re-introducing exposure to noise. We begin by noting that H int can be rewritten as in terms, for instance, of the encoded observables (7) -which makes it explicit that H int ∈ A ′ z . Since both J z and σ 1 z σ 2 z are constant on the code subspace H L , they can be ignored and H int further simplifies to Thus, the natural evolution implements a non-trivial logical operation within H L . The challenge is to extract the required controlled operations by remaining, ideally, always within the DFS. The situation is simpler in the limit where, as above, control pulses are treated as instantaneous. Because H int ∈ A ′ z , one can ensure that each toggling-frame Hamiltonian H k also remains in A ′ z by choosing pulses such that either [U k , J z ] = 0 or {U k , J z } = 0. 
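The refocusing claim for the pulse train P1 can be checked with a small zeroth-order average Hamiltonian computation; in the sketch below the chemical shifts and coupling constant are arbitrary placeholder values and the pulses are idealized as instantaneous.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)
kron = np.kron

# A generic two-spin internal Hamiltonian: chemical shifts + scalar coupling (arbitrary units)
nu1, nu2, J = 1.0, 0.6, 0.1
H_int = (np.pi * nu1 * kron(sz, I2) + np.pi * nu2 * kron(I2, sz)
         + (np.pi * J / 2) * (kron(sx, sx) + kron(sy, sy) + kron(sz, sz)))

def zeroth_order_average(H, pulses):
    """Zeroth-order average Hamiltonian over one cycle of ideal, instantaneous pulses."""
    U = np.eye(4, dtype=complex)
    frames = [U.copy()]
    for P in pulses[:-1]:            # toggling frames between the pulses of the cycle
        U = P @ U
        frames.append(U.copy())
    return sum(Uk.conj().T @ H @ Uk for Uk in frames) / len(frames)

pi_x = -1j * sx                      # ideal pi rotation about x (global phase irrelevant)
P1 = [kron(pi_x, pi_x)] * 2          # two simultaneous pi_x pi_x pulses close the cycle
Hbar = zeroth_order_average(H_int, P1)

# The sigma_z^j (chemical-shift) terms are refocused, the bilinear couplings survive
shift_part = np.pi * nu1 * kron(sz, I2) + np.pi * nu2 * kron(I2, sz)
print("shift terms removed:", np.allclose(Hbar, H_int - shift_part))
```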
The latter condition is satisfied, for instance, by the above-mentioned pulse sequence P 2 , which thus implements a net encoded identity in this idealized scenario. Of course, the duration of real-life pulses is necessarily finite, and one needs to pay additional care to what happens during the pulse length [31]. In principle, DF logical operations can still be effected if sufficient control over the parameters ∆ν, J in (12) is available. The general idea is to concatenate AHT with the underlying DF encoding i.e., to implement refocusing directly with encoded rotations. Let us look at our DFS qubit (a more expanded account will be provided elsewhere; see also [32] for related work), and imagine that encoded π L pulses are available as π L x,y = exp(−iπσ L x,y /2). Then a sequence of equally spaced encoded π L x pulses (in this case a Carr-Purcell sequence [20]) can be used to refocus the encoded phase evolution and only leave the encoded σ L x coupling active in (12). This can be thought of as a logical or encoded "spin echo" [33]. A similar procedure holds for extracting the encoded σ L z Hamiltonian if encoded π L z pulses are employed instead. Thus, the same schemes that are effective at turning on/off unwanted terms in the physical qubit evolution are effective at turning on/off unwanted terms in the encoded qubit evolution, provided ordinary control pulses are replaced with encoded ones. More generally, a group-theoretical framework extending the un-encoded approach of [22] to encoded dynamical decoupling can be constructed. For our system, this implies that the ability to apply a single Hamiltonian with the correct symmetry (e.g., σ L x ) suffices, in principle, for gaining universal control. Unfortunately, such control is not directly available in practice, as the evolutions induced by the external RF Hamiltonian (10) do not resemble, in general, evolutions under logical Hamiltonians. The approach we take results from the following compromise: we mimic the implementation of a fully encoded refocusing scheme by using available pulses whose propagator (not Hamiltonian) equals the required encoded rotation; we then compensate for the residual exposure to noise by control design. If pulse durations are optimized, then the system will reside in the DFS for a dominant portion of the computational time. In addition, pulse design can add robustness against noise [34], reducing its impact while the system resides outside the protected space. While in the limit of weak noise with arbitrarily long correlation times these techniques provide robustness, for realistic noise models actual improvements will depend heavily on the noise parameters. An explicit implementation will be reported. As already noted, these ideas open the way for manipulating more than a single encoded qubit. If, for instance, two DFS qubits are supported by the zero-quantum subspaces of, say, two proton and two carbon spins, the overall internal Hamiltonian will be expressible, to high accuracy and for a wide class of spin-spin coupling distributions, in terms of both single-qubit encoded observables σ L1,2 x , σ L1,2 z and the two-qubit encoded interaction σ L1 z σ L2 z . Thus, the ability of separately controlling each encoded qubit via encoded refocusing, combined with the presence of the logical phase coupling, implies the potential of effecting universal quantum logic with reduced error rate. IV. 
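The encoded "spin echo" can likewise be illustrated numerically. With ideal, instantaneous encoded π_x^L pulses, the encoded phase (σ_z^L) evolution accumulated between the pulses cancels exactly on the code space; when the σ_x^L coupling is also present it survives the sequence to lowest order. The sketch below is a toy check using the encoded observables quoted in Sect. VI, not the composite-pulse implementation used in the experiment.

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2, kron = np.eye(2), np.kron

# Encoded observables (the choice quoted in Sect. VI)
szL = -kron(I2, sz)
sxL = (kron(sx, sx) + kron(sy, sy)) / 2.0

# Encoded phase evolution (the szL part of the internal Hamiltonian) and an encoded pi_x pulse
a, t = 2.0, 0.7
U_free = expm(-1j * a * szL * t)
piL_x = expm(-1j * np.pi * sxL / 2.0)

# Two-pulse encoded "spin echo": free evolution, piL_x, free evolution, piL_x
U_echo = piL_x @ U_free @ piL_x @ U_free

# On an arbitrary code-space state the net effect is the identity (up to a global phase)
psi = np.zeros(4, dtype=complex)
psi[1], psi[2] = 0.6, 0.8            # c0|01> + c1|10>
overlap = abs(np.vdot(psi, U_echo @ psi))
print("encoded phase evolution refocused:", np.isclose(overlap, 1.0))
```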
EXPERIMENTAL OUTLINE Liquid state NMR QIP techniques have been extensively discussed in the literature [21], and only the salient points are recalled here. Because the system exist in highly mixed, separable states, NMR QIP relies on "pseudo-pure" (p.p.) states whose traceless, or deviation, component is proportional to that of the corresponding pure state. The identity component of the density matrix is unobservable and is treated as a constant under the assumption of unital dynamics (i.e., dynamics that preserves the completely mixed state). In this case, the evolution of a p.p. state is equivalent to the corresponding pure-state evolution. Initialization of the two-spin system into an intended p.p. state was accomplished using gradient-pulse techniques as described in [21,35]. Throughout the experimental implementation, all deviation components were explicitly verified by state tomography [36]. A fixed amount of identity component that optimizes the fidelity between the experimentally determined and a desired reference p.p. state, |00 00|, was added to each reconstructed deviation density matrix. For each experiment, we prepared one of the p.p. input states ̺ p.p. in = |ψ in 0 ψ in 0|, with |ψ in ψ in | providing a complete set of one-bit density matrices so as to allow quantum process tomography reconstruction [37,16]. Our physical system is an ensemble of Dibromothiophene molecules (Fig. 1) in a solution of CDCl 3 . Measured values for the relevant parameters are listed in the caption of Fig. 1. The experimental procedure begins with the data qubit 1 containing the state |ψ in to be protected, |ψ in = c 0 |0 + c 1 |1 , and the ancilla qubit 2 initialized to |0 . Encoding of the initial input state to the code space H L is accomplished by the unitary transformation where U enc is a controlled σ x rotation on bit 2 conditioned on bit 1 having the state |0 . Next, an encoded operation is performed on the system in the presence of noise. The information is retrieved by applying a decoding transformation U dec = U † enc , producing a general output state of the form for a target single-qubit unitary transformation U target on the data spin. U target = 1 1 corresponds to storage of the quantum data under either engineered collective dephasing or natural noise, while U target is a non-trivial desired rotation for demonstrating universal quantum logic. All experiments were carried out on a 400 MHz Bruker AVANCE spectrometer. A. Unitary and non-unitary control The un-encoded gate operations involved in the encoding and decoding networks were mapped into ideal pulse sequences using standard methods [21]. Pulses were then implemented by modulating H int with external RF fields as mentioned in Sect. IIIa [21]. Non-unitary evolution, either for p.p. state preparation or for emulating collective decoherence, were implemented using pulsed magnetic field gradients. Magnetic field gradients take advantage of the spatial extent of the sample to induce an incoherent evolution. Applying a gradient ∇ z B = ∂B z /∂z along the axis of the static field causes a linear variation of the Larmor precession frequency given by the spatially dependent Hamiltonian H grad = γzJ z ∇ z B/2 , γ being the gyro-magnetic ratio of the given nuclear species. This causes each quantum coherence ρ kℓ (k = ℓ) to be multiplied by a spatially dependent phase factor, exp(−iγzm kℓ ∇B z δ/2), where m kℓ is the coherence order (defined earlier) and δ the duration of the gradient pulse. 
In other words, each part of the sample experiences a different coherent phase error. Tracing over the spatial degrees of freedom, as is done in an ensemble measurement, causes this incoherent evolution to become irreversible when considering the spin degrees of freedom alone. While the effects of this evolution could be immediately reversed, random molecular diffusion causes an irreversible spatial displacement that increases with both time and the molecular diffusion coefficient. Applying an inverse gradient after a time delay ∆ (diffusion time) thus results in an exponential decay of non-zero coherences, exp(−∆/τ ), with an effective noise strength given by [35] D being the diffusion coefficient of the sample. Note the scaling of this decoherence rate with the square of the coherence order, as anticipated in the derivation of Eqs. (2). Using these gradient-diffusion techniques, variable strength noise can be obtained by either changing the gradient strength or the diffusion time. It should be noted that, in both the incoherent and decoherent case, the induced phase error is collective to an extremely good extent, deviations from a collective action being determined by the product of the gradient strength (approximately 60 Gauss/cm) and the spatial displacement between the two hydrogen spins (on the order of angstroms). B. Reliability measures for control As a reliability measure quantifying the accuracy of implementing a target unitary transformation U on a system S we invoke a variant of the entanglement fidelity F e as introduced by Schumacher [38]. In Schumacher's notation, let R be an auxiliary "reference" system, and let the initial entangled state |Ψ RS of the pair RS be subjected to the overall evolution 1 1 R ⊗ E S . Starting from ρ RS = |Ψ RS Ψ RS |, this produces a final state ρ RS ′ = (1 1 R ⊗ E S )(|Ψ RS Ψ RS |). Then the entanglement fidelity of the process E S relative to the initial state of S alone, ρ S = Tr R {|Ψ RS Ψ RS |}, is defined as i.e., F e measures the fidelity between the input and output states of the joint system: F e = Tr{ρ RS ρ RS ′ }. F e can be expressed in terms of quantities intrinsic to the system alone once an operator-sum representation for E S is available. Because F e (ρ S , E S ) = 1 if and only if ρ RS ′ = |Ψ RS Ψ RS | [38], F e naturally quantifies the preservation of quantum information -perfect preservation corresponding to implementing E S = 1 1 S . In the presence of the target transformation U ≡ U S = 1 1 S , the appropriate measure should equal 1 if and only if ρ RS ′ = U S |Ψ RS Ψ RS |U S † . Thus, (14) is generalized to a gate entanglement fidelity as follows: By using the above operator-sum representation for E S (ρ S ), one can derive the equivalent expression where the modified dynamical mapẼ S = U S † E S U S is defined by the set of transformed Kraus operators {U S † A S µ }. Thus, a perfect implementation of the desired gate U S corresponds to perfect preservation of quantum information underẼ S : the meaning of this is simply that, in the ideal case, the intended effect would be E S (ρ S ) = U S ρ S U S † , which is equivalent to ensuringẼ S (ρ S ) = ρ S . Similar to (15), we then have where as above ρ S = Tr R |Ψ RS Ψ RS | is the initial density matrix of the system alone. 
Taking as the standard reference state a maximally entangled purification |Ψ_RS⟩ for which ρ_S is the fully mixed state, i.e., ρ_S = 1_S/N for an N-dimensional state space, (16) finally reduces to the expression labeled (17). This form makes it explicit that the gate fidelity defined in [29] is identical with the gate entanglement fidelity formally introduced here. We shall still refer to the quantity in (17) simply as the entanglement fidelity F_e in the following. The above reliability measure can be related to experimentally available data, and the results take particularly simple expressions in the case of the single-qubit transformations we are concerned with. Starting from the standard maximally entangled Bell state for the joint RS system, where S and R are now two qubits, and assuming that the process E_S ≡ E actually implementing U is unital and trace-preserving, one finds an expression (18) in terms of the input-state fidelities F_U|ψin = Tr{U|ψ_in⟩⟨ψ_in|U† E(|ψ_in⟩⟨ψ_in|)}, where |ψ_in⟩ runs over |0⟩, |+⟩, |+i⟩, the eigenstates with positive eigenvalue of σ_z, σ_x, σ_y, respectively. This expression was used in [16] for the special case U = 1. As a further remark, it is worth noting that F_e as given in (18) is related to the so-called average gate fidelity F̄ proposed in [39] via F̄ = 2/3 F_e + 1/3. Eq. (18) is directly applicable to quantifying the accuracy of DF unitary manipulations as given, upon decoding, by (13). V. DEMONSTRATION OF A DECOHERENCE-FREE QUBIT The utility of a DFS to preserve quantum information (i.e., to implement the identity operation) is demonstrated under the action of different classes of both engineered and natural noise: a variable-strength engineered decoherent noise, a full-strength (crusher) engineered incoherent noise, and the natural ambient noise due to relaxation. A significant improvement in the entanglement fidelity for each class of noise is seen. In addition, because the measured entanglement fidelities remain above the threshold value of 0.50 [40], all implementations guarantee, in principle, the ability to preserve entanglement over a wide range of noise strengths. Unlike the case of an NS, the entire state of the system (data plus ancilla) remains unchanged under the action of the noise. While this was experimentally confirmed to a good accuracy [41], we report the preservation of quantum information between the desired input state and the measured output state of the data qubit alone. A. Engineered noise Gradient-diffusion techniques were used to implement variable-strength noise. In order to isolate the effects of the applied noise, the time delay between encoding and decoding was kept fixed. Collective phase noise was applied to the system in the first half of this time delay. Unwanted evolution due to the internal Hamiltonian was refocused during the second half of the delay by a pair of π pulses from the sequence P2 given in Sect. IIIa. The gradient strength was varied over the full dynamic range of the spectrometer (0 to 60 Gauss/cm) and the diffusion time ∆ was set in such a way that a significant amount of information was lost when the un-encoded data spin was directly exposed to the noise. This was obtained by running separate experiments with the encoding/decoding sequences turned off. The experimental data are collected in Fig. 2. For the encoded data, the assumption of no additional loss of information with increasing noise strength is consistent with the experimental results. Fitting the data with a constant value yields 0.97 ± 0.01.
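As an aside to the reliability measures of Sect. IV, the gate entanglement fidelity can be evaluated directly from its definition. The sketch below does so for a hypothetical noisy gate (a π/2 x-rotation followed by phase damping) and cross-checks the quoted relation F̄ = 2/3 F_e + 1/3 by Monte Carlo averaging over pure input states.

```python
import numpy as np

# Single-qubit target gate: pi/2 rotation about x
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
U = np.cos(np.pi / 4) * np.eye(2) - 1j * np.sin(np.pi / 4) * sx

# Hypothetical noisy implementation: the gate followed by phase damping with probability p
p = 0.2
kraus = [np.sqrt(1 - p) * U, np.sqrt(p) * sz @ U]

def apply(channel, rho):
    return sum(K @ rho @ K.conj().T for K in channel)

# Gate entanglement fidelity: reference + system in a maximally entangled state,
# apply (1 x E) and compare with the ideal (1 x U) image of that state.
phi = np.zeros(4, dtype=complex)
phi[0], phi[3] = 1 / np.sqrt(2), 1 / np.sqrt(2)       # (|00> + |11>)/sqrt(2)
rho_rs = np.outer(phi, phi.conj())
rho_out = sum(np.kron(np.eye(2), K) @ rho_rs @ np.kron(np.eye(2), K).conj().T for K in kraus)
ideal = np.kron(np.eye(2), U) @ phi
F_e = np.real(ideal.conj() @ rho_out @ ideal)
print("gate entanglement fidelity F_e =", round(float(F_e), 3))

# Cross-check the relation to the average gate fidelity, F_avg = (2*F_e + 1)/3
rng = np.random.default_rng(1)
fids = []
for _ in range(20000):
    v = rng.normal(size=2) + 1j * rng.normal(size=2)
    v /= np.linalg.norm(v)
    rho_s = apply(kraus, np.outer(v, v.conj()))
    target = U @ v
    fids.append(np.real(target.conj() @ rho_s @ target))
print("Monte-Carlo F_avg =", round(float(np.mean(fids)), 3),
      " vs (2*F_e+1)/3 =", round(float((2 * F_e + 1) / 3), 3))
```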
The deviation from unity is consistent with the observed F e value for the reference situation of no applied noise (i.e., the one implementing a net identity evolution between encoding and decoding). Therefore, these losses are caused by imperfections in the applied pulses as well as by natural noise processes whose action is not in the correctable error algebra A z (see below). For all but the smallest noise values, a substantially increased amount of quantum information is retained using the DFS memory than leaving the system un-protected. To further confirm the robustness of the DFS memory against arbitrary noise strengths, an incoherent implementation of all possible collective phase errors was also realized to emulate noise in the strong dephasing limit. A single magnetic field gradient pulse with maximum strength was applied, causing spins on the fringe of the sample to evolve through more than 750 cycles and therefore acquire large phase errors. Again, the loss of information in the presence of this crusher noise is compatible with the measured loss due to just encoding and decoding (see Table 1). B. Natural noise The behavior of both the DFS-encoded and the un-encoded data under ambient noise was also probed in a separate series of experiments, with the goal of gaining qualitative insight on the relevance of fully correlated dephasing in the naturally occurring phase relaxation processes. In this case, the holding time between encoding and decoding was varied to allow for a variable exposure to noise. Because natural relaxation takes significant contributions from T 1 processes (that are both amplitude and phase damping) in addition to transverse T 2 relaxation, the unitality assumption invoked in deriving the expression (18) for the entanglement fidelity is no longer accurate. While F e could be still evaluated directly from (17) upon experimentally extracting a set of Kraus operators, a simpler coherence metric is appropriate if the actual amplitude decay is of no interest. Similar to [15], the amount of quantum coherence C (phase information) that is retained in the course of the noisy evolution can be quantified by experimentally determining the average off-diagonal component present in the output density matrix. Thus, corresponding to the two experimentally prepared transverse p.p. states |+ , |i defined above, we calculate where the map E now corresponds to the natural noisy dynamics. The experimental data are presented in Fig. 3. Holding times ranging from a fraction of a second up to a time scale comparable to T 2 were explored. An appreciable decay of the DFS qubit is seen in this case, witnessing the presence of non-collective phasedamping processes in the ambient noise. In spite of this non-robust behavior, the DFS is still able to retain quantum coherence much longer than the un-encoded state. This implies that a significant contribution to the overall phase relaxation is actually caused by fully correlated dephasing, consistent with the physical intuition based on the geometrical and chemical structure of the molecule. VI. DEMONSTRATION OF UNIVERSAL CONTROL OVER A DECOHERENCE-FREE QUBIT As stated in section IIc, evolution according to two non-commuting encoded Hamiltonians is required for universal control. In the implementation, we found it convenient to adopt a choice of encoded observables intermediate between (7), (8), i.e. σ L z = −σ 2 z , σ L x = (σ 1 x σ 2 x + σ 1 y σ 2 y )/2 henceforth. 
In terms of this choice, and using the actual implementation parameters (see Fig. 1), the internal Hamiltonian (12) is given by induced by the applied RF field does not resemble in general a σ L x Hamiltonian, for the special case of a hard π pulse the resulting propagator is which mimics a net σ L x operation: the action of U 1,2 h on the code subspace is identical to the action of σ L x as a unitary operator (not as a Hamiltonian -note that σ 1 x + σ 2 x does not clearly respect the form (6)). Composite pulses [45], which provide an excellent balance between speed and robustness, were used to implement each hard π pulse. Six-period pulses optimized to be robust against variations in both chemical shift (phase errors) and RF strength (control errors) [34] were used to emulate the required sequence of encoded π L . Each of these hard pulses is 62.4µs in duration and is followed by a delay of 630µs. Therefore, the system resides in the protected space for over 90% of the computational time. The phases of the π L pulses were alternated systematically as described by a WALTZ sequence [46], so as to minimize the impact of experimental errors. In particular, a 64-cycle sequence was used to achieve a π/2 encoded rotation, exp(−iπσ L x /4), with a fidelity of entanglement of 0.94 ± 0.03. Again, we see no significant loss of information due to the x operation. B. Composite encoded y rotation under collective phase noise To explicitly test the robustness of the available logical x and z manipulations, a composite encoded rotation by π/2 about y was implemented in the presence of variable-strength collective phase noise i.e., the sequence of encoded rotations was performed. As in the DFS memory experiments, gradients were used to induce a spatially incoherent error over different parts of the sample. However, the noise effects associated with a time-independent gradient Hamiltonian (i.e., an infinitely long correlation time) would tend to be effectively averaged out over time by the applied control sequences as described by coherent averaging [30] and dynamical decoupling [22]. In order to make sure that the net action of the applied gradients is maintained in the presence of the external control, a procedure similar to the fast-switching control schemes discussed in [23] was followed, by rapidly modulating the strength of the applied gradient Hamiltonians over the course of the control sequence. Thus, a temporal incoherence was also superimposed at each spatial location in the sample, enforcing a finite correlation time τ c (hence a non-zero cut-off frequency) in the spectral density describing the noise. In practice, the gradient waveform was determined via a random walk process, whose shape is depicted in Fig. 4. The gradient strength was changed every 50.6 µs therefore τ c ∼ 50.6µs. By making sure that τ c is short compared to the control cycle time of the sequences used to implement the composite rotations (∼ 700 µs in our case), active control is made ineffective at averaging out the high frequency effects of the noise during the computation. A broad range of values for the maximum applied gradient strength were explored to test the robustness of the computation to collective phase errors. The experimentally determined gate entanglement fidelities are shown in Fig. 5. As expected, the computation is protected up to a particularly noise intensity and then falls off with increasing noise strength. 
As in the memory case, it is worth stressing that F e values well exceeding the value 0.50 have been achieved over the entire range of applied noise strengths. It should be noted that because the active sample is order 1 cm, most of the sample is experiencing noise strengths significantly stronger than natural fluctuations -which are approximately 1 Hz in strength. VII. CONCLUSIONS We have provided the first demonstration of universal control over a DFS-encoded qubit. The implementation relied on combining the benefits of passive noise protection via DFS coding with the ability of relaxing the constraints of fully DF manipulations via appropriate control design -thereby also validating the underlying principles of encoded decoupling. We believe that our techniques are applicable to a wide class of quantum information devices, where collective dephasing mechanisms play a dominant role and where the structure of the system's internal Hamiltonian can be mapped onto a NMR-type Hamiltonian. These may include various solid-state proposals as discussed in [19,32]. Thus, our results improve the prospects that DFS/NS coding, combined with encoded dynamical decoupling and robust control design, will play a practical role for both protected storage and manipulation of quantum information in QIP. TABLE I. Experimental data for the implementation of full-strength collective dephasing. Input-output fidelities and entanglement fidelities corresponding to the application of the intended error model to both the DFS encoded (Q z,df ) and the un-encoded data spin (Qz,un) are listed, along with the values relative to the reference situation of zero applied noise between DFS encoding and decoding (Q 0,df ). Crusher gradient fields with full strength ∼ 60 Gauss/cm were applied for a period δ = 745µs. The measured values for the un-encoded test data confirm the expectation that the applied noise process induces full phase damping on the data spin, with predicted Fe = 0.50. Systematic uncertainties are ∼ 0.02 while statistical uncertainties are ∼ 2%, both due to errors in the tomographic density matrix reconstruction. H H FIG. 1. Molecular structure of Dibromothiophene. The two proton qubits are indicated. As spectroscopically the two protons are effectively undistinguishable, the qubit labels have a purely formal meaning. All experiments were carried out in a magnetic field of ∼ 9.7 T with one proton on resonance. The frequency shifts of the second proton is ν2 = 137.5 Hz, while the J-coupling constant is J = 5.7 Hz. The longitudinal and transverse relaxation times are T1 ∼ 7 s and T2 ∼ 3.5 s, respectively. Experimentally determined entanglement fidelity for the implementation of variable-strength collective dephasing. Both the behavior of the DFS-encoded (squares) and the un-encoded (circles) data is shown. The independent axis (noise strength) was determined by fitting the un-encoded data to an exponential decay of the form Fe = A exp(−tev/τ )+0.5, with tev = ∆+2δ = 37.765 ms. The un-encoded data is only displayed for reference. The encoded data is fit to a constant value Fe = C, yielding the best estimate C = 0.97±0.01. Systematic uncertainties (not included in the figure) are ∼ 0.02. Experimental data for the phase information retained after exposure to the natural system noise. The average preservation of σx and σy was measured as a function of holding times from 0 to ∼ 3 s. 
Improvement over the un-encoded case is seen, confirming that collective phase errors are one of the dominating modes of natural noise for this system. Experimentally determined entanglement fidelity for the implementation of a composite encoded y rotation of π/2 in the presence of noise. Magnetic field gradients implement a spatially incoherent collective phase error as a function of the molecular position. Gradient strengths from 0 to ∼ 100 kHz/cm were applied over a 1 cm sample. The behavior of Fe is flat over a broad range of noise strengths and remains significantly above the 0.5 threshold for all noise values considered. This convincingly demonstrates the ability to control a DFS qubit in the presence of noise significantly stronger than the natural noise of the system.
Development of an Electrohydraulic Variable Buoyancy System: The growing need to explore ocean resources has been pushing the length and complexity of autonomous underwater vehicle (AUV) missions, leading to more stringent energy requirements. A promising approach to reduce the energy consumption of AUVs is to use variable buoyancy systems (VBSs) as a replacement or complement to thruster action, since VBSs only require energy consumption during limited periods of time to control the vehicle's floatation. This paper presents the development of an electrohydraulic VBS to be included in an existing AUV for shallow depths of up to 100 m. The device's preliminary mechanical design is presented, and a mathematical model of the device's power consumption is developed, based on data provided by the manufacturer. Taking a standard mission profile as an example, the energy consumed using thrusters and using the designed VBS is compared. Introduction In recent years, autonomous underwater vehicles have experienced steady development as a result of rising interest in the knowledge of ocean state variables. This knowledge is key for fishing, weather forecasting, and underwater mining, among other activities. One of the main aspects to consider when designing such vehicles is their energetic autonomy, as it strongly restricts mission length and complexity. The major energy consumption source for AUVs is their propulsion and, as such, it is expected that by controlling the vehicle's floatation, energetic improvements should be achieved, since energy is only spent during small periods of time. This is the principle used in underwater gliders [1] with their variable buoyancy systems (VBSs), enabling them to complete month-long missions without any type of maintenance. Several kinds of solutions for VBSs can be found in the literature; however, electromechanical [2,3] and electrohydraulic [4][5][6] solutions are the most common. Purely electrical solutions tend to be more efficient for shallow depths, since the required forces are lower. Electrohydraulic solutions, on the other hand, are usually more efficient for higher pressures, making them particularly suited for deep water applications. A comparison between these two solutions is made in [7]. Several studies regarding electrohydraulic VBSs can be found in the literature. For instance, in [4], an oil hydraulic system comprising a pump, driving motor, valves, and external and internal reservoirs for the depth control of an underwater glider is described. The device is able to descend as far as 2100 m, and efficiency values from 42% to 45% have been recorded for pressures higher than 100 bar. Unfortunately, the efficiency drops for low pressures, possibly due to mechanical friction losses; for instance, an efficiency of 10% was recorded at a pressure of 6 bar. The vehicle proposed in [5] has a VBS consisting of an electric motor, mechanical transmission, piston, three-way valve, internal reservoir, and an external bladder. Given the pump's maximum pressure of 350 bar, this device can reach much higher depths than the one developed in [4], which is conceptually similar.
Efficiency values from 12% up to 43% were reported at pressures ranging from 2 to 35 MPa, respectively. However, this solution is rather slow, taking around 180 s to complete a single pumping cycle. Even though earlier versions of the Slocum glider [6] used a single stroke pump to move a piston and thus change the vehicle's buoyancy, more recent versions move oil between an internal reservoir and an external one for the same effect. In [6], a consumption variation of ca. 2.8 J per meter of depth is reported. Nevertheless, caution should be employed when analyzing these results, as the consumption of the brake existing in such a system may sometimes exceed the one of the pump [6]. Unfortunately, the total energy spent is not presented in [6]. The present work follows the approach of [6] but makes use of a fully integrated motor-pump group for the change of buoyancy. The selected motor-pump assembly was chosen not only due to its compactness but also due to the pump's maximum pressure of 10 bar, which makes it an option for shallow water applications. Considering the selected pump's small pressure range, this particular hydraulic solution is expected to be efficient for low depth applications. One of the main goals of this work is to design the VBS to be integrated into an existing AUV and consequently increase the AUV's autonomy. Therefore, an energetic comparison between the use of the designed electrohydraulic VBS and the use of thrusters is performed. To do that, a mathematical model of the designed system based on data provided by the manufacturer is presented. The model of the power consumption of the thruster-based solution used for comparison is a direct consequence of experimental data. The present paper is organized as follows: Section 2 presents the thruster-driven AUV, in which the VBS will be integrated in the future. The mechanical design of the VBS is presented in Section 3 and the model of the solution is developed in Section 4. Section 5 presents energy consumption simulations and comparisons, based on the mathematical models previously developed, for a few different mission scenarios. Lastly, in Section 6, the main conclusions of this work are stated and future works are suggested. Modular Portable AUVs The AUV considered in this work has been developed by the Ocean Systems Group at FEUP/INESC TEC as part of a program for the design of small size AUVs based on modular building blocks. Mechanical parts as well as electronics, software, and control are included in this modularity. The vehicle hulls are assembled as a stack of modular sections with an outer diameter of 200 mm, which have matching edges for easy insertion, removal, or replacement. This means the AUVs length will vary according to the installed modules but not the overall cylindrical profile. Figure 1 represents MARES, the AUV considered in this work, designed for operation in shallow water areas up to 100 m in depth. The main characteristics of the vehicle are presented in Table 1. This vehicle was designed to have a natural positive buoyancy around 3.5-7 N, so that in the case of an electrical or software failure it will naturally resurface, allowing for easy retrieval of the equipment. That being said, in order for the vehicle to remain at the same depth, the natural buoyancy has to be countered with the use of vertical thrusters. The power consumed by the thrusters in this situation has been experimentally determined to be k p1 = 19.8 W. 
When the thrusters are turned off, the AUV begins ascending, achieving a steady-state velocity, ż, of 0.2 m/s. To make the vehicle sink at the same rate, the thrusters' consumption, k_p2, was experimentally determined to be 38 W.
Mechanical Design of the VBS
Given the modularity of the AUV considered in this work, the VBS system was designed to be easily assembled with the remaining elements of the device. The design of the solution considered the following requirements:
• A total volume change of D_t = ±700 cm³ must be provided to achieve a full buoyancy change, starting from a neutral state, in the face of water density variations with depth and salinity;
• Two VBS modules, one at the stern and one at the bow, must be incorporated in the AUV so as to control pitch and depth independently;
• The VBS's dry components should fit inside a cylinder with as little length as possible and be under 180 mm in diameter;
• The section of the vehicle containing the VBS should be as close to null buoyancy as possible when in its neutral state;
• Considering the Slocum G2 glider's ability to deliver 43 cm³/s at no-load conditions for the 100 m rated pump [8], a maximum time of t_vbs = 15 s was defined for the VBS to perform a full buoyancy change;
• The VBS's required power should not exceed the power provided by the AUV's internal power system;
• It should be underlined that the solution was designed with off-the-shelf components, in order to be readily available for assembly.
The working principle of the designed solution is simple: to increase the device's buoyancy, oil is pumped into an external bladder; to decrease buoyancy, the bladder is deflated, allowing the oil to return to an internal reservoir. The internal reservoir chosen was a FESTO double-acting pneumatic cylinder (model ADN-80-150). The oil is stored inside one of the cylinder's chambers, while a spring is inserted in the second chamber in order to reduce cavitation hazards by pre-charging the oil. It is not simple to measure the volume inside the bladder, so, instead, the amount of oil inside the reservoir is measured using a position transducer, also provided by FESTO (model SDAT-MHS-M160). To pump the oil into the bladder, a motor-pump group by Fluidotech (model HA114Z) was selected. This pump can generate up to 10 bar of pressure, meaning the maximum depth achievable by this solution is about 100 m. The flow provided to the bladder should be such that a full buoyancy change is achieved in t_vbs = 15 s; as such, the pump must, in any load case, be able to provide Q_t = D_t/t_vbs. The selected pump can generate more than the target flow at any working point in its pressure range. To control the motor-pump group, a Maxon driver (model ESCON 70/10, 4-Q servocontroller for DC/EC motors, 10/30 A, 10-70 VDC) was chosen. Figure 2 shows a three-dimensional (3D) render of the designed solution inside the AUV cylindrical hull. The VBS represented in Figure 2 is divided into a wet section, where the buoyancy change takes place, and a dry section, for the pumping system and respective electronics.
Power Calculation of the Electrohydraulic VBS
In order to assess the potential energetic savings provided by the usage of an electrohydraulic VBS in AUVs, a mathematical model of the power consumption is developed in this section. The traditional way to model this kind of system would be to consider the characteristics of the motor, transmission, and hydraulic pump individually. In this case, however, the manufacturer provides the characteristics of the motor-pump group's behavior. The flow-pressure and drawn current-pressure curves of the motor-pump are represented in Figure 3 for the nominal supply voltage V_d = V_d_n. In the figure, Q_0_n is the nominal no-load flow, p_max is the maximum nominal output pressure, and I_d_0h is the current required to surpass the static friction torque. The equations describing the characteristic curves are written in terms of m_I/p and m_Q/p, the slopes of the current/pressure and flow/pressure curves, respectively. The required variables to determine the target voltage applied to the motor are the target flow, Q_t, and the target pressure, p_t; the target voltage also depends on V_d_0h, the voltage necessary to maintain I_d_0h. To calculate the target current drawn by the motor, I_d_t, F is replaced by F_t in Equation (1). Assuming that the driver is supplied with a constant voltage, V_t = V_d_t, and that the driver's losses are proportional to the current drawn by the motor-pump group, the target current at the driver input follows, with k_d denoting the driver's electric loss coefficient.
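To make the structure of this model concrete, the sketch below codes one plausible version of it. The linear characteristic curves, the way the voltage is scaled to reach the target flow, and every numerical value are illustrative assumptions only; they are neither the manufacturer's data nor the parameter values of Table 2.

```python
# Rough sketch of the kind of motor-pump power model described above (illustrative assumptions only).

RHO_H2O = 1000.0       # kg/m^3, water density
G = 9.81               # m/s^2
D_T = 700e-6           # m^3, full buoyancy change
T_VBS = 15.0           # s, time allowed for a full change

# Assumed pump characteristics at nominal supply voltage (placeholders, not manufacturer data)
V_D_N = 24.0           # V, nominal supply voltage
Q_0_N = 60e-6          # m^3/s, no-load flow at nominal voltage
P_MAX = 10e5           # Pa, maximum pressure (10 bar)
M_Q_P = Q_0_N / P_MAX  # flow/pressure slope: flow assumed to drop to zero at p_max
I_D_0H = 0.4           # A, current needed to overcome static friction
M_I_P = 4e-6           # A/Pa, current/pressure slope
K_D = 0.05             # driver electric loss coefficient

def target_pressure(z_t):
    """Hydrostatic pressure at the target depth z_t (m)."""
    return RHO_H2O * G * z_t

def p_eh(z_t, q_t=D_T / T_VBS):
    """Assumed electric power needed to pump q_t (m^3/s) against the pressure at depth z_t."""
    p_t = target_pressure(z_t)
    v_d_t = V_D_N * (q_t + M_Q_P * p_t) / Q_0_N    # voltage scaled (linearly, by assumption)
    i_d_t = I_D_0H + M_I_P * p_t                   # current grows with load pressure
    i_in = (1.0 + K_D) * i_d_t                     # driver losses proportional to motor current
    return v_d_t * i_in

for z in (0.0, 50.0, 100.0):
    print(f"z = {z:5.1f} m -> P_eh ~ {p_eh(z):6.1f} W")
```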
By combining Equations (3)-(6), the required power to actuate the electrohydraulic VBS, P_eh, can be calculated. The target pressure can be calculated as p_t = ρ_H2O·g·z_t, where ρ_H2O is the water's volumetric mass density, g is the Earth's gravitational acceleration, and z_t is the target depth. As stated in Section 3, the target flow can be determined by Q_t = D_t/t_vbs. As mentioned in Section 2, the power consumed by the MARES's vertical thrusters in order to remain at a constant depth is k_p1 = 19.8 W, while to descend at a speed of ż = 0.2 m/s the consumption is k_p2 = 38 W. The knowledge of the power consumed by each solution allows for a comparison of the energy spent in a given specific mission profile. This comparison is performed in Section 5. Simulation Since the main focus of this work was to design a more energetically efficient solution than the existing one, a comparison between the energetic consumption of the designed VBS and the thruster solution is presented in this section. Table 2 lists the values for the parameters of the mathematical model developed in Section 4 (Table 2. Parameters of the electrohydraulic solution). It was considered that the thruster vehicle always has a positive buoyancy of 7 N, while the VBS starts and ends its cycle with neutral buoyancy. It should be noted that, in this analysis, for simplification purposes, motion transients were neglected and a constant descent and ascent velocity (ż = 0.2 m/s) was assumed. It was also assumed that while the VBS is active, the AUV remains in its present state, whether it is moving or stationary. In addition, given the vehicle's low speed, pressure was considered essentially constant while buoyancy changes occur, whether or not the vehicle is stationary. Figure 4 represents the mission profile assumed for the energetic comparison. It comprises a dive from the surface to the assigned mission depth z_m and resurfacing after t_m + t_vbs seconds of data collection. Given the thruster vehicle's natural positive buoyancy, this solution consumes energy continuously during the dive [t_0', t_1'] and while the vehicle remains at the same depth for data collection [t_1', t_2'], but not while the device is resurfacing. As such, and considering the data presented in Section 2, the thruster-based solution's energy consumption can be expressed accordingly. The VBS device, on the other hand, consumes energy whenever a buoyancy change is required. Accounting for the mission profile represented in Figure 4, buoyancy changes occur in the time intervals [t_i, t_i'], i = 0, 1, 2, 3. The energy spent by the VBS can then be written accordingly, where t_vbs is the time required for a complete 0 to ±700 cm³ buoyancy change (t_vbs = 15 s), P_vbs0 = P_eh(0), and P_vbsm = P_eh(z_m) for the electrohydraulic converter.
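Because the display equations for the two energy budgets are not reproduced in this text, the following sketch states one plausible reading of them: the thruster spends k_p2 during the dive and k_p1 while station keeping, and the VBS spends P_eh for t_vbs seconds at each of the four buoyancy changes (two near the surface, two at mission depth). With these assumed forms the thruster figures quoted below are approximately reproduced; the VBS powers used in the example are placeholders.

```python
# Hedged reconstruction of the mission energy comparison (the paper's display
# equations are not reproduced here, so the expressions below are assumptions
# consistent with the surrounding description).

K_P1 = 19.8     # W, thruster power to hold depth (Section 2)
K_P2 = 38.0     # W, thruster power while descending at 0.2 m/s (Section 2)
Z_DOT = 0.2     # m/s, descent/ascent speed
T_VBS = 15.0    # s, duration of one full buoyancy change
T_M = 1800.0    # s, time spent collecting data at depth

def e_thruster(z_m, t_m=T_M):
    """Thruster-based solution: energy during the dive plus station keeping."""
    t_dive = z_m / Z_DOT
    return K_P2 * t_dive + K_P1 * t_m

def e_vbs(p_eh_surface, p_eh_depth):
    """VBS: four buoyancy changes, assumed two near the surface and two at depth."""
    return 2 * T_VBS * p_eh_surface + 2 * T_VBS * p_eh_depth

# Example with placeholder VBS power values (the real P_eh(0) and P_eh(z_m)
# come from the model of Section 4).
print(f"Thruster, z_m = 50 m : {e_thruster(50.0):7.0f} J")
print(f"Thruster, z_m = 100 m: {e_thruster(100.0):7.0f} J")
print(f"VBS (placeholder powers 20 W / 80 W): {e_vbs(20.0, 80.0):7.0f} J")
```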
The energy consumption of both solutions in two example missions, one with z_m = 50 m and the other with z_m = 100 m, with t_m = 1800 s, is represented in Figure 5. Observing Figure 5, it becomes clear that the energy consumed by the electrohydraulic VBS is small when compared to the energy spent by the thruster solution. The latter requires ca. 45,400 J versus the 3000 J consumed by the VBS for a mission where z_m = 50 m. In the case of the other mission represented in Figure 5, where z_m = 100 m, the thruster consumes about 55,000 J, while the VBS consumes only 6600 J. These values represent energy savings of 93% and 88%, respectively. Conclusions This work presents a preliminary study regarding the design of an electrohydraulic VBS solution to be used in an existing AUV. The VBS was designed with readily available components for a quick and easy assembly. A mathematical model of the power required to drive the VBS based on the selected components was proposed and used to perform an energetic comparison between the existing and the designed solutions. For the chosen mission profile, the EH VBS leads to energetic savings of 88% to 93%. Future studies will focus on (1) more complex mission profiles, using a combination of both propulsion methods to optimize energetic consumption, (2) the design of VBS solutions for deep-water applications, and (3) control tasks, namely using advanced control techniques like fuzzy [9] or adaptive control [10,11]. Funding: This work was financially supported through contract LAETA-UID/SEM/50022/2013 by "Fundação para a Ciência e Tecnologia", which the authors gratefully acknowledge.
5,681.2
2019-12-17T00:00:00.000
[ "Engineering", "Environmental Science" ]
On Divided-Type Connectivity of Graphs The graph connectivity is a fundamental concept in graph theory. In particular, it plays a vital role in applications related to modern interconnection graphs, e.g., it can be used to measure the vulnerability of the corresponding graph, and is an important metric for reliability and fault tolerance of the graph. Here, firstly, we introduce two types of divided operations, named vertex-divided operation and edge-divided operation, respectively, as well as their inverse operations, vertex-coincident operation and edge-coincident operation, to find some methods for splitting vertices of graphs. Secondly, we define a new connectivity, referred to as divided connectivity, which differs from traditional connectivity, and present an equivalence relationship between traditional connectivity and our divided connectivity. Afterwards, we explore the structures of graphs based on the vertex-divided connectivity. Then, as an application of our divided operations, we show some necessary and sufficient conditions for a graph to be an Euler's graph. Finally, we propose some valuable and meaningful problems for further research. Introduction and Researching Background Graph connectivity is one of the most basic concepts used in the application of graph theory, both in the combinatorial sense and in the algorithmic sense. Especially, it plays an important role in applications related to graph embedding. The connectivity can serve to assess the vulnerability of the corresponding graph and measure the capability of connection for a set of vertices in the graph. To better understand the characteristics of graph connectivity, a wide range of technical methods were developed and then used to analyze various problems. This classical issue has attracted attention to understanding and utilizing various operations on graphs. By consulting the literature, we found that the splitting operations on graphs can be divided into two classes: one is the vertex-splitting operation and the other is the edge-splitting operation. Figure 1 explains the vertex-splitting process and the edge-splitting process. The former operation can be defined as follows: "A vertex v of degree i = deg(v) is split into two new vertices v′ and v″ with degrees k = deg(v′) and l = deg(v″) = i + 2 − k by adding a new edge to join v′ and v″ together". As several examples, Cheah et al. obtained an O(n³) algorithm for recognizing a trapezoid graph [1]. Mertzios et al. presented a new method of augmenting a given graph and used vertex-splitting in a trapezoid graph [2]. Hilton et al. studied graphs which are critical with respect to the chromatic index [3], and so forth. The latter operation can be explained as follows: "in an undirected graph, splitting off two edges incident to a vertex s, say (s, u) and (s, v), means deleting them and adding a new edge (u, v)", mainly applied to solve connectivity problems. For example, Nagamochi presented several algorithms for splitting off all edges connected to a vertex s of even degree in a graph G with n vertices and m edges, namely, O(nm log n + n² log² n) = O(nm) for a general graph [4], O(n³ log n) for planar graphs [5,6], and O(mn + n² log n) for edge-weighted graphs [7]. Fukunaga and Nagamochi presented necessary and sufficient conditions for a given graph/digraph to have an Eulerian detachment that satisfies a given local edge-connectivity requirement [8]. Farooq et al. described experimental implementations of graph splitting at vertices and edge cutting [9,10].
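Both classical operations are easy to state on an adjacency-list representation. The Python sketch below is ours, for illustration only (function names, the dict-of-sets representation, and the tiny example graph are not from the cited works): it performs one vertex-splitting step and one edge splitting-off step in the sense just described.

```python
# Illustrative sketches of the classical operations recalled above:
# vertex-splitting (split v into v', v'' joined by a new edge) and
# edge splitting-off at a vertex s (delete (s,u), (s,v); add (u,v)).
# Graphs are simple and stored as {vertex: set(neighbors)}.

def split_vertex(adj, v, part_for_v1):
    """Split v into two vertices; neighbors listed in part_for_v1 go to the first,
    the rest to the second, and a new edge joins the two new vertices."""
    v1, v2 = f"{v}_1", f"{v}_2"
    n1 = set(part_for_v1)            # must be a subset of adj[v]
    n2 = adj[v] - n1
    adj[v1], adj[v2] = n1 | {v2}, n2 | {v1}
    for u in n1:
        adj[u].remove(v); adj[u].add(v1)
    for u in n2:
        adj[u].remove(v); adj[u].add(v2)
    del adj[v]
    return adj

def split_off(adj, s, u, v):
    """Split off the edge pair (s,u), (s,v): delete them and add (u,v).
    Multi-edges are not modeled, as befits a simple-graph illustration."""
    adj[s] -= {u, v}
    adj[u].discard(s); adj[v].discard(s)
    adj[u].add(v); adj[v].add(u)
    return adj

# Tiny usage example on a 4-cycle plus a chord.
g = {1: {2, 4}, 2: {1, 3, 4}, 3: {2, 4}, 4: {1, 2, 3}}
split_vertex(g, 2, part_for_v1={1})
print(g)
```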
Although the aforementioned two operations can be used to solve some problems, they cannot be applied to the issue of a vertex being divided into multiple vertices, nor can they be used to solve problems where split vertices are synthesized back into one vertex. Here, we introduce two types of divided operations, called v-divided operation and e-divided operation, respectively, and their inverse operations, v-coincident operation and e-coincident operation, as we will show shortly. Many graphs in the current real world are weighted, and they are composed of small block (modular) subgraphs that are organically combined into a whole, which is also the most natural and reasonable way to build them. By splitting and refining the network, the minimal structural features are obtained. Similar to how matter is made up of molecules, ions and atoms, the minimal structural features of networks can help us to understand the structure and topological properties of graphs. Battaglia et al., in [11], point out: "It is unclear the best ways to convert sensory data into more structured representations like graphs". Our divided operation preserves the "molecules, ions and atoms" of the original weighted network, which is conducive to reconstructing the original weighted network in polynomial time without the need of "requiring the ability to add or remove edges depending on context". Because our divided connectivity is equivalent to the traditional connectivity, the reliability of our divided connectivity is proven. The remaining sections of our article are organized as follows. We present a preliminary introduction in Section 2, in which some terminology and notations are given, our divided operations are introduced, and two parameters of graphs regarding the divided connectivity are defined. In Section 3, we discuss the connections among various graph connectivities, present an equivalence relationship between traditional connectivity and our divided connectivity, and show the topological structures of graphs obtained by our divided technique. As an application of our divided operations, we show some necessary and sufficient conditions for a graph to be an Euler's graph. An elaborate conclusion summarizes the above works and proposes possible problems for further investigation of various connectivities in the last section. Divided Operations The following operations on graphs are discussed in this article. For distinction, we will use "divide" or "divided" in our definitions instead of "split" or "splitting", since our operations differ from "edge-splitting" and "vertex-splitting" used in the existing published articles. A simple graph is one having no multiple edges and no self-edges. Let N(x) be the set of all neighbors of a vertex x in a simple graph, and we call N(x) the neighbor set, so the cardinality |N(x)| is defined as the degree of the vertex x. We present two types of divided operations [12]. The mathematical symbols applied in our paper are shown in Table 1. Table 1: N(x), the set of all neighbors of a vertex x in a simple graph; |N(x)|, the degree of the vertex x; δ(H), the minimum degree; κ(H), the vertex connectivity; κ′(H), the edge connectivity; κ_d(H), the v-divided connectivity. • Vertex-divided operation and vertex-coincident operation. For the neighbor set N(x) = {u_i : i ∈ [1, n]} of a vertex x of a simple graph G, where n is the degree of x, we define a vertex-divided operation (v-divided operation) to x as follows: Divide x into two vertices x 1 , x 2 , and then join x 1 with vertices u 1 , u 2 , . . .
, u i with respect to n > i ≥ 1, and then join x 2 with vertices u i+1 , . . . , u n for n − i ≥ 1; finally, the resultant graph is denoted as G ∧ x. If two neighbor sets N(x) and N(y) of two vertices x, y of a simple graph G hold N(x) ∩ N(y) = ∅ true, we coincide x with y into one vertex w = x • y such that N(w) = N(x) ∪ N(y), and refer to this procedure as a vertex-coincident operation (v-coincident operation); the resultant graph is denoted as G(x • y). In Figure 2, a v-divided operation is from (c) to (b), and another v-divided operation is from (b) to (a); a v-coincident operation is from (a) to (b), and another v-coincident operation is from (b) to (c). An e-divided operation is just from (c) to (d); and an e-coincident operation is from (d) to (c). In Figure 2, after a group of divided operations, then the neighbor sets hold N(u ) ∩ N(u ) = ∅ and N(v ) ∩ N(v ) = ∅ in the resultant graph. We perform a v-divided operation to a vertex u of a simple graph H, so the vertex set satisfies |V(H ∧ u)| = 1 + |V(H)| and the edge set holds |E(H ∧ u)| = |E(H)| (see Figure 2b). The resultant graph obtained by performing an e-divided operation to an edge uv of H holds |V(H ∧ uv)| = 2 + |V(H)| and |E(H ∧ uv)| = 1 + |E(H)| true (see Figure 2d). Remark 1. (1) Let f be an attribute of a network N(t) at time step t, the evaluation f (x, t) of each vertex x is called vertex weight, and the evaluation f (uv, t) of each edge uv is called edge weight. Thus, we say that N(t) is a weighted network. For example, we have f (u, Figure 2c,d, respectively. Thereby, the v-divided graph N(t) ∧ u and the e-divided graph N(t) ∧ uv keep the complete weighted information of the original network N(t). (2) The resultant graph obtained by deleting a vertex x from a simple graph G is denoted as G − x (v-deleted), and deleting an edge xy from the graph produces a simple graph denoted as G − xy (e-deleted). Clearly, the v-deleted (respectively, e-deleted) graph G − x (respectively, G − xy) is unique, but the v-divided (respectively, e-divided) graph G ∧ x (respectively, G ∧ xy) is not unique, in general. However, it is difficult to reconstruct the original graph G from the v-deleted (respectively, e-deleted) graph G − x (respectively, G − xy), although it is easy for the v-divided (respectively, e-divided) graph G ∧ x (respectively, G ∧ xy), because G ∧ x (respectively, G ∧ xy) maintains the complete structure information of the original graph G. (3) The vertex deletion technique is applied to many issues in mathematics, such as the famous Kelly-Ulam's reconstruction conjecture proposed in 1942: Let both G and H be graphs with n vertices. If there is a bijection f : , then these two graphs G and H are isomorphic to each other, that is, G ∼ = H [13]. However, we claim that We show two parameters of graphs based on the divided connectivity: Figure 3). The e-divided connectivity. An e-divided k-connected graph H holds: is disconnected is called the e-divided connectivity of H, denoted as κ d (H) (see example shown in Figure 3). Recall that the minimum degree δ(H), the vertex connectivity κ(H), and the edge connectivity κ (H) of a simple graph G hold the following inequalities [13] true: However, we do not have the inequalities (1) about the minimum degree δ(H), the vdivided connectivity κ d (H), and the e-divided connectivity κ d (H) for a simple graph H. Connection between Traditional Connectivity and Divided Connectivity Proof. The proof of "if". 
Suppose that G is a k-connected graph, and G − S is disconnected with S ⊂ V(G) and |S| = k. Let G 1 , G 2 , . . . , G m be the components of the disconnected graph G − S. Apparently, (1) m ≥ 2, it is evident. (2) Each vertex x ∈ S must be adjacent with some vertex u x,i ∈ V(G i ) for each i = 1, 2, . . . , m, otherwise, there is a proper subset S * ⊂ S with |S * | < |S|, such that G − S * is disconnected immediately: a contradiction. (3) By the above (2) . . , R a after performing a series of v-divided operations to the vertices of X, and V(R i ) ∩ V(R j ) = X for i = j. Thereby, G − X is disconnected, and this contradicts the hypothesis of the proof of "if". if G is k -connected with k < k, then we can obtain that G is a v-divided k -connected graph by the proof of "if" above: it is an obvious conflict. We are finished. Lemma 1 enables us to obtain the subsequent result: Theorem 1. If a k-connected graph has a property related with its k-connectivity, so does a v-divided k-connected graph. For example, Menger's theorem (Karl Menger, 1927) states the following: "Let G be a graph of order greater than k + 1. Then G is k-connected if and only if any two distinct vertices of G are connected by at least k mutually internally-disjoint paths". Thus, each v-divided k-connected graph has at least k internally-disjoint paths to join any pair of vertices. Remark 2. (1) A k-connected graph G induces that the disconnected graph G − S has mutually-disjoint subgraphs G 1 , G 2 , . . . , G m , where S is a subset of vertices of G and |S| = k. Evidently, these mutually-disjoint subgraphs G 1 , G 2 , . . . , G m are fixed. However, the v-divided graph G ∧ S may have its subgraphs L 1 , L 2 , . . . , L n with 2 ≤ n ≤ m. Proof. First of all, κ d (K 3 ) = 0 and κ d (P 3 ) = 0. Let G be a connected graph being not K 3 and having the longest path P a with a ≥ 4. Since G is a v-divided k-connected graph with k = κ d (G), it is k-connected too, by Lemma 1. There exists a subset S ⊂ V(G) with |S| = k such that G − S is a disconnected graph having components G 1 , G 2 , . . . , G n . We construct subgraphs Notice that each vertex y j ∈ S is adjacent with some vertex of G i for i = 1, 2, . . . , n. Consequently, H 1 , H 2 , . . . , H n is just the v-divided graph G ∧ S. If k = 1, namely, S = {w}, the v-divided graph G ∧ S has only H 1 , H 2 such that V(H 1 ) ∩ V(H 2 ) = {w}. Without loss of generality, H 1 contains a path P b = wx 1 x 2 · · · x b with b ≥ 2. Thus, we can divide the edge wx 1 of G = H 1 ∪ H 2 into two edges ,w x 1 and w x 1 , for obtaining two H 1 , H 2 such that H 1 = H 1 with w x 1 = wx 1 , and H 2 = H 2 + w x 1 , where x 1 is a leaf of H 2 , w = w. Clearly, |V(H 1 ) \ {w, x 1 }| ≥ 1, so G ∧ wx 1 is an e-divided graph with κ d (G) = 1 (see Figure 4). Considering the case k ≥ 2, we can obtain two graphs G * 1 and G * 2 from H 1 , H 2 , . . . , H n of the v-divided graph G ∧ S by (4) of the proof of Lemma 1, such that V(G * 1 ) ∩ V(G * 2 ) = S, so there are edges x i y i of G * 1 holding x i ∈ V(G * 1 ) \ S and y i ∈ S = {y 1 , y 2 , . . . , y k }, such that |V(G * 1 ) \ {x i , y i }| ≥ 1. Thereby, we divide each edge x i y i into two x i y i and x i y i to obtain two graphs, H * 1 and H * 2 , such that H * 1 = G * 1 with x i y i = x i y i , H * 2 = G * 2 + {x i y i : i = 1, 2, . . . , k} with y i = y i , where each vertex x i of H * 2 is a leaf. We then obtain G ∧ {x i y i } k 1 to be disconnected and to have two subgraphs H * 1 and H * 2 . We claim that κ d (G) ≤ κ d (G) by the above deduction. 
For showing κ d (G) ≤ 2κ d (G), we take an edge subset {e 1 , e 2 , . . . , e k } of E(G) with k = κ d (G). Notice that the e-divided graph G ∧ {e i } k 1 is obtained by dividing each edge e i = u i v i into two edges, e i = u i v i and e i = u i v i . It means that dividing each vertex of the vertex set X = {u i , v i : i = 1, 2, . . . , k} enables us to obtain a v-divided graph G ∧ X, which is disconnected; immediately, we obtain the inequalities κ d (G) ≤ 2κ d (G), as desired. The examples depicted in Figures 3 and 4 are to show the boundaries of this theorem. The proof of the theorem is complete. Remark 3. This theorem provides a method for computing graph connectivity. x y w coincide two edges x"w" and x'w' Structures of Graphs Based on the v-Divided Connectivity Let κ(G) = k for a connected graph G, so there are subsets S i (k) of V(G) for i = 1, 2, . . . , M(k) and |S i (k)| = k, such that each disconnected graph G − S i (k) has its own components G i,1 , G i,2 , . . . , G i,m i with m i ≥ 2, where M(k) is the number of subsets of G. We have two new parameters: We generalize the above two parameters to other disconnected graphs G − S i (r) for i = 1, 2, . . . , M(r) with possible r with respect to k ≤ r ≤ κ M (G). Thereby, we have m − (r) and m + (r) with k ≤ r ≤ κ M (G) having no subset Y with κ M (G) + 1 elements making G − Y disconnected. We have another concept regarding graph connectivity which is n dis (G) defined by n dis (G) = max{m + (r) : k ≤ r ≤ κ M (G)}.Thus, we have a subset X ⊂ V(G) such that the disconnected graph G − X has the maximum number n dis (G) of components. Hence, G − X can be characterized as follows: Theorem 3. Suppose that a connected graph G has a subset X holding G − X to be not connected, and n(G − X) = n dis (G) if and only if each component of G − X is a complete graph. Proof. Let the disconnected graph G − X has its own components H 1 , H 2 , . . . , H n , where n = n dis (G). Clearly, all components H j are complete graphs. If some H j has two nonadjacent vertices u and v, then a subset X(u, v) = V(H j ) − {u, v} means that H j − X(u, v) has two isolated vertices u and v, so n dis (G) ≥ n + 1, which contradicts n = n(G − X) = n dis (G). Remark 4. This theorem provides several perspectives for discussing graph connectivity, such as a half-K-group of v-divided graphs, connected-perfect, and so on. Since G − X has the maximum components H 1 , H 2 , . . . , H n with n = n(G − X) = n dis (G), we have a v-divided graph G ∧ X with its components Q 1 , Q 2 , . . . , Q n holding and each vertex of X j is not adjacent with any vertex of H j for j = 1, 2, . . . , M(k). Thus, we can coincide these v-divided graphs Q 1 , Q 2 , . . . , Q n to obtain the original graph G (or other graphs H with connectivity κ(H) = k, where H differs from G). What structure does each Q j have? Here, If (a-2) holds true, we can coincide Q j with Q s together by overlapping the same vertices of V j,s in Q j and Q s . We call Q 1 , Q 2 , . . . , Q n a half-K-group of v-divided graphs. We consider a subset X ⊂ V(G) to be connected-perfect if n(G − X) = n dis (G), and |X| ≤ |Y| for any subset Y holding G − Y to be disconnect and n(G − Y) = n dis (G). It may be interesting to find such connected-perfect subsets for a connected graph, and, moreover, whether a connected graph does have a unique connected-perfect subset, and so on. 
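For small graphs, the quantities n_dis(G) and connected-perfect subsets can be checked by exhaustive enumeration. The sketch below is our illustration (adjacency-dict representation, brute force, exponential in the number of vertices), not an algorithm proposed in this article.

```python
# Brute-force illustration (ours) of n_dis(G) and connected-perfect subsets:
# enumerate vertex subsets X, count components of G - X, keep the maximizers,
# and among them the smallest X. Exponential, so only for small graphs.
from itertools import combinations

def components(adj, removed=frozenset()):
    """Connected components of G - removed, for adj = {v: set(neighbors)}."""
    seen, comps = set(removed), []
    for s in adj:
        if s in seen:
            continue
        stack, comp = [s], set()
        while stack:
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v); comp.add(v)
            stack.extend(adj[v] - seen)
        comps.append(comp)
    return comps

def max_components_subsets(adj):
    """All proper vertex subsets X giving the largest number of components of G - X."""
    best, best_subsets = 1, []
    verts = list(adj)
    for r in range(1, len(verts)):              # proper subsets only
        for X in combinations(verts, r):
            n = len(components(adj, frozenset(X)))
            if n > best:
                best, best_subsets = n, [set(X)]
            elif n == best:
                best_subsets.append(set(X))
    return best, best_subsets

# Usage: a path on 5 vertices; deleting the two non-adjacent inner vertices
# {2, 4} leaves 3 isolated (complete K1) components, and it is a smallest
# such subset, i.e., a connected-perfect subset in the sense defined above.
path5 = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 5}, 5: {4}}
n_dis, subsets = max_components_subsets(path5)
smallest = min(subsets, key=len)
print(n_dis, smallest)
```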
In [14], the Sierpinski model S(t) has its vertex number n_v^S(t) and edge number n_e^S(t) given by n_v^S(t) = (3·6^t + 12)/5 and n_e^S(t) = (9·6^t + 6)/5 at time step t. For instance, the disconnected graph S(t) − X_t has n(S(t) − X_t) = 6^(t−1) components for t ≥ 2, and each X_t is a connected-perfect set since n(S(t) − X_t) = n_dis(S(t)), as well as |X_t| = 3 + (3/5)(6^(t−1) − 1). For t = 2, the Sierpinski model S(2) is v-divided 4-connected and e-divided 2-connected (see Figure 5) [15]. Figure 6 further illustrates that S(2) is 4-connected and also v-divided 4-connected, but e-divided 2-connected, and that the disconnected graph S(2) − X_2 has n(S(2) − X_2) = 6 components, which is the most, where X_2 = {a, a′, b, b′, c, c′} is a connected-perfect subset of S(2). Thus, we obtain below the structure of a connected graph G for which the disconnected graph G − X has the most components for some subset X ⊂ V(G). Theorem 4. A connected graph G holds n_dis(G) = n(G − X) = n true for some subset X ⊂ V(G) if and only if there are subgraphs Q_1, Q_2, . . . , Q_n such that each Q_j − Y_j with Y_j = V(Q_j) ∩ X is a complete graph for j = 1, 2, . . . , n. In other words, the v-divided graph G ∧ X has precisely the components Q_1, Q_2, . . . , Q_n. We show an example in Figure 7 for understanding Theorem 4. Moreover, we can see that G − {x_1, x_2, x_3, x_4} has five components in Figure 7, namely, n_dis(G) = 5, and G is 2-connected. In fact, H can produce two or more graphs Q such that Q − {x_1, x_2, x_3, x_4} has five components, and Q is 2-connected. The inverse of Theorem 4 is shown below. Theorem 5. Let each connected graph L_i be k_i-connected with k_i ≥ k ≥ 1 and i = 1, 2, . . . , m. If there exists a nonempty set X holding V(L_i) ∩ V(L_j) = X true for i ≠ j and |X| = k, then the connected graph G obtained by coinciding each vertex of X of L_i with its same vertex of X of L_j (i ≠ j) is k-connected. Conversely, the v-divided graph G ∧ X has its components L_1, L_2, . . . , L_m. An Application of the v-Divided and v-Coincident Operations Coinciding two nonadjacent vertices x, y of a connected graph G with N(x) ∩ N(y) = ∅, repeatedly, until the resultant graph H has no two nonadjacent vertices u, v holding N(u) ∩ N(v) = ∅ true, we call H an overlapping kernel graph of G. Evidently, there are two or more such overlapping kernel graphs of G. What characteristics does H have? First of all, H is obviously connected. An Euler's graph is one without odd-degree vertices, and such graphs were first obtained by the famous mathematician Euler. We present new characterizations for Euler's graphs here. Theorem 6. A simple graph G of n edges is a connected Euler's graph if and only if (E-1) it can be divided into a cycle C_n by a series of vertex-divided operations; (E-2) its overlapping kernel graph H holds diameter D(H) ≤ 2, and no vertex of H is simultaneously adjacent to two vertices of odd degree in H. Proof. We prove (E-1) first. Necessity. Let G be a connected Euler's graph that is not a cycle. A 2-degree 2-connected v-divided operation is defined as follows: take a vertex x_1 with its neighbor set N(x_1) = {y_1, y_2, . . . , y_d}, where d ≥ 4 is the degree of the vertex x_1. We divide the vertex x_1 into two vertices, x_1′ and x_1″, such that N(x_1′) = {y_1, y_2} and N(x_1″) = N(x_1) \ N(x_1′); the resultant graph is still an Euler's graph, and is denoted as G ∧ x 1 .
If G ∧ x 1 is disconnected, so G ∧ x 1 has only two components, G 1 and G 2 , where x 1 ∈ V(G 1 ) and x 1 ∈ V(G 2 ), then we modify N(x 1 ) = {y 1 , y 3 } and N(x 1 ) = N(x) \ N(x ), since y 3 is connected with each vertex of G 2 , and y 2 is connected with each vertex of G 1 . The new graph is connected and denoted by H 1 = G ∧ x 1 again. Clearly, |V(G)| + 1 = |V(H 1 )| and |E(G)| = E(H 1 ). We refer to this procedure of dividing the vertex x 1 by 2-degree 2-connected v-divided operation. Thereby, we can perform such operation on H 1 to obtain a connected Euler's graph H 2 = H 1 ∧ x 2 holding |V(H 1 )| + 1 = |V(H 2 )| and |E(H 1 )| = E(H 2 ) true, if x 2 has degree ≥ 4 in H 1 . We continue in this way until we obtain a connected Euler's graph H m = H m−1 ∧ x m , in which there is no vertex having degree more than 4. In other words, H m is a cycle. Sufficiency. We can coincide a pair of vertices, x m and x m , of the cycle H m for obtaining a connected Euler's graph H m−1 if N(x m ) ∩ N(x m ) = ∅, and then coinciding two vertices x m−1 and x m−1 of the connected Euler's graph H m−1 produces another connected Euler's graph H m−2 when N(x m−1 ) ∩ N(x m−1 ) = ∅. Thus, we obtain the original Euler's graph G by performing a series of v-coinciding operations, because each H k is a connected Euler's graph for i = 1, 2, . . . , m. We come to show (E-2) in the following. The proof of "if". We perform a so-called non-neighbor coincident operation on a connected graph G * 1 = G, and this operation is defined as follows: Coinciding two nonadjacent vertices u, v of G * 1 if N(u) ∩ N(v) = ∅, here, "nonadjacent vertices u, v" means that the graph G * 1 contains no edge uv. Thus, we perform such operation on the graph until the last graph G * k has no two nonadjacent vertices x, y, holding N(x) ∩ N(y) = ∅ for some k ≥ 1. G * k is just an overlapping kernel graph of the original graph G * 1 . Obviously, G * k has its own diameter D(G * k ) ≤ 2, and no vertex of G * k is adjacent to two vertices of odd degrees simultaneously, as if G * 1 is a connected Euler's graph. The proof of "only if". Suppose that the overlapping kernel graph H of the connected graph G has its own diameter D(H) ≤ 2 and no vertex has two neighbors of odd degrees in H. If D(H) = 1, H is a complete graph, and has no vertex having two neighbors of odd degrees. Thereby, H is a connected Euler's graph. Performing a series of 2-degree 2-connected v-divided operations on H produces the original graph G. Clearly, G is a connected Euler's graph. If D(H) = 2, any pair of nonadjacent vertices u, v of H holds N(u) ∩ N(v) = ∅ true, and H is a connected Euler's graph since H has no odd-degree vertex. Obviously, the original graph G is the result of v-dividing H after performing a series of 2-degree 2-connected v-divided operations. The proof of the theorem is complete. Notice that each Sierpinski model S(t) is a connected Euler's graph, and it can be v-divided into a cycle C n e (t) at each time step t, where n e (t) = |E(S(t))| = 1 2 (9 · 6 t + 6) is the edge number of the Sierpinski model S(t) at time step t. Conclusions To investigate an open question proposed by Battaglia et al. in [11], we defined two types of divided operations, called the v-divided operation and e-divided operation, respectively, as well as their inverse operations: the v-coincident operation and e-coincident operation. 
Thereby, we defined the v-divided connectivity κ_d and the e-divided connectivity κ′_d, and showed κ′_d ≤ κ_d ≤ 2κ′_d for all simple graphs (respective networks), and that κ_d is equivalent to the traditional vertex connectivity κ [13]. However, finding the v-divided k-connectivity for each maximal planar graph of order n ≥ 5 and determining the v-divided k-connectivity of an Euler's graph are not easy. We consider that finding connected-perfect subsets of a connected graph (respective networks) may be interesting and important for investigating topological structures of GNs. As is known, the Sierpinski model S(t) is scale-free, and we discover that each vertex of a connected-perfect subset X of S(t) is a scale-free vertex; in other words, X controls the topological structure of S(t). Does each connected-perfect subset of a scale-free deterministic network control the topological structure of the network? For a connected simple graph (respective networks) G with its k-connectivity, our v-divided graph (respective networks) G ∧ {x_i}_1^k can reconstruct the original graph (respective networks) G easily, but it is very difficult to rebuild G from the disconnected vertex-deleting graph (respective networks) G − {x_i}_1^k, in general. Nevertheless, the structure of the disconnected graph (respective networks) G − {x_i}_1^k is unique, whereas G ∧ {x_i}_1^k may contain components L_1, L_2, . . . , L_m with 2 ≤ m ≤ n(G − {x_i}_1^k), where n(G − {x_i}_1^k) is the number of components of the disconnected graph (respective networks) G − {x_i}_1^k. We characterized the disconnected graph G − X obtained by deleting a nonempty subset X of the vertex set V(G) from a connected graph G for which n(G − X) is maximum, and proved that each component of such a G − X is a complete graph. We emphasize that our v-divided operation can dilute a connected Euler's graph into a cycle; conversely, our v-coincident operation can concentrate a cycle into an Euler's graph. Moreover, each connected simple graph can be obtained by deleting some edges from some Euler's graph. We ask the following: how many different Euler's graphs can be made from a given cycle?
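For completeness, the classical criterion that Theorem 6 re-characterizes, namely that a connected graph is an Euler's graph exactly when every vertex degree is even, can be checked as follows; this sketch is ours and tests only the classical condition, not (E-1) or (E-2).

```python
# Classical Euler's-graph check (connected and every degree even); this is the
# textbook criterion, not the new conditions (E-1)/(E-2) of Theorem 6.

def is_euler_graph(adj):
    """adj = {v: set(neighbors)}; True iff the graph is connected and every
    vertex has even degree, i.e., it is a connected Euler's graph."""
    if not adj:
        return False
    # connectivity via depth-first search
    start = next(iter(adj))
    seen, stack = set(), [start]
    while stack:
        v = stack.pop()
        if v in seen:
            continue
        seen.add(v)
        stack.extend(adj[v] - seen)
    if len(seen) != len(adj):
        return False
    # parity of degrees
    return all(len(nbrs) % 2 == 0 for nbrs in adj.values())

# A 4-cycle is an Euler's graph; a path on three vertices is not.
print(is_euler_graph({1: {2, 4}, 2: {1, 3}, 3: {2, 4}, 4: {1, 3}}))  # True
print(is_euler_graph({1: {2}, 2: {1, 3}, 3: {2}}))                   # False
```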
7,590.8
2023-01-01T00:00:00.000
[ "Mathematics" ]
Recognition of Hindi (Arabic) Handwritten Numerals Recognition of handwritten numerals has been one of the most challenging topics in image processing. This is due to its contributions to the automation process in several applications. The aim of this study was to build a classifier that can easily recognize offline handwritten Arabic numerals to support those applications that deal with Hindi (Arabic) numerals. A new algorithm for Hindi (Arabic) numeral recognition is proposed. The proposed algorithm was developed using MATLAB and tested with a large sample of handwritten numeral datasets from different writers of different ages. Pattern recognition techniques are used to identify Hindi (Arabic) handwritten numerals. After testing, high recognition rates were achieved, ranging from 95% for some numerals up to 99% for others. The proposed algorithm used a powerful set of features which proved to be effective in the recognition of Hindi (Arabic) numerals. INTRODUCTION The development of Optical Character Recognition (OCR) systems is considered one of the most important research areas in pattern recognition. OCR allows a machine to automatically recognize characters through an optical mechanism. In other words, it is the electronic translation of images of handwritten numerals into a computer textual format. Recently, the recognition of handwritten numerals has become an intensive area of research, in order to increase the functionality of OCR systems. Numeral recognition systems can be utilized in several applications such as check verification in banks, office automation, postal address reading and communication technology. There are several approaches that deal with the numeral/character recognition problem; each approach depends on a set of features to be extracted and the ways of extracting them. Handwritten numeral recognition is a hard task due to the restricted shape variations (in size, shape, slant and writing style) and the different kinds of noise that break the strokes in numbers or change their topology. That is why handwriting varies when a person writes the same character twice, and one can expect enormous dissimilarity among people. Figure 1 shows a sample of standard and handwritten Hindi (Arabic) numerals. This study describes an off-line recognition technique for Arabic handwritten numerals by extracting features from numeric images to provide efficient and reliable results. The most important aspect of a handwriting recognition scheme is the selection of a powerful set of features, which is reasonably invariant and robust with respect to the shape and slant variations that are caused by various writing styles. MATERIALS AND METHODS The recognition of handwritten text or numbers is a hard task because it depends on the writer and on the accuracy of the writing. Thus, clear and accurate writing will help the OCR system to achieve very high recognition rates. Hindi (Arabic) numerals are used by Arabs, while Latin-based languages use the so-called Arabic numerals; the term Hindi (Arabic) numerals refers to the Indian numerals that are used in Arabic writing. General Outline of the Proposed Approach As depicted in Fig. 2, the proposed model is composed of four steps: importing the numeral image, preprocessing, extracting features, and classification, which finally recognizes the imported numeral.
Image Preprocessing: This step starts by applying the preprocessing techniques to the imported image; the preprocessing step includes a set of operations: binarization, noise removal and edge detection. Figure 3 shows an example of an imported numeral image and its preprocessing result. Feature Extraction Checking the existence of loops: Several techniques can be applied to detect loops in an image. In this study, the technique proposed by Kim et al. (2009) was applied. Finding the centroid of the image: the centroid is a useful feature that is used to describe the central weight of objects in an image. The centroid is calculated as {Mean(X), Mean(Y)}, where X and Y are the pixel coordinates of the numeral image. Image Segmentation: this is the process of partitioning a digital image into multiple parts or sub-images. Segmentation is used to simplify and/or change the representation of an image into something that is more meaningful to analyze. More precisely, in this study we suggest dividing the numeral image into two parts (sub-images) according to its centroid value. An example of partitioning is shown in Fig. 4. Horizontal projections: Another feature is suggested in this study: for each sub-image the horizontal projection (projection on the x-axis) is determined. Figure 5 shows examples of numeral projections. Image Classification In this step, the resulting features (loops or projections) are used to recognize the numeral. This is achieved by comparing the resulting features with the features of standard Hindi (Arabic) numerals, as shown in Fig. 6 and 7 respectively. If the image contains a loop, the detected number is one of the following: {"five", "zero", "nine"}. If the number is a filled loop then it is "zero", while if it contains a shallow loop then it is either "five" or "nine". To distinguish between them, "nine" contains a line and a shallow loop, while "five" is only a shallow loop. Actually, by applying these steps good results were achieved, but sometimes an error may occur in detecting the number "three". It is sometimes detected as "two", depending on the position of the centroid point, as shown in Fig. 8. Thus, to increase the robustness of the system, we propose that, if the projection result of the numeric image is the same as that of the number "two", the system should try to ensure that the number is correctly detected by re-applying steps 6 and 7 to the upper sub-image only. If this results in projections like those in Fig. 9, then the number is "three"; if it does not, then it is truly detected as "two". RESULTS The experiments were applied over a collection of Hindi (Arabic) handwritten numerals collected from a large number of people of different ages to test the proposed model. All the experiments were implemented under the MATLAB environment. The results of the proposed method were highly accurate; it reached high recognition rates for several samples, as shown in Table 1. DISCUSSION In this study we describe a new approach to off-line handwritten numeral recognition. There are many problems in recognition due to writing habits and instruments; we suggest a recognition method which is able to account for a variety of distortions due to eccentric handwriting.
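The centroid, centroid-based segmentation and horizontal-projection features described above can be sketched in a few lines. The code below is our illustration in Python/NumPy, not the authors' MATLAB implementation; the array conventions and the function name are ours.

```python
# Illustrative feature extraction (ours, not the authors' MATLAB code):
# centroid of the foreground pixels, split at the centroid row, and
# horizontal projections of the two sub-images.
import numpy as np

def extract_features(binary_img):
    """binary_img: 2-D array with 1 for numeral (foreground) pixels, 0 for background."""
    ys, xs = np.nonzero(binary_img)                 # coordinates of foreground pixels
    centroid = (int(round(ys.mean())), int(round(xs.mean())))
    upper, lower = binary_img[:centroid[0], :], binary_img[centroid[0]:, :]
    # horizontal projection: number of foreground pixels in each row
    proj_upper = upper.sum(axis=1)
    proj_lower = lower.sum(axis=1)
    return centroid, proj_upper, proj_lower

# Tiny example: a 5x5 image with a vertical stroke in the middle column.
img = np.zeros((5, 5), dtype=int)
img[:, 2] = 1
print(extract_features(img))
```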
Various methods have been proposed and high recognition rates have been reported for the recognition of English handwritten digits (Berkes, 2005; Liu et al., 2004; Kussul and Baidyk, 2004; Tang, 2006). In recent years, many researchers have addressed the recognition of Arabic text, including Arabic numerals (Al-Omari and Al-Jarrah, 2004; Bouslama, 1999; Salourn, 2001; Salah et al., 2002; Alma'adeed et al., 2004; Touj et al., 2005). Alfonse et al. (2010) presented a hybrid classifier for segmenting Arabic numerals. The classifier is built using both multilayer neural networks and decision trees. They reached an accuracy of about 83% (Alfonse et al., 2010). Mahmoud and Awaida (2009) suggested a technique for automatic off-line handwritten Arabic (Indian) numeral recognition using Support Vector Machines and Hidden Markov Models. They achieved average recognition rates of about 99.83% and 99.00% using the Support Vector Machine and Hidden Markov Model classifiers, respectively (Mahmoud and Awaida, 2009). Mahmoud and Abu-Amara (2010a; 2010b) proposed a technique for the recognition of off-line handwritten Arabic numerals using Radon and Fourier transforms. They reached high recognition rates of around 98% (Mahmoud and Abu-Amara, 2010a; 2010b). CONCLUSION In this study a robust algorithm for offline Hindi (Arabic) numeral recognition is proposed. Its robustness comes from the set of extracted features. In summary, the proposed model starts by extracting a set of features, such as detecting loops or dividing the numeral image according to its centroid point position, and finally classifies the number according to the shape of the horizontal projection or the existence of loops. The experimental results of this model show high accuracy and recognition rates of around 98% among all numerals. Fig. 7. The features of the standard Hindi (Arabic) numerals that contain loops. Fig. 5. Recognition steps when no loops exist: compute the centroid of the image; divide the image according to the centroid point; find the projections of the generated sub-images; finally, compare the projection results with the standard set of projections shown in Fig. 6. Table 1. Detection rates for characters without secondary parts.
1,862
2012-08-01T00:00:00.000
[ "Computer Science" ]
ENSO teleconnections and atmospheric mean state in idealised simulations Understanding the natural and forced variability of the general circulation of the atmosphere and its drivers is one of the grand challenges in climate science. In particular, it is of paramount importance to understand to what extent the systematic error of global climate models affects the processes driving such variability. This is done by performing a set of simulations (ROCK experiments) with an intermediate complexity atmospheric model (SPEEDY), in which the Rocky Mountains orography is modified (increased or decreased) to influence the structure of the North Pacific jet stream. For each of these modified-orography experiments, the climatic response to idealized sea surface temperature (SST) anomalies of varying intensity in the El Niño Southern Oscillation (ENSO) region is studied. ROCK experiments are characterized by variations in the Pacific jet stream intensity whose extension encompasses the spread of the systematic error found in state-of-the-art climate models. When forced with ENSO-like idealised anomalies, they exhibit a non-negligible sensitivity in the response pattern over the Pacific North American region, indicating that a change/bias in the model mean state can affect the model response to ENSO. It is found that the classical Rossby wave train response generated by ENSO is more meridionally oriented when the Pacific jet stream is weaker, while it exhibits a more zonal structure when the jet is stronger. Rossby wave linear theory, used here to interpret the results, suggests that a stronger jet implies a stronger waveguide, which traps Rossby waves at a lower latitude, favouring a more zonally oriented propagation of the tropically induced Rossby waves. The shape of the dynamical response to ENSO, determined by changes in the intensity of the Pacific Jet, affects in turn the ENSO impacts on surface temperature and precipitation over Central and North America. Furthermore, a comparison of the SPEEDY results with CMIP6 models behaviour suggests a wider applicability of the results to more resources-demanding, complete climate GCMs, opening up to future works focusing on the relationship between Pacific jet misrepresentation and response to external forcing in fully-fledged GCMs. usually colder (warmer) and wetter (dryer), while the northwestern part is mostly warmer (colder) and dryer (wetter). A consensus has not been reached on whether ENSO or the natural atmospheric variability modulate the climate over the western coast of North America (Lopez and Kirtman 2019). There is also an ongoing debate regarding the connection between ENSO and the Pacific North American pattern (PNA). PNA is a characteristic pattern of the Northern Hemisphere internal variability, that includes four main centers of action observed in the 200 hPa geopotential height (Wallace and Gutzler 1981) and has significant influence on temperature and precipitation over North America (Gershunov and Cayan 2003). Some authors argue that ENSO can only amplify internal variability and cannot generate new patterns (Molteni et al. 1993, Lau 1997, Blade 1999, Palmer 1999. Straus and Shukla (2002), on the other hand, suggest that the external forcing (i.e., ENSO) can lead to patterns that are different from those typical of the internal variability. Lopez and Kirtman (2019) found that ENSO produces response patterns over North America which are different from those of PNA. 
Considering the limited observational record, the ENSO signal is difficult to disentangle from the internal variability patterns. The role of global climate model biases in modulating the atmospheric response to ENSO is also to this day rather unclear, with contrasting conclusions achieved by different authors. Dawson et al. (2011) found that an increase in model resolution to a more realistic mean state, and that a better mean state plays a key role in the propagation of Rossby waves from a tropical source (e.g., El Niño) to the extratropics. Li et al. (2020) show that different model responses to ENSO in the Pacific-North American region and in the North Atlantic region can be explained by differences in the model mean state. Model biases in the North Pacific jet can affect the propagation of Rossby Waves generated in the Tropics, therefore the same RWS can lead to different responses in the Pacific-North American region, depending on the model jet bias. Conversely, they show that the response to El Niño in the North Atlantic and European region is almost independent of the jet bias but can be explained by considering biases in the tropical RWS. Tyrrell and Karpechko (2021) apply a bias correction method to the divergence and temperature tendencies of a general circulation model (GCM) in order to produce several different climatologies, then they apply the ENSO forcing, both positive and negative, for each model climatology. They conclude that there are no significant differences in the responses to ENSO depending on the divergence and temperature tendencies in the troposphere and stratosphere. The climatological bias does not affect the response in the Aleutian Low due to Rossby waves forcing, or the response in the polar vortex due to the upward planetary wave forcing. (SST), thermocline depth, and sea level pressure (SLP) across the equatorial Pacific. The state of the Tropical Pacific climate with respect to ENSO can be synthetically described as being in one of three conditions: Neutral, El Niño, and La Niña. El Niño is the positive phase of ENSO (associated with a warm SST anomaly) while La Niña is its negative phase (with a cold SST anomaly). One of the main atmospheric consequences of ENSO is the rearrangement of the atmospheric Walker Circulation which leads to a longitudinal shift of the associated convective rainfall patterns (Dai and Wingley 2000). During an El Niño event, the warm SST anomaly, and the related anomalous convection in the eastern Tropical Pacific lead to an increased atmospheric low-level convergence and a corresponding upper-tropospheric divergence, generating an anomalous vorticity source in the tropics. The upperlevel component of this vorticity source acts as a Rossby Wave Source (RWS) (Hoskins and Karoli 1981). This RWS, in turn, sets off a Rossby Wave train that tends to propagate upward, northward, and eastward in the North Pacific, modulating the intensity of the Aleutian Low and causing teleconnections patterns in the extratropics (Trenberth et al. 1998). Such Rossby Wave perturbations propagating from the tropical regions into the midlatitudes constitute a sort of atmospheric bridge which can spread the signature of ENSO all over the globe (Trenberth et al. 1998). During the positive ENSO phase, a deeper Aleutian Low can also generate upward propagating waves that can reach the stratosphere and weaken the stratospheric polar vortex. 
During events associated with a strong weakening of the polar vortex, anomalies can propagate back downward into the troposphere and project onto the North Atlantic (Cagnazzo andManzini 2009, Butler et al. 2014). During La Niña events, the response is broadly of the opposite sign, but weaker (Jiménez-Esteve and Domeisen 2018). Several studies showed that the winter response to ENSO, on the North Atlantic, changes from the early winter (November-December) to the late winter (January-March) (Moron andGouirand 2003, King et al. 2021). The early winter teleconnection resembles the East Atlantic pattern, while the late winter teleconnection projects onto the NAO pattern , Mezzina et al. 2020. This difference in the response patterns is due to distinct propagation pathways; the early winter response involves a tropospheric pathway, while the late winter response involves both the tropospheric and stratospheric pathways (Ayarzagüena et al. 2018, Domeisen et al. 2019. In the Pacific region, ENSO can shift the subtropical Pacific jet stream over the western coast of North America, influencing the weather and climate over Mexico, United States and Canada (Seager et al. 2005a,b). During El Niño (La Niña) events, the southern part of North America is Jet stream, obtained by changing the height of the Rocky Mountains, and an ENSO-forced RWS superimposed on the background flow. The experimental setup as well as the metrics used are presented in Sect. 2. Results from model simulations are shown in Sect. 3, while in Sect. 4 the results are discussed. Finally, conclusions are drawn in Sect. 5. The Model In the last decades a hierarchy of GCMs has been developed to tackle a wide variety of scientific questions. Starting from the second half of the '80s, the complexity of GCM has dramatically increased. This has been associated with an increase in computational costs. For academic purposes, however, state-of-the-art GCMs can be too expensive, and here Earth System Models of Intermediate Complexity (EMIC, Claussen, et al. 2002) come into play. These models are sufficiently accurate to be compared with observations, but less complex and computationally cheaper than fullyfledged GCMs. The model used in this study is an intermediate complexity Atmospheric General Circulation Model developed at the Abdus Salam International Center for Theoretical Physics (ICTP), known as SPEEDY (Simplified parameterization PrimitivE Equation DYnamic (Molteni 2003, Kucharski et al. 2006. Version 41 has been used in the current work. It uses the Held and Suarez hydrostatic spectral dynamical core (Held and Suarez 1994) expressed in the vorticity-divergence form derived by Bourke (1974). A set of parameterizations takes care of processes such as Another source of uncertainty related to ENSO forcing and its teleconnections is the impact of the SST bias in stateof the-art models (Timmermann et al. 2018). Many coupled general circulation models exhibit a cold SST bias in the equatorial Pacific Ocean (reminiscent of a La Niña-like state), which leads to an overly westward displaced rising branch of the Walker Circulation. During an El Niño event, the bias in the convective region leads to a further westward convective response as compared with observations (Bayr et al., 2019a, Domeisen et al., 2015. Since this SST bias is mainly due to the oceanic component of coupled models, climate simulations with prescribed SSTs better represent ENSO teleconnections to the North Pacific and North America (Bayr et al., 2019b). 
Considering that investigating the tropical-midlatitude interactions due to ENSO forcing under different model mean states has been shown to be a complex and multifaceted problem, both in observations and in climate GCMs (e.g., the CMIP set of experiments), an intermediatecomplexity experimental setup could constitute a fruitful approach. In this work, the attention is therefore focused on a single feature of the model bias: the North Pacific Jet Stream, which is one of the regions where the largest bias and root mean square error (RMSE) are observed in stateof-the-art GCMs, as shown by Fig. 1. This work aims at exploring how different systematic errors of the Pacific jet stream can affect the model response to ENSO in order to understand to what extent the response of the model to a given forcing changes when the model mean state is modified. This is done by performing several simulations with the atmospheric general circulation model Simplified Parameterizations, primitivE -Equation DYnamics (SPEEDY), forced by climatological SSTs. Each simulation is characterized by a different bias in the Pacific and (b) multi model mean bias (colours) for the CMIP6 atmosphere-only models (AMIP experiment). In both panels, contours are the JFM zonal wind at 850 hPa from ERA-Interim weakening the zonal wind, strengthening the stationary wave pattern and producing a southwest-northeast tilt of the Atlantic eddy-driven jet. These effects are due to the peculiar topography of the Rocky Mountains, which generates a dipole with an anticyclone on the windward and poleward side of the mountain range (where the wind has to "go over'' the mountain) and a cyclone on the leeward and equatorward side (where the flow is more effectively blocked so that is partially diverted around the mountain) (Brayshaw et al, 2009). By modulating the height of the Rocky Mountains, it is possible to modify the jet interaction with the orographic obstacle and thus change the mean flow over both the Pacific and the Atlantic sector. Changes to the Rocky Mountains 2.2.1.1 Control Simulation (ROCK-0) A 200-year long control climate simulation run using the SPEEDY default configuration is used as a baseline experiment (hereafter named ROCK-0). The SST and the Sea Ice Concentration (SIC) boundary conditions are obtained from the 1979-2008 monthly climatology from the European Centre for Medium-Range Weather Forecasts re-analysis (ERA-Interim; Dee et al. 2011). Daily SST and SIC forcing data are obtained by linearly interpolating monthly mean values. In order to reduce the model internal variability, the sea ice and the land modules are switched off. The radiative parameters are set to represent values of the last decades of the 20th century (King et al. 2010). Model integration starts with a standard atmosphere at rest in hydrostatic equilibrium. Modified orography experiments (ROCK) Twelve 200-year long simulations are performed with a set of modified orographies to obtain different mean states of the mid-latitude atmospheric circulation. The twelve modified orography simulations are characterized by an increased or decreased height of the Rocky Mountains in a box spanning 170 W-90 W and 10-80 N. The North American orography large-scale condensation, surface fluxes of momentum, vertical diffusion of heat and moisture, convection and short and longwave radiation (Molteni 2003). A one-layer thermodynamic model (Kucharski et al. 2006a(Kucharski et al. , b, 2013a calculates the temperature anomaly for land and sea ice. 
The horizontal spectral resolution is T30 (~ 450 Km at the Equator) with 8 vertical levels and an associated regular Gaussian grid of 96 × 48 points. The SPEEDY model is computationally advantageous, so it can be integrated over centuries at a minor computational cost. Despite the low resolution and the simplified parameterizations, SPEEDY represents satisfactorily several aspects of the atmospheric climate, like the extratropical circulation (Kucharski et al. 2006a, b), planetary-scale variability modes (Molteni et al. 2011), tropical/extratropical teleconnections , ENSO teleconnections in a global warming situation (Buli´c et al. 2012), and a minimal representation of troposphere-stratosphere interactions (Herceg-Bulic et al. 2018, Ruggieri et al. 2017. The source code and the documentation of the current version of the model -including information on model development and subsequent releases, can be found at http://users.ictp.it/~kucharsk/speedy-net.html. Experimental setup Considering that the goal of the analysis is to assess the sensitivity of the ENSO-related teleconnections to different model mean states, an initial step is to produce several "SPEEDY worlds", characterized by different mean climates. Considering the geographical position of the ENSO thermal forcing, changes in the model mean states are obtained by modifying the average structure of the Pacific jet stream. A simple but effective approach is to change the orography over the Rocky Mountains region. Indeed, the Rockies play a relevant role in shaping and modulating the Northern Hemisphere climate (e.g., White et al, 2021), the idealized El Niño3.4 anomaly just described (a "standard intensity" El Niño), which has a maximum of about 1.2 K (NINO experiments hereafter). The experiment with El Niño forcing and no orographic change is thus labelled NINO-0. The second set of experiments uses the same anomaly pattern, but doubled in magnitude (i.e., NINOx2 experiments). The experiment with twice El Niño forcing and no orographic change is named NINOx2-0. Following the same methodology, the La Niña and the La Niña with doubled idealized anomalies are obtained by reversing the sign of the El Niño experiments (defined as NINA and NINAx2 experiments). Similarly, the experiments with no orographic change, are labelled NINA-0 and NINAx2-0, respectively. Since the ENSO signal is stronger during the late boreal winter, i.e., January-March (JFM) (Brönnimann 2007), the analysis is limited to this season. Similarly to the ROCK experiments, the NINO, NINOx2, NINA, and NINAx2 integrations are 200-year long. All the simulations start from an atmosphere at rest and in order to discard the spin-up of the model, the first year of each integration is excluded. By comparing ENSO experiments (NINO, NINOx2, NINA, NINAx2) and the corresponding ROCK simulations it is possible to estimate the modulation of the ENSO signal due to the change of the model mean state. This idealized configuration has strengths and limitations; on the one hand, the idealized SST forcing helps to understand the mechanism behind the interaction between the bias of the Pacific Jet stream and the ENSO response, isolating the source of the signal over the Central Pacific. On the other hand, the observed ENSO SST signal is characterized by anomalies outside the Niño3.4 region that might generate non-negligible signals and non-linear interactions with the signals coming from the Niño3.4 region. 
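As a schematic of how these boundary conditions are assembled, the sketch below superimposes a Niño3.4-confined anomaly on a climatological SST field with the four scalings used here (+1, +2, −1, −2). It is our illustration only: the grid, the flat 1.2 K anomaly shape and all variable names are assumptions, not the actual SPEEDY boundary-condition code or the HadSST3 composite.

```python
# Schematic construction of the idealized ENSO boundary conditions (ours; the
# anomaly pattern below is a simple placeholder, not the HadSST3 composite).
import numpy as np

# Regular lat-lon grid (assumed resolution for the sketch)
lats = np.arange(-88.75, 90.0, 2.5)
lons = np.arange(0.0, 360.0, 2.5)
LON, LAT = np.meshgrid(lons, lats)

sst_clim = np.full((lats.size, lons.size), 298.0)   # placeholder climatology, K

# Nino3.4 box: 5S-5N, 170W-120W (i.e., 190E-240E)
in_box = (np.abs(LAT) <= 5.0) & (LON >= 190.0) & (LON <= 240.0)

# Placeholder anomaly shape: flat 1.2 K warming inside the box (the paper uses
# a composite of observed El Nino events instead).
anomaly = np.where(in_box, 1.2, 0.0)

experiments = {
    "NINO":   sst_clim + 1.0 * anomaly,
    "NINOx2": sst_clim + 2.0 * anomaly,
    "NINA":   sst_clim - 1.0 * anomaly,
    "NINAx2": sst_clim - 2.0 * anomaly,
}
for name, field in experiments.items():
    print(name, f"max departure from climatology: {np.abs(field - sst_clim).max():.1f} K")
```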
Reanalysis and fully fledged general circulation models

To provide an estimate of the SPEEDY biases in the ROCK and ENSO experiments, the ECMWF ERA-Interim reanalysis (1979-2018) is used. The El Niño signal is obtained by compositing the geopotential height field during the El Niño events (in the 1979-2018 period) identified with the Niño3.4 index. The reference for state-of-the-art general circulation models is the Coupled Model Intercomparison Project phase 6 (CMIP6, Eyring et al. 2016): considering the atmosphere-only setup of the SPEEDY integrations, the focus is on the Atmospheric Model Intercomparison Project experiment (AMIP, Gates et al. 1998). AMIP simulations are performed over the historical period 1979-2014, with observed sea surface temperature and sea ice and observed greenhouse gases (GHGs), stratospheric ozone mixing ratios, and aerosol emissions. Because the models have different resolutions, all the model outputs are interpolated to a 2.5° x 2.5° regular latitude-longitude grid.

The modified North American orography is referred to as the Rocky Mountains because, due to its resolution, the SPEEDY model is unable to resolve smaller mountain chains such as the Sierra Nevada and the Cascades. The changes to the height of the Rocky Mountains range from -60% to +60% (-60%, -50%, -40%, -30%, -20%, -10%, +10%, +20%, +30%, +40%, +50% and +60%). In order to avoid discontinuities in the orography field along the edge of the box, a nine-grid-point smoothing is applied at the borders of the domain. The twelve simulations with changes in the height of the Rocky Mountains, together with the ROCK-0 control simulation, are hereafter named ROCK experiments. Figure 2 shows an example of a 60% increase of the height of the Rocky Mountains (ROCK+60 experiment). The last three characters in the name of the experiments indicate the percentage change in the height of the Rocky Mountains. By comparing the ROCK experiments with modified orography and ROCK-0, the effectiveness of the orography in producing changes in the mean state of the Pacific jet stream is assessed.

Idealized tropical forcing (ENSO forcing)

To study the impact of the model bias on the response to external forcing, four sets of ENSO-like simulations are conducted. The SST pattern of an idealized ENSO anomaly, both positive (El Niño) and negative (La Niña), is defined in the El Niño 3.4 region (equatorial Pacific Ocean, 5 N-5 S, 170 W-120 W): this anomaly is then superimposed on the climatological SST for all the orographic configurations of the ROCK experiments. The shape and magnitude of the idealized anomaly in the El Niño case are generated as follows:
• from HadSST3 data (1979 to 2008; see Kennedy et al. 2011b, c), all El Niño events are extracted by detecting events for which the 5-month running mean of the monthly SST anomalies in the El Niño 3.4 region is greater than 0.5 K for six consecutive months or more (for the El Niño 3.4 index, see Trenberth, 1997);
• the SST composite of the above-defined events is computed over the El Niño 3.4 region: for each event the six monthly anomalies are taken, and the composite is the average of the considered monthly anomalies.

Two indices are used to characterize the Pacific jet stream in each experiment:
• the PJS index is the average of the 850 hPa zonal wind in a box spanning 110-170 W and 30-60 N, and it is used to measure the intensity of the Pacific jet stream;
• the PJL index is the average of the 850 hPa zonal wind in a box 110-170 W and 45-60 N minus the same average in a box 110-170 W and 30-45 N. The PJL index aims at describing the latitudinal position (and therefore the associated meridional wind shear) of the Pacific jet stream.
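The two indices can be computed directly from a gridded 850-hPa zonal-wind field. The following is a minimal Python sketch of that computation, not the authors' code: the file name and variable name are placeholders, an area-weighted (cosine of latitude) mean is assumed, and the input is assumed to already be the JFM mean.

```python
# Minimal sketch of the PJS and PJL indices defined above (hypothetical input file).
import xarray as xr
import numpy as np

def box_mean(u, lat_min, lat_max, lon_min, lon_max):
    """Area-weighted mean of a field over a lat-lon box (cos(lat) weights)."""
    sub = u.sel(lat=slice(lat_min, lat_max), lon=slice(lon_min, lon_max))
    weights = np.cos(np.deg2rad(sub.lat))
    return sub.weighted(weights).mean(("lat", "lon"))

u850 = xr.open_dataset("u850_jfm.nc")["u"]  # JFM-mean zonal wind at 850 hPa

# PJS: mean 850-hPa zonal wind in 30-60 N, 110-170 W (190-250 E) -> jet intensity
pjs = box_mean(u850, 30, 60, 190, 250)

# PJL: mean in 45-60 N minus mean in 30-45 N over the same longitudes
# -> proxy for the latitudinal position / meridional shear of the jet
pjl = box_mean(u850, 45, 60, 190, 250) - box_mean(u850, 30, 45, 190, 250)

print(float(pjs), float(pjl))
```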
The jet width has been investigated as well. This is estimated as the distance between the inflection points of the meridional profile of the zonally averaged zonal wind at 850 hPa (between 110 and 170 W; Manola et al. 2013). Our results show that the jet width exhibits a large variability, its estimate is affected by a large uncertainty, and changes in the orography have a weak (if not null) effect on it (see supplementary material figure SM1), so this property is not further discussed. The box used for the computations is shown in Fig. 2.

Metrics

Considering the relevance for this study of the propagation of the ENSO signal from the tropics to the extratropics, the position of the so-called "Pacific waveguide" is of key importance (Dawson et al. 2011). The Rossby stationary waveguide is strictly related to three parameters characterizing the jet stream: its intensity, its latitudinal position, and its width, measured here by the PJS index, the PJL index, and the jet-width estimate described above.

Figure 4 shows the impact of the Rocky Mountains orography changes on the zonal wind at 850 hPa for the JFM season, which is mainly found over the North American continent and the North Pacific basin, with a secondary structure over the North Atlantic and Eurasia. As expected from Fig. 3, the ROCK-60 experiment (Fig. 4a) is characterized by a stronger jet. The signal in the zonal wind is maximum over the mountains and spreads mainly upstream, up to the centre of the Pacific Ocean. The ROCK+60 experiment (Fig. 4c) shows the opposite behaviour, with the effect of the orography localized only over the region of the Rockies and downstream, over the North American continent. The linear regression in Fig. 4b (all the average JFM fields from the ROCK experiments are merged into a single array and then the linear regression is calculated) shows the relation between the lower-tropospheric zonal wind and the PJS index: a stronger Pacific jet is associated with an intensified zonal wind over the Rocky Mountains in a latitude band between 30 and 50 N. At latitudes south of 30 N a stronger Pacific jet reinforces the trade winds. In a latitudinal band between 50 and 70 N the response of the zonal wind reverses; here a stronger Pacific jet weakens the local zonal wind. A smaller but significant signal is found over the tropical Atlantic Ocean (60 W-0 W) and in the midlatitudes of the Euro-Asiatic sector, where the zonal wind strengthens with a more intense Pacific jet.

SPEEDY response to idealized ENSO

In order to show the response to an idealized ENSO in the Pacific-western North American sector, the JFM 500 hPa geopotential height is considered (King et al. 2017; Feng et al. 2017; Alexander et al. 2008). Figure 5 shows the response to an idealized El Niño in the 500 hPa geopotential height for NINO-0 (Fig. 5b) and ERA-Interim (Fig. 5a). In the Pacific and the North American sectors, the response in the geopotential height due to the idealized El Niño SST anomaly is in good agreement with observations (e.g., Horel and Wallace 1981; Rodriguez-Fonseca et al. 2009; Kröger and Kucharski 2011; Bulić et al. 2012; Wang 2017; Dogar et al. 2017; Mezzina et al. 2019).

The PJS and PJL indices are calculated at 850 hPa because at this level the effect of changes in the orography is more evident than in the upper troposphere. However, indices calculated using the zonal wind at 300 hPa exhibit a similar behaviour (see supplementary material figure SM2).
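As a concrete illustration of the regression used for Fig. 4b (and, later, for Figs. 8b and 9b), the sketch below regresses, grid point by grid point, the JFM-mean fields of the ROCK experiments onto the corresponding PJS values. This is not the authors' code, and the arrays are hypothetical stand-ins for the experiment output.

```python
# Pointwise linear regression of stacked experiment-mean maps on a scalar index.
import numpy as np

def regress_on_index(fields, index):
    """
    fields : array (n_exp, nlat, nlon) of JFM-mean values, one map per experiment
    index  : array (n_exp,) of PJS values, one per experiment
    returns: array (nlat, nlon) of regression slopes (field units per m/s of PJS)
    """
    x = index - index.mean()
    y = fields - fields.mean(axis=0)            # remove the multi-experiment mean
    # slope = cov(x, y) / var(x), computed point-wise via broadcasting
    return np.tensordot(x, y, axes=(0, 0)) / np.sum(x ** 2)

# Example with synthetic data standing in for the 13 ROCK experiments
rng = np.random.default_rng(0)
pjs = np.linspace(4.0, 6.0, 13)                            # PJS index per experiment
u850 = rng.normal(size=(13, 48, 96)) + pjs[:, None, None]  # fake JFM u850 maps
slopes = regress_on_index(u850, pjs)
print(slopes.shape)  # (48, 96): local zonal-wind change per m/s of PJS
```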
The mean state

To understand how the Pacific jet stream changes in the ROCK experiments (e.g., from ROCK-60 to ROCK+60) and how these changes are related to the mean climate of state-of-the-art GCMs, the two indices (PJS and PJL) calculated from the ROCK simulations are compared with the same indices computed from CMIP6. Figure 3 shows a scatterplot of the PJS index against the PJL index for the CMIP6 (orange diamonds) and SPEEDY (blue and red dots) experiments, all compared to ERA-Interim (the black star). All models show a relevant bias in the PJL index with respect to the reanalysis, with a Pacific jet always displaced too far poleward. On the other hand, the PJS indices from the CMIP6 models and the ROCK experiments are scattered around the value from the reanalysis; the PJS values range from 4 to 6. The ROCK experiments cover a fair amount of the CMIP6 model spread in terms of the PJS index. As expected, the more the Rockies are lowered, the stronger the Pacific jet becomes. Conversely, the changes in the height of the Rocky Mountains have a moderate and non-linear impact on the values of the PJL index. While the latitude of the jet is only slightly affected by an increase of the orography, the PJL index decreases linearly for lower orography: indeed, a larger negative value of the PJL index implies an equatorward displacement of the Pacific jet.

Considering the idealized framework of the NINO experiments, a discrepancy between SPEEDY and ERA-Interim is expected. Nonetheless, the overall response is in satisfactory agreement with the reanalysis. The NINA-0 experiment shows an overall weaker signal with respect to ERA-Interim (see Supplementary Material Fig. SM7). In SPEEDY, the Rossby wave train starting from the Pacific Ocean and crossing the North American continent is roughly half of the signal observed in the reanalysis. The only exception is the positive pole over Mexico, which is comparable with ERA-Interim. The positive centre of action over the Pacific Ocean is shifted to the west with respect to the reanalysis and covers all the ocean up to Japan. The asymmetry of the signals between El Niño and La Niña is caused by the different impact that imposing a warm or a cold SST anomaly on the climatological SST might have: over the tropical oceans, small warm SST anomalies can be sufficient to exceed the threshold required to induce the convective anomalies, whereas cold anomalies of the same amplitude are not (Timmermann et al. 2018).

The NINO-0 response shows a strong negative anomaly in the Pacific Ocean (120 E-120 W, 30-50 N) and a positive anomaly over the North American continent (180-110 W, 50-80 N). These two anomalies, combined with the (weaker) negative anomaly over Mexico, constitute the typical Rossby wave train associated with ENSO. Another negative anomaly is found over Greenland. The response to an idealized La Niña (NINA-0 experiment, see supplementary material) shows a signal characterized by a positive anomaly over the Pacific Ocean, a negative anomaly over the North American continent and a positive anomaly over Mexico, consistent with the Rossby wave train for La Niña. The NINA-0 experiment also shows a significant positive signal over the Atlantic Ocean. Overall, the NINO-0 run shows a westward-displaced Rossby wave train and a weaker positive pole when compared to ERA-Interim.
It is worth noting that the observed North Atlantic pole of the Rossby wave train is missing in the NINO-0 experiment, suggesting a weaker atmospheric bridge between the two basins in SPEEDY. However, over Europe and the Middle East, the reanalysis and the NINO-0 experiment show comparable signals.

In order to provide an estimate of the uncertainty in the position of the maxima/minima over the North American region due to internal variability, a bootstrap method is used. From the 200-year simulation, 130 JFM seasons (63% of the data, as suggested by Efron and Tibshirani 1993) are randomly chosen and averaged. Then the position of the maxima/minima of the geopotential height at 500 hPa in the box [45-70 N, 150 W-90 W] is computed. The sampling is repeated 3000 times. In this way, it is possible to estimate the uncertainty of the position of the maximum/minimum, denoted in the figure by the extension of the bars, which show the standard deviation of the bootstrap sample.

By increasing the intensity of the Pacific jet (i.e., by reducing the height of the Rockies), the centre of action of the response over North America moves from north-west to south-east (Fig. 7a, b, d, e). On the other hand, the intensity of the North American response does not change as the intensity of the Pacific jet increases (Fig. 7c, f). Results are consistent for all sets of NINO, NINOx2, NINA and NINAx2 experiments. However, a minor difference can be noted: the standard deviation of the position (for both the longitude and the latitude) of the positive PNA pole decreases when the intensity of the idealized El Niño anomaly is doubled. This suggests that, not surprisingly, a stronger forcing provides a larger signal-to-noise ratio. The same bootstrap approach is applied to the signal over the Pacific Ocean (see supplementary material Fig. SM6). When looking at the position of the pole of the ENSO Rossby wave train over the ocean, no significant shift is found in its latitudinal and longitudinal position, and the intensity of the anomaly does not change.

NIN*-ROCK experiments

To study the relationship between the model bias and the ENSO response in the different NINO (NINA) and NINOx2 (NINAx2) experiments, the properties of the geopotential height dipole over the Pacific Ocean and North America (i.e., the ENSO Rossby wave train) are investigated. This is done, for each experiment, by looking at the geographical position of the maximum (minimum) of the North American and North Pacific poles (i.e., in the case of an El Niño response anomaly, a positive pole over North America and a negative pole over the North Pacific). Results are shown in Fig. 6: while the position of the signal over the ocean is weakly affected by orographic changes, the position of the signal over North America (dots for El Niño and crosses for La Niña) migrates along a south-east/north-west axis as the height of the Rockies is increased. Although a significant change is seen in the position, the intensity of the ENSO Rossby wave train is not affected by the height of the orography. The magnitude of the signal doubles when the intensity of the forcing is doubled (e.g., NINO vs. NINOx2). On the other hand, a strong nonlinearity in the ENSO teleconnection is observed when the NINO and NINA experiments are compared: the El Niño signal is almost twice as intense as its La Niña counterpart.
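A minimal Python sketch of the bootstrap described above follows. It is not the authors' code: array and coordinate names are hypothetical, and since the text does not specify whether the 130 seasons are drawn with or without replacement, sampling without replacement is assumed here.

```python
# Bootstrap estimate of the uncertainty of the extremum position of the 500-hPa
# geopotential height response in the box [45-70 N, 150-90 W].
import numpy as np

def bootstrap_extremum_position(z500_jfm, lats, lons, n_iter=3000, n_sample=130,
                                find_max=True, seed=0):
    """
    z500_jfm : array (n_years, nlat, nlon) of JFM-mean 500-hPa height anomalies,
               already restricted to the box of interest
    returns  : arrays of the extremum latitude and longitude for each resample
    """
    rng = np.random.default_rng(seed)
    n_years = z500_jfm.shape[0]
    lat_pos, lon_pos = [], []
    for _ in range(n_iter):
        idx = rng.choice(n_years, size=n_sample, replace=False)  # subsample of seasons
        mean_map = z500_jfm[idx].mean(axis=0)
        flat = np.argmax(mean_map) if find_max else np.argmin(mean_map)
        j, i = np.unravel_index(flat, mean_map.shape)
        lat_pos.append(lats[j])
        lon_pos.append(lons[i])
    return np.asarray(lat_pos), np.asarray(lon_pos)

# The standard deviation of the sampled positions gives the error bars of Fig. 7:
# lat_b, lon_b = bootstrap_extremum_position(z500_box, lat_box, lon_box)
# print(lat_b.std(), lon_b.std())
```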
A graphical summary of these results is provided by Fig. 7, where the latitude, longitude, and intensity of the maximum (minimum) of the geopotential height at 500 hPa over North America are shown for all the idealized ENSO NINO (NINA) and NINOx2 (NINAx2) experiments.

Over Mexico, the cold signal of the response decreases as the PJS index increases, but the sign changes when the PJS index exceeds a threshold value. On the contrary, the positive signal over Canada is enhanced by a stronger Pacific jet. The response over Asia resembles the one over Mexico, but only the responses over Japan and the Middle East seem to be significantly affected by the mean state, and both become colder when the intensity of the jet stream over the Pacific Ocean decreases. To summarize, larger PJS indices (i.e., stronger jet speeds) strengthen the zonal flow. This leads to a more zonal configuration of the Rossby wave train response to the idealized ENSO-like SST anomaly and to a reduction of the meridional advection. As a consequence, a reduction in the intensity of the temperature anomaly is observed over most of the Northern Hemisphere. Two minor exceptions, where the temperature anomaly is enhanced, are Canada and the region east of the Caspian Sea.

Figure 9 shows the total precipitation response for the NINO experiments. As expected, the pattern of the precipitation anomaly produced by the NINO experiments shows a marked signal over tropical regions. The total precipitation shows the typical equatorial El Niño signal, with an increase in the eastern and central equatorial Pacific (not shown), while it decreases in tropical and subtropical regions approximately corresponding to the descending branch of the Hadley cell. A positive rainfall response is also located near the California coast and the Aleutian region, extending further upstream into the Pacific Ocean. A second positive rainfall response is found in the Gulf of Mexico, Florida, and Caribbean region, whose signal also extends over Mexico and reaches the eastern tropical Pacific. On the other hand, El Niño favours a decrease in precipitation over India, Southeast Asia, and Japan.

Implications for modelled impacts of ENSO

To better assess the sensitivity of the ENSO-induced teleconnection patterns to the model mean state, the responses of near-surface air temperature and total precipitation are analyzed. Almost the same patterns, but with doubled amplitude, are obtained from the NINOx2 experiments (not shown). Conversely, the NINA and NINAx2 experiments show approximately opposite impacts (not shown). The Northern Hemisphere near-surface air temperature response (Fig. 8a, c) shows that El Niño has wide but moderate impacts all over the Northern Hemisphere, with near-surface air temperature changes not larger than ±0.5 K. The only region where El Niño has a stronger signal is western North America. A clear warm signal is present over Alaska and Canada, where the near-surface air temperature increases by 1.25 K. The same anomaly, reduced in intensity, also extends southward over the west coast of the United States. While experiments with increased orography (Fig. 8c) show a negative temperature anomaly over Mexico, those with reduced orography (Fig. 8a) are characterized by a neutral or positive response. Another clear signal is located over Japan: a cold anomaly spreads downstream, from the Asian east coast into the Pacific Ocean.
On the other hand, the signal over Europe, North Africa, and the Arabian Peninsula (a warm anomaly unaffected by changes of the mean state) is weak and becomes statistically significant only in the NINOx2 experiments. The linear regression shown in Fig. 8b, which relates the ENSO near-surface air temperature response to the PJS index, highlights the role of the changes in the mean state in modifying the ENSO temperature fingerprint. The response over western Alaska is weakened when the PJS index increases, implying that a stronger jet tends to suppress the warm signal there.

(Fig. 9 caption: As Fig. 8, but for precipitation.)

The way Rossby waves propagate in a slowly varying flow is well understood; Rossby stationary waves tend to be refracted towards regions with larger stationary wavenumber (Ks), usually equatorward (Hoskins and Ambrizzi 1993). Rossby stationary waves and the stationary wavenumber can be compared, respectively, to a light beam and the refractive index in optics. Ks is defined as

K_s = (β_M / u_M)^{1/2} = (k^2 + l^2)^{1/2},

where k is the zonal wavenumber, l is the meridional wavenumber, β is cos(φ) times the meridional gradient of the absolute vorticity, and u is the zonal velocity; the subscript M indicates the Mercator projection on the sphere. From kinematic wave theory, given a Rossby stationary wave characterized by a fixed Ks, the zonal wavenumber k is constant along the Rossby wave path, so the meridional wavenumber l has to vary, and this change in l is the reason for the variation in the propagation direction. The propagation of the Rossby wave in the meridional direction stops when the wave reaches a latitude characterized by a meridional wavenumber equal to zero (l = 0); the group velocity becomes purely zonal, so the wave is refracted back to lower latitudes. Latitudes characterized by l = 0 are usually called "turning latitudes". The presence of a zonal jet affects the propagation of Rossby waves: the intensity and the width of the midlatitude jet are two key properties for creating an effective waveguide that can propagate the Rossby wave along the zonal direction (Manola et al. 2013). The turning latitude can be computed from the meridional profile of Ks. Given a meridional profile Ks(φ) and a stationary Rossby wave characterized by a zonal wavenumber n, the turning latitude is the latitude where the function Ks(φ) crosses the line Ks = n. In our case, the meridional profile of the stationary wavenumber is calculated by zonally averaging the zonal wind over the Pacific sector (170 W-110 W) and considering only the Northern Hemisphere (10-70 N). The longitudinal size of the Pacific sector is comparable with the typical scale of quasi-stationary Rossby waves (Scaife et al. 2017).

Figure 10 shows the meridional profile for each ROCK experiment and for ERA-Interim: red curves refer to ROCK experiments with increased Rocky Mountains height, blue curves refer to ROCK experiments with reduced height, the solid black line corresponds to the ROCK-0 experiment, and the dashed black line to the ERA-Interim reanalysis. For wavenumbers greater than 4 it can be noticed that the ERA-Interim reanalysis shows a consistently lower turning latitude than the ROCK experiments. For values between 3 and 4, the range of values computed from the ROCK experiments is comparable with the reanalysis values.
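The Ks profile and the turning latitude just described can be computed in a few lines of code. The following Python sketch is an illustration under the definitions given above; it is not the authors' code, the input profile and latitude arrays are hypothetical, and taking the last crossing as the turning latitude is a simplifying choice.

```python
# Meridional profile of the stationary wavenumber (a*Ks) and turning latitude.
import numpy as np

A_EARTH = 6.371e6        # Earth radius [m]
OMEGA = 7.292e-5         # Earth rotation rate [1/s]

def stationary_wavenumber(u_profile, lat_deg):
    """u_profile: zonal wind zonally averaged over the Pacific sector; lat_deg: latitudes."""
    phi = np.deg2rad(lat_deg)
    cos = np.cos(phi)
    # curvature term of the meridional gradient of the absolute vorticity
    duc_dphi = np.gradient(u_profile * cos, phi)
    curvature = np.gradient(duc_dphi / cos, phi) / A_EARTH**2
    beta_m = cos * (2.0 * OMEGA * cos / A_EARTH - curvature)   # cos(phi) * d(f+zeta)/dy
    u_m = u_profile / cos                                       # Mercator zonal wind
    ks2 = beta_m / u_m
    ks2 = np.where((ks2 > 0) & (u_m > 0), ks2, np.nan)          # undefined for easterlies
    return A_EARTH * np.sqrt(ks2)                               # dimensionless wavenumber a*Ks

def turning_latitude(ks_profile, lat_deg, n):
    """Poleward-most latitude where the a*Ks profile crosses the zonal wavenumber n."""
    above = ks_profile >= n
    crossings = np.where(above[:-1] != above[1:])[0]
    return lat_deg[crossings[-1]] if crossings.size else np.nan

# ks = stationary_wavenumber(u_pacific_mean, lats)   # lats in degrees, e.g. 10-70 N
# print(turning_latitude(ks, lats, n=4))
```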
The regression of the total precipitation on the PJS index (Fig. 9b) shows a modulation of the signal following a change in the mean state over multiple regions: the most relevant signal is a dipolar anomaly that involves the Caribbean and Mexico. Over the Caribbean, the total precipitation decreases when the PJS index increases, with the NINO experiments showing a stronger relationship between the signal and the PJS index. Conversely, the total precipitation over Mexico increases with the PJS index. Similarly, the tropical Pacific is strongly affected by the Pacific jet stream intensity, and the total precipitation regression shows an increase in precipitation greater than 0.30 mm/day per m/s. The response on the west coast of Canada, on the border with Alaska, shows an increase of the precipitation associated with an increase of the Pacific jet intensity, while an opposite signal is observed over the Aleutian low.

Rossby wave propagation and the Rossby stationary wavenumber

The sensitivity of the ENSO response described above (Fig. 6) can be investigated in terms of the linear theory of Rossby wave propagation (Hoskins and Karoly 1981). Rossby waves are planetary-scale waves that propagate westward with respect to the time-averaged background flow, so that they can become stationary if the mean westerly flow provides suitable conditions; they are capable of transferring energy and momentum across large distances and giving rise to teleconnection patterns (Hoskins and Karoly 1981). Rossby waves can be generated by diabatic heating, a condition which typically occurs in areas of deep convection at tropical latitudes: consequently, anomalous Rossby waves can be produced following the onset of ENSO.

Discussion

Not many authors have explored the impact of changes in models' orography. Generally, the most used approach is either the complete removal of the orography or the removal of a specific mountain chain (White et al. 2017, 2018). For example, several papers studied the role of the Tibetan Plateau and the Mongolian mountains (Boos & Kuang 2010; Chiang et al. 2015; Shi et al. 2016; White et al. 2017; Kong & Chiang 2020) in shaping the large-scale Northern Hemisphere atmospheric circulation. White et al. (2021) compared simulations with standard orography and simulations carried out after completely removing the orography. They found that the orography reduces the mean zonal wind by 50-80% and that, without the orography, the wintertime zonal wind of the Northern Hemisphere is comparable with the Southern Hemisphere winter jet. The resolved orography accounts for about one-third of the total slowdown of the zonal wind. The ROCK experiments, which showed (Figs. 3 and 4) that the height of the Rocky Mountains linearly affects the zonal wind, are consistent with these results.

Nonetheless, the meridional profile of the stationary wavenumber is different, and none of the SPEEDY experiments is able to represent the reanalysis profile well. For values lower than 3, the spread of the ROCK experiments is reduced and all the simulations satisfactorily follow the profile of ERA-Interim. The experiments with the larger Rocky Mountains, however, are more similar to the reanalysis than those with the Rocky Mountains decreased, possibly because the higher orographic barrier reduces the jet speed, thereby reducing the zonal wind model bias. Moreover, Fig. 3 showed that ERA-Interim has a lower value of the PJL index with respect to the ROCK experiments.
The bias in the latitudinal position of the jet stream is related to the bias in the meridional profile of the Rossby stationary wavenumber: indeed, a jet at lower latitudes implies lower turning latitudes. It is evident how the turning latitude associated with wavenumbers between 3 and 5, which correspond to the typical Rossby signal generated by ENSO (Li et al. 2020), changes with the height of the Rocky Mountains and consequently with the speed and the position of the Pacific jet. Experiments characterized by a more intense jet stream over the Pacific Ocean (blue lines) show equatorward turning latitudes; experiments with a weaker jet stream show poleward turning latitudes. The changes of the turning latitude values across the ROCK experiments are consistent with the observed changes of the position of the ENSO-induced Rossby wave train over North America in the NINO/NINA experiments (Fig. 7): a stronger jet modifies the propagation of the tropically forced Rossby waves.

(Fig. 11 caption: Schematic representation of the impact of the orography on the ENSO response.)

Similarly, it has been shown that the position and intensity of the ENSO centre of action in the geopotential height at 500 hPa over the Pacific do not change with different mean states: this confirms the work of Tyrrell and Karpechko (2021), who found that changes in the model zonal wind bias do not affect the ENSO signal in the Aleutian Low.

Summary and conclusions

This work aims at investigating the role of the mean atmospheric state, and in particular of the North Pacific zonal flow mean state, in modulating the atmospheric response to ENSO and its impacts on temperature and total precipitation. A set of experiments is developed in order to modify the mean state of the SPEEDY intermediate-complexity general circulation model by progressively increasing or decreasing the height of the Rocky Mountains. Each experiment is forced with the same idealized ENSO SST anomaly. Finally, linear Rossby wave propagation theory is used to interpret the results. The SPEEDY experiments show that, by changing the height of the Rocky Mountains, it is possible to modify the mean state of the model over the North Pacific sector. Indeed, it is shown that the speed of the Pacific jet changes comparably to the zonal wind-speed bias of state-of-the-art global climate models. Comparison with the values of the same Pacific jet indices computed for the CMIP6 models indicates that the SPEEDY experiments are able to mimic the bias in the Pacific jet strength of the CMIP6 models (Fig. 3).

The midlatitude leading response to El Niño is a geopotential height anomaly over the North Pacific and North America, stronger during the late winter and reminiscent of the PNA pattern. In the SPEEDY experiments, the Rossby wave train in response to a tropical El Niño-like forcing is clearly affected by changes in the mean state. When the Pacific jet is stronger (i.e., when the Rocky Mountains height is reduced), the positive geopotential height response centre over Canada and Alaska migrates from north-west to south-east. The position of the centre of action of the 500 hPa geopotential height response of the idealized El Niño experiments moves about 4° southward and about 10° of longitude eastward following an increase of the jet speed of about 1 m/s. For the idealized La Niña experiments, the position of the centre of action moves about 2° southward and 14° of longitude eastward for an increase of the jet speed of about 1 m/s.
Responses to the double-intensity idealized El Niño and La Niña experiments are roughly the same as in the regular-intensity experiments. The idealized ENSO experiments are similar to experiments carried out in previous works. In particular, Dogar et al. 2017 (hereafter D17) performed four ENSO experiments using the SPEEDY model: two El Niño and two La Niña experiments, with regular and doubled intensity respectively. They used an SST anomaly imposed on the climatology only in the tropical region (50 S-50 N). Given the very similar structure of the present and the D17 experiments, a direct comparison is possible. Despite the differences in the ENSO forcing (the full tropical Pacific in D17 and the Niño3.4 region in this work), the near-surface temperature response in the four D17 ENSO experiments is similar. The west coast of the North American continent shows the typical positive (negative) signal related to an El Niño (La Niña) event. The intensities of the near-surface temperature anomalies are comparable to our NINO-0 (NINA-0) and NINOx2-0 (NINAx2-0) experiments. In D17 the responses in near-surface temperature extend over Hudson Bay, while in our experiments the anomalies are confined to the west of Hudson Bay. These differences highlight the importance of the SST in the tropical Pacific outside of the Niño3.4 region. The extratropical North Pacific Ocean shows a very consistent response in the two works. The spatial pattern and intensity of the total precipitation response found in our experiments match the D17 results well.

The schematic shown in Fig. 11 summarises the mechanism behind the modulation of the ENSO signal by orography over North America. An increased orography (panel on the left) reduces the intensity of the Pacific jet in the exit region over the continent. The reduction of the zonal component of the wind is associated with an increased meridional component of the wind and more advection of warm air from the tropics to the mid-latitudes. This can be interpreted as the air not having enough kinetic energy to go over the orographic barrier and being deflected poleward. Lastly, the high-pressure anomaly (due to El Niño) is displaced to the north-west. On the other hand, a reduced orography (panel on the right) increases the jet strength in the exit region and reduces the meridional component of the wind, resulting in a more zonal jet. Consequently, the meridional advection is reduced, so that less warm air reaches the higher latitudes. The positive centre of action over the continent migrates to the south-east.

The results and the interpretation of the link between the ENSO response over the North American continent and the atmospheric mean state (i.e., the zonal wind) proposed in this work are supported by Benassi et al. (2021). They explore the impact of low-frequency SST variability over the extratropical Pacific on the El Niño teleconnection, concluding that different values of the zonal wind in the jet exit region over North America can modulate the response to ENSO. As in this study, they found that a weaker jet leads to a more poleward ENSO wave train. This confirms the great importance of reducing the bias in the zonal wind of state-of-the-art models in order to increase their ability to describe the essential dynamical processes of the general circulation. The SPEEDY results pave the way for several future works focusing on the relationship between the misrepresentation of the Pacific jet and the response to external forcing in state-of-the-art GCMs.
More generally, the observed sensitivity of the ENSO-induced Rossby wave train can be interpreted in terms of linear Rossby wave propagation theory. Experiments with a stronger jet over the North Pacific are characterized by an average turning latitude, for wavenumbers from 3 to 5, located at lower latitudes than experiments with a weaker jet. The difference in the turning latitudes directly influences the direction of propagation of Rossby waves: experiments with a weaker jet show a Rossby wave train shifted westward and northward, while experiments with a stronger jet tend to confine Rossby waves to lower latitudes and enhance a "waveguide" effect. The final result is that a larger Pacific jet speed favours an eastward and equatorward shift of the Rossby wave train, with a more zonally oriented propagation. Interestingly, the SPEEDY experiments show that the strength of the Pacific jet does not affect the intensity of the response to an idealized ENSO anomaly, but only its position.

Due to the different characteristics of the Pacific jet, the propagation of the tropically generated Rossby waves changes accordingly, leading to different global teleconnections. For instance, a stronger Pacific jet is associated with a less intense surface temperature response over Alaska: the signal of the idealized El Niño experiments decreases by about 0.4 K following an increase of 1 m/s in the wind intensity. The Pacific jet strength also affects the dipolar signal of total precipitation over the Caribbean and Mexico regions: the intensity of the NINO experiments' responses decreases (increases) over the Caribbean (Mexico) by about 0.3 mm/day following an increase of 1 m/s of the jet speed. Another way to look at the role of the bias of the Pacific jet is to consider that a stronger jet speed strengthens the zonal flow. The increased zonal flow leads to a more zonal configuration of the Rossby wave train response to the idealized ENSO forcing, and the consequence is a reduction of the meridional advection over the North American region. The position of the centres of action of the geopotential height signal induces a reduction of the temperature anomaly intensity over Alaska and a shift of the total precipitation pattern over Mexico and the Caribbean. Overall, this work showed that the model mean state (or, in other words, the model systematic error) can affect the midlatitude response to tropical forcing. Different intensities of the Pacific jet lead to different propagations of Rossby waves and different ENSO responses across the North American continent.
The comparison between the SPEEDY and CMIP6 GCM behaviour shows that the SPEEDY experiments are able to mimic the bias of the CMIP6 models in the representation of the Pacific jet stream. The ENSO experiments show the key role of the model bias in the propagation of the signal from the tropics to the mid-latitudes, underlining the importance of a good representation of the Pacific jet stream.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
11,615.8
2022-04-13T00:00:00.000
[ "Environmental Science", "Physics" ]
THE LOST INDIAN CHANDRAYAAN 2 LANDER VIKRAM AND ROVER PRAGYAAN FOUND INTACT IN SINGLE PIECE ON THE MOON

The lunar lander Vikram of the Moon mission Chandrayaan 2 of the Indian Space Research Organisation (ISRO) lost communication with the lunar orbiter and the mission control nearly 2.1 km above the lunar surface during its landing on the Moon on 7th September 2019. The exact location and the site of the lost lander and rover have remained elusive. We present here the exact location and the first images of the lander Vikram and the rover Pragyaan sighted on the lunar surface. It is evident from the processed images that the lander was intact and in a single piece after landing away from the scheduled site, and that its ramp was deployed to successfully release the rover Pragyan onto the lunar surface. This contradicts earlier reports that the lander disintegrated into small pieces and debris which were scattered far away from the proposed landing site.

INTRODUCTION

MATERIALS AND METHODS

The imagery provided by the Lunar Reconnaissance Orbiter (LRO) of NASA on the website hosted by Arizona State University was scanned visually, and screenshots of the selected images were enlarged and processed by dehazing and enhancing contrast employing MATLAB (MathWorks Inc.).

RESULTS AND DISCUSSIONS

We have located the position of the lander with respect to the crater Simpelius N (Figs. 1 to 4). In one set of images posted on the LROC website [2], we could see the upwardly tilted intact lander as if ascending a raised sloping surface (Fig. 5a & b). In another set of images taken at a different time by the LRO, we spotted the intact lander in a straight position at a latitude of -69.58650 and a longitude of 23.77852 at 0.50 m/px (Figs. 6a & b). It may be pointed out that Vikram was designed to safely land on slopes of up to 12°. The processing of the image of the lander for dehazing and enhancement of contrast with MATLAB gave valuable clues to its identity (Figs. 5a & b, 6a & b, 7a & b), as we could visualize the rectangular shape of the lander, its dome-like top structure, the unfolded ramp and the opened door for the release of the rover Pragyan onto the lunar surface. Even some of the thrusters at the bottom of the lander could be seen (Figs. 7b, 8). On scanning the LROC images displayed on its website on 1 September 2020, we could spot the lander Vikram at a latitude of -69.58650 and a longitude of 23.77852 and the rover Pragyaan at a latitude of -69.58382 and a longitude of 23.75958, respectively, at 0.50 m/px (Figs. 9 & 10). It may be noted that these sites are distant from the scheduled landing site. In an image (M1325822321RE) acquired by LROC on 15 October 2019 at 23:44:14.506 (image type NACR, orbit 46427 EDR), we could visualize a vertical column of smoke or regolith (or possibly even water), several meters long and wide, emanating from the vicinity of the lander (Fig. 11), which might have arisen due to some explosion or the impact of a meteorite on the lunar surface. However, this could not be due to the crash landing of Vikram, because it was not seen in the images taken immediately after the landing but was noticed in an image taken one month after the landing. Earlier, an amateur had claimed to have located the debris about 750 meters northwest of the main crash site as a single bright pixel in the LRO images acquired on 17th September 2019.
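The authors describe their processing only as dehazing and contrast enhancement in MATLAB. As an illustration of that kind of workflow, the following Python sketch applies a percentile contrast stretch and adaptive histogram equalization (CLAHE) to an enlarged crop; the file names are placeholders and the parameters are assumptions, not the authors' settings.

```python
# Hypothetical analogue of the image-enhancement step (not the authors' MATLAB code).
import cv2
import numpy as np

img = cv2.imread("lroc_screenshot_crop.png", cv2.IMREAD_GRAYSCALE)

# 1) enlarge the crop (the screenshots were enlarged before processing)
img = cv2.resize(img, None, fx=4, fy=4, interpolation=cv2.INTER_CUBIC)

# 2) simple "dehaze"-like step: stretch the intensity range to 0-255
lo, hi = np.percentile(img, (1, 99))
stretched = np.clip((img.astype(np.float32) - lo) * 255.0 / (hi - lo), 0, 255)
stretched = stretched.astype(np.uint8)

# 3) local contrast enhancement with CLAHE
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(stretched)

cv2.imwrite("lroc_crop_enhanced.png", enhanced)
```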
Scouring the images acquired by the LRO camera on October 14 and 15 and November 11, 2019, the LROC team of NASA reported the impact site and the associated debris field at 70.8810° S, 22.7840° E, at 834 m elevation [3]. We have obtained clear images of the putative lunar lander Vikram on the lunar surface at a location close to the scheduled landing site near the south pole of the Moon. The image has remarkable similarities with the lander Vikram, as we could identify even some structural features of the lander in the image. This sighting puts to rest all speculation about its destruction during landing due to a high-velocity impact on the lunar surface leading to the scattering of debris over long distances.

CONCLUSIONS AND RECOMMENDATIONS

The images presented here suggest that not only did the lander land successfully on the lunar surface without major destruction, but it also successfully opened the bay door and deployed the ramp for the release of the rover and its payload. This is contrary to the earlier claims that the lander crashed on the lunar surface due to a high-speed collision leading to its fragmentation and the spread of debris over long distances. However, our findings are in consonance with the ISRO claim of 8th September 2019 [4] that the lander was intact and in a single piece but in a tilted position. We present here for the first time a picture of the lander in a tilted position on the lunar surface. It is hoped that the present findings will help in locating the lost lander and rover, in identifying the causes of the failure of its functioning, and in efforts to re-establish communication of the orbiter and mission control with the lunar lander and the rover.

SOURCES OF FUNDING

This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.

CONFLICT OF INTEREST

The authors have declared that no competing interests exist.

ACKNOWLEDGMENT

The authors are thankful to Prof. Mark Robinson, Arizona State University, USA, for providing some files of LROC imagery.
1,316
2020-12-29T00:00:00.000
[ "Physics" ]
Socioeconomic Impact Assessment of Water Resources Conservation and Management to Protect Groundwater in Punjab, Pakistan : Water is the most important resource; it is utilized largely in agricultural production and is fundamental to ensuring global food security. This study aims to assess sustainable water management interventions and their impact on the farm economy. To increase water productivity, the most important adaptations that have been proposed are high-efficiency irrigation systems, drought-resistant varieties, the substitution of water-intensive crops with less water-demanding crops, the mulching of soil, zero tillage, and all on-farm operations that can save water, especially ground water. The recent analysis utilized farm survey data from 469 representative farmers along with secondary statistics. The data were collected via a multi-stage sampling technique to ensure the availability of representative farm populations based on a comprehensive site selection criterion. The TOA-MD model estimates the adoption rate of a proposed adaptation based on net farm returns. High-efficiency irrigation systems and the substitution of high-delta crops with low-delta crops had a positive impact on net farm returns and per capita income, and a negative impact on farm poverty in the study area. It is recommended that policymakers consult farmer representatives about agricultural and water-related issues so that all the policies can be implemented properly.

Introduction

Agricultural production systems are complex, interlinked, and play a vital role in global food security. Ground water utilization is an important policy domain in developing nations due to its role in achieving food security and sustainable farming livelihoods. Water is the most important resource that is utilized largely in agricultural production, and the sustainable use of water resources is an important policy objective, as set out in the National Water Policy (2018) and the Punjab Water Policy (2020) [1,2]. Intergenerational and intragenerational equity in terms of farm resources is important to sustainable development [3]. The overutilization of water resources creates complex problems, such as waterlogging and salinization, and results in the depletion of groundwater resources. There are certain planned and unplanned adaptations that can be performed at the farm level and at the farmer's end that can sustain soil fertility and farm income. The uptake of researchers' recommendations in policy would be improved if specific technical aspects of research findings could be generalized in a simple, non-technical and understandable way, and then communicated to policymakers and other stakeholders [19]. The links between the agricultural production system and farming livelihood are described in Figure 1. This study is based on a comprehensive engagement process with researchers, policymakers and farmers, investigating the adaptations made at the farm level regarding ground water in order to enhance the sustainable agricultural pathways. The engagement process formulates future agricultural pathways and the economic viability of the proposed adaptations. The interventions impact farm returns, poverty and per capita income under sustainable resource utilization. The adoption of the proposed adaptations and management practices is also evaluated considering economic viability via a cost-benefit analysis.
There are very few economic models that can be utilized specifically for policy evaluations. The socioeconomic viability of the recommendations is relevant to the impact of the research and interventions, as the economic rationality influences decisions about the adoption or non-adoption of the proposed adaptations at the farm level [21].

Materials and Methods

Water is the most critical input required for agricultural production and significantly influences farm ecology. Similarly, bio-physical factors are also important in terms of farm production and livelihoods, via the impact they have on crop choices and production. A recent study was undertaken in the Lower Bari Doab (comprised of parts of the Lahore, Sahiwal and Multan divisions) and data from two districts, District Sahiwal and District Okara, have been utilized in the analysis, along with secondary data about the divisions from government statistics. Sahiwal is one of the most fertile divisions in Punjab and is suitable for a variety of cash crops, such as wheat, maize, cotton, rice and sugarcane. The recent analysis utilized farm survey data collected in 2018 from 469 representative farmers along with secondary statistics for the Sahiwal division. The data were collected by a multi-stage sampling technique to ensure the availability of a representative farm population based on a comprehensive site selection criterion. The distribution of the farm population from the head to the middle and tail ensures the heterogeneous nature of the farms in the data sets. A well-structured questionnaire was developed and personal interviews were conducted to develop a socioeconomic profile of the farmers. RAP sessions were conducted for future projections.
The available data sets were analyzed using the tradeoff analysis multidimensional impact assessment model (TOA-MD), which is a unique simulation tool that can utilize the socioeconomic data sets already collected, combine them with macroeconomic data sets of farms, and project the current and future viability of specific policy interventions proposed in research studies [22,23]. Due to its efficacy in data use, it is also known as the minimum data approach, as it utilizes secondary data sets for generalizations. The Sahiwal division map is shown in Figure 2.

Representative Agricultural Pathways for Ground Water Management Interventions

RAPs are formulated via the consideration of representative concentration pathways and shared socioeconomic pathways, as described in Figure 3 below. The shared socioeconomic pathways 1 and 3 are linked with a sustainable development pathway under low and high growth, which are linked with biophysical and socioeconomic indicators. This study develops sustainable development pathways with moderate growth, considering resource depletion and farmer sensitization regarding water conservation and management practices. The shared socioeconomic pathways are determined via the interrelationships of adaptations and conservation practices regarding groundwater in compliance with efforts towards sustainable practices. Shared socioeconomic pathways could be sustainable or unsustainable, with high or low development pathways, and are linked with representative concentration pathways. This study utilized sustainable development pathways and moderate development.
"Representative agricultural pathways" are combinations of economic, technological and political scenarios that represent a plausible range of possible futures (Box 1). They are not meant to be predictions, but rather provide researchers with a range of plausible scenarios that can be used to simulate possible future outcomes in a consistent and transparent way [26]. The RAPs framework shows that both bio-physical and socioeconomic drivers are essential components of agricultural pathways. RAPs can help engage stakeholders in research throughout the research process, and in the communication and refinement of research results [27].

Box 1. The RAP development process. Source: [28].
Step 1: Selection of higher-level pathways (country level) and identification of key indicators
Step 2: RAP narratives defined under different shared socioeconomic pathways
Step 3: Key parameter/indicator selection and review with consideration of the existing literature
Step 4: Direction and magnitude of change in variables shared and comprehensively discussed in RAP meetings
Step 5: The rationale for the rate of change and a short narrative finalized through a continuous engagement process with experts
Step 6: RAPs shared with experts for their feedback
Step 7: Feedback from experts and stakeholders in a continuous engagement process incorporated into the refinement of the RAPs
Step 8: Final RAPs drafted into the RAP matrix and again shared with professionals for further improvement regarding the important variables parameterized in the model

There were three RAP meetings and consultative sessions with stakeholders to formulate the RAPs and the water conservation and management practices. The first RAP session was held at PCRWR (Pakistan Council of Research in Water Resources) with hydrological experts, agronomists, social scientists, irrigation scientists and socioeconomic experts. Progressive farmers were also invited to this session. The second consultative session was held in the field, where the experts and the project team visited the farm area and engaged in an extensive group discussion with farmers. The third session was again with academics, and was set up to share and refine the outcome of the previous interactive sessions. The RAP parameters' direction and magnitude of change are listed in Table 1.

Proposed Water Conservation and Management Practices

Water conservation and management practices are an important policy domain in terms of food security and sustainable farm livelihoods. Irrigation plays an important role in the production of food and fiber crops. In Pakistan, 90% of food and 100% of cash crops are directly dependent on irrigation [11]. Farmers tend to adopt technologies and conservation techniques as long as they can realize an increase in expected profitability. The adoption of new conservation technologies requires considerable changes in the decision-making process, including human, biophysical, institutional, and economic considerations. The stakeholder consultative sessions and the existing literature suggested the management interventions listed below regarding the sustainable use of water on the farm. Transformative adaptations and the RAP development process are sketched in Figure 4. By accompanying system-based and problem-solving transformative adaptations, the process adopted in this study, in line with previous studies [30,31], shows that there are complexities in the relations between different actors and contexts of action.
Our study of improved water management and conservation practices suggested the following recommendations [8,32-36]:
• Substitution of high-delta crops with low-delta crops;
• Efficient irrigation practices (HEIS, drip irrigation, sprinklers, etc.);
• Soil conservation practices for better water-holding capacity (organic manuring, conservation tillage and cover crops);
• Improved cultivars (drought- and heat-tolerant varieties, short-duration varieties);
• Improved agricultural practices (fertigation, balanced fertilizer, drainage);
• Plant population (seed rate, number of plants);
• Construction of water storage;
• Agricultural insurance/finance;
• Water harvesting.
The above-mentioned strategies are important in terms of water conservation practices and expansions in farm livelihoods. Due to data (availability and quantification) and model limitations, we could not incorporate all the management interventions into the model at once; therefore, the two important interventions proposed and strongly endorsed by all stakeholders were used in the model. The foremost intervention was the substitution of high-delta crops with low-delta crops in the study area. Sustainable farming largely depends on the farm's natural resource capabilities, especially soil fertility and water availability, and quality plays a vital role in terms of farm productivity and income.
Most of the farmers are very concerned about the rational use of these resources when selecting the best means of resource exploitation in the long run. Farm tenancy status clearly plays a crucial role in farm management and resource conservation practices, and tenancy structure is directly linked with sustainable farm livelihoods. However, while farmers are aware of water use and climatic implications, their decisions are largely affected by market signals and public policies. In Pakistan, the government directly intervenes in the wheat market, and provides incentives to wheat farmers in the form of support prices. Wheat is suitable for most of the irrigated and rainfed areas of Punjab, as it is well suited to the cropping system, and has a stable market with high returns. Table 2 shows that water consumption under wheat cultivation is the highest, as wheat is preferably grown by farmers due to its better returns, proper marketing, efficient value chain and seed availability, but it utilizes the most water, a reported 39 million acre-feet [37].

Table 2. Current water consumption by five major crops, in million acre-feet (MAF). Source: [25].

The research framework regarding ground water management interventions is described in Figure 5.

Tradeoff Analysis Multidimensional Impact Assessment Model

The TOA-MD simulates various "experiments" for the adaptation of new technologies and their impact assessments. These "experiments", combined with scenarios that represent the state of the world (for example, current or future technology), are the basis of adaptation analysis [38]. Current = current technology; Future = future (changed) technology; Adapted = adapted (changed) technology. Consider the two scenario systems 1 and 2, where system 1 is the current time period with the base technology and system 2 is the future time period with the improved technology (the proposed interventions), and let w = v1 − v2 measure the difference in income between the base and the changed technology: if w > 0, technology adoption causes a loss; if w < 0, technology adoption causes a gain. So, we can interpret the adoption model as follows. Adopters = those who gain from technology adoption (farmers who would like to "adopt" the new technology). Non-adopters = those who suffer from technology adoption (farmers who would not like to "adopt" the new technology). The "adoption rate" at a = 0 separates losers from gainers. Figure 6 describes the possible development pathways and adaptation options in the future.
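To make the adopter/non-adopter rule above concrete, the short Python sketch below classifies a hypothetical sample of farms by the sign of w = v1 − v2 and reports a simulated adoption rate. It is only an illustration of the decision rule, not the TOA-MD software itself (which simulates whole farm populations from survey-based statistical moments); the function name and all net-return figures are invented for the example.

# Minimal sketch of the adopter/non-adopter logic; NOT the TOA-MD implementation.
def adoption_summary(net_returns_system1, net_returns_system2):
    """Classify each farm by the sign of w = v1 - v2 and return the share of
    farms that gain from the new technology (w < 0), i.e. the adoption rate."""
    assert len(net_returns_system1) == len(net_returns_system2)
    w = [v1 - v2 for v1, v2 in zip(net_returns_system1, net_returns_system2)]
    adopters = sum(1 for wi in w if wi < 0)      # gain from system 2
    rate = 100.0 * adopters / len(w)
    return {"adopters_%": rate, "non_adopters_%": 100.0 - rate}

# Hypothetical per-farm net returns under the base technology (system 1)
# and the proposed intervention (system 2).
v1 = [210, 180, 250, 140, 200]
v2 = [230, 175, 270, 160, 195]
print(adoption_summary(v1, v2))   # {'adopters_%': 60.0, 'non_adopters_%': 40.0}

In the study itself, the TOA-MD model applies this kind of threshold logic to the farm population characterized by the survey data and the RAP projections, which yields the adoption rates reported in the Results section.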
This study adopts the sustainable development pathways as projected by the green pathway, and the increases in farm incomes resulting from the adoption of the management interventions are shown. The adoption rate for the proposed interventions is estimated via the opportunity cost of adoption and non-adoption. The difference between the net returns from system 1 and system 2 defines the opportunity cost, ω = ν1 − ν2, where ν1 = net returns from system 1 and ν2 = net returns from system 2; ω < 0 means a gain from the adoption of system 2, and ω > 0 means a loss from the adoption of system 2. Thus, the proportion of adopters, r, is given by the percentage of farms for which ω < 0, and the percentage of non-adopters, i.e., the farmers who remain in system 1, is 100 − r (3).

Figure 6. Pathways and future production systems. Source: [6,10].

Results

The results of the TOA-MD model project the adoption rate based on the economic viability of the proposed adaptation strategies in the study area. The important aspect of this analysis is the formulation of an adaptation package, and the further selection of those management interventions that could be analyzed for economic viability. The TOA-MD model is used to analyze the adoption rate of the proposed management interventions based on economic analysis and cost-benefit analysis. The input for the model is based on survey data and projections of the RAPs. The output of the model is mainly the adoption rate of the adaptations based on their impact on per capita income, net farm returns and farm poverty. The TOA-MD model was used to estimate the adoption rate of the proposed adaptation based on farm returns, and the impact of the substitution of high-delta crops with low-delta crops on net farm returns, per capita income and poverty in the study area.

Economic Benefits for the Adoption of Low-Delta Crops

The proposed adaptation package includes the substitution of high-delta crops with low-delta crops. According to the cropping pattern of the study area, the proposed crops are oilseeds and pulses, which may increase water productivity and farm livelihoods, and reduce the import burden on the economy.
The suggested management interventions include crop diversification; 5-10% of the area under the wheat crop is replaced by sunflower, and likewise, 5-10% of the maize area could be replaced with moong bean. Sunflower and moong bean are highly recommended crops according to the climatic and biophysical conditions of the study area. Additional data sets were collected for alternative crops in the same area in order to analyze the socioeconomic impacts of crop substitution. These could be added to oilseeds and pulses in the analysis to check their economic viability. The rationale of crop substitution is based on resource conservation and export bills; likewise, crop diversification also protects farmers from risk and uncertainty, and the adoption of legume crops can improve soil fertility and crop productivity, as well as using less water. There are certain risks involved, especially market risks and climatic uncertainties [39,40]. The proposed interventions include the repurposing of 5-10% of the land for alternate crops that increase farm returns and ensure increased farm incomes. Wheat is a staple crop that utilizes very high water resources, but this crop has huge socioeconomic and political importance. The consultative session suggested that annual surplus wheat production could be replaced by oilseeds and pulses [40]. However, the farm sizes are already small, so it was agreed that 5% of land occupied by small land holdings and 10% of land occupied by large land holdings could be allocated for alternate crops. The economic benefits of the adoption of low-delta crops are described in Table 3. The results indicate that, without adaptation, the poverty level was 17.72% in the survey data, whereas with the interventions, the poverty rate would be reduced by 3.6 and 3.4% for scenarios 1 and 2, respectively. The percentage adoption rates for sunflower and moong bean would be 49 and 59%, respectively, as described in Figure 7. The results show that the proposed interventions, such as crop substitution, would have a significant impact on farm livelihoods.

Figure 7. Potential adoption rate of low-delta crops in the study area for a future agricultural production system.

The percentage change in net farm returns, per capita income and farm poverty due to crop substitution is shown in Figure 8. Farmers tend to adopt technologies and conservation techniques as long as they can realize an increase in expected profitability. The decision to adopt technologies and techniques is also influenced by a farmer's socioeconomic status, knowledge of new technologies, cultural background, and access to natural resources.
Moreover, the adoption of new conservation technologies requires considerable changes in the decision-making process, including a wide range of human, biophysical, institutional, and economic considerations. The socioeconomic impacts of the benefits offered by a high-efficiency irrigation system are presented in Table 4. The proposed interventions include the substitution of conventional methods of irrigation with high-efficiency irrigation systems (HEISs). With the proposed intervention of HEISs (through technology and management), there would be a 3.8% decrease in poverty and a 30% increase in per capita income. The adoption rate of this adaptation package is 74%, resulting in a reduction in farm poverty, as presented in Figure 9. It is evident that substantial reductions in water consumption are made possible through changes in cropping patterns, and the suggested crops increase farm income and livelihoods as well. Low-delta crops must be prioritized over high-delta crops, considering the demand for staple foods to ensure food security. The adoption rate of high-efficiency irrigation systems for major cash crops has a positive impact on net farm returns (NR), per capita income (PCI) and farm poverty, as shown in Figure 9. The results further show that water savings and high NR are made possible by shifting from conventional irrigation to improved irrigation technologies (sprinkler and drip irrigation) [34]. Public policies must consider resource conservation and sustainable livelihoods.
Policies must be formulated in favor of institutional development, as compared to support for domestic production and price control policies [4]. Based on these results, it is recommended that, apart from the proposed water-saving strategies, other alternative management techniques, directed to off-farm management (i.e., improved infrastructure to reduce water losses due to poor conveyance efficiency) and on-farm management (e.g., deficit irrigation or soil mulching), should be evaluated in future studies, as these play important roles in the sustainable use of farm resources and farm livelihoods. Figure 10 describes the impact of the adoption of HEIS: the effect on net farm returns and per capita income is positive, while the effect on farm poverty is negative (i.e., poverty is reduced).

Discussion

Agricultural production systems could be improved by educating farmers to adopt management interventions and improved practices that are practically possible at the farm level. The main concern with the proposed interventions and solutions at research stations is the slow adoption and the economic viability of the farm. As such, the adaptations are at times not practically adopted by farmers, due to technological, financial, and socioeconomic constraints [41-44]. As farmers are the most important and crucial stakeholders in the whole process, it is important to involve the farmer in the entire research process when devising solutions for the agricultural issues that farmers face in the long run [45,46]. To acknowledge the importance of agricultural production systems in society, studies describe the effects of transformative adaptations on boosting farm produce [28,37,38]. This project evaluates the potential adaptations that can be made in the study area, and also emphasizes that low adoption is partly due to the lack of involvement of important stakeholders in the whole policy formation process. The researchers, policymakers and farmers working in their specialized fields maintain weak communication with each other. During the RAP sessions, researchers from sociology, genetics, irrigation and drainage, economics and the soil sciences were consulted in developing improved projections. From a food security perspective, Pakistan's agricultural policy mainly concentrates on wheat, especially in terms of support price and procurement. This ensures farm returns to some extent, but creates inefficiencies in the wheat market. Wheat is a suitable crop all over Pakistan, and fits well in all cropping systems. However, the wheat crop competes with the main oilseed crops and pulses. Research studies have shown that sunflower is suitable for cultivation and is a highly profitable crop; however, farmers are reluctant given several constraints [47,48]. The adoption of sunflower is low, mainly due to seed unavailability, the high cost of production (especially seed cost), inefficient marketing, the lack of suitable farm machinery for small farms, and the lack of competition among buyers [49].
The allocation of water resources is crucial in terms of sustainable agriculture; consequently, policies must be formulated for water conservation and management. The formulation and implementation of ground and surface water laws are linked with the adoption of water conservation practices, such as the implementation of micro-irrigation technologies and the growing of high-value crops, which can boost the water economy [50]. Pakistan's farming landscapes are complex and varied across regions in terms of water availability, quality and quantity. Therefore, it is recommended to provide water-related information that is authentic and constructed with reference to all the specific policies devised and implemented to enable sustainability in farming practices [9]. On-farm management practices, such as the use of fertilizer, soil management, laser leveling, irrigation methods and the selection of crops, contribute to water use efficiency, soil health improvement and the mitigation of climate change impacts [7,30,31].
The moong bean has great potential as a cash crop in wheat systems in Pakistan, but certain interventions are needed to turn constraints into opportunities for its adoption in irrigated areas [51]. The availability of water, high-yielding cultivars, improved management practices and improvements in the value chain are the crucial factors in moong bean cultivation. The cultivation of pulses, especially moong bean, is low, mainly due to marketing issues and inconsistent policies. The benefit-cost ratio of moong bean is higher than that of all other major cash crops in certain areas, as reported by the National Agricultural Research Centre, but due to marketing factors, farmers do not grow this crop and are reluctant to substitute existing crops [48]. The improvement in yield is substantial when adopting efficient irrigation systems, especially for the maize crop [52]. The biological and grain yields increased substantially with higher-efficiency irrigation systems and raised-bed systems. The water quality and mode of irrigation could increase crop yields by approximately 15%. Improvements in water quality and soil fertility increase crop yields substantially. It is projected that better on-farm water management increases net farm income substantially by increasing crop yields and reducing the cost of production; it is estimated that this increases farm income by INR 75,000 per acre per annum [43,53].

Conclusions

Agricultural production systems largely depend on natural resources, especially water and soil. On-farm water conservation and management practices are needed, with measures such as the re-allocation of water to higher-value crops. Likewise, limited irrigation requirements, spatial re-allocation and the transfer of water improve water productivity, and have positive impacts on farm livelihoods. The adaptations that could increase water productivity, formulated during the engagement process, include high-efficiency irrigation systems (HEIS), drought-resistant varieties, the substitution of water-intensive crops with less water-demanding crops, the mulching of soil, zero tillage, and improved farm cultivation operations. Overall, 75% of farmers have the economic ability to adopt these management interventions. Although wheat, maize, rice, sugarcane and cotton are the most important cash crops, it is necessary to calculate the social cost of water-demanding crops. Oilseeds and pulses are potential candidates in terms of resource conservation and crop diversification. Based on the current analysis, the following recommendations could be helpful for researchers and policymakers seeking to improve water management. Policy formulation must consider and consult farmer representatives about water issues, so that all the policies can be implemented at the farm level. Farmers are important stakeholders, and their inclusion will help in the adoption of interventions to improve the management of ground water and surface water. The most crucial factor in agricultural development is access to agricultural finance, especially for the adoption of technological advancements regarding water management, such as sprinkler and drip irrigation and farm mechanization. The study recommends that the Central Bank provide special financing schemes for sustainable practices, which would increase the rate of adoption. Water allocation at the farm must be equitable, and there must be an efficient water market so that malpractices and the overutilization of water resources can be minimized.
Nature-based solutions are also needed, and appropriate policies must be formulated for specific zones. The areas with water scarcity must be highlighted, and serious efforts should be made to implement the suggested interventions. The integrated farm system model is a concept proposed to increase the efficiency of farming systems in a sustainable manner. Livestock, crops, fisheries, agroforestry and the poultry sector must all be considered as one integrated system. Public policies that favor one crop could suppress the cultivation of other crops; therefore, public support, especially support prices, should be designed and implemented in a way that enhances the welfare gain for the farming system as a whole, and efficient market-based solutions should be preferred. For the up-scaling of the interventions, it is recommended that the current work be continued in other agro-ecological zones of Punjab, where the majority of farmers and non-farmers are resource-poor, water is scarce, and poverty is high. Solving issues in the community by involving its members is very important, and RAPs are a novel approach to providing solutions for critical issues and later assessing the impacts of the implemented interventions on livelihoods. The analysis was conducted for the mid-century; the future is unpredictable, and many development pathways and future parameters of sustainability could change the extent of the impacts. This study only considers the sustainable development pathways, and assumes that sufficient effort would be made toward resource conservation. There is a lack of data on the region- and crop-specific ex ante analytical impact of water quality and quantity. Alternative livestock and horticultural crops were not analyzed due to the lack of survey data, although these may be potential future enterprises in the study area. This analysis could be further refined by considering adaptations of water harvesting, storage, and water prices, which are important indicators related to the efficient use of water and agricultural resources, for other areas of Pakistan. The analysis could be performed using more than one pathway and future price assumptions. Data Availability Statement: The data will be available upon request.
9,647.4
2021-09-27T00:00:00.000
[ "Economics" ]
INSTAGRAM IN TEACHING ENGLISH FOR SPECIFIC ACADEMIC PURPOSES. The purpose of the study is to explore the potential of integrating Instagram in university-level English for Specific Academic Purposes (ESAP) courses, focusing on Business English for Advanced Students. Using a mixed-methods approach, it combines qualitative data from observing the completion of Instagram-based assignments and subsequent semi-structured group interviews with quantitative data from a five-point Likert scale questionnaire distributed to the students who participated in the study. The research sample consisted of twenty-six undergraduate students from the Faculty of Commerce at the University of Economics in Bratislava, Slovakia, who completed four Instagram-based tasks related to communication, international marketing, promoting Slovakia, and success, which were linked to topics covered in the course during the winter semester of the academic year 2020/2021. The findings show that Instagram can be used effectively in ESAP courses mainly due to its popularity, the visual appeal of posts, and, most importantly, as a platform that serves as a powerful marketing tool. The results indicate that the implementation of Instagram activities was positively perceived by most students, with high levels of agreement regarding the relevance, engagement, and creativity of the tasks. The study highlights the potential benefits of using Instagram in ESAP, providing insights into effective language learning through authentic, business-related tasks. Ultimately, integrating Instagram into ESAP can enable students to develop the language skills and competencies needed in real business contexts.

Introduction

In today's globalized world, effective communication skills in English are increasingly important, particularly in the business world. As a result, English for Specific Academic Purposes (ESAP) has become a key part of university curricula, with the aim of developing students' language skills relevant to their field of study. While traditional methods of teaching ESAP have been successful, there is growing interest in integrating social media platforms, such as Facebook, Twitter, and Instagram, into language instruction.
Instagram's visual and interactive features make it a promising tool for language learning that can facilitate student engagement, collaboration, and creativity. Moreover, Instagram is not only a platform concentrated on connecting with people but also a place where business is done on a large scale with the possibility to reach a worldwide audience, which provides students of economics with an opportunity to learn about the functions of marketing in the digital era. Through integrating language and content, students can simultaneously develop their language proficiency and knowledge of industry-specific topics, which can better prepare them for real-world professional contexts. With the use of Instagram, students can produce and share content that reflects their understanding of both language and business concepts, ultimately enhancing their language skills and developing their problem-solving, creativity and communication abilities, all of which are highly valued in the business world. However, despite its benefits, its use in ESAP, particularly in university courses for students of economics, has not been extensively researched. Thus, one of the aims of this study is to point out how Instagram can be integrated into language instruction in tertiary education, based on research conducted in the past, and at the same time, to try to fill in the existing gaps in this field.

Literature review

Social media have become the focus of different kinds of research, among those also linguistic and methodological analyses in a variety of educational contexts. White et al. (2011) examine the potential uses of social media in the classroom, emphasising their role in promoting student creativity, collaborative learning through the creation of learning communities, and the use of social media as an assessment tool. In the book by Joosten (2012), educators are provided with instructions on how to utilise social media in education in general, while Patrut and Patrut (2013) concentrate on the integration of e-learning platforms, interactive virtual channels, and social networking sites in specific university courses (marketing information systems and gender studies), as well as on building global communities of academics. Mallia (2014) offers insights into the potential benefits and challenges of integrating social networks into the classroom, highlighting both formal and informal uses of social interaction tools as learning tools. One of the topics addressed by Greenhow et al. (2016) is how social media can become a part of teacher education. The study by Ansari and Khan (2020) proves that using social media and mobile devices has a significant impact on students' collaborative behaviour, engagement, and, consequently, on their academic performance. Reinhardt (2019) provides a thorough overview of studies dealing with blogs, wikis, and social networks in language education, where he points out the role of social media in autonomous learning. Chan et al.
(2011) outline the theoretical and pedagogical implications of their use in foreign language teaching and learning. The authors draw attention to developing oral proficiency using YouTube videos and demonstrate how social networking sites can enhance students' motivation and learners' autonomy. Lamy and Zourou (2013) emphasise that the success of using social networking sites in language education depends on the student's environment, activities, and learning priorities. The findings of the research into using social networking sites show that these support expanding students' vocabulary (Motlagh et al., 2020; Mykytiuk et al., 2022; Gómez et al., 2023), enhancing their communication skills (Chan et al., 2011; Mykytiuk et al., 2022; Yamshynska et al., 2022), and help remove intercultural barriers (Lisnychenko et al., 2022). It is natural that most attention has been paid to Facebook, Twitter, YouTube, MySpace and to some specific educational social networking sites such as Edmodo, Ning, and Elgg, as these sites have a longer history or more variable functions than those of Instagram, or, as in the second group, they are specifically aimed at learners on different levels of education. However, Instagram has gradually become of interest to teachers and researchers, which was confirmed by the analysis of 46 studies by Manca (2020). One group of studies dealing with Instagram specifically concentrates on students' attitudes towards it as an educational and language learning platform. Erarslan (2019) used combined research methods to find out students' opinions about Instagram for educational and language learning purposes and investigated also whether Instagram influenced students' language learning. According to the author, the results of the study prove "that social media platforms and Instagram in particular for the purpose of this study, enable students to create a cooperative, collaborative and sharing atmosphere, supporting the formal classroom setting in addition to sharing class materials" and that "in terms of language learning purposes, Instagram was found to be an effective tool" (Erarslan, 2019, p. 66). This finding was also confirmed in research by Sari and Yahudin (2019) in a Business English course. Al-Ali carried out her research into utilizing Instagram and other platforms (Keynote and BlackBoard Learn) in a bridge-intensive English program in three learning activities concerning holidays, whose goal was to improve students' grammar structures, speaking, writing, and vocabulary skills. Based on the outcomes, the author believes that Instagram "helped create a more personalized learning experience for students, … it allowed for creating a sense of community, … and it encouraged them to produce creative content rather than doing the bare minimum to complete an activity" (Al-Ali, 2014, pp. 12-13). A study by Rahmah (2018) showed that sharing visual materials, such as photographs and videos, on Instagram can make students more confident to speak in a foreign language, stemming from positive comments made on Instagram by their friends, followers, and colleagues. Similar results were produced by Pujiati et al. (2019) in an experiment conducted in a junior high school, whose aim was "to improve students' motivation in learning English and also to increase their grammatical competence and skills especially, vocabulary and writing" (Pujiati et al., 2019, p.
653). Compared to other studies, this experiment was founded on students' reactions to teachers' questions in the form of polls published on Instagram and not on students' own posts of photographs and videos. Despite this difference, the authors state that "[b]ased on the result of the study, it is precise that Instagram has essential roles in assisting students to improve their motivation in learning English and eventually increase their English competence and skills" (Pujiati et al., 2019, p. 654). Gonulal (2019) came to analogous conclusions using Instagram as a mobile-assisted language learning (MALL) tool. From the results, he determined that Instagram is an effective way to learn vocabulary and communication skills. Depending on the particular Instagram features implemented in foreign language classroom tasks, this social networking site has been found to contribute to the enhancement of learners' reading, listening, writing, and speaking skills (e.g. Mansor & Rahim, 2017; Maslova et al., 2019; Wulandari, 2019; Prasetyawati & Ardi, 2020; Sitorus & Azir, 2021). Other studies were conducted in groups of university students. Mansor and Rahim (2017) based their experiment on the group work of 20 students of the Business Communication Course at Universiti Malaysia Terengganu. The students, divided into groups, posted videos on four different topics on Instagram, and the groups were encouraged to interact with each other via the commenting function of the platform, which was later followed by an online interview with the course instructors. The authors of the study claim that, based on their findings, using Instagram not only boosted students' confidence to communicate in English and helped them develop their language skills (namely reading and writing), but it also contributed to developing a collaborative environment by creating learning communities. Leier (2018) explored this topic in a group of university students attending an intermediate German course that they took by distance. The theoretical framework of Leier's study drew on multiliteracies as described by Pegrum et al. (2011, 2022):
- digital literacy: the ability to interpret and share information on digital platforms;
- search literacy: the ability to use search engines effectively;
- multimodal literacy: the ability to interpret and communicate through multimedia;
- filtering literacy: the ability to reduce the excess of information;
- critical literacy: the ability to apply critical thinking to digital technologies;
- network literacy: the skill of using networks to connect with others, share information, collaborate, and build reputation;
- tagging literacy: the ability to use tags as metadata to search, make searchable, and organise online content;
- remixing literacy: the ability to change existing digital content to create new meanings.
The author used a questionnaire to find out how students behave on Instagram generally, how they perceive learning with Instagram as part of their assignment, and finally, their approach to their Instagram account design. Thus, this study comprised both passive and active use of Instagram in the educational setting. It follows from the results that "[t]he students perceived the Instagram assignment to be beneficial for their cultural learning of German, but less so for their overall German language acquisition except for spoken German" (Leier, 2018, p.
87). Regarding developing multiliteracies, more specifically digital literacies, "(t)he outcome was a more reflective use of internet resources, leading to transformed practices" (Leier, 2018, p. 87). The function of Instagram as a marketing tool was highlighted in the article by Mustain et al. (2019). Business Administration students were assigned a task to examine corporate marketing strategies on Instagram and produce their own brief videos on the same platform. Subsequently, interviews and a questionnaire were conducted to identify students' perceptions of using Instagram in a language classroom. The outcomes of the study confirmed the previously mentioned results, namely that students feel more motivated and more engaged in activities that include Instagram as a learning tool. Moreover, the study revealed "that Instagram can promote meaningful interaction as well as learner autonomy which are essential for their life outside the classroom" (Mustain et al., 2018, p. 100). Communication in the field of marketing was also the primary focus of the investigation pursued by Maslova et al. (2019), who explored the integration of Instagram into teaching English for specific purposes to students of economics. In this specific case, Instagram-written blogs dealing with e-marketing were posted by groups of students during one semester, while the second experiment lasted three semesters, during which students were given different tasks, based on the level of their language skills, to publish video blogs on Instagram. Analyses of written blogs and video blogs led to the conclusion that "Instagram has proved to be a highly motivational tool that allows stimulating and nurturing students' interest for continuous learning" and "[t]he fresh format and uniqueness of this approach allow for the deployment of creativity and self-expression" (Maslova et al., 2019, p. 8703), and at the same time, students were able to develop their writing and speaking skills in English. The outcomes of the studies proved the undisputable benefits of utilising Instagram on different levels of education, which can be summarised as follows. Instagram is an effective tool for learning and teaching as it facilitates:
- higher motivation to learn a foreign language;
- interaction among students and between students and a teacher;
- a sharing and collaborative environment among students;
- a more personalised learning experience;
- greater student autonomy;
- stimulation of creativity and self-expression;
- an increase in students' confidence in foreign language communication;
- the development of competence and language skills in a foreign language.
Besides the studies analysing students' attitudes towards Instagram and the ways to utilize this application in the education process, there is a gap in the literature that was addressed by Carpenter et al. (2020), who studied how educators use Instagram in their professional lives, because "educators' online activities remain an understudied field, and the particular case of Instagram remains unexplored in published research" (Carpenter et al., 2020, p. 4). Their investigation into this topic was based on 841 responses, mostly by elementary and high-school teachers, to a survey distributed on various social networking sites. According to the results, "[e]ducators employed Instagram to acquire and share knowledge, as well as to exchange emotional support and develop community" (Carpenter et al., 2020, p.
9). One of the findings that might be surprising is the fact that teachers tend to mix professional and personal content on Instagram. Previous research has predominantly focused on the overall use of social networking sites, with Instagram also being investigated in English language teaching across diverse educational contexts. However, a limited body of work exists concerning the specific use of Instagram in academic settings, particularly for language learning with a focus on Business English. Given Instagram's status as a vital marketing tool, it offers a compelling opportunity for content and language integrated learning (CLIL). The presented research aims to address this knowledge gap by examining the practical application and outcomes of integrating Instagram into university-level courses in English for economics. Therefore, the purpose of the article is to explore Instagram's potential in supporting ESAP teaching and learning, with a focus on English for economics students. To achieve this, the article introduces a case study of Instagram's implementation in a university setting, along with a questionnaire reflecting students' perceptions of Instagram-based activities in the ESAP course. Based on this aim, we asked the following research questions: 1) can Instagram-based activities be effectively integrated into university-level ESAP courses for students of economics?; 2) what are the students' perceptions regarding the implementation of Instagram as a supportive tool in the ESAP course?

Research Design

The methodology employed in this study comprises a mixed-methods approach, combining qualitative data from observation of assignment completion and semi-structured group interviews following every Instagram-based task with quantitative data from a five-point Likert scale questionnaire focused on students' perceptions of these activities. This methodology provided a comprehensive understanding of the students' experiences of using Instagram as a language learning tool at the tertiary level.

Participants

The group assigned Instagram-based tasks consisted of 26 undergraduate students in the first year of their bachelor's study programs at the Faculty of Commerce, University of Economics in Bratislava, Slovakia. These students were enrolled in the course Business English for Advanced Students, which corresponds to the C1 level of the Common European Framework of Reference for Languages (2020). All students participated voluntarily, and their anonymity on Instagram was ensured through the utilization of a shared Instagram account. The research was conducted during the winter semester of the academic year 2020/2021, spanning from September 2020 to December 2020.
Instruments and Procedures The research comprised four distinct phases: 1) pre-activity: this phase involved administering a questionnaire that aimed to gather insights into the general use of social networking sites; 2) main activity: Instagram was integrated into the learning process through the implementation of four guided tasks; subsequently, both teachers and students' colleagues provided feedback on the completed tasks; 3) post-activity: following the completion of the tasks, an assessment of the posts was conducted, and semi-structured interviews were carried out with the participating students; 4) data collection and analysis: a Likert-scale questionnaire was distributed among the students who had taken part in the Instagram-based activities, and the data from this questionnaire were then collected and subjected to analysis.This study encountered three key considerations during the implementation of Instagram-based tasks in ESAP courses for economics students.Firstly, a significant digital divide can be observed between the "digital native" students and "digital immigrant" teachers (Prensky, 2001).Overcoming this disparity required thorough preparation and facilitated closer cooperation between students and the instructor, fostering a collaborative atmosphere.Secondly, the group consisted of first-year students who were unfamiliar with each other, and the course commenced amid the challenges posed by the COVID-19 pandemic, relying primarily on online platforms like MS Teams for communication.This limited personal interaction may have increased the potential for student weariness, confusion, and frustration.Lastly, addressing students' privacy concerns was crucial.Although prior research suggested privacy was not a significant issue for students (Leier, 2018), we opted to create a separate Instagram account solely dedicated to the course tasks.This approach positively influenced creativity and collaboration during the account setup, promoting a more comfortable learning environment. A collaborative process led to the selection of the account name businessstudentsof, which was initially public and later switched to private at the end of the course.Utilizing this common account, accessible to all students and the author, was preferred over following individual accounts to avoid intermingling personal and course-related content, which would have differed in scope.This approach ensured that students felt encouraged to share their "professional" posts without hesitation. Following the initial stage, students were given four tasks to complete throughout the entire semester.These tasks were designed to align with the course's content based on the textbook Market Leader Upper-Intermediate -Business English Coursebook (Cotton et al., 2013).Specifically, the tasks covered topics dealing with communication, international marketing, doing business internationally, and success.The tasks were deliberately arranged in a sequence, progressing from passive to active use of the medium.While the primary focus of the assignments was on improving English writing skills, it should be noted that speaking skills had the potential to be incorporated in Instagram videos. 
Task One -Communication.This task aimed to evaluate the application of Instagram as a communication tool by businesses in promoting their products and services.Students were required to conduct research on Instagram posts of a business of their choice and present their findings concisely in a 100-word text, employing domain-specific vocabulary and collocations pertinent to communication.Preparatory seminars were held to provide students with the necessary understanding of communication issues to enable them to carry out their research effectively. Participants were granted the autonomy to analyse any business they desired on Instagram, be it of international or Slovak origin.Following task completion, students submitted their summaries to the teacher for assessment and feedback.The provided feedback was intended to improve the students' work, identify their strengths and weaknesses, and offer suggestions for enhancement. Task Two -International marketing.The second task assigned to the students revolved around promotional activities on social media platforms, particularly Instagram, with a focus on international marketing.The primary objective was to create and publish an advertisement of an existing or imaginary product or service on Instagram within a stipulated two-week timeframe.To facilitate the task, a case study from the textbook (Cotton et al., 2013, p. 20-21) was presented as a source of inspiration. Task Three -Doing Business Internationally.The objective of the third task was to promote a region in Slovakia or the country as a whole.Students, including those studying Business in Tourism and Services, were instructed to capture a photograph representing their town, region, or country, accompanied by a comment promoting the chosen location. Task Four -Success.The aim of the final, fourth task was to express students' individual feelings of success after they had passed the examination at the end of the winter semester 2020/2021.This task not only aligned with the final unit of the course but also aimed to encourage a more playful expression of students' sentiments, fostering a positive attitude towards completing the tasks and the entire Business English course. The semi-structured group interviews conducted after each task served as a vital component of this study, aiming to elicit valuable insights into students' perceptions and evaluations of the assigned tasks.The interviews provided a platform for students to express their thoughts on the task's interest level, relevance, and overall engagement.Participants were encouraged to share their experiences, identifying any challenges encountered throughout the process.The findings from the interviews were later considered when analysing data obtained from a follow-up questionnaire. 
The final phase of the study entailed administering a comprehensive five-point Likert scale questionnaire, consisting of nine questions scored on a scale ranging from strongly disagree to strongly agree. Each item was assigned numerical values as follows: (1) strongly disagree, (2) disagree, (3) neither agree nor disagree, (4) agree, (5) strongly agree. The tenth question ranked the four assignments from best (1) to worst (4). The primary objective of the questionnaire was to identify and measure students' perceptions of the Instagram-based activities integrated into the language learning process. This quantitative approach provided valuable data that complemented the qualitative insights obtained from observations during the course and the semi-structured group interviews, allowing for a holistic understanding of the students' experiences and opinions regarding the teaching methods employed.

Results of the initial questionnaire concerning social networking sites

Instagram was selected for the study due to its easy accessibility via mobile phones and its popularity among students at the University of Economics in Bratislava, which was determined based on a survey filled out at the beginning of the winter semester of the academic year 2020/2021. Among our university students, Instagram was used by 183 out of 191 respondents, coming second only to Facebook, which had 184 users (see Figure 1). The findings demonstrate the roughly equal popularity of these two social networking sites. At the same time, more than half of the participants (54%) answered that they would welcome social networking sites becoming part of their English language course if it was a controlled activity (cf. Pavlíková, 2021).

Results of integrating Instagram in the ESAP course

Task One sought to cultivate students' understanding of how businesses employ Instagram as a communicative tool, while also improving their language skills in the context of communication-related vocabulary and expressions. Analysis of the students' texts revealed that they recognised the effectiveness of Instagram as a marketing tool for companies to engage with their target audience. However, when examining the summaries, it became apparent that some students used informal language rather than the expected formal language. This observation could be attributed to their limited experience of academic writing, which includes writing summaries, abstracts, and other forms of academic text. In addition, the informal nature of communication prevalent in social networks may have influenced the students' writing style. As a result, the incorporation of more colloquial language in their texts could be explained by their familiarity with the informal communication patterns prevalent in the digital realm. The feedback process administered by the teacher played a crucial role in guiding the students towards more refined and effective communication practices.

Task Two. Despite some initial delays in submissions, a total of 16 commercials were eventually successfully posted. These included 11 static advertisements featuring at least one photograph and 5 videos. Some posts displayed a remarkably high standard in their visual presentation, reaching an almost professional level. The task was designed to be completed individually, in pairs, or in small groups. Interestingly, some students chose to work together in pairs or groups of three almost instantly, despite not having met personally before due to the COVID-19 pandemic.
The successful completion of this task highlights the students' creativity and skill in using Instagram as a powerful tool for international marketing promotion. Their ability to collaborate and produce compelling advertisements underscores the value of the platform in fostering innovative approaches to advertising strategies.

Task Three. A total of 19 submissions were received in response to the third assignment. However, one notable observation emerged during the evaluation process. While students demonstrated active participation in completing the task, it was discovered that a significant proportion of the images used in the submissions, and occasionally the accompanying comments, had been downloaded directly from the internet and used without proper attribution to the original author(s). As a consequence, the account was switched from public to private. The presence of downloaded material may affect the extent to which students' work reflects their individual effort, which is a critical aspect in the assessment of their language learning.

Task Four. The timing of this task, assigned at the end of the semester and, moreover, after students had passed the test, proved unfortunate, as no posts on the topic of success were produced. We presume that this was caused by two factors: students being too occupied with examinations in other subjects and, what seems even more probable, students lacking the motivation to complete any tasks once they had finished the course and received a final grade for it.

Excluding the first task (the summary), a total of 38 Instagram posts were published (two of which were posted by the teacher as a form of motivation for the students), resulting in the account businessstudentsof gaining 15 followers, predominantly fellow students. Interestingly, the "like" function was used sparingly, and comments were even rarer, possibly owing to the students' tendency to communicate with each other in Slovak rather than English.

Throughout the course, the students' motivation to complete the assigned tasks showed a decreasing trend, especially towards the end of the semester. Nevertheless, the integration of Instagram positively stimulated their creativity, particularly in the production of commercials and advertisements. Overall, the use of Instagram facilitated a hands-on approach, allowing students to apply their knowledge in practice rather than merely acquiring a theoretical understanding of the subject. These findings were confirmed by the interviews and by the final questionnaire.

Results of the final questionnaire

The questionnaire (Appendix 1) was distributed via Google Forms to all 26 students who participated in the activities throughout the semester. Twenty respondents (n = 20) provided anonymous answers.
The overall results indicate that the Instagram activities were positively received by the students. The majority of the 20 respondents reported that they found the activities engaging, relevant to the course content, and conducive to creativity, and would recommend that they be incorporated in future Business English courses at the University of Economics in Bratislava (Table 1, Table 2, Figure 2). The highest level of agreement was reached for the statement that the Instagram activities were relevant to the topics covered in the course (Q2), with 95% of respondents expressing a positive attitude. This was followed by the statements that the Instagram activities were engaging (Q1), with 85% of respondents providing positive feedback, and that they offered students a way to express their creativity (Q4), also at 85%. The results concerning team collaboration (Q3) and the recommendation of Instagram-based activities for future ESAP courses (Q6) were similar, both reaching 75% agreement.

Responses indicating agreement or strong agreement also prevailed for the statements that the Instagram activities promoted students' motivation (Q5) and improved their understanding of Business English-specific vocabulary and concepts (Q7), both of which received 65% agreement. The lowest positive score, though still over half, was recorded for the last statement (Q8), with 55% of respondents agreeing that the activities had contributed to improving their communication skills in Business English.

The responses to the question about improvement in specific language skills (Q9), namely reading (R), listening (L), writing (W), and speaking (S) (Table 3, Figure 3), showed more variable results. Eighty percent of the respondents perceived the Instagram activities as beneficial for enhancing their writing skills, while 30% acknowledged their usefulness for improving reading skills. Conversely, participants expressed disagreement regarding the effectiveness of the tasks in improving speaking and listening skills (80% and 75% of respondents, respectively). This finding could be explained by the nature of the tasks and by Instagram's inherently text-centric (and visual) character, which may have limited its potential to support the development of oral communication skills.

Table 3. Frequency distribution of students' perceptions of how the Instagram activities helped them improve their language skills (Q9), n = 20 (SD = strongly disagree, D = disagree, N = neither agree nor disagree, A = agree, SA = strongly agree)

The responses to the final question (Q10) revealed that the tasks associated with promotion and advertising (tasks 2 and 3) received the highest ratings. This can be attributed to the fact that these tasks offered students the opportunity to engage in authentic activities that simulated professional scenarios they are likely to encounter in their future careers. These were closely followed by the summary writing task (task 1). The task designed to celebrate success (task 4) received the lowest score, which is not surprising, as no students completed it.
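To make the derivation of the agreement figures reported above explicit, the short Python sketch below illustrates how the share of positive answers (agree plus strongly agree) per questionnaire item can be computed from five-point Likert frequency counts. The counts shown are hypothetical placeholders, chosen only so that the positive totals match the agreement rates reported for Q1, Q2, and Q8; they are not the study's raw data.

```python
# A minimal sketch (not the study's analysis code) showing how per-item
# agreement percentages can be derived from five-point Likert responses.
# The frequency counts below are hypothetical placeholders chosen so that
# the positive totals match the agreement rates reported for Q1, Q2 and Q8.

responses = {
    # item: counts for SD, D, N, A, SA (n = 20 respondents per item)
    "Q1": {"SD": 0, "D": 1, "N": 2, "A": 9, "SA": 8},   # 85% positive
    "Q2": {"SD": 0, "D": 0, "N": 1, "A": 10, "SA": 9},  # 95% positive
    "Q8": {"SD": 1, "D": 4, "N": 4, "A": 7, "SA": 4},   # 55% positive
}

def agreement_share(counts):
    """Percentage of respondents who answered 'agree' or 'strongly agree'."""
    n = sum(counts.values())
    return 100 * (counts["A"] + counts["SA"]) / n

for item, counts in responses.items():
    print(f"{item}: {agreement_share(counts):.0f}% positive responses")
```

The same routine extends directly to the remaining items and to the skills question (Q9) once the corresponding frequency table is available.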
Discussion

The results of the study suggest that the integration of Instagram into the ESAP course for business students was effective and had several positive outcomes. First, using Instagram as a language learning tool increased students' motivation to learn a foreign language. The visual and interactive features of Instagram engaged the students and encouraged them to participate actively in the learning process. In addition, the collaborative nature of the platform facilitated interaction between students and created a cooperative environment. These findings are consistent with those of other authors who have explored the use of SNS in general (Ansari & Khan, 2020; Saiienko et al., 2020; Pavlíková, 2021) and Instagram in particular in teaching English as a foreign language in different educational contexts (Al-Ali, 2014; Erarslan, 2019; Maslova et al., 2019).

Furthermore, the integration of Instagram provided students with a more personalised learning experience and allowed them to express their creativity. Owing to the nature of the platform, students were able to develop their language skills while learning about business-related topics. However, the effectiveness of Instagram in improving specific language skills varied. Analysis of students' responses showed that the platform was generally helpful in developing writing skills, which is in agreement with the findings of Mansor and Rahim (2017) as well as a later study by Manullang and Katemba (2023).

Students responded most positively to the tasks related to promotion and advertising. This can be ascribed to the fact that these situations resembled real-life professional scenarios. The use of Instagram in these tasks allowed students not only to develop their language skills but also to explore marketing concepts in the digital age. The present case study highlights that integrating Instagram-based activities requires teachers to consider several issues beyond the content and objectives of the assignments: ensuring the safety of both students and teachers online; emphasising to students the differences between formal academic language and the informal language of social media before assigning internet-based tasks; addressing plagiarism, which can be misunderstood by students when using internet sources; assigning tasks at an appropriate time; and, finally, finding ways to maintain student engagement throughout the course.

Limitations

Three fundamental limitations of the study were identified. First, the study was limited by a small sample size and included only Slovak students from one faculty at the University of Economics in Bratislava, Slovakia. Second, the study was conducted during the COVID-19 pandemic in the 2020/2021 academic year, when all lectures and seminars were held online; the absence of direct, face-to-face interaction may have influenced the behaviour of both students and lecturers and thus affected the results. Finally, the findings were affected by the fact that only three of the four Instagram-based tasks were completed by participants.
Conclusions

Social networks have become integral to our daily lives and have naturally become part of teaching and learning processes. At all levels of education, students use various electronic tools, including applications, programs, and websites, not only for informal communication but also in formal educational settings. Language teachers should therefore embrace the opportunity, and sometimes the challenge, of using all available methods and tools, including electronic ones, to make language instruction as effective as possible.

The research findings presented in this study demonstrate that Instagram-based activities can be effectively integrated into university-level ESAP courses, promoting both language learning and CLIL outcomes. Most students responded positively to the practical implementation of Instagram as a supportive tool in the Business English Course for Advanced Students, which resulted in increased motivation and creativity.

Future research could investigate various features of Instagram, including IGTV or Reels, to identify a more suitable environment for developing oral communication skills. In addition to examining how students perceive social media or mobile application activities, it is important to assess students' language skills development objectively. Furthermore, exploring the impact of social media on students' formal writing skills could provide valuable insights into how online communication affects language use in academic and professional settings. Research could also concentrate on creating effective training programmes for language instructors to integrate social media platforms into their teaching, as suggested by Carpenter et al. (2020).

Table 2. Statistical analysis of students' perceptions of the Instagram-based activities